Welcome to Internap’s Big Data Video series. First, let’s cover the Big Data Basics.
What is meant by the term, Big Data? Why is it important? And what are some common Big Data use cases?
Big data is a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications.
Big data is also defined by three characteristics: volume, variety, and velocity.
Volume refers to the enormous amount of data being stored. It is a characteristic of a big data project or application that uses, potentially, petabytes of data and tens of millions of transactions per hour. For example, Twitter alone generates more than 7 terabytes of data every day.
Variety refers to the wide range of data types used as part of the analytical and decision making process. Many of these unstructured or semi-structured data sets don't fit into typical organizational schemas. For example, tweets, social media blurbs, security camera images, weather reports and the like are all examples of data that can be highly variable.
Velocity is the speed at which information arrives, is processed, and is delivered in an actionable presentation. Within a big data scenario, data streams with real-time or near real-time analysis requirements are not uncommon, and they can be far faster than transactional streams. The combination of these elements requires significantly more flexibility in organizing, processing and analyzing than traditional approaches can deliver.
In the 1970s, data management systems were primitive, very structured, typically relied on mainframes and lacked complex relational capabilities. In the ’80s and ’90s, data became more usable via the development of multifaceted relational databases.
Fortunately, a number of tools and processes have been developed within the past several years to address big data processing and analysis needs. These include MapReduce, a large-scale parallel processing system developed and patented by Google; distributed file systems like HDFS; NoSQL databases; and on-demand, virtual and bare metal infrastructure as a service.
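To make the MapReduce idea concrete, here is a minimal, illustrative word-count sketch in Python. It is not Internap's or Google's code, just the canonical example of the map and reduce phases that frameworks such as Hadoop run in parallel across many machines.

from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in a document
    for word in document.split():
        yield word.lower(), 1

def reduce_phase(pairs):
    # Reduce: sum the counts emitted for each distinct word
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

documents = ["big data is big", "data volume variety and velocity"]
pairs = (pair for doc in documents for pair in map_phase(doc))
print(reduce_phase(pairs))  # {'big': 2, 'data': 2, 'is': 1, ...}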
What is big data being used for? Oftentimes, companies are either trying to use as much applicable data as available to answer why something happened, predict what's going to happen next, or to determine which questions to ask.
One common use scenario involves marketers using big data to understand consumer purchasing behavior. For example, when you’re at your local grocery store and you scan your savings card, an abundance of information is captured, such as what you bought, what time it was, was it on sale, what complementary products were also purchased, the time of the year, and so on, and then marketers attempt to use this information to put their products in front of you at the most advantageous time and price.
Our own IP architecture group here at Internap provides a real-life, close-to-home example of Big Data in action. Managed Internet Route OptimizerTM (MIRO), our proprietary IP routing algorithm, captures more than 1 trillion path and performance data points over the course of a 90 day period. These include real time, NetFlow, and SMTP data, latency and jitter statistics, as well as path plotting decisions.
Through the use of a commercial Hadoop distribution in our own AgileSERVERS, we’re able to scale out to address exponential data growth, while efficiently processing and analyzing tremendous amounts of high-velocity variable data.
In our next video, we’ll provide more detail on the different tools and processes, and infrastructure architectures used in big data applications and tell you which ones fit, and which ones don’t.
Watch next: Why is Big Data Important?
The American Textile History Museum in Lowell, Mass., is offering the Suited for Space exhibition from the Smithsonian Institution's National Air and Space Museum, exploring the "wearable spacecraft" that keep astronauts alive as they travel beyond the bonds of Earth.
The exhibition explores nearly a century of spacesuit design and development, from the earliest high-altitude pressure suits to the iconic white suits of Apollo and Skylab. It runs from Dec. 15, 2012 through March 3, 2013.
Suited for Space features large-scale photographs of spacesuits by Smithsonian photographer Mark Avino, as well as new X-ray images by Avino and Ronald Cunningham that provide a unique view of the interiors of the spacesuits. It also features a replica Apollo spacesuit on loan from NASA and objects from the National Air and Space Museum's collection.
Visitors can examine unusual details of every suit, get up close and personal with objects and artifacts, take a photograph "wearing" an Apollo suit – and even walk in Buzz Aldrin's footsteps on the gallery floor.
A wearable spacecraft
In 1961, President John F. Kennedy declared: "I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth." To reach that lofty goal, astronauts needed not only a vehicle capable of launching them into space, but also clothing that would keep them alive during the journey. Like a form-fitting personal spacecraft, an astronaut's spacesuit ensures survival in the vacuum of space.
The result of years of research, design and engineering, the spacesuit made Kennedy's vision a reality. "These spacesuits are, in many ways, the smallest of spacecraft - designed to keep an astronaut alive and well in the most hostile environment imaginable..." says Dr. Allan Needell, curator of Human Space Flight for the Smithsonian National Air and Space Museum in Washington, D.C.
Suited for Space is developed by the Smithsonian Institution Traveling Exhibition Service (SITES) in collaboration with the Smithsonian's National Air and Space Museum. The national exhibition tour is supported by DuPont.
NASA has released a second new, high-resolution “Blue Marble” image (click for original, high-resolution image), this time showcasing Africa and the Middle East.
From NASA: The new image is a composite of six separate orbits taken on January 23, 2012 by the Suomi National Polar-orbiting Partnership satellite. Both of these new 'Blue Marble' images were taken by a new instrument flying aboard Suomi NPP, the Visible Infrared Imaging Radiometer Suite (VIIRS).
Compiled by NASA Goddard scientist Norman Kuring, this image has the perspective of a viewer looking down from 7,918 miles (about 12,742 kilometers) above the Earth's surface from a viewpoint of 10 degrees South by 45 degrees East. The four vertical lines of 'haze' visible in this image shows the reflection of sunlight off the ocean, or 'glint,' that VIIRS captured as it orbited the globe. Suomi NPP is the result of a partnership between NASA, NOAA and the Department of Defense.
Is the future of sharing the Semantic Web?
Sharing takes the next steps towards a more complete understanding of the data
Current information sharing is based mostly on using so-called Web 1.0 search tools and Web 2.0 tools such as social media to collect and display data and information on the Web and make it easier for people to access. The next generation of Web technologies — collectively called the Semantic Web — will move sharing into the era of Web 3.0.
Semantic Web technologies are already used in government sites such as Data.gov and Recovery.gov, which are part of the Obama administration’s push for open government. Now whole organizations such as the Defense Department are committing to the Semantic Web as a way to improve data discovery and information sharing throughout the military enterprise.
Unlike previous generations of Web tools, which rely on keyword searches of databases and metadata to extract and link data, the Semantic Web enables machines to talk to one another and link data and information through terms that more closely represent the meaning contained in the data.
Like a good analyst, the Semantic Web is constantly taking various data points and making associations between them to come to a new level of understanding of the material, though at a much faster rate and using much larger volumes of data than any human analyst can. Because it’s a standards-based approach, it also means that information can easily move between applications and enterprises, which greatly increases the ease of sharing.
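As a small illustration of that standards-based approach (a sketch using the open-source Python rdflib library, not code from Data.gov or DISA, with a made-up example.org namespace), the snippet below stores a few facts as subject-predicate-object triples and then follows the links between them with a SPARQL query, much as an analyst would chain data points together:

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

ex = Namespace("http://example.org/")  # hypothetical namespace for the example
g = Graph()

# Each fact is a machine-readable triple: subject, predicate, object
g.add((ex.report42, RDF.type, ex.StatusReport))
g.add((ex.report42, ex.concernsUnit, ex.unit7))
g.add((ex.unit7, ex.locatedIn, Literal("Pacific region")))

# A SPARQL query can traverse the links between triples across data sources
query = """
SELECT ?report ?place WHERE {
    ?report a <http://example.org/StatusReport> ;
            <http://example.org/concernsUnit> ?unit .
    ?unit <http://example.org/locatedIn> ?place .
}
"""
for report, place in g.query(query):
    print(report, place)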
The Defense Information Systems Agency, which provides IT services to DOD and the military services, wants to use that Semantic Web capability to provide better and more timely information to military decision-makers. In a request for information published in November 2011, DISA described the potential for using Semantic Web technology to build what it’s calling an Enterprise Information Web (EIW).
The information necessary for decision-making is now often included in multiple systems scattered throughout the military enterprise, DISA said in the RFI, and in order to get information, each organization has to receive data requests, interpret the question, and then query its databases to collect, combine and present the information.
However, the RFI states, military forces today work together more closely than ever on missions, and that joint approach increases the kinds of enterprise-level information that decision-makers need.
“In this context,” DISA’s RFI states, “the Office of the Secretary of Defense and the Joint Chiefs of Staff require current, accurate and timely information from authoritative data sources to make effective, informed decisions affecting the DOD.”
DISA said a Defense Advanced Research Projects Agency technology called “Semantic Web” was used as a model-driven concept demonstration for the EIW and proved effective.
Semantic technologies have been slow to catch on broadly in government because they are not easy to manipulate and are disruptive. However, they have been suggested for other uses beyond sites such as Data.gov and Recovery.gov. An article in Government Computer News in March 2011, for example, said the National Information Exchange Model, which forms the technical backbone of the Information Sharing Environment, could also make use of semantic technology.
As other challenges — such as big data, for which the automated data gathering of semantic technology seems ideally suited — start to work their way onto the list of IT issues the government has to deal with, the attraction of the Semantic Web is only likely to grow.
Two studies, one from HP, and one from DNS and security vendor OpenDNS, took a look at the dangers IoT devices pose, and both concluded the same thing: They’re real, they’re here, and they’re more widespread than you might imagine. Following are summaries of each study.
Internet of Things in the Enterprise 2015
OpenDNS is a DNS provider that routes more than 70 billion Internet requests daily from approximately 50 million consumer and enterprise users in more than 160 countries. For its report on IoT dangers, OpenDNS sampled statistically relevant data on the 15th day of February, March, and April in 2015.
The company found an astonishing number of security holes in enterprises, including highly regulated ones such as healthcare, education, energy infrastructure, manufacturing, government, financial services, and others.
Given the huge amount of malware variants created each year, it is understandable that malware researchers count on automated threat analysis systems to single them out for additional manual analysis.
These automated systems consist of a sandbox – a virtual testing ground for untrusted and potentially malicious code – that lets the programs do their thing and logs their behavior.
Unfortunately, malware developers are aware of this and are always trying out new tricks for making their wares seem harmless.
Among the techniques they have used in the past are making the malware able to check for registry entries, drivers, communication ports and processes whose presence indicates the virtual nature of the environment in which they are run, as well as executing special assembler code or enumerating the system service list with the same goal in mind.
If these tests prove that is indeed the case, the malware stops itself from running.
But all of these techniques require specific skills and knowledge from the malware makers, and not all of them possess them, so they have turned towards less technical approaches.
According to Symantec researchers, one consists of making the malware run only if it detects mouse movement or clicking, and the other of inserting delays between the execution of the various malware subroutines.
The rationale behind the first test is that automated threat analysis systems don’t use the mouse, while regular computer users do, and so the lack of this movement signals to the malware that it is probably being run in a sandbox.
The reason for the subroutine execution delays – often spanning over 20 minutes for each – is that given the number of files the system must test, it usually spends only a small amount of time on each file, and chances are the file will be categorized as harmless and discarded before the first subroutine is even run.
In one of our most popular blog posts, we take a look at consent vs authorization, as they are defined under specific HIPAA regulations.
What is Consent? (According to HIPAA)
A consent as defined by the Privacy Rule is a general document that gives health care providers, which have a direct treatment relationship with a patient, permission to use and disclose all PHI (protected health information) for TPO (treatment, payment, and health care operations). It gives permission only to that provider, not to any other person. Health care providers may condition the provision of treatment on the individual providing this consent. One consent may cover all uses and disclosures for TPO by that provider, indefinitely. A consent need not specify the particular information to be used or disclosed, nor the recipients of disclosed information.
Did you know? Our HIPAA compliance software contains help and guidance on the governance of consent and authorization.
Only doctors or other health care providers with a direct treatment relationship with a patient are required to obtain consent. Generally, a “direct treatment provider” is one that treats a patient directly, rather than based on the orders of another provider, and/or provides health care services or test results directly to patients. Other health care providers, health plans, and health care clearinghouses may use or disclose information for TPO without consent, or may choose to obtain a consent.
What is Authorization? (According to HIPAA)
An authorization is a more customized document that gives covered entities permission to use specified PHI for specified purposes, which are generally other than TPO, or to disclose PHI to a third party specified by the individual. Covered entities may not condition treatment or coverage on the individual providing an authorization. An authorization is more detailed and specific than a consent. It covers only the uses and disclosures and only the PHI stipulated in the authorization; it has an expiration date; and, in some cases, it also states the purpose for which the information may be used or disclosed.
An authorization is required for use and disclosure of PHI not otherwise allowed by the rule. In general, this means an authorization is required for purposes that are not part of TPO and not described in § 164.510 (uses and disclosures that require an opportunity for the individual to agree or to object) or § 164.512 (uses and disclosures for which consent, authorization, or an opportunity to agree or to object is not required). Situations in which an authorization is required for TPO purposes are identified and discussed in the next question.
All covered entities, not just direct treatment providers, must obtain an authorization to use or disclose PHI for these purposes. For example, a covered entity would need an authorization from individuals to sell a patient mailing list, to disclose information to an employer for employment decisions, or to disclose information for eligibility for life insurance. A covered entity will never need to obtain both an individual’s consent and authorization for a single use or disclosure. However, a provider may have to obtain consent and authorization from the same patient for different uses or disclosures. For example, an obstetrician may, under the consent obtained from the patient, send an appointment reminder to the patient, but would need authorization from the patient to send her name and address to a company marketing a diaper service.
Register for one of Clearwater’s complimentary webinars on risk analysis and risk management basics and get to grips with these issues and more.
AUTHOR, DESIGN AND CONCEPT
INSTALL AND CONFIGURE COMPUTER SYSTEM / LEARNING OUTCOME 1 ASSEMBLE COMPUTER/
PARTS OF COMPUTER AND ITS PERIPHERALS
INFORMATION SHEET 1.1-1
Parts of the Computer and its Peripherals
After reading this INFORMATION SHEET, STUDENT(S) MUST be able to:
Identify different type and parts of computer.
Explain hardware component of a computer.
Are you new to computers? Do you wonder what they do and why you would want to use one? Welcome—you're in the right place. This information gives an overview of computers: what they are and the different types of computers.
In the workplace, many people use computers to keep records, analyze data, do research, and manage projects. At home, you can use computers to find information, store pictures and music, track finances, play games, and communicate with others—and those are just a few of the possibilities.
You can also use your computer to connect to the Internet, a network that links computers around the world. Internet access is available for a monthly fee in most urban areas, and increasingly, in less populated areas. With Internet access, you can communicate with people all over the world and find a vast amount of information.
If you use a desktop computer, you might already know that there isn't any single part called the "computer." A computer is really a system of many parts working together. The physical parts, which you can see and touch, are collectively called hardware. (Software, on the other hand, refers to the instructions, or programs, that tell the hardware what to do.)
The following illustration shows the most common hardware in a desktop computer system. Your system might look a little different, but it probably has most of these parts. A laptop computer has similar parts but combines them into a single, notebook-sized package.
WHAT IS A COMPUTER?
A computer is an electronic device that manipulates information, or "data." It has the ability to store, retrieve, and process data. You can use a computer to type documents, send email, and browse the internet. You can also use it to handle spreadsheets, accounting, database management, presentations, games, and more.
For beginning computer users, the computer aisles at an electronics store can be quite a mystery, not to mention overwhelming. However, computers really aren't that mysterious. All types of computers consist of two basic parts:
Hardware is any part of your computer that has a physical structure, such as the computer monitor or keyboard.
Figure1.1 From left to right, monitor, and printer are examples of hardware
Software is any set of instructions that tells the hardware what to do. It is what guides the hardware and tells it how to accomplish each task. Some examples of software are web browsers, games, and word processors such as Microsoft Word.
Figure1.2 Microsoft Office Word 2016 Screenshot
The first electronic computer, the Electronic Numerical Integrator and Computer (ENIAC), was developed in 1946. It took up 1,800 square feet and weighed 30 tons.
What are the different types of computers?
When most people hear the word "computer" they think of a personal computer such as a desktop or laptop computer. However, computers come in many shapes and sizes, and they perform many different functions in our daily lives. When you withdraw cash from an ATM, scan groceries at the store, or use a calculator, you're using a type of computer.
Desktop computers are designed for use at a desk or table. They are typically larger and more powerful than other types of personal computers. Desktop computers are made up of separate components. The main component, called the system unit, is usually a rectangular case that sits on or underneath a desk. Other components, such as the monitor, mouse, and keyboard, connect to the system unit.
Figure 1.3 All in One Desktop Computer
A laptop is a battery- or AC-powered personal computer that is more portable than a desktop computer, allowing you to use it almost anywhere.
Figure 1.4 A Laptop running with Windows 10
Since a laptop is smaller than a desktop, it's more difficult to access the internal components. That means you may not be able to upgrade them as much as a desktop. However, it's usually possible to add more RAM or a bigger hard drive.
A server is a computer that "serves up" information to other computers on a network.
Figure 1.5 Server
Servers also play an important role in making the internet work: they are where web pages are stored. When you use your browser to click a link, a web server delivers the page you requested.
Other Types of Computers
Today, there are lots of everyday devices that are basically specialized computers, even though we don't always think of them as computers. Here are a few common examples:
Tablet Computers: These use a touch-sensitive screen for typing and navigation. Since they don't require a keyboard or mouse, tablet computers are even more portable than laptops. The iPad is an example of a tablet computer.
Mobile Phones: Many mobile phones can do a lot of things a computer can do, such as browsing the internet or playing games. These phones are often called smartphones.
Figure1.6 From left to right, Windows Phone, iPhone, Android, and Blackberry.
Game Consoles: A game console is a specialized kind of computer that is used for playing video games. Although they are not as fully-featured as a desktop computer, many newer consoles, such as the Nintendo Wii, allow you to do non-gaming tasks like browsing the internet
Figure1.7 From left to right, Nintendo Wii, PlayStation, and Xbox logos
Smart TV: Many TVs now include applications (or apps) that let you access various types of online content. For example, you can view your Facebook news feed or watch streaming movies on Netflix.
Figure 1.8 Smart TV
Two Main Styles of Personal Computer
Personal computers come in two main "styles": PC and Mac. Both styles are fully functional, but they do have a different look and feel, and many people prefer one or the other.
PC: This type of computer began with the original IBM PC that was introduced in 1981. Other companies began to create similar computers, which were called IBM PC Compatible (often shortened to PC). Today, this is the most common type of personal computer, and it typically includes the Microsoft Windows operating system.
Figure 1.9 PC
Mac: The Macintosh computer was introduced in 1984, and it was the first widely sold personal computer with a Graphical User Interface, or GUI (pronounced gooey). All Macs are made by one company, Apple Inc., and they almost always use the Mac OS X operating system.
Figure1.10 Mac Desktop
Although PC can refer to an IBM PC Compatible, the term can also be used to refer to any personal computer, including Macs.
BASIC PARTS OF COMPUTER
The system unit is the core of a computer system. Usually it's a rectangular box placed on or underneath your desk. Inside this box are many electronic components that process information. The most important of these components is the central processing unit (CPU), or microprocessor, which acts as the "brain" of your computer. Another component is random access memory (RAM), which temporarily stores information that the CPU uses while the computer is on. The information stored in RAM is erased when the computer is turned off.
Figure1.11 System Unit
Almost every other part of your computer connects to the system unit using cables. The cables plug into specific ports (openings), typically on the back of the system unit. Hardware that is not part of the system unit is sometimes called a peripheral device or device.
Mouse is used to interact with items on your computer screen. You can move objects, open them, change them, throw them away, and perform other actions, all by pointing and clicking with your mouse.
Figure 1.12 Mouse
Image source: Windows 7 help file
Understanding Mouse Buzzwords
When mice burst into the PC world in the early ’80s, Macintosh models had one button. PC models came with two buttons. Then somebody introduced a three-button mouse for PCs, and the world went wild.
Kinds of Mouse
Mouse ball: A little rubber ball rests in the belly of a mouse; when you move the mouse, you also roll the little ball. The movement of the ball tells the computer the direction and speed to move the on-screen pointer.
Optical: Optical mice ditch the ball/roller mechanics for a small glowing light and a sensor.
Trackball: Trackballs are, in essence, upside-down mice.
TrackPoint/AccuPoint: Found on some laptops, this pointing device looks like a pencil eraser protruding from the middle of your keyboard.
Touchpads: Found on many laptops, this square pad lets you move the cursor by dragging your finger across its surface.
Scroll wheel: This little wheel protrudes from the mouse’s back, usually between the two buttons.
Wireless: Wireless mice work just like their keyboard counterparts; in fact, some share the same receiving unit, which plugs into your computer’s USB or mouse PS/2 port.
PS/2: An older mouse comes with a PS/2-style connector, which still works fine. Just don't ever unplug the mouse while the computer is turned on, or the mouse will stop working, even after you frantically plug it back in. (Restart the computer, and the mouse will begin working again.)
Your PC's video circuits send images to your monitor, where you can see the action. Monitors and your PC's video circuits (known as video cards or display adapters) work as a team. When you shop for either a monitor or video card, these words show up on newspaper ads, showroom signs, and the fine print of product boxes.
Monitors come in different types, each described below.
CRT (cathode ray tube)
By comparison, old school CRT monitors seem boring and bulky.
Fading fast from the marketplace, CRT (Cathode Ray Tube) monitors, shown in Figure 1.13, resemble small (but expensive) TV sets. Although some CRT monitors call themselves "flat screen," that merely means their glass screens are relatively flat. They're not flat panel monitors, an honor belonging only to LCD monitors.
Figure 1.13 Cathode Ray Tube Monitor
LCD (liquid crystal display)
LCD monitors look slim and hip on any desktop.
The most popular monitor today, LCD (Liquid Crystal Display) monitors look much like large laptop screens mounted on a stand. LCD monitors, like the one shown above, are also called flat-panel monitors.
Figure1.14 Liquid Crystal Display
OLED (organic light emitting diode)
A display technology that offers bright, colorful images with a wide viewing angle, low power, high contrast ratio and fast response time for sports and action movies. The OLED technology differs greatly from the screens in plasma and LCD/LED Monitors/Display.
LED (light emitting diode)
An LED display is a flat panel display which uses an array of light-emitting diodes as a video display. An LED panel is a small display, or a component of a larger display.
Figure 1.15 Light Emitting Diode Display
The differences between OLED and LED are much more substantial than an extra vowel in their names. OLED is not just next-generation LED; it's an all-new technology that results in different pros and cons when it comes to performance, design, and energy consumption.
LED displays are very similar to existing LCD displays. The difference lies in how the screens are lit. While traditional LCD displays use fluorescent backlights, LED displays use smaller, more energy-efficient LEDs. Though LED displays are slimmer than traditional LCDs, the need for backlighting still makes LED displays larger than they could be. While LED screens produce great color, the brightness of the lights can also wash out blacks on the screen.
OLED displays have elements that generate their own light and don't require an extra lighting source. Their screens can produce vibrant colors by drawing on electrical current, and don't need active current at all to produce a true black color. This means thinner sets, better blacks, and lower energy consumption.
The keyboard is an input device and the main way to enter information into your computer. But did you know you can also use your keyboard to control your computer? Learning just a few simple keyboard commands (instructions to your computer) can help you work more efficiently.
The keys on your keyboard can be divided into several groups based on function:
Typing (alphanumeric) keys. These keys include the same letter, number, punctuation, and symbol keys found on a traditional typewriter.
Control keys. These keys are used alone or in combination with other keys to perform certain actions. The most frequently used control keys are Ctrl, Alt, the Windows logo key , and Esc.
Function keys. The function keys are used to perform specific tasks. They are labeled as F1, F2, F3, and so on, up to F12. The functionality of these keys differs from program to program.
Figure 1.16 Keyboard
Navigation keys. These keys are used for moving around in documents or webpages and editing text. They include the arrow keys, Home, End, Page Up, Page Down, Delete, and Insert.
Numeric keypad. The numeric keypad is handy for entering numbers quickly. The keys are grouped together in a block like a conventional calculator or adding machine.
Keyboards come with three supported connection technologies: USB, PS/2, and wireless.
Specialized keyboard keys require special drivers. Those specialized keys won’t work until you install the keyboard’s bundled software.
Wireless keyboards bear no cords, making for tidy desktops. Most come in two parts: the keyboard and a receiving unit, which plugs into your PC's USB port. Unfortunately, they're battery hogs.
Speakers are used to play sound. They can be built into the system unit or connected with cables. Speakers allow you to listen to music and hear sound effects from your computer.
To connect your computer to the Internet, you need a modem. A modem is a device that sends and receives computer information over a telephone line or high-speed cable. Modems are sometimes built into the system unit, but higher-speed modems are usually separate components.
Figure 1.18 Modem
Like most computer peripherals, printers come with their own secret vocabulary.
Figure 1.19 from left to right, Inkjet Printer, Laser Printer, All in One Printer
Kinds of Printer
Popular for their low price and high quality, inkjet printers (shown in figure below) squirt ink onto a page, creating surprisingly realistic images in color or black and white.
Laser printers might sound dangerous, but these printers use technology similar to their ho-hum equivalent, copy machines; they sear images into the paper with toner. Black-and-white laser printers cost a little more than inkjet printers; double that price for color laser printers. Although laser printers can't print digital photos, they're cheaper in the long run for general office paperwork.
Laser printers are supposed to heat up. That’s why you shouldn’t keep dust covers on laser printers when they’re running. If you don’t allow for plenty of air ventilation, your laser printer might overheat. After you’re through using your laser printer, let it cool off; then put on the dust cover to keep out lint and small insects.
All-in-one (AIO): Popular with small offices, this type of printer combines a laser or inkjet printer, copy machine, scanner, and a fax machine into one compact package.
Photo Printer: Many color inkjet printers do a fair job at printing digital photos, but photo printers contain extra colors, letting them print with more finesse. Some photo printers print directly from your camera’s memory card, letting you print without firing up your PC.
The Internal Hardware
Motherboard is the main circuit board within a typical desktop computer, laptop or server. Its main functions are as follows:
To serve as a central backbone to which all other modular parts such as CPU, RAM, and hard drives can be attached as required to create a computer.
To accept (on many motherboards) different components (in particular CPU and expansion cards) for the purposes of customization.
To distribute power to PC components.
To electronically co-ordinate and interface the operation of the components.
Form factor is the specification of a motherboard – the dimensions, power supply type, location of mounting holes, number of ports on the back panel, etc.
Figure 1.20 Motherboard form factors
Figure 1.21 Motherboard Form Factors
Central Processing Unit (CPU)
The PC processor is also called the central processing unit. It is the hardware within a computer that carries out the instructions of a computer program by performing the basic arithmetical, logical, and input/output operations of the system.
Clock speed is a processor's rating that measures the number of processing cycles it performs per second.
The FSB (Front Side Bus) serves as the processor's connection to the system memory. A higher FSB transfer speed allows better processor performance.
The L2 cache enables the processor to speedily access recently used information. Current processors use a Level 2 (L2) cache, which provides faster data transfer between the processor and main system memory.
32-bit (x86) CPU vs. 64-bit (x64)
There are two different types of CPUs: 32-bit and 64-bit. The main difference between these two processors is the structure: the older 32-bit processor has a structure that can process instructions less efficiently than a 64-bit processor.
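For example, a 32-bit processor can address at most 2^32 bytes of memory, which is 4 GB, while a 64-bit processor can address up to 2^64 bytes and can also move data in larger 64-bit chunks per instruction; this is one reason why systems with more than 4 GB of RAM generally need a 64-bit processor and operating system.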
An APU (Accelerated Processing Unit) is a processing system that includes additional processing capability designed to accelerate one or more types of computations outside of a CPU. It is the term AMD gave their CPUs that also have a graphics core inside the CPU chip. Simply put, it is a processor that combines CPU and GPU elements into a single architecture.
A multicore processor (for example, the Intel Core i7 Extreme Processor) enables the system to handle more than one thread at a time by switching the threads between the cores to provide faster information processing. It integrates multiple physical processors on a single chip, dividing the application between the processors to allow the system to function faster by running multiple threads.
Software is like a rope made up of individual threads. Some software uses only one thread at a time, while other software uses many threads, which is called multi-threading.
The figure below is the best illustration of how multiple cores perform and boost your PC.
Figure 1.22 How Multiprocessor works
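As a purely illustrative sketch (Python code, not part of the original learning module), the example below splits four independent tasks across the machine's CPU cores, which is the kind of parallel work a multicore processor makes possible:

from concurrent.futures import ProcessPoolExecutor
import os

def busy_work(n):
    # A small CPU-bound task: add up the first n integers
    return sum(range(n))

if __name__ == "__main__":
    tasks = [10_000_000] * 4  # four independent pieces of work
    with ProcessPoolExecutor() as pool:
        # The pool spreads the tasks across the available cores and runs them in parallel
        results = list(pool.map(busy_work, tasks))
    print(os.cpu_count(), "cores available; results:", results)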
Overclocking is the term used in pushing a processor to operate higher than what is officially rated by its vendor. It enables the system to extend its capabilities by almost or more than 25%, definitely a high boost to computer performance.
Random Access Memory (Ram)
Although manufacturers have created many types of memory over the years, all of the memory looks pretty much the same: a fiberglass strip about four inches long and an inch tall, with little notches in its sides and edges. Different types of memory fit into different types of sockets: little slots that hold the strip's bottom and sides. The notches on the memory module must mesh with the dividers and holders on their sockets. If they don't line up, you're inserting the wrong type of memory into the socket.
The Main Types Of Memory
SIMMs (Single In-line Memory Modules) come in two main sizes, as shown in the figure below, and each size requires a different socket. Ancient, pre-Pentium computers use the smaller size (3 1/2 inches long), which has 30 pins and usually holds less than 20MB of memory.
Early Pentium computers used a larger size (4 1⁄4 inches long), which has 72 pins and usually holds no more than 64MB of memory. Both types simply push into a socket, held in place by friction.
Figure 1.23 30 pin SIMM(Above), 72 pin SIMM(Below)
NOTES: SIMMs are yesterday’s technology from early ’90s computers. Don’t buy SIMMs for modern PCs.
SDRAM DIMM (Synchronous Dynamic Random Access Memory Dual In-line Memory Modules)
To meet the increased memory demands of newer and more powerful Pentium and AMD CPUs, designers created the speedier SDRAM DIMMs. With 168 pins, the 5 1⁄4-inch DIMMs (as shown below) look much like longer SIMMs. They slide into newly designed slots with little clips holding them in place.
Figure 1.24 168 pin SDRAM DIMM
NOTES: Usually called simply SDRAM, DIMMs ruled the computer world through most of the ’90s.
RDRAM (Rambus Dynamic Random Access Memory) or RIMM
Rambus, Inc., created a super-fast, super-expensive memory in the late 1990s and covered the chips with a cool-looking heat shield. The speedy 5 1⁄4-inch-long memory modules, shown in the figure below, enchanted Intel so much that the CPU maker designed its Pentium 4 CPUs and motherboards around them. The rest of the computer industry ignored RDRAM because of its high price and licensing fees. Intel’s main competitor, AMD, stuck with standard motherboards and SDRAM, the existing industry standard. RDRAM and SDRAM use different slots, so stick with the type of memory your computer is built around.
Figure 1.25 RDRAM (Rambus Dynamic Random Access Memory)
NOTES: Unless you’re using a Pentium 4 with an Intel motherboard, you probably won’t be using RDRAM.
DDR SDRAM (Double Data Rate SDRAM)
The biggest competitor to RDRAM, this stuff does some tricky piggybacking on the memory bus to speed things up dramatically. The catch? Because your motherboard must be designed to support it, these 5 1⁄4-inch memory modules use slots with different notches than those designed for traditional SDRAM. That means that DDR SDRAM modules, like the one in the figure shown below, don’t fit into a regular SDRAM slot or an RDRAM slot.
NOTE: Pentium 4 computers that don’t use RDRAM often use DDR SDRAM memory. However, make sure your motherboard specifically supports DDR SDRAM before buying it. (DDR is also known as Dual Channel.)
DDR2 SDRAM (Double Data Rate 2 SDRAM)
DDR2 SDRAM (shown below) is simply a newer, faster version of DDR SDRAM. Yet again, your motherboard must be designed to support it, as these modules use yet another system of slots and notches.
Figure 1.20 DDR2 SDRAM
DDR3 SDRAM (Double Data Rate 3 SDRAM)
Figure 1.20 DDR3 (Double Data Rate 3)
DDR4 SDRAM (Double Data Rate 4 SDRAM)
An abbreviation for double data rate fourth-generation synchronous dynamic random-access memory, is a type of synchronous dynamic random-access memory (SDRAM) with a high bandwidth ("double data rate") interface.
Hard drives constantly move to new technologies to pack more information into successively smaller spaces. These words describe the storage technology built into the drives found today and yesterday.
Common HDD Terms to Remember
IDE/ATA/PATA (Integrated Drive Electronics or Intelligent Drive Electronics):
This fast and cheap standard quickly chased its decrepit precursors out of the barroom a long time ago. Today, most hard drives still use some form of IDE technology, often referred to as ATA (AT Attachment). Because these drives use technology called parallel, they've picked up the acronym PATA to describe all drives from this old school.
UDMA, UIDE, AT-6, Fast ATA, Ultra ATA, UDMA, and more:
These subsequent flavors of IDE/ATA technology each add new technologies and longer acronyms. The result? More speed and more storage capacity.
SATA (Serial ATA)
The newest incarnation of the IDE/ATA drives, these offer still greater performance. Older drives moved information to your computer through awkward, stubby ribbon cables. SATA drives transfer their information faster through sleek, thin cables that route through your computer’s innards more easily.
You guessed it, external SATA drives live outside your PC and plug into special eSATA ports you can add to your PC.
SCSI (Small Computer Systems Interface), Fast Wide SCSI, Ultra SCSI, Wide Ultra2 SCSI
Pronounced "scuzzy," this popular drive variety worked its way into the hearts of power users and network administrators. Today, SATA has pushed SCSI aside even among those folks.
Speed and Space
The following terms appear on nearly every hard drive’s box to help you find the drive with the size and speed you need:
Capacity: The amount of data the hard drive can store; the larger, the better. When buying a new drive, look for something with 50 gigabytes (GB) or more. Always buy the biggest drive you can possibly afford.
Access or seek time: The time your drive takes to locate stored files, measured in milliseconds (ms). The smaller the number, the better.
DTR (Data Transfer Rate): How fast your computer can grab information from files after it finds them. Larger numbers are better. Data transfer rates are broken down into burst and sustained each described next.
Burst/sustained: The burst rate determines the speed at which your computer can fetch one small piece of information from your hard drive. The sustained rate, by contrast, refers to how fast it constantly streams data when it fetches a large file, for example. Naturally, burst rates are much faster than sustained rates.
5000/7200/10000 RPM: The speed at which your hard drive’s internal disks spin, measured in revolutions per minute (RPM). Bigger numbers mean faster and more expensive drives. (For some reason, techies leave out commas when discussing RPM.)
• When you’re purchasing a drive for everyday work or sound/video editing, buy a very fast one. If you’re looking to simply store large amounts of data, such as MP3s, videos, text, or similar items, save money by buying a slower drive.
• For further information read storage devices section.
POWER SUPPLY UNIT (PSU)
Converts high-voltage alternating current (AC) power into the lower voltage direct current (DC) power that your motherboard and drives need.
Figure 1.27 Power Supply Unit
Power Supply Form Factors
Power supply form factors more or less mirror motherboard form factors. ATX power supplies are the most common, and they plug into all sizes of ATX and BTX motherboards. You can find smaller power supplies that fit microATX, FlexATX, and microBTX motherboards as well.
Figure 1.28 PSU Connectors
Figure 1.29 Power Connectors and Voltages
Mini is also called berg connector.
P1 (20- wire, 24 wire) is also called 24 pin ATX power connector or 20 pin ATX power connector.
P4 connector is also called 12V 4 pin power connector.
A graphics card (also called a video adapter, display card, graphics board, display adapter or graphics adapter) is an expansion card which generates a feed of output images to a display.
Figure 1.30 System Unit with on board and attached video card
Common Types of Graphics Card
Peripheral Component Interconnect (PCI)
Most PCs sold before the late 1990s came with a video card in one of their PCI slots. Today, however, this type of slot is not used for graphics cards; instead, it is used for several add-on cards such as audio cards, LAN cards, and other types of PCI cards.
Figure 1.31 PCI Video Card
AGP Accelerated Graphics Port (often shortened to AGP)
is a high-speed point-to-point channel for attaching a video card to a computer's motherboard, primarily to assist in the acceleration of 3D computer graphics. Originally it was designed as a successor to PCI type connections. Since 2004, AGP has been progressively phased out in favor of PCI Express (PCIe). By mid-2009, PCIe cards dominated the market; AGP cards and motherboards were still produced, but OEM driver support was minimal.
Figure 1.32 AGP Video Card
PCI-E Peripheral Component Interconnect Express
Officially abbreviated as PCIe, it is a high-speed serial computer expansion bus standard designed to replace the older PCI, PCI-X, and AGP bus standards.
Figure 1.33 PCI-E Video Card
What’s on the card?
DisplayPort is a digital display interface developed by the Video Electronics Standards Association (VESA). The interface is primarily used to connect a video source to a display device such as a computer monitor, though it can also be used to transmit audio, USB, and other forms of data.
Figure 1.34 DisplayPort
HDMI (High Definition Multimedia Interface) simultaneously transmits visual and audio data via the same cable.
Figure 1.35 HDMI Port
DVI (Digital Visual Interface) is a digital-based standard designed for displays such as flat-panel displays (LCDs, plasma screens, wide high-definition television displays) and video projectors.
Figure 1.36 DVI Port
Analog D-Sub is the standard interface for analog monitors. It was designed for CRT displays.
Figure 1.37 15 pin Analog D-Sub
Common Graphics Card Terms
A GPU's speed in drawing pixels depends on the number of ROP (Raster Operation Pipeline) units on the graphics card.
GART Graphics Address Remapping Table
A capability of reading texture directly from the memory without the need to copy them to video memory.
Pixels pass through a GPU unit called a pipeline, which crunches complex vertex and pixel shader programs for lighting and effects. The more pipelines, the better.
Cheaper cards usually have four pipelines, while mid-range to high-end cards have 8-16 pipelines or more.
Pixel and vertex shaders, also called pixel processors and unified shaders, are usually used for realism in gaming.
SLI (Scalable Link Interface) and Crossfire [ATI]
New technologies that allow the installation of two or more graphics cards for certain intensive graphics applications.
DirectX is software that programmers use to create advanced visual tricks with video circuitry. Many games use DirectX to display three-dimensional fire-breathing dragons and other spectacular effects. My computer uses DirectX version 11 running the Windows 7 Ultimate operating system.
A driver is a piece of software that lets Windows talk to your hardware, in this case, your video card. Without the right driver, your card won't work properly.
A port is a computer buzzword for a connector; it is one of the connectors on your PC where you plug in cables. The plug on the end of your monitor's cable must match your PC's video port.
PS/2 Keyboard and Mouse
Keyboard and Mice Connectors - Old Style 5 Pin DIN Keyboard connector. The 5 pin DIN connectors are rarely used anymore. Most computers use the mini-DIN PS/2 connector, but an increasing number of new systems are dropping PS/2 connectors in favor of USB. Adapters are available to convert 5 pin din to PS/2.
PS2 Keyboard (Purple) and Mouse (Green). NOT interchangeable.
Newer Motherboards may have a single PS2 connector with 1/2 purple and 1/2 Green.
Figure 1.38 PS/2 Mouse and PS/2 Keyboard Ports
Serial or COM-1 port Used for External Modems and old Mice. Being phased out on newer computers. Replaced by USB.
Figure 1.39 Serial Port
Today, serial ports usually remain empty. Modems, their prime users, usually live inside the computer. A handful of other gadgets cling to them, mostly older PocketPCs, Palm Pilots, label printers, and similar nerdy gadgets. Most high-end PCs still include a serial port, but the budget models leave them off.
Parallel / Printer Port
Also called the Line Printer Terminal (LPT) port, it was used for old printers. It will not be found on newer computers, having been replaced by USB.
Figure 1.40 Parallel Port
Hunkered down next to a computer’s two serial ports sits a parallel or printer port. (Nerds call it a DB25 port.) It’s always been there for connecting to the printer.
Like serial ports, parallel ports are being replaced by USB ports. A few printers still use them, though, so they haven’t yet dropped off high-end PCs. You probably won’t find one on a budget PC.
Universal Serial Bus (USB) Ports
Use the USB (Universal Serial Bus) ports to connect USB devices; USB is used for just about everything.
Figure 1.41 USB standard ports and symbol
FACTS: For the past ten years, manufacturers have shipped their computers with USB ports: small, rectangular-shaped holes ready to accept small, rectangular-shaped plugs. At first, everybody ignored them. But slowly, companies began creating items to plug into those holes.
FireWire connectors should not be confused with USB connectors, although they look almost the same. FireWire is used to connect external devices like hard drives. Sometimes FireWire is called IEEE 1394a or i.LINK.
Figure 1.42 Firewire
Audio and Game Ports
SPDIF is a standard for transmitting high-quality digital audio without going through an analogue conversion process. The SPDIF interface can be implemented in two different ways, Coaxial and Optical.
Figure 1.43 Audio Ports, (Green) Line Out, (Pink) Mic In, (Blue) Line In
Connect an RJ-45 jack to the LAN port to connect your computer to the Network.
Figure 1.44 (Left) Ethernet cable (Right) Ethernet port
SELF CHECK 1.1-1
IDENTIFICATION: IDENTIFY WHAT IS BEING ASKED
An electronic device that manipulates information, or "data."
The first electronic computer, It was developed in 1946. It took up 1,800 square feet and weighed 30 tons.
A. Electric Numerical Integrator and Computer
B. Electronic Numerical Integrator and Computer
C. Electronically Numerical Integrator and Computer
D. Electronic Number Integrator and Computer
It is any set of instructions that tells the hardware what to do. It is what guides the hardware and tells it how to accomplish each task.
Designed for use at a desk or table and made up of separate components.
It was introduced in 1984, and it was the first widely sold personal computer with a Graphical User Interface, or GUI.
Battery or AC-powered personal computers that are more portable than desktop computers, allowing you to use them almost anywhere.
This type of computer began with the original IBM PC that was introduced in 1981.
It is any part of your computer that has a physical structure, such as the computer monitor or keyboard.
It is the hardware within a computer that carries out the instructions of a computer program by performing the basic arithmetical, logical, and input/output operations of the system.
A computer that "serves up" information to other computers on a network.
ENUMERATION: ENUMERATE THE FOLLOWING.
Other types of computer
Two main styles of personal computer
Answer Key 1.1-1
B. Electronic Numerical Integrator and Computer
C. Desktop computer
C. Personal Computer
ENUMERATION: ENUMERATE THE FOLLOWING QUERIES.
Tablet, TV, Game Console, and Smartphones/Mobile Phones
PC and Mac
TASK SHEET 1.1-1
Title: Identifying Computer’s Parts
Performance Objective: Given the following materials, you should be able to identify and explain the function of the different parts of a computer. Allotted time: 30 minutes.
Supplies/Materials : Whiteboard,
Equipment : Computer hardware (non-working hardware is recommended; it is used for identification purposes only)
The trainer will prepare selected hardware for you to identify, showing each item and its purpose (including input/output ports, basic parts of the computer, and the internal hardware).
Read information sheet 1.1-1 Computer and its Hardware
Study and remember all computer parts and its function.
When you are ready to identify the parts, present yourself to your trainer and start identifying the given parts of the computer.
Performance Criteria Checklist 1.1-1
Trainee’s Name:____________________________ Date: ______________
Are computer parts and hardware identified properly?
Are tools, equipment and materials listed?
Are the input/output ports enumerated?
Are internal hardware(s) enumerated?
During the performance of the task, did you consider the following criteria?
In this article you will learn how install and setup apache tomcat under Linux / UNIX environments.
Before we begin installation of Apache tomcat on our Debian based operating system Ubuntu 12.04 LTS, you need to make sure that your hostname is correctly setup.
Note: This works under Ubuntu 10.04 LTS, Ubuntu 13, CentOS / RHEL / Redhat Enterprise Linux, Debian 6 and Debian 7.
If you don’t know on how to set your machine hostname please refer to our previous article “How to setup hostname.”
In our case, our hostname is “example.com”
First, update your operating system’s repositories and then upgrade the packages installed. Please note we are running all commands from a sudo enabled user.
sudo apt-get update
sudo apt-get upgrade
What is Apache Tomcat?
Apache Tomcat is a Java-based web application server. It is one of the most widely used Java servers; it is a web server and a servlet container for Java web applications.
The latest stable release of Apache Tomcat at the time of writing this article is 7 (May 2013). We will be using latest tarball from the Apache website.
We strongly recommend checking the Apache Tomcat website when installing as installing latest stable version has a lot of security and bug fixes which will help stabilize the server as well as the web application.
We will cover both installations automated installation using “apt-get” and installation from the tarball archive which is available on Apache Tomcat and this applies to other distributions as well “CentOS / RHEL / Redhat Enterprise Linux”
To download and install Apache Tomcat 7 under Debian / Ubuntu:
sudo apt-get install tomcat7
To download and install Apache Tomcat 7 CentOS / RHEL / Redhat Enterprise Linux:
You can download latest stable version of Apache Tomcat from their website: http://tomcat.apache.org/download-70.cgi
Please remember to download tar.gz under the “Core” section of the download page.
We will download the latest stable tarball of Apache Tomcat 7
As the download completes, extract the tarball
tar xvzf apache-tomcat-7.0.40.tar.gz
If you want to run Apache Tomcat 7 under your home directory “~” you may remain this as it is. However if you want to run under some specific directory, you need to make it now before installation.
In this article we are moving our installation to the “/opt/” directory
sudo mv apache-tomcat-7.0.40 /opt/tomcat
We have now installed entire Apache tomcat 7 web server under /opt/tomcat directory. Before you start it, deploy it, or test any application, you need to install Java. This is because Apache Tomcat needs it to start or deploy any web application written in JAVA language.
To install Java:
sudo apt-get install default-jdk
Once JAVA (above) is installed, you need to edit your file toyou’re your environment variables. We are doing that for our user, if you intend to use any other user you need to do it for the users specific “.bashrc” file or you need to set the environment variables globally for the all the users of the operating system.
Add below content to the end of the file “.bashrc”
Save and exit the file “.bashrc”
We either need to log out and log in again to make the changes took effect, or you can restart the .bashrc file by using the command below.
Tomcat is now installed and configured on your machine / server but it is not yet started.
To start Apache Tomcat 7 web server use commands below:
Now we go to the web browser and navigate to you rip address; in our case it is “192.168.10.20” our URL is below:
Why port 8080?
Apache Tomcat 7 web server’s default port is 8080. However you may change this to something else which is not the scope of this article.
This will show us the default page of the Apache Tomcat 7. If you see the default page it will be saying “If you see this page this means that you have successfully installed Apache Tomcat 7”
Great, You just installed and configured Apache Tomcat 7 Web server!
In next article we will cover how to deploy web applications under Apache Tomcat 7 web server. | <urn:uuid:6fd4c83b-d001-482a-8b5f-b58ec5dde7bf> | CC-MAIN-2017-04 | http://www.codero.com/knowledge-base/content/10/306/en/how-to-setup-apache-tomcat.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00394-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.841024 | 982 | 2.828125 | 3 |
Context is funny. You don’t miss it or even think about it until you don’t have it. Then suddenly context becomes a really big issue.
As long as the world had structured data, context was not much of an issue (or at least it wasn’t a large issue.) Consider the following simple structured file:
* Not valid phone numbers
|NAME || PHONE ||BANK BALANCE |
| Bill Inmon || 111-123-4567* || $3,122.97 |
| Ron Powell || 222-345-6789* || $10,981.24 |
| Linda Kresl || 333-111-1111* || $512.87 |
Looking at the simple file and knowing what the columns of data mean, we make several assumptions about context. For example, we assume that the phone number is current. We assume that the phone number is in the U.S. We assume that the area code designates some geographical locale. As far as the bank balance is concerned, we assume that it is the current bank balance. We assume that it is in U.S. dollars. We assume that the name that is in the file is associated with the phone number and the account. In a word, because the data is structured, we make a lot of contextual assumptions about the data contained in the file.
Such is the nature of structured data. A big part of the “structure” of structured data is the context of the data found in the record or the file.
But when it comes to unstructured data, there is no context that can be conveniently associated with the data. For example, suppose you are reading an unstructured file. Suppose you encounter the number 7. Now what does “7” mean?
Is it the days in the week? The seven seas? The amount the Dow Jones went up this morning? The number of brothers and sisters you have? The truth is that the number “7” is naked. By itself it means nothing. In order for “7” to have meaning, it MUST have context. And with unstructured data there is no context.
So before you get all excited about “big data” and all the unstructured data you find there, you need to spend some time thinking about how you are going to apply context to your unstructured text. If you are seriously going to ponder that question, spend a few minutes on the larger question: What does context of raw text really mean? It turns out that there are many different kinds of context – some of them more useful than another. Some of the forms of context are:
- What type of data has been stated?
- Who stated it?
- Where was it stated?
- What was it stated in response to?
- When was it stated?
- How was it stated?
- What day and time was it stated?
- What was stated before it? After it?
- And so forth.
This issue of context is a fundamental issue that we take for granted. But when we get into the world of unstructured data, we enter a whole new world where context simply is not present.
The people that are going nuts over big data today seem to either not know or not care that there is this major issue of context that comes with big data.
SOURCE: Big Data Needs Context
Recent articles by Bill Inmon | <urn:uuid:955f1c10-7122-4ca2-9fde-07d53cb89eae> | CC-MAIN-2017-04 | http://www.b-eye-network.com/channels/1134/view/16481 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00210-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962487 | 721 | 2.84375 | 3 |
What is a CDN?
CDNs are widely used in today’s Internet landscape, improving the delivery of a significant percentage of all Internet traffic worldwide. But what does that mean, and what really is a CDN? A content delivery network is a highly-distributed platform of servers that responds directly to end user requests for web content. It acts as an intermediary between a content server, also known as the origin, and its end users or clients. To learn more, check out our video and article: “What is a CDN?”
What are the benefits of a CDN?
Content Delivery Networks, also known as CDNs, carry nearly half of the world’s Internet traffic. They are ubiquitous by their presence and mitigate the challenges of delivering content over the Internet. But why are CDNs so pervasive? Why is it that small and medium content providers, as well as large corporations, have come to rely on CDNs to provide a seamless web experience to their end users? To learn more, check out our video and article: “What are the benefits of a CDN?”
How do I maximize the benefits of a CDN?
Content Delivery Networks, or CDNs, are like many strategic tools: easy to learn and benefit from, but difficult to master and get the most out of. Like doing business online, using a CDN involves layers of complexity; and CDN providers spend every moment of every day dealing with some of the most complex technical challenges of doing business online. The challenges met by deploying a global CDN include delivery of enterprise web applications, media and software delivery, and cloud security solutions.
So how can you, whether you have a small technical team or a massive world-class staff, get the most out of your CDN? To learn more, check out our video and article: “How do I maximize the benefits of a CDN?”
Learn more about the Next Generation of Content Delivery Networks (CDN)
- Content Delivery for an Evolving Internet: Choosing the Right CDN for Today and Tomorrow
In this whitepaper, we define the core requirements for such a CDN – a highly distributed architecture, cutting-edge software services, sophisticated security capabilities, and support for agile businesses – and establish why these particular requirements are critical for helping businesses succeed in today’s fast-changing marketplace.
- CDN Buyer's Guide
In this authoritative guide, CDN buyers can get up to speed on the latest developments in the Next Generation CDN. Learn about CDN provider’s capabilities that are critical to delivering greater online experiences, including advanced web performance optimization, high quality video streaming delivery, cloud security, and web application acceleration.
- Next Generation CDN Infographic
A graphical overview of how CDNs are evolving in response to new digital requirements. Learn how digital performance can affect your bottom line, how CDNs are optimizing their networks, and what's driving enterprise buyers to the Next Generation CDN.
- The State of CDN Services
This white paper from Unisphere Research, “The State of CDN Services: Reaching Global Scale Using Content Delivery Networks“, covers the latest content consumption trends and offers insights on how you can use a content delivery network to profit from them. Discover how to optimize your CDN to deliver media to various devices, how different CDN services enhance user experiences, and how to apply CDN practices to meet the expectations of online video viewers.
- CDN Resource Page
An introduction to Akamai's CDN solutions and the benefits they provide to CDN operators, including advanced web performance optimization, improved network security and compliance, application delivery acceleration, additional self-service options, higher quality video delivery and a cloud architecture built for a global audience.
- Content Delivery Network
An overview of the evolution and latest developments of the content delivery network. See how Next Generation CDNs are addressing issues arising from the growth of non-cacheable content, the prevalence of the cloud, new security requirements and the demand for better CDN analytics.
- CDN Services
Meet Akamai Aura Managed CDN, an innovative Software-as-a-Service (SaaS) solution for managed CDN services. This turnkey solution lets CDN providers and operators launch their own video streaming services and optimize their network for content delivery, while reducing deployment time and upfront costs.
- CDN Platforms
CDN platforms are changing, and web experience enhancements are evolving to help enterprises deliver an optimized web experience. Evolving CDN platforms and capabilities include web experience optimization services that increase website speed, media and content delivery.
- CDN Glossary
An invaluable reference tool providing definitions for many of the terms that are part of the quickly evolving CDN landscape. | <urn:uuid:5f930ada-2513-4057-9011-0ab3a3ddd4f1> | CC-MAIN-2017-04 | https://www.akamai.com/us/en/cdn/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00210-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.910465 | 987 | 2.5625 | 3 |
Here’s a scenario: You’re in an airport waiting to board your flight. You remember that you need to transfer some funds between bank accounts. You open your laptop and are about to connect to a public WiFi hotspot.
Wireless hotspots are extremely common. In high traffic areas (airports, waiting rooms, etc) it is more and more common to see them open for public use. But whose wireless network are you connecting to? Can you judge a book by its cover?
A “man in the middle” attack involves someone getting between you and your destination and intercepting whatever you’re doing. In the context of public WiFi, such an attack could lead to someone obtaining passwords or sensitive emails all because you needed an internet connection for 5 minutes.
With that in mind, a wireless hotspot named “YYC Public WiFi” might not appear out of place if you’re sitting in the Calgary International Airport, but the name alone doesn’t mean it is legitimate. Anyone could host that hotspot from their laptop or mobile device and pretend to be something they’re not. With the right name, tricking people into connecting can be very easy.
So how can you avoid malicious public hotspots? The best option would be to connect your laptop to your phone. Most smartphones allow you to tether your other devices via your own personal WiFi, Bluetooth, or USB connection. Tethering will give your laptop or tablet internet access via your mobile phone network. Banking or emailing while tethered might use up a small amount of data on your mobile plan, but it is well worth the knowledge that you’re connected to a trusted source.
Here are a few links to tethering tutorials to help you get connected:
- Android Tethering: https://support.google.com/nexus/answer/2812516?hl=en
- iPhone Tethering: https://support.apple.com/en-us/HT204023
It’s important to note that if you are tethering you should not be using it to watch movies or download large media. The cellular data plan is limited in size, you could exceed your allowance very quickly with movies and music.
Be mindful of what you’re connecting to and what you’re doing. If you do need to connect to public WiFi, check with local staff or posted signage to ensure an access point is legitimate. If any work you’re doing involves sensitive information it’s always better to tether unless you are absolutely sure the wireless network is safe.
Please contact our helpdesk (email@example.com) if you would like more information. | <urn:uuid:a7960537-9b04-41fd-a793-cb2a2baa1794> | CC-MAIN-2017-04 | http://wiki.sirkit.ca/2016/05/how-safe-is-public-wifi/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00118-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916034 | 561 | 2.53125 | 3 |
For women across the globe the experience of pregnancy is normally one filled with happiness and excitement. However for those in remote and poverty stricken regions the experience can be filled with anxiety, especially due to the general lack of education in health and in particular pregnancy, labour and childcare.
One of these areas is Bihar. Situated in Northern India, the state is known for its high mortality rates among women and children during childbirth. The maternal mortality ratio in Bihar is 371 per 100,000 live births which is the fourth highest in the country and above the national average of 301.
The state also has one of the highest rates for infant mortality (61 per 1,000 live births) in the country. This compares with just 15 per 1,000 in Goa and Kerala. The under-five mortality rate in Bihar is 85 deaths per 1,000 child births compared to the average of 74 for the whole country.
Although only 32 per cent of adult women in the rural state of Bihar own their own mobile phone, 83 per cent have access to a phone. BBC Media Action, the BBC’s international development charity, is using OnMobile’s platform to deliver information to the mobile phones of those living in difficult-to-reach rural areas.
The aim is to use mobile phones to provide cost effective training and job aids to community health care workers. The majority of community health workers in the state of Bihar are rural women with low levels of technical literacy – however 85 per cent do own their own phone.
The ownership level among health workers is higher than for adult women generally because the latter have a source of income – either a salary from the government or incentive payments linked to performance from the government.
OnMobile provides BBC Media Action with the technical platform to run Mobile Academy, an IVR training course that enables the health workers to expand their knowledge of life saving maternal and child health behaviours.
All the community health worker needs to do is make a phone call from their existing mobile phone to take the course – from anywhere, at any time, at their own pace. The course is divided into chapters and lessons, with a quiz at the end of each chapter. All those who gain an accumulative pass score at the end of the course receive a printed certificate from the government of Bihar.
The service is supported by the Bill & Melinda Gates Foundation and was developed in collaboration with Bihar’s government. Health workers pay for Mobile Academy at the rate of 50 paise (less than 1 US cents) per minute. This is approximately a 90 per cent reduction on what a commercial IVR service typically costs. The total charge for the course is Rs100 ($2) which covers the cost to the mobile operator of delivering the service, including taxes. Operators also share revenues with BBC Media Action and OnMobile to help cover operational costs.
OnMobile’s platform also powers Mobile Kunji, another BBC Media Action service that allows health workers to playback information about maternal and child health via their mobile phones. During their counselling sessions with families, health care workers dial short codes printed on a deck of cards to play IVR audio content that support the sessions.
Mobile Kunji is free to the end user. The first year costs are being covered by the Bill & Melinda Gates Foundation.
But do these services genuinely work? Well, Mobile Kunji has already acquired 85,000 unique users since its launch in May 2012, who have played more than 2.3 million minutes of health content. Mobile Academy has had similar success; 27,000 community health workers have called the service, listening to 2.8 million minutes of educational content; 27 per cent of users have already completed the course. Currently more than 8,000 health workers are eligible for certificates for passing the course.
The results speak for themselves in terms of take-up and if we continue to provide these services efficiently and cost-effectively then we hope to see a marked improvement in the health of pregnant women, mothers and babies in the state.
Next we hope to see the services appear elsewhere in India. BBC Media Action is working to expand the service to Odisha (also known as Orissa) with the support of the state government. And there are discussions to roll it out to other parts of northern India.
Vijay Sai Pratap is director of business development with OnMobile Global.
The editorial views expressed in this article are solely those of the author(s) and will not necessarily reflect the views of the GSMA, its Members or Associate Members. | <urn:uuid:6580d76f-3d9f-464e-87d9-fd748e778c67> | CC-MAIN-2017-04 | https://www.mobileworldlive.com/the-bbc-finds-an-audience-in-rural-india-for-its-mhealth-service | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00358-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957078 | 923 | 2.859375 | 3 |
If you don’t know what a firewall is, let’s start there…
A firewall is basically a digital “wall” that sits on the edge of your network or device. When someone makes a connection over a network or the internet to your server, they connect by the IP address + a Port. Firewalls, on a very basic level, say “allow traffic on this port” or “deny traffic on this port.”
So for web traffic you might connect to our server here: 18.104.22.168 on port 80. There are a lot of services that run on any machine and many of them you don’t want to be accessible from the internet. For example, many distributions of Ubuntu come with a running DNS server that is accessible on port 53. If left alone, this could be a route for people to exploit your machine.
One way to think about it is like your home. Your house has a physical address that someone can punch into a GPS and it will take them to your driveway. However to get into the house they will need to go through a door or a window. Ports are those doors and windows. If a person needs access to the services of your kitchen, then they can come through the kitchen door. If they need access to your garage, you can send them through the garage door. On a computer, different doors (ports) tend to correspond to different services (servers). For example, Apache Web Server commonly uses port 80 for HTTP traffic to host a website, or port 443 to host a secure website with SSL. SMTP servers often use port 25 to receive incoming mail. FTP servers often use port 21, and so forth and so on.
So it is advantageous to block certain ports. I.E. you might allow everyone to visit your kitchen but you don’t want everyone in your bedroom. It is best to actually just block all ports by default and only allow specific ports to incoming traffic.
Finally it is worth noting that firewalls can do all kinds of interesting and complex things with traffic. Most of those functions are well outside of the scope of this article, and outside of the scope of UFW, but we will get there.
If you spend enough time around Ubuntu servers you will come across a couple of terms related to security and particularly firewalls:
UFW = Uncomplicated FireWall
ipTables = the default host-based firewall built into Ubuntu and many other Linux distributions
Host Based Firewall vs. Network or Dedicated Firewalls
There are two basic types of firewalls. Host Based and Network. A Host Based firewall runs directly on a computer or server. It only controls traffic to/from that device. A network based firewalls is usually a dedicated piece of hardware that sits at the logical “edge” of your network and protects all devices behind it on the network. Dedicated firewalls can also be installed inside the network to limit traffic between machines on the same local network but that is beyond the scope of this article. You can think of host based firewalls as protecting an individual home and network firewalls like a fence around a gated community, protecting access to all of the homes inside.
Why do I want to bother with this?
Simply put, if you don’t, bad stuff could, and invariably will, happen. Because people do bad things, and setup automated systems to further speed up their badness. Setup a public web server and look at some of your logs, I guarantee within a very short period of time you will see all kinds of odd activity, most of which are automated systems that are driving around your house and looking for doors (ports) and then they start trying the doorknob on each one and if it is unlocked they will start trying different combinations of common usernames and passwords (if the service is protected at all) to see if any of them will work.
By enabling a firewall and setting up a default deny all rule, you are slamming and locking all doors in everyone’s face all the time. Which might be poor neighborhood etiquette but is exactly what you need to be doing with an internet connected system. You still want to allow everyone access to some services though so you unlock and open up some specific doors (like HTTP for a web server).
What is iptables?
ipTables is the builtin default host-based firewall on many distributions of linux, including Ubuntu. It is very powerful and very good at blocking stuff. It is also extremely configurable and can do some fairly advanced traffic routing, hence it is difficult to configure, especially for the majority of folks that don’t need all the extra bells and whistles.
Enter UFW – Uncomplicated FireWall
UFW is a tool that is also pre-installed on most Ubuntu distributions and many other linux distrubutions which seeks to make ipTables easier to manipulate and use for the masses. And it excels at doing so. With only a handful of easy/short commands you can setup a default deny rule and a few rules allowing access to your server.
Okay, what do I do?
If you are now sold on the idea of getting ipTables up and running with UFW on your Ubuntu server, follow along and I will do my best to take you there, as safely as possible. If you are working with a remote server, you have to be careful, because you don’t want to accidentally close the door that is allowing you to administer to the server.
ipTables is disabled by default. So it is nearly ready to go but is just sitting there doing nothing. Before enabling it though you need to first figure out what doors you need to keep open and then put in the rules to open them. Here are perhaps the four most common items:
Webserver: HTTP — 80/TCP
Secure Webserver: HTTPS — 443/TCP
FTP: FTP — 21/TCP
SFTP and SSH: SSH — 22/TCP
If you were already somewhat security conscious, you might have changed your default SSH/SFTP port to something other than 22. To check which port SSH is running on, take a look at the config file by doing the following:
That should return something in the format of “Port ##”. For example the default is “Port 22”, you might be running on “Port 922” or some other odd number if you changed it.
If you are running a public website, you are almost certainly using 80, and if a secure site (HTTPS) then port 443. Put together a list of any other services and their corresponding ports before proceeding. As long as you get SSH (assuming you administer your server remotely from the command line) open/working you at least won’t lock yourself out.
Next, install UFW. It might already be installed but it can’t hurt to run these commands anyhow.
sudo apt-get install ufw
Next setup a default deny rule that says all traffic will not be allowed by default. Remember, ipTables is currently not running so we are changing the config file but none of this takes effect until we turn ipTables on. Furthermore, remember UFW is just a simple front-end for ipTables, they aren’t two separate things per se’.
Next, open up all the ports you are going to need. Lets start with HTTP and HTTPS and SSH:
sudo ufw allow https
sudo ufw allow ssh
We just opened up 80/tcp, 443/tcp, and 22/tcp respectively.
Open up any non-standard ports, for example if you are running SSH on port 922, that would look like this:
If you also have udp ports to open, you can put “udp” in place of “tcp” in the above command.
Great, you should be all set and ready to turn your firewall on. After do so, I recommend you use Putty or whatever and open up another SSH session to ensure you didn’t accidentally close your SSH port. Your existing SSH session probably won’t be terminated, so it is safe to leave it open and make sure you can open up a second, new connection.
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
Use Putty or whatever you are using for SSH and at this point open up another session and make sure it works. If not, go back to your existing SSH window, and disable the firewall
…and go back to figuring out which port you are running SSH on and adding an allow rule in for that port before turning the firewall back on again. If you somehow did kill your SSH port, and you are having a good day, it shouldn’t have killed your existing connection. If it did, your are kind of stuck as the only way to get access to the box is to plug a keyboard and mouse directly into it and access it locally, which often isn’t an option on remote servers :(.
Backtracking – I added something I didn’t want…
Let’s say you messed something else up, commonly someone might have added an allow rule they want to get rid of, to do that, simply run the following (example, I want to close 922/tcp again):
If you opened a bunch of ports and can’t remember everything you did, you can review the rule list in simplified format with the following:
To Action From
-- ------ ----
[ 1] 6522 ALLOW IN Anywhere
[ 2] 6522/tcp ALLOW IN Anywhere
[ 3] 80 ALLOW IN Anywhere
[ 4] 443 ALLOW IN Anywhere
[ 5] 80/tcp ALLOW IN Anywhere
[ 6] 443/tcp ALLOW IN Anywhere
[ 7] 4545/tcp ALLOW IN Anywhere
This is handy for particularly complex rules as you can just delete by “number”. In the above example, you can see item 7 is port 4545 and I want to delete that rule. So I issue the following:
Proceed with operation (y|n)? y
If you have the luxury if a development server or dev environment, I recommend starting and playing around a bit there. If not, as long as you are careful not to kill SSH you should be able to backtrack and fix anything you break.
Hope this has been helpful! | <urn:uuid:bf75cc3a-cc15-4fba-bd7e-f62a51ca92d9> | CC-MAIN-2017-04 | https://www.kiloroot.com/uncomplicated-firewall-ufw-easy-to-configure-host-based-firewall-for-ubuntu-server-you-should-be-running-a-firewall/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00384-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941704 | 2,218 | 3.375 | 3 |
There are six key technologies that have defined our security process for the last 25 years. Some of these technologies aren’t as effective as they once were. Following a record year for data breaches, is it time to question our security processes?
The answer, of course, is yes. When something is broken, you fix it. But first, a brief history lesson:
- In 1986, the first intrusion detection system was invented.
- In 1987, John McAfee released the first version of VirusScan.
- Syslog as a universal format was created in the 1980s and documented as ).
- Packet filtering firewalls were invented in the late 1980s, and stateful firewalls were brought to market in 1989.
- Nessus was invented in 1998.
- In 1999, the first security event management system was created.
The ’80s and ’90s were very innovative times in Internet security. However, several of these technologies have turned out to be very noisy and produced false positives. Security information and event management (SIEM) correlation of security data, OS log data and vulnerability data was supposed to reduce this noise. This began what’s commonly known as the funnel approach. All security events go in the top of the funnel and the “legitimate” events — the ones we are supposed to be paying attention to — are the ones that come out the bottom as alerts. The funnel model may have worked well 10 or 15 years ago with a couple of hundred security events boiled down to 10 or 20. Today, the security team is looking at 10,000 (or more) events in a day and ending up with 1,000 (or more) alerts for follow-up. This has placed the burden back on the human to always ask if an event is a false positive, which leads to a general distrust of their own tools.
Once the security team has seen something that isn’t a false positive, they begin a manual process of linking disparate log events together by time, IP address, host name or other artifact. It can take a few hours to a few days to complete this task. If the right assumptions have been made, the analyst might get to the right root cause and the “patient zero” of where the attack started. Once that’s done, there is an attempt at attribution to a specific set of user credentials. All bets are off if the attacker switched identities, in which case the process takes even longer or may not complete at all.
There have been many evolutionary advances in cybersecurity as technologies improved and features added (which also increased complexity). The function of many of these base technologies has remained the same. There are two areas of innovation which deserve note. The first is Splunk, which pioneered data indexing and schema-on-the-fly for log search, and the other is malware detection and sandboxing pioneered by a number of companies, including FireEye.
The next new big thing is user behavior intelligence solutions. These automated solutions use machine learning, custom built behavior models, user session assembly, Stateful User Tracking™ and risk scoring to ask dynamic sets of questions about user credential behaviors and access characteristics of your existing SIEM data. They then link security system alerts to user sessions and credentials. The result is a dramatic acceleration of processes that expose the entire attack chain and turns the discovery of an advanced persistent threat into a credit-card-fraud-style Q&A by a tier-one analyst — “was that you using the VPN from Shanghai at an odd time of day, attempting to log into these 5 systems, accessing another and then switching identities? And, by the way, while you were logged into the host, FireEye sent out an alert.”
It’s clear that the technologies of the ’80s and ’90s served us well, but the advent of social engineering has put the attacker back in the driver’s seat with sets of valid credentials that allow them to get past current security detection tools. Further, once the attacker gets beyond initial intrusion detection systems by using credentials, there is no security strategy for detection. User behavior intelligence systems turn the security funnel process on its head by starting with the user credential and attributing to it security sensor data.
Don’t believe me? | <urn:uuid:b84519f5-247c-4865-8861-1166766d62c7> | CC-MAIN-2017-04 | https://www.exabeam.com/life-at-exabeam/user-behavior-intelligence-drives-new-security-processes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00202-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.96119 | 889 | 2.703125 | 3 |
Definition: A variant of bubble sort that compares each adjacent pair of items in a list in turn, swapping them if necessary, and alternately passes through the list from the beginning to the end then from the end to the beginning. It stops when a pass does no swaps.
Also known as cocktail shaker sort, shaker sort, double-direction bubble sort.
Generalization (I am a kind of ...)
See also bubble sort, gnome sort.
Note: Complexity is O(n2) for arbitrary data, but approaches O(n) if the list is nearly in order at the beginning. Bidirectional bubble sort usually does better than bubble sort since at least one item is moved forward or backward to its place in the list with each pass. Bubble sort moves items forward into place, but can only move items backward one location each pass.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 29 September 2014.
HTML page formatted Mon Feb 2 13:10:39 2015.
Cite this as:
Paul E. Black and Bob Bockholt, "bidirectional bubble sort", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 29 September 2014. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/bidirectionalBubbleSort.html | <urn:uuid:424b37b8-0d66-468a-8225-66ba00f24448> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/bidirectionalBubbleSort.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00018-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.879112 | 305 | 2.859375 | 3 |
What email address or phone number would you like to use to sign in to Docs.com?
If you already have an account that you use with Office or other Microsoft services, enter it here.
Or sign in with:
Signing in allows you to download and like content, which the author will be aware of.
Embed code for: Technology and Early childhood Education
Select a size
Caritas Institute of Community Education
Higher Diploma in Early Childhood Education
Technology and Early Childhood Education
Name: CHAN Wai-lam
Date : 2 9 -11-2016
The role of science and technology in education is becoming important. From the policy of various countries, it is not difficult to see that the trend of globalization of science and technology. America department of education in 1998 first proposed the "National Education Technology Plan" (Call to Action for American in the 21 Century)、Singapore's "IT Master Plan in Education"、Even the Hong Kong government has announced "With time and in use of information technology to learn five years strategy". It plays an important role in education with the change of society and the progress of science and technology ( Craig 2000 ) . The rapid popularization of computer, into the family and school after school, the application of information technology in teaching children has become the essential link. Many studies have confirmed the use of technology on children's learning is effective, studies have pointed out that the use of computer can help children to learn basic math concepts and skills development, spatial ability, cultivate creativity and enhance the ability to solve problems ( Casey 2000 ). But also let children use electronic products also exist some problems, I will discuss on the relationship between technology and education.
Role of Technology in Education
Technology has become an indispensable part of today's life. I think in the teaching of kindergarten can make children absorb more of the in the classroom through the integration of information technology or make the teacher's teaching is more easy and rich ( Wong 2002) . As for the timing of the integration of information into teaching, I think have five stages. I think that the five stages of preparation before class, motivation, activity, teaching activities, and after class evaluation. And this is mainly to explore the preparation before the class, cause motivation and activity.
(1) Preparation Before Class
Preparing for all the lessons before class, including the collection, arrangement and production of lesson plan, plan and course content, and also includes the teacher's understanding of the content and the exercise. For example, teachers teach marine organism. Information on marine organism can be collected on the Internet at the time of preparation. You can also find the picture as a teaching aid.
(2) Cause Motivation
From the perspective of time, motivation beginning of the study, before and after the end of teaching. For example content of marine organism, the image of the film dynamic, more than the static text can cause learning interest. So, in technology into the application, you can use a digital camera or DV camera to take pictures or related teaching film. Can also use a variety of multimedia production software, (For example PowerPoint、Flash、Authorware、Director) Self production course materials, interactive games. such as, look for pictures marine organism in the different Internet, and show them to the children in activities. Let them see what it is. Thus, they have aroused their interest in marine organism.
(3) teaching in class
Generally the traditional lecture teaching is a one-way communication. Therefore, are also produced two major shortcomings. Respectively is monotonous dry and not specific. This can not cause the students learning motivation is also lack of resonance interaction. This affects the learning effect. If teachers can integrate technology teaching, play multimedia rich and diverse characteristics. Will be abstract as a concrete, reduce the obstacles thinking to learning. For example, teachers use self-made dynamic Flash materials, combined with text, graphics and sound, teach students to read and read English pronunciation; use PowerPoint to tell the story, can be more vivid. Such as, to understand the growth of tropical fish. Teacher can find the relevant tropical fish grow fragment from the Internet, play for children to watch. Make them understand the growth process of tropical fish or make their own animation and children watch.
Advantages and Disadvantages of technology
The current technology is changing rapidly. Some products are applied to early childhood education and learning. I have reservations about the advantages and disadvantages of these technology products in teaching children's learning. In advantage, it can reduce the weight of teaching books, can reduce paper. Product advanced, easy to manipulate and master the skills , and the diversification of teaching content, to enhance the interest of children to learn ( Moersch 1995 ) . But there are a lot of disadvantages. For example, as early as children in contact with these electronic teaching, the traditional teaching materials, they feel not enough to attract, reducing the habit of reading books. Make learning interest and lack of concentration ( Dede 1 998 ). Also, such as the long-term use of electronic products, many studies have pointed out that in the early childhood development of vision, waist and neck have a bad effect. In addition, the software also has a number of intelligent systems, is to help adults save writing time. So I think that electronic products and now life and learning is not separate. But adults must instill in children some of the right ideas and methods of using these technology products. Individuals believe that the use of attitude is the most important, so that we can make good use of technology to bring us the benefits, to reduce the disadvantages of science and technology.
Reflection on technology
I think the computer can not provide the actual sensory experience. Oppenheimer(1997) Emphasis on the introduction of technology before the child stage of the teacher's most important is to give children a broad sense of emotion, knowledge and the basis of the five senses. For example when teaching of marine organism all is to use video to lead the whole teaching ( Druin 2009 ) . Children may not be able to have a real sense of stimulation. For example, when the teacher introduced marine organism can take out with real fish to make children real feel or take out water to let the children feel the in of living in the water. So, I think we want children to have a comprehensive development. Technology can assist teaching, but it can not rely too much on information technology. So, we can not ignore the importance of real education. Because make that the whole teaching can be integrated. (1013word)
Casey, J. (2000). Creating the early literacy classroom. Englewood, Colo.:
Craig, D. (2000). Technology, math, and the early learner: Models for
learning. Early Childhood Education Journal, 27(3), pp.179-184.
Dede, C. (1998). Learning with technology. 1st ed. Alexandria, Va.: Association
for Supervision and Curriculum Development.
Druin, A. (2009). Mobile technology for children. Amsterdam: Morgan Kaufmann
Uden, L., Liberona, D. and Welzer, T. (n.d.). Learning technology for education
in cloud. 1st ed.
Moersch , C. (November, 1995). Levels of Technology Implementation:
A Framework for Measur ing Classroom Technology Use . Learning
and Leading with Technology Use, 23(3). Eugene, OR: ISTE
Publishing. Retrieved November 12, 2001
Wong, T. (2002). Science and technology for children. 1st ed. Washington, D.C.:
National Science Resources Center.
課前準備 ,通過網絡老師可以對內容的再熟悉與演練。 因為,老師可以從不同的網頁找尋合適自己的資料。例如:主要題走海洋生物時,老師可以找不同的資料收集。
教師使用自行製作的動態Flash教材,結合文字、圖形與聲音,教導學生看圖認字與英文發音;使用PowerPoint說故事,可以更生動。例如,了解 海洋生物。可以從網上找有關 海洋生物片段,播放給兒童們觀賞。令他們了解 不同海洋生物。或自行製作動畫及幼兒們觀賞 ,令他們更容易明白。
mation and children watch. | <urn:uuid:a77f6148-afda-4e6a-8c6e-018340b0eb96> | CC-MAIN-2017-04 | https://docs.com/kelly_1995/3875/technology-and-early-childhood-education | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00412-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.874973 | 2,013 | 2.65625 | 3 |
Teachers in Pennsylvania are being encouraged to use increasing amounts of technology in the classroom. The downside to this trend is the increased risks that students face online by accessing inappropriate or potentially harmful content.
There’s no doubt that the increased use of technology in Connecticut classrooms enhances students’ learning. As a district technology coordinator in Connecticut, how can you ensure your students are safe online and acting responsibly? Anti-bullying policies and blocking, you may cry. But is this enough?
As mobile devices infiltrate school, work, and personal life, we live in a society that is ‘always on’, with constant access to information. This trend presents school districts in Georgia with the challenge of ensuring all students remain safe online.
As school districts in Oregon continue to invest in technologies such as Chromebooks to enhance learning, the chance of students’ exposure to online risks is increasing. What can schools in Oregon be doing to ensure their students are safe?
Impero, which has headquarters based in the UK, recently chose Portland in Oregon for its new US offices. The move comes as a result of huge business growth, and the adoption of Impero’s classroom management software for education across America, fuelling the demand for timezone coverage. Portland was chosen as the city of choice due… | <urn:uuid:cb679454-2372-4edc-ad4e-918a484f61b0> | CC-MAIN-2017-04 | https://www.imperosoftware.com/author/daniel-phillips/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00412-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952246 | 266 | 2.5625 | 3 |
Where the left and right side of the brain meet.
16 TIPS TO BETTER NETWORK DIAGRAMS
WELCOME TO NETWORKDIAGRAM101 WHERE THE LEFT AND RIGHT SIDE OF THE BRAIN MEET.
Our brains are divided into two distinct hemispheres, each responsible for different cognitive skills. Some people favor one side over another, while others have a good balance.
I’ve spent more than 16 years finding this balance and creating visually stimulating and informative network diagrams. In order to make quality maps, we’ll need to tap both sides to create informative, but visually appealing diagrams that everyone will want a copy of!
TIP 1TIP 2TIP 3TIP 4TIP 5TIP 6TIP 7TIP 8TIP 9TIP 10TIP 11TIP 12TIP 13TIP 14TIP 15TIP 16 | <urn:uuid:4f0d9e15-42cc-4c93-b847-f853ebbe0fe1> | CC-MAIN-2017-04 | http://networkdiagram101.com/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00320-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.763027 | 187 | 3.046875 | 3 |
Author: Lisa Yeo
Publisher: Prentice Hall PTR
Many users think that their personal computers are not susceptible to any kind of attack. Despite their belief, many computers, especially those behind a permanent broadband connection, suffer attacks. What can you do to protect yourself? One of the things you can do is install a personal firewall and this book is here to teach you all about it.
About the author
Lisa Yeo is a systems analyst with the Legislative Assembly Office in Edmonton, Alberta, Canada. Her start in security came in 1997 when she was made responsible for managing a corporate firewall. Since that time, she has acquired the Global Information Assurance Certification Security Essentials and Windows certifications. Lisa currently sits on the GIAC Windows Board.
An interview with Lisa Yeo is available here.
Inside the book
The book starts with a chapter dedicated to security basics. The author introduces general security principles and helps you realize why firewalls exist. Yeo naturally notes that a firewall is not the only method of defense you should use. Here you’ll learn about the three basic principles of information security: confidentiality, integrity and availability. They are all explained with examples that even users without any knowledge of computer security can understand. Following is a part dedicated to risk assessment where you’ll see the various types of risks that you can be affected by. Once you’ve made the risk assessment you can create a security policy and Yeo illustrates the things to consider when creating one. The last part is dedicated to the explanation of what firewalls do, why they are necessary and you’ll see why you might need a firewall.
In order to get an understanding on how firewalls work, you need to learn the basic networking concepts. The author writes about the Internet Protocol Address, the Domain Name System, RFCs, etc. We move on as we read about internet protocol basics and get an understanding about ports, the Transmission Control Protocol, and so on. This chapter provides a solid foundation that will help you understand what follows next.
Yeo shows us the different methods that personal firewalls use to protect your machine. Here you’ll learn about Network Address Translation, static packet filtering, stateful inspection and application proxy. The author notes that personal firewalls often combine these standard methods and additional features like blocking on attack signature and intrusion detection. There are many figures and tables in this chapter that illustrate the material and provide a clearer picture for readers new to how personal firewalls work.
Yeo moves on by discussing the usage of a personal firewall at home. The author identifies the various risks that are a particular concern to the home user. In order to assess your needs you have to think about your skills, interests and aptitudes. You should also create a personal security policy. When you are looking for software there are a few things you should take into consideration: ease of use, configuration, levels of protection, logging, etc. Next we see how the Zone Alarm personal firewall can be configured. It’s also nice to see that the author doesn’t stick only to the Windows platform but also mentioned the Lokkit tool that can be used to configure basic settings for
ipchains. When it comes to managing your firewall Yeo shows us how you can maintain your firewall.
Some risks cannot be mitigated by the traditional corporate firewall. Yeo will show you what risks you can protect yourself from if you use a personal firewall in the workplace. In addition to the things the home user has to take into consideration, a corporate user has to think about centralized management of the product. The example security policy in this case is more elaborate.
What do you do when something goes wrong on your system? You check your logs to see what’s happening “behind the scenes”. The author illustrates why it’s important to use and review your logs. As a minimum, logs can be used to deal with configuration problems when your firewall is not behaving as expected. You can have minimal logging or log every packet. However, logging just unusual traffic is probably the best solution if you don’t want to have your disk space eaten up. Ok, so you’re convinced that logging is a good idea so Yeo moves on to teach you how to read what you’ve logged. There are five main ways you can use your logs to:
- identify configuration errors
- identify scanning activity
- identify attacks
- monitor outbound traffic
- respond to reports of attacks from your computer
The author also notes that if you’ve turned off all services on your host, you patch and block all incoming traffic, you could turn off logging. In any case, at least a minimal amount of logging is a wise thing to have.
The following chapter provides an overview of configuration options as well as a distinction between legitimate and illegitimate traffic. To achieve this, the author writes more about services that run on various ports. As soon as you choose the firewall you think is right for you, there are various things you have to take into consideration. What you learn here is how to choose the right level of protection for yourself. As regards traffic blocking, Yeo gives an overview on what do do and how to do it.
Most personal firewalls come with preconfigured settings to get the end user up and running in no time. The author gives a few pointers that will make you stay protected as time passes, vulnerabilities are uncovered, and you become interested in advanced configuration. The author discusses stealthing, passive fingerprinting and the LaBrea program. LaBrea can be used to mislead attackers and slow them down.
Troubleshooting is what comes next. The author addresses several problems you can encounter as well as some common areas that are regularly broken when using firewall. If you want to use services such as Kazaa and NetMeeting you’ll have to configure your firewall to permit them. The author emphasizes that you have to be careful also when uninstalling a personal firewall – review the instructions provided by the vendor.
Perhaps the most interesting part of the book to an average user will be the first appendix that contains a comparison of several firewalls. This comparison of both hardware and software firewalls will certainly help you choose the right firewall.
My 2 cents
As I see it, this is a very good publication intended for all of you that want to learn more specifically about personal firewalls. The people that will benefit mostly from the material presented in this book are the novice users. The book is written clearly and is very easy to follow, a book that could find its place in many introductory courses for computer security. | <urn:uuid:5a767304-b5cf-4d03-9da3-39af9dddc749> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2003/03/13/personal-firewalls-for-administrators-and-remote-users/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00256-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946353 | 1,375 | 3.484375 | 3 |
The future of the world is urban. By 2030, 60% of the world’s population will live in cities, which will create 70% of the world’s GDP, according to the Pentagon. Many of these will be megacities with populations of ten million or more.
Managing such behemoth cities with traditional infrastructure will be impossible. So-called smart cities, or at least smarter cities, will be a necessity rather than a luxury, for megacities and for smaller cities alike as they wrestle with their own sets of problems.
Creating smart cities, however, can be exceptionally challenging. Here are some guidelines:
Create a Clear Road Map
Because comprehensive smart cities initiatives can take a decade or more to implement, they demand a solid game plan. Cities should create a road map that factors in feedback from citizens and officials from city departments, recommends the Smart Cities Council. The document should not live in isolation; its proposals must support the city’s larger goals.
The roadmap should include an assessment of the city’s current technological progress and a vision for the future. To make that vision a reality requires detailed project plans, milestones, metrics, and funding considerations.
The plan should be bold in its vision, but it should also be clear in identifying near-term "low-hanging-fruit" projects with a quick return on investment. Early wins such as installing smart lighting or a network of connected surveillance cameras can pave the way for broader initiatives.
Another central consideration for IoT projects is network infrastructure. Many smart city projects depend on high-speed internet connectivity.
Prioritize Citizens and Business over Technology
Technology may be at the heart of smart cities projects, but officials launching them must clearly explain how they improve the quality of life for residents and why they make financial sense in the long run. Projects that fail to do this invite opposition—from rival politicians or citizens.
While many city officials clearly understand the potential benefits of a smart city project, it can be difficult to convince others that it is a worthy investment. “This is a very real risk. City officials advocating for smart cities projects risk being labeled ‘tree huggers’ or having unrealistic expectations,” says Hal Good, a procurement expert and IBM futurist.
“You have to be smart about building a business case,” says Vinay Singh, a senior advisor at U.S. Department of State and coauthor of a recently published smart cities guide. Vendors pitching smart cities projects often offer to help cities establish a business case. “But cities should ask the vendor to look holistically as well—across agencies and bureaus and talk about how lifecycle costs span across departments and the scalability of the project. These are large acquisitions we are talking about,” Singh says.
A proven strategy is to begin a smart city initiative with technologies that provide a quick payback. “Smart street lights are an example of this. They are low hanging fruit,” Singh says. “It is a small win that shows you can get traction with people to show that these things work.”
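To make the "quick payback" argument concrete, a simple payback-period calculation is often enough. The short Python sketch below is illustrative only; the per-pole retrofit cost and annual savings are hypothetical placeholders, not figures from the article or from any vendor.

# Minimal payback-period sketch for a smart street light retrofit.
# All dollar figures are hypothetical placeholders for illustration only.
def simple_payback_years(upfront_cost, annual_savings):
    """Years until cumulative savings cover the upfront investment."""
    if annual_savings <= 0:
        raise ValueError("annual savings must be positive")
    return upfront_cost / annual_savings

# Assume a $400 LED-plus-controls retrofit per pole that saves $95 a year
# in energy and maintenance (made-up numbers).
print(round(simple_payback_years(400, 95), 1))  # -> 4.2 years

Even a rough calculation like this, repeated across thousands of poles, helps frame the low-hanging-fruit conversation Singh describes.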
Get the Public Involved
Even well-intentioned city officials can run into trouble when some citizens assume that the city, or the vendor in question, had nefarious intent in installing smart city technology.
One common complaint is that smart cities projects rob citizens of their privacy. While this gripe is common for cities installing networks of connected cameras, it can also sabotage other types of projects. Some residents might assume, for instance, that smart meters or sensors that monitor whether trash cans are full are there to spy on them.
“To those of us working in the smart cities realm, we all recognize the enormous potential to do good with big data and sensors, but how do we tap into that potential to benefit the most people?” asks Sarah Isabel Moe, a senior consultant with DNV GL's sustainable buildings and communities team.
While there will always be conspiracy theorists, city planners can sidestep many of these problems through citizen engagement.
First of all, engaging citizens can improve the project, enabling it to meet community needs better, Moe says. “If community members are involved in the project planning, their specific concerns can be addressed and the project shaped to best meet the needs of constituents. Community engagement can yield surprising findings related to specific issues of concern,” she says.
Working with the public also enables city planners to communicate the project’s benefits. “The city can share the overall vision and context for specific smart city projects, to help community members feel more connected and engaged across the city,” Moe says. “Early engagement can reduce resistance and complaints ‘after-the-fact’ by allowing the community to voice concerns and for the city to respond before finalizing investments.”
Smart cities technologies like smart trash cans and smart lights can also offer clear financial incentives and other benefits with a quick ROI. The San Francisco–based startup Compology, for instance, says that its trash sensors can slash waste collection expenses by up to 40 percent by notifying waste removal trucks when cans are ready to be emptied.
Smart street lights also can have a dramatic return on investment.
Have a Smart City Data Plan
Cities have long used sensors to help manage traffic. But the scale of sensors made possible by the Internet of Things is unprecedented. “Much of the buzz around smart city technologies is related to ubiquitous sensors, meaning there is a steep learning curve for cities to manage the way they collect, analyze, and communicate all the data coming in the new smart city future,” says DNV GL’s Sarah Isabel Moe. “How to store the data is another hot topic. Rules around data archives and policies around data sharing need to be upgraded if they are going to accommodate new data analytics.”
Ubiquitous sensors enable city planners to use systems thinking and to focus on holistic measures of quality of life rather than isolated metrics. Cities can move away from one-off solutions and toward a world where they can make better decisions based on sound data, where city departments integrate their capital projects through smart and sustainable city frameworks, Moe says. “‘Learn to love Big Brother,’ was a common joke at the Smart Cities Week conference recently held in Washington D.C.,” Moe says.
The public is also not used to seeing sensors spread throughout the city, which again calls for community engagement to help explain their benefits. “The sensors are here, they are everywhere, and, it is important to place them intentionally with community buy-in and to communicate the data collected by them in a meaningful and empowering way,” Moe says.
Get Buy-In from City Employees
City officials spearheading smart city initiatives also must convince their colleagues why they are important. “You have to make a strong case to the folks that make things churn why this is important to them,” says Vinay Singh. This requires explaining how the project will make their jobs better and how it impacts them. “A common response for many city workers is to say: ‘Well, I am not used to doing things that way,’” Singh says. Getting around this resistance requires leadership.
There are compelling reasons for smart cities technology. At Smart Cities Week, David Graham, deputy chief operating officer of San Diego, pointed out four: aging infrastructure, climate change, security, and the siloed nature of government. All of these can convince both city officials and citizens to support smart city projects.
Smart Cities Demand New Procurement Guidelines
Many cities around the world need to streamline their procurement process for smart cities projects, says Vinay Singh. It is a different animal than procuring for traditional cities.
Hal Good agrees. Many of the procedures that worked for cities decades ago don’t work for smart cities projects. For instance, many cities only feel comfortable buying goods or services when they put them out to bid in a traditional 'low bid' process. “That typically locks out anybody who has something new and innovative,” Good says.
Other cities operate under antiquated rules where they can only purchase technology that has been on the market for three or more years, which can cause them to ignore all promising new technology in the process.
Some cities focus so much on the cost of the goods they buy that they don't weigh the other types of benefits a purchase might have. They buy 'lowest cost' as opposed to 'best value.' It might cost more, for instance, to time traffic lights for emergency vehicles, but it could also save lives.
Another pitfall is to create a fixed budget for smart city projects that doesn’t account for inflation in the delay between budget formation and the actual purchase. “The key is not to embed hard, static numbers in your city code, which may require frequent action by elected officials to change. Put an index in so that as inflation goes up, award thresholds and budgeted authorizations follow that index,” Good says.
Breaking Down the Silos
In the industrial space, many companies implementing IoT initiatives struggle to extend them throughout each division of the enterprise. This struggle is perhaps even more challenging for cities, which have numerous departments that act largely autonomously. “If you look at the cities that are succeeding, they are doing a good job at collaboration,” says Vinay Singh. “Steve Adler, the mayor of Austin, TX, has a weekly team meeting that brings together the various agencies in the city government to discuss whatever is going on and the impact across departments,” Singh says.
The Right Political Dynamics
Smart cities projects and smart infrastructure are long-term efforts that span political cycles. “Some of the successful ones we have seen are in governments that are either socialist or authoritarian like in China,” Singh says. “This approach has worked in democracies like Australia, where that country has created an independent group for their infrastructure development that isn’t constrained by political cycles.” Having a quasi-governmental body allows the city to manage smart cities projects with a long-term view.
This goes back to the approach in Austin, TX, where city agencies have regular meetings and experience collaborating. Singh says: “You could make the mandate that you want to make the various city agencies work together because a smart city legacy will hold in the future regardless of who is at the top or not.” | <urn:uuid:54d40c45-8862-435e-9375-c8a2c7440107> | CC-MAIN-2017-04 | http://www.ioti.com/smart-cities/guide-launching-smart-city-projects | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00072-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953382 | 2,170 | 3.125 | 3 |
On Friday and Saturday, DARPA, the federal agency responsible for developing new technologies for the military, is holding the DARPA Robotics Challenge near Miami, where 17 teams are competing by having their autonomous robots perform eight different tasks.
The goal is to have these robots enter a disaster area -- such as the Fukushima Daiichi nuclear zone -- or other hostile environments to investigate, make repairs or even rescue victims. To do that, the robots must climb stairs and navigate through and across rubble, to name a few of the difficult tasks they're required to perform.
Several competitions are occurring simultaneously and can be viewed at IEEE Spectrum.
In one year, the finalists will compete again to determine who will win the $2 million prize. | <urn:uuid:7befda6c-c0f4-4738-b4ed-4422fd1478bf> | CC-MAIN-2017-04 | http://www.govtech.com/federal/DARPA-Holds-Robotics-Challenge.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00008-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949792 | 157 | 2.65625 | 3 |
Appendix A Mathematical concepts
The purpose of this Appendix is to give a brief description of some of the mathematical concepts mentioned in this document. For a more thorough treatment of modular arithmetic and basic number theory, consider any undergraduate textbook in elementary algebra. For more information about groups, rings, and fields, we recommend [Fra98]. For more details on analysis and the theory of limits, consult any undergraduate textbook in analysis. A good introduction to complexity theory is given in [GJ79]. | <urn:uuid:1331601b-b09a-47e5-b3f1-bee2f2ed6b4b> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/appendix-a-mathematical-concepts.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00037-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.843825 | 99 | 2.96875 | 3 |
Four ways to protect your data
Physically losing your smartphone, laptop, iPad, or other mobile device is never fun, but what about the information on those devices? What can you do to protect your information and to get back to “normal” again? Here are some tips that may help you protect your digital identity, data, and files.
You are a target. Hacking, distributed denial-of-service (DDoS) attacks and other threats are now a routine part of the Internet. It seems as if a new type of attack or data breach is found every week. In order to help protect yourself, be sure to back up your data. There are a number of services that automatically back up your data. If you prefer a solution with a one-time cost, an external standalone hard drive may be the best option. The storage capacity of hard drives keeps increasing while costs keep decreasing.
Entry points for malicious attacks are everywhere. Gaming systems, apps, and many games on mobile devices are utilizing “always on” Internet connections. This constant connection to the Internet creates a potential access point to your personal data. Anti-virus software, firewalls, passwords, and data encryption should be used whenever offered on any device.
You get what you pay for. Make sure that the security software you purchase includes all applicable security options. Review the features and functions of your anti-virus software. Make sure it keeps you safe from viruses, worms, malware, Trojans, risky e-mails, and problematic websites.
Encryption is the key. Many people encrypt their laptops and desktops but forget a key area of vulnerability – thumb drives. Thumb drives, often called USB sticks or flash drives, should be encrypted so that the data on them cannot be accessed if they are lost. These small devices are easily lost and easily stolen. | <urn:uuid:f4b0847b-e7e8-46aa-b88f-db9b3e05bede> | CC-MAIN-2017-04 | http://news.centurylink.com/blogs/security/four-ways-to-protect-your-data | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00523-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924697 | 380 | 2.8125 | 3 |
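To make the idea concrete, here is a minimal, illustrative Python sketch of file-level encryption using the third-party cryptography package; it is an example added for this article, not a product recommendation, and for an entire thumb drive purpose-built tools such as BitLocker To Go or VeraCrypt are the more practical route. The data in the example is made up:

from cryptography.fernet import Fernet   # third-party package: pip install cryptography

key = Fernet.generate_key()              # keep this key somewhere safer than the thumb drive itself
cipher = Fernet(key)

secret = b"2016 tax records - not for whoever finds this thumb drive"
token = cipher.encrypt(secret)           # this is what would be written to the drive
print(token[:32])                        # opaque bytes to anyone without the key

print(cipher.decrypt(token))             # with the key, the original data comes back intact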
This sculpture represents the HIV virus, which is one of several types of viruses that have an icosahedral shape. The materials used in this sculpture include polycarbonate for the membrane; inside the membrane, wires from CAT5 cable represent DNA strands. The 72 nodules that cover the surface are similar to the toroidal inductors found in PC power supplies.
Computer Weekly has covered the devastating human impact of toxic technology waste, but sculptor Forrest McCluer has found a way of using computer parts creatively.
The self-replication characteristic of biological viruses led security researcher Fred Cohen to coin the term "computer virus" but McCluer brings a new twist to the term by constructing 3D representations of biological viruses using old computer parts.
McCluer's project to deconstruct 30 discarded PCs and create a variety of sculptures from all their parts is documented on his website. | <urn:uuid:b9c290f3-0f38-4803-8dd0-820f248e9659> | CC-MAIN-2017-04 | http://www.computerweekly.com/photostory/2240109083/Photos-Virus-sculptures-from-computer-waste/1/Wilco | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00523-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947697 | 179 | 2.65625 | 3 |
3.1.4 What are strong primes and are they necessary for the RSA system?
In the literature pertaining to the RSA algorithm, it has often been suggested that in choosing a key pair, one should use so-called "strong" primes p and q to generate the modulus n. Strong primes have certain properties that make the product n hard to factor by specific factoring methods; such properties have included, for example, the existence of a large prime factor of p-1 and a large prime factor of p+1. The reason for these concerns is that some factoring methods - for instance, the Pollard p-1 and p+1 methods (see Question 2.3.4) - are especially suited to primes p such that p-1 or p+1 has only small factors; strong primes are resistant to these attacks. Strong primes are required in, for example, ANSI X9.31 (see Question 5.3.1).
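To see why such primes were considered risky, here is a minimal Python sketch of the Pollard p-1 idea (an illustration added to this answer, not part of the original text); it tends to split n quickly whenever some prime factor p of n has p-1 built only from small primes, which is exactly the weakness that strong primes were meant to rule out:

from math import gcd

def pollard_p_minus_1(n, bound=100000):
    """Try to factor n; succeeds quickly when some prime p dividing n has a smooth p-1."""
    a = 2
    for j in range(2, bound):
        a = pow(a, j, n)          # after this step, a = 2^(j!) mod n
        d = gcd(a - 1, n)
        if 1 < d < n:
            return d              # non-trivial factor found
    return None                   # give up; raise the bound or switch methods

# 101 and 103 are "weak" in this sense: p-1 = 100 = 2^2 * 5^2 and q-1 = 102 = 2 * 3 * 17
# contain only small primes, so their product is split almost immediately.
print(pollard_p_minus_1(101 * 103))   # prints 101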
However, advances in factoring over the last ten years appear to have obviated the advantage of strong primes; the elliptic curve factoring algorithm is one such advance. The new factoring methods have as good a chance of success on strong primes as on "weak" primes. Therefore, choosing traditional "strong" primes alone does not significantly increase security. Choosing large enough primes is what matters. However, there is no danger in using strong, large primes, though it may take slightly longer to generate a strong prime than an arbitrary prime.
It is possible that new factoring algorithms may be developed in the future which once again target primes with certain properties. If this happens, choosing strong primes may once again help to increase security. | <urn:uuid:de857588-fab1-4bd2-8e23-a5b6a52bbcff> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/strong-primes.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00157-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941621 | 358 | 3.984375 | 4 |
If you use an Apple iPhone, iPad or other iDevice, now would be an excellent time to ensure that the machine is running the latest version of Apple’s mobile operating system — version 9.3.1. Failing to do so could expose your devices to automated threats capable of rendering them unresponsive and perhaps forever useless.
On Feb. 11, 2016, researcher Zach Straley posted a Youtube video exposing his startling and bizarrely simple discovery: Manually setting the date of your iPhone or iPad all the way back to January 1, 1970 will permanently brick the device (don’t try this at home, or against frenemies!).
Now that Apple has patched the flaw that Straley exploited with his fingers, researchers say they’ve proven how easy it would be to automate the attack over a network, so that potential victims would need only to wander within range of a hostile wireless network to have their pricey Apple devices turned into useless bricks.
Not long after Straley’s video began pulling in millions of views, security researchers Patrick Kelley and Matt Harrigan wondered: Could they automate the exploitation of this oddly severe and destructive date bug? The researchers discovered that indeed they could, armed with only $120 of electronics (not counting the cost of the bricked iDevices), a basic understanding of networking, and a familiarity with the way Apple devices connect to wireless networks.
Apple products like the iPad (and virtually all mass-market wireless devices) are designed to automatically connect to wireless networks they have seen before. They do this with a relatively weak level of authentication: If you connect to a network named “Hotspot” once, going forward your device may automatically connect to any open network that also happens to be called “Hotspot.”
For example, to use Starbucks’ free Wi-Fi service, you’ll have to connect to a network called “attwifi”. But once you’ve done that, you won’t ever have to manually connect to a network called “attwifi” again. The next time you visit a Starbucks, just pull out your iPad and the device automagically connects.
From an attacker’s perspective, this is a golden opportunity. Why? He only needs to advertise a fake open network called “attwifi” at a spot where large numbers of computer users are known to congregate. Using specialized hardware to amplify his Wi-Fi signal, he can force many users to connect to his (evil) “attwifi” hotspot. From there, he can attempt to inspect, modify or redirect any network traffic for any iPads or other devices that unwittingly connect to his evil network.
TIME TO DIE
And this is exactly what Kelley and Harrigan say they have done in real-life tests. They realized that iPads and other iDevices constantly check various “network time protocol” (NTP) servers around the globe to sync their internal date and time clocks.
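For background on that mechanism, the Python sketch below (my own illustration, not the researchers' code) shows how a client reads the time from an NTP server; NTP counts seconds from 1900 and clients convert the value to the familiar Unix epoch of January 1, 1970. A client that blindly trusts whichever server answers on the local network will accept whatever timestamp it is handed:

import socket
import struct
from datetime import datetime, timezone

NTP_TO_UNIX = 2_208_988_800   # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def ntp_time(server="pool.ntp.org"):
    packet = b"\x1b" + 47 * b"\x00"                  # minimal NTPv3 client request
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    transmit_secs = struct.unpack("!12I", data)[10]  # server's transmit timestamp, seconds field
    return datetime.fromtimestamp(transmit_secs - NTP_TO_UNIX, tz=timezone.utc)

print(ntp_time())   # whatever the server claims the current UTC time is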
The researchers said they discovered they could build a hostile Wi-Fi network that would force Apple devices to download time and date updates from their own (evil) NTP time server: And to set their internal clocks to one infernal date and time in particular: January 1, 1970.
The result? The iPads that were brought within range of the test (evil) network rebooted, and began to slowly self-destruct. It’s not clear why they do this, but here’s one possible explanation: Most applications on an iPad are configured to use security certificates that encrypt data transmitted to and from the user’s device. Those encryption certificates stop working correctly if the system time and date on the user’s mobile is set to a year that predates the certificate’s issuance.
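A minimal sketch of why that matters (illustrative only; on a real device these checks happen deep inside the TLS stack): a certificate is trusted only while the device clock falls inside its validity window, so a clock reset to 1970 fails against any certificate issued decades later. The dates below are hypothetical:

from datetime import datetime, timezone

def cert_time_ok(not_before, not_after, device_clock):
    """A certificate only validates while the clock sits inside its validity window."""
    return not_before <= device_clock <= not_after

not_before = datetime(2015, 6, 1, tzinfo=timezone.utc)   # hypothetical issuance date
not_after = datetime(2018, 6, 1, tzinfo=timezone.utc)    # hypothetical expiry date

print(cert_time_ok(not_before, not_after, datetime(2016, 4, 5, tzinfo=timezone.utc)))   # True: sane clock
print(cert_time_ok(not_before, not_after, datetime(1970, 1, 1, tzinfo=timezone.utc)))   # False: every handshake fails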
Harrigan and Kelley said this apparently creates havoc with most of the applications built into the iPad and iPhone, and that the ensuing bedlam as applications on the device compete for resources quickly overwhelms the iPad’s computer processing power. So much so that within minutes, they found their test iPad had reached 130 degrees Fahrenheit (54 Celsius), as the date and clock settings on the affected devices inexplicably and eerily began counting backwards.
Harrigan, president and CEO of San Diego-based security firm PacketSled, described the meltdown thusly:
“One thing we noticed was when we set the date on the iPad to 1970, the iPad display clock started counting backwards. While we were plugging in the second test iPad 15 minutes later, the first iPad said it was Dec. 15, 1968. I looked at Patrick and was like, ‘Did you mess with that thing?’ He hadn’t. It finally stopped at 1965, and by that time [the iPad] was about the temperature I like my steak served at.”
Kelley, a senior penetration tester with CriticalAssets.com, said he and Harrigan worked with Apple to coordinate the release of their findings to ensure doing so didn’t predate Apple’s issuance of a fix for this vulnerability. The flaw is present in all Apple devices running anything lower than iOS 9.3.1.
Apple did not respond to requests for comment. But an email shared by the researchers apparently sent by Apple’s product security team suggests the company’s researchers were unable to force an affected device to heat to more than 45.8 degrees Celsius (~114 degrees Fahrenheit). The note read:
“1) We confirmed that iOS 9.3 addresses the issue that left a device unresponsive when the date is set to 1/1/1970.
2) A device affected by this issue can be restored to iOS 9.3 or later. iTunes restored the iPad Air you provided to us for inspection.
3) By examining the device, we determined that the battery temperature did not exceed 45.8 degrees centigrade.”
According to Harrigan and Kelley, the hardware needed to execute this attack is little more than a common Raspberry Pi device with some custom software.
“By spoofing time.apple.com, we were able to roll back the time and have it hand out to all Apple clients on the network,” the researchers wrote in a paper shared with KrebsOnSecurity. “All test devices took the update without question and rolled back to 1970.”
The researchers continued: “An interesting side effect was that this caused almost all web browsing traffic to cease working due to time mismatch. Typically, this would prompt a typical user to reboot their device. So, we did that. At this point, we could confirm that the reboot caused all iPads in test to degrade gradually, beginning with the inability to unlock, and ultimately ending with the device overheating and not booting at all. Apple has confirmed this vulnerability to be present in 64 bit devices that are running any version less than 9.3.1.”
Harrigan and Kelley say exploiting this bug on an Apple iPhone device is slightly trickier because iPhones get their network time updates via GSM, the communications standard the devices use to receive and transmit cell phone signals. But they said it may be possible to poison the date and time on iPhones using updates fed to the devices via GSM.
They pointed to research by Brandon Creighton, a research architect at software testing firm Veracode who is perhaps best known for setting up the NinjaTel GSM mobile network at the massive DefCon security conference in 2012. Creighton’s network relied on a technology called OpenBTS — a software based GSM access point. Harrigan and Kelley say an attacker could set up his own mobile (evil) network and push date and time updates to any phones that ping the evil tower.
“It is completely plausible that this vulnerability is exploitable over GSM using OpenBTS or OpenBSC to set the time,” Kelley said.
Creighton agreed, saying that his own experience testing and running the NinjaTel network shows that it’s theoretically possible, although he allows that he’s never tried it.
“Just from my experimentation, theoretically from a protocol level you can do it,” Creighton wrote in a note to KrebsOnSecurity. “But there are lots of factors (the carrier; the parameters on the SIM card; the phone’s locked status; the kind of phone; the baseband version; previously joined networks; neighboring towers; RF signal strength; and more). If you’re just trying to cause general chaos, you don’t need to work very hard. But if, say, you were trying to target an individual device, it would require an additional amount of prep time/recon.”
Whether or not this attack could be used to remotely ruin iPhones or turn iPads into expensive skillets, it seems clear that failing to update to the latest version of Apple iOS is a less-than-stellar idea. iPad users who have not updated their OS need to be extremely cautious with respect to joining networks that they don’t know or trust.
iOS and Mac OS X have a feature that allows users to prevent the devices from automatically joining wireless networks. Enabling this “ask to join networks” feature blocks Apple devices from automatically joining networks they have never seen before — but the side effect is that the device may frequently toss up prompts asking if you wish to join any one of several available wireless networks (this can be disabled by unselecting “Ask to Join Networks”). But enabling it doesn’t prevent the device from connecting to, say, “attwifi” if it has previously connected to a network of that name.
The researchers have posted a video on Youtube that explains their work in greater detail.
Update, 1:08 p.m. ET: Added link to video and clarified how Apple’s “ask to join networks” feature works. | <urn:uuid:3e5df576-1142-4fc9-a57d-db93c249b1f3> | CC-MAIN-2017-04 | https://krebsonsecurity.com/2016/04/new-threat-can-auto-brick-apple-devices/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00093-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947707 | 2,086 | 2.546875 | 3 |
Most government leaders are restlessly on the search for new ideas, for innovation, for whatever is next. It may be their good luck that this is shaping up to be a Golden Age for engaging citizens, customers and employees. For evidence of this, one need look no further than the rapidly expanding use of "crowdsourcing." This social-media tool is going mainstream in many communities as a source of innovative ideas.
The growing interest in engaging the crowd to identify or develop innovative solutions to public problems was inspired by wildly successful efforts in the commercial world to design innovative consumer products or solve complex scientific problems, ranging from custom-designed T-shirts to mapping genetic DNA strands.
In the government sphere, crowdsourcing is an approach that uses online tools to break a problem down into manageable tasks and engages people to voluntarily help produce those results, according to Daren C. Brabham, a scholar at the University of Southern California who is following this phenomenon. In a recent report for the IBM Center for the Business of Government, Brabham says that an important distinction between crowdsourcing and other forms of online participation is that crowdsourcing "entails a mix of top-down, traditional, hierarchical process and a bottom-up, open process involving an online community."
Crowdsourcing in the public sector can be done within government, among employees as a way to surface ideas -- such as the New York City government's "Simplicity" initiative -- or it can be done by nonprofit groups in ways that influence government operations. For example, a transportation advocacy group in New York City has created a site where citizens can report "near miss" accidents, which are then mapped to determine patterns. The idea is that, while the city government already maps accidents that have happened, hazardous traffic zones can be detected and resolved faster by mapping near-misses without waiting for a large number of actual accidents.
Brabham offers a strategic view of crowdsourcing and when it is useful for addressing public problems. His report also identifies four specific approaches and describes which is most useful for a given category of problem.
Brabham notes that crowdsourcing is not just a collection of technology tools but rather is a strategic process, and he observes that it has "enjoyed quite an enthusiastic embrace" by governments even though the term did not exist seven years ago. "In the spirit of participatory democracy," he writes, "this is no doubt a good sign."
This article originally appeared on GOVERNING.com. | <urn:uuid:a6150c7e-48f3-40be-96a1-8d4f172b589b> | CC-MAIN-2017-04 | http://www.govtech.com/internet/Beyond-the-Suggestion-Box-Governments-Crowdsourcing-Revolution-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00487-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946298 | 509 | 2.84375 | 3 |
Is Surfing the Internet Altering your Brain? By Reuters | Posted 2008-10-27
A neuroscientist at UCLA in California who specializes in brain function has found that Internet searching and text messaging have made brains more adept at filtering information and making quick decisions. But while technology can accelerate learning and boost creativity, it can have drawbacks: it can create Internet addicts whose only friends are virtual, and it has sparked a dramatic rise in Attention Deficit Disorder diagnoses.
CANBERRA (Reuters) - The Internet is not just changing the way people live but altering the way our brains work, with a neuroscientist arguing this is an evolutionary change that will put the tech-savvy at the top of the new social order.
Gary Small, a neuroscientist at UCLA in California who specializes in brain function, has found through studies that Internet searching and text messaging have made brains more adept at filtering information and making snap decisions.
But while technology can accelerate learning and boost creativity, it can have drawbacks: it can create Internet addicts whose only friends are virtual, and it has sparked a dramatic rise in Attention Deficit Disorder diagnoses.
Small, however, argues that the people who will come out on top in the next generation will be those with a mixture of technological and social skills.
"We're seeing an evolutionary change. The people in the next generation who are really going to have the edge are the ones who master the technological skills and also face-to-face skills," Small told Reuters in a telephone interview.
"They will know when the best response to an email or Instant Message is to talk rather than sit and continue to email."
In his newly released fourth book "iBrain: Surviving the Technological Alteration of the Modern Mind," Small looks at how technology has altered the way young minds develop, function and interpret information.
Small, the director of the Memory & Aging Research Center at the Semel Institute for Neuroscience & Human Behavior and the Center on Aging at UCLA, said the brain was very sensitive to the changes in the environment such as those brought by technology.
He said a study of 24 adults as they used the Web found that experienced Internet users showed double the activity in areas of the brain that control decision-making and complex reasoning as Internet beginners.
"The brain is very specialized in its circuitry and if you repeat mental tasks over and over it will strengthen certain neural circuits and ignore others," said Small.
"We are changing the environment. The average young person now spends nine hours a day exposing their brain to technology. Evolution is an advancement from moment to moment and what we are seeing is technology affecting our evolution."
Small said this multi-tasking could cause problems.
He said the tech-savvy generation, whom he calls "digital natives," are always scanning for the next bit of new information which can create stress and even damage neural networks.
"There is also the big problem of neglecting human contact skills and losing the ability to read emotional expressions and body language," he said.
"But you can take steps to address this. It means taking time to cut back on technology, like having a family dinner, to find a balance. It is important to understand how technology is affecting our lives and our brains and take control of it."
(Editing by Paul Casciato)
© Thomson Reuters 2008 All rights reserved | <urn:uuid:57c3ac63-4011-4f25-a48c-eea4b4a32c08> | CC-MAIN-2017-04 | http://www.baselinemag.com/mobility/Is-Surfing-the-Internet-Altering-your-Brain | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00211-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94923 | 683 | 2.703125 | 3 |
To better understand how tornadoes form, researchers have been meshing data on a number of factors that can have an impact on tornado formation. More specifically, they are trying to pinpoint how factors such as wind direction and height can cause the updraft of storm winds to begin to spin, which is the first warning sign that a tornado could form.
At the heart of this research is Amy McGovern from the University of Oklahoma. McGovern has been creating models with supercomputers that are crunching vast amounts of data to understand how a number of potential storm variables interact with one another to form twisters.
At the beginning of the research endeavor, McGovern and her team used observational data from a two-decade-old storm as the basis for a few hundred storm simulations. The problem was that these simply did not generate enough data. The team realized they needed a supercomputer and harnessed Kraken, a TeraGrid machine, to revolutionize the research. Under this paradigm, the team generates almost 50 terabytes for 50 simulations, then turns the data over to another supercomputer, Nautilus at the University of Tennessee, to sift through it for patterns and meaning.
As McGovern said in an interview, ”In the longer term, we would like to bring our findings and methods to the weather forecasters who actually issue the tornado warnings. We would like to develop an interface that provides them with immediate and useful information, which they can use to improve their tornado warnings.” | <urn:uuid:5f2f02da-2de5-4eac-88c6-92edfc55da7d> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/07/06/kraken_nautilis_mine_for_tornado_clues/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00331-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938891 | 307 | 3.859375 | 4 |
BARCELONA -- Is your memory as sharp as it used to be?
If you sometimes need a nudge to remember things, that nudge might someday come from a wearable device the size of a brooch or a Bluetooth earpiece that records your daily activities and conversations. If you can't remember what happened on a particular date a few weeks or even several years earlier, the device would be able to give you a digital reminder of an important meeting or a particularly beautiful sunset.
Intel evangelist Manny Vara, in an interview Monday, said wearable computers could be two to five years away. Unlike some early models that were too bulky, the devices Vara envisions would be small, light and convenient.
Intel is working on creating the tiny microprocessors that would power these devices.
"These will be wearable computers that are very small and unobtrusive," Vara told Computerworld during Intel's European Research and Innovation Conference. "Imagine wearing something that would tell you, before you shake someone's hand, that she's Mary and it will tell you where you met her last. I would like that. That would help me."
Vara said he has seen early versions of wearable computers that were the size of the palm of a person's hand and had a thick cable leading to a large memory device that had to be lugged around.
"The concept worked, but you'd never wear it because it would be ridiculous," he added. The devices that he is talking about would be much smaller and lighter.
In an effort to help make such devices a reality, Intel is working to develop computer chips that would be smaller than the company's low-voltage Atom processors, which power mobile phones and tablets. According to Vara, the chips could be less than half the size, or perhaps less than a quarter of the size, of an Atom chip.
"With Atom, we're talking about 1 or 2 or 3 watts. With these, it would be in the milliwatt range," said Vara. "These are being explored."
As a way to minimize the size and power requirements of these next-generation chips, Intel probably will forgo the instruction set normally found in today's microprocessors, said Vara.
"Potentially, it could have graphics built in," he said. "Right now, it's early in the game so we're looking at what makes sense to put in there. You'd want to have some memory built in and maybe some graphics, because you'd want to have one chip ... maybe two chips, but size-wise you want to keep it small."
Other challenges are memory and batteries.
A chip in a wearable device wouldn't have enough memory to record what you do all day. Therefore, it would probably have to feed any recorded information to a flash memory device that the user would keep in his pocket, for example. The device would also need a high-capacity battery.
Last summer, Google made a splash with its Google Glass computerized eyeglasses at its Google I/O developers conference. The Android-powered eyeglasses are equipped with a processor, memory, a camera, GPS sensors and a display screen.
According to industry analysts, in the not-so-distant future, computers will be worn, possibly incorporated into a pair of glasses or a piece of jewelry, such as a bracelet or a pendant.
Zeus Kerravala, an analyst at ZK Research, said trying to get in on the wearable computer market before it takes off is a smart move for Intel.
The company has struggled in recent months as the PC market, where it collects much of its revenue, has been weakened by a sluggish economy and by competition from the burgeoning tablet business. Getting in on a new wave of technology would be a step in the right direction, Kerravala said.
"PCs have virtually no growth right now," Kerravala said. "The only way Intel can grow is to find other uses for its chips.... Think of all that is possible here."
Vara said the advent of wearable computers that record people's activities and conversations would likely necessitate the development of privacy and security technologies to protect users of such devices -- and the people they encounter.
For instance, individuals who prefer that their activities remain private might be interested in carrying small jamming devices that could stop other people's wearable computers from recording.
"I think one of the things we really need to do is be very conscientious about the fact that people like their privacy," Vara said. "People would like this, and they'd be eager to use it, but some people aren't going to want to be videotaped.... That's a really important thing to figure out before letting loose some of these things in the market."
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin or on Google+, or subscribe to Sharon's RSS feed . Her email address is email@example.com. | <urn:uuid:b1bb7413-0565-4bf1-ae97-afd795f060f2> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2492728/emerging-technology/intel-strives-to-develop-tiny-chips-to-run-wearable-computers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00239-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.980459 | 1,046 | 2.65625 | 3 |
Forensic computer science deals with the preservation and processing of computer evidence. Forensics is basically applying science to the evidentiary process. In the case of computer evidence, the science is computer science and the evidence is data stored in any number of forms on a variety of computer storage media. Some have likened computer forensics to the autopsy of a computer. Precision and accuracy are essential in the processing of computer evidence, and this cannot be achieved without using the right set of tools. To do otherwise would be like trying to do brain surgery with a pocket knife.
Law enforcement agencies are woefully underfunded. This is especially true regarding computer evidence and related technology issues. It is tough enough for law enforcement management to pay salaries and keep a fleet of vehicles running in these tight budgetary times. However, computer evidence is here to stay, and every law enforcement agency will have to deal with computer evidence issues in time. The good news is that the price of computer technology is at an all-time low. An adequate setup that meets the minimum requirements for most small law enforcement departments can be purchased for under $6,000. This includes both computer hardware and software.
It is important to preserve computer evidence and safely transport the seized computer to a secure location so a bit stream backup can be made of all computer media. This is required before processing the evidence to avoid triggering potential destructive processes that may have been planted in the computer by the crooks. It also avoids the accidental overwrite of data stored in the form of erased files, in the Windows swap file and in file slack. To process computer evidence without making a bit stream backup of the "best evidence" is playing with fire. You are going to get burned badly at some point. The catch is that you must have the proper tools before the evidence can be backed up and processed.
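One way to demonstrate that a bit stream image has not changed between seizure, backup and analysis is to hash it and record the value; the Python sketch below is a modern illustration of that idea, not one of the DOS-era tools discussed in this article, and the file name is hypothetical:

import hashlib

def hash_image(path, algorithm="sha256", chunk_size=1024 * 1024):
    """Hash a disk image (or any large file) in chunks so it never has to fit in memory."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Record this value when the backup is made, then recompute it before analysis and at
# trial to show the working copy still matches the original evidence bit for bit.
print(hash_image("seized_drive.img"))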
The price of computer hard-disk drives has dropped substantially over the past year. As a result, forensic computer specialists are encountering large volumes of potential data stored on huge hard-disk drives. To put this in perspective, 10 years ago, a 20 megabyte hard-disk drive was considered standard. Today, it is not uncommon for a desktop computer to have multiple hard-disk drives with storage capacities exceeding 2 gigabytes (GB) per drive. For those unfamiliar with these terms, a 20 megabyte hard-disk drive has the capacity to store approximately 20 million characters of data. A 2GB hard-disk drive can store approximately 100 times that amount.
These small storage devices are not much bigger than a deck of cards, but they have the potential of storing the content of hundreds of thousands of printed pages. For these reasons, plan on spending some money on computer hard-disk drives and storage media.
Even after making a bit stream backup, processing should rarely be done on the seized computer. To do so could subject the seized computer to excessive wear and tear. Your worst nightmare might involve your expert testimony in court about how you came to break the subject computer. To avoid living this nightmare, always plan on restoring the bit stream backup made from the seized computer to a law enforcement computer. A lightning fast computer is normally not required. With the exception of some specialized automated fuzzy logic forensic tools, most forensic software tools operate quite nicely on lower-end Pentium-based computers or the equivalent, e.g. Pentium 133MHz to 150MHz. However, plenty of storage capacity is a requirement, and it is also a good idea to buy at least 64MB of Random Access Memory (RAM) to ensure that you can run and evaluate the software retrieved from the seized computer.
Flexibility is the name of the game when you create a computer system that will process computer evidence. You must prepare to deal with the seizure of a variety of computer systems and equipment configurations. As a result, it is wise to equip your processing computer with multiple floppy disk drives, a color SVGA monitor and plenty of external storage capacity to supplement the onboard hard disk drives. Opinions may vary, but I recommend the following hardware configuration as a minimum system for use in computer evidence processing:
* Pentium 133MHz or equivalent tower desktop computer
* SVGA 14-inch color monitor
* Two 5GB hard-disk drives
* One Iomega Zip disk drive
* One SyQuest SyJet (or Iomega Jazz) disk drive
* One 5.25-inch, 1.2MB floppy disk drive
* One 3.5-inch, 1.44MB floppy disk drive
* One CD-ROM (8x recommended)
* One Uninterruptible Power Supply (UPS)
* One laser printer (6 pages per minute)
The recommended system should meet the computer-evidence needs of most small- to medium-size law enforcement agencies. As stated, this is the minimum system configuration and it should be supplemented with an adequate supply of floppy disks and storage cartridges, e.g. Zip disks. Further, I strongly recommend the use of a second law enforcement notebook computer for documentation purposes. When the processing computer is used to document findings, there is a potential for parts of the text in the reports to "cross pollinate" the backup copies of the evidence. The potential of a memory dump into file slack is the culprit. By using a separate computer to document findings, this potential problem is eliminated. Inexpensive notebook computers can be purchased for under $1,000 and may come in handy for other tasks in the department as well.
The Forensic Computer Setup
Computer evidence processing can't begin without forensic software tools. The recommended tool kit should include the following:
* MS DOS 6.22 (DOS 7.0 is not recommended)
* Disk Mgt. Software to take full advantage of large hard disks under DOS
* Norton Disk Edit
* A bit stream backup utility (SafeBack by Sydex is recommended)
* A virus scanning utility
* A DOS shell utility with file view
* Password recovery utilities (Access Data's utilities are recommended)
* A text search utility
* Other specialized disk utilities
Please be aware that the capacity of hard-disk drives increases continually. Normally, forensic processing is performed under DOS rather than Windows, to avoid overwriting potential evidence in the form of erased files. However, DOS will not access huge hard-disk drives without disk management software. For this purpose, OnTrack Data Recovery offers a disk utility that is inexpensive and can be purchased via their Internet Web site.
The purchase of computer components and forensic software is a step in the right direction for most law enforcement agencies who desire to begin dealing with computer evidence issues.
Michael R. Anderson, who retired from the IRS's Criminal Investigation Division in 1996, is internationally recognized in the fields of forensic computer science and artificial intelligence. Anderson pioneered the development of federal and international training courses that have evolved into the standards used by law enforcement agencies worldwide in the processing of computer evidence.
He also authored software applications used by law enforcement agencies in 16 countries to process evidence and to aid in the prevention of computer theft. He continues to provide software free of charge to law enforcement and the military. He is currently a consultant. P.O. Box 929 Gresham, OR 97030. E-mail . | <urn:uuid:c92cf2f4-bf03-4620-b199-281ca5b17806> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Evidence-Processing-Computer-Autopsy.html?page=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00267-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929979 | 1,511 | 3.25 | 3 |
The massive effort that will ultimately bring together the world's largest radio telescope got a little more focused this week. The organization running the venture picked some three hundred and fifty scientists and engineers, representing 18 nations and drawn from nearly one hundred institutions, universities and industry, to complete the design phase of the Square Kilometer Array (SKA) Project.
The SKA telescope will include 3,000 dish-shaped antennas and other hybrid receiving technologies spread over distances of about 3,000 kilometers, making it 50--100 times more sensitive than today's best radio telescopes and able to cover frequencies from 0.15 to 30 GHz (2 m to 1 cm wavelength).
The $2 billion SKA project, once operational, will be used to address some of "humankind's greatest questions, such as our understanding of gravity, the nature of dark energy, the very formation of the Universe and whether or not life exists elsewhere," the group says.
The announcement this week included the formation of an array of groups that will oversee various key development areas of the SKA. For example, the Dish Consortium (DSH), led by Dr. Mark McKinnon of the Commonwealth Scientific and Industrial Research Organization in Australia, will oversee all activities necessary to prepare for the procurement of the SKA dishes, including local monitoring and control of each dish for pointing and other functionality, their feeds, necessary electronics and local infrastructure. DSH also includes planning for the manufacturing of all components, the shipment and installation on site of each dish, and acceptance testing. Other groups, like the Low Frequency Aperture Array Consortium, will manage the development of antennas, on-board amplifiers and local processing required for the aperture array telescope of the SKA.
As SKA development continues, its developers put up a list of the radio telescope's "most amazing" facts as they see them. The amazing facts list looks like this:
- The data collected by the SKA in a single day would take nearly two million years to play back on a typical MP3 player.
- The SKA central computer will have the processing power of about one hundred million PCs.
- The SKA will use enough optical fiber linking up all the radio telescopes to wrap twice around the Earth.
- The dishes of the SKA when fully operational will produce 10 times the global internet traffic as of 2013.
- The aperture arrays in the SKA could produce more than 100 times the global internet traffic as of 2013.
- The SKA will generate enough raw data to fill 15 million 64 GB MP3 players every day (a figure worked through in the sketch after this list).
- The SKA supercomputer will perform 10^18 operations per second - equivalent to the number of stars in three million Milky Way galaxies - in order to process all the data that the SKA will produce.
- The SKA will be so sensitive that it will be able to detect an airport radar on a planet 50 light years away.
- The SKA will contain thousands of antennas with a combined collecting area of about one square kilometer (that's 1,000,000 square meters).
- Analysts estimate the London Olympics was the most data-heavy event in recent history - with some 60 Gbytes, the equivalent of 3,000 photographs, travelling across the network in the London Olympic Park every second. This however is only equivalent to the data rate from about half of a single low frequency aperture array station in SKA phase one.
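As a back-of-the-envelope check (my own arithmetic in Python, not part of the SKA fact list), the figure of 15 million 64 GB MP3 players per day implies a sustained raw data rate in the tens of terabits per second:

# Rough check of the "15 million 64 GB MP3 players per day" figure.
players_per_day = 15_000_000
bytes_per_player = 64 * 10**9            # treating "64 GB" as 64 billion bytes
seconds_per_day = 86_400

bytes_per_day = players_per_day * bytes_per_player
print(f"{bytes_per_day / 10**15:.0f} PB of raw data per day")                      # ~960 PB/day
print(f"{bytes_per_day / seconds_per_day / 10**12:.1f} TB per second")             # ~11.1 TB/s
print(f"{bytes_per_day * 8 / seconds_per_day / 10**12:.0f} terabits per second")   # ~89 Tb/s sustained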
While it is usually desirable to set all your Wi-Fi nodes at the maximum possible data rate (11Mbps for 802.11b and 54Mbps for 802.11g and 802.11a), there are a few exceptions to every rule. Let’s look at a few of the pros and cons of doing so:
PRO: Theoretically, higher speeds should result in higher throughput and performance, and thus a better user experience.
PRO: Transmitting at maximum speed saves battery life because faster transmissions take less time (a rough airtime comparison follows this list).
PRO: Dropping data rates back to their adaptive speeds can cause interference on its own in other parts of the network. Transmissions can go farther at slower speeds. Let’s say you’ve built a “honeycomb” or “checkerboard” of non-overlapping 2.4GHz channels 1, 6, 11, for your 802.11b/g networks and set the data rate of all devices for their maximum speed of 11Mbps. Then, interference enters the picture, causing the data rate to adapt, or fall back, to a lower speed, sending the signals farther distances. Those signals could cross over into other “1-6-11 cells” of the honeycomb and interfere with the corresponding channels there, slowing those data rates.
CON: The lower data rates use less complex and more redundant methods of encoding the data, making them less susceptible to interference and signal attenuation.
CON: There might be clients performing suboptimally because of the quality of the client software driver. If you discover lots of retransmissions from a particular client, your monitoring analysis might indicate that the particular client should be set back to one of the adaptive rates at all times - such as to 5.5Mbps in an 802.11b network – and account for the potentially increased reach of that client in the design scheme. | <urn:uuid:80cf80b0-46f4-4c62-8dea-f15695b51fbf> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2295906/network-security/adaptive-rate-selection-in-wi-fi-networks.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00569-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918177 | 391 | 2.984375 | 3 |
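To put the airtime argument in rough numbers, the Python sketch below estimates how long a single 1,500-byte frame occupies the air at common 802.11b rates. It ignores preamble, MAC/PHY overhead and retries, so it understates real airtime, but the relative difference between rates is the point:

FRAME_BYTES = 1500

def airtime_ms(rate_mbps, frame_bytes=FRAME_BYTES):
    """Transmission time, in milliseconds, for the payload bits alone."""
    return (frame_bytes * 8) / (rate_mbps * 1_000_000) * 1000

for rate in (11, 5.5, 2, 1):
    print(f"{rate:>4} Mbps -> {airtime_ms(rate):5.2f} ms on air per frame")
# The radio transmits for about 1.1 ms per frame at 11 Mbps versus about 12 ms at 1 Mbps,
# which is why higher rates save battery and leave more airtime for other stations.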
In ten years 90% of Americans will have affordable access to 100Mbps broadband, with schools, hospitals and army bases getting 1Gbps access, if Congress adopts the national broadband plan published today by the Federal Communications Commission (FCC).
To make this happen the FCC will free up and make available 500MHz of radio spectrum, ten times the bandwidth now available, remove barriers to entry for existing and new broadband providers, and improve the amount and quality of information available on broadband markets.
"The National Broadband Plan is a 21st century roadmap to spur economic growth and investment, create jobs, educate our children, protect our citizens, and engage in our democracy," said FCC chairman Julius Genachowski.
He said action was needed to meet the challenges of global competitiveness, and harness the power of broadband to help address "so many vital national issues".
Blair Levin, who led the broadband initiative, said the plan was "a call to action" to connect America anew. "If we meet it, we will have networks, devices, and applications that create new solutions to seemingly intractable problems," he said.
The US had failed to harness broadband to transform delivery of government services, healthcare, education, public safety, energy conservation, economic development, and other national priorities, the FCC said.
Background research showed that nearly 100 million Americans lacked broadband at home, and 14 million could not access it at all. Only 42% of people with disabilities used broadband at home, and only 5% of people living on tribal lands had access.
The cost of digital exclusion for the student unable to access the internet to complete a homework assignment, or for the unemployed worker who could not search online for a job, continued to grow, the FCC said.
The FCC warned of "a looming shortage of wireless spectrum" that could hurt US innovation and leadership in popular wireless mobile broadband services. "More useful applications, devices, and content are needed to create value for consumers," it said.
The goals for the next 10 years were to:
• Connect 100 million households to affordable 100Mbps service;
• Provide every American community with affordable access to a minimum 1Gbps broadband at anchor institutions such as schools, hospitals, and military sites;
• Ensure that the US led in mobile innovation by making 500MHz of spectrum newly available for licensed and unlicensed use;
• Move adoption rates from roughly 65% to more than 90%, and ensure that every child is digitally literate when he or she leaves high school;
• Switch existing universal service funds from supporting analogue to digital technologies to bring affordable broadband to rural communities, schools, libraries, and vulnerable populations;
• Promote competition through greater transparency, removing barriers to entry, and conducting market-based analysis with quality data on price, speed, and availability;
• Enhance public safety by providing every first responder with access to a nationwide, wireless, interoperable public safety network.
The plan was mandated by the American Recovery and Reinvestment Act in February 2009 and produced by an FCC task force. It arose via 36 public workshops, nine field hearings, and 31 public notices that produced 75,000 pages of public comments.
The online debate, with 131 blogposts, triggered 1,489 comments; 181 ideas that picked up 6,100 votes; 69,500 views on YouTube; and 335,000 Twitter followers, plus independent research and data-gathering.
The FCC said about half the plan's recommendations were for itself, while the rest were for Congress, the White House, state and local government, and the private and nonprofit sectors.
The full plan will be published later today. | <urn:uuid:f7a85c02-e69a-4a9b-870c-e73fac40c8f0> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/1280092361/National-broadband-plan-to-set-US-on-path-to-digital-inclusion | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00505-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943735 | 769 | 2.5625 | 3 |
A U.S. soldier is on patrol with his squad when he kneels to check something out, unknowingly putting his knee into a puddle of contaminants.
The soldier isn't harmed, though, because he or she is wearing a smart suit that immediately senses the threat and transforms the material covering his knee into a protective state that repels the potential deadly bacteria.
Scientists at the Lawrence Livermore National Laboratory, a federal government research facility in Livermore, Calif., are using nanotechnology to create clothing designed to protect U.S. soldiers from chemical and biological attacks.
The researchers turned to nanotechnology to overcome the tough task of creating military-grade protective clothing that's breathable and isn't heavy to wear.
"The threat is nanoscale so we need to work in the nano realm, which helps to keep it light and breathable," said Francesco Fornasiero, a staff scientist at the lab. "If you have a nano-size threat, you need a nano-sized defense."
For a little more than a year, the team of scientists has focused on developing a proof of concept suit that's both tough and inexpensive to manufacture. The lab group is teaming up with scientists from MIT, Rutgers University, the University of Massachusetts at Amherst and other schools to get it done.
Fornasiero said the task is a difficult one, and the suits may not be ready for the field for another 10 to 20 years.
Ross Kozarsky, a senior analyst with Boston-based Lux Research, said the effort could also lead to a lot of other uses for smart nano-based clothing or devices.
"I think it's definitely innovative. It's a pretty powerful platform technology," he added. "Materials that intelligently react to their external surroundings -- that is certainly an interesting class of materials. This is at the front end of the tunnel. Imagine an athlete wearing some kind of clothing that reacts to humidity or temperature and can make itself a lighter or warmer shirt."
Kozarsky also noted that smart clothing could be used for personal tasks, like measuring a user's heart beat, pulse and blood pressure.
The technology could also lead to smart footwear, which could, for example, transform itself to repel potential danger found in water and keeping the user's feet dry.
The military also might consider adapting the base technology so instead of a nano-infused fabric transforming itself to protect a human from a biological or chemical attack, the smart material could be body armor that automatically strengthens itself based on the stress it's under.
"This is a big step forward for nanotech," said Ming Su, an associate professor of biomedical engineering at Worcester Polytechnic Institute. "It can lead to a big area of bionics. Basically, you are dealing with man-made stuff that ... can achieve certain biological functions -- having a self-sensing ability or self-healing abilities, or localized protection from toxic materials."
Think, he added, of a baby blanket or baby clothes that could become warmer when the temperature drops. The same technology could be used to make gloves that can detect high heat or hazmat suits that become more protective when they detect toxins.
"This is very good work, definitely," said Su. "I would say it will have a large impact."
Building better protection
The U.S. military today does have protective gear for soldiers who are under threat of biological or chemical attacks, but it's big, bulky, heavy and hot to wear.
Today's suits can only be worn for an hour or two at a time, according to Fornasiero. "Your physical abilities drop and you can get heat stroke [wearing them]," he said. "It's a big problem."
The Lawrence Livermore team isn't taking just one track to make that happen. They're working on at least two different options for the carbon nanotubes.
One option is to use carbon nanotubes in a layer of the suit's fabric. Sweat and air would be able to easily move through the nanotubes. However, the diameter of the nanotubes is smaller than the diameter of bacteria and viruses. That means they would not be able to pass through the tubes and reach the person wearing the suit.
However, chemicals that might be used in a chemical attack are small enough to fit through the nanotubes. To block them, researchers are adding a layer of polymer threads that extend up from the top of the nanotubes, like stalks of grass coming up from the ground.
The threads are designed to recognize the presence of chemical agents. When that happens, they swell and collapse on top of the nanotubes, blocking anything from entering them.
A second option that the Lawrence Livermore scientists are working on involves similar carbon nanotubes but with catalytic components in a polymer mesh that sits on top of the nanotubes. The components would destroy any chemical agents they come in contact with. After the chemicals are destroyed, they are shed off, enabling the suit to handle multiple attacks.
"We are not selecting either option," said Fornasiero. "We have multiple options and we don't know what will work so we will keep looking."
This story, "Gov't developing smart suits to protect U.S. troops from bio attacks" was originally published by Computerworld. | <urn:uuid:db643eb4-dc6f-43ab-9325-6ecbc372e8c1> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2175393/data-center/gov--39-t-developing-smart-suits-to-protect-u-s--troops-from-bio-attacks.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00413-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961629 | 1,103 | 2.984375 | 3 |
Updated rules to avoid phishing scams
By now, you’ve probably seen an example or two of a phishing attempt. Maybe it was an e-mail message that asked you to quickly follow a mysterious URL to “verify your account” or “confirm billing information.” Once you have clicked on the link or supplied personal information, the phisher is able to access your accounts. Once the phisher has access to this information, your chances of becoming a victim of identity theft triple. A phishing phone call may appear to be from a familiar source, a highly-recognizable business or a survey conductor attempting to gain personal and financial information.
Recent reports cite phishing (one of the oldest computer scams) is still one of the fastest growing forms of fraud, and one of the most successful. As consumers and employees, it’s important to be able to identify a phishing scam to not only protect our personal and financial data, but also the company data we can access.
Below is a list of general rules to help you avoid phishing scams:
- Be cautious when opening emails that manipulate you emotionally. Phishers understand human psychology, and will use all sorts of tricks to get you to open or respond to emails: promising free gifts, warning you that your account has been suspended or even an urgent security warning that seems to come from your computer technician should all be suspect if they ask for inappropriate information (like your social security number or usernames and passwords).
- Never respond to emails that request personal or financial information. Your bank or your employer will never ask you for bank account details, Social Security number or passwords by email. The email requesting this information may look absolutely legitimate – it can have the right logo, even the right design and typeface, of a reputable company – or it may even seem to be from someone you personally know and trust. Still, always delete these without replying or taking any action. If ever in doubt, call the bank or the person the email is supposedly from to verify that they sent it.
- Never go to your bank’s or a vendor’s website by clicking on a link included in an email. Do not click on hyperlinks or links attached in emails, as they could take you to fraudulent websites that lure you into “logging in” to your bank or other high-value e-commerce account. These fraudulent websites might look absolutely genuine, but what you are really doing is handing over they keys to your accounts to criminals. Type in the URL directly into your browser whenever you want to visit a financial or e-commerce website.
- Check that the websites you visit are secure. If the websites you visit are on secure servers, they should start with https:// (the “s” stands for “security”) rather than the usual http://. Never enter personal or financial information except into an https web page.
- Keep your computer secure. Phishing emails often contain spyware and keyloggers (programs that can record your keystrokes and what you do online) or create a back door to allow attackers into your computer. Make sure you have antivirus software and that it’s up to date to catch these malicious programs before they can do harm.
At CenturyLink, we encourage customers to be aware of and to report suspicious activity to firstname.lastname@example.org. Read the full article originally published on our Bright Ideas blog “Five Tips to Avoid Falling for Phishing Tricks.” | <urn:uuid:e40bf97b-9d54-4998-a32d-da8f977761c5> | CC-MAIN-2017-04 | http://news.centurylink.com/blogs/security/updated-rules-to-avoid-phishing-scams | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00533-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931035 | 730 | 2.609375 | 3 |
Let’s Get Personal—Firewalls
Firewalls are the gatekeepers of the Internet. Like bouncers at a club, they determine who gains entry and who does not. Simply put, a firewall is an electronic barrier designed to keep unauthorized computer programs or Internet users from accessing a private computer or network. Before information may enter or leave, it must pass through the firewall. The firewall examines data packets and blocks those that do not meet certain security criteria. With the exception of the built-in Windows XP firewall, which only filters incoming traffic, most personal firewalls today are bi-directional and monitor traffic heading both to and from your computer. The benefit of this type of personal firewall is that it will block attempts to connect to the Internet by unauthorized programs—for instance, after a rootkit or Trojan horse has surreptitiously made its way into your system. These firewalls will also alert you when such unauthorized connections are attempted.
Personal Firewall Facts
While a firewall can be a stand-alone hardware device, a software program or both, a personal firewall is usually implemented in software format only. As a result of the heightened Internet security threats we’re seeing today, users are increasingly availing themselves of personal firewalls for their systems. Personal firewalls are scaled down versions of their industrial-strength brethren, used to protect individual systems instead of entire networks. While their fundamental function is to filter dangerous network traffic, many personal firewalls also log traffic so that you can review it and determine how your system has fared against attacks. Most personal firewalls allow you to customize their configuration so that only the particular types of traffic you desire may reach your system. Another important benefit of many personal firewalls is their ability to block attempts by Trojan horse programs and other types of malicious code from transmitting data from your machine—complementing your antivirus software.
As one can imagine, there’s no shortage of personal firewall solutions. When choosing a personal firewall, keep the following facts in mind.
At a minimum, a personal firewall should:
- Offer clear, easy-to-use configuration options.
- Provide either a manual or automatic update option.
- Hide your computer’s ports, making it “invisible” to Internet scans.
- Offer bi-directional filtering to help detect spyware and/or “adware” and block it from sending personal information from your computer.
- Alert you of attacks and/or log attack information for later review.
Several of the most popular and free (for non-commercial use) Windows-based personal firewalls are:
- Sygate Personal Firewall (smb.sygate.com).
- ZoneAlarm (www.zonelabs.com).
- Agnitum Outpost Personal Firewall (www.agnitum.com).
- Kerio Personal Firewall (www.kerio.com/us).
All the aforementioned vendors also offer inexpensive commercially available products for businesses of all sizes. A few other commercially available Windows-based personal firewalls are:
- BlackICE PC Protection (blackice.iss.net).
- Norton Personal Firewall (www.symantec.com/smallbiz/npf).
- McAfee Personal Firewall Plus (us.mcafee.com).
- Trust EZ Firewall (www.my-etrust.com/).
Following the tragic events of Sept. 11, security awareness is at an all-time high. Established vendors continue to enhance their products to address new breeds of threats. While their basic functions have remained essentially the same, product usability and features will continue to improve. Although personal firewalls are useful in improving the security of virtually any computer system, they are especially useful if a system connects to the Internet via DSL or cable modem. These always-on connections make it easier for attackers to spot your computer and increase your risk of being attacked. The main limitation of most personal firewalls is that because they filter and inspect data packets, they may also somewhat slow down your computer. For most users, that’s a small price to pay for the added security they deliver.
An outreach program devised by Paul Robertson (director of risk assessment at TruSecure) was created to help raise personal firewall awareness. “Personal Firewall Day” debuted this past Jan. 15 and focuses on helping home users better secure their PCs. Protecting the personal home PC in turn amounts to added protection for office networks when they’re connected together remotely. Businesses are finding that their remote employees who don’t have adequately firewall-protected home PCs quickly become a liability to their entire network. For additional information regarding Robertson’s noble cause, visit www.personalfirewallday.org.
On a final note, two software firewalls should never be used simultaneously, as they may interfere with each other’s proper operation. Before installing any personal firewall products on a Windows XP-based computer, be sure that the built-in Intenet Connection Firewall (ICF) is not activated. Detailed instructions can be found at www.microsoft.com/windowsxp/ pro/using/howto/networking/icf.asp.
Douglas Schweitzer, A+, Network+, i-Net+, CIW, is an Internet security specialist and the author of “Securing the Network from Malicious Code” and “Incident Response: Computer Forensics Toolkit.” He can be reached at email@example.com. | <urn:uuid:95be878f-e8da-4476-8c9c-4f83ca670277> | CC-MAIN-2017-04 | http://certmag.com/lets-get-personal-firewalls/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00349-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.898963 | 1,163 | 3.09375 | 3 |
Recovery Type: Desktop
Drive Capacity: 320GB
Manufacturer: Western Digital
Model Name: Blue
Model Number: WD3200AAKS-00L9A0
Manufacture Date: 04 Feb 2010
Main Symptom: Clicking
Type of Data: Business Documents, Images, Videos
Data Recovery Grade: 10
Binary Read: 99.99%
When the power goes out, there are a lot of questions that come to mind. What happened to cause the outage? When will the power come back on? Lower on the list of concerns may be whether or not your computer shut down properly when it lost power.
In this case, the user’s desktop computer did not shut down properly. Oftentimes, this does not create problems for the computer’s equipment, but in this case it caused the hard drive to fail. When the computer was turned back on, the drive began to make a clicking noise and platters were not spinning.
When a hard drive runs normally, the spinning platters generate enough “wind” to keep the read/write heads floating above their surface. During a normal shutdown, the platters spin to a stop slowly, and the heads repark to their original position without touching the surface of the platters.
However, in this case, the power outage caused an improper shutdown, and the read/write heads did not repark correctly. Instead of floating over the surface of the platters, they dragged across them, resulting in damage to the heads. Engineers determined they would need to be replaced in order to recover the data.
Since the read/write heads needed replacing, the drive would need to be brought into our cleanroom and opened for repair. Engineers replaced the broken heads with working ones and recalibrated the drive to dial it in and get the new heads working well enough to extract the lost data.
After replacing the read/write heads, engineers were able to get back 99.99% of the data from the drive. Failures due to power outages are common, especially in summer during thunderstorms. However, they are also easily preventable. By properly powering down and unplugging computers during severe weather, users can avoid electrical damage to their hard drives.
When hard drives fail, users more often than not lose their files and do not have them in another location. For this reason, the best way to keep files safe from data loss is using an online backup solution. By using a secure, cloud-based online backup service, users can be sure their files are protected from data loss threats such as viruses fires, or failed hard drives.
Want to learn more? Tune in soon for another Data Recovery Case Study Blog Post.
EDIT: As of September 2016, Gillware Online Backup has been acquired by StorageCraft. Click here to learn more about their backup solutions. Click here to learn more about becoming a StorageCraft Partner. | <urn:uuid:3e1efab0-5579-4351-9d08-422dba126ef3> | CC-MAIN-2017-04 | https://www.gillware.com/blog/data-recovery-case/case-study-western-digital-blue-wd3200aaks-00l9a0-making-clicking-noise/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00009-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949549 | 601 | 2.5625 | 3 |
When it comes to transportation, the United States is No.1. We have more miles of highway and railroad, we drive more miles in our vehicles, and we fly more passengers in our airplanes than any other country in the world.
The transportation market is a $600 billion industry, according to BizStats.com, with the public sector contributing vast sums. For example, in 2005, the Census Bureau reported that federal, state and local governments spent $66 billion on roads. And according to the American Public Transportation Association, public transportation is a $27 billion industry.
Also in 2005, state and local governments spent $1.8 billion on transportation IT systems, according to INPUT. By 2009, IT spending will have increased to $2.5 billion. Meanwhile the federal government has proposed spending $2.7 billion on transportation-related IT projects in fiscal 2007.
While it's hard to envision the vast number of IT projects taking place in the transportation industry, it is even harder to imagine an overall strategy for sharing data between the various levels of government. Repeated requests to the U.S. Department of Transportation's Office of the CIO for an overview of its data sharing strategy were unanswered.
But in reality, a lot of sharing is going on. Many agencies at the federal, state and local levels are working on individual projects to integrate information at different levels of government, or across jurisdictions at the same level. They are also working to improve data sharing between transportation and law enforcement, as well as between environmental protection and other disciplines.
Data on Changing Conditions
As acting strategy manager for the government practice at SAS, a business analytics software firm, Alyssa Alexander works with many transportation departments. She observed that state departments of transportation (DOTs) are increasingly sharing data on road conditions through statewide geographic information councils and plotting the data into map displays. The GIS shop can code the maps to indicate where the roads are in the best and worst condition due to construction projects. The DOTs also analyze stretches of highway to predict which are likely to see more accidents because of construction.
"They're making sure public safety officers are aware of changes in road conditions so that they're able to respond quickly to traffic incidents if there's a higher likelihood," Alexander said, adding that the DOTs are trying to develop better mechanisms for routing this road condition data through their GIS departments and to the public safety departments.
Safety research -- particularly concerning incident tracking -- is one of the priorities in transportation data sharing, according to Alexander.
State departments of public safety normally collect incident-related data. "They must communicate the appropriate data points into the department of transportation, so that at a planning level, the department of transportation can improve the safety of a particular intersection or the speed along the roadway," she said.
SAS has been working with clients on systems for collecting incident data, as well as Web presentations, Alexander said. "The Web sites that provide this information through static reports or ad hoc queries are the ones we see a lot more of. They're in development, and there's some grant funding from the Federal Highway Administration (FHWA) to support those types of programs."
Efforts to share incident data, and many other data sharing initiatives, could get a major boost from a program to develop sets of extensible markup language (XML) schemas for transportation. Government and industry participants recently completed a project, funded by the National Cooperative Highway Research Program, which is sponsored by state DOTs and the FHWA, to define TransXML data exchange formats for applications in four areas: survey/roadway design, transportation construction/materials, highway bridge structures and transportation safety.
Participants hope the TransXML framework will eventually cover many more disciplines. "They have planned interfaces for public safety, rail, local transit, ferries, accounting and aerospace applications," Alexander said.
TransXML is intended to be a one-stop shopping umbrella that covers the transportation industry, said Steve Brown, applications development manager at the Nebraska Department of Roads (NDOR)."But it's still in its infancy." A self-proclaimed "data sharing/XML evangelist," Brown serves on the technical applications architecture task force of the American Association of State Highway and Transportation Officials -- a committee that played a major role in writing the project's proposal.
Like any XML framework, TransXML provides a way to exchange data among different software applications, even those not designed to be interoperable. With schemas already in place for exchanging data on transportation safety, local law enforcement agencies using a variety of systems to record data on highway crashes could easily submit that information to a state transportation department. "We can then keep it centrally, so we can do our hazardous location analysis, safety analysis, and then make that available -- all of our data -- back to the local law enforcement if they're interested in utilizing it," Brown said. The state could also send the collected data to federal agencies without worrying about data format requirements.
Hopefully, Brown said, officials at the NDOR will start using TransXML in the next 18 months.
Some other applications for TransXML will have to wait until more schemas are developed. Local governments could use it to store information about tax lots and landowners, and submit it to the states. "We can use it when we buy right of way, when we permit, when we do access control," Brown said. "Then all that information can be submitted nationally, hopefully, for census and other land management uses."
In long-haul trucking, TransXML could streamline the process of obtaining oversized/overweight permits. Today, if a truck needs to haul an oversized or overweight load across several states, the trucking company or its agent must apply separately to each state the load will cross. With TransXML formats, the company could apply once and then submit that one application to each state, paying the necessary fees and getting the permits in one transaction, Brown said.
TransXML could also help governments cooperating on cross-border highway facilities. When Nebraska works with Iowa or South Dakota on a road with a cross-border bridge, Brown said, road and bridge design is a collaborative and cooperative effort using their respective systems. Using TransXML, the states can create a single transportation system model, "even though it's designed with different software, with different teams, with different people."
The same principle applies when the state builds a highway that crosses city or county lines, he said.
While the group has achieved its goal of developing and demonstrating TransXML schemas in the four defined areas, as of March it had still not completed one important task -- finding an organization to take long-term ownership of TransXML and keep the initiative going. "If there is no long-term owner and keeper, it falls apart," Brown said.
Road Maps and GIS Intersect
Many data sharing arrangements that involve transportation focus on GIS. The Tucson, Ariz., DOT (TDOT) has been exchanging GIS files with Pima County and other members of the Pima Association of Governments since the 1990s.
Transportation agencies started by making a variety of digital maps on their Web sites available to anyone in the public or private sectors. Local governments also continue to share data via TDOT's maps and records server. "We pass orthophotography around like it was candy," said Ron Platt, IT manager of the TDOT. Since all the participants use GIS software from ESRI of Redlands, Calif., there are no compatibility issues, he said.
One county project that uses GIS data from TDOT is the Sonoran Desert Conservation plan, a land-use planning project to protect the desert's habitat. Many of the washes -- stream beds that contain no water -- have been digitized off the orthophotography and shared with the county, Platt said.
GIS is also the focus of a project in the state of Washington that will involve data integration with other departments. The effort comes as part of a project to replace many of the critical information management systems at the Washington State DOT (WSDOT). The plan is to tie the new systems -- for managing highway construction, finance and a host of other activities -- to the WSDOT's GIS tools, and add a geographic dimension to the information.
"When you're trying to make a decision on what investments have been made or need to be made in an area, you can call up everything from engineering drawings to financials" connected with any project on the map, said David Hamrick, WSDOT's CIO. The agency plans to include data layers provided by the state Department of Natural Resources and other departments that own assets across the state, he said.
In another sharing initiative, Washington state is working with Oregon and local governments within the two states on a Web-based trip planning system, which Hamrick described as almost like a MapQuest for public transportation. When the system is complete, "you can go in and say, 'I need to get from Spokane to this address in Portland,' and it will map all the possible public transportation methods you can use to get there," he said.
Each transportation authority will continue to maintain its own schedule data. Initially authorities will have to periodically upload fresh schedules to keep the integrated system up to date. "In a future phase, we'll start looking at connections to be able to just automatically update from local systems," Hamrick said.
Commercial Drivers, Problem Drivers
For years, state motor vehicle departments have shared data on commercial drivers through the Commercial Driver License Information System (CDLIS), operated by the American Association of Motor Vehicle Administrators. Each state maintains CDL data in its own management system, and the CDLIS operates as a "pointer system," said Barry Goleman, specialist leader for the transportation and motor vehicle practice at Deloitte Consulting in Sacramento, Calif. When an employee at a DMV in one state makes a query about a commercial driver, the system can see who has licensed that driver and route the request to that state.
If a truck driver from Florida applies for a CDL in New York, for example, the DMV will query the system to make sure the license is on record in Florida, and that the driver has only one license record, Goleman said. When New York issues the license, the record is electronically transferred to Florida. "And if that driver subsequently gets a traffic conviction in Illinois, after that conviction is processed, Illinois electronically routes that to New York for posting on his home state driver record," he said.
The U.S. DOT's National Highway Transportation Safety Administration operates a parallel system for noncommercial drivers' licenses. Called the National Driver Register (NDR), it allows motor vehicle officials in one state to check DMV databases in other states before issuing new licenses, Goleman said.
"That prevents somebody who has a Maryland license and gets suspended for drunken driving from going across the border to Virginia and saying, "'I've never had a license before; I want to get a license here; I've just moved here,'" Goleman said. Like the CDLIS, the NDR uses a federated data system; each state maintains its own data, but other states' DMVs can access the information as needed.
Other transportation agencies also query the NDR to obtain driver license data for activities that they regulate: the Federal Aviation Administration for airman medical certification; the Federal Railroad Administration for locomotive operators; the Coast Guard for merchant marines and servicemen; and the National Transportation Safety Board and Federal Motor Carrier Safety Administration for accident investigations.
Had the system been available in 1989, it might have averted the Exxon Valdez disaster, Goleman said. The oil tanker's captain, Joseph Hazelwood, had been arrested several times for drunken driving. "They check all their maritime certificates against this database to look for people who have a history of those kinds of convictions." | <urn:uuid:fa1622d7-d974-4aa5-a008-537054df1881> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/pcio/The-Integration-Highway.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00495-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94914 | 2,452 | 2.578125 | 3 |
Over the years there have been more than a few definitions of open. Here are some:
- Open as in open source. The source code is available for download and review, and you are free to copy, modify, link to, and distribute the source and binaries. But, just because your product uses open source software doesn't make your product open--as in open source--nor does it confer all the things that open source implies.
- Open as in an open sandbox. Meaning, anyone can play in your sandbox as long as they play by your rules. A vendor can make a proprietary system but expose an API that anyone can integrate with. This is good for the vendor and the customer. Encumbering that "open" API with licenses, fees, non-disclosure agreements and other limitations is not open. It is in fact, closed.
- Open as in free. If you give something away, it's not open. It's free. If you make an API that anyone can use, but they can't treat it like open source, then that is not open. Even if you give the API away with a license saying it is free, that doesn't make it open.
- Open as in a limited duration trial license that you can freely use for the duration of the trial, but no longer. As in the case of "free," this is not open either; it is closed. This is trial ware. Free for non-commercial use is also not open.
- Open as in using open protocols. If your product uses open protocols like XML or HTML, that doesn't make your product open. It makes your product accessible to others, but doesn't make it open. I write in English to make my scintillating thoughts accessible to English speakers, but still closed to all non-English speakers -- that is not open.
- Open as in open standards. Standards that are developed openly and are made available to anyone to use and are unencumbered by licensing are open standards. It really doesn't matter which organization created them. The IETF is one example. The IEEE is another. Note that some standards are developed behind closed doors (Trusted Computing Group) but are available to anyone once published.
- Open as in de facto standards. De facto standards may or may not be open. In fact, they may require paying a license to some entity or another. These standards, if they are in fact de facto, are simply widely adopted. Like the Graphics Interchange Format or the RSA algorithms prior to September 21st, 2000 when the patents expired. (RSA Security released the algorithms to the public domain two weeks prior. Big whoop.)
I know why vendors use the word open so much. Open is good and closed is bad, right? Nobody wants to be closed. Rather, no vendor wants to be perceived as closed, but they all are closed to one degree or another. Nearly every networking vendor talks about their open API to integrate their stuff with everyone elses' stuff, provided that everyone else comes and plays in their sandbox and agrees to their licensing terms and perhaps pays a licensing fee.
How do you know if something is truly open? Well, that is when you have to ask the questions to define what they mean by open or what the open applies to. For example, the nature of open will become increasingly important with public and private cloud computing where a number of different systems need to integrate together. There is standards work being developed in various groups, but the crux of the matter is that the standards, if they are going to take hold, have to be open in the truest sense of the word. And don't get me started on standards. | <urn:uuid:38cf8142-e11f-42d5-88ad-e5b2ae4c8fac> | CC-MAIN-2017-04 | http://www.networkcomputing.com/storage/many-shades-open/1321915673 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00314-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.973052 | 756 | 3.015625 | 3 |
What are common DSS architectural patterns or styles?
by Dan Power
Information technology continues to evolve, but one can identify various architectural patterns or styles that seem to reoccur in the development of computerized decision support systems. New systems seem to imitate or adopt prior patterns by incorporating updated hardware and networking technologies. It is likely historical architectural patterns will persist with service oriented and message-based implementations of decision support systems (cf., Natis, 2003; Whetten, 2001).
Advocates of a unified modeling approach to building systems perceive that design patterns or common architectural styles exist for classes of systems with similar purposes (cf., Booch, Rumbaugh and Jacobson, 1999; Eeles, 2006; Gamma, Helm, Johnson and Vlissides, 1995). Inadequate attention has been given to defining these patterns or styles for DSS, but some material that has been written on the topic.
An architecture for a computerized Decision Support System documents the plan for deploying the components of the envisioned DSS or how the components were actually deployed in an implemented decision support application. In general, DSS architecture specifications focus on the dialog/user interface, model base and data base components and how they are interconnected. "Architecture is the fundamental organization of a system embodied in its components, their relationships to each other, and to the environment, and the principles guiding its design and evolution (IEEE 1471-2000)."
According to Sprague and Carlson (1982), the components of the DSS technology framework include dialogue management, data base management, model base management, and DSS architecture. They argued the DSS architecture describes the mechanism and structure for the integration of the dialogue, data base and model management components. They identified 4 architectures: the DSS Network, the DSS Bridge, the DSS Sandwich, and the DSS Tower. The DSS Network has multiple dialogue, modelling and data base components that are interconnected and can share data through a component interface. The Bridge has a standard interface with local dialogue and modeling components that link to remote modelling and data base components. A Sandwich architecture has a single dialogue and data base component, but multiple model components are linked by the architecture. The dialogue and data base components are the "bread" and the model components provide the "meat" for the application. Finally, the Tower includes more vertical components or tiers with data extraction tools integrating diverse data base components. The rest of the Tower architecture is similar to a Network structure.
Power and Kaparthi (1998) identified six DSS architectures: distributed dialogue, remote dialogue, distributed model, distributed data, remote data and stand alone. The distributed dialogue is basically a thin-client web architecture with the dialogue presented on the client and with models and data accessed from one or more servers using a network connection. The remote dialog is a more traditional thick-client application with the entire dialogue interface on the client and the model and data base components on one of more servers. In the distributed model, the application software on the client expands and model capabilities are distributed for more efficient processing. The distributed data architecture requires accessing data across the network for processing. With a remote data archictecture, some data is download to the client for faster processing. Finally, a stand alone architecture has the entire DSS application on a stand alone computer with no provision for network access to server based components.
In a similar framework, Schay (1992) of the Gartner Group defined five different styles of client-server computing. The difference in these styles was the portion of the computing process that is "distributed" to another computer over the network. The five styles are defined in relation to the three main processes in a computerized application: (1) presentation (user interface), (2) process (application logic) and (3) data storage (data management). The five styles are called: 1) Distributed presentation, 2) Remote presentation, 3) Distributed logic, 4) Remote data management, and 5) Distributed database. The descriptive names capture the architectural differences.
Most architectural patterns or styles are very general. According to Eeles (2006), "An important aspect of an architecture is not just the end result, the architecture itself, but the rationale for why it is the way it is. Thus, an important consideration is to ensure that you document the decisions that have led to this architecture and the rationale for those decisions." Potentially generic architecture patterns can lead to a better understanding of how to build computerized decision support systems. DSS of all types, communications-driven, data-driven, document-driven, knowledge-driven and model-driven, are increasingly integrated with other Information Systems and a given DSS may have multiple decision support subsystems of different types. The architectures of data-driven DSS emphasizes database performance and scalability. Most model-driven DSS architectures store the model software on a server and distribute the user interface software to clients. Networking issues create challenges for many types of DSS but especially for a geographically distributed, communications-driven DSS. Much more needs to be done to model the increasingly complex patterns in DSS.
Historically the focus has been on structural software system components of DSS, but there is an increasing need to identify patterns in other "views" for particular types of DSS. "Views" are analogous to the different blueprints created for a complex building by an architect. A DSS architecture can potentially be diagrammed in terms of four layers: the business process map, the systems architecture, the technical architecture, and an output delivery architecture. The business process map shows how decision making tasks are completed. The systems architecture shows the traditional software components. The technical architecture focuses on computing hardware, protocols and networking. The output delivery architecture focuses on the results and representations of the system (cf., Power, 2002). Identifying patterns in process maps and output delivery can also assist DSS architects.
Having a well-defined and well-communicated architecture for a specific DSS provides an organization with significant benefits. An architecture diagram helps developers work together, improves planning, increases the development team's ability to communicate system concepts to management, increases the team's ability to communicate needs to potential vendors, and increases the ability of other groups to implement systems that must work with the specific DSS. Technical benefits of defining a DSS architecture include the ability to plan systems in an effective and coordinated fashion and to evaluate technology options within a context of how they will work rather than abstractly. A specific DSS vision and an architecture for a new DSS helps communicate the future and provides a consistent goal for making individual design decisions. Achieving all these benefits requires that both information system professionals and prospective DSS users must cooperate closely in defining the intended architecture. Design patterns can help DSS architects and potential users evaluate and select solutions. Identifying patterns or styles can also assist a DSS architect in preparing early-phase project estimates to make a business case for a proposed decision support system.
Design the DSS before you build it. As always, your comments, suggestions and feedback are welcomed.
Blythe, K. C., "Client-server Computing Management Issues," CAUSE/EFFECT, Volume 16, Number 2, Summer 1993, URL http://www.educause.edu/ir/library/text/CEM9320.txt .
Booch, G., J. Rumbaugh, and I. Jacobson, The Unified Modeling Language User Guide, Boston: Addison-Wesley, 1999.
Eeles, P., What is a software architecture? IBM, February 15, 2006, URL http://www-128.ibm.com/developerworks/rational/library/feb06/eeles/ .
Gamma, E., R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Boston: Addison-Wesley, 1995.
IEEE Std 1471-2000, "IEEE Recommended Practice for Architectural Description of Software-Intensive Systems," URL http://standards.ieee.org/reading/ieee/std_public/description/se/1471-2000_desc.html .
Mallach, E. G. Understanding Decision Support and Expert Systems. Burr Ridge, IL: Richard D. Irwin, Inc., 1994.
Natis, Y. V., "Service-Oriented Architecture Scenario," Gartner, April 16, 2003, URL http://www.gartner.com/DisplayDocument?doc_cd=114358 .
Nolan, R. L. "Building the company’s computer architecture strategic plan." Stage by Stage (Nolan, Norton & Company) 2 (Winter): 1983: 1-7.
Power, D. J. and S. Kaparthi. "The Changing Technological Context of Decision Support Systems", In Berkeley, D., G. Widmeyer, P. Brezillion & V. Rajkovic (Eds.) Context-Sensitive Decision Support Systems. London: Chapman and Hall, 1998.
Schay, P., "How will Traditional Mainframe and Midrange Systems Evolve to Support Client/Server Computing?" in the Proceedings of the Gartner Group Symposium 1992, Scenarios, Vol. 1 (Stamford, Conn.: Gartner Group, 1992), p. 6 of Client/Server section.
Sprague, R.H. and E.D. Carlson. Building Effective Decision Support Systems. Englewood Cliffs, NJ: Prentice-Hall, 1982.
Welsh, M. J., "Enterprise architecture essentials, Part 3: Design and build your enterprise architecture," IBM, September 11, 2007, URL http://www.ibm.com/developerworks/library/ar-enterarch3/ .
Whetten, B., "Message-Based Computing: The Fourth Wave of Integration," 12/23/2001, URL http://www.ebizq.net/topics/jms/features/1579.html?&pp=1 .
Last update: 2007-10-03 02:04
Author: Daniel Power
You cannot comment on this entry | <urn:uuid:34507f29-cbde-44be-8fa3-26221bdb859b> | CC-MAIN-2017-04 | http://dssresources.com/faq/index.php?action=artikel&id=144 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00222-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.880864 | 2,111 | 2.84375 | 3 |
The Hiller Advanced Research Division (A.R.D.) incorporated a five foot fiberglass round wing, (ducted fan) with twin counter rotating coaxial propellers powered by two 44hp/4000 rpm, four cylinder opposed, two-cycle, Nelson H-59 Engines. The Nelson engine was the first two-cycle engine certified by the FAA for aircraft use. Utilizing the Bernoulli principle, 40% of the vehicle's lift was generated by air moving over the ducted fan's leading edge. The remaining 60% of lift was generated by thrust from the counter rotating propellers. Of the six Flying Platforms that were built, the (ONR) vehicle is on exhibit at the Hiller Aviation Museum, and the National Air & Space Museum. | <urn:uuid:38da1e0d-a81b-4fd2-983d-bce4ce46b50d> | CC-MAIN-2017-04 | http://www.cio.com/article/2369562/government/151670-The-zany-world-of-identified-flying-objects.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00250-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964594 | 158 | 2.9375 | 3 |
The Future of Wi-Fi
It’s in your laptop, it’s in your smartphone, it’s in your watch, it’s everywhere. It’s Wi-Fi and even though it's barely 15 years old, it’s already become the glue binding our digital lifestyles. But if you’ve ever tried getting a good Wi-Fi signal in a coffee shop, on the street, or in a busy airport, you know that Wi-Fi has limitations. When too many people are using the same connection, Wi-Fi quickly overloads and becomes sluggish.
Wi-Fi is everywhere because it runs over unlicensed spectrum which isn’t reserved for any specific user or technology. But the future is cloudy - some new technologies could begin to interfere with Wi-Fi and it’s popularity means we need more spectrum to let it grow. There is a solution. We can solve this Wi-Fi crunch by freeing up more spectrum for unlicensed use and ensuring that new technologies work politely.
Learn More: Watch the Video
Wi-Fi is the most used technology for accessing the Internet. In fact, more data is carried over Wi-Fi than mobile and wired combined. Watch the video to learn more about how with better spectrum management, we can make sure Wi-Fi works better for our broadband needs today, and for our ever-increasing needs in the future.
national public cable Wi-Fi hotspots
Wi-Fi devices in an average U.S. home
of in-home broadband usage will be via Wi-Fi by 2017 | <urn:uuid:4946a560-abab-4f9d-854c-d7956e7c5f86> | CC-MAIN-2017-04 | https://www.ncta.com/positions/unlicensed-spectrum | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00368-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929789 | 329 | 2.796875 | 3 |
Master Port Scanning with Nmap
What's on your network and how vulnerable is it to a hacker attack? Having a clear picture of this is a vital part of effective network administration, and one way to build up such a picture is by network mapping using a port scanner.
Port scanning is the art of sending packets onto the network and analyzing what comes back and what doesn't. By sending packets to specific ports and IP addresses it's possible to build up a picture of the IP addresses of devices that are connected, what OSes they are running, what ports they have open, and the services running on those ports. (Of course there are other ways of doing this, but since port mapping is one of the first types of reconnaissance a hacker is likely to perform, doing your own port mapping will give you a clear idea of what hackers may find out.)
There are many open source port mappers, the best known one of which is called Nmap (short for network mapper.) Nmap is available for Linux, Windows, Solaris and other platforms, from http://insecure.org/nmap/ . It's a very flexible scanner with stealth scan options designed to evade intrusion detection systems (IDS), and by using these you can get practice in spotting the signs of intrusion attempts in your logs.
In this article we'll be looking at some of the more straightforward uses of Nmap. The examples used are based on Nmap 4.20 running on Linux, but the same commands should work on any other platform. If you read our article about building a portable security tool with the ASUS Eee PC and Ubuntu, Nmap is an excellent candidate for immediate installation.
Start Nmap in a terminal window by simply typing nmap and you'll see a long list of options as in figure 1.
Once you've got the hang of the basics it's worth experimenting with some of these, but to get started with a very quick indication of the machines on your network, type nmap sP 192.168.1.*.
The sP option makes Nmap perform a ping scan on all the IP addresses in the specified IP range (in this case 192.168.1.1-255), listing the hosts which respond, as in figure 2.
By default Nmap actually performs a ping scan before doing any other type of scan to establish which IP addresses are actually in use, ignoring any addresses which don't reply to the ping. This means that if any remote hosts or anything between you and the remote hosts blocks these pings then Nmap will not be aware that they exist, and won't attempt to interrogate them further. Fortunately you can get around this by using the p0 option, which forces Nmap to scan any addresses you specify, regardless of whether they respond to a ping.If you know you have a host on your network at 192.168.1.150 that is not responding to a ping, you can still investigate it using nmap P0 192.168.1.150. (See figure 3) By default Nmap only scans a subset of all the available ports, so to investigate a machine more rigorously you can use the p option to specify the ports you want to scan for example all ports in the range 1-65535: nmap p 1-65535 192.168.1.150 (See figure 4)
The p option is also useful if you want to investigate machines on your network with a specific port open, such as port 139 (Netbios session service):
To restrict your scan of port 139 to a subset of your network, simply type in an IP address range: nmap p 139 192.168.1.1-20 (See figure 5)
As well as various TCP scans, nmap can be made to perform a UDP scan using the sU option to get further port information: Nmap sU 192.168.1.150 (See figure 6)
It's worth noting that UDP scanning works in the opposite way to TCP scanning. Since a TCP session is initiated by the three-way handshake, nmap's default SYN scan can tell if a TCP port is open when it receives a SYN/ACK packet in response to its SYN packet. UDP sends no such an acknowledgement the only response it is likely to receive is an ICMP_PORT_UNREACH error packet from a closed port. So no response indicates that a UDP port might be open, but just to make things more complicated, no response could also simply mean that the UDP or ICMP packet got lost (or filtered). Nmap retransmits packets that may have got lost to cut down on false positives, but the bottom line is that when Nmap reports an open/filtered UDP port, this may not actually be the case.
It's also worth noting that non-Microsoft systems limit the number of ICMP Port Unreachable messages generated in a given time period, so scanning these systems can be very slow indeed.
Variations in different vendors' TCP/IP stacks mean that it's possible to identify or have a good stab at identifying the OS running on each device on the network by analyzing the packets received from them. Nmap can do this for you using its own OS-identification engine if you specify the O option: nmap O 192.168.1.5. (See figure 7)
If you don't like using command line programs and remembering the various options, the good news is that a number of Nmap front-ends are available, including NmapFE and the more flexible and arguably easier to use UMIT
With UMIT you can enter a target IP range, and choose a preset scan from a drop down box. If none of these suit your needs the Command Wizard allows you to build a scan by clicking boxes in a series of forms. As you choose the various components of your scan an Nmap command is slowly built up so you can see the command line options corresponding to the choices you make.
The custom scans you build using the wizard can be saved for reuse later.
The results are presented both graphically by host or by service and in a terminal window within the GUI. (See figure 9)
Port scanning is an important way of getting a handle of what's on your network, and is also a key way for hackers to scout out any vulnerabilities. Nmap is a very important security tool, and the more you pay around with it and explore its features, the more you'll know about you network and the work that needs to be done to secure it.
So do yourself and your organization a favor: download a copy sooner rather than later, and have at it! | <urn:uuid:b6043077-d551-480e-b07f-ea6fdf4b5019> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsecur/article.php/10952_3716606_2/Master-Port-Scanning-with-Nmap.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00094-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936412 | 1,373 | 2.703125 | 3 |
Platform as a Service
Author: Evi Rachmilewitz, ClickSoftware Contributor
PaaS is one of the building blocks that make up what we call "cloud computing." PaaS provides the infrastructure and the services needed to run applications over the internet. The idea is to "tap" into whatever service you need without worrying about the complexity behind the scenes. In addition, just like other utilities, PaaS is billed on a pay-by-usage model, also known as metered billing.
Who uses PaaS?
ISVs and corporate IT divisions use PaaS and benefit from it because the model allows them to focus on innovation rather than spend precious time maintaining and configuring systems. PaaS decouples innovation from deployment: R&D engineers can optimize their code while working in their local, isolated environments and then have it deployed and tested, fairly easily, on various cloud computing systems.
Unlike traditional models, the PaaS model lets R&D engineers move their systems from an MS-SQL server to a MySQL server fairly easily. They can test their applications against a single application server or against multiple servers without going through daunting installation processes, apply automatic rules that determine when to scale their applications, test their work against several flavors of web server, and get reports on the environment that currently runs their code.
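As a concrete illustration of this portability, a common pattern is to keep every environment-specific detail (database engine, host, credentials) out of the application code and read it from configuration that the platform injects at deployment time. The sketch below is a minimal, hypothetical Python example; the environment variable names and the drivers shown are assumptions for illustration, not part of any particular PaaS offering.

    import os

    def get_connection():
        # Open a database connection using settings supplied by the environment.
        # Locally these variables might point at a developer's MS-SQL instance;
        # on a PaaS they are injected by the platform and could just as easily
        # point at MySQL, without any change to the application code.
        db_engine = os.environ.get("DB_ENGINE", "mysql")   # e.g. "mysql" or "mssql"
        db_host = os.environ.get("DB_HOST", "localhost")
        db_name = os.environ.get("DB_NAME", "appdb")
        db_user = os.environ.get("DB_USER", "app")
        db_password = os.environ.get("DB_PASSWORD", "")

        if db_engine == "mysql":
            import MySQLdb                     # assumes the MySQL driver is installed
            return MySQLdb.connect(host=db_host, user=db_user,
                                   passwd=db_password, db=db_name)
        elif db_engine == "mssql":
            import pymssql                     # assumes the MS-SQL driver is installed
            return pymssql.connect(server=db_host, user=db_user,
                                   password=db_password, database=db_name)
        raise ValueError("Unsupported DB_ENGINE: %s" % db_engine)

The same idea extends to scaling rules and service bindings: because the code only ever sees configuration, the platform is free to change what sits behind it.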
It is important to reiterate that the advantage of this approach is significant. Software engineers and QA engineers will tell you how time consuming it is to configure in-house systems for integration tests. With PaaS this requirement no longer exists, hence more time is available for pure R&D work.
Who are the major PaaS providers?
Google App Engine
Google App Engine lets you run your applications on Google's infrastructure. App Engine applications are easy to build and deploy, and easy to scale as traffic and data storage needs grow. With App Engine, developers can write their application's code, test it on their local machine and upload it to Google quite easily.
App Engine supports several programming languages. It has a Java runtime environment, so it supports standard Java technologies including J2SE and J2EE, as well as JVM-based languages such as JRuby. In addition, it has a Python runtime environment.
App Engine is considered a low-cost solution that provides 500 MB of storage by default and enough bandwidth to support up to 5 million page views per month.
App Engine also offers a "zero to sixty" capability that scales applications up automatically, without any manual administration of machines.
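To give a rough sense of how little code a minimal App Engine application needs, the sketch below uses the Python runtime's bundled webapp framework; the handler name and URL mapping are arbitrary choices for this example.

    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class MainPage(webapp.RequestHandler):
        def get(self):
            # Respond to HTTP GET requests on the mapped URL.
            self.response.headers['Content-Type'] = 'text/plain'
            self.response.out.write('Hello from App Engine')

    # Map the root URL to the handler above.
    application = webapp.WSGIApplication([('/', MainPage)], debug=True)

    def main():
        run_wsgi_app(application)

    if __name__ == '__main__':
        main()

Together with a small app.yaml descriptor that names the application and routes requests to this script, this is enough to upload with the App Engine SDK and have Google's infrastructure handle serving and scaling.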
Force.com is a PaaS environment provided by Salesforce.com (NYSE: CRM). Force.com is strongly integrated to salesforce’s CRM SaaS application. It provides a secured, backed-up and scalable environment that allows software engineers to build apps that include built-in social and mobile functionality, business processes, reporting and search.
According to Gartner, force.com serves over 1,000 customer accounts, in addition to tens of thousands that use this platform in conjunction with Salesforce.com.
With force.com software developers can build apps using various technologies. This includes .net, java, apex and visualforce.
A downside of force.com is that it is highly integrated to Salesforce.com. Software developers do not necessarily want to clutter their apps with CRM functionality; for them force.com might not be a first option.
Windows Azure is a cloud services operating system that serves as the service management environment for the Windows Azure Platform. Windows Azure provides on-demand computing and storage to host and manage web applications over Microsoft’s data centers.
Windows Azure supports both Microsoft and non-Microsoft languages and environments. Most users of Windows Azure develop with Microsoft’s visual studio tools. Windows Azure also supports common protocols / standards like SOAP, REST, XML and PHP.
Common use cases of Windows Azure:
- Build applications to the web with minimal on premises resources.
- Perform services off premises. A good example is batch processing and large volume computations.
- Add web services capabilities to existing package applications.
Cloud Foundry is an open source PaaS environment from Vmware. Cloud Foundry is built to support multiple development frameworks and automates deployment of applications and their infrastructure across multiple cloud infrastructures. Cloud Foundry comes in three models:
- Cloudfoundry.com – This model provides developers with a multi-tenant PaaS environment to deploy and scale applications. It supports a number of programming languages (Java, Ruby) and frameworks (spring) and provides additional standalone services such as databases (mongo DB, MySql) and messaging services.
- Micro Cloud Foundry (MCF) – This model provides a downloadable instance of Cloud Foundry contained within a virtual machine on the developer’s desktop. The advantage of this model is that it first allows you to test your application in your own environment and then once it is stable deploy it onto foundry based private or public clouds.
- Cloudfoundry.org – This model is an open source model under the Apache 2 license. It allows developers to inspect and modify cloud foundry source code based on their needs while minimizing the risks of lock-in.
It is important to mention that I didn’t cover all PaaS vendors. The PaaS market offers additional players. This includes AppFog, Apprenda, Cloudify, Stackato, NetSuite and more.
What should you look for when seeking a PaaS provider?
- Flexibility – You must make sure that your PaaS provider does not offer a lock-in environment. By that I mean that your PaaS provider will allow you to deploy your code on any cloud environment with minimal integration issues.
- Technology – You must make sure that your PaaS provider supports your technology in terms of programming languages and additional services such as databases.
- Cost – You need to decide whether you will be using the services of an open source PaaS environment or a license based environment.
- IaaS – You need to know where the physical data center(s) of your PaaS provider exists. It is important to verify that your PaaS provider offers a multi-tenant environment, 99% up time, 24 X 7 support, backup services and revision history of files.
- Security – You need to make sure that your PaaS vendor provides high security standards in every level. This includes the physical level, the sign-in and log-in levels, the API level, the data level and the network level (for more information see my previous posts on cloud computing and security). In addition, if your application requires it, you need to make sure that your PaaS vendor has the right certifications. This includes SAS 70 type II and HIPAA for healthcare applications. | <urn:uuid:4f5e9f02-b36c-4b26-b5f4-2a4906e17d64> | CC-MAIN-2017-04 | https://www.clicksoftware.com/blog/platform-as-a-service/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00396-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924409 | 1,435 | 2.65625 | 3 |
If you've been reading this blog for a while, you've probably seen us refer to the Arduino microcontroller on a number of occasions. This little circuit board is at the heart of many DIY projects, from robotics to art projects and just about everything in between.
But what on Earth is Arduino, anyway? What makes it so versatile? And what can you do with it?
Limor Fried built her business on DIY tools and technologies. Limor, better known to the Internet as Ladyada, is a cofounder of Adafruit Industries, a site that sells kits, tools, and accessories geared toward the DIY set. Ladyada was gracious enough to take a few minutes to explain a little about what makes Arduino boards so cool--and useful--to anyone who's ever wanted to build or hack their own devices.
Arduino is described by its makers as "an open-source electronics prototyping platform based on flexible, easy-to-use hardware and software", whatever that means. In short, it is a popular open-source electronic board that is capable of controlling just about any DIY hardware project. And there's a lot you can do with it.
As Ladyada explains:
"The 'what is Arduino?' is still a little vague, and that's the Arduino's strength. It's the glue people use to connect tasks together.The best way to describe an Arduino is with a few examples. Want to have a coffee pot tweet when the coffee is ready? Arduino. Want to have a Professor X Steampunk wheelchair that speaks and dispenses booze? Arduino. Want to make a set of quiz buzzers for an event out of Staples' Easy Buttons? Arduino.
"Arduino was mostly designed by artists for artists and designers...I think it's been the most important product/project in the world of educational electronics."
Arduino is sold under a Creative Commons Share-Alike (CC-SA) license, so you can make changes to the original Arduino board or how it's programmed and release it to the public, so long as you release it under the same CC-SA license. As you might expect, this has resulted in plenty of variations of the original Arduino board. Ladyada points to the Gameduino board, which is made with DIY gaming in mind. And Teagueduino is essentially an Arduino board put together in a kit to help people learn how to program it.
Seeing as Ladyada's job essentially allows her to play with Arduino boards and create her own projects, we asked her for an example of her best and worst creations:
"Luckily I think I can answer this using the same project! The best/worst Arduino creation was an open-source Homeland Security non-lethal weapon project: "THE BEDAZZLER: A Do-it-yourself Handheld LED-Incapacitator".
"After attending a conference where the $1 million "sea-sick flashlight" (named "THE DAZZLER") was demonstrated by the US Department of Homeland Security, we decided to create our own version using an Arduino. For under $250, you can build your own dazzler and we've released the source code, schematics and PCB files to make it easy. A great Arduino project for people who really like blinking LEDs. We also added in a mode selection so you can put it into some pretty color-swirl modes, great for raves and parties!"
Making It Work
Before you start messing with an Arduino board, you will need to learn how to program one. Luckily, there are a number of resources that can help you get started.
Ladyada has a few suggestions:
"A lot of people use my free tutorials on my personal website. "I have six lessons all together and many people over the years have said their is where they started their journey. There are also some great books, two of my favorites are: Getting Started with Arduino By Massimo Banzi and Practical Arduino by Jon Oxer & Hugh Blemings. The free open-source Arduino IDE (how you program Arduinos) also has tons of code examples and libraries.
"Next up, there are Hackerspaces for in-person learning and workshops and lastly (but not least) the amazing Arduino online community. You can visit the arduino.cc forums or Adafruit forums and see thousands of people helping each other and sharing code! It's a wonderful community and very inclusive to beginners!"
For some additional inspiration on what you could make with your Arduino kit, follow geeky technology blogs (such as this one!) that showcase the finest hacks, or check out Freeduino for a listing of handy tips when you're fine-tuning the microcontroller. Make's blog and Instructables also have good Arduino sections. If you idea or project could be beneficial to other people, drum up some support on Kickstarter.
So if you are looking for an affordable way to start programming cool robotics and other projects, and Arduino is a great--and fun--place to start. So what are you going to create with Arduino? Don't forget to tip us off when you're done with your next awesome creation!
Like this? You might also enjoy...
- Gigapixel Hack Makes Photos More Interactive
- DIY Ambient Lighting Reduces Eye Strain, Looks Awesome
- Teagueduino Teaches You How To Bring Your Ideas To Life
This story, "Geek 101: What is Arduino?" was originally published by PCWorld. | <urn:uuid:f46c55e5-9390-4b1b-89ba-f6435c503bee> | CC-MAIN-2017-04 | http://www.itworld.com/article/2736953/hardware/geek-101--what-is-arduino-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00120-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951821 | 1,132 | 2.90625 | 3 |
| ||Tea Ceremony|
The Art And Essence Of Japanese Tea
Japanese tea ceremony is an art form.
Learning Japanese tea ceremony is a way to cultivate onself. Learning how to make the tea is not that difficult. But to understand the true meaning of the ceremony as well as the tea tools is difficult. It requires long term learning.
Before drinking the tea it is necessary to eat the sweet as the sweet helps to counter the bitterness of the tea. One must be able to appreciate the four elements of tea ceremony: harmony, respect, purity and loneliness.
Harmony refers to the spirit of being harmonious, to be peaceful. Respect is to show respect.
Purity is the state of purtiy of the mind. And loneliness means quiet serenity. Drinking tea enables oneself to have spiritual release.
When everyone sits on the tatmi, drinking tea , admiring the calligraphy and the beautiful music, one feels that he is leaving the hustle and bustle of city life and drifting into utopia.
Drinking tea can help in longevity. tea has many elements that helps to maintain one's health and vitality. Whisking tea powder in the bowl is also a form of exercise.
Japanese tea came from China during the 13th century when a prominent monk went to China to meditate. He brought it back to Japan from China.
A tea ceremony is performed with up to five guests. The ceremony could be divided into three parts, the preliminary part, the middle part, and the final part.
In the first part, the windows are curtained off by bamboo screens to darken the room, the scroll is removed, and a new one is put in its place.
In the middle part of the ceremony a very simple meal is served, followed by sweet cakes, after which the guests could go and relax in the inner garden.
The final part of the tea ceremony is called
nochiseki. The scroll in the alcove is replaced by a
floral arrangement, and the water jar, tea caddy and
the tea utensils will be placed in the area where the
ceremony will take place. The atmosphere of the room
is changed to a bright room. The host picks up the
ladle, a signal for his/her assistant to roll up the
bamboo screen, brightening the room once again. The
host performs the ceremony in silence, while the
guests concentrate on his movement. This is the climax
of the ceremony. The main guest will then speak to the
host while the other guests remain silent. Once the
tea has been drunk, silence continues. The fire is
smothered by adding more charcoal to the fire pit and
the sound of the boiling kettle dies down. Then, thin
tea is served, which signifies that the tea ceremony
is coming to an end.
| || ||
Read about the:
Home | Research | Dictionary | Galleries | About Us | Help | <urn:uuid:5f06393c-6494-47c7-aa68-ddd14fc109e4> | CC-MAIN-2017-04 | http://www.easterntea.com/ceremony/tea_art.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00056-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93719 | 636 | 2.875 | 3 |
Silliman B.R.,Duke University |
Mozdzer T.,Bryn Mawr College |
Angelini C.,University of Florida |
Brundage J.E.,University of Maryland University College |
And 7 more authors.
PeerJ | Year: 2014
Invasive species threaten biodiversity and incur costs exceeding billions of US$. Eradication efforts, however, are nearly always unsuccessful. Throughoutmuch of North America, land managers have used expensive, and ultimately ineffective, techniques to combat invasive Phragmites australis in marshes. Here, we reveal that Phragmites may potentially be controlled by employing an affordable measure from its native European range: livestock grazing. Experimental field tests demonstrate that rotational goat grazing (where goats have no choice but to graze Phragmites) can reduce Phragmites cover from 100 to 20% and that cows and horses also readily consume this plant. These results, combined with the fact that Europeans have suppressed Phragmites through seasonal livestock grazing for 6,000 years, suggest Phragmites management can shift to include more economical and effective top-down control strategies. More generally, these findings support an emerging paradigm shift in conservation from high-cost eradication to economically sustainable control of dominant invasive species. © 2014 Silliman et al. Source
Veenklaas R.M.,University of Groningen |
Veenklaas R.M.,Bosgroep Noord Oost Nederland Forest Support Group |
Koppenaal E.C.,University of Groningen |
Bakker J.P.,University of Groningen |
And 2 more authors.
Journal of Coastal Conservation | Year: 2015
Salt marshes provide an important and unique habitat for plants and animals. To restore salt marshes, numerous coastal realignment projects have been carried out, but restored marshes often show persistent ecological differences from natural marshes. We evaluate the effects of elevation and marsh topography, which are in turn affected by drainage and livestock grazing, on soil salinity after de-embankment. Salinity in the topsoil was monitored during the first 10 years after de-embankment and compared with salinity in an adjacent reference marsh. Additionally, salinity at greater depths (down to 1.2 m below the marsh surface) was monitored during the first 4 years by measuring the electrical conductivity of the groundwater. Chloride concentration in the top soil strongly decreased with increasing elevation; however, it was not affected by marsh topography, i.e. distance to creek or breach. Chloride concentrations higher than 2 g Cl−/litre were found at elevations below 0.6 m + MHT. Salinization of the groundwater, however, took several years. At low marsh elevations, the salinity of the deep groundwater (at 1.2 m depth) increased slowly throughout the full 4-year period of monitoring but did not reach the level of seawater. Compared to the ungrazed treatment, the grazed treatment led to lower accretion rates, lower soil-moisture content and higher chloride content of soil moisture. The de-embankment of the agricultural grasslands resulted in a rapid increase of soil salinity, although deeper ground-water levels showed a much slower response. Elevation accounted for most of the variation in the salinization of the soil. Grazing may enhance salinity of the top soil. © 2015 The Author(s) Source
Chang E.R.,University of Groningen |
Veeneklaas R.M.,University of Groningen |
Veeneklaas R.M.,Bosgroep Noord Oost Nederland Forest Support Group |
Bakker J.P.,University of Groningen |
And 3 more authors.
Applied Vegetation Science | Year: 2016
Questions: How successful was the restoration of a salt marsh at a former summer polder on the mainland coast of the Dutch Wadden Sea 10 yr after de-embankment? What were the most important factors determining the level of restoration success? Location: Noard-Fryslân Bûtendyks, northwest Netherlands. Methods: The frequencies of target plant species were recorded before de-embankment and monitored thereafter (1, 2, 3, 4, 6 and 10 yr later) using permanent transects. Vegetation change was monitored using repeated mapping 14 yr before and 1, 7 and 10 yr after de-embankment. A large-scale factorial experiment with 72 sampling plots was set up to determine the effects of distance to a breach point, distance to a creek and grazing treatment on species composition. Abiotic data were also collected from the permanent transects and sampling plots on elevation, soil salinity and redox potential. Results: Ten years after de-embankment, permanent transect data showed that 78% to 96% of the target species were found at the restoration site. Vegetation mapping, however, showed that the diversity of salt marsh communities was low, with 50% of the site covered by the secondary pioneer marsh community. A multivariate analogue of ANOVA indicated that the most important experimental factor determining species composition was the interaction between distance to the nearest creek and livestock grazing. The combination of proximity to a creek and exclusion from livestock grazing always resulted in development of the high marsh community. In contrast, the combination of being located far from a creek, grazed and situated at low elevation with accompanying high salinity resulted in development of the secondary pioneer marsh community. Conclusions: Using target species as criteria, restoration success could be claimed 10 yr after de-embankment. However, the diversity of communities in the salt marsh was lower than desired. Variable grazing regimes should be applied to high-elevation areas to prevent dominance by single species of tall grasses and to promote formation of vegetation mosaics. Low-elevation areas need lower grazing pressure. Also, an adequate soil drainage network should be preserved or constructed in low-elevation areas before de-embankment. © 2016 International Association for Vegetation Science. Source
Bos D.,Altenburg and Wymenga Ecological Consultants |
Bos D.,University of Groningen |
Boersma S.,Fryske Feriening foar Fjildbiology FFF |
Engelmoer M.,Fryske Feriening foar Fjildbiology FFF Op Dijksman |
And 5 more authors.
Journal of Coastal Conservation | Year: 2014
In this study we evaluate the effect of coastal re-alignment on the utilisation of coastal grasslands by staging geese. We assessed vegetation change and utilisation by geese using repeated mapping and regular dropping counts in both the restored marsh and adjacent reference sites. All measurements were started well before the actual re-alignment. In addition, we studied the effects of livestock grazing on vegetation and geese, using exclosures. The vegetation transformed from fresh grassland into salt-marsh vegetation. A relatively large proportion of the de-embanked area became covered with secondary pioneer vegetation, and the overall cover of potential food plants for geese declined. Goose utilisation had initially dropped to low levels, both in autumn and in spring, but it recovered to a level comparable to the reference marsh after ten years. Exclosure experiments revealed that livestock grazing prevented the establishment of closed swards of grass in the poorly drained lower area of the restored marsh, and thereby negatively affected goose utilisation of these areas during spring staging. Goose grazing in the restored marsh during spring showed a positive numerical response to grass cover found during the preceding growing season. (1) The value of restored salt marsh as foraging habitat for geese initially decreased after managed re-alignment but recovered after ten years. (2) Our findings support the idea that the value of foraging habitats depends largely on the cover of forage plants and that this can be manipulated by adjusting both grazing and drainage. © 2014 Springer Science+Business Media Dordrecht. Source | <urn:uuid:acd26a9f-407a-417b-ba33-16f855a0f7ca> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/puccimar-ecological-research-and-consultancy-580511/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00204-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92574 | 1,672 | 2.796875 | 3 |
New ways to use water for data center temperature monitoring
Monday, Apr 8th 2013
One San Francisco Bay Area robotics company has developed a new ingenious data center temperature monitoring technique: Stick the entire server in water. A California manufacturer recently release a floating robot that functions as a remote data center facility.
The device, dubbed the Wave Glider, looks like a lost surfboard floating aimlessly at sea. The board is equipped with solar photovoltaic panels in addition to numerous cameras, sensors and computing power. The on-board systems are powerful enough to store and analyze large amounts of data remotely and then beam back its conclusion to land-based networks via satellite, CNET reported.
Although a small, floating and self-contained data center with limited processing power may seem like a fruitless invention, The New York Times reported that it offers a few distinct advantages. For one, the system is designed to function with little or no oversight from IT staff, which can be a boon for data center operators. In addition, the robot is made to withstand hurricanes, thus making the latest Wave Glider a potentially ideal disaster recovery and business continuity option.
"Networking is so different in the real world," John Gage, the chief scientist behind the project, said. "Bandwidth can be wide open or a soda straw. If something goes off in a data center, you assume it's dead, or a human can come and fix it; that is not true here, where things can be working but away from the network. It's a fascinating problem in ambiguity."
Another major benefit of this design is that it means operators do not need to spend exorbitant sums on air conditioning units to maintain a static data center temperature. Since surface ocean temperatures near the United States coast remain relatively stable and are within safe server operational ranges, the need to use a traditional temperature sensor is diminished. For example, surface sea temperatures in Alameda, California - which is approximately 40 miles away from the robotics manufacturer's headquarters - peak at about 66 degrees Fahrenheit, according to the National Oceanographic Data Center. Considering that industry standards dictate that the server room temperature can be as hot as 80 F, the likelihood of the Wave Glider's onboard system overheating is slim.
Simpler ways to use water for data center temperature monitoring
While the Wave Glider may be one of most unique ways facilities operators use water for data center temperature monitoring, it is far from the only example. As energy costs and usage rises in the industry, IT professionals are increasingly looking to HVAC alternatives to keep computing infrastructure cool. As a result, water-based temperature monitoring solutions are becoming far more common in the industry.
For example, a data center in Stockholm, Sweden, pumps in sea water to keep servers cool. According to Data Center Knowledge, this system reduced the facility's electricity usage by 80 percent and helped make the facility one of the most energy efficient data centers in Europe.
Additionally, some of Google's data center facilities use similar temperature monitoring techniques. Its operations in Finland utilize sea water as well, while its data center in Belgium pumps in canal water to keep servers at ideal temperatures. At the technology company's operations in Douglas County, Georgia, municipal waste water is used to cool the equipment.
"Evaporation is a powerful tool," Google said on its website. "In our bodies, it helps us maintain our temperature even when outside temperatures are warmer than we are. It also works similarly in our cooling towers. As hot water from the data center flows down the towers through a material that speeds evaporation, some of the water turns to vapor. A fan lifts this vapor, removing the excess heat in the process, and the tower sends the cooled water back into the data center."
While these systems help reduce energy costs, operators still need to use data center temperature monitoring equipment to avoid unplanned downtime and ensure that servers are always operational. | <urn:uuid:0600b35b-7350-496a-b9bd-8c6e2e87c9d8> | CC-MAIN-2017-04 | http://www.itwatchdogs.com/environmental-monitoring-news/data-center/new-ways-to-use-water-for-data-center-temperature-monitoring-418334 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00322-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928449 | 796 | 2.796875 | 3 |
When all you have is a hammer, they say, everything looks like a nail. Instead, we give you a look at several language-hammers, so you can make a reasonable decision about when each is the best tool for the job.
Deciding when to use any language—including Ruby—depends on the appropriateness to task and the amount of yak shaving necessary. Zed Shaw explains when Ruby's MRI or JRuby is the best language for the job, and when it really isn't.
Python is a powerful, easy-to-use scripting language suitable for use in the enterprise, although it is not right for absolutely every use. Python expert Martin Aspeli identifies when Python is the right choice, and when another language might be a better option.
PHP may be the most popular Web scripting language in the world. But despite a large collection of nails, not every tool is a hammer. So when should it be used, and when would another dynamic programming language be a better choice? We identify its strengths and weaknesses.
Zend's John Coggeshall responds to CIO.com's earlier PHP article with his own list of the Good, the Bad and the Ugly of PHP application development.
Every programming language has its strengths...and its weaknesses. We identify five tasks for which perl is ideally suited, and four that...well, really, shouldn't you choose something else? | <urn:uuid:1fa8b17c-a04a-4692-b5ea-40286820167d> | CC-MAIN-2017-04 | http://www.cio.com/article/2437007/developer/you-used-that-programming-language-to-write-what--.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00223-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941164 | 289 | 2.59375 | 3 |
What could emergency responders do with a little more spectrum, a lot more money and a healthy assist from modern technology?
To begin with, they could set up effortless communication between local emergency responders and out-of-towners who are called in for a major disaster but whose equipment works on a different frequency, said Chris Essid, director of the Homeland Security Department's Office of Emergency Communications.
A large block of communication spectrum could also enable standardized nationwide systems and protocols so responders' tools would look the same from one county to the next and radios from one county could simply roam onto the next county's network, according to Essid, who was speaking at a Wilson Center for International Scholars event focused on how private sector technologies could aid emergency responders.
More spectrum and standardized equipment would also allow police and firefighters to better utilize new technologies, panelists at the event said, such as sensors that tell firefighters which floors of a building are on fire and what chemicals are present there; heat sensors that tell police where a suspect is hiding in a darkened house and space-age band aids that take an injured person's vital signs and match them with a photo and name so family members are able not only to track the victim to a particular hospital or shelter but also to find out how they're doing.
New technology could even push mapping data about nearby fire hydrants and building layouts directly to screens on firefighters masks, panelists said.
There's been a push in Congress since soon after communication difficulties hampered the 9/11 response to reserve a portion of spectrum for a national public safety broadband network. So far, though, those efforts have been unsuccessful. Currently law requires that portion of spectrum, known as Block D to be auctioned off to commercial bidders. | <urn:uuid:a9448077-7ca1-4ad0-b204-47dc4bc3b1a8> | CC-MAIN-2017-04 | http://www.nextgov.com/technology-news/tech-insider/2011/11/for-a-few-mhz-more-spectrum-and-emergency-responders/54956/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00341-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.96909 | 355 | 2.90625 | 3 |
About the TLS Extension Server Name Indication (SNI)
When website administrators and IT personnel are restricted to use a single SSL Certificate per socket (combination of IP Address and socket) it can cost a lot of money. This restriction causes them to buy multiple IP addresses for regular https websites from their domain host or buy hardware that allows them to utilize multiple network adapters.
However, with Apache v2.2.12 and OpenSSL v0.9.8j and later you can use a transport layer security (TLS) called SNI. SNI can secure multiple Apache sites using a single SSL Certificate and use multiple SSL Certificates to secure various websites on a single domain (e.g. www.yourdomain.com, site2.yourdomain.com) or across multiple domains (www.domain1.com, www.domain2.com)—all from a single IP address. The benefits of using SNI are obvious—you can secure more websites without purchasing more IP addresses or additional hardware.
Since this is a fairly recent update with Apache, browsers are only recently supporting SNI. Most current major desktop and mobile browsers support SNI. One notable exception is that no versions of Internet Explorer on Windows XP support SNI. For more information on which browsers support SNI, please see SNI browser support.
To use SNI on Apache, please make sure you complete the instructions on the Apache SSL installation page. Then continue with the steps on this page.
Setting up SNI with Apache
To use additional SSL Certificates on your server you need to create another Virtual Host. As a best practice, we recommend making a backup of your existing .conf file before proceeding. You can create a new Virtual Host in your existing .conf file or you can create a new .conf file for the new Virtual Host. If you create a new .conf file, add the following line to your existing .conf file:
Next, in the NameVirtualHost directive list your server's public IP address, *:443, or other port you're using for SSL (see example below).
Then point the SSLCertificateFile, SSLCertificateKeyFile, and SSLCertificateChainFile to the locations of the certificate files for each website as shown below:
NameVirtualHost *:443 <VirtualHost *:443> ServerName www.yoursite.com DocumentRoot /var/www/site SSLEngine on SSLCertificateFile /path/to/www_yoursite_com.crt SSLCertificateKeyFile /path/to/www_yoursite_com.key SSLCertificateChainFile /path/to/DigiCertCA.crt </VirtualHost> <VirtualHost *:443> ServerName www.yoursite2.com DocumentRoot /var/www/site2 SSLEngine on SSLCertificateFile /path/to/www_yoursite2_com.crt SSLCertificateKeyFile /path/to/www_yoursite2_com.key SSLCertificateChainFile /path/to/DigiCertCA.crt </VirtualHost>
If you have a Wildcard or Multi-Domain SSL Certificate all of the websites using the same certificate need to reference the same IP address in the VirtualHost IP address:443 section like in the example below:
<VirtualHost 192.168.1.1:443> ServerName www.domain.com DocumentRoot /var/www/ SSLEngine on SSLCertificateFile /path/to/your_domain_name.crt SSLCertificateKeyFile /path/to/your_private.key SSLCertificateChainFile /path/to/DigiCertCA.crt </VirtualHost>
<VirtualHost 192.168.1.1:443> ServerName site2.domain.com DocumentRoot /var/www/site2 SSLEngine on SSLCertificateFile /path/to/your_domain_name.crt SSLCertificateKeyFile /path/to/your_private.key SSLCertificateChainFile /path/to/DigiCertCA.crt </VirtualHost>
Now restart Apache and access the https site from a browser that supports SNI. If you set it up correctly, you will access the site without any warnings or problems. You can add as many websites or SSL Certificates as you need using the above process. | <urn:uuid:55b4a279-13c0-429b-be0e-1ff2f48a3946> | CC-MAIN-2017-04 | https://www.digicert.com/ssl-support/apache-multiple-ssl-certificates-using-sni.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00067-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.738231 | 940 | 2.75 | 3 |
Database Design Directions
February 23, 2012
We came across a quite useful checklist every database architect should keep on hand. Java Code Geeks give us “20 Database Design Best Practices.” The list covers everything from the commonsense:
“Use well defined and consistent names for tables and columns (e.g. School, StudentCourse, CourseID …).”
To the more advanced:
“Normalization must be used as required, to optimize the performance. Under-normalization will cause excessive repetition of data, over-normalization will cause excessive joins across too many tables. Both of them will get worse performance.”
With a little strong opinion mixed in:
“Lack of database documentation is evil.”
If you design (or oversee those who design) databases, do yourself a favor and check it out.
Most people think of search as providing access to unstructured information. Examples of unstructured information include email, Word documents, and Excel. Our extensive work in enterprise search has spanned structured data; that is, information in a database.
Search Technologies can handle difficult content acquisition tasks when needed information is held within Microsoft SQL Server, IBM DB2, Oracle, or a similar data management system. In addition, Search Technologies can set up automated processes to handle extraction, transformation, and loading of data or subsets of data.
For more information about our capabilities to make structured and unstructured data more findable, navigate to www.searchtechnologies.com.
Iain Fletcher, February 23, 2012
Sponsored by Pandia.com | <urn:uuid:4967112f-e6cd-4a8c-96e1-7de1d069b81a> | CC-MAIN-2017-04 | http://arnoldit.com/wordpress/2012/02/23/database-design-directions/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00369-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.890267 | 332 | 2.75 | 3 |
California Natural Resources Agency Secretary Mike Chrisman and State Chief Information Officer Teri Takai yesterday announced the launch of a new Cal-Atlas Geospatial Clearinghouse Web site to help government agencies better coordinate their geospatial efforts and allow public access to geospatial data. The innovative approach to technology will allow the general public to access maps, data and information that has not previously been accessible on a single site or from a single source. Geospatial data is information based on geographic locations or characteristics.
The new Web site will centralize a variety of data and information. Cal-Atlas provides a number of important Web accessible services. These include:
Digital maps linked to related information via GIS technologies provide unique capabilities that have gained public awareness. Some of those more commonly known tools include ArcGIS Explorer, Google Earth and Maps, Microsoft Visual Earth, Yahoo Maps and NASA World Wind to name a few. The maps and information available on Cal-Atlas will help users answer important questions related to where to go in cases of an emergency, where a new road might be routed, where are the best places for different activities and recreation.
Using maps to see where things are in relation to each other is also a key to being able to plan and deliver more effective public services. Cal-Atlas is an effort to make sure that agencies have accurate, complete and up-to-date GIS data. It will also help organizations to coordinate their activities, avoid duplication of effort and ensure that they make the most of their data investments. More maps or links to maps will be offered as state agencies roll out their own interactive map based Web sites.
Governor Schwarzenegger last year called for the creation of a GIS task force to develop a statewide strategy to enhance the technology for environmental protection, natural resource management, traffic flow, emergency preparedness and response, land use planning and health and human services. The task force issued a report to the governor recommending, among other things, that the state should have a single office to oversee and coordinate its use of this technology.
Late last year, state CIO Teri Takai told Government Technology she wanted to create a chief geographic officer position. | <urn:uuid:98d5ef9c-1d04-4ab9-b8ab-2c9010f244d5> | CC-MAIN-2017-04 | http://www.govtech.com/policy-management/Geospatial-Coordination-Web-Site-Launched-by.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00369-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924219 | 445 | 2.625 | 3 |
Breaking from the cocoon of the iPhone 5S, 64-bit ARM processors will start delivering breakthrough performance in servers, aided by graphics cards used in some of the world's fastest computers.
The first high-performance servers with ARM 64-bit processors have been announced with Nvidia's Tesla graphics cards, which is also in the U.S. Department of Energy's Titan, the world's second fastest supercomputer. The servers from Cirrascale, E4 Computer and Eurotech were announced at the International Supercomputing Conference in Leipzig, Germany.
The first 64-bit ARM processor was used by Apple in the iPhone 5S, which was introduced last year. No 64-bit ARM products have been announced since, but there is a growing interest in low-power ARM servers to process lightweight tasks such as responding to search and social networking requests.
Hewlett-Packard, Dell and others have plans to ship 64-bit ARM servers that could help cut electric bills in data centers. ARM processors alone cannot deliver the horsepower necessary for complex scientific and math calculations, while GPUs can speed up such tasks.
There is an interest in ARM processors combined with GPUs in research areas like protein folding, drug discovery and atomic simulations, said Ian Buck, vice president of accelerated computing at Nvidia.
"They now have an alternative than x86 servers, they can validate more choice," Buck said.
Most supercomputers today use processors from Intel or Advanced Micro Devices based on the x86 microarchitecture, which is also in PCs. But based on historic trends, researchers have argued that smartphone chips could ultimately replace the more expensive and power-hungry x86 server processors in supercomputers.
The first ARM servers from Cirrascale and E4 Engineering will ship later this year. The Cirrascale RM1905D and the E4 EK003 have eight-core AppliedMicro X-Gene processors and Tesla K20 GPUs, with support for up to DDR3 memory, 10-gigabit Ethernet and PCI-Express 3.0. The systems come with 400 watt power supplies. The Cirrascale 1U server is for cloud and high-performance applications, while the E4 Engineering 3U server is for Web computing, analytics, video rendering and science applications. Configuration details for Eurotech's ARM server was not available, but it will have liquid cooling, according to Nvidia.
For Nvidia, ARM servers represent a big opportunity to sell its graphics cards. The Tesla GPUs are already compatible with x86 chips and IBM's Power processors, and support for ARM-based chips is the next logical step, Buck said.
"There are lot of interest in ARM64," Buck said.
AppliedMicro's X-Gene chip is based on its proprietary chip design, and has server features such as error correction and RAS (reliability, availability and serviceability), which are typically not available in ARM mobile chips. The X-Gene also has I/O, networking and signal-processing components.
But ARM servers lack software support as most server applications are written for x86 chips. But Hadoop, OpenStack and the LAMP stack already support ARM, and native Java support is coming in 2015.
Nvidia at ISC is also announcing CUDA 6.5, a set of proprietary parallel programming tools that can harness the joint computing power of CPUs and GPUs. CUDA 6.5 adds support for ARM processors.
Beyond AppliedMicro, Nvidia is keeping a close watch on other ARM-based server chip makers such as Broadcom and Cavium, Buck said. He hinted that GPU support could also come for those chips.
Nvidia faces competition from AMD, which is developing an ARM server processor and sells graphics processors such as FirePro for the supercomputing space. Nvidia lacks a CPU specifically for servers and is not developing one, Buck said. | <urn:uuid:1cb26725-26e5-4e90-82fb-d5a816984477> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2491056/computer-hardware/arm-64-bit-chips-move-beyond-iphone-into-servers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00185-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947677 | 794 | 2.515625 | 3 |
Many SQL-injection techniques rely on tautologies: adding an expression that is always true to the where-clause of a select statement. Like OR 1=1. 1=1 is a tautology, it’s an expression that always yields true.
So if SELECT * FROM USERS WHERE USERNAME = ‘ADMIN’ and PASSWORD = ‘UNKNOWN’ doesn’t select any rows because the password is not correct, injecting ‘ OR 1=1 — gives SQL statement SELECT * FROM USERS WHERE USERNAME = ‘ADMIN’ and PASSWORD = ” OR 1=1 –‘ which will return all rows, because the where-clause is always true (OR 1=1).
There are several security applications (WAFs, SQL firewalls, …) designed to monitor the stream of SQL statements and reject statements with tautologies, i.e. the result of a SQL-injection. Some are very simple and just try to match pattern 1=1. Bypassing them is easy: 1>0 is also a tautology. Others are more sophisticated and try to find constant expressions in the where-clause. Constant expressions are expressions with operators, functions and constants, but without variables. If a constant expression is detected that always evaluates to true, the firewall assumes it’s the result of a SQL-injection and blocks the query.
This is all classic SQL-injection, but now comes the interesting part.
What if I use an expression that is not a tautology in it’s mathematical sense, but is almost one… Say I use expression RAND() > 0.01 ? The RAND function is a random number generator and returns a floating point value in the range [0.0, 1.0[. Expression RAND() > 0.01 is not a tautology, it’s not always true, but it is true about 99% percent of the time. I call this a quasi-tautology.
A firewall looking for tautologies will not detect this, because it is not a tautology. But when you use it in a SQL-injection, you stand a 99% chance of being succesful (provided the application is vulnerable to SQL-injection)!
There are other functions than RAND to create quasi-tautologies. An expression comparing the seconds of the current system time with 59 is also a quasi-tautology.
The GreenSQL firewall will detect SQL statements with quasi-tautologies, not because it looks for them, but because it builds a whitelist in training mode. | <urn:uuid:bb561695-9c07-4e83-ab8e-e6b3695c7c8c> | CC-MAIN-2017-04 | https://blog.didierstevens.com/2010/02/02/quickpost-quasi-tautologies-sql-injection/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00425-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.86553 | 557 | 2.65625 | 3 |
NOAA puts 170 years of hurricane history into one interactive site
Hurricanes are never good news, but they do make history. The National Oceanic and Atmospheric Administration has put a lot of that history in one place, with its Historical Hurricane Tracks website, which puts more than 170 years of global hurricane data into an interactive map.
The site serves up data on global hurricanes as they made landfall going back to 1842, long before hurricanes were given names, and provides links to information on tropical cyclones in the United States since 1958, and other U.S. storms dating back to 1851. The most recent addition to the site provides details on last year’s Hurricane Sandy.
Visitors to the site can search by location, storm name or ocean basin and select the search area (by nautical miles, statute miles or kilometers). Selecting Miami, for example, will display a map on south Florida criss-crossed by the tracks of many a hurricane.
Hover the cursor over any of the tracks, which are color-coded to indicate their strength on the Saffir-Simpson Hurricane Wind Scale and how their strength changes during their course, and a table on the left will show the name of that storm, if it has one. Clicking on the track or the name in the table will isolate that storm, so the track appears alone on the map, with information in the table showing the wind speed and air pressure when it hit land.
So weather fans can follow the track of 1982’s Andrew through south Florida, 2005’s Katrina when it hit New Orleans or the unnamed marauder that swept through Galveston, Texas, in 1900 and which is still the deadliest hurricane in U.S. history.
Users can zoom in or out of the maps, select views by county and click links to details on a storm as well as NOAA’s report on that storm. The site also has information on population changes along U.S. coastal counties from 1900 to 2000, indicating the growing number of people and infrastructure at risk from hurricanes.
The site, which was developed by the NOAA Coastal Services Center along with the agency’s National Hurricane Center and National Climatic Data Center, offers a fairly comprehensive and easily customizable tool for checking a hurricane’s history. As hurricane season gets into its busiest months, it’s not a bad time to look back.
Posted by Kevin McCaney on Sep 23, 2013 at 12:25 PM | <urn:uuid:beed6c1d-eca7-40f9-9fcd-51675710aa78> | CC-MAIN-2017-04 | https://gcn.com/blogs/pulse/2013/09/noaa-hurricane-history.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00149-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939712 | 509 | 2.984375 | 3 |
Like many natural and man-made disasters before it, Hurricane Sandy has shown once again the power of social media to keep people informed, to coordinate rescue efforts and ultimately save lives.
There are countless stories already about how Facebook, Twitter and other social media platforms have helped people in need, reassured their relatives that they were safe, engaged neighbors and perfect strangers in mutual help. Some signs started with Hurricane Katrina in the US five years ago, then they became stronger with the bushfires in Victoria Australia three years ago, and even more so with floods in Queensland almost two years ago.
The scale of Sandy and the greater penetration of social media has made the social networking aspect of this major event almost predominant.
However, for how great and deep the social media impact can be, this is yet another proof that social media is nothing else than a tool that many people as well as governments decide to rely upon when something out of the ordinary happens, and normal processes (such as 911 calls and public safety intervention) do not suffice to deal with the scale and severity of the event.
As I wrote a while ago, in other places where the role of social media in managing the emergency was celebrated and even led to awards and recognition, when the water levels dropped and life returned to normal, authorities were left with unanswered questions about how to incorporate all this exciting and important stuff into their strategies and their normal course of business.
The simple answer is that they can’t and they shouldn’t. Social media can serve an important purpose when something extraordinary happens. When we all stop chatting about sport results, or favorite actors, or how to bake, and feel compelled to collect and relay information that can help other people, then it is time for authorities to join the chatter, search for patterns, use this additional and powerful channel.
But when things are back to normal, and we go back to chatting about sports and cakes, making social media an institutional tool for public safety is a tougher call.
Of course social media is an important channel for mass notification, and is an important tool for listening to what people are saying and to uncover patterns. But when it comes to how authorities can really make a difference, it is up to how each commander, officer or firefighter to decide whether and how to use these tools to help people and save lives.
Tactics, more than strategies, make the difference.
Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog. | <urn:uuid:07d3114b-2726-4cc4-a57e-3f7792e0587b> | CC-MAIN-2017-04 | http://blogs.gartner.com/andrea_dimaio/2012/10/31/hurricane-sandy-confirms-the-tactical-nature-of-social-media/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00571-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948698 | 604 | 2.65625 | 3 |
EMC Greenplum President Bill Cook explains how big data is the "engine for creating new economic value."
Big data analytics explores the granular details of business operations and customer interactions that seldom find their way into a data warehouse or standard report.
By 2020, the quantity of electronically stored data will reach 35 trillion gigabytes, a forty-four-fold increase from 2009. According to IDC's 2011 Digital Universe Study sponsored by EMC, we reached 1.2 million petabytes, or 1.2 zettabytes, by the end of 2010. That's enough data to fill a stack of DVDs reaching from the Earth to the moon and backabout 240,000 miles each way.
For alarmists, this is an ominous data storage doomsday forecast. For opportunists, it's an information gold mine whose riches will be increasingly easy to excavate as technology advances.
When data volumes skyrocketed in the early 2000s, storage and CPU technologies were overwhelmed by the numerous terabytes of big data to the point that IT faced a scalability crisis. Then, we were once again snatched from the jaws of defeat by Moore's law. Storage and CPUs not only developed greater capacity, speed and intelligence; they also fell in price. Enterprises went from being unable to afford or manage big data to allocating budgets on the collection and analysis of it.
What used to be a technical problem is now a tremendous business opportunity.
Digital data is ubiquitous; it is in every industry, in every economy, in every organizationmaking big data relevant for leaders across every sector.
Putting those zetabytes of structured and unstructured data (from operational systems, cell phones, Twitter, Facebook and other modern sources) to work can play a role in helping reduce costs, improve customer relationships, develop new products, accelerate and synchronize delivery, ask and answer deeper questions as well as enhance and simplify decision making.
Big data is about putting all that information to work. Access to the many zetabytes of data in the world is meaningless to your business until you start applying next generation analytic tools and methods to generate insights that enable you to do something that helps move your organization forward.
The McKinsey Global Institute suggests that if US Healthcare could use big data creatively and effectively to drive efficiency and quality, the potential value from data in that sector could be more than $300B in value, two thirds of which would be in the form of reducing national health care expenditures by about 8%. In the private sector, McKinsey estimates that a retailer using big data analytics to the full has the potential to increase its operation margin by more than 60%.
Big data allows you to innovate beyond what you can currently comprehend to solve problems and invent products or services that you've never thought of before!
Big data is part of an IT continuum and is a crucial piece of an enterprise information strategy.
Simple reporting, spreadsheets and even fairly sophisticated drill-down analysis have become commonplace expectations of business intelligence. However, there are types of analysis that BI can't produce and that is where big data comes into the equation.
Big data analytics explores the granular details of business operations and customer interactions that seldom find their way into a data warehouse or standard report, because a growing share of this information is unstructured data that can't be warehoused or analyzed in neat columns and rows. Also, this data is constantly in motion so its velocity defies the current RDBMS model.
When you are after things like predictive analysis, natural language processing, machine learning or advanced statistical techniques, even if you want to analyze and mash up unstructured content in the BI mix - you need new technologies and ways of working with data to get at the insight that can bring value to your organization.
By leveraging the power of big data analytics you develop an "information advantage" and can transform from a reactive organization that uses data to understand the lessons of the past, into a predictive, proactive organization that uses the insight contained in big data to anticipate and execute on opportunities of the future.
The possibilities of big data continue to evolve rapidly, driven by innovation in the underlying technologies, platforms, and analytic capabilities for handling data, as well as the evolution of behavior among its users and as more and more individuals live digital lives.
Using big data technologies and analytics will become a key indicator for competition and will likely create new competitors or ways of competing in your industry. As you move forward, think about a couple of things: | <urn:uuid:ca99a738-56f7-47ba-88e7-6202fb33c18c> | CC-MAIN-2017-04 | https://www.emc.com/microsites/cio/articles/greenplum-bill-cook/index.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00415-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941386 | 911 | 2.625 | 3 |
American IT departments’ decisions could inadvertently put organizations at risk of an information security breach if they don’t have sufficient protocols for the disposal of old electronic devices.
Even those with established processes could unwittingly initiate a security leak if they rely on wiping or degaussing hard drives, or handing over their e-waste to an outsourced recycler. Worse yet, some organizations might be stockpiling old technology with no plan at all.
Despite the many public wake-up calls, most American organizations continue to be complacent about securing their electronic media and hard drives. Processes and protocols surrounding the destruction of electronic devices have been slow to adapt to new reality: that businesses large and small are increasingly dependent on digital information.
Congress is hoping to hold businesses accountable for the protection of confidential information with the introduction of the Data Security and Breach Notification Act of 2013, which will require organizations that acquire, maintain, store or utilize personal information to protect and secure this data. However, legislation only goes so far and American organizations of all sizes must be more vigilant to protect themselves from a data breach that could damage their bottom line, with the prospect of losing revenue, reputation or clients.
To mitigate the risk of fraud, businesses should consider the following tips:
Think prevention, not reaction. There is no one-size-fits-all data protection strategy. Develop preventative approaches that are strategic, integrated and long-term, such as eliminating security risks at the source and permanently securing the entire document lifecycle in every part of your organization;
Be security savvy. Put portable policies in place for employees with a laptop, tablet or smartphone to minimize the risk of a security compromise while travelling;
Protect electronic data. Ensure that obsolete electronic records are protected as well. Simply erasing or degaussing a hard drive or photocopier memory does not remove information completely—physically crushing the device is the only way to ensure that data cannot be retrieved;
Create a culture of security. Train all employees on information security best practices to reduce human error. Explain why it’s important, and conduct regular security audits of your office to assess security performance.
“For every desktop computer, printer or mobile device purchased, there should be a secure disposal plan for outgoing technology,” said Michael Collins, Shred-it Regional Vice President. “More often than not, those devices are loaded with sensitive company or customer information that is recoverable if the hard drives aren’t physically destroyed.” | <urn:uuid:1907fc88-7f89-4596-bb31-86385b10134e> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2013/12/10/inadequate-electronic-disposal-protocols-can-lead-to-security-leaks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00323-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.917358 | 511 | 2.59375 | 3 |
Google Chrome launches into space with 'A Spacecraft for All'
Google seems to be on a bit of a space travel kick lately. The search giant recently launched Google Maps for Mars and the Moon. At first, that seemed a bit odd; I mean, other than some NASA nerds, who really cares to view those terrains? Before you raise your hand and say you do, please know I did it extensively as a test, and saw nothing but rocks and craters. Quite frankly, I would sooner explore Dollywood; at least there is something to see.
Sure enough though, Google seems committed to space, as today, the company announces that users of Google Chrome can get involved with ISEE-3. Don't know what that is? I didn't either. Google explains it by saying, "originally launched in 1978 to study the Sun, it was the first spacecraft in the world to fly by a comet and has been orbiting the sun for billions of miles since 1986". Damn, it's been travelling since the last time the Mets won the World Series!
"In a new Chrome Experiment called A Spacecraft for All, you can follow the unlikely odyssey of the ISEE-3 using Chrome’s interactive WebGL graphics and video. You can re-live its story, read its re-activated data instruments, learn about its current position and trajectory -- and explore space along the way. It's all designed to make space science simple, fun and accessible enough for anyone eager to learn -- whether you're a Ph.D. or grade school student", says Suzanne Chambers, Executive Producer & Space Cadet, Creative Lab New York.
Chambers further explains, "the experience will build up to a live event this Sunday, August 10, when the ISEE-3 will fly by the Moon for the first time in decades. We'll document every second with a live lunar flyby demo, and we're inviting the entire world to join in. You can follow the spacecraft’s trajectory real-time, along with interviews with the Reboot team, visits from the original ISEE-3 Flight Director, and live data measurements coming directly from space".
This will be happening at 1:30pm Eastern Time on Sunday. As cool as this sounds, Google has some serious competition on the TV front. Coincidentally, the Mets, who I mentioned previously, will be playing at that time. Plus, Futurama is on Comedy Central, The Golden Girls are on TV Land and Keeping up with the Kardashians is on E! -- decisions, decisions. All joking aside though, ISEE-3 flying near our moon is a rarity and I will definitely check it out. This would make an awesome learning experience for your children, if you have any. You can check it out here.
Do you think 'Spacecraft for All' will be hot or not? Tell me in the comments. | <urn:uuid:58d8a97f-71e3-4628-8fb1-1ebeee07eb0b> | CC-MAIN-2017-04 | http://betanews.com/2014/08/08/google-chrome-launches-into-space-with-a-spacecraft-for-all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00535-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958211 | 596 | 2.546875 | 3 |
IT Management Slideshow: IBM Quiz: Test Your Big Blue Smarts | By Dennis McCafferty | Posted 07-05-2011
When the company was formed in 1911, what was its name?
A. International Business Machines
B. Computing Tabulating and Recording Company
C. Business Computing Machinery Company
D. Recording Machinery and Computing
Answer: B. Computing Tabulating and Recording Company.
This name was the result of a merger of four companies in 1911. In 1924, it switched to IBM, which was the name of a subsidiary.
What was IBM's first product?
A. Meat grinders
B. Cheese slicers
C. Clocks
D. All of the above
Answer: D. All of the above. Punched card machines shortly followed these, as the company evolved into a data capture/analytics giant.
What is the company's iconic slogan?
A. FORWARD
B. BELIEVE
C. BIG BLUE
D. THINK
Answer: D. THINK. IBM icon Thomas Watson Sr. actually came up with the idea of using "Think" as a slogan when he was the head of sales at National Cash Register Co. in Dayton, Ohio.
IBM invented each of the following technologies, except for which one?
A. RAMAC, the first magnetic hard disc drive
B. Floppy disk
C. Personal computer
D. Fortran, the first true high-level programming language
Answer: C. Personal computer.
When the IBM PC was introduced in the early 1980s, it cost $1,565 and included a keyboard and monitor.
IBM researchers have won Nobel Prizes in which of the following categories?
A. Literature
B. Economics
C. Physics
D. Chemistry
Answer: B (Economics) and C (Physics).
IBM researchers have won a total of five Nobel Prizes—one in economics and four in physics. | <urn:uuid:c4eb7db2-b24e-4d39-88d1-fa90eb80cc67> | CC-MAIN-2017-04 | http://www.cioinsight.com/print/c/a/IT-Management/IBM-Quiz-Test-Your-Big-Blue-Smarts-118074 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00167-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920459 | 381 | 2.8125 | 3 |
Even though technology changes every day and the security ecosystem of most enterprises demands constant updates and layers, there are some legacy systems that security newbs should know how to handle simply because they just work.
Greg Hoffer, vice president of engineering at Globalscape, said that even though these systems have been around and deployed long before those who are fresh out of school and entering the security industry were born, the newbs still need to understand and watch these legacy systems.
- FTP Servers – Though decades old, they serve their purpose very well. Legacy systems often exist in deep, dark corners that people don't know about. FTP in and of itself is not secure. There are still a lot of people who move information around the internet without any security, and this creates threat vectors and risks. Many FTP servers are homegrown, and some lie open and unknown.
- Fax machines – An old technology, but they are still very widely used for many business transactions, including health and finance, while being incredibly insecure both digitally and physically. The scariest part is that there has been a transition from dedicated fax machines to voice over IP, so the data itself is effectively flowing over these insecure channels.
- Modems – These are even older than fax machines, but a little younger than FTP servers. Currently, they are probably not as big an issue as they were in the late 80s to early 90s. Often they were installed for one specific purpose: a company leased a line that ran from a bank to an information provider. Sometimes, though, a single 2-in-1 device serves as both fax and modem. Modems can allow a form of access into a computer that is otherwise protected by firewalls and all the other technologies meant to keep intruders out of your network. They remain an attack vector in some companies, where modems sit in a dark corner and nobody knows they are there.
- Industrial/manufacturing control systems – These are SCADA systems (or other similar systems) often found at large industrial or manufacturing plants, monitoring things like the turbines that generate electricity from steam, or nuclear processing plants. In theory they are secured by air gaps: the systems are hardwired, with no connection between the controls and the internet. In reality, though, there have been reports showing that Wi-Fi networks leave some SCADA systems connected and vulnerable.
- Environmental controls – There are older systems that are somewhat comparable to today's IoT. One of Globalscape's customers uses software to manage heating and AC on top of Buckingham Palace. The system was likely installed 15 years ago, but it may have connectivity of some sort. Remote-access mechanisms are used on that system so that staff aren't climbing up to the palace roof; instead, FTP is used to retrieve sensor data. The vulnerabilities of these types of systems are surprisingly similar to the IoT vulnerabilities seen today.
Certainly there are the more modern environmental controls that have gotten some press with the explosion of IoT. These too can't be ignored. Even devices that consumers use in order to have remote access over their homes can pose security risks to the enterprise.
There is a trade-off with end users being able to do amazing things at home remotely. From turning the temperature up and down to managing their HVAC systems and refrigerators, so much more is accessible via the internet. "As soon as you attach any device to wifi, that device becomes the weakest link in the security chain," said Hoffer.
Bad guys try to exploit the vulnerabilities to take advantage of the device, but they can also now move laterally throughout your house. If the device is one also used for work purposes, a malicious actor can exploit a VPN and get into the inner sanctum of corporate headquarters.
These are many of the challenges with technology that we don't think about on a daily basis. "We focus a lot of our attention on the things that are very prominent because people think about credit card transactions and point of sale security," said Hoffer. But securing the enterprise demands that practitioners young and old have an understanding of systems old and new.
This article is published as part of the IDG Contributor Network. Want to Join? | <urn:uuid:04d94e63-bf16-440a-87fb-71a9546ea395> | CC-MAIN-2017-04 | http://www.csoonline.com/article/3095711/internet-of-things/legacy-systems-that-security-newbs-need-to-watch.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00379-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.97321 | 874 | 2.578125 | 3 |
The demise of the mainframe computer has been predicted for nearly 20 years. First, the mini-computer from vendors like Computervision, Data General, DEC, Honeywell, Hewlett Packard, IBM, Prime, and Wang Computer challenged the dominance of the mainframe and expanded computing to midsized enterprises. Advances in x86-based micro-computer technologies all but eliminated the mini-computer market, and now these same devices are threatening the existence of the mainframe.
The technologies where the mainframe dominated are migrating to the smaller systems:
- High availability.
- Shared storage.
- Single point of management.
While the mainframe, in particular the one remaining star of the field, the IBM System z, continues to have advantages over distributed x86-based systems, advances in technology at the “low end” have now reached the point of seriously threatening the mainframe world for all but legacy applications. | <urn:uuid:cf096569-d2b1-429d-8042-db8abc7adaa1> | CC-MAIN-2017-04 | https://www.infotech.com/research/ibm-system-z-growing-versatility-on-a-fading-platform | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00379-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.859877 | 188 | 2.703125 | 3 |
The IT community in Africa is growing rapidly as far as cloud computing is concerned. This is primarily because technology has yet to find a firm foothold on the continent, and if cloud computing can show progress here, it becomes easier for it to gain a foothold elsewhere in the world. Africa is the least computerized continent and also has the lowest telephone-network density. Mobile technology is projected to grow enormously, though, and ironically many of the estimated 138 million mobile users in 2015 will not have electricity.
While cloud computing may not be able to work miracles as far as remedying the many problems that beset the continent, an IT revolution of sorts should be able to not only connect the continent in a better manner but also serve as a testbed for the cloud. It can help foster social change and better community health and welfare. On a larger scale, the learning from this experience could be applied back into other places across the world too.
Read More About Cloud Computing | <urn:uuid:dac786be-049f-4a8f-b89a-fa0f62b08ef9> | CC-MAIN-2017-04 | http://www.datacenterjournal.com/africa-and-cloud-computing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00408-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955949 | 196 | 2.578125 | 3 |
4.1.1 What is key management?
Key management deals with the secure generation, distribution, and storage of keys. Secure methods of key management are extremely important. Once a key is randomly generated, it must remain secret to avoid unfortunate mishaps (such as impersonation). In practice, most attacks on public-key systems will probably be aimed at the key management level, rather than at the cryptographic algorithm itself.
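To make this concrete, the sketch below shows one common way to generate a key pair and keep the private half encrypted at rest. It is an illustration only, not part of the original FAQ; it assumes Python's third-party cryptography package, and the passphrase and file names are placeholders.

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a 2048-bit RSA key pair; the private key must never leave the owner's control.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Serialize the private key encrypted under a passphrase so it is protected at rest.
pem_private = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"change-this-passphrase"),
)

# The public key can be published freely, for example inside a certificate.
pem_public = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

with open("key.pem", "wb") as f:
    f.write(pem_private)
with open("key.pub", "wb") as f:
    f.write(pem_public)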
Users must be able to securely obtain a key pair suited to their efficiency and security needs. There must be a way to look up other people's public keys and to publicize one's own public key. Users must be able to legitimately obtain others' public keys; otherwise, an intruder can either change public keys listed in a directory, or impersonate another user. Certificates are used for this purpose. Certificates must be unforgeable. The issuance of certificates must proceed in a secure way, impervious to attack. In particular, the issuer must authenticate the identity and the public key of an individual before issuing a certificate to that individual.
If someone's private key is lost or compromised, others must be made aware of this, so they will no longer encrypt messages under the invalid public key nor accept messages signed with the invalid private key. Users must be able to store their private keys securely, so no intruder can obtain them, yet the keys must be readily accessible for legitimate use. Keys need to be valid only until a specified expiration date but the expiration date must be chosen properly and publicized in an authenticated channel. | <urn:uuid:2a206f5f-f300-40a3-ac30-f83726784bf7> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-is-key-management.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00040-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.905073 | 334 | 3.3125 | 3 |
There have been a lot of articles written recently about Android SSL problems for applications, which were recently reported by German university researchers. My issue? Nobody is telling the users what to do about the problems.
This reminded me of a discussion I had in which it was stated that Android is to iOS today what the PC was to the Mac.
In the old days, Apple came out with the Macintosh. It had a GUI and Apple provided the hardware, operating system and most of the applications. At the same time, Microsoft had DOS and then Windows. The Microsoft operating systems were designed to work on a PC. The PCs were made by many manufacturers. Microsoft had their own applications, but so did many other vendors.
In the end, the openness of the PC model was better and largely accepted in the market. This also made the PC a target to attack; it was a larger target and the openness dropped the level of security.
In today’s mobile era, Apple introduced the iPhone. Apple provides the hardware, operating system and some applications. It looks like Apple has improved their process as they have many other suppliers also providing applications. Apple has more developer rules, so the applications need to meet certain criteria to be made available through the App Store.
Conversely, Android is an open-source project with Google playing the key role. The hardware that uses Android is made by many suppliers. The applications are wide open to many suppliers and appear to have fewer criteria to be made suitable for use on Android. With this model, it looks very much like the PC model — and Android is steadily gaining market share because of it. This open model allows the applications to do things that they can’t do on the more closed iOS platform.
All this to say, there are some security problems with Android applications. So what did we expect? If the controls are low, how do we expect high quality?
What should users do? These phones can do some cool stuff. You can talk with them, send emails, browse the Web and use thousands of applications. But who says the cool stuff is secure? I’m sure there is much more developer time invested in coolness than security.
Users are advised to be careful when using a mobile device for anything where security needs to be a priority. Users should:
- Use different passwords for each application
- Don’t use applications for secure needs unless they have been reviewed and approved (either corporately or by a trusted security researcher)
Along with the above advice, users should secure their mobile device by doing the following:
- Have the mobile automatically lock and require an unlock passcode
- Review and adjust application privacy settings
- Review location and data sharing permissions
- Be careful what links you click
- Enable remote locking, wiping and tracking of devices
- Do not jailbreak or root your device as a large percentage of malicious applications can only run on these types of devices | <urn:uuid:f4ba1e94-d9ae-4d01-a2df-f1e9533d5cad> | CC-MAIN-2017-04 | https://www.entrust.com/android-ssl-problems/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00526-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.973213 | 593 | 2.8125 | 3 |
Using the FTP/FTPS monitor in Anturis Console, you can set up monitoring of general availability and response time for any remote file server connected to the internet. It also enables you to set up a notification when a certificate for a secure TLS/SSL connection is about to expire. You can send requests either from one of the components in your infrastructure, or use one of the available Public Agents that are maintained by Anturis in different geographical locations.
File Transfer Protocol (FTP) is used to send files between computers connected to a network. It is an application-level protocol that uses the Internet protocol suite, also known as TCP/IP. When you view a web page on the internet, it is transferred from the server to your web browser via HTTP, but uploading the page to the web server is done via FTP. You can also use FTP to download files from the internet to your computer.
The default command port number that an FTP server listens on is 21. Port number 20 is used for data transfer over FTP. An FTP server would generally require you to authenticate with a username and password. The server may also be configured to make certain or all files available via anonymous connections.
FTP does not encrypt messages, so your credentials may be read by a third party involved in the connection. To provide an encrypted connection, FTP can be used over the Transport Layer Security (TLS) protocol, which was previously known as Secure Socket Layer (SSL). When FTP is used over a TLS/SSL layer, this is called an FTPS connection that enables you to securely encrypt an FTP session. In an FTPS connection, commands are directed through port number 990 by default, and data is transferred through port number 989.
TLS/SSL are cryptographic protocols for secure communication over computer networks. They are based on the exchange of X.509 certificates and public keys for encrypting and decrypting messages. Digital certificates are issued by a certificate authority (CA) trusted by both parties involved in the communication. A certificate binds the public key to a person or organization for a predetermined period of time (until the certificate expires).
By regularly sending FTP requests and tracking the time it takes for a response to return (also known as round-trip delay time or latency), you can ensure the availability and performance of your critical file servers. This directly affects the quality of your service, because your clients or employees may rely on the availability of those files. The sooner you are able to detect a possible issue, the faster you will be able to react to it. If the file server uses TLS/SSL security, it is also important to monitor the certificate expiration date.
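A check of this kind can also be scripted by hand. The following sketch is illustrative only and is not how the Anturis monitor is configured; it assumes a hypothetical host name and uses Python's standard library to time an explicit FTPS login on port 21 and to read the certificate expiry date from the implicit FTPS port 990.

import socket, ssl, time
from ftplib import FTP_TLS

HOST = "ftp.example.com"  # placeholder file server

# Time an explicit FTPS (AUTH TLS) session on the default command port 21.
start = time.monotonic()
ftps = FTP_TLS(HOST, timeout=10)
ftps.login()        # anonymous login; pass user and password for a real account
ftps.prot_p()       # encrypt the data channel as well
ftps.quit()
print("FTPS round-trip: %.2f s" % (time.monotonic() - start))

# Read the TLS certificate's expiry date from the implicit-FTPS port 990.
context = ssl.create_default_context()
with socket.create_connection((HOST, 990), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        not_after = tls.getpeercert()["notAfter"]

days_left = (ssl.cert_time_to_seconds(not_after) - time.time()) / 86400
print("Certificate expires in %d days" % days_left)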
©2017 Anturis Inc. All Rights Reserved. | <urn:uuid:f8edd098-dc6c-40c1-a5d8-99bc2889b659> | CC-MAIN-2017-04 | https://anturis.com/monitors/ftp-monitor/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00434-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926884 | 555 | 2.859375 | 3 |
Validating Identity and Code Authenticity
Software on its own can have a variety of weaknesses because it is written code that can be read, analyzed, and rewritten. This makes securing devices in the Internet of Things more challenging. With Code Signing Certificates, a digital signature can be used to sign scripts to verify identity and ensure that the code has not been tampered with. Signing the code essentially “seals” it, identifies who the code is from and with the time-stamp feature, makes sure the code has not been modified.
DigiCert's signing services provide a way to verify that an IoT device's configuration settings, software, firmware, etc. have not been modified during startup. During startup the secure boot process can use the code signing signatures to verify that the software installed on devices has not been changed. Additionally, once an IoT device is in the wild, vendors may also need to update the software/hardware on the device. Again, Code Signing Certificates provide a way for devices to verify that device firmware and software updates are from a trusted source.
DigiCert Code Signing Certificates
In the Internet of Things, checks and balances are needed to make sure connected device code remains secure, to make sure only valid software or firmware updates are received, to make sure patches come from the proper source, etc. Code Signing Certificates are the solution to making sure code is secure.
Digital signatures provide a means to verify that device code has not been modified. Digital signatures also can identify who sends upgrades or patches, and verify that these messages have not been modified during transmission.
Code Signing Permissions
DigiCert Code Signing Certificates provide a way to manage internal and partner code signing permissions. Code Signing Certificates let you control what code is allowed to run on IoT devices and systems by providing a way to verify that the code internal developers, partners, and third parties share is legitimate. These certificates identify who provided the code and verifies that the code has not been altered.
DigiCert Code Signing Certificates will work with most chipsets. This includes the Secure Boot chipset—the one that runs at startup to verify device code has not been tampered with. DigiCert Code Signing Certificates can be used to sign application-level code along with code verification using common cryptographic libraries (e.g., OpenSSL).
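As a rough picture of what such verification looks like, the sketch below checks a detached RSA signature over a firmware image against a trusted public key. It is a generic example rather than DigiCert tooling, it assumes Python's cryptography package, and the file names are placeholders.

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

# Load the trusted signing public key (on a real device this would be baked into
# protected storage or derived from a trusted certificate chain).
with open("vendor_signing_key.pub", "rb") as f:
    public_key = serialization.load_pem_public_key(f.read())

with open("firmware.bin", "rb") as f:
    firmware = f.read()
with open("firmware.bin.sig", "rb") as f:
    signature = f.read()

try:
    public_key.verify(signature, firmware, padding.PKCS1v15(), hashes.SHA256())
    print("Signature valid: firmware is unmodified and from the expected signer")
except InvalidSignature:
    print("Signature check failed: reject the update")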
DigiCert EV Code Signing Certificates
As a best practice for IoT implementations, DigiCert recommends using EV Code Signing Certificates for digitally signing code. Not only is the EV Code Signing Certificate verification process more rigorous, but private keys are stored on secure hardware tokens. For additional protection, DigiCert EV Code Signing Certificates support hardware security modules (HSM), which provide greater control for key management and key permissions.
Talk to an IoT PKI Expert
If you have specific questions about our PKI solution for securing IoT devices, please enter your information in the form below, and an IoT security expert will contact you for a personal consultation.
| <urn:uuid:bde7b5c2-9a49-4586-ac3f-d2ff3b18e1e0> | CC-MAIN-2017-04 | https://www.digicert.com/iot/signing-services.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00342-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.903502 | 687 | 2.625 | 3 |
Now that the science teams working around the Large Hadron Collider (LHC) have confirmed the existence of the Higgs boson, there are questions about what will be researched next as the facility is shut down for two years to perform hardware upgrades.
Helping point the direction of the LHC’s next research project is Gordon, a supercomputer at the San Diego Supercomputing Center on the campus of the University of California at San Diego (UCSD).
Through a partnership between the UCSD physics department and the Open Science Grid (OSG), which is a multidiscipline effort funded by the US Department of Energy and the National Science Foundation, Gordon has been processing massive datasets that are generated by one of the particle detectors at the LHC called the Compact Muon Solenoid, or CMS.
Gordon has shown its scientific computing prowess by processing 125 terabytes of data over four weeks, reportedly making it available for analysis months before it was scheduled to. According to UCSD, this was accomplished over 1.7 million core hours on Gordon, encompassing about 15 percent of the supercomputer’s compute capacity.
“With only a few weeks’ notice,” said Frank Wuerthwein, physics professor at UCSD and member of the CMS project, “we were able to gain access to Gordon and complete the runs, making the data available for analysis in time to provide crucial input toward international planning meetings on the future of particle physics.”
That future might include the search for dark matter, a search which could prove to be orders of magnitude more difficult and complex than the search for the Higgs boson. The properties of the Higgs were already predicted by the standard model of particle physics, meaning physicists had more of an idea of what to look for.
With dark matter, a material that accounts for what is estimated to be 90 percent of the universe’s mass, the properties are less clear. A principle called supersymmetry, which is represented in part below, could be the theoretical guide.
However, according to Wuerthwein, no physical evidence yet exists to support the hypothesis that unifies all fundamental forces at the moment of the Big Bang.
Whichever hypothesis or principle ends up driving physics forward, Gordon figures to be a part of that research as scientists convene and plan out the future of the study of particle physics. “Giving us access to the Gordon supercomputer effectively doubled the data processing compute power available to us,” adds Lothar Bauerdick, OSG’s executive director and the U.S. software and computing manager for the CMS project. “This gives CMS scientists precious months to get to their science analysis of the data reconstructed at SDSC.” | <urn:uuid:387f06e7-055d-4b6a-94ea-cbc91e4e6fc1> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/04/11/sdsc_s_gordon_to_help_guide_future_of_particle_physics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00068-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942901 | 570 | 3.21875 | 3 |
City planners now have another tool at their disposal for optimizing management strategies with IBM’s free-to-use CityOne game simulation.
The online game, introduced in 2010, was designed to explain how complex systems can evolve industries in such areas as water management, energy, banking and retail. The game represents an initial effort by IBM in what are called “city sims.”
The game is intended to help users address current and future challenges, said Phaedra Boinodiris, IBM Serious Games program manager and the game’s lead designer. Users are challenged to enhance the game’s urban environment by meeting profitability and revenue goals, increasing customer and citizen satisfaction, and improving the city’s environment — all with a limited budget. For example, one challenge is to deliver high-quality water in the most cost-efficient way in a world where water usage has increased at twice the rate of population growth.
A major objective of the game is inducing users to harness new technologies — like cloud computing and online collaborative technologies — to drive innovation in city management. Ultimately the game affords users the opportunity to prepare for future urban problems by making smart investment decisions.
CityOne is part of a broader serious games initiative at IBM, Boinodiris said. Future versions of the game may enable users to select cities at different maturity levels, she said. The game will also be expanded to better demonstrate how investments in some systems affect investments in others. Plans are under way for future versions of the game that allow users to propose solutions to challenges presented in the game.
Currently game users can explore solutions geared to their specific needs through Blueworks Live, an IBM cloud community with business processing integrated into CityOne, Boinodiris said. For example, users can utilize the application to convene a group of experts to devise “localized water management solutions” for their municipalities, she said.
IBM partnered with the U.S. Environmental Protection Agency to develop the game. The agency was responsible for providing much of the game’s content, including raw data, white papers and other research, Boinodiris said.
So far, she said the game has been well received, with a wide range of city, state, national and international organizations, and industries using the game.
The game is being used by governments in China, France and South America. Tens of thousands of users globally have played the game, according to Boinodiris. Numerous governments have expressed an interest in customized versions of the game, including some U.S. states.
Players have shown interest in the game’s ability to address “bottom-line” business issues, said Boinodiris. For some players, the game has also highlighted real-life shortcomings in infrastructure support for new technologies. For example, certain players expressed concern that their cities haven’t adequately supported the development of electric vehicles — an opinion amplified by their experiences with the game.
Chris Moore, CIO of Edmonton, Ontario, said it makes sense to develop a game like CityOne that helps users engage in the process and problems of city government, and taps into the collective thinking of the community. But Moore said he’s unsure when such a game will go mainstream and believes that serious games like city simulations need to be consumerized. Moore has briefly explored the game, but said his agency doesn’t use the game, as it already has sufficient modeling and planning tools.
Michael Mascioni is a market research consultant in digital media and freelance writer. | <urn:uuid:d4471721-6829-476f-82ca-853e395a9912> | CC-MAIN-2017-04 | http://www.govtech.com/policy-management/Online-Game-City-Planners.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00278-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947876 | 731 | 2.53125 | 3 |
When it comes to coding Big Data and analytical applications, a select group of programming languages have become the default choices.
This is because their feature sets make them well suited to handling large and complicated datasets. Not only were they originally designed with statistical purposes in mind, but a broad developer ecosystem has also evolved around them. This means there are extensions, libraries, and tools out there for performing just about any analytics functions you might need.
R, Python, and the relative newcomer Julia are currently three of the most popular programming languages chosen for Big Data projects in industry today. They have a lot in common, but there are important differences that have to be considered when deciding which one will get the job done for you. Here’s a brief introduction to each of them, as well as some ideas about applications where one may be more suitable than the others.
R, which has been around since 1993, has long been considered the go-to programming language for data science and statistical computing. It was designed first and foremost to carry out matrix calculations – standard arithmetic functions applied to numerical data arranged in rows and columns.
R can be used to automate huge numbers of these calculations, even when the row and column data is constantly changing or growing. It also makes it very easy to produce visualizations based on these calculations. The combination of these features has made it an extremely popular choice for crafting data science tools.
Because R has been around for a while, it has a large and active community of users and enthusiasts. They’ve spent the last couple decades building extensions and libraries that increase the scope of what the language can do, make it simpler for the user to access its functions, and automate monotonous jobs.
Among the popular extensions are SparkR, which provides access to Apache Spark; ggplot2, which provides visualizations; and an extension that has recently been announced that will allow R to access IBM’s Watson cognitive computing engine.
The fact is, though, that in becoming the ultimate programming language for statistical applications, R has sometimes fallen flat in other areas. Other languages competing for developers’ affections – including those mentioned below – are often more generalized. Because of this, a common approach is to first build the framework of an analytical application in R, taking advantage of its modular nature and support infrastructure. Then, once a solution – such as a working analytics engine – has been devised, the code might be recreated in another, more general purpose programming language to complete the application’s production.
Python is far more general purpose than R and will be more immediately familiar to anyone who has used object-oriented programming languages before.
Python’s sheer popularity has helped cement its place as the second most common tool for data science – and although it may not be quite as widely used as R, its user base has been growing at a greater rate. It’s certainly easier to get used to than R if you don’t already have a solid background in statistical computing.
Python’s user base has devoted itself to producing extensions and libraries aimed at helping it match the usefulness of R when it comes to data wrangling. One of the first of these tools was the NumPy extension, which gives it many of the same matrix-based algorithm capabilities as R. This attracted coders interested in analytics and statistics to the language, and over the years it has led to the development of more and more complex functions and methodologies.
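For example, a hypothetical matrix of sensor readings can be summarized with a few vectorized NumPy calls instead of explicit loops (an illustrative sketch, not drawn from any particular project):

import numpy as np

# One row per sensor, one column per hourly reading (random stand-in data).
readings = np.random.default_rng(0).normal(size=(1_000, 24))

hourly_mean = readings.mean(axis=0)      # average across all sensors for each hour
sensor_peaks = readings.max(axis=1)      # highest reading recorded by each sensor
correlation = np.corrcoef(readings[:5])  # correlation between the first five sensors

print(hourly_mean.shape, sensor_peaks.shape, correlation.shape)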
Because of this, Python has become a popular choice for applications using the most cutting edge techniques, such as machine learning and natural language processing. Open source applications such as scikit-learn and Natural Language Toolkit make it relatively simple for coders to put these technologies to work, and PySpark gives it access to the Apache Spark framework. However, if you’re only interested in more traditional analytical and statistical computing, then you may find that R presents a more complete and integrated development environment than Python.
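Libraries like scikit-learn keep the code needed to fit a model very short; the sketch below uses one of the library's bundled toy datasets purely for illustration:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load a small sample dataset and hold out part of it for evaluation.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a random forest classifier and report accuracy on the held-out data.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))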
R and Python are still the reigning champions when it comes to data and analytics-oriented programming languages, but there are several other languages attracting attention for their suitability in this field.
One that is certainly worth giving a mention to is Julia. It has only been in development for a few years but is already proving itself to be a popular choice. Like Python and R, Julia is built for scalability and speed of operation when handling large data sets. It was designed with a “best of all worlds” ethos — the idea was that it would combine the strengths of other popular analytics-oriented programming languages. One key influence was the widely used statistical programming language MATLAB, with which it shares much of its syntax.
Julia has specific features built into the core language that make it particularly suitable for working with the real-time streams of Big Data industry wants to leverage these days, such as parallelization and in-database analytics. The fact that code written in Julia executes very quickly adds to its suitability here.
In a head-to-head comparison with R or Python, Julia's youth is its Achilles' heel. Its ecosystem of extensions and libraries is not as mature or developed as it is for the more established languages. It is getting there, however, and most popular functions are available, with more emerging at a steady rate.
The Right Tool for the Job
From a general perspective, it may seem that R would be the natural choice for running large numbers of calculations against big-volume datasets, Python would be the go-to for advanced analytics involving AI or ML, and Julia a natural fit for projects involving in-database analytics on real-time streams.
In reality, the nuanced differences between each language and the environment they provide to the programmer means there’s rarely a one-size-fits-all solution. It’s also worth remembering that their open nature (they are all open source projects) means that they don’t pretend to live in isolation. The active communities behind each language frequently cooperate to port functionality between them, and extensions can be used to run code written with one language from within another language.
All of the languages here are living projects that are constantly evolving and updated to be capable of new things. Each has its strengths and weaknesses, but they are all robust choices for enterprise initiatives involving Big Data and analytics.
Bernard Marr is a bestselling author, keynote speaker, strategic performance consultant, and analytics, KPI, and big data guru. In addition, he is a member of the Data Informed Board of Advisers. He helps companies to better manage, measure, report, and analyze performance. His leading-edge work with major companies, organizations, and governments across the globe makes him an acclaimed and award-winning keynote speaker, researcher, consultant, and teacher.
| <urn:uuid:24338886-5d59-42d4-b64e-c880d496765d> | CC-MAIN-2017-04 | http://data-informed.com/big-data-programming-languages-what-are-the-differences-between-python-r-and-julia/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00398-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959879 | 1,424 | 2.875 | 3 |
Created in the late 1980s by Dutch programmer Guido van Rossum as a side project during his Christmas vacation, Python is a popular interpreted, dynamic programming language. Python’s syntax allows programmers to express concepts in fewer lines of code than in Java, C++ and other languages. Programming paradigms supported by Python include object-oriented, imperative and functional programming or procedural styles and it has a large standard library as well as a dynamic type system and automatic memory management.
Python code can run on a wide variety of operating systems because interpreters are available for nearly all of them. Python programs can also be used on most common operating systems without a separately installed interpreter, since they can be packaged into stand-alone executables.
Despite sharing a similar background with Perl, Python has a different philosophy which emphasizes support for “common programming methodologies such as data structure design and object-oriented programming, and encourages programmers to write readable (and thus maintainable) code by providing an elegant but not overly cryptic notation.”
The language draws its name from the love that its creator and benevolent dictator for life (BDFL), Guido van Rossum, has for Monty Python; it was designed to be a "descendant of ABC that would appeal to Unix/C hackers." Python essentially emphasizes both code readability and developer productivity. These two traits shine in its simple syntax, which is quite easy to learn and read, and in the fact that its lack of a compilation step results in a rapid edit-test-debug cycle.
In a brief summary written by van Rossum, he notes that other influences for creating Python include his gripes about many features of the ABC language, such as its lack of extensibility, which he remedied in Python. Additionally, the error handling in the Amoeba language also made van Rossum work to include exceptions as a feature in Python.
While Python implementation began in December 1989, it was in February 1991 that the first code was published to alt.sources. Python 1.0 was released in January 1994 and included functional programming tools such as lambda, map, filter and reduce. Python 2.0 was released in October 2000 as the core development team moved to BeOpen.com where the PythonLabs team was formed. Included in Python 2.0 were list comprehensions as well as a garbage collection system for reference cycles. Version 3.0 (also known as “Python 3000” or “Py3K”) was released in December 2008 and broke backward compatibility. Major features included changing print from a statement to a built-in function, changing integer functionality and more.
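Two of those incompatibilities are easy to see side by side (an illustrative snippet, written for Python 3):

# Python 3: print is an ordinary built-in function.
print("hello")    # Python 2 also accepted the statement form: print "hello"

# Python 3: the / operator performs true division on integers.
print(7 / 2)      # 3.5 in Python 3; the same expression gave 3 in Python 2
print(7 // 2)     # floor division must now be requested explicitly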
Core Python concepts are captured in the Zen of Python, written in 1999.
Django is a free and open-sourced web framework written in Python. As a web framework that follows the model–view–controller (MVC) pattern, Django allows for an easier creation of complex, database driven websites such as Pinterest, Instagram, The Washington Times, Bitbucket and others. Written in 2003, Django is named after the musician Django Reinhardt and was released under the permissive BSD software license in 2005 and since 2008 it has been maintained by the Django Software Foundation (DSF).
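To give a feel for the framework, a database-backed page in Django needs little more than a model and a view. The sketch below is a minimal, hypothetical example and assumes it lives inside an already configured Django app:

from django.db import models
from django.http import JsonResponse

class Article(models.Model):
    title = models.CharField(max_length=200)
    published = models.DateTimeField(auto_now_add=True)

def latest_articles(request):
    # The ORM turns this query into SQL against whichever database is configured.
    titles = list(
        Article.objects.order_by("-published").values_list("title", flat=True)[:10]
    )
    return JsonResponse({"articles": titles})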
Python powers some of the largest sites on the internet thanks to its clean code, its reliability, and the satisfaction amongst the developers using it that comes from the fact that it is both powerful and fun to work with. Some of the most notable websites using Python are:
As with any coding language, security should be at the forefront for all Python and Django developers, especially those who are dealing with giant databases of sensitive personal information that could lead to terrible consequences if exploited or breached.
Checkmarx’s CxSAST, a static code analysis solution, stands out amongst Python testing solutions as not only the solution which will keep your Python code free from security and compliance issues, but also as the tool which will contribute to your organization’s advancement when it comes to application security maturity.
CxSAST works with the tools your developers are already using as it seamlessly integrates with most of the common development programs available at every stage of the SDLC. CxSAST features such as incremental code scanning and best-fix location make it ideal for any continuous integration/continuous development (CICD) environment.
When vulnerabilities are detected in the Python code, CxSAST will not only identify the best fix location, but will also offer resources to the developer to understand how the attack vector works, as well as remediation advice that will help them ensure similar mistakes are avoided in the future.
If you’re interested in reading about how Python compares to Ruby and PHP, be sure to check out PHP vs. Python vs. Ruby- All you ever wanted to know
Interested in trying CxSAST on your own code? You can now use Checkmarx's solution to scan uncompiled / unbuilt source code in 18 coding and scripting languages and identify the vulnerable lines of code. CxSAST will even find the best-fix locations for you and suggest the best remediation techniques. Sign up for your FREE trial now.
Checkmarx is now offering you the opportunity to see how CxSAST identifies application-layer vulnerabilities in real-time. Our in-house security experts will run the scan and demonstrate how the solution's queries can be tweaked as per your specific needs and requirements. Fill in your details and we'll schedule a FREE live demo with you. | <urn:uuid:47d56e7b-4ceb-4ae3-97ca-e3052bb6e9a8> | CC-MAIN-2017-04 | https://www.checkmarx.com/sast-supported-languages/python-security-vulnerabilities-and-language-overview/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00544-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954675 | 1,142 | 3.421875 | 3 |
Apple security protections were put to the test in the past week when hackers embedded ransomware in Transmission, a popular BitTorrent client. Ransomware is an increasingly common form of attack, where malicious software encrypts data on an infected computer and demands a ransom payment from the user in exchange for the key to decrypt their data. While ransomware has become disturbingly commonplace for Windows users, Mac has been largely immune from these threats. Apple has a long history of building strong security protections into their operating systems—such as GateKeeper and XProtect anti-malware schemes. However, as Mac usage continues to grow, it’s now a more attractive target for hackers. And, as with any computer system, there will be vulnerabilities that hackers seek to exploit.
In this case, hackers went after a software vendor rather than attacking the Mac system directly. They were able to embed malicious code in the Transmission.app installer. When a Mac user ran the installer, it passed the GateKeeper inspection because it was signed with a valid developer certificate issued by Apple. As soon as the malicious software was reported, Apple immediately revoked the developer certificate and updated XProtect anti-malware definitions to prevent any further infection. This incident demonstrates that while no system is entirely immune, Apple’s security scheme does its job by drastically reducing the window of time for an exploit to spread.
This attack is a good reminder of the importance of preventative security policies. Keeping your operating system up to date is the best course of action to protect your Mac from this type of attack. GateKeeper and XProtect are included with Apple’s OS X operating system and can automatically receive updates from Apple. Check your App Store preferences yourself and confirm that the “Install system data files and security updates” option is checked.
IT Professionals tasked with managing a fleet of Macs should consider taking additional steps to ensure the malicious software is removed from your network. Several IT pros detailed their approach here: https://jamfnation.jamfsoftware.com/discussion.html?id=19090 | <urn:uuid:377e46e8-cb73-458a-9e7b-b164414b48be> | CC-MAIN-2017-04 | https://www.jamf.com/blog/wrangling-osx-keranger-a-how-apple-stopped-a-malicious-exploit/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00544-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945284 | 422 | 2.625 | 3 |
You can either use shared memory (SHMEM) routines alone or mix them into a program that primarily uses PVM or MPI, thereby offering opportunities for optimizations beyond what the message-passing protocols can provide. Be aware, however, that SHMEM is not a standard protocol and will not be available on machines developed by companies other than Silicon Graphics and Cray Research. SHMEM is supported on Cray PVP systems, Cray MPP systems, and on Silicon Graphics systems.
For background information on SHMEM, see Section 1.1.2. For an introduction to the SHMEM routines, see the shmem_intro(3) man page.
This chapter describes the following optimization techniques:
Improving data transfer rates in any CRAY T3E program by using SHMEM get and put routines (see Section 3.1). This section provides an introduction to data transfer, which is the most important capability that SHMEM offers.
Improving the performance of a PVM or MPI program by adding SHMEM data manipulation routines (see Section 3.2).
Avoiding performance pitfalls when passing 32-bit data rather than 64-bit data (see Section 3.3).
Copying strided data while maintaining maximum performance. The strided data routines enable you, for example, to divide the elements of an array among a set of processing elements (PEs) or pull elements from arrays on multiple PEs into a single array on one PE (see Section 3.4).
Gathering and scattering data and reordering it in the process (see Section 3.5).
Broadcasting data from one PE to all PEs (see Section 3.6).
Merging arrays from each PE into a single array on all PEs (see Section 3.7).
Executing an atomic memory operation to read and update a remote value in a single process (see Section 3.8).
Using reduction routines to execute a variety of operations across multiple PEs (see Section 3.9).
In general, avoiding communications between PEs (including data transfer) improves performance. The fewer the number of communications, the faster your program can execute. Data transfer is, however, often necessary. Finding the fastest method of passing data is an important optimization, and the SHMEM routines are usually the fastest method available.
The SHMEM_PUT64 and SHMEM_GET64 routines avoid the extra overhead sometimes associated with message passing routines by moving data directly between the user-specified memory locations on local and remote PEs.
For both small and large transfers, the SHMEM_PUT64 routine, which moves data from the local PE to a remote PE, and the SHMEM_GET64 routine, which moves data from a remote PE to the local PE, are virtually the same in terms of performance. At times, SHMEM_PUT64 may be the better choice because it lets the calling PE perform other work while the data is in the network. Because SHMEM_PUT64 is asynchronous, it may allow statements that follow it to execute while the data is in the process of being copied to the memory of the receiving PE. SHMEM_GET64 forces the calling PE to wait until the data is in local memory, meaning that no early work can be done.
Passing data in large chunks is always faster than passing it in small chunks because it saves subroutine overhead. Whenever possible, put all of your data (such as an array) into a single SHMEM_PUT64 or SHMEM_GET64 call rather than calling the routine iteratively.
In the following example, eight 64-bit words are transferred from PE 1 to PE 0 by using SHMEM_PUT64. PE numbering always begins with 0.
Example 3-1. Example of a SHMEM_PUT64 transfer
1. INCLUDE "mpp/shmem.fh" 2. INTEGER SOURCE(8), DEST(8) 3. INTRINSIC MY_PE 4. SAVE DEST 5. C On the sending PE 6. IF (MY_PE() .EQ. 1) THEN 7. DO I = 1,8 8. SOURCE(I) = I 9. ENDDO 10. C PE 1 sends the data to PE 0. 11. CALL SHMEM_PUT64(DEST, SOURCE, 8, 0) 12. ENDIF 13. 14. C Make sure the transfer is complete. 15. CALL SHMEM_BARRIER_ALL() 16. 17. C On the receiving PE 18. IF (MY_PE() .EQ. 0) THEN 19. PRINT *, 'DEST ON PE 0: ', DEST 20. ENDIF 21. 22. END
See the following figure for an illustration of the transfer.
The output from the example is as follows:
DEST ON PE 0: 1, 2, 3, 4, 5, 6, 7, 8
Defining the number of PEs in a program and the number in an active set as powers of 2 (that is, 2, 4, 8, 16, 32, and so on) helped performance on CRAY T3D systems. Also, declaring arrays as powers of 2 was necessary if you were using Cray Research Adaptive Fortran (CRAFT) on CRAY T3D systems. Both have changed as follows on CRAY T3E systems:
Declaring arrays such as SOURCE and DEST as multiples of 8 helps SHMEM speed things up somewhat, since 8 is the vector length of a key component of the PE remote data transfer hardware. Declaring the number of elements as a power of 2 does not affect performance unless that number is also a multiple of 8.
Defining the number of PEs, whether you are referring to all PEs in a program or to the number involved in an active set, as a power of 2 does not usually enhance performance in a significant way on the CRAY T3E system. Some SHMEM routines, notably SHMEM_BROADCAST, still do benefit somewhat from having the number of PEs defined as a power of 2.
For information on optimizing existing PVM and MPI programs using SHMEM_GET64 and SHMEM_PUT64, see Section 3.2. For a complete description of the MPP-specific statements in the preceding example, continue on with this section.
In the SHMEM_PUT64 example (see Example 3-1), line 1 imports the SHMEM INCLUDE file, which defines parameters needed by many of the routines. The location of the file may be different on your system. Check with your system administrator if you do not know the correct path.
1. INCLUDE "mpp/shmem.fh"
Line 3 declares the intrinsic function MY_PE, which returns the number of the PE on which it executes. Two versions of the MY_PE function exist on the CRAY T3E system: one in the external library and one as an intrinsic. The intrinsic version is marginally faster than the external library version, but the external library version is available now, and will be in the future, on more Cray Research and Silicon Graphics supercomputer systems. Declaring MY_PE as an intrinsic is not necessary, but it ensures that you get the slightly faster version of the routine.
3. INTRINSIC MY_PE
The defined constant N$PES, which returns the number of PEs in a program, is also slightly faster than the more portable external library routine NUM_PES. Like MY_PE, both versions return the same information.
The intrinsic function MY_PE and the constant N$PES are also faster than using equivalent message-passing routines, such as SHMEM_MY_PE and SHMEM_N_PES. Both methods return the same information and are available on Cray PVP systems as well as Cray MPP systems.
Line 4 ensures that the remote array (DEST) is symmetric, which means that it has the same address on remote PEs as on the local PE.
4. SAVE DEST
You can make sure DEST is symmetric in any of the following ways. (None of these methods is significantly faster than the others.)
Name it in a SAVE statement, as in the example.
Include it in a common block.
If it is an array, allocate it by using shpalloc(3).
If it is a stack variable, declare it by using the !CDIR$ SYMMETRIC directive.
In line 6, the MY_PE function is called. The function returns the number of the calling PE, meaning only PE 1 will execute the THEN clause. As a result, the array SOURCE is initialized only on PE 1.
5. C On the sending PE
6. IF (MY_PE() .EQ. 1) THEN
7. DO I = 1,8
8. SOURCE(I) = I
9. ENDDO
In line 11, PE 1 executes the SHMEM_PUT64 routine call that sends the data. SHMEM_PUT64 is the variant of the SHMEM_PUT family that transfers 64-bit (KIND=8) data. It sends eight array elements from its SOURCE array to the DEST array on PE 0.
11. CALL SHMEM_PUT64(DEST, SOURCE, 8, 0)
12. ENDIF
Line 15 is a barrier (see glossary), which provides a synchronization point (see glossary). No PE proceeds beyond this point in the program until all PEs have arrived. The effect in this case is to wait until the transfer has finished. Without the barrier, PE 0 could print the DEST array before receiving the data. Calling SHMEM_BARRIER_ALL is as fast as calling the BARRIER routine directly.
15. CALL SHMEM_BARRIER_ALL()
Line 18 selects PE 0, which is passively receiving the data. Because SHMEM_PUT64 places the data directly into PE 0's local memory, PE 0 is not involved in the transfer operation. After being released from the barrier, PE 0 prints DEST, and the program exits.
18. IF (MY_PE() .EQ. 0) THEN 19. PRINT *, 'DEST ON PE 0: ', DEST 20. ENDIF | <urn:uuid:d5e17c50-48ed-4e1f-bab3-d372b0734ad3> | CC-MAIN-2017-04 | http://docs.cray.com/books/004-2518-002/html-004-2518-002/z826920364dep.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00362-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.872899 | 2,257 | 2.828125 | 3 |
Considering the Options
Understanding that file transfer security is a critical component of a successful IT strategy is only the first step. It's also important to understand the different types of file transfer methods.
Physical methods such as media devices (for example, laptops, thumb drives, PDAs) and printed documents pose the greatest risk, as they can be easily lost or stolen. The security threat is growing as thousands of files can easily be stored on a 500MB USB drive. As these devices become ubiquitous, organizations need to account for the changing threats of moving physical files.
Electronic file transfer, by contrast, can provide an audit trail and eliminate the "Where's my file?" confusion. While files moved electronically can be tracked more easily than physical media, this method can still be problematic when trying to securely manage large amounts of data.
E-mail attachments are not a viable option because there are too many size, space, security and control issues. Encryption such as HTTP Secure (HTTPS) generally protects an attachment only while it is in transit; once the file reaches the mailbox or the device, that protection is gone. In addition, the risks escalate if a user downloads files at a coffee shop using public Internet access; the minute it's downloaded, the file is wide open. Similarly, if e-mail is accessed via unprotected home WiFi networks, files are put at considerable risk. While a quick e-mail attachment might seem like the easiest way for employees and partners to exchange files, it can be detrimental to the organization if the proper restrictions and security measures aren't in place.
In preparation for the CCNA exam, we want to make sure we cover the various concepts that you could see on the Cisco CCNA exam. To assist you, below we will discuss Cisco Router Basics.
Basics Of Cisco Routers
Cisco is well known for its routers and switches. I must admit they are very good quality products and once they are up and running, you can pretty much forget about them because they rarely fail.
We are going to focus on routers here since that's the reason you clicked on this page!
Cisco has a number of different routers, amongst them are the popular 1600 series, 2500 series and 2600 series. The ranges start from the 600 series and go up to the 12000 series (now we are talking about a lot of money).
Below are a few of the routers mentioned:
All the above equipment runs special software called the Cisco Internetwork Operating System or IOS. This is the kernel of Cisco routers and most switches. Cisco has created what they call Cisco Fusion, which is supposed to make all Cisco devices run the same operating system.
We are going to begin with the basic components which make up a Cisco router (and switches) and I will be explaining what they are used for, so grab that tea or coffee and let's get going!
The basic components of any Cisco router are:
- Interfaces
- The Processor (CPU)
- Internetwork Operating System (IOS)
- RXBoot Image
- RAM
- NVRAM
- ROM
- Flash memory
- Configuration Register
Now I just hope you haven't looked at the list and thought “Stuff this, it looks hard and complicated” because I assure you, it's less painful than you might think! In fact, once you read it a couple of times, you will find all of it easy to remember and understand.
The Interfaces
These allow us to use the router! The interfaces are the various serial ports or Ethernet ports which we use to connect the router to our LAN. There are a number of different interfaces but we are going to hit the basic stuff only.
Here are some of the names Cisco has given some of the interfaces: E0 (first Ethernet interface), E1 (second Ethernet interface). S0 (first Serial interface), S1 (second Serial interface), BRI 0 (first B channel for Basic ISDN) and BRI 1 (second B channel for Basic ISDN).
In the picture below you can see the back view of a Cisco router, where you can clearly see the various interfaces it has (we are only looking at ISDN routers):
You can see that it even has phone sockets! Yes, that's normal since you have to connect a digital phone to an ISDN line, and since this is an ISDN router, it includes this option. I should, however, explain that you don't normally get routers with ISDN S/T and ISDN U interfaces together. Any ISDN line requires a Network Terminator (NT) installed at the customer's premises and you connect your equipment after this terminator. An ISDN S/T interface doesn't have the NT device built in, so you need an NT device in order to use the router. On the other hand, an ISDN U interface has the NT device built into the router.
Check the picture below to see how to connect the router using the different ISDN interfaces:
Apart from the ISDN interfaces, we also have an Ethernet interface that connects to a device in your LAN, usually a hub or a computer. If connecting to a Hub uplink port, then you set the small switch to “Hub”, but if connecting to a PC, you need to set it to “Node”. This switch simply converts the cable from a straight-through (Hub) to a crossover (Node):
The Config or Console port is a female DB9 connector which you connect, using a special cable, to your computer's serial port, and it allows you to configure the router directly.
The Processor (CPU)
All Cisco routers have a main processor that takes care of the main functions of the router. The CPU generates interrupts (IRQ) in order to communicate with the other electronic components in the router. Cisco routers utilise Motorola RISC processors. Usually the CPU utilisation on a normal router wouldn't exceed 20%.
The Internetwork Operating System (IOS)
The IOS is the main operating system on which the router runs. The IOS is loaded upon the router's bootup. It is usually around 2 to 5 MB in size, but can be a lot larger depending on the router series. The IOS is currently on version 12, and Cisco periodically releases minor versions every couple of months, e.g. 12.1, 12.3, etc., to fix small bugs and add extra functionality.
The IOS gives the router its various capabilities and can also be updated or downloaded from the router for backup purposes. On the 1600 series and above, you get the IOS on a PCMCIA Flash card. This Flash card then plugs into a slot located at the back of the router and the router loads the IOS “image” (as they call it). Usually this image of the operating system is compressed so the router must decompress the image in its memory in order to use it.
The IOS is one of the most critical parts of the router, without it the router is pretty much useless. Just keep in mind that it is not necessary to have a flash card (as described above with the 1600 series router) in order to load the IOS. You can actually configure most Cisco routers to load the image off a network tftp server or from another router which might hold multiple IOS images for different routers, in which case it will have a large capacity Flash card to store these images.
The RXBoot Image
The RXBoot image (also known as Bootloader) is nothing more than a “cut-down” version of the IOS located in the router's ROM (Read Only Memory). If you had no Flash card to load the IOS from, you can configure the router to load the RXBoot image, which would give you the ability to perform minor maintenance operations and bring various interfaces up or down.
The RAM
The RAM, or Random Access Memory, is where the router loads the IOS and the configuration file. It works exactly the same way as your computer's memory, where the operating system loads along with all the various programs. The amount of RAM your router needs depends on the size of the IOS image and configuration file you have. To give you an indication of the amounts of RAM we are talking about, in most cases smaller routers (up to the 1600 series) are happy with 12 to 16 MB, while the bigger routers with larger IOS images would need around 32 to 64 MB of memory. Routing tables are also stored in the system's RAM, so if you have large and complex routing tables, you will obviously need more RAM!
When I tried to upgrade the RAM on a Cisco 1600 router, I unscrewed the case, opened it and was amazed to find a 72-pin SIMM slot where you needed to attach the extra RAM. For those who don't know what a 72-pin SIMM is, it's basically the type of RAM the older Pentium socket 7 CPUs took, back in '95. This type of memory was replaced by today's standard 168-pin DIMMs or SDRAM.
The NVRAM (Non-Volatile RAM)
The NVRAM is a special memory place where the router holds its configuration. When you configure a router and then save the configuration, it is stored in the NVRAM. This memory is not big at all when compared with the system's RAM. On a Cisco 1600 series, it is only 8 KB while on bigger routers, like the 2600 series, it is 32 KB. Normally, when a router starts up, after it loads the IOS image it will look into the NVRAM and load the configuration file in order to configure the router. The NVRAM is not erased when the router is reloaded or even switched off.
ROM (Read Only Memory)
The ROM is used to start and maintain the router. It contains some code, like the Bootstrap and POST, which helps the router do some basic tests and bootup when it's powered on or reloaded. You cannot alter any of the code in this memory as it has been set from the factory and is Read Only.
The Flash memory
The Flash memory is that card I spoke about in the IOS section. All it is, is an EEPROM (Electrically Erasable Programmable Read-Only Memory) card. It fits into a special slot normally located at the back of the router and contains nothing more than the IOS image(s). You can write to it or delete its contents from the router's console. Usually it comes in sizes of 4MB for the smaller routers (1600 series) and goes up from there depending on the router model.
The Configuration Register
Keeping things simple, the Configuration Register determines whether the router is going to boot the IOS image from its Flash memory or a TFTP server, or just load the RXBoot image. This register is a 16-bit register; in other words, it holds 16 zeros or ones. A sample value in hex would be 0x2102, which in binary is 0010 0001 0000 0010.
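If you want to see how that hex value maps onto the 16 bits, the short Python sketch below (my own illustration, not part of the original article) converts a register value to binary and pulls out the low-order four bits, which form the boot field. The meanings in the comments are the commonly documented defaults, so treat them as assumptions to verify against Cisco's documentation for your router model.

```python
# Illustrative sketch: interpreting a Cisco configuration register value.
# The bit meanings below are the commonly documented defaults; verify them
# against Cisco's documentation for your specific router model.

def describe_config_register(value: int) -> None:
    print(f"Register : 0x{value:04X}")
    print(f"Binary   : {value:016b}")

    boot_field = value & 0xF          # low-order 4 bits select the boot source
    if boot_field == 0x0:
        boot = "stay in the ROM monitor"
    elif boot_field == 0x1:
        boot = "boot the RXBoot image from ROM"
    else:
        boot = "boot an IOS image from Flash (or as directed by boot commands)"
    print(f"Boot field (bits 0-3) = 0x{boot_field:X} -> {boot}")

describe_config_register(0x2102)   # the sample value from the article
```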
We hope you found this Cisco certification article helpful. We pride ourselves on not only providing top notch Cisco CCNA exam information, but also providing you with the real world Cisco CCNA skills to advance in your networking career. | <urn:uuid:ac6abe20-3e95-494d-a9ca-2f7db01b4a8b> | CC-MAIN-2017-04 | https://www.certificationkits.com/cisco-router-basics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00114-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941338 | 1,977 | 3.28125 | 3 |
What is the End Goal?
The principles of adult learning are founded on the notion that as we age, having a purpose behind learning becomes much more important. When we’re young, we are more likely to accept that the future may yield an unforeseeable need for the information we’re absorbing. We believe in the need for a well-rounded education, even if we can’t see an immediate need for certain pieces of information.
As we age, however, the room we have left for memory storage becomes smaller, so to speak, and we have more defined goals for the information we hold on to. An adult brain needs to know that learning more about a topic has a personal connection to something important that will definitely be of use in the future. Inevitably, whether they say so or not, the number one question your employees will need answered before a training session begins is “Why are we doing this?”
Maintaining a Focus
As a business leader who already sees a need for the training materials, it’s easy to feel annoyed by this question and wish for some trust on the part of your employees, but allowing your trainees to understand the answer to the “Why?” question before beginning can make an enormous difference in how they approach, categorize, and retain the information you are presenting.
Think about how you study new information when you are truly engaged. Perhaps, for example, you take notes. Note-taking feels much more meaningful and worthwhile if you know what specifics are important enough to record in your notes. Give your trainees an understanding of how this information will be important to them. Is there a set of questions they will have to answer at the end? Consider showing them the questions at the beginning. Will they have to memorize key words and phrases to use with customers in the future? Explain that before getting started.
All of what you are sharing with them is important or you wouldn’t have included it, but there is a big difference between a fact that must be memorized and an anecdote that serves to better illustrate a concept. If the participants know what to pay attention to and listen for, they will be more engaged throughout.
Other than bits and pieces they may want to pay special attention to along the way, your audience should know what the overall objective is. There may be more than one, but try to limit your overarching objectives to keep things clear and simple. An objective is a clearly defined goal with a means for assessment. For example, your objective may be that by the end of the training session, participants will be able to bring a customer through a problem-solving process over the phone to troubleshoot a technology issue. To assess completion of that goal, participants may engage in a role-play exercise at the end of the training. Objectives start with an acknowledgement of time frame and audience, then end with an assessment of the goal.
Whatever the objective, by the end of the training, your participants should be able to tell if they’ve collected and retained the information they need moving forward. In the next posting of this series, we’ll discuss how to maintain engagement throughout the training session by actively involving the participant at assessment intervals. Ultimately, the goal of your training is employee takeaway and change, and that’s only possible if the goals are shared with your participants. | <urn:uuid:8f9ec9e4-0097-48ab-84f2-4ed6b2212983> | CC-MAIN-2017-04 | http://blog.contentraven.com/learning/driving-engagement-with-a-clear-purpose | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00508-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956929 | 696 | 3.125 | 3 |
The Non–Inverting Buffer
We now spend some time investigating useful circuit elements that do not directly implement Boolean functions. The first element is the non–inverting buffer.
This is logically equivalent to two NOT gates in a row.
There are engineering differences between the two, most notably that the non–inverting buffer delays the signal less than a chain of two NOT gates.
This is best considered a “voltage adjuster”.
A logic 1 (voltage in the range 2.0 – 5.0 volts) will be output as 5.0 volts. A logic 0 (voltage in the range 0.0 – 0.8 volts) will be output as 0.0 volts.
The output of a circuit element does not change instantaneously with the input, but only after a delay time during which the circuit processes the signal.
This delay interval, called “gate delay”, is about 10 nanoseconds for most simple TTL circuits and about 25 nanoseconds for TTL Flip–Flops.
The simplest example is the NOT gate.
Here is a trace of the input and output. Note that the output does not reflect the input until one gate delay after the input changes.
For one gate delay time we have both X = 1 and Y = 1.
For some advanced designs, it is desirable to delay a signal by a fixed amount.
One simple circuit to achieve this effect is based on the Boolean identity X = NOT(NOT(X)): inverting a signal twice returns its original value.
A circuit to implement this delay might appear as follows.
Here is the time trace of the input and output.
The Pulse Generator
This circuit represents one important application of the gate delay principle. We shall present this circuit now and use it when we develop flip–flops.
This circuit, which I call a “pulse generator”, is based on the Boolean identity X AND NOT(X) = 0, which holds only after the NOT gate has had time to respond to a change in its input.
Here is the circuit
Here is a time plot of the circuit’s behavior.
The pulse is due to the fact that for one gate delay, we have both X = 1 and Y = 1. This is the time it takes the NOT gate to respond to its input and change Y.
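The timing behavior is easy to reproduce with a small simulation. The Python sketch below is my own illustration (it is not part of the original notes); it models time in units of one gate delay, so the NOT-gate output Y lags the input X by one step and the AND output is 1 for exactly one step after X rises.

```python
# Model the pulse generator: output = X AND Y, where Y = NOT(X) delayed
# by one gate delay. Time advances in units of one gate delay.

X = [0, 0, 1, 1, 1, 1, 0, 0]   # input waveform, one sample per gate delay

Y = []        # NOT-gate output, lagging X by one gate delay
out = []      # AND-gate output (the pulse)
prev_x = 0    # assume X was 0 before the trace starts
for x in X:
    y = 1 - prev_x        # the NOT gate responds to the *previous* value of X
    Y.append(y)
    out.append(x & y)     # AND of current X and delayed NOT(X)
    prev_x = x

print("X  :", X)
print("Y  :", Y)
print("out:", out)        # a single 1 right after X rises: [0, 0, 1, 0, 0, 0, 0, 0]
```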
The Tri–State Buffer
Some time ago, we considered relays as automatic switches. The tri–state buffer is also an automatic switch.
Here are the diagrams for two of the four most popular tri–state buffers.
An enabled–low buffer is the same as an enabled–high buffer with a NOT gate.
What does a tri–state buffer do when it is enabled? What does a tri–state buffer do when it is not enabled? What is this third state implied by the name “tri–state”?
An Enabled–High Tri–State Buffer
Here is an enabled–high tri–state buffer, with the enable signal called “C”.
When C = 1, the buffer is enabled.
When C = 0, the buffer is not enabled.
What does the buffer do?
The buffer should be considered a switch. When C = 0, there is no connection between the input A and the output F. When C = 1, the output F is connected to the input A via what appears to be a non–inverting buffer.
Strictly speaking, when C = 0 the output F remains connected to input A, but through a circuit that offers very high resistance to the flow of electricity. For this reason, the state is often called “high impedance”, “impedance” being an engineer’s word for “resistance”.
What is This Third State?
Consider a light attached to a battery. We specify the battery as 5 volts, due only to the fact that this course is focused on TTL circuitry.
Three cases are shown: 0 volts connected to the lamp, the third state, and 5 volts connected to the lamp.
When the switch is closed and the lamp is connected to the battery, there is a voltage of +5 volts on one side, 0 volts on the other, and the lamp is on.
In the case at left, both sides of the lamp are connected to 0 volts. Obviously, it does nothing.
The middle diagram shows the third state. The top part of the lamp is not directly connected to either 0 volts or 5 volts.
In this third state, the lamp is not illuminated as there is no power to it. This is similar to the state in which the top is set to 0 volts, but not the same.
Understanding Tri–State Buffers
The best way to understand a tri–state buffer is to consider this circuit.
C = 0: The top buffer is outputting the value of A (logic 0 or logic 1). The bottom tri–state buffer is not active. F = A.
C = 1: The top tri–state buffer is not active. The bottom buffer is outputting the value of B. F = B.
Due to the arrangement, exactly one tri–state buffer is active at any time.
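One way to convince yourself of this behavior is to model it. In the small, hypothetical Python sketch below, a disabled tri–state buffer is represented by None (the high-impedance state), and the shared output line F simply takes whichever buffer is driving it.

```python
# Two tri-state buffers sharing one output line F, as in the figure.
# None represents the high-impedance state (the buffer is not driving the bus).

def tristate(data: int, enable: int):
    return data if enable == 1 else None

def two_input_selector(A: int, B: int, C: int) -> int:
    top = tristate(A, 1 - C)      # enabled when C = 0
    bottom = tristate(B, C)       # enabled when C = 1
    drivers = [v for v in (top, bottom) if v is not None]
    assert len(drivers) == 1, "exactly one buffer may drive the bus"
    return drivers[0]

print(two_input_selector(A=1, B=0, C=0))   # 1 -> F = A
print(two_input_selector(A=1, B=0, C=1))   # 0 -> F = B
```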
We shall use tri–state buffers to attach circuit elements to a common bus, and trust the control circuitry to activate at most one buffer at a time.
Information technology has traditionally been a male-dominated field with an insular culture, but it is also a global business in which skilled workers are at a premium.
One of the most influential companies in IT history, IBM, has a long record of leadership on diversity issues--not just in IT, but in the corporate world. Its legacy goes back to the punch-card days, when the young enterprise included women and blacks in its workforce at the start of the 20th century, and continued through the decades with ahead-of-the-curve policies toward disabled and gay workers. In time, diversity and opportunity "became a recruitment tool" for IBM, says Ted Childs, the company's former head of global workforce diversity.
But that type of legacy is not widespread in business. "IBM is the exception rather than the rule," says Karen Sumberg, assistant vice president at the Center for Work-Life Policy, a not-for-profit organization that studies women and work.
Researchers at the University of North Carolina's Institute on Aging gave a presentation on perceptions of older workers in the tech workplace in which they noted, "IT has an image of being youthful, male and white." Older workers, one of the groups recognized under the diversity umbrella, are under-represented in IT and are more likely to lose their jobs than their younger colleagues.
Still, the nature of technology work seems to attract people who make decisions based on rational inputs rather than emotion, says Samir Luther, workplace project manager for the Human Rights Campaign Foundation, a gay, lesbian, transgender and transsexual rights group. Of course, IT people are not immune from prejudices, but they may be open to the logical case made for diversity. For whatever reasons, Luther says anecdotal evidence shows that as gender identity becomes a hot topic in diversity discussions, IT seems to attract a relatively large number of transgender workers. | <urn:uuid:9f051cd7-374c-45fe-bfb7-0a8a98a1ae30> | CC-MAIN-2017-04 | http://www.cioinsight.com/it-strategy/a-tale-of-two-cultures | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00232-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.971266 | 382 | 2.609375 | 3 |
We’ve all heard this about students: “If they are engaged, they are managed.” And this is absolutely the truth. But we still need rules, routines, trust, and student ownership to make a classroom run smoothly and effectively, especially with the presence of technology and digital devices. These handy classroom management tips will address the practical aspects of managing a classroom, and possibly some new ideas to spice up your current routine.
Classroom Management Tips:
Tip 1: Find three, then ask me
Encourage kids to find three other ways to get the answer before raising their hand and asking the teacher. Examples could be search engines, books, or a neighboring student. This method instills in the student the practice of using resources to find solutions on their own.
Tip 2: Showcase good work
When you catch a student in the act of doing good work on their computer or device, display it on the classroom big screen or projector. This lets the student know that you noticed their hard work, and shows the rest of the class an example of good behavior. It also encourages more students to try their hardest so that they too can be recognized publicly.
Tip 3: Talk & Listen
Have students use a software tool, such as Impero Education Pro, to record notes, lessons, vocabulary lists, etc. then the student can play his own recording back to help remember the information. This method is especially good for auditory learners.
Tip 4: Re-Teach
Often during a lesson, the teacher finds that there were many similar questions about the subject matter, or that the same questions were missed on a quiz. This is a good indicator that the information needs to be taught again in a different format. Taking a quick quiz on student devices can help poll students on learned concepts and gauge comprehension.
Tip 5: Quick quiz
As mentioned in tip 4, giving a quick quiz at the end of a lesson can help the teacher formulate the level of comprehension of a concept. Exit exams are also a great strategy. Give students a list of questions at the beginning of class. Teach a lesson, then have the students answer those questions right before they leave.
Tip 6: Let them drive
During a lesson, put a student in the driver’s seat by letting them teach or demonstrate a concept. This gives students validation in their skills, makes them more responsible, and re-enforces knowledge.
Tip 7: Focus the class
When the classroom gets off task on a digital project, or students start wandering away from paying attention, turn off all the websites except the one you want them to look at. This allows you to ensure all eyes are on important content. Software programs like Education Pro make it easy to set one website on all student devices with one click.
Tip 8: Assess your effectiveness
After a lesson, reflect on student behavior during class. Think about the percentage of students who were wandering off to entertainment or personal websites. Then think about how the lesson could have been made more engaging to keep attention on you.
The Impero Education Pro software solution provides a plethora of tools that allow the teacher to put all of the classroom management tips above into practice. Some of the main features of Education Pro are:
To find out more about how Impero education network management software can help your school with classroom management, network management, and internet safety, request a free demo and trial on our website. To talk to our team of education experts, call 877.883.4370, or email Impero now to arrange a call back. | <urn:uuid:a54ecc9a-7f0c-41aa-aa43-3dc3510630aa> | CC-MAIN-2017-04 | https://www.imperosoftware.com/make-technology-work-for-you-with-classroom-management-tips/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00444-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944662 | 728 | 3.015625 | 3 |
Work that NASA's Marshall Space Flight Center did to create an inexpensive spacecraft with off-the-shelf parts has helped one company facilitate better and cheaper satellite communications.
NASA's Fast, Affordable, Science and Technology Satellite, also known as FASTSAT, was built to demonstrate researchers’ capability to build, deploy and operate a science and technology flight mission at lower costs than previously possible. In 2012, the satellite wrapped up a successful two-year, on-orbit demonstration mission.
Part of building the satellite meant developing a low-cost telemetry unit, which is used to facilitate communications between the satellite and its receiving station.
Alabama-based Orbital Telemetry Inc. licensed the NASA technology and is offering to install the cost-cutting units on other commercial satellites. | <urn:uuid:e72d4fd8-94e9-42fb-8db6-904067bcd070> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2473447/emerging-technology/151319-nasas-spinoff-technologies-are-outta-this-world.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00076-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948424 | 157 | 3.46875 | 3 |
Analysts with UK-based Internet research firm Netcraft have discovered a considerable number of fake SSL certificates in the wild, created to impersonate banks, social networks, payment and ecommerce providers, and so on.
The certificates are used to make users believe that they are on the right website when they are not, allowing attackers to perform man-in-the-middle attacks and capture all the information sent and received between the users and the sites, with both the users and the companies being none the wiser.
As explained by security analyst Paul Mutton:
The fake certificates bear common names which match the hostnames of their targets (e.g. www.facebook.com). As the certificates are not signed by trusted certificate authorities, none will be regarded as valid by mainstream web browser software; however, an increasing amount of online banking traffic now originates from apps and other non-browser software which may fail to adequately check the validity of SSL certificates.
Fake certificates alone are not enough to allow an attacker to carry out a man-in-the-middle attack. He would also need to be in a position to eavesdrop the network traffic flowing between the victim’s mobile device and the servers it communicates with. In practice, this means that an attacker would need to share a network and internet connection with the victim, or would need to have access to some system on the internet between the victim and the server. Setting up a rogue wireless access point is one of the easiest ways for an individual to carry out such attacks, as the attacker can easily monitor all network traffic as well as influence the results of DNS lookups (for example, making www.examplebank.com resolve to an IP address under his control).
Online banking apps for mobile devices are notoriously bad at SSL certificate validation, and as Mutton points out, “both apps and browsers may also be vulnerable if a user can be tricked into installing rogue root certificates through social engineering or malware attacks.”
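For app developers, the practical lesson is to let the TLS library perform full validation rather than switching it off. The generic Python sketch below is my own illustration (it is not from Netcraft's report); it relies on the standard library defaults, which reject certificates that do not chain to a trusted authority or do not match the requested hostname. The hostname is just a placeholder.

```python
import socket
import ssl

HOST = "www.example.com"   # placeholder hostname

# create_default_context() enables certificate-chain verification and
# hostname checking; a forged, self-signed certificate will be rejected.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        cert = tls_sock.getpeercert()
        print("Negotiated", tls_sock.version(), "with", cert["subject"])

# The dangerous anti-pattern seen in some apps is the opposite:
#   context.check_hostname = False
#   context.verify_mode = ssl.CERT_NONE
# which silently accepts the kind of fake certificates described above.
```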
Among the fake SSL certificates they have discovered was one used to “legitimize” a Facebook phishing page served from a server in Ukraine; one “wildcard” certificate served from a machine in Romania and possibly used to impersonate a variety of Google services; one to impersonate a large Russian bank and one to mimic a Russian payment services provider; one to imitate Apple iTunes.
It’s interesting to note that they have also found a phony certificate used to impersonate GoDaddy’s POP mail server. “In this case, the opportunities could be criminal (capturing mail credentials, issuing password resets, stealing sensitive data) or even state spying, although it is unexpected to see such a certificate being offered via a website,” Mutton pointed out. | <urn:uuid:fea5a6a1-ce23-4f8a-bd98-7c851e080237> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2014/02/13/fake-ssl-certificates-used-to-impersonate-facebook-google-banks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00196-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948773 | 573 | 2.71875 | 3 |
Farmers are leading the way with the Industrial Internet of Things (IIoT), but do they need ‘precision farming’ to ensure global food security?
“We need to achieve long-term, sustainable global food security without causing environmental damage.”
This was the call to action from Richard Green, head of engineering research for the National Center for Precision Farming, at the Smart Summit London late last month.
With a warning that we face an expected global population of nine billion by 2050, Green also spoke of the need to build homes, roads and other important infrastructure. All of this means we will lose land for growing crops, and the end result is that we will need to grow more food with less.
Precision farming – how does it work?
Green’s solution: precision farming – a management concept born out of the advent of global position system (GPS) and global navigation satellite system (GNSS) technology.
This method is based on observing, measuring and responding to inter and intra-field variability in crops. The point is to define a decision support system for management of entire farms, with the aim of optimizing returns while preserving resources.
“It’s a solution that is both sustainable and economic,” he claimed.
Green’s argument, following years of research, is that Precision Farming — or satellite farming as it is sometimes known — can give farmers more control of the IIoT.
By using satellites to pinpoint precise locations in a field, farmers gain ‘better’ data on plants and insight into methods that work well. With better information, plants, and technology, farmers can sustainably improve yields and profits while optimizing the use of resources.
“Using robotics and autonomous vehicles [with in-built GPS], we are trying to treat each plant individually. We haven’t got there yet but that’s the aim.”
Farmers and the IIoT
It has been well-documented that farmers are already using Internet of Things (IoT) technology to great effect, in some parts of the world. So are they doing it right?
According to Green: “Today, farmers complain that they have too much data and don’t know what to do with it. Tomorrow they will realize they have too little and that what they do have isn’t good enough!”
Instead of accruing more data for the sake of it, Green is trying to encourage farmers globally to be more precise with their data accumulation. Everything must be geo-tagged so it can be linked to a precise location.
His goal is to build a ‘giant data cloud’ from farms and businesses across the world. In theory, farmers would then be able to access this database to ask questions and make better informed decisions for how they grow their own produce.
As an example, Green suggested that precision farming could help farmers see what would happen if they changed the date of farming, and how that affects a yield. They could also see how much ‘each square meter of the farm will make in profit if they incorporate data analytics.’
Targeting developing economies
“Small marginal gains are possible, which increases our ability to adapt to climate change,” he notes.
Whether the gains would be as great in more advanced countries is questionable. In the UK, where the best farmers can yield 10-12 tonnes per hectare, efficiency could improve slightly, Green admits.
What he’s really excited about, though, is the potential for developing countries. These are countries that produce 1-2 tonnes per hectare, ‘so the opportunity is huge’. If data analytics can help farmers in developing countries to just double their yield, that is still ‘incredibly significant’.
The problem Green and his team face is that many different parties are invested in precision farming, but they are not yet working together. A large part of his role is to encourage them to collaborate.
“Only by all of us coming together, can we actually make it work.”
Another major factor is whether or not the developing countries Green spoke of actually have the money and the capability to invest in the upfront costs of IIoT technology without support. There is already a major food shortage in some parts of the world, so while the IoT will be useful for some countries in time, there are others that need a solution today.
"Open standards have played a crucial role in enhancing the interoperability of diverse systems and in helping organisations provide better services, saving significant costs to both public and private enterprises",says Steven Ramage, Executive Director, Marketing and Communications, OGC.
The Open Geospatial Consortium (OGC) was established in 1994 with a membership of 5. Today the OGC has more than 420 member organisations including government, academic and private sector organisations from 34 different countries. Traditional GIS vendors are involved, along with technology integrators, data providers, and companies at the cutting edge of location services. In the commercial sector, the OGC has attracted Fortune 500 companies, as well as numerous smaller technology providers, and globalisation is expanding the participation of companies from previously underserved world regions.
The mission of the OGC is "To serve as a global forum for the collaboration of developers and users of spatial data products and services, and to advance the development of international standards for geospatial interoperability".
To support this mission, the OGC manages four programmes:
- Interoperability - to establish communities of interest and encourage collaboration. A good example is aviation with the requirement to create a shared air space, such as for all of Europe.
- Standards development and maintenance - a consensus process in which the OGC members collaborate to define, develop, and maintain international geospatial standards.
- Compliance testing - to provide the resources, procedures, and policies for improving software implementation compliance with OGC standards.
- Marketing and communications - to explain and promote the value of open standards, creating more interest and participation, resulting in more benefits for all.
A key aspect of the work of the OGC is to enable the effective and seamless sharing of location data. Beginning in 1998, OGC members defined a new OGC standard called the Geography Markup Language (GML). GML became an official OGC standard in 2000. In 2007, GML also became an ISO standard [ISO is the International Organisation for Standardisation, the world's largest developer and publisher of international standards http://www.iso.org/iso/about.htm]. However, GML by itself does not solve the location data-sharing problem. Common content models, such as for sharing airport or weather information, need to be defined and then encoded using GML.
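To give a flavour of what a GML encoding looks like, here is a rough sketch (my own illustration, not an official OGC example). The WeatherStation wrapper element is a hypothetical application-schema element, while gml:Point and gml:pos are standard GML constructs.

```python
import xml.etree.ElementTree as ET

GML = "http://www.opengis.net/gml"
ET.register_namespace("gml", GML)

# Hypothetical application-schema element wrapping a standard GML point.
station = ET.Element("WeatherStation", {"id": "ws-042"})
point = ET.SubElement(station, f"{{{GML}}}Point", {"srsName": "EPSG:4326"})
pos = ET.SubElement(point, f"{{{GML}}}pos")
pos.text = "51.501 -0.142"   # coordinate pair; axis order depends on the CRS definition

print(ET.tostring(station, encoding="unicode"))
# prints one line similar to:
# <WeatherStation id="ws-042"><gml:Point srsName="EPSG:4326"><gml:pos>51.501 -0.142</gml:pos></gml:Point></WeatherStation>
```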
Working with communities of interest such as Defence and Intelligence, Emergency and Disaster Management, Hydrology, Aviation, and Meteorology, these content models and associated GML schema (encodings) have been, or are in the process of being, developed. Examples of community-specific standards that utilise GML include:
- AIXM - Aviation Information Exchange Model;
- EDXL - Emergency Data Exchange Language;
- GeoSciML - GeoScience Markup Language for the Geology community with 109 countries currently collaborating;
- WXXM - Weather Information Exchange Model.
These community-specific models are often evaluated and endorsed by the OGC as conforming to OGC standards, specifically GML.
Standards implemented in applications enable many-to-many interoperability. This many-to-many concept underlies one of the most important trends in information and communication technology today: cloud computing. Robust networks and service-oriented architectures (SOA) drive cloud computing in which enterprises, large and small, gain flexible and efficient access to hardware, software, and networks as services. This removes the need to purchase and maintain possibly underutilised expensive in-house resources. The cloud approach enables access to all these resources as services, quickly and flexibly meeting current as well as future needs.
Distributed operations use open standards to enable information flow. OGC standards have removed the technical boundaries that previously limited communication between complex information systems, making geospatial information "just another data type" accessible across boundaries to applications of all kinds.
In 2004, the US National Aeronautics and Space Administration (NASA) conducted a Return on Investment (ROI) study through Booz Allen Hamilton (BAH) to assess the impact of using open standards that enable geosciences interoperability among its partnering agencies. This study did not explicitly assess the value of service-oriented architectures but, rather, anticipated them. The open standards assessed then are now part of virtually all current architectures that use web services and enable publishing, discovery, access and use of geospatial information. The NASA study compared one government programme using open geospatial interface standards with another government programme not using those standards. The study focused on geospatial standards developed by the OGC, the US Federal Geographic Data Committee and the International Organization for Standardization (ISO) Technical Committee 211.
The results of the ROI initiative can be seen in detailed charts and tables in the study (http://www.egy.org/files/ROI_Study.pdf). These results revealed a significant improvement in functionality and decrease in cost when using open standards as opposed to proprietary standards. The project, using open standards, saved NASA 26.2% of project costs compared to the project that relied on a proprietary standard. It was stated that for every $100m spent on proprietary standards, the same results could have been achieved for $75m using open standards.
The study also reported that open standards-based projects:
- Have lower maintenance and operation costs
- Have greater first-time system planning and development costs, but future projects using the same standards will have significantly reduced planning and development costs.
Standards make the distribution of geospatial information easier and more understandable - not just for government technologists, managers, and decision support analysts, but also for all stakeholders, including industry partners.
Contact details and credits
This article was written in collaboration with Steven Ramage. For further information contact email@example.com and view the OGC website, http://www.opengeospatial.org | <urn:uuid:46619155-7c7b-4668-9113-2d6cd285fac5> | CC-MAIN-2017-04 | http://www.bloorresearch.com/analysis/location-standards-boring-realise/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00527-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913027 | 1,211 | 2.9375 | 3 |
Companies around the world are taking steps to act responsibly when it comes to sourcing material for their products. The World Resources Institute (WRI) aims to make that process easier by offering a tool that helps businesses analyze their supply chains and identify commodities whose production could contribute to deforestation.
The WRI's free app, Global Forest Watch Commodities, is a mapping tool that provides forest-related data and analysis for palm oil, beef, soy, pulp and paper products. It was created for the WRI by Blue Raster LLC in partnership with Esri.
The app pulls information from dozens of sources and analyzes disparate types of data, including figures on annual changes in tree cover, views of protected areas, NASA data on active fires and satellite data with near real-time updates on tree cover loss from an advocacy group called Forest Monitoring for Action.
"Businesses use the tool to help them quantify and get detailed information on exactly what their impact on the forest looks like, and to credibly communicate what they're doing to address that or where they've improved over time," says Sarah Lake, commodities research analyst at WRI in Washington. The biggest challenges were gathering current, high-quality data and, sometimes, persuading stakeholders to make data publicly available, she says.
Today, about a dozen of the largest commodity traders and buyers in the world use the app.
The Roundtable on Sustainable Palm Oil (RSPO) uses the tool for its alert and fire monitoring system to track fires and deforestation activity.
"Companies who are certified by RSPO had far fewer fire alerts" on their land, says Sanath Kumaran, head of impacts for RSPO in Kuala Lumpur, Malaysia. "During the last six months, only 10 fire hotspots occurred in RSPO-certified [land] compared to over 2,000 total fire hotspots in all other oil palm [land]."
This story, "World Resources Institute" was originally published by Computerworld. | <urn:uuid:8abdcf42-ba62-4b63-8704-3716a601484b> | CC-MAIN-2017-04 | http://www.itnews.com/article/2977562/data-analytics/world-resources-institute.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00161-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959247 | 404 | 2.546875 | 3 |
In the current version of the ITIL Foundation class, the following exam question appears in one of the two sample exams used in the class:
Which one of the following is the CORRECT sequence of activities for handling an incident?
1. identification, logging, categorization, prioritization, initial diagnosis, escalation, investigation and diagnosis, resolution and recovery, closure
2. prioritization, identification, logging, categorization, initial diagnosis, escalation, investigation and diagnosis, resolution and recovery, closure
3. identification, logging, initial diagnosis, categorization, prioritization, escalation, investigation and diagnosis, resolution and recovery, closure
4. identification, initial diagnosis, investigation, logging, categorization, escalation, prioritization, resolution and recovery, closure
The correct answer to this question is choice 1; however, students often disagree with that answer. The rationale behind the answer is simply, "The correct order is given in the diagram in the incident management process, and in the subsections of [SO] 4.2.5." In this post, I will provide a better explanation of why choice 1 is the correct answer.
First of all, the flow of activities in the incident management process is described in the Service Operation book section 4.2.5, and shown visually in Figure 4.3. Figure 4.3 shows the following flow of activities for incident management: identification, logging, categorization, prioritization, initial diagnosis, escalation, investigation and diagnosis, resolution and recovery, and closure.
As shown in Figure 4.3, the correct flow of activities in the incident management process begins with identification, which is followed by logging, which in turn is followed by categorization. Initial diagnosis occurs later in the process flow following prioritization.
While the Service Operation book is clear about the flow of activities, the logic behind why the activities are in this order is not completely clear. Very few people disagree that the incident management process begins with identification, which in turn is followed by logging. The disagreement primarily exists in what follows logging, whether it is categorization or initial diagnosis. A good way to summarize the flow of activities is that they flow from general to specific.
It often helps to clarify what the steps in the process do. Categorization allocates the type of incident that is occurring. In practice, organizations often use a multi-level categorization scheme, where the top-level consists of a few broad high-level categories. Subsequent levels of categorization might provide an additional level of detail. Practically, I’ve always thought of categorization as a way of identifying at a high-level what general area an incident should belong to. For example, common top-level categories include things like “hardware”, “software”, “network”, “user induced”, “supplier induced”, etc.. In fact, I once worked at a large organization that processes about 50,000 incident tickets per month with a set of 8 top-level categories. In other words, when categorization is done, we’re really just trying to identify a general area to which the incident most likely belongs. Categorization can be revisited, and often changes throughout the lifecycle of an incident.
Prioritization accounts for the impact and urgency of the incident and assigns a pre-defined code that guides an organization’s response to an incident. In any population of incidents, an effective prioritization scheme tells the organization which incident to work on first. The ability to do this is critically important in high-volume environments where the organization has limited and shared resources capable of responding to numerous, simultaneous incidents. In other words, organizations have to make decisions about how to marshal resources based on their impact to the business and how quickly service must be restored.
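One common way to implement such a scheme is a simple impact-by-urgency lookup table. The sketch below is a generic illustration rather than the exact matrix from the ITIL books; the codes and target times are assumptions that each organization would define for itself.

```python
# A generic impact x urgency priority matrix. Codes and target times are
# illustrative only; real organizations define their own values.
PRIORITY_MATRIX = {
    ("high",   "high"):   ("P1", "Major incident - invoke the major incident procedure"),
    ("high",   "medium"): ("P2", "Resolve within 4 hours"),
    ("high",   "low"):    ("P3", "Resolve within 1 business day"),
    ("medium", "high"):   ("P2", "Resolve within 4 hours"),
    ("medium", "medium"): ("P3", "Resolve within 1 business day"),
    ("medium", "low"):    ("P4", "Resolve within 3 business days"),
    ("low",    "high"):   ("P3", "Resolve within 1 business day"),
    ("low",    "medium"): ("P4", "Resolve within 3 business days"),
    ("low",    "low"):    ("P5", "Resolve as resources allow"),
}

def prioritize(impact: str, urgency: str) -> tuple:
    """Return the pre-defined priority code for a given impact and urgency."""
    return PRIORITY_MATRIX[(impact.lower(), urgency.lower())]

print(prioritize("High", "High"))   # ('P1', 'Major incident - ...')
```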
Initial diagnosis is described in the Service Operation book, in one of the subsections of 4.2.5, as the activity where the service desk attempts to understand all symptoms of the incident in an effort to uncover what is wrong and attempt to correct it. During this activity, the service desk staff might use the known error database to speed incident resolution, or diagnostic scripts to identify the service fault.
The logical reason these steps are in this order is that during categorization and prioritization we try to uncover enough details about the incident so that it can be routed correctly throughout the process. For example, organizations might choose to handle hardware or network incidents differently than they handle software incidents. The same is true for prioritization. Prioritization seeks to establish facts about the incident in terms of its impact and urgency such that proper routing decisions can be made; for example, the highest priority is what is typically known as a "major incident", which will often follow a specific procedure dedicated to handling major incidents.
Therefore, the early steps in the incident management process are focused on properly routing the incident. Knowing the category and priority help organizations make effective decisions about routing incidents. Improperly routed incidents will result in delayed resolution of service, which impacts users and customers and decreases satisfaction. For example, it would not make sense for a service desk to attempt initial diagnosis if they are not properly trained or equipped to investigate that category of incident. In fact, a service desk spending time doing initial diagnosis for incident categories where they are improperly trained and do not have effective scripts and tools will often result in delayed restoration of service, increased impact to users, and a negative impact to customer satisfaction.
Clearly, according to ITIL, categorization occurs early in the incident management process, and there are good reasons why this is the case. | <urn:uuid:b63cbd72-2335-4274-af1d-292240dcb296> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2012/04/18/incident-management-process-flow-which-comes-first-categorization-or-initial-diagnosis/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00279-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951719 | 1,118 | 2.578125 | 3 |
Four in ten (40%) smartphone users in the United States agree that they don’t understand cyber security well enough to know how to protect themselves, according to LifeLock.
The online survey, conducted in August by Harris Interactive, asked more than 2,000 smartphone users about their smartphone security habits and assessed their knowledge of and participation in potentially risky behavior.
Many smartphone users reported sharing data that can leave them susceptible to identity theft and other identity fraud dangers. Users are engaging in several compromising behaviors that may leave them at risk of fraudulent activity:
- 44% of smartphone users have a personal banking or finance app on their smartphone.
- 35% of those who have a social networking app on their smartphone allow the app to know the GPS location of their phone when downloading the app.
- 36% of those surveyed have not utilized protection such as a PIN, tracking software, and/or remote wiping capabilities for their smartphone.
“It’s clear that the majority of those surveyed don’t take simple steps to secure their devices,” said Neil Chase, Vice President of Education with LifeLock. “And it varies with age. The survey found that people ages 18-34 are significantly more likely to use the same password for every app than those who are 35-54, and people 55 and over are even more careful.”
Despite the prevalence of identity theft, users don’t see their smartphones as the biggest risk. Seventy-one percent of users agree that losing their wallet is a bigger risk for identity theft than losing a smartphone. Further, 36% of participants believe they are more likely to have their car stolen than their identity stolen. However, based on an annual survey published by Javelin Strategy & Research, identity fraud affected 12.6 million adult consumers in 2012, while, according to the FBI, fewer than one million cars were stolen in 2012.
LifeLock recommends the following actions to secure a smartphone:
- Protect the device with a strong password. Do not use date of birth or banking PIN as a password.
- Do not allow downloaded apps to access GPS location.
- Ensure all apps use different usernames and passwords.
- Wipe all personal information from the device before replacing or upgrading. | <urn:uuid:88f81169-f22a-470a-99ee-d6cd463a8d9d> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2013/10/23/smartphone-users-still-unaware-of-identity-theft-threats/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00187-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958506 | 464 | 2.59375 | 3 |
Password Expiry is a mechanism used by some Operating Systems to require users to change their Passwords regularly. This is done because over time users tend to give out their passwords, write them down, or otherwise compromise the secrecy of their Passwords. Passwords leaked over time pose a security risk, which is mitigated by Password Expiry.
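A minimal sketch of the idea, assuming a hypothetical system that records when each password was last changed and enforces a maximum age:

```python
from datetime import date, timedelta
from typing import Optional

MAX_PASSWORD_AGE = timedelta(days=90)   # hypothetical policy value

def password_expired(last_changed: date, today: Optional[date] = None) -> bool:
    """Return True if the password is older than the maximum allowed age."""
    if today is None:
        today = date.today()
    return today - last_changed > MAX_PASSWORD_AGE

# Example: a password last changed on 1 January has expired by mid-April.
print(password_expired(date(2024, 1, 1), today=date(2024, 4, 15)))   # True
```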
Helkern Epidemic - events chronology
27 Jan 2003
It is possible to state with certainty that 'Helkern' appeared well before the 25th of January, when anti-virus companies first brought it to the attention of the mass media. January 20, 2003 at 19:07 marked the first time data resembling 'Helkern' worm copies was detected by Kaspersky Lab. The data was sent from a computer belonging to a U.S.-based Internet service provider. However, this doesn't mean that the company's employees created 'Helkern' - most likely their server was remotely infected. Therefore the truth about the virus's origin might be hiding in the request log files of that server.
Later the same day, the "Helkern" code was found in a request from a Dutch server. After that, the worm did not show up until 20:21 on January 23, when another copy of the worm was registered in a request from another Dutch server. The explosion of "Helkern" activity occurred only in the early morning of January 25. The incubation period for this worm lasted almost 5 days. During this time the virus infected a critical number of servers, which set off the destructive chain reaction.
According to other data, the epicenter of the worm was in China, from where it sneaked into computer systems in North Korea and the Philippines. From there it reached the western and central regions of the U.S.A., where it divided into two streams - the first headed to Australia and New Zealand and the second to Western Europe.
Geographic spread of 'Helkern'
As the table below roughly shows, this epidemic reached almost all countries. This once again demonstrates the inefficiency of the idea of fighting with cybernetic weapons such as computer viruses: such a weapon has an obvious boomerang property, which makes it inapplicable for military purposes.
As of January 27, the epidemic is practically neutralized and normal Internet operating capacity has been restored. Copies of 'Helkern' are still constantly registered on the network, but their number is hundreds of times lower than at the peak of activity. In general their presence doesn't influence Internet traffic and doesn't disturb normal network performance. The neutralization of 'Helkern' was in many ways due to the coordinated work of Internet providers, who implemented measures for filtering the hazardous data packets sent by 'Helkern', and to users who promptly patched the vulnerability in Microsoft SQL Server that the 'Helkern' worm was exploiting.
| Country | Infected servers (% of total server infections) |
|---|---|
| USA | 48.4% |
| Germany | 8.2% |
| South Korea | 4.9% |
| Great Britain | 4.9% |
| Canada | 4.9% |
| China | 3.3% |
| Netherlands | 2.7% |
| Taiwan | 2.7% |
| Greece | 2.2% |
| Sweden | 2.2% |
We live in an increasingly digital world, and rely on the devices we use to store our more or less valuable information and to perform critical tasks.
In fact, according to a global Kaspersky Lab survey, 98 percent of us use our mobile phones, tablets, laptops and / or desktop computers to conduct financial operations, 74 percent regularly use e-wallets and payment systems, and the great majority use the devices to do online shopping, use social media, do online banking, store data online, use instant messaging, and so on.
But while many of these users are aware of some of the risks tied to the everyday use of these devices, other dangers are given far less thought.
For example, 69 percent of the survey-takers are aware that the personal data on their devices requires additional security, and 73 percent are updating the software on them regularly.
Meanwhile, one third of them take no security measures when using public Wi-Fi networks, and
over 40 percent trust websites and their banks to keep their passwords secure and to return any of their money that might get stolen by cyber crooks.
On this last point reality is a bit different, as 41 percent of respondents who were victims of a financial attack were unable to get all their money back.
The average loss suffered due to such an attack was $74 per person. Users who don't back up their data suffer an even bigger loss: on average, the loss of a media collection following a malicious attack or device failure will cost the user $418. It's interesting to note here that Russian and Chinese users are especially bad at performing regular backups, or any backup at all.
Privacy concerns top the list of users’ worries. 69 percent are anxious about their personal data being stolen and used by other people, and they are especially worried about the data they share with companies and government agencies.
Cyberwarfare and state-sponsored attacks are, on the other hand, not something that concerns many users. In fact, the great majority of them have never heard of things like zero-day vulnerabilities, botnets, Mini Flame or the Zeus Trojan.
“Although 31% of respondents said they were worried about cyberwarfare and the damage it could cause, relatively little is known about the kinds of weaponry used,” the survey results say. “There also seems to be little concern over trends like ‘hacktivism’, the fact that some cyberattacks have the indirect backing of governments, or attacks on software, video game and media companies such as Adobe, Microsoft, Oracle, Sony, and The New York Times.”
Users have a pretty accurate view of which platforms are the most vulnerable (Windows, Java and Android), but more than one third of the respondents still believe that Macs are immune to cyberthreats.
When it comes to passwords, some 40 percent of the respondents predictably said they had just one password, or at best a small collection, for all of their accounts. Only 26 percent use a different password for each account.
When it comes to mobile devices, most users work with and store sensitive data on them, yet few recognize the dangers of using free public Wi-Fi access points: less than 25 percent use anti-theft software for mobile devices, and just 40 percent of smartphone owners and 42 percent of Android tablet owners use security solutions on these devices.
The report also includes statistics on actual cyber attacks and threats the respondents have personally faced and either deflected or fell victim to. | <urn:uuid:13ce303b-bbfc-4a00-b975-d9706c694bcd> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2013/10/25/exploring-the-dangers-of-a-mobile-lifestyle/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00427-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95682 | 733 | 2.75 | 3 |
This article is Part 3 of our Introduction to Internet Programming series. See the Introduction and Part 1, Introduction to HTML. The Introduction to Perl covers the basics of Perl, including how to write and debug simple scripts.
Perl is a computer programming language that is very good at
manipulating strings, doing pattern matches, and pretty much
everything else! It is a relatively slow language compared
to compiled languages like C; however, it is very easy to get programs
up and running very quickly in Perl.
| <urn:uuid:43ad7e98-dc45-440d-b288-555503de7a61> | CC-MAIN-2017-04 | https://luxsci.com/blog/tag/programming | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00545-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.896559 | 115 | 3.46875 | 3 |
Sandia delves into brain-modeled computer systems
- By Mark Rockwell
- May 16, 2014
Researchers at Sandia National Laboratories are seeking to develop computing systems modeled on neurons in the brain, such as these green fluorescent protein-labeled neurons in a mouse neocortex. Photo by Frances S. Chance, courtesy of Janelia Farm Research Campus.
Researchers are closing in on technology that would supercharge computing capabilities behind big-data analysis and make unmanned vehicles more autonomous, according to an Energy Department research lab.
Industry and government researchers have been working on computer systems that are modeled on the delicate neural network of the human brain rather than traditional computing's parallel processing systems. Sandia National Laboratories researcher Murat Okandan said the brain-modeled systems would be ideal for operating unmanned aerial vehicles (UAVs), robots and remote sensors, and could help solve big-data problems.
Today, those problems and devices need more computational power and better energy efficiency, he told FCW, adding that the developing capabilities could also power continuous diagnostics and mitigation (CDM) technology by sharpening the ability to detect ever-changing intrusion anomalies.
Sandia recently added neuro-inspired computing as a long-term research project in support of future computer system development. Neuro-inspired computing seeks to create algorithms for computers that function more like a brain than a conventional CPU, Okandan said. The new capabilities would operate above the more traditional architecture and oversee how that architecture is used. The technology would consume a minuscule amount of power, which would make the systems more portable and free up room for more computing.
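As a rough illustration of what "neuron-like" computation means at the algorithmic level, the sketch below implements a textbook leaky integrate-and-fire neuron in Python. It is purely illustrative -- the parameter values are arbitrary and it has no connection to Sandia's actual research code.

import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=0.02, v_rest=-65e-3,
               v_reset=-65e-3, v_threshold=-50e-3, r_m=1e7):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    toward its resting value, integrates the input current, and emits
    a spike whenever it crosses the firing threshold."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + r_m * i_in) * (dt / tau)
        v += dv
        if v >= v_threshold:
            spike_times.append(step * dt)  # record the spike time
            v = v_reset                    # reset after firing
    return spike_times

# Drive the neuron with a constant 2 nA current for 200 ms.
print(len(lif_neuron(np.full(200, 2e-9))), "spikes")

Unlike a conventional CPU pipeline, work in a model like this happens only when spikes occur, which hints at why such architectures promise very low power consumption.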
Neuro-based systems could also act more autonomously in a number of roles, Okandan said. In a CDM environment, for instance, a neuro-based system might be able to better detect anomalies without having seen them before or allow a UAV to make its own decisions about where to look and what to look for instead of relying on a remote operator.
Sandia has tapped diverse facilities for its research, including its Microsystems and Engineering Sciences Applications fabrication plant, which can build interconnected computational elements; its computer architecture group with its long history of designing and building supercomputers; and the lab's cognitive neurosciences researchers with expertise in areas such as brain-inspired algorithms.
Okandan said the first breakthroughs in neural-modeled systems are not far off. Although he declined to predict exactly when the capabilities would make their way into the market, he said the initial commercial applications could start appearing in the next five years. More advanced, specialized capabilities developed by government research facilities might take longer, he added.
Mark Rockwell is a staff writer at FCW.
Before joining FCW, Rockwell was Washington correspondent for Government Security News, where he covered all aspects of homeland security from IT to detection dogs and border security. Over the last 25 years in Washington as a reporter, editor and correspondent, he has covered an increasingly wide array of high-tech issues for publications like Communications Week, Internet Week, Fiber Optics News, tele.com magazine and Wireless Week.
Rockwell received a Jesse H. Neal Award for his work covering telecommunications issues, and is a graduate of James Madison University.
Contact him at firstname.lastname@example.org or follow him on Twitter at @MRockwell4. | <urn:uuid:99428a70-6344-46db-b92a-29a110243363> | CC-MAIN-2017-04 | https://fcw.com/articles/2014/05/16/sandia-brain-modeled-computer-systems.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00023-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950104 | 692 | 2.984375 | 3 |
Software patches are fixes that are released by software vendors. Patches generally fall into one of three categories: patches that improve functionality, patches that repair or restore functionality, and patches that repair vulnerabilities. Understand why companies install patches, how they choose which patches to install and the due diligence that needs to be performed before patch installation.
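As a minimal sketch (Python, purely illustrative and not taken from Info-Tech's study), the three categories can be modeled explicitly so that a hypothetical patch-triage script can treat security fixes with higher urgency:

from enum import Enum, auto

class PatchCategory(Enum):
    FUNCTIONALITY_IMPROVEMENT = auto()  # adds or enhances features
    FUNCTIONALITY_REPAIR = auto()       # repairs or restores behavior
    SECURITY_FIX = auto()               # repairs a vulnerability

def fast_track(category: PatchCategory) -> bool:
    # Security fixes jump the queue; feature patches can usually wait
    # for the next maintenance window, or be skipped if the feature is unused.
    return category is PatchCategory.SECURITY_FIX

print(fast_track(PatchCategory.SECURITY_FIX))              # True
print(fast_track(PatchCategory.FUNCTIONALITY_IMPROVEMENT)) # False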
Reasons for Installing Patches
Info-Tech recently performed a study of software patching. It revealed security vulnerabilities are the main reason companies patch software. Figure 1 reinforces the idea that the security of companies' IT systems is always on IT's front burner. Patches meant to improve or enhance software functionality are probably installed less frequently than security patches because not all of these patches may be necessary for companies; if a company does not use a particular function of a software program then it is not necessary to install patches meant to enhance its performance. | <urn:uuid:6e9ac907-de8d-4ad0-97ed-e3d3d5e12626> | CC-MAIN-2017-04 | https://www.infotech.com/research/a-closer-look-at-software-patching-practices | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00417-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944702 | 175 | 2.640625 | 3 |
Over U.S. aviation’s 100-year history, 320,000 people have registered to operate manned aircraft. Over the past eight months, 520,000 people have registered to use Unmanned Aerial Vehicles (UAVs).
According to Michael Huerta, administrator of the Federal Aviation Administration (FAA), UAVs, also known as drones, will play a key role in shaping the future of the United States. At the White House Office of Science and Technology Policy (OSTP) Workshop on Drones and the Future of Aviation, Huerta announced that the FAA has commissioned an Unmanned Aircraft Safety Team and a Drone Advisory Committee.
The Unmanned Aircraft Safety Team will include representatives from the drone and aviation industry; this team will analyze safety data from drones and attempt to mitigate safety concerns. The Drone Advisory Committee, chaired by Brian Krzanich, CEO of Intel, will develop policy and regulations for future drone use.
“This is an industry that’s moving at the speed of Silicon Valley. We at the FAA know we can’t move at the speed of government,” Huerta said. “America has the most complex airspace in the world, and it’s FAA’s job to ensure that everything operating up there is doing it safely for the public and for everyone who wants to use it.”
The FAA released a set of regulations on drone use on June 21. Huerta announced at the OSTP Workshop that the first set of rules for commercial drone use will be enacted on Aug. 29. This rule states that commercial users must fly their UAVs, which must weigh less than 55 pounds, in sparse areas at a height no greater than 400 feet. Huerta said the FAA hopes to propose more rules later this year.
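To make the two headline limits concrete, here is a toy Python check. It is only a sketch of the constraints as summarized above -- the actual rule contains many more conditions (airspace class, visual line of sight, daylight operation, and so on) that this does not attempt to capture.

MAX_WEIGHT_LBS = 55     # aircraft must weigh less than 55 pounds
MAX_ALTITUDE_FT = 400   # flight ceiling of 400 feet

def flight_within_limits(weight_lbs: float, altitude_ft: float) -> bool:
    return weight_lbs < MAX_WEIGHT_LBS and altitude_ft <= MAX_ALTITUDE_FT

print(flight_within_limits(weight_lbs=4.4, altitude_ft=350))   # True
print(flight_within_limits(weight_lbs=60.0, altitude_ft=200))  # False: too heavy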
“The only limit of this technology is our imagination,” Huerta said. “I’m confident that, by working with our partners, we will succeed.”
Krzanich, who collaborates with the FAA to discuss UAVs, said that drone use could foster a $16 billion market in the private business sector by 2020. Through his work at Intel, Krzanich tests drones on factors such as collision avoidance and beyond line-of-sight flight capabilities. He plans to take the test to receive an unmanned drone license on Aug. 29; he and his coworkers are competing to see who will receive the best score.
Drones can change how filmmakers shoot movies and conduct examinations of the country’s infrastructure, such as train tracks and pipelines. According to Krzanich, one of the most important possibilities drones offer is the chance to expedite search and rescue work. He used the example of a person wielding 100 drones that could detect a person stuck in the desert more quickly than a search team operating one helicopter.
“[Drones] truly can bring a better experience,” Krzanich said. “We can save lives.” | <urn:uuid:6a4f32c8-5106-4364-a234-00dcc12028a7> | CC-MAIN-2017-04 | https://www.meritalk.com/articles/faa-announces-drone-advisory-committee/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00473-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945261 | 624 | 2.8125 | 3 |
What is aggregation?
In the past, the IBM i operating system has only provided redundant Ethernet capabilities through the proxy Address Resolution Protocol (ARP) or "automatic failover" between line descriptions. Unfortunately, this implementation is neither the industry standard nor practical in an enterprise environment. Link aggregation, defined by the Institute of Electrical and Electronics Engineers (IEEE) as 802.3ad or 802.1ax, provides both redundancy and performance advantages. When properly implemented, link aggregation can both increase the resiliency of your system to network failures and provide a significant performance benefit. This tip refers to the technology as aggregation; however, it is also known as EtherChannel, teaming, or trunking.
Advantages of aggregation
There are three major advantages to aggregation over redundancy:
- Resiliency. Aggregation increases your system's resiliency by eliminating three single points of failure. First, the Ethernet port on your system can fail. Second, the Ethernet cable itself can fail. And third, the switch or switch port your system is connected to can fail. Aggregation can overcome all of these failures without any impact to your system or its users.
- Performance. With aggregation, TCP/IP traffic is allowed to traverse any of the available paths to the switch. The traffic is spread across the resources according to a configured policy (see the sketch after this list), which means that each 1 Gbps line adds to the overall throughput capability of your system (for example, two 1 Gbps lines equals 2 Gbps of theoretical bandwidth). You can add up to eight ports in an aggregated line configuration. This is also true of 10 Gbps Ethernet lines, meaning that your maximum throughput could theoretically be as much as 80 Gbps of bandwidth. This is a big advantage over redundancy, in which traffic would simply flow over one or the other physical connection, limiting you to the bandwidth of that single connection.
- Routing simplification. With redundant Ethernet lines on IBM i, routing was difficult, because the setup required an interface to be assigned the subnet mask 255.255.255.255, or 32-bit. This interface acted as the master and would use proxy ARP to point to one of two physical interfaces, each with its own IP address. This configuration becomes a problem, because you cannot route traffic out that "master" interface due to the 32-bit subnet mask. Oftentimes, that "master" interface is the address registered in the Domain Name System (DNS), which caused confusion or even made the scenario impossible, because inbound traffic would arrive on one IP address, but outbound traffic could come from one of the two other IP addresses. With aggregation, this problem is solved, because the IP address points to a new media access control (MAC) address that is unique and therefore can be assigned to whatever subnet is necessary to route traffic correctly. The TCP/IP setup is exactly the same as if the underlying line description were a physical device, making routing and IP address assignment much simpler and cleaner.
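The sketch below (Python, purely illustrative -- not how IBM i implements it internally) shows the idea behind a *SRCDESTPRT-style policy: both ports of a conversation are hashed so that each flow sticks to one physical link while different flows spread across the aggregate.

def choose_link(src_port: int, dst_port: int, num_links: int = 2) -> int:
    """Map a TCP/UDP conversation to one physical port of the aggregate."""
    return hash((src_port, dst_port)) % num_links

# Every packet of a given conversation hashes to the same link,
# while separate conversations can land on different links.
print(choose_link(51515, 443, num_links=2))
print(choose_link(50000, 8080, num_links=2))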
There are four prerequisites necessary for implementing Ethernet aggregation on IBM i version 7.1 with technology refresh 3:
- At least two gigabit Ethernet physical ports assigned to the partition. This assignment can include one host Ethernet adapter (HEA) port if it's the only logical port assigned to that physical HEA port.
- The following program temporary fixes (PTFs) must be applied: MF53900, MF54074, MF54188, MF54229, MF99003, SI42593, and SI42997.
- The ports to be used in the aggregate line must be connected to an EtherChannel-capable switch or switch pair. When using a switch pair, the attached switches will need to be configured in a Virtual Link Aggregation Group (VLAG), also known as a stacked switch pair. Your network administrator will need to enable the switch ports to use EtherChannel in a static configuration with Link Aggregation Control Protocol (LACP) off.
The first step in implementing Ethernet aggregation on IBM i is to identify
the communication resources you'll use as part of the aggregate resource
list in your new line description. To do so, run the
WRKHDWRSC TYPE(*CMN) command to list all
available communication resources. Look for resources with the text
description "Ethernet Port," and record the resource names to be used (for
example, CMN01). For demonstration purposes, resource names CMN01
and CMN02 are used in this tip.
Now, create your new line description with the CMN resources specified in the
AGGRSCL parameter. Other parameters you
need to be aware of are
LIND, the name of
the line description;
RSRCNAME, which should be
*AGG to specify that this is an aggregate line; and
AGGPCY, which is the type of aggregation
standard and policy to use. As of the time of writing, the only standard
supported is *ETHCHL. The policy you choose is up to you, but IBM
recommends *SRCDESTPRT, which uses the source and destination port of
the TCP/IP traffic to determine which physical Ethernet port to transmit
on—essentially using both ends of the conversation to determine
which link to use.
Here is an example of the command to create an aggregate line description called ETHERLIN01 using these parameters with CMN01 and CMN02 in the aggregate resource list:
CRTLINETH LIND(ETHERLIN01) RSRCNAME(*AGG) AGGPCY(*ETHCHL *SRCDESTP) AGGRSCL(CMN01 CMN02)
The last step is to configure your TCP/IP address to use the new line description.
To do so, run the command
ADDTCPIFC, like so:
ADDTCPIFC INTNETADR('10.10.10.1') LIND(ETHERLIN01) SUBNETMASK('255.255.255.0')
And that's it: You now have an interface that is redundant and aggregated. Figure 1 provides a visual representation of the necessary components for the link aggregation.
Figure 1. Drawing showing the steps necessary to create an aggregate interface
Management and testing
Now that you have created your new line description, there are a couple of new things
to be aware of. First, if you run the WRKHDWRSC TYPE(*CMN) command
again, you will see a new device listed with a device ID of 6B26 and a description
of AGGxx: This is the logical representation of the new device that you created.
Also, if you run
DSPLIND LIND(ETHERLIN01) OPTION(*AGGRSCL),
you will notice the CMN resources that you identified earlier in your aggregate resource
list and their current status. Output should look similar to Listing 1.
Listing 1. DSPLIND sample output
Display Line Description

 Line description . . . . . . . . . :   ETHERLIN01
 Option . . . . . . . . . . . . . . :   *AGGRSCL
 Category of line . . . . . . . . . :   *ELAN

 --Aggregated Resource List--
  Name     Status
  CMN01    LINK UP
  CMN02    LINK UP
You can test the aggregate feature in several different ways. Physically unplugging the Ethernet cable from one of the ports that you defined in your resource list causes that link to go down but not the entire interface. You can also use a dynamic logical partitioning (DLPAR) function to unassign one of the cards. Your network administrator could shut down one of the ports on the attached switch, as well. None of these tests should affect traffic to or from your IBM i system, but during the test, you should see one of the CMN resources in the aggregated resource list change from LINK UP to LINK DOWN.
You can increase your system's resiliency and performance by using the steps outlined in this tip. By doing so, you can prevent unwanted downtime for your users.
- Visit the IBM i information center to learn more about the commands referenced in this tip.
- The IEEE 802.3 Ethernet Working Group provides extensive information on the Ethernet standards that apply to this tip.
- Cisco's website provides all the necessary information to configure EtherChannel on their equipment.
- The IBM i zone provides a wealth of information relating to all aspects of IBM i systems administration.
- New to IBM i? Visit the New to IBM i page to learn more.
- Stay current with developerWorks technical events and webcasts focused on a variety of IBM products and IT industry topics.
- Attend a free developerWorks Live! briefing to get up-to-speed quickly on IBM products and tools as well as IT industry trends.
- Follow developerWorks on Twitter.
- Watch developerWorks on-demand demos ranging from product installation and setup demos for beginners to advanced functionality for experienced developers.
Get products and technologies
- Evaluate IBM products in the way that suits you best: Download a product trial, try a product online, use a product in a cloud environment, or spend a few hours in the SOA Sandbox learning how to implement service-oriented architecture efficiently.
- Participate in the IBM i forums:
- Get involved in the My developerWorks community. Connect with other developerWorks users while exploring the developer-driven blogs, forums, groups, and wikis. | <urn:uuid:cbe67d64-c420-4cd6-9112-7011e22eceda> | CC-MAIN-2017-04 | http://www.ibm.com/developerworks/ibmi/library/i-ethernetlines/index.html?cmp=dw&cpb=dwibmi&ct=dwnew&cr=dwnen&ccy=zz&csr=071312 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00410-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.894249 | 1,958 | 3.375 | 3 |
NASA launches challenges using OpenNEX data
NASA is launching two challenges to give the public an opportunity to create innovative ways to use data from the agency’s Earth science satellites.
The open data challenges will use the Open NASA Earth Exchange (OpenNEX), an Amazon Web Services data and supercomputing platform where users can share knowledge and expertise.
A component of the NASA Earth Exchange, OpenNEX also features a large collection of climate and Earth science satellite data sets, including global land surface images, vegetation conditions, climate observations and climate projections.
“OpenNEX provides the general public with easy access to an integrated Earth science computational and data platform,” said Rama Nemani, principal scientist for the NEX project at NASA's Ames Research Center in Moffett Field, Calif.
“These challenges allow citizen scientists to realize the value of NASA data assets and offers NASA new ideas on how to share and use that data.”
To educate citizen scientists on how the data on OpenNEX can be used, NASA is releasing a series of online video lectures and hands-on lab modules.
The first stage of the challenge offers as much as $10,000 in awards for ideas on novel uses of the data sets. The second stage, beginning in August, will offer between $30,000 and $50,000 for the development of an application or algorithm that promotes climate resilience using the OpenNEX data, and based on ideas from the first stage of the challenge. NASA will announce the overall challenge winners in December.
OpenNEX is hosted on the Amazon Web Services cloud and available to the public through a Space Act Agreement.
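Because the data sets sit in Amazon S3, any standard S3 client can browse them. The snippet below is a minimal sketch using boto3; the bucket name and prefix shown are assumptions for illustration only, so check the OpenNEX documentation for the actual locations.

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Public data sets can be read anonymously, without AWS credentials.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# Hypothetical bucket/prefix -- replace with the values documented by OpenNEX.
resp = s3.list_objects_v2(Bucket="nasanex", Prefix="NEX-DCP30/", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])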
Posted by GCN Staff on Jun 25, 2014 at 12:18 PM | <urn:uuid:8bf1360f-fd75-4690-affe-12dc3c5cc11e> | CC-MAIN-2017-04 | https://gcn.com/blogs/pulse/2014/06/nasa-opennex-challenge.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00436-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.882213 | 356 | 2.9375 | 3 |
AUSTIN, Texas -- Does the world need another operating system? Yes, according to the creative minds in the computer science department at the University of California at Berkeley, who have come up with one called the Tessellation Operating System that's intended to light up the future of the Internet of things.
At the Design Automation Conference (DAC) here this week, John Kubiatowicz, professor in the UC Berkeley computer science division, offered a preview of Tessellation, describing it as an operating system for a future where surfaces with sensors, such as walls and tables in rooms, could be used via touch or audio command to summon up multimedia and other applications. The UC Berkeley Tessellation website says Tessellation is targeted at existing and future so-called "manycore" systems that have large numbers of processors, or cores, on a single chip. Currently, the operating system runs on Intel multicore hardware as well as the Research Accelerator for Multiple Processors (RAMP) multicore emulation platform.
According to Kubiatowicz, Tessellation -- a math term for how shapes can be arranged to fill a plane without any gaps -- is an innovative OS that looks to manage resources such as bandwidth to cloud storage, response latency, and requests for database services in a continuously adaptive manner, based on its concept of resource containers.
A key concept in Tessellation is the abstract idea of the "cell" as "a user-level software component with guaranteed resources," said Kubiatowicz during the session at DAC. Cells provide guaranteed fractions of system resources (such as processors, cache, network or memory bandwidth, fractions of system services). As part of this framework, the new OS makes use of a quality-of-service method and scheduling. There's a novel way to message information in lieu of moving data around. There are services for keyboard and mouse, and network services are measured to maximize throughput.
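To make the cell abstraction concrete, here is a toy admission-control model in Python. It is only a sketch of the idea of guaranteed resource fractions and bears no relation to Tessellation's actual implementation.

class Cell:
    """Toy model of a cell: a user-level component promised fixed
    fractions of system resources."""
    def __init__(self, name, cpu_share, bandwidth_share):
        self.name = name
        self.cpu_share = cpu_share              # fraction of CPU time
        self.bandwidth_share = bandwidth_share  # fraction of network bandwidth

def admit(running_cells, candidate):
    """Admit a new cell only if the total guarantees stay within 100%."""
    cpu = sum(c.cpu_share for c in running_cells) + candidate.cpu_share
    bw = sum(c.bandwidth_share for c in running_cells) + candidate.bandwidth_share
    if cpu <= 1.0 and bw <= 1.0:
        running_cells.append(candidate)
        return True
    return False

running = [Cell("network-service", 0.25, 0.50)]
print(admit(running, Cell("media-app", 0.50, 0.30)))  # True
print(admit(running, Cell("batch-job", 0.40, 0.40)))  # False: CPU over-committed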
The idea is that if Tessellation catches on, one day there will be a "swarm of services," either local or in the cloud, that users can invoke. At this juncture, UC Berkeley's new "Swarm Lab" is swarming all over app development to see how far they can take Tessellation.
A lot of hopes are riding on what Kubiatowicz -- known as "Kubie" to his friends -- is spearheading with his team under the futuristic TerraSwarm project, said Edward Lee, also a UC Berkeley professor, who spoke during another DAC session on the topic of the Internet of things. Lee said if it's successful, it will provide an "open application development platform" that could form the basis for home-based automation innovation and much more in the future. "We'll all be surprised by what comes out of it," he predicted. He added some colleagues like to call it the "Unpad vision" because it can in theory work without a physical mobile device of any kind.
Is security an issue? Yes, Kubiatowicz acknowledges, suggesting that cryptography, for one thing, needs to be part of it.
In his closing keynote at DAC today, Alberto Sangiovanni-Vincentelli, a veteran of the design automation industry, who helped found Cadence and Synopsys, and now holds the Buttner Chair in Electrical Engineering and Computer Sciences at UC Berkeley, also predicted the world may see the "Swarm" concept take root. He added the Berkeley OS project is supported by Semiconductor Research Corp.
So just when will Tessellation and Swarm applications be unleashed upon the world? Kubiatowicz wouldn't be pinned down but said he anticipated "soon."
Ellen Messmer is senior editor at Network World, an IDG publication and website, where she covers news and technology trends related to information security. Twitter: @MessmerE. Email: firstname.lastname@example.org. | <urn:uuid:fe78a604-cb66-401c-89af-80af704ce067> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2167038/software/futuristic-uc-berkeley-operating-system-uniquely-controls-discrete--manycore--resources.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00492-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942998 | 796 | 3.125 | 3 |
3.4.1 What are DSA and DSS?
The National Institute of Standards and Technology (NIST) (see Question 6.2.1) published the Digital Signature Algorithm (DSA) in the Digital Signature Standard (DSS), which is a part of the U.S. government's Capstone project (see Question 6.2.3). DSS was selected by NIST, in cooperation with the NSA (see Question 6.2.2), to be the digital authentication standard of the U.S. government. The standard was issued in May 1994.
DSA is based on the discrete logarithm problem (see Question 2.3.7) and is related to signature schemes that were proposed by Schnorr [Sch90] and ElGamal (see Question 3.6.8). While the RSA system can be used for both encryption and digital signatures (see Question 2.2.2), the DSA can only be used to provide digital signatures. For a detailed description of DSA, see [NIS94b] or [NIS92].
In DSA, signature generation is faster than signature verification, whereas with the RSA algorithm, signature verification is very much faster than signature generation (if the public and private exponents, respectively, are chosen for this property, which is the usual case). It might be claimed that it is advantageous for signing to be the faster operation, but since in many applications a piece of digital information is signed once, but verified often, it may well be more advantageous to have faster verification. The tradeoffs and issues involved have been explored by Wiener [Wie98]. There has been work by many authors including Naccache et al. [NMR94] on developing techniques to improve the efficiency of DSA, both for signing and verification.
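For a feel of how DSA signing and verification are used in practice today, here is a minimal sketch with the Python cryptography package (a modern library API, not the original DSS reference); the 2048-bit key size is just an illustrative choice.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa
from cryptography.exceptions import InvalidSignature

private_key = dsa.generate_private_key(key_size=2048)  # DSA key pair
public_key = private_key.public_key()

message = b"a piece of digital information"
signature = private_key.sign(message, hashes.SHA256())  # signature generation

try:
    public_key.verify(signature, message, hashes.SHA256())  # verification
    print("signature is valid")
except InvalidSignature:
    print("signature is NOT valid")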
Although several aspects of DSA have been criticized since its announcement, it is being incorporated into a number of systems and specifications. Initial criticism focused on a few main issues: it lacked the flexibility of the RSA cryptosystem; verification of signatures with DSA was too slow; the existence of a second authentication mechanism was likely to cause hardship to computer hardware and software vendors, who had already standardized on the RSA algorithm; and that the process by which NIST chose DSA was too secretive and arbitrary, with too much influence wielded by the NSA. Other criticisms more related to the security of the scheme were addressed by NIST by modifying the original proposal. A more detailed discussion of the various criticisms can be found in [NIS92], and a detailed response by NIST can be found in [SB93]. | <urn:uuid:a1ac3161-08a5-4258-9d29-10f167f760a0> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/dsa-and-dss.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00244-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963807 | 539 | 3.25 | 3 |