Most IT Security Breaches Due to Human Error But…
According to a new survey from the Computing Technology Industry Association (CompTIA), provider of A+ and Security+ certification, human error is the primary cause of IT security breaches. The survey also showed that training and preparation can help organizations limit the impact of security breaches.
Even though the 900 organizations surveyed this year reported higher awareness of IT security threats, more emphasis on security practices and procedures and more spending on preventive measures, 84 percent said human error was to blame, at least in part, for their last major security breach. In last year’s survey, only 63 percent of security breaches were blamed on human error. Almost 60 percent of organizations said they had experienced at least one major IT security breach in the past six months. A major IT security breach is one that causes real harm, results in loss of confidential information or interrupts operations.
Training and awareness can help mitigate the effects of these breaches though. “Human knowledge and action are critical to making networks and IT infrastructure secure,” said John Venator, president and CEO of CompTIA. “And while awareness of the threat posed by IT security breaches has increased dramatically, many organizations have been slow to make the appropriate investments in time and budget to properly address these threats.”
According to the survey, training and certification had a positive impact on security. Organizations that trained at least a quarter of their IT staff in security were less likely to experience a departmental security breach than those that trained a smaller share of their IT staff. The benefit extended beyond the IT department as well: 80 percent of organizations that invested in staff security training said it helped improve security, and 70 percent of the organizations that invested in security certification said that it helped improve security.
Training and certification lead to better identification of potential risks, higher awareness of security issues, better security measures and the knowledge and skill to respond rapidly to problems.
For more information, see http://www.comptia.org.
Nippon Telegraph and Telephone Corporation, Mitsubishi Electric Corporation and the University of Fukui have jointly developed an authenticated encryption algorithm offering robust resistance to multiple misuse.
The algorithm has been entered in the Competition for Authenticated Encryption: Security, Applicability, and Robustness (CAESAR) project, based on which the algorithm is expected to be deployed for increasingly secure and reliable information technology.
The new algorithm’s major advantage is its resistance to multiple misuse in authenticated encryption operations that provide simultaneous confidentiality and integrity.
One misuse problem is that an attacker can forge a message if plaintexts are released before their integrity is verified. Once a conventional system outputs decrypted plaintext from tampered data without authenticating it, the attacker can pass tampered data off as genuine. Whereas this occurs with many conventional systems, the new algorithm fixes the problem, thereby enabling relatively low-memory devices to handle large-volume data safely.
Another typical problem is the reuse of a nonce. In the case of a common authenticated encryption algorithm called Advanced Encryption Standard with Galois/Counter Mode (AES-GCM), a non-repeatable special parameter, or nonce, is required to achieve security, and that security largely collapses if a nonce is reused. The new algorithm fixes this problem and maintains security even when a nonce is reused multiple times.
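To see why nonce uniqueness matters in practice, here is a minimal Python sketch using the third-party cryptography package; the package choice is an assumption for illustration of AES-GCM in general, not of the new NTT/Mitsubishi algorithm:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aesgcm = AESGCM(key)

    # The nonce must be unique for every message encrypted under a given key.
    nonce = os.urandom(12)
    ciphertext = aesgcm.encrypt(nonce, b"transfer 100 to account 42", None)

    # Decryption also verifies integrity; tampered data raises InvalidTag.
    plaintext = aesgcm.decrypt(nonce, ciphertext, None)

    # Encrypting a second, different message with the SAME key and nonce is
    # the misuse described above: it lets an attacker combine the two
    # ciphertexts to learn plaintext relationships and forge tags.

A misuse-resistant design like the one described in this article aims to degrade gracefully in that situation instead of failing completely.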
The new algorithm accepts messages longer than the 64-gigabyte limit of AES-GCM, and it works faster than AES-GCM on many platforms.
CAESAR is a competition organized to thoroughly evaluate authenticated encryption algorithms by testing their resistance to multiple third-party cryptanalyzing attacks to prove their security, applicability and robustness. Algorithms that receive third-party cryptanalysis through CAESAR are expected to gain wide acceptance, which is why this new algorithm has been submitted to the competition. Candidate algorithms will be screened annually and the first results will be announced on January 15, 2015, with the final results to be announced on December 15, 2017.
Based on the results of the CAESAR competition, NTT and Mitsubishi Electric intend to research and develop services and products for machine-to-machine (M2M) applications incorporating their new algorithm, thereby contributing to increased security and reliability in information technology.
A new jobs study from Brookings analyzed data supplied by Burning Glass. The report details a shortage of science, technology, engineering, and math (STEM) professionals and reveals that STEM positions take longer to fill than openings in other fields.
The report states, “These job openings data provide new evidence that, post-recession, STEM skills, particularly those associated with high levels of educational attainment, are in high demand among employers. Meanwhile, job seekers possessing neither STEM knowledge nor higher education face extraordinary levels of competition for a scarce number of jobs.”
The Brookings report went on to say, “Governments at all levels, educators, training organizations, and civic leaders can utilize job vacancy data to better understand the opportunities available to workers and the specific skills required of them. Improving educational and training opportunities to acquire STEM knowledge should be part of any strategy to help unemployed or low-wage workers improve their earnings and employability.”
The Numbers And What They Mean
STEM job numbers show a sharp drop-off after the initial job listing, with just two-thirds of the jobs listed finding a likely candidate within the first 33 days. One-fifth of those job postings stay up at least 70 days, and half of those call for high levels of STEM knowledge.
The data shows that filling STEM jobs becomes more difficult when the level of knowledge required to fill that job increases. Other factors that result in longer durations for filling STEM jobs include a higher level of education and the higher pay that results from that, as well as possessing skills more highly valued in association with STEM career fields.
Advertising Times Increase With Education Requirements
As a result of this, STEM vacancies entail longer advertising times for job vacancies when compared to non-STEM occupations. The study results also show that as the education requirements for a vacancy increase, so does the required time to fill it. And while the job advertisement requirements for filling a STEM job vary according to the level of education required, STEM jobs in general show a longer duration overall when it comes to filling them.
A survey of over 1,700 executives nationwide by the Technology Councils of North America (TECNA) has shown a glaring shortage of qualified individuals in the technology sector. According to Steven G. Zylstra, TECNA chairman, in a November 2013 BSM article, “Companies are feeling better about business conditions, but the talent shortage issue has the potential to sidetrack growth.”
And according to a Huffington Post article, during the period from 2009 to 2012, there was a shortage of STEM professionals, with nearly two STEM-related jobs being posted for every person with the required knowledge to fill the position.
In 2010, 25 percent of new worms were specifically designed to spread through USB storage devices connected to computers, according to PandaLabs. These threats can copy themselves to any device capable of storing information, such as cell phones, external hard drives, DVDs, flash memories and MP3/MP4 players.
This distribution technique is highly effective. A survey gathering responses from more than 10,470 companies across 20 countries revealed that approximately 48 percent of SMBs (with up to 1,000 computers) admit to having been infected by some type of malware over the last year, and 27 percent of those confirmed that the source of the infection was a USB device connected to a computer.
So far, these types of infections are still outnumbered by those that spread via email, but it is a growing trend. “There are now so many devices on the market that can be connected via USB to a computer: digital cameras, cell phones, MP3 or MP4 players,” says Luis Corrons, Technical Director of PandaLabs. “This is clearly very convenient for users, but since all these devices have memory cards or internal memory, it is feasible that your cell phone could be carrying a virus without your knowledge.”
How does it work?
There is an increasing amount of malware which, like the dangerous Conficker worm, spreads via removable devices and drives such as memory sticks, MP3 players and digital cameras.
The basic technique used is as follows: Windows uses the Autorun.inf file on these drives or devices to know which action to take whenever they are connected to a computer. This file, which sits in the root directory of the device, offers the option to automatically run part of the device's content when it connects to a computer.
By modifying Autorun.inf with specific commands, cyber-crooks can enable malware stored on the USB drive to run automatically when the device connects to a computer, thus immediately infecting the computer in question.
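As a rough, hypothetical illustration of the defender's side of this, the Python sketch below parses an autorun.inf from a drive and reports what Windows would be told to execute; the drive path and file layout are assumptions for the example, and this is not a Panda Security tool:

    import configparser
    import pathlib

    def inspect_autorun(drive_root):
        """Report the command an autorun.inf would hand to Windows AutoRun."""
        path = pathlib.Path(drive_root) / "autorun.inf"
        if not path.exists():
            return None
        cfg = configparser.ConfigParser(strict=False)  # real files are often sloppy
        cfg.read(path)
        if not cfg.has_section("autorun"):  # section may also appear as [AutoRun]
            return None
        # The 'open' and 'shellexecute' keys name what runs on connection.
        for key in ("open", "shellexecute"):
            if cfg.has_option("autorun", key):
                return cfg.get("autorun", key)
        return None

    print(inspect_autorun("E:/"))  # e.g. 'malware.exe' on an infected stick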
To prevent this, Panda Security has developed Panda USB Vaccine, a free product which offers a double layer of preventive protection, disabling the AutoRun feature on computers as well as on USB drives and other devices.
A recent post on Government Technology recounts the stressful tale of a data center fire in an Iowa government facility and the subsequent scramble to restore operations. Despite a second available data center, the team chose to get the original facility up and running. Thanks to the fire department and their own fire suppression systems, the equipment was salvaged and back online in just twelve hours.
With the amount of electrical equipment and heat generated in a data center, fires are a real and constant threat. Whether your company has in-house infrastructure or hosts with a data center service provider, knowing which suppression systems to use and how to respond if they fail is essential to avoid downtime should disaster strike.
Traditional sprinkler systems aren't ideal on the data center floor. Although most municipalities require a water sprinkler system in addition to other methods, flooding the room with H2O leaves a few thousand square feet of bricked equipment that must then be replaced. Hooray, you saved your servers from the fire—now they're waterlogged!
To avoid these issues and still meet requirements for water fire suppression systems, many data centers (including Green House Data) use a pre-action system. Conventional sprinklers store water in the overhead pipes, so as soon as it is hot enough, the sprinklers release water. With pre-action systems, a fire detection event (heat or smoke detector) and/or sprinkler activation must occur before water is pumped into the overhead pipes. With a double-interlock system, both of these must occur. If the sprinkler operates or a leak influences the air pressure, a trouble alarm will go off instead.
An early detection (VESDA) system can also alert employees to smaller fires before the sprinklers are activated, giving them a chance to put out the fire with another less destructive method.
Using a dry suppression system in conjunction with the required sprinklers will protect equipment, as the dry system will deploy prior to the sprinklers. This is known as a double-interlock system.
Halon was the most common gas used to extinguish fires for some time, but it turns out it wreaks havoc on the atmosphere, destroying significant chunks of ozone. The two main alternatives for waterless firefighting gases are clean agent systems and inert gases: clean agents remove heat, while inert gases remove oxygen (remember your fire triangle?). Generally, inert gas requires more storage space.
Some of these gases can be corrosive to equipment, but the large majority cause no damage to IT infrastructure. They also limit smoke and debris damage in the rest of the facility, whereas water can carry ash and other particles throughout the building.
At Green House Data, we use 3M Novec Fire Protection Liquid, which is non-conductive, non-corrosive, evaporates cleanly, and can even be used on active, energized equipment. This material also does not deplete the ozone layer, with a five-day atmospheric lifetime and a global warming potential of just 1.
A fire is just one of many types of disaster for which companies must be prepared. Organizations with a second site or a disaster recovery plan may be more inclined to stick with traditional sprinkler systems, as in the rare case of a fire they can simply move their systems to the backup location while they restore equipment in the primary data center. This is still an expensive proposition, but inert gas or clean agent systems are also pricier than sprinklers, so a cost-risk assessment is in order.
For large enterprises or companies that handle lots of sensitive data, a second data center site may be the most prudent option, but even companies in these circumstances are turning to cloud-based disaster recovery more often. Deploying a completely redundant second location for critical systems can be pricey and time-consuming, and the backup systems may not be tested and ready to go on the day of a data center fire. A cloud disaster recovery solution can be a cost-effective way to set up a failover site, whether close to the original location or in a geographically diverse setup.
Even the best-prepared data centers can be unlucky enough to fall victim to a fire, and if clean agent or inert gas systems fail, water or the fire itself can destroy equipment. A disaster recovery plan is necessary to move operations to a second site or, as in the case of the Iowa government data center, to guide the restoration process in a timely and calm manner.
Posted By: Joe Kozlowicz
White Paper: Five Steps for Protecting Australian Government Information
According to the Information Security Manual (ISM), the primary cyber threat to Australia is cyber exploitation: malicious activities designed to silently gather information from ICT systems. The disclosure of sensitive commercial or government information can threaten national interests. The disclosure of sensitive personal information can enable malicious activities against individuals. The security of sensitive government and commercial information is critical for ensuring that Australia continues to be a safe place to do business online. This paper outlines Five Steps to protect critical information.
The name says it all. Google Translate is a free translation service from Google. Its main purpose is to translate text, speech, images, real-time video and website pages from one language to another.
Function of Google Translate
The service can be put to more uses than you might imagine. Google Translate has built-in pronunciation for some of the languages it translates.
For easier understanding, it highlights the corresponding words or phrases between the source text and the translation. Many users treat Google Translate as a language dictionary because of this flexibility.
The web interface also automatically suggests alternative translations and corrections for user mistakes. The service lets users input the material to be translated as text, handwriting, keyboard entry or speech.
Google Translate English to Spanish
Translating from English to Spanish is very simple. All you have to do is use the online service or download the app for Android or iOS (iPhone, iPad, iPod, tablet, etc.).
Follow the steps below to translate from English to Spanish with Google Translate (a programmatic alternative is sketched after the steps):
- Visit: www.translate.google.com
- On the left side, select English.
- On the right side, click the little drop-down icon and select Spanish.
- Go back to the box on the left side, type anything in English and watch it translate automatically into Spanish.
- You can copy the translated text for use anywhere you need it.
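For developers, the same translation can also be scripted. The sketch below uses the unofficial, third-party googletrans Python package; the package is an assumption for illustration, is not an official Google product, and its API may change:

    from googletrans import Translator  # pip install googletrans

    translator = Translator()
    result = translator.translate("Good morning", src="en", dest="es")
    print(result.text)  # "Buenos días"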
Google Translate other languages
It can translate more than 100 languages. You can use the method above to translate between any other languages of your choice, whether by text, handwriting or speech. The service is updated on a regular basis by the Google team.
Google Translate app
The app comes in versions for different platforms, making translation flexible and easy for its users.
Download the Google Translate app for Android and iOS.
Who can Access Google Translate
Google Translate has no restrictions; anybody can make use of the service. To enjoy it, you can download the app for your Android or iOS device, or simply use the service directly online.
Technical Characteristics of RFID
The term RFID denotes a range of wireless identification technologies, not a single device type. One way to classify tags is according to their source of power. The most inexpensive and compact tags are passive, meaning that they derive all of their transmission power from the reading device. Passive tags are also the most physically robust RFID tags. Active tags contain batteries, and are capable of broadcasting at much longer distances than passive ones. (Loosely speaking, your mobile phone is a sophisticated active RFID tag.) Semi-active tags make use of battery power to run local circuitry, but use reader power for communication.
Another important axis of classification is the frequency at which an RFID tag operates. In general, lower frequencies have shorter associated ranges, but offer better penetration of materials; higher frequencies offer greater range, but are subject to greater physical interference. The two most important RFID-frequency categories are as follows:
Ultra-High Frequency (UHF): UHF tags operate in the 868-956 MHz frequency band. This is the same part of the radio spectrum in which cordless phones and some mobile phones operate. UHF RFID tags will see the widest use in supply-chain and retail applications. One of the big benefits of passive UHF tags is that they have a range, in many environments, of over ten feet (and sometimes as much as tens of feet). Additionally, RFID readers can scan hundreds of UHF tags simultaneously.
A major drawback of UHF tags is that they cannot be easily read in the presence of high concentrations of liquids, as found in such things as beverage containers and human beings!
High-Frequency (HF): By comparison with UHF tags, passive HF tags have the drawback of low transmission range -- generally on the order of just over a foot. In general, they are also larger than UHF tags; flat HF tags are typically about 50mm by 100mm in size. HF tags, however, have the advantage of being readable in the presence of water.
HF tags operate at 13.56 MHz, a frequency known as the industrial-scientific-medical (ISM) band. HF tags are popular in some smartcard applications and also for various industrial uses.
Other frequencies: RFID tags also come in a low-frequency (LF) variety operating at 120-140 kHz. These tags tend to be popular for use in building-access badges and animal tagging. RFID tags can also operate at higher UHF frequencies, most notably at 2.45 GHz.
In order for an RFID reader to identify many tags in its read range, it must engage with the tags in what is known as an anti-collision or singulation protocol. If all tags were to transmit to the reader simultaneously, then their signals would interfere with one another, rendering reading ineffective. A singulation protocol addresses this problem by enabling tags to take turns in transmitting to a reader.
For UHF tags, singulation is generally a variant of a protocol known as tree-walking. Briefly stated, in tree-walking, the space of k-bit identifiers is viewed as the leaves in a tree of depth k. A reader traverses the tree, asking subsets of tags to broadcast a single bit at a time. A feature of the basic tree-walking protocol is that the RFID reader broadcasts tag serial numbers over very large distances, which can introduce vulnerability to eavesdropping.
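A toy simulation makes the idea concrete. The Python sketch below is a simplified model of tree-walking, not a real air-interface protocol: each tag has a fixed-length binary ID, and the reader recurses into every subtree from which it hears a reply.

    def tree_walk(tags, prefix=""):
        """Singulate a set of k-bit tag IDs by walking the binary ID tree."""
        # Tags whose ID extends `prefix` answer with their next bit; hearing
        # both a 0 and a 1 is a collision, so the reader visits both subtrees.
        answers = {t[len(prefix)] for t in tags
                   if t.startswith(prefix) and len(t) > len(prefix)}
        if not answers:
            return [prefix] if prefix in tags else []
        found = []
        for bit in sorted(answers):
            found += tree_walk(tags, prefix + bit)
        return found

    print(tree_walk({"0011", "0101", "0110", "1100"}))
    # ['0011', '0101', '0110', '1100']

Because the reader broadcasts each ID prefix while walking the tree, an eavesdropper within range of the reader can reconstruct tag serial numbers, which is the vulnerability noted above.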
The anti-collision protocol used in HF tags is generally a variant of the classic ALOHA protocol. Briefly stated, tags in the ALOHA protocol transmit their identifiers to the reader at a variety of randomly determined times so as to avoid transmission collisions. ALOHA-based RFID reading leaks less information than most UHF tree-walking protocols. On the other hand, most HF readers are capable of scanning only several dozen tags simultaneously.
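In the same toy style, the sketch below models a framed, slotted variant of ALOHA (a simplification; real readers also adapt the frame size between rounds): each unread tag picks a random slot, and only slots containing exactly one transmission yield a successful read.

    import random

    def framed_slotted_aloha(num_tags, frame_size, seed=1):
        """Count reader rounds needed to identify every tag."""
        rng = random.Random(seed)
        unread, rounds = num_tags, 0
        while unread:
            slots = {}
            for tag in range(unread):
                slots.setdefault(rng.randrange(frame_size), []).append(tag)
            # Singly-occupied slots are successful reads; collisions retry.
            unread -= sum(1 for occ in slots.values() if len(occ) == 1)
            rounds += 1
        return rounds

    print(framed_slotted_aloha(num_tags=30, frame_size=16))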
The least expensive RFID tags, such as basic EPC tags, are read-only. Writeable tags are more expensive, while rewritable tags (containing EEPROM) are still more expensive. In a highly networked environment, however, large amounts of information can easily be associated with read-only tags in a database; in this case the tag simply serves as a pointer to an associated database entry.
Cryptography and Security
The tags that will be most inexpensive and most prevalent, such as basic EPC tags, lack the computing power to perform even basic cryptographic operations. (They will have about 500-5000 gates, many devoted to the basic tag functions. By contrast, the Advanced Encryption Standard (AES) requires some 20,000-30,000 gates.) Such tags are at best capable of employing static keys, i.e., PINs and passwords as security mechanisms. For example, the "kill codes" used to disable EPC tags for purposes of privacy, are secured by PINs. The limited capabilities of such RFID tags make privacy and security enforcement a special challenge.
More expensive RFID tags are capable of advanced functionality, and often include the ability to perform basic cryptographic algorithms, such as symmetric-key encryption and challenge-response identification protocols. (Public-key cryptography is expensive, and used on few RFID tags.)
In this multi-part series, we'll examine the effects of Cisco IOS "network" statements for various IP routing protocols. Let's start with the IGPs (Interior Gateway Protocols). The IGPs for which Cisco IOS uses network statements are RIP, EIGRP and OSPF.
Note that Cisco IOS does not use network statements for IS-IS (another IP IGP). Also, BGP, which is an EGP (Exterior Gateway Protocol), uses its network statements differently, so we’ll discuss it later.
Refer to the example topology shown in Figure 1:
As you can see, we have multiple logical networks connected to the router: two subnets of the class A network 10.0.0.0 (10.1.1.0/24 and 10.2.2.0/24), one subnet of the class B network 172.16.0.0 (172.16.1.0/24), and a fourth network on the Fa0/0 interface.
What we want to do is get RIP running on the Fa0/1, Fa0/2 and Fa0/3 interfaces, but not on Fa0/0. To do this, we use “network” statements under RIP, as follows:
router rip
 network 172.16.0.0
 network 10.0.0.0
Note that a “network” statement has two functions:
- It tells the router on which interfaces to run the routing protocol. Since we have network statements that cover the 10.0.0.0 and 172.16.0.0 networks, the Fa0/1, Fa0/2 and Fa0/3 interfaces will run the routing protocol. The exact meaning of this varies by protocol, but in the case of RIP, it means the router starts sending RIP updates on those interfaces and listening for incoming RIP updates.
- It tells the router to inject the logical networks of the interfaces into the routing protocol. In our example, those would be the 10.1.1.0/24, 10.2.2.0/24 and 172.16.1.0/24 prefixes.
Note that when it comes to advertising the prefixes to neighbor routers, the exact result takes into account the type of route summarization, if any, that is being performed by the routing protocol. Let’s look at this in more detail. In our case, the router will advertise the following prefixes on its interfaces:
- Fa0/0 – nothing (this interface is not running the protocol)
- Fa0/1 – 10.1.1.0 and 172.16.0.0
- Fa0/2 – 10.0.0.0
- Fa0/3 – 10.2.2.0 and 172.16.0.0
Recall that RIPv1 is a classful protocol, meaning that the updates do not contain subnet masks. Because of this, RIPv1 performs automatic route summarization at the boundary between classful networks, which is why the router is advertising the classful network 172.16.0.0 on Fa0/1 and Fa0/3, and likewise 10.0.0.0 on Fa0/2. Note that the network 10.0.0.0 subnets are advertised on the interfaces belonging to that network (10.1.1.0 on Fa0/1, and 10.2.2.0 on Fa0/3). Since classful protocols do not allow VLSM (Variable-Length Subnet Masks), the assumption is that any neighbor routers will be using the same subnet mask within that classful network.
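The classful summarization rule itself is easy to express in code. The Python sketch below is purely illustrative (nothing like it runs inside IOS); it computes the natural network that a classful protocol such as RIPv1 would advertise across a classful boundary:

    import ipaddress

    def classful_network(addr):
        """Return the natural (classful) network containing an IPv4 address."""
        first_octet = int(addr.split(".")[0])
        if first_octet < 128:
            prefix = 8    # Class A: 0.0.0.0 - 127.255.255.255
        elif first_octet < 192:
            prefix = 16   # Class B: 128.0.0.0 - 191.255.255.255
        else:
            prefix = 24   # Class C and above (classes D/E not handled here)
        return ipaddress.ip_network(f"{addr}/{prefix}", strict=False)

    print(classful_network("10.2.2.1"))    # 10.0.0.0/8
    print(classful_network("172.16.1.1"))  # 172.16.0.0/16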
Next time, we’ll look more closely at the actions of network statements under RIP.
Author: Al Friebe
In an increasingly interconnected world that’s reliant on technology for critical services, a number of states are tightening the coordination between IT professionals in government and industry to minimize the potential impact of a disruption to computerized systems.
Rhode Island, Massachusetts and New Hampshire are coordinating plans for responding to interruptions in services due to cyber-attacks or natural disasters that disrupt computer systems that facilitate critical services.
Government IT departments in the region have traditionally done a good job of maintaining, securing and restoring their cyber-infrastructure, according to Adam Wehrenberg, project director of the New England Regional Catastrophic Preparedness Initiative. But there was a coordination gap between IT and emergency management. “As our world increasingly hinges on technology, we have to shift thinking so that we begin to view cyber-disruptions as potentially significant events, rather than just inconveniences,” Wehrenberg wrote in an e-mail. “Cyber-disruption may not result in a simple e-mail outage, but may be the cause (or effect) of a much greater emergency.”
In 2009, Rhode Island officials met with representatives from hospitals, financial institutions, colleges, universities, the military, cable and communications industries, and utilities to identify who the stakeholders were and who could contribute resources to a cyber-disruption response team (CDT).
Go to Emergency Management to read about the cyber-disruption response team.
By Fernando Arnaboldi
Recursion is the process of repeating items in a self-similar way, and that’s what the XML Entity Expansion (XEE) is about: a small string is referenced a huge number of times.
Technology standards sometimes include features that affect the security of applications. In 2002, Amit Klein found that XML entities could be used to make parsers consume an unlimited amount of resources and then crash, an attack now known as the billion laughs attack. When the XML parser tries to resolve the nested entities included in the document, the application starts consuming all available memory until the process crashes.
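The structure of the attack is compact. Below is the classic payload, truncated to three entity levels and wrapped in a Python string purely for illustration; the full version nests around ten levels, so a single root reference expands to roughly a billion copies of the base string:

    payload = """<?xml version="1.0"?>
    <!DOCTYPE lolz [
      <!ENTITY lol "lol">
      <!ENTITY lol2 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
      <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
    ]>
    <lolz>&lol3;</lolz>"""

    # Each level multiplies the references tenfold: ten levels yield 10**9
    # "lol"s (around 3 GB of text) from a document of a few hundred bytes.
    # Hardened parsers, such as the third-party defusedxml package, refuse
    # to expand such entities instead of exhausting memory.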
In the last couple of days you cannot fail to have seen the huge number of media articles about the so-called Heartbleed bug. In this article, we'll try and answer some of the common questions that users of Apple products have raised about this issue.
What is the Heartbleed bug?
The Heartbleed Bug is a serious vulnerability that could lead to malicious hackers spying on what were thought to be secure Internet communications. A programming bug in the widely-used OpenSSL software library could allow information to be stolen, which—under normal conditions—would be protected by SSL/TLS encryption.
Typical information which could be stolen includes email addresses and passwords, and private communications; data which normally you expect to be transmitted down the equivalent of a "secure line."
As well as "Heartbleed," the bug is also known officially by the rather nerdy name of CVE-2014-0160.
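For the technically curious, the flaw boils down to code trusting a length field supplied by the other end of the connection. The tiny Python sketch below is only a conceptual toy model of that buffer over-read, not OpenSSL's actual code:

    # Pretend this is a slice of the server process's memory: the received
    # heartbeat payload sits right next to other people's secrets.
    process_memory = b"hi" + b"; session-key=7f3a9c; user=alice; pw=hunter2"

    def heartbeat_reply(payload, claimed_length):
        # Vulnerable pattern: echo `claimed_length` bytes starting at the
        # payload without checking the claim against the payload's real size.
        start = process_memory.find(payload)
        return process_memory[start : start + claimed_length]

    print(heartbeat_reply(b"hi", 2))   # b'hi' -- an honest heartbeat
    print(heartbeat_reply(b"hi", 40))  # leaks adjacent memory contents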
How long has this bug existed? It sounds like it's really bad.
Yes, it is really bad. I hope you're sitting down. It looks like it's been around for two years.
Does that mean people have been able to scoop up private information for the last couple of years?
In theory, yes.
Has that been happening? I mean, have bad guys been stealing information this way?
We simply don't know. Exploitation of the bug leaves no trace, so it's hard to know if anyone has been abusing it. However, lots of people have demonstrated in the last couple of days that the bug can be exploited, and they've proven that it works.
What versions of OpenSSL are vulnerable?
OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable. OpenSSL 1.0.1g, OpenSSL 1.0.0 branch and OpenSSL 0.9.8 branch are NOT vulnerable.
Am I at risk if I use a Mac? What about an iPhone or iPad?
Unfortunately this bug doesn't care what kind of device you are using to communicate via the Internet. This means that iPhones, iPads and Macs are just as much at risk as, say, a computer running Windows 8.1.
Is there a fix?
Yes. A new version of OpenSSL, version 1.0.1g, was released this week. Internet companies are scrambling to update vulnerable servers and services. Some sites weren't vulnerable in the first place; others have since fixed their systems.
Have any big websites been shown to be vulnerable to the Heartbleed bug?
Is Yahoo big enough for you? Some researchers have uncovered hundreds of Yahoo users' passwords and email addresses by exploiting the flaw. Other big websites reported to have been affected include Flickr, Imgur, OKCupid, Stackoverflow and Eventbrite.
Can Apple roll out the patch for the bug?
Unfortunately this isn't a bug in Apple's software or hardware. The bug exists in open source software that some web servers and networked appliances use to establish secure SSL connections. In other words, there is no patch for your computer or smartphone or tablet computer, as the problem exists on the websites themselves.
There is a version of OpenSSL shipped with OS X Mavericks 10.9, but it is unaffected by the bug.
How can I test whether a website is impacted by the Heartbleed bug or not?
Several web-based checking tools appeared within days of the bug being disclosed: you enter a site's address, and the tool reports whether the server appears to be running a vulnerable version of OpenSSL.
Are Apple's own websites secure, or are they affected by the vulnerability?
Tests indicate that Apple's own websites are not impacted by the bug.
Where can I find out more about Heartbleed?
Check out this webpage all about the Heartbleed bug by the folks at Codenomicon.
A fresh malware attack targeted specifically at businesses and consumers who use Facebook has been devised, making use of social engineering and phishing.
The Comodo Threat Research Lab team has found that the Facebook malware campaign tries to pass itself off as email from Facebook stating that there is a new message for the recipient. The sender's name is branded as Facebook, but the sender's email address comes from various other domains not in any way related to the Facebook company.
The subjects of the emails are pretty straightforward: A brief vocal e-mail was delivered; an audio announcement has been delivered!; an audible warning has been missed; you got a vocal memo!; you recently missed a short audible notice; and Ein Videohinweis wurde vermisst! (German for “a video note was missed”).
“In this age of cyber attacks, being exposed to phishing is a destiny for every company, well-known or not. It may not be the most groundbreaking attack method cyber-criminals use—but there’s no denying that cyber-criminals are becoming more clever when crafting their messages,” said Fatih Orhan, director of technology for Comodo and the Comodo Threat Research Lab, in a blog. “More frequently, they’re using well-known applications or social platforms and also action-oriented language in the subject lines to entice recipients to open the emails, click the links or attachments and spread the malware.”
Each subject line ends with a set of random characters, like 'sele' or 'Yqr', which are most likely used to bypass antispam products. The malware itself is sent as a .zip attachment; inside the zip file is an executable containing a variant of the Nivdort malware family.
Nivdort is identified as a trojan that interferes with internet connections and prevents the user from accessing websites. It also distributes a large number of malicious files throughout a victim’s hard drive, which can be used to exploit the user's computer to install ransomware applications and other remote controlled malware.
The initiative is very similar to a campaign that targeted WhatsApp users earlier in the month. As part of a random phishing campaign, cyber-criminals were sending fake emails representing the information as official WhatsApp content to spread malware when victims clicked on the attached “message.” The Facebook effort was most likely designed by the same perpetrators.
“Users should be cautious of any email that requires information or that redirects to a URL web page— and especially if there is a file download,” said Orhan.
The influence of the digital world is having more of an impact on our everyday lives than we could have predicted. Although we don’t yet have flying cars, in many ways we’re living in a pretty futuristic society, and a lot of that is thanks to the Internet – specifically, a concept known as “the Internet of Things.” This concept has been quietly changing the way that we connect with the world around us, as well as how the world can connect with us through integrated technology. Combine this innovation with another forward-thinking concept referred to as “the sharing economy” and you have a partnership that could have a dramatic impact on our society’s industries.
So where can you see this combination in action right now, and what does the future hold for this dynamic duo? Let’s find out:
Behind the Concepts
The Internet of Things and the sharing economy both appear quite complicated, but in fact, they're pretty easy to understand. The former—although it sounds like a metaphysical concept at first—refers to how "smart" digital technology is beginning to be installed in everything from roads to cars to streetlights. This technology is tapped into an information grid that can obtain data from various points and use it to adjust for convenience and speed; essentially, machines talk to each other, and inform each other in ways that can make our lives easier.
“The Internet of Things really comes together with the connection of sensors and machines,” explains this article about IoT from Wired. “That is to say, the real value that the Internet of Things creates is at the intersection of gathering data and leveraging it.” And it all means that big things could be ahead for how our environment interacts with us.
The sharing economy is something that goes hand-in-hand with the Internet of Things. It’s more than just Craigslist and Uber – it means using social networks to find ways to share resources or make money over the Internet. The sharing economy is one that removes the middleman of a sales broker or a bank; two people can arrange the sale of a product or service online and then use an app to pay for it. The digital world is making the sharing economy that much easier, and when you add the Internet of Things, that’s where it starts to get interesting.
A Symbiotic Partnership
Anybody can buy or sell a product or service online, but everyone has a few basic necessities: that the transaction be secure, that it be quick, and that the process be seamless. And although you can go through an online car-rental booking process step by step, or send emails back and forth with a prospective buyer of your unused furniture, many people in the tech industry know that there can be a better way. Enterprising individuals have been finding ways to apply the Internet of Things to the sharing economy to make it more advanced and more convenient for people to use.
The example you’ll find used most often is AirBnB. The revolutionary system that allows homeowners to rent out their residences was created to enable direct communication between hosts and travelers, creating a more personal – and economically viable – alternative to large-scale hotels. This is the sharing economy: the idea of taking a space that one might not be using and allowing another person to pay a fee to occupy it.
Now imagine how the Internet of Things could be used to make renting through AirBnB even more advanced. GigaOM talks about the idea for a type of “smart key” called Lock-Bot, which offsite AirBnB owners could use to allow guests to access the units they’ve rented. Here’s how Lock-Bot would work:
“The cylindrical device combines cellular, Wi-Fi and RFID. After a booking is confirmed, the renter is sent a code that is used to unlock the Lock-Bot, which opens to reveal a physical key attached to an RFID keychain. At the end of the rental, the renter returns the key to the Lock-Bot, which in turn notifies the owner that the key has been returned.”
Although this concept didn’t reach its Kickstarter goal, GigaOM notes that it would be wise for AirBnB to consider utilizing something like this in the future. Making sure that the home rental process is both secure and streamlined using the Internet of Things could put AirBnB light years ahead in its field. “As the cost of connectivity continues to drop, sharing economy companies will have increasing incentives to equip their customers with tools that make their members more competitive with centrally managed ways of accessing services,” says the article.
What the Future Holds
Aside from new advances in secure transactions, combining the Internet of Things with the sharing economy could yield some interesting results in the near future. For example, a post on the IBM blog envisions how much easier car rental could be: “Discovery can occur in real-time, just looking around for cars that announce they’re available for rent through Bluetooth beacons. Unlock and drive away using your smartphone or wearable as a kind of Bluetooth identity card, and upon return, a GPS can let the owner know where the car has been returned and calculate miles traveled for an automatic charge to your credit card.” Definitely a lot smoother than having to pick up a physical key and drop it off in a specified location afterward.
The blog post goes on to point out that many of our useful goods sit idle throughout the day – everything from our cars to our home appliances – and that finding a way to profit from them would “reshape entire global industries.” This sort of movement feels almost like a revolution: a new way to put money in the hands of the people who know how to use the Internet of Things – and the sharing economy – to their wallet’s advantage.
The Future of The Internet
In many ways, combining new technology with digital connections is creating the futuristic world that we once saw in science fiction movies. Fortunately, this isn’t including any sort of robot war – rather, the Internet of Things is working to make the world around us reach an even higher level of convenience. Put this together with the sharing economy, and you have a partnership that could go a long way in changing how industries both large and small operate across the world.
How do you think combining the Internet of Things with the sharing economy will change our lives? Tweet your thoughts to us @fieldnation.
Giard D. (McGill University), Choiniere D. (Consumaj Inc.), Cordeau S. (McGill University) and Barrington S. (McGill University and Consumaj Inc.)
Environmental Technology (United Kingdom) | Year: 2013
In-storage psychrophilic anaerobic digestion (ISPAD) is a technology allowing livestock producers to operate an anaerobic digester with minimum technological know-how and for the cost of a conventional storage cover. Nevertheless, the system is exposed to ambient temperatures and biogas production is expected to vary with climatic conditions. The objective of the project was therefore to measure ISPAD biogas production during the winter and fall seasons for a region east of Montreal, Canada. A calibrated biogas monitoring system was used to monitor biogas methane and carbon dioxide concentrations inside a two-year-old field installation with a 1000 m3 storage capacity. Despite a leaking pumping hatch, winter 2010 (January to March) methane concentrations varied directly with solar radiation and maximum exterior temperature, rather than with manure temperature at 2.4 and 1.2 m depths, which remained relatively constant between 1 and 5°C. During a six-month period from November 2009 to April 2010, inclusively, the field ISPAD degraded 34% of the manure volatile solids, corresponding to an average methane production of 40 m3/d. The ISPAD biogas production could be further increased by improving its air tightness and limiting air intrusion, and by regularly pumping out the biogas. © 2013 Taylor & Francis.
Choiniere D. and Giard D. (Consumaj Inc.)
Chemical Engineering Transactions | Year: 2012
Public awareness towards environmental odours has increased with higher living standards. Still, nowadays, no instrument can yet replace the human's specific perception of odours because of the complexity of the relationship and interaction between the odour constituting gases. Olfactometry, which is the science of measuring odours, remains the only statistical method to characterize odours and their effects on human perception. This relatively recent science has evolved since the early 1970's and is now regulated by international guidelines available in Europe (CEN 13725, 2003; VDI 3882, 2003) and North America (ASTM 679, 2011). These guidelines recommend the use of a specific instrument to characterise odours using a jury of "noses". This instrument is called an olfactometer. This paper will present a state-of-the-art stationary dynamic dilution olfactometer designed by Consumaj. This dynamic olfactometer meets the European and North American standards in olfactometry analysis. This paper will emphasize the process of design and development involved in the concept of this dynamic olfactometer in order to meet the different international standards and to provide accurate and precise odour measurements. This stationary olfactometer, named Onose-8®, is presently in operation at Consumaj laboratories, in St Hyacinthe, Canada. This olfactometer is designed to accommodate up to 16 assessors simultaneously, which meets the VDI 3882 (2003) standards. This particularity is made possible because of its nonagon (9-sided) shape providing ergonomic features for more comfort, space and ease of work to the assessors. The dilution of odorant samples with fresh air is performed using mass flow controllers that can also be automatically verified and calibrated using a protocol provided with the interface software that controls the olfactometer. Copyright © 2012, AIDIC Servizi S.r.l.
The two recent explosions involving commercial spacecraft are unlikely to deter NASA's use of private companies for future space exploration.
The fiery explosion of an Orbital Sciences rocket and the spacecraft it was carrying was followed days later by the deadly crash of a Virgin Galactic rocket ship late last week. The accidents raised questions about the readiness of the fledgling commercial space industry.
But those following the space industry say the accidents, which were unrelated, shouldn't put an added burden on NASA, which is contracting with commercial space flight companies. However, it may open an opportunity for other commercial players.
"I think this is going to raise fears about commercial space flight, but I think they are unjustified," said Howard McCurdy, a professor at American University who specializes in space policy and history. "There's a reason NASA has more than one commercial partner. I think there's a lot of redundancy or slack in the system. It's set up to handle these kinds of issues. You want everything to be perfect but it never is. Since it never is, the goal is to set up a system that can recover. That's what NASA has done."
On Oct. 29, an Orbital Sciences Cygnus spacecraft, riding aboard an Antares rocket, exploded moments after liftoff. The unmanned spacecraft had earlier been successfully used for two cargo resupply mission launches to the International Space Station.
No one was injured in the accident. The rocket and spacecraft cost more than $200 million. The Cygnus was carrying more than 5,000 pounds of supplies and scientific equipment. NASA had contracted with Orbital Sciences for eight missions.
Then on Oct. 31, the Virgin Galactic rocket ship, designed for space tourism, broke apart miles above the ground. One pilot was killed and a second was seriously injured.
The space agency is not without a means of ferrying supplies to the orbiting station, since it also has contracted with SpaceX to fly 12 resupply missions. The company has flown four successful missions and its next one is scheduled for December.
The two high-profile accidents raised speculation that commercial space flight was moving too fast and possibly too recklessly.
Scott Hubbard, an aeronautics and astronautics professor at Stanford University and former director of NASA's Ames Research Center, said the two accidents need to be looked at separately and can't be held against NASA.
"There is a commercial space community made up of many different industries, and NASA's commercial cargo and crew program has nothing to do with the commercial tourism industry," Hubbard said. "I view these two things happening as pure coincidence. I didn't attach any significance other than synchronicity to that… You need to disentangle or reduce the perception that the commercial space [industry] is all one monolith and going to hell in a hand basket."
Hubbard and McCurdy both noted that the Orbital Sciences accident, which is the only one to affect NASA, only hurts the company itself since NASA contracted with a second commercial partner, SpaceX.
McCurdy, who wrote the book Space and the American Imagination, said it may take Orbital Sciences as much as a year to piece together what caused the rocket failure. In that time, the company may not be able to fly any new missions.
That would be a big opportunity for SpaceX to step up and possibly increase the number of its contracted missions with NASA. It's also possible that if Orbital Sciences falls too far behind, NASA may look for a third commercial partner for resupply missions.
The space agency may wait a year for Orbital Sciences, though likely not much longer, according to McCurdy.
"The Antares accident will raise questions over whether more government oversight might have lessened the risk, but there is no question that privately provided services to NASA for carrying out government missions will continue," said John Logsdon, former director of the Space Policy Institute at George Washington University. "By contrast, the debate over the ethics of space tourism -- is it worth the risk -- will intensify. But people who try to climb Everest sometimes die and that does not stop the attempts."
Both Hubbard and Logsdon served on the board that investigated the deadly Columbia space shuttle accident in 2003.
The experts said the Orbital Sciences accident is not expected to slow NASA's use of commercial spacecraft to ferry supplies to the space station. It's also unlikely that it will delay NASA's goal of launching spacecraft carrying astronauts from U.S. soil by 2017.
NASA partnered with SpaceX and Boeing Co. to build spacecraft to carry astronauts to the space station, freeing the U.S. from depending on Russia to carry its astronauts. The space agency has not launched astronauts since the space shuttles were retired in 2011.
University of Pennsylvania researchers are touting their creation of an all-optical switch that uses nanowires to transmit and process information using light pulses rather than electricity.
Such a development could pave the way for quantum computers that are exponentially faster than current systems.
The Penn researchers, who published their findings in the journal Nature Nanotechnology, built their switches using tiny cadmium sulfide nanowires and combined them into a logic gate to process data. Earlier research by the team showed that cadmium sulfide nanowires are well equipped to manipulate light. (The image here shows laser light being emitted from the end of a cadmium sulfide nanowire.)
"The biggest challenge for photonic structures on the nanoscale is getting the light in, manipulating it once it's there and then getting it out," said Associate Professor Ritesh Agarwal of the Department of Materials Science and Engineering in Penn's School of Engineering and Applied Science. "Our major innovation was how we solved the first problem, in that it allowed us to use the nanowires themselves for an on-chip light source."
While we're still a ways away from commercial products using such techniques, Agarwal is buoyed by what the findings could mean.
"We see a future where 'consumer electronics' become 'consumer photonics,'" Agarwal said. "And this study shows that is possible."
The research was supported by the U.S. Army Research Office and the National Institutes of Health's New Innovator Award Program.
Bob Brown tracks network research in his Alpha Doggs blog and Facebook page, as well as on Twitter and Google+.
This story, "Penn researcher: 'We see a future where consumer electronics become consumer photonics'", was originally published by Network World.
ContactCenterWorld - Definition
On a telephone, the handset is a device the user holds to the ear to hear audio. Modern handsets typically contain a microphone as well, but in early telephones the microphone was mounted on the phone itself, which often was attached to a wall at a convenient height for talking. Handsets on such phones were called receivers, a term often applied to modern handsets. Until the advent of the cordless telephone, the handset was usually wired to the base unit, typically by highly flexible tinsel wire. A cordless phone uses a radio transceiver for the handset, and a radio transceiver for the base station. On a mobile telephone, the entire unit is a radio transceiver that communicates through a remote base station.
VPN Setup Tutorial Guide
A VPN (Virtual private network) is a secure connection between two or more endpoints. It can also be seen as an extension to a private network.
Site to Site VPN
In a site to site VPN data is encrypted from one VPN gateway to the other, providing a secure link between two sites over the internet. This would enable both sites to share resources such as documents and other types of data over the VPN link.
Remote Access VPN
In a remote access VPN scenario, also known as mobile VPN, a secure connection is made from an individual computer to a VPN gateway. This enables a user to access their e-mail, files and other resources at work from wherever they may be, provided they have an internet connection. Two common forms of remote access VPN technology, IPSec and SSL, are covered further below.
Why have a VPN
A VPN saves organisations and companies from renting expensive dedicated leased lines. VPNs give users the ability to work from home and save costs on resources such as e-mail servers and file servers, as all of these can be accessed over the VPN connection at the central site.
A real world example would be a company split into two sites (when referring to sites we mean offices): the main site in the US and a smaller site in the UK. The US site already has a full network and storage infrastructure in place, consisting of Active Directory, an Exchange server, a file server and so on. The UK site consists of only a small number of users, let's say 10 employees. To make this particular scenario cost effective, a site to site VPN connection would be the best solution. Providing a VPN tunnel from the UK site to the US site saves the cost of installing another network infrastructure, Exchange server, Active Directory server and so on. And since the US site already has administrators maintaining the servers and infrastructure, they can maintain the VPN connection as well, which is another area where savings are made.
Another cost saving option for the above example would be to close the UK site down and have the UK-based employees work from home. A remote access VPN scenario would be suited if the 10 users were not based anywhere in particular and there was no UK office. In this case they would just require an internet connection and configured VPN client software, enabling them to connect securely to their corporate network in the US. If they were using SSL VPN they would not even require configured client-side software; they would just need the URL address of the VPN portal.
So VPNs provide a superb and cost effective solution for companies with several branch offices, partners and remote users to share data and connect to their corporate network in a secure and private manner. With normal internet traffic, packets can be sniffed and read by anyone. Sending data via a VPN tunnel, however, encapsulates and encrypts all data packets, providing a high level of security. If packets sent securely over the internet were sniffed, they would be unreadable, and if they were modified, this would be detected by the VPN gateway.
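As a concrete illustration of those two properties, the Python sketch below uses AES-GCM, an authenticated cipher of the same family a VPN gateway might apply, to show that a sniffed payload is unreadable noise and that any modification is detected. This is only a toy demonstration, not an implementation of any real VPN protocol; the packet contents are invented, and it relies on the third-party cryptography package.

```python
# Illustrative sketch only (not IPSec): an authenticated cipher makes a
# sniffed payload unreadable and makes tampering detectable.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=256)  # secret shared by both gateways
aesgcm = AESGCM(key)

packet = b"GET /payroll/report HTTP/1.1"   # hypothetical inner packet
nonce = os.urandom(12)                     # must be unique per packet
ciphertext = aesgcm.encrypt(nonce, packet, None)
print(ciphertext.hex())                    # what a sniffer sees: random-looking bytes

tampered = bytearray(ciphertext)
tampered[0] ^= 0xFF                        # flip one byte in transit
try:
    aesgcm.decrypt(nonce, bytes(tampered), None)
except InvalidTag:
    print("Tampering detected - packet rejected")
```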
VPN Networking Protocols
VPN tunnels use one of four main networking protocols, each providing a sufficient level of security, as shown below:
PPTP (Point to Point tunneling protocol)
PPTP is a protocol or technology that supports the use of VPNs. Using PPTP, remote users can access their corporate networks securely from Microsoft Windows platforms and other PPP (Point-to-Point Protocol) enabled systems. This is achieved by remote users dialing into their local internet service providers and connecting securely to their networks via the internet.
PPTP has its issues and is considered a weak security protocol by many experts, although Microsoft continues to improve it and claims issues within PPTP have now been corrected. Although PPTP is easier to use and configure than IPSec, IPSec outweighs PPTP in other areas, being the more secure and robust protocol.
L2TP (Layer 2 Tunneling Protocol)
L2TP is an extension of PPTP, used by internet service providers to provide VPN services over the internet. L2TP combines the functionality of PPTP and L2F (Layer 2 Forwarding protocol) with some additional functions, drawing on some of the IPSec functionality. L2TP can also be used in conjunction with IPSec to provide encryption, authentication and integrity. IPSec is the way forward and is considered better than layer 2 VPNs such as PPTP and L2TP.
IPSec (IP Security)
IPSec operates at layer 3 and so can protect any protocol that runs on top of IP. IPSec is a framework consisting of various protocols and algorithms which can be extended and developed. It provides flexibility and strength in depth, and is an almost perfect solution for securing VPNs. The only drawback is that IPSec requires setup on both the corporate network and the client end, and is a complex framework to work with. IPSec is used for both site to site and remote user connectivity.
SSL VPN (Secure Socket Layer)
SSL VPN provides excellent security for remote access users as well as ease of use. SSL is already heavily used, for example when you shop or access your bank account online; you will notice an SSL-protected page when you see “https” in your browser URL bar as opposed to “http”.
The difference between SSL VPN and IPSec is that with IPSec a remote user requires client software, which needs installing, configuring and sometimes troubleshooting. With SSL there is no client software if the user accesses the SSL portal. The portal is a GUI interface accessed via a web browser that contains tools and utilities for reaching applications on the network, such as RDP and Outlook. SSL can also imitate the way IPSec works via lightweight client software. If a user requires this SSL client, it can be installed with very little effort via a browser, which simplifies the process of securely accessing the corporate network.
Using SSL VPN means thousands of end users can access the corporate network without the support of an administrator and possibly hours of configuring and troubleshooting, unlike IPSec. The end user just needs to know the address of the SSL VPN portal. Another advantage is that they can do this from any computer, as they do not have to rely on configured client-side software.
Advantages and Disadvantages using a VPN
VPNs eliminate the need for expensive leased lines. Historically, T1 lines have been used to connect office locations together in a secure manner. If the office locations are far apart, the cost of renting these leased lines can be unbearable. A VPN, though, only requires a broadband internet connection, avoiding a hefty monthly rental on dedicated leased lines. VPNs are also a replacement for remote access servers and dial up network connections, although these are rarely used anymore.
Having many branch offices across the globe requires many leased lines, and so does not scale well: each office would require a leased line to every other office. VPNs connecting via the Internet are a far more scalable solution than leased lines.
Through the use of link balancing and link bonding, VPNs can use two or more internet connections, so if one connection at your company has a problem, all VPN traffic can be sent over the remaining connections, automatically returning to the original connection when it is back up again.
You have to remember, though, that having a VPN means relying on the Internet and on your ISP (Internet Service Provider) being reliable, although this problem can be reduced by having two or more ISPs and using the second in a VPN failover scenario. VPNs also require careful configuration, possibly some troubleshooting, and the terminology can be overwhelming for administrators not familiar with the technology.
Setting up VPN with IPSec
Below is a basic overview of the typical way a site to site VPN is configured using IPSec. IPSec is chosen as the example because it is the most commonly used technology and is known to be solid, robust and secure.
You may be new to all the VPN terminology, so clicking on the links in this VPN article will give you a good understanding on meanings within the below guide.
Basics in setting up a site to site VPN with IPSec
Below covers what is required to set up a VPN connection on a VPN gateway with IPSec. It is not really aimed at a specific vendor and is fairly general.
First you would decide how you're going to authenticate both VPN peers to each other: either select a pre-shared key or install a digital certificate. This is used for authentication and to ensure the VPN gateways are authorised, proving their identities to each other. Both gateways must use the same type of credentials, so either both use pre-shared keys or both use digital certificates. If you are using pre-shared keys, both keys have to match.
1) You will need to specify both gateway addresses. So you would specify the address of the local VPN gateway and you would also specify the address of the remote VPN gateway. You can either specify an IP address or a domain name. On some VPN gateways you could also specify an e-mail address, or if you use a digital certificate you could specify the certificates subject field.
2) Main mode or aggressive mode can be selected. Main mode is more secure but slower than aggressive mode: in Main mode peers exchange identities with encryption, while Aggressive mode, although faster, exchanges identities without encryption. Main mode is the more commonly used. Aggressive mode is typically for when one or both of the VPN gateways have a dynamic IP address.
3) Specify whether to use NAT-Traversal. This is selected if your VPN gateway is behind a NAT device. Also specify whether you want both peers to use IKE keep-alive. This ensures that if a VPN gateway's interface is not responding, traffic will fail over to the second interface, for example when your ISP goes down and your secondary interface is a backup ISP.

4) You would now decide on your transform set. This includes the type of encryption, the authentication and how long your security association (SA) will last. For authentication you can use either SHA1 or MD5; SHA1 is the stronger authentication algorithm.
You can specify a limit before your SA expires, which adds more security to your VPN in case your keys have been compromised, although this also has a slight effect on performance.
You will need to specify a Diffie-Hellman key group, usually 1, 2, 5 or 14, of which 14 is the most secure.
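To make the key group idea concrete, here is a toy Python sketch of the Diffie-Hellman arithmetic that the group selection controls. The prime below is deliberately tiny so the example stays readable; real IKE groups such as group 14 use standardised primes of 2048 bits and up (RFC 3526), which is what makes the higher groups stronger.

```python
# Toy Diffie-Hellman exchange; these numbers are far too small for real use.
import secrets

p = 18446744073709551557   # small demo prime (2**64 - 59)
g = 2

a = secrets.randbelow(p - 2) + 1   # gateway A's private value
b = secrets.randbelow(p - 2) + 1   # gateway B's private value

A = pow(g, a, p)                   # public values exchanged in the clear
B = pow(g, b, p)

shared_a = pow(B, a, p)            # A combines its secret with B's public value
shared_b = pow(A, b, p)            # B does the same with A's public value
assert shared_a == shared_b        # both sides now hold the same shared secret
```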
You can optionally set up extra transform sets if needed. If you're not sure of your peer's transform settings, you may want to set up more transform sets, although it is recommended to know your peer's settings and create the minimum transform sets required, as this is more secure.
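Conceptually, a transform set is just structured data that both peers compare during negotiation. The hypothetical sketch below models proposals as Python dictionaries and picks the first one both sides support; the field names are invented for illustration and do not match any particular vendor's configuration syntax.

```python
# Sketch: transform sets as data, plus a minimal negotiation check.
local_proposals = [
    {"encryption": "AES-256", "auth": "SHA1", "dh_group": 14, "sa_life_s": 28800},
    {"encryption": "3DES",    "auth": "MD5",  "dh_group": 2,  "sa_life_s": 28800},
]
peer_proposals = [
    {"encryption": "AES-256", "auth": "SHA1", "dh_group": 14, "sa_life_s": 28800},
]

def negotiate(local, peer):
    """Return the first proposal both sides support, or None if none match."""
    for proposal in local:
        if proposal in peer:
            return proposal
    return None

print(negotiate(local_proposals, peer_proposals))
```

If `negotiate` returns None, the tunnel fails to come up, which is exactly the settings-mismatch scenario described later in this guide.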
1) You will need to specify what traffic will go across the VPN, by specifying an IP address, network address or IP address range. This defines access to your internal network, so that remote users from home or the peer office can reach resources behind the VPN gateway.
2) You can choose whether to use PFS (Perfect Forward Secrecy) as an optional extra layer of security. If you will be using PFS, remember that both VPN peers must support and use it. You can select which Diffie-Hellman group to use for new keying material; the higher the group you select, the stronger the key.
3) You would now need to specify some more parameters for securing your data within the IPSec SA (Phase 2), also known as phase 2 proposals. The parameters are made up of encryption and authentication algorithms.
4) If you have specified ESP, which the majority would choose, then you would specify your authentication and encryption. For authentication and integrity you can select SHA1 or MD5, where SHA1 is the strongest algorithm. For encryption you can select DES, 3DES or AES 128, 192, or 256-bit key strength. AES 256 is the strongest encryption protocol.
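To get a feel for the integrity side of these choices, the standard-library Python sketch below computes HMAC tags over the same payload with SHA1 and MD5; the key and payload are invented placeholders. Note the SHA1 tag is 160 bits against MD5's 128, one reason SHA1 is considered the stronger option.

```python
# Sketch of ESP-style integrity tags; key and payload are placeholders.
import hashlib
import hmac

key = b"derived-session-key"
payload = b"encrypted ESP payload bytes"

tag_sha1 = hmac.new(key, payload, hashlib.sha1).hexdigest()
tag_md5 = hmac.new(key, payload, hashlib.md5).hexdigest()

print(len(tag_sha1) * 4, "bit SHA1 tag:", tag_sha1)  # 160-bit tag
print(len(tag_md5) * 4, "bit MD5 tag:", tag_md5)     # 128-bit tag
```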
5) You may want to specify a value for when your keys expire. This ensures your encryption keys change over a period of time, adding more security, while also having a slight effect on performance. The majority leave these settings at the default. However, if you're a bank or any other company dealing with confidential data, you may want to force keys to expire and have them re-created.
You may now need to create policies or rules to allow your VPN traffic in and out of your firewall. This may already have been done for you when you completed configuring your gateway; some products give you the option to have the VPN gateway create these rules automatically, depending on the product functionality.
You can now save all changes to your VPN gateway.
You are now done configuring your VPN gateway and can configure the peer VPN gateway. Remember to configure the peer gateway with exactly the same settings as your local gateway, or the VPN tunnel will not form successfully.
The above article is not specific to any VPN gateway, so you may find differences in the order of settings or slight differences in terminology, but nothing more than that. Whichever firewall you use for VPN connectivity (Watchguard, Fortinet, SonicWALL, Cisco and so on), they all support IPSec, which is a standardised, internationally known framework with a standard set of parameters and settings, so you will find the above instructions very similar to how you would set up your own firewall VPN gateway. The only differences lie within the GUI and possibly some slight naming alterations.

In a nutshell, with any VPN gateway using IPSec you would configure your VPN gateway addresses, phase 1 settings and phase 2 settings, create VPN firewall policies (some firewalls create these automatically for you) and save the configuration, whichever vendor product you work with.
Wikipedia's guide to VPN
The quality of United States medical education is a matter of concern to every person in the country. In our nation of 315 million people, we log 1.3 billion doctor visits annually -- or an average of about 4 visits per person per year. If doctors are poorly educated, we stand to lose money, time, health, peace of mind, and in some cases, even our lives. If we do a good job of educating physicians, we reap substantial benefits, avoiding unnecessary care and harmful mistakes and enjoying longer, healthier lives.
About 60% of students who apply each year are not admitted, and many more students give up hopes of attending medical school before they ever apply. The more than 20,000 students who begin studies toward an M.D. degree each year in the U.S. have even greater investments ahead of them. Newly admitted medical students can expect to pay a small fortune over four years. The average cost of attending medical school at a public institution is about $50,000 per year, and this swells to $70,000 per year at private institutions. The typical public-school student graduates $150,000 in debt, while the figure is $180,000 for private school students.
And medical school graduation is far from the end of training. To become fully qualified physicians and sit for a board exam, newly minted M.D.s must then complete residency training, which typically ranges from as few as three years (in fields such as family medicine and pediatrics) to as many as seven years (in fields such as neurosurgery). Many will then pursue additional fellowship training, for one to three years. Students who graduate from college at the age of 22 years with a goal of entering my field of pediatric radiology would typically complete medical school at 26 years, radiology residency at 31 years, and fellowship training at age 32. | <urn:uuid:12babad7-fdb9-4a84-9dc1-6e70d5e01410> | CC-MAIN-2017-04 | http://www.nextgov.com/health/2012/12/great-health-care-requires-great-medical-educators/60048/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00456-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.97453 | 379 | 2.734375 | 3 |
“I remember I had this little computer with 16K of memory, and everyone was astonished! What was I going to do with all this memory!” Hans Zimmer around 1983.
Music and technology have been walking side by side for millennia. Musical instruments have followed the advancements in technology: they evolved with mechanical and acoustic advancements, followed by advancements in electronics, and they are now transitioning into virtual reality, based on powerful code and efficient computational resources. History has taught us that different musical instruments gave us different sound palettes and eventually different genres of music. Mastering new technologies has always helped us develop new compositional styles and enhance production approaches and sonics.
There are computer technologies that, as they go from one generation to the next, improve by an average factor of 2. With high performance computing and supercomputers, these improvements can actually be a factor of 10, or more. In a classic supercomputing style let’s look for things that will substantially change the way people perform their audio and music work and eventually how the audience enjoy their products.
Music and Technology – An Ancient Bond
Around the 5th century BC, the ancient Greeks created the Chorus, a homogeneous, non-individualized group of performers who communicated with the audience, usually in song form. The Chorus originally consisted of fifty members. Tragedians such as Sophocles and Euripides changed this number through various experimentation. At the same time, in their quest to optimize the audience experience, the ancient architects built venues with custom designed acoustics. During the 18th century, the chamber orchestra emerged, also consisting of about fifty musicians. Later, the full symphonic orchestra came along, with about 100 musicians facilitated in custom-acoustic auditoriums that defined the sound of the experience. Music, orchestration and acoustics were always treated as one, and there is a good reason for this.
The symphonic orchestra is truly a piece of technology: Every instrument is a different technological wonder and concert halls around the world are subjects of tremendous acoustic research. However, the most important element of an orchestra is the conductor. The conductor acts as the central piece of a very low message-passing latency and high-bandwidth fabric. The conductor is directing the musical performance in real time. This system architecture is the reason we have “Classical Music”. It became a reality based on organic nodes (human players), acoustic and physics laws and predetermined music written by the composer. The only limitations of this very advanced form of expression are that the music is already written by the composer and the acoustics are also more or less predetermined. To put that in perspective, in Jazz, the music can change in real time (improvisation) but the amount of people interacting in real time is greatly reduced.
The Time Machine
During the last 40 years, with the advancement of supercomputers and high-performance computing, we realized that we can scientifically create virtual environments, in which we can define specific questions and get answers. The better the questions are formed, the more defined the answers will be. This is what supercomputers have allowed us to do for many decades now and in many industries. They are like time machines. They allow us to understand the past and create the future.
But what is the ultimate answer to Music? Maybe we can discover this by moving backwards, and this is the main reason for this historic introduction to music technology. If we take one of the highest forms of human collaboration and expression, the symphonic orchestra and classical music, and investigate those forms of expression through a modern prism, we might get the answers we are looking for.
What are the ingredients of the modern hybrid recipe of orchestral music? Hollywood is the best place to look as scoring movies is the modern way of creating future classics.
Creating the HPC384 Spec.
I will use another Hans Zimmer quote here: “Music is organized chaos! ….but not necessarily in a bad way, as organized chaos can sound pretty good!” Composers might be inherently good in organizing chaos.
For the past 17 years, programmers from all around the world have built virtual instruments and effects based on software interfaces like VST, which runs seamlessly over an x86 microprocessor architecture. Among the high-performance computing systems, HPC clusters provide an efficient performance compute solution based on industry-standard hardware connected by a high-speed network.
Using HPC we can work with advanced physics to model plate reverbs, create evolving non-linear auditorium acoustics and emulate multi-microphone positions that will give sound endless possibilities. It is no longer necessary to work with oversampled peak detection in order to estimate the peak samples of a signal. We have overcome the barriers of conventional underpowered discrete-time systems: we process the actual audio, not an estimation of it, without fighting conventional CPU or DSP constraints. There is no way we can overload an HPC music production system when we work at 88.2 kHz, 96 kHz, 192 kHz or even 384 kHz. Moreover, HPC allows us to have different sound qualities in the same project, so we can push the engines hard when we want to emulate analog synthesizers, luscious reverbs or accurate solid-state and thermionic valve circuitry that needs advanced resolution in the microsecond time domain.
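As a rough illustration of the peak-detection point, the Python sketch below (using numpy and scipy, with an invented test tone) compares the largest recorded sample of a 44.1 kHz signal with an 8x oversampled estimate of its true peak. At conventional rates the analogue peak can fall between samples; at the very high native rates discussed here the two values converge, which is the argument being made.

```python
# Sketch: sample peak vs oversampled "true peak" estimate.
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(256) / fs
x = np.sin(2 * np.pi * 11025 * t + 0.6)  # tone near fs/4; peaks land between samples

sample_peak = np.max(np.abs(x))                      # what a naive meter reports
true_peak = np.max(np.abs(resample_poly(x, 8, 1)))   # 8x oversampled estimate
print(f"sample peak {sample_peak:.3f}, ~true peak {true_peak:.3f}")
```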
At this critical juncture of entertainment evolution, with 3D & HDR, IMAX Cinema, Dolby® Atmos, DTS® Headphone X, 6K Cinema and 4K TV with HDMI 2 (which has an audio bandwidth of 1536 kHz), the industry creates a roadmap for a quality aware audience. A true quality upgrade of the overall cinematic experience is on-going. HPC384 Spec. is here to keep music production on par with those innovations and it will provide the necessary tools, specifications and revolutionary techniques so that music professionals will be able to produce and deliver high quality content to meet the demands and expectations of their audience.
In our preliminary tests we rendered the first ever reverb at 1536kHz using U-He Zebra 2 VST clocked at 384 kHz as our sound generator. This sound is quite likely the most mathematically complex and harmonically rich single sound ever created in the digital domain. Sound examples here: http://www.hpcmusic.com/#!hpc384/crrb
U-He Diva, which is an advanced VST instrument, could play back in real time at 384 kHz with infinite notes of polyphony, while the same instrument in a top-of-the-range workstation cannot perform more than a few notes at 192 kHz. The highest bandwidth we managed to work with was 6144 kHz. We use bandwidth as a measure of efficiency of the system when it comes to music production. This way, when software developers are ready for heavy mathematics in low latency, almost real-time performance, we will know how to set up this reality-engine. Moreover, Dolby is heavily experimenting with many surround channels in order to enhance the localization information of sound. Using HPC we can go a step further and enhance the localization information of music (and not only sound) by composing and arranging in many-channel surround formats in a fully discrete way (3D Music).
On a cost per GFLOPS basis, we found that HPC for music can be roughly 35X better than the current industry-standard solutions, with 10X more bandwidth we can operate in real-time performance per audio track and enable unlimited track counts (high scalability).
The future is about the audience experience
As for next steps, we need to work on the form factor of those solutions and further explore software opportunities. The evolution of music creation leads to an evolution of music enjoyment. In the same way that the vinyl record, walkman, CD and MP3 changed music for the better (or sometimes for the worse), we now see new products on the horizon that can revolutionize the audience experience.
More info at www.hpcmusic.com | <urn:uuid:1ba1b52a-a12e-4f95-814f-43d47bc467dc> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/12/23/hpcs-role-defining-musics-creation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00088-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940339 | 1,662 | 3.609375 | 4 |
The White House announced Friday that the Global Connect Initiative is working to connect 1.5 billion people to the Internet worldwide by 2020.
“It can change lives by connecting schools to the web, bringing telemedicine to rural health centers, lowering the barriers to political participation, and supplying up-to-date market information to businesses and entrepreneurs,” said Suhas Subramanyam, of the Office of Science and Technology Policy, in a blog post.
About 60 percent of the world’s population is without Internet and the number increases to 95 percent in the poorest countries, according to Subramanyam.
The 40 countries participating in the initiative decided to treat the Internet as critical infrastructure, similar to roads, bridges, and ports, and increase funding and resources to build Internet infrastructure.
The Overseas Private Investment Corporation, the Federal government’s development finance institution, announced that it has invested over $1 billion in Internet connectivity infrastructure projects to support development in 15 countries across the Americas, Asia, Europe, and the Middle East. Countries are consulting technical and business experts on how to make the most out of these investments by using cost-saving network designs, Internet infrastructure opportunities, and local skills development and training.
The State Department is working with Tunisia, India, and Argentina to write policies that will increase digital growth and create an open and accessible Internet.
In June, President Obama created the Global Connect International Connectivity Committee (GCICC), made up of 16 Federal agencies and led by the State Department to coordinate United States projects related to worldwide Internet access.
President Obama said that the Global Connect Initiative is “bringing wonders of technology to far corners of the globe, accelerating access to the Internet, [and] bridging the digital divide.” | <urn:uuid:a3a7f75c-2f45-4eef-a485-34ab21b5f45b> | CC-MAIN-2017-04 | https://www.meritalk.com/articles/worldwide-initiative-plans-to-connect-1-5-billion-by-2020/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00088-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.915191 | 362 | 2.59375 | 3 |
Security breaches seem to be occurring on a regular basis lately, as more and more reports of lost data and hackers flood news headlines. Many businesses store their information in a virtual environment, but do little to protect it once it gets there. Complacency and a lack of understanding are contributing to the number of attacks, and businesses aren't the only ones being targeted by hackers.
In an annual report to Parliament on Tuesday, commissioner Jennifer Stoddart reported that the number of data breaches reported by federal institutions between April 2012 and March 2013 rose to 109, from 80 during the same period the year before. Hackers are breaking into federal networks in record numbers, yet it seems as though this issue isn't being taken seriously. Several of the reported incidents could have been prevented if the proper security measures were in place. Treating cyber crime as random and unpredictable is counterproductive for government and business.
Employee negligence, or “human error”, was responsible for the majority of the federal government's stolen data, with hacking and malware accounting for the rest. Some of the stolen data included:
- Human Resources Development Canada (now called Employment and Social Development Canada) reported that a staff member lost a portable hard drive that contained 585,000 personal records
- A Justice Department employee lost a USB key that contained sensitive information on 5,000 people
- A USB key, papers, and a laptop that contained information used by the Financial Transaction and Reports Analysis Centre (FINTRAC) was stolen in Calgary
- A Security Intelligence Officer working for Corrections Canada lost a USB key containing personal information about 152 prisoners while dropping off a child at school
- The personal tax information of 46 people was stolen along with an employee’s laptop
And the list goes on. It’s frightening to think that federal employees are so complacent with the personal information of others, but it happens every day. No one believes that it will happen to them, until it does. However, ignorance is not bliss, nor is it an effective method of data protection.
Employees need to be responsible for the protection of portable devices, especially devices containing private information. Many business and government establishments take the time to install the best security measures, but the moment an employee transports data, the risk of a data breach increases drastically. This is becoming increasingly difficult to control as virtual environments continue to grow in use. Although it may be convenient, companies need to be aware of the risks associated with virtually accessible and transported data.
Some of the ways that companies can help decrease the amount of data lost to “human error” are education, awareness, and guidelines. By educating and alerting your employees about the methods used by cyber criminals to gain access to private data, they'll have a better understanding of how to keep the data secure. Additionally, creating awareness will show your employees that cyber crime is a reality that can happen to anyone, anytime. It's not just something you hear about on the news; it's something that hundreds of companies have experienced across North America.
Establishing some rules and guidelines around transporting sensitive data, whether on a USB key, laptop, or external hard drive, can also help keep data safe. By attaching consequences to an employee's actions, such as losing a USB key, it's likely that they'll remain vigilant. The other option would be to restrict the transportation of data altogether by utilizing cloud technology. By moving all your data to an online environment, your employees can access the information from anywhere, anytime.
Blog author: Vanessa Hartung | <urn:uuid:17307baa-76b9-4124-b119-edd07f78b0a5> | CC-MAIN-2017-04 | http://blog.terago.ca/2013/10/31/having-trouble-securing-your-data-so-is-the-federal-government/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00144-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95466 | 764 | 2.5625 | 3 |
The "learn to code" movement is ramping up in full force this week, Computer Science Education Week, with major tech companies and celebrities supporting Code.org in its "Hour of Code" mission to get students of all ages to learn programming. If you'd like some free hands-on training, Apple will help you this Wednesday.
Head to any Apple Retail Store on December 11 at 5pm for a free one-hour workshop. One of the limitations of learning iOS programming is you need a Mac to do it, but by trying it out at an Apple Store, you can see what all the fuss is about and whether it's for you.
If that hour doesn't work for your schedule, there are tons of one-hour online tutorials available at Code.org, including ones taught by Mark Zuckerberg (with the Angry Birds), Bill Gates, and other top names in coding. Even more exciting for educators and students, perhaps: there are "unplugged" computer science lessons, so you can learn the programming mindset without the need for any devices (like expensive Macs).
President Obama put it this way: "Learning these skills isn't just important for your future, it's important for America's future. Don't just buy a new video game, make one. Don't just download the latest app, help design it. Don't just play on your phone, program it."
There is currently both fear and anticipation around the implementation of alternate access methods in technology software, devices and chipsets.
A backdoor can be defined as an access method that bypasses the traditional authentication means typically known and used by the system.
A recent news story depicts a Field-Programmable Gate Array (FPGA) chip, said to have been made by Actel (now Microsemi) and manufactured in China, as having a detected backdoor.
This can immediately set fire to public concern of cyber aggressive nation-states and makes us all feel a little bit uneasy with any electronic device or system used.
There is more speculation than solid fact to go on at this point, and some can speculate along the lines of the following author, who is more curious about the inexpensive hardware used to find it:
I really don’t care if these things can be found with pocketknife and a penny picked up off the street!
In a sensible world, any “whistleblower's” findings should be followed up on, especially if the components are widely found in US military and commercial applications.
There is additional speculation about purposeful implementations of communications monitoring, on and off the Internet, by the US government in the name of national security, but this is not a new mindset.
Are they the same?
The easy answer is yes and no. According to the reports, the FPGAs had another, semi-secured avenue to wipe or reprogram the chip. This would mostly be used in a closed system and may not be accessible.
I don't believe I am an offender who would be found through communications monitoring by the US government, but it really depends on the government.
I could probably offend whole sections of the world if they knew what I feel about them and their qualities, approach or culture.
If there is the ability through any technical design for a government to monitor communication, how can we be assured that another government is not using the same means but for different purpose when designed to be able to do so? | <urn:uuid:c2745282-a9d8-41f1-aab2-4aa0bc76d038> | CC-MAIN-2017-04 | http://infosecisland.com/blogview/21502-To-Backdoor-or-Not.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00446-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95396 | 421 | 2.53125 | 3 |
Washington (AFP) - Plants respond a bit better to global warming than scientists had thought, according to a new study that suggests their potential contribution to worsening global warming is likely not as bad as researchers believed.

When it gets hotter, plants breathe harder, and the greenhouse gas carbon dioxide is produced by respiration. That's why researchers think that as Earth is warmed by CO2 from people's activities, plants may add to the emissions and make warming worse. Plants generally take in carbon dioxide during daytime photosynthesis and release carbon dioxide during respiration at night, but they take up much more carbon dioxide in photosynthesis than they release in respiration.

But now "with this new model, we predict that some ecosystems are releasing a lot less CO2 through leaf respiration than we previously thought," said coauthor Kevin Griffin, a plant physiologist at Columbia University's Lamont-Doherty Earth Observatory. The study was published Monday by the Proceedings of the National Academy of Sciences.

The research found that rates of increase slow in a predictable way as temperatures rise, in every region, and the newly defined curve leads to sharply reduced estimates of respiration, especially in the coldest regions. "What we thought was a steep curve in some places is actually a little gentler," said Griffin.

The biggest changes in estimates are in the coldest regions, which recently have seen warming far beyond that in temperate zones. "All of this adds up to a significant amount of carbon, so we think it's worth paying attention to," said Griffin.

Lead author Mary Heskel, of Massachusetts' Marine Biological Laboratory, said the study would go far toward helping estimate "carbon storage in vegetation, and predicting concentrations of atmospheric carbon dioxide and future surface temperatures."
News Article | April 20, 2016
Sharks, skates, and rays are oddities among the fish: They have appendages growing out of the gill arch, a small cradle of bones that supports the gills. This anatomical peculiarity has led to the proposal that the paired limbs of humans, and before that the paired fins of fish, evolved from the transformation of gill arches in early fish.

Genetic evidence for this theory is offered in a new study led by J. Andrew Gillis, a Royal Society University Research Fellow at the University of Cambridge, U.K., and a Whitman Center scientist at the Marine Biological Laboratory (MBL) in Woods Hole, Mass. The study, published this week in Development, demonstrates striking similarities in the genetic mechanism used to pattern gill arch appendages (called branchial rays) and fins/limbs.

Studying embryos of the little skate, Gillis focused on the gene Sonic hedgehog, which produces a signaling protein whose function is well understood in the mammalian limb. Remarkably, he found that Sonic hedgehog's role in branchial rays closely parallels its role in the limb: it sets up the axis of development and, later, maintains growth of the limb skeleton.

"The shared role of Sonic hedgehog in patterning branchial rays and limbs may be due to a deep evolutionary relationship between the two," Gillis says, "or it may simply be that two unrelated appendages independently use the same gene for the same function." Ongoing studies comparing the function of other genes during branchial ray and fin/limb development will help to resolve this.

Gillis will continue his research at the MBL this summer using skates collected and supplied by the Marine Resources Department. "Branchial rays will figure prominently in the story of the evolutionary origin of vertebrate animal appendages, either by shedding light on the evolutionary antecedent of paired fins/limbs, or by teaching us about the genetic mechanisms that animals can use to invent new appendages," Gillis says.
All non-transgenic, transgenic and control mice used in this study were derived from in-house breeding colonies backcrossed > 12 generations onto C57/BL6 backgrounds. All mice used were young adult females between two and four months old at the time of spinal cord injury. All transgenic mice used have been previously well characterized or are the progeny of crossing well-characterized lines: (1) mGFAP-TK transgenic mice line 7.1 (refs 15, 16, 49); (2) mGFAP-Cre-STAT3-loxP mice generated by crossing STAT3-loxP mice with loxP sites flanking exon 22 of the STAT3 gene (ref. 50) with mGFAP-Cre mice line 73.12 (refs 17, 18); (3) loxP-STOP-loxP-DTR (diphtheria toxin receptor) mice (ref. 21); (4) mGFAP-Cre-RiboTag mice generated by crossing mice with loxP-STOP-loxP-Rpl22-HA (RiboTag) (ref. 26) with mGFAP-Cre mice line 73.12 (refs 17, 18); (5) loxP-STOP-loxP-tdTomato reporter mice (ref. 51). All mice were housed in a 12-h light/dark cycle in a specific-pathogen-free facility with controlled temperature and humidity and were allowed free access to food and water. All experiments were conducted according to protocols approved by the Animal Research Committee of the Office for Protection of Research Subjects at University of California, Los Angeles.

All surgeries were performed under general anaesthesia with isoflurane in oxygen-enriched air using an operating microscope (Zeiss, Oberkochen, Germany) and rodent stereotaxic apparatus (David Kopf, Tujunga, CA). Laminectomy of a single vertebra was performed and severe crush spinal cord injuries (SCI) were made at the level of T10 using No. 5 Dumont forceps (Fine Science Tools, Foster City, CA) without spacers and with a tip width of 0.5 mm to completely compress the entire spinal cord laterally from both sides for 5 s (refs 16, 17, 18). For pre-conditioning lesions, sciatic nerves were transected and ligated one week before SCI. Hydrogels were injected stereotaxically into the centre of SCI lesions 0.6 mm below the surface at 0.2 μl per minute using glass micropipettes (ground to 50–100 μm tips) connected via high-pressure tubing (Kopf) to 10-μl syringes under control of microinfusion pumps, two days after SCI (ref. 52).

Tract tracing was performed by injection of biotinylated dextran amine 10,000 (BDA, Invitrogen), 10% wt/vol in sterile saline, injected 4 × 0.4 μl into the left motor cerebral cortex 14 days before perfusion to visualize corticospinal tract (CST) axons, or choleratoxin B (CTB) (List Biological Laboratory, Campbell, CA), 1 μl of 1% wt/vol in sterile water, injected into both sciatic nerves three days before perfusion to visualize ascending sensory tract (AST) axons (ref. 33). AAV2/5-GfaABC1D-Cre (see below) was injected either 3 or 6 × 0.4 μl (1.29 × 10^13 gc ml^-1 in sterile saline) into and on either side of mature SCI lesions two weeks after SCI, or into uninjured spinal cord after T10 laminectomy. All animals received analgesic before wound closure and every 12 h for at least 48 h post-injury. Animals were randomly assigned numbers and evaluated thereafter blind to genotype and experimental condition.

Adeno-associated virus 2/5 (AAV) vector with a minimal GFAP promoter (AAV2/5 GfaABC1D) was used to target Cre-recombinase expression selectively to astrocytes (refs 53, 54, 55). Diblock co-polypeptide hydrogel (DCH) K L was fabricated, tagged with blue fluorescent dye (AMCA-X) and loaded with growth factor and antibody cargoes as described (refs 38, 39, 52).
Cargo molecules comprised: human recombinant NT3 and BDNF, which were gifts (Amgen, Thousand Oaks, CA; NT3 Lot#2200F4, BDNF Lot#2142F5A) or were purchased from PeproTech (Rocky Hill, NJ; NT3 405-03, Lot#060762; BDNF 405-02, Lot#071161); and function-blocking anti-CD29 mouse monoclonal antibody purchased from BD Bioscience (San Diego, CA) as a custom order at 10.25 mg ml^-1 (product #BP555003; lot#S03146). Freeze-dried K L powder was reconstituted on a 3.0% or 3.5% wt/vol basis in sterile PBS without cargo or with combinations of NT3 (1.0 μg μl^-1), BDNF (0.85 μg μl^-1) and anti-CD29 (5 μg μl^-1). DCH mixtures were prepared to have G′ (storage modulus at 1 Hz) between 75 and 100 Pascal (Pa), somewhat below that of mouse brain at 200 Pa (refs 38, 39).

GCV (Cytovene-IV, Hoffman LaRoche, Nutley, NJ), 25 mg kg^-1 per day dissolved in sterile physiological saline, was administered as single daily subcutaneous injections starting immediately after surgery and continued for the first 7 days after SCI. Bromodeoxyuridine (BrdU, Sigma), 100 mg kg^-1 per day dissolved in saline plus 0.007 M NaOH, was administered as single daily intraperitoneal injections on days 2 through 7 after SCI. Diphtheria toxin A (DT, Sigma #DO564), 100 ng in 100 μl sterile saline, was administered twice daily as intraperitoneal injections for ten days starting three weeks after injection of AAV2/5-GfaABC1D-Cre to loxP-DTR mice (which was 5 weeks after SCI) (see timeline in Extended Data Fig. 1d).

Two days after SCI, all mice were evaluated in open field and mice exhibiting any hindlimb movements were not studied further. Mice that passed this pre-determined inclusion criterion were randomized into experimental groups for further treatments and were thereafter evaluated blind to their experimental condition. At 3, 7, 14 days and then weekly after SCI, hindlimb movements were scored using a simple six-point scale in which 0 is no movement and 5 is normal walking (ref. 17).

After terminal anaesthesia by barbiturate overdose, mice were perfused transcardially with 10% formalin (Sigma). Spinal cords were removed, post-fixed overnight, and cryoprotected in buffered 30% sucrose for 48 h. Frozen sections (30 μm horizontal) were prepared using a cryostat microtome (Leica) and processed for immunofluorescence as described (refs 16, 17, 18).

Primary antibodies were: rabbit anti-GFAP (1:1,000; Dako, Carpinteria, CA); rat anti-GFAP (1:1,000, Zymed Laboratories); goat anti-CTB (1:1,000, List Biology Lab); rabbit anti-5HT (1:2,000, Immunostar); goat anti-5HT (1:1,000, Immunostar); mouse anti-CSPG (ref. 22) (1:100, Sigma); rabbit anti-haemagglutinin (HA) (1:500, Sigma); mouse anti-HA (1:3,000, Covance); sheep anti-BrdU (1:6,000, Maine Biotechnology Services, Portland, ME); rabbit anti-laminin (1:80, Sigma, Saint Louis, MO); guinea pig anti-NG2 (CSPG4) (E. G. Hughes and D. W. Bergles (ref. 56), Baltimore, MD); goat anti-aggrecan (1:200, NOVUS); rabbit anti-brevican (1:300, NOVUS); mouse anti-neurocan (1:300, Milipore); mouse anti-phosphacan (1:500, Sigma); goat anti-versican (1:200, NOVUS); rabbit anti-neuroglycan C (CSPG5) (1:200, NOVUS). Fluorescence secondary antibodies were conjugated to: Alexa 488 (green) or Alexa 350 (blue) (Molecular Probes), or to Cy3 (550, red) or Cy5 (649, far red) (all from Jackson Immunoresearch Laboratories). Mouse primary antibodies were visualized using the Mouse-on-Mouse detection kit (M.O.M., Vector). BDA tract-tracing was visualized with streptavidin-HRP plus TSB Fluorescein green or Tyr-Cy3 (Jackson Immunoresearch Laboratories).
Nuclear stain: 4′,6′-diamidino-2-phenylindole dihydrochloride (DAPI; 2 ng ml^-1; Molecular Probes). Sections were coverslipped using ProLong Gold anti-fade reagent (InVitrogen, Grand Island, NY). Sections were examined and photographed using deconvolution fluorescence microscopy and scanning confocal laser microscopy (Zeiss, Oberkochen, Germany).

Axons labelled by tract tracing or immunohistochemistry were quantified using image analysis software (NeuroLucida, MicroBrightField, Williston, VT) operating a computer-driven microscope regulated in the x, y and z axes (Zeiss) by observers blind to experimental conditions. Using NeuroLucida, lines were drawn across horizontal spinal cord sections at SCI lesion centres and at regular distances on either side (Fig. 1a), and the number of axons intercepting lines was counted at 63× magnification under oil immersion by observers blind to experimental conditions. Similar lines were drawn and axons counted in intact axon tracts 3 mm proximal to SCI lesions, and the numbers of axon intercepts in or near lesions were expressed as percentages of axons in the intact tracts in order to control for potential variations in tract-tracing efficacy or intensity of immunohistochemistry among animals. Two sections at the level of the CST or AST, and three sections through the middle of the cord for 5HT, were counted per mouse and expressed as total intercepts per location per mouse. To determine efficacy of axon transection after SCI, we examined labelling 3 mm distal to SCI lesion centres, with the intention of eliminating mice that had labelled axons at this location on the grounds that these mice may have had incomplete lesions. However, all mice that had met the strict behavioural inclusion criterion of no hindlimb movements two days after severe crush SCI exhibited no detectable axons 3 mm distal to SCI lesions, regardless of treatment group.

Sections stained for GFAP, CSPG or laminin were photographed using constant exposure settings. Single-channel immunofluorescence images were converted to black and white and thresholded (Fig. 1d and Extended Data Fig. 2b) and the amount of stained area measured in different tissue compartments using NIH ImageJ software. Areas are shown in graphs as mean values plus or minus standard error of the means (s.e.m.).

Statistical evaluations of repeated measures were conducted by ANOVA with post hoc, independent pairwise analysis as per Newman-Keuls (Prism, GraphPad, San Diego, CA). Power calculations were performed using G*Power Software (ref. 57). For quantification of histologically derived neuroanatomical outcomes such as numbers of axons or percentage of area stained for GFAP or CSPG, group sizes were used that were calculated to provide at least 80% power when using the following parameters: probability of type I error (α) = 0.05, a conservative effect size of 0.25, 2–8 treatment groups with multiple measurements obtained per replicate. Using Fig. 5j as an example, evaluation of n = 5 biological replicates (with multiple measurements per replicate) in each of 8 treatment groups provided greater than 88% power.

For dot blot immunoassay of chondroitin sulfate proteoglycans (CSPG), spinal cord tissue blocks were lysed and homogenized in standard RIPA (radio-immunoprecipitation assay) buffer.
LDS (lithium dodecyl sulfate) buffer (Life Technologies) was added to the post-mitochondrial supernatant and 2 μl containing 2 μg μl^-1 protein was spotted onto a nitrocellulose membrane (Life Technologies), set to dry and incubated overnight with mouse anti-chondroitin sulfate antibody (CS56, 1:1,000, Sigma Aldrich), an IgM monoclonal antibody that detects glyco-moieties of all CSPGs (ref. 22). CS56 immunoreactivity was detected on X-ray film with alkaline phosphatase-conjugated secondary antibody and chemiluminescent substrate (Life Technologies). Densitometry measurements of CS56 immunoreactivity were obtained using ImageJ software (NIH) and normalized to total protein (Ponceau S) density (ref. 58). Densities are shown in graphs as mean values plus or minus standard error of the means (s.e.m.).

Two weeks after SCI, spinal cords of wild-type control (GFAP-RiboTag) and STAT3-CKO (GFAP-STAT3CKO-RiboTag) mice were rapidly dissected out of the spinal canal. The central 3 mm of the lower thoracic lesion, including the lesion core and 1 mm rostral and caudal, were then rapidly removed and snap frozen in liquid nitrogen. Haemagglutinin (HA) immunoprecipitation (HA-IP) of astrocyte ribosomes and ribosome-associated mRNA (ramRNA) was carried out as described (ref. 26). The non-precipitated flow-through (FT) from each IP sample was collected for analysis of non-astrocyte total RNA. HA and FT samples underwent on-column DNA digestion using the RNase-Free DNase Set (Qiagen) and RNA was purified with the RNeasy Micro kit (Qiagen). Integrity of the eluted RNA was analysed by a 2100 Bioanalyzer (Agilent) using the RNA Pico chip; mean sample RIN = 8.0 ± 0.95. RNA concentration was determined by the RiboGreen RNA Assay kit (Life Technologies).

cDNA was generated from 5 ng of IP or FT RNA using the Nugen Ovation 2 RNA-Seq System V2 kit (Nugen). 1 μg of cDNA was fragmented using the Covaris M220. Paired-end libraries for multiplex sequencing were generated from 300 ng of fragmented cDNA using the Apollo 324 automated library preparation system (Wafergen Biosystems) and purified with Agencourt AMPure XP beads (Beckman Coulter). All samples were analysed by an Illumina NextSeq 500 Sequencer (Illumina) using 75-bp paired-end sequencing.

Reads were quality controlled using in-house scripts including picard-tools, mapped to the reference mm10 genome using STAR (ref. 59), and counted using HT-seq (ref. 60) with mm10 refSeq as reference, and genes were called differentially expressed using edgeR (ref. 61). Individual gene expression levels in the Fig. 4e histogram are shown as mean FPKM (fragments per kilobase of transcript sequence per million mapped fragments). Additional details of differential expression analysis are described in the legends of Fig. 4 and Extended Data Figs 3 and 4. Raw and normalized data have been deposited in the NCBI Gene Expression Omnibus and are accessible through accession number GSE76097. To ensure the widespread distribution of these datasets, we have created a user-friendly website that enables searching for individual genes of interest: https://astrocyte.rnaseq.sofroniewlab.neurobio.ucla.edu.
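For readers unfamiliar with the FPKM units used above, the short Python sketch below computes the normalisation directly from its definition; the counts are toy values, not data from the study.

```python
# FPKM: fragments per kilobase of transcript per million mapped fragments.
def fpkm(fragment_count, transcript_length_bp, total_mapped_fragments):
    kilobases = transcript_length_bp / 1_000
    millions = total_mapped_fragments / 1_000_000
    return fragment_count / (kilobases * millions)

# e.g. 500 fragments on a 2.5 kb transcript in a 20-million-fragment library
print(fpkm(500, 2500, 20_000_000))  # -> 10.0
```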
News Article | April 7, 2016
In just about every nook, cranny, and crevice of our planet, some sort of life manages to thrive—whether it's under an Antarctic ice sheet, in super-salty Arctic water, or in Chile's Atacama desert, one of the driest and harshest environments in the world. A US scientist has found something living in another surprising place: in the rocky sediment deep under the Atlantic Ocean, 50 to 250 meters beneath the seafloor, which is itself under 4.5 km—that's more than 2.7 miles—of ocean water. With no sunlight and few nutrients, not to mention extreme pressure, you won't find fish or many other creatures that deep. These tiny microbes can eke out a living in deep ocean sediment and rock. Learning about them could help us find life in bizarre environments on other planets, too.

In the new paper, Julie Huber, associate scientist at the Marine Biological Laboratory in Woods Hole, Mass., describes the microbial community she and her team found way out at the bottom of the Atlantic. "It's the middle of the ocean," she told Motherboard. "Water, as far as the eye can see."

Photo caption: At North Pond in the mid-Atlantic, the ocean crust is young, and its circulating fluids are cold. (Photo: Marine Biological Laboratory)

Deep down underneath all that water is a sediment pocket called North Pond, on the western edge of the Mid-Atlantic Ridge. That's where new ocean crust is being formed as plates push apart. That crust isn't static, she continued. "Fluids are still moving through it," as seawater rushes through its crevices. Samples were collected with the Integrated Ocean Drilling Program, an international project that drills deep into the seafloor for science—not the same as offshore oil drilling, although technologies have been swapped between the two.

For all the strangeness of the environment they live in, these microbes aren't necessarily "extremophiles," Huber said. "They appear to be closely related to [others found] in seawater. But we're finding genetic signatures suggesting they are slightly different," in ways that aren't yet understood. She and others are trying to piece that together now, studying their DNA.

Huber wasn't surprised to learn that something could live in a "cold crustal aquifer," as she calls this deep-ocean environment. (Most of her other work focuses on high-temperature hydrothermal vents and underwater volcanoes.) "It's pretty rare not to find microbes," she said. "They seem to find a way."

Given that life takes hold basically everywhere on our planet, could we find it on another one? Scientists are excited by the idea that they could one day find something living on Enceladus, an icy moon of Saturn. "Based on modelling, it looks like pretty much the only energy available there is methane and carbon dioxide," maybe a bit of hydrogen, said Huber, who's received NASA funding for some of her research. "Only a handful of microbes can use those on our planet, and they're pretty specialized."

By studying life in lower-energy environments—like the rock and sediment at the bottom of the ocean—we'll learn more about what tricks microbes use to eke out a living. Hopefully, it's preparation for one day getting to Enceladus.
Protected areas such as rainforests occupy more than one-tenth of the Earth's landscape, and provide invaluable ecosystem services, from erosion control to pollination to biodiversity preservation. They also draw heat-trapping carbon dioxide (CO2) from the atmosphere and store it in plants and soil through photosynthesis, yielding a net cooling effect on the planet. Determining the role protected areas play as carbon sinks — now and in decades to come — is a topic of intense interest to the climate-policy community as it seeks science-based strategies to mitigate climate change.

Toward that end, a study in the journal Ambio estimates for the first time the amount of CO2 sequestered by protected areas, both at present and throughout the 21st century as projected under various climate and land-use scenarios. Based on their models and assuming a business-as-usual climate scenario, the researchers projected that the annual carbon sequestration rate in protected areas will decline by about 40 percent between now and 2100. Moreover, if about one-third of protected land is converted to other uses by that time, due to population and economic pressures, carbon sequestration in the remaining protected areas will become negligible.

"Our study highlights the importance of protected areas in slowing the rate of climate change by pulling carbon dioxide out of the atmosphere and sequestering it in plants and soils, especially in forested areas," said Jerry Melillo, the study's lead author. Melillo is a distinguished scientist at the Marine Biological Laboratory (MBL) in Woods Hole, Massachusetts, and former director of the MBL's Ecosystems Center. "Maintaining existing protected areas, enlarging them and adding new ones over this century are important ways we can manage the global landscape to help mitigate climate change."

Based on a global database of protected areas, a reconstruction of global land-use history, and a global biogeochemistry model, the researchers estimated that protected areas currently sequester 0.5 petagrams (500 billion kilograms) of carbon each year, or about 20 percent of the carbon sequestered by all land ecosystems annually. Using an integrated modeling framework developed by the MIT Joint Program on the Science and Policy of Global Change, they projected that under a rapid climate-change scenario that extends existing climate policies, keeps protected areas off-limits to development, and assumes continued economic growth and a 1 percent annual increase in agricultural productivity, the annual carbon sequestration rate in protected areas would fall to about 0.3 petagrams of carbon by 2100.

When they ran the same scenario but allowed for possible development of protected areas, they projected that more than one-third of today's protected areas would be converted to other uses. This would reduce carbon sequestration in the remaining protected areas to near zero by the end of the century. (The protected areas that are not converted would be the more marginal systems that have low productivity, and thus low capacity to sequester carbon.)

Based on this analysis, the researchers concluded that unless current protected areas are preserved and expanded, their capacity to sequester carbon will decline. The need for expansion is driven by climate change: As the average global temperature rises, so, too, will plant and soil respiration in protected and unprotected areas alike, thereby reducing their ability to store carbon and cool the planet.
“This work shows the need for sufficient resources dedicated to actually prevent encroachment of human activity into protected areas,” said John Reilly, one of the study’s coauthors and the co-director of the MIT Joint Program on the Science and Policy of Global Change. The study was supported by the David and Lucille Packard foundation, the National Science Foundation, the U.S. Environmental Protection Agency, and the U.S. Department of Energy. | <urn:uuid:231923ab-1c49-4910-ad7c-2ed855a70687> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/biological-laboratory-868648/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00446-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932541 | 5,785 | 3.21875 | 3 |
Identity injection allows you to add information to the URL or to the HTML page before it is posted to the Web server. The Web server uses this information to determine whether the user should have access to the resource, so it is the Web server that determines the information that you need to inject to allow access to the resource.
Identity injection is one of the features of Access Manager that enable you to provide single sign-on for your users. When the policy is configured correctly, the user is unaware that additional information is required to access a Web server.
IMPORTANT: Identity Injection policies allow you to inject the user’s password into the HTTP header. If you set up such a policy, you should also configure the Access Gateway to use SSL between itself and the back-end Web server. This is the only way to ensure that the password is encrypted on the wire.
This section describes the elements available for an Identity Injection policy, but your Web servers determine which elements you use. | <urn:uuid:f7412f3b-73ee-4c61-a184-54b1b43b642e> | CC-MAIN-2017-04 | https://www.netiq.com/documentation/novellaccessmanager31/policyhelp/data/b5547ku.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00170-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.859299 | 205 | 2.6875 | 3 |
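For illustration only, here is a minimal sketch of how a back-end Java servlet might consume injected identity values. The header names used here are placeholders, not names defined by Access Manager; the actual names are whatever your Identity Injection policy is configured to insert, and the Web server should be reachable only from the Access Gateway (over SSL, per the note above) since it trusts these headers.

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch of a back-end servlet that trusts identity values injected by the
// gateway. Header names are illustrative placeholders; use whatever names
// your Identity Injection policy actually inserts.
public class InjectedIdentityServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Injected values arrive as ordinary HTTP headers added by the gateway.
        String user  = req.getHeader("X-Injected-User");
        String roles = req.getHeader("X-Injected-Roles");

        if (user == null) {
            // The request did not pass through the gateway (or the policy is
            // misconfigured); refuse access rather than guess.
            resp.sendError(HttpServletResponse.SC_FORBIDDEN,
                           "Missing injected identity");
            return;
        }

        resp.setContentType("text/plain");
        PrintWriter out = resp.getWriter();
        out.println("Authenticated as: " + user);
        out.println("Roles: " + (roles == null ? "(none)" : roles));
    }
}
```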
IBM Cites 'Breakthrough' in Phase-Change Memory Development
IBM has produced 90-nanometer-size memory chips that can store multiple bits of data per cell over time without the data becoming corrupted. This is a problem that has been nagging PCM development for 10 years. Every so often, the IT world gets news of a "breakthrough" in new storage media research, and this week it was IBM's turn to announce one in relation to a possible long-term replacement for NAND flash solid-state disks.
Big Blue on June 30 revealed that its Zurich-based PCM (phase-change memory) research unit has produced 90-nanometer-size chips that can store multiple bits of data per cell over time without the data becoming corrupted. This is a problem that has been nagging development since IBM started this project nearly 10 years ago. Previously, each PCM cell was able to hold a single data bit, and even those became lost or corrupt at unpredictable times. IBM said this latest development can lead to solid-state chips that can store as much data as NAND flash disks (which now are up to 1TB in capacity) but feature about 100 times the data movement speed, to go with a much longer life span.
NAND flash is inherently slowed down by so-called erase-write cycle limitations. This is because NAND flash requires that data first be marked for deletion before new data is written to the disk, which slows the process considerably. PCM does not require erase-write cycles. | <urn:uuid:cb77351d-450a-460a-931f-604f78a604fa> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Data-Storage/IBM-Cites-Breakthrough-in-PhaseChange-Memory-Development-105476 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00474-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.96513 | 313 | 2.796875 | 3 |
Minor vulnerabilities, poor user behavior, and outdated security software—they all add up to a big headache for IT and security professionals. Small errors on the part of computer users or their IT departments may not wreak havoc on their own, but in combination, they dramatically increase security challenges. Here’s a recipe for the “nightmare formula” that organizations need to avoid or mitigate.
- Easy-to-guess passwords and password reuse: Obvious strings of numbers (like “123456”), mothers’ maiden names, or simply using the word “password” as a password make it easier for criminals to break into accounts and to reset passwords. Even more problematic is the reuse of the same or similar passwords, or the same answers to password recovery questions, from site to site. (See the sketch after this list for a minimal screening check.)
- Inconsistent patching: Conficker, the big botnet of 2009, gained traction because computer users failed to download a patch that was readily available from Microsoft. Although most of today’s attacks are launched via social media networks, criminals still look for ways to exploit these old-style vulnerabilities.
- Getting too personal: By disclosing information, such as birth dates and hometowns, social media users make it far too easy for criminals to break into private accounts and gain control by resetting passwords. Corporate users are not immune to this trend, frequently using Twitter to discuss business projects.
- Overdose of trust: Social media users are placing too much trust in the safety and privacy of their networks, responding to messages, supposedly from their connections, with malware-laden links.
- Outdated virus protection: Computer users fail to update their anti-virus software or let subscriptions lapse, leaving their systems more vulnerable to attacks that might normally be easy to block. Worse, they may be running fake anti-virus software. In addition, individual users may fail to enable easily available security features built into their operating systems or web browsers, such as firewalls. Ensuring virus software is updated provides some protection, but criminals are now hiring services to test their malware and ensure that it is not flagged by anti-virus programs.
- Not using available security products: Users often assume anti-virus is all they need to be “safe.” Thus, they don’t take advantage of simple, tried-and-true security measures, such as personal firewalls and browser security features, which can provide an extra layer of protection.
- “It won’t happen to me” syndrome: This is perhaps the most potent ingredient in the Nightmare Formula. Users intentionally violate policies and knowingly engage in risky behavior online because they believe they won’t be the victim of a cyber attack or compromise their employer’s cybersecurity.
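As a minimal illustration of countering the first ingredient above, a signup flow can screen out obviously guessable passwords before they ever take effect. The tiny blacklist in this sketch is a placeholder; a real deployment would check a large dictionary of common and leaked passwords and enforce stronger rules.

```java
import java.util.Locale;
import java.util.Set;

// Minimal illustration: reject obviously guessable passwords at signup.
// The blacklist is a placeholder for a real dictionary of common passwords.
public class WeakPasswordCheck {
    private static final Set<String> COMMON = Set.of(
            "123456", "password", "qwerty", "letmein", "111111");

    public static boolean isWeak(String candidate) {
        String normalized = candidate.toLowerCase(Locale.ROOT);
        return candidate.length() < 8 || COMMON.contains(normalized);
    }

    public static void main(String[] args) {
        System.out.println(isWeak("123456"));        // true: common string
        System.out.println(isWeak("Password"));      // true: common word
        System.out.println(isWeak("c0rrect-horse")); // false
    }
}
```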
Excerpt from Cisco 2009 Annual Security Report: Highlighting global security threats and trends. Copyright © 2009 Cisco Systems, Inc. Download the complete report online. | <urn:uuid:d39b4f2b-12b4-4dbd-ae2e-087a958f806b> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2009/12/30/the-security-nightmare-formula/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00014-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925474 | 604 | 2.8125 | 3 |
Badu-Apraku B.,International Institute Of Tropical Agriculture |
Oyekunle M.,International Institute Of Tropical Agriculture |
Menkir A.,International Institute Of Tropical Agriculture |
Obeng-Antwi K.,CRI CSIR |
And 3 more authors.
Crop Science | Year: 2013
Maize (Zea mays L.) is a major staple crop in West Africa and has the potential to mitigate the food insecurity in the subregion. However, maize grain yield is severely constrained by drought. A study was conducted at 13 locations in West Africa for 2 yr to determine genetic gains in yield of cultivars developed during three eras, 1988 to 2000 (first-generation cultivars), 2001 to 2006 (second-generation cultivars), and 2007 to 2010 (third-generation cultivars) under drought and optimal conditions. Under drought, yield ranged from 1346 kg ha-1 for first-generation cultivars to 1613 kg ha-1 for third-generation cultivars with a genetic gain of 1.1% yr-1. Under optimal conditions, yield gain ranged from 3363 kg ha-1 for first-generation cultivars to 3956 kg ha-1 for third-generation cultivars with genetic gain of 1.3%. The average rate of increase in yield was 14 and 40 kg ha-1 yr-1 under drought and optimum conditions. Genetic gains in yield from first- to third-generation cultivars under drought was associated with improved plant aspect and husk cover, whereas under optimum conditions it was associated with plant and ear aspects, increased ears per plant, plant and ear heights, and improved husk cover. Cultivars TZE-W DT C2 STR, DTE-W STR Syn C1, DT-W STR Synthetic, 2009 DTE-W STR Syn, and EV DT-W 2008 STR were high yielding and stable across drought environments. Substantial progress has been made in breeding for drought tolerance during the last three decades. © Crop Science Society of America. Source | <urn:uuid:045dadf5-33b1-4e58-a995-c2ed2f0fc67f> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/csir-sari-1383973/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00014-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923975 | 416 | 2.625 | 3 |
A recent Evans Data survey indicates that developers are concerned that advancements in artificial intelligence could mean fewer jobs for programmers.
Despite being among the leaders of the digital revolution, software developers apparently are just as concerned as workers in other fields that automation and technological advances could, at some point, endanger their jobs, according to a recent Evans Data survey.
Indeed, the Evans Data study indicates that developers fear that their own obsolescence will be spurred by artificial intelligence (AI). The company surveyed more than 550 developers across a variety of industries. When asked to identify the most worrisome thing in their careers, nearly one-third (29.1 percent) selected the "I and my development efforts are replaced by artificial intelligence" category.
Developers' next biggest worries were related to platform concerns. Twenty-three percent of the respondents said they were worried that the platforms they work on might become obsolete, and 14 percent said they were worried that the platform they are targeting might not gain significant adoption.
"Another dimension to this finding is that over three-quarters of the developers thought that robots and artificial intelligence would be a great benefit to mankind, but a little over 60 percent thought it could be a disaster," said Janel Garvin, CEO of Evans Data, in a statement. "Overlap between two groups was clear which shows the ambivalence that developers feel about the dawn of intelligent machines. There will be wonderful benefits, but there will also be some cataclysmic changes culturally and economically."
Some observers note that developers often see firsthand the power of AI and are more keenly aware of its potential.
"It does not surprise that developers, who have the skills to understand AI at a deeper level than most folks, would be concerned about it," said Al Hilwa, an analyst with IDC. "However, I would say there are many other jobs and roles that are less creativity-centric—e.g. news reporting—that are more vulnerable in the first order."
Yet, from a broader perspective, this has been a major anxiety in recent history over how technology, which in the early days was largely mechanical and electrical, would replace humans, Hilwa said.
"Over time, it did, but the net result is a transformation in the nature of work towards knowledge work, and the shift in the nature of economies and the products produced," he said. "Overall, there has been dislocation of course, but also incredible growth and net improvement in lifestyles at almost every level of the income scale."
The Evans Data study comes at the emergence of what IBM CEO Ginni Rometty calls the "cognitive era." With its Watson cognitive computing system, IBM is pushing into the cognitive era in a major way. Big Blue's Watson features a natural language interface, which enables users to directly query the system in natural language. Watson understands and responds in natural language. The system can ingest vast amounts of data and analyze it in milliseconds. It also learns from itself and builds its base of knowledge every time it is used.
During a keynote at the Consumer Electronics Show (CES) in January, Rometty announced several new advances and partnerships built around the IBM Watson cognitive computing platform. Each of those advances has the potential to impact jobs at some level, including IBM's plans with Softbank Robotics
to take their partnership on a Watson-powered robot global. Through their joint work, Softbank has infused Watson into its "empathetic" robot Pepper, enabling it to understand and answer questions in real time, opening up new possibilities for the use of robotics in business scenarios such as banking, retail and hospitality. | <urn:uuid:c7a9262a-d41c-4aa7-af1a-25655d241d6c> | CC-MAIN-2017-04 | http://www.eweek.com/developer/developers-worried-that-ai-may-take-their-jobs.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00226-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966793 | 738 | 2.546875 | 3 |
Organizations Using the Internet
People's Republic of China, Taiwan, and Tibet
China, People's Republic of
- People's Republic of China official government site — http://www.gov.cn/
PRC government masquerades and attacks —
There are several Internet sites controlled by the Beijing
government, yet pretending to be something very different:
TCP/IP attacks by Beijing government systems —
Several Canadian and American ISP's hosting web pages for the Falun
Gong organization have been attacked from hosts in the
gov.cn domain and controlled by the Beijing government. This was reported in:
- The IEEE Cipher newsletter
- The Economist, 22 July 2000, pg 28 (which indicates that the attacking hosts were in Beijing's Public Security Bureau)
- China Internet Corporation — Since the Internet provides free and open communication, it is perceived as a major threat by the Beijing government. Thus, said government maintains sites containing only officially sanctioned news and information through this dummy organization.
- State control of information — As only four state-controlled entities are allowed to connect to the global Internet (The Economist, 22 July 2000, pg 25), the government strictly controls what web sites are visible from within mainland China. The sites of the BBC, CNN, the Washington Post, and human-rights organizations are blocked almost continuously.
- Narus, a major U.S. corporation, has sold the PRC government the equipment it uses to control and monitor telecommunications.
Anti-Taiwan Propaganda — The domain
taiwan.com was registered in August 1999 to the Xinhua news agency of the People's Republic of China, calling themselves
china.com, based in Hong Kong, PRC. For the Beijing government masquerading as Taiwan, see: http://www.taiwan.com/
- China Society for Human Rights Studies — Since it's run by the Beijing government, this is about as ironic as their Ministry of Religion.... http://www.humanrights-china.org/
Chinese military dummy organizations —
The PLA, People's Liberation Army, has vast
business holdings, including many based in the U.S.
- Information on PLA-owned companies operating in the U.S. can be found at: http://www.churchward.com/cpla/index.html
- COSCO, Chinese Overseas Shipping Company — COSCO is largely owned and operated by the PLA. COSCO almost took over the Long Beach Naval Shipyard when it was shut down. The U.S. military and the city of Long Beach both objected strenuously, but the Clinton administration applied lots of pressure to make it happen. I had thought that it was a done deal, but apparently it was overturned at the last minute.... http://www.cosco.com.hk
- Southwest China Research Institute of Electronic Equipment — part of the military-industrial complex, and possibly everything under the parent company as well.... http://www.ceiec.com/company/swiee.html
- Standard Chartered Bank — http://www.stanchart.com — One of the three issuers of Hong Kong currency, with major operations in New York and elsewhere in the U.S. There are serious allegations that it's controlled by the PLA through various fronts. For details on that, see the 27 May 1999 report by Pennsylvania congressman Curt Weldon, there may still be a link at http://www.freerepublic.com/forum/a374e109120e8.htm
- PRC recruitment for intelligence gathering — ``organizations'' calling on overseas Chinese to help to gather information useful for military, technical, and scientific applications. See http://www.ncix.gov/nacic/cind/2000/mar00.html for more details.
- The People's Daily — get the Party Line straight from the source:
- Hong Kong Voice of Democracy — http://www.democracy.org.hk/EN/index.html
Dissident voices —
Those outside mainland China risk TCP/IP based attacks from the
Beijing government (see above), those within mainland China risk
imprisonment and torture.
See The Economist, 22 July 2000, pp 24-28 for examples.
Among many others, they include:
- Lin Hai, a programmer in Shanghai, was arrested in 1998 for supplying an American dissident magazine with 30,000 Chinese e-mail addresses.
- Qi Yanchen, a journalist in Hebei, was jailed for posting excerpts from his book to the net.
- Huang Qi, whose web site http://www.6-4tianwang.com refers to the taboo Tiananmen massacre of 4 June 1989, was arrested 3 June 2000 and is expected to be charged under new state-secrecy laws.
The People's Republic of China has violently invaded and forcibly
occupied a few nations in the past few decades,
engaging in a little genocide along the way.
Unfortunately, the oppressed people aren't big contributors
to U.S. political parties.
Check out the following
for details on Chinese terror, torture, and genocide.
- Tibet and the Chinese People's Republic, A Report to the International Commission of Jurists by its Legal Inquiry Committee on Tibet (Geneva, 1960)
- In Exile from the Land of Snows, John F Avedon (Alfred A Knopf, 1984)
East Turkistan — Home of the Uighur people,
with Turkic language and genes, Islamic religious faith,
and under heavy suppression.
- International Taklamakan Uighur Human Rights Association — http://www.taklamakan.org/index.html
- Eastern Turkistan Information Center — http://www.uygur.com/index.html
- Eastern Turkistan National Freedom Center — http://www.uyghur.org/
- Free East Turkistan — http://www.caccp.org/et/
- Citizens Against Communist Chinese Propaganda — Their ``Free East Turkistan!'' page: http://www.afn.org/~afn20372/pol/fet.html
Tibet — Tibet is similarly non-Chinese.
Completely different genes, language, culture, religion (they
actually have one!), so Beijing calls for ethnic cleansing.
- Tibetan Government in Exile Two sites, different information:
- Rangzen - Independence for Tibet — http://www.rangzen.com/
- Voice of Tibet — Shortwave broadcasts from various relay transmitters.
- Radio Free Tibet — Broadcasts from Lithuania, where people understand just how wrong communist government can go. email@example.com Also see the Belarussia section for related broadcasts.
- Friends of Tibet — an international network of supporters of Tibet — http://www.friendsoftibet.org
- Citizens Against Communist Chinese Propaganda — Their ``Free Tibet!'' page: http://www.afn.org/~afn20372/pol/caccp.html
- Chushi Gangdruk — historical material on guerrilla fighting starting in 1949, supported at least partially by the U.S. CIA through 1973: http://www.chushigangdruk.org/
Karmapa Ogyen Trinley Dorje —
The Karmapa Lama escaped from Chinese-occupied Tibet
in 2000, traveling via back roads to Mustang, in
Nepal, then continuing to Dharamsala, India.
In early 2001 he was granted refugee status in India.
- BBC news item — http://news.bbc.co.uk/hi/english/world/south_asia/newsid_1300000/1300112.stm
- Karma Triyana Dharmachakra — http://www.kagyu.org/
- Kagyu Thubten Chöling — http://www.kagyu.com/
- Nalanda Bodhi — http://www.nalandabodhi.org/
- Also see the list at: http://www.mathaba.net/www/tibet/index.shtml
Southern Mongolia — a.k.a. Inner Mongolia.
- Southern Mongolia Freedom Federation — http://members.aol.com/yikhmongol/smff.htm
- Inner Mongolian People's Party — http://members.aol.com/imppsite/
- Citizens Against Communist Chinese Propaganda — Their ``Free Southern Mongolia!'' page: http://www.afn.org/~afn20372/pol/fm.html
- Inner Mongolia People's Party http://members.aol.com/imppsite/index.htm
Falun Gong, Falun Dafa, and Zhong Gong —
Qi gong is ``a system of traditional Chinese breathing and
meditation exercises that seek to channel the vital energy of the
body and the universe to various ends.'' (New York Times,
31 July 2000, pg A3)
Some groups have organized themselves around the practice of
qi gong, notably Falun Gong and Zhong Gong,
attaching to qi gong social and political aspects well
removed from its traditional focus, and drawing the ire of the
PRC government against unaffiliated qi gong practitioners.
The People's Republic of China is asking the U.S. to extradite leaders
who have fled to the U.S. — Li Hongzhi, the Falun Gong
founder, a permanent resident of the U.S., and Zhang Hongbao, who
arrived in Guam in February 2000 without a visa.
Beijing government systems have been implicated in TCP/IP attacks
against web servers with qi gong info (see above).
That NYT article says, regarding Zhong Gong,
``The group, whose full name translates as China Life Preservation
and Intellect Improvement Discipline, was one of dozens of schools
started by self-styled masters during the period of loose social
controls that led to the pro-democracy demonstrations of 1989.
After the violent suppression of the democracy movement by military
force, millions of people flocked to the qi gong movement for
spiritual solace, a sort of mass recoil from the perils of political
engagement and the soulless materialism then sweeping the country.
A mass demonstration by Falun Gong followers at the central government
compound in Beijing last year set off a crackdown on it and
Falun Gong, Zhong Gong, and a handful of other qi gong schools have
since been outlawed and hundreds of their senior members have been arrested.''
For some background on qi gong practices and beliefs,
which — it must be stressed — have nothing to do with politics, see:
- Zhong Gong
Falun Dafa and Falun Gong
- Shijie Falun Dafa Guangbo Diantai, the Falun Dafa / Falun Gong shortwave broadcast station, is heavily jammed by PRC transmitters. It broadcasts on shifting frequencies in the range 11-13 MHz around 2200-0300 UTC: http://www.falundafaradio.org/
China, Republic of (Taiwan)
- See the People's Republic of China section above for details on how the PRC is trying to masquerade as Taiwan on the Internet.
- In addition to claims of sovereignty by the People's Republic of China versus Taiwanese desires for independence, there seem to be some separatist movements, although these might just be particularly outspoken anti-PRC groups:
Travel suggestions for visiting the People's Republic of China
After the 1992 moratorium on underground testing of nuclear weapons in the US went into effect, the Department of Energy's National Nuclear Security Administration (NNSA) was tasked with maintaining the country's nuclear weapon deterrent via computing simulations. As a result, Lawrence Livermore National Laboratory (LLNL) and its two sister labs at Los Alamos and Sandia became the recipients of some of the most muscular computing hardware in the world. Today these institutions are at the forefront of supercomputing expertise, both hardware and software.
Because the weapons simulation applications are always looking to achieve higher resolution, higher fidelity, and full-system modeling, there is an ongoing demand for ever-more powerful capability-class supercomputers. Today, Los Alamos houses what is ostensibly the world’s most powerful computer — Roadrunner — which clocks in at over a petaflop. In a couple of years, LLNL is slated to deploy “Sequoia,” a 20-petaflop IBM Blue Gene/Q machine, and a likely contender for the top supercomputer in 2011. Sequoia’s predecessor, “Dawn,” is a 500 teraflop Blue Gene/P machine installed earlier this year at Livermore.
But according to Mike McCoy, who heads Livermore’s Scientific Computing and Communications Department, it’s not all about these elite capability machines. He says 10 to 30 percent of the computational resources at the lab are devoted to capacity systems, that is, commodity HPC Linux clusters. The reason is simple. There is a lot of computing to be done, and time on the expensive capability systems is dear. By necessity a lot of application work has to be developed and tested on these smaller, less expensive machines as a way to contain costs.
There is also quite a bit of unclassified science work performed at the lab in the areas of climate, biology, molecular dynamics, and energy research. Some of this basic science supports the weapons programs, but the remainder is just part of the NNSA’s larger mission of furthering national security. The unclassified work also serves to nurture the lab’s scientists, and without them, there is no weapons program. In any case, the vast majority of this class of computing takes place on vanilla Linux clusters, albeit very large ones.
Today at Livermore, capacity clusters account for 404 teraflops of computing power, while the capability machines deliver 1,324 teraflops. Another 205 teraflops are available in visualization and collaboration systems. The most powerful capability system at the facility is the half-petaflop Dawn, while the largest capacity cluster is Juno, which weighs in at 167 teraflops.
Livermore has relied on a number of cluster computer vendors over the years. In 2002, the now-defunct Linux Networx installed the MCR cluster, which delivered 7.6 teraflops, a performance level that earned it the number three spot on the TOP500 list in June 2003. A more recent vendor is Appro, who won the Peloton contract in 2006 and then the subsequent Tri-Lab Linux Capacity Cluster (TLCC) deal, which served all three NNSA labs.
Today Lawrence Livermore appears to be grooming Dell for some major deployments. Up until last year, the only Dell machines at the lab were sitting on people’s desks. But in November 2008, the company became the cluster partner on the Hyperion project, a testbed system to be used to develop system and application software for HPC. The idea was to provide a platform for developers to build and test codes at scale before they are deployed on larger production systems. That effort has produced some early results including simulating the file system and I/O rates of the future Sequoia system using Hyperion’s InfiniBand and Ethernet SANs.
Last week, Michael Dell met with LLNL officials at Livermore to get a sense of what the NNSA is expecting from its future cluster systems. The agency's goal is to maintain at least a 1:10 performance ratio between capacity systems and capability systems. Today that means you need roughly a 100 teraflop cluster to match up with the purpose-built one-petaflop supers. With Sequoia coming online in 2011, the folks at LLNL are already thinking about clusters in the two-petaflop range. Beyond that, the lab sees the need for 100-petaflop commodity machines in 2018, in anticipation of capability machines hitting the exaflop mark. Keeping that ratio means vendors need to scale today's roughly 100-teraflop commodity clusters by a factor of about 1,000 over the next 9 years.
Recently Dell installed “Coastal,” an 88.5 teraflop system that is being used by the Lawrence Livermore’s National Ignition Facility to help with fusion research. Next year, with Dell’s help, the lab will be more than doubling the performance of the 90 teraflop Hyperion system with “Sierra,” a new cluster that is spec’ed to reach 220 teraflops.
Michael Dell is hoping that’s just the beginning. From his point of view, designing systems pushing the envelope of scalability and technology dovetails nicely with the company’s other big server segments, namely web services infrastructure and cloud computing. For example, the inclusion of SSD technology to increase I/O performance in the Livermore’s Coastal cluster also turned out to be a good solution for Dell servers deployed for a Web search provider in China (presumably Baidu). He sees the demand for these super-sized machines inside and outside of HPC as two sides of the same hyperscale coin. And, he says, the technology transfer travels in both directions. “You always learn from your best customers,” says Dell. | <urn:uuid:5cd41a1f-ae04-4713-bbe3-ce6a29eecbc6> | CC-MAIN-2017-04 | https://www.hpcwire.com/2009/09/23/lawrence_livermore_builds_stable_of_workhorse_clusters/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00556-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930605 | 1,227 | 2.765625 | 3 |
Dive into virtualization certification with these core certs
This feature first appeared in the Summer 2016 issue of Certification Magazine.
Virtualization, in all its myriad forms, is the fundamental enabling technology behind cloud computing. Therefore, virtualization lies at the center point of the knowledge base required of any current-generation systems administrator.
Formally defined, virtualization is a software abstraction layer between physical computer hardware and one or more logical instances of operating systems or applications. For example, we can set up a single physical server to host five or six virtual machines (VMs) that appear to the rest of the network as separate physical servers.
Deploying a VM means representing each major hardware subsystem in software. That is, the VM’s virtual hard disk is simply a (large) file in the physical host server’s file system, and the VM’s virtual CPU, RAM, and networking hardware are “borrowed” from its physical host.
Virtualization takes place at many different levels in 21st century IT. To wit:
Server Virtualization — Deploying fully functional servers by means of a hypervisor. The hypervisor is the underlying “engine” behind virtualization; it takes advantage of virtualization support built into modern CPU hardware. Server virtualization is also sometimes called operating system (OS) virtualization.
Application Virtualization — Here we deploy individual applications in isolated containers that can potentially be streamed across an internal LAN or the public internet. This virtualization method is useful when, for instance, you need to support different versions of the same app on the same target computers.
Virtual Desktop Infrastructure (VDI) — VDI enables businesses to host their employees’ desktop operating system environment on one or more physical host servers. The user normally needs nothing more than a monitor, keyboard, mouse, and network adapter at their desktop; their entire client OS experience is streamed from a central server.
Server virtualization specifically conveys huge benefits to any organization. These benefits include but are definitely not limited to the following:
Agility — You can move VMs from one physical host to another (in most cases) without having to shut down the VM.
Cost Savings — Instead of purchasing and configuring separate hardware servers, you can buy a single physical host and run the other services as VMs. This saves the company hardware costs, electricity, and the human capital necessary to maintain separate servers.
Ease of Backup — Backing up and restoring entire VMs is trivially easy because all VM data and configuration exists in the host server’s file system and memory space.
Now that we understand a bit of what virtualization is and why it’s so highly prized in the IT industry, let’s turn our attention to virtualization certifications.
Virtualization Certification — The Big Picture
Most of the major IT service vendors have their own virtualization products, and offer certifications to help people validate their skills with particular platforms. In this article we’ll examine certifications by the following five vendors. I’ve also listed each vendor’s flagship virtualization product or platform:
● VMware (vSphere)
● Microsoft (Hyper-V)
● Citrix (XenApp; XenDesktop)
● Oracle (Oracle VM)
● Red Hat (Red Hat Enterprise Virtualization)
As you know by now, there are several different ways virtualization can be accomplished. To that point, each aforementioned vendor offers entire portfolios of virtualization-related certifications.
The approach we’ll take today is to explain each vendor’s entry-level or associate certification. This way, you’ll have a feel for how each vendor’s program works. You are then encouraged to visit the vendor’s certification web site to learn of related offerings. Let’s get started!
VMware has been in the virtualization market for a long time. Let’s take a look at their VMware Certified Associate 6 – Data Center Virtualization (VCA6-DCV) credential. Historically, VMware requires their certification candidates to attend an in-person, VMware-authorized training course as a prerequisite to taking a VMware exam.
The good news here is that the VCA6-DCV has no course requirement. You simply need to pass Exam 1V0-601, VCA6-DCV Fundamentals to obtain your first VMware certification. This computer-based exam includes 50 multiple-choice questions, has a time limit of 75 minutes, and costs $120 per attempt.
That’s right — most IT certification exams are paid per attempt, and in general you receive no discount on subsequent exam registrations. The “6” in VCA6-DCV stands for vSphere version 6; it’s important to note that VMware’s certifications are always aligned to a particular product version.
That said, only the VMware Certified Professional (VCP) exams, along with VMware’s even higher-level titles, require recertification. Your associate-level title will remain valid indefinitely.
Microsoft’s Hyper-V hypervisor, available in both Windows Server and modern Windows Client operating system, is perhaps VMware’s biggest competitor in business.
As of this writing in late spring 2016, Microsoft has a single specialist certification in Hyper-V and their System Center datacenter management suite. It’s exam 70-409, Server Virtualization with Windows Server Hyper-V and System Center.
In the Microsoft technology stack, the Hyper-V hypervisor is only part of a toolchain that includes System Center products such as System Center Virtual Machine Manager (SCVMM) and System Center Data Protection Manager (DPM).
Like the VCA6-DCV, this Hyper-V/System Center specialist credential has no classroom training requirement. You register to take this computer-based test through Pearson VUE, and each attempt costs $150.
Microsoft has historically been generous in offering certification exam discounts — be on the lookout for Microsoft Learning’s certification-related promotions, because you may get a really good deal.
Citrix is in the application (XenApp) and VDI (XenDesktop) market segment, and they have significant market penetration to prove their successful track record.
Consider the Citrix Certified Associate – Virtualization (CCA-V) title to get your feet wet with their technologies. At the associate level, Citrix wants you to validate your skills against their XenDesktop product. Exam 1Y0-201, Managing Citrix XenDesktop 7.6 Solutions, is a 180-minute computer-based test offered at Pearson VUE testing centers that costs $200 per attempt.
One thing to keep in mind concerning Citrix exams is that they include simulation items in addition to your bread-and-butter multiple choice items. Simulation items present a mocked-up XenDesktop environment in which you’re asked to perform various configuration tasks.
Simulation items are nice, because you can prove to yourself and Citrix that you actually know how to do the work. On the other hand, some Pearson VUE testing centers have such old, rickety testing PCs that simulation items are a bit unstable. Do your research before scheduling an exam!
When Oracle purchased Sun Microsystems in 2010, they inherited Sun’s VirtualBox VM desktop hypervisor. Oracle has since expanded on that hypervisor, turning it into an enterprise server virtualization platform called, appropriately enough, Oracle VM.
The Oracle VM 3.0 for x86 Certified Implementation Specialist (say that three times quickly) has no classroom training requirement and mandates that you pass only one exam: 1Z0-590, Oracle VM 3.0 for x86 Essentials.
Exam 1Z0-590 includes 72 multiple-choice questions given over 120 minutes and has a passing score of 61 percent. Exam registration occurs through Pearson VUE and costs $245 per attempt.
Red Hat has been a leading enterprise Linux provider since at least 1999. In keeping with modern industry trends, Red Hat has their own hypervisor platform called Red Hat Enterprise Virtualization Manager, as well as an associated credential called Red Hat Certified Virtualization Administrator (RHCVA).
Red Hat lists the following as prerequisites for Exam EX318, so take care to prepare yourself accordingly:
● Experience using Red Hat Enterprise Virtualization
● Experience using and installing software on Windows
● Experience using VNC to view remote desktops
The 3-hour test costs $600 if taken via classroom at one of Red Hat’s prearranged locations (in California, Georgia, New Jersey, Texas and Washington, D.C.), but there is also an option to take the test at your work site. Like all Red Hat certification exams, Exam EX318 is entirely performance-based.
In summary, your decision to go for a virtualization vendor’s entry-level, professional-level, or expert-level certification depends entirely upon your overall virtualization experience and familiarity with the vendor’s products.
A general trend with these certifications is that you don’t need to prove you’ve worked with the platform for a given number of months or years. As long as you meet the exam registration (and sometimes classroom training) requirements, then you can sit for the test, receive your credential, and go on your merry way.
If nothing else, a virtualization certification is likely to give you an edge over your competitors in your next job hunt. Also, IT contracts with governmental or other bureaucratic organizations oftentimes require proof of industry certification. | <urn:uuid:acc64d3f-5380-4d3a-bfaa-b4f25b29db39> | CC-MAIN-2017-04 | http://certmag.com/dive-virtualization-certification-core-certs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00098-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913388 | 1,999 | 3.46875 | 3 |
The following will also be used as part of my day job. To ITSec types, it’s likely old hat, but it may be new to some of my readers. Some terminology in here is my-job-specific, but I did try to generalize it as much as possible.
What is it?
Firesheep is a readily-available and easily-installed add-on for Firefox, a commonly-used web browser. It allows the person using it to, within certain parameters, steal web cookies used for authorization on many common websites, mostly social networking but also, most notably, Hotmail. This is a form of attack commonly known as sidejacking. Note that while this document largely talks about uw-wireless, everything about that network applies equally to any open or unsecured wireless network, such as uw-guest or those commonly found in coffee shops, restaurants, and other places offering free service to customers.
Please note that discussing a tool does not condone its use. In particular, the use of Firesheep on a campus network is a violation of the principles laid down in the Guidelines on Use of Waterloo Computing and Network Resources and, as noted by these Guidelines, is subject to discipline under the appropriate University Policy or Policies. Use of Firesheep here, or anywhere else, may furthermore violate local laws and regulations on privacy, mischief, or wiretapping.
What does it mean?
Firesheep makes it easy for anybody who can click a mouse to access these websites as if they were the person whose credentials they’ve stolen – and that person may not even be aware of the accesses.
How does it work?
The victim has to be on the same wireless access point, using the same wireless network, as the attacker. That implies a certain physical proximity. The wireless network must be open, and the victim must be using the network at the same time as the attacker, and must be accessing these sites (but see below, the victim may not necessarily know they were using a particular website, embedded content can cause problems).
When Firesheep is started, the attacker’s computer starts passively watching the network for authentication credentials. When it sees such, it saves them and presents an icon for the attacker to click; this allows the attacker to easily access the website as the victim.
It should be noted that a user may inadvertently access a website. Many websites will have badges to follow the site’s author on Twitter, or link to them on Facebook. Sometimes the simple act of loading that website can cause your browser to send and receive authentication cookies, and therefore expose this information to the attacker.
I don’t use social networks.
Firesheep doesn’t only work on social networks, so you still may not be safe. Web services as varied as Evernote, Cisco, eBay, Amazon, and Slicehost have had Firesheep handlers written for them.
Why doesn’t the University stop it from happening?
This attack takes advantage of two weaknesses in the way victims might access vulnerable websites.
The first weakness is that open networks are precisely as the name implies. All clients transmit all data unencrypted using their wireless radio. Like any radio, anybody can listen in. This means that unless the client takes extra precautions to encrypt data, such as using TLS/SSL encryption on the data stream itself, that data is exposed for anybody within range who’s listening. This means that many websites which do encryption properly, such as almost all banking websites, are not vulnerable to authentication theft in the way that Firesheep accomplishes it.
The second weakness is in the way the vulnerable websites perform authentication and authorization. Without getting too technical, these sites rely on cookies, and merely having that cookie implies both that you are who you say you are, and you’re allowed to access the content you’ve requested. Not all websites have this issue; as noted, banking websites don’t rely on this model. Other sites such as GMail and corporate applications at the University of Waterloo don’t either, and so your credentials there are safe from this attack.
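For site operators, the practical fix for this second weakness is to mark session cookies so the browser will only ever send them over an encrypted connection. Here is a rough sketch using the standard Java servlet API; the cookie name and token generation are placeholders, and this only helps when the whole site actually runs over TLS/SSL.

```java
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch: issue a session cookie that Firesheep-style sniffing can't capture.
public class SecureSessionServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        Cookie session = new Cookie("SESSIONID", newSessionToken());

        // Secure: the browser will only transmit this cookie over HTTPS,
        // so it never appears in cleartext on an open wireless network.
        session.setSecure(true);

        // HttpOnly: keeps page scripts from reading the cookie (blocks the
        // XSS variant of cookie theft; it does not stop sniffing by itself).
        session.setHttpOnly(true);

        resp.addCookie(session);
    }

    private String newSessionToken() {
        // Placeholder; real code must use a cryptographically secure RNG.
        return java.util.UUID.randomUUID().toString();
    }
}
```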
What can I do to protect myself?
The simplest thing you can do is not use open wireless networks. These are ones which do not require a key or password to access. uw-wireless is one such network, but many coffee shops and restaurants and other companies provide such networks for the convenience of their customers. This also affords attackers certain conveniences.
The next-best thing is to not use sites vulnerable to Firesheep attack whilst connected to an open wireless network. Be aware that some sites you visit may effectively force you to give up your credentials anyway, as noted under How Does It Work?
Some clients customized for use by some social networks, such as Tweetdeck, may allow these networks to be used in a manner that does not expose authorization credentials to Firesheep. That does not necessarily mean that these clients always operate in a safe manner.
Some people have authored Firefox extensions which could potentially warn you about the use of Firesheep on your network segment. The use of these extensions (the most commonly mentioned are FireShepherd and BlackSheep) is prohibited on University wireless networks, as they have a deleterious effect on the operation of the campus network and, not incidentally, of the remote service. | <urn:uuid:e475be5a-3431-451e-ad7e-f990ff211fe0> | CC-MAIN-2017-04 | http://snowcrash.ca/tag/social-media/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00308-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94599 | 1,117 | 2.703125 | 3 |
Using RAID technology? Alex Young provides some help.
RAID (redundant array of independent disks) technology has enabled users to get much more out of their hard disk drives, namely data protection, fault-tolerance, increased performance and larger capacity. In a RAID array, which can be configured to different levels e.g. 0, 1 or 5, data is shared and/or replicated across multiple disks. Below are the top ten tips existing and potential users of RAID technology should consider for their implementation of the technology:
1. Compare like-for-like: when comparing RAID products don’t focus on the CPU clock speed, as that doesn’t always mean faster performance; you should look at the performance figures (MB/sec or IOPS) instead. Only when two RAID products are based on identical RAID ASIC, RAID firmware, and hardware design does comparing the CPU clock speed make sense. Each RAID product manufacturer offers a different CPU, architecture, firmware, hardware, RAID ASIC, etc., hence the CPU clock speed alone is not a reliable performance comparison criterion.
2. Keep it flowing: if you have purchased a 24- or 16-bay RAID subsystems to meet future growth and have not installed disk drives in each bay, install empty trays in the chassis as this will ensure sufficient air flow.
3. Needle in a haystack: if you, like many RAID users, have deployed dozens or even hundreds of RAID subsystems, let the software display a text string that the administrator can identify when searching for a specific unit. For example you can use the IP address, the name of the connecting host computer, or you can even give the RAID subsystem a name or number.
4. Ensure you have enough capacity: before proceeding with a RAID migration project make sure you have sufficient free capacity or unused drives in your RAID subsystem. RAID 6 arrays require at least four member drives and use additional capacity for the distribution of secondary parity. For example, if you decide to migrate a three-drive RAID 5 array to RAID 6 you will need one additional disk drive or enough unused space to hold the second parity. (A toy illustration of how parity rebuilds lost data appears after these tips.)
5. Protect the cached data: the life span of a battery varies according to the number of recharging / discharging cycles hence you should replace the Battery Backup Module (BBU) after 12 months of operation in order to safeguard the cached data should a mains power failure occur.
6. Check your writes: the Media Scan should be performed regularly. Unless you enabled the Write-Verify function for the normal writes, the disk drives usually do not verify the data when writing. Performing Media Scan can decrease the risk of having multiple data blocks missing, and lowers the risk of data loss. You can have the RAID subsystem perform the Media Scan monthly by using the automatic scheduler function.
7. Reduce latency: when the cache Write-Back is disabled (Write-Through Mode) the entire host IOs are passed directly to the disk drives after RAID operations. All the disk drives will be accessing the data blocks in an order related to the host, and most of the time will be moving the read/write arm and waiting for the data blocks (the so called Latency Time). When the cache Write-Back is enabled, the Write data from the hosts are collected in the cache memory, optimised with the cache algorithms and then flushed to the disks by the RAID controller. The Write-Back cache mode does save a big percentage of disk drive latency time and provides a much better Write performance in most situations, compared to the Write-Through mode (Write-Back disabled).
8. Beware of slow PCI slots: you might be struggling to reproduce the highest performance figure of the RAID subsystem. This might be due to the host computer where the SCSI or Fibre Channel HBAs are usually installed. Often there will be only one or two PCI slots in a computer and while other PCI slots might look the same they might be running at lower speeds. Depending on the computer’s internal design, in many situations multiple PCI devices will have to share the PCI bandwidth and these will all limit the maximum performance that can be performed by the SCSI / Fibre Channel HBA, and affect the performance test results.
9. Plan for growth: when creating RAID Logical Drives, plan to accommodate any future drive capacity variation and be aware that drives from different manufacturers which are supposed to offer the same capacity will actually vary in size. The capacity of a disk drive is measured by the number of available data blocks, often labelled on the drive as ‘LBA’ (Logical Block Address; each block is 512 bytes). In a RAID Logical Drive all member drives will be used up to the largest common capacity, so if, for example, the three disk drives in a RAID 5 Logical Drive have 100, 99 and 101 blocks, only 99 blocks will be used on each disk drive when creating the RAID 5 onto these three disks. This discrepancy means that when a drive has failed and the replacement drive is slightly smaller in capacity, the ‘rebuild’ won’t start. The solution? When you create the RAID Logical Drive, use 1 percent less capacity than the ‘official’ one.
10. Hard copies not just hardware: keep a print copy of the RAID array configurations and connection schemes, as in some situations if the complete system is being replaced and the replacement is not the same product (e.g. from a different vendor), the original configuration file might not work. Keeping a hard copy of these details can ensure you quickly have the new unit up and running.
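As promised under tip 4, here is a toy illustration of how single-parity RAID recovers a lost block: parity is simply the XOR of the data blocks, so any one missing block can be rebuilt by XORing everything that survives. One byte stands in for a whole block here; real controllers do this per stripe in hardware, and RAID 6 adds an independent second parity so two failures are survivable.

```java
// Toy model of RAID 5-style parity: one byte stands in for a whole block.
public class ParityDemo {
    public static void main(String[] args) {
        int[] data = {0x3A, 0x7F, 0xC4};      // three "member drives"

        // The parity block is the XOR of all data blocks.
        int parity = 0;
        for (int block : data) parity ^= block;

        // Simulate losing drive 1, then rebuild it from the survivors.
        int rebuilt = parity ^ data[0] ^ data[2];

        // Prints: lost=0x7F rebuilt=0x7F
        System.out.printf("lost=0x%02X rebuilt=0x%02X%n", data[1], rebuilt);
    }
}
```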
Alex Young is director of technical and marketing EMEA, Infortrend www.infortrend-europe.com
• Date: 8th Dec 2006 • Region: World • Type: Article • Topic: IT continuity
The writing was on the wall as far back as the ‘80s: IPv4, the fourth version of the Internet Protocol, a standards-based routing method for the vast majority of Internet traffic, was going to run out of addresses. Finally, last year, the American Registry for Internet Numbers (ARIN) ran out of their supply of IPv4 addresses. Although official exhaustion was reached in 2011, network design and routing tricks prolonged the supply, as did the trading of IP addresses on the open market.
Read on to learn how the switch to the relatively new IPv6 affects data centers. But first, a quick primer on IP addresses in general.
IPv4 is used on packet-switched networks, using 32-bit addresses. This means that there is an upper limit of 4,294,967,296 addresses. Each address is used to identify an individual device connected to the internet, which is then used to direct traffic to and from the device.
IPv4 is most often represented in the dot-decimal notation, with four sets of numbers separated by periods. In this format, because the actual value is 32-bit, the quad-dot IP address 192.0.2.235 represents 3221226219.
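The conversion behind that example is ordinary base-256 place value, as this short sketch shows (Java is used here purely for illustration, and there is no input validation, just the arithmetic):

```java
// Convert dotted-quad notation to its 32-bit value and back.
public class DottedQuad {
    static long toLong(String quad) {
        long value = 0;
        for (String octet : quad.split("\\.")) {
            value = (value << 8) | Integer.parseInt(octet); // base-256 digits
        }
        return value;
    }

    static String toQuad(long value) {
        return ((value >> 24) & 0xFF) + "." + ((value >> 16) & 0xFF) + "."
             + ((value >> 8) & 0xFF) + "." + (value & 0xFF);
    }

    public static void main(String[] args) {
        System.out.println(toLong("192.0.2.235")); // 3221226219
        System.out.println(toQuad(3221226219L));   // 192.0.2.235
    }
}
```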
Out of the over 4 billion addresses allocated in IPv4, there are subdivisions for private networks, shared addresses, benchmark tests, and other specific uses.
As the number of Internet users increased dramatically, plus more and more devices connected to the internet, it became very obvious that IPv4 would run out of the total possible number of addresses. In fact, over 4 billion devices already share IP addresses.
IPv6 adds 340 trillion trillion trillion additional IP addresses, more than enough for every person on the planet to have dozens of devices connected. IPv6 launched on June 6, 2012, but many organizations have not adopted measures to accommodate the new system.
As opposed to the 32-bit addresses used by IPv4, IPv6 uses 128-bit addresses. The two are not interoperable, so internet providers and network technicians must double up on any equipment that cannot read both protocols. IPv6 addresses are written as 8 groups of 16 bits, each group represented by 4 hexadecimal digits and separated by colons, like 2001:0db8:0000:0000:0000:ff00:0042:8329. Because these are so much more unwieldy to type and say, a standard shorthand lets you drop leading zeros within a group and collapse a single run of all-zero groups to "::", so the example above can be written 2001:db8::ff00:42:8329.
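If you want to see the eight 16-bit groups behind the shorthand, a few lines of code will expand an address for you. This sketch uses Java's standard java.net classes; address literals are parsed locally, without a DNS lookup.

```java
import java.net.InetAddress;

// Parse an IPv6 literal and print its eight 16-bit groups in full.
public class ShowGroups {
    public static void main(String[] args) throws Exception {
        InetAddress addr = InetAddress.getByName("2001:db8::ff00:42:8329");
        byte[] b = addr.getAddress();           // 16 bytes = 128 bits

        StringBuilder full = new StringBuilder();
        for (int i = 0; i < 16; i += 2) {
            if (i > 0) full.append(':');
            int group = ((b[i] & 0xFF) << 8) | (b[i + 1] & 0xFF);
            full.append(String.format("%04x", group));
        }
        // Prints: 2001:0db8:0000:0000:0000:ff00:0042:8329
        System.out.println(full);
    }
}
```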
Right now, according to Google, the United States is sitting at around 24% IPv6 adoption, while globally adoption rates are closer to 8.5%.
The new protocol also adds some additional security features, simplifies router processing, and implements multicasting, in which a single network packet is sent to multiple destinations in a single operation.
Because virtualization proliferates far more machines than physical hardware would otherwise allow, many data centers have already been forced to adopt IPv6. However, older devices must be moved to the new protocol, and the entire data center must be able to support both versions for years to come.
In July of 2013, the Internet Engineering Task Force drafted Operation Guidelines for Datacenters regarding IPv6. They stated that there are three transition stages:
During this first stage, the data center keeps a native IPv4 infrastructure, with gateway routers and application gateways adapting IPv6 traffic arriving from the outside internet.
While the two protocols are not interoperable, there are methods to allow transitioning between them, like IPv4-translated IPv6 addresses, in which an algorithm translates each packet; some functionality is naturally lost. Other transition mechanisms include tunnel brokers, 6rd, NAT64 servers, 464XLAT, and more. Some of these have specific uses, like allowing IPv6 networks to communicate with technologies that are currently limited to IPv4.
This is going to be the stage for most data centers for the foreseeable future, until IPv6 is the most common protocol in use. In the dual stack phase, both native IPv4 and IPv6 are present in the infrastructure, up to whatever layer in the interconnection scheme where Layer 3 packet forwarding is applied.
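For many server applications, the dual stack stage requires surprisingly little code change: binding a listener to the IPv6 wildcard address usually accepts IPv4 clients as well, which show up as IPv4-mapped addresses. This is only a sketch; the exact behavior depends on the host's dual-stack settings (for example, the IPV6_V6ONLY socket option), and the port number is arbitrary.

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Sketch: one listener for both protocols. Bound to "::", most dual-stack
// hosts also deliver IPv4 connections (shown as ::ffff:a.b.c.d).
public class DualStackEcho {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket();
        server.bind(new InetSocketAddress(InetAddress.getByName("::"), 7070));

        while (true) {
            try (Socket client = server.accept()) {
                // The remote address reveals which protocol the client used.
                System.out.println("connection from "
                        + client.getRemoteSocketAddress());
                client.getOutputStream()
                      .write("hello\n".getBytes(StandardCharsets.US_ASCII));
            }
        }
    }
}
```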
Pretty self-explanatory, this final stage involves a pervasive IPv6 infrastructure, including IPv6 hypervisors, with tunneling or NAT used only where applications still require IPv4.
For now, it seems that all network devices along the path must support both protocols. That means endpoints, routers, and switches. Most backbone and internet service providers likely already support both. In many cases, this will be as simple as turning IPv6 on for a compatible device. In other cases, new hardware might be necessary, and close monitoring of traffic will let network technicians know where they need to implement 6in4 protocol translators like tunnelbrokers. | <urn:uuid:0bdeb69c-5ebf-4cdc-acea-05ffaf1b137d> | CC-MAIN-2017-04 | https://www.greenhousedata.com/blog/ipv6-what-does-it-mean-for-data-centers | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00365-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936993 | 977 | 3.265625 | 3 |
Use of the Java language in real-time systems isn't widespread for a number of significant reasons. These include the nondeterministic performance effects inherent in the Java language's design, such as dynamic class loading, and in the Java Runtime Environment (JRE) itself, such as the garbage collector and native code compilation. The Real-time Specification for Java (RTSJ) is an open specification that augments the Java language to open the door more widely to using the language to build real-time systems (see Resources). Implementing the RTSJ requires support in the operating system, the JRE, and the Java Class Library (JCL). This article explores the challenges to using the Java language to implement real-time systems and introduces a development kit and runtime environment that tackles those challenges. Subsequent articles in this series will cover in greater depth the concepts and technologies that this article introduces.
Real-time (RT) is a broad term used to describe applications that have real-world timing requirements. For example, a sluggish user interface doesn't satisfy an average user's generic RT requirements. This type of application is often described as a soft RT application. The same requirement might be more explicitly phrased as "the application should not take more than 0.1 seconds to respond to a mouse click." If the requirement isn't met, it's a soft failure: the application can continue, and the user, though unhappy, can still use it. In contrast, applications that must strictly meet real-world timing requirements are typically called hard RT applications. An application controlling the rudder of an airplane, for example, must not be delayed for any reason because the result could be catastrophic. What it means to be an RT application depends in large part on how tolerant the application can be to faults in the form of missed timing requirements.
Another key aspect of RT requirements is response time. It's critical for programmers writing hard or soft RT applications to understand the response-time constraint. The techniques required to meet a hard 1-microsecond response are significantly different from those required to meet a hard 100-millisecond response. In practice, achieving response times below tens of microseconds requires a combination of custom hardware and software, possibly with no -- or a very thin -- operating-system layer.
Finally, designers of robust RT applications typically need some quantifiable level of deterministic performance characteristics in order to architect an application to meet the response-time requirements. Unpredictable performance effects large enough to impact a system's ability to meet an application's response-time requirements make it difficult and maybe even impossible to architect that application properly. The designers of most RT execution environments devote considerable effort to reducing nondeterministic performance effects to meet the response-time needs of the broadest possible spectrum of RT applications.
Challenges for RT Java applications
Standard Java applications running on a general-purpose JVM on a general-purpose operating system can only hope to meet soft RT requirements at the level of hundreds of milliseconds. Several fundamental aspects of the language are responsible: thread management, class loading, Just-in-time (JIT) compiler activity, and garbage collection (GC). Some of these issues can be mitigated by application designers, but only with significant work.
Standard Java provides no guarantees for thread scheduling or thread priorities. An application that must respond to events in a well-defined time has no way to ensure that another low-priority thread won't get scheduled in front of a high-priority thread. To compensate, a programmer would need to partition an application into a set of applications that the operating system can then run at different priorities. This partitioning would increase the overhead of these events and make communication between the events far more challenging.
A Java-conformant JVM must delay loading a class until it's first referenced by a program. Loading a class can take a variable amount of time depending on the speed of the medium (disk or other) the class is loaded from, the class's size, and the overhead incurred by the class loaders themselves. The delay to load a class can commonly be as high as 10 milliseconds. If tens or hundreds of classes need to be loaded, the loading time itself can cause a significant and possibly unexpected delay. Careful application design can be used to load all classes at application start-up, but this must be done manually because the Java language specification doesn't let the JVM perform this step early.
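A minimal sketch of that manual workaround follows. It loads and initializes a fixed list of classes before any time-critical work begins; the class names shown are placeholders, and a real application would enumerate the classes on its own time-critical paths:

```java
import java.util.Arrays;
import java.util.List;

public class ClassPreloader {
    // Placeholder list; substitute the application's own classes.
    private static final List<String> CLASS_NAMES = Arrays.asList(
        "java.util.HashMap",
        "java.util.ArrayList",
        "java.util.concurrent.ConcurrentLinkedQueue"
    );

    /** Call once at start-up, before any time-critical work begins. */
    public static void preload() {
        for (String name : CLASS_NAMES) {
            try {
                // Triggers loading, linking, and static initialization now,
                // so no class-load pause occurs at first use later.
                Class.forName(name);
            } catch (ClassNotFoundException e) {
                System.err.println("Could not preload " + name);
            }
        }
    }
}
```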
The benefits of GC to application development -- including pointer safety, leak avoidance, and freeing developers from needing to write custom memory-management tooling -- are well documented. However, GC is another source of frustration for hard RT programmers using the Java language. Garbage collections occur automatically when the Java heap has been exhausted to the point that an allocation request can't be satisfied. The application itself can also trigger a collection.
On the one hand, GC is a great thing for Java programmers. Errors introduced by the need to manage memory explicitly in languages such as C and C++ are some of the most difficult problems to diagnose. Proving the absence of such errors when an application is deployed is also a fundamental challenge. One of the Java programming model's major strengths is that the JVM, not the application, performs memory management, which eliminates this burden for the application programmer.
On the other hand, traditional garbage collectors can introduce long delays at times that are virtually impossible for the application programmer to predict. Delays of several hundred milliseconds are not unusual. The only way to solve this problem at the application level is to prevent GC by creating a set of objects that are reused, thereby ensuring that the Java heap memory is never exhausted. In other words, programmers solve this problem by throwing away the benefits of the managed memory by explicitly managing memory themselves. In practice, this approach generally fails because it prevents programmers from using many of the class libraries provided in the JDK and by other class vendors, which likely create many temporary objects that eventually fill up the heap.
Compiling Java code to native code introduces a similar problem to class loading. Most modern JVMs initially interpret Java methods and, for only those methods that execute frequently, later compile to native code. Delayed compiling results in fast start-up and reduces the amount of compilation performed during an application's execution. But performing a task with interpreted code and performing it with compiled code can take significantly different amounts of time. For a hard RT application, the inability to predict when the compilation will occur introduces too much nondeterminism to make it possible to plan the application's activities effectively. As with class loading, this problem can be mitigated by using the Compiler class to compile methods programmatically at application start-up, but maintaining such a list of methods is tedious and error prone.
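A minimal sketch of that approach is shown below. Note that java.lang.Compiler is only a hint to the VM, which may have no compiler installed or may decline the request (compileClass() then returns false), and the classes passed in here are placeholders:

```java
public class StartupCompiler {
    /** Asks the VM to compile the given classes up front. */
    public static void precompile(Class<?>... classes) {
        for (Class<?> c : classes) {
            // java.lang.Compiler is a hint only; false means the VM
            // declined or has no compiler available.
            if (!Compiler.compileClass(c)) {
                System.err.println("VM declined to compile " + c.getName());
            }
        }
    }

    public static void main(String[] args) {
        precompile(java.util.HashMap.class, StringBuilder.class);
    }
}
```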
The Real-time Specification for Java
The RTSJ was created to address some of the limitations of the Java language that prevent its widespread use in RT execution environments. The RTSJ addresses several problematic areas, including scheduling, memory management, threading, synchronization, time, clocks, and asynchronous event handling.
Scheduling
RT systems need to control strictly how threads are scheduled and guarantee that they're scheduled deterministically: that is, that threads are scheduled the same way given the same set of conditions. Although the JCL defines the concept of thread priority, a traditional JVM is not required to enforce priorities. Also, non-RT Java implementations typically use a round-robin preemptive scheduling approach with unpredictable scheduling order. With the RTSJ, true priorities and a fixed-priority preemptive scheduler with priority-inheritance support are required for RT threads. This scheduling approach ensures that the highest-priority active thread is always executing and continues to execute until it voluntarily releases the CPU or is preempted by a higher-priority thread. Priority inheritance ensures that priority inversion is avoided when a higher-priority thread needs a resource held by a lower-priority thread. Priority inversion is a significant problem for RT systems, and we describe it in more detail in the RT Linux section below.
Memory management
Although some RT systems can tolerate delays resulting from the garbage collector, in many cases these delays are unacceptable. To support tasks that cannot tolerate GC interruptions, the RTSJ defines immortal and scoped memory areas to supplement the standard Java heap. These areas allow tasks to use memory without being required to block if the garbage collector needs to free memory in the heap. Objects allocated in the immortal memory area are accessible to all threads and are never collected. Because it is never collected, immortal memory is a limited resource that must be used carefully. Scoped memory areas can be created and destroyed under programmer control. Each scoped memory area is allocated with a maximum size and can be used for object allocation. To ensure the integrity of references between objects, the RTSJ defines rules that govern how objects in one memory area (heap, immortal, or scoped) can refer to objects in other memory areas. More rules define when the objects in a scoped memory area are finalized and when the memory area can be reused. Because of these complexities, the recommended use of immortal and scoped memory is limited to components that cannot tolerate GC pauses.
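Here is a minimal sketch of both kinds of memory area using the javax.realtime API. It assumes an RTSJ-compliant VM such as WebSphere Real Time, the sizes are arbitrary, and exception handling is elided:

```java
import javax.realtime.ImmortalMemory;
import javax.realtime.LTMemory;
import javax.realtime.RealtimeThread;

public class MemoryAreaDemo {
    public static void main(String[] args) throws Exception {
        // Immortal memory: objects allocated here are never collected,
        // so this area should hold only long-lived, bounded data.
        ImmortalMemory.instance().executeInArea(new Runnable() {
            public void run() {
                byte[] lookupTable = new byte[1024]; // lives until VM shutdown
            }
        });

        // Scoped memory must be entered from a real-time thread.
        final LTMemory scope = new LTMemory(16 * 1024, 64 * 1024);
        new RealtimeThread() {
            public void run() {
                scope.enter(new Runnable() {
                    public void run() {
                        // Temporaries allocated here are reclaimed in one
                        // step when the last thread leaves the scope --
                        // no garbage collector involvement at all.
                        StringBuilder scratch = new StringBuilder("work");
                    }
                });
            }
        }.start();
    }
}
```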
Threads
The RTSJ adds support for two new thread classes that provide the basis for executing tasks with RT behaviour: RealtimeThread and NoHeapRealtimeThread (NHRT). These classes provide support for priorities, periodic behaviour, deadlines with handlers that can be triggered when the deadline is exceeded, and the use of memory areas other than the heap. NHRTs cannot access the heap and so, unlike other types of threads, NHRTs are mostly not interrupted or preempted by GC. RT systems typically use NHRTs with high priorities for tasks with the tightest latency requirements, RealtimeThreads for tasks with latency requirements that can be accommodated by a garbage collector, and regular Java threads for everything else. Because NHRTs cannot access the heap, using these threads requires a high degree of care. For example, even the use of container classes from the standard JCL must be carefully managed so that the container class doesn't unintentionally create temporary or internal objects on the heap.
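A minimal sketch of a periodic RealtimeThread using the javax.realtime API follows. The priority value is a placeholder, since legal real-time priorities should be queried from the installed scheduler, and the 5-millisecond period is chosen only for illustration:

```java
import javax.realtime.PeriodicParameters;
import javax.realtime.PriorityParameters;
import javax.realtime.RealtimeThread;
import javax.realtime.RelativeTime;

public class PeriodicSensorTask {
    public static void main(String[] args) {
        // 80 is a placeholder; real code should query the installed
        // scheduler for its legal real-time priority range.
        PriorityParameters priority = new PriorityParameters(80);

        // Release every 5 ms; no explicit start, cost, deadline, or handlers.
        PeriodicParameters release = new PeriodicParameters(
                null, new RelativeTime(5, 0), null, null, null, null);

        RealtimeThread sampler = new RealtimeThread(priority, release) {
            public void run() {
                while (waitForNextPeriod()) {
                    // Bounded periodic work goes here. A deadline-miss
                    // handler could have been registered above instead of
                    // relying on the boolean result.
                }
            }
        };
        sampler.start();
    }
}
```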
Synchronization
Synchronization must be carefully managed within an RT system to prevent high-priority threads from waiting for lower-priority threads. The RTSJ includes priority-inheritance support to manage synchronization when it occurs, and it provides the ability for threads to communicate without synchronization via wait-free read and write queues.
Time and clocks
RT systems need higher-resolution clocks than those provided by standard Java code. The new HighResolutionTime and Clock classes encapsulate these time services.
Asynchronous event handling
RT systems often manage and respond to asynchronous events. The RTSJ includes support for handling asynchronous events triggered by a number of sources including timers, operating-system signals, missed deadlines, and other application-defined events.
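A minimal sketch using the javax.realtime API, with an application-defined trigger standing in for a timer or operating-system signal:

```java
import javax.realtime.AsyncEvent;
import javax.realtime.AsyncEventHandler;

public class AlarmDemo {
    public static void main(String[] args) {
        AsyncEvent alarm = new AsyncEvent();

        alarm.addHandler(new AsyncEventHandler() {
            public void handleAsyncEvent() {
                // Runs in its own schedulable context; if the event fires
                // faster than it can be handled, fire counts accumulate.
                System.out.println("alarm handled");
            }
        });

        // Application-defined trigger; an event can also be bound to a
        // timer or to an external happening such as an OS signal.
        alarm.fire();
    }
}
```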
IBM WebSphere Real Time
Implementing the RTSJ requires broad support from the underlying operating system as well as components of the JRE. IBM® WebSphere® Real Time, released in August 2006 (see Resources), includes full RTSJ compliance as well as several new technologies aimed at improving RT systems' runtime behaviour and facilitating the work application designers must do to create RT systems. Figure 1 shows a simplified representation of WebSphere Real Time's components:
Figure 1. Overview of WebSphere Real Time
WebSphere Real Time is based upon IBM's cross-platform J9 technology. Open source RT patches applied to the Linux operating system provide the fundamental RT services required to support RT behaviours, particularly those mandated by the RTSJ. Significantly enhanced GC technology supports 1-millisecond pause times. JIT compilation can be used for softer RT scenarios where compilation can occur when no higher priority work needs to be done. A new Ahead-of-time (AOT) compilation technology (not shown in Figure 1) has also been introduced to provide harder RT performance in systems where JIT compilation is inappropriate. The following sections introduce each of these technologies; later articles in this series will give more details on how each technology works.
WebSphere Real Time runs on a customized, fully open source version of Linux. Several changes were applied to create an environment for RT Java. These changes provide a fully preemptible kernel, threaded interrupt handlers, high-resolution timers, priority inheritance, and robust mutexes.
Fully preemptible kernel
RT Java threads are implemented with fixed priority scheduling, also known as static priority scheduling, with a first-in-first-out scheduling policy. A standard Linux kernel provides soft RT behaviour, and although there's no guaranteed upper bound on how long a higher-priority thread waits to preempt a lower-priority thread, the time can be roughly approximated as tens of milliseconds. In RT Linux, almost every kernel activity is made preemptible, thereby reducing the time required for a lower-priority thread to be preempted and allow a higher-priority one to run. Remaining critical sections that cannot be preempted are short and perform deterministically. RT scheduling latencies have been improved by three orders of magnitude and can now be measured roughly in tens of microseconds.
Threaded interrupt handlers for reduced latency
Almost all interrupt handlers are converted to kernel threads that run in process context. Latency is lower and more deterministic because handlers become user-configurable, schedulable entities that can be preempted and prioritized just like any other process.
High-resolution time and timers provide increased resolution and accuracy. RT Java uses these features for high-resolution sleep and timed waits. Linux high-resolution timers are implemented with a high-precision, 64-bit data type. Unlike traditional Linux, where time and timers depend on the low-resolution system tick -- which limits the granularity of timer events -- RT Linux uses independently programmable high-resolution timer events that can be made to expire within microseconds of each other.
Priority inheritance is a technique for avoiding the classic priority inversion problem. One of the simplest examples of priority inversion, illustrated in the top diagram in Figure 2, involves three threads: one high (H), one medium (M), and one low (L) priority thread. Imagine H and M are initially dormant waiting for events to be triggered and that L is active and holds a lock. If H wakes up to handle an event, it will preempt L and begin to execute. Consider what happens if H blocks on the lock held by L. Because H cannot make progress until L releases the lock, H blocks and L begins executing again. If M is now triggered by an event, M will preempt L and execute for as long as it needs to. This situation is called priority inversion because M can starve H even though H has higher priority than M.
Figure 2. Example of priority inversion and priority inheritance
RT Linux prevents priority inversion through a policy known as priority inheritance (also known as priority lending), illustrated in Figure 2's bottom diagram. When H blocks on the lock held by L, H gives its priority to L, which guarantees that no task of lower priority than H can preempt L before it releases the lock needed by H. As soon as the lock is released, L's priority reverts to its original value so that H can make progress without waiting further on L. The application designer should still strive to avoid situations where a higher-priority thread requires a resource held by a lower-priority thread, but this priority-inheritance mechanism increases robustness so that priority inversion is prevented.
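The scenario is easy to reproduce in plain Java, with the caveat that standard Java thread priorities are only hints and Java monitors do not perform priority inheritance, so the sketch below can only make inversion likely (on a single CPU); it cannot demonstrate the RT Linux cure:

```java
public class InversionSketch {
    static final Object lock = new Object();

    static void spin(long ms) {          // busy-wait so the thread stays runnable
        long end = System.currentTimeMillis() + ms;
        while (System.currentTimeMillis() < end) { /* burn CPU */ }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread low = new Thread(() -> {
            synchronized (lock) { spin(200); }       // L holds the lock
        });
        Thread high = new Thread(() -> {
            synchronized (lock) { }                  // H blocks until L releases
        });
        Thread medium = new Thread(() -> spin(200)); // M never touches the lock

        low.setPriority(Thread.MIN_PRIORITY);
        medium.setPriority(Thread.NORM_PRIORITY);
        high.setPriority(Thread.MAX_PRIORITY);

        low.start();
        Thread.sleep(10);  // let L acquire the lock first
        high.start();      // H now waits on L ...
        medium.start();    // ... and M can starve L, and therefore H
    }
}
```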
Robust mutexes and rt-mutexes
Linux pthread mutexes are supported by fast user-space mutexes, known as futexes. Futexes optimize the time to obtain an uncontested lock without relying on the kernel; kernel intervention is required only for contested locks. Robust mutexes solve the problem of cleaning up locks properly after an application holding locks crashes. Also, rt-mutexes extend the priority-inheritance protocol to robust mutexes, which allows the RT JVM to rely on priority-inheritance behaviour via the pthread library.
Deterministic garbage collection
Given an RT operating system, such as RT Linux, that provides the basis for RT behaviours, other major pieces of the JVM can be built to also exhibit RT behaviour. GC is one of the larger sources of nondeterministic behaviour in a JVM, but this nondeterminism can be mitigated through careful design and reliance on the features of RT Linux.
The nondeterministic effects of GC pauses wreak havoc on an RT application's ability to complete tasks under specific deadlines (see Garbage collection). Most GC implementations interfere with an RT application's latency goals to the point where only tasks with larger scale and loose timing requirements can afford to rely on GC technology. The RTSJ's solution to this problem is the introduction of programmer-managed memory allocation via immortal and scope memory areas and NHRTs, but this solution can become a huge headache for Java application designers.
WebSphere Real Time lets programmers rely on the RTSJ memory areas if they desire, but this approach is recommended only for tasks with extremely tight latency requirements. For tasks able to tolerate GC pause times on the order of 1 millisecond, IBM has created deterministic GC technology that lets programmers benefit from the ease of programming with automatic memory management and manage tasks with predictable performance.
IBM's deterministic GC technology is based on two simple premises:
- No single GC pause exceeds some maximum upper bound.
- GC will consume no more than some percentage of any given time window by controlling the number of pauses during that window.
Managing GC activities with these two premises in mind dramatically increases the likelihood that an application can achieve its RT goals.
WebSphere Real Time uses the Metronome GC to achieve deterministic low-pause-time GC behavior in the JVM (see Resources). The Metronome GC uses a time-based method of scheduling, which interleaves the collector and the application (known in GC parlance as the mutator because, from the garbage collector's point of view, the application acts to change the graph of live objects over time) on a fixed schedule.
The reason for scheduling against time instead of allocation rate is that allocation is often uneven during an application's execution. If GC work were charged as a tax against allocation, GC pauses would be unevenly distributed, reducing the determinism of GC behaviour. By using time-based scheduling, the Metronome GC can achieve consistent, deterministic, bounded pause times. Further, because no language extensions or modifications to existing code are required, regular Java applications can make use of Metronome transparently and benefit from its deterministic characteristics.
Metronome divides time into a series of discrete quanta, approximately 500 microseconds but no more than 1 millisecond in length, that are devoted to either GC work or application work. Although quanta are very short, if several quanta were devoted to GC work, the application could still experience a longer pause time that might jeopardize RT deadlines. To better support RT deadlines, Metronome distributes quanta devoted to GC work so that the application should receive some minimum percentage of time. This percentage is known as utilization, a parameter the user supplies. Over any time interval, the number of quanta devoted to the application should be no fewer than the specified utilization. By default, the utilization is 70%: in any 10-millisecond time window, at least 7 milliseconds will be devoted solely to the application.
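The arithmetic of this contract is simple. The sketch below uses the figures quoted in the text plus an assumed 0.5-millisecond quantum to compute the per-window budget:

```java
public class UtilizationBudget {
    public static void main(String[] args) {
        double windowMs = 10.0;     // sliding window from the text
        double utilization = 0.70;  // default application share
        double quantumMs = 0.5;     // assumed GC quantum length

        double appMs = windowMs * utilization;    // 7 ms for the application
        double gcMs = windowMs - appMs;           // 3 ms ceiling for GC
        int maxQuanta = (int) (gcMs / quantumMs); // at most 6 GC quanta

        System.out.printf("app: %.1f ms, gc: %.1f ms, max GC quanta: %d%n",
                appMs, gcMs, maxQuanta);
    }
}
```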
The user can set the utilization at program start-up. Figure 3 shows an example of application utilization over a longer time period. Note the periodic dips corresponding to time quanta where the garbage collector is active. Across the entire time window shown in Figure 3, the application utilization remains at or above the specified 70% (0.7).
Figure 3. Sample utilization graph
Figure 4 demonstrates how deterministic GC pause times are with the Metronome technology. Only a small fraction of pauses exceeds 500 microseconds.
Figure 4. GC pause-time histogram
To keep individual GC pauses short, Metronome uses write barriers within the heap and associated metastructures to track live and potentially dead objects. Tracing live objects requires a series of GC quanta to determine which objects should be kept alive and which should be reclaimed. Because this tracing work is interleaved with program execution, the GC can lose track of certain objects that the application can "hide" through executing loads and stores.
This hiding of live objects is not necessarily the result of malicious application code. It's more commonly because the application is unaware of the garbage collector's activities. To ensure no objects are missed by the collector, the GC and VM cooperate by tracking the links between objects as they are created and broken via store operations that the application executes. A write barrier executed before the application performs a store operation does this tracking. The write barrier's purpose is simply to record the change to how objects are linked together if this store could cause a live object to become hidden. These write barriers impose both a performance and a memory-footprint overhead, the price paid for deterministic behaviour.
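As an illustration, here is one common form of barrier, a snapshot-style deletion barrier that logs the reference being overwritten. Whether Metronome emits exactly this form is an implementation detail of the VM, so treat the sketch as conceptual:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class WriteBarrierSketch {
    static class Node { Node next; }

    // Values overwritten while tracing is active; the collector drains
    // this and re-scans the recorded objects. (A real VM keeps such
    // buffers per thread and emits the barrier inline in compiled code.)
    static final Deque<Node> remembered = new ArrayDeque<Node>();
    static volatile boolean tracingActive = true; // set by the collector

    /** Every reference store is funnelled through a barrier like this. */
    static void storeNext(Node holder, Node newValue) {
        if (tracingActive && holder.next != null) {
            remembered.push(holder.next); // log the link being broken
        }
        holder.next = newValue;           // the actual store
    }
}
```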
The allocation of large objects can be troublesome for many GC strategies. In many cases, the heap is too fragmented to accommodate a single large object, such as an array. Consequently, it must incur a long pause to defragment, or compact, the heap to coalesce many smaller free memory areas into larger free memory areas to satisfy a large allocation request. Metronome uses a new two-level object model for arrays called arraylets. Arraylets break up large arrays into smaller pieces to make large array allocations easier to satisfy without defragmenting the heap. The arraylet object's first level, known as the spine, contains a list of pointers to the array's smaller pieces, known as leaves. Each leaf is the same size, which simplifies the calculation to find any particular element of the array and also makes it easier for the collector to find a suitable free space to allocate each leaf. Breaking arrays up into smaller noncontiguous pieces lets arrays be allocated within the many smaller free areas that typically occur on a heap, without needing to compact.
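A minimal sketch of the two-level layout, with an assumed leaf size, shows why indexing stays cheap when all leaves are the same size:

```java
public class Arraylet {
    private static final int LEAF_SIZE = 1024; // elements per leaf (assumed)
    private final int[][] spine;               // first level: pointers to leaves
    private final int length;

    Arraylet(int length) {
        this.length = length;
        int leaves = (length + LEAF_SIZE - 1) / LEAF_SIZE;
        spine = new int[leaves][];
        for (int i = 0; i < leaves; i++) {
            spine[i] = new int[LEAF_SIZE]; // each leaf is a small allocation
        }
    }

    // Equal-size leaves make lookup a simple divide and modulus.
    int get(int i) {
        if (i < 0 || i >= length) throw new ArrayIndexOutOfBoundsException(i);
        return spine[i / LEAF_SIZE][i % LEAF_SIZE];
    }

    void set(int i, int v) {
        if (i < 0 || i >= length) throw new ArrayIndexOutOfBoundsException(i);
        spine[i / LEAF_SIZE][i % LEAF_SIZE] = v;
    }
}
```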
Unlike traditional stop-the-world (STW) garbage collector implementations that have the concept of a GC cycle to represent the start and end of a garbage collect, Metronome performs GC as a continuous process throughout the application's lifetime. Application utilization is guaranteed over the application's lifetime, with potentially higher utilization than the minimum in situations where not much GC work is needed. Free memory fluctuates upward and downward as the collector finds free memory to return to the application.
Native code compilation for RT
Most modern JVMs use a combination of interpretation and compiled code execution. To eliminate interpretation's high performance cost, a JIT compiler selects frequently executed code to be translated directly to the CPU's native instructions. The Java language's dynamic characteristics typically cause this compiler to operate as the program executes rather than as a step that occurs before the program is run (as is the case for languages like C++ or Fortran). The JIT compiler is selective about which code it compiles so that the time it takes to do the compilation is likely to be made up by the improvements to the code's performance. On top of this dynamic compilation behaviour, traditional JIT compilers employ a variety of speculative optimizations that exploit dynamic characteristics of the running program that might be true at one point during one particular program's execution but might not remain true for the duration of execution. Such optimizations can be "undone" if the assumption about this characteristic later becomes false.
In a traditional non-RT environment, compiling code while the program executes works well because the compiler's actions are mostly transparent to the application's performance. In an RT environment, however, the JIT compiler introduces an unpredictable run-time behaviour that wreaks havoc on worst-case execution time analysis. But the performance benefit of compiled code is still important in this environment because it enables more-complex tasks to complete in shorter periods of time.
WebSphere Real Time introduces two solutions to balance these two requirements at different trade-off points. The first solution is to employ a JIT compiler, operating at a low non-RT priority, that has been modified to perform fewer aggressively speculative optimizations. Operation at a non-RT priority lets the operating system guarantee that the compiler will never interfere with the execution of an RT task. Nonetheless, the fact that the code performance will change over time is a nondeterministic effect that makes this solution more appropriate for softer RT environments than for hard RT environments.
For harder RT environments, WebSphere Real Time introduces AOT compilation for application programs. Java class files stored in JAR files can be precompiled through a simple command line into Java eXEcutable (JXE) files. By specifying these JXE files, rather than the original JAR files, on the application classpath, the application can be invoked so that the AOT-compiled code is executed -- rather than bytecodes being interpreted or native code being compiled by a JIT compiler. In the first WebSphere Real Time release, using AOT code means that no JIT compiler is present, which has two primary advantages: lower memory consumption and no dynamic performance impact from either the JIT compilation thread or the sampling thread that identifies frequently executing code.
Figure 5 shows how Java code executes in WebSphere Real Time when AOT code is being used:
Figure 5. How AOT code is used
Starting at the upper left of Figure 5, the developer compiles Java source code to class files as in any Java development project. Class files are bundled into JAR files, which are then AOT compiled using the jxeinajar tool. This tool can either compile all the methods in all the classes in the JAR files, or it can selectively compile some of the methods based on output generated by a sample JIT-based execution of the program that identifies the most important methods to compile. The jxeinajar tool compiles the methods in a JAR file and constructs a JXE file that contains both the contents of the original JAR file and the native code generated by the AOT compiler. The JXE files can be directly substituted for JAR files when the program is executed. If the JVM is invoked with the -Xnojit option, then the AOT-compiled code in JXE files on the classpath is loaded (according to the rules of the Java language). During program execution, methods loaded from JAR files or uncompiled methods loaded from JXE files are interpreted. Compiled methods loaded from JXEs execute as native code. In Figure 5, the -Xrealtime command-line option is also necessary to specify that the RT VM should be invoked. This command-line option is only available in WebSphere Real Time.
Disadvantages of AOT code
Although AOT code enables more-deterministic performance, it also has some disadvantages. The JXEs used to store AOT code are generally much larger than the JAR files that hold the class files because native code is generally less dense than the bytecodes stored in class files. Native code execution also requires a variety of supplementary data to describe how the code needs to be bound into a JVM and how to catch exceptions, for example, so that the code can be executed. A second disadvantage is that AOT-compiled code, though faster than interpreted code, can be substantially slower than JIT-compiled code. Finally, the time to transition between an interpreted method and a compiled method, or vice versa, is higher than the time to call an interpreted method from another interpreted method or to call a compiled method from a compiled method. In a JVM with an active JIT compiler, this cost is eventually eliminated by compiling "around the edges" of the compiled code until the number of transitions is too small to impact performance. In a JVM with AOT-compiled code but no JIT compiler, the number of transitions is determined by the set of methods that were compiled into the JXEs. For this reason, we typically recommend AOT compiling the entire application as well as the Java library classes on which the application depends. Expanding the number of compiled methods, as we mentioned above, has a footprint impact although the benefit to performance is usually more critical than the footprint increase.
The reason AOT code is generally slower than JIT code is because of the nature of the Java language itself. The Java language requires that classes be resolved the first time the executing program references them. By compiling before the program executes, the AOT compiler must be conservative about classes, fields, and methods referenced by the code it compiles. AOT-compiled code is often slower than JIT-compiled code because the JIT has the advantage that it is performing compilation after the executing program has resolved many of these references. However, the JIT compiler must also carefully balance the time it takes to compile a program because that time adds to the program's execution time. For this reason, JIT compilers do not compile all code with the same degree of optimization. The AOT compiler does not have this limitation, so it can afford to apply more-aggressive compilation techniques that sometimes yield better performance than JIT-compiled code. Moreover, more methods can be AOT compiled than a JIT compiler might decide to compile, which can also result in better performance with AOT compilation than JIT compilation. Nonetheless, the common case is that AOT-compiled code is slower than JIT-compiled code.
To avoid nondeterministic performance effects, neither the JIT compiler nor the AOT compiler provided in WebSphere Real Time applies the aggressively speculative optimizations generally applied by modern JIT compilers. These optimizations are generally performed because they can produce substantial performance improvements, but they are not appropriate in an RT system. Furthermore, supporting the various aspects of the RTSJ and the Metronome garbage collector introduces some overheads into compiled code that traditional compilers need not perform. For these reasons, code compiled for RT environments is typically slower than code compiled for non-RT environments.
More can be done to make an RT Java environment faster, in terms of both predictable performance and raw throughput. We see two key areas of advancement that must occur for the Java language to succeed in the RT application space:
- Provide RT technology to users who want better predictability while running on traditional operating systems.
- Make it much easier to use this technology.
Toward soft RT
Many features of WebSphere Real Time are useful to programmers targeting a traditional operating system. Incremental GC and priority-based threads would clearly be useful in many applications, even if hard RT guarantees could not be met and only soft RT performance could be provided. An application server providing predictable performance without unpredictable GC delays, for example, is an attractive idea to many developers. Similarly, enabling applications to run high-priority Java health-monitoring threads with reasonable scheduling targets would simplify Java server development.
Making RT easier
Simply bringing the advantages of using the Java language to the process of creating RT systems is a tremendous benefit to developers. But there's always room for improvement, and we are constantly evaluating new features that could simplify RT programming even further. You can go to our IBM alphaWorks site to try out our expedited real-time threads research technology that lets developers manage extremely high-frequency events with very little tolerance for variance or delay (see Resources). The tooling achieves highly deterministic behaviour by preloading, preinitializing, and precompiling the code to handle events and then running the code independently of the garbage collector with fewer and less onerous restrictions than the NHRTs in the RTSJ. You'll also find tooling called TuningFork, which traces paths from the operating system through the JVM and into applications, making it easier to perform detailed performance analysis.
Resources
- Real-time Java series: Read the other parts in this series.
- "A real-time garbage collector with low overhead and consistent utilization" (David F. Bacon, Perry Cheng, and V.T. Rajan, Proceedings of the 30th Annual ACM SIGPLAN/SIGACT Symposium on Principles of Programming Languages, 2003): This paper presents a dynamically defragmenting collector that overcomes the limitations of applying GC to hard RT systems.
- JSR 1: Real-time Specification for Java: You'll find the RTSJ at the Java Community Process site.
- "IBM WebSphere Real Time V1.0 delivers predictable response times using Java standards": Read the product announcement for Real Time.
- The Real-time Linux Wiki: A self-described "Wiki Web for the CONFIG_PREEMPT_RT community, and real-time Linux in general."
- Hrtimers and Beyond: Transforming the Linux Time Subsystems (Thomas Gleixner and Douglas Niehaus, 2006 Ottawa Linux Symposium).
- RTSJ Reference Implementation (RI) and Technology Compatibility Kit (TCK): TimeSys is authorized through the Java Community Process to maintain and modify the RI and the TCK required to certify RTSJ compliance.
- Apogee Aphelion: Apogee's customized Aphelion offerings for RT platforms include RTSJ-compliant development and runtime environments.
- Java SE Real-Time: Sun Microsystems' commercial RTSJ-comformant implementation.
- Metronome: Learn more about Metronome, the GC technology incorporated in WebSphere Real Time.
- developerWorks Java technology zone: Hundreds of articles about every aspect of Java programming.
Get products and technologies
- WebSphere Real Time: WebSphere Real Time lets applications dependent on a precise response times take advantage of standard Java technology without sacrificing determinism.
- Real-time Java technology: Visit the authors' IBM alphaWorks research site to find cutting-edge technologies for real-time Java.
- linux-rt-users mailing list: Subscribe to this mailing list by sending an e-mail with "subscribe linux-rt-users" as the content of the mail.
- Check out developerWorks blogs and get involved in the developerWorks community. | <urn:uuid:d871a62b-d383-4eef-9a3a-39db5cc83f56> | CC-MAIN-2017-04 | http://www.ibm.com/developerworks/java/library/j-rtj1/index.html?S_TACT=105AGY75 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00053-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918409 | 7,035 | 3.21875 | 3 |
The definitive definition of Platform as a Service (PaaS) from the experts for the kids (and adults, too).
Gary Calcott, technical marketing manager, Progress
"Imagine you get to run your own production line to build your very own sports car. Instead of filling out lots of long boring forms, all you need to do is press a few buttons on your computer to select whatever colour, shape or speed you want and the robots do the rest!"
Angela Eager, research director for TechMarketView
"It’s a really quick way of making something like a new computer game of your own because someone else does all the boring bits and makes sure they are ready whenever and wherever you want to use them on your computer, so you can do the fun bits of inventing the story and the characters, then playing the game."
Lee Norvall, CTO, Fusion Media Networks
"You have built your Lego house with everything a house needs, but it is empty. Now your little sister has seen it and she wants to play with it. She takes the little Lego people and starts ‘deploying’ them, making them do little ‘services’ and tasks like delivering the milk, mowing the lawn and cleaning the windows. Just what a house needs."
John Coldicutt, head of KashFlow
"Platform as a Service (PaaS) is a way of working or playing on a computer where you have everything you need to play games and work on that computer in one place. It is similar to being given a stack of building blocks and deciding what you want to build with it. You are given everything you need to help you to build what you want and if you built something like a car or a boat you will have lots of help to make it move or to float."
Zahid Jiwa, VP UK & Ireland, OutSystems
"Using PaaS, customers can modify existing apps or develop entirely new apps that meet the needs of their industry and business. PaaS offerings facilitate the deployment of applications without the cost and complexity of buying and managing the underlying hardware and software stacks. PaaS offerings may also include facilities for application design, development, testing, deployment, monitoring, and management, as well as other services. In essence, PaaS is a development platform that abstracts the infrastructure, operating system and middleware to drive developer productivity."
Dr Kevin Curran, senior member of the Institute of Electrical and Electronic Engineers
"PaaS is a web-based platform service that makes the creation of software easier. It provides the computing platform, including the operating system, virtualisation, storage, programming-language execution environment, database, and web server. Examples include Amazon Web Services Elastic Beanstalk and Windows Azure."
Len Padilla, VP Product Strategy, NTT Europe
"Let’s think about the different types of cloud platforms as if they were toys or games. With PaaS you have some basic rules (the platform), but still plenty of freedom. It’s like Minecraft – there are building blocks and rules to follow, but you can build almost whatever you like (you programme it)."
Kevin Scott-Cowell, CEO of 8×8 Solutions
"Platform as a Service means you pay someone to look after your laptop or iPad that you need in order to run the computer games and programmes you use. So you can just concentrate on playing the games and don’t have to worry about ever having to fix the programmes or your laptop or iPad if they go wrong." | <urn:uuid:484fb06b-b648-496e-bcb7-aaf65af9bb56> | CC-MAIN-2017-04 | http://www.cbronline.com/news/cloud/aas/8-ways-to-explain-platform-as-a-service-paas-to-a-five-year-old-4320947 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00539-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954054 | 738 | 2.578125 | 3 |
Here is a collection of highlights from this week’s news stream as reported by HPCwire.
New Algorithm Reduces Linear Equation Runtimes
Computer scientists at Carnegie Mellon University have developed a ground-breaking algorithm that can solve systems of linear equations used in important applications, including image processing, logistics and scheduling problems, and recommendation systems. The new algorithm is incredibly efficient and may make it possible for a desktop workstation to solve systems with a billion variables in just a few seconds.
Linear systems are used to model real-world systems, such as transportation, energy, telecommunications and manufacturing, and often include millions, or even billions, of variables. Solving such complex systems is time-consuming on even the fastest computers and has long confounded computer scientists and stymied research goals. In fact, solving simultaneous equations quickly and accurately is an age-old mathematical problem. One of the classic algorithms for solving linear systems, today known as Gaussian elimination, was first published by Chinese mathematicians 2,000 years ago.
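For reference, here is the classic O(n^3) baseline in a minimal Java sketch: Gaussian elimination with partial pivoting, solving Ax = b in place. This is the approach the new solver is measured against, not the new algorithm itself, which the linked paper describes:

```java
public class Gauss {
    /** Solves A x = b in place; assumes the system is nonsingular. */
    static double[] solve(double[][] a, double[] b) {
        int n = b.length;
        for (int p = 0; p < n; p++) {
            int max = p;                                  // partial pivoting
            for (int r = p + 1; r < n; r++)
                if (Math.abs(a[r][p]) > Math.abs(a[max][p])) max = r;
            double[] tmpRow = a[p]; a[p] = a[max]; a[max] = tmpRow;
            double tmpB = b[p]; b[p] = b[max]; b[max] = tmpB;

            for (int r = p + 1; r < n; r++) {             // eliminate below pivot
                double factor = a[r][p] / a[p][p];
                b[r] -= factor * b[p];
                for (int c = p; c < n; c++) a[r][c] -= factor * a[p][c];
            }
        }
        double[] x = new double[n];                       // back substitution
        for (int r = n - 1; r >= 0; r--) {
            double sum = b[r];
            for (int c = r + 1; c < n; c++) sum -= a[r][c] * x[c];
            x[r] = sum / a[r][r];
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] a = { { 4, 1 }, { 1, 3 } };
        double[] x = solve(a, new double[] { 1, 2 });
        System.out.println(x[0] + ", " + x[1]); // roughly 0.0909, 0.6364
    }
}
```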
Researchers from Carnegie Mellon's Computer Science Department have achieved a breakthrough, one with great practical potential. The algorithm they've devised relies on new tools from graph theory, randomized algorithms and linear algebra to greatly reduce the time to completion for these linear system problems, with runtimes up to a billion times faster than with Gaussian elimination.
The algorithm applies to a class of problems known as symmetric diagonally dominant (SDD) systems, which have gained prominence in recent years. Recommendation systems, like the one used by Netflix, use SDD systems to compare the preferences of an individual to those of millions of other customers. Image processing, logistics, and engineering are other key use cases for SDD systems.
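A system qualifies for this class when its matrix is symmetric and diagonally dominant. The sketch below checks that property for a dense matrix, assuming the standard definition (each diagonal entry at least the sum of the absolute values of the other entries in its row); it is illustrative only, since real SDD systems of this scale are stored sparsely:

```java
public class SddCheck {
    /** True if a is symmetric and each diagonal entry dominates the sum of
        the absolute values of the off-diagonal entries in its row. */
    static boolean isSdd(double[][] a) {
        int n = a.length;
        for (int i = 0; i < n; i++) {
            double offDiagonal = 0;
            for (int j = 0; j < n; j++) {
                if (a[i][j] != a[j][i]) return false; // must be symmetric
                if (i != j) offDiagonal += Math.abs(a[i][j]);
            }
            if (a[i][i] < offDiagonal) return false;  // row dominance fails
        }
        return true;
    }

    public static void main(String[] args) {
        // A tiny graph Laplacian -- the canonical SDD example.
        double[][] laplacian = { { 2, -1, -1 }, { -1, 1, 0 }, { -1, 0, 1 } };
        System.out.println(isSdd(laplacian)); // prints true
    }
}
```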
The press release highlights the importance of this achievement:
“The new linear system solver of Koutis, Miller and Peng is wonderful both for its speed and its simplicity,” said Spielman, a professor of applied mathematics and computer science at Yale. “There is no other algorithm that runs at even close to this speed. In fact, it’s impossible to design an algorithm that will be too much faster.”
The work will be presented at the annual IEEE Symposium on Foundations of Computer Science (FOCS 2010), Oct. 23-26 in Las Vegas, and the group’s research paper, “Approaching Optimality for Solving SDD Linear Systems,” can be downloaded at http://www.cs.cmu.edu/~glmiller/Publications/Papers/KoutisApproaching-2010.pdf.
University of Queensland Deploys SGI Supercomputer
This week the University of Queensland increased its technical computing prowess with a high performance computing (HPC) solution from SGI. The SGI Rackable half-depth servers will be used to support a broad range of research from the fields of bioinformatics, computational chemistry, finite element analysis, computational fluid dynamics, earth sciences, market economics and image processing.
According to Professor Max Lu, deputy vice-chancellor of research at the University of Queensland, “These computers will strengthen an important part of the University’s research capacity. Tasks such as processing enormous amounts of biological data generated through techniques such as genome-sequencing, micro-arrays and imaging cannot be done on standard desktop computers.”
This will be one of the biggest deployments in Australia. The new SGI system boasts 3,144 processor cores, specifically Intel Xeon 5500 and 7500 series processors, with 11.52 TB memory and 249 TB of disk storage. Other specifications include InfiniBand QDR interconnect with Voltaire Grid Director 4700 switches and Unified Fabric Manager switching technology, and a Panasas file system. DC-based racks and innovative cooling techniques were selected for their energy-efficiency. The design offers flexible configurations to suit the university’s current and future requirements. The university opted for SGI Professional Services to provide project management, installation services, datacenter services, training, as well as ongoing consultation and maintenance.
The new machine will be put to work handling the complex research and data needs of universities in Queensland and partner organizations, such as the Queensland Cyber Infrastructure Foundation (QCIF), Commonwealth Scientific and Industrial Research Organisation (CSIRO) and Bioplatforms Australia. Additionally, the infrastructure will be hosting several projects, including the National Computational Infrastructure (NCI) Specialised Facility in Bioinformatics and the European Molecular Biology Laboratory (EMBL) Australia, European Bioinformatics Institute (EBI) Mirror project. | <urn:uuid:0b536feb-b96a-4016-a8fd-b2ae463621df> | CC-MAIN-2017-04 | https://www.hpcwire.com/2010/10/21/the_week_in_review/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00355-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92249 | 958 | 2.703125 | 3 |
Image source: Accredited Online
The western world is absolutely dependent on a healthy banking and financial services industry. The governments of the world outsourced (or perhaps never really took control of) the infrastructure for handling money at the level of individuals, so the bail-outs that the banks benefited from are not surprising -- without banks, we all suffer. The way the bail-outs were sold to taxpayers, though, does not match what many are seeing in practice, with lending and the flow of money as bad as ever. At the level of the unbanked (those who cannot get a bank account, or for whom it is financially impractical to have one), the problem is likely to seem irrelevant. If the banks don't appear to be handing out money, you probably don't care that you couldn't even get a bank account to put your money into.
According to the WBJournal story:
“We see people that are just still not using mainstream financial services, and they’re being taken advantage of in many ways,” he said. In Massachusetts, 4.1 percent of households are unbanked and 11.4 percent are underbanked. Among households with incomes under $15,000, 24.8 percent are unbanked, and another 18.1 percent are underbanked. Nationally, 7.7 percent of all households are unbanked, and 17.9 percent are underbanked. The national numbers for households under $15,000 are 27.1 percent and 22.3 percent.
So, Massachusetts does better than the national average on persuading people that use of the mainstream system is better for them, but there are still huge numbers of people without access to those services. As a comparison, the United Kingdom, with a population of 61 million people shows 0.89 million individuals live in a household without access to a bank account. This equates to approximately 1.5%. This isn't about national competitiveness, just a number to help show that there is still room for improvement.
For banks to ever meet an acceptable level of social and local community responsibility in their provision of banking services to all there are several things that have to happen:
- Banks need flexible account opening procedures, to handle the less common cases, especially where an individual does not have a history of bank usage, or has unusual identity documentation
- In order to keep the costs to customers at close to zero, the efficiency of back office processes needs to be kept high, to keep transaction costs low
- A change in attitude may be required, to help banks see the new potential customers as a long-term investment rather than a burden they resent welcoming into their customer ranks
Quite frankly, #1 and #2 are easy to handle -- with streamlined and well managed business processes that cut much of the waste and time-lags from a process, while ensuring the flexibility to handle complex cases. If a bank or credit union needs help understanding the opportunities here to help all customers, not just the unbanked, there are many resources on this blog that refer to business process management for account opening and financial transactions. Or feel free to contact me. Keeping costs down is not about cutting jobs; it's about opening your available market to a broader set of people.
#3 is harder though. Attitudes can be changed in any business when appropriate information is made available. If it can be seen that in the long term, previously unbanked customers are responsible account users, and eventually become profitable borrowers through mortgages and loans, perhaps banks will be more likely to extend a welcoming hand. This is more likely to happen if banks have a full customer profile available, and can see that on average customers falling into this segment make decent business sense. Without information on the whole profile of a client, any business is likely to make rash decisions at an individual and group level.
I hope to see these numbers again in another twelve months and see the number of unbanked much lower. | <urn:uuid:5c239938-a585-4ceb-8c1f-abf4ed4c4b6d> | CC-MAIN-2017-04 | http://blog.consected.com/2010/03/banking-unbanked.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00567-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965376 | 806 | 2.828125 | 3 |
The Heritage of the IBM System/360
Earlier Commercial Computers
Edward L. Bosworth, Ph.D.
TSYS Department of Computer Science
Outline and Rationale
The goal of this lecture is to describe the early history of computing machines manufactured by the International Business Machines Corporation.
I hold that many of the design choices seen in the IBM S/370 architecture reflect the use made of earlier computing machines, especially in 1930 – 1960.
Outline of this lecture:
1. Discussion of technologies and computer generations.
2. Early mechanical and electro–mechanical computers.
3. Early electronic computing machines.
4. The genesis of the IBM System/360: its immediate predecessors.
5. Design choices in the IBM System/360 and System/370.
The Classic Division by Generation
Here is the standard definition of computer generations.
Note that it ignores all work before 1945.
1. The first generation (1945 – 1958) is that of vacuum tubes.
2. The second generation (1958 – 1966) is that of discrete transistors.
3. The third generation (1966 – 1972) is that of small-scale and medium-scale integrated circuits.
4. The fourth generation (1972 – 1978) is that of large-scale and very-large-scale integrated circuits, called “LSI” and “VLSI”.
5. The term “fifth generation” has been widely used to describe a variety of devices. It still has no standard definition.
Early Computing Machines Were Mechanical
Here is an unexpected example: the Jacquard Loom. Though not a computer, it was powered by steam and controlled by punched cards.
A Reconstructed Jacquard Loom
Scheutz’s Difference Engine
This is an example of a purely mechanical computing machine.
Note the gears and the hand crank.
Electro–Mechanical Computing Machines
Starting in about 1900, computing machines used electro–mechanical relays. Below is a diagram of such a relay.
This is an electrically activated switch.
When the relay input is energized, the electromagnet is energized, and the pivoted armature is moved. This causes the switch to close.
The Hollerith Type–III Tabulator
This electro–mechanical computing machine dates from 1932.
Note the plug–board with wires. These were used to program the machine.
The IBM 405: IBM's High–End Tabulator
It was the first one to be called an Accounting Machine.
It was programmed by a removable plug–board with over 1600 functionally significant "hubs", with access to up to 16 accumulators.
The machine could tabulate at a rate of 150 cards per minute, or tabulate and print at 80 cards per minute.
A More Modern Plug–Board
This is a plug–board from the IBM 405, manufactured about 1946.
The Printer for the IBM 402 (1950)
Note that the typebars on the right can print only numerical characters.
All–Electronic Computing Machines
These were built with vacuum tubes, which only became sufficiently reliable in the 1940’s. Even then, it was common for a machine to run for only 4 hours.
Here are four vacuum tubes from my private collection.
The ENIAC (1945)
The ENIAC was possible only after the problem of vacuum tube reliability had been solved. There was also the problem of rodents eating the wiring.
Programming the ENIAC (Miss Gloria Ruth on the left and Mrs. Ester Gertson on the right)
The IBM NORC (1954)
Here is a picture of a typical large computer of the early 1950’s.
Note the trays of vacuum tubes in the background. These form the computer.
The IBM 650 (Circa 1955)
This computer used vacuum tube technology.
The IBM 650 – Power Supply, Main Unit, and Read-Punch Unit
Source: Columbia University [R41]
A Block of Vacuum Tubes from the IBM 701
The IBM 701, produced in 1952, used replaceable blocks of vacuum tubes to facilitate maintenance.
Transistors and Integrated Circuits
Here is a picture showing some smaller tubes, with transistors and an integrated circuit (presumably with a few thousand transistors).
By the 1960’s, computers were fabricated from circuit boards populated with discrete transistors. Again, this allowed for module replacement.
These were manufactured by the Digital Equipment Corporation. The “black hats” are the transistors.
Circuit Boards Plugged Into a Backplane
Figure: A Rack of Circuit Cards from the Z–23 Computer (1961)
An IBM Engineer with Three Generations of Components
The first generation tube component is one that we have already seen.
The second generation discrete transistor board is a bit out of focus. Presumably it has the same function.
Note the pencil pointing to one of nine integrated circuits on the 3rd generation component. Presumably, it also has the same function as the first board.
Note also the coat and tie. This was the IBM corporate culture.
The IBM S/360 might be considered an early 3rd generation computer.
Some Line Printers
Line printers were used to print large-volume outputs, typical of a data center. Here are two such printers, the IBM 716 and the IBM 1403.
IBM 716 IBM 1403
A Typical “IBM Shop” of the 1960’s
Seen here (at left) is an IBM 523 gang summary punch, which could process 100 cards a minute and (in the middle) an IBM 82 high-speed sorter, which could process 650 punched cards a minute.
Early IBM Product Lines
In 1960, IBM had four major “lines” of computers:
1) the IBM 650 line – a small, general purpose computer.
2) the IBM 701 line, for scientific computations.
This had hardware for floating–point arithmetic, but not packed decimal.
This includes the IBM 701, 704, 709, 7090, and 7094.
3) the IBM 702 line, for commercial computations.
This had hardware for packed decimal arithmetic, but not floating–point.
This includes the IBM 702, 705, and 7080.
4) the IBM 7030 (Stretch).
This was a research computer. It was not produced in volume.
The issue is that none of these computer lines were compatible, in either the software sense or the hardware sense. Field technicians generally were trained for the 701 line or the 702 line, but not both.
The IBM S/360 “Family Tree”
This shows the chronological “descent” of the IBM S/360.
Some Design Goals for the System/360
Here are a number of goals for the system.
1. To replace a number of very successful, but incompatible, lines with a single computer family.
2. To provide “an expandable system that would serve every data processing need”. It was to excel at all “360 degrees of data processing”. [R11, p 11]
3. To provide a “strictly program compatible” family of processors, which would “ensure that the user’s expanding needs be easily accommodated by any model [in the System/360 family]”.
The System/360 was announced on April 7, 1964.
The first offerings included Models 30, 40, 50, 60, 62, and 70 [R49].
The first three began shipping in mid–1965, and the last three were replaced by the Model 65 (shipped in November 1965) and Model 75 (January 1966).
Strict Program Compatibility
IBM issued a precise definition for its goal that all models in the S/360 family be “strictly program compatible” [R10, page 19]. A family of computers is defined to be strictly program compatible if and only if a valid program that runs on one model will run on any model.
There are a few restrictions on this definition.
1. The program must be valid. “Invalid programs, i.e., those which violate the programming manual, are not constrained to yield the same results on all models”.
2. The program cannot require more primary memory storage or types of I/O devices not available on the target model.
3. The logic of the program cannot depend on the time it takes to execute. The smaller models are slower than the bigger models in the family. “Programs dependent on execution–time will operate compatibly if the dependence is explicit, and, for example, if completion of an I/O operation or the timer are tested”.
The Term “Architecture”
The introduction of the IBM System/360 produced the creation and definition of the term “computer architecture”. According to IBM [R10]:
“The term architecture is used here to describe the attributes of a system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flow and controls, the logical design, and the physical implementation.”
The IBM engineers realized that “logical structure (as seen by the programmer) and physical structure (as seen by the engineer) are quite different. Thus, each may see registers, counters, etc., that to the other are not at all real entities.”
We will see in another lecture that any specific logical structure may be supported by a number of physical implementations.
NOTE: The reference numbers in this set of slides are those from the original textbook. For that reason, they are out of order.
R_11 Mark D. Hill, Norman P. Jouppi, &
Gurindar S. Sohi, Readings in Computer
Architecture, Morgan Kaufmann Publishers, 2000, ISBN 1 – 55860 – 539 – 8.
R_10 G. M. Amdahl, G. A. Blaauw, & F. P.
Brooks, Architecture of the IBM
System/360, IBM Journal of Research and Development, April 1964.
Reprinted in R_11.
R_12 D. W.
Model 91: Machine Philosophy and Instruction–Handling,
IBM Journal of Research and Development, January 1967. Reprinted in R_11.
R46 C. J. Bashe, W. Buchholz, et. al., The
Architecture of IBM’s Early
Computers, IBM J. Research & Development, Vol. 25(5),
pages 363 – 376, September 1981.
Web Sites of Interest | <urn:uuid:d87afd80-203a-4673-9d73-017e1eb7c7f8> | CC-MAIN-2017-04 | http://www.edwardbosworth.com/MY3121_LectureSlides_HTML/IBM370_Heritage.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00529-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.907146 | 2,278 | 3.234375 | 3 |
What You'll Learn
- Network fundamentals and build simple LANs
- Establish Internet connectivity
- Manage and secure network devices
- Operate a medium-sized LAN with multiple switches, supporting VLANs, trunking, and spanning tree
- Troubleshoot IP connectivity
- Describe how to configure and troubleshoot EIGRP in an IPv4 environment, and configure EIGRP for IPv6
- Configure and troubleshoot OSPF in an IPv4 environment and configure OSPF for IPv6
- Characteristics, functions, and components of a WAN
- How device management can be implemented using the traditional and intelligent ways.
- QoS, virtualization and cloud services, and network programmability related to WAN, access and core segments.
- A solid understanding of IPv4 and IPv6 based networks
- A basic knowledge of network subnetting and IP routing
- At least one year of networking experience
Who Needs To Attend
- Network administrators
- Network support engineers
- Network engineer associate
- Network specialist
- Network analyst
- Cisco channel partners
- Individual pursing the CCNA Routing and Switching certification
Note: This course, like all Global Knowledge courses, is provided only to individuals sponsored by an employer (business, government agency, non-profit, etc.). Fee-paying members of the general public are not permitted to register. For more information, please contact us at 1-866-716-6688.
This course is part of the following programs or tracks: | <urn:uuid:f18367fc-8adc-4a8c-adee-99435ccef1cc> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/course/151939/ccnax-v30-ccna-routing-and-switching-boot-camp/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00437-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.834707 | 323 | 2.8125 | 3 |
Utility computing, to some observers, is part of the Internet's coming evolutionary advance and the logical future of how we operate. Also called on-demand computing, it allows users to access more computational power while maintaining fewer resources.
Numerous endeavors have been established in the worlds of academia and science, and the West Virginia High Technology Consortium (WVHTC) Foundation and the Minnesota Historical Society (MNHS) are already taking advantage of utility computing's potential.
Most experts agree that grid computing will let utility computing truly flourish.
With inherent Orwellian overtones, "the grid" sounds shadowy and sinister. In reality, a grid is a colossal network of computers that allows individual machines to share computational capacity with others on the network -- in essence, creating one massive computer.
Grid users can access applications and take advantage of the combined processing power of thousands of computers, and perform extremely high-level computations on their own machines. They need not invest in powerful mainframes because a grid uses idle desktops and servers to create a virtual mainframe of almost limitless potential power.
Carl Kesselman, director of the Center for Grid Technologies at the University of Southern California's Information Sciences Institute -- and one of the grid's pioneering researchers -- said he believes grid computing will change the way people work and interact with each other.
"The grid is an infrastructure that allows the controlled sharing of diverse resources to facilitate collaborative activities," he explained. "It's the idea of a virtual organization. On the Web, everything looks like a document, and you are basically just sharing documents. The grid is about understanding the process in back of that -- understanding that there is capability and learning to manipulate that. It's a very powerful idea."
Grid computing came to life in 1995 through the Globus project, which developed the basic mechanisms and infrastructure for grids, and created the Globus Toolkit -- the underlying infrastructure used by most major grid projects.
Initially the Defense Advanced Research Projects Agency funded the Globus project, and the U.S. Department of Energy and the National Science Foundation provided later funding. The project led to creation of the Globus Alliance, a research and development organization now working to create open source, standards-based software to enable development of grids worldwide -- and worldwide grids.
Other organizations, such as the Global Grid Forum, seek to develop standards and best practices for grids. The Global Grid Forum also hosts workshops around the globe that bring researchers and industry professionals together to discuss the grid's future.
The grid has two versions. As is often the case with new technologies, the grid started first as an academic resource and later begat a commercial offshoot. Presently academia and the private sector have their own grids, and emerging between the two is a government grid.
One famous grid, SETI@home, is an academic grid disguised as a screen saver. SETI, the Search for Extraterrestrial Intelligence, analyzes radio signals from deep space, hoping one will be artificial and prove there is life beyond earth.
SETI generates huge amounts of data, and processing such massive data is no simple task. SETI@home acts as a data-analysis program to take advantage of the processing power of idle desktops around the world. Users of SETI@home may not know it, but they are part of one of the first and largest grids.
"When a problem is too hard for one computer, you can slice it up, give it to lots of different computers and bring those answers back together to solve it," said Tim Hoechst, senior vice president of technology for OraclePublic Sector. "A great example of this is SETI@home. We call these 'academic grids' because in academia, they are building large arrays of computers to address computationally difficult problems."
On the flipside of academic grids are those used for commercial applications. The technology is the same, but applied differently.
"We use the term 'enterprise grid,'" said Hoechst. "For us, that means multiple computers sharing the same disk. To an application, these computers look, smell and act like one computer, but in reality, they're multiple computers cooperating."
With academia and the private sector creating, or hoping to create, grids of their own, where does that leave government agencies? The trend toward resource sharing and consolidation is evident, and the grid has potential to create functional, efficient IT environments.
The problem is that most agencies have program-specific infrastructure. Some rely on old legacy mainframe systems that only a few people know how to manage. Creating a grid would require administrators to not only share control of the resources, but also to manage a beast that can grow quickly and wildly.
That is, unless someone else could manage it.
Utility computing is very much like setting up a grid, except someone else sets it up and charges for its use.
"Sometimes the terms 'grid' and 'utility' get kind of muddied," said Sara Murphy, HP's marketing manager for grid computing. "Utility computing is a model of how you pay for computing resources. It's purchasing computer resources in a pay-per-use model. Grid and utility are complementary concepts. The grid is the infrastructure for sharing resources. Utility is the concept of paying for what you need."
In the real world, grids providing utilities such as gas, electricity or water were created to supply on-demand access to consumers who want to use those services and will pay for them.
Utility computing is no different. The concept should especially appeal to government agencies that experience seasonal spikes in demand, which require more power but may not justify purchasing -- or the agency simply can't afford -- new hardware.
George Westerman, a research scientist at MIT's Center for Information Systems Research, said this notion of utility computing makes the concept valuable.
"If you have demand that varies greatly, like at the end of the month but not at the beginning of the month, you can buy processing power for when you need it, instead of having a lot of computers sitting around doing nothing," Westerman said. "That's the key value proposition for utility computing. In addition, somebody else is managing your computers so they're going to work right."
Another benefit of a grid-based utility model is that there is safety in numbers. If one, five or even 100 computers go down, the remaining computers work together to make up for the loss.
"When a big machine fails, it has failed," said Hoechst. "In the grid, you replace a node. The grid itself never goes down."
Utility computing allows users to figuratively flip a switch and access vast computing resources only when they need those resources. The model is similar to the way wireless phones access the Internet. Some wireless providers charge on a per-kilobyte basis, so users are charged for the amount used.
If someone has an instantaneous need where a ton of processing is required, he or she can access the grid, grab everything he or she needs for a second, and then disconnect from the grid, Westerman explained. "If I just have an hour's worth of work but it takes a thousand computers to do it, I don't need to have those thousand computers." The user can connect to the grid for an hour, he said, and then disconnect when the work is complete.
Utility computing should be particularly appealing in times of lean budgets and high demand. Mark Forman, former administrator of the federal Office of Management and Budget's E-Government and Information Technology Office, said he believes the approach presents an extraordinary cost-savings platform for governments at all levels.
"The cost-savings from taking advantage of grid computing in an on-demand environment are huge," said Forman. "Most local governments are strapped for money and would love to take advantage of other people's assets on information and applications."
Dan Hushon, chief technologist for Sun's Strategic Development Business Unit, said government agencies can save substantial amounts of money with the utility computing model.
"The government's average cost of delivering IT is somewhere in the $9-$18 per-CPU-hour range," Hushon said. "Here, we are at $1 per CPU-hour. If you decide it's cheaper to rent or simply utilize computer space rather than buy it and operate it yourself, you have that option."
Two states have launched projects implementing grid-based utility computing, both of which have unique goals, and harness the power and potential of on-demand computing in a grid environment.
Robert Horton is a state archivist and head of the Collections Department at the MNHS, which is working with the San Diego Supercomputer Center (SDSCC) at the University of California, San Diego, to test whether the SDSCC can host terabytes of spatial data from the MNHS while simultaneously allowing access to any requested data.
"The primary data is digitized material," said Horton. "Surveys and maps from the 19th century, for example. They are big maps, very high-quality with very high resolution."
The SDSCC plays the role of utility provider by hosting the data and making it accessible to those who ask for it. Horton admits the MNHS staff is not technologically sophisticated enough to manage such large data sets. By creating an on-demand, grid-based environment, Horton hopes to someday manage all of their data within a virtual organization.
"We're archivists, not technology experts," said Horton, adding that government won't ever likely have expertise equivalent to the private sector or higher education with these types of networks. "Governments have primary business functions, and that's what they know best. They need some collaboration along these lines to manage the data they create and use."
At the WVHTC Foundation, CEO Jim Estep is creating the Global Grid Exchange to boost West Virginia's position in the academic world while simultaneously luring new business to the state.
"The idea is that we put our computing nodes on all the various computers that the state has in its inventory," Estep said. "In turn, the state can use the computational capability of our grid for their work. They can leverage the grid and save, we hope, millions of dollars."
But bringing new life to West Virginia's economy is the primary goal of the Global Grid Exchange. Staff members are applying the utility model themselves, essentially reversing how a standard government utility model looks.
"We hope the big bang for the buck is going to be the businesses that spend millions every year doing computations," said Estep. "We as a state want to offer a package to them to improve their margins and bottom lines with this resource we have built. We can use that as an enticement for businesses to locate in West Virginia, thus creating jobs for the people."
By making use of existing state resources, the Global Grid Exchange can create tremendous new business opportunities that benefit all of West Virginia.
"This is, for all intents and purposes, the movement of computation into the utility environment," said Estep. "We are basically building a utility. In the same way electrical grids are organized, you'll see our grid organized." | <urn:uuid:d36794ef-bf5f-4933-bc70-0c9b7658c5b0> | CC-MAIN-2017-04 | http://www.govtech.com/featured/Witnessing-an-Evolution.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00007-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945112 | 2,303 | 3.28125 | 3 |
There are more than 43,000 healthcare apps available through the Apple iTunes App Store. That sounds like a wealth of useful apps, but the truth is that only a handful of them are actually being used. According to a survey published in October 2013 by the IMS Institute for Healthcare Informatics, most have been downloaded less than 500 times, and very few offer robust functionality. Worse, many offer information to patients that is inaccurate or unproven. Worst of all, some apps that are designed for clinical use provide clinically inappropriate/inaccurate advice to physicians.
Finding a good one that provides useful functionality is the proverbial search for a needle in a haystack.
FDA provides limited regulation
Currently, the FDA does not regulate consumer medical apps, so, like the supplement industry, it’s a buyer-beware situation. Without rigorous clinical trials, there is no way to know which, if any, of these apps will actually improve health outcomes. Since few of these apps have been tested in clinical trials, their efficacy and safety are largely unknown.
Fortunately, in September 2013, the FDA announced that it will regulate non-consumer medical apps that have the potential to harm patients. The FDA is focusing its oversight on two kinds of apps:
- Those intended to be used as an accessory to a regulated medical device – for example, an application that allows a health care professional to make a specific diagnosis by viewing a medical image from a picture archiving and communication system (PACS) on a smartphone or a mobile tablet; or
- Those that transform a mobile platform into a regulated medical device – for example, an application that turns a smartphone into an electrocardiography (ECG) machine to detect abnormal heart rhythms or determine if a patient is experiencing a heart attack.
Expect an update on this topic this week when FDA Commissioner Margaret Hamburg offers the closing keynote address at the 2013 mHealth Summit on Dec. 11.
While that oversight is important, it leaves a vast, unregulated gray area of apps that are intended to offer health advice or monitor health status. The best guidance for finding the few good apps among the many questionable offerings comes from independent review sites, such as iMedicalApps, or from physician specialty and subspecialty groups, which provide ratings for apps used by their members. Many such groups, like the American Gastroenterological Association, offer apps for physicians that detail evidence-based treatment guidelines to help promote higher quality care. Because these are rigorously reviewed prior to being offered, they are likely to be accurate.
Remote patient monitoring tools are the standouts among apps
The apps that stand out from the crowd for being the most useful are those that have been developed as part of a comprehensive healthcare strategy for managing chronic diseases. Remote monitoring of patient clinical markers and general health status is proving to be an effective means of improving health in patients with chronic illnesses. Many of the apps developed for remote monitoring are undergoing rigorous testing in well-designed clinical trials, offering solid proof of their effectiveness. The Agency for Healthcare Research and Quality (AHRQ) has reported on several studies that have shown lower mortality and lower costs as a result of remote monitoring using cell-phone based technology.
As the healthcare industry becomes more attuned to mobile devices, I expect that there will be more app developers willing to test their tools in clinical trials like these, and the apps that prove themselves will be the ones that will be widely used.
In the meantime, physicians should be cautious about recommending apps for patients to use. Beyond usefulness and accuracy, the apps should be vetted for their ability to protect patient privacy and the security of patient data. Read the reviews online, check with your medical or specialty association for recommendations, and then test the app yourself. Patients likewise should ask tough questions about any app that involves the uploading of personal healthcare information. Consumers should also check with their physician or healthcare provider before following any advice. | <urn:uuid:baaa41f2-099c-457b-b0dc-607426b86c95> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2474276/healthcare-it/caution--untested-mhealth-apps-proliferate--but-few-good-ones-work-well.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00493-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954206 | 795 | 2.578125 | 3 |
The biggest threat to modern web applications is developers who exhibit Advanced Persistent Ignorance. Developers rely on all sorts of APIs to build complex software. This one makes code insecure by default. API is the willful disregard of simple, established security designs.
First, we must step back into history to establish a departure point for ignorance. This is just one of many. Almost seven years ago on July 13, 2004 PHP 5.0.0 was officially released. Importantly, it included this note:
A new MySQL extension named MySQLi for developers using MySQL 4.1 and
later. This new extension includes an object-oriented interface in addition
to a traditional interface; as well as support for many of MySQL’s new
features, such as prepared statements.
Of course, any new feature can be expected to have bugs and implementation issues. Even with an assumption that serious bugs would take a year to be worked out, that means PHP has had a secure database query mechanism for the past six years.1
The first OWASP Top 10 list from 2004 mentioned prepared statements as a countermeasure.2 Along with PHP and MySQL, .NET and Java supported these, as did Perl (before its popularity was subsumed by buzzword-building Python and Ruby On Rails). In fact, PHP and MySQL trailed other languages and databases in their support for prepared statements.
SQL injection itself predates the first OWASP Top 10 list by several years. One of the first summations of the general class of injection attacks was the 1999 Phrack article, Perl CGI problems.3 SQL injection was simply a specialization of these problems to database queries.
So, we’ve established the age of injection attacks at over a dozen years old and reliable countermeasures at least six years old. These are geologic timescales for the Internet.4
There’s no excuse for SQL injection vulnerabilities to exist in 2011.
It’s not a forgivable coding mistake anymore. Coding mistakes most often imply implementation errors — bugs due to typos, forgetfulness, or syntax. Modern SQL injection vulns are a sign of bad design. For six years, prepared statements have offered a means of establishing a fundamentally secure design for database queries. It takes actual effort to make them insecure. SQL injection attacks could still happen against a prepared statement, but only due to egregiously poor code that shouldn’t pass a basic review. (Yes, yes, stored procedures can be broken, too. String concatenation happens all over the place. Never the less, writing an insecure stored procedure or prepared statement should be more difficult than writing an insecure raw SQL statement.)
Maybe one of the two billion PHP hobby projects on Sourceforge could be expected to still have these vulns, but not real web sites. And, please, never in sites for security firms. Let’s review the previous few months:
November 2010, military web site.
December 2010, open source code repository web site.
February 2011, HBGary Federal. Sauron’s inept little brother. You might have heard about this one.
February 2011, Dating web site.
March 2011, MySQL.com. Umm…speechless. Let’s move on.
Looking back on the list, you might first notice that The Register is the xssed.org of SQL injection vulns. (That is, in addition to a fine repository of typos and pun-based innuendos. I guess they’re just journalists after all, hackers don’t bother with such subtleties.)
The list will expand throughout 2011.
For all the articles, lists, and books published on SQL injection one must assume that developers are being persistently ignorant of security concepts to such a degree that five years from now we may hear yet again of a database hack that disclosed unencrypted passwords.
If you’re going to use performance as an excuse for avoiding prepared statements then you either haven’t bothered to measure the impact, you haven’t understood how to scale web architectures, and you might as well turn off HTTPS for the login page so you can get more users logging in per second. If you have other excuses for avoiding database security, ask yourself if it takes longer to write a ranting rebuttal or a wrapper for secure database queries.
There may in fact be hope for the future. The rush to scaleability and the pious invocation of “Cloud” has created a new beast of NoSQL data stores. These NoSQL databases typically just have key-value stores with grammars that aren’t so easily corrupted by a stray apostrophe or semi-colon in the way that traditional SQL can be corrupted. Who knows, maybe security conferences will finally do away with presentations on yet another SQL injection exploit and find someone with a novel, new NoSQL Injection vulnerability.
Advanced Persistent Ignorance isn’t limited to SQL injection vulnerabilities. It has just spectacularly manifested itself in them. There are many unsolved problems in information security, but there are also many mostly-solved problems. Big unsolved problems in web security are password resets (overwhelmingly relying on e-mail) and using static credit card numbers to purchase items.
SQL injection countermeasures are an example of a mostly-solved problem. Using prepared statements isn’t 100% secure, but it makes a significant improvement. User authentication and password storage is another area of web security rife with errors. Adopting a solution like OpenID can reduce the burden of security around authentication. As with all things crypto-related, using well-maintained libraries and system calls are far superior to writing your own hash function or encryption scheme.
The antidote to API is the continuous acquisition of knowledge and experience. Yes, you can have your cake and eat it, too.
1 MySQL introduced support for prepared statements in version 4.1, which was first released April 3, 2003.
4 Perhaps a dangerous metaphor since here in the U.S. we still have school boards and prominent politicians for whom a complete geologic time spans a meager 4,000 years. Maybe some developers enjoy using ludicrous design patterns. | <urn:uuid:db9e614b-94d2-4db3-b866-efbcb66b0c33> | CC-MAIN-2017-04 | https://deadliestwebattacks.com/2011/04/14/advanced-persistent-ignorance/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00458-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927293 | 1,276 | 2.8125 | 3 |
Over the past few years we’ve heard more about smartphone encryption than, quite frankly, most of us expected to hear in a lifetime. We learned that proper encryption can slow down even sophisticated decryption attempts if done correctly. We’ve also learned that incorrect implementations can undo most of that security.
In other words, phone encryption is an area where details matter. For the past few weeks I’ve been looking a bit at Android Nougat’s new file-based encryption to see how well they’ve addressed some of those details in their latest release. The answer, unfortunately, is that there’s still lots of work to do. In this post I’m going to talk about a bit of that.
Background: file and disk encryption
Disk encryption is much older than smartphones. Indeed, early encrypting filesystems date back at least to the early 1990s and proprietary implementations may go back before that. Even in the relatively new area of PCs operating systems, disk encryption has been a built-in feature since the early 2000s.
The typical PC disk encryption system operates as follows. At boot time you enter a password. This is fed through a key derivation function to derive a cryptographic key. If a hardware co-processor is available (e.g., a TPM), your key is further strengthened by “tangling” it with some secrets stored in the hardware. This helps to lock encryption to a particular device.
The actual encryption can be done in one of two different ways:
- Full Disk Encryption (FDE) systems (like Truecrypt, BitLocker and FileVault) encrypt disks at the level of disk sectors. This is an all-or-nothing approach, since the encryption drivers won’t necessarily have any idea what files those sectors represent. At the same time, FDE is popular — mainly because it’s extremely easy to implement.
- File-based Encryption (FBE) systems (like EncFS and eCryptFS) encrypt individual files. This approach requires changes to the filesystem itself, but has the benefit of allowing fine grained access controls where individual files are encrypted using different keys.
Most commercial PC disk encryption software has historically opted to use the full-disk encryption (FDE) approach. Mostly this is just a matter of expediency: FDE is just significantly easier to implement. But philosophically, it also reflects a particular view of what disk encryption was meant to accomplish.
In this view, encryption is an all-or-nothing proposition. Your machine is either on or off; accessible or inaccessible. As long as you make sure to have your laptop stolen only when it’s off, disk encryption will keep you perfectly safe.
So what does this have to do with Android?
Android’s early attempts at adding encryption to their phones followed the standard PC full-disk encryption paradigm. Beginning in Android 4.4 (Kitkat) through Android 6.0 (Marshmallow), Android systems shipped with a kernel device mapper called dm-crypt designed to encrypt disks at the sector level. This represented a quick and dirty way to bring encryption to Android phones, and it made sense — if you believe that phones are just very tiny PCs.
The problem is that smartphones are not PCs.
The major difference is that smartphone users are never encouraged to shut down their device. In practice this means that — after you enter a passcode once after boot — normal users spend their whole day walking around with all their cryptographic keys in RAM. Since phone batteries live for a day or more (a long time compared to laptops) encryption doesn’t really offer much to protect you against an attacker who gets their hands on your phone during this time.
Of course, users do lock their smartphones. In principle, a clever implementation could evict sensitive cryptographic keys from RAM when the device locks, then re-derive them the next time the user logs in. Unfortunately, Android doesn’t do this — for the very simple reason that Android users want their phones to actually work. Without cryptographic keys in RAM, an FDE system loses access to everything on the storage drive. In practice this turns it into a brick.
For this very excellent reason, once you boot an Android FDE phone it will never evict its cryptographic keys from RAM. And this is not good.
So what’s the alternative?
Android is not the only game in town when it comes to phone encryption. Apple, for its part, also gave this problem a lot of thought and came to a subtly different solution.
Starting with iOS 4, Apple included a “data protection” feature to encrypt all data stored a device. But unlike Android, Apple doesn’t use the full-disk encryption paradigm. Instead, they employ a file-based encryption approach that individually encrypts each file on the device.
In the Apple system, the contents of each file is encrypted under a unique per-file key (metadata is encrypted separately). The file key is in turn encrypted with one of several “class keys” that are derived from the user passcode and some hardware secrets embedded in the processor.
The main advantage of the Apple approach is that instead of a single FDE key to rule them all, Apple can implement fine-grained access control for individual files. To enable this, iOS provides an API developers can use to specify which class key to use in encrypting any given file. The available “protection classes” include:
- Complete protection. Files encrypted with this class key can only be accessed when the device is powered up and unlocked. To ensure this, the class key is evicted from RAM a few seconds after the device locks.
- Protected Until First User Authentication. Files encrypted with this class key are protected until the user first logs in (after a reboot), and the key remains in memory.
- No protection. These files are accessible even when the device has been rebooted, and the user has not yet logged in.
By giving developers the option to individually protect different files, Apple made it possible to build applications that can work while the device is locked, while providing strong protection for files containing sensitive data.
Apple even created a fourth option for apps that simply need to create new encrypted files when the class key has been evicted from RAM. This class uses public key encryption to write new files. This is why you can safely take pictures even when your device is locked.
Apple’s approach isn’t perfect. What it is, however, is the obvious result of a long and careful thought process. All of which raises the following question…
Why the hell didn’t Android do this as well?
The short answer is Android is trying to. Sort of. Let me explain.
As of Android 7.0 (Nougat), Google has moved away from full-disk encryption as the primary mechanism for protecting data at rest. If you set a passcode on your device, Android N systems can be configured to support a more Apple-like approach that uses file encryption. So far so good.
The new system is called Direct Boot, so named because it addresses what Google obviously saw as fatal problem with Android FDE — namely, that FDE-protected phones are useless bricks following a reboot. The main advantage of the new model is that it allows phones to access some data even before you enter the passcode. This is enabled by providing developers with two separate “encryption contexts”:
- Credential encrypted storage. Files in this area are encrypted under the user’s passcode, and won’t be available until the user enters their passcode (once).
- Device encrypted storage. These files are not encrypted under the user’s passcode (though they may be encrypted using hardware secrets). Thus they are available after boot, even before the user enters a passcode.
Direct Boot even provides separate encryption contexts for different users on the phone — something I’m not quite sure what to do with. But sure, why not?
If Android is making all these changes, what’s the problem?
One thing you might have noticed is that where Apple had four categories of protection, Android N only has two. And it’s the two missing categories that cause the problems. These are the “complete protection” categories that allow the user to lock their device following first user authentication — and evict the keys from memory.
Of course, you might argue that Android could provide this by forcing application developers to switch back to “device encrypted storage” following a device lock. The problem with this idea is twofold. First, Android documentation and sample code is explicit that this isn’t how things work:
Moreover, a quick read of the documentation shows that even if you wanted to, there is no unambiguous way for Android to tell applications when the system has been re-locked. If keys are evicted when the device is locked, applications will unexpectedly find their file accesses returning errors. Even system applications tend to do badly when this happens.
And of course, this assumes that Android N will even try to evict keys when you lock the device. Here’s how the current filesystem encryption code handles locks:
While the above is bad, it’s important to stress that the real problem here is not really in the cryptography. The problem is that since Google is not giving developers proper guidance, the company may be locking Android into years of insecurity. Without (even a half-baked) solution to define a “complete” protection class, Android app developers can’t build their apps correctly to support the idea that devices can lock. Even if Android O gets around to implementing key eviction, the existing legacy app base won’t be able to handle it — since this will break a million apps that have implemented their security according to Android’s current recommendations.
In short: this is a thing you get right from the start, or you don’t do at all. It looks like — for the moment — Android isn’t getting it right.
Are keys that easy to steal?
Of course it’s reasonable to ask whether it’s having keys in RAM is that big of concern in the first place. Can these keys actually be accessed?
The answer to that question is a bit complicated. First, if you’re up against somebody with a hardware lab and forensic expertise, the answer is almost certainly “yes”. Once you’ve entered your passcode and derived the keys, they aren’t stored in some magically secure part of the phone. People with the ability to access RAM or the bus lines of the device can potentially nick them.
But that’s a lot of work. From a software perspective, it’s even worse. A software attack would require a way to get past the phone’s lockscreen in order to get running code on the device. In older (pre-N) versions of Android the attacker might need to then escalate privileges to get access to Kernel memory. Remarkably, Android N doesn’t even store its disk keys in the Kernel — instead they’re held by the “vold” daemon, which runs as user “root” in userspace. This doesn’t make exploits trivial, but it certainly isn’t the best way to handle things.
Of course, all of this is mostly irrelevant. The main point is that if the keys are loaded you don’t need to steal them. If you have a way to get past the lockscreen, you can just access files on the disk.
What about hardware?
Although a bit of a tangent, it’s worth noting that many high-end Android phones use some sort of trusted hardware to enable encryption. The most common approach is to use a trusted execution environment (TEE) running with ARM TrustZone.
This definitely solves a problem. Unfortunately it’s not quite the same problem as discussed above. ARM TrustZone — when it works correctly, which is not guaranteed — forces attackers to derive their encryption keys on the device itself, which should make offline dictionary attacks on the password much harder. In some cases, this hardware can be used to cache the keys and reveal them only when you input a biometric such as a fingerprint.
The problem here is that in Android N, this only helps you at the time the keys are being initially derived. Once that happens (i.e., following your first login), the hardware doesn’t appear to do much. The resulting derived keys seem to live forever in normal userspace RAM. While it’s possible that specific phones (e.g., Google’s Pixel, or Samsung devices) implement additional countermeasures, on stock Android N phones hardware doesn’t save you.
So what does it all mean?
How you feel about this depends on whether you’re a “glass half full” or “glass half empty” kind of person.
If you’re an optimistic type, you’ll point out that Android is clearly moving in the right direction. And while there’s a lot of work still to be done, even a half-baked implementation of file-based implementation is better than the last generation of dumb FDE Android encryption. Also: you probably also think clowns are nice.
On the other hand, you might notice that this is a pretty goddamn low standard. In other words, in 2016 Android is still struggling to deploy encryption that achieves (lock screen) security that Apple figured out six years ago. And they’re not even getting it right. That doesn’t bode well for the long term security of Android users.
And that’s a shame, because as many have pointed out, the users who rely on Android phones are disproportionately poorer and more at-risk. By treating encryption as a relatively low priority, Google is basically telling these people that they shouldn’t get the same protections as other users. This may keep the FBI off Google’s backs, but in the long term it’s bad judgement on Google’s part. | <urn:uuid:fe166fc3-ed46-4953-8c3b-29394167c98a> | CC-MAIN-2017-04 | https://blog.cryptographyengineering.com/category/android/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00182-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930469 | 2,962 | 2.671875 | 3 |
A patch cable, also called a patch cord, is an electrical or optical cable used to connect (“patch-in”) one electronic or optical device to another for signal routing. Devices of different types (e.g., a switch connected to a computer, or a switch to a router) are connected with patch cables. Patch cables are usually produced in many different colors so as to be easily distinguishable, and are relatively short, perhaps no longer than two meters. Types of patch cables include microphone cables, headphone extension cables, XLR connector, Tiny Telephone (TT) connector, RCA connector and ¼” TRS phone connector cables (as well as modular Ethernet cables), and thicker, hose-like cords (snake cable) used to carry video or amplified signals.
Among all the patch cables, the fiber optic patch cable is the most popular one. It is a fiber optic cable that can be directly connected to other equipment for connecting and managing convenience. It is used for making patch cords from equipment to fiber optic cabling. Having a thick layer of protection, it is used to connect the optical transmitter, receiver and the terminal box. Its core has a high refractive index, used for transmitting light. The coating has low refractive index, to reflect light back into the core. The jacket protects the interior. It is widely used in communication room, FTTH (Fiber to The Home), LAN (Local Area Network), FOS (fiber optic sensor), fiber optic communication system, optical fiber connected and transmitted equipment, defense combat readiness, etc, with the characteristics of low Insertion Loss, high Return Loss, good Repeatability, good Interchange, excellent environmental adaptability. The connectors of fiber optic cables are various, like FC, SC, ST, LC, MTRJ, MU and E2000. We can make either two of them together according to customers, need.
Wire loom is a flexible, tube-shaped product that installs over bundles of wires to keep them neat, organized, and protected from abrasion and other damage. Also known as corrugated tubing, wire loom has a ridged surface that lets it bend easily, but also gives it extra strength. Wire loom is generally slit along its length, making it easy for users to insert and remove cables, but there are also a few unsplit varieties available, which can be used as a type of flexible conduit when extra cable protection is needed. To use wire loom, simply slip cables in through the side slit – this can be done by hand, or with a wire loom tool, which folds around cable bundles and helps you “zip” them into place in just seconds. Cables are equally easy to remove, which makes wire loom perfect for use in situations that call for frequent cable updates and reconfigurations. Wire loom is extremely versatile, and can be used at home (to prevent small children and pets from chewing on power cords) or at the office (to organize and conceal the computer cables under your desk). There’s even chrome automotive wire loom, which is used to protect and add some custom aftermarket shine to engine bay and motorcycle cables. | <urn:uuid:7f19170c-be84-4722-ae75-9309e59cb650> | CC-MAIN-2017-04 | http://www.fs.com/blog/patch-cable-and-wiring-loom.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00302-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938306 | 653 | 3.109375 | 3 |
Have a question for us? Use our contact sales form:
The next time you perform a web search think of this: According to an analysis by US physicist Alex Wissner-Gross in an article published by the BBC, searching the internet is bad for global warming.
While the players in this drama can dispute the facts, there is no denying that computers and the networks that connect them consume power. While I have not seen Dr. Wissner-Gross' analysis, I would be willing to bet he did not factor in the power of the entire world-wide network that is involved with connecting the client to the Google data center and which lies between Google's data center and the destinations that they scan for information.
This is good food for thought for the next time I work at home smugly thinking that I am saving the planet by not driving my car to work that day.
It appears all the arguments in the article seem to focus on the amount of power consumed from carbon-based energy (as opposed to renewable resources). Of-course, nobody talks about the heat footprint in terms of the thermal impact of all this technology, but the article does mention that 2% of all global emissions comes from IT. Estimates put US energy consumption in the neighborhood of 760 GW. If we were to conjecture that there is a direct proportion between emissions and power consumption, we could argue that IT in the US alone would account for at least 15 GW. According to data I gathered on the internet (doing searches), there are over 32 million computer servers world wide. Remember that every watt it takes to power them must be doubled to allow for cooling them. A typical server can average 150W - 200W of power, but none of this computes the power of the network required to interconnect them. This begs the question: "If nobody uses the Internet, does it still consume electricity?" Rest-assured it does, so you need not feel guilty reading this stupid blog, yet you can see why so many companies associated with computers are working towards keeping this number from growing.
In a previous blog I bragged that I saved the planet form 1320 tons of carbon emissions over the last twelve years; that is equivalent to approximately 273398 grams per day or roughly 3 searches per second, based on Wissner-Gross' estimations. So watch your web searches. If you find yourself searching this fast, you might want to go for a drive. | <urn:uuid:52711832-f02f-421b-bec5-a296584400fe> | CC-MAIN-2017-04 | http://www.dialogic.com/den/d/b/corporate/archive/2009/01/13/web-searches-are-killing-the-planet.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00540-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958041 | 500 | 2.828125 | 3 |
3.1.1 What is the RSA cryptosystem?
The RSA cryptosystem is a public-key cryptosystem that offers both encryption and digital signatures (authentication). Ronald Rivest, Adi Shamir, and Leonard Adleman developed the RSA system in 1977 [RSA78]; RSA stands for the first letter in each of its inventors' last names.
The RSA algorithm works as follows: take two large primes, p and q, and compute their product n = pq; n is called the modulus. Choose a number, e, less than n and relatively prime to (p-1)(q-1), which means e and (p-1)(q-1) have no common factors except 1. Find another number d such that (ed - 1) is divisible by (p-1)(q-1). The values e and d are called the public and private exponents, respectively. The public key is the pair (n, e); the private key is (n, d). The factors p and q may be destroyed or kept with the private key.
It is currently difficult to obtain the private key d from the public key (n, e). However if one could factor n into p and q, then one could obtain the private key d. Thus the security of the RSA system is based on the assumption that factoring is difficult. The discovery of an easy method of factoring would "break" RSA (see Question 3.1.3 and Question 2.3.3).
Suppose Alice wants to send a message m to Bob. Alice creates the ciphertext c by exponentiating: c = me mod n, where e and n are Bob's public key. She sends c to Bob. To decrypt, Bob also exponentiates: m = cd mod n; the relationship between e and d ensures that Bob correctly recovers m. Since only Bob knows d, only Bob can decrypt this message.
Suppose Alice wants to send a message m to Bob in such a way that Bob is assured the message is both authentic, has not been tampered with, and from Alice. Alice creates a digital signature s by exponentiating: s = md mod n, where d and n are Alice's private key. She sends m and s to Bob. To verify the signature, Bob exponentiates and checks that the message m is recovered: m = se mod n, where e and n are Alice's public key.
Thus encryption and authentication take place without any sharing of private keys: each person uses only another's public key or their own private key. Anyone can send an encrypted message or verify a signed message, but only someone in possession of the correct private key can decrypt or sign a message. | <urn:uuid:0be7b017-d4b9-4cf5-b90e-9f53e7c96261> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-is-the-rsa-cryptosystem.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00540-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927955 | 564 | 4.21875 | 4 |
3,500 pounds of steel, a license to drive, and the judgment of a 16-year-old equals the stuff nightmares are made of. Although we teach, test, and then hand over the keys to the car at this impressionable age, parents (and fellow drivers alike) wince at the thought of a teen behind the wheel. But while the thought is nightmare provoking, we know that practice can make perfect (or at least, practice makes “better”).
When it comes to the use of technology in the classroom, Damien Barrett, Mac system technician for Montclair Kimberly Academy (MKA), has embraced an approach not unlike the Driver’s Ed class you took in High School. Aptly called “MKA Computer Driver’s Test and Driver’s Exam”, students ages 9-18 are taught the importance of responsibly handling a few pounds of Apple technology. Their approach to management, making each student an administrator on their own device, is what truly sets them apart from other K-12 institutions. Through the use of the Casper Suite (and plenty of training), students are encouraged to wield the administrator sword with caution and confidence.
Join the discussion on JAMF Nation: | <urn:uuid:1d1b5dae-556b-4099-9d3f-074eafdc475c> | CC-MAIN-2017-04 | https://www.jamf.com/blog/teaching-students-ethics-and-responsibility-through-the-use-of-technology/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00476-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931532 | 252 | 3.15625 | 3 |
Intersperience survey reveals strong emotional dependency on technology
• 49% of kids under 12 would be “sad” without the internet
• 70% of teenagers chat on Facebook
• Two year olds dominate the family iPad
• Children perform more daily tasks online than adults
London, UK, 30 January 2012 – Young children in the UK have a powerful emotional connection to the internet with 49% of under-12s reporting that they would be “sad” without it while one in five would be “lonely”, according to a new study by international consumer research specialist Intersperience.
The findings emerged from the ‘Digital Futures’ project, which surveyed 1,000 young people in the UK between the ages of eight and 18 on the impact of online and digital technology in their lives.It uncovered radical differences between the ways children and adults relate to the internet with under 18s primarily valuing it for social and entertainment purposes and older teens particularly keen on mobile internet.
The study found that teenagers are even more emotionally bound to the internet than either primary school age children or adults, with 60% reporting that they would be “sad” if they could not connect to the internet, while 48% (more than double the proportion of under 12s) said they would be lonely.
Teenagers are the heaviest users of mobile devices, particularly smartphones, and their number one online activity is chatting to friends - more than 70% of teenagers said they chat on Facebook. However, they are still keen on real-life conversation as more than half like to talk to friends face-to-face, compared to 35% who like to talk to friends online.
Intersperience Chief Executive Paul Hudson said: “The fact that children have a strong emotional attachment to the internet is often regarded as a negative thing but in fact it is perfectly natural for a generation whose social life is largely online. It’s equivalent to taking a phone away from older people, they’d feel sad and lonely too.”
Under 12s emerged as sophisticated internet users with 74% playing online games, 65% using the internet for homework and more than one-third going online to look for things to buy or sell. In a cost-conscious climate, young teens are also using it to check prices for clothes or other fashion items.
Children are also smart about backing up data, with kids as young as eight using hard drives and teens storing information in ‘the cloud’. Young people are also well-informed about online security and discerning about releasing personal data, with one-third unwilling to give their details to organisations online while 22% said they give false ID information.
Even toddlers have acquired a high level of skill with internet-enabled devices according to parents who said their two-year olds are the most likely to dominate the family iPad. Toddlers easily master touch screen technology to access games or stories independently.
Paul Hudson said: “Our Digital Futures project is one of the most comprehensive studies undertaken in the UK on how children interact with the digital world. It shows that even very young children are skilled multi-channel communicators who view the internet as an ever-present virtual playground. However they also have a surprisingly good grasp of complex issues like online security and e-commerce.”
He added: “We matched the results against our Digital Selves research on adult behaviour and it shows that even eight to 11 years olds perform a wider daily range of tasks online than grown-ups. Adults may be concerned about the strong emotional connection kids have to the internet today but our study shows that far from losing the art of conversation, children still prefer chatting to their friends in person.”
As featured in the Telegraph article on 30 January 2012 - British children feel 'sad' without internet connection
NOTES TO EDITORS
Media Contact: Valerie Darroch 07970 737708 E: firstname.lastname@example.org
Intersperience is an international consumer research specialist with expertise in consumer behaviour, experience and attitudes. The team, which is headquartered in Cumbria, has more than 25 years experience in analysing consumer behaviour. It employs a range of interpretative models and frameworks including a proprietary online research platform. Intersperience has significant global expertise and an international research hub at Lancaster University which conducts research in more than 60 languages as well as associates in major global markets. Intersperience is an expert in how technology impacts on consumer behaviour and multi-channel customer service strategy. Clients include:The British Council; General Motors; Iceland; Samsung; ScottishPower; and William Hill.
About the Intersperience Digital Futures research project:
Intersperience conducted a wide-ranging survey among 1,000 young people in the UK between the ages of eight and 18 on how they use the internet and internet-enabled devices. Participants mirrored the general UK population in terms of social class and of the total group, 35% were aged between eight and 11, 37% were aged 12 to 14, and the remainder were aged 15 to 17. In addition, the team carried out qualitative research among 15 families with children aged from two to 18 which included participation in family tasks such as video diaries, communication logs and mood diaries. Researchers also carried out 23 in-depth family interviews including 11 face-to-face interviews with under 18s. Field research was carried out between July and August 2011.
For more information:
Tel: + 44 (0) 15395 65450 | <urn:uuid:30f3f8a7-d72a-4265-b700-d7c9e4db6e34> | CC-MAIN-2017-04 | http://www.intersperience.com/news_more.asp?news_id=46 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00108-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953902 | 1,145 | 2.71875 | 3 |
The Internet, a global system of interconnected computer networks that has transformed the lives of billions of people across the world, turned 30 on Tuesday.
Although the origins of the Internet reach back to research of the 1960s - commissioned by the United States government to build robust, fault-tolerant, and distributed computer networks - 1 January 1983 was the day that the Internet as we know it replaced previous networking systems.
It was the day the US Department of Defence's ARPANET network fully switched from the older Network Control Program (NCP) protocol to the Transmission Control Protocol and Internet Protocol (TCP/IP) communications system, which now underpins the entire Internet.
The reason for the switch was that ARPANET had roughly 1,000 computers connected on the network at the time, and it was growing fast, so to handle the larger and increasingly complicated network ARPANET needed a new protocol.
Vint Cerf, VP and Chief Internet Evangelist at Google, was part of the team that developed a new computer communication protocol designed specifically to support connection among different packet-switched networks.
In a blog post to mark Internet's thirtieth year, Cerf, who is widely regarded as the "godfather of the Internet" said that the main emotion he remembers looking back on that day was relief.
"There were no grand celebrations - I can't even find a photograph. The only visible mementos were the 'I survived the TCP/IP switchover' pins proudly worn by those who went through the ordeal," he said.
"Yet, with hindsight, it's obvious it was a momentous occasion. On that day, the operational Internet was born."
English computer scientist Tim Berners-Lee later used the new Internet protocol to host the system of interlinked hypertext documents he invented in 1989, known as the World Wide Web.
This story, "Vint Cerf Hails 30th Birthday of the Modern-Day Internet" was originally published by Techworld.com. | <urn:uuid:3ca4e3da-bf88-4275-9d6a-2b66f35cc1e2> | CC-MAIN-2017-04 | http://www.cio.com/article/2389438/internet/vint-cerf-hails-30th-birthday-of-the-modern-day-internet.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00136-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.968413 | 414 | 3.171875 | 3 |
Quantum cryptography is a new technique of securing computer network communication channel. Existing standard crypto systems are using advanced algorithms to create key pairs which are extremely hard to inverse engineer. Quantum cryptography avoids any mathematical algorithm and uses principles of quantum physics.
Quantum crypto implements a new technique of generating and exchanging crypto keys which makes it impossible for third party entities to get those keys by snooping or to create man in the middle by snooping and sending copies of original key. Keys generated in this way will automatically destroy themselves if read by third-party interferer.
When generated between two sides, using quantum key distribution, secret keys will be used with standard and well known symmetric encryption. The key generation process is the only part which uses quantum principles to work, from there, using this “hyper-secure key” already existing symmetric encryption will be used to encrypt and decrypt data, which will be sent over standard, currently available, optic data networks.
Quantum cryptography usually uses photons to generate the key by sending pieces of data between two sides. It uses of course standard optic communication channel used in computer networks today.
So we see that the mechanism is not only transmitting the key using photon polarization, but the process of sending polarized photons is actually the process which generates the key.
The whole process of generating key, using quantum mechanics characteristics on a photon, is making the quantum cryptography work. The communication channel is actually not really quantum, it’s basically normal optical line. Photons as information qubits are only quantum pieces of this quantum crypto puzzle.
In computer systems today, almost everything we have, and use, is based on Electrodynamics. Deeper understanding of the world of small, under-atom particles, is the next step for physicist of our time to explore. Huge steps are already taken and we today know so much about quantum mechanics. First ideas, emerged in the last 50 years, about exploiting quantum mechanics are starting to give results.
First Quantum Cryptology systems are already available today.
Quantum mechanics explores quantum size particles known as quanta. It further explores characteristics of quantum particles and their entanglement as special way of interaction at a distance. Technologies like cryptography are one of the first examples of quantum mechanics effects in use. In order to perform cryptographic tasks and enhance cryptographic systems, quantum effects are leveraged and we already have first commercial Quantum key distribution (QKD) products available.
Quantum key distribution is the most known and most studies example but is surely not the only one. Many more applications are emerging today and some of the examples include random number generators, delegated quantum computation and secure multiparty computation systems.
Quantum Cryptography takes the advantage from fairly negative set of rules from quantum physics. Quantum physics states that we are unable to measure the system without perturbing it. It is further impossible to accurately measure the position and the momentum of a particle simultaneously. It is also not possible to measure photon polarization in both vertical-horizontal and diagonal basis.
It is possible to measure only one of the basis of the photon and after the measurement we used that photon and cannot make another measurement. Quantum state in a system is impossible to duplicate.
It is hard to imagine that all this negative rules could be used to make something so simply functional like Quantum Key Distribution.
Quantum Cryptography is based on standard cryptography principles which are enhanced by usage of Quantum key distribution system. Quantum key distribution technique enables Quantum Cryptography by employing basic quantum mechanics principles listed above.
Quantum Cryptography today is trying to examine and possibly take advantage of other limitations from quantum mechanics. This includes the impossibility of quantum bit commitment, the issue with quantum rewinding and the definition of quantum security models for classical primitives.
Modern cryptology needs to be enhanced in a way that prevents future quantum computers or similar future calculation systems to break current crypto systems. This issue was shortly mentioned in the introductory article about cryptography.
As we will have the chance to see here, current crypto system are mostly based on prime integer factorization or similar mathematical problem with no known efficient solution. No known efficient solution is good enough for today’s systems as todays computer systems need huge number of calculation steps (CPU cycles) to calculate one prime number out of factorized two big primes.
Creation of quantum computers that can get to the result of that calculation in dramatically reduced number of steps will give those computers a chance to break current popular AES or similar crypto algorithms in the time shorter that the key life time.
Today, Cryptology’s most advanced part is surely Quantum Key Distribution. Quantum Key Distribution is making the enhancement to crypto key distribution system in a way that solves the issue regarding quantum computer brute force attack on the key.
Quantum Cryptology principle is based on quantum mechanics rule which defines that is not possible to take a measurement of a quantum system state without changing it. | <urn:uuid:20c760a6-01d4-4a38-8202-d2828ac78e8e> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/2016/quantum-cryptography-introduction | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00136-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921373 | 1,002 | 3.578125 | 4 |
Ransomware has been making headlines alongside of more major cyber attacks. Often dismissed as a nuisance, and considered opportunistic, more recent instances of this malware could be a much larger cause for concern.
Ransomware has been around for years. Starting off as a fake anti-virus, and evolving into an encryption based attack, the premise of ransomware is to attack the availability aspect of security; holding the user’s computer or files hostage until the ransom is paid.
In February at the Hollywood Presbyterian Medical Center in Los Angeles, administrators paid the asking price of 40 bitcoin, about $17,000 at the time, to regain access to their data. Companies and individuals in the U.S. paid more than $24 million to cyber attackers for ransomware in 2015, according to the FBI.
For those who haven’t heard of the attack, I’ll sum it up:
- Hollywood Presbyterian Medical Center was infected with ransomware.
- Reports are that ransom was upwards of $3.4 million.
- $17,000 was the eventual negotiated and paid amount.
- The CEO said “It was clearly not a malicious attack, it was clearly a random attack”.
Not so fast. Many may have accepted the CEO’s comment, and moved on, but I’m a bit more skeptical. It may have been a random occurrence that started with a single compromised machine, but a couple of things seem different about this attack. I’ve never heard of a ransom being $17,000, let alone $3.4 million. This, like most ransoms, was paid via Bitcoin, but costs are usually around a couple hundred dollars, not thousands or millions. Some ransomware campaign owners have been known to bargain, so it could be possible that the $3.4 million was a starting point (though I expect that may be some media exaggeration). Reports also claim multiple infections, what they don’t clarify is whether $17,000 was the total paid on all individual computers, or if the amount was paid for a single decryption key for all computers.
Consider the following scenarios that could hint towards the implications for modern ransomware:
- The owners of the ransomware noticed similar machines reporting in, and after basic footprinting realized the victim was a hospital. Knowing the hospital would face regulatory fines from losing patient information, the ransom was raised.
- Hackers targeted the hospital with ransomware. Once they had established a foothold, they spread laterally to multiple systems before beginning encryption.
- After landing and encrypting, the ransomware evaluates the content encrypted with basic pattern matching, or the quantity of data, and dynamically modifies the ransom.
These scenarios present a large risk for future attacks, and should all be on the radar of security analysts and executives alike. With new delivery mechanism displayed in Lockey variants, and the possibility of targeted or contextually aware ransomware, attackers looking for monetary gain are migrating to the mindset of, “Why exfiltrate the data when I can just encrypt it?” If the goal is not to steal or destroy data, simply encrypting and demanding money for the key presents a much lower risk, and higher level of anonymity to the attacker.
Disrupting the Kill Chain of Ransomware
As with other attacks, the deployment of ransomware leverages a predictable set of events. Disrupting any event in this chain can possibly prevent the compromise and subsequent encryption of data.
- Ransomware is typically delivered via email, but can use any typical delivery method including drive by download.
- The malware is run, and begins the encryption process.
- The malware calls back to a command and control server. Usually created by a generation algorithm (DGA), these temporary domains are commonly used to facilitate the communication of the encryption key.
- Files are encrypted.
- The shadow copy admin account used by Windows for file recovery is deleted.
- Ransom Demand
- The user is presented with directions on how to pay the ransom and decrypt files.
Disrupting stages of the attack can leverage both commercial software, as well as properly implemented security policy.
- Use an enterprise grade email filtering system. Block the delivery of harmful file extensions like .exe, .zip, and .rar (any archive file if that’s an option).
- Leverage user education to create a sense of vigilance among employees, identifying suspicious attributes of malicious email.
- Implement web filtering to prevent access or redirection to malicious web pages and drive by downloads.
- While some common samples of ransomware may be prevented by traditional antivirus, advancements in next-generation endpoint protection products has risen the efficacy rating in detecting even 0-day ransomware.
- Removal of local administration privileges may prevent the software from being able to fully execute its encryption module.
- Some advanced web filtering products may be able to block callbacks to DGA sites, eliminating the communication of the key, and potentially the encryption of files.
- Properly controlled network privileges can prevent the encryption of shared files, network drives, and lateral movement of the infection. | <urn:uuid:c0e91bb7-fec9-4cb8-907d-f4795bf79387> | CC-MAIN-2017-04 | https://www.criticalstart.com/2016/04/implications-of-modern-ransomware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00530-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935371 | 1,050 | 2.671875 | 3 |
Load balancing allows you to distribute client requests across multiple servers. Load balancers improve server fault tolerance and end-user response time. Load balancing distributes client requests across multiple servers to optimize resource utilization. In a scenario with a limited number of servers providing service to a large number of clients, a server can become overloaded and degrade server performance. Load balancing is used to prevent bottlenecks by forwarding the client requests to the servers best suited to handle them. Thus, balancing the load.
In a load balancing setup, the load balancers are logically located between the client and the server farm. Load balancing is used to manage traffic flow to the servers in the server farm. The network diagram shows the topology of a basic load balancing configuration. Load Balancing can be performed on HTTP, HTTP, SSL, FTP, TCP, SSL_TCP, UDP, SSL_BRIDGE, NNTP, DNS, ANY, SIP-UDP, DNS-TCP, and RTSP.
Load balancing uses a number of algorithms, called load balancing methods, to determine how to distribute the load among the servers. When a load balancer is configured to use the least connection method, it selects the service with the least number of active connections to ensure that the load of the active requests is balanced on the services. This method is the default load balancing method because it provides the best performance.
The following example shows how a NetScaler selects a service for load balancing by using the least connections method. Consider the following three services:
- Server-1 is handling 3 active transactions.
- Server-2 is handling 15 active transactions.
- Server-3 is not handling any active transactions.
The load balancer selects the service by using the value (N) of the following expression:
N = Number of active transactions
The requests are delivered as follows:
- Server-3 receives the first request because the service is not handling any active transactions.
Note: The service with no active transaction is selected first.
- Server-3 receives the second and third requests because the service has the next least number of active transactions.
- Server-1 receives the fourth request.
When Server-1 and Server-3 have same number of active transactions, NetScaler performs load balancing in a round robin manner. Therefore, Server-3 receives the fifth request, Server-1 receives the sixth request, Server-3 receives the seventh request, and Server-1 receives the eighth request and so forth.
Whether it’s load balancing XenApp Web Interface, iPhone/iPad resources, websites, linux servers, windows servers, e-commerce sites, or enterprise applications, NetScaler is the perfect choice. NetScaler, available as a network device or as a virtualized appliance, is a web application delivery appliance that accelerates internal and externally-facing web application up to 5x, optimizes application availability through advanced L4-7 traffic management, increases security with an integrated application firewall, and substantially lowers costs by increasing web server efficiency.
Citrix NetScaler is a comprehensive system deployed in front of web servers that combines high-speed load balancing and content switching with application acceleration, highly-efficient data compression, static and dynamic content caching, SSL acceleration, network optimization, application performance monitoring, and robust application security.
Available as a virtual machine, the NetScaler is perfect for load balancing virtual servers in the datacenter or in the cloud. | <urn:uuid:960f23b4-bb22-4f86-b699-21d5ea26e259> | CC-MAIN-2017-04 | https://www.citrix.com/blogs/2010/09/02/load-balancing-least-connections/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00256-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.876877 | 711 | 3.375 | 3 |
The international space community needs to get a whole lot more serious about cleaning up the debris in orbit - especially in low Earth orbit where critical satellites operate and future space missions will maneuver.
At the current density of debris, there will be an in-orbit collision about every five years, according to research presented at the 6th European Conference on Space Debris taking place in Darmstadt, Germany this week. The research went on to say that about 10 to 15 large objects or about seven tons of debris need to be removed from space a year to reduce the risk of collisions and damage to other spacecraft.
"While measures against further debris creation and actively deorbiting defunct satellites are technically demanding and potentially costly, there is no alternative to protect space as a valuable resource for our critical satellite infrastructure," said Heiner Klinkrad, Head of European Space Agency's Space Debris Office in a statement. The concern too is that there could be more collisions with larger objects that could create even more fragments.
Future space missions must be sustainable, including safe disposal when they are completed. The current levels mean that we must soon begin removing debris from orbit, with research and development urgently needed for pilot 'cleaning' missions. The removal of space debris is an environmental problem of global dimensions that must be assessed in an international context, including the United Nations, according to the ESA.
The Scientific and Technical Subcommittee of the United Nations (UN) Committee on the Peaceful Uses of Outer Space meeting in February included the topic of space debris, though it was mostly an update on international space debris research and activities during 2012.
In February the European Commission laid out plans to better coordinate the surveillance and tracking of space debris in order to protect satellites. Currently, European satellite operators almost completely depend on United States space surveillance and tracking information. But the new proposals would bring together E.U. member states' existing capacities, such as ground-based telescopes, radars and surveillance and tracking data centers, to help the bloc become more self-sufficient.
Space debris consists of human-made objects in Earth's orbit that no longer have a useful purpose, such as pieces of launched spacecraft. It is estimated that up to 600,000 objects larger than 1 centimeter and at least 16,000 larger than 10 cm orbit Earth. An object larger than 1 cm hitting a satellite would damage or destroy sub-systems or instruments on board and a collision with an object larger than 10 cm would destroy the satellite, according to Commission figures. The number of objects larger than 1 cm is expected to reach around 1 million in 2020.
NASA's Orbital Debris Office recently reported that a small Russian geodetic satellite was knocked slightly from its orbit in January 2013 and shed a piece of debris after apparently being struck by a very small meteoroid or orbital debris. Known as BLITS (Ball Lens In The Space), the satellite was circling the Earth at an altitude of 832 km with an inclination of 98.6 degrees at the time of the event. NASA said the BLITS is a completely inert object consisting of a glass sphere encased in another glass sphere with a total mass of 7.53 kg and a full diameter of 17 cm and is used as a target for laser ranging stations to obtain very precise altitude measurements. The satellite was reported to be spinning at a rate of 5.6 seconds prior to the suspected collision and at a rate of only 2.1 seconds afterwards.
NASA noted collisions between satellites and very small debris are common, but normally go unnoticed and do not produce new trackable debris. In 2002, Cosmos 539 was apparently struck by a small object and also released a piece of debris. That same year the JASON-1 spacecraft was struck by a small particle, producing two new debris pieces. The nature of the fragment from BLITS is still under examination, NASA said.
Check out these other hot stories: | <urn:uuid:4195de8d-30fa-4d50-a2f5-50d537b3ff12> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2224538/security/getting-junk-out-of-earth-s-orbit-needs-more-urgency.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00126-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950458 | 788 | 3.78125 | 4 |
An avatar translates the spoken word 'good' into the corresponding sign from British Sign Language.
IBM has developed a system called SiSi (Say It Sign It) that automatically converts the spoken word into British Sign Language (BSL) which is then signed by an animated digital character or avatar.
SiSi brings together a number of computer technologies. A speech recognition module converts the spoken word into text, which SiSi then interprets into gestures that are used to animate an avatar which signs in BSL.
Upon development this system would see a signing avatar 'pop up' in the corner of the display screen in use -- whether that be a laptop, personal computer, TV, meeting-room display or auditorium screen. Users would be able select the size and appearance of the avatar.
This type of solution has the potential in the future to enable a person giving a presentation in business or education to have a digital character projected behind them signing what they are saying. This would complement the existing provision, allowing for situations where a sign language interpreter is not available in person.
"IBM is committed to developing IT solutions that are inclusive and accessible to all members of society," said Dr Andy Stanford-Clark, Master Inventor, IBM Hursley.
"This technology has the potential to make life easier for the deaf community by providing automatic signing for television broadcasts, and making radio news and talk shows available to a new audience over the Internet, or by providing automated voicemail transcription to allow them to make better use of the mobile network."
With an estimated 55,000 people in the UK for whom BSL is their first language, there are great opportunities for businesses, including firms in the leisure and entertainment industries, to make themselves more accessible to this audience, and also to communicate more effectively with them.
SiSi has been developed in the UK by a research team at IBM Hursley, as part of IBM's premier global student intern program, Extreme Blue. In the European part of the program, 80 of the most talented students from across Europe were selected to work on 20 projects and given whatever equipment, support and assistance they required. Working for an intense 12 week period alongside IBM technical and industry leaders, they focused on innovative technology projects, such as SiSi, all of which had real business value.
A video demonstration is available here.
Image courtesy of University of East Anglia, UK | <urn:uuid:4a03a47a-5a31-44ec-8171-6ddee63050f0> | CC-MAIN-2017-04 | http://www.govtech.com/products/Avatar-Created-to-Translate-Spoken-Word.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00577-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946019 | 488 | 2.765625 | 3 |
- What is SSH? A basic description
- SSH architecture
- Common use of SSH for UNIX and Linux systems
- SSH security and configuration best practices
- Private and public key pairs for SSH
- Configuring SSH with UNIX applications or scripts
- Creating a trusted host environment using SSH
- Downloadable resources
- Related topics
Getting started with SSH security and configuration
A hands-on guide
What is SSH? A basic description
Secure Shell (SSH) was intended and designed to afford the greatest protection when remotely accessing another host over the network. It encrypts the network exchange by providing better authentication facilities as well as features such as Secure Copy (SCP), Secure File Transfer Protocol (SFTP), X session forwarding, and port forwarding to increase the security of other insecure protocols. Various types of encryption are available, ranging from 512-bit encryption to as high as 32768 bits, inclusive of ciphers, like Blowfish, Triple DES, CAST-128, Advanced Encryption Scheme (AES), and ARCFOUR. Higher-bit encryption configurations come at a cost of greater network bandwidth use. Figure 1 and Figure 2 show how easily a telnet session can be casually viewed by anyone on the network using a network-sniffing application such as Wireshark.
Figure 1. Telnet protocol sessions are unencrypted.
When using an unsecured, "clear text" protocol such as telnet, anyone on the network can pilfer your passwords and other sensitive information. Figure 1 shows user fsmythe logging in to a remote host through a telnet connection. He enters his user name fsmythe and password r@m$20!0, which are both then viewable by any other user on the same network as our hapless and unsuspecting telnet user.
Figure 2. SSH protocol sessions are encrypted.
Figure 2 provides an overview of a typical SSH session and shows how the encrypted protocol cannot be viewed by any other user on the same network segment. Every major Linux® and UNIX® distribution now comes with a version of the SSH packages installed by default—typically, the open source OpenSSH packages—so there is little need to download and compile from source. If you're not on a Linux or UNIX platform, a plethora of open source and freeware SSH-based tools are available that enjoy a large following for support and practice, such as WinSCP, Putty, FileZilla, TTSSH, and Cygwin (POSIX software installed on top the Windows® operating system). These tools offer a UNIX- or Linux-like shell interface on a Windows platform.
Whatever your operating system, SSH touts many positive benefits for commonplace, everyday computing. Not only is it dependable, secure, and flexible, but it is also simple to install, use, and configure—not to mention feature laden.
IETF RFCs 4251 through 4256 define SSH as the "Secure Shell Protocol for remote login and other secure network services over an insecure network." The shell consists of three main elements (see Figure 3):
- Transport Layer Protocol: This protocol accommodates server authentication, privacy, and integrity with perfect forward privacy. This layer can provide optional compression and is run over a TCP/IP connection but can also be used on top of any other dependable data stream.
- User Authentication Protocol: This protocol authenticates the client to the server and runs over the transport layer.
- Connection Protocol: This protocol multiplexes the encrypted tunnel to numerous logical channels, running over the User Authentication Protocol.
Figure 3. SSH protocol logical layers
The transport layer is responsible for key exchange and server authentication. It sets up encryption, integrity verification, and (optionally) compression and exposes to the upper layer an API for sending and receiving plain text packets. A user authentication layer provides authentication for clients as well as several authentication methods. Common authentication methods include password, public key, keyboard-interactive, GSSAPI, SecureID, and PAM.
The connection layer defines channels, global requests, and the channel requests through which SSH services are provided. A single SSH connection can host multiple channels concurrently, each transferring data in both directions. Channel requests relay information such as the exit code of a server-side process. The SSH client initiates a request to forward a server-side port.
This open architecture design provides extensive flexibility. The transport layer is comparable to Transport Layer Security (TLS), and you can employ custom authentication methods to extend the user authentication layer. Through the connection layer, you can multiplex secondary sessions into a single SSH connection (see Figure 4).
Figure 4. SSH within the Seven-layer OSI Model
Common use of SSH for UNIX and Linux systems
You typically use SSH to allow users to log in to a remote host and execute commands. However, SSH also supports tunneling and X11 connections. It can even transfer files using SFTP or SCP. SSH is applicable for numerous applications within most common platforms, including Linux, UNIX, Windows, and Apple® OS X, although some applications may require features that are only available or compatible with specific SSH clients or servers.
Here are a few common SSH syntax examples:
- Remote host shell access (supersedes telnet and rlogin clear text,
# ssh firstname.lastname@example.org [email@example.com] ~
- Executing a single command on a remote host (replacing rsh):
# ssh firstname.lastname@example.org reboot email@example.com's password: ******
- Copying files from a local server to a remote host by way of the
firstname.lastname@example.org's password: ****** file1.txt 100% 0 0.0KB/s 00:00 file2.txt 100% 0 0.0KB/s 00:00
- In combination with SFTP, as a secure substitute to FTP file transfer:
sftp email@example.com Connecting to example.com... firstname.lastname@example.org's password: ******* sftp>
- In combination with rsync to back up, copy, and mirror files
efficiently and securely to a local or remote host:
# rsync -avul --rsh=ssh /opt/edbdata/ email@example.com:/root/backup/ firstname.lastname@example.org's password: ****** building file list ... done ./ file1.txt file2.txt file3.txt file4.txt dir1/file5.txt dir2/file6.txt sent 982813 bytes received 2116 bytes 1374860.38 bytes/sec total size is 982138 speedup is 1.00
- Port forwarding or tunneling a port (not to be confused with a VPN):
ssh -L 8000:mailserver:110 example.com email@example.com's password: ********
- Forwarding X sessions from a remote host (possible through multiple
Edit /etc/ssh/sshd_config and change 2 keywords : AllowTcpForwarding yes X11Forwarding yes # service sshd restart $ export DISPLAY $ ssh -X firstname.lastname@example.org
- With the X11 forwarding configuration in conjunction with an X Windows
client with SSH X11 tunneling to allow for the implementation of a
UNIX or Linux GUI subsystem run over SSH securely on the same Windows
machine host that is the source for the SSH session to the Linux or
UNIX remote host:
ssh -ND 8000 email@example.com Browser Settings, goto 'Manual Proxy Configuration' set "SOCKS Host" to example.com, the 'Port to 8000' , Enable SOCKS v5, and lastly set 'No Proxy for' field to 'localhost, 127.0.0.1'
- Securely mounting a directory on a remote server as a file system on a
local computer using
# yum install sshfs fuse-utils (Install sshfs and fuse-utils) $sshfs example.com:/remote_dir /mnt/local_dir
- Automated remote host monitoring and management of servers through one
or more mechanism:
(Report number of apache processes running on the remote server example.com): $ ssh example.com ps -ef | grep httpd | wc -l firstname.lastname@example.org's password: *****
SSH security and configuration best practices
With some of the previously illustrated code examples, many good systems administrators are nervous about some of the security implementations for SSH usage and functions. Although much has been said and written about the various approaches to SSH security and remote host security in general, here is a list of processes and configurations that you can use to tighten and enhance SSH security with regard to remote host access:
- Restrict the root account to console access only:
# vi /etc/ssh/sshd_config PermitRootLogin no
- Create private-public key pairs using a strong passphrase and password
protection for the private key (never generate a password-less key
pair or a password-less passphrase key-less login):
(Use a higher bit rate for the encryption for more security) ssh-keygen -t rsa -b 4096
- Configure TCP wrappers to allow only selective remote hosts and deny
# vi /etc/hosts.deny ALL: 192.168.200.09 # IP Address of badguy
- On workstations or laptops, disable the SSH server by turning off the
SSH service, and then removing the ssh server package:
# chkconfig sshd off # yum erase openssh-server
- Restrict SSH access by controlling user access:
# vi /etc/ssh/sshd_config AllowUsers fsmythe bnice swilson DenyUsers jhacker joebadguy jripper
- Only use SSH Protocol 2:
# vi /etc/ssh/sshd_config Protocol 2
- Don't allow Idle sessions, and configure the Idle Log Out Timeout
# vi /etc/ssh/sshd_config ClientAliveInterval 600 # (Set to 600 seconds = 10 minutes) ClientAliveCountMax 0
- Disable host-based authentication:
# vi /etc/ssh/sshd_config HostbasedAuthentication no
- Disable users' .rhosts files:
# vi /etc/ssh/sshd_config IgnoreRhosts yes
- Configure firewalls to accept SSH connections only from know network
Update /etc/sysconfig/iptables (Redhat specific file) to accept connection only from 192.168.100.0/24 and 184.108.40.206/27, enter: -A RH-FW-1-INPUT -s 192.168.100.0/24 -m state --state NEW -p tcp --dport 22 -j ACCEPT -A RH-FW-1-INPUT -s 220.127.116.11/27 -m state --state NEW -p tcp --dport 22 -j ACCEPT
- Restrict the available interfaces that SSH will listen on and bind to:
# vi /etc/ssh/sshd_config ListenAddress 192.168.100.17 ListenAddress 18.104.22.168
- Set user policy to enforce strong passwords to protect against brute
force, social engineering attempts, and dictionary attacks:
# < /dev/urandom tr -dc A-Za-z0-9_ | head -c8 oP0FNAUt[
- Confine SFTP users to their own home directories by using
# vi /etc/ssh/sshd_config ChrootDirectory /data01/home/%u X11Forwarding no AllowTcpForwarding no
- Disable empty passwords:
# vi /etc/ssh/sshd_config PermitEmptyPasswords no
- Rate-limit the number of incoming port 2022 connections within a
Redhat iptables example (Update /etc/sysconfig/iptables): -A INPUT -i eth0 -p tcp --dport 2022 -m state --state NEW -m limit --limit 3/min --limit-burst 3 -j ACCEPT -A INPUT -i eth0 -p tcp --dport 2022 -m state --state ESTABLISHED -j ACCEPT -A OUTPUT -o eth0 -p tcp --sport 2022 -m state --state ESTABLISHED -j ACCEPT
iptablesto allow only three connection attempts on port 2022 within 30 seconds:
Redhat iptables example (Update /etc/sysconfig/iptables): -I INPUT -p tcp --dport 2022 -i eth0 -m state --state NEW -m recent --set -I INPUT -p tcp --dport 2022 -i eth0 -m state --state NEW -m recent --update --seconds 30 --hitcount 3 -j DR
- Use a log analyzer such as
logwatchto better understand the logs and create logging reports. Also, increase logging verbosity within the SSH application itself:
Installation of the logwatch package on Redhat Linux # yum install logwatch
- Configure an increase in SSH logging verbosity:
# vi /etc/ssh/sshd_config LogLevel DEBUG
- Always keep the SSH packages and required libraries up to date on
# yum update openssh-server openssh openssh-clients -y
- Conceal the OpenSSH version, require SSH source code, and re-compile.
Then, make the following updates:
# vi /etc/ssh/sshd_config VerifyReverseMapping yes # Turn on reverse name checking UsePrivilegeSeparation yes # Turn on privilege separation StrictModes yes # Prevent the use of insecure home directory # and key file permissions AllowTcpForwarding no # Turn off , if at all possible X11Forwarding no # Turn off , if at all possible PasswordAuthentication no # Specifies whether password authentication is # allowed. The default is yes. Users must have # another authentication method available .
- Delete the rlogin and rsh binaries from the system, and replace them
# find /usr -name rsh /usr/bin/rsh # rm -f /usr/bin/rsh # ln -s /usr/bin/ssh /usr/bin/rsh
SSH supports numerous, diverse methods and techniques for authentication
that you can enable or disable. Within the /etc/ssh/sshd_config file, you
make these configurations changes by entering the keyword listed for the
authentication method followed by
Here are some of the common configuration changes:
# RSAAuthentication yes # PubkeyAuthentication yes # RhostsRSAAuthentication no # HostbasedAuthentication no # RhostsRSAAuthentication and HostbasedAuthentication PasswordAuthentication yes ChallengeResponseAuthentication no # KerberosAuthentication no GSSAPIAuthentication yes
RequiredAuthentications within the sshd_config file dictate
which authentication methods and configurations are used with SSH Protocol
2 only, and the syntax for them to allow password and public key
authentication is as follows:
# vi /etc/ssh/sshd_config AllowedAuthentications publickey, password RequiredAuthentications publickey, password
Private and public key pairs for SSH
To help validate identities, SSH has a key management capacity and related agents. When configured with public key authentication, your key proves your identity to remote SSH hosts. An SSH-based identity consists of two parts: a public key and a private key. The private SSH key is the user's identity for outbound SSH connections and should be kept confidential. When a user initiates an SSH or SCP session to a remote host or server, he or she is said to be the SSH client. Through a mathematical algorithm, a private key is like your electronic identification card; the public key is like the lock or gate mechanism that you present your ID card to. Your private key says, "This really is Fred Smythe"; the public key says, "Yes, you are indeed the real Fred Smythe; you are now authenticated: Please enter."
Your public key represents who you will allow inbound access to through your gate or lock. Public keys need not be kept secret; they cannot be used to compromise a system or for unwarranted access into a system. On a Linux or UNIX system, these private and public key pairs are stored in ASCII text files; on Windows systems, some programs store the key pairs as text files, some in the Windows registry.
Multiple identifications using multiple private keys can be created with an SSH Protocol 2 configuration. Let's look at how to generate, set up, and configure an SSH private and public key pair on typical Linux hosts (see Figure 5).
Figure 5. Diagram of the SSH private-public key pair transactions, as defined within the SSH defined architecture model
Steps for configuring public and private SSH key pairs
The example shown in step 1 (see Listing 1) uses the
ssh-keygen utility for user fsmythe to create the SSH
private-public key pair with the
Listing 1. Generate the SSH key pair
[email@example.com ~]$ /usr/bin/ssh-keygen -t dsa Generating public/private dsa key pair. Enter file in which to save the key (/home/fsmythe/.ssh/id_dsa): Enter passphrase (empty for no passphrase): ****** (Enter 'mypassword') Enter same passphrase again: ****** (Enter 'mypassword') Your identification has been saved in /home/fsmythe/.ssh/id_dsa. Your public key has been saved in /home/fsmythe/.ssh/id_dsa.pub. The key fingerprint is: 33:af:35:cd:58:9c:11:91:0f:4a:0c:3a:d8:1f:0e:e6 firstname.lastname@example.org [email@example.com ~]$
The example shown in step 2 (Listing 2) illustrates copying the public key of the key pair from the source to the destination host's authorized_keys file within the .ssh subdirectory under the home directory of the desired user account on the destination host.
Listing 2. Copy the public key from the source host to the authorized_keys file on the destination host
[firstname.lastname@example.org ~]$ scp -p /home/fsmythe/.ssh/id_dsa.pub email@example.com:/home/fsmythe/.ssh/authorized_keys fsmythe@ thor01.com's password: id_dsa.pub 100% 624 0.6KB/s 00:00
The example shown for step 3 (see Listing 3) makes the
first-time remote SSH call (
ls -d /tmp) to the destination
server, thereby caching the key within your server's .ssh/known_hosts
file. You enter the same passphrase with which you created the SSH
private-public key pair, and the output of the command run on the remote
destination server is seen locally back on your source server.
Listing 3. Verify the SSH access by running a remote command on the target remote host
[firstname.lastname@example.org ~]$ ssh email@example.com ls -d /tmp The authenticity of host 'thor01.com (10.12.53.118)' can't be established. RSA key fingerprint is 84:4f:e5:99:0b:7d:54:d0:1b:3e:2b:96:02:34:41:25. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'thor01.com,10.12.53.118' (RSA) to the list of known hosts. Enter passphrase for key '/root/.ssh/id_dsa': ****** (Enter 'mypassword') /tmp file1.txt file2.txt dir3_5432
Note: For the examples above, you didn't have to enter the
user fsmythe's password. Rather, you enter the passphrase that you set in
the first step. If you would rather not have to enter a passphrase when
accessing the remote destination, create an empty passphrase by typing
enter in step 1 when prompted for the passphrase. Now, you
won't have to type anything to access the thor01.com remote target machine
as the user fsmythe.
Configuring and using the ssh-agent
For the truly paranoid who refuse to create a password-less SSH
public-private key pair, there's the
ssh-agent utility. In a
nutshell, you use the
ssh-agent utility to temporarily grant
password-less SSH access on a public-private key pair configuration that
does have a passphrase set, but only for the current shell session. Before
ssh-agent utility, enter the passphrase as
[firstname.lastname@example.org ~]# ssh email@example.com Enter passphrase for key '/root/.ssh/id_dsa':****** (User must type password) Last login: Sat May 8 06:37:26 2010 from 10.12.53.118
ssh-agent to generate Bourne shell commands on
[firstname.lastname@example.org ~]# ssh-agent -s SSH_AUTH_SOCK=/tmp/ssh-vxZIxF1845/agent.1845; export SSH_AUTH_SOCK; SSH_AGENT_PID=1849; export SSH_AGENT_PID; echo Agent pid 1849;
In step 3, you set the aforementioned environmental variables in the current shell session:
[root@example01 ~]# SSH_AUTH_SOCK=/tmp/ssh-vxZIxF1845/agent.1845;export SSH_AUTH_SOCK SSH_AGENT_PID=1849; export SSH_AGENT_PID;echo Agent pid 1849 Agent pid 1849
Then, verify that the
ssh-agent is running:
[email@example.com ~]# ps -fp $SSH_AGENT_PID UID PID PPID C STIME TTY TIME CMD root 1849 1 0 06:14 ? 00:00:00 ssh-agent -s
Now, list the currently loaded identities within the running
[firstname.lastname@example.org ~]# ssh-add -l The agent has no identities.
In step 6, add the desired SSH identities (preauthenticating them with the correct passphrase for that SSH key):
[email@example.com ~]# ssh-add Enter passphrase for /root/.ssh/id_dsa: Identity added: /root/.ssh/id_dsa (/root/.ssh/id_dsa) ****** (Entered 'mypassword')
Now, you can verify that those identities are loaded into the running
[firstname.lastname@example.org ~]# ssh-add -l 1024 33:af:35:cd:58:9c:11:91:0f:4a:0c:3a:d8:1f:0e:e6 /root/.ssh/id_dsa (DSA)
Finally, test the
ssh-agent with SSH command syntax. Note that
now there's no passphrase prompt:
# Assuming target remote host has correct authorized key for private key from example01 [email@example.com ~]# ssh -A firstname.lastname@example.org Last login: Sat May 8 06:36:27 2010 from 10.12.53.118 [root@example02 ~]# # Assuming target remote host has correct authorized key for private key from example03 [email@example.com ~]# ssh -A firstname.lastname@example.org Last login: Sat May 8 07:04:05 2010 from 10.12.53.119 [root@example03 ~]#
When you enter the passphrase using the
ssh-add command, you
are actually decrypting the private key and then placing it in memory
through the agent for any future SSH connections with that particular
passphrase. Note that you can enter multiple private keys and
pre-authenticate them with the
The SSH tool
ssh-keyscan, shown in Listing
4, allows you to gather the public SSH host keys from multiple
remote SSH hosts. The tool is helpful in building of the
/etc/ssh_known_hosts files and is exceptionally fast and efficient. It is
primarily suited to shell scripts for automation purposes.
Listing 4. Example using ssh-keyscan
[root@example01 ~]# /usr/bin/ssh-keyscan -t rsa,dsa example02.com # example02.comSSH-2.0-OpenSSH_4.3 example02.comssh-dss AAAAB3NzaC1kc3MAAACBALd5/TGn7jCL1DWWzYMw96jw3QOZGBXJgP4m9LACViyM0QHs ewHGo841JdInfE825mVe0nB/UT15iylLOsI/jFCac+ljQRlO+h2q7WOwGveOUN7TxyKlejM+G1pg5DndGt05iYn+2 dDfn5CmEsI+K0F2vk/+mpoSOk9HKq9VgwNzAAAAFQDPeLAth62TRUcN/nTYoqENBmW3SwAAAIEAryoKa+VaG5LQNj wBujAuA7hGl+DIWVb1aZ8xAHkcyL5XgrOWEKNnK9mDmEN66oMLfTMO3w8/OvbJUmcXcU3jnL3zguz2E2OIv6t6vAa F6niL7A/VhxGGxy4CJZnceufStrzZ3UKXRzjwlm0Bwu/LruVF2m3XLvR5XVwUgyWvw+AAAACAaK12k3uC/OOokBgi eu/SuD5wCSBsf9rqG9ZFa32ujZwRZmA/AwPrZd6q3ASxmjtMp6zGQSzxPczUvLH9D9WIJo713bw8wCPo/7pqiQNRs OZXqlQyaXyrDout6CI683b1/rxsZKPrJpFNehrZwjWrwpYhK7VaTuzxvWtrDyDxWec= # example03.comSSH-2.0-OpenSSH_4.3 example03.comssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq5So5VBeH4gPX1A1VEeQkGsb/miiWsWnNTW8ZWYj 2IvU7rKpk/dBIp64WecYYYgDqTK5u0Q+yTijF8wEEI9rRyoh9p5QraM8qy9NxcHzyGqU4vSzfVrblIQrDI8iv7iwz 7PxQAY76NmweaUyGEDfIErty4gCn/ksy85IgffATa9nt36a4iUhiDNifnE8dm1ZrKkvz3lIg0w+Cu0T9MY77AqLWj Moo0WoQArIvYa0soS3VhzgD/Biwu/sh3eHJtFUxTVxnATdkWkHKUI1wxma3j7jF0saTRKEQSvG6492W+U1FhEjFGN r7KeZXH99uFpuUWFA7xO7uaG/MLWSjPJMxw== # example04.comSSH-2.0-OpenSSH_4.3 example04.comssh-dss AAAAB3NzaC1kc3MAAACBALd5/TGn7jCL1DWWzYMw96jw3QOZGBXJgP4m9LACViyM0QHs ewHGo841JdInfE825mVe0nB/UT15iylLOsI/jFCac+ljQRlO+h2q7WOwGveOUN7TxyKlejM+G1pg5DndGt05iYn+2 dDfn5CmEsI+K0F2vk/+mpoSOk9HKq9VgwNzAAAAFQDPeLAth62TRUcN/nTYoqENBmW3SwAAAIEAryoKa+VaG5LQNj wBujAuA7hGl+DIWVb1aZ8xAHkcyL5XgrOWEKNnK9mDmEN66oMLfTMO3w8/OvbJUmcXcU3jnL3zguz2E2OIv6t6vAa F6niL7A/VhxGGxy4CJZnceufStrzZ3UKXRzjwlm0Bwu/LruVF2m3XLvR5XVwUgyWvw+AAAACAaK12k3uC/OOokBgi eu/SuD5wCSBsf9rqG9ZFa32ujZwRZmA/AwPrZd6q3ASxmjtMp6zGQSzxPczUvLH9D9WIJo713bw8wCPo/7pqiQNRs OZXqlQyaXyrDout6CI683b1/rxsZKPrJpFNehrZwjWrwpYhK7VaTuzxvWtrDyDxWec= # example05.comSSH-2.0-OpenSSH_4.3 example05.comssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq5So5VBeH4gPX1A1VEeQkGsb/miiWsWnNTW8ZWYj 2IvU7rKpk/dBIp64WecYYYgDqTK5u0Q+yTijF8wEEI9rRyoh9p5QraM8qy9NxcHzyGqU4vSzfVrblIQrDI8iv7iwz 7PxQAY76NmweaUyGEDfIErty4gCn/ksy85IgffATa9nt36a4iUhiDNifnE8dm1ZrKkvz3lIg0w+Cu0T9MY77AqLWj Moo0WoQArIvYa0soS3VhzgD/Biwu/sh3eHJtFUxTVxnATdkWkHKUI1wxma3j7jF0saTRKEQSvG6492W+U1FhEjFGN r7KeZXH99uFpuUWFA7xO7uaG/MLWSjPJMxw==
Configuring SSH with UNIX applications or scripts
Configuration of SSH access for use by remote shell scripts and remote tools for maintenance, remote backup, and archival systems has great usefulness, but it has always been at the very least a subject of high controversy when it comes to server security. Many shell scripts that a user might want to run, such as:
$ ssh email@example.com /usr/local/bin/dangerous_script.pl
cannot handle a required SSH passphrase prompting him or her to
authenticate but in fact will break unless a password-less private-public
SSH key pair, an
ssh-agent configuration, or possibly a
trusted host network mechanism—something that does not prompt for
an SSH password—has been configured ahead of time. This is because
SSH expects the passphrase from the current terminal associated with that
shell session. A user can get around this issue by using an expect script
or possibly a Perl (see
CPAN Module Net::SSH::Perl) script
(or your shell script could alternatively call one of the aforementioned
types of scripts):
#!/usr/local/bin/expect spawn sftp $argv expect "password:" send "mysshpassowrd\r"
Granting a password-less SSH mechanism for remote host access to typical users is justification enough for a lynching in the eyes of some systems administrators. However, alternative security measures to justify the password-less SSH mechanism for remote host access, such as a user on the remote host machine only given a restricted korn shell (rksh) account or restricted shell (rssh) instead of a full bash or Bourne shell account. It is also possible on an authorized key to restrict a user to a subset of commands in a list so that in effect, the user can only use the exact commands required to run remotely without the possibility for further access or an accidental command run that could damage the system. The SSH restriction example provided in Listing 5 provides such a restriction type.
Listing 5. Example of configuration restricting the authorized_keys file on remote host
[fsmythe@example02 .ssh]$ more authorized_keys command="/usr/local/bin/secureScript.sh",no-port-forwarding,no-X11-forwarding,no-agent-fo rwarding,no-pty ssh-dss AAAAB3NzaC1kc3MAAACBAOFsC6C7cJUGOZG4Ur9W0J6mxTTk5+MYTu5XfRESPLVwQ A7HlUxhsXsxgmb1L1RgvR/g0JZnipDS+fGOrN2/IerSpgyzegTVxYLPrOovvuyCn5TA0+rmyrkV27so6yRDkdqTJc YzWNJOyDndnTrDc/LNmqLFKoGMQ33aur6RNv4VAAAAFQD4leC5Fc1VJqjvXCNsvazBhi84vQAAAIAWbshT80cTESg dX/srxX4KVNAzY1uhBz5V0UYR4FGP+aoe6laxRj+gQvFIvAKIrpikvBjgyW6cdT8+k0t8HGIQp20MzSBdY9sH8xdj 05AG97Nb/L8xzkceB78qfXhV6txaM1CzssUtiOtaAygrywNPBDEN9MbEbwpVVVyd6iqZNgAAAIAmV0SUZoUr8dHdC tagRye4bGOQjoztpb4C0RbXQw+w7Jpzr6cZISdZsK4DTBjODvv2+/OWPm7NEzzWyLzHPBNul8hAHOUCOpp+mYWbXX F78BTk2Ess0SZu8dwpOtasTNEp+xPcsOvQx2Kdr17gTp+28SfpREuLudOr6R3KeTb+hw== fsmythe@example01
User fsmythe at host example01 is only allowed to execute the command
="/usr/local/bin/secureScript.sh in this example.
Creating a trusted host environment using SSH
Finally, I mention the trusted host environment as an alternative to
setting up public-private SSH key pairs. For automation or in a scripted
environment in which these types of calls are necessary, the trusted host
network, though still bearing some security risks, has advantages over the
public-private key pair scenario. A trusted host network or trusted host
authentication relies primarily on preconfigured files that list a
combination of users and hosts that are allowed access. There are two
types of trusted-host authentication. The older (such as for OpenSSH and
SSH1) and weaker uses the clear-text protocol commands (
rlogin); checks the two files; and sets
one keyword in the sshd_config file:
SSH Protocol 2 does not support this method. Instead, for a more secure trusted host network, make the following changes in the /etc/ssh/sshd_config file (which accepts host names or IP Addresses), and configure the shosts.equiv and/or the .shosts files:
To enable a trusted-host environment in the /etc/ssh/sshd_config file for SSH Protocol 2, use:
PermitEmptyPasswords yes AllowSHosts remoteclient.com DenySHosts
For example, if you were on the server example.com and had configured your /etc/shosts.equiv file as follows:
+remoteclient.com fsmythe +secureserver.net sallyh +192.168.100.12 fsmythe -hackers.org james
you would allow user fsmythe trusted host authentication from the remote sources remoteclient.com, 192.168.100.12, and secureserver.net and user sallyh access from secureserver.net, denying access from user james at the remote source hackers.org.
The trusted-host authentication and public-private SSH key pair authentication methods are similar and to a greater end achieve the same results. Table 1 provides a side-by-side comparison of the two authentication methods.
Table 1. Comparison of private-public SSH key pairs with trusted-host configuration
|SSH aspect||Trusted host||Private-public key pair|
|Authenticate by IP address||Yes||Yes|
|Authenticate by host name||Yes||Yes|
|Use other public key features||No||Yes|
|Authenticate by remote user name||Yes||No|
|Allow wildcards in host names and IP addresses||No||Yes|
|Passphrase is necessary for login access||No||No|
|Breaks on IP address or host name change||Sometimes||Yes|
|Configuration required on the server and client||No||Yes|
|Useful for automated tasks or scripting needs||Yes||Yes|
To those admins who are scoffing right now at the thought of allowing a trusted host authentication system using password-less remote SSH access on their network, consider the downside of public-private key pairs when using a script for remote SSH functionality:
- If a server host name or IP address changes, the public-private key pair configuration will break because of the cached known hosts. The old entry will need to be removed in the .ssh/known_hosts file and the SSH remote host name and/or IP address re-cached again. This will break scripts dependant on the private-public key pair.
- Private-public key pair authentication requires both client and server configuration. If an SSH public key changes or the pair is regenerated, all of the remote hosts will need the new public key in their authorized_keys file.
- If the permissions of the .ssh/ folder or private or public key files
themselves change, it could prevent the SSH password-less access from
occurring. To disable strict file and directory permissions checking,
set the keyword
nowithin /etc/ssh/sshd_config file.
- There is no centralized way to revoke a key once a key pair has been generated or to know exactly to whom the key has been distributed.
SSH is a powerful and secure network utility that countless users worldwide
use for numerous tasks. Offered as a safe and secure alternative to the
clear-text protocols such as telnet and the
r* series command
and with multiple offerings of freely distributable SSH clients and
servers, SSH is difficult to beat. Used widely in many networks for mass
remote monitoring, system maintenance, remote system auditing, reporting,
and automation within scripting technologies, it appears that SSH is here
to stay and will continue to evolve.
- Wikipedia: Secure Shell
- The OpenSSH Protocol under the Hood
- Server clinic: Connect securely with ssh | <urn:uuid:cf558ad9-4789-497f-a1a8-ba8de7139588> | CC-MAIN-2017-04 | http://www.ibm.com/developerworks/aix/library/au-sshsecurity/index.html?ca=drs- | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00303-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.767534 | 8,865 | 3.46875 | 3 |
As organizations and individuals continue to look for ways to ensure the security of digital information and move away from a world of passwords, one option that has been tossed around for several years includes the collection and analysis of specific human body characteristics. Eye retina scans, fingerprints and voice samples are all examples of a type of security system commonly referred to as biometrics.
It is important to note, however, that the use of biometrics is not a new concept. In fact, fingerprints have been in use since about 500 B.C., as Babylonian businessmen first pressed them into clay tablets to record business transactions.
Today, this digital security measurement is useful in some low-level security environments, such as clocking into a job— where convenience and ease of use are the primary drivers. In this regard, today’s technologies are making digital fingerprints useful for rudimentary authentication purposes.
When approached as the end-all, be-all solution for digital security, however, there are a few shortcomings that make biometrics less than ideal. Most notably, if stolen, a person’s identity would be in serious jeopardy.
While digital certificates can be revoked, once a person’s biometrics gets into the hands of a malicious third party, there is no way of getting the information back. You can’t, in other words, revoke someone’s DNA structure. For this reason, some jurisdictions maintain that biometrics be stored in devices controlled by the individual and not agencies or organizations.
Further drawbacks can be found in the difficulty and cost of actually collecting biometric information. Voice verification, for instance, requires little to no background interference to work properly. Facial recognition software mandates that a person match up evenly with a camera for authentication purposes. Additionally, a person’s face will change over time.
The use of biometrics presents an interesting case when it comes to the protection of digital information. Right now, options are readily available for consumers. However, until there is an affordable, secure solution to employing them, adoption will be slow.
What are your thoughts on biometrics? Do you have a comprehensive plan as to how they can be carried out in an effective manner? We would love to hear your thoughts in the comments section below. | <urn:uuid:8c3fae01-5e26-4d93-b614-2b71665b1554> | CC-MAIN-2017-04 | https://www.entrust.com/biometrics-answer/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00303-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934595 | 466 | 3 | 3 |
In the past, when you needed to resize a partition in Windows you had to use a third-party utility such as Partition Magic or Disk Director, or an open source utility such as GParted or Ranish Partition Manager. These third-party programs, though, are no longer needed, as Windows now has partition, or volume, resizing functionality built directly into the Windows Disk Management utility.
You may be wondering why someone would want to resize a Windows volume. One reason would be if you want to install another operating system such as Linux, but do not have enough free space to create a new partition for it. By shrinking the Windows volume, you can free up enough space to create a new partition that can be used to dual boot into Linux. Now let's say that after trying Linux, you decide it's not for you. You are now left with all this leftover space that is not being used by Windows. To reuse this space, you simply need to expand, or extend as Vista calls it, an existing Windows volume so that it uses all the available free space that was previously being used by Linux.
When resizing volumes in Windows Vista, Windows 7, and Windows 8 you must be aware of the following criteria:
This section will show you have to shrink a Windows volume, or partition. In order to do this we must open the Windows Disk Management tool. The following steps will walk you through this process.
You have now finished shrinking your partition and have the extra space available to use as necessary.
This section will show you have to extend a Windows volume. In order to do this we must open the Disk Management tool where we can resize our volumes. The following steps will walk you through this process.
Windows provides the ability to resize volumes directly from the command line using the Diskpart utility. The Windows Diskpart utility is a command line program for managing the disk partitions, or volumes, on your computer. Some of the tasks you can do with this utility include repartitioning drives, deleting partitions, creating partitions, changing drive letters, and now shrinking and expanding volumes.
To access the diskpart utility follow these steps:
Before you can expand or shrink a volume using Diskpart you must first select the volume you would like to work with. To do this you need to use the list volume command to find the IDs associated with each volume. When you type list volume and then press enter, diskpart will display a list of Windows volumes on your computer. Next to each volume will also be a numbers that can be used to identify that specific volume. An example of what the list volume command looks like can be found below.
After determining the ID of the volume that you would like to work with, you need to select that volume using the select volume command. To use this command you would type select volume ID, where ID is the ID associated with the volume you found using the list volume command. Now that the volume has been selected, diskpart knows that any further commands will be associated with this particular volume until you enter another select volume command.
To shrink a selected volume you would use the shrink command. The shrink command has two arguments that you can use to define how you want diskpart to shrink the volume. The first argument is desired= which will shrink the volume by the desired amount in MB if possible. The second argument is minimum= which tells diskpart that it should only shrink the volume if it can shrink it by the specified amount in MB. If you do not use either of these arguments, diskpart will shrink the partition by the maximum amount possible. If you would like to determine the maximum amount of space that you can shrink a volume, you can type the shrink querymax command.
Shrink examples are:
What it does
|shrink desired=2048||This command will shrink the volume by 2 GB if possible.|
|shrink minimum=2048||This command will shrink the volume as much as possible, but fail if there is less than 2GB available to shrink it by.|
|shrink||This command will shrink the volume by the maximum it can be.|
Shrinking a volume from the command line
To extend a selected volume you would use the extend command. For the extend command the most common arguments are size and disk. The size= argument will extend the selected volume by the desired amount of MB. The disk= argument allows you to specify the disk which has the free space you wish to extend a volume with. If no argument, or no disk= argument, is provided when using the extend command, diskpart will use all the available space on the current disk to extend the volume. As said previously, we strongly suggest that you do not use the disk= argument to extend a volume onto another disk as this increases your chance of losing data if one of the two drives has a hardware failure.
Extend examples are:
What it does
|extend size=2048 disk=2||This command will extend the volume by 2 GB using the free space from disk 2..|
|extend size=2048||This command will extend the volume by 2GB from the same disk.|
|extend||This command will extend the volume as much as it can be.|
With the ability to extend and shrink a partition using Windows Vista, Windows 7, or Windows 8 you no longer need to worry about installing a new drive in order to dual-boot to an alternate operating system. Now you simply find a drive that has some free space on it, shrink it, and use it as necessary. As always if you have any questions you may have in of our Windows forums.
In order to use a hard drive, or a portion of a hard drive, in Windows you need to first partition it and then format it. This process will then assign a drive letter to the partition allowing you to access it in order to use it to store and retrieve data.
When a hard drive is installed in a computer, it must be partitioned before you can format and use it. Partitioning a drive is when you divide the total storage of a drive into different pieces. These pieces are called partitions. Once a partition is created, it can then be formatted so that it can be used on a computer. When partitions are made, you specify the total amount of storage that you ...
This tutorial focuses on using GParted, or Gnome Partition Editor, a free and open source partition editor. To use GParted, you must first download the CD Image file (.iso file) of GParted Live for this program. Instructions on where to find and how to burn the GParted ISO file are covered in the Preparation step. In this tutorial we will be using Microsoft Windows XP for certain steps. If you use ...
Have you ever had an experience where you are using a lot of programs in Windows, or a really memory intensive one, and notice that your hard drive activity light is going nuts, there is lots of noise from the hard drive, and your computer is crawling? This is called disk thrashing and it is when you have run out of physical RAM and instead Windows is using a file on your hard drive to act as a ...
One of the more frustrating experiences when using a computer is when you want to delete or rename a file or folder in Windows, but get an error stating that it is open, shared, in use, or locked by a program currently using it. | <urn:uuid:34eddf9e-1b83-450a-8929-2e1d3af23fd5> | CC-MAIN-2017-04 | https://www.bleepingcomputer.com/tutorials/shrink-and-extend-ntfs-volumes-in-windows/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00147-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9055 | 1,537 | 2.5625 | 3 |
The UN has issued new regulations to ensure that aircraft are tracked at all times – an opportunity perhaps for the Internet of Things (IoT).
Two years ago, Malaysian Airlines flight MH370 set out on a routine trip north from Kuala Lumpur to Beijing, before disappearing somewhere in the Indian Ocean. Aside from debris discovered washed up on the French island of Reunion and in Mozambique, not a trace has been found of the plane or the 239 people on board.
The UN’s international civil aviation organisation (ICAO) has now announced new measures to help prevent similar aircraft disappearances in the future.
The new regulations will require aircraft to transmit their position every minute to make, should a plane go missing again, situations necessitating searches of huge areas a thing of the past. This will be done via “autonomous distress tracking devices”, which could potentially lead to the use of IoT sensors and data analytics.
There have also been changes to expectations regarding flight data recorders, to ensure they are fully-recoverable in the case of an accident. The duration of cockpit voice recordings has also been extended from two to 25 hours. Operators will have until 2021 to comply with all of the new regulations.
These new provisions bring closer the proposal of a Global Aeronautical Distress and Safety System (GADSS), which was put forward by the ICAO last year.
Dr. Olumuyiwa Benard Aliu, ICAO Council President, said: “These developments are consistent with the findings and recommendations of the multidisciplinary Ad-Hoc Working Group ICAO formed after Malaysia Airlines MH370 went missing in May 2014. They directly support the concept of operations for the Global Aeronautical Distress and Safety System (GADSS) which was proposed by ICAO at that time, and will now greatly contribute to aviation’s ability to ensure that similar disappearances never occur again.”
Airlines and aircraft manufacturers may consider all available and emerging technologies which can deliver both the one-minute location tracking requirement and the data recovery systems, which could require deployable flight recorders which exit the plane automatically during or before a crash.
“Taken together, these new provisions will ensure that in the case of an accident the location of the site will be known immediately to within six nautical miles, and that investigators will be able to access the aircraft’s flight recorder data promptly and reliably”.
Tracking challenges – does IoT help?
I spoke with Marc Melviez, CEO of Luciad, a software company specialising in aviation and real-time geospatial visualization. He told Internet of Business that “Identifying and tracking moving things is always challenging. In aviation, Radar has been the tool of choice since the advent of the jet age”.
But radar has several limitations. “The most significant constraints are the limited range – on a world scale, a thousand miles is not much, and ground-based radars do not have that reach. There is also difficulty in identifying an object that does not want to be identified. For example, a commercial plane that does not broadcast its identification.”
Melviez continued: “When it comes to transoceanic aviation, there is no radar coverage for parts of the journey, so other technologies, such as satellites could come into play. But when one starts mixing different technologies to track moving objects, visualization becomes very challenging due to different precision levels. For example, combining a position that is known with 10m accuracy with another position that is known with 1km accuracy, with different refresh rates.”
Luciad provides the technology that can help identify, track and visualize objects in these multi-sensor environments (radar + satellite + radio + others). The technology was originally developed for battlespaces, where constant, real-time identification of friend and foe is critical.
A number of airlines are tracking their staff and cargo through the use of the Internet of Things (IoT), with EasyJet even providing cabin crew with track-able uniforms.
Indeed, a report this week from SITA found that most airlines – and airports – are gearing up for the IoT.
“Half of airlines expect to have IoT initiatives up and running over the next three years. Meanwhile, airports are building the infrastructure to support IoT. Together, this will deliver improved operations and will lead to a change in the passengers’ experience,” said Nigel Pickford, director Market Insight, SITA, in a statement. | <urn:uuid:77acc94c-6d83-422b-b3aa-9f171e75b0a1> | CC-MAIN-2017-04 | https://internetofbusiness.com/iot-flight-un-orders-aircraft-tracking/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00265-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950247 | 945 | 2.703125 | 3 |
Note:: Complex data types are available in Cray Standard C only. Complex data types are supported in Cray C++ through the complex class library.
Cray Standard C provides float complex, double complex, and long double complex data types for use on all Cray systems. These data types are available only if the nonstandard header <complex.h> is included in your source. These data types are available even if you are compiling in strict conformance mode, as long as the <complex.h> header is included. Complex arithmetic can be performed in much the same way as with real data types (either integral or floating type). Complex variables can be declared and initialized. Most arithmetic operators can be used with one or two complex operands to yield a complex result. Many standard math functions have corresponding functions that take complex arguments and return complex values.
The complex data types are represented in memory with two contiguous parts, the real part and the imaginary part. The characteristics of the imaginary part agree with those of the corresponding real types. For example, the imaginary part of a float complex type has the same characteristics as a float.
An imaginary constant has the following form:
R is either a floating-constant or an integer-constant; no space or other character can appear between R and i. If you are compiling in strict conformance mode (-h conform), imaginary constants are not available.
A complex variable is initialized by using an expression that may contain an imaginary constant. For example:
#include <complex.h> double complex z1 = 1.2 + 3.4i; double complex z2 = 5i;
When the ++ operator is used to increment a complex variable, only the real part is incremented.
Printing a complex value requires the use of two %f specifications and any formatting needed for legibility must be specified as shown in the following example:
double complex z = 0.5 + 1.5i; printf("<.2f,%.2f>\n ", creal(z), cimag(z));
The output from the preceding example is as follows:
A binary operator with one complex operand and one real operand causes the real operand to be promoted to a complex type before the operation is performed. When a real value is promoted to a complex value, the new complex value's real part gets the same value as if the promotion were to the corresponding floating type, and its imaginary part is zero. When a complex value is demoted to a real type, the value of its imaginary part is discarded, and its real part is demoted according to the demotion rules for the corresponding floating type.
A complex type is not valid as an operand for any operator that requires an integral data type. A complex type is not valid as an operand for any of the relational operators, but it is valid for the equality and inequality operators. It is valid as an operand for any other operator that allows floating data types.
Math functions that take complex arguments are declared in the complex.h header and described in the UNICOS System Libraries Reference Manual. | <urn:uuid:fa074a09-c05c-4e09-b25c-02d1c2eff69d> | CC-MAIN-2017-04 | http://docs.cray.com/books/004-2179-003/html-004-2179-003/z893434795malz.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00569-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.894881 | 640 | 3.359375 | 3 |
If your answer is primary key then tell me how it get mapped to index of the particular record. Much confused about index whether its a memory pointer??? please help me to understand this concept clear with some Examples....
Joined: 23 Nov 2006 Posts: 19270 Location: Inside the Matrix
A somewhat less technical way to think of this is to think of the index of a vsam file like the index of an IBM manual. The index of the manual is concise containing the "key"s and a reference to the page in the manual containing the information.
The index of a vsam file works in much the same way. VSAM locates the key value in the index and from there gets the reference pointer to the actual data.
The actual data is then read and returned to the program like when we go to the indexed page and read what is there. | <urn:uuid:78577b10-c79b-484e-b05f-46c080ea3868> | CC-MAIN-2017-04 | http://ibmmainframes.com/about32800.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00569-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.902239 | 175 | 2.90625 | 3 |
The assignment is to write a "lessons learned" column on the aftermath of Hurricane Katrina -- the chaotic, mismanaged response and the collapse of nearly all forms of communication. operability and interoperability;
The people paid to know about these things have been consulted, and the conversations drift back and forth between issues, which include:
state and local first responders' need to adopt a military game plan for communicating, and need for an ad-hoc wireless network;
lack of spectrum, a governance structure, a national standard for communications, and funding for state and local first responders; and
lack of foresight to recognize and prepare for the worst-case scenario.
Was Katrina such a catastrophic event that its effects couldn't be mitigated by better preparation or communication? No one was prepared to go that far.
Questions about interoperability were quickly rebuffed -- how can there be interoperability when first responders in the same agency couldn't even talk to one another?
Suddenly few wanted to talk about interoperability. Is it just a buzzword that has become part of the national landscape after 9/11, perhaps in need of some clarification? What are we really looking for? And is it attainable?
Lt. Col. Joey Booth, deputy superintendent of the Louisiana State Police, said the state had been "working very hard" on the operability/interoperability issue, but the lack of both funds and standards slowed the efforts.
The Shreveport Fire Department sent rotating groups of 25 firefighters to New Orleans to help with rescue efforts, but the Shreveport radio system wasn't registered to the Louisiana State Police system, so there was no way for the two agencies to communicate.
Willis Carter, chief of communications for the Shreveport Fire Department, said that in his 34 years with the department, he'd never seen an instance -- before Katrina -- where it was critical to have the two systems connected. Plus, he said, the cost of doing so was prohibitive.
"I guess we probably did fail to prepare for the absolute worst-case scenario," Carter said. "But in some cases, you just can't anticipate, as hard as you try and as much as we want to. You have to weigh the potential for use versus cost. I guess that's kind of the predicament we were in -- we didn't anticipate we would need that."
As it turned out, Shreveport's simplex radio system didn't work well under the conditions -- multistory buildings, hospitals and other structures -- so some firefighters used walkie-talkies they purchased for hunting and fishing trips, which helped somewhat. And the Louisiana State Police system collapsed under the weight of users who swarmed the area to help.
So after poring over the information from New Orleans and hashing it out with a few experts, what are the lessons learned? State and local communities must prepare for the worst-case scenario instead of assuming a worst-case event will never happen.
That brings us to a final thought from Lt. Col. Booth. "This is a foreseeable problem that will reoccur, in another jurisdiction perhaps, and it could be the result of a large-scale terrorist event or a natural catastrophe." | <urn:uuid:b7a0fee8-84de-4847-a77b-ac7fa2b62f6b> | CC-MAIN-2017-04 | http://www.govtech.com/public-safety/Lessons-from-Katrina.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00109-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.972276 | 659 | 2.53125 | 3 |
Let us send this one up to the “In case you missed it” department, since this happened while many of us were likely on a final summer vacation for labor day1 when the official keepers of the English Language at Oxford added a number of buzzworthy words to its incomparable pages, chief among them the “Internet of things.”
Internet of things (noun) - A proposed development of the Internet in which everyday objects have network connectivity, allowing them to send and receive data. If one thing can prevent the Internet of things from transforming the way we live and work, it will be a breakdown in security.
Candidly, we wish we could have been involved in the defining process as this one seems to have an Op-Ed opinion added to it that I find somewhat unusual in the context of an objective definition, but that’s just me. Quite frankly, security is just one of a number of challenges that face us in delivering on the promise of the IoT or M2M as we prefer to call it.
As far as security is concerned, the market continues to blend “network connectivity” and “Internet connections” together as one, and we believe this creates a fundamental confusion about what the Internet of things really is. As a case in point, the example sentence used in the Oxford entry alludes to security questions about dumb, connected devices becoming a future target of malicious Internet activity. The definition assumes that the Internet of things functions in the same manner as the Internet that we use every day. This is not the case.
The Internet of things infrastructure really doesn’t allow for open access, by design. Machine devices will add significantly to the access doorways into the Internet, to be sure, just as increasing delivery of smartphones does, but there is an important difference. Smartphones typically have open access, with their own individual addresses. But an M2M environment (consisting of Internet of things devices) is quite closed. It is not an extension of the Web into these devices, but rather the devices use dedicated network access (cellular, satellite) to route data solely to and from a specific network resource. And with this data routing comes a quite complex process for ingress to, and egress from those domains. In addition, the streams are often layered with security processes from encryption to SSL support, depending on the application.
Put another way, there are architectural differences in the M2M platform that transcend the level of how humans communicate over the Internet. To call it an Internet of things is actually a misnomer.
While the train has left the station for nomenclature, we certainly hope that the lexicographical stakeholders come to recognize these significant differences and adjust the definition’s wording accordingly. In the meantime, we will continue to make our partners and customers aware of the ways these respective “Internets” are not the same.
What I failed to mention in all this is that our industry moniker received official recognition in the lexicon on the same day as words such as selfie and srsly, but we’ll leave that alone for now.
1 Not me, though. I was busy at the M2M Evolution Conference in Las Vegas. Check back here shortly for my write-up and my impressions from the show floor.
By Stein Soelberg, Director of Marketing
Stein leads a team whose responsibility is to own the branding, advertising, customer engagement, loyalty, partnership and public relations initiatives designed to propel KORE into the 21st century. With over 15 years of technology marketing experience in the business to business software, Internet services and telecommunications industries, Stein brings a proven track record of launching successful MVNOs and building those brands into leaders. | <urn:uuid:e514826e-bf96-4972-ba13-b97e3d9ca65c> | CC-MAIN-2017-04 | http://www.koretelematics.com/blog/internet-of-things-hits-oxford-english-dictionary-are-we-official-now | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00413-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953429 | 770 | 2.6875 | 3 |
As I discussed in my previous article, Multi-Protocol Label Switching (MPLS) traffic engineering has three main uses. These are to optimize bandwidth, to support a service-level agreement (SLA), or to enable fast reroute. I already covered how label-switched paths are used for bandwidth optimization. In this piece, I'll explain MPLS traffic engineering in the context of SLAs.
Traffic engineering can be used to meet an SLA. Not all traffic is the same, and not all customers can get the same service. This is business, and there is no free lunch, of course.
Traditionally, voice and video traffic were carried over circuit-based TDM links. These applications are very delay and loss sensitive, so we need to design our packet-switching networks to ensure that they are adequately supported.
MPLS traffic engineering and quality of service (QoS) can both be used -- either alone or together -- to accomplish this goal. These technologies are sometimes confused, but they are independent subjects and not exclusive. Reservation for the traffic engineering tunnels, however, is made on the control planes of devices.
As an example, you can have a 100 Mbit/s link between point A and point B. Assume you reserve bandwidth for two Label-Switched Paths. with 60 Mbit/s and 40 Mbit/s link requirements. From the ingress, 80 Mbit/s of traffic can be sent over the 60 Mbit/s signaled LSP. Since, by default, MPLS traffic engineering tunnels are not aware of the data plane actions, 20 Mbit/s of traffic exceeding the limit will be dropped. Some of that dropped traffic might be very important, so it's in our best interest to protect it.
To make traffic engineering tunnels aware of the data plane traffic, the auto bandwidth feature of traffic engineering might be used. When auto bandwidth is enabled, the tunnel checks its traffic periodically and signals the new LSP with the "make before break" function. If a new LSP is signaled in this way, only the 80 Mbit/s LSP can survive over the 100 Mbit/s link. There is not enough bandwidth for the 40 Mbit/s LSP.
If there is an alternative link, 40 Mbit/s of traffic can be shifted to that link. Otherwise, circuit capacity must be increased or a new circuit must be purchased. If there is no alternate link and no time to bring in a new circuit, QoS could potentially be configured to protect critical traffic. DiffServ QoS with MPLS traffic engineering is mature and commonly used by service providers in these cases.
But how can one MPLS traffic engineering LSP beat another LSP? This is accomplished with the priority feature of the tunnels. Using priority, some LSPs can be made more important than others. To achieve this, the setup priority value of one LSP should be smaller than the hold priority value of the other LSP.
Once the path is computed and signaled, it doesn't mean that traffic by default follows the traffic engineering path. Actually, it still follows the underlying interior gateway protocol path. Since traffic engineering can work only with the link-state protocols Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS), traffic follows the shortest path from the cost point of view.
In the first article of this series, I mentioned some methods for sending traffic into the MPLS traffic engineering LSP. These were static routing, policy-based routing, class-of-service-based tunnel selection (CBTS), policy-based tunnel selection (PBTS), autoroute, and forwarding adjacency.
Static routing, policy-based routing and CBTS are static methods and can be cumbersome to manage. But to send specific, important traffic into tunnels, classed-based tunnel selection can be a good option. Based on the EXP bit in the label stack, traffic can be classified and sent to an LSP that is QoS-enabled for protection.
Autoroute and forwarding adjacency, on the other hand, are dynamic methods to send traffic into traffic engineering LSPs.
By default, the shortest path is used for the destination prefix, and next-hop resolution is done for the next direct connection. When the autoroute feature is implemented, the next hop automatically becomes the destination address at the tailend of the tunnel. The drawback of this approach is there is no traffic classification or separation, so all the traffic -- regardless of importance -- is sent through the LSP.
Once MPLS traffic engineering is enabled and autoroute is used, traffic can be inserted only from the ingress node (label-switched router). Any LSR other than the ingress point is unable to insert traffic into the traffic engineering LSP. Thus autoroute can only affect the path selection of the ingress LSR.
What if we want ingress from all the nodes in the domain, and to be able to calculate the shortest path based on the constraints for the tunnel? Then the MPLS forwarding adjacency functionality might be used.
Once we enable this feature, any MPLS traffic engineering tunnel is seen as a "point-to-point link" from the interior gateway protocol point of view. Even though traffic engineering tunnels are unidirectional, the protocol running over an LSP in one direction should operate in the same way on the return path in a point-to-point configuration. | <urn:uuid:6bfa6c7c-b40d-47c1-b520-1c547fa4907a> | CC-MAIN-2017-04 | http://www.networkcomputing.com/networking/using-mpls-traffic-engineering-meet-slas/1704005093?piddl_msgorder=thrd | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00321-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91933 | 1,136 | 2.78125 | 3 |
Robert Kosara, a visual analysis researcher at Tableau Software, argues that if you want to learn how to tell stories with data, look at the Web. If you examine the right projects, you will find a classroom full of useful exhibits that use data to tell stories and examine questions—and provide lessons for both decision-makers and analytics professionals.
In a video that accompanies this article, Kosara reviews the lessons from four data visualization projects. These presentations include two from The New York Times (about political parties’ views on a jobs report in September 2012, and discussions about carbon emissions in the U.S. and China at the time of the Copenhagen climate conference).
In the third exhibit, Kosara discusses a chart-filled blog post by Elon Musk of Tesla Motors responding to a Times test drive of his company’s electric car. The fourth visualization is a Washington Post examination of National Rifle Association contributions to candidates for Congress. All use data to tell a story. And each shows how interpretations of data can vary depending on one’s point of view.
Data storytelling is still fairly new, Kosara says, and it pays to study the work of others to glean ideas of what techniques work well and to analyze why. “Looking at what the media are doing in particular when it comes to visual representation and the narrative of the building of the story, can be quite effective and especially because these are things that are available,” he says.
Below are summaries of the data visualizations with website links to the originals, accompanied by excerpts from Kosara’s video presentation.
How Political Parties in the U.S. Viewed a Jobs Report
The visualization: “One Report, Diverging Perspectives,” New York Times, October 5, 2012.
Context: In October 2012, one month before the presidential election, the U.S. reported that employers added 114,000 jobs in September. The jobless rate dipped to 7.8 percent. Coming one month before the presidential election, the news sparked debate about the Obama Administration’s economic policies.
What the visualization shows: The visualization presents three panels: a middle view where the September jobs numbers are put in context with monthly job creation and the unemployment rate. On the left is a view of “how a Democrat might see things” and on the right there is a tab to show “how a Republican might see things.”
Kosara’s take: “It shows something that I find quite common in data reporting or when people talk about data is that there are different ideas about the same data,” he says. “The New York Times here makes a very interesting point about how the same numbers, in this case this is the unemployment numbers that came out late last year, how they can be seen differently, the same numbers can be seen differently, by Democrats or Republicans.”
He adds: “If you were to present this data somewhere in a presentation to a decision-maker, you might either want to try and show both sides, and try and not to take sides, which is what this example is showing. Or if you want to be clear about which side you think the interpretation should go. Then you need to make a point why one side is the right one. And of course that should be something done in business but not in the reporting.”
Different Ways of Counting Carbon Emissions
The visualization: “Copenhagen: Emissions, Treaties and Impacts,” New York Times, December 5, 2009.
Context: Before the international climate conference in Copenhagen in 2009, policymakers discussed stemming the growth of carbon dioxide emissions and how to balance that goal with economic growth in developing countries.
What the visualization shows: This presentation charts data about carbon emissions growth in the U.S., Europe, China and India, and how they can be calculated—by geography, on a per capita basis, or per dollar of GDP. If you examine emissions as a function of GDP, China’s growing economy produces less carbon dioxide per dollar. If you examine emissions by total metric tons, however, China is projected to produce far more pollutants than the U.S. (Note: Kosara’s comments focus on one section of the visualization; the Times project also looks at the Kyoto Protocol climate treaty and projected effects of climate change.)
Kosara’s take: “These are different views, again of the same data as in the previous example. And they are interesting because they are shown quite nicely and the structure walks you through these different views and kind of gives you a sense of why these different views exist and how they impact what the decision would be going forward.
“So this is an interesting template, an interesting idea, for how to present information when you are looking at decision making and picking the path forward.”
A Disputed Test Drive of an Electric Car
The use of data: Charts presented in a blog post, “A Most Peculiar Test Drive,” by Tesla Motors Chairman Elon Musk on the company’s blog, Feb. 13, 2013.
Context: In “Stalled Out on Tesla’s Electric Highway,” on Feb. 8, New York Times reporter John M. Broder published an account of a test drive he took of the Tesla Model S, which ended with his having to hire a tow truck when the car ran out of battery charge. Five days later, Musk published his blog using data from sensors on the car to refute the Times story. Among many reader comments and online discussion about the story, Broder posted his point-by-point response to Musk’s blog. The newspaper’s public editor also weighed in, writing that while Broder “left himself open to valid criticism by taking what seem to be casual and imprecise notes” about his trip, he took the test drive in good faith.
What the visualization shows: Musk’s piece is a criticism of The Times article that includes annotated charts that show what sensors on the Model S recorded, measuring conditions such as speed and distance, cabin temperature, battery charge levels and estimated range based on battery charge, among other data.
Kosara’s take: The charts using data are effective because of the way they are annotated and part of a coherent argument, Kosara says. “It’s interesting that the way they are making this point here is that they are saying well, there are these points that were made in the story, but we have actual data to show that some of these are not actually true, or at least we can [say that the reporter] was just not taking proper notes about what he was doing.”
This approach created a lot of sympathy for Tesla, Kosara notes. “But of course it is also a bit dangerous because there are different interpretations of the same data and if you give the journalists a bit more benefit of the doubt that not all of these numbers were exactly correct, you can see that some of the patterns [the reporter] describes are still visible in the data.”
He concludes: “This is a nice example of using data in a very public way to make a very strong point, and really provides the evidence behind that point.”
The Gun Lobby’s Influence
The visualization: “How the NRA exerts influence on Congress,” The Washington Post, Jan. 15, 2013.
Context: With the gun control debate on the national agenda in the wake of the December 14, 2012 shootings at an elementary school in Newtown, Conn., The Washington Post examined campaign finance data to create a display of candidates, including incumbent members of Congress, who received and did not receive contributions from the National Rifle Association (NRA).
What the visualization shows: The presentation shows dots to represent each candidate for office and then moves and changes the size of those dots depending on whether the candidate won election, how much each received from the NRA, their party affiliation, whether they are in the House or Senate, and how the NRA rates the voting records on gun-related issues. A user can follow a particular lawmaker through the various views.
Kosara’s take: “This is an interesting way of walking through a fairly complex transformation of data to look at different ways of slicing and dicing the data. Looking the parties, looking at winners and losers, looking at the Senate versus the House and so on.”
Kosara says this presentation has lessons for business visualizations. “That’s also a common thing that you would do in a lot of business cases, where you want to present data, not just as one view, as one set of numbers. But there are different ways of looking at data that are not necessarily even about different interpretations, but just different ways of breaking the data down into smaller pieces and then trying to figure out which of those are actually interesting, which of those are useful, to make decisions and so on. You very often need to do that,” Kosara says.
He adds: “But to actually turn that into a reasonably cohesive story is very difficult and so this is a good example of how this can be done.”
Michael Goldberg is editor of Data Informed. Email him at email@example.com.
Editor’s note: The original version of this story referred to Robert Kosara giving a talk at an event on marketing analytics and customer engagement. That event is now a series of webinars, and information about those speakers is available here. | <urn:uuid:7f059fe8-9250-4ca2-8e33-8742274aeefc> | CC-MAIN-2017-04 | http://data-informed.com/tableau-softwares-robert-kosara-on-using-data-to-tell-a-story/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00283-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950645 | 1,998 | 2.671875 | 3 |
Difficult to imagine? From our grandparents days Networking across systems is working reliably over TCP and that is what we have seen all throughout. The systems at either end of the network did not have to bother how the TCP connection was being established so the core definition of TCP was “a single connection between two hosts”. While researchers designed TCP/IP protocol suite, they did an awesome job on looking through the requirements which may come up in next couple decades. Given their vision till today we are able to communicate well over TCP.
But what did change in between? The network of devices or the Internet grew at an unexpected rate and broke all the predictions. The internet backbone traffic in 1990 was close to 1 Terabyte which grew to nearly 35000 Terabyte by year 2000. What an exceptional growth and large businesses started transforming themselves on Internet. Was the TCP designed to take up this much load without getting slower and getting to a point where it starts breaking? While all this growth was happening, in the background researchers continued to work on simplifying the congestion control issues with TCP and many new RFCs came up and got adopted as well. Today we all are able to work efficiently using these complex congestion control and avoidance algorithms.
Beyond the scope of TCP, Routing protocols play a critical role in networking today by finding the shortest of the multiple available paths between any two hosts. Once the upper layer sets the properties which should be basis of selecting best path, most of the times best is not really the one which gets picked up. Once the path is picked up then the whole communication is bound to that single path or few equal cost paths for packet transmission. This certainly results in ignoring the possible alternate paths which become available for data transfer between same set of hosts. Even when the routers are trying to make intelligent decisions on distributing packets for same TCP flow across multiple paths, there is a concern of packets reaching out of order. This in turn kicks off the congestion control algorithms and after the threshold TCP believes the packet was lost and retransmission happens. It also reduces the rate of sending the packets by reducing the congestion window by half. To avoid this issue Routers try to ensure that packets belonging to single flow are sent through same path.
You must be asking why to even bother about multiple paths? Good question and the answer is with you… do you carry a smartphone with internet connectivity? Does your phone connect over Wifi? And it must have Bluetooth and USB ports? Then you have the answer, isn’t it?
Every single capability here would allow your smartphone to connect with a remote host which is great news. You can have 2 or 4 parallel paths for communication… but who takes the data from one end to another? Same old TCP and one TCP session can only use one of the paths available. Then how are you going to utilize the other connected flows and what about the scenarios where on the move you disconnect/re-connect to a given network? Let us understand some realistic picture around Mobile. Mobile data usage in year 2012 was nearly 12 times the data volume for connected internet in year 2000. Mobile devices have crossed the number of connected devices by large fold and it is growing at explosive rate. Hence we need to relook at how data transfer worked in fixed line internet versus how it should work with the Mobile devices around.
With the understanding of the problem at hand, let us consider how different data transfer would be if the TCP session could make use of all the available paths in between 2 hosts. Let us quickly list down the benefits of such a scenario:
– Better performance at network layer
– Utilizing the best path which is least congested
– Using the path which has highest bandwidth for bulk transfer
– Much robust App connectivity while the device loses one path
– Causing least trouble to already crowded and congested paths
– Finally much better end user experience… causing the Wow factor
Looks great and something we should move towards but how? Should we build a new protocol suite as TCP is not designed to take care of such scenarios? Will that be acceptable, what happened to SCTP adoption across internet? Many times a great technology does not get similar acceptance because it is trying to change the basics which is used by billions of devices.
Interesting problem and would leave it to you for little bit of brainstorming… stay tuned for the next blog where we take the discussion to next level… | <urn:uuid:d2851320-4915-415c-8163-75e58709f907> | CC-MAIN-2017-04 | https://www.citrix.com/blogs/2013/08/16/networking-beyond-tcp/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00009-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95545 | 896 | 3.65625 | 4 |
Given an infinite sequence of 0's and 1's, how can one best predict the next element of the sequence? Of course, much depends on the definition of what a "best" prediction means, but within the context of this article, "best" means within the historical context of the sequence of 0's and 1's that had preceeded the prediction. This is the same as finding the most likely, as inferred from past behaviour.
To see how the predicition algorithm works, a digression is necessary to see how sequences of 0's and 1's can be built. At heart, a model is a probabilistic finite state automata with the arcs between states being 0's and 1's with associated probabilities of observation. These models can be followed for any finite number of steps to generate a sequence with that number of digits. At that point, a new model could be followed with the new sequence of digits appended to the first sequence. This change of models, or a regime change, can happen many times, but only the most recent model is relevent for prediction. The prediction algorithm attemps to identify the models that sequentially give rise to a sequence, and to use the most recent model for the prediction of the next digit.
This approach is quite different from many other approaches in that it does not assume a particular form for the underlying dynamics to which parameters are to be fitted. Rather, it attempts to infer a computation algorithm from the observed sequence as the best explanation of that sequence. This means that the model identification process has to be initially infinite dimensional and then reduced to some finite but unknown number of explanatory variables. The particular finite state automata are also unusual in that they are capable of a high degree of expressiveness, both being able to model perfect randomness and perfect determinism. Uniquely, this algorithm is not self-incriminating. A criticism of most algorithms, is that if everyone knew how they worked then any profits would be squeezed out of the algorithms by front running, or anticipating its predictions. This algorithm reflects its own success by returning a model of perfect randomness in the ultimate case if it were to be globally implemented, it can reflect its own success against itself
A model is a probabilitic finite state automata, namely it has a set of states, and from each state two arcs labelled 0 and 1 with probabilites p and q emanate to other states (including even the original state). Sequences of digits are generated by following arcs around the model with the probability of choosing that arc according to its assigned value. Models can be represented as tables with current states along the left handside, with arcs along the top, cell entries show the next state the model moves to with its probablity.
This generates the sequence ...0000....:
|State \ Arc||0||1|
|A||A / 100%||A / 0%|
This generates sequences like ...0100011011...:
|A||A / 50%||A / 50%|
This generates sequences like ...00110000001111001111...:
|State \ Arc||0||1|
|A||B / 50%||D / 50%|
|B||C / 100%||C / 0%|
|C||A / 100%||A / 0%|
|D||E / 0%||E / 100%|
|E||A / 0%||A / 100%|
The model identification process is straightforward. A canonical automata is formed to represent the original sequence. This canonical automata may have infinitely many states, it is then pruned to be finite state. (In reality, all the canonical automata are finite with a large number of states since only finitely many observations can be done humanly possible.)
For example, consider how a random sequence of 0's and 1's would have a model identified with it. First a histogram table would be generated showing the probability of each binary sequence occuring in the sequence. Since the sequence is totally random, we know the probability of any binary subsequence is 0.5 raised to the power of the length of the sequence:
The next step is to create an infinite state automata, the nodes of the automata are the binary sequences 0, 1, 00, 01, 10, 11, 000, 001, ... that are in the left column of the histogram. There is a special state called the root state, this is the beginning state from which all the sequences start. The probability of the arcs are the conditional probability that a 0 or 1 would follow a given arc, which in this case are always 50%. The canonical automata would have the following form:
|State \ Arc||0||1|
|Root||0 / 50%||1 / 50%|
|0||00 / 50%||01 / 50%|
|1||10 / 50%||11 / 50%|
|00||000 / 50%||001 / 50%|
|01||010 / 50%||011 / 50%|
|10||100 / 50%||101 / 50%|
|11||110 / 50%||111 / 50%|
|000||0000 / 50%||0001 / 50%|
|001||0010 / 50%||0011 / 50%|
Notice that all the rows in the canonical automata are the same as the first row for the root state. Namely, the subtrees can be identified with the root node, which is the reduction phase of the model identification process. Identifying subtrees that can be identified with prior nodes to reduce the tree from being infinite dimensional to finite dimensional. When this reduction is applied the final resulting automata is the same as that given in the examples for a random sequence of 0's and 1's.
Note that each model can also given the probability of a particular sequence occuring in that model by multiplying the probabilities of the arcs as a sequence is traced through the model. Models which explain a sequence well will have probabilities that explain sequences that match the original histogram.
Regime identification asks the question if a sequence is better explained by having one or more models. At this point we only ask the simple question if one or two models are better in explaining the observation of a particular sequence. The methodology behind regime identification begins by splitting up the sequence at different points. On each subsequence a model is fitted, and from the model a probability can be computed for the observed subsequence. The two probabilities, one for each subsequence, can be multiplied to give a joint probability of the original sequence if it has indeed been composed of two submodels at the point of splitting. The best point at splitting the original sequence, which gives the highest probability is the best candidate for a regime change, if it exists. It may happen that the best model may be the whole sequence, which is just one model. For predicting a sequence it is only sufficient to ask whether one or two models are best, as only the most recent model is used for predictions.
Once a sequence has had its regimes identified, the most recent model is used for binary prediction. The involves taking the subsequence that corresponds to the most recent model and finding the probability of a 0 or 1 occuring. The one with the larger probablity is the prediction for the next element in the sequence.
This has been a very quick overview, of a method to predict binary sequences. Predicting sequences of elements other than binary digits can be accomplished by mapping some attribute of them onto binary digits, for example if prices will go up or not. The most salient part of the algorithm involves finding efficient and accurate algorithms to reduce the canonical automata to finite automata without losing too much information. | <urn:uuid:4d682379-b45f-493a-97e2-c24af5838e12> | CC-MAIN-2017-04 | http://www.intrepid.com/robertl/stock-predict1/algorithm.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00340-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931479 | 1,601 | 2.75 | 3 |
What are the possible security risks associated with deploying and using WiMAX?
WiMAX is the much-anticipated broadband wireless access mechanism for delivering high-speed connectivity over long distances, making it attractive to Internet and telecommunications service providers. Designed by the IEEE 802.16 committee, WiMAX was developed after the security failures that plagued early IEEE 802.11 networks. Recognizing the importance of security, the 802.16 working groups designed several mechanisms to protect the service provider from theft of service, and to protect the customer from unauthorized information disclosure.
A fundamental principle in 802.16 networks is that each subscriber station (SS) must have a X.509 certificate that will uniquely identify the subscriber. The use of X.509 certificates makes it difficult for an attacker to spoof the identity of legitimate subscribers, providing ample protection against theft of service. A fundamental flaw in the authentication mechanism used by WiMAX's privacy and key management (PKM) protocol is the lack of base station (BS) or service provider authentication. This makes WiMAX networks susceptible to man-in-the-middle attacks, exposing subscribers to various confidentiality and availability attacks. The 802.16e amendment added support for the Extensible Authentication Protocol (EAP) to WiMAX networks. Support for EAP protocols is currently optional for service providers.
With the 802.16e amendment, support for the AES cipher is available, providing strong support for confidentiality of data traffic. Like the 802.11 specification, management frames are not encrypted, allowing an attacker to collect information about subscribers in the area and other potentially sensitive network characteristics.
WiMAX deployments will use licensed RF spectrum, giving them some measure of protection from unintentional interference. It is reasonably simple, however, for an attacker to use readily available tools to jam the spectrum for all planned WiMAX deployments. In addition to physical layer denial of service attacks, an attacker can use legacy management frames to forcibly disconnect legitimate stations. This is similar to the deauthenticate flood attacks used against 802.11 networks.
Despite good intentions for WiMAX security, there are several potential attacks open to adversaries, including:
Rogue Base Stations
Network manipulation with spoofed management frames
The real test of WiMAX security will come when providers begin wide-scale network deployments, and researchers and attackers have access to commodity CPE equipment. Other attacks including WiMAX protocol fuzzing may enable attackers to further manipulate BSs or SSs. Until then, the security of WiMAX is limited to speculation. | <urn:uuid:65e92728-2c01-4410-8df1-e9920925ce21> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2301883/network-security/wimax-security-issues.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00248-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.898836 | 503 | 2.796875 | 3 |
How to secure 300,000 smart phones? MegaDroid can help.
- By William Jackson
- Oct 09, 2012
Scientists at the Energy Department’s Sandia National Laboratories in California have created a 300,000-node network of virtual Android devices as part of a program to emulate large-scale networks to help researchers understand and defend complex online environments.
The network, called MegaDroid, is an emulation, not a model or a simulation, said David Fritz, a researcher and member of the senior technical staff at Sandia. “For all intents and purposes it’s a real Android device” on a virtual machine, he said.
Work on MegaDroid, which took a year to develop along with an earlier phase of the program called MegaTux, started in 2009 to create a network of 1 million virtual Linux machines. Still under way is an effort called MegaWin, which began in 2010 to create a virtual Windows network. MegaWin still has a year to go.
Can mobile devices work as ID cards, thin clients on a secure net?
When work is completed, Sandia scientists expect to release the results as an open-source software tool that will let researchers create their own virtual networks on inexpensive off-the-shelf PC clusters. Such a tool would be useful for government agencies from the municipal to the federal level that are deploying Android and other mobile devices on their networks. They would, for example, be able to test how the network handles software glitches, data breaches or natural disasters.
“The software will connect with the same software on all of the machines and bring up a network of all the devices and provide an interface for working with them,” Fritz explained.
The software will be able to scale from a small network of a hundred or so devices running on a single workstation to millions of virtual devices running on hundreds of nodes. The networks can be brought up in about 10 minutes out of the box, he said.
The virtual networks can be used by developers to create an environment in which new applications and platforms can be tested, and by security researchers to better understand and protect against threats on networks, including accidents, natural events and malicious attacks.
Such environments are necessary because of the sheer complexity of networks when large numbers of devices running sophisticated software are interacting with one another.
The Android operating system consists of some 14 million lines of code running on top of a Linux kernel of the same size, Fritz said. The resulting scale of possible interactions is beyond human comprehension.
“You can’t possibly read through 15 million lines of code and understand every possible interaction between all these devices and the network,” Fritz said. The emulated networks could enable real-world testing in a safe environment.
The MegaDroid network could be considered as a population of independent mobile devices connecting with network servers, or as a network of devices also interacting with each other. Such a network is rich with possibilities because of the advanced functionality of the handheld devices and the amount of data about the user’s environment that is routinely being gathered by them.
“You can think of Androids as a distributed sensor network,” Fritz said.
To make use of this data in the emulated network the Sandia scientists have included simulated sensor input, including a simulated Global Positioning System that can feed location data to each device that then is used in the same way as real GPS data.
Fritz said Sandia is eager to collaborate with other research institutions and schools to further the meganetworks platform. And scientists demonstrating the system for other organizations in government, industry and academia already have generated interest in the tool, he added. “I think there is a lot of desire to see the platform released.”
William Jackson is a Maryland-based freelance writer. | <urn:uuid:e96cb825-dbc0-4c77-a74e-47278f01c067> | CC-MAIN-2017-04 | https://gcn.com/articles/2012/10/09/1005-jackson-analysis-megadroid.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00064-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941373 | 787 | 2.953125 | 3 |
A virtual directory is a network service which provides access to two or more data sources in a manner that makes them appear to be a single directory.
The data sources may themselves be actual directory servers -- for example, one or more Active Directory domains or some other LDAP directories. They could just as easily be SQL-type databases or other data sources (e.g., web services, CSV files, etc.).
Virtual directories normally expose the data in a single view, accessed using the LDAP protocol. They must provide read access and may also provide write-back capabilities to their data sources.
The data aggregation performed by a virtual directory may be object-level -- i.e., different directory objects represented in the consolidated view are actually stored (physically) in different data sources. The aggregation may also be attribute-level -- i.e., different attributes of the same object may be pulled from different data sources. | <urn:uuid:d1908d69-a87a-433d-b6f6-2a88f51bf9e4> | CC-MAIN-2017-04 | http://hitachi-id.com/resource/concepts/virtual-directory.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00460-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.904998 | 186 | 2.65625 | 3 |
In case you missed it, the end of the age of ever-faster computers is nigh. In his speech (pdf) at last summer’s Hot Chip conference, Bob Colwell, Intel’s former chief architect, said Moore’s law—the prediction that computer power doubles every 18-24 months, which has held largely true since Intel co-founder Gordon Moore made it in 1965—will cease to hold by 2020.
Colwell’s prognosis is dire, but not everyone is accepting it. On Jan. 23, a joint team from Harvard and the non-profit defense contractor MITRE challenged the repeal of Moore’s law with an ultra-dense, nano-scale processor that could add time to computing’s Doomsday Clock.
The sheer number of circuits that can fit on a processor is called the transistor count. Generally speaking, Moore’s law states that processors will double their transistor count every couple of years. A transistor is basically a switch that both stores and processes data; the more of them a computer contains, the more memory and power it has.
The problem is, processors are built with silicon. As silicon transistors get more and more dense, they need more power and better cooling. In other words, it’s not that we can’t design faster chips, it’s just too expensive and difficult to keep them running.
The Harvard/MITRE team’s chip—called the nanoFSM—saves power, and creates less heat, through a combination of size and design. Not only do the tiny wire transistors need less energy, but they are “nonvolatile.” This means that they don’t need a constant electrical current to remember how they’ve been programmed, unlike regular transistors. The nanowires are so-named because they are measured on the nanometer scale, along with DNA and viruses. Where an Intel Core i7—a chip at the heart of high-end personal computers—is roughly the size of a small coin, the prototype nanoFSM would be a speck of dust on the coin’s face (though it’s also less powerful).
Computers can’t keep getting faster forever, but it’s no surprise that the industry wants to stave off the end of the Moore’s law era as long as it can. Without the constant doubling of computer power we wouldn’t have iPads, IBM’s Watson supercomputer, or the internet. Engineers started worrying about the end of Moore’s law around 2005, when the ever-smaller chips stopped being able to outrun the laws of physics that govern heat dissipation.
Nanowires aren’t new, but this is the first time they’ve been made into transistors that can do math and remember information. This technically makes the nanoFSM a computer, but barely so. Currently, the chip is little more sophisticated than all but the earliest digital processors. If it’s going to save Moore’s law, nanoFSM’s creators still need to prove that they can scale this technology up to handle heavier workloads without succumbing to the same problems that threaten the law now—let alone completely new ones. | <urn:uuid:c9b5f485-7a41-44cc-b927-acda4d8f340b> | CC-MAIN-2017-04 | http://www.nextgov.com/emerging-tech/2014/02/these-molecule-sized-wires-could-stop-computer-industry-hitting-brick-wall/78626/?oref=ng-dropdown | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00000-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933654 | 686 | 3.296875 | 3 |
The flipped classroom is an expression that’s been around since 2007 when Aaron Sams and Jonathan Bergman introduced the concept to the world in their book, Flip Your Classroom. While the term evokes an image of chairs and desks on the ceiling, the actual methodology is not so different. By reversing the standard teaching blueprint and conducting lectures at home and homework in the classroom, students are empowered to learn at their own pace while absorbing information not only from their teacher, but from their fellow classmates.
Anthony Padrnos, official mathematics genius, and John Wetter, technical services manager at Hopkins Public Schools in MN, have been successfully using the Casper Suite to implement the flipped model in their environments. By utilizing Self Service, and through the power of supervision, students have easy and access to their lesson plans in a manner that is both effective and empowering.
Join the discussion on JAMF Nation: | <urn:uuid:77ee4dff-18d7-40b6-bb72-421296190a00> | CC-MAIN-2017-04 | https://www.jamf.com/blog/flipped-classroom-the-what-who-and-how-to-turn-the-classroom-on-its-head/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00000-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946937 | 184 | 2.953125 | 3 |
Critics of Voice over Internet Protocol (VoIP) site that VoIP programs lack security. However, with recent advancements in the telecommunications industry, this is now a claim of the past. All VoIP systems can be individually customized based on a company’s security requirements. For example, the main security issues with VoIP surround a company’s implementation of the system, not the system itself. If traditional networking security is applied to VoIP business systems, VoIP is as secure as any other type of protocol on the market. Originally, when VoIP was introduced to the public, hacking and security were not at the forefront of developer’s minds. As with any type of technology product, years of consumer use and research have helped to create a far more advanced product than what was originally released. As with all products, the more popularity they gain, the more scrutiny they are subject to. Many security issues can be easily resolved by removing the codes for unused VoIP features and performing regular security audits on commonly used features. Most importantly, companies need to define their security requirements ahead of time. Financial institutions and government agencies require higher confidentiality requirements and may require additional, advanced encryption. Implementing the proper tools before switching to VoIP hosted programs helps companies assess all costs up front and prevents any form of cyber attacks from occurring. Some technology experts adamantly maintain that business VoIP systems are in fact, more secure than traditional telecommunication phone systems. Their opinions are backed by statements that highlight how IP systems can install added security that traditional telephone systems cannot. They claim that the vulnerability for VoIP systems occurs when companies neglect to install proper security and IP safety protocols. VoIP services constantly have a telephone tone available, leading to consistent reliability and allowing companies to receive and make calls to customers and clients. One of the top VoIP security threats is when a company neglects to turn on Internet security because they feel it is overly complicated and they do not take the time to ensure their system is adequately protected. Companies that switch to VoIP do so because integrating data and voice plans into one network helps decrease operating expenses and boosts productivity, something that traditional telecommunication services lack. As with all popular products, the more people that use them, the more security risks they are exposed to with hackers. Taking the proper security measures can eliminate these threats, allowing companies to take full advantage of modern VoIP features and systems. | <urn:uuid:54c83e74-8bba-43e8-9515-1bf645a07dba> | CC-MAIN-2017-04 | http://jive.com/blog/verifying-voip-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00450-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958358 | 484 | 2.640625 | 3 |
You have by now noticed the continued news about cyber attacks on businesses, governments and individuals across the globe. The attacks and consequences vary, from stolen identities, to forged documents or encrypted files. Depending upon the ultimate motives of the attacking cyber criminal, your passwords, usernames, personal data or business network can be completely compromised.
In the attached article, a new type of malware (malicious software), can infect your computer simply from clicking on a banner ad within an infected web site. By using sophisticated techniques to hide within the pixels of the banner, the ultimate consequence can be stolen credentials (passwords), key logging, file theft and ransomware. Ransomware is the encryption of your files on a workstation or across your entire network. Ransomware is extortion and requires a payment to the cyber attackers to decrypt your files. It has generated profits in the 100’s of millions of dollars in the last several years.
It might get technical at times, but the following article is an interesting read:
Your business should take the preventative steps necessary to harden your personal and business data defense. Please contact us to start the conversation. | <urn:uuid:3f7dc54d-cd02-472d-b166-25b7b7a47f50> | CC-MAIN-2017-04 | https://www.getadvantage.com/tech-talk/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00358-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.910559 | 233 | 2.546875 | 3 |
Documentary to Tell of the 6 'Granny Hackers'
Sixty years ago, six women became some of the earliest computer hackers in history. A new documentary hopes to give them their credit due.
In the half-century since Rosie the Riveter became the culture icon not just for women who had worked in manufacturing plants while men were off fighting World War II, but for the entire feminist movement that followed, history has all but forgotten six women with their own wartime contributionprogramming the ENIAC (Electronic Numerical Integrator and Computer).
The ENIAC was an 80 foot long, 8 foot tall, black metal machine with an archaic programming interface involving dozens of wire and 3000 switches. The women successfully programmed it to perform a ballistics trajectory, a differential calculus equation that was important to the WWII effort. However, in the decades that followed, the women's story disappeared from history.
A new documentary in the making, called "Invisible Computers: The Untold Story of the ENIAC Programmers," hopes to give these women their due credit by chronicling their stories.
"The names of Betty Snyder Holberton, Jean Jennings Bartik, Kathleen McNulty Mauchly Antonelli, Marlyn Wescoff Meltzer, Ruth Lichterman Teitelbaum and Frances Bilas Spence belong in our history books and computer courses," said Kathy Kleiman, the historian for the ENIAC Programmers and producer of the first full-length documentary to explore their untold story. "Not only did they program the first modern computer, some devoted decades to making programming easier and more accessible for all who followed." | <urn:uuid:84defad7-c56d-4ba6-a5f6-a45207e0b6ab> | CC-MAIN-2017-04 | http://www.eweek.com/careers/documentary-to-tell-of-the-six-granny-hackers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00478-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956082 | 340 | 2.84375 | 3 |
As a forensic investigator, you are likely already familiar with the artifacts left in storage on a disk from the use of a web browser. The mainstream browsers all provide, for the most part, the same functionality of things like tabbed browsing, remembering history and exposing it in date ranges, storing bookmarks for later viewing, etc.
One of those features is the topic of this blog post: remembering data that a user typed into a form field so that same value doesn’t have to be typed into that same form next time. This is generally referred to as an autofill form values feature. Firefox, Chrome, Internet Explorer, Safari, they all offer this feature, but each of them store these values in a different way.
Safari makes use of a plist file stored in the following path:
Those of you who are familiar with Apple plist files, you already know that they have a header which indicates that a file is 1) a plist file and 2) in either binary or XML format. The header of bplist00 indicates binary, while <?xml indicates XML. Take a look at the following example plist files.
If you examine the hex of the Form Values file, you will note that neither header is present. If you don’t have an OS X computer in front of you, take a look at mine.
The reason for this comes from the use of AES encryption when storing this plist file to the disk. I promise you that this really is a plist file.
Techie Alert! (Feel free to skip the boring stuff).
Apple makes use of several functions that are part of an encryption platform called CDSA. Apple has taken the open source package from the Open Group and tweaked it a bit for their own use. The good part about this, is that apple has released parts of their implementation on their open source website. The part of CDSA that does the work for us here comes from a few modules in a smaller section known as CSSM.
For example, the Open Group version of CSSM doesn’t allow for AES encryption (among others), so Apple has brought that into their implementation as noted in a customheader file.
The Mach-O binary inside of the Safari application file is a smaller executable that handles more of the functionality relating to the GUI and user experience, and is therefore written in Objective-C. The heavy lifting code is pulled in from a framework written in C and located at: /System/Library/PrivateFrameworks/Safari.framework/Versions/A/Safari
A mildly interesting note comes from the above linked CDSA Overview in the form of this note at the top of the page.
Important This technology is deprecated in OS X v10.7 and is not available in iOS. You should use it only if none of the other cryptographic service APIs support what you are trying to do.Apple continues to use CDSA/CSSM in their frameworks, but they have deprecated it for other developers.
I bet you want to know how we can use all that junk above to get these dang form values, am I right?
Apple stores a chunk of randomly generated data (32 bytes to be exact), inside the user’s keychain. You can see this on your Mac by opening the Keychain Access app, or you can look at the screen shot from mine. Sort on the name column, and look for a record titled Safari Forms AutoFill. Check the show password box, and it will display the UTF8 interpretation of the password data after asking you for the password to unlock the keychain.
This data chunk is flavored with a dash of salt, a sprig of thyme, and a pinch of paprika. Let it simmer for an hour, and viola. Oh wait, that was dinner, and a string instrument? Well, it goes something like that anyways.
The result is then used as a master key to encrypt this plist file with AES-128. The nice part is that AES is symmetric encryption, which means the key used to encrypt the data, is the same used to decrypt the data.
Once I worked out the intricacies of this process, Simon Key (a colleague of mine) was able to put together a little package to prevent us from having to use our fingers and toes. As of right now, it requires that you copy the files out of EnCase, but I’m hoping to get it into an EnScript soon. We wanted to get this out to you instead of waiting to get it packaged up and wrapped with a bow.
Download the utility. There are two files in there. One is a windows exe file and the other is a packaged installer file for OS X which places the executable, named sfvd, to the /usr/bin directory on your Mac, where it will be available in terminal no matter which folder you are in. The two tools are identical in function and allow you to decrypt in either OS X or Windows.
A couple notes on sfvd:
- It is a signed package but you may need to bypass the Gatekeeper security to get it installed. Right click (secondary click) on the package, then hold down the command (⌘) key and click on Open.
- It was developed and tested on 10.8. It will work against Safari files in earlier versions (I’ve tested 10.6), but it may not run on earlier versions.
The user keychain:
The form data:
- <user_keychain> This is the fully qualified path to the login.keychain of the user whose data you are examining. Not using a fully qualified path will result in an error.
- <encrypted_form_values_file> This is the relative or absolute) path to the Safari Form Values file that you want to decrypt.
- <output_file> This is the path (relative or absolute) to write the decrypted Form Values plist file. (I suggest a .plist extension)
Try this command:
sfvd ~/Library/Keychains/login.keychain ~/Library/Safari/Form\ Values output.plist
When you run sfvd against a keychain that you have extracted, then the keychain has to be unlocked before the data can be accessed. OS X will ask you for the password to unlock, and then ask you to allow access. This password is typically the same as the user login, though it can be made to be different. This prompt will still show even if the password is blank (so try blank!).
PlistViewer Plugin from App Central to view the data.
The integer value in each of the folders represents the date of the last time a value was stored into this collection. The format is in Mac Absolute Time. The tool Dcode from DigitalDetective will decode the value.
You can download my sample login.keychain and Form Values file here. I even included the decrypted version for those of you that don’t have access to a Mac right now.
You can download the sfvd tool here @ EnCase App Central:http://tinyurl.com/nvtlu6k
Form Values File | <urn:uuid:b15b95b1-7a94-4f3b-a844-e9bfc622c790> | CC-MAIN-2017-04 | http://encase-forensic-blog.guidancesoftware.com/2013/06/safari-form-values-decryptor.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00110-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926747 | 1,497 | 2.59375 | 3 |
Don't use the computer to find the answer.
Cambridge, UK, May 4, 2000 - Kaspersky Lab Int., a fast-growing international anti-virus software development company, warns about the discovery of the new dangerous worm named I-Worm.LoveLetter. The worm has been found "in-the-wild" and poses a real threat to the computer users. Unknown malefactors have spread an infected "declaration of love" all over the world. It took only a few hours on 4th of May for the worm to infect thousands of computers in many countries.
Detection and disinfection for this worm has been added to Kaspersky Lab's AntiViral Toolkit Pro (AVP).
Spreading and detection:
A user receives an emailwith the subject "ILOVEYOU" and the text message "kindly check the attached LOVELETTER coming from me.."
There is also an attachment called LOVE-LETTER-FOR-YOU.TXT.vbs.
After opening the attachment the worm scans all local and mapped network drives for files with extensions VBS, VBE, JS, JSE, CSS, WSH, SCT, HTA, JPG, JPEG, MP2, MP3 and writes its worm body over those files thus making them irrevocably lost.
The worm I-Worm.LoveLetter creates two copies of itself, naming them Win32.dll.vbs and MSKernel32.vbs and places them in Windows directory. The worm then registers itself in system registry so that it starts every time during windows boot. When the worm is active, it looks through an address book in order to send its body further - to all recipients found. Thus the worm requires only a few minutes to distribute itself to all of your friends and partners and to irretrievably destroy some possibly useful files.
Methods of protection:
DO NOT OPEN THE EMAIL WITH THE SUBJECT "ILOVEYOU"
and more importantly
DO NOT RUN THE ATTACHED FILE LOVE-LETTER-FOR-YOU.TXT.vbs.
If however you have opened the attachment and received a long-awaited declaration of love, you should visit the Kaspersky Lab web site http://www.kasperskylabs.com, where you will find the antidote for this dangerous worm. To remove the worm from an infected computer update the AVP anti-virus database with the latest daily update and scan all drives. AVP will effectively detect and neutralise I-Worm.LoveLetter. | <urn:uuid:e16c0302-3e92-4806-b676-948f2ac2ee2e> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2000/_To_love_or_not_to_love_ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00138-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.902531 | 537 | 2.515625 | 3 |
It’s nothing new for technological advances to be inspired by observations of the natural world. For example, Wilbur Wright — of the aviation pioneering Wright brothers — spent significant amounts of time looking at flying birds and noticed they sometimes “tipped” their wings to one side or another to gain balance and adjust to the differences in the lifting forces caused by the air around them. Unlike other aspiring aviators, he realized early on that the problem wasn’t one of powering a craft into the air, but of novel concepts such as lift and drag that helped sustain flight. As a result, he and his brother were the ones who made history as the makers and pilots of the first planes.
Another technology innovator has been in the news lately because of a cutting-edge project he’s working on that’s inspired by nodal-processing patterns within the human brain, or more precisely, the neocortex region. Jeff Hawkins — best known as the genius behind the PalmPilot — has spent the past few years of his career studying how these patterns can be applied to software design. He even formed a company, Numenta, to develop and promote the concept.
Specifically, this idea is manifested in the hierarchical temporal memory (HTM) model that ostensibly can be “trained” not only to recognize objects, but also identify and classify related objects it wasn’t trained on. The HTM system runs on the free-software Numenta Platform for Intelligent Computing (NuPIC) that was recently made available for download on the company’s Web site (http://numenta.com/#start).
Evidently taking architect Daniel Burnham’s advice to “make no little plans | <urn:uuid:8e6fa641-7ca7-4d32-8523-047f725ac0be> | CC-MAIN-2017-04 | http://certmag.com/whither-brainware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00376-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.962645 | 358 | 3.046875 | 3 |
Shared libraries are a fundamental component for the efficient use of space and resources on a modern UNIX® system. The C library on a SUSE 9.1 system is made up of about 1.3 MB. A copy of that library for every program in /usr/bin (and I have 2,569!) would take up a couple of gigabytes of space.
Of course this number is inflated -- statically linked programs would incorporate only those parts of the library that they use. Nonetheless, the amount of space tied up by all those duplicate copies of
printf() would give the system a very bloated feel.
Shared libraries can save memory, not just disk space. The kernel can keep a single copy of a shared library in memory, sharing it among multiple applications. So, not only do we only have one copy of
printf() on the disk, we only have one in memory. That has a pretty noticeable effect on performance.
In this article, we'll review the underlying technology used for shared libraries and the way in which shared library versioning helps prevent the compatibility nightmares that naive shared library implementations have had in the past. First, a look at how shared libraries work.
How shared libraries work
The concept is easy enough to understand. You have a library; you share the library. But what actually happens when your program tries to call
printf() -- the real way this works -- is a bit more complex.
It is a simpler process in a static linking system than in a dynamically linked system. In a static linked system, the generated code possesses a reference to a function. The linker replaces that reference with the actual address at which it had loaded the function, so that the resulting binary code has the right address in place. Then, when the code is run, it simply jumps to the relevant address. This is a simple task to administer because it lets you to link in only those objects that are actually referred to at some point in the program.
But most shared libraries are dynamically linked. That has several further implications. One is that you can't predict in advance at which address a function will really be when it's called! (There have also been statically linked shared library schemes, such as the one in BSD/OS, but they are beyond the scope of this article.)
The dynamic linker can do a fair amount of work for each function linked, so most linkers are lazy. They only actually finish that work when the function is called. With more than a thousand externally visible symbols in the C library and nearly three thousand more local ones, this idea could save a noticeable amount of time.
The magic trick here that makes it work is a chunk of data called a Procedure Linkage Table (PLT), a table in the program that lists every function that a program calls. When the program is started, the PLT contains code for each function to query the runtime linker for the address at which it has loaded a function. It then fills in that entry in the table and jumps there. As each function is called, its entry in the PLT is simplified into a direct jump to the loaded function.
However, it's important to notice that this still leaves an extra layer of indirection -- each function call is resolved through a jump into a table.
Compatibility's not just for relationships
This means that the library you end up being linked to had better be compatible with the code that's calling it. With a statically linked executable, there is some guarantee that nothing will change on you. With dynamic linking, you don't have that guarantee.
What happens if a new version of the library comes out? Especially, what happens if the new version changes the calling sequence for a given function?
Version numbers to the rescue -- a shared library will have a version. When a program is linked against a library, it has the version number it's designed for stored in it. The dynamic linker can check for a matching version number. If the library has changed, the version number won't match, and the program won't be linked to the newer version of library.
One of the potential advantages of dynamic linking, however, is in fixing bugs. It'd be nice if you could fix a bug in the library and not have to recompile a thousand programs to take advantage of that fix. So sometimes, you want to link to a newer version.
Unfortunately, that creates some cases where you want to link to the newer version and some cases where you'd rather stick with an older version. There is a solution, though -- two kinds of version numbers:
- A major number indicates a potential incompatibility between library versions.
- A minor number indicates only bug fixes.
So under most circumstances, it is safe to load a library with the same major number and a higher minor number; consider it an unsafe practice to load a library with a higher major number.
To prevent users (and programmers) from needing to track library numbers and updates, the system comes with a large number of symbolic links. In general, the pattern is that
will be a link to
in which N is the highest major version number found on the system.
For every major version number supported,
will be a link in turn to
in which M is the largest minor version number.
Thus, if you specify
-lexample to the linker, it looks for
libexample.so which is a symbolic link to a symbolic link to the most recent version. On the other hand, when an existing program is loaded, it will try to load
libexample.so.N in which N is the version to which it was originally linked. Everyone wins!
To debug, first you must know how to compile
To debug problems with shared libraries, it's useful to know a little more about how they're compiled.
In a traditional static library, the code generated is usually bound together into a library file with a name ending in
.a and then it's passed to the linker. In a dynamic library, the library file's name generally ends in
.so. The file structures are somewhat different.
A normal static library is in a format created by the
ar utility, which is basically a very simple-minded archive program, similar to
tar but simpler. In contrast, shared libraries are generally stored in more complicated file formats.
On modern Linux systems, this generally means the ELF binary format (Executable and Linkable Format). In ELF, each file is made up of one ELF header followed by zero or some segments and zero or some sections. The segments contain information necessary for runtime execution of the file, while sections contain important data for linking and relocation. Each byte in the entire file is taken by no more than one section at a time, but there can be orphan bytes that are not covered by a section. Normally in a UNIX executable, one or more sections are enclosed in one segment.
The ELF format has specifications for applications and libraries. The library format is a lot more complicated than just a simple archive of object modules, though.
The linker sorts through references to symbols, making notes about in which libraries they were found. Symbols from static libraries are added to the final executable; symbols from shared libraries are put into the PLT, and references to the PLT are created. Once those tasks are done, the resulting executable has a list of symbols it plans to look up from libraries it will load at runtime.
At runtime, the application loads the dynamic linker. In fact, the dynamic linker itself uses the same kind of versioning as the shared libraries. On SUSE Linux 9.1, for instance, the file
/lib/ld-linux.so.2 is a symbolic link to
/lib/ld-linux.so.2.3.3. On the other hand, a program looking for
/lib/ld-linux.so.1 won't try to use the new version.
The dynamic linker then gets to do all the fun work. It looks to see which libraries (and which versions) a program was originally linked to and then loads them. Loading a library consists of:
- Finding it (and it may be in any of several directories on a system)
- Mapping it into the program's address space
- Allocating blocks of zero-filled memory the library may need
- Attaching the library's symbol table
Debugging this process can be difficult. There are a few kinds of problems you can encounter. For example, if the dynamic linker can't find a given library, it will abort loading the program. If it finds all the libraries it wants but can't find a symbol, it can abort for that too (but it may not act until the actual attempt to reference that symbol occurs) -- this is rare case though because normally, if the symbol isn't there, it will be noticed during the initial link.
Modifying the dynamic linker search path
When linking a program, you can specify additional paths to search at runtime. In
gcc the syntax is
-Wl,-R/path. If the program is already linked, you can also change this behavior by setting the environment variable
LD_LIBRARY_PATH. Usually this is needed only if your application wants to search paths that aren't part of the system-wide default, a rare case for most Linux systems. In theory, the Mozilla people could have distributed a binary compiled with that path set, but they preferred to distribute a wrapper script that sets the library path appropriately before launching the executable.
Setting the library path can provide a workaround in the rare case where two applications require incompatible versions of a library. A wrapper script can be used to have one application search in a directory using the special version of the library it requires. Hardly an elegant solution, but in some cases it's the best you can do.
If you have a compelling reason to add a path to many programs, you can also change the system's default search path. The dynamic linker is controlled through
/etc/ld.so.conf, which contains a list of directories to search by default. Any paths specified in
LD_LIBRARY_PATH will be searched before the paths listed in
ld.so.conf, so users can override these settings.
Most users have no reason to change the system default library search paths; generally the environment variable is a better match for likely reasons to change the search path, such as linking with libraries in a toolkit or testing programs against a newer version of a library.
One useful tool for debugging shared library problems is
ldd. The name derives from list dynamic dependencies. This program looks at a given executable or shared library and figures out what shared libraries it needs to load and which versions would be used. The output looks like this:
Listing 1. Dependencies of /bin/sh
$ ldd /bin/sh linux-gate.so.1 => (0xffffe000) libreadline.so.4 => /lib/libreadline.so.4 (0x40036000) libhistory.so.4 => /lib/libhistory.so.4 (0x40062000) libncurses.so.5 => /lib/libncurses.so.5 (0x40069000) libdl.so.2 => /lib/libdl.so.2 (0x400af000) libc.so.6 => /lib/tls/libc.so.6 (0x400b2000) /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)
It can be a little surprising to find out how many libraries a "simple" program uses. It's probably the case that
libhistory is the one calling for
libncurses. To find out, we can just run another
Listing 2. Dependencies of libhistory
$ ldd /lib/libhistory.so.4 linux-gate.so.1 => (0xffffe000) libncurses.so.5 => /lib/libncurses.so.5 (0x40026000) libc.so.6 => /lib/tls/libc.so.6 (0x4006b000) /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x80000000)
In some cases, an application may need extra library paths specified. For instance, the first few lines of an attempt to run
ldd on the Mozilla binary came out like this:
Listing 3. Result of ldd for items not in search path
$ ldd /opt/mozilla/lib/mozilla-bin linux-gate.so.1 => (0xffffe000) libmozjs.so => not found libplds4.so => not found libplc4.so => not found libnspr4.so => not found libpthread.so.0 => /lib/tls/libpthread.so.0 (0x40037000)
Why aren't these libraries found? Because they're not in the usual search path for libraries. In fact, they're found in
/opt/mozilla/lib, so one solution would be to add that directory to
Another option is to set the path to
. and run
ldd from that directory, although this is a little more dangerous -- putting the current directory in your library path is just as potentially treacherous as putting it in your executable path.
In this case, it's pretty clear that adding the directory these are in to the system-wide search path would be a bad idea. Nothing but Mozilla needs these libraries.
And speaking of Mozilla, in case you were thinking that you'd never see more than a few lines of libraries, here's a somewhat more typical large application. Now you can see why Mozilla takes so long to launch!
Listing 4. Dependencies of mozilla-bin
linux-gate.so.1 => (0xffffe000) libmozjs.so => ./libmozjs.so (0x40018000) libplds4.so => ./libplds4.so (0x40099000) libplc4.so => ./libplc4.so (0x4009d000) libnspr4.so => ./libnspr4.so (0x400a2000) libpthread.so.0 => /lib/tls/libpthread.so.0 (0x400f5000) libdl.so.2 => /lib/libdl.so.2 (0x40105000) libgtk-x11-2.0.so.0 => /opt/gnome/lib/libgtk-x11-2.0.so.0 (0x40108000) libgdk-x11-2.0.so.0 => /opt/gnome/lib/libgdk-x11-2.0.so.0 (0x40358000) libatk-1.0.so.0 => /opt/gnome/lib/libatk-1.0.so.0 (0x403c5000) libgdk_pixbuf-2.0.so.0 => /opt/gnome/lib/libgdk_pixbuf-2.0.so.0 (0x403df000) libpangoxft-1.0.so.0 => /opt/gnome/lib/libpangoxft-1.0.so.0 (0x403f1000) libpangox-1.0.so.0 => /opt/gnome/lib/libpangox-1.0.so.0 (0x40412000) libpango-1.0.so.0 => /opt/gnome/lib/libpango-1.0.so.0 (0x4041f000) libgobject-2.0.so.0 => /opt/gnome/lib/libgobject-2.0.so.0 (0x40451000) libgmodule-2.0.so.0 => /opt/gnome/lib/libgmodule-2.0.so.0 (0x40487000) libglib-2.0.so.0 => /opt/gnome/lib/libglib-2.0.so.0 (0x4048b000) libm.so.6 => /lib/tls/libm.so.6 (0x404f7000) libstdc++.so.5 => /usr/lib/libstdc++.so.5 (0x40519000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x405d5000) libc.so.6 => /lib/tls/libc.so.6 (0x405dd000) /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000) libX11.so.6 => /usr/X11R6/lib/libX11.so.6 (0x406f3000) libXrandr.so.2 => /usr/X11R6/lib/libXrandr.so.2 (0x407ef000) libXi.so.6 => /usr/X11R6/lib/libXi.so.6 (0x407f3000) libXext.so.6 => /usr/X11R6/lib/libXext.so.6 (0x407fb000) libXft.so.2 => /usr/X11R6/lib/libXft.so.2 (0x4080a000) libXrender.so.1 => /usr/X11R6/lib/libXrender.so.1 (0x4081e000) libfontconfig.so.1 => /usr/lib/libfontconfig.so.1 (0x40826000) libfreetype.so.6 => /usr/lib/libfreetype.so.6 (0x40850000) libexpat.so.0 => /usr/lib/libexpat.so.0 (0x408b9000)
Learning more about shared libraries
Users interested in learning more about dynamic linking on Linux have a broad field of options. The GNU compiler and linker tool chain documentation is excellent, although the guts of it are stored in the
info format and not mentioned in the standard man pages.
The manual page for
ld.so contains a fairly comprehensive list of variables that modify the behavior of the dynamic linker, as well as explanations of the different versions of the dynamic linker that have been used in the past.
Most Linux documentation assumes that all shared libraries are dynamically linked because on Linux systems, they generally are. The work needed to make statically linked shared libraries is substantial and most users don't gain any benefit from it, although the performance difference is noticeable on systems that support the feature.
If you're using a pre-packaged system off the shelf, you probably won't run into very many shared library versions -- the system probably just ships with the ones it was linked against. On the other hand, if you do a lot of updates and source builds, you can end up with many versions of a shared library since old versions get left around "just in case."
As always, if you want to know more, experiment. Remember that nearly everything on a system refers back to those same few shared libraries, so if you break one of the system's core shared libraries, you're going to get to play with some kind of system recovery tool.
- Linkers and Loaders by John Levine (Morgan Kauffman, October 1999) is an authoritative source devoted to compile-time and run-time processes. (Some manuscript chapters are available online.)
- Try this source for more information on the ELF binary format.
- Read this communique if you ever wondered why a versioning scheme for shared libraries is important.
- Override the GNU C library -- painlessly (developerWorks, April 2002) shows how to use dynamic linking to override individual library functions without root privileges and without rebuilding the entire library.
- Writing DLLs for Linux apps (developerWorks, October 2001) demonstrates how dynamically linked libraries are often a great way to add functionality without writing a whole new Linux application.
- Shared objects for the object disoriented! (developerWorks, April 2001) explains how to write dynamically loadable libraries and suggests tools to use in the process.
- Use shared objects on Linux (developerWorks, May 2004) demonstrates how to make shared memory processes work.
- Find more resources for Linux developers in the developerWorks Linux zone.
- Get involved in the developerWorks community by participating in developerWorks blogs.
- Browse for books on these and other technical topics. | <urn:uuid:645148df-6468-44e9-b298-a28e7db4bc99> | CC-MAIN-2017-04 | http://www.ibm.com/developerworks/linux/library/l-shlibs/index.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00496-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.886731 | 4,449 | 3.78125 | 4 |
Hoolihan J.P.,University of Miami |
Wells R.J.D.,Texas A&M University at Galveston |
Luo J.,University of Miami |
Falterman B.,2021 Lakeshore Drive |
And 2 more authors.
Marine and Coastal Fisheries | Year: 2014
Pop-up satellite archival tags (n = 31) were deployed on Yellowfin Tuna Thunnus albacares in the Gulf of Mexico for periods ranging from 14 to 95 d. Differences in diel vertical behavior were assessed by comparing time spent at temperature relative to the surface temperature (△T). Pooled samples revealed that 31% of darkness hours, 20% of twilight hours, and 12% of daylight hours were spent in the uniform-temperature surface layer (i.e., △T = 0). Total time spent above 100 m was less during daylight (90.0%) than during darkness (99.8%), suggesting greater exploration of deeper depths during daylight hours. Maximum depth visited ranged from 208 to 984 m, and minimum temperature visited ranged from 5.4◦C to 11.8◦C. Only a small proportion of total time was spent at temperatures colder than 8◦C below the surface temperature. Horizontal excursions for the majority of individuals were less than 100 km from the point of release; however, three individuals moved distances of 411–1,124 km, suggesting that this species has the capability to move relatively long distances within the Gulf of Mexico. The △T values are provided in tabular format and serve as direct input variables for use in habitat standardization models. © American Fisheries Society 2014. Source | <urn:uuid:b3bc70ea-cce2-4c82-8151-d40af57f6da0> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/2021-lakeshore-drive-1724361/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00128-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934379 | 348 | 2.65625 | 3 |
Technology is rapidly changing. New tools for managing information, providing remote access, and calculating data analytics are being deployed at a feverish pace. Meanwhile, skillful exploits and attacks are being perfected and launched by hacktivists and criminals from across the globe. The ability for an organization to reach out to a world-wide market base has never been so effortless, but at the same time the risks from doing so have never been greater.
Increasingly, the Internet interconnects individuals and businesses which also grants unfettered access by criminals and those who wish to abuse these systems. “Cyber threats” define the attacks that compromise computers, networks, data-sets, and/or their communications. “Cyber attacks” can reach a target from local sources (ie, already on your network) or from across a wide area network link (ie, the Internet). A compromise of IT infrastructure, communications, or data stores can result in serious economic and financial losses. Additionally, security breaches can lead to privacy violations, negative publicity, a depletion of public trust, a reduction of consumer confidence, and loss of market share. Security compromises can cause a violation of regulations, place the organization at risk of losing their license to operate, cause bankruptcy, and potentially trigger criminal or civil penalties for the organization and its officers.
Organizations must take the threat and risk of computer hacking seriously. A well-trained and prepared cyber-work-force is imperative. All personnel in the organization, from the C-level executives to new interns, require cyber-awareness. All organizations benefit from having some personnel trained as cyber warriors. A well-prepared organization is able to build sufficient defenses to ward off most attacks, tune detection systems to discover attempted attacks, and respond to compromises promptly in order to contain and eradicate the violation. The best defense starts with information, knowledge, and education. You need the right-people with the right skills and expertise to counter the ever present onslaught to cyber threats and attacks. Six main security disciplines and their corresponding competencies include:
- Asset Protection
- Threat Management
- Access Control
- Incident Management
- Configuration Management
- Contingency Planning
Continuing next week, this seven part series will teach you to use and understand each of these disciplines to better protect you and your company. | <urn:uuid:75fb77f5-32a6-4ad2-b353-685fd0506686> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2012/04/23/security-competencies-what-they-are-why-we-need-them/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00249-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943583 | 474 | 2.8125 | 3 |
Digital devices can now be small enough to carry everywhere we go. But there’s BIG data that they can bring. We are now facing a moment of convergence between science and technology, which has revealed many opportunities for growth in the digital health world, which will only continue from here on out. Constant care maintenance with endless possibilities, research with almost limitless reach, and organic, continuous conversation that will face unprecedented growth in the future decades to come.
According to a TED Talk given by Eric Topol on “The Wireless Future of Medicine” in 2009, he says, "we'll soon use our smartphones to monitor our vital signs and chronic conditions”. (Spoiler Alert: Think Apple’s iPhone integrating this into their OS system!) The future of mobile medicine will contribute positively to keep unnecessary patients out of hospital beds, as individuals will have a better understanding of their health through portable sensors. These new technological developments are the key to how patients can play an active role in constantly monitoring their own health. Indicators such as prediction of diabetes, breast cancer, sudden cardiac death, and atrial fibrillation can be determined by handheld wireless technologies displayed on small screens, like the iPhone or tablets. This new usage of technology won’t necessarily replace doctors but better utilize the available resources and assist doctors in their pursuit in providing the best patient care. Staying up-to-date will no longer be a concern, since all data will be in real time. Imagine seeing your own heartbeat rhythm on your mobile device by just wearing a bandaid-like device.
As I mentioned in my previous post on mobile medicine, the increasing cost of healthcare services necessitates some creativity to move forward to build a healthier nation. From small start-ups to large nonprofits to huge public corporations everyone is starting to think how they can use technology tap into the world of health. It’s no longer a question of if, but an exclamation of when digital technology will become the new norm for medicine.
How will this happen? A couple things I found to be very useful in the development of digital health technologies:
1. Build a leadership team that believes wholeheartedly in the campaign from the inside out.
For example, at Tidepool, the goal is to build a smart platform to communicate diabetes data through cloud-based technology to provide secure patient data in real time, integrated health records, and data visualization for both providers and patients to access. The core team consists of talented individuals that either have a passion to make a difference, have Type 1 Diabetes themselves, or have the clinical or technological background to support the growth of the technology. They are proudly a non-profit organization, to remain closely connected to the cause and find new innovative ways to help those suffering with Type 1 Diabetes.
2. Find ways for user generated content through social media to fuel research and get others on board.
The Health eHeart project is a data collection study to generate masses of information regarding heart health and heart disease through social media and smartphones. By using social media to organically gather data, they aspire to build the largest study regarding heart health than ever before. Perhaps this can lead to the understanding of the predictive factors, the cause, and prevention of heart disease in the future. Cool huh?
3. Find an innovative, yet easy to use concept design.
Apple is often known for its simplicity and user-friendly interface. The words out that even Apple is even jumping on board with their new Healthbook App which will combine both healthcare and fitness tracking to enable individuals to track things such as: fitness and weight to heart rate, blood pressure, blood sugar, hydration, and sleep tracking. Hopefully it will be integrated into their OS8 of their upcoming version of the iPhone. According to an article in Wired, “In other words, Apple Healthbook won’t just promote the fitness of iPhone users. It will boost the well being of an entire ecosystem of healthcare technology companies."
Small devices can and really will make a BIG difference, it’s just a matter of time. | <urn:uuid:9f83e4c5-e475-443b-8d9a-dbb13cbed434> | CC-MAIN-2017-04 | https://www.ibm.com/developerworks/community/blogs/025bf606-020a-48e9-89bf-99adda13e9b1/entry/small_and_mighty_future_of_a_healthier_nation?lang=en_us | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00369-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932751 | 828 | 2.625 | 3 |
Welcoming new citizens
Each year during the week of July 4, the Homeland Security Department's U.S.
Citizenship and Immigration Services holds ceremonies to honor people who have recently become citizens.
This year, USCIS hosted 150 July 4 citizenship ceremonies for 18,000 people in various locations nationwide -- many of them at prominent national and historic landmarks.
USCIS' Web site features a self-test about U.S. history and government. It notes that the self-test is not the actual test, which a USCIS officer administers, but it can be used as a study guide.
1. How many senators are there in Congress?
c. Based on the size of the population.
2. Who said, 'Give me liberty or give me death"?
a. George Washington.
b. Benjamin Franklin.
c. Patrick Henry.
d. Thomas Jefferson.
3. What is the basic belief of the Declaration of Independence?
a. That there are 50 states in the Union.
b. That all men are created equal.
c. That George Washington was the first president of the United States.
d. That the flag is red, white and blue.
4. How many times may a congressman be re-elected?
b. There is no limit.
c. Four times.
d. Six times.
5. Can the Constitution be changed?
c. Yes, but only by the president.
d. Yes, but only by the voters.
6. Why are there 100 senators in the Senate?
a. Because that is all that fits in the Senate Gallery.
b. Because it must have half the number of the representatives.
d. Two from each state.
7. What do the stripes on the flag mean?
a. One for each state in the union.
b. One for each Article of the Constitution.
c. The Cabinet.
d. They represent the 13 original states.
8. What are the 13 original states?
a. Connecticut, New Hampshire, New York, New Zealand, Massachusetts, Pennsylvania, Ohio, Delaware, Virginia, North Carolina, South Carolina, Georgia, Rhode Island, Maryland.
b. Connecticut, New Hampshire, New York, New Jersey, Massachusetts, Pennsylvania, Delaware, Virginia, North Carolina, South Carolina, Georgia, Rhode Island, Maryland.
c. Connecticut, New York, New Jersey, Massachusetts, Pennsylvania, Delaware, Kentucky, Virginia, North Carolina, South Carolina, Georgia, Rhode Island, Maryland.
d. Connecticut, New Hampshire, New York, New Jersey, Massachusetts, Pennsylvania, Delaware, Virginia, North Carolina, South Carolina, Georgia, Rhode Island, Maryland, Washington, D.C.
9. What are the three branches of our
a. Democratic, Republican, Independent.
b. Department of Justice, Department of State, Department of Defense.
c. Legislative, Executive, Judicial.
d. Police, Education, Legislative.
10. What is the supreme law of the United States?
a. The Declaration of Independence.
b. The Bill of Rights.
c. The Magna Carta.
d. The Constitution.
Click here for the answers to these questions.
Find a link to the self-test and other citizenship information on FCW.com Download's Data Call. | <urn:uuid:c45f933e-a533-4a63-b79c-60742e1262c0> | CC-MAIN-2017-04 | https://fcw.com/Articles/2006/07/10/Flipside.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00001-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.863838 | 700 | 3.3125 | 3 |
NASA used 141 wide-angle images to create a panoramic view of Saturn, its moons and rings -- and with Earth in the background. The result is a natural-color, panoramic portrait of Saturn as if it were seen through human eyes.
The images were taken by NASA's Cassini spacecraft this pas summer from about 898 million miles away. The spacecraft turned its cameras back toward Earth so it could grab a photo of the Saturn system, as well as its home planet, from hundreds of millions of miles away.
The space agency had invited people all over the world to run outside and wave up toward Saturn on July 19, the day the pictures were being taken. They then were asked to share their own pictures of the Wave at Saturn event on social networks, like Flickr and Facebook.
Cassini, which was launched in 1997 and has been orbiting Saturn for more than nine years, also snapped photos of Saturn and Earth on June 19. All of the photos were added to the mosaic.
"In this one magnificent view, Cassini has delivered to us a universe of marvels," Carolyn Porco, Cassini's imaging team lead, said in a statement. "And it did so on a day people all over the world, in unison, smiled in celebration at the sheer joy of being alive on a pale blue dot."
The space agency noted that the panoramic image sweeps 404,880 miles across Saturn and its inner ring system, including all of Saturn's rings out to the E ring, which is Saturn's second outermost ring.
For a bit of perspective, the distance between Earth and our moon would easily fit inside the span of the E ring.
On the days that Cassini turned its cameras back toward Earth, the sun had slipped behind Saturn - from the spacecraft's point of view - giving the spacecraft a clear, and not too bright, view of our planet. Normally, it's difficult for Cassini to take images of Earth because the brightness of the sun would damage its sensitive imagers.
Unlike two previous Cassini eclipse mosaics of the Saturn system - one in 2006, which captured Earth, and another in 2012 - the latest images are the first to capture the Saturn system with Earth in natural color.
The Cassini project is a joint effort between NASA's Jet Propulsion Laboratory, the European Space Agency and the Italian Space Agency. NASA reported that it plans to continue the mission through 2017, with the goal of capturing more images of Saturn.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed . Her email address is firstname.lastname@example.org. | <urn:uuid:fa34e9c8-e44b-4bcc-93b9-9aceb88ec564> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2485837/emerging-technology/nasa-reaches-way-out-for-selfie-of-earth-and-saturn.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00487-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960886 | 569 | 3.359375 | 3 |
The iPhone 4 and iOS 4 combined to create a highly accessible smart phone. iPhone 4S and iOS 5 have added new features that improve the accessibility and ease of use for people with disabilities.
The first indication of this is the iPhone 4S user manual which is provided in two accessible formats, plain HTML and tagged PDF. All the accessibility features are gathered together into one chapter of 15 pages covering:
- Routing the audio of incoming calls
- Triple-Click Home
- Large Text
- White on Black
- Speak Selection
- Speak Auto-text
- Mono Audio
- Hearing aid compatibility
- Custom Vibrations
- LED Flash for Alerts
- Universal Access in Mac OS X
- TTY support
- Minimum font size for mail messages
- Assignable ringtones
- Visual voicemail
- Widescreen keyboards
- Large phone keypad
- Voice Control
- Closed captioning
Most of these were available in previous release of iOS and made the iPhone an accessible device for many users with disabilities. In iOS 5 the two big accessibility additions are Siri and Assistive Touch. They both move assistive technology forward into new areas and at the same time suggest further improvements that could be made.
Siri is the fun extension so I will discuss Assistive Touch first.
Previous versions of the iPhone were not accessible to people with limited dexterity, for example the volume buttons on the side require a significant force to press down, also gestures such as pinch can be impossible for people with limited or no hand control. Assistive Touch enables all the button controls and any gesture to be controlled with just a single touch, either from a finger or a stick. When Assistive Touch is switched on a small semi-transparent circle appears, touching that brings up a series of menus that include mute, volume up and down, pinch and multi-finger gestures, each of these can be chosen and operated using just one finger or an equivalent pointing device.
Assistive Touch and VoiceOver do not work together; given that Assistive Touch has a visual interface and VoiceOver is designed for people with visual impairments this initially does not appear to be an issue. However, it is a limitation for people who use VoiceOver because they prefer having text read to them: people with dyslexia, or people who find reading difficult, or people with some vision that enables them to see the buttons but not to read the text.
Assistive Touch has another possible use: it could allow Apple to remove some of the physical buttons, this would simplify the design and build, and probably increase the reliability. If this was done then Assistive Touch would have to work with all functions of the new phone including VoiceOver. Assistive Touch is a significant new function that ensures access for people with a range of disabilities who were not supported previously - further improvements could support even more.
Siri is the voice-activated 'humble virtual assistant'; press the home button for a few seconds, or raise the phone to your ear, and Siri starts up and asks 'What can I help you with?', you can then make request such as:
- 'What am I doing on Saturday?'
- 'Call my wife'
- 'How many calories in a bagel?'
- 'Remind me to defrost the chicken when I get home'
- 'What is the time in Vienna?
- 'Book lunch with my son next Friday at one'
- 'Who are you?' with the response 'A humble virtual assistant'.
If you are in the USA there are further request types such as 'Find me an Italian restaurant in Pasadena'. Siri is in beta and Apple have not made the connections to suitable information bases outside the States yet. If the beta is a success, as I am sure it will be, then I assume that Apple will extend the information base to other countries.
This is certainly a significant accessibility feature as the voice activation makes it easy to use for people with vision impairments or limited manual dexterity. Not only does it simplify inputting the request but it also includes significant intelligence as to how to fulfil the request. Many people may not know how to find the time in Vienna but Siri can find the information. As Siri improves over time this intelligence will become a major benefit as users will not have to understand how to search the web and access apps to get the required results. This will be useful for most people but particularly for newcomers to technology, the elderly, some of whom are technophobic, and people with cognitive impairments. It is a step towards making technology transparent to the user.
Siri highlights an accessibility issue that has not been considered much, to date. If someone cannot speak clearly enough for Siri to understand then they will be denied the benefit of the intelligence. This will effect many, but not all, people with a significant hearing impairment, as well as people who have any disease that makes clear speech difficult or impossible. This is the first example I have come across of accessibility issues for people with speech impairments but I would expect other applications in the future to have similar issues. In a future release Siri should provide an alternative input channel besides speech, the obvious one would be text but a really exciting one would be sign language.
Overall the new accessibility features in iOS5 on the iPhone 4S are impressive and provide the base for further enhancements in future releases. | <urn:uuid:d6197f5a-e6be-4224-9c7d-c6b285b585be> | CC-MAIN-2017-04 | http://www.bloorresearch.com/analysis/iphone-4s-accessibility/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00305-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947302 | 1,102 | 2.65625 | 3 |
According to the British Library, the average life expectancy of a Web site is between 44 and 75 days and every six months, 10% of .uk Web pages vanish or are replaced by new material.
"With so much material now published online, and considering the growing influence of the Internet on British culture and society, the Web is now a key part of the nation's memory," said Margaret Hodge, the U.K.'s Minister of Culture and Tourism, in a statement. "A failure to record and preserve the UK domain would not just be detrimental to future research but leave a significant gap in our digital heritage."
The .uk Internet domain currently consists of about 8 million Web pages and is expected to reach 11 million by 2011. The British Library currently has 10 people manually archiving the 5 terabytes of U.K. Web page data.
IBM's contribution to the archiving project, BigSheets, is built atop the Apache Hadoop framework, a system for distributed data processing inspired by Google's MapReduce and Google File System, and developed in recent years by Yahoo and others.
"We think of these as big worksheets," said Rod Smith, VP of emerging Internet technologies at IBM, who stresses that the project goes beyond archiving. "You'd like to be more valuable to people than just an archive. In the British Library's case, you'd like to be known as the accurate holder of historical information."
BigSheets will allow British Library researchers, and eventually library patrons, to access Web archive data, conduct queries and visualize the results in forms like a tag cloud or pie chart, for example.
It's about ways to explore and sift data, says Smith.
Smith says it's still too early in the project's evolution to determine whether BigSheets will be adopted by other archiving organizations, like the Internet Archive. | <urn:uuid:251848ac-f1a6-4370-849c-d81c99eeb22d> | CC-MAIN-2017-04 | http://www.networkcomputing.com/storage/ibm-aids-british-library-web-archive/826708968?piddl_msgorder=asc | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00305-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938637 | 384 | 3.28125 | 3 |
NSF puts supercomputing power into Japan quake recovery
TeraGrid community offers high-performance computing and storage to Japanese colleagues
- By William Jackson
- Apr 26, 2011
The National Science Foundation is making emergency grants available through its Rapid Response Research (RAPID) program for research on the effects of the March 11 earthquake and tsunami that devastated much of Japan.
The grants, which are expected to be about $50,000 each for one year, could include time on NSF’s high-performance TeraGrid distributed computing platform. The TeraGrid community of researchers already is responding to the disaster by making computing and storage resources available to Japanese colleagues whose infrastructure has been disrupted by the quake and by helping with modeling programs to map and understand the impact of the quake and tsunami.
“I think this is going to be a long-term issue for the Japanese,” said TeraGrid program director Barry Schneider. “The amount of resources that are going to flow out will be minimal; it will have no impact on U.S. researchers.” But the impact on Japanese research programs could be great.
NSF opens new round of grants for TeraGrid time
Japan’s earthquake shows strength of social networking in crisis
According to NSF, the magnitude 9.0 quake has been estimated to be the most costly natural disaster on record, with as much as $330 billion in damage. More than 26,000 are dead or missing, an estimated 400,000 are homeless, and nearly a quarter of Japan’s geography has been altered. The damage and continuing crisis at the Fukushima Daiichi nuclear generating facility could result in a 9 gigawatt power deficiency this summer, resulting in power rationing. It could be years before all Japanese computing resources are back online.
The distributed nature of the TeraGrid can make its resources available for affected Japanese programs and can make discretionary research time available for emergency needs such as creating models to predict the distribution of radioactivity.
TeraGrid is a partnership between NSF’s Office of Cyberinfrastructure and 11 government, educational and research facilities that make computing time available on 15 supercomputing platforms. It is supported by grid software and hig- performance network connections, and provides data storage and management resources in addition to access to the computers themselves.
Total TeraGrid resources now exceed 2 petaflops of combined processing power (a petaflop is 1,000 trillion floating point operations per second) and 50 petabytes of online and archival storage. TeraGrid allocates more than 1 billion processor hours to researchers each year.
Initial TeraGrid responses to the Japanese disaster include the Keeneland Project at Georgia Tech, which has collaborated with Tokyo Tech to produce computer architecture and software for graphics processing. Tokyo Tech’s version of the system already is in production mode but is expected to face temporary shutdowns this summer when power demand increases. Georgia Tech researchers are working to make computing cycles and storage from the Keeneland project available to Tokyo Tech to keep its project up and running.
Indiana University has provided help to responders through quake simulation projects using satellite data and also has used change-detection algorithms to compare before and after satellite images of Japan to determine the extent of damage. Programs at Louisiana State University, the San Diego Supercomputer Center and the Texas Advanced Computer Center (TACC) also have made TeraGrid resources available and collaborated with Japanese researchers.
TeraGrid has responded to other disasters, including last year’s oil spill in the Gulf of Mexico.
“There was an immediate need for modeling,” Schneider said. TACC donated 6.5 million hours of computing time for simulation to predict the path of the spill. “That’s sounds like a lot, but it’s not,” Schneider said. TeraGrid manages about 2.7 billion computing hours a year.
The TeraGrid response to the Japan quake so far has consisted of ad hoc contributions of resources. Schneider said he expects to see RAPID grant applications for TeraGrid resources. RAPID grants also were made available for studying the impact of February’s quake in Christchurch, New Zealand.
RAPID grant applications must be received by April 29. Information for submitting applications is available from NSF.
William Jackson is a Maryland-based freelance writer. | <urn:uuid:589a174e-a482-44e8-86e3-72a132f658c1> | CC-MAIN-2017-04 | https://gcn.com/articles/2011/04/26/nsf-teragrid-computing-assist-japan-quake.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00213-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945537 | 903 | 2.5625 | 3 |
P.43 UPPER-CASE Function
The UPPER-CASE function returns a character string that is the same length as argument-1 with each lowercase letter replaced by the corresponding uppercase letter. The type of this function is alphanumeric.
UsageFUNCTION UPPER-CASE (argument-1)
Argument-1 must be class numeric or alphanumeric and must be at least one character in length.
- The same character string as argument-1 is returned, except that each lowercase letter is replaced by the corresponding uppercase letter.
- The character string returned has the same length as argument-1.
- This function is similar to the library routine C$TOUPPER except that the original data is not modified, and the entire string is converted.
Voice: (800) 262-6585 (U.S.A. and Canada)
Voice: (858) 689-4500
Fax: (858) 689-4550
Please share your comments on this manual
or on any Acucorp documentation with the
Acucorp Communications Department.
|© 1988-2003 Acucorp, Inc.|
All rights reserved. | <urn:uuid:83c85268-31d2-43a3-8db4-0237af3c9726> | CC-MAIN-2017-04 | https://supportline.microfocus.com/Documentation/AcucorpProducts/docs/v6_online_doc/gtman4/gt4p45.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00057-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.74198 | 258 | 2.59375 | 3 |
Bitcoin, and other digital currencies such as Litecoin and Peercoin, will change the way we exchange money. But they come with a major flaw: they can also be used to turn infected computers into devices that "print" money.
The beauty of the algorithm behind Bitcoin is that it solves two main challenges for cryptocurrencies - confirming transactions and generating money without causing inflation - by joining them together. Confirmations are given by other members of the peer-to-peer network, who in return are given new Bitcoins for their labour. The whole process is known as "mining".
When Bitcoin was young, mining was easy. You could earn Bitcoins by mining on a home computer. However, as the currency's value grew (from $8 to $1000 during 2013) - more people applied to do it, and, in response, mining became (mathematically) harder and required more powerful computers. Unfortunately, those computers don't have to be your own. Some of the largest botnets run by online criminals today are monetized by mining. Any infected home computer could be mining Bitcoins for a cybercrime gang.
Using botnets to mine is big business. The second-largest botnet in the world, ZeroAccess made tens of thousands of dollars a day by using the infected machines to mine for cryptocurrencies. This is especially effective when the infected machines have a high-end GPU chip on its video card.
Mining botnets such as these do not require a human user - just processing power and a network connection. The internet of things will bring millions more connected computers on to the web, embedded in devices such as cars and rubbish bins. And not all of them will have to have as high a spec as even a Windows PC to mine money: Litecoin, for example, uses more memory-intensive algorithms that can be run on a regular CPU rather than on high-end GPUs.
The mythical internet-connected fridge may at last have found an - admittedly criminal - reason to exist.
Mikko Hypponen Originally published in Wired UK 12/2013 | <urn:uuid:107c4ec9-a8cf-4bc6-b0b0-5173c5cd1f88> | CC-MAIN-2017-04 | https://www.f-secure.com/weblog/archives/00002644.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00571-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964217 | 418 | 3.203125 | 3 |
Located at the base of the brain, the cerebellum is responsible for the coordination, control and timing of movements. This structure allows us to walk run and perform other motor-related activities, such as throwing a baseball, without consciously having to orchestrate the individual movements. These cerebellum-driven motor activities are some of the most difficult to reproduce in the robot population.
Japanese researchers Tadashi Yamazaki of the University of Electro-Communications in Tokyo and Jun Igarashi at Okinawa Institute of Science and Technology Graduate University in Okinawa used NVIDIA GPUs to create a 100,000 neuron simulation of the human cerebellum. This is one of the largest simulations of its kind. They put their model to the test by linking it to a robot that relies on the virtual cerebellum to hit a ball.
This field of study, known as “biomimetic” robotics, relies on biological systems to inspire the design and engineering of materials and machines with the aim of developing a more compliant and robust class of robots than today’s current generation.
The research duo has written a paper describing their work and outlining their goals. Originally, they designed the large-scale network model to study the underlying mechanisms of cerebellum motor control, but it soon became apparent that the virtual cerebellum could be used to help robots interact with and respond to their environments in a more natural way, potentially advancing one of the most vexing problems in robot research.
One of the biggest challenges to real-time neural modeling is simulation speed. A CPU-only solution took 98 seconds to generate a response to a one-second stimulus. Owing to its massive parallel computing capability, the GPU-based solution, was one-hundred times faster, making it a suitable candidate for use in real-world scenarios. The parallel implementation of the “Realtime Cerebellum (RC)” platform was carried out using CUDA, NVIDIA’s unified software development environment for GPU programming.
To test their work, the researchers connected RC to a humanoid robot that they had built. Using RC as a real-time adaptive controller, the robot learns to hit a small plastic ball with a round racket. (See video below.)
What makes this work all the more significant is that it was done with inexpensive off-the-shelf hardware. The model used a PC equipped with a single GeForce GTX 580 NVIDIA GPU.
The project could lead to the development of a silicon cerebellum that would allow robots to interact with environmental stimuli in real-time. But this won’t happen overnight. Scientists still need to come to a consensus on a standard working model for the cerebellum and the robotic systems integration will also take time.
The researchers are aiming for a complete understanding of how this region of the brain works. This could one day open the door to better treatments for motor neuron diseases. | <urn:uuid:424622f1-cfb4-4ad1-a1e5-e2eda0643540> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/04/30/cuda-designed_robot_it_s_a_hit_/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00387-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945427 | 591 | 3.875 | 4 |
This page is for anyone looking to get a concise, fundamentals-based understanding of this ancient beverage—within a couple of minutes.
First some basics.
- Wine is an ancient beverage, dating back to around 6,000 B.C. in Eurasian Georgia.
- The word comes from the Indo-European for “vine”.
- Wine is made by fermenting the sugars within gapes. Fermentation is the conversion of the sugars to alcohol and carbon dioxide using yeasts and/or bacteria in an anaerobic (airtight) environment.
- The flavor of wine comes primarily from the grapes that were used, which is affected greatly by the year those grapes grew. Things like soil quality, weather, etc. all have an effect on the taste of the grape, and therefore the quality of that vintage.
- Other flavors seen in wines often come from the barrels it was made in, with oak being a dominant presence in many cases.
- Grape types are called varieties.
We’ll now discuss the basic types of wine and the subtypes within them.
Red wine is called red because the grapes used to make them are dark. Grape types are called varieties, and here are a few along with their main characteristics.
- Syrah (Shiraz)
Shiraz is the Australian name for Syrah. They are the same grape, and tend to do well in California, Australia, and France. The flavor is fruity with a sense of pepper and roasting meat.
Merlots are known for being easy to get into, as they’re more subtle than the other reds. It’s grown all over, including California, Chile, and Australia. The taste often includes muted blackberry and/or plum.
- Cabernet Sauvignon
Cabernet is one of the most famous varieties, and is often blended with other grapes, including Merlot. It’s best paired with red meat, has a full-bodied taste, and is grown most successfully in California, Australia, and Chile.
Malbecs come from the French Bordeaux area, and is widely grown in Argentina where it’s the most popular red grape. Its flavor varies greatly based on where it was grown, but generally has hints of plum, berries, and spice.
- Pinot Noir
Pinot Noir is a premier red grape that’s rarely blended. Delicate and fresh, it’s often eaten with salmon, chicken, and sushi. It’s most often produced in Burgundy France, Austria, and California.
As you may have surmised, white wine is called white because the grapes used to make them are lighter. Here are a few white varieties along with their main characteristics.
Chardonnay comes from Burgundy, France (originally), but is now grown in California, France, Australia, and many other locations as well. Chardonnays are typically dry with citrus flavors..
- Sauvignon Blanc
Sauvignon blanc is grown in the Bordeaux region of France as well as New Zealand, and is often blended with semillon. Flavors include bell pepper or freshly mown grass.
- Pinot Grigio
Pinot Grigio is mostly grown in Italy, but is also grown in California. It’s a dry wine with strong acidity and strong fruity flavor. Pairs well with Thai or other spicy cuisines.
Riesling is much lighter and crisper than something like a Chardonnay, and goes well with tuna and salmon. It’s a German grape with limited success in California.
Rosé wines are very similar to white wines, but with some of the color from reds added. They are widely considered to be the oldest type of wine made.
Sparkling wines contain significant amounts of carbon dioxide, making it fizzy. The carbon dioxide may be part of the fermentation process or may be done artificially through injection.
The most common confusion about sparkling wine centers around Champagne. Champagne is a region in France, where the genuine thing called Champagne is produced. Many other sparkling wines are called, and considered, to be Champagne, but are not. This would be like calling a wine Napa, and then having it get produced all over the world as “Napa” when it’s not from there.
Bottom line: Make sure the sparkling wine is actually from Champagne before you call it that.
Blends are wines that are made from a combination of grapes. Some are mixes of many grapes in nearly equal parts. Others are mostly one grape with just a bit of another.
Here we’ll list a few different characteristics that are commonly used to describe various wines.
Dryness is a scale of sweetness, meaning that you have dry on one end and sweet on the other. Dryness or sweetness is accomplished by fermenting more or less of the sugar from the grapes. The more is fermented into alcohol, the more dry it is. The more sugar that’s left behind the more sweet it is.
See Dryness above.
Tannins are materials found in plants, with about half of the dry weight of a given leaf being tannin. Tannin adds bitterness and complexity to wine, and is found more in red wine than in white. You also get some tannin flavor from the wood barrels that wines are made in.
Roundness is desirable in a wine, and generally means it’s balanced—hitting your mouth in many places at once. It means the high tannin kick is not present, like when a wine has fully matured.
Body is a significance in presence in the mouth, like a weight and fullness.
Acidity is key to wine, as it gives it unique characteristics. Hot years produce less acidity than cool years.
Astringency is a harshness or coarseness, but the reasons can vary. Good wines that are too young, i.e. where the tannins have not yet been absorbed, tend to be astringent. But it can also come from the wine just not being made well.
Angular wines are like trying to drink sharp triangles. They hit you in strange parts of your mouth while not touching other parts. They are the opposite of round.
Big generally means strong flavor that hits you in multiple parts of your mouth at the same time.
Bright wines are high in acid, and tend to make your mouth water.
Hopefully this has been informative. Contact me below if you think anything should be added or adjusted.
- There is also another main type of wine, called a Fortified, but it’s significantly less common.
- The Wikipedia article on wine.
- There are many more grape types than the ones mentioned as well, but I’m trying to keep this to the basics. | <urn:uuid:463c2e0b-df03-4f16-bace-e3316fae8215> | CC-MAIN-2017-04 | https://danielmiessler.com/study/wine/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00295-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961558 | 1,444 | 3.078125 | 3 |
Let’s take a look at some of the things that we can do with MPLS. As we know, one of those things is Layer-3 MPLS VPNs, a peer-to-peer VPN scheme in which the PE and CE devices are both routers. In this scheme, a PE has a VRF for each customer to which it is attached, and we run MP-BGP to advertise the customer routes and associated VPN labels from one PE to another. From the customer’s perspective, the WAN provider’s MPLS cloud appears as a router, as shown in Figure 1.
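As a quick refresher on how that per-customer separation is configured, here is a minimal sketch in classic Cisco IOS-style syntax. The VRF name, route distinguisher/route-target values and addresses are invented for illustration, and exact commands vary by platform and software release:

```
! One VRF per customer on the PE (all values hypothetical)
ip vrf CUST_A
 rd 65000:1                     ! route distinguisher keeps routes unique
 route-target export 65000:1    ! RTs control which VPNv4 routes MP-BGP
 route-target import 65000:1    ! exports/imports for this customer
!
interface GigabitEthernet0/1
 description Link to Customer A CE router
 ip vrf forwarding CUST_A       ! place the CE-facing port into the VRF
 ip address 192.168.1.1 255.255.255.252
```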
Now let’s imagine that a customer has only two sites which they’d like to connect using a WAN service provider. The way this was done in the past would typically be with a leased line. This works, but it requires the customer to have a CE router at each site that converts between the LAN encapsulation (typically Ethernet) and the WAN encapsulation (HDLC, PPP, Frame Relay or whatever).
Wouldn’t it be nice if the customer could just run Ethernet to the PE? If so, they wouldn’t need the CE router. Instead of setting up a VRF for this customer on the PE, let’s go into the interface on the PE to which the customer is attached, and assign a unique “CID” (Circuit ID). On the PE on the far side, we’ll do the same thing, using the same value for the CID. Then, instead of MP-BGP, we’ll use TLDP (Targeted LDP) to connect the interfaces on the two PEs. From the customer’s perspective, the provider’s MPLS cloud appears as a LAN repeater, as in Figure 2.
This arrangement, which is effectively a Layer-1 VPN, is referred to as “Pseudo Wire” (RFC 3916), and it’s very popular with both customers and providers. You might also see it called “VLL” (Virtual Leased Line), “VPWS” (Virtual Private Wire Service), “TLS” (Transparent LAN Service) or “EoMPLS” (Ethernet over MPLS). Whatever it’s called, it acts like a private point-to-point link between the two customer sites.
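On Cisco gear, binding a customer-facing Ethernet port to such a Pseudo Wire is a one-liner under the interface. Here is a sketch in classic IOS-style syntax, with hypothetical addresses; note that the VC ID (100 here) plays the role of the CID described above and must match on both PEs:

```
! PE1 -- no CE router required; the customer plugs straight in
interface GigabitEthernet0/0
 description Customer A, Site 1
 xconnect 10.0.0.2 100 encapsulation mpls   ! peer PE's loopback, VC ID 100
```

PE2 would mirror this with "xconnect 10.0.0.1 100 encapsulation mpls", and TLDP then signals the pseudowire labels between the two PEs.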
What if the customer has more than two sites? The legacy solution might be Frame Relay, but again, this requires a router at each customer site to handle the encapsulations. Instead, we could either run multiple Pseudo Wires (anything from a hub-and-spoke up to a full mesh), or we could have the provider’s MPLS cloud emulate a LAN switch, as shown in Figure 3.
Since the MPLS cloud is now functioning as an Ethernet switch, while keeping customer traffic separated, it’s effectively a Layer-2 VPN. This is commonly referred to as “VPLS” (Virtual Private LAN Service). With VPLS, the CIDs can be advertised using either MP-BGP or TLDP. By the way, Cisco refers to this Layer-1 and Layer-2 MPLS VPN stuff as “AToM” (Any Transport over MPLS).
What about customers that are using other Layer-3 protocols, such as IPv6, IPX, AppleTalk, and so forth? Since the P routers are doing only label switching, they never care about any customer’s routed protocols. The PE routers would need to understand the customer’s routed protocols for Layer-3 VPNs, but no equipment vendors (including Cisco) have implemented VRFs for anything but IPv4 and IPv6. How then to handle the other routed protocols?
The answer is to use Pseudo Wire (VPWS) or VPLS, for which the PEs don’t care about the customers’ routed protocols. Likewise, multicasting can also be supported by making the provider’s MPLS cloud appear as a Layer-1 or Layer-2 service.
From the provider perspective, MPLS is attractive because all of these services (Layer-1, Layer-2 and Layer-3 VPNs) can all be carried simultaneously over the same infrastructure, with only the PE routers requiring configuration on a per-customer basis.
Well, that’s it for MPLS … for now, anyway!
Author: Al Friebe | <urn:uuid:37624ca9-da7d-4e2e-8076-0fc894ab5b28> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2010/07/07/mpls-part-12/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00019-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930669 | 949 | 2.671875 | 3 |
What is it?
IBM's DB2 is a relational database management system (RDBMS), widely used in enterprises. In recent years, IBM has been working to widen the range of platforms its flagship relational database runs on, and also to deepen its capabilities. Recent additions to DB2 9.5 included substantial enhancements to DB2 Data Warehouse Edition (DB2 DWE). Combined with IBM's recent purchase of Cognos, the surge of activity in data warehousing signals a new focus on business intelligence.
With DB2 9.1, IBM introduced pureXML, turning DB2 into a hybrid relational/XML database. IBM has also been increasing support for other suppliers' implementations of SQL to meet the competition from both traditional suppliers such as Oracle and Microsoft, and open source databases such as MySQL.
IBM is progressively introducing automation, such as "autonomic" tuning, to assist with the administration of DB2, but the company has certainly not yet deskilled the job of the database administrator: DB2 DBA certification is widely regarded as among the most challenging that IBM offers.
Where did it originate?
DB2 was arguably the first commercially available database to build on Ted Codd's relational model, which was developed at IBM in the 1970s. Oracle, first shipped in 1979 with basic SQL functionality, is sometimes claimed to be the first commercial RDBMS. DB2 was first made available on MVS mainframes in 1983. In 1996, DB2 was transformed into the object-relational DB2 Universal Database, and support for Windows, Solaris and HP-UX was added, followed by Linux.
What's it for?
XML data can be queried using either SQL or XQuery, and applications can access and store XML and relational data. DB2 has application programming interfaces for old and new languages: Rexx, PL/I, Cobol, RPG, Fortran, C and C++, Java, Python, Perl, PHP and Ruby, with support for Microsoft's .net Common Language Infrastructure.
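To give a flavour of the hybrid relational/XML model from one of those APIs, here is a minimal sketch using the ibm_db Python driver. The connection string and the customers table with its XML column info are hypothetical; XMLQUERY and XMLEXISTS are the standard SQL/XML functions DB2 supports over pureXML columns:

```python
import ibm_db

# Hypothetical connection details -- substitute your own.
conn = ibm_db.connect(
    "DATABASE=sample;HOSTNAME=localhost;PORT=50000;"
    "PROTOCOL=TCPIP;UID=db2inst1;PWD=secret;", "", "")

# Mix relational filtering with XQuery over a pureXML column.
sql = """
SELECT XMLQUERY('$d/customer/name/text()' PASSING info AS "d")
FROM   customers
WHERE  XMLEXISTS('$d/customer[city = "London"]' PASSING info AS "d")
"""

stmt = ibm_db.exec_immediate(conn, sql)
row = ibm_db.fetch_tuple(stmt)
while row:
    print(row[0])          # the extracted XML value
    row = ibm_db.fetch_tuple(stmt)
ibm_db.close(conn)
```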
DB2 9.5 increases support for Perl, PHP and the Ruby on Rails framework: for example, the DB2 Perl driver now supports pureXML. There is a new IBM Data Studio to replace the DB2 Developer Workbench, as well as integration with the IBM-backed Eclipse integrated development environment, and Microsoft's Visual Studio and other .net IDEs.
What makes it special?
Version 9.5 introduced the IBM Data Server Driver for ODBC, CLI, and .net to simplify mass application deployment on Windows.
How difficult is it to master?
Basic DB2 certification will take five days of training for those already familiar with SQL databases, and up to 20 days for beginners. There are various routes in for developers with C, Java and other languages.
DB2 9.5 brought in more autonomic functionality to simplify administration. There is a GUI for administrators that uses lots of wizards, but old hands use the far more flexible and scriptable command-line interface.
What systems does it run on?
As well as the Linux, Unix, Windows (LUW) version, DB2 is available for IBM's mainframe operating system z/OS, with some features exclusive to the mission-critical mainframe environment. DB2 LUW comes in a full-feature Enterprise Edition and reduced-feature editions for workgroups and developers (DB2 Express), plus a further-reduced free download, DB2 Express-C, for Linux or Windows.
Rates of Pay
Database administrators earn £30,000 to £40,000. A premium is paid for DB2 data warehousing skills.
IBM's own range of classroom and online training can be found on its UK site. The big generalist training companies provide DB2 courses, as do many small specialists. For experienced database specialists, more in-depth information on installation, administration, troubleshooting and application development can be found on IBM's developer site, and also on IBM's developerworks. | <urn:uuid:7eff197a-f894-47d0-a637-2ca3a221bc7f> | CC-MAIN-2017-04 | http://www.computerweekly.com/opinion/Hot-skills-DB-95 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00415-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937874 | 851 | 2.8125 | 3 |
The COS partition is not created in the extra disk space available on the same hard disk. The script uses the existing COS partition and divides it into three partitions; it cannot detect and use the extra space available on the same disk.
To work around this issue, note the following:
The script divides the existing single COS partition into three COS partitions.
If a new, empty hard disk is present, the script creates the COS partitions on the new hard disk, taking any existing COS partition into account.
The script gives preference to an empty hard disk over the existing COS partition when creating COS partitions.
Verify that the script creates the COS partitions and that the system ends up with at least three COS partitions.
NOTE:The script does not use the free space available in the same hard disk or other disk for COS partition creation. When the script creates partitions the disk should be EMPTY. | <urn:uuid:a7a3878c-a5ef-4f51-a257-2b94c971d466> | CC-MAIN-2017-04 | https://www.netiq.com/documentation/novellaccessmanager31/accessgatewayhelp/data/brubmjh.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00323-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.823404 | 189 | 2.578125 | 3 |
What’s up with all the G’s and my Mobile Use? Gee-Wiz how many G’s do I need?
There are all kinds of G's being bandied around in mobile data speed and use…2G, 3G and now 4G. These all refer to speed: the ability to download files at a certain rate. With the popularity of advanced devices (smartphones, mini-computers and now tablets), the "need for speed" is a real concern for users of these devices. Let's break this down.
2G is the "older" speed: fine for casual data users, but with speeds that resemble a dial-up connection.
3G is currently the most widely available speed, similar to a low-end DSL or cable connection.
4G is just now becoming available. Sprint has the largest coverage currently, up to 10 times the speed of the 3G technology. Streaming video, music, video chat, teleconferencing–the list goes on and on. This speed is what people experience with a “fast” DSL at home or their work connection.
Now we get into the other G…Gigabytes of data transmitted or “used”.
What is a gigabyte? Again, let's break this down. Data is measured in units; the smallest unit we most frequently hear about is the kilobyte. The next unit is the megabyte, and finally the gigabyte.
To explain this, let's use a typical email without an attachment (attachments count separately). Our example email is 10 kilobytes. A megabyte is 1,024 kilobytes and a gigabyte is 1,024 megabytes. A megabyte would hold about 102 of these emails, and a gigabyte roughly 104,800 of them. Sounds like a lot, huh!
But let’s remember that’s without any attachments, say a spreadsheet (for example 1 megabyte in size) or a picture (3 megabytes in size), or a 1 minute video (30 megabytes in size). Open these on a mobile device and the data use goes up dramatically.
Streaming music? Downloading movies? Watching TV? These can be data hogs and vary dramatically in size.
Most of the carriers (Sprint, Verizon, AT&T etc.) package data in gigabyte increments: 1 gigabyte up to 5 gigabytes; however, go over that and the $$ add up quick.
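Some rough math using the example sizes above: a 2 gigabyte plan gives you roughly 2,048 megabytes to spend each month. That works out to around 680 photos (at 3 megabytes each), but only about 68 minutes of video (at 30 megabytes per minute). Stream a little video every day and you can blow through a capped plan surprisingly fast.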
Sprint 4G is currently unlimited and a pretty safe bet (if you are in one of their 55 soon to be 60+ coverage areas).
Bottom line…try to figure out just what you want to use mobile data for and if you need some assistance, don’t hesitate to contact us. | <urn:uuid:e4068ed2-ed16-4a33-8928-6f28a7bd05ff> | CC-MAIN-2017-04 | http://www.cisp.com/blog/?p=186 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00167-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.90321 | 597 | 2.59375 | 3 |
Aerial sensors used to map the contours of the Earth’s surface could help Pinellas County, Fla., officials assess how future property values along the coast will be affected by flood risk and the ever-increasing costs of insuring against it.
The property appraiser’s office is meticulously reviewing data collected across the Tampa Bay area several years ago with airborne lidar sensors, which use lasers to gauge land elevation relative to trees, buildings and other objects.
Examining each home or office building to estimate its elevation over flood levels should make it easier to analyze how insurance costs might affect future land values. It also could be useful for challenging the Federal Emergency Management Agency’s controversial flood maps, which have a big effect on rates.
Although Congress moved to slow premium hikes in the federal flood program for older homes in risky areas, rates still can go up 5 percent to 18 percent a year, meaning some homes could be stuck with unaffordable policies within a few years.
“It’s not an issue for this year’s tax roll, but if the rate continues to increase 15 percent a year, there is going to be a point where they double and then they triple,” Pinellas County Property Appraiser Pam Dubov said.
“At some point in time, houses on the same street that normally would have sold for the same prices may start selling for different prices.”
A number of them might be difficult to sell if a buyer sees an exorbitant flood policy on top of their mortgage. But determining how insurance costs will affect each home’s value has proven to be a puzzle, Dubov says.
In any given neighborhood, a home that is only one foot lower than neighboring residences could face a premium of several thousand dollars more, an obvious negative for its resale value.
A more hopeful scenario would be that private insurers such as Lloyd’s of London continue to lower rates to undercut the government’s flood program and more of them enter the marketplace, which was the goal of legislation passed this spring in the Florida Legislature.
The harshest of those rate increases were repealed with a bipartisan law that went into effect at the beginning of this month.
The linchpin in debates about accurate risk rating is the height of a home’s first living floor compared with the elevation of the land.
The Federal Emergency Management Agency produces generalized maps that rate an area’s risk based on its proximity to water, threat of storm surge and topography, but the only sure way for a homeowner to gauge the danger is to hire a surveyor to take detailed measurements.
The problem is most of the homes targeted by the government for increased rates don’t have professional elevation certificates, so they often are lumped together in the same risk category as their neighbors.
That’s where lidar comes in.
Several years ago the Tampa Bay region was mapped with Light Detection and Ranging, or lidar, with the goal of modeling storm evacuation scenarios.
The technology, often attached to a helicopter or aircraft, uses lasers to measure the distance between sensors and features on the Earth’s surface to produce detailed topographical data.
The information can be used to estimate a building’s elevation in relation to its surroundings as researchers compare the lidar data with aerial photographs, says Alan Lulloff, science program director for the Association of State Floodplain Managers in Madison, Wis.
The one snag can be assessing the height of the first living floor.
Even if you can determine a building’s height, it can be hard to tell from aerial images whether the bottom floor of a house is raised on stilts, which makes a substantial difference in flood risk.
“Unless you get on the ground and look at them, you can’t often tell if they’ve been elevated or not,” Lulloff said.
Pinellas County’s property appraiser is looking at newer homes that already have up-to-date elevation data and comparing them with the lidar data to see how closely they match up.
“Having the certificates of elevation we have provides the means to validate the technology we’re using,” she said.
Staff is also looking to accurately pinpoint where a building is on its parcel, whether at a high or low point.
It will be some time before they finish examining each property, Dubov said.
Should insurance rates stabilize in the future, she said, her office might not even use the data, but there’s a good likelihood elevation will affect values in the long run.
Having more comprehensive data about flood risk to the county’s homes also could result in lower premiums from the government, which rates communities based on broad safety measures, flood mitigation and property owner education.
Of course, if a homeowner wants to dispute the property appraiser’s valuation, Dubov says they always can order an elevation certificate and mail it to her office. “If they disagree,” she said, “they can prove us wrong.”
©2014 the Tampa Tribune (Tampa, Fla.) | <urn:uuid:3557a3da-20b5-45e5-a88d-52d41fe18dbe> | CC-MAIN-2017-04 | http://www.govtech.com/data/New-Data-Add-Dimension-to-Flood-Risk-Assessment-in-Florida-County.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00075-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944217 | 1,073 | 2.625 | 3 |
Many people and businesses unknowingly leave their private information readily available to hackers because they subscribe to some common myths about computer and network security. But knowing the facts will help you keep your systems secure. Here are some answers to these myths.
MYTH: “I have virus protection software so I am already secure.”
FACT: Viruses and security threats are two completely different things. Your anti-virus software will not tell you about any of the more than 10,000 security threats for which a good vulnerability assessment will test your network. These include whether your financial or customer records are exposed to the Internet or whether your computer is vulnerable to various hacker attacks.
MYTH: “I have a firewall so I don’t need to worry about security threats.”
FACT: Firewalls are great and typically provide a good layer of security. However, firewalls commonly perform services such as port forwarding or network address translation (NAT). It is also surprisingly common for firewalls to be accidentally misconfigured (after all, to err is human). The only way to be sure your network is really secure is to test it. Among the thousands of security threats a good analysis tests for, there is an entire category specifically for firewall vulnerabilities.
MYTH: “I have nothing to worry about; there are too many computers on the Internet.”
FACT: People understand the need to lock their homes, roll up their car windows, and guard their purses and wallets. Why? Because if you don’t then sooner or later you will be a victim. But people are just starting to be aware that the same is true with their computers and networks. A single hacker can scan thousands of computers looking for ways to access your private information in the time it takes you to eat lunch.
MYTH: “I know the security of my network and information is important, but all the solutions are too expensive and/or time consuming.”
FACT: While it is true that some network security products and services are very expensive and time consuming, you can find good network analysis tools that are very robust, efficient and effective, yet still affordable.
MYTH: “I can’t do anything about my network’s security because I’m not a technical wizard.”
FACT: While network security is a technical problem, a sound remote analysis report should provide a solution that is comprehensible to non-technical people and geeks alike. If it’s a true remote automated system you won’t have to download, install or configure anything. A good report will include a business analysis that explains technical issues in plain English with plenty of charts, graphs, and overviews to illustrate it. It must be easily comprehensible by non-technical business people and home users.
MYTH: “I know what is running on my computer and I am sure that it is secure.”
FACT: Only 2% of networks receive a perfect score on our security scans. That means 98% of them have one or more possible security threats or vulnerabilities. These threats could exist in your operating system, the software you run, your router/firewall or files.
MYTH: “I tested my network a few months ago, so I know it is secure.”
FACT: New security threats and vulnerabilities are discovered daily. Telspace has a database of security threats that grows by 5-10 new vulnerabilities every week. Sometimes we have even seen more than 80 new security threats crop up in a single month! Just because your network tested well this month, does not mean it will still be secure next month – even if you don’t change anything. You should frequently update your anti-virus software and analyse your security regularly.
MYTH: “Network and computer security is only important for large businesses.”
FACT: In reality, nothing could be further from the truth. Whether you are a casual home user or a large enterprise, your computer contains valuable and sensitive information. This could be financial records, passwords, business plans, confidential files and any other private data. In addition to your private information, it is also important to protect your network from being used in denial of service attacks, as a relay to exploit other systems, as a repository for illegal software or files, and much more.
MYTH: “A “port scan’ is the same thing as a security analysis scan and some web sites already give me that for nothing.”
FACT: Actually a port scan and a security analysis scan are two very different things. In general terms your computer’s Internet connection has 65,535 unique service ports. These ports are used both by software running on your computer and by remote servers sending data to your computer (when you view a web page or check your email). A port scan will simply tell you which service ports are being used on your computer. It does not test any of these ports for security threats nor does it tell you where your network is vulnerable to possible hackers or attacks.
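To see just how little a port scan reveals, here is a minimal sketch in Python (the host address is a placeholder from the documentation range; only scan machines you own). All it can report is whether something answers on a port; it says nothing about whether the service behind that port is vulnerable:

```python
import socket

def port_is_open(host, port, timeout=1.0):
    """Attempt a TCP connection; True means something is listening."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Check a handful of well-known service ports.
for port in (21, 22, 25, 80, 443):
    state = "open" if port_is_open("192.0.2.10", port) else "closed"
    print(f"port {port}: {state}")
```

A real security analysis starts where this ends, probing each open service for the thousands of known vulnerabilities mentioned above.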
MYTH: “The best time to deal with network security is when a problem arises.”
FACT: The best time to deal with network security is right now, before a problem arises and to prevent you from ever becoming a victim. Think about it – the best time to lock the doors in your home is before a robbery occurs. Afterwards it is already too late, the damage has been done. This is why it is critical to analyse your network’s security now, to find and fix the vulnerabilities before a break-in happens. | <urn:uuid:b57798e6-5083-45a3-8341-72a7b45c1b99> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2006/03/17/revealing-the-myths-about-network-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00075-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954944 | 1,170 | 2.765625 | 3 |