The accomplishment stems from the work the two companies have been doing together on a type of non-volatile memory called phase-change memory, or PCM. The research partners say they have successfully stacked multiple layers of PCM arrays within a single 64 Mb die. In demonstrating a vertically integrated memory cell comprising PCM and an ovonic threshold switch, researchers have shown that it's possible to use the technologies to build chips that cost less and deliver higher performance and memory density than the traditional NAND flash memory used in a variety of applications, such as system memory in computers and handheld devices and the solid-state drives used as an alternative to hard drives in PCs and servers.

The expense and storage limitations of today's solid-state drives have been a roadblock to broader use of the technology in data centers, even though SSDs are faster and more reliable than hard drives.

One reason PCM could prove a better alternative to NAND is that it uses far less voltage. Where NAND uses an electrical charge to store and read memory, PCM uses heat on chalcogenide glass, the same material used in re-writable optical media such as CDs and DVDs. Using less voltage means PCM can store much more memory in a single die while using far less power: NAND's reliance on stored electrical charge makes it difficult to scale the memory below 20 nanometers and remain stable, while PCM can scale down to less than 5 nm.

However, while PCM's reliance on the temperature sensitivity of chalcogenide glass has major advantages, it is also the memory type's most notable drawback. Switching to PCM may require major changes to manufacturers' production processes, which could make it difficult to take the technology from the lab to the commercial world. Intel and Numonyx's latest accomplishment is strictly a research milestone, not a production or commercial one.

Nevertheless, researchers say the progress in PCM is encouraging. "The results are extremely promising," Greg Atwood, senior technology fellow at Numonyx, said in a statement released Wednesday. "The results show the potential for higher density, scalable arrays and NAND-like usage models for PCM products in the future. This is important as traditional flash memory technologies face certain physical limits and reliability issues, yet demand for memory continues to rise in everything from mobile phones to data centers."

Intel and Numonyx plan to present a paper on their achievement at the International Electron Devices Meeting in Baltimore, Md., on Dec. 9. Intel and STMicroelectronics last year presented a paper describing a high-density, multi-level cell memory device using PCM technology; moving from a single bit per cell to a multi-level cell significantly increased the density of the memory type. That paper was presented at the International Solid State Circuits Conference in San Francisco.

Numonyx is a joint venture formed last year by Intel, STMicroelectronics and private equity firm Francisco Partners to absorb the two tech companies' money-losing flash memory businesses. When formed, Numonyx was expected to generate $3.6 billion in annual revenue, mostly from flash memory products for consumers and industry. Its assets included research and development, manufacturing, and sales and marketing operations from Intel and STMicroelectronics.
Configuration Management Database

A Configuration Management Database (CMDB) is a centralized repository that stores information on all the significant entities in your IT environment. These entities, termed Configuration Items (CIs), can be hardware, installed software applications, documents, business services, and even the people that are part of your IT system. Unlike an asset database, which comprises a simple collection of assets, the CMDB is designed to support a vast IT structure in which the interrelations between the CIs are maintained and supported.

Configuration Item Types (CI Types)

The CIs within the CMDB are categorized into specific Configuration Item Types (CI Types). Each CI Type is represented with Attributes and Relationships that are unique to the CIs classified under it. Attributes are data elements that describe the characteristics of CIs under the CI Type. For instance, the attributes for the CI Type Server can be Model, Service Tag, Processor Name and so on. Relationships, on the other hand, denote the link between two CIs, identifying the dependency or connection between them. A CI Type can form a hierarchical structure by drilling down further into Sub Types. Each Sub Type inherits the Attributes and Relationships of its parent CI Type.

The Relationship Map is designed to provide the ability to understand the dependencies between CIs. The relationships between CIs are discovered automatically while populating the CIs into the CMDB through Active Directory or LDAP import, or by performing a Windows Domain Scan or Network Scan. The Relationship Map helps you analyze the impact a CI has on a business service and identify the root cause of that impact, so that appropriate measures can be established to eliminate recurring issues in your organization.

- Tracks and manages all the Configuration Items (CIs) in the IT environment.
- Establishes and maintains relationships between CIs.
- Ability to add default Attributes and Relationships to a CI Type.
- Tracks all your CIs and their details in a centralized repository – the CMDB.
- With the help of the Relationship Map, the impact a CI has on other CIs and the root cause of that impact can be identified, and appropriate measures established to eliminate the issue.
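As a rough illustration of these concepts (CI Types with inherited attributes, and CIs linked by relationships), here is a minimal Python sketch. It is not ManageEngine's data model or API; the class names, attribute lists and example values are invented for this illustration.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CIType:
    name: str
    attributes: list
    parent: Optional["CIType"] = None  # a Sub Type inherits from its parent

    def all_attributes(self):
        inherited = self.parent.all_attributes() if self.parent else []
        return inherited + self.attributes

@dataclass
class CI:
    name: str
    ci_type: CIType
    values: dict = field(default_factory=dict)
    relationships: list = field(default_factory=list)  # (kind, other CI)

server = CIType("Server", ["Model", "Service Tag", "Processor Name"])
web_server = CIType("Web Server", ["HTTP Port"], parent=server)  # a Sub Type

web01 = CI("web01", web_server, {"Model": "R740", "HTTP Port": 443})
db01 = CI("db01", CIType("Database Instance", ["Engine"]), {"Engine": "MySQL"})
web01.relationships.append(("depends on", db01))  # one edge of a relationship map

print(web01.ci_type.all_attributes())
# ['Model', 'Service Tag', 'Processor Name', 'HTTP Port']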
Human beings have two major methods of communication: words and pictures. Mind maps bring these two together in a highly structured and powerful way. Up to now, creating mind maps, whether on paper or using a computer, has required a significant level of dexterity: the keyboard was required for entering text, and the mouse for creating the structure of the diagram. This has meant that mind mapping software has been unavailable to people with disabilities such as quadriplegia, MS, RSI and Parkinson's disease.

A recent collaboration between MindJet and Nuance has enabled the creation of complete mind maps solely through the use of voice commands, building on the excellent voice recognition capabilities of Nuance's Dragon NaturallySpeaking (see my article). To begin building a mind map, the user issues the instruction 'New topic', and the software then starts building a map based on the voice commands. Over 150 voice-enabled functions have been incorporated into this release, and users can transcribe and create MindManager 8 visualisations, which can be manipulated, crawled, scaled, zoomed, printed and exported by voice. Text can be dictated directly into topics, with formatting, editing and search capabilities.

These functions have been developed to help able-bodied people create mind maps with greater freedom and speed. I can imagine it being used in a brainstorming session, where the facilitator is interacting with the participants by moving between them, and is still able to create the map without being chained to the keyboard and mouse. Exciting as that scenario may be, I am really more excited about the idea of the map being created by a person with no use of their arms or legs. The map might just be a way for the disabled person to organise their thoughts more easily, or it might be a way for them to communicate their ideas to a wider audience. In fact, the technology should enable them to lead the brainstorming session and create maps on the fly. This is a great example of technology that creates a level playing-field for people whose brain is willing but whose body is less able.

This collaboration provides a second benefit to people with disabilities. To make voice activation possible it was necessary to ensure that all commands were available via the keyboard, as the voice commands get translated under the covers into keyboard strokes. This has a side benefit for people who cannot, or prefer not to, use the mouse: they can navigate, create and modify maps without resorting to the mouse. I found this particularly useful when exploring an existing map, as I could very quickly open and close sections of the map, concentrating on areas of interest.

The package is called VoxEnable and was developed for Nuance and MindJet by their partner Citnexus. The product is very reasonably priced, especially if it is bought as part of a package with Dragon, MindManager or both.
Saudi Arabia announced late Wednesday that five more people have died and two others are undergoing intensive treatment as a result of the novel coronavirus (NCoV), a cousin of SARS that causes kidney failure and pneumonia. The latest in a slow trickle of information brings the toll to 16 deaths among 24 known infections — and, not unlike China with its bird flu outbreak, the Saudi government isn't exactly being straightforward about how many people are sick. If humans are dying, why don't we know more about how and why?

The Saudi Health Ministry, according to the BBC, said in a statement that it is taking "all precautionary measures for persons who have been in contact with the infected people... and has taken samples from them to examine if they are infected." And while the Saudi news agency SPA is reporting by way of the ministry that these seven latest cases come from the eastern province, there's one important public-safety caveat: the chief Saudi health officials aren't making public exactly how many people are sick with NCoV. That could be to prevent fears of a massive outbreak, but this is certainly looking like a very lethal one. And we appear to be receiving word slowly: the first of the infected cases was reported not by the Saudi health ministers but by the World Health Organization, which said as recently as March that it had been informed of 17 cases and 11 deaths. All of a sudden, the number of known human infections has grown by 40 percent, to 24.
Palm Computing founder Jeff Hawkins has developed a controversial theory of how the brain works, and he's using it to build a new race of computers.

CIO INSIGHT: In your book, On Intelligence, you claim to have discovered a new understanding of how the brain works—and how machines can be built to model the brain. It's a powerful idea, but also a controversial one. Can you explain the essence of your theory?

HAWKINS: First of all, the theory explains how the neocortex works—not the entire brain. The neocortex makes up roughly half of a human brain; it's where all high-level thought and perception takes place. It's the place where you perceive the world. And it's a type of memory system, though it's different from that of a computer in that it is composed of a tree-shaped hierarchy of memory regions that store how patterns flow over time, like the notes in a melody. We call this Hierarchical Temporal Memory (HTM). Computers must be programmed to solve problems, but HTM systems are self-learning. By exposing them to patterns from sensors (just like the neocortex receives information from the eyes and ears), HTMs can automatically build a model of the world. With this model, an HTM can recognize familiar objects and use that data to predict the future. So we're not claiming to build brains here. We are building things that we think can do what half of a human brain does.

How have people reacted to your hypothesis?

If you said to someone that you want to figure out how the brain works and then build machines that work the same way, most people would laugh at you. They'd say it's ridiculous, that people have been trying for decades and haven't made any progress. But it isn't ridiculous. Why shouldn't we be able to figure out how brains work? We understand how kidneys work, and how other organs work, so why not the brain? In fact, it ought to be pretty straightforward. It's only our ignorance that makes things look hard. So the response to the book has been mixed. We've had a stream of business-oriented researchers who want to talk about it, and several prominent scientists who think this is a landmark book. Many other scientists have dismissed our theory. But a gentleman named Dileep George, who was working at the Redwood Neuroscience Institute [as a graduate research fellow], actually came up with a mathematical formulation for the biological theory in the book. And he did a convincing enough job that we're certain it can be built to solve practical problems. So we started a company called Numenta. Its focus is essentially on building a platform—like an operating system, but different.

What do you expect this platform will be able to do?

We believe that we have come up with a new algorithm, a new way of computing—though it isn't a computer. It's a new way of processing information. HTMs essentially do three things. First, they discover how the world works by sensing and analyzing the world around them. Second, they recognize new inputs as part of their model, which we call pattern recognition. Finally, they make predictions of what will happen in the future. We think we can build machines that are in some sense smarter than humans, that have more memory, that are faster and can process data nonstop, because they use hierarchical and temporal data to predict outcomes—the same way the human brain works.

Now, what do we mean by hierarchical? Well, there's a hierarchical nature to many things—weather and markets and businesses and biological organisms are all structured hierarchically. When you're born, you know almost nothing. Then, over time, you get sensory inputs. Over a period of years, these inputs help you build a hierarchical model of the world. So you start to understand things like words and sentences, chairs and computers and ideas. Businesses are hierarchical, too—not just the way people's roles are structured, but how the different parts of a business interact. Let's say I was looking at the manufacturing side of a business, and I wanted to know why a certain metric, such as yield, is going down. Chances are, it's correlated with something else going on nearby, maybe something in the supply chain. It's probably not going to be related, at that level, to something like the rate we pay for advertising. A human would look at that data and try to find the underlying causes, come to a conclusion, and then act upon it. That's what our systems can do. If there's really an underlying cause to the problem, the goal of the HTM system is to find it. You take some data from some kind of system—visual or financial, it doesn't matter. You feed it into the system's hierarchical temporal memory, and over time it builds a model of underlying causes.

How is that different from a traditional computer?

It's very different. You have to tell a traditional computer what to look for. A big parallel computer that's modeling fluid dynamics—like the weather or a jet engine, for example—tries to model each element, each particle or cubic volume of air. That's just solving mathematical equations. Humans don't operate that way. We don't predict the future by looking at every molecule. We look at problems and seek out high-level causes. We say to ourselves, "I noticed that whenever a storm front comes, there's usually a cold day the next day." As a result, we have these concepts called storms and hurricanes—high-level concepts we have been able to deduce by looking at low-level data. That's what our HTM technology will try to do: discover the underlying causes in the world. If you hook the system up to the right data and expose the system to that data over a long enough period of time, it can build a model of that environment, just like a human brain does. It will automatically come up with a way of representing the world just like humans do, and draw conclusions based on that model.

It's compelling, but how do you know it can be done?

If you go back 50 or 60 years, when they were building the very first computers, people knew that a computer could be built, even though they didn't have transistors or circuits or hard disks. It's the same thing in this case, though we hope to build it much faster. We did a prototype before we launched Numenta. It wasn't designed to do anything really useful, but our HTM system solved the very difficult problem of pattern recognition, which no one else has been able to solve.

What was the problem?

When you look at a picture of, say, a cat, there's almost an infinite number of variations of what a cat might look like. Humans have no problem recognizing any of them as a cat. Computers, on the other hand, can't do that. I know a scientist who proposed that the grand challenge of vision research is to have a computer that can distinguish a picture of a cat from that of a dog. That tells you where the state of the art is in computer vision—it's gone nowhere. We built a machine that solved that issue. They're not impressive-looking pictures, mind you, just silly little line drawings of cats and dogs. Nothing realistic like you'd recognize in a photograph. But our model shows these things can be done. Now we are in the process of building a sophisticated, large-scale tool set that will allow people to build systems that can deal with real-world data and the large volume of data that comes from real-world problems. The kind of systems we're building work just like a human brain—one that lives, breathes and eats manufacturing data or financial-market information 24 hours a day, and never gets tired of it.
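The interview describes HTM only at the level of ideas: a memory of how patterns flow over time, used to predict what comes next. The toy Python sketch below illustrates that single idea with a first-order sequence memory. It is emphatically not Numenta's HTM algorithm (there is no hierarchy and no sensory encoding here); all names are invented for illustration.

from collections import Counter, defaultdict

class SequenceMemory:
    """Toy sequence memory: record how often each symbol follows
    another, then predict the most likely successor."""

    def __init__(self):
        self.follows = defaultdict(Counter)

    def learn(self, sequence):
        # Store the temporal pattern: which symbol followed which.
        for current, nxt in zip(sequence, sequence[1:]):
            self.follows[current][nxt] += 1

    def predict(self, current):
        # Predict the successor seen most often after this symbol.
        counts = self.follows.get(current)
        return counts.most_common(1)[0][0] if counts else None

memory = SequenceMemory()
memory.learn("the cat sat on the mat")
print(memory.predict("a"))  # 't', because 'a' was always followed by 't'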
The Lab handles tens of thousands of suspicious binaries every day. The only way a relatively small group of human researchers can handle such volume is, of course, with automation. Each sample that is imported into our malware sample management system is scanned, classified, and executed in a virtual environment. Observations are made, and we humans analyze collections of like samples.

Malware authors know that antivirus vendors use automation and virtualization to attack the lifespan of their latest variants. (That is one reason they produce such a large number of variants each day.) In addition to volume, many malware variants also include virtual machine detection and anti-debugging code, in order to inhibit our research and avoid detection for as long as possible. Sometimes their anti-debugging efforts are so aggressive as to be counterproductive.

Last week I was analyzing a Zbot (aka ZeuS) variant that used multiple methods to detect the presence of a debugger. If a debugger is detected, ExitProcess is called immediately and no malicious code is executed. The anti-debug tricks used in the sample have been known for years, but one of them has an interesting side effect. The check works as follows.

First, the RDTSC (Read Time-Stamp Counter) instruction is executed. The time-stamp counter is incremented on each clock cycle. The high-order 32 bits of the counter are loaded into EDX and pushed onto the stack. Then Sleep(0x7D0) is called, which suspends execution for two seconds. Finally, RDTSC is executed again and the high-order 32 bits are compared to the value that was saved on the stack. If the values are equal, i.e. EDX gets the same value both times RDTSC is executed, the sample concludes a debugger must be present. This is based on the assumption that at least 2^32 clock cycles happen during the two seconds, so the value in EDX should get incremented.

What all this means is that the sample assumes the CPU runs at over 2GHz. In other words, on a CPU below 2GHz the sample acts as if it is being debugged, aborts execution and does not infect the system. I tested the sample on an IBM T42 (1.86 GHz) notebook and the system was slow enough to avoid being infected.

Another interesting side effect of this Zbot's anti-debugging defense is that any computers it does manage to infect will result in a premium collection of bots. Perhaps the Zbot pusher has discriminating tastes?
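Put back into instruction form, the check described above amounts to something like the following sketch. It is a reconstruction from the prose rather than the original listing, so the exact register usage, stack handling and the jump target label are assumptions.

        rdtsc                   ; EDX:EAX = current time-stamp counter
        push    edx             ; save the high-order 32 bits
        push    7D0h            ; 0x7D0 = 2000 milliseconds
        call    Sleep           ; suspend execution for two seconds
        rdtsc                   ; read the counter again
        cmp     edx, [esp]      ; high-order 32 bits unchanged?
        pop     ecx             ; discard the saved value
        jz      debugger_found  ; equal -> assume a debugger; ExitProcess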
Challenges are no longer active

Each contest is based on a specified cipher. A brief piece of printable ASCII text (containing byte values, in hexadecimal notation, from 0x20 to 0x7e) will be appended to the fixed 24-character string "The unknown message is:". The result will be padded and then encrypted with the associated cipher under a randomly-generated key.

The padding method used will be that specified in the RSA Laboratories Public Key Cryptography Standards (PKCS) #7 document. In particular, if the total plaintext to be encrypted is s bytes in length then, since both DES and the version of RC5 used in this contest have a block size of eight bytes, the plaintext will be padded with exactly 8-(s mod 8) bytes, each of which takes the value 8-(s mod 8). As an example, if the complete plaintext requires exactly one additional byte to produce an integral number of plaintext blocks, then it is padded with the byte that has hexadecimal value 0x01. If the plaintext needs exactly two additional bytes, then it is padded with the bytes 0x02 0x02, and so forth. Finally, if the text needs no additional bytes to produce an integral number of plaintext blocks, then it is padded with eight bytes, each containing the value 0x08. This means that plaintexts with a length in bytes equal to a multiple of eight are padded with the string 0x08 0x08 0x08 0x08 0x08 0x08 0x08 0x08.

Encryption of the padded plaintext will take place in CBC mode (cipher-block chaining mode), with a randomly-generated key and a randomly-generated IV (initial value).

For example, if the mystery text for a DES challenge were "Clipper chips go well with salsa!" (thanks to Kazuo Ohta for introducing us to Clipper chips at Crypto!), then, after adding in the known header bytes and performing padding, the actual plaintext to be encrypted would be the following eight 64-bit blocks, in this order (shown in hexadecimal):

54 68 65 20 75 6e 6b 6e
6f 77 6e 20 6d 65 73 73
61 67 65 20 69 73 3a 20
43 6c 69 70 70 65 72 20
63 68 69 70 73 20 67 6f
20 77 65 6c 6c 20 77 69
74 68 20 73 61 6c 73 61
21 07 07 07 07 07 07 07

The ciphertext produced, of course, would depend on the key and the IV used for the encryption. For this example, the ciphertext would also consist of eight 64-bit blocks.
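The padding rule is easy to check in a few lines. Here is a minimal Python sketch of the PKCS #7 scheme described above, replayed against the contest's own example. Note the trailing space in the header, which is what brings the fixed prefix to 24 bytes, as the hex dump above shows.

def pkcs7_pad(plaintext: bytes, block_size: int = 8) -> bytes:
    # Append n copies of the byte n, where n = block_size - (len mod block_size).
    # A full block of padding is added when the length is already a multiple.
    n = block_size - (len(plaintext) % block_size)
    return plaintext + bytes([n] * n)

header = b"The unknown message is: "          # fixed 24-byte prefix (trailing space)
mystery = b"Clipper chips go well with salsa!"
padded = pkcs7_pad(header + mystery)
assert len(padded) == 64                      # eight 64-bit blocks
assert padded.endswith(b"\x07" * 7)           # 57 bytes -> seven 0x07 pad bytes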
Slideshow: Social Medicine: Is the Internet Transforming Healthcare?
By Don Reisinger | Posted 05-13-2011

- 18 percent of all U.S. Internet users have gone to the Web to find people who have similar ailments.
- 23 percent of U.S.-based Internet users who suffer from a chronic disease have used the Internet to find others going through the same issue.
- Even though the Internet is useful to Americans, 70 percent of adults say they still go to a health professional to get "information, care, or support" for what ails them.
- Those looking for information on "weight loss or gain, pregnancy, or quitting smoking" are most likely to surf the Web.
- When Americans want an "accurate medical diagnosis," 91 percent said they will go to a doctor or nurse.
- Concern for others is one of the more common reasons people surf the Web for health information; 26 percent of those who are caring for someone with an illness look to the Internet to find information.
- In the middle of a "medical crisis," people try to find information wherever they can; 85 percent say they take to the Web to learn more about the issue.
- Internet users over age 65 are unlikely to look up information about a medical condition online. In fact, just 10 percent of seniors have done so.
- When the health concern involves technical issues, professionals are the preferred resource. When the concern involves personal issues of how to cope with a health issue or get quick relief, most patients prefer non-professionals.
Author: Joel Schroeder, Director, M2M Program, Inmarsat

Today's scientists and researchers have access to massive amounts of computing power and an insatiable appetite for a constant stream of sensor data they can analyse. With the continuing demand for more real-time data from machine-to-machine (M2M) environments, there has been an associated surge of interest in the use of satellite communications. Although many businesses use satellite to complement their existing fixed-line and cellular communications, some are now opting for a 100 percent satellite network.

One reason is that satellite M2M services can be used to plug the gaps in terrestrial coverage, extending the reach to devices in more remote, unmanned locations. They can be used as a backup to traditional networks to ensure that mission-critical data continues to be transmitted during terrestrial network outages or when cellular networks are simply too congested. Of crucial importance, L-band satellite services are not impacted by extreme environmental conditions. This dependability is one of the reasons for the technology's growing popularity in regions that regularly experience acute climatic events. Additionally, the terminals operate with a high degree of pointing tolerance, so the network remains connected even if the antenna moves as much as 30 degrees.

Typical applications include connectivity for SCADA (Supervisory Control and Data Acquisition) systems on pipelines, smart metering concentrators, remote ATMs and point-of-sale devices, as well as fixed and mobile asset monitoring in sectors such as utilities, oil and gas, government, and transportation. Users typically require messaging of several bytes or real-time connections with data rates from 20 to 100 kbps. Inmarsat's M2M services support messaging up to 10 KB and IP data rates up to half a megabit.

One sector where satellite is being adopted for data backhaul of remote M2M is environmental monitoring. Like other industry segments, environmental monitoring organisations need new ways to gather, analyse and distribute field data faster and more efficiently. Applications can be as varied as measuring climate, hydrology or air quality to ensure compliance with regulation, or helping produce growers manage their crops by analysing soil temperature and saturation levels, among a host of others.

One organisation that has already conducted research using satellite communications is New Zealand's National Institute of Water and Atmospheric Research (NIWA). NIWA's mission is to conduct leading environmental science to enable the sustainable management of natural resources for New Zealand and its region of the planet. NIWA's National Centre for Environmental Information is recognised as a leading authority on environmental monitoring and observation, information management, and delivery of high-quality, robust, and interoperable environmental data that can be used for many purposes. The centre oversees a network of environmental monitoring stations across New Zealand and beyond. These stations measure a wide range of environmental parameters including winds, atmospheric pressure, temperature, rainfall, solar radiation, river levels and many more.

"Historically we had to compromise on the location where we could do monitoring because one of the criteria was the need for affordable access to communications to use the data in a timely fashion," said Graham Elley, environmental systems consultant at NIWA. "We commonly stretched the use of terrestrial-based communications to the point where we did not always have the reliability we required. With many remote and geographically dispersed stations, we then started to look to upgrade with BGAN satellite communications," he continued.

Having used Inmarsat for several years for general marine communications, NIWA decided to trial Inmarsat BGAN M2M, a new two-way IP data service designed specifically for backhauling data from machine-to-machine applications. The trial was a success and BGAN M2M is increasingly being deployed. According to Elley, "This technology is empowering and is enabling us to consider changes to the way we undertake our work." One of the important benefits of L-band satellite communications for NIWA is that the communications are not impacted by the same adverse weather conditions that NIWA needs to monitor.

According to Michael Bargh, environmental data operations manager at NIWA: "We had BGAN M2M terminals under test, prior to deploying them in a project in the Fiji Islands, when a big snow storm hit. Although buried in snow, all the devices continued to work perfectly and didn't miss a single report. That gave us great confidence that those sorts of conditions wouldn't cause us problems in the future."

The stations collect data every three seconds and regularly transmit data to NIWA's server. While reliability and durability are mandatory, NIWA had other practical considerations to keep in mind, including the size of the terminal and its power consumption. The BGAN Hughes 9502 terminals met the organisation's requirements for a small footprint, and their low energy requirements mean NIWA can now leave the terminals powered on 24 hours a day and reduce the size of its station solar panels.

Australian firm Unidata, leaders in the field of technologies for environmental monitoring, introduced NIWA to M2M. For Unidata, IP-based integration was key. "The BGAN M2M installation experience was straightforward. We integrated the BGAN terminal easily with the Unidata hardware using a single Ethernet cable. We configured it using a standard browser, and the wide number of menu options meant that interfacing from an IP point of view was easy too," said Dave Moyle, senior engineer at Unidata.

Elley concluded, "The arrival of BGAN M2M is very empowering. A bird in the sky is worth ten in the bush now we have access to reliable communications."
Determining the Source of 'Self' Spam

Q: My employer is getting a ton of spam delivered to work e-mail accounts. The messages appear to come from inside the organization, but we have not sent them. What's causing this, and how do I correct it?

A: This problem can come from two major sources. The first is spammers who fake the "from" field in the e-mails they send. The second is a malware application running on your computers and sending e-mails without you being aware of it. How can you tell which one is causing the e-mail messages in your inbox? If the message is coming from "yourself," it is more likely from spammers faking the "from" field. If the message appears to come from someone else in the organization, it's probably being caused by malware running on his or her computer.

Of the two, malware is much more dangerous. For example, in 1999, the Melissa worm, which was a Microsoft Word macro that sent itself to the first 50 contacts in a user's Outlook address book, quickly spread across the Internet and caused overloaded mail servers to fail. That attack kept us IT professionals very busy for two weeks. Some companies even had to bring down their mail servers to stop the infection cycle until a patch was released. The way to protect against such threats is to make sure you are using a reliable personal security application that combines anti-virus and personal firewall protection. AVG, Symantec, McAfee, ZoneAlarm and others will be able to prevent malicious applications from infecting your computer and spreading to other users in your organization. Make sure you keep your signatures and policies updated, and you should be safe. It also is important to keep your operating system updated with the latest security patches.

The other type of e-mail spam is more common. If the mail server does not check the "from" field, a spammer can set it to anything, including having it mirror the "to" field, so that it looks like a message is coming from you and going to you. To determine the real sender of an e-mail message, you will need to look at the message headers, which contain the sending server, the IP address from which it came and additional information. In most mail clients, there is a way to reveal these headers. In Microsoft Outlook, you can open the spam message and click on the downward-facing arrow in the lower right-hand corner of the "Options" tab; the opened window will display "Internet headers" at the bottom. In Google's Web interface, you can click on "Show Original." Once you gain access to the message headers, there will be a line that starts with the word "Received:" followed by the real domain name and IP address of the sending server. Resist the temptation to e-mail this address, even if just to ask for your removal; it will only cause the spammer to send more messages and sell your e-mail address to other spammers.
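If you would rather script this inspection than click through a mail client, the standard library in Python can pull the Received chain out of a saved message. A minimal sketch; the file name is hypothetical.

import email
from email import policy

# Save the raw message from your mail client first; "suspect.eml"
# is a placeholder name.
with open("suspect.eml", "rb") as f:
    msg = email.message_from_binary_file(f, policy=policy.default)

# Each relay prepends its own Received header, so reading the list
# from top to bottom walks backward toward the originating server.
for hop in msg.get_all("Received", []):
    print(hop.strip())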
Instead, there are two major approaches to dealing with spammers. One is to install an integrated anti-spam application on your mail server. The other is to install an anti-spam appliance that stands between your mail server and the external world. The integrated solution is simpler to implement and does not require additional hardware, but it does not scale very well. Depending on your mail server, Symantec, GFI, XWall and many others have solutions for it. The appliance-based solution is what enterprises and commercial mail providers typically use because it can scale to millions of messages per day, freeing your server from the dual task of providing mail services while preventing spam. IronPort, Brightmail and Barracuda are products that fall into this category.

Also, as a good Internet citizen, you should report spam to the spammer's service provider so that he or she can be blocked. Depending on your country of residence, this type of spam also might be illegal and can result in penalties for the originator. In the U.S., the Federal Trade Commission has been enforcing the CAN-SPAM Act (Controlling the Assault of Non-Solicited Pornography and Marketing Act) since 2003; any spam message should be forwarded with full headers to firstname.lastname@example.org.

Avner Izhar, CCIE, CCVP, CCSI, is a consulting system engineer at World Wide Technology, Inc., a leading technology and supply chain solutions provider. He can be reached at editor (at) certmag (dot) com.
UK kids spend over £10 per week on tech

British children are more likely to be spending their money on technology than on comics and chocolate, according to new research by the Halifax bank. Three-quarters of 8- to 15-year-olds have a mobile phone, 65 percent own an MP3 player and 87 percent a games console. Hardly surprising, then, that they say they spend most of their cash on games and downloads. Girls are more likely to own an MP3 player than boys (70 percent against 60 percent), but boys are likely to spend more on computer games.

When asked about their spending habits, four out of five of the children surveyed say they have downloaded a film, music, a TV show, a game or an app from the internet. In addition, an average of five paid-for tracks are being downloaded a week, at a potential cost of around £5, with £2.30 a week being spent on computer games. The average pocket money given by parents is only £6.35 a week, but 40 percent of 8- to 15-year-olds who get pocket money from their parents also admit to receiving money from grandparents and other relatives as well, which could explain why they can afford to spend so much.

Richard Fearon, Head of Halifax Savings, says, "Children today are growing up in a world where so many things can be accessed at just a touch of a screen, including an almost limitless number of shops and goods. As a result it can be very easy to spend money without realizing just how much is going out of your account".

By the age of 11, 79 percent of children have a mobile phone. And with 74 percent of children having an average monthly mobile bill of £12, it's not surprising that 80 percent of kids with a mobile have the bills paid by their parents on top of any pocket money.

Fearon concludes, "Budgeting money is a great responsibility and parents need to make sure that by awarding pocket money they are also giving their children the tools to understand the importance of managing how that is spent. Previous research demonstrates that a large number of children are saving a proportion of their pocket money, but these latest figures show how easy it could be to underestimate the cost of digital spending".
Enterprises are finding that expanding a wireless LAN (WLAN) can be difficult. The desired result of higher available bandwidth may not be realized. Wireless communications, like hub-based Ethernet, have a limiting factor: the shared bandwidth of the medium. Recognizing wireless networking limitations can help the enterprise plan its WLAN infrastructure.

So Little Bandwidth

In a wired LAN, the total available bandwidth can be increased by pulling more cable and adding more switches. Even the backbone network can have additional capacity added in this same way – more cable and additional switches and routers. While each cable has a fixed bandwidth, there is no limit on the number of cables that can be installed. Wireless networking doesn't expand so easily. You can't "pull" more spectrum in the facility.
Popular Security Algorithm Compromised

Cryptographic expert Bruce Schneier has reported that the SHA-1 hashing algorithm, upon which several major applications such as SSL and PGP depend for secure digital signing, has been broken. While the computing resources needed to defeat the protection SHA-1 provides remain impressive and out of the reach of most, the discovery set a few experts back, including a security technology group manager at the National Institute of Standards and Technology (NIST) who as recently as last week declared SHA-1 secure for the foreseeable future.

SHA-1 is used to generate digital signatures. By processing a message or file with SHA-1, applications produce a hashed version of the data called a digest. The hash, or digest, is much smaller than the original file. In practice, no two differing signed files should ever yield the same hash, meaning a file or message that has been tampered with will produce a different hash from the original. Cases where two differing files produce the same hash are referred to as a "collision."

According to Schneier, a team of Chinese researchers who previously demonstrated weaknesses in the MD5 hashing algorithm have demonstrated that they can produce a collision in SHA-1-hashed data much sooner than previously suspected: within 2^69 hashing operations instead of 2^80. The computing hardware needed to generate a collision is still out of reach of all but several governmental agencies, and it would take an impractical amount of time to generate one, but the researchers' demonstration proves to the cryptographic community that it's time to look for SHA-1's replacement, since most researchers assume that attacks against a given algorithm will only improve over time.

Faith in SHA-1 was such that William Burr, a security technology group manager at NIST, said just last week that the algorithm had not been broken "and there is not much reason to suspect that it will be soon." NIST has recommended that SHA-1 remain in use through 2010. That timetable will, no doubt, change.
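To make the idea of a digest concrete, here is a quick Python illustration using the standard hashlib module. The messages are invented, and nothing here reproduces the researchers' attack; an engineered collision would be a pair of distinct inputs for which the final comparison prints True.

import hashlib

# Two messages differing in one character yield unrelated digests.
a = hashlib.sha1(b"Transfer $10 to Alice").hexdigest()
b = hashlib.sha1(b"Transfer $90 to Alice").hexdigest()
print(a)
print(b)
print(a == b)  # False here; a collision is any distinct pair where this is True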
One of the goals in analyzing the DNS query and response stream is to understand possible issues customers might be experiencing with your infrastructure, or unwanted traffic being sent to your servers. The last post covered traffic profiling by request type and response code. Covering this topic brought to mind some observations about reverse-tree DNS queries (often just called "reverse queries" these days). These queries associate an IP with its name in the DNS. They are implemented using the in-addr.arpa (IPv4) and ip6.arpa (IPv6) zones and PTR records.

There are constant debates about the importance of reverse DNS resolution; these may be centered around RFC 1912 section 2.1 stating "Every Internet-reachable host should have a name." This is followed by the explicit statement: "For every IP address, there should be a matching PTR record in the in-addr.arpa domain." One thing to keep in mind is that this RFC was drafted at a time when IP assignments were generally static, so in many cases there was a one-to-one relationship between the IP address and a physical device. As people adopted NAT (Network Address Translation) and DHCP (Dynamic Host Configuration Protocol), the utility of the reverse tree became less obvious. NAT eroded the one-to-one mapping, creating questions about the accuracy and cardinality of IP-to-PTR relationships, and DHCP increased the rate of change of IP assignments. The fact that maintenance of the reverse tree needed to be mentioned in the RFC shows that the concern was budding even back in 1996.

Misconfiguration of reverse DNS is of special interest, as it requires the owner of the IP space to ask their Regional Internet Registry (RIR) to delegate the reverse DNS domains to their DNS provider. The owner makes a conscious decision to change the delegation of the records, but doesn't always follow through on the maintenance and configuration. As an authoritative DNS provider, this is something we see somewhat frequently, mainly in the form of delegated resources without configuration. Do you have your own IP space? Is it delegated to your DNS provider? Is the configuration up to date?

Configuring reverse DNS isn't always as straightforward as it sounds. For instance, Amazon Web Services requires submitting an electronic form from an account with root access, whereas Digital Ocean configures reverse DNS by default after you provide a hostname for your virtual server. This provisioning behavior is most likely driven by the expected response to the question, "How many users are going to find value in having their details entered in the reverse tree?" From the provider's perspective it might be a case of minimizing DNS updates the customers aren't asking for. That being said, RFC 1912 reminds readers, "Many services available on the Internet will not talk to you if you aren't correctly registered in the DNS." This still holds true in cases such as email systems (including spam filters and appliances) that perform a reverse DNS lookup on the IP address of the originating SMTP server.

Looking at PTR queries is a great way to answer the question, "Do you know what is trying to talk to your infrastructure?" When an SSH connection is attempted, most modern operating systems will perform a reverse DNS lookup on the host attempting to connect, very similar to the SMTP verification mentioned earlier. So when 192.0.2.1 and 192.0.2.134 are trying to SSH into your cloud instances by brute force, the default resolver configured in /etc/resolv.conf will receive these reverse queries.
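The same lookup those daemons perform is a single call in Python. A minimal sketch using the standard socket module, reusing the documentation addresses above:

import socket

# 192.0.2.x is the TEST-NET documentation range used in this post;
# substitute addresses from your own SSH or SMTP logs.
for ip in ("192.0.2.1", "192.0.2.134"):
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
        print(ip, "->", hostname)
    except socket.herror:
        # No PTR record: the reverse query came back NXDomain.
        print(ip, "-> no reverse DNS entry")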
How are you identifying these activities in your environment? If you are a large cloud provider, are you measuring these query patterns across customers, data centers, and sites? On the other side, if the reverse tree is misconfigured, you might be drawing extra attention to traffic from your endpoints by causing additional NXDomain responses to in-addr.arpa queries.

Some argue the exact opposite: that properly configured reverse DNS creates a security issue. Reverse DNS can help an attacker enumerate which resources in the environment support which domains. The reverse tree was also leveraged in the past to "enhance security" to questionable effect; the basic notion was that a list of trusted hosts could be whitelisted and their identity verified via PTR record. Knowledge of past misuse and the potential for information leakage help contextualize the risk and reward of reverse-tree maintenance.

What should the reader do once they finish reading this post? Review your current DNS settings (internal and external) and establish an understanding of your PTR record configuration. Determine how much of your infrastructure is fixed vs. ephemeral and use this survey to quantify the reverse DNS maintenance required. Map out the internal and external recursive resolvers your servers are configured to query, as this will impact your data collection options. Review your current DNS log collection and storage to understand potential blind spots in intersystem communication.

About the Author

Chris is a Principal Data Analyst at Dyn, a cloud-based Internet Performance company that helps companies monitor, control, and optimize online infrastructure for an exceptional end-user experience. Follow Dyn on Twitter: @Dyn.
Chapter 22: Job Control Language

This chapter presents a discussion of JCL (Job Control Language) as used for jobs run on a modern IBM mainframe running a descendant of the OS operating system, such as z/OS. First, we must define the term "job". A job is a unit of work for the computer to execute. The job comprises identification statements, control statements, possibly program text, and usually data. There are conventions to label the statements that are not program text and data so that the Control Program part of the Operating System can determine which is what.

The paradigm for a job is a sequence of cards, each card with one statement. The standard card type is the IBM 80 column card, an example of which is shown below. The use of these cards persisted well into the time at which programs and data could be entered through a computer terminal. Today, in classes associated with this textbook, we skip the card input and create text files for submission as jobs. Just remember that each line of text should be imagined as being the content of an 80 column card.

Photo by the author of a card in his collection

The card pictured above is a "6/7/8/9 card" used on a CDC-7600. This was a control card used to indicate the end of a specific job. In modern terms, a "7/8/9 card" would have been an EOD (End of Data) and a "6/7/8/9 card" an EOF (End of File). The 7/8/9 cards were green and the 6/7/8/9 cards were orange; this was a convenience to the programmer. The only computer-readable data on any card is found in the pattern of column punches.

The transition from card input of jobs to other means was driven by the simple inconvenience of handling boxes containing hundreds of cards. The key feature that facilitated that change was the introduction of system disk drives big enough to store significant amounts of user programs and data. This change was not driven by hardware only; it was some time after the introduction of disk drives that the software designers were able to develop a stable operating system based on the use of such drives. Your instructor recalls using a Xerox Sigma-7 in this era.

The first step in transitioning from card input was the ability to catalog a card deck on a disk file maintained by the computer center. Though the jobs remained card based, they became very short: access this file, change these statements, add these statements, and then run. Soon thereafter, the cards went away.

Next, it is important to dispel a misunderstanding that would be almost comical, had it not actually occurred during the teaching of a course based on this textbook. We begin by considering the first few lines of a program that your author assigns as a first lab.

//KC02263R JOB (KC02263),'ED BOZ',REGION=3M,CLASS=A,MSGCLASS=H,
//             NOTIFY=KC02263,MSGLEVEL=(1,1)
//FFFPROC JCLLIB ORDER=(TSOEFFF.STUDENT.PROCLIB.ASM)
//JESDS OUTPUT PAGEDEF=V06483,JESDS=ALL,DEFAULT=Y,CHARS=GT15
//STEP1 EXEC PROC=HLLASM
//ASM.SYSIN DD *

The above text is the block of job control language that precedes the text of the first assembler language program. Note that many of the lines begin with "//". Several students decided that these mandatory lines were optional, since they were obviously comments. The structure of a comment in either a programming language or an execution control statement depends on the language or operating system; it is peculiar to that system. The fact that the "//" character sequence introduces in-line comments in both C++ and Java does not imply similar functioning in other situations.
In IBM Job Control Language, comments are prefixed by "//*", with the asterisk being very significant.

The Job Control Language

There are six types of job control statements that will interest us at this time. These are:

JOB   Marks the beginning of a job. It gives the user identification, accounting information, and other site-specific data.
EXEC  Marks the beginning of a job step by specifying a program or procedure to be executed.
DD    Requests the allocation of an I/O device and describes the data set on that device. It must use the logical device name from the program.
//*   This is a comment in the job control language.
/*    This terminates an input stream data set.
//    This can be used to mark the end of a job.

Logical and Physical Devices

One of the advantages of the structure of the JCL is the ability to define a logical device using a DCB macro within the code, and use the DD control statement to link that logical device to an actual physical device. With the DCB, the code specifies the logical properties of a device. For example, a logical printer might be described as PS (Physical Sequential) with a record length of 133 bytes (one control character and 132 characters to be printed). The DD statement might then associate this logical device either with the standard output stream or with a dedicated disk file that can be saved and accessed by another job. Much of this is discussed in chapters 5 and 6 of the IBM Redbook Introduction to the New Mainframe: z/OS Basics [R_24].

The Job Card

This identifies the beginning of a job. It must include a name to associate with the job. For use in our classes, that name is most often the user ID. The name must begin in column 3 of the "card", following the "//" characters. Remember that none of this is free-form input. In general, the format of the JOB statement starts as follows.

//name JOB (account number),programmer name

Consider our example from the listing of a lab exercise.

//KC02263R JOB (KC02263),'ED BOZ',REGION=3M,CLASS=A,MSGCLASS=H,

The user name consists of from one to eight alphanumeric characters, with the first one being alphabetic. The standard for our course is the user ID with a single letter appended. The job card above shows a user ID of KC02263, with the letter "R" appended. The next entry in this statement is the keyword "JOB", identifying this as a JOB card. This is followed by the account number in parentheses, which for our student use is the same as the user ID. This is followed by the programmer name, which is enclosed in quotes as the name contains a space.

The next entry, REGION=3M, specifies the amount of memory space in megabytes required by the step. This could have been specified as REGION=3072K, indicating the same allocation of space. The two size options here are obviously "K" and "M" [R_25, page 16-4]. The entry CLASS=A assigns the job to a class, roughly equivalent to a run-time priority. According to R_25 [page 20-15], the "class you should request depends on the characteristics of the job and your installation's rules for assigning classes". This assignment works. The entry MSGCLASS=H assigns the job log to an output class [R_25, page 20-24]. Depending on the MSGLEVEL statement (see below), the job log will have various content.

The next line of text in the above example should be considered as a continuation of the job card, in that the information found there could have been on the job card.
The notify line indicates what user is to be given information about the execution of the job; the level of information is indicated by the integers associated with MSGLEVEL. The first number specifies which job control statements are to be printed in the listing. There are three possible choices.

0  Only the JOB statement is displayed. This is the default for many centers.
1  All job control statements are displayed, including those generated from a cataloged procedure. This is the default for a student job. Note that a cataloged procedure is a sequence of control statements that have been given a name and placed in a library of cataloged procedures.
2  Only those job control statements appearing in the input stream are displayed.

The second number inside the parentheses specifies whether or not the I/O device allocation messages are to be printed. A 1 (the default) indicates that all allocation and termination messages are to be printed, regardless of how the job terminates.

The EXEC Statement

The execute statement begins a job step that is associated with the program name or procedure name that controls that step. Each EXEC can begin with an optional step name, which must begin in column 3 and be unique within the job. There are three standard forms of the execute statement.

//step name EXEC PGM=program name
//step name EXEC PROC=procedure name
//step name EXEC procedure name

The step name is optional, but if it exists it must be unique in the job. For example, we have this line in the job control language of our first lab assignment.

//STEP1 EXEC PROC=HLLASM

This calls for the H-level assembler to be invoked. The procedure takes care of a number of steps that are required, and can be mechanically created. In some more advanced JCL, there is a control logic that requires step names. In this example, we assign names just to show that we can do that.

The PGM option is rarely used by students, who commonly use cataloged procedures. This author views stored procedures in the same light as programming macros; they are predefined sets of statements that have proven useful in the past. The second and third lines are equivalent, indicating that the default is to execute a cataloged procedure. This expands into a sequence of program EXEC and DD statements. Here is an example of the ASMFC cataloged procedure [R_09, page 384]. This is given without explanation in order to show the expansion of a very simple cataloged procedure.

//ASM EXEC PGM=IEUASM,REGION=50K
//SYSLIB DD DSNAME=SYS1.MACLIB,DISP=SHR
//SYSUT1 DD DSNAME=&SYSUT1,UNIT=SYSSQ,SPACE=(1700,(400,50)), X
//SYSUT2 DD DSNAME=&SYSUT2,UNIT=SYSSQ,SPACE=(1700,(400,50))
//SYSUT3 DD DSNAME=&SYSUT3,SPACE=(1700,(400,50)), X
//SYSPRINT DD SYSOUT=A
//SYSPUNCH DD SYSOUT=B

There are a number of parameters to the EXEC statement, but none of these need concern us here. The student who is interested is referred to [R_25, Chapter 16].

The DD (Data Definition) Statement

Any data sets used by the program must be described in DD statements. These must follow the EXEC statement for the particular step in which the data sets are accessed. In the lab examples used with the course associated with this textbook, the DD statements follow the assembler procedure invocation and its associated program input. For more information, the reader should consult Chapter 6 of Introduction to the New Mainframe [R_24] or Chapter 12 of the MVS JCL Reference [R_25]. The general format of the DD statement is rather flexible, but all have this form.
//proc.ddname DD options

The first part of the name is the procedure step; in our programs, we use “GO” for this. The second part of the name is identical to that used in the DCB macro in the source program, and it further describes the data set referenced in that macro. In general, we have the following sets of relationships within the job. Here is an example of the linkage between DCB and DD as found in our lab 1.

FILEIN   DCB DDNAME=FILEIN, X
PRINTER  DCB DDNAME=PRINTER, X

//GO.PRINTER DD SYSOUT=*
//GO.FILEIN  DD *

What we have in the above example is a use of the standard input and output data streams. The input stream data set is simply the stream that includes the text of the program and the job control language. The “DD *” indicates that the stream is to be taken as the sequence of 80–character lines immediately following; this stream ends with “/*”. The following represents the last lines in a job intended to print out the text of three lines; the three lines of input text immediately follow the DD statement and are terminated by the “/*” line.

//GO.FILEIN DD *

The statement “DD SYSOUT=*” indicates that the output associated with the ddname PRINTER is to be routed to the standard output stream, called SYSOUT.

The flexibility of this linkage between the DCB and DD statements is illustrated in the following fragment, taken from another lab exercise associated with this textbook. We have taken the above and changed only the DD statement. We now have the following.

PRINTER  DCB DDNAME=PRINTER, X

//GO.PRINTER DD DSN=KC02263.SP2008.LAB1OUT,SPACE=(TRK,(1,1),RLSE),
//             DISP=(NEW,CATLG,DELETE)
//GO.FILEIN  DD *

The print output is now saved as a text file, called SP2008.LAB1OUT, in the user area associated with the user KC02263, which was at the time your author’s user ID. Neither the name “SP2008” nor the name “LAB1OUT” can exceed eight characters in length.

In this version of the DD statement, we use the DSNAME operand, abbreviated as DSN. This identifies the data set (disk file) name to be associated with the output and specifies a few options. The two we use are the disposition option and the space allocation option. The data set disposition operand has the following general form.

DISP=(file status, normal disposition, error disposition)

The first term indicates the status of the data set in relation to this job step. The options are:

OLD      An existing data set is used as input only to this step.
SHR      An existing disk data set that can be shared with other jobs concurrently.
MOD      A partially completed sequential data set. New records are to be added at the end.
NEW      A new output data set is to be created for this job step.

The second term indicates the disposition of the data set in the case of a normal termination of the process associated with the step. There are five options for this one.

KEEP     Keep the data set.
PASS     Pass the data set to a later job step.
DELETE   Delete this existing data set.
CATLG    Catalog and keep the data set.
UNCATLG  Remove this data set from the catalog, but keep it.

The third term specifies the disposition in the case of an abnormal termination. The option PASS is not available, as it is presumed that an abnormal termination will be associated with corrupt data. Note that our JCL says DISP=(NEW,CATLG,DELETE), indicating that a new file is to be created and cataloged if the job terminates normally. If the job has an abnormal termination, the file is simply discarded.

The space operand, which has the general form SPACE=(units,(primary,secondary),RLSE), is used only for DASD (Direct Access Storage Device, read “disk”) data sets. The first term indicates the measure of storage space to be used.
In order to understand this, one should review the architecture of a typical disk unit. The two options for this term are CYL (cylinder) and TRK (track). Our JCL has the option SPACE=(TRK,(1,1),RLSE), indicating that one track is to be allocated initially for our data set and that additional disk space is to be allocated one track at a time whenever the existing allocation is exhausted. The RLSE option indicates that the unused space on the DASD (disk drive) is to be released and made available for data storage by other programs when this program terminates and the data set is closed [R_25, page 12–12].

One option worth mentioning, if only for historical reasons, is the LABEL option. This was used when accessing data sets on magnetic tape, either 7–track or 9–track. The label was an identifier assigned to an individual physical tape. It was physically written on the label of the tape (to be read by the computer operator) and written in the header record of the tape (to be read by the Operating System). This option would ensure that the correct tape was mounted, so that the desired data (and not some other) would be processed. The reader who is interested in tape labels is referred to a few references, including [R_02, page 449; R_24, page 203; and R_25, chapter 12].
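To pull these pieces together, here is the overall shape of a complete lab job. This is only a sketch: the procedure name and data set names follow the examples above, the continuation-line options are illustrative, and the parenthesized placeholder lines stand for material (assembler source and input records) omitted here.

//KC02263R JOB (KC02263),'ED BOZ',REGION=3M,CLASS=A,MSGCLASS=H,
//             NOTIFY=KC02263,MSGLEVEL=(1,1)
//STEP1    EXEC PROC=HLLASM
    (the assembler source, with its DCB macros, goes here)
//GO.PRINTER DD DSN=KC02263.SP2008.LAB1OUT,DISP=(NEW,CATLG,DELETE),
//             SPACE=(TRK,(1,1),RLSE)
//GO.FILEIN  DD *
    (the input records, one per 80–character line, go here)
/*
//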
<urn:uuid:39746b61-5f34-43fc-84f5-dda03ece1ded>
CC-MAIN-2017-04
http://edwardbosworth.com/My3121Textbook_HTM/MyText3121_Ch22_V01.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00207-ip-10-171-10-70.ec2.internal.warc.gz
en
0.913037
3,680
3.140625
3
The internet has expanded rapidly, not just in terms of the number of users, but also the number of devices that connect to it. It is estimated that there are currently five billion devices connecting to internet services, a figure expected to grow to 50 billion by 2020. Such factors make the internet a prime vector of attack for criminals, who are using increasingly sophisticated and insidious attack methods. The internet was originally developed with availability, connectivity and ease of use as its core principles. To deal with its increased scale, a number of changes are being made that improve both its availability and scalability. Whilst these changes are not specifically designed to improve security, they do have implications for security. This document describes those security implications and aims to help organisations prepare to embrace these changes in a secure manner.
<urn:uuid:5226b10d-f224-488e-8765-67b44fae7f5e>
CC-MAIN-2017-04
http://www.bloorresearch.com/research/spotlight/state-of-internet-infrastructure-security-primer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00509-ip-10-171-10-70.ec2.internal.warc.gz
en
0.975326
159
2.8125
3
An automated fingerprint identification system (AFIS) is a complex computer database that reads the whorls and ridges of an individual fingerprint. Long accepted in law enforcement, fingerprints are still considered the surest evidence of a suspect's presence at the crime scene or contact with the instrument of the crime. AFIS technology has made prints an even more powerful tool, but, as one of the most mature technologies in law enforcement's modern arsenal, AFIS systems are undergoing a metamorphosis. This is the second of two parts about AFIS. The first part ("The Latent Potential of Latent Prints," Government Technology January) told the history of AFIS -- where it started and what it does.

Two Types of Prints

There are two types of prints. The first, nicknamed 10-prints, are taken when someone is arrested for a crime. The 10-print card records the fingers, thumbs and a palm print on a large index card. These prints are carefully taken, clear and easy to read, and make up the bulk of AFIS data available today. Originally, all prints were created on the cards and scanned into the system, but today, more agencies have acquired digital finger- and palm-print readers that increase the quality of the prints in the database. The second type of prints are called latent prints. These are the prints that suspects leave at the scenes of their crimes, and they are often partial or smudged, requiring more detailed computer analysis. Latent prints from the scene of a crime are scanned into the AFIS system, which checks them against all other prints in the system. Usually within moments, the system will kick back a list of possible matches. From that point, it is up to the investigator to examine the prints for a match.

Local Databases Expanded by Links

Once isolated in locally held databases, AFIS data is now linked in a variety of ways, including national, statewide and regional networks. Many cities are, in a move uncharacteristic of law enforcement, giving up their own fingerprint systems and turning them over to the states, thereby creating larger databases while unburdening themselves of the high cost of keeping up with the rapidly advancing technology. The standardization of 10-print technology, required by the National Institute of Standards and Technology, has led to a national database, maintained by the FBI, that is accessible to state and local law enforcement. Most investigators believe that the larger the database, the more likely there is to be a match. Still, as with any cutting-edge technology, there is some debate.

"The best way to look at that issue is as a pyramid. The bottom of the pyramid is the broad base, the bulk of your work, and it is all done locally," said Ken Moses, the former San Francisco Police Department investigator who helped pioneer AFIS in the early 1980s. "Then there is the next tier, which might be adjacent jurisdictions. For San Francisco, there are many cases where the bad guy might have a link to Oakland. Then there is the next tier, statewide, but there will be fewer cases that will benefit from a statewide search. By the time you get up to the level of the FBI, there is a very small volume of cases that benefit from searching there.

"The fact is that most criminals are local. They may travel across the Bay Bridge to Oakland, but they aren't suddenly going to go to Idaho. There are exceptions to that, especially in the realm of con artists, but the rule holds: Most criminals are local."
Most in law enforcement agree, but in an age dominated by easy air travel and loose family ties, the bad guys, like everyone else, are taking to the open American road. It is not as unusual as it once was for people of all types to pull up stakes and move to Idaho, Wyoming or even to another coast. Hence, statewide and regional networks are showing great success. "Our network has proven invaluable to investigators of all types of crimes," said Victor Fleck, systems development manager for the nonprofit Western Identification Network Inc., which has linked AFIS data from nine states and six federal agencies to create a successful network. Law enforcement experts agree that it helps if the investigator knows the states in which a suspect might have existing ties, but WIN has seen out-of-state 10-print hits soar more than 60 percent for some rural states. Along with the changes in criminal behavior, there are other reasons why regional and, especially, statewide systems are becoming common.

City Surrenders System

Phoenix recently chose to give its $6 million AFIS, developed by Sagem Morpho Inc., to the state. The transfer was the first step in implementing the Arizona AFIS, one of the nation's most complex and comprehensive AFIS projects. The benefits for Arizona law enforcement were twofold: First, upgrading the 1991 Phoenix system rather than purchasing a brand-new one saved the state millions of dollars; second, the uniform statewide system immediately resulted in criminals being brought to justice. In the first six months after state-sponsored AFIS workstations were installed in the Pima County, Ariz., jail, 300 inmates who had given false identities were exposed. Last April, the system matched the prints on file for one of Arizona's 10 most wanted criminals, a sexual predator, to a man living under an alias in Maricopa County. "Expanding the Morpho system to a statewide AFIS increases our range and efficiency of criminal investigations," said Frank Rodgers, latent prints supervisor for the Phoenix Police Department. "It is a good tool for latent examiners."

Tower of Babel is Tumbling Down

Since its inception, when only a handful of vendors existed, the AFIS industry has exploded, with dozens of vendors now vying for contracts. The competition meant agencies that had contracted with different vendors could not electronically share AFIS data, because of the different ways systems code their AFIS information. In the early 1990s, the National Institute of Standards and Technology (NIST) implemented standards for 10-prints. It has taken longer to come up with standards for the transfer of latent prints between dissimilar systems. This important step in making sure AFIS technology can be utilized to its full potential is now near completion. In July, the International Association for Identification completed testing of both the 10-print and latent-print standards. While no agencies have purchased the system yet, the participating vendors, which included most major AFIS vendors, say the technology is now available to electronically share AFIS data between dissimilar systems. "We see it as good for our customers, and consequently good for us as well. Now it is up to each individual jurisdiction to decide how they want to share the data," said Sandra Salzer, Sagem Morpho's senior communications specialist. "All they need to do is for the participating agencies to purchase what we euphemistically call the 'black box,' a translator that will allow the systems to communicate."
For a technology that seems to have gotten stronger each time it has broadened its base of data, these translators could be the final step in the effort to strip away the cloak of anonymity behind which criminals try to hide. Raymond Dussault is a Sacramento, Calif.-based writer and a research director for the Law Enforcement Technology Acquisition Project.
<urn:uuid:eaa17065-383b-48ae-b776-7d2b3cf4f543>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/AFIS-Links-Help-Corner-Crooks.html?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00051-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956401
1,510
3.0625
3
Understanding Application Authentication and Authorization Security

Bill Cary - IoT

There are two parts to security in an application.

Authentication - Determines whether someone is allowed to access the application.
Authorization - Determines what privileges someone is given within the application to access particular functions like admin, work, accounting, etc.

Authentication mechanisms can be generic because they do not need to know anything about what happens inside the application. They just allow or disallow access to the application. Authorization mechanisms must be built by the application, since only the designer of the application understands what authorities must be in place to perform any given function.

Maximo has only two authentication mechanisms.

Native Authentication - Maximo stores the username and encrypted password in its own tables. When using native authentication, the user must enter their credentials each time they access the application, and the credentials are unique to the Maximo application. After login, the username is used to access the security tables to determine what authorities the user has within the application.

LDAP or Application Server Authentication - The username and password are stored in an enterprise directory server along with other profile information about the user. When using LDAP (without SSO), the user must enter their credentials each time they access the application, but the credentials are shared with any other applications that use the same directory server. This saves the user from having to maintain a long list of credentials for the various applications they use. When using LDAP, the application server (WebSphere or WebLogic) intercepts the application access request and authenticates to the directory server. Once authenticated, the username is passed to the application with a token that indicates the user has been authenticated. After login, the username is used to access the security tables to determine what authorities the user has within the application.

Since the application server is actually responsible for authentication in an LDAP configuration, configuring Single Sign-on (SSO) is really a matter of configuring the application server; it has nothing to do with the application itself. SSO integrates with the LDAP/directory server for a one-time sign-in and subsequently passes the same authentication token to any application authorized for use.

So far we have talked all about authentication and not much about authorization. As noted above, after login the username is passed to the application for authorization, but how did the usernames in the directory server get from the directory server to the application? Enter synchronization. Maximo has two types of synchronization, driven by cron tasks on a timed basis.

LDAPSYNC - A synchronization task dedicated to connecting only to Microsoft Active Directory (MSAD), retrieving user information and copying it (synchronizing it) to the application database. LDAPSYNC can be used with WebLogic or WebSphere but can ONLY connect to Microsoft Active Directory Server. LDAPSYNC uses the standard LDAP syntax that MSAD administrators will already know.

VMMSYNC - A synchronization task dedicated to connecting to a WebSphere virtual directory server, retrieving user information and copying it (synchronizing it) to the application database.
VMMSYNC can ONLY be used with WebSphere Application Server (not WebLogic) but can theoretically be used with any directory server that can be hooked up to the WebSphere virtual directory. VMMSYNC uses a unique LDAP syntax that users may need to become familiar with to get synchronization working properly.

WebSphere can connect to many directory servers. Maximo and other TPAE-based applications only support MSAD and Tivoli Directory Server (TDS), because we do not want to support a broad range of third-party products as part of our support contract. However, if the data from any directory server can be realized by WebSphere in its virtual directory, the VMMSYNC task should work to synchronize users into the Maximo DB. People interested in using other directory servers can work with WebSphere support to configure use with the WebSphere virtual directory, but Maximo support teams will not be able to support concerns or issues for configurations that are not supported.

On a final note, authorization in Maximo can be user based or role based, and can be controlled by the groups that users are members of. If the implementation wants to manage groups on the directory server side, both sync tasks can be configured to bring over both the users and the groups they are members of. This eliminates the need to manage the user once they are in the Maximo DB. If the decision is that groups are not managed by the directory server, then once a new user is synchronized to the Maximo environment, an administrator will have to manage the user to assign application authorities.

Example: New user "Joe Jones" is added to the directory server and put in a group on the directory server called "Work Management". Through WebSphere configuration, WebSphere is connected to the directory server, where WebSphere builds a virtual directory of the users configured. VMMSYNC runs, picks up the new user "Joe Jones" in the WebSphere virtual directory along with the "Work Management" group, and copies them to the Maximo DB. Assuming the Maximo application has been configured to use a group called "Work Management" to authorize capabilities in the application, the user is now able to log in and perform work management functions. A conceptual sketch of this flow is given below.

Note that configuring groups in WebSphere may require some additional configuration; this is discussed in my colleague Shane Howard's blog on custom attributes at the link below.
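To make the synchronization and authorization flow concrete, here is a minimal sketch in Python. This is not VMMSYNC or LDAPSYNC code; the in-memory directory snapshot, table names, user names and group names are all illustrative assumptions used only to show the shape of the flow.

# Conceptual sketch only: a stand-in for a directory server snapshot.
# In a real deployment this data would come from MSAD/TDS via the app server.
directory = {
    "jjones": {"name": "Joe Jones", "groups": ["Work Management"]},
    "asmith": {"name": "Ann Smith", "groups": ["Accounting"]},
}

# Stand-ins for the application's own user and group tables (the "Maximo DB").
app_users, app_groups = {}, {}

def sync_users():
    """Copy users and their group memberships from the directory snapshot
    into the application tables, as a timed cron task would."""
    for uid, rec in directory.items():
        app_users[uid] = rec["name"]
        for group in rec["groups"]:
            app_groups.setdefault(group, set()).add(uid)

def authorize(uid, function, required_group):
    """Authorization: the application decides what an authenticated user may do."""
    allowed = uid in app_groups.get(required_group, set())
    print(f"{uid} -> {function}: {'granted' if allowed else 'denied'}")

sync_users()
# Authentication happened earlier (natively, or via the app server and LDAP);
# only the verified username reaches this point.
authorize("jjones", "create work order", "Work Management")  # granted
authorize("jjones", "post invoice", "Accounting")            # denied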
<urn:uuid:9caa109f-aadf-416f-9a2f-43c8b61efe43>
CC-MAIN-2017-04
https://www.ibm.com/developerworks/community/blogs/a9ba1efe-b731-4317-9724-a181d6155e3a/entry/understanding_application_authentication_and_authorization_security?lang=en
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00537-ip-10-171-10-70.ec2.internal.warc.gz
en
0.909825
1,146
2.6875
3
The Army has developed and deployed technology to deal with the two signature wounds of the Afghanistan and Iraq wars -- post-traumatic stress disorder and traumatic brain injury, Army Surgeon General Lt. Gen. Patricia Horoho told attendees at the Association of the United States Army conference this week. The service operates 87 telebehavioral health nodes in Afghanistan to connect soldiers in remote outposts with mental health professionals in the rear, Horoho said. The service kicked off a telebehavioral health pilot project in Afghanistan in 2010 with just four nodes at Bagram Air Force Base that connected troops and clinicians over a secure videoconferencing link. Soldiers who used the pilot system felt comfortable discussing their mental health issues with a remote provider, although they said face-to-face encounters with health care professionals would improve the experience. The Army plans to field a smartphone system to assess TBI, Horoho said, and in September the Army awarded a $2.8 million contract to BrainScope of Bethesda, Md., for a handheld system that uses advanced algorithms to quantify and characterize features of brain electrical activity associated with TBI. To measure the effects of blasts on soldiers, Horoho said the Army has begun developing a "blast dosimeter" for individual troops to wear to determine the health consequences of exposure to explosions. She added that the service also is looking to field rapid biomarker tests to identify brain trauma. The Army uses stateside telemedicine systems to support the disability evaluation system for wounded or injured soldiers, Horoho said. Clinicians at hospitals in Hawaii and Washington state support evaluations performed at Fort Hood, Texas, and Fort Wainwright, Alaska, respectively.
<urn:uuid:cc414a5f-2670-47c5-bdf5-fcf11dc7dab6>
CC-MAIN-2017-04
http://www.nextgov.com/health/2012/10/army-expands-telebehavioral-health-care-afghanistan/59026/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00169-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934317
340
2.59375
3
As we all know, a fiber optic cable system cannot transmit optical signals without optical transceivers and fiber. There are two major categories of optical transmitters: light-emitting diodes (LEDs) and laser emitters. Although laser emitters far outperform LEDs, most LAN users have found laser emitters too expensive because of their manufacturing cost. A new type of light-emitting device, the vertical cavity surface emitting laser (VCSEL), has recently emerged to solve this problem. VCSELs combine the performance advantages of laser emitters, such as fast response and a narrow transmission spectrum, with the advantages of LEDs, such as efficient coupling and low cost. Using low-cost, high-performance VCSELs with multimode fiber optic cable, signals can be transmitted at up to 10 Gb/s.

However, another problem then confronts the user: transmission distance. Beyond the transfer rate, fiber optic cabling must also meet distance requirements. Experimental results show that traditional multimode fiber, whether 50 μm or 62.5 μm, can support 10 Gb/s transmission, but only over distances of less than 100 meters, which does not meet the needs of a network backbone.

The multimode fiber transmission bottleneck – DMD

Why can a multimode fiber that supports 100 Mbps at 2000 meters support 1 Gbps at only 550 meters? The main reason is a phenomenon in multimode fiber called DMD (Differential Mode Delay). Testing shows that as an optical pulse travels down a multimode fiber, it spreads out (broadens). When this spreading becomes severe enough, successive pulses overlap, and the receiving end can no longer distinguish the individual optical pulse signals. This phenomenon is called DMD.

The underlying cause is that an optical pulse in a multimode fiber comprises many modal components, and each modal component takes a different transmission path through the fiber. For example, a light component transmitted in a straight line along the center of the fiber takes a different path from a component that travels by reflection off the fiber cladding. From an electromagnetic point of view, the three-dimensional space within the multimode fiber core contains a great many modal components (300 to 1100), and their composition is very complex.

The ISO standard introduces a new classification of multimode fiber: OM1 refers to traditional 62.5 μm multimode fiber, OM2 to traditional 50 μm multimode fiber, and OM3 to the new 10 Gigabit fiber. Note that there are two bandwidth specifications for multimode fiber: the Overfilled Launch (OFL) bandwidth, which characterizes performance with an LED source, and the laser-launch bandwidth, which characterizes performance with the newer laser sources. OM3 fiber is optimized for both launch conditions at the same time.

Another point to note is the choice of transmission wavelength: 850 nm or 1300 nm.
Although performance improves at the longer wavelength, the cost of the light-emitting device roughly doubles; therefore, where possible, choose the short-wavelength application to reduce costs. For example, the new VCSEL devices are designed for short-wavelength environments, whereas the standard laser devices are mainly used in long-wavelength environments.

OM3 fiber test problems

The DMD test proceeds as follows: a 5 μm single-mode probe is connected to the OM3 fiber under test, and an optical pulse is launched into the fiber through the probe. The probe is scanned from the fiber axis toward the edge of the core, moving about 1 μm at each step. At the receiving end, the optical pulse for each position is recorded, and the results are superimposed on a time-domain diagram to form the DMD measure. Pulses arriving over different paths show a time difference, and each pulse also spreads on its own; the contributions of these two effects add together. Comparison against the standard then determines whether the OM3 fiber under test meets the specification.

OM3 fiber performance advantages

Users face pressure to upgrade network applications from 1 Gb/s today to 10 Gb/s in the future, and each user needs to consider carefully how to make that upgrade smooth. In the current 1 Gb/s network era, traditional multimode fiber supports distances of no more than 550 meters, while using single-mode fiber means using expensive laser devices at the same time. The cabling costs of the two options are almost the same, but on the network equipment side the choice means a price difference of at least a factor of two. In many cases, users transmitting over distances between 500 and 1000 meters have had no choice but to use laser devices. The new OM3 multimode fiber extends the supported distance for Gigabit Ethernet to 1000 meters without the need for expensive laser devices. So, at this stage, OM3 can bring users significant performance and cost advantages.
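As a rough illustration of why modal bandwidth limits reach, the sketch below estimates supported link length from a fiber's bandwidth–distance product. The bandwidth figures and the "0.7 × bit rate" rule of thumb are illustrative assumptions; real standards budgets also account for attenuation, connectors and line coding.

# Rough reach estimate from 850 nm laser-launch modal bandwidth (illustrative).
fibers_mhz_km = {
    "OM1 (62.5 um)": 200,    # assumed effective modal bandwidths, MHz*km
    "OM2 (50 um)":   500,
    "OM3 (50 um)":  2000,
}

def est_reach_m(bw_mhz_km, bitrate_gbps):
    # Rule of thumb: the link needs roughly 0.7 x bit rate of bandwidth.
    needed_mhz = 0.7 * bitrate_gbps * 1000
    return bw_mhz_km / needed_mhz * 1000   # convert km to m

for name, bw in fibers_mhz_km.items():
    print(f"{name}: ~{est_reach_m(bw, 10):.0f} m at 10 Gb/s")
# OM3 comes out near 300 m, matching the 10GBASE-SR reach, while
# OM1/OM2 fall well short of 100 m, as described in the article.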
<urn:uuid:d5a1808f-b90d-4c72-853d-f3d07c02715a>
CC-MAIN-2017-04
http://www.fs.com/blog/10-gigabit-multimode-and-dmd-testing-technology.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00565-ip-10-171-10-70.ec2.internal.warc.gz
en
0.91413
1,173
3.046875
3
Along a sandy road in the New Mexico desert, just about a 45-minute drive from Santa Fe, you’ll happen upon the town of Los Alamos, hidden amongst the foothills of Pajarito Mountain. This is the home of a U.S. government laboratory. It was founded under a veil of intense secrecy during the early 1940s, tasked with the infamous Manhattan Project. Seventy years on, we now know it’s also where government scientists have been operating what might be the world’s most secure version of the web, the quantum Internet.

British-born physicist Richard J. Hughes and a team of scientists at the Los Alamos National Laboratory have recently made public that they’ve been operating a network that communicates in an exceptionally safe and potentially hacker-resistant environment since 2011. Before their announcement you’d be forgiven for thinking that quantum Internet technology wouldn’t be within reach for quite a few years. They’ve achieved this by using a technique known as quantum cryptography. But perhaps more remarkable is that the researchers are confident it could readily, cost-effectively, and swiftly be implemented across the civilian Internet. At a consumer level, that’d mean safer and speedier online shopping transactions; and in a world where it seems just about any institution can succumb to a hack – even Google – the technology could also help to keep state secrets secret.

Whenever you purchase something online and you hit the ‘buy’ button and your computer seems to buffer, taking its sweet time to present you with the order confirmation page as you begin to doubt that the order went through successfully, “that’s because of the cryptography,” says Hughes. It takes time to create a secure line to transmit sensitive information, like your card number, between your laptop, eBay, and your bank. But “in our case that just wouldn’t happen,” says Hughes; “in principle [our invention] could speed up the Internet.”

Quantum communication is the process of manipulating a photon to empower it to carry binary information, which turns it into something called a ‘qubit’. Hughes created his qubits by polarizing photons. Qubits are fragile beyond words; if another computer so much as glances at one, the qubit’s makeup will change. This has so far limited quantum computing and acted as a major barrier to the discipline. Quantum cryptography takes this perceived weakness and turns it into a strength. If a quantum communiqué has changed in the slightest, it’s a telltale sign that the line has been tapped and someone who shouldn’t be is listening in. In other words, the delicate nature of a qubit allows it to act as a highly sensitive and sophisticated detector of security breaches.

Quantum cryptography isn’t exactly cutting-edge, but it’s never quite managed to take off because it hasn’t yet been commercially viable. “The news is that we can easily implement this into today’s Internet infrastructure. Others before us have failed here,” says Hughes. Most other quantum communication researchers haven’t tried to work with the current network. This meant what they came up with would have required a complete and expensive overhaul of how the Internet gets to your house. “We took a step back and asked how we could adapt quantum cryptography to what’s already there,” explains Hughes. They created a “quantum smart card,” which is about the size of a house key and sends qubits down the same fiber optic cables that we use to send emails, watch TV and shop online today.
They’re currently in the process of producing the second generation of smart cards, which they anticipate to be even smaller. His lab in New Mexico has been using their quantum Internet for about two and a half years, and Hughes says that if a company came to him tomorrow with serious intentions to implement his invention then “we could have it in place within a couple of years.”

The MIT Technology Review believes that while Hughes’ invention might be impressive, it may ultimately be short-lived and soon to be “obsolete as quantum routers become commercially viable.” But Ron Meyers, a top quantum physicist at the Army Research Lab in Maryland, says it’s not a technology to be so easily dismissed. “Yes, technology is always being upgraded but you can’t really skip this step,” he says. Meyers anticipates that Hughes’ work will be “important for U.S. national security … and if it’s implemented, the country and world are going to benefit.”

At the time of writing, Hughes and the national lab declined to comment on the specifics of which companies are pursuing his invention for fear of compromising negotiations. But he was able to say that they’ve “received expressions of interest in commercializing the technology from more than 14 companies, the majority of which are U.S. corporations.”
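To illustrate the eavesdropping-detection principle Hughes describes, here is a small simulation of a textbook BB84-style exchange in Python. It is a toy model, not the Los Alamos system: a naive intercept-and-resend eavesdropper pushes the error rate on the shared key to roughly 25 percent, which is exactly the kind of telltale disturbance the article refers to.

import random

def measure(bit, prep_basis, meas_basis):
    # Same basis: deterministic result; mismatched basis: random outcome.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

n = 100_000
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.randint(0, 1) for _ in range(n)]
bob_bases   = [random.randint(0, 1) for _ in range(n)]

for eavesdropper in (False, True):
    bob_bits = []
    for bit, basis, bob_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdropper:                       # intercept-and-resend attack
            eve_basis = random.randint(0, 1)
            bit, basis = measure(bit, basis, eve_basis), eve_basis
        bob_bits.append(measure(bit, basis, bob_basis))
    # Sift: keep only positions where Alice's and Bob's bases agree.
    sifted = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    errors = sum(alice_bits[i] != bob_bits[i] for i in sifted)
    print(f"eavesdropper={eavesdropper}: error rate {errors/len(sifted):.1%}")
# Prints ~0.0% without Eve and ~25% with her: the tap reveals itself.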
<urn:uuid:d673ed5a-fb54-42db-b4db-cc65819aeb74>
CC-MAIN-2017-04
https://www.ncta.com/platform/broadband-internet/declassified-the-governments-quantum-internet/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00013-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960338
1,051
3.15625
3
An animated video, narrated by the inimitable Professor Heinz Wolff, popped up on YouTube last month, detailing a prototype optical processor with some remarkable claims. The company behind the effort, UK-based startup Optalysys, says it is on track to deliver exascale levels of processing power on a standard-sized desktop computer within the next few years.

As Professor Wolff explains, Optalysys is pioneering a new technology that uses light rather than electricity to perform compute-intensive mathematical functions at speeds well in excess of what can be achieved with electronics, and at a fraction of the cost and power consumption. A 340 gigaflops proof-of-concept model is slated for launch in January 2015, sufficient to analyze large data sets and produce complex model simulations in a laboratory environment, according to the company. Unlike current supercomputers, which still use what are essentially serial processors, the Optalysys Optical Processor takes advantage of the properties of light to perform the same computations in parallel and at the speed of light.

“Optalysys’ technology applies the principles of diffractive and Fourier optics to calculate the same processor-intensive mathematical functions used in CFD (Computational Fluid Dynamics) and pattern recognition,” explains founder and CEO Dr. Nick New. “Using low power lasers and high resolution liquid crystal micro-displays, calculations are performed in parallel at the speed of light.”

The company is developing two products: a ‘Big Data’ analysis system and an Optical Solver Supercomputer, both on track for a 2017 launch. The analysis unit works in tandem with a traditional supercomputer. Initial models will start at 1.32 petaflops and will ramp up to 300 petaflops by 2020. The Optalysys Optical Solver Supercomputer will initially offer 9 petaflops of compute power, increasing to 17.1 exaflops by 2020.

Perhaps the most impressive trait of all is the reduced energy footprint. Power remains one of the foremost barriers to reaching exascale with a traditional silicon processor approach, but these optical computers are said to need only a standard mains supply. Estimated running cost: just £2,100 per year (US$3,500). To compare, scaling up today’s technology to exascale levels would require at least 200 MW of power, and the current fastest supercomputer, Tianhe-2 in Guangzhou, China, draws 24 MW (including cooling) at a cost of about $21 million per year.

Optalysys Ltd. raised over £400,000 (US$675,000) in seed money earlier this year, which enabled it to bring its innovative technology to NASA Technology Readiness Level 4 ahead of schedule. The startup will be targeting these systems at the CFD market, which encompasses a quarter of a million engineers and scientists around the world. CFD is essential for a number of disciplines, including weather prediction, automotive and aerospace design and more.

“Early conversations with potential customers have been extremely positive and one of the largest weather centres have said they are keen to collaborate with us because the energy cost to produce such high quality forecasts, and deal with the huge data volumes, is unaffordable with current processor technologies,” said the CEO. “Whilst our goals are ambitious they are definitely achievable and we are confident that Optalysys technology will be a game-changer for the global science and engineering communities.”
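As a sanity check on those running-cost figures, a quick back-of-the-envelope calculation reproduces the article's numbers, assuming an illustrative industrial electricity price of $0.10 per kWh:

# Back-of-the-envelope annual energy costs; the $0.10/kWh price is an assumption.
PRICE_PER_KWH = 0.10
HOURS_PER_YEAR = 24 * 365

def annual_cost(megawatts):
    kwh = megawatts * 1000 * HOURS_PER_YEAR
    return kwh * PRICE_PER_KWH

print(f"Tianhe-2 at 24 MW:  ${annual_cost(24):,.0f}/yr")    # ~$21 million
print(f"Exascale at 200 MW: ${annual_cost(200):,.0f}/yr")   # ~$175 million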
<urn:uuid:6740efb7-19af-4b9b-bb8a-2c1452bdd6bd>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/08/06/exascale-breakthrough-weve-waiting/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00005-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924828
722
3.15625
3
Teens are now sharing more information about themselves on social media than ever before. This free webinar will explore what privacy means to teens and offer ideas to help them better protect themselves online. The topics covered include identity theft, sexting and digital footprints.

“The way that this generation of teens looks at privacy and sharing information online can be seen every day in how they use smartphones and social media apps,” said John Ryan, NCMEC president and CEO. “We want to make sure that safety and the real world impact of online choices are part of their thinking and we’re grateful to CA Technologies for helping us share these important messages.”

The webinar, sponsored by CA Technologies and hosted by NCMEC’s NetSmartz Workshop, will feature Larry Magid, a technology journalist and Internet safety advocate. Magid, who serves as on-air technology analyst for CBS News, is co-director of ConnectSafely.org, founder of SafeKids.com and an NCMEC board member. He also wrote NCMEC’s first publication on online safety in 1994.

For more than a decade, CA Technologies has supported NCMEC and its mission to find missing kids and prevent child exploitation. In addition to monetary support, the company has contributed software solutions, services and training. CA Technologies funding currently helps support NetSmartz, a free educational resource developed by NCMEC to empower children and help them make safer choices both online and in the real world.

“We are proud to partner with the National Center for Missing & Exploited Children on this important safety webinar for teens,” said Erica Christensen, vice president, Corporate Social Responsibility, CA Technologies. “Through this initiative, our goal is to help educate young people about the privacy risks we all face when sharing information online.”

This is the fourth in a series of webinars, which began in 2012, hosted by NetSmartz and sponsored by CA Technologies. For more information or to register for the webinar visit: http://engage.vevent.com/rt/ncmewebcasts/index.jsp?seid=48.

About the National Center for Missing & Exploited Children

The National Center for Missing & Exploited Children is the leading 501(c)(3) nonprofit organization working with law enforcement, families and the professionals who serve them on issues relating to missing and sexually exploited children. Authorized by Congress to serve as the nation’s clearinghouse on these issues, NCMEC operates a hotline, 1-800-THE-LOST® (1-800-843-5678), and has assisted law enforcement in the recovery of more than 199,000 children. NCMEC also operates the CyberTipline, a mechanism for reporting child pornography, child sex trafficking and other forms of child sexual exploitation. Since it was created in 1998, more than 2.7 million reports of suspected child sexual exploitation have been received, and more than 120 million suspected child pornography images have been reviewed. NCMEC works in partnership with the U.S. Department of Justice’s Office of Juvenile Justice and Delinquency Prevention. To learn more about NCMEC, visit www.missingkids.com. Follow NCMEC on Twitter and like NCMEC on Facebook.

About CA Together

CA Technologies is a global corporation with a local commitment. The company works to improve the quality of life in communities where its employees live and work worldwide and is fully committed to advancing social, environmental and economic sustainability.
CA Together, the company’s Corporate Social Responsibility (CSR) program, is driven by the core philanthropic focus of improving the lives of underserved children and communities around the world. CA Technologies does this by supporting organizations, programs and initiatives that enrich the lives and well-being of others with a primary focus on science, technology, engineering and math (STEM) education. CA Together activities encompass employee volunteerism and matching gifts; in-kind donations of CA Technologies products and services; and wide-ranging partnerships and philanthropic support to community organizations worldwide. It also includes the company’s sustainability area and focus on advancing CA Technologies strategy and initiatives toward the triple bottom line of people, planet and profit.
<urn:uuid:3754ffff-3901-4d7b-a516-49a345615f33>
CC-MAIN-2017-04
https://www.ca.com/us/company/newsroom/press-releases/2014/ncmec-and-ca-technologies-tackle-the-truth-about-teens-and-online-privacy.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00491-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939671
886
2.921875
3
Are you ready to turn your finger into a speaker? Called Ishin-Den-Shin after the Japanese phrase “what the mind thinks, the heart transmits,” technology developed by Disney Research uses the body as a sound transmitter — a recorded message can be heard by touching your ear to another person’s finger.

How It Works

A computer connected to a handheld microphone records the message, transforming it into a sound loop that’s converted into a harmless high-voltage audio signal, which is then transmitted to the microphone’s conductive casing. The signal creates a modulated electrostatic field around the body of the person holding the microphone, allowing him or her to become a sound transmitter. Researchers say the technology can extend beyond the body, turning everyday items into interactive sound devices.

Digital currency is becoming actual cash in Vancouver. The world’s first Bitcoin ATM became available in late October, followed by rumors that four more Robocoin ATMs will be installed in Canada. It’s been reported that the machine doesn’t look much different from a standard ATM, with the exception that users are verified through a palm scan instead of the traditional card and PIN combination.

Researchers at Fudan University in Shanghai have demonstrated network technology that could be 10 times faster than traditional Wi-Fi by transmitting data as light instead of radio waves. To carry the signal, the light needs to flicker very rapidly, and a camera connected to the user’s device needs to be positioned so that it can see the light. The downside is that users need to be within sight of the light bulb, but overhead lights in an office, for example, could be wired to the Web. Source: Quartz

John McAfee, eccentric founder of anti-virus software giant McAfee Inc., wants to create a device that blocks the National Security Agency — and other snoopers — from accessing private information. He calls his idea the D-Central, a gadget that would work with smartphones and other devices on a small, private network to prevent unwanted access. While no prototype is available yet, the design is complete. McAfee’s looking for development partners. Source: Mashable
<urn:uuid:939e98a4-4749-4a40-a8d0-7d72cc17bea8>
CC-MAIN-2017-04
http://www.govtech.com/products/Bitcoin-ATMs-Hit-Canada.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00123-ip-10-171-10-70.ec2.internal.warc.gz
en
0.928783
449
2.9375
3
The Internet connection we all rely on is about to change, now that WISP is coming to town. Most people get Internet service from either a telephone company or a cable company because those providers already provide physical connections to their homes and businesses. A WISP (wireless Internet service provider) doesn't need to bring wire to your location, making it a good solution for serving rural areas where telcos and cable companies couldn't be bothered to invest. Until recently, however, WISPs were unable to match the speed and reliability of DSL and cable modems. As wireless technology has evolved, WISPs are beginning to compete in urban areas on speed and price. Here's how it works.

What makes a WISP

A WISP is distinct from other wireless services we currently use. Most cell-phone service providers offer wireless Internet service--with 4G LTE being the fastest current technology--but that doesn't make them WISPs. Cell-phone service providers don't expect you to use their service 24/7, and most place very low caps on the amount of data you can transfer over their networks each month (and charge hefty fees if you exceed that amount). Being able to access the Internet while you're out and about is a distinct advantage, but LTE data rates are relatively slow, and coverage can be spotty--especially away from large metropolitan areas.

Satellite TV providers that also provide wireless Internet service, such as Dish Network, are closer to being WISPs. They can deliver wireless Internet service to any home that has a clear view of the southern sky. But the data must travel very long distances, which limits the service's speed, and lag can be a big problem--especially for playing games.

A true WISP is a mix of cellular provider and satellite provider elements. Like a cell provider, it mounts antennas on towers (or atop buildings) to transmit signals, and it installs an antenna--or in some cases, a dish--on the customer's home or building. Like a satellite service provider, it typically delivers service to a fixed location.

Comparing pricing and features

Most WISPs offer tiered service levels, charging higher fees for faster speeds and/or more bandwidth. Like telcos, cable companies, and other ISPs, WISPs typically require you to commit to a one- or two-year contract, and they charge an installation or activation fee. Most WISPs are regional operators that serve limited areas. Netlinx, for instance, serves residential and business customers in southern Pennsylvania. The company's prices for residential service range from $30 to $80 per month. At the low end, you get download speeds of up to 1 mbps, with speed bursts of up to 3 mbps. Upload speeds at this tier are 512 kilobits per second. At the high end, you get download speeds of up to 15 mbps (with bursts up to 30 mbps) and upload speeds of 3 mbps.

Many WISPs provide faster upload speeds than the typical 5 to 10 mbps that most cable and DSL providers offer. That can be useful for businesses with remote offices, offsite PC or server backup requirements, or other applications where upload speeds are just as important as download speeds. Like other ISPs, some WISPs limit how much data you can use per month, but these limits tend to be more generous than what cell, satellite, and even some cable providers offer. A few, such as Wisper ISP (serving southern Illinois and eastern Missouri), provide uncapped service.
Utah-based Vivint, a newcomer to the WISP market, is offering wireless Internet service at upload and download speeds of 50 mbps for just $55 per month. But the company--best known for its home-security/automation services--has only just begun to roll out its service, which is not widely available outside Utah.

Finding a WISP

If you think a WISP might be a better option for you than your current ISP is, you can check a number of online directories to find a WISP that provides coverage in your area, including the WISPA Member Directory, WirelessMapping.com, and Broadband Wireless Exchange. Some WISPs provide a coverage map on their website. Others describe only the general coverage area, and you must call or fill out an online form to get coverage details for a particular address.

The time when a WISP was an ISP of last resort--because nothing else was available in a particular area--is coming to an end. As the new class of WISP service spreads, the resulting competition should force telcos and cable companies to step up their game, cut their prices, or both!

This story, "Meet WISP, the wireless future of Internet service" was originally published by PCWorld.
<urn:uuid:9d143903-8a92-4ed8-b449-180aeba63a8f>
CC-MAIN-2017-04
http://www.networkworld.com/article/2172327/smb/meet-wisp--the-wireless-future-of-internet-service.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00335-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951813
1,000
2.5625
3
As hard as it is to believe, summer is starting to wind down. Teachers are busy setting up their classrooms, hanging posters and assigning seats. While lesson plans are still being designed, we wanted to share some free, online resources for K-12 teachers.

TED Talks – No matter the topic, TED Talks has a relevant video, from algebra and poetry to learning how to handle bullying and racism. With videos easily sorted by category, teachers can quickly find the appropriate video for their students. We’ve already pulled together the top Talks for K-12 students, parents, teachers, and school administrators. New TED Talks are constantly being added, so be sure to check back frequently throughout the school year.

SMART Exchange – Using SMART boards in your classroom? SMART Exchange has you covered with ready-made lessons or inspiration from other teachers who’ve taught the same material using interactive whiteboards. Custom-designed searches save time by delivering relevant results and a full visual preview of all SMART Notebook files, so teachers can easily evaluate a lesson before downloading it. While registration is required, it’s completely free.

Share My Lesson – Teachers are helping teachers with Share My Lesson, a website that enables teachers to share their lesson plans with one another. Users are encouraged to rate the lesson plans so the best ones are easy to find.

Kahoot! – Want to get kids engaged in a lesson? Turn it into a game. Games are playing an increasingly large role in the classroom, and Kahoot! lets you design your own games, play them as a class, and tap into games designed by educators. Plus, it’s free, and Kahoot’s founders promise it will remain free.

Code.org – According to its website, Code.org is a non-profit dedicated to expanding access to computer science and increasing participation by women and underrepresented students of color. Its vision is that every student in every school should have the opportunity to learn computer science. Code.org uses games and familiar characters such as Angry Birds and Star Wars to teach students the basics of coding. The program is free and is targeted primarily at elementary-age students.

Are there any resources you would add to the list? Let us know in the comments.
<urn:uuid:21192a59-67f4-4ee4-8cf0-cf2468374f8b>
CC-MAIN-2017-04
https://www.meritalk.com/articles/5-free-curriculum-resources-for-k-12-teachers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00151-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951922
472
3.21875
3
Corporate governance is now being increasingly practiced by companies across the globe due to the number of benefits it offers. Practicing corporate governance is beneficial for a company and its stakeholders, as well as for the economy as a whole. A few benefits of corporate governance are mentioned below.

If a company practices corporate governance, people not linked to the firm will also be able to assess its governance. This is because the most fundamental principles of corporate governance are transparency and disclosure. Every step taken by company authorities having control over the company’s management is in the best interests of the company and its stakeholders. This has a positive impact on the community and may reflect upon the market valuation of the firm and hence its share price.

Companies that follow a set of best practices are encouraged to be highly transparent about their business. This helps them earn the trust of the community and their stakeholders and eases the task of raising capital when needed. Because the business is easy to assess and evaluate due to its high level of transparency, many investors and financial institutions prefer funding these companies over those that do not follow the core principles of corporate governance.

Under corporate governance, a firm tends to act in the best interest of the firm and its stakeholders. This helps ensure greater success, as the goals of the company’s managers are now aligned with the goals of the company. The result is greater profit and faster growth, which benefits the company and all of its stakeholders.

The practice of good corporate governance allows firms to gain the trust of investors, customers and the community at large. This has a positive impact on the company’s reputation, and the firm will be recognized as fair and transparent. This image will help the company prosper in the long run and achieve its goals more quickly.

Good practices of corporate governance also help companies become more efficient in their business. Employees who are trained to follow ethical business practices will avoid excess wastage of company resources and will tend to utilize all resources optimally.

By following the practices of good governance, a company can reduce the amount of risk in its business, as well as attempts at corruption and mismanagement. Due to the degree of transparency required in companies that follow the principles of good governance, individuals intending to misuse their position and power will be unable to do so. This will reduce the overall incidence of negative acts in the company and help it achieve success and a positive image in the community.

A company following good corporate governance will be able to earn the trust of the community and, hence, success in the long run. A firm’s good reputation will ensure a good flow of capital by attracting foreign investors to the economy and will benefit the economic situation of the nation.
<urn:uuid:d6dfcfae-9914-4447-be9d-b982667fabe4>
CC-MAIN-2017-04
http://www.best-practice.com/compliance-best-practices/compliance-management/benefits-of-practicing-good-corporate-governance-principles/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00059-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964814
556
2.734375
3
SecureWorks published the locations of the computers from which the greatest number of cyber attacks were attempted against its clients in 2008. The United States topped the list with 20.6 million attempted attacks originating from computers within the country, and China ran second with 7.7 million attempted attacks emanating from computers within its borders. These were followed by Brazil with 166,987 attempted attacks, South Korea with 162,289, Poland with 153,205, Japan with 142,346, Russia with 130,572, Taiwan with 124,997, Germany with 110,493, and Canada with 107,483.

Computer security can be greatly improved by keeping your web browser and operating system up to date, using the latest versions of antivirus and antispyware software, and following safe computing practices, such as being wary of the websites you visit and not clicking on attachments and links within emails until verifying that the sender intentionally sent them.

These findings illustrate the ineffectiveness of simply blocking incoming communications from foreign IP addresses as a way to defend your organization from cyber attacks, as many hackers hijack computers outside their own borders to attack their victims. The Georgia/Russia cyber conflict was a perfect example of this. Many of the Georgian IT staff members thought that by blocking Russian IP addresses they would be able to protect their networks. However, many of the Russian attacks were actually launched from IP addresses in Turkey and the United States, so the Georgian networks were hit hard. This was a perfect example of Russian cyber criminals using compromised computers outside their borders.

China's hackers do create botnets by spamming through email and blogs, but a relatively larger percentage of the compromised hosts under Chinese control are simply machines in schools, data centers and companies – in other words, on large networks – that are mostly unguarded and consequently are entirely controlled by hacker groups, as opposed to distributed bots harvested from widely distributed international spam runs. Often the groups have an insider in the networks they own. We also see many local hacker groups in Japan and Poland compromise hosts within their own countries to use in cyber attacks, so the Chinese hackers are not alone in using resources within their own borders.

With hackers utilizing computer resources both inside and outside their borders, SecureWorks suggests that, in addition to securing computers with ongoing system and security updates and patches, organizations should utilize a blacklist to block inbound communications from known malicious IP addresses. Organizations should also block outbound communications to foreign countries known to harbor hackers and to hostile networks known to host criminal activity. This way, if your organization does have an infected host within its network, the host will be blocked from sending personal or company data to the cyber criminals. Of course, some of these hostile networks do support a handful of legitimate sites. In addition to a blacklist, your organization can use a separate whitelist to allow outbound communication only to trustworthy sites on those otherwise hostile networks.
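To illustrate the combined blacklist/whitelist idea, here is a minimal sketch in Python using the standard library's ipaddress module. The network ranges below are documentation placeholders, not real threat data; a real deployment would enforce such a policy at the firewall with curated feeds.

import ipaddress

# Illustrative policy lists only.
blocked_networks = [ipaddress.ip_network("198.51.100.0/24")]   # hostile range
whitelist_hosts  = {ipaddress.ip_address("198.51.100.7")}      # legitimate site inside it

def outbound_allowed(dest: str) -> bool:
    addr = ipaddress.ip_address(dest)
    if addr in whitelist_hosts:            # whitelist overrides the block
        return True
    return not any(addr in net for net in blocked_networks)

print(outbound_allowed("198.51.100.7"))    # True  (whitelisted)
print(outbound_allowed("198.51.100.99"))   # False (in blocked range)
print(outbound_allowed("203.0.113.5"))     # True  (no rule matches)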
<urn:uuid:3f3f3da7-343e-4ba1-b4a2-d7e23a8a1bb8>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2008/09/22/us-responsible-for-the-majority-of-cyber-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00545-ip-10-171-10-70.ec2.internal.warc.gz
en
0.96593
592
2.703125
3
The Pentagon plans to fork over $32 million to develop "fun to play" computer games that can refine the way weapons systems are tested to ensure they are free from software errors and security bugs, according to a Defense Department solicitation. The goal is to create puzzles that are "intuitively understandable by ordinary people" and could be solved on laptops, smartphones, tablets and consoles. The games' solutions will be collected into a database and used to improve methods for analyzing software, according to the draft request for proposals put out by the military's venture capital and research arm, the Defense Advanced Research Projects Agency. As weapons systems have become more complex, the military's methods for verifying that the software running on them is glitch-free and secure against hackers have fallen short. Formal verification is the process analysts use, through the application of mathematical theories, to determine whether software code is free from bugs. Crowdsourcing this complicated task would help the Pentagon cut costs while it grapples with a shortage of computer security specialists. "Formal verification has been too costly to apply beyond small, critical software components," the document said. "This is particularly an issue for the Department of Defense because formal verification, while a proven method for reducing defects in software, currently requires highly specialized talent and cannot be scaled to the size of software found in modern weapon systems." DARPA's three-year experiment, known as Crowdsourced Formal Verification, will address the question: How can developers translate formal verification problems into compelling puzzles people will want to solve? The agency estimates that it will spend $4.7 million on the project this year. The games will be released for testing by the public at the end of the program's two research phases. Researchers must provide programming tools that allow robots to play the games. "However, some problems are expected to remain beyond any robot's ability to solve," the solicitation notes. DARPA did not respond to requests for an interview. The use of crowdsourcing and games to tackle complex, real-world problems has gained traction since players of Foldit, a protein-folding computer game that analyzes possible protein combinations, recently deciphered an AIDS-related enzyme that had baffled scientists for more than a decade. The creation of Foldit by the University of Washington was funded in part by DARPA. Another game, EteRNA, allows players to design RNA -- or ribonucleic acid -- molecules, creating genetic blueprints that scientists could build on to influence what happens inside living cells and possibly treat diseases in new ways. "One of the really exciting things is that when we inject a new kind of problem in the world and provide tools to solve that problem, experts at the task just emerge," said Adrien Treuille, an assistant computer science professor at Carnegie Mellon University who has been involved in developing both games. Security professionals, while intrigued by the potential of DARPA's idea, have reservations about whether the program will meet its ambitious goals. It would be more cost-effective for the government to focus efforts on ensuring that software is secure while it's being engineered rather than after it has been deployed in systems, said Gary McGraw, chief technology officer at Cigital, a Dulles, Va.-based security consultancy.
"It's easier to build something right than to build a broken thing and then have to fix it." If players know a game is mapped to a weapons system's software, there's the alarming possibility that they could rig its results. "They could collude and play the game to show there are no security problems," said Nasir Memon, director of the Information Systems and Internet Security Laboratory at the Polytechnic Institute of New York University. "How can you trust results from that?"
<urn:uuid:ab238f77-1769-46a6-a177-ac38b0b1d7a1>
CC-MAIN-2017-04
http://www.nextgov.com/defense/2012/01/pentagon-funded-games-would-crowdsource-weapons-testing/50479/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00574-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961535
765
2.609375
3
The Ruby language and the Ruby on Rails framework remain major forces in the world of computing through both their existing broad bases and the strong demand for Ruby and Rails programmers in the field. With Rails in the wild for more than a decade and Ruby having 20-plus years under its belt, both the language and its associated framework have staked out specific territories. Here's a look at where Ruby and Rails are most at home and where they're facing real challenges, both technical and existential. The great: Rapid Web development (especially for Rails) A big part of the concept behind Ruby (and Ruby on Rails) was making the programmer happy: keeping it simple to learn and easy to use, but also proffering advanced features when needed. That approach paid off, as Ruby and Ruby on Rails have built a decade-long mutual success story on providing straightforward ways to get a lot done. With Rails, that means Web apps can be prototyped in a fraction of the time it would take to build them elsewhere. That fast-but-smart approach helped bootstrap a range of sites that have since become household names in IT: Twitter, GitHub, Hulu, and many more. All of the above hints that interest in Rails drives much of the business and enterprise interest in Ruby. Work on Rails itself is also ongoing, with the recently released 4.2 providing performance improvements and setting the stage for the long-awaited version 5 of the framework. The good: Scripting and libraries Like Python, another language often mentioned in the same breath, Ruby is useful for automating tasks or stitching together functionality from different parts of the IT ecosystem. Ruby's "gems," or software packages (6,400 and counting), can be installed easily from the command line. SDKs for third-party applications and services, such as Amazon's, more often than not include Ruby wrappers or libraries as a way to make them accessible to Ruby apps. Where Python has an edge, though, is in specialized computing, specifically science and math, where Python has a well-developed subculture of both users and libraries. Ruby is addressing this gap by way of projects like SciRuby, although Python has the incumbent advantage in terms of both adoption and coverage. Not so good: Projects needing major scale, speed, or asynchronicity Where Ruby is sliding, it would be fairer to say Rails is the one losing ground in specific settings and pulling Ruby along with it. Some legacy projects in Rails suffering from problems of scale or performance are being rewritten in other languages and frameworks, with Node.js and Go as two of the most common contenders. High-profile examples abound. Mobile-app outfit Parse switched from Ruby to Go to deal with an explosive amount of growth that its engineers felt couldn't scale effectively in Ruby. Twitter, originally a Ruby on Rails project, was rewritten in Scala and replaced its front end with a custom Java-based solution. Few sites will ever experience the same extreme demand as Twitter, so not every Rails-driven site is a candidate for a ground-up rewrite. But other high-profile projects involving Ruby that aren't Rails works are feeling the draw of other language ecosystems. The Puppet project's server component switched from Ruby to Clojure, in large part because of the advantages provided by the Java Virtual Machine (upon which Clojure runs) and its attendant software. Even JRuby, the version of Ruby hosted on the JVM for both speed and convenience, has its limits as an alternative.
As the Parse team found out, "JRuby is still basically Ruby, [since] it still has the problem of asynchronous library support ... The vast majority of Ruby gems are not asynchronous, and many are not threadsafe, so it was often hard to find a library that did some common task asynchronously." In light of this, one possible reason for the ongoing demand for Ruby and Ruby on Rails skills is the need to preserve, maintain, or even replace existing Ruby or Rails infrastructure, rather than to build new projects with it. Either way, there's a heavy ongoing demand for both Ruby and Rails. Even better, it often pays well, as Ruby and Rails developers average around $110,000 nationwide, according to Indeed.com. This story, "The state of Ruby and Rails: Opportunities and obstacles" was originally published by InfoWorld.
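The asynchrony gap the Parse engineers describe is easiest to see in miniature. The sketch below uses Python's asyncio purely as a neutral illustration of the concurrency model (the point is the model, not the language): with synchronous, blocking libraries each half-second wait costs half a second, while an asynchronous runtime overlaps the waits.

import asyncio, time

async def fake_request(task_id: int) -> int:
    await asyncio.sleep(0.5)          # stands in for a network call
    return task_id

async def main() -> None:
    start = time.perf_counter()
    results = await asyncio.gather(*(fake_request(i) for i in range(100)))
    elapsed = time.perf_counter() - start
    # 100 half-second waits finish in roughly 0.5 s because they overlap;
    # done synchronously, the same work would take about 50 s.
    print(f"{len(results)} requests in {elapsed:.2f} s")

asyncio.run(main())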
<urn:uuid:9c762667-6f2b-4a3a-b87c-82b38b06bd40>
CC-MAIN-2017-04
http://www.itnews.com/article/2945136/scripting-jvm-languages/the-state-of-ruby-and-rails-opportunities-and-obstacles.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00482-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959545
891
2.609375
3
Kaspersky Lab, an international data-security software developer, reports the detection of the "Donut" virus, the first malicious program to infect .NET files. "Donut" was developed by the notorious Czech hacker going by the pseudonym "Benny", who is a part of the "29A" virus-writers group. "Benny" is known to be the author of many proof-of-concept viruses, among which are "Stream" (the first NTFS alternate data streams infector), "Inta" (the first Windows 2000 virus), "HIV", "Champ", "Eva", "Begemot", etc. The most intriguing aspect of this virus is that the .NET technology, which Microsoft presents as the future substitute for Java, has not yet been officially released and is intrinsically still under development. "It is well-known that virus writers are primarily interested in the most popular and wide-spread software products, which nowadays are undoubtedly the Microsoft technologies. The appearance of 'Donut' confirms the opinion that the company's products are guaranteed to be popular not only among users but also among virus-writers," commented Denis Zenkin, Head of Corporate Communications for Kaspersky Lab. "This time the computer underground decided not to wait for the official release of the promising technology and to start developing .NET-specific malicious programs beforehand, anticipating the technology's future commercial success." When the virus-carrying file is executed, "Donut" loads itself into system memory and starts searching for .NET files on the target computer. If such files are found, the virus infects them by modifying the files' entry point. Thus, when the infected file is launched, the virus code is executed first and then passes control to the .NET file processor in order to execute the original .NET file. It is important to note that "Donut" is not a pure .NET virus. It simply infects .NET files, but is essentially ordinary Windows executable code written in assembler. Except for infecting other .NET files, the virus has no additional dangerous side-effects and no destructive payload. Kaspersky Lab believes that "Donut" poses no real danger to computer users because of the low prevalence of .NET technology. Therefore, even if a user accidentally starts an infected file, the virus will not do any harm to the computer due to the absence of the .NET file processor and other .NET files necessary for infection. Defense procedures against "Donut" were added to the Kaspersky Lab daily anti-virus database update as of January 10, 2002. More detailed information about this malicious program is available in the Kaspersky Virus Encyclopedia.
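The mechanics are easy to picture from the defender's side. The sketch below is a hypothetical detector of ours, not Kaspersky's engine; it uses the third-party pefile library to answer the same question "Donut" had to answer before infecting a file: is this Windows executable a .NET file? A PE file carries managed code when the CLR (COM descriptor) entry, data directory 14 of its optional header, is populated.

import pefile  # third-party library, assumed installed via: pip install pefile

CLR_DIRECTORY_INDEX = 14  # IMAGE_DIRECTORY_ENTRY_COM_DESCRIPTOR

def is_dotnet_executable(path: str) -> bool:
    """Return True if the PE file at 'path' contains .NET managed code."""
    try:
        pe = pefile.PE(path, fast_load=True)
    except pefile.PEFormatError:
        return False  # not a Windows executable at all
    clr = pe.OPTIONAL_HEADER.DATA_DIRECTORY[CLR_DIRECTORY_INDEX]
    return clr.VirtualAddress != 0 and clr.Size != 0

print(is_dotnet_executable("example.exe"))  # hypothetical file name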
<urn:uuid:a90c3d3b-8770-4b45-acca-4d5d365f5681>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2002/_NET_Technology_is_Still_in_Development_but_a_Virus_Already_Exists
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00326-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937326
574
3
3
This flat monthly charge was established by a U.S. congressional ruling related to the 1983 Bell System settlement. It was intended to reduce the level of access charges long distance providers pay local phone companies for the use of local networks, thereby promoting competition among long distance providers. What is it? It's the part of interstate long-distance rates that pays for some of the cost of the local portion of the telephone network. The subscriber line charge helps local telephone companies recover some of the costs associated with connecting telephone lines to your home or business. Long distance carriers use those local lines to connect their long distance calls, and this charge contributes to the infrastructure needed to make the telecommunication system work. Regulated by the Federal Communications Commission (FCC), this fee is assessed to all incumbent local exchange carrier (ILEC) end users. Also known as: - FCC-Approved Customer Line Charge - FCC Subscriber Line Charge - Interstate Subscriber Line Charge - Customer Subscriber Line Charge - Federal Line Fee - Interstate Access Surcharge
<urn:uuid:d89b9953-2a9d-4e61-8e81-2d4ec360eb6d>
CC-MAIN-2017-04
http://www.centurylink.com/home/help/billing/overview-of-taxes-and-fees/subscriber-line-charge-explained.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00144-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92461
221
2.578125
3
When we want to join two fibers or fiber optic cables, two methods come to mind: installing fiber optic connectors on the fiber ends, or splicing the two fibers together. Fiber optic cable splicing is a method that creates a permanent joint between two fibers, while fiber connector installation is used for temporary connections. There are two options for fiber optic splicing: fusion splicing and mechanical splicing. Both methods provide much lower insertion loss than fiber optic connectors. A common application for splicing is joining cables in long outside plant cable runs where the length of the run requires more than one cable. Splicing is generally used to terminate singlemode fibers by splicing preterminated pigtails onto each fiber. It can also be used to combine different fiber cable configurations, such as connecting a 48-fiber cable to six 8-fiber cables going to different places. Fusion splicing provides a maximum insertion loss of 0.1 dB, compared with 0.5 dB for mechanical splicing. Fusion splicers are available in two types: those that splice a single fiber and those that splice a ribbon of 12 fibers at one time. Almost all singlemode splices are fusion spliced. Mechanical splicing is used mostly for temporary restoration and for multimode splicing. Fusion splices are so good today that splice points may not be detectable in OTDR traces. Fusion Splicing Process Fusion splices are made by welding two fibers together with the electric arc of a fusion splicing machine. For safety reasons, this should not be done in an enclosed space; it is best done above ground in a truck or trailer that provides a clean environment for splicing. Fusion splicing requires a special piece of equipment, the fusion splicer, to perform the process. The main steps are aligning the two fibers precisely and generating a small electric arc to melt the fibers and weld them together. A splicing machine does one fiber at a time, while a mass fusion splicer can do all 12 fibers in a ribbon at once. Preparing fibers: The first step in fusion splicing is to strip, clean, and cleave the fibers to be spliced. Strip the primary buffer coating with a fiber stripper to expose the proper length of bare fiber. Clean the fiber with appropriate wipes from a fiber optic cleaning kit. Cleave the fiber using the directions appropriate to the fiber cleaver being used. Place each fiber into the guides in the fusion splicing machine and clamp it in place. Running the splicer program: Choose the proper program for the fiber type being spliced. The splicer shows the fibers being spliced on a video screen. The fiber ends are inspected for proper cleaves, and bad ones are rejected for re-cleaving. The fibers are moved into position, prefused to remove any dirt on the fiber ends, and preheated for splicing. The fibers are aligned using the core alignment method used on that splicer. Then the fibers are fused by an automatic arc cycle that heats them in an electric arc and feeds the fibers together at a controlled rate. Ribbon fusion splicing: Each ribbon is stripped, cleaved, and spliced as a unit. Special tools are needed to strip the fiber ribbon, usually heating it first, then cleaving all fibers at once. Many tools place the ribbon in a carrier that supports and aligns it through stripping, cleaving, and splicing. Consult both cable and splicer manufacturers to ensure you have the proper directions.
Fusion splicing pigtails is another typical application for fiber optic splicing. In this method, a fiber optic patch cord is cut into two pigtails with connectors attached. The fibers are cleaved and welded together with a fusion splicer, which is considered the fastest and highest-quality method of fiber connector installation.
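A quick link-budget calculation shows why those insertion-loss figures matter. The sketch below (Python; the span length, fiber attenuation, splice count, and connector loss are typical assumed values, not figures from this article) totals the loss of a multi-reel singlemode span under fusion versus mechanical splicing.

SPAN_KM            = 40     # assumed outside-plant run
FIBER_LOSS_DB_KM   = 0.35   # typical singlemode attenuation at 1310 nm (assumed)
SPLICE_COUNT       = 9      # joints between cable reels (assumed)
FUSION_LOSS_DB     = 0.1    # max per splice, per the text above
MECHANICAL_LOSS_DB = 0.5
CONNECTORS         = 2      # one at each end
CONNECTOR_LOSS_DB  = 0.75   # typical mated-pair maximum (assumed)

def total_link_loss(per_splice_db: float) -> float:
    """Link loss = fiber attenuation + splice losses + connector losses."""
    return (SPAN_KM * FIBER_LOSS_DB_KM
            + SPLICE_COUNT * per_splice_db
            + CONNECTORS * CONNECTOR_LOSS_DB)

print(f"fusion spliced:       {total_link_loss(FUSION_LOSS_DB):.1f} dB")      # 16.4 dB
print(f"mechanically spliced: {total_link_loss(MECHANICAL_LOSS_DB):.1f} dB")  # 20.0 dB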
<urn:uuid:d00eaefe-f3aa-4a17-b938-41db7d915167>
CC-MAIN-2017-04
http://www.fs.com/blog/when-do-we-need-fiber-optic-splicing.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00052-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933579
801
3.078125
3
Energy efficiency is rapidly becoming a key factor for many modern high-performance computing (HPC) datacenters. It poses various challenges, which need to be addressed holistically and in an integrated manner, covering the HPC system environment (system hardware and system software), the hosting facility and infrastructure (cooling technologies, energy re-use, power supply chain, etc.), and applications (algorithms, performance metrics, etc.). Most of the management schemes present in current HPC datacenters do not allow data to be shared between the HPC system environment, hosting facility, and infrastructure. But it is important to collect and correlate data from all aspects of the datacenter in order to: better understand the interactions between different components of the datacenter; spot improvement possibilities; and assess any introduced improvements. There are currently no tools that support a complete collection and correlation of energy-efficiency-relevant data, allowing for a unified view of the energy consumption present in the datacenter. That is why a new energy measuring and evaluation toolset is being developed at the Leibniz Supercomputing Centre of the Bavarian Academy of Sciences (BAdW-LRZ), capable of monitoring and analysing the energy consumption of a supercomputing site in a holistic way, combining the HPC systems with data from the cooling and building infrastructure. The tool, named Power Data Aggregation Monitor (PowerDAM), allows the collection and evaluation of sensor data independently of the source systems and is capable of monitoring not only HPC systems but any other infrastructure that can be represented as a hierarchical tree. It monitors physical sensors as well as virtual sensors, which can represent different functional compositions of several physical sensors. PowerDAM provides a plug-in framework for defining the desired monitored entities, such as IT systems, building infrastructure, etc. Two plug-in interfaces are provided for each monitored entity: one for sensor data collection and one for collecting application-relevant data (e.g., utilized compute nodes, starting and ending timestamps of the application, etc.) from system resource management tools. PowerDAM is the underlying framework for energy-efficiency-related research at BAdW-LRZ. Evaluating and reporting Energy-to-Solution (EtS) is an important task for PowerDAM; EtS denotes the aggregated energy consumption of an application, consisting of the energy consumption of the utilized compute nodes and partial sub-system components (e.g., system networking and system cooling). Figure 1 presents the EtS report for an application executed on the CoolMUC MPP Linux cluster. The first part of the report (part I) shows the sensor measurements for all utilized components in the order of timestamp, sensor name, value, and unit. Figure 1: EtS Report for an application executed on CoolMUC MPP Linux Cluster Part II shows all approximations of source measurement data that were considered invalid (missing measurements, out-of-bounds data, etc.). Part III shows the aggregated energy consumption (EtS) of the executed application and provides information on the consumption percentages of computation, networking, and cooling. The ability to calculate the EtS of an application allows for further understanding and tuning of the application internally (via changes to algorithms, memory access patterns, etc.) as well as externally through hardware adaptation (e.g., static/dynamic voltage and frequency scaling).
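The EtS bookkeeping can be pictured with a small sketch (our own illustration in Python; PowerDAM's internals are not published in this article, and the networking and cooling share factors below are placeholder assumptions, not LRZ's measured values): integrate each node's sampled power over the job's runtime, then add the sub-system contributions.

def node_energy_kwh(samples):
    """Trapezoidal integration of (timestamp_s, watts) samples into kWh."""
    joules = sum((t1 - t0) * (p0 + p1) / 2
                 for (t0, p0), (t1, p1) in zip(samples, samples[1:]))
    return joules / 3.6e6

def energy_to_solution(per_node_samples, network_share=0.05, cooling_share=0.30):
    """EtS = compute energy of all utilized nodes plus sub-system shares.
    The share factors are illustrative placeholders."""
    compute_kwh = sum(node_energy_kwh(s) for s in per_node_samples)
    return compute_kwh * (1 + network_share + cooling_share)

job_samples = [  # two nodes, (seconds, watts), over a 10-minute run
    [(0, 250), (300, 310), (600, 295)],
    [(0, 240), (300, 305), (600, 290)],
]
print(f"EtS = {energy_to_solution(job_samples):.3f} kWh")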
PowerDAM provides various visualization options, such as: the power draw, utilization rate, and averaged CPU temperatures of utilized compute nodes; the correlation between power and load for these nodes; different EtS reports; and system power consumption for a given time frame (e.g., day, month, and year). Figure 2 illustrates one of these options – the EtS report (encompassing, in parallel to the EtS, the percentages for computation, infrastructure, cooling, and networking) for all applications executed by a given user. Figure 2: EtS Report for All Jobs Submitted by Given User PowerDAM's "node-map" view displays the dynamic behavior of compute nodes for a given sensor type. This view updates automatically after a configurable amount of time and uses a color mapping to classify the behavior of the compute nodes (Figure 3). Figure 3: Utilization Map of Compute Nodes for CoolMUC Linux Cluster. The color green illustrates the 96% to 100% utilization range. The color white illustrates the 0% and 90% to 95% utilization ranges. The color red illustrates the 1% to 89% utilization range. (not all compute nodes of the cluster are depicted) The "node-map" view can be essential for understanding the interconnection between different sensor types. For example, correlating the utilization rate (Figure 3) with the CPU temperature (Figure 4) allows the investigation of the interdependency between utilization rates and CPU temperatures of given compute nodes (nodes lxa130 and lxa17). Figure 4: Temperature Map of Compute Nodes for CoolMUC Linux Cluster (2×8-core AMD CPUs per compute node) (not all compute nodes of the cluster are depicted) Further development will allow PowerDAM to: classify applications according to power draw, runtime, performance, and energy consumption; provide data necessary for the enhancement of resource management systems; and report on datacenter key performance indicators (KPIs) such as PUE, ERE, DCiE, WUE, etc. More detailed information on PowerDAM is available in the Proceedings of the First International Conference on Information and Communication Technologies for Sustainability under "Towards a Unified Energy Efficiency Evaluation Toolset: An Approach and Its Implementation at Leibniz Supercomputing Centre (LRZ)" and is indexed under DOI 10.3929/ethz-a-007337628. The development of PowerDAM was made possible by the PRACE Second Implementation Phase project PRACE-2IP in the Work Package "Prototyping", which has received funding from the European Community's Seventh Framework Program (FP7/2007-2013) under grant agreement no. RI-283493, and within the SIMOPEK project, which has received funding from the German Federal Ministry of Education and Research (BMBF) under grant agreement no. 01IH13007A. The work was achieved using the PRACE Research Infrastructure resources at BAdW-LRZ with support of the State of Bavaria, Germany. The authors would like to thank Jeanette Wilde for her valuable comments and support. Hayk Shoukourian(1,2); Torsten Wilde(1); Axel Auweter(1); Arndt Bode(1,2) 1Leibniz Supercomputing Centre of the Bavarian Academy of Sciences and Humanities (BAdW-LRZ) 2Technische Universität München (TUM), Fakultät für Informatik
<urn:uuid:397bf445-0f58-451c-99cc-e1d998ea3e16>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/10/29/path-energy-efficient-hpc-datacenters/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00566-ip-10-171-10-70.ec2.internal.warc.gz
en
0.87984
1,435
2.5625
3
A very common question I am asked is which is more important, the speed of the processor or the amount of memory. This is a difficult question to answer, and it helps to have some understanding of what each component does and how they relate to each other. This article will teach you the fundamental tasks of both memory and the CPU and how they relate to each other. Hopefully, at the end of this article, you will be able to answer this question yourself. Central Processing Unit (CPU) The CPU is the brain of the computer. Its job is to take information from the various input devices, the operating system, and software and execute the instructions that it has been given. A CPU executes a certain number of instructions within a grouping called a cycle. The speed of the CPU is measured in how many cycles it can perform in a given second. A speed of one cycle per second is called a hertz. Therefore a CPU that runs at 1 million cycles per second has a speed of one megahertz, and a CPU that runs at 1 billion cycles per second has a speed of one gigahertz. This is illustrated in the table below:

|Hertz Term||Cycles Per Second|
|Hertz||1|
|Megahertz||1,000,000|
|Gigahertz||1,000,000,000|

Therefore, to have a very high CPU speed is a good thing, because more instructions per second get executed. On the other hand, with most computers coming by default with around 2 GHz, you will start to see diminishing returns in the visible speed differences between one processor speed and the next. Computers with at least 2 GHz should be more than fine these days for most applications, and you will probably not see much of a difference by increasing the speed of your processor when using standard applications. Games, on the other hand, can be more CPU intensive, and if you are going to be using your computer predominantly as a gaming machine, then it could not hurt to spend a few extra dollars on the CPU. You must remember, though, to save some money for your memory, as that is just as important to having a fast machine. Just as important as the speed of the CPU is the amount of memory you have in your computer. Memory is the temporary storage place for your computer's information. When a computer is manipulating some sort of information, it is placed in memory to be retrieved or manipulated later. If all your usable memory gets filled up, the computer will start storing temporary data onto your hard drive in something called a swap file. When the CPU is ready to use that information, it will read it back from your hard drive and place it into memory, where it can be used. As you can see, when a swap file is used and the CPU needs to access the data, retrieval becomes a two-step process: the data is read from the hard drive and then placed into memory, instead of the one-step process of reading it directly from memory. Even more important, reading data from memory is many, many times faster than reading that same data off the hard drive. With this in mind, you can see why it is important to have as much memory as you can, so that the swap file on your hard drive is never used and all data is stored and read directly from memory. With all this information, we are still left with the burning question of "Which is more important, memory or CPU speed?" and the answer is neither and both. Got you there, didn't I? The real answer depends on how much you have to spend on your new computer and what the base system is. If the base system is at least 2 GHz, then I would apply the money towards memory; otherwise, I would increase it to over 2 GHz.
If you have money left over, I would spend the rest of your budget to increase your memory to 4 GB or as close to that as you can get. These days, you really should have a bare minimum of 2 GB of memory, with 4 GB being preferred.
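If you want to feel the one-step versus two-step difference yourself, here is a rough experiment (a sketch only; absolute timings vary enormously by machine, and the operating system's file cache can mask the gap on a warm read):

import os, tempfile, time

data = os.urandom(64 * 1024 * 1024)         # a 64 MB working set

start = time.perf_counter()
in_ram = bytes(data)                        # one step: copy straight from memory
ram_s = time.perf_counter() - start

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data)                           # stand-in for data pushed to a swap file
    path = f.name

start = time.perf_counter()
with open(path, "rb") as f:
    from_disk = f.read()                    # two steps: hard drive first, then memory
disk_s = time.perf_counter() - start
os.unlink(path)

print(f"RAM copy: {ram_s * 1000:.1f} ms, disk read: {disk_s * 1000:.1f} ms")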
<urn:uuid:7a575386-0152-402a-a878-f234d1582858>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/tutorials/processor-speed-versus-memory/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00474-ip-10-171-10-70.ec2.internal.warc.gz
en
0.965761
1,176
4
4
SAS vs. SATA There has been a perennial argument of SAS versus SATA for enterprise storage. Some people say it's OK to use SATA for enterprise storage and some say that you need to use SAS. In this article I'm going to address two aspects of the SAS vs. SATA argument. The first is about the drives themselves, SATA drives and SAS drives. The second is about data integrity in regard to SATA channels and SAS channels (channels are the connections from the drives to the Host Bus Adapter - HBA). SAS vs. SATA: Drives - Hard Error Rate This subject has been written about several times, including in one of Henry Newman's recent articles. The hard error rate is defined as the number of bits that are read before the probability of hitting a read error reaches 100% (i.e., the drive can't read a sector). When a drive encounters a read error, it simply means that any data that was on the sector cannot be read. Most hardware will go through several retries to read the sector, but after a certain number of retries and/or a certain period of time it will give up, the sector is reported as unreadable, and the drive is treated as failed. Below is a table from Henry's previously mentioned article that lists the hard error rates for various drive types and how much data, in petabytes, would have to be read before encountering an unreadable sector.

Table 1: Hard error rate for various storage media
|Drive Type||Hard Error Rate (bits read per error)||Data Read Before Error|
|SATA Consumer||10E14||~0.01 PB (about 10 TB)|
|SATA/SAS Nearline Enterprise||10E15||~0.11 PB (about 111 TB)|
|Enterprise SAS/FC||10E16||~1.1 PB|

The first row in the table, drives listed as "SATA Consumer," covers drives that typically only have a SATA interface (no versions with a SAS interface). Here is an example of a SATA consumer drive spec sheet. Notice that the hard error rate, referred to as "Nonrecoverable Read Errors per Bits Read, Max" in the linked document, is 10E14 as shown in the table above. The second class of drives, labeled "SATA/SAS Nearline Enterprise" in the table, can have a SATA or SAS interface (the same drive with either interface). For example, Seagate has two enterprise drives, where the first has a SAS 12 Gbps interface and the second has a SATA 6 Gbps interface. The drives are otherwise the same, and both have the same hard error rate, 10E15. The third class of drives, listed in the third row of the table as "Enterprise SAS/FC," typically only has a SAS interface. For example, Seagate has a 10.5K drive with a SAS interface (no SATA interface). The hard error rate for these drives is 10E16. What the table tells us is that Consumer SATA drives are 100 times more likely than Enterprise SAS drives to encounter a read error. If you read 10TB of data from Consumer SATA drives, the probability of encountering a read error approaches 100% (you are virtually guaranteed to get an unreadable sector, resulting in a failed drive). SATA/SAS Nearline Enterprise drives improve the hard error rate by a factor of 10, but they are still 10 times more likely to encounter a hard read error (an inability to read a sector) than an Enterprise SAS drive. This is equivalent to reading roughly 111 TB of data (0.11 PB). With Enterprise SAS drives, quite a bit more data can be read before encountering a read error: about 1.1 PB before the probability of hitting an unreadable sector (hard error) approaches 100%. At the point where you encounter a hard error, the controller assumes the drive has failed. Assuming the drive was part of a RAID group, the controller will start a RAID rebuild using a spare drive.
Classic RAID groups have to read all the disks that remain in the RAID group to rebuild the failed drive. This means they have to read 100% of the remaining drives even if there is no data on portions of the drives. For instance, if we have a RAID-6 group with 10 total drives and you lose a drive, then 100% of the nine remaining drives have to be read to rebuild the failed drive and regain the RAID-6 protection. This is true even if the file system using the RAID-6 group has no data in it. For example, if we are using ten 4TB Consumer SATA drives in a RAID-6 group, there is a total of 40TB of data. Given the information in the previous table, when about 10TB of data is read there is almost a 100% chance of encountering a hard disk error. The drive on which the error occurs is then failed, causing a rebuild. In the case of the ten-disk RAID-6 group, this means that there are now nine drives, but we can only lose one more drive before losing data protection (recall that RAID-6 allows you to lose two drives before the next lost drive results in unrecoverable data loss). In this scenario, I'm going to assume there is a hot-spare drive in the RAID group that can be used for the rebuild. In a classic RAID-6, all of the remaining nine drives will have to be read (a total of 36TB of data) for the rebuild. The problem is that during the rebuild the probability of hitting another hard error reaches 100% when just 10TB of data is read, while a total of 36TB needs to be read for the rebuild. When this happens there is a double drive failure and the RAID group is down to eight drives.
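The article's numbers follow from standard bit-error-rate arithmetic. The sketch below (Python; it uses only the rates quoted above and the 36TB rebuild from the example) computes the chance of at least one unrecoverable read error during that rebuild for each drive class:

import math

def p_unreadable(bytes_read: float, bits_per_error: float) -> float:
    """P(at least one hard read error) when reading 'bytes_read' bytes,
    treating each bit as failing independently with p = 1/bits_per_error.
    Computed via log1p/expm1 to stay numerically accurate for tiny p."""
    bits = bytes_read * 8
    return -math.expm1(bits * math.log1p(-1.0 / bits_per_error))

REBUILD_BYTES = 36e12  # nine remaining 4TB drives, as in the example above
for label, rate in [("Consumer SATA  (10E14)", 1e14),
                    ("Nearline       (10E15)", 1e15),
                    ("Enterprise SAS (10E16)", 1e16)]:
    print(f"{label}: P(error during 36TB rebuild) = "
          f"{p_unreadable(REBUILD_BYTES, rate):.3f}")

# Prints roughly 0.944, 0.250, and 0.028: a Consumer SATA rebuild is very
# likely to hit the second failure described above; an Enterprise SAS
# rebuild rarely is.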
<urn:uuid:9813c714-5b40-4f6c-b73b-5effcf866729>
CC-MAIN-2017-04
http://www.enterprisestorageforum.com/storage-technology/sas-vs.-sata-1.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00042-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941635
1,148
2.578125
3
A Brief Introduction to Game Theory - Charlotte DeKoning Notacon 11 (Hacking Illustrated Series InfoSec Tutorial Videos) From game shows to warfare, strategic decision-making surrounds us every day, whether you realize it or not. Game theory is a branch of economics, mathematics, and psychology that deals with these types of decisions. This talk will cover the basics of game theory – including normal form games, matrix notation, and common game forms – as well as applications and examples. I have had a multitude of careers in my relatively short life. From artist to CAD engineer to dog trainer to bartender to data scientist to college professor. Currently I work as a data analyst at a private economic consulting firm in Cleveland. I dropped out of art school to pursue degrees in accounting, business, liberal arts, and economics.
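For a feel of what "normal form games and matrix notation" means in practice, here is a small sketch (the standard prisoner's dilemma, our own example rather than material from the talk): the game is just a payoff matrix, and pure-strategy Nash equilibria can be found by brute force.

COOPERATE, DEFECT = 0, 1
# PAYOFFS[row][col] = (row player's payoff, column player's payoff)
PAYOFFS = [[(-1, -1), (-3,  0)],
           [( 0, -3), (-2, -2)]]

def pure_nash_equilibria(p):
    """A cell is an equilibrium when neither player gains by deviating alone."""
    equilibria = []
    for r in (COOPERATE, DEFECT):
        for c in (COOPERATE, DEFECT):
            row_ok = all(p[r][c][0] >= p[alt][c][0] for alt in (COOPERATE, DEFECT))
            col_ok = all(p[r][c][1] >= p[r][alt][1] for alt in (COOPERATE, DEFECT))
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(PAYOFFS))  # [(1, 1)]: mutual defection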
<urn:uuid:93af4e9e-10f2-4077-9ce2-2a0c2608fb01>
CC-MAIN-2017-04
http://www.irongeek.com/i.php?page=videos/notacon11/a-brief-introduction-to-game-theory-charlotte-dekoning-beyond-using-the-buddy-system-holly-moyseenko-kris-perch
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00162-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92923
185
2.625
3
Enable or Disable Code at Run Time April 5, 2016 Ted Holt If you program in RPG or C, you should know how to use compiler directives to enable or disable source code at compile time. IBM i programmers should also know how to enable or disable executable code at run time, and why they would want to do such a thing. Let's talk about the why first. One good reason to enable or disable code is for troubleshooting. You might find it valuable to include commands that produce diagnostic output, such as a program dump or a printed job log. Such code can help with elusive problems and need only be enabled until a problem is resolved. Another reason to enable or disable code is to give an end user a way to control execution. For example, he may only want certain programs to run during the first and last months of the year. If you make a program check a setting at run time, and give the user a way to control the setting, you relieve the user of the need to make a run-time selection. If the program runs from a job scheduler, then such code is the only way for the user to control run-time behavior. Now, the how. One easy way to enable or disable code is by checking for the existence of an object. Any type of object can serve this purpose. I prefer data areas because they're easy to deal with in both CL and RPG. In the following example, logical variable &DbgIsOn (debug is on) is controlled by the presence or absence of a data area named DBG.

dcl &DbgDta *char 96
dcl &DbgDtaSiz *uint 2
dcl &DbgIsOn *lgl value('1')

chgvar &DbgDtaSiz value(%size(&DbgDta))
rtvdtaara dtaara(DBG (1 &DbgDtaSiz)) rtnvar(&DbgDta)
monmsg cpf1015 exec(do) /* not found */
   chgvar &DbgIsOn '0'
enddo

. . . more stuff . . .

if (&DbgIsOn) do
   dmpclpgm
   dspjoblog output(*print)
enddo

If the data area exists in any library that is in the library list, the program sets &DbgIsOn to true, causing the program to produce a dump and a job log. The only problem with this method is that other programs may also test for the same data area. This may or may not cause problems. At best, it could slow down processing and produce a bunch of unwanted spooled files. For this reason, I like to add another test, which looks at the value within the data area. Consider a program, which I'll call AR100.

dcl &DbgDta *char 96
dcl &DbgDtaSiz *uint 2
dcl &DbgIsOn *lgl value('1')

chgvar &DbgDtaSiz value(%size(&DbgDta))
rtvdtaara dtaara(DBG (1 &DbgDtaSiz)) rtnvar(&DbgDta)
monmsg cpf1015 exec(do) /* not found */
   chgvar &DbgIsOn '0'
enddo

if (&DbgIsOn) do
   chgvar &DbgIsOn value(%scan(AR100 &DbgDta) *gt 0)
enddo

. . . more stuff . . .

if (&DbgIsOn) do
   dmpclpgm
   dspjoblog output(*print)
enddo

This program tests for the existence of a data area named DBG, just as the previous one did. If the data area exists, the program also looks for the value "AR100" in the data area. I have seen programmers add code to aid with debugging and/or problem determination, then delete that code once a problem is found. That isn't wrong, but many times I have found it valuable to retain such code and activate it as needed.
<urn:uuid:2c53119a-3b43-41f8-a660-5815d06098cd>
CC-MAIN-2017-04
https://www.itjungle.com/2016/04/05/fhg040516-story02/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00372-ip-10-171-10-70.ec2.internal.warc.gz
en
0.825777
861
3.0625
3
The Connected Urban Development Conference included breakout sessions in which attendees learned how information and communications technology (ICT) and broadband technology can help cities address issues regarding global climate change. Attendees learned how cities are using innovative solutions to increase the efficiency of traffic, manage public transportation systems better, create sustainable real estate models and new work environments, establish new models for public services, and enable city residents to self-manage their carbon footprints. Connected and Sustainable Mobility The Connected and Sustainable Mobility workshop examined approaches cities can take to provide citizens with more choices in urban mobility, making use of the Internet and ICT solutions that address traffic and parking issues; offer convenient public transportation options; provide personal travel assistants; and create intelligent, multimodal traffic management systems. Connected and Sustainable Mobility I: CUD Projects, Challenges, and Lessons Learned This session introduced Connected Urban Development (CUD) mobility solutions and provided updates on CUD city projects regarding smart-road pricing, connected bus, and intelligent traffic management. Cities shared their challenges, achievements, and lessons learned through demos and progress updates on CUD projects, feedback on CUD projects, and by providing next steps and directions on additional CUD mobility solutions development. Connected and Sustainable Mobility II: The Future of Urban Mobility and the Role of Broadband This session focused on the future, exploring the urban mobility revolution and the role of broadband in personalized travel-information service and intelligent transportation systems. Discussions on what sustainable mobility will look like in the future and how broadband can enable and accelerate this process also took place. Connected and Sustainable Work The Connected and Sustainable Work breakout session explored the main drivers that are revolutionizing the way people work, the characteristics of sustainable work, and current solutions being implemented. Connected and Sustainable Work I: CUD Solutions, Business Models, and Replication Blueprints This session introduced CUD solutions and examined business models and replication blueprints that can serve as a basis for CUD projects. Connected and Sustainable Work II: The Future of Work In this session, participants discussed the long-term vision for Connected and Sustainable Work and how this vision is designed to evolve over the years. Sustainable Energy in Connected Urban Environments In this session, attendees learned how ICT can help improve the energy efficiency of public and commercial buildings—from smart grids, local and distributed renewable energy sources, to advanced digital energy management and control systems. Sustainable Energy in Connected Urban Environments I: CUD Projects, Challenges, and Lessons Learned This session introduced CUD projects regarding energy-efficient urban living; cities shared their challenges, achievements, and lessons learned. Sustainable Energy in Connected Urban Environments II: The Future This session focused on the future, exploring urban living as buildings and homes continue to evolve in tandem with smart grids, advanced energy management solutions, distributed energy-generation systems, and community microgrids.
Connected and Sustainable Urban Design Connected and Sustainable Urban Design I: CUD Projects, Challenges, and Lessons Learned In this session, CUD cities shared their challenges, achievements, and lessons learned with connected and sustainable urban design and development programs. Connected and Sustainable Urban Design II: The Future This session discussed how ICT can be applied in three major ways in the sustainable design and development of cities: urban ICT infrastructures, digital tools, and intelligent communities. The Greening of ICT and Carbon Impact from Next-Generation ICT Infrastructures This session examined approaches and solutions that can help reduce the IT industry’s carbon footprint, including green datacenters, green PCs, and next-generation fiber networks. ICT for Community Awareness During this session, participants examined how ICT can be applied to create community awareness of how urban carbon emissions can be dynamically measured and how citizens can change their behaviors to help reduce their carbon footprints. Strategic Role of Carbon Markets: Implications and Responses Principles and Applications This session introduced principles and applications of global carbon markets and institutions, and examined business models and local institutional structures as a basis for CUD solutions. Connecting Cities to Carbon Markets: The Future In this session, participants engaged in dialogue about the long-term vision for connecting cities to carbon markets and how this will likely evolve over time.
<urn:uuid:7edb6d4f-9c26-4cd4-9f84-83c33b8147e7>
CC-MAIN-2017-04
http://www.cisco.com/c/en/us/about/consulting-thought-leadership/what-we-do/industry-practices/public-sector/our-practice/urban-innovation/connected-urban-development/cud-globalconference-amsterdam-september-2008/breakout-sessions.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00098-ip-10-171-10-70.ec2.internal.warc.gz
en
0.892176
894
2.5625
3
Nearly 43 percent of kids have been bullied online. One in four students has experienced cyberbullying more than once. Sixty-eight percent of teens agree that cyberbullying is a serious problem. Ninety percent of teens who have witnessed social media bullying say they have ignored it. Only one in 10 victims of bullying will inform a parent or trusted adult of their abuse. These disturbing statistics from DoSomething.org tell us two things: cyberbullying is widespread, and victims rarely report it. Using behavior management software to monitor students on the Internet while they are at school can help address both problems. What is online monitoring? Monitoring is an Internet safety feature in behavior management software that protects students from online risks. Behavior management and monitoring software is fundamentally different from content-filtering software. Filters allow or deny access to websites. Behavior management software uses categories, such as lists of words or phrases, to capture and identify inappropriate activity on PCs, laptops, and other digital devices. Once activity is captured, an automatic screenshot or video recording is logged, allowing school teachers and administrators to identify the context of any potentially concerning activity, such as the triggering word or phrase, the logged-in user, or the IP address. When students use certain keywords, the software alerts the teacher. This can identify cyberbullying and present a way to confront the situation. As new slang terms trend, keyword lists can be updated on a regular basis. How is monitoring different from blocking? Safeguarding children in educational settings is critical. Filtering and blocking Internet content is no longer sufficient when it comes to preventing cyberbullying in schools. Simply blocking Internet access not only closes off the opportunity to gain access to valuable learning resources, but it also removes the ability to identify vulnerable students. Monitoring online behavior, including social media, puts behavior management into the student's hands. It also gives the teacher a window into what is going on in a student's cyberworld. Helping students report cyberbullying Impero Education Pro classroom management software provides students with a confidential way of reporting any questionable online activities to authorities through its Confide function. Students can find comfort knowing that their submissions are anonymous. They can safely expose a predator without fear of further harassment or "ratting out" their peers. This helps students have a voice when they previously felt like they had none. The future of Education Pro At present in the US, Impero Education Pro software provides schools with the ability to create custom keyword lists for monitoring students, which is the first step in preventing issues with violence, suicide, cyberbullying, eating disorders, and child abuse, and in simply keeping kids safe online. Soon, however, Impero will be rolling out a keyword library specifically geared toward cyberbullying prevention (among other libraries for the above-mentioned issues). This library will be free of charge for schools that currently have the Education Pro product and for new purchases of the software. For a comprehensive explanation of behavior management software, Internet safety monitoring, and Impero Education Pro, you can download a whitepaper here.
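At its core, the keyword-capture loop described above is simple to picture. The sketch below is a generic illustration (hypothetical terms and field names, not Impero's implementation or its keyword library): captured text is scanned against a category's term list, and any hit is logged with enough context for staff to review.

from datetime import datetime

BULLYING_TERMS = {"loser", "nobody likes you"}  # illustrative list, updated as slang trends

def check_activity(user: str, captured_text: str, alert_log: list) -> None:
    """Scan one capture and log an alert with context if any term matches."""
    lowered = captured_text.lower()
    matched = [term for term in BULLYING_TERMS if term in lowered]
    if matched:
        alert_log.append({
            "when":    datetime.now().isoformat(timespec="seconds"),
            "user":    user,
            "matched": matched,
            "context": captured_text[:120],  # stands in for the screenshot/recording
        })

alerts: list = []
check_activity("student42", "Seriously, nobody likes you", alerts)
print(alerts)  # one alert, ready for a teacher to review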
Find out more about how Impero education network management software can help your school prevent and deal with bullying by requesting free demos and trials on our website. Talk to our team of education experts by calling 877.883.4370, or by emailing Impero now to arrange a call back.
<urn:uuid:738f2863-e029-4264-8738-713713495206>
CC-MAIN-2017-04
https://www.imperosoftware.com/addressing-and-preventing-cyberbullying-by-monitoring-students-online/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00492-ip-10-171-10-70.ec2.internal.warc.gz
en
0.928927
694
3.171875
3
For many years, the diesel backup generator has been a symbol of reliability for data centers, providing the emergency power to keep servers online during utility power outages. But the growing focus on using clean energy to power large data centers is prompting some of the industry's largest players to ditch their generators, along with their diesel fuel emissions. Microsoft is the latest company to announce its intention to reduce its use of diesel generators. The move is part of a broader initiative to make Microsoft's server farms more sustainable and less reliant on the utility grid. "We are currently exploring alternative backup energy options that would allow us to provide emergency power without the need for diesel generators, which in some cases will mean transitioning to cleaner-burning natural gas and in other cases, eliminating the need for back-up generation altogether," Microsoft Utility Architect Brian Janous wrote in a blog post last week. The reference to natural gas suggests that Microsoft is preparing to implement fuel cells to replace its generators. That could be good news for Bloom Energy, whose fuel cells will replace generators and UPS units at a new eBay data center in Utah. "Bloom boxes" are also being deployed to provide supplementary power at Apple's iDataCenter in North Carolina. Committing to Renewable Power Generators have been associated with several headaches for Microsoft's data center unit in recent years, including an Azure cloud outage in Europe (when multiple generators didn't start during a utility outage) and a public controversy about whether the diesel emissions from Microsoft's generators in Quincy, Washington, could cause health problems for local residents (the state EPA says no). But Microsoft's motivation to find alternatives to diesel generators is rooted in its commitment to become less dependent on the utility grid, and to use renewable energy to power its servers wherever possible. The company says its vision for "data plants" will break new ground in integrating electricity and computing, bringing together data centers and renewable power generation. Microsoft has been exploring the potential for a waste-powered data center that would be built on the site of a water treatment plant or landfill. In his blog post, Janous indicated that Microsoft is evaluating this design at a biomass project in Europe. The company is also looking at a "photovoltaic solar project in the Southwestern U.S." Microsoft wouldn't provide additional details, but it's worth noting that the company has previously discussed installing a photovoltaic solar array at its data center in San Antonio. Designing Around Generators The most intriguing tidbit in this week's disclosures was Microsoft's plan to reduce its need for generators. Diesel engine exhaust is a regulated pollutant, and can be toxic in high concentrations. "Given the unreliability of the electric grid and the need for continuous availability of cloud services, Microsoft maintains diesel generator backup at all of our data centers, as is typical across the industry," Janous wrote. "Our policy is to use these backup generators only when necessary to help maintain grid stability or in extraordinary repair, and maintenance situations that require us to take our data centers off the power grid. These generators are inefficient and costly to operate. From both an environmental and a cost standpoint, it makes no sense to run our generators more than we absolutely must." So how do you eliminate generators?
In a traditional configuration, data centers use the utility grid for primary power, with UPS units and backup generators providing emergency power in the event of grid outages. Bloom has advanced a power infrastructure design in which Bloom Energy Servers powered by natural gas serve as the primary power source, with the utility grid used as a backup service. Such a configuration could alter the historic economics of using fuel cells in data centers. One of the primary barriers to adoption has been the cost of the fuel cells. But if using fuel cells allows data centers to eliminate expenses for UPS units and generators, the cost equation looks very different. Microsoft is also considering “long term purchases from larger grid-connected installations that would displace some portion of our grid purchases,” Janous wrote. Google has embraced a similar strategy, using power purchasing agreements to add more than 200 megawatts of wind power to the local utility grids that support its data centers. These bulk purchases can help guarantee long-term revenue for new renewable power projects, stimulating additional generation of renewable energy. Towards that end, Microsoft is taking steps to position itself to make bulk power deals. “We have recently signed on as an advisory board member with Altenex, an operator of a network that enables member companies to more efficiently engage with developers of renewable energy projects,” Janous said. “We expect this engagement with Altenex to improve our ability to identify and evaluate cost-effective clean energy projects.”
<urn:uuid:557c6524-5e5f-418c-a7ce-f33672ad73da>
CC-MAIN-2017-04
http://www.datacenterknowledge.com/archives/2012/09/17/microsoft-were-eliminating-backup-generators/?utm-source=feedburner&utm-medium=feed&utm-campaign=Feed%3A+DataCenterKnowledge+(Data+Center+Knowledge)&utm-content=Google+Reader
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00244-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940873
996
2.609375
3
Almost every year, the same myopic visionaries pop out of the woodwork to announce that processors don't need to be faster. They're probably of the same genetic ilk as those who thought rail transportation exceeding the 10-mph theoretical "human speed limit" would threaten our health. Some of these confused souls attempt to draw parallels between processor speeds and automobile speeds, saying that just as we've reached a point at which car speeds don't matter, neither does computer performance. What really matters, they say, is bandwidth. Bandwidth certainly is important, but this is where the parallel falls apart. Cars might indeed operate at jet speeds by now if safety and things like traffic control systems and fuel efficiencies weren't factors. Highways might have hundreds of 100-foot-wide lanes, too. The physical limitations of cars have little to do with the limitations on processor speeds. Intel says its new architecture can take its chips up to the 10GHz range in the near future, keeping Moore's Law alive for at least five more years. CPU performance is only one part of realized computer speed, and processor cycles are still being consumed at a good clip, especially in the entertainment world, with its games, music ripping and video creation. But there is also much work to be done in medical imaging, architectural and CAD applications and for software to do more for us than it already does. These future technologies will sap today's powerhouse CPUs as if they were the bargain basement has-beens that they'll be in five years. As it stands, there's little need for most people to own a 2GHz P4 computer, especially if they're just browsing the Web and doing e-mail. Years ago, many pundits pondered whether the 486 processor would be the fastest anyone ever needed, too. But those visionaries failed to consider the two most truthful idioms of the modern age: The only constant is change, and nature abhors a vacuum. Those doing e-mail and browsing will want more. They always do, and they also tend to hold on to their computers for several years. I've dealt with those who have 233MHz PIIs and don't think they need any more. It's not a pretty sight. It's disgusting, as if you were seeing an old VW Vanagon struggling up a hill at 25 mph while spewing noxious fumes. Cough. Give me more speed, and I'll figure out what to do with it.
<urn:uuid:adab38fc-d433-4b60-b97d-38d7569fa721>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Desktops-and-Notebooks/The-Need-for-Speed-Is-Still-Prevalent
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00152-ip-10-171-10-70.ec2.internal.warc.gz
en
0.965266
526
2.515625
3
The concept of using underground bunkers for protection is far from new. Underground facilities have been used as bomb shelters, as fallout shelters and even to store fissionable material. Why not apply the same underground protection to data centers? Building a data center below ground offers a number of advantages, from the obvious savings in cooling costs to protection from natural disasters and outside elements. Although underground data centers and data shelters are somewhat rare, they do exist. Many are owned by private colocation companies, although there are other examples scattered around the country and operated by companies that want to offer better data security and protection. Three Underground Examples 1. The largest underground data center in the U.S. is in Butler County, Penn., and is owned by Iron Mountain. The data shelter is 220 feet below ground in a limestone cave that was formerly a mine. The facility is highly secure, with 24-hour security, closed-circuit television, X-ray scanners and magnetometers. The data bunker can handle 7 million gigabytes of data and is primarily used for secure data backup, such as emails, corporate files and electronic medical records. The shelter also has redundant and failover systems as backup for aboveground facilities. And Iron Mountain's facility has access to an underground lake in order to facilitate cooling, so the facility maintains a constant temperature of 52 degrees Fahrenheit. 2. Cavern Technologies maintains a facility under 125 feet of limestone in a former mine near Kansas City. The facility has more than 300,000 square feet of operating space and handles data for healthcare facilities, insurance companies and universities. It's secured by guards, biometric scanners and smart cards. The data center is air-cooled with centralized climate controls underground. 3. InfoBunker has converted a former military communications base located underground outside Des Moines. The 65,000-square-foot facility is located 50 feet underground and hosts data for telecom companies, insurance companies, hospitals, financial services, e-commerce and others, as well as two backup data centers. Considerations for Going Underground Most underground data centers take advantage of existing underground facilities, which saves the cost of excavation. No matter what type of underground facility you choose to host operations in, there are a number of considerations before going underground: Choosing the right location – Not all underground locations can be adapted as a data center. Older mines from the 1960s or earlier, for example, may not be suitable. Newer excavations have more space. In limestone facilities, you also have to be sure the limestone itself is preserved at the proper thickness, and the space has to be big enough to handle equipment with 25-foot to 30-foot stone columns. Maintaining structural integrity underground can get in the way of neatly placing cable and equipment. Cooling – The most obvious advantage is cooling. Chiller operations are the biggest expense for most data centers. Maintaining a consistent humidity and operating temperature requires high-powered air-conditioning systems that use a lot of energy. And when the weather changes or there is a heat wave, the costs increase. With an underground facility, you have natural cooling built in, and caves are not subject to external weather conditions. 
Of course, once you load a lot of heat-generating computer equipment in an enclosed underground space, proper ventilation is required for the equipment and the comfort of the staff. Less construction time and cost – One of the advantages of building an underground facility is the shorter time to market. Even a large data center can be deployed quickly if there is sufficient room underground. Construction costs are substantially lower because there is no need for a concrete shell, and in areas such as the Midwest or the Atlantic coast, there is no need to disaster-proof the site or ensure it can withstand tornadoes or hurricane winds, which can cost an additional $100 per square foot. Connectivity – One of the biggest costs of going underground is laying fiber to the location. Most underground data centers are in remote locations, so you may have to lay a substantial amount of cable for Internet access. Exterior equipment – You may not be able to install all the hardware underground. Generators, redundant power systems and other mechanical and electrical equipment may need to be installed outside or against an exterior wall. These systems will need adequate protection in order to make sure they don't become a point of failure because they are more exposed. You also have to be concerned about other issues such as fire safety—what effect would water have on mechanical and electrical systems if sprinklers are activated? Staff considerations – There won't be any natural light underground, which is really no different from aboveground data centers. However, you might consider using full-spectrum light bulbs and other tactics in order to make working underground more comfortable. A bigger issue could be facility access. Most mines and underground locations can be difficult to access, and there may be parking limitations. In addition to the cooling benefits, going underground also cuts out the threat of radio interference, microwaves and even electromagnetic pulse weapons. Physical security becomes less of a concern, because there is typically only one entry point, so it's easier to control access. An underground data center may not be everyone's idea of a perfect data facility, but there are a number of advantages, and you don't have to dig very deep to find them.
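To put the $100-per-square-foot figure in perspective, here is a back-of-the-envelope calculation, illustrative only, that combines two numbers quoted above (InfoBunker's 65,000 square feet and the per-square-foot hardening cost) to estimate what an underground build might avoid spending on disaster-proofing.

```python
# Rough illustration using only figures quoted in the article:
# a 65,000 sq ft facility and ~$100/sq ft of avoided disaster-proofing.
facility_sq_ft = 65_000
hardening_cost_per_sq_ft = 100  # dollars, per the article

avoided_cost = facility_sq_ft * hardening_cost_per_sq_ft
print(f"avoided hardening cost: ${avoided_cost:,}")  # -> $6,500,000
```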
<urn:uuid:54769502-7840-4d4e-aafc-4b910d86e01b>
CC-MAIN-2017-04
http://www.ingrammicroadvisor.com/data-center/the-underground-data-center-should-it-be-considered
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00152-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950612
1,099
2.859375
3
LONGUEUIL, QUEBEC--(Marketwire - March 21, 2013) - The Planck Space Telescope has produced by far the best map ever made of the most ancient light in the Universe, showing that it is slightly older than previously thought, expanding more slowly and that there is more matter than known before. Planck includes contributions from the Canadian Space Agency (CSA). The CSA funds two Canadian research teams that are part of the Planck science collaboration, and who helped develop both of Planck's complementary science instruments, the High Frequency Instrument (HFI) and the Low Frequency Instrument (LFI). Professors J. Richard Bond of the University of Toronto (Director of Cosmology and Gravity at the Canadian Institute for Advanced Research) and Douglas Scott of the University of British Columbia lead the Canadian Planck team, which includes members from the University of Alberta, Université Laval and McGill University. Canadian astronomers are members of the international team unveiling Planck's results in Paris today. Led by the European Space Agency, the Planck Space Telescope has been surveying the sky since its launch in 2009. The telescope's incredible accuracy allows it to pinpoint faint, minute patterns—differences in light and temperature that correspond to slightly different densities in the matter left over from the Big Bang. The data being released today are from the first 15 months of the mission, and show a map of the Universe when it was just 380,000 years old. Planck's data confirm and refine previous models of how astronomers believe the Universe originated and evolved, but with intriguing new details. The Planck team has calculated that the Universe is 13.82 billion years old—100 million years older than earlier estimates. Planck has revealed that the Universe is expanding significantly slower than the current standard used by astronomers (known as Hubble's Constant). The space telescope has also allowed cosmologists to confirm the Universe's composition more accurately than ever before: normal matter, the stuff of stars and galaxies like our own Milky Way, makes up just 4.9% of the Universe. Dark matter (an invisible substance that can only be inferred through the effects its gravity causes) accounts for 26.8%. Dark energy, a mysterious force that behaves the opposite way to gravity, pushing and expanding our Universe, makes up 68.3% of the Universe—slightly less than previously thought. "We now have a precise recipe for our Universe: how much dark and normal matter it is made of; how fast it is expanding; how lumpy it is and how that lumpiness varies with scale; and how the remnant radiation from the Big Bang is scattered," said University of British Columbia Professor Douglas Scott. "It is astonishing that the entire Universe seems to be describable by a model using just these 6 quantities. Now, Planck has told us the values of those numbers with even higher accuracy." Planck's precision has also given astrophysicists a number of new puzzles to solve. "For more than three decades, I have been trying to unveil the structure imprinted on the Universe from an epoch of accelerated expansion in its earliest moments," said Professor J. Richard Bond of the University of Toronto. "Planck has now shown that the evidence for this early inflation is much stronger than before. The patterns we see are quite simple, resulting in many formerly viable theories falling victim to our Planckian knife. 
Our maps reveal unexplained, large-scale features that excite the imaginations of physicists who have been eagerly awaiting what Planck has to say about the early Universe." A series of scientific papers from the Planck mission will be published tomorrow covering many aspects of how the Universe is put together and how it has evolved. Planck's instruments allow astronomers to separate the primordial light from the effects of dust and other emissions coming from our Milky Way Galaxy. "We do not simply sweep away the dust signal into the trash bin, but rather treasure it for what it tells us about the workings of the Galaxy. It enables us to discover the evolution of structure in the interstellar medium leading from a diffuse state to star formation in dense molecular clouds," said Professor Peter Martin of the University of Toronto. Hundreds of astronomers from around the world will continue to study Planck's data as the telescope continues its observations. The complete results of the mission are scheduled to be released in 2014 once the space telescope has completed its study of the skies. More information on Canada's involvement in Planck, as well as the most recent map of the early Universe, is available from the Canadian Space Agency.
<urn:uuid:41cb457d-1e39-4b96-942d-35b06e35ef71>
CC-MAIN-2017-04
http://www.marketwired.com/press-release/canadian-astronomers-reveal-surprising-new-portrait-of-the-early-universe-1770520.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00152-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93557
923
3.046875
3
What is a DDoS attack and how do you protect against DDoS Attacks? What is a DDoS attack? A DDoS (Distributed Denial of Service) attack is an attempt to exhaust the resources available to a network, application or service so that genuine users cannot gain access. Beginning in 2010, and driven in no small part by the rise of Hacktivism, we've seen a renaissance in DDoS attacks that has led to innovation in the areas of tools, targets and techniques. Today, DDoS has evolved into a series of attacks that include very high-volume attacks, along with more subtle and difficult-to-detect attacks that target applications as well as existing security infrastructure such as firewalls and IPS. What are the different types of DDoS Attacks? DDoS attacks vary significantly, and there are thousands of different ways an attack can be carried out (attack vectors), but an attack vector will generally fall into one of three broad categories: Volumetric Attacks: Attempt to consume the bandwidth either within the target network/service, or between the target network/service and the rest of the Internet. These attacks are simply about causing congestion. TCP State-Exhaustion Attacks: These attacks attempt to consume the connection state tables which are present in many infrastructure components such as load-balancers, firewalls and the application servers themselves. Even high-capacity devices capable of maintaining state on millions of connections can be taken down by these attacks. Application Layer Attacks: These target some aspect of an application or service at Layer-7. These are the deadliest kind of attacks, as they can be very effective with as few as one attacking machine generating a low traffic rate (this makes these attacks very difficult to proactively detect and mitigate). These attacks have come to prevalence over the past three or four years, and simple application-layer flood attacks (HTTP GET flood, etc.) have been among the most common DDoS attacks seen in the wild. Today's sophisticated attackers are blending volumetric, state-exhaustion and application-layer attacks against infrastructure devices all in a single, sustained attack. These attacks are popular because they are difficult to defend against and often highly effective. The problem doesn't end there. According to Frost & Sullivan, DDoS attacks are "increasingly being utilized as a diversionary tactic for targeted persistent attacks." Attackers are launching DDoS attacks to distract the network and security teams while simultaneously trying to inject malware into the network with the goal of stealing IP and/or critical customer or financial information. Why are DDoS attacks so dangerous? DDoS represents a significant threat to business continuity. As organizations have grown more dependent on the Internet and web-based applications and services, availability has become as essential as electricity. DDoS is not only a threat to retailers, financial services and gaming companies with an obvious need for availability. DDoS attacks also target the mission-critical business applications that your organization relies on to manage daily operations, such as email, salesforce automation, CRM and many others. Additionally, other industries, such as manufacturing, pharma and healthcare, have internal web properties that the supply chain and other business partners rely on for daily business operations. All of these are targets for today's sophisticated attackers. What are the consequences of a successful DDoS attack? 
When a public-facing website or application is unavailable, that can lead to angry customers, lost revenue and brand damage. When business-critical applications become unavailable, operations and productivity grind to a halt. Internal websites that partners rely on mean supply chain and production disruption. A successful DDoS attack also means that your organization has invited more attacks. You can expect attacks to continue until more robust defenses are deployed. What are your DDoS Protection Options? Given the high-profile nature of DDoS attacks, and their potentially devastating consequences, many security vendors have suddenly started offering DDoS protection solutions. With so much riding on your decision, it is critical to understand the strengths, and weaknesses, of your options. Existing Infrastructure Solutions (Firewalls, Intrusion Detection/Protection Systems, Application Delivery Controllers / Load Balancers) IPS devices, firewalls and other security products are essential elements of a layered-defense strategy, but they are designed to solve security problems that are fundamentally different from dedicated DDoS detection and mitigation products. IPS devices, for example, block break-in attempts that cause data theft. Meanwhile, a firewall acts as a policy enforcer to prevent unauthorized access to data. While such security products effectively address "network integrity and confidentiality," they fail to address a fundamental concern regarding DDoS attacks—"network availability." What's more, IPS devices and firewalls are stateful, inline solutions, which means they are vulnerable to DDoS attacks and often become the targets themselves. Similar to IDS/IPS and firewalls, ADCs and load balancers have neither broader network traffic visibility nor integrated threat intelligence, and they are also stateful devices vulnerable to state-exhausting attacks. The increase in state-exhausting volumetric threats and blended application-level attacks makes ADCs and load balancers a limited and partial solution for customers requiring best-of-breed DDoS protection. Content Delivery Networks (CDN) The truth is a CDN can address the symptoms of a DDoS attack by simply absorbing large volumes of data. It lets all the information in and through. All are welcome. There are three caveats here. The first is that there must be bandwidth available to absorb this high-volume traffic, and some of these volumetric attacks are exceeding 300 Gbps, so there is a price for all that capacity. Second, there are ways around the CDN. Not every webpage or asset will utilize the CDN. Third, a CDN cannot protect from an application-based attack. So let the CDN do what it was intended to do. What is Arbor's approach to DDoS protection? Arbor has been protecting the world's largest and most demanding networks from DDoS attacks for more than a decade. Arbor strongly believes that the best way to protect your resources from modern DDoS attacks is through a multi-layer deployment of purpose-built DDoS mitigation solutions. You need protection in the cloud to stop today's high-volume attacks, which are exceeding 300 Gbps. You also need on-premise protection against stealthy application-layer attacks, and attacks against existing stateful infrastructure devices, such as firewalls, IPS and ADCs. Only with a tightly integrated, multi-layer defense can you adequately protect your organization from the full spectrum of DDoS attacks. 
- Arbor Networks Cloud (Tightly integrated, multi-layer DDoS protection) - Arbor Networks APS (On-Premises) - Arbor Networks SP/TMS (High Capacity On-Premise Solution for Large Organizations) Arbor customers enjoy a considerable competitive advantage: a micro view of their own network, via our products, combined with a macro view of global Internet traffic, via our ATLAS threat intelligence infrastructure. This is a powerful combination of network security intelligence that is unrivaled today. From this unique vantage point, Arbor's security research team is ideally positioned to deliver intelligence about DDoS, malware and botnets that threaten Internet infrastructure and network availability.
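To make the rate-based detection idea behind these defenses concrete, below is a minimal, hypothetical Python sketch (not Arbor's implementation, and not any product's API) of a per-source sliding-window counter of the kind an on-premise device might use to flag a simple HTTP GET flood; the window size and threshold are invented for illustration.

```python
# Illustrative only: a naive per-source request-rate detector.
# WINDOW_SECONDS and MAX_REQUESTS_PER_WINDOW are hypothetical values;
# a real mitigation device would use baselines learned from live traffic.
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100

class FloodDetector:
    def __init__(self):
        self._hits = defaultdict(deque)  # source IP -> recent request timestamps

    def record(self, src_ip, now=None):
        """Record one request; return True if src_ip exceeds the rate threshold."""
        now = time.monotonic() if now is None else now
        q = self._hits[src_ip]
        q.append(now)
        while q and now - q[0] > WINDOW_SECONDS:  # expire timestamps outside the window
            q.popleft()
        return len(q) > MAX_REQUESTS_PER_WINDOW

detector = FloodDetector()
if detector.record("203.0.113.7"):
    print("possible HTTP GET flood from 203.0.113.7")
```

Note that a low-and-slow application-layer attack of the kind described above would deliberately stay under such a threshold, which is why purpose-built mitigation combines simple rate limits with deeper behavioral analysis.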
<urn:uuid:995d4160-47f2-4eae-8473-5495eeacb3ea>
CC-MAIN-2017-04
https://www.arbornetworks.com/research/ddos-resources
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00152-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944445
1,528
2.90625
3
What is a Zero-Day Exploit? Zero-day exploit: an advanced cyber attack defined A zero-day vulnerability, at its core, is a flaw. It is an unknown exploit in the wild that exposes a vulnerability in software or hardware and can create complicated problems well before anyone realizes something is wrong. In fact, a zero-day exploit leaves NO opportunity for detection ... at first. A zero-day attack happens once that flaw, or software/hardware vulnerability, is exploited and attackers release malware before a developer has an opportunity to create a patch to fix the vulnerability—hence “zero-day.” Let’s break down the steps of the window of vulnerability: - A company’s developers create software, but unbeknownst to them it contains a vulnerability. - The threat actor spots that vulnerability either before the developer does or acts on it before the developer has a chance to fix it. - The attacker writes and implements exploit code while the vulnerability is still open and available. - After releasing the exploit, either the public recognizes it in the form of identity or information theft or the developer catches it and creates a patch to staunch the cyber-bleeding. Once a patch is written and used, the exploit is no longer called a zero-day exploit. These attacks are rarely discovered right away. In fact, it often takes not just days but months and sometimes years before a developer learns of the vulnerability that led to an attack.
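As a rough illustration of the window-of-vulnerability timeline above, the following Python sketch computes how long a hypothetical flaw stays exploitable; every date in it is invented for the example.

```python
# Hypothetical timeline for the four steps described above; dates are invented.
from datetime import date

flaw_shipped    = date(2016, 1, 10)  # step 1: vulnerable software is released
exploit_in_wild = date(2016, 3, 2)   # steps 2-3: attacker finds and weaponizes it
patch_released  = date(2016, 9, 14)  # step 4: developer ships a fix

window = (patch_released - exploit_in_wild).days
print(f"zero-day window: {window} days of exposure")
# After patch_released the exploit is no longer "zero-day", though
# unpatched systems remain at risk until the fix is actually applied.
```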
<urn:uuid:818e5d2b-b493-4796-b228-89e88f9073f3>
CC-MAIN-2017-04
https://www.fireeye.com/current-threats/what-is-a-zero-day-exploit.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00060-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936314
304
3.6875
4
Family sickened by food poisoning after poultry consumption Monday, Dec 23rd 2013 Recently, several members of a family were taken to the hospital after consuming chicken. The case is currently under investigation, but it is just one report supporting a finding by the Centers for Disease Control and Prevention (CDC) that poultry products have caused the most food poisoning infections in the past few years. However, proper storage, handling and preparation practices can help prevent the bacterial growth that leads to foodborne illnesses. A group of eight, comprising two adults and six children, in Mauranipur, India, became ill after eating a meal including chicken, according to the Times of India. After consuming the poultry, the family began experiencing symptoms of foodborne illness and were transported to a nearby hospital. Samples of the food were sent to a lab for testing to confirm the case of foodborne illness. This illustrates recent findings by the CDC that poultry products are the leading cause of food poisoning cases. Leading cause of foodborne illness outbreaks The CDC recently released a report that found that chicken, turkey and other poultry items caused 17 percent of all foodborne sicknesses, stated NBC News. However, NBC News stated that these figures may be worse than they appear, as experts estimate that only about 5 percent of food poisoning cases are recognized as part of an outbreak. In total, about 87 million individuals contract foodborne illnesses annually. Of these cases, 371,000 result in hospitalization and 5,700 end in death. When these types of food items are not stored, handled or prepared in a safe manner, bacteria and viruses can grow and cause food poisoning symptoms to appear in people consuming these products. According to the Mayo Clinic, signs of food poisoning can include nausea, vomiting and abdominal pains, all experienced by the family before being hospitalized. Additionally, individuals may also have fevers, signs of dehydration, muscle weakness and difficulty speaking or swallowing. If people experience these symptoms, they should seek medical attention as soon as possible. The Mayo Clinic also advised contacting the local health department to investigate any suspected cases of food poisoning. The CDC report stated that food left at an improper temperature can allow foodborne-illness-causing bacteria to grow. To avoid this, individuals should be sure to keep poultry products refrigerated at the optimal temperature. The USDA's Food Safety and Inspection Service (FSIS) noted that the danger zone for food, in which items have the highest chance of bacterial growth, is between 40 and 140 degrees Fahrenheit. Therefore, poultry items should be refrigerated below 40 degrees Fahrenheit as a best practice. As cold temperatures can slow bacterial growth, a storage unit set below 40 degrees Fahrenheit can prevent contamination of food items, according to the FSIS. A temperature monitoring device including a temperature sensor can ensure that refrigerators, freezers and all storage units maintain the best range to prevent bacterial growth. Additionally, when handling poultry products, consumers should be sure that they wash their hands frequently and adequately clean the surfaces on which they are preparing the items. These practices can prevent the cross-contamination that can lead to foodborne illnesses.
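As a sketch of the threshold logic such a monitoring device applies, the Python snippet below flags readings that enter the 40-140 degrees Fahrenheit danger zone cited above; the sensor-reading function is a hypothetical stand-in for real hardware, not any vendor's API.

```python
# Illustrative threshold check for cold-storage monitoring.
# The 40-140 F danger zone follows the FSIS guidance quoted above;
# read_sensor_f() is a hypothetical placeholder for a hardware poll.

DANGER_ZONE_LOW_F = 40.0
DANGER_ZONE_HIGH_F = 140.0

def read_sensor_f():
    """Hypothetical sensor read; a real monitor would poll hardware here."""
    return 43.5

def check_storage(temp_f):
    if DANGER_ZONE_LOW_F <= temp_f <= DANGER_ZONE_HIGH_F:
        return f"ALERT: {temp_f:.1f} F is inside the 40-140 F danger zone"
    if temp_f < DANGER_ZONE_LOW_F:
        return f"OK: {temp_f:.1f} F is cold enough to slow bacterial growth"
    return f"{temp_f:.1f} F is above the danger zone (hot-holding range)"

print(check_storage(read_sensor_f()))  # -> ALERT: 43.5 F is inside the 40-140 F danger zone
```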
<urn:uuid:22ae4221-c99d-450b-a242-122b1ddbe163>
CC-MAIN-2017-04
http://www.itwatchdogs.com/environmental-monitoring-news/cold-storage/family-sickened-by-food-poisoning-after-poultry-consumption-557805
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00181-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95503
612
2.96875
3
The Link Between Biology and IT In biology the response of an ecosystem to change has one of three outcomes: 1) the system adapts to the change and thrives, 2) the system rejects the change and ossifies or 3) the system can't adapt and dies. So, how does that lesson apply to IT? IT is, for better or worse, the change agent in many organizations. One of the risk factors in initiating change is understanding the rate of change the organization can absorb. The old story about the frog in water is illustrative: If you put a frog in cold water and heat it up gradually, the frog will stay in the water until it cooks. If you drop the frog in hot water it will jump out. The same thing happens when you introduce change in an organization. If you gradually introduce the change and let everyone get used to it in small increments, the change will be more likely to be accepted. But if you try a "big bang" and change everything at once, people get uncomfortable and the odds of making a successful change go down. Another risk factor is letting change get out of hand. Take the example of cellular growth. It's a great thing to have. It helps the organism replace dead cells. It provides the organism with additional resources to support a larger, more robust structure. But what happens when the cellular growth gets out of hand? We call it cancer. Planning for controlled change is crucial to successful growth for both organizations and organisms. Putting governance mechanisms in place to control change will keep wild, undesirable growth from sucking the life out of your company. Change is a positive thing for many environments. Take the pond scum example. Your backyard pond is probably not the most attractive thing in the world when it's covered with algae. So you change it by getting the water to move, which inhibits the growth of algae. You add a little base or acid to improve the pH of the water so the koi don't die. But you do that with a little planning, because too much or too little will unbalance things to the point where desirable pieces of the ecosystem die off. Working with and involving all the affected parties in planning change will make that change more likely to be a success. Sometimes change is externally initiated. A forest fire can be a disastrous change, but the ecosystem has mechanisms to deal with that kind of change and it recovers over time. In the business world the appearance of a disruptive change can mean the demise of an organization that cannot readily adapt to the change. Developing adaptive change control mechanisms will mitigate the risk of both internal and external changes. There's no easy answer to change risk management. You have to understand your organization, its goals, and its culture to assess what rate and volume of change it can absorb. While looking for risk, think about the following: Preparing your company for continuous change is one of the things you can do to reduce the risk of change. Putting mechanisms in place to respond positively to change and take advantage of it will give your business a better chance of survival in a world that is constantly changing. Knowing the organizational organism and using your best judgment on what rate of change can be absorbed is the best way to keep your company from becoming extinct. Formerly with B2B CFO, Mike Sheuerman is now an independent consultant with more than 25 years experience in strategic business planning and implementation. He can be reached at email@example.com.
<urn:uuid:ac9eb8fb-8a73-483c-9bdd-da9361427d70>
CC-MAIN-2017-04
http://www.cioupdate.com/print/insights/article.php/3669481/The-Link-Between-Biology-and-IT.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00089-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93684
706
2.78125
3
Over the course of the nearly forty years I have been working on database systems, there have been many debates and arguments about which database technology to use for any given application. These arguments have become heated, especially when a new database technology appears that claims to be superior to anything that came before. When relational systems were first introduced, the hierarchical (IMS) and network (IDMS) database system camps argued that relational systems were inferior and could not provide good performance. Over time this argument proved false, and relational products now provide the database management underpinnings for a vast number of operational and analytical applications. Relational database products have survived similar battles with object-oriented database technology and multidimensional database systems. Although relational technology survived these various skirmishes, the debates that took place did demonstrate that one size does not fit all and that some applications can benefit by using an alternative approach. The debates also often led to relational product enhancements that incorporated features (e.g., complex data types, XML and XQuery support) from competitive approaches. Some experts argue that many of these features have corrupted the purity and simplicity of the relational model. Just when I thought the main relational products had become a commodity, several new technologies appeared that caused the debates to start again. Over the course of the next few newsletters, I want to review these new technologies and discuss the pros and cons of each of them. This time I want to look at MapReduce, which Michael Stonebraker (together with David DeWitt), one of the original relational database technology researchers, recently described as “a giant step backwards.” What is MapReduce? MapReduce has been popularized by Google, which uses it to process many petabytes of data every day. A landmark paper by Jeffrey Dean and Sanjay Ghemawat of Google explains: “MapReduce is a programming model and an associated implementation for processing and generating large data sets…. Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system.” Michael Stonebraker’s comments on MapReduce explain MapReduce in more detail: “The basic idea of MapReduce is straightforward. It consists of two programs that the user writes called map and reduce plus a framework for executing a possibly large number of instances of each program on a compute cluster. The map program reads a set of records from an input file, does any desired filtering and/or transformations, and then outputs a set of records of the form (key, data). As the map program produces output records, a "split" function partitions the records into M disjoint buckets by applying a function to the key of each output record. This split function is typically a hash function, though any deterministic function will suffice. When a bucket fills, it is written to disk. The map program terminates with M output files, one for each bucket. 
After being collected by the map-reduce framework, the input records to a reduce instance are grouped on their keys (by sorting or hashing) and fed to the reduce program. Like the map program, the reduce program is an arbitrary computation in a general-purpose language. Hence, it can do anything it wants with its records. For example, it might compute some additional function over other data fields in the record. Each reduce instance can write records to an output file, which forms part of the answer to a MapReduce computation.” The key/value pairs produced by the map program can contain any type of arbitrary data in the value field. Google, for example, uses this approach to index large volumes of unstructured data. Although Google uses its own version of MapReduce, there is also an open source version called Hadoop from the Apache project. IBM and Google have announced a major initiative to use Hadoop to support university courses in distributed computer programming. MapReduce is not a new concept. It is based on the list processing capabilities in declarative functional programming languages such as LISP (LISt Processing). Today’s systems implement MapReduce in imperative languages such as Java, C++, Python, Perl, Ruby, etc. The key/value pairs used in MapReduce processing may be stored in a file or a database system. Google uses its BigTable database system (which is built on top of the Google distributed file system, GFS) to manage the data. Key/value pair databases have existed for many years. For example, Berkeley DB is an embedded database system that stores data in a key/value pair data structure. It was originally developed in the 1980s at Berkeley, but it is now owned by Oracle. Berkeley DB can also act as a back end storage engine for the MySQL open source relational DBMS. Why the Controversy? Given that MapReduce is not a database model, but a programming model for building powerful distributed and parallel processing applications, why is there such a controversy with respect to relational systems? To answer this question we need to examine the relational model of data in more detail. In a relational model, data is conceptually stored in a set of relations or tables. These tables are manipulated using relational operators such as selection, projection and join. Today, these relational operators are implemented primarily using the structured query language (SQL). How the table data is physically stored and managed in a relational database management system (RDBMS) is up to the vendor. The mapping of relational operators (SQL statements) to the back-end storage engine is handled by the relational optimizer whose job it is to find the optimal way of physically accessing the data. This physical data independence is a key benefit of the relational model. When using SQL, users define what data they want, not how it is to be accessed. Techniques such as indexing and parallel and distributed computing are handled by the underlying RDBMS. SQL is a declarative language, and not an imperative/procedural language like Java and C++, which require a detailed description of any data access algorithms that need to be run. Of course, SQL statements can be embedded in procedural languages. The reverse is also true; SQL can invoke stored procedures and user-defined functions written in a procedural language. 
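To make the map and reduce roles concrete, here is a minimal word-count sketch in plain Python; it is illustrative only, running in a single process with no Hadoop, no partitioning across machines and no fault tolerance.

```python
# A toy version of the model Stonebraker describes: map_fn emits
# (key, value) records, the framework groups them by key (the "split"
# role), and reduce_fn folds each group into part of the answer.
from collections import defaultdict

def map_fn(record):
    # One input record -> many (word, 1) pairs.
    for word in record.split():
        yield (word.lower(), 1)

def reduce_fn(key, values):
    # All values for one key -> a single output record.
    return (key, sum(values))

def map_reduce(records):
    groups = defaultdict(list)
    for record in records:               # "map" phase
        for key, value in map_fn(record):
            groups[key].append(value)    # group/shuffle by key
    return [reduce_fn(k, vs) for k, vs in sorted(groups.items())]  # "reduce" phase

print(map_reduce(["the cat sat", "the dog sat"]))
# -> [('cat', 1), ('dog', 1), ('sat', 2), ('the', 2)]
```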
The concern of Michael Stonebraker is that the use and teaching of MapReduce will take the industry back to the pre-relational times when there was a lack of formalized database schemas and application data independence. MapReduce advocates argue that much of the data processed by MapReduce involves unstructured data that lacks a data schema. They also argue that today’s programmers vastly outnumber SQL experts, don’t know or don’t want to know SQL, find MapReduce much simpler, and prefer to access and analyze data using their own procedural programming. Both camps are correct and both approaches have their benefits and uses. As I said at the beginning of this article, one size does not fit all. The challenge is to understand where each approach fits. Data Analysis Processing Modes When accessing and analyzing data there are three types of processing that need to be considered: batch processing of static data, interactive processing of static data, and dynamic processing of in-flight data. A business intelligence environment, for example, involves the SQL processing of static data in a data warehouse. This can be done in batch mode (production reporting) or interactively (on-demand analytical processing). SQL may also be used to analyze and transform data as it is captured from operational systems and loaded into a data warehouse. MapReduce is used to process large amounts of data in batch mode. It is particularly useful for processing unstructured data or sparse data involving many dimensions. It is not suited to interactive processing. It would be very useful, for example, for transforming large amounts of unstructured data for loading into a data warehouse, or for data mining. Neither MapReduce nor SQL is particularly suited to the dynamic processing of in-flight data such as event data. This is why we are seeing extensions to SQL (such as StreamSQL) and new technologies such as stream and complex event processing to handle this need. MapReduce is, however, useful for the filtering and transforming of large event files such as web logs. The next article in this series will look at stream processing in more detail. MapReduce and Relational Coexistence and Integration Several analytical RDBMS vendors (Vertica, Greenplum, Aster Data Systems) are offering solutions that combine MapReduce (MR) and relational technology. Vertica’s strategy is one of coexistence. With Vertica, MR programs continue to run in their normal operating environment, but instead of routing the output to the MR system, the Reduce program loads output data into the Vertica relational DBMS. The Vertica support works in conjunction with Amazon Elastic MapReduce (EMR). EMR is a web service that provides a hosted Hadoop framework running on the infrastructure of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3). Published examples show how to use EMR to process and load a data set from S3 into the Vertica RDBMS running on Amazon EC2. The Vertica solution could be used, for example, to do batch ETL (extract, transform, load) processing where the input is a very large data set (a set of web logs, for example) and the output is loaded into a data warehouse managed by Vertica. Aster and Greenplum have a strategy of integrating the MR processing framework into the RDBMS to take advantage of the benefits of RDBMS technology such as parallel computing, scalability, backup and recovery, and so forth. Greenplum allows developers to write MR programs in the Python and Perl scripting languages. 
This support enables MR scripts to use open source features such as text analysis and statistical toolkits. These scripts can access flat files and web pages, and can use SQL to access Greenplum relational tables. Source tables can be read by Map scripts and target tables can be created by Reduce scripts. This architecture allows developers to mix and match data sources and programming styles. It also allows the building of a data warehouse using both ETL (data is transformed before it is loaded into the data warehousing environment) and ELT (data is transformed after it is loaded into the data warehousing environment) approaches. Greenplum MR scripts can also be used as virtual tables by SQL statements – the MR job is run on the fly as part of the SQL query processing. Greenplum’s RDBMS engine executes all the code – SQL, Map scripts, Reduce scripts – on the same cluster of machines where the Greenplum database is stored. For more information, see the Greenplum white paper. Whereas Greenplum tends to emphasize the use of SQL in MR programs, Aster takes the opposite approach of focusing on the use of MR processing capabilities in SQL-based programs. Aster allows MR user-defined functions to be invoked using SQL. These functions can be written in languages such as Python, Perl, Java, C++ and Microsoft .NET (C#, F#, Visual Basic), and can use SQL data manipulation and data definition statements. The Linux .NET support is provided by the Mono open source product. These functions can also read and write data from flat files. Like Greenplum, Aster MR capabilities can be used for loading a data warehouse using both ETL and ELT approaches. Aster, however, tends to emphasize the power of the ELT approach. For more information, see the Aster white paper. Both Greenplum and Aster allow the combining of relational data with MapReduce style data. This is particularly useful for batch data transformation and integration applications, and intensive data mining operations. The approach used will depend on the application and the type of developer. In general, programmers may prefer the Greenplum approach, whereas SQL experts may prefer the Aster approach. What About Performance? MapReduce supporters often state that MapReduce provides superior performance to relational systems. This obviously depends on the workload. Andrew Pavlo of Brown University together with Michael Stonebraker, David DeWitt and several others recently published a paper comparing the performance of two relational DBMSs (Vertica and an undisclosed row-oriented DBMS) with Hadoop MapReduce. The paper concluded that, “In general, the SQL DBMSs were significantly faster and required less code to implement each task, but took longer to tune and load the data.” It also acknowledged that, “In our opinion there is a lot to learn from both kinds of systems” and “…the APIs of the two classes of systems are clearly moving toward each other.” MapReduce has achieved significant visibility because of its use by Google and its ability to process large amounts of unstructured web data, and also because of the heated debate between the advocates of MapReduce and relational database technology experts. Two things are clear. Programmers like the simplicity of MapReduce and there is a clear industry direction toward supporting MR capabilities in traditional DBMS systems. MapReduce is particularly attractive for the batch processing of large files of unstructured data for use in a business intelligence system. 
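For contrast with the procedural sketch shown earlier, the same aggregation can be written declaratively. The snippet below uses Python's built-in sqlite3 module purely for illustration (it is not any of the vendors' products): the GROUP BY states what result is wanted and leaves the access strategy to the optimizer, which is the data independence argument made earlier in the article.

```python
# The word count again, expressed as a declarative SQL query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words (word TEXT)")
conn.executemany(
    "INSERT INTO words VALUES (?)",
    [(w,) for line in ["the cat sat", "the dog sat"] for w in line.split()],
)
for row in conn.execute(
    "SELECT word, COUNT(*) FROM words GROUP BY word ORDER BY word"
):
    print(row)  # ('cat', 1) ('dog', 1) ('sat', 2) ('the', 2)
```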
My personal opinion is that if MR programs are being used to filter and transform unstructured data (documents, web pages, web logs, event files) for loading into a data warehouse, then I prefer an ETL approach to an ELT approach. This is because the ELT approach usually involves storing unstructured data in relational tables and manipulating it using SQL. I have seen many examples of these types of database applications, and this approach is guaranteed to give database designers heartburn. At the same time, I accept that some organizations would prefer a single data management framework based on an RDBMS. This is one of the reasons why DBMS vendors added support for XML data and XQuery to their RDBMS products. My concern is that relational products and SQL are becoming overly complex, especially for application developers.
<urn:uuid:5a0b23d9-6f4b-4f4b-a9d1-e2a1c0eab7c7>
CC-MAIN-2017-04
http://www.b-eye-network.com/view/10786
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00575-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925703
2,990
2.734375
3
NASA's Mars rover Curiosity has not found a single trace of methane in the Martian atmosphere, decreasing the odds that there is life on the planet. "It would have been exciting to find methane, but we have high confidence in our measurements, and the progress in expanding knowledge is what's really important," said Chris Webster, NASA's manager of Planetary Sciences Instruments. "We measured repeatedly from Martian spring to late summer, but with no detection of methane." NASA reported today that Curiosity has been running tests to search for traces of methane and, so far, has come up empty. The rover has been working on the Martian surface for more than a year now in the search for signs that the planet does, or ever was able to, support life. Towards that goal, Curiosity has found evidence that water coursed over the planet's surface billions of years ago and that certain chemicals necessary for life as we know it are present in Martian soil. Some scientists had thought it possible that microbial life exists on Mars today, and many microbes here on Earth produce methane. No methane on Mars. Not much chance of life. "This important result will help direct our efforts to examine the possibility of life on Mars," said Michael Meyer, NASA's lead scientist for Mars exploration, in a statement. "It reduces the probability of current methane-producing Martian microbes, but this addresses only one type of microbial metabolism. As we know, there are many types of terrestrial microbes that don't generate methane." According to NASA, Curiosity analyzed samples of the Martian atmosphere in search of methane six times between October 2012 and June 2013. Nothing was detected, leading scientists to calculate that the amount of methane must be no more than 1.3 parts per billion. That, the space agency noted, is one-sixth of the amount that scientists had expected to find. Curiosity's controllers now will adjust its analysis tools to search for methane at concentrations well below 1 part per billion. Scientists are a bit let down by the findings, as Martian atmospheric measurements made from Earth and by NASA spacecraft orbiting Mars had previously shown methane concentrations up to 45 parts per billion. "Methane is persistent. It would last for hundreds of years in the Martian atmosphere," said Sushil Atreya, a researcher on the Curiosity team. "Without a way to take it out of the atmosphere quicker, our measurements indicate there cannot be much methane being put into the atmosphere by any mechanism, whether biology, geology, or by ultraviolet degradation of organics delivered by the fall of meteorites or interplanetary dust particles." The image shows a lab demonstration of a measurement chamber used inside the Mars rover Curiosity. (Image: NASA) This story, "The chances of finding life on Mars just got slimmer" was originally published by Computerworld.
<urn:uuid:ebdf4ecc-dc32-4ce1-b600-4debf59c4a7d>
CC-MAIN-2017-04
http://www.networkworld.com/article/2170090/data-center/the-chances-of-finding-life-on-mars-just-got-slimmer.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00391-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945176
679
3.78125
4
Inside UML Version 2.0 - By Joab Jackson - Feb 02, 2005 The Unified Modeling Language is a visual modeling language used for creating maps of complex systems. In much the same way an architect drafts a blueprint for a new building, a project manager can use UML to document how a new system will be constructed, or how it will run. A system can be anything from a software program to the IT infrastructure of an entire agency. According to Jon Siegel, vice president of technology transfer for the Object Management Group, the nonprofit industry consortium that oversees UML development, some of the most notable new features of version 2.0 include: Nested Classifiers: This feature allows users to embed a set of classes within another set of classes. Someone can build a simple high-level model of a system, then embed more detailed views of discrete operational components. For instance, a high-level diagram of an entire agency can contain additional descriptions of the operations of each department. Likewise, those department modules could list all the software used by that department. Improved Behavioral Modeling: OMG unified all the different UML models for describing behaviors through the introduction of a basic behavioral element. A behavior is how an object interacts with other objects within a system. Improved relationship between structural and behavioral models: OMG has developed a way to tie together structural and behavioral diagrams, allowing users to do such things as merge activity diagrams into one metamodel. 'You can say in your model that this behavior represents the behavior of this class or component,' Siegel said. Joab Jackson is the senior technology editor for Government Computer News.
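As a loose programming-language analogue of the nested classifiers idea (UML itself is a diagram notation, not code), the hypothetical Python sketch below embeds progressively more detailed classifiers inside a high-level one, mirroring the agency-to-department-to-software example; all names are invented.

```python
# Python analogue (not UML) of nested classifiers: each level embeds a
# more detailed view of a discrete operational component.
class Agency:                      # high-level model of the whole agency
    class FinanceDepartment:       # embedded, more detailed view of one department
        class PayrollSystem:       # software used by that department
            def run_pay_cycle(self):
                return "pay cycle complete"

print(Agency.FinanceDepartment.PayrollSystem().run_pay_cycle())
```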
<urn:uuid:ac4858cb-77b0-495c-bbe8-ca6fc04811df>
CC-MAIN-2017-04
https://gcn.com/Articles/2005/02/02/Inside-UML-Version-20.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00301-ip-10-171-10-70.ec2.internal.warc.gz
en
0.899176
337
3.15625
3
Photo: Firefighters watch as a fan, simulating wind, changes airflow and smoke conditions during experiments in a seven-story high-rise abandoned apartment building on New York City's Governors Island. The NIST tests examined firefighting techniques such as the use of positive pressure ventilation fans, wind control devices and hose streams to control or suppress heat and smoke from wind-driven fires. National Institute of Standards and Technology (NIST) fire protection engineers turned an abandoned New York City (NYC) brick high-rise into a seven-story fire laboratory last month to better understand the fast-moving spread of wind-driven flames, smoke and toxic gases through corridors and stairways of burning buildings. The experiments on NYC's Governors Island, conducted in partnership with the Fire Department of New York (FDNY) and New York's Polytechnic University, examined the effectiveness of firefighting tactics such as the use of positive pressure ventilation fans, wind control devices and hose streams to control or suppress deadly heat and smoke from the wind-driven fires. Between 1985 and 2002, 1,600 civilians died and more than 20,000 people were injured in approximately 385,000 high-rise building fires in the United States, according to the National Fire Protection Association. Due to temperature differences between the outside and inside of a building on fire, open doors and broken windows far from the actual site of the fire can increase the movement of hot gases and smoke dramatically. Wind-driven flames, heat and smoke with temperatures exceeding 815 C (1500 F) can speed across entire floors and around corridors without warning. Smoke and heat entering stairwells often can block the evacuation of occupants and can hinder firefighting operations. To develop an understanding of the wind-driven fires and measure the impact of the firefighting tactics, NIST researchers placed cameras and temperature and pressure sensors throughout the building. From a safe ground-floor monitoring post, researchers with laptops monitored the progress of intentionally set fires raging through the apartments and public corridors. They recorded, second by second, the effects of opening or closing doors and windows both near and far from the blaze. Positive pressure ventilation fans, prototype wind control devices and prototype high-rise fire suppression nozzles, which were developed by FDNY, all had a positive impact on controlling the effects of wind-driven fires. Research findings from the Governors Island experiments are expected to help improve fire service guidelines for combating high-rise fires and to enhance firefighter safety, fire ground operations and use of equipment. NIST expects to issue a report on the high-rise experiments by November 2008. The Department of Homeland Security's (DHS) Federal Emergency Management Agency (FEMA) funded the Governors Island tests under its "Assistance to Firefighters" grant program.
<urn:uuid:6e8e1106-069c-45c7-ae08-840d96af3112>
CC-MAIN-2017-04
http://www.govtech.com/public-safety/NIST-Evaluates-Firefighting-Tactics-In-NYC.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00511-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931671
560
3.296875
3
Mitchard E.T.A.,University of Edinburgh | Saatchi S.S.,Jet Propulsion Laboratory | White L.J.T.,Agence Nationale des Parcs Nationaux | White L.J.T.,University of Stirling | And 13 more authors. Biogeosciences | Year: 2012 Spatially-explicit maps of aboveground biomass are essential for calculating the losses and gains in forest carbon at a regional to national level. The production of such maps across wide areas will become increasingly necessary as international efforts to protect primary forests, such as the REDD+ (Reducing Emissions from Deforestation and forest Degradation) mechanism, come into effect, alongside their use for management and research more generally. However, mapping biomass over high-biomass tropical forest is challenging as (1) direct regressions with optical and radar data saturate, (2) much of the tropics is persistently cloud-covered, reducing the availability of optical data, (3) many regions include steep topography, making the use of radar data complex, and (4) while LiDAR data does not suffer from saturation, expensive aircraft-derived data are necessary for complete coverage. We present a solution to the problems, using a combination of terrain-corrected L-band radar data (ALOS PALSAR), spaceborne LiDAR data (ICESat GLAS) and ground-based data. We map Gabon's Lopé National Park (5000 km2) because it includes a range of vegetation types from savanna to closed-canopy tropical forest, is topographically complex, has no recent contiguous cloud-free high-resolution optical data, and the dense forest is above the saturation point for radar. Our 100 m resolution biomass map is derived from fusing spaceborne LiDAR (7142 ICESat GLAS footprints), 96 ground-based plots (average size 0.8 ha) and an unsupervised classification of terrain-corrected ALOS PALSAR radar data, from which we derive the aboveground biomass stocks of the park to be 78 Tg C (173 Mg C ha-1). This value is consistent with our field data average of 181 Mg C ha-1, from the field plots measured in 2009 covering a total of 78 ha, and which are independent as they were not used for the GLAS-biomass estimation. We estimate an uncertainty of ± 25% on our carbon stock value for the park. This error term includes uncertainties resulting from the use of a generic tropical allometric equation, the use of GLAS data to estimate Lorey's height, and the necessity of separating the landscape into distinct classes. As there is currently no spaceborne LiDAR satellite in operation (GLAS data is available for 2003-2009 only), this methodology is not suitable for change-detection. This research underlines the need for new satellite LiDAR data to provide the potential for biomass-change estimates, although this need will not be met before 2015. © 2012 Author(s). Schuttler S.G.,University of Missouri | Philbrick J.A.,University of Missouri | Jeffery K.J.,Agence Nationale des Parcs Nationaux | Jeffery K.J.,University of Stirling | And 2 more authors. PLoS ONE | Year: 2014 Spatial patterns of relatedness within animal populations are important in the evolution of mating and social systems, and have the potential to reveal information on species that are difficult to observe in the wild. This study examines the fine-scale genetic structure and connectivity of groups within African forest elephants, Loxodonta cyclotis, which are often difficult to observe due to forest habitat. 
We tested the hypothesis that genetic similarity will decline with increasing geographic distance, as we expect kin to be in closer proximity, using spatial autocorrelation analyses and Tau Kr tests. Associations between individuals were investigated through a non-invasive genetic capture-recapture approach using network models, and were predicted to be more extensive than the small groups found in observational studies, similar to fission-fusion sociality found in African savanna (Loxodonta africana) and Asian (Elephas maximus) species. Dung samples were collected in Lopé National Park, Gabon in 2008 and 2010 and genotyped at 10 microsatellite loci, genetically sexed, and sequenced at the mitochondrial DNA control region. We conducted analyses on samples collected at three different temporal scales: a day, within six-day sampling sessions, and within each year. Spatial autocorrelation and Tau Kr tests revealed genetic structure, but results were weak and inconsistent between sampling sessions. Positive spatial autocorrelation was found in distance classes of 0-5 km, and was strongest for the single day session. Despite weak genetic structure, individuals within groups were significantly more related to each other than to individuals between groups. Social networks revealed some components to have large, extensive groups of up to 22 individuals, and most groups were composed of individuals of the same matriline. Although fine-scale population genetic structure was weak, forest elephants are typically found in groups consisting of kin and based on matrilines, with some individuals having more associates than observed from group sizes alone. © 2014 Schuttler et al. Oslisly R.,IRD Montpellier | White L.,Agence Nationale des Parcs Nationaux | White L.,Institute Of Recherche En Ecologie Tropicale | White L.,University of Stirling | And 5 more authors. Philosophical Transactions of the Royal Society B: Biological Sciences | Year: 2013 Central Africa includes the world's second largest rainforest block. The ecology of the region remains poorly understood, as does its vegetation and archaeological history. However, over the past 20 years, multidisciplinary scientific programmes have enhanced knowledge of old human presence and palaeoenvironments in the forestry block of Central Africa. This first regional synthesis documents significant cultural changes over the past five millennia and describes how they are linked to climate. It is now well documented that climatic conditions in the African tropics underwent significant changes throughout this period and here we demonstrate that corresponding shifts in human demography have had a strong influence on the forests. The most influential event was the decline of the strong African monsoon in the Late Holocene, resulting in serious disturbance of the forest block around 3500 BP. During the same period, populations from the north settled in the forest zone; they mastered new technologies such as pottery and fabrication of polished stone tools, and seem to have practised agriculture. The opening up of forests from 2500 BP favoured the arrival of metallurgist populations that impacted the forest. During this long period (2500-1400 BP), a remarkable increase of archaeological sites is an indication of a demographic explosion of metallurgist populations. Paradoxically, we have found evidence of pearl millet (Pennisetum glaucum) cultivation in the forest around 2200 BP, implying a more arid context. 
While Early Iron Age sites (prior to 1400 BP) and recent pre-colonial sites (two to eight centuries BP) are abundant, the period between 1600 and 1000 BP is characterized by a sharp decrease in human settlements, with a population crash between 1300 and 1000 BP over a large part of Central Africa. It is only in the eleventh century that new populations of metallurgists settled into the forest block. In this paper, we analyse the spatial and temporal distribution of 328 archaeological sites that have been reliably radiocarbon dated. The results allow us to piece together changes in the relationships between human populations and the environments in which they lived. On this basis, we discuss interactions between humans, climate and vegetation during the past five millennia and the implications of the absence of people from the landscape over three centuries. We go on to discuss modern vegetation patterns and African forest conservation in the light of these events. © 2013 The Authors. Source Maxwell S.M.,University of California at Santa Cruz | Maxwell S.M.,Marine Conservation Institute | Breed G.A.,University of California at Santa Cruz | Nickel B.A.,University of California at Santa Cruz | And 9 more authors. PLoS ONE | Year: 2011 Tractable conservation measures for long-lived species require the intersection between protection of biologically relevant life history stages and a socioeconomically feasible setting. To protect breeding adults, we require knowledge of animal movements, how movement relates to political boundaries, and our confidence in spatial analyses of movement. We used satellite tracking and a switching state-space model to determine the internesting movements of olive ridley sea turtles (Lepidochelys olivacea) (n = 18) in Central Africa during two breeding seasons (2007-08, 2008-09). These movements were analyzed in relation to current park boundaries and a proposed transboundary park between Gabon and the Republic of Congo, both created to reduce unintentional bycatch of sea turtles in marine fisheries. We additionally determined confidence intervals surrounding home range calculations. Turtles remained largely within a 30 km radius from the original nesting site before departing for distant foraging grounds. Only 44.6 percent of high-density areas were found within the current park but the proposed transboundary park would incorporate 97.6 percent of high-density areas. Though tagged individuals originated in Gabon, turtles were found in Congolese waters during greater than half of the internesting period (53.7 percent), highlighting the need for international cooperation and offering scientific support for a proposed transboundary park. This is the first comprehensive study on the internesting movements of solitary nesting olive ridley sea turtles, and it suggests the opportunity for tractable conservation measures for female nesting olive ridleys at this and other solitary nesting sites around the world. We draw from our results a framework for cost-effective protection of long-lived species using satellite telemetry as a primary tool. © 2011 Maxwell et al. Source Anthony N.M.,University of New Orleans | Mickala P.,Universite des Sciences et Techniques de Masuku | Abernethy K.A.,University of Stirling | Atteke C.,Universite des Sciences et Techniques de Masuku | And 31 more authors. 
Conservation Genetics Resources | Year: 2012 A five-day international workshop was recently convened at the Université des Sciences et Techniques de Masuku in Gabon to enhance international collaboration among Central African, US and European scientists, conservation professionals and policy makers. The overall aims of the workshop were to: (1) discuss emerging priorities in biodiversity and conservation genetics research across Central Africa, and (2) create new networking opportunities among workshop participants. Here we provide a brief overview of the meeting, outline the major recommendations that emerged from it, and provide information on new networking opportunities through the meeting web site. © 2011 Springer Science+Business Media B.V. Source
AT&T has unearthed another video from its archives, this time producing "Adventures in Telezonia", in which an odd-looking puppet instructs viewers on the proper way to use a telephone. If the name "Telezonia" sounds familiar, it's because the company re-did this film in 1974, using live actors and even more creepy "residents". According to AT&T, this film "was part of an educational package distributed in grade schools for kids to learn proper telephone usage. The package contained the 18-minute film, a filmstrip with different supplemental content, a children's booklet, and a teachers' guide. The company also had telephone sets - two brightly colored telephones - as a learning aid that were available to go along with the package." By 1950, about 60% of U.S. households had a telephone, AT&T says, so the goal of the project was to educate children on how to use the phone system. It's interesting to note how many times AT&T (and the Bell System before it) created film packages like this to promote the use of the phone. In modern times, it would be as if Apple created films like these to distribute to schools, teaching students how to use the iPhone or send a text message. Keith Shaw rounds up the best in geek video in his ITworld.tv blog. Follow Keith on Twitter at @shawkeith. For the latest IT news, analysis and how-tos, follow ITworld on Twitter, Facebook, and Google+.
IBM is arguably the leader in corporate sacred-song production. In order to keep employees motivated, Big Blue founder Thomas Watson Sr. collected songs employees had written about IBM into a book dubbed “Songs of the IBM,” which the company first published in 1927. Watson felt that song singing was a way to build character and instill company loyalty. “Songs of the IBM” included more than 80 IBM-specific ditties, including the rollicking rally song "Ever Onward IBM," written in 1931 by IBM-er Frederick Tappe. Listen to the song here (.mp3)
As a lot of you probably know, Europe is in the midst of a horse meat scandal at the moment. The main issue is that meat products labelled as beef have been found to contain large quantities of horse meat. So, what can IT possibly learn from this? One of the first things we can learn is to not trust labels. Most data moving across networks is labelled with specific TCP or UDP port values. The problem is that many applications now have the ability to run over many different port numbers. In fact, applications like Bittorrent will go and figure out what ports are open at a network edge so that it can communicate with other peers. If you see something on your network running over TCP port 80, don't assume it is ordinary web traffic. It could be anything, and if it is traversing your network edge, be very suspicious. If you don't have something already, consider looking at deep packet inspection technologies, which have a better chance of understanding what is being transported in the network packets. The second thing that has been exposed by the horse meat scandal is the complex food chains which operate behind the scenes. Meat and other food products now move vast distances between producers and brokers before they finally end up on your plate. The more links in the chain, the greater the risk of contamination. Just like in the food industry, the Internet has been transformed into a complex infrastructure of hosting and data routing services in recent years. Gone are the days when you would download or stream content directly from the producer. Content delivery networks (CDNs) and cloud services have made a huge impact on the way content is stored and distributed. When data is uploaded to these services, it is immediately replicated across the globe so that end users have fast access to local copies. It also gets rid of the single point of failure when content is hosted on a single site. These methods for hosting and distributing data can cause problems for some network monitoring tools. On many networks a review of flow records or log files will show lots of bandwidth being consumed by CDN services. You can do a simple test to see an example of this. Use a packet capture application to monitor traffic while you access your favorite video hosting site. Do a whois lookup of the IP address of the remote server and you will find a different company associated with it than the service you accessed in the first place. The good news is that there are tools out there that can report on both the IP addresses and the websites users are accessing. Look out for features like the ability to capture HTTP headers and DNS query traffic. As the horse meat scandal is revealing, unknowns can enter the food chain when the proper controls and inspections are lacking. The same applies to your network. Without the proper tools and network visibility in place, unknowns can enter your network. These can then cause a range of problems like excessive bandwidth utilization and issues with network security.
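A reverse DNS lookup is a quick, scriptable variant of that test. The following minimal Python sketch is my illustration rather than anything from the article, and the sample address is arbitrary; it resolves a captured server IP back to a hostname, which often names the CDN or hosting provider rather than the site the user actually visited.

    import socket

    def describe_remote_host(ip_address: str) -> str:
        """Reverse-resolve an IP address seen in a packet capture.

        The hostname frequently reveals a CDN or hosting provider
        rather than the brand of the website the user visited.
        """
        try:
            hostname, _aliases, _addresses = socket.gethostbyaddr(ip_address)
            return hostname
        except socket.herror:
            return "no reverse DNS record"

    if __name__ == "__main__":
        # Hypothetical address captured while streaming video.
        print(describe_remote_host("8.8.8.8"))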
Data replication refers to duplicating and transferring data from your Desktop Central server to a distribution server. When managing computers across a Wide Area Network (WAN), you might need to use distribution servers at all the remote locations in your WAN. Using distribution servers at remote locations allows you to use the available bandwidth efficiently depending on the infrastructure that you have. A distribution server receives updates comprising replicated data from the Desktop Central server at certain intervals, called replication intervals. It sends this data to all the computers in its network. The default replication interval for distribution servers is 2 minutes. You can configure this to suit your requirements. A distribution server replicates configuration details (or the tasks that have to be performed in client computers) and its dependent data, from the Desktop Central server. Dependent data includes software application configurations and patch binaries. This means that not all the software application configurations and patch binaries that are available in the Desktop Central server will automatically get replicated to all the distribution servers. Specific binaries will be replicated to the respective remote locations that require them. While the distribution server has been introduced to save your WAN bandwidth, improper usage might contradict its purpose. Using your bandwidth efficiently depending on your infrastructure is known as optimizing your bandwidth. You can optimize your bandwidth by selecting only those computers to which a software application or a patch configuration has to be deployed. This will ensure that the software application or patch configuration is replicated only to the appropriate distribution servers. You must select the required remote office as the target. If you select a domain as the target, the software application or patch configuration is replicated in all the distribution servers irrespective of whether the client computers require the configuration or not. For example, if you have 5 distribution servers and you require a patch to be deployed to 3 computers at one remote location, you should select that remote office when choosing a target. This will ensure that the patches will be replicated only to the distribution server of that remote office, thus optimizing your bandwidth. If you have to deploy a patch to computers in more than one remote office, you can use the Add More Targets button to add multiple remote offices as targets. If you are using an Internet connection with limited bandwidth for your remote offices, it is important that you control the bandwidth to ensure that it is used efficiently. You can configure the data-transfer rate as required. Configuring the data-transfer rate will increase the time taken to transfer the data from the Desktop Central server to a distribution server in a remote location.
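To make that tradeoff concrete, here is a small arithmetic sketch in Python. It is an illustration only, not ManageEngine code, and the payload size and rate cap are invented values.

    def transfer_time_seconds(payload_mb: float, rate_kbps: float) -> float:
        """Estimate how long a replication update takes at a capped transfer rate.

        payload_mb: size of the replicated binaries in megabytes.
        rate_kbps:  configured data-transfer cap in kilobits per second.
        """
        payload_kilobits = payload_mb * 1024 * 8  # megabytes -> kilobits
        return payload_kilobits / rate_kbps

    # A 50 MB patch at a 512 kbps cap takes about 13 minutes to replicate,
    # instead of briefly saturating the WAN link at full speed.
    print(transfer_time_seconds(50, 512) / 60)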
The news media has been abuzz over NASA’s recent achievement. The big risk, big reward operation costing 2.5 billion dollars, which included safely landing a mini cooper-sized robot on Mars, paid off. Behind the scenes, the agency has relied on cloud services to keep people informed and will most likely continue to use similar applications to assist the rover as it explores the red planet. Just before the landing took place, GigaOm posted an article detailing the preparations NASA put into their live stream. The agency knew Curiosity’s entry sequence was going to be a big event, so they had to prepare for high amounts of Web traffic aimed at their live stream. They partnered with SOASTA, a service that tests how much load applications can handle on the web. The company uses cloud resources to flood servers, emulating high amounts of traffic. It’s a method of testing the stamina of a given infrastructure and was used to determine the resiliency of the 2012 Olympic website. In NASA’s case, a Mac Pro at the Jet Propulsion Laboratory (JPL) shipped four streams of various bitrates to a flash server. The flash server then pushed the streams to a “tier 1” server, which was replicated by 40 load-balanced “tier 2” servers on Amazon’s EC2. When SOASTA tested the stream, they generated 25 Gbps of traffic for 40 minutes. In addition, they terminated 10 instances and then 20 instances from NASA’s stream to make sure Amazon’s load balancing service would automatically bring them back up. During the 40-minute test, SOASTA downloaded over 6 terabytes of data. Needless to say, NASA’s stream worked without a hitch. Looking ahead, it appears the agency will continue to use cloud applications during the Curiosity’s mission. HPC in the Cloud reported on NASA’s use of Windows Azure to power a new cloud-based application called the “Be a Martian” program. The project uploaded 250,000 images of Mars onto Microsoft’s cloud platform and served more than 2.5 million data queries. Given that the new rover is equipped with 17 onboard cameras, there is a good chance a number of images captured by Curiosity will make their way to the application.
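The load-testing approach described above can be sketched in miniature. The Python snippet below is an illustration, not SOASTA's tooling, and the target URL is a placeholder; it hammers an endpoint from a thread pool and reports the success rate, which is the basic measurement behind a stress test like NASA's.

    import concurrent.futures
    import urllib.request

    TARGET = "http://example.com/stream-health"  # placeholder endpoint
    REQUESTS = 200

    def probe(_):
        # One simulated viewer: fetch the endpoint, report success or failure.
        try:
            with urllib.request.urlopen(TARGET, timeout=5) as response:
                return response.status == 200
        except OSError:
            return False

    if __name__ == "__main__":
        with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
            results = list(pool.map(probe, range(REQUESTS)))
        print(f"{sum(results)}/{REQUESTS} requests succeeded")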
Planar lightwave circuit (PLC) technology builds optical branching devices on a chip using semiconductor fabrication processes, performing the splitting function on the chip itself. A single chip can split one input into as many as 32 outputs, with multi-channel fiber arrays coupled and packaged at the input and output ends of the chip. PLC splitters are used to distribute or combine optical signals; they provide a low-cost light distribution solution with a small form factor and high reliability. The PLC splitter contains no electronics and uses no power. These are the network elements that put the passive in Passive Optical Network, and they are available in a variety of split ratios, including 1:4, 1:8, 1:16, 1:32, 1:64 and 1:128.
The main advantages:
- Loss is insensitive to the transmission wavelength, meeting the needs of transmission at different wavelengths.
- Splitting is uniform, so the signal can be distributed evenly to all users.
- Compact structure and small size: the splitter can be installed directly in various types of distribution boxes without requiring a large, specially designed installation space.
- A single device can provide 32 or more split channels.
- Low cost per channel for multi-channel devices; the higher the split count, the more obvious the cost advantage.
The main drawbacks:
- The device fabrication process is complex and the technical threshold is high. The chips are currently monopolized by a handful of foreign companies, and only a few domestic enterprises can package them in large quantities.
- Cost is higher than that of fused biconical taper splitters, leaving PLC splitters at a disadvantage especially at low channel counts.
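As a rule of thumb from standard optics (not from the original post): an ideal 1×N splitter divides input power N ways, so its theoretical insertion loss is 10·log10(N) dB, with real PLC devices adding a device-dependent excess loss on top. A small Python sketch, where the 1 dB excess-loss default is only illustrative:

    import math

    def splitter_loss_db(n_outputs: int, excess_loss_db: float = 1.0) -> float:
        """Approximate insertion loss of a 1xN PLC splitter.

        Splitting power N ways costs 10*log10(N) dB; the excess-loss
        figure varies by device and is an assumed value here.
        """
        return 10 * math.log10(n_outputs) + excess_loss_db

    for n in (4, 8, 16, 32, 64, 128):
        print(f"1x{n}: ~{splitter_loss_db(n):.1f} dB")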
Ask the average non-programmer for his or her view of the ideal software developer and odds are you’ll get a description of a whiz kid teenager or a twenty-something writing code all night. The long hours and fast pace of change in software and technology can make it seem to the uninitiated that programming is a younger person’s game. Those who’ve been in the business, however, know that there are advantages to being an older programmer, and a new study supports the notion that older is better when it comes to developers. Researchers from the computer science department at North Carolina State University have released a study in which they examined whether programming knowledge gets better with age. Specifically, they used data on over 84,000 members of the Stack Overflow community, the questions they ask and answer in that forum and the site reputations for each user as proxies for the general population of programmers and their level of programming knowledge. Their approach was similar to an earlier, less formal, look at the Stack Overflow data which found that, generally, programming knowledge did seem to improve with age. The NCSU researchers sought to answer three questions: Does age have a positive effect on programming knowledge? Using linear regression they found that there was a statistically significant and positive relationship between age and site reputation, which suggests that programming knowledge does improve with age. Do older programmers possess a wider variety of technologies and skills? To examine this question, the researchers looked at the number of topics users asked and answered questions about. They found that the number of topics associated with programmers actually declined through age 30, then increased in the following decades, suggesting an increase in the number of technologies one knows about later in one’s career. To what degree do older programmers learn new technologies? For this question, they divided the users into two groups, younger (under 37) and older programmers, and tested whether older programmers were given lower scores for their answers to questions about newer technologies. They found that older programmers actually scored (statistically) significantly higher on questions about iOS and Windows Phone 7, and were essentially even with younger programmers in their knowledge of other new technologies. Based on all this, one can conclude that as programmers get older, they get better; they know more about more programming topics, and they learn new technologies just as well, if not better, than their younger counterparts. Take that, whippersnappers! Of course, like all research, there are a number of caveats to keep in mind when extrapolating the findings to the general population. As the authors point out, this is not a randomly selected sample of programmers; it’s a self-selected group that skews younger than the actual programmer population. Also, how accurately does one’s reputation on Stack Overflow reflect one’s actual programming knowledge? And how well does programming knowledge translate to actual ability? In any case, it’s some validation for any programmer who’s of the age where s/he occasionally forgets why s/he came into a room. Just because a guy finds himself getting up more in the middle of the night to go to the bathroom doesn’t mean he can’t still knock out a killer iPhone app for you. He just may need to take a few naps along the way. Read more of Phil Johnson's #Tech blog and follow the latest IT news at ITworld.
Follow Phil on Twitter at @itwphiljohnson. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
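For the statistically curious, the kind of analysis the NCSU team describes, a linear regression of reputation on age, can be sketched in a few lines of Python. The data below is invented, and scipy's linregress merely stands in for whatever the researchers actually used.

    from scipy.stats import linregress

    # Invented (age, reputation) pairs standing in for the Stack Overflow data.
    ages = [22, 25, 28, 31, 35, 40, 45, 50, 55, 60]
    reputations = [310, 520, 640, 800, 1150, 1500, 2100, 2600, 3100, 3500]

    fit = linregress(ages, reputations)
    # A positive slope with a small p-value is the pattern the study reports.
    print(f"slope={fit.slope:.1f} reputation/year, p={fit.pvalue:.2g}")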
The seminal Laws of Identity define a Digital Identity as a set of claims made by one digital subject about itself or another digital subject. Crucially, this definition leaves "identity" as a metaphor. It's quite different from the way we casually use the word "identity" day-to-day as if it were a thing, like a label. There is a presumption online (largely un-examined) that identity can be lifted from one context and freely applied in others. So despite the careful framing of the Laws of Identity - that digital identity is about sets of claims and critically context-dependent - many people still carry around a utopian idea of a singular digital identity. The archetypal online identity metaphor is the passport. The belief in the possibility of a universal digital passport has been a long standing distraction, and terribly unhelpful, because there is actually no such thing, not in the sense the word is used by technologists! Ever since the early days of Big PKI, there has been the beguiling idea of an all purpose credential that will let its bearer into any and all online services, enabling total strangers to "trust" one another online. Later Microsoft of course even named an early digital identity service "Passport", and the word is still commonplace in discussing authentication products. The idea is that the passport allows you to go wherever you like, yet the concept that the metaphor alludes to doesn't exist. A real world passport simply does not let the holder into any country. To begin with, a passport is not always sufficient; you often need a visa. Then, you can't stay as long as you like in a foreign place; some countries won't let you in at all if you carry the passport of an unfriendly nation. You also need to complete a landing card and customs declarations specific to your particular journey. And finally, when you've got to the end of the arrivals queue, you are still at the mercy of an immigration officer who usually has the discretion to turn you away based on any other evidence they may have to hand. As with business transactions, there is much more to border control than identity. So if we could create the universal digital identity, we would do well to call it something other than "passport"! Metaphors are more than wordplay; they are used to teach, and once learned, simplistic mental models like “electronic passport” can be deeply unhelpful. The dream of all-purpose digital certificates derailed PKI. When they tried to implement "digital passports", they turned out to be unwieldy, riddled with fine print and excessive identity proofing, and very rarely could such certificates be used anywhere on their own. So the passport metaphor is lousy. Yet with "open" federated identity frameworks, we're unwittingly repeating many of the missteps of early PKI, largely because people aren't coming to grips with complexities obscured by faulty metaphors. The well-initiated appreciate that the Laws of Identity and earnest schemes like NSTIC all involve a plurality of identities tuned to different contexts. Many federated identity supporters expressly deprecate a single all-purpose cyber identity. Yet NSTIC especially is easily confused by many with a single new ID; a crazy number of press reports represent it as an Internet "driver licence". 
The misunderstanding is actually exacerbated by the strategy's own champions when they use terms like “interoperable identity” without enough qualification, and casually suggest that a student in future will log in to their bank using their student card. The Laws of Identity teach that identities are context dependent. That is, you cannot expect that an ID issued in one context will operate seamlessly in another. If we recall the formal definition of digital identity and set aside the passport metaphor, it's actually obvious that identities don't easily interoperate. Consider the set of claims made about me in the context of my employment; my corporate digital identity might comprise my employee number, position and department, contact details, and company role, which together amount to my employer's imprimatur to represent the organisation. On the other hand, if I were enrolled at university, my student identity might consist of my student number, faculty, the stage of my course, and my eligibility to get into certain labs and access certain online collections. What do these respective sets of claims say about me in other contexts, say banking or healthcare? Very little. I can identify as Steve Wilson in my company and Steve Wilson at university, but any interoperability of these identities across contexts only happens at the attribute level, where the identity metaphor breaks down. "Interoperability" is actually a curious omission in the Laws of Identity. The interoperability of atomic claims like date of birth, home address, credit card number, student number or SSN is almost trivial; some services recognise these claims and have business rules that use them, while others don't care about them. But the "interoperability" of a rolled-up set of claims like "Steve Wilson is employed by Lockstep Pty Ltd." makes almost no sense. The set of claims that make up that digital identity says a lot about me to a Relying Party doing business with Lockstep, but my corporate identity means nothing to retailers, doctors, my personal bank, the police or the video store.
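To make the attribute-level point concrete, here is a small illustrative Python sketch; the claim values are invented. Two context-specific identities are represented as claim sets, and the only "interoperability" between them exists at the level of individual attributes that both contexts happen to understand.

    # Two digital identities as sets of claims, per the Laws of Identity.
    corporate_identity = {
        "subject": "Steve Wilson",
        "employee_number": "E-1234",   # invented value
        "department": "Research",      # invented value
        "role": "Principal Analyst",   # invented value
    }

    student_identity = {
        "subject": "Steve Wilson",
        "student_number": "S-5678",    # invented value
        "faculty": "Engineering",      # invented value
        "course_stage": "2nd year",    # invented value
    }

    # Interoperability happens attribute by attribute: a relying party
    # recognises the claims it has business rules for and ignores the rest.
    shared = corporate_identity.keys() & student_identity.keys()
    print(shared)  # {'subject'} - almost nothing carries across contexts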
The COBOL computer programming language has turned 50 as of Sept. 18, making it one of the oldest programming languages still in use. Officials at Micro Focus, a provider of enterprise application management, testing and modernization solutions that has made its name on COBOL, said the company is celebrating the 50th anniversary of the date the name COBOL was given to the computer language that continues to underpin the modern world. The name COBOL, short for Common Business-Oriented Language, was agreed upon during a meeting of the Short Range Committee of the Conference on Data Systems Languages (CODASYL), the organization responsible for submitting the first version of the language, on Sept. 18, 1959. This followed a meeting at the Pentagon where guidelines for COBOL were first laid down. Despite its age, COBOL still plays a pivotal role in running most of the world's businesses and public services, from powering almost all global ATM transactions, running nearly three quarters of the world's business applications, and booking hundreds of holidays every single day, Micro Focus said. According to some estimates, there are more than 200 billion lines of COBOL code in existence, with hundreds more being created every single day. "COBOL has been a major part of the technology landscape since the dawn of the computing age and it will continue to play a leading role moving forward. Organizations have depended on COBOL for 50 years - a testament to the language's resilience, flexibility and value," said Ken Powell, president, North American operations at Micro Focus. "Over the past five decades, COBOL has grown to encapsulate business logic at the heart of organizations across all industries. As these organizations make plans to modernize business-critical applications, they will be able to draw on the reliability and breadth of business logic that comes with this iconic language." In May of 2009, Micro Focus published research showing that people still use COBOL at least 13 times throughout the course of an average working day. Yet, despite using the technology so often, only 18 percent of those surveyed had ever actually heard of COBOL. Equivalent research conducted by Micro Focus in the United Kingdom showed that U.K. citizens rely nearly as heavily on COBOL, using it at least 10 times per day. Mike Gilpin, analyst at Forrester research and former COBOL programmer, in a statement said, "...32 percent of enterprises say they still use COBOL for development or maintenance... COBOL is one of the few languages written in the last 50 years that's readable and understandable... Modern programming languages are ridiculously hard to
Protecting Servers from Remote Attacks: New NIST Guidance Addresses BIOS Vulnerabilities
When IBM unveiled BIOS - Basic Input/Output System - in 1981 with the introduction of its personal computer, few perceived it as a security vulnerability. Fast-forward more than three decades, and security researchers have identified vulnerabilities that BIOS poses to servers. So the National Institute of Standards and Technology has published new guidance to mitigate the threat. NIST's Special Publication 800-147B: BIOS Protection Guidelines for Servers is aimed at mitigating unauthorized modification of BIOS firmware by malware. Corrupting BIOS is seen as a significant threat because of its privileged position in the computer architecture. The protections offered in the guidance are designed to help mitigate remote attacks but wouldn't necessarily stop dedicated attackers who try to tamper with BIOS in systems they have "unfettered physical access to," says Andrew Regenscheid, a NIST mathematician who authored the guidance. "In practice, depending on how the manufacturer implements BIOS protections, these mechanisms would provide some protection against certain attacks," he says, "but wouldn't necessarily stop an attacker willing and able to pull and replace chips on the motherboard." De Facto Standard BIOS is a de facto standard defining a firmware interface built into IBM-compatible PCs and servers; it's the first software run when a computer based on IBM PC technology is turned on. Essentially, BIOS initializes and tests the system hardware components and boots up the operating system from mass memory. "Historically, BIOS has not been the primary target of attackers; however, in recent years we've seen more activity focusing on lower-level attacks," Regenscheid says. As the security of operating systems improved, Regenscheid says attackers began looking for entry into systems by going lower in the computer systems stack, creating what some cybersecurity researchers have dubbed "a race to bare metal" between attackers and security professionals, with each group trying to gain or maintain control of the system before the other side does. "You can't really get any closer to bare metal than the BIOS," he says. History of BIOS Vulnerabilities Regenscheid provides a brief history of BIOS vulnerabilities: In the late 1990s, malware known as the CIH virus attempted to erase BIOS on infected systems. When successful, the computer would not start. In 2011, the Mebromi rootkit attempted to insert malware in the BIOS that would continue to re-infect systems, even after clearing the malicious code with anti-virus software, reinstalling the operating system or replacing the hard drive. "Storing the malicious code inside the BIOS ROM could actually become more than just a problem for security software, given the fact that even if an anti-virus detects and cleans the MBR infection, it will be restored at the next system startup when the malicious BIOS payload would overwrite the MBR code again," ethical hacker Marco Giuliani wrote in 2011, when he was a threat research analyst at Webroot Software. MBR, or master boot record, is a special type of boot sector at the very beginning of partitioned computer mass storage devices, such as fixed disks or removable drives, intended for use with IBM PC-compatible systems "Developing an anti-virus utility able to clean the BIOS code is a challenge, because it needs to be totally error-proof, to avoid rendering the system unbootable at all," Giuliani said.
Role of BIOS in Security Regenscheid says the attacks against BIOS have led the security community to recognize the important role BIOS plays in maintaining security on computer systems. "Attacks on BIOS could allow very powerful and very stealthy attacks on computer systems," he says, "But, if BIOS can be strongly protected, it could be used as the foundation from which to build greater trust in computer systems." One such protection might be found in what's known as unified extensible firmware interface, or UEFI, a possible replacement for conventional BIOS that is becoming widely deployed in new PC-compatible computers. This isn't NIST's first guidance regarding BIOS. In 2011, NIST issued SP 800-147, BIOS Protection Guidelines, primarily aimed at desktops and laptops, not servers. The guidance for servers uses the same principles identified in the publication aimed at personal computers. Regenscheid points out that servers have different architectures than PC client systems, and the specific ways to update BIOS vary between client systems and servers. "In many cases, the differences between the documents are rather subtle, but they were important to accommodate the differences between server and PC client systems," he says. Regenscheid says the most significant threat vector for SP 800-147B is remote attackers attempting to perform a malicious BIOS update on a computer system, which could occur after the attacker gains a foothold on the computer system being attacked or by taking control of some part of the infrastructure that pushes BIOS updates to computer systems. "We identified these as the most compelling threats because attacks of this nature can scale to large number of machines," he says. The guidelines in the NIST publication apply to BIOS firmware stored in the BIOS flash, including the BIOS code, the cryptographic keys that are part of the root of trust for update and static BIOS data. This guide is intended to provide server platform vendors with recommendations and guidelines for a secure BIOS update process.
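The core mechanism behind an authenticated BIOS update, checking a digital signature against a key embedded in the platform's root of trust for update, can be illustrated compactly. The sketch below uses the third-party Python cryptography package and is a conceptual illustration only, not the mechanism SP 800-147B actually specifies.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # The vendor's key pair; the public half would be provisioned into
    # the platform's root of trust for update.
    vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    trusted_key = vendor_key.public_key()

    bios_image = b"firmware image bytes"  # stand-in payload
    signature = vendor_key.sign(bios_image, padding.PKCS1v15(), hashes.SHA256())

    def update_allowed(image: bytes, sig: bytes) -> bool:
        """Accept an update only if its signature verifies against the trusted key."""
        try:
            trusted_key.verify(sig, image, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False

    print(update_allowed(bios_image, signature))                 # True
    print(update_allowed(bios_image + b"tampered", signature))   # False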
Is the light spectrum the next frontier for wireless? It’s getting pretty clear that we are nearing the limit of our wireless networking technology in its current form. And not just because of the speed of devices. More so, we are simply running out of space within the electromagnetic spectrum for radio waves. We can inch into other frequencies, like gamma rays, but doing so without creating the Incredible Hulk, or just causing a lot of cancer, might be tricky. But there is an entire spectrum that is used every day that computers have only begun to tap: visible light. A researcher named Harald Haas is working at the University of Edinburgh to enable computer devices to begin to use light to communicate, according to CNN. If his plans work out, we may one day dump Wi-Fi for something else, perhaps "Li-Fi," which formally is called VLC, or visible light communications. Haas says that adding a microchip to a standard LED light can make it blink millions of times per second. Mobile devices with readers could then translate those blinks, essentially ones and zeros, into data. Adding an LED to mobile devices would allow communication in the other direction. In the world imagined by Haas, every street light could become a high-speed Internet port. The human eye wouldn’t notice the difference. Much the same way movies appear to show solid, moving images because the frames are whizzing by at 24 frames per second, nobody would be able to tell the difference between a data-enabled light and a standard, always-on bulb. Of course, there are some problems. Light communication needs a constant line of sight. Radio waves can travel through a lot of substances without a problem, but if someone walks between a Li-Fi device and a receiver, the communication is broken, not to mention the obvious fact that the signal couldn’t travel through walls, or anything that could dampen or stop light from passing. There is also the potential problem of light pollution, especially in cities where this technology would be most useful. Lots of neon signs and non-communicating lights could interfere with the signal. And what happens if two hubs are close together, or a user is walking from one to the next? How will the handoff take place? There is probably a ways to go on the idea of Li-Fi replacing standard wireless. But at least this technology can already be demonstrated in a laboratory setting. Could a move to the real world be too far behind? Posted by John Breeden II on Dec 06, 2012 at 9:39 AM
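The blinking-LED scheme Haas describes is essentially on-off keying: each bit maps to the light being on or off for one very short interval. Here is a toy Python sketch of that idea, purely illustrative; real VLC systems add clock recovery, framing and error correction on top.

    def encode_ook(data: bytes) -> list:
        """Map each bit of the payload to an LED state: 1 = on, 0 = off."""
        states = []
        for byte in data:
            for bit in range(7, -1, -1):  # most significant bit first
                states.append((byte >> bit) & 1)
        return states

    def decode_ook(states: list) -> bytes:
        """Reassemble sampled LED states back into bytes."""
        out = bytearray()
        for i in range(0, len(states), 8):
            byte = 0
            for state in states[i:i + 8]:
                byte = (byte << 1) | state
            out.append(byte)
        return bytes(out)

    blinks = encode_ook(b"Li-Fi")
    assert decode_ook(blinks) == b"Li-Fi"
    print(len(blinks), "LED states for 5 bytes")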
Barriers to trade in the technology sector raise prices and hurt consumers, increasing the digital divide. Free trade in information technology goods and services represents the best possibility for improving the productivity of a country, its industries and its citizens. The primary reasons that governments around the world support an open trading system are to increase economic vitality, enhance consumer choice, spur foreign investment and create economic interdependencies. Open trade policy - and the agreements that result - benefit the high-tech industry as a whole and consumers worldwide by reducing or eliminating restrictive national policies in five key areas. Technology and e-commerce impact business globally. Critical inputs such as highly skilled engineers, IT specialists, component parts, etc. are sourced internationally. And outputs, whether they be Taiwanese-produced chips, US software, or Indian services, are sold worldwide. In this global business environment, open and free trading systems are critical to ensuring access to the best components, services and capital worldwide in order to produce the most sophisticated, highest quality, lowest cost technology goods and services. In FY04, over half of Cisco's total product bookings originated from customers outside of the United States. Accordingly, market rules and regulations in foreign countries have a significant impact on the competitiveness of Cisco products, services, and in some cases, our overseas operations. World Trade Organization (WTO) Headquartered in Geneva, Switzerland, the World Trade Organization (WTO) assists in orchestrating the rules of trade between nations in order to ensure the flow of free, equitable trade. The director-general leads the 600-person staff in, among other things, negotiating WTO agreements which are then signed by the majority of the world's trading nations and approved in their parliaments. The WTO's 148 members conduct 97% of the world's trade. WTO negotiations are making headway in a number of areas. As part of the Doha Ministerial Declaration, most negotiations are set to conclude by early 2005. Important areas of discussion for the technology sector are: e-commerce, services (particularly computer and telecommunications services), government procurement, the expansion of products and WTO member participation in the tariff-eliminating Information Technology Agreement (ITA). Progress had stalled, but was put back on track, as WTO members recently agreed on a negotiating framework. European Union Enlargement In 2004, the European Union successfully grew from 15 to 25 members. The 10 new member states are: Cyprus, Malta, Estonia, Slovenia, Czech Republic, Hungary, Slovakia, Lithuania, Poland and Latvia. Bulgaria and Romania hope to join by 2007. Croatia and Turkey are also being considered for membership. European Union (EU) Bi-Lateral Negotiations The European Union (EU) remains firmly committed to working with the countries of Eastern Europe and Central Asia to support their political and economic transformation. Enlargement has already brought the EU much closer to the countries of Eastern Europe. The European Neighbourhood Policy, presented by the European Commission in May 2004, aims at reinforcing existing relations with neighboring and partner countries: Armenia, Azerbaijan, Belarus, Georgia, Kazakhstan, Kyrgyzstan, Moldova, Russia, Tajikistan, Turkmenistan, Ukraine and Uzbekistan.
In 2004, as part of the EU Common Strategy, the EU hosted the 14th EU/Russia summit in order to welcome the extension of the existing EU/Russia Partnership and Cooperation Agreement to the ten new EU Member States. The summit discussed the necessary objectives and action steps in order to create four common spaces: a common economic space (with specific reference to the environment and energy); a space of freedom, security and justice; a space of cooperation in the field of external security; and a space of research and education, including culture. The Asia Pacific region contains many crucial partners for the EU as well. A significant number of bilateral agreements have already been concluded with Australia and countries in Central Asia, South Asia and Northeast Asia. The EU is fully committed to supporting China's reforms and liberalization through its co-operation program. Chinese tourists will soon benefit from facilitated procedures to visit Europe through a major agreement made in May 2004. An important agreement with China on customs cooperation, which aims to further facilitate trade and fight against piracy and counterfeiting, was also signed in 2004. New dialogues in the fields of intellectual property rights, competition policy, enterprise policy, textiles and the environment have also recently commenced. In December 2004, the EU and China conducted their 7th Annual Summit. The focus was on further strengthening their maturing strategic partnership. Also, agreements were concluded on customs co-operation and science and technology. US Free Trade Agreements The U.S. Government is aggressively moving forward with a series of free trade agreement (FTA) negotiations. To date, the Bush Administration has completed twelve FTAs with Chile, Jordan, Singapore, Central America (CAFTA - Guatemala, El Salvador, Honduras, Nicaragua, Costa Rica & the Dominican Republic), Australia, Morocco and Bahrain. Congress has adopted the U.S.-Jordan, U.S.-Singapore, U.S.-Chile, U.S.-Morocco and U.S.-Australia FTAs. The U.S.-Bahrain FTA and US-CAFTA have not yet been sent to the U.S. Senate for its approval. The Administration recently notified Congress of its intent to begin negotiations with the United Arab Emirates and Oman in February 2005. Negotiations are underway with eleven other countries: the Andean countries (Bolivia, Colombia, Ecuador, Peru), Panama, Thailand and with the five nations of the South African Customs Union - Botswana, South Africa, Lesotho, Swaziland and Namibia. The FTAs include precedent-setting standards for the promotion of e-commerce, the reduction and/or elimination of duties on technology products and components, greater transparency in customs procedures and strong intellectual property protection. The U.S. Government continues its efforts to conclude a Free Trade Agreement of the Americas (FTAA), despite a political impasse on several issues.
Healthcare providers battle mounting flu season
Thursday, Dec 5th 2013
The flu season is upon us, as people struggle with early symptoms and healthcare organizations work to provide respite and vaccines. According to the Centers for Disease Control and Prevention, this year's flu season is just starting to pick up. A reported 7.6 to 7.9 percent of respiratory specimens collected tested positive for a strain of the flu virus. This is the fourth consecutive week that these figures have risen, showing an increase in the number of individuals being affected by the flu, stated the Center for Infectious Disease Research and Policy. The CDC has seen all three strains of the virus appear during the season thus far, including influenza A subtype H1N1, which captured mass media attention when it burst onto the scene in 2009. Although the effects of the virus are being felt by individuals and healthcare providers throughout the country, CIDRAP reported that there have been certain regional hot spots. Six states - Alabama, Florida, Kentucky, Mississippi, Texas and Utah - have recently reported area-specific flu activity, up from four in late November.
Preventing flu infection
The CDC stated that one of the most important steps in preventing flu infections is getting an annual flu vaccine. This year, an estimated 138-145 million vaccine doses will be available during the 2013-2014 influenza season. Experts recommend being vaccinated by a certified medical provider, where doses are handled appropriately and can provide the best protection against infection. Healthcare organizations observe cold chain industry standards for storage and handling of material, including the use of vaccine temperature monitors. If handled or refrigerated improperly, vaccines can deteriorate and their effectiveness can be significantly lessened. "Inactivated vaccines can be damaged by exposure to temperature fluctuations (e.g., extreme heat or freezing temperatures)," the CDC stated. "Potency can be adversely affected if vaccines are left out too long or exposed to multiple temperature excursions (out-of-range temperatures) that can have a cumulative negative effect." The CDC advised that inactivated vaccines be stored between 35 and 46 degrees Fahrenheit. The best practice is to maintain an average temperature of 40 degrees Fahrenheit. Because temperature can have a significant effect on vaccines, healthcare providers and those involved in the supply chain should utilize temperature monitoring systems at all points before the patient receives the inoculation. This will sustain the effectiveness of the shot and help patients avoid influenza infections. Bloomberg reported a new trend in vaccinations during this year's flu season: Personalized flu shots designed to focus on specific age groups more prone to infection. The Food and Drug Administration recently approved three vaccinations to go on the market this year. The organization hopes this will help boost the number of people who get vaccinated for the flu. Currently less than half of Americans seek out vaccine for influenza, which is the country's eighth largest cause of death. Gregory Poland, of the Mayo Clinic's Vaccine Research Group, said this represents a great advance in healthcare. "For the first time in human history, we can actually target an influenza vaccine to an individual patient," said Poland.
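As a small illustration of the monitoring practice described above (my sketch, not from the article; the log values are invented, and the thresholds are the CDC's 35-46 degrees Fahrenheit storage range), a script can flag out-of-range readings from a refrigerator's temperature log:

    SAFE_LOW_F, SAFE_HIGH_F = 35.0, 46.0  # CDC range for inactivated vaccines

    def find_excursions(readings):
        """Return (index, temperature) pairs for readings outside the safe range.

        readings: temperatures in degrees Fahrenheit, one sample per
        monitoring interval from a vaccine refrigerator.
        """
        return [(i, t) for i, t in enumerate(readings)
                if not SAFE_LOW_F <= t <= SAFE_HIGH_F]

    log = [40.1, 39.8, 47.2, 40.0, 33.5]  # invented sample data
    for index, temp in find_excursions(log):
        print(f"Excursion at sample {index}: {temp}F")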
Worldwide, the installed base of electric generating plants has a capacity of roughly 6,400 GW. Annual new construction is about 5 percent per year, or 320 GW. However, the additions can be 'lumpy' depending on economics and energy policies. Older and generally smaller generating units are retired from service usually after a 40-year operating life (though this can be extended via further investment). The amount of generating capacity retired in a typical year can also vary greatly from year to year based on fuel prices, plant economics, and energy policies. Long term, the trend worldwide is to substitute renewable forms of generation (primarily wind and solar photovoltaic) for traditional fossil-fueled thermal plants. The lifecycle costs of wind and solar PV have fallen to the point where they are much more economically competitive than several years ago. In addition, they are subsidized as a matter of policy. In some regions (notably the Nordic countries) renewables make up a substantial fraction of total installed generation capacity. As this transition to more renewables occurs, the role of the traditional thermal plants shifts from base load or daily cycling toward more load-following operations. This is a technical challenge. It is also often a business challenge as the traditional regulated utility companies often own mainly the thermal assets. The regulatory environment in which they operate often has not adjusted for this shift in roles. After the 2008-09 economic crisis, which only had a small effect on the power generation industry, revenues grew fast in 2010 and 2011. However, since that period capital expenditures and revenues have declined, mainly due to overcapacities in Europe as well as a slowdown in China. Overall, there is less growth coming from developed economies. Developed economies in general have much smaller electric load growth than developing economies. Historically this growth is in the range of 2-3 percent per year, but the historical load growth pattern appears to have broken down. This is caused by greater energy efficiency both in industry and in the consumer/residential market segment. Technical innovations in low-energy lighting are mandated in some major markets. Investments in developed markets continue to be in maintenance and modernization of older power plants, if economical. However, many national energy policies favor lower-carbon generation technologies. This has resulted in the closing of many existing coal-fired plants at mid-life age of 25-35 years. These are plants that would have been extensively updated in earlier times under different energy policies. In contrast to commodities, which are mostly traded on global markets, electricity cannot be stored or exported. Available generating capacity cannot be increased on an hourly basis. Automatic load curtailment schemes (called demand response) have been allowed to participate in some markets, providing markets with some elasticity of demand as well as of supply. The drop in CapEx also represents a shift in companies that do invest in power, as the typical large utility generating project is replaced by distributed generation investments by non-utility firms. As renewable generation becomes a significant fraction of the total, utilities are becoming base load providers, providers of capacity at peak times, and backup providers. Generally utility regulation has not yet come to grips with this fundamental change in the role of regulated utilities.
This remains a major long-term challenge to the industry.
Seasoned musicians know that if they want to "bring the mood down," they'll play a song in a minor key. Some may not be able to articulate why minor chords have the effect that they do, other than there's something about flattening the third note in a major chord that adds a somber tone to the sound. But it's not for musicians to know why; they just have to play. Those kinds of questions are best left to people such as UC Berkeley vision scientist Stephen Palmer, whose research shows a strong cross-cultural link between music and colors. Palmer and his team recruited nearly 100 men and women for the experiment, in which participants listened to 18 classical music pieces from well-known composers (Bach, Mozart, Brahms) that both varied in tempo and employed either major or minor keys. Half of the study participants were based in the San Francisco area, while the other half were in Guadalajara, Mexico. According to UC Berkeley: In the first experiment, participants were asked to pick five of 37 colors that best matched the music to which they were listening. The palette consisted of vivid, light, medium, and dark shades of red, orange, yellow, yellow-green, green, blue-green, blue, and purple. Participants consistently picked bright, vivid, warm colors to go with upbeat music and dark, dull, cool colors to match the more tearful or somber pieces. Separately, they rated each piece of music on a scale of happy to sad, strong to weak, lively to dreary and angry to calm. "Surprisingly, we can predict with 95 percent accuracy how happy or sad the colors people pick will be based on how happy or sad the music is that they are listening to," Palmer said in a statement. Results of the study were published this week in the journal Proceedings of the National Academy of Sciences. Regarding the cross-cultural affinity for music-color association, Palmer said, "The results were remarkably strong and consistent across individuals and cultures and clearly pointed to the powerful role that emotions play in how the human brain maps from hearing music to seeing colors." Now, in terms of basic musical forms and structures, what people are exposed to in the U.S. and Mexico aren't dramatically different. That's why Palmer's next project is interesting. He and his research team will conduct a similar study in Turkey, where scales in traditional music go beyond major and minor. "We know that in Mexico and the U.S. the responses are very similar," Palmer said. "But we don't yet know about China or Turkey." By the way, if you're interested in learning more about how music affects the human brain, I recommend This Is Your Brain On Music by Daniel J. Levitin, a professor of psychology and music at McGill University in Montreal.
So far this year analysts, government bodies and even security companies have all stated that Internet security incidents are on the rise. Whether fact or fiction, the truth of the matter is that any company with a connection to the Internet faces an increased threat of theft, hacking, vandalism and data loss. But most companies know this, don't they? More than likely, yes. So they use a firewall to protect themselves, don't they? Probably. Well then they're safe, and can sit back and put their feet up, can't they? No.

All organisations need to protect the valuable data and documents held on their network, and a firewall is the most efficient way to do this. Acting as guards, firewalls monitor and examine traffic between a network and the Internet, blocking any unauthorised or suspicious traffic. Firewalls can also be configured to secure one network from another. However, correct management is crucial: a firewall can become less than 30% effective within three months of installation if managed incorrectly.

A firewall is simply an enforcement device; it does not provide security in its own right. The firewall device itself provides approximately 20% of the security capability. It is the way the firewall is configured that provides the overall security effectiveness. It's a bit like having locks on all the windows and doors in a house but then leaving the key in the door, or one of the windows open. The locks only work if time is taken to ensure that all windows and doors are closed and all the keys are removed. The best way to achieve security effectiveness is to design a security policy. This will ensure the integrity of any mission-critical device, especially firewalls. Below is a guide on how to create a firewall policy.

5 Tips to generating a firewall policy

1. Identify trust zones
The very first step in securing a network is to decide on the different zones of trust present. In its most basic form, network security is about zones of trust. A simple example would be the Internet (a "no trust" zone) and an internal network (a "high trust" zone); a firewall controls traffic between these different zones of trust. Of course, in the real world there are more than two zones. Typically these include the Internet, web servers, an external connection zone, the internal network, and a remote access zone. Once the zones are identified, the different traffic flowing between them can be defined and the firewall policy configured accordingly.

2. Change control
With any firewall it is very important to have change control. Far too often firewalls are found with rules that nobody remembers adding. What normally happens is that these rules remain because firewall administrators fear they might break something if they are removed. When rules are introduced there should be a well-defined method for documenting them and, in the case of temporary rules, the removal date should be added in a comment field. The only way of checking whether the firewall is actually enforcing the agreed policy is to either verify it with an Intrusion Detection System, or to do a manual verification using a penetration test or a firewall review by a third party.

3. Log and review traffic
When deciding on a firewall policy, do not forget the importance of logging. One of the primary purposes of a firewall is to log traffic going through it. Logging is no good unless the logs are reviewed on a regular basis; this should be included in the policy.

4. Monitor stability
A firewall is like any other infrastructure component and should be managed as such. In other words, it should be monitored for availability to ensure maximum uptime. If a firewall isn't stable, people will find ways of avoiding it, which leads to a low level of security. This should also be reflected in the policy.

5. Document the policy
A firewall policy and the issues around it should always be documented to provide a reference for administrators and people working on the firewall. If the policy is documented, people can work to it and follow it. If no formal policy exists, people will tend to do things in an ad hoc fashion.
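The change-control tip lends itself to a simple automated audit. As a rough sketch, assuming the policy is kept as structured data (the rule fields and values below are hypothetical, not taken from any firewall product), a script can flag the two failure modes described in tip 2: undocumented rules and temporary rules that have outlived their removal date:

```python
from datetime import date

# Hypothetical rule records -- fields and values are illustrative only.
rules = [
    {"id": 101, "action": "allow", "src": "internet", "dst": "web-servers",
     "port": 443, "owner": "jsmith", "added": date(2002, 11, 3), "expires": None},
    {"id": 102, "action": "allow", "src": "internal", "dst": "internet",
     "port": 80, "owner": None, "added": None, "expires": None},
    {"id": 103, "action": "allow", "src": "vendor-vpn", "dst": "internal",
     "port": 22, "owner": "mlee", "added": date(2003, 1, 10),
     "expires": date(2003, 2, 1)},  # temporary rule with a removal date
]

def audit(rules, today):
    """Flag rules that violate the change-control policy."""
    for r in rules:
        if r["owner"] is None or r["added"] is None:
            print(f"rule {r['id']}: undocumented -- no owner or date on record")
        if r["expires"] is not None and r["expires"] < today:
            print(f"rule {r['id']}: temporary rule expired {r['expires']}, remove it")

audit(rules, date(2003, 4, 2))
```

A check like this does not replace the IDS verification or third-party review mentioned above; it simply makes the documented policy machine-checkable so drift is noticed early.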
What you should know about the Internet Standards process.

The IETF publication mechanism does not provide the best interface or search engine for locating RFCs. However, it should be considered canonical. This list is not comprehensive, as there are probably scores if not hundreds of websites and FTP servers serving some or all RFCs. However, these are good places to go when you need to locate a particular RFC or to find out more about what might be in an RFC or I-D.

- Internet Standards Archive. This is a good easy site, with good search facilities for RFCs and I-Ds. A good "go-to" site for RFCs.
- Lynn Wheeler's RFC Index. Another excellent resource if you're trying to figure out what's current, what is a standard, and what is not.
- The NORMOS Standards Repository. Another good site, with particularly good and very flexible search capabilities. It also returns all the hits (unlike the Internet Standards Archive above).
- Invisible Worlds RFC Land. This seems to be a pretty cool site. Carl Malamud and others had a neat idea about XML-tagging RFCs. There's a lot of graphics and very involved programming underneath the website, so I'd like it better if it were simpler, and they still need to finish the XML-ification (at least, that's how it seems), but this site is recommended as well.
- The RFC Editor Page. This is the official place, and there's lots of good information here as well.
- The RFC Editor's Search through the RFC Database Page. This used to be nothing more than a simple listing, but has become virtually overnight one of the best resources on the web for RFCs. It is canonical, and you can download the whole RFC database from here too.

Pete Loshin (email@example.com) began using the Internet as a TCP/IP networking engineer in 1988, and began writing about it in 1994. He runs the website Internet-Standard.com where you can find out more about Internet standards.
AVG, a popular antivirus vendor, released a report showing the risk of web browsing in different countries. There are a number of stats that AVG pulls from the data, which is generated from its customers around the world. That also leaves open for discussion why imaginary borders matter and how they skew statistical tracking. Different cultures might be more prone to certain malicious attacks, like social engineering or Nigerian royalty scams, but the headline says the information is for travelers, which would imply culture is migratory.

Another given that needs to be forgiven is that the information is gathered from individuals who have installed antivirus protection. If a culture had an aversion to AV protection, it would presumably be on the receiving end of worse attacks, even though its members might not have more frequent encounters with malware. Browsing habits, interests, and how heavily those are targeted seem like a better metric. Perhaps North America comes in as the worst continent because more people there look for free AV and install AVG. More Internet-connected people and a concentration of English speakers probably also make North America a popular target, so the region likely gets more than its fair share of attacks, and the stat is biased besides.

Below are the chances of being attacked, by continent and country:

North America: 1 in 51
Europe: 1 in 72
Asia: 1 in 102
Africa: 1 in 108
South America: 1 in 164

Sierra Leone: 1 in 696
Niger: 1 in 442
Japan: 1 in 403
Turkey: 1 in 10
Russia: 1 in 15
Armenia: 1 in 24

More interesting would have been the source of the attacks rather than the source of the victims. Can you show that Russian and Chinese servers host malware or are the destination for redirects? In a globalized world, with a network that doesn't care where you're located, there isn't much that can be done to change a country's frequency of being attacked.
Uncovering the Dark Secrets of Dark Storage

Dark storage is disk space that is unmapped, unclaimed, or unassigned.

A new term has been added to the lexicon of storage technology: dark storage. It was coined by MonoSphere in its recent announcement of Release 3.7 of the company's Storage Horizons software product. Dark storage is real and may represent 15 to 40 percent of your current storage capacity: disk space that is unmapped, unclaimed, or unassigned.

How does dark storage happen? Simple. It's the disconnect between the way storage is seen and handled by application and server administrators on the one hand and storage administrators on the other. According to MonoSphere, when storage administrators turn over raw storage capacity for use by server and application administrators (in the form of LUNs), file systems are not regularly mapped to every LUN. Unmapped LUNs go undetected by traditional capacity management tools, which may consist simply of multi-page spreadsheets designed by storage administrators who find fault with the on-array capacity management tools of the hardware vendor. The storage admins know that capacity has been shared out, and the server or application administrator's file system management tools report that file system utilization is high, when, in fact, the underlying storage infrastructure is dramatically underutilized.

Spotting this discrepancy is a technically important undertaking, requiring an astute comparison of raw allocated capacity and used allocated capacity, followed by an investigation of any mismatches found. The effort can be made even more challenging, MonoSphere warned last year, by technologies such as thin provisioning on disk arrays. Thin provisioning attempts to improve capacity-allocation efficiency by enabling storage that is reserved to an application (but not yet used) to be reallocated "behind the scenes" by the on-array thin provisioning engine. Thin provisioning techniques are, collectively speaking, a high-tech capacity shell game that falls apart if an application ever makes a "margin call" for the storage it thinks it owns. If that storage has been thinly provisioned elsewhere and there is no more capacity to be had, the result of a margin call can be catastrophic: application failure or, in the worst case, server failure.

Server virtualization only makes spotting and correcting dark storage more difficult, though the press releases last week about MonoSphere's interoperability with VMware were too politically correct to say so directly. Virtualization adds an abstraction layer to resources such as storage and further obscures the relationships between array LUNs, the ESX server, VMware file systems (VMFS), VMware virtual disks (VMDK), guest OSs, and guest OS file systems/raw devices.

Capacity management problems have always existed in storage infrastructure, beginning with deliberate vendor obfuscation of raw capacity. Some vendors hold a percentage of capacity "in reserve" for their own software that the customer has either purchased with the array or that the vendor hopes to sell the customer in the future. When you hear reference to "T" bits (the technical formatted capacity of an array) and "B" bits (how much of the formatted capacity the vendor "lets you use"), you are already dealing with putative dark storage. The problem increases with technology "value-adds" such as thin provisioning and perhaps even de-duplication and compression, which unpredictably change capacity usage forecasting models.
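At its core, spotting dark storage is an inventory comparison: what the storage team has allocated versus what the server side has actually mapped and used. A minimal sketch of that comparison, using made-up LUN names and sizes purely for illustration (a real tool such as the one discussed below would pull these inventories from arrays and hosts automatically):

```python
# Hypothetical inventory data -- LUN names and sizes are illustrative.
# Allocated LUNs as the storage administrator sees them (GB).
allocated_luns = {"lun01": 500, "lun02": 500, "lun03": 250, "lun04": 250}

# LUNs the server admins actually mapped to file systems, with used space (GB).
mapped = {"lun01": {"fs": "/data", "used": 320},
          "lun03": {"fs": "/logs", "used": 60}}

unmapped = [lun for lun in allocated_luns if lun not in mapped]
total = sum(allocated_luns.values())
dark = sum(allocated_luns[lun] for lun in unmapped)

print(f"unmapped LUNs: {unmapped}")          # candidates for reclamation
print(f"dark storage: {dark} GB of {total} GB ({100 * dark / total:.0f}%)")
```

Even this toy version shows why the file-system view misleads: /data and /logs can report healthy utilization while half the allocated capacity sits dark.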
In their limited definition of the term "dark storage," MonoSphere focuses only on the issue of capacity mismanagement at the file system level. The company claims to have the solution: an easy-to-interpret reporting facility and dashboard that displays what storage is being used and what subset of that capacity is actually committed to active file systems, databases, and so forth. The product doesn't work on all storage arrays. That's predictable, given the close control many vendors wish to exert over what the customer can readily see about their capacity allocation, but it's a start. Provided the customer's gear is on the MonoSphere support list, the product can deliver real value. In the demonstration I attended, discrepancies in allocated capacity and file system overlays that amounted to significant capacity waste were quickly spotted. Remarkably, this works well in virtualized server environments, too.

Spotting and correcting the dark storage problem can mean deferring additional CAPEX investments and the management burden of new storage deployments, a value proposition on its own in this economy. In addition, MonoSphere enables users to include details about storage platform costs, which it uses to establish cost per GB. Done diligently, this costing data can be a rich trove of information usable not only to see the dollar value of wasted dark storage and the return on investment accrued from using MonoSphere, but to set the stage for improved data management. MonoSphere contextualizes the feature as a "chargeback system" enabler. In fact, it can provide ready insight into the cost of hosting data on one platform versus another, which is extremely important in efforts to correctly size storage and construct "purpose-built" storage infrastructure going forward.

To be truly effective, MonoSphere's product needs a graphical mapping facility that will appeal to the visually oriented user, whether a technical person or a business manager. The company says this has been suggested by many of its customers and will likely find its way into a future release. Even without this feature, MonoSphere's Storage Horizons is worth a look. Your opinions are welcomed, especially if you have used this product. Send them to me at firstname.lastname@example.org.

Jon William Toigo is chairman of The Data Management Institute, the CEO of data management consulting and research firm Toigo Partners International, as well as a contributing editor to Enterprise Systems and its Storage Strategies columnist. Mr. Toigo is the author of 14 books, including Disaster Recovery Planning, 3rd Edition, and The Holy Grail of Network Storage Management, both from Prentice Hall.
Most people have at least one USB stick for transferring files between work and home. Another common characteristic of all humans is curiosity. These two things combined can create a serious threat to any organization, and this article is another example of why people are the weakest link in the security chain.

This type of attack allows the penetration tester to create a USB stick, DVD or CD with malicious content. When an unsuspecting user opens the file, the payload executes and returns a shell. In this article we will explore this type of attack.

We open the Social Engineering Toolkit and select the Infectious Media Generator option. The implementation of this attack is very simple: SET will automatically create an autorun.inf file and a payload. For this scenario we will choose File-Format Exploits as the attack vector. In the next image you can see the available payloads for this attack. We will use the default option, which will embed an executable inside the PDF file. Now it is time to choose the payload that the malicious PDF will carry; our choice will be one that returns a simple Windows shell. We will set the port to 443, which is the default option, and the Social Engineering Toolkit will then create the autorun file and the malicious PDF automatically.

Now let's say that during a penetration test we have planted the USB stick somewhere an employee is sure to find it. If someone takes that USB stick and connects it to a work computer, he will see a PDF file that appears blank. At that moment the payload executes on his machine and returns a remote shell to us.

This attack requires no special knowledge and is very fast and easy for anyone to implement. That means anyone who can plant a malicious USB stick inside a company is a potential threat. It also points out how a simple USB stick or DVD can bypass the network perimeter and become a threat to any company whose employees are not following security policies. Companies should have a policy that protects them against such mobile threats, and employees should follow that policy.

Companies must educate their users about the risks of these threats. Additionally, this attack proves that it doesn't matter how much money an organization spends securing its network perimeter with firewalls, IDS and IPS when the biggest threat may come from inside, and with no bad intention.
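For reference, the autorun file at the heart of this attack is tiny. SET generates its own version, which is not shown in this article; a generic autorun.inf of the kind older Windows versions honored looks roughly like this (the file name is a placeholder, and the exact keys SET writes may differ):

```
[autorun]
; Illustrative only -- SET generates its own autorun.inf.
; ShellExecute asks Windows to open the named file with its default handler.
shellexecute=template.pdf
action=Open folder to view files
```

Note that recent versions of Windows no longer honor autorun from USB media by default, which is one more reason patch levels and removable-media policies matter.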
By Frank Yip

According to one Chinese source, tea was first used as a medicine over 4,500 years ago. Whether or not this is true, the fact remains that China was first in discovering and cultivating tea as a people's beverage. In the process, it gave the world a unique tea culture that is good for body and soul, and one that cuts across territorial, cultural and religious lines. This writer is of the opinion that tea and pottery (invented at least 7,000 years ago in China) could well have gone hand in hand no less than 4,000 years ago, based on the discovery of Neolithic Longshan black pottery cups in Shantung Province. This will be discussed in a subsequent paper in this column.

Chinese tea experts do agree that tea drinking on a large enough scale took place during the Western Han period, around 100 BC, i.e. about 2,000 years ago. It is now known that tea was first cultivated in southwestern Sichuan Province, which also sent its best produce to the imperial court. Besides normal consumption, tea was presented as a gift, used at marital ceremonies and offered to the deities and the dead alike. Archaeological discoveries of eared cups, ladles, tea trays and tea tables go to show that tea was a popular beverage, if not yet a national one. Because the Han dynasties were trade expansionists, tea, in addition to silk, must have been exported to foreign countries (chiefly to the west, and the Middle East and Persia in particular) via the overland Silk Road.

The mighty Tang Dynasty arrived in A.D. 618 (lasting to 907). It brought a cultural renaissance in China on a scale not to be seen again until perhaps the mid-18th century, when the Qing emperor Qianlong ascended the throne a thousand years later. Tea drinking became a national pastime, thanks to a special class of teamen whose sole function was to promote it with good quality tea, water (yes, water), the proper way of drinking and the choice of charcoal. Tea parties were in vogue from the imperial court down to the man in the street, and elaborate implements (numbering 24 in all) were used to prepare tea. For the gastronomical and spiritual enjoyment of tea, we must thank the Tea Saint, Lu Yu, who, living between A.D. 733 and 804, almost single-handedly perfected and perpetuated tea drinking with his monumental book "Classic on Tea". More on him will be said in the next article. Japanese tea drinking culture owes its origins to Tang tea practice, which was introduced at the end of the 8th century by a Japanese monk who had lived in Changan (the Tang capital) for 30 years.

The Song Dynasty (960-1279) succeeded the Tang and is credited with the popularisation of the small tea plantation system country-wide. Another Song innovation was the introduction of teahouses, which played a significant role in tea competitions, or "dou cha" (tea fights), a national craze at the time. In this social drinking game, competitors were judged on the quality, colour and fineness of their tea powder, among other things. When mixed with boiling water, it should produce a white froth and a nice aroma, and it should leave its mark as close, and as tidily, to the rim of the black tea bowl as possible. Less fine tea would contain grains that lowered the water-mark from the rim.

China came under the reign of her first foreign rulers, the Mongolians, in 1280. Like all conquerors, they needed money quickly to keep the soldiers happy. Through heart-landers, the Mongols were quick to exploit the maritime trade with their southern neighbours. Thus, luckily for local collectors, we are left with lots of blue-and-white, Yingqing and Longquan teawares of the Yuan period, excavated from time to time from the land and sea around us.

Succeeding the Yuan was the Ming Dynasty (1368-1644), led by a semi-illiterate former monk who valued the importance of security. One of his early decrees upon becoming emperor was to ban the tea cake so loved by his Song predecessors; he introduced tea leaves to make tea drinking a much more manageable chore. The Ming Dynasty also saw the emergence of Yixing (Jiangsu Province) teapots, made of unglazed red or brown clay, as a major force in teawares. The Ming teamen are remembered for other innovations as well.

The Manchus became the second foreign conquerors of China in 1644, and with their animal-milk-drinking habit they invented drinking tea with milk, an innovation, if we may call it that, which made tea purists chagrined. It was during the 18th century that Chinese tea started to be shipped to Europe, thanks to the Dutch. Like porcelain, it took Europe by storm, turning itself into an indispensable drink for Victorian Englishmen (Queen Victoria included, by mid-century). The English "high tea" is quite rightly a "step son" of Chinese tea. The magic and impact of tea on the Chinese is overwhelming, to say the least.
We all recently heard that Twitter fell victim to an anonymous hacker and over 55,000 login credentials were exposed. While to some a hacked Twitter account may not seem as significant as a bank or credit card account, a security breach of any kind can cause damage, lost information and identity theft. Back in February of this year, UNC Charlotte had a security breach of personal account information in which at least 350,000 social security numbers were compromised. We also know of the potential cyber war mentioned in our last blog, and how attacks could shut down major infrastructure necessary to live in the United States. Regardless of the type of security breach, they all show a weakness in our systems. Unfortunately, this type of criminal behavior will probably never cease as we continue to put valuable information about our personal lives onto the web.

Just how dangerous can a security breach be? The potential cyber war is just the beginning. To combat the potential mayhem, the government wants to take steps that would help prevent and guard against these major cyber breaches and threats. The Cyber Intelligence Sharing and Protection Act, also known as CISPA, passed the House of Representatives on April 26. CISPA is a proposed law that would allow the sharing of private information from the Internet between the U.S. government and technology and manufacturing companies to help the U.S. investigate cyber attacks. CISPA is favored by major corporations such as Facebook and Microsoft, but many people and companies are concerned about how CISPA would allow the government to monitor an individual's Internet browsing.

It's scary to imagine that a potential law would allow the government to monitor Internet usage, private emails, etc., and that our information could be handed over without our knowledge. Having your personal information readily available to others is a thought no one dreams about, which is why we take measures to prevent our information from being exposed to anyone. The House of Representatives spent hours debating this difficult proposed law. It is meant to protect cyber security, but in many ways it also makes consumers vulnerable to data privacy attacks. No one likes the idea of private information being shared, regardless of whose hands it ends up in. Some have even snickered about the "Big Brother" effect this potential law could have on consumers.

Whether or not CISPA ends up passing the Senate, take the next step to keep your personal information protected now by using Keeper. Keeper is one of the world's most downloaded security applications. It uses military-grade, 128-bit AES security to keep your usernames and passwords safe and secure. Keeper will also help you manage the plethora of combinations used on a daily basis by auto-launching websites and auto-filling your passwords. Keeper can be synced to all of your devices, including your mobile device, tablet, desktop and laptop. Download Keeper now at www.keepersecurity.com/download
Telehealth: Transforming from Sickness Treatment to Wellness Management

The term telehealth (the use of telecommunications to deliver healthcare services and information) encompasses multiple aspects of healthcare: teleconferencing, conversion of medical records to digital form, and collaboration among healthcare providers who all have the same information. Potentially more exciting is the use of telecommunications to remotely monitor patients' health and relay medical and biometric information directly from the home to doctors and health facilities, all within seconds.

Remote monitoring is possible with a new generation of small, inexpensive sensors with very low power requirements. The new sensors, benefiting from recent advances in miniaturization, are as accurate as hospital-grade equipment of just a few years ago, at a fraction of the cost. Moreover, their low cost and small size make them easy to incorporate into a range of devices, providing solutions to many sensing applications. Data collected by sensors when the patient is at home and going about normal activities can be more representative of everyday wellness than readings taken during episodic office visits, when the patient may be ill or nervous. The data can be aggregated over days, months, or years to show trends over time, giving a more complete picture of a person's health.

Adding telecommunications capability

Sensors can collect high-quality medical data, but the real value is relaying this information to doctors' offices, hospitals, and other medical facilities where the data can be interpreted and acted upon. The infrastructure for transmitting medical data from the home to healthcare facilities exists now in the broadband networks built and managed by AT&T and other providers. (AT&T alone will spend over $17 billion on capital outlays in 2010.) Sensor-collected data can be sent over broadband and IP networks in the same way data and voice are relayed from PCs and cell phones. Getting the sensor data from the devices to the broadband network will be done wirelessly, through a local area network set up in the home, which allows measurements to be collected conveniently anywhere. The data flows via radio to a home gateway and is then coupled into AT&T broadband or cellular networks for transmission to where it's needed.

Today, wireless connectivity can be provided by the Wi-Fi and Bluetooth technologies or, more promisingly, by the new ZigBee radio standard (based on IEEE 802.15.4). Much like Wi-Fi and with similar range, ZigBee networks communicate packets, such as temperature or glucose readings, more slowly but also using much less battery power. The radios are simple, economical, and can "sleep" for long periods when not transmitting or receiving. Wi-Fi, by contrast, is built for higher data rates and low-latency streaming. It's ideal for voice, video, and multimedia-rich sensing "telepresence" applications, but requires more battery power, making it less attractive for devices with small batteries that must last a long time.
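To make that power tradeoff concrete, a back-of-the-envelope duty-cycle estimate is enough (the current draws and battery size below are illustrative assumptions, not measured figures for any particular radio):

```python
# Rough battery-life estimate for a duty-cycled sensor radio.
# All figures are illustrative assumptions, not vendor specifications.
battery_mah = 220.0          # small coin cell
sleep_ma = 0.002             # radio asleep
active_ma = 25.0             # radio transmitting/receiving
active_s_per_hour = 0.5      # wakes briefly each hour to send one reading

avg_ma = (active_ma * active_s_per_hour
          + sleep_ma * (3600 - active_s_per_hour)) / 3600
print(f"average draw: {avg_ma:.4f} mA")
print(f"estimated life: {battery_mah / avg_ma / 24:.0f} days")
```

With these assumed numbers the average draw is a few microamps, and a coin cell lasts years; keep the same radio awake continuously, as a streaming Wi-Fi design effectively must, and the same battery is gone in hours. That arithmetic is why a sleepy, low-rate radio suits wearable and bedside sensors.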
ZigBee has two additional advantages over point-to-point radios such as Bluetooth: a single ZigBee "cell" can accommodate thousands of nodes, and these nodes can self-organize into a store-and-forward network in which each node can relay data on behalf of another, an important capability when data integrity must be preserved as users and their devices move about homes where RF coverage may be uneven. This property allows ZigBee relay nodes distributed throughout the home to ensure signals reach the gateway from any room. From the gateway, the data is securely carried over AT&T's broadband network. Away from home, a ZigBee-enabled cell phone can perform the same functions as the gateway, allowing a seamless transition from indoor local area networks to outdoor cellular networks as monitored patients go about their daily routines.

With sensors able to communicate and share data, much becomes possible. In a medical emergency, sensors could transmit alarms to hospitals or emergency rooms even as the event is occurring, perhaps saving a life. Since the sensors are two-way, the hospital could send a demand-response message to verify the information or request a series of updates in real time for more intensive information-gathering. Sensor data combined with telecommunications will shift more post-treatment, rehabilitation, or chronic care of patients from expensive healthcare facilities to the home and allow medical conditions to be dealt with before they become acute. Eliminating trips to hospitals and doctors' offices will reduce costs and allow doctors more time with other patients. For the chronically ill, remote monitoring means more control over their own condition and less stress from frequent office visits. Even patients who often require continuous monitoring in a hospital, such as women with high-risk pregnancies, could remain at home, since the sensor-collected data is just as accurate as data collected in hospitals. If a dangerous situation develops, sensors could trigger an automatic alert so doctors can intervene at the first sign of trouble. If approved by those being monitored at home, measurements can also be shared with caregivers, allowing family members a more active role in assisting older relatives, for example, who wish to live at home independently and who may have one or more medical conditions.

Adding communications capabilities to devices

Almost any medical device can be transformed by telecommunications. AT&T Research is working with device makers to prove the concept. Such partnerships are transforming today's common health devices: scales, pill dispensers, blood-pressure meters, pulse-oximeters (combined pulse-rate and blood-oxygen concentration sensors), glucometers, and other formerly stand-alone instruments, into wireless, portable Personal Health Devices (PHDs). This is done by adding a small module containing an entire communications system, complete with CPU, memory, a ZigBee radio, and a data interface (for passing data from the basic sensor to the ZigBee radio). Devices thus augmented become "smart": able to detect and store readings and then communicate them automatically to a doctor, medical professional, or caregiver. A regular pill dispenser, for example, simply sounds an alert to take a pill at a certain time. But one equipped with a sensor and telecommunications can also report to the doctor when the pill was removed, giving doctors much better knowledge about the relationship between blood drug levels and patient condition. Similarly, a ZigBee-embedded pulse-ox meter that clips onto a patient's fingertip can measure pulse rate and oxygen levels in the blood (important indicators of heart and lung performance) and enter the readings in a doctor's database located in a hospital hundreds of miles away.

At the leading edge of research and testing is the transformation of bed-stand or kitchen-table instruments that take measurements at intervals during the day into devices designed to be wearable, such as EKG monitors for cardiac patients. An interesting and more unconventional example is a "smart" shoe insole for combating a major health problem of the elderly: falls. For the elderly, one in three falls will require an ER visit, and one in 20 will lead to a fatal complication within six months. AT&T Research has partnered with device maker 24Eight LLC to embed pressure sensors, accelerometers, and a ZigBee radio within cushioned insoles that can be inserted into footwear to gather continuous information about a patient's movements and weight distribution. The work is a good illustration of how sensing capabilities have evolved: early gait studies done in the last two decades to investigate the design of mechanical prosthetics required bulky hardware cabled to the supercomputers of the day, yet the data collected using the new insoles is almost identical to the data collected from a lab full of yesterday's equipment. Insole data can now be transmitted over AT&T's network, processed, and sent to a doctor's office, where it may serve multiple purposes. The "smart" insole concept has evolved into "smart slippers" that are currently entering clinical trials at the Garrison Geriatric Education and Care Center in Lubbock, Texas, as part of a research partnership with the Texas Tech University Health Sciences and TTU Electrical Engineering Schools.

Communications can be added to almost any device, and AT&T Research, by building and demonstrating prototype devices, is hoping more device makers will incorporate communications into their products, making them available to all. AT&T Research has already laid much of the groundwork, creating and validating specifications that can serve as a basis for mass-producing medical devices with communications ability.

Telehealth promises to fundamentally transform healthcare. The new sensors combined with telecommunications will collect a wealth of health data, giving doctors much more data to analyze, making for better and faster diagnoses. Doctors will essentially become knowledge managers, spending less time collecting data and more time analyzing it. And with modern telecommunications, the data can be collected from any location and transmitted to where it's needed, allowing doctors to view the information anywhere and give a diagnosis based on a complete health record. More complete data also makes it easier to accurately track treatments and outcomes.

But there are obstacles. Standards are needed to ensure devices from different manufacturers can communicate, and that data sent from a home's wireless network can be viewed from any doctor's office. AT&T Research is working with standards bodies, industry groups, and device makers to create integrated solutions and help ensure that the telehealth remote-monitoring future is built on a platform of solid science and good networking architectures. One industry group of which AT&T is a member, the Continua Health Alliance, is specifically focused on remote health monitoring. Made up of leading technology, healthcare, and fitness companies, the alliance is developing requirements and certifications for a wide range of interoperable devices, such as fitness equipment, heart monitors, and weight scales. Another hurdle is the current insurance model, which compensates doctors and patients based on office visits. As telehealth becomes more pervasive, this model needs to change so doctors are not penalized for adopting the new technology and processes.

These solutions will require broad consensus across multiple industries and interest groups. This will be difficult. But as healthcare in the US approaches a crisis point, telehealth is a solution that has the potential to improve care and lower costs, and it makes sense to do all that is necessary to make it a reality.
Definition: A member of the sequence of numbers such that each number is the sum of the preceding two. The first seven numbers are 1, 1, 2, 3, 5, 8, and 13. F(n) ≈ round(Φ^n/√5), where Φ = (1+√5)/2.

Formal Definition: The nth Fibonacci number is F(n) = F(n-1) + F(n-2), with F(1) = F(2) = 1.

Aggregate parent (I am a part of or used in ...) Fibonacci tree, Fibonaccian search.

See also kth order Fibonacci numbers, memoization.

Note: Fibonacci, or more correctly Leonardo of Pisa, discovered the series in 1202 when he was studying how fast rabbits could breed in ideal circumstances. Computing Fibonacci numbers with the recursive formula is an example in the notes for memoization. The Nth Fibonacci number can be computed in log N steps. The following method is by Bill Gosper & Gene Salamin, Hakmem Item 12, M.I.T.

Let pair-wise multiplication be

(A,B)(C,D) = (AC+AD+BC, AC+BD)

This is just (AX+B)*(CX+D) mod X²-X-1, and so is associative and commutative. Note that (A,B)(1,0) = (A+B,A), which is the Fibonacci recurrence. Thus,

(1,0)^N = (F(N), F(N-1)),

which can be computed in log N steps by repeated squaring. As an example, here is a table of pair-wise products and the powers b^pow they represent:

(1,0)(1,0) = (1,1)      b^2
(1,1)(1,0) = (2,1)      b^3
(2,1)(1,0) = (3,2)      b^4
(3,2)(1,0) = (5,3)      b^5
(5,3)(1,0) = (8,5)      b^6
(8,5)(1,0) = (13,8)     b^7
(13,8)(1,0) = (21,13)   b^8

and here are some "Fibonacci" multiplications:

(1,1)(1,1) = (3,2)                     b^2 * b^2 = b^4
(3,2)(3,2) = (9+6+6, 9+4) = (21,13)    b^4 * b^4 = b^8
(1,1)(5,3) = (5+3+5, 5+3) = (13,8)     b^2 * b^5 = b^7

Inverses and fractional powers are given also.

If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 2 March 2015.

Cite this as: Patrick Rodgers, "Fibonacci number", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 2 March 2015. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/fibonacciNumber.html
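A short Python sketch of the Gosper-Salamin repeated-squaring method from the note above (the function names are mine; this is an illustration, not code from the dictionary):

```python
def pair_mul(p, q):
    """(A,B)(C,D) = (AC+AD+BC, AC+BD), i.e. (AX+B)(CX+D) mod X^2-X-1."""
    a, b = p
    c, d = q
    return (a * c + a * d + b * c, a * c + b * d)

def fib(n):
    """Return F(n) by raising (1,0) to the nth power via repeated squaring."""
    result = (0, 1)          # multiplicative identity: 0*X + 1
    base = (1, 0)            # X itself, the Fibonacci recurrence step
    while n > 0:
        if n & 1:            # binary exponentiation: O(log n) pair products
            result = pair_mul(result, base)
        base = pair_mul(base, base)
        n >>= 1
    return result[0]         # result is (F(n), F(n-1))

assert [fib(k) for k in range(1, 8)] == [1, 1, 2, 3, 5, 8, 13]
```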
Definition: A variant of a finite state machine having a set of states, Q, an output alphabet, O, transition probabilities, A, output probabilities, B, and initial state probabilities, Π. The current state is not observable. Instead, each state produces an output with a certain probability (B). Usually the states, Q, and outputs, O, are understood, so an HMM is said to be a triple, (A, B, Π).

Formal Definition: After Michael Cohen's lectures for CN760.

Also known as HMM.

Generalization (I am a kind of ...) finite state machine.

Aggregate parent (I am a part of or used in ...) Baum Welch algorithm, Viterbi algorithm.

See also Markov chain.

Note: Computing a model given sets of sequences of observed outputs is very difficult, since the states are not directly observable and transitions are probabilistic. One method is the Baum Welch algorithm. Although the states cannot, by definition, be directly observed, the most likely sequence of states for a given sequence of observed outputs can be computed in O(nt), where n is the number of states and t is the length of the sequence. One method is the Viterbi algorithm. Thanks to Arvind <email@example.com> May 2002. Named after Andrei Andreyevich Markov (1856-1922), who studied poetry and other texts as stochastic sequences of characters.

L. E. Baum, An inequality and associated maximization technique in statistical estimation for probabilistic functions of Markov processes, Inequalities, 3:1-8, 1972.

If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 14 August 2008.

Cite this as: Paul E. Black, "hidden Markov model", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 14 August 2008. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/hiddenMarkovModel.html
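A hedged sketch of the Viterbi computation the note refers to, using a toy two-state model invented purely for illustration (any real HMM would supply its own A, B, and Π):

```python
# Toy HMM: the states, outputs, and all probabilities below are made up.
states = ["rain", "dry"]
A  = {"rain": {"rain": 0.7, "dry": 0.3}, "dry": {"rain": 0.4, "dry": 0.6}}
B  = {"rain": {"umbrella": 0.9, "none": 0.1},
      "dry":  {"umbrella": 0.2, "none": 0.8}}
Pi = {"rain": 0.5, "dry": 0.5}

def viterbi(observed):
    """Most likely hidden-state sequence for the observed outputs."""
    # best[s] = (probability, path) of the best path ending in state s
    best = {s: (Pi[s] * B[s][observed[0]], [s]) for s in states}
    for o in observed[1:]:
        best = {s: max([(p * A[r][s] * B[s][o], path + [s])
                        for r, (p, path) in best.items()],
                       key=lambda x: x[0])
                for s in states}
    return max(best.values(), key=lambda x: x[0])[1]

print(viterbi(["umbrella", "umbrella", "none"]))  # e.g. ['rain', 'rain', 'dry']
```

Each step keeps only the best path into each state, which is what makes the search over exponentially many state sequences tractable.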
As the Pentagon continued development of the F-35 Joint Strike Fighter program, the costliest weapons project in U.S. history, news surfaced in April that for more than a year, hackers had downloaded several terabytes of sensitive data from contractors' computers. The breach was a startling realization that even the most secretive projects are vulnerable. Defense Secretary Robert Gates admitted to CBS News that the United States is "under cyber-attack virtually all the time, every day" and that the Pentagon is changing its strategy to combat and use cyber-warfare in U.S. defense policy. Gates ordered the creation of a new military cyber-command that will defend the Pentagon's networks and conduct cyber-warfare. The Pentagon also will more than quadruple the number of security experts it employs to combat cyber-attacks. Yet as hackers and botnets (groups of "zombie" computers that autonomously spam the Internet) continue to attack organizations worldwide, a prevailing cyber-security question is how to unite the public and private sectors, as well as individual computer users, to fight cyber-crime.

President Barack Obama announced that cyber-security is a national priority and that he'll appoint a cyber-security coordinator. He also stated that the U.S. government would collaborate with the private sector to create a comprehensive national cyber-security policy. But he did not outline citizen involvement, which some security experts say is crucial. "I think civilian participation in cyber-security is absolutely essential because the systems that are used in attacks and most of the systems that are attacked are owned and operated by civilians," said Susan Brenner, professor of law and technology at the University of Dayton School of Law.

Brenner believes the current system for cyber-space enforcement is outdated, based as it is on modern criminal law oriented around territorial domains, which limits law enforcement. Because cyber-attacks can come from anywhere in the world, deterrence and prevention have become very difficult. Brenner said cyber-crime can be prevented by sharing information and coordinating responses across the public and private sectors, and especially among individual citizens. Since citizens are usually the ones attacked, Brenner said, they should be included in the cyber-security response by reporting attacks and making their systems more resistant. Because civilians often don't report attacks to authorities, cyber-security enforcement is losing valuable information about cyber-attacks. If law enforcement and the military created a rapid flow of threat data across the public, private and individual sectors, the nation's cyber-security would be strengthened, Brenner said. Brenner suggested a "distributed" approach to cyber-crime, in which governments would require anyone accessing cyber-space to employ security measures, without infringing on civil liberties. New cyber-crime prevention laws could potentially require citizens and private- and public-sector organizations to implement the tools necessary to prevent threats like identity theft, anonymous e-mail relaying and the expansion of botnets.

"People are currently the biggest flaws in cyber-security," said Joseph J. Schwerha, associate professor of business law at California University of Pennsylvania, who co-wrote an article with Brenner on cyber-crime. "Because information has to be available for people to use it, people are frankly the weak link in the chain."

Education has been the primary cyber-crime prevention strategy, with numerous organizations, including the U.S. Department of Homeland Security, InfraGard and the United States Computer Emergency Readiness Team, gathering and relaying information about cyber-threats. Education is essential on the individual level, Schwerha said, since cyber-criminals are increasingly targeting and hijacking individuals' computers to conduct cyber-warfare and perform other malicious activities.

The National Cyber Security Alliance (NCSA) is a public-private organization that specializes in cyber-security awareness, building a national understanding of appropriate online tools and behavior. The NCSA believes education is the key to protecting individual computers and networks. "The biggest threat I see, in general, is that users don't understand the connection between what they do on a computer and how that affects the networks they use," said Michael Kaiser, the alliance's executive director. "We really believe the answer to cyber-security issues is sharing information. We don't believe one organization, nonprofit, school or parent can do it individually. It has to be done collectively."

However, only five states have mandated Internet security training for individuals, and fewer than one-third of all classrooms teach anything related to cyber-security, according to the NCSA. In addition, an estimated 60 percent of teachers admit they don't feel prepared to teach cyber-security. To help remedy this education gap, the NCSA created a volunteer program that brings IT security professionals into classrooms to teach about cyber-security, ethics and safety.

A partnership between the FBI and the private sector called InfraGard disseminates information and reporting on cyber-crime and other major crime programs among its members. Many private-sector members of InfraGard have improved internal education programs for their employees, because individuals working for organizations are often the cause of security threats. The increasing use of portable technologies containing sensitive information, such as laptop computers, PDAs, BlackBerrys, phones and flash media, has made many organizations vulnerable to cyber-security threats. "As industries become more global with outsourcing and you have computer files with product designs, such as CAD files, that are available electronically, it's very easy for those files to go where they shouldn't," said John Landwehr, a member of the San Francisco Bay Area InfraGard local board of directors.

Though educating employees is important, the idea of incorporating private citizens into cyber-security protocol is a challenging proposition, said Ronald Dick, president of the InfraGard national members alliance, because privacy issues must be addressed, enforcement would be difficult (especially in other countries) and service providers would also need to be included. Dick, the former director of the FBI's National Infrastructure Protection Center, said there should be more stringent requirements on software developers and hardware manufacturers to increase Internet security. He's encouraged by Obama's recent attention to securing the country's critical Internet infrastructure, but said more work must be done to form effective collaboration between the public and private sectors.

"There is the realization that for our national security and the security of our information -- both from the public- and private-sector standpoints -- there has to be a partnership between the two sectors," Dick said. "It will not work without the two of them working together to better secure networks. There is this realization, but the question is: How do you execute it? There's a real searching on both sides on how do we make it work to protect the rights of citizens and the nation."
Germany, it appears, can't get enough soccer these days. With one World Cup tournament already under way, another is scheduled to begin Wednesday, this one with robots instead of human beings kicking away to score goals. The Robot World Cup, or RoboCup, will run through July 9 in Bremen, Germany. The decade-old event, drawing 440 teams from 40 countries, offers robotics researchers an opportunity to test their computing systems and skills in competition with international colleagues.

It is the brainchild of Japanese scientist Hiroaki Kitano, who in the early 1990s sought a platform for researchers to challenge each other in the area of artificial intelligence. Initially, Kitano and his international colleagues considered a competition focused on simulated tasks in nursing or disaster work, but because of widely varying requirements for these tasks around the world, the group eventually agreed on soccer, with its straightforward rules but fast, unpredictable game play.

The event has given a boost to an area of combined robotics and artificial intelligence research that differs greatly from robotics research aimed at controlled environments such as manufacturing and logistics. Factories and warehouses tend to be adapted around robots designed to carry out repetitive motions, not the other way around. Soccer, by comparison, is a real-world situation where players have to work in teams and react to the ball and opposing players. The sport provides a useful way to test models of behavior and motion that change quickly. The robotic players, which can be on wheels or legs or simulated, must know as exactly as possible where they are at any time, follow the movements of other players and the ball, and react to these changing circumstances appropriately, according to Ubbo Visser, a director with the Center for Information Technology in Bremen, which is organizing the RoboCup together with the Bremen Fair Center.

The competition is organized into five leagues: the simulation league, the small-size league, the middle-size league, the Sony legged league and the humanoid league. The automation of everyday intelligence still presents one of the biggest challenges to researchers, according to Visser. The RoboCup's goal (and not the one with a net) is to develop a team of fully autonomous humanoid robots that can play, and win, against a human World Cup champion team by 2050.

-John Blau, IDG News Service (Dusseldorf Bureau)
The Defense Advanced Research Projects Agency recently said its program to develop cutting-edge photonics products has yielded two chips that can support long optical delays with low loss, useful for a number of applications including wideband wireless systems, optical buffers for all-optical routing networks, and ultra-stable optical interferometers for sensing.

Working under DARPA's integrated Photonic Delay (iPhoD) program, the University of California, Santa Barbara (UCSB) and the California Institute of Technology (CalTech) came up with new microchip-scale, integrated waveguides for photonic delay. Optical waveguides (any structure that can guide light, like conventional optical fiber) can be used to create a time delay in the transmission of light. Such photonic delays are useful in military applications ranging from small navigation sensors to wideband phased-array radar and communication antennas, DARPA said.

According to DARPA, the new waveguides are built onto microchips and include up to 50 meters of coiled material used to delay light. Conventional fiber-optic coils of the same length would be about the size of a small juice glass. The new waveguides also employ modern silicon processing to achieve submicron precision and more efficient manufacturing. The result is a component that is smaller and more precise than anything before in its class. Chip-based waveguides also eliminate bulky, labor-intensive waveguide-to-fiber couplers, DARPA stated.

"Prior to the start of iPhoD, the best integrated waveguides had a signal loss of about 1 decibel per meter with total lengths of only a few meters," said Josh Conway, DARPA program manager, in a statement. "Under iPhoD, two research teams created chips with loss around 0.05 decibels per meter. The submillimeter bend diameter, which describes how tightly the waveguide can coil without significant signal loss, allowed the demonstration of a 50-meter optical delay on a single microchip."

DARPA said that the ultra-low-loss, true-time-delay chip developed at UCSB is composed of silicon nitride. This choice of material may allow integration with a variety of devices and materials, thereby reducing the size, weight and power requirements of an overall system. UCSB researchers also demonstrated 3D waveguide stacking, enabling more waveguide length and thus longer photonic delays, the agency said. Meanwhile, researchers at CalTech developed a waveguide constructed from silicon oxide, or glass, and demonstrated low loss over 27 meters.
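To put those loss figures in perspective, a quick calculation using only the numbers quoted above (a sketch, with decibel attenuation converted to surviving power fraction in the standard way):

```python
def surviving_fraction(db_per_meter, meters):
    """Fraction of optical power left after propagating through the waveguide."""
    total_db = db_per_meter * meters
    return 10 ** (-total_db / 10)

# Pre-iPhoD waveguides vs. the new chips, over the 50-meter delay line.
for loss in (1.0, 0.05):
    frac = surviving_fraction(loss, 50)
    print(f"{loss} dB/m over 50 m -> {frac:.3%} of the light survives")
```

At the old 1 dB/m figure, essentially nothing (about 0.001 percent) of the light would emerge from a 50-meter coil, while at 0.05 dB/m more than half does, which is why the loss improvement, not just the coiling, is what makes a 50-meter on-chip delay practical.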
We used to say that all data has a decaying value; the further away from its creation date it gets, the less valuable that data becomes. Compliance and regulatory requirements as well as big data analytics and archive have changed that. We now have to assume that all data will become valuable again -- we just don't know which data or when. If decades from now your grandchildren check into a hospital, the doctors might want to access your medical records. They need them quickly and they better be readable.

In theory, these archiving needs strengthen the position of many disk-based object-storage vendors. Their systems can provide data durability as well as quick access and cost effectiveness when compared to primary storage. The problem is that object storage is not as inexpensive as tape storage, nor is it as power efficient.

Because we are talking about potentially storing all data for decades, we need to do everything we can, without putting data at risk, to reduce the overall storage cost of the system. After all, those records won't do you any good if the hospital can't afford to keep the system that stores them powered on and up-to-date.

However, before we turn over all archive data to the object storage vendors, there is a part of that "all data has a decaying value" theory that is still applicable. It's this: all data has a decaying speed at which it needs to be accessed. Using our medical example above, the doctors might need to access your medical records 50 years from now, but they probably don't need to have them in seconds. They can probably wait a minute or two. As I noted in my article "Comparing LTO-6 to Scale-Out Storage for Long-Term Retention," in these situations tape is an ideal storage type. Data on tape can still be automatically scanned for durability, and it certainly meets the cost-effectiveness requirements. What surprises most people who are either new to tape or have forgotten about it is how quickly a modern tape library can deliver data. In most cases access takes less than a minute; in the worst case it is two to three minutes.

Understanding The Data Access Decay Rate

The speed at which you need to have data returned to primary storage will depend on the needs of the business. Because the predictable response to, "How long can you wait?" is, "I need it now," it is important to make sure that business line managers understand the value of waiting. If they understand that waiting two minutes could save the organization $2 million a year in storage expenses, waiting sounds much more attractive. In almost every case the durability of the data is far more important than the speed at which it can be recovered.

I typically suggest a blended strategy: as little primary storage as possible, a reasonable amount of object/archive storage, and a hefty amount of tape. The amount of object/archive disk storage will be driven by your data access decay rate. For many organizations that might mean keeping all data on object storage for three to five years. For almost all organizations, longer-term retention should be on tape. This blended strategy gives the right balance between access, affordability and durability.
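As an illustration of that blended approach, the tiering decision can be reduced to a simple policy function keyed to data age. The thresholds below are illustrative assumptions drawn from the examples in this article, not fixed recommendations; each organization should derive its own from its data access decay rate:

def storage_tier(age_days):
    # Hot data stays on primary storage (fast, expensive).
    if age_days <= 90:
        return "primary"
    # Warm data moves to object/archive disk (durable, quick enough).
    if age_days <= 5 * 365:
        return "object/archive"
    # Cold data lands on tape (cheapest, minutes to first byte).
    return "tape"

for age in (30, 400, 4000):
    print(age, "days old ->", storage_tier(age))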
<urn:uuid:749a0ede-e2ff-4663-a4b6-ef08639b64f0>
CC-MAIN-2017-04
http://www.networkcomputing.com/storage/plot-effective-data-archive-strategy/355219580?piddl_msgorder=thrd
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00459-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962855
658
2.515625
3
So will users' smartphones become infected with malware? The simple answer is yes, and they are. Of course the Windows PC platform is still the biggest target for virus and malware authors as this produces the biggest “return on malware investment”. Due to poor security measures taken by users, such as failing to patch their PCs or not using anti-malware, there are now around 4 million PC-based viruses and worms out in the wild. Contrast this with the 400 or so viruses and worms targeting smartphones and you can see the order-of-magnitude difference. But complacency is an enemy, and criminals are now exploring the smartphone market as a new and untapped source of devices waiting to be infected.

In April 2010 a pirated game was infected with malware, forcing the infected smartphone to dial out to premium international numbers unknown to the user. The first the user knew of the problem was the incoming phone bill at the end of the month. August 2010 saw an SMS-based Trojan for smartphones running the popular Android operating system. Called Trojan-SMS.AndroidOS.FakePlayer.a, the malicious program penetrates smartphones running Android looking like a harmless media player application. Users are prompted to install a 13KB file, and once installed on the phone, the Trojan uses the system to begin sending SMSs to premium rate numbers. Not only does this malware create havoc on the smartphone, it can also take advantage of the voice capability of the device. This is where these threats start to raise very sinister security concerns far beyond those of the humble personal computer.

To prove the point about smartphone security, last year Veracode, the code security people, conducted an experiment to see how easy it is to infect a smartphone with malware. The coders at Veracode created a tic-tac-toe game (noughts and crosses) that ran, in this case, on a BlackBerry device. Not that they were picking on BlackBerrys—they could have done this attack on any smartphone as it simply used a bit of social engineering to get a user to download the software on to their phone. Nothing that advanced here; in fact if the user didn’t actively download the app and put in their passwords then the attack would have failed. Once installed on the device the user happily played a game whilst in the background the malware was siphoning off their email contacts and SMS messages. It would have been trivial, at that stage, to turn the smartphone microphone on and have the device act as a bug.

David Cameron, the UK Prime Minister, carries a BlackBerry device and, in early 2011, he announced that he was following the cricket test match in Australia live on his BlackBerry whilst he was in bed. Consider the implications if this device was compromised and the Prime Minister was bugged in bed?

But no end of security education will prevent users from downloading apps if they really want them. Yes, devices can be locked down, as is the case with many company-issued BlackBerrys. Many security practitioners would agree that BlackBerry devices can be very well secured, and these devices have been tested and approved by the UK security establishment. But what employee in a “normal” business would agree to their personal device being locked down in such a way that they are prevented from downloading and running the latest game or app?

Is my smartphone-based data secure?
Any CISO considering their smartphone security strategy should consider this data from the Get Safe Online website, a crime prevention website based in the UK. Over 1 in 4 (28%) internet users use a smartphone to access the internet, rising to 50% amongst 18–24 year olds. Of these:
- 71% use their phones to send emails or use messaging applications
- 56% view and update their social networking profiles
- 1 in 5 (20%) synchronise their handsets to a personal computer
- Almost 1 in 5 (19%) use their mobiles to make purchases online
- Over 1 in 6 (16%) manage their finances, including banking and paying bills
- 1 in 5 (20%) have had their handsets lost or stolen

The statistics speak for themselves, but we are seeing a lot of people using devices for financial transactions, with the issues that can bring. Also, 20% have had their devices stolen—and these are their own devices they love and cherish! Would they take greater or lesser care over a company-issued phone?

The next article in this series will look at voice data security and smartphone management tools.

References (all accessed February 2011):
- Smartphone Malware Multiplies. [Online] 2010.
- Windows Mobile Trojan Poses As “3D Anti-terrorist action” War Game. [Online] 2010.
- First SMS Trojan detected for smartphones running Android. [Online] 2010.
- Is Your BlackBerry App Spying on You?. [Online] 2010.
- Smartphone security put on test. [Online] 2010.
- Get Safe Online website
<urn:uuid:a5aec3d1-9bfe-40b1-9a92-17057328e0d0>
CC-MAIN-2017-04
http://www.bloorresearch.com/analysis/the-smartphone-a-real-bug-in-your-bed-2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00577-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955883
1,012
2.59375
3
Basics of Cryptography

Questions derived from the CompTIA SY0-101 – Security+ Self Test Software Practice Test.

Objective: Basics of Cryptography
SubObjective: Understand and be able to explain the following concepts of PKI: Certificates (Certificate Policies, Certificate Practice Statements), Revocation, Trust Models
Item Number: SY0-184.108.40.206
Single Answer, Multiple Choice

What is contained within an X.509 CRL?
A. Digital certificates
B. Private keys
C. Public keys
D. Serial numbers

Answer: D. Serial numbers

An X.509 Certificate Revocation List (CRL) contains a list of serial numbers of revoked, not-yet-expired digital certificates that should be considered invalid. CRLs are created by certificate authorities (CAs). Public and private keys are used in encryption, which can be used to protect the confidentiality of file contents. A digital certificate is an electronic document that contains authentication credentials. Although a CRL contains information about digital certificates, a CRL does not contain the digital certificates themselves.

Wikipedia.org, Certificate Revocation List, http://en.wikipedia.org/wiki/Certificate_revocation_list
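For readers who want to see what this looks like in practice, the short sketch below parses a CRL with the third-party Python cryptography package and lists the revoked serial numbers. The file name is a placeholder, and the call shown assumes a reasonably recent version of the library (one where no backend argument is required):

from cryptography import x509

# Load a PEM-encoded CRL from disk (hypothetical file name).
with open("ca.crl.pem", "rb") as f:
    crl = x509.load_pem_x509_crl(f.read())

# A CRL is a CA-signed list of revoked-certificate entries; each entry
# carries the serial number of a revoked, not-yet-expired certificate.
for entry in crl:
    print(entry.serial_number, entry.revocation_date)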
<urn:uuid:c0548526-0c70-4c27-a83e-e00916b11fcf>
CC-MAIN-2017-04
http://certmag.com/basics-of-cryptography/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00513-ip-10-171-10-70.ec2.internal.warc.gz
en
0.79976
246
3.65625
4
NOAA readies a supercomputing boost for hurricane season
- By Frank Konkel
- May 24, 2013

Hurricane Sandy as seen from NOAA's GOES-13 satellite on October 28, 2012. (NASA/NOAA photo)

The 2012 hurricane season was one of the worst and costliest on record, but the National Oceanic and Atmospheric Administration predicts the 2013 hurricane season could be even worse. For the six-month stretch beginning June 1, when conditions are ripe for hurricane formation, NOAA's Climate Prediction Center forecasts an active to extremely active season, with a 70 percent likelihood of 13-20 named storms – those that produce winds of 39 miles per hour or faster. It calls for up to 11 hurricanes, three to six of which are classified as major hurricanes with winds of 111 miles per hour or faster. For perspective, 2012's hurricane season produced 19 named storms that caused upwards of $80 billion in damage, including Hurricane Sandy's $75 billion demolition job on the east coast.

"NOAA predicts an above normal and possibly an extremely active hurricane season with a range of 13 to 20 named storms," seven to 11 of which are forecast to turn into hurricanes and three to six of which are forecast to turn into major hurricanes, said Kathryn Sullivan, acting NOAA administrator. NOAA's hurricane outlook does not predict how many storms will make landfall.

Individual storm forecasts are conducted by NOAA's National Hurricane Center. It uses the Hurricane Weather Research and Forecasting (HWRF) model, centered on a supercomputer that analyzes data sets collected from satellites, weather buoys and airborne observations from Gulfstream-IV and P-3 jets, and churns out high-resolution computer-modeled forecasts. In July, NOAA's hurricane forecasts will get a boost when a new supercomputer is brought online – it will run an upgraded HWRF that NOAA officials say will improve forecast models "10 to 15 percent."

In addition, Congress has approved $23.7 million in funds directed to the NOAA-housed National Weather Service to beef up its forecasting and supercomputer infrastructure through the Disaster Relief Appropriations Act, also called the Sandy supplemental. The money will significantly upgrade the Reston, Va.-based supercomputer named Tide that runs the NWS' Global Forecast System (GFS). GFS' long-term forecasts of Hurricane Sandy were significantly bested last year by those of the model run by the England-based European Centre for Medium-Range Weather Forecasts (ECMWF), drawing significant criticism from American weather experts and putting the national model a distant second to ECMWF in worldwide forecasting supremacy.

The appropriation allows NWS to boost the computational power behind the GFS model by more than ten times, from 213 teraflops to 2,600 teraflops by fiscal 2015. If those numbers hold as expected, they would surpass ECMWF the same year. Increased computational capacity translates to higher-resolution models of existing data from sources like weather satellites, which ultimately helps forecasters pick up on potentially minute changes in conditions that can change the way a storm system behaves.

Given the increased frequency of extreme weather in recent years, and NOAA's forecast for another hurricane-laden year, infrastructure investments in weather prediction will have plenty of chances to prove their value.

Frank Konkel is a former staff writer for FCW.
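As a quick arithmetic check on those figures (a back-of-the-envelope sketch using only the numbers quoted above):

old_tf, new_tf = 213.0, 2600.0                      # teraflops, per the article
print("speedup: about %.1fx" % (new_tf / old_tf))   # ~12.2x, i.e. "more than ten times"

The ratio confirms the article's characterization: the planned upgrade amounts to roughly a twelvefold increase in computational power.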
<urn:uuid:0e3a7bda-7b1e-4a55-963a-b5bda70361ee>
CC-MAIN-2017-04
https://fcw.com/articles/2013/05/24/noaa-hurricane-season.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00513-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941213
679
2.5625
3
Introduction to Python for Statistical Learning

The first session in our statistical learning with Python series will briefly touch on some of the core components of Python’s scientific computing stack that we will use extensively later in the course. We will not only introduce two important libraries for data wrangling, numpy and pandas, but also show how to create plots using matplotlib. Please note that this is not a thorough introduction to these libraries; instead, we would like to point out what basic functionality they provide and how they differ from their counterparts in R. But before we get into the details we will briefly describe how to set up a Python environment and what packages you need to install in order to run the code examples in this notebook.

To run the R examples in this post you also need the rpy2 package. You can find instructions on how to install rpy2 here. If you have a working R environment on your machine, the following command should install it:

$ pip install -U rpy2

To test if rpy2 was installed correctly run:

$ python -m 'rpy2.tests'

If you run on Anaconda and it complains that it misses libreadline.so, please install the following conda package:

$ conda install python=2.7.5=2

IPython is an interactive computing environment for Python. It is a great tool for interactive data analysis and programming in general. Amongst other things it features a web-based notebook server that supports code, documentation, inline plots, and much more. In fact, all blog posts in this series will be written using IPython notebooks, with the advantage that you can simply download a post from here and either run it locally or view it on nbviewer.

The goal of this session is to get familiar with the basics of how to work with data in Python. The basic data containers that are used to manipulate data in Python are n-dimensional arrays that act either as vectors, matrices, or tensors. In contrast to statistical computing environments like R, the fundamental data structures for data analysis in Python are not built into the computing environment but are available via dedicated 3rd-party libraries, the most important of which are numpy and pandas.

Numpy is the lingua franca of the Python scientific computing ecosystem. It basically provides an n-dimensional array object that holds elements of a specific type (e.g., numpy.int32). Most packages that we will discuss in this series will directly operate on arrays. Numpy also provides common operations on arrays such as element-wise arithmetic, indexing/slicing, and basic linear algebra (dot product, matrix decompositions, ...). Below we show some basic operations on numpy arrays:

from __future__ import division # always use floating point division
import numpy as np # convention, use alias ``np``

# a one dimensional array
x = np.array([2, 7, 5])
print 'x:', x

# a sequence starting from 4 to 12 with a step size of 3
y = np.arange(4, 12, 3)
print 'y:', y

# element-wise operations on arrays
print 'x + y:', x + y
print 'x / y:', x / y
print 'x ^ y:', x ** y # python uses ** for exponentiation

x: [2 7 5]
y: [ 4 7 10]
x + y: [ 6 14 15]
x / y: [ 0.5 1. 0.5]
x ^ y: [ 16 823543 9765625]

If you need any help on operations such as np.arange you can access the documentation by either typing help(np.arange) or — if you use IPython — writing a '?' after the command.

You can index and slice an array using square brackets. To slice an array, numpy uses Python’s slicing syntax x[start:end:step], where the step size is optional. If you omit start or end it will use the beginning or end, respectively.
Python uses exclusive semantics, meaning that the element at position end is not included in the result. Indexing can be done either by position or by using a boolean mask:

print x[1] # second element of x
print x[1:3] # slice of x that includes second and third elements
print
print x[-2] # indexing using negative indices - starts from -1
print x[-np.array([1, 2])] # fancy indexing using index array
print
print x[np.array([False, True, True])] # indexing using boolean mask

7
[7 5]

7
[5 7]

[7 5]

For two or more dimensional arrays we just add slicing/indexing arguments; to select a whole dimension you can simply put a colon (:).

# reshape sequence to 2d array (=matrix) where rows hold contiguous sequences
# then transpose so that columns hold contiguous sub sequences
z_temp = np.arange(1, 13).reshape((3, 4))
print "z_temp"
print z_temp
print

# transpose
z = z_temp.T
print "z = z_temp.T (transpose of z_temp)"
print z
print

# slicing along two dimensions
a = z[2:4, 1:3]
print "a = z[2:4, 1:3]"
print a
print

# slicing along 2nd dimension
b = z[:, 1:3]
print "b = z[:, 1:3]"
print b
print

# first column, returns 1d array
c = z[:, 0]
print "c = z[:, 0]"
print c # one dimensional
print

# first column but return 2d array (remember: exclusive semantics)
cc = z[:, 0:1]
print "cc = z[:, 0:1]"
print cc # two dimensional; column vector

z_temp
[[ 1 2 3 4]
 [ 5 6 7 8]
 [ 9 10 11 12]]

z = z_temp.T (transpose of z_temp)
[[ 1 5 9]
 [ 2 6 10]
 [ 3 7 11]
 [ 4 8 12]]

a = z[2:4, 1:3]
[[ 7 11]
 [ 8 12]]

b = z[:, 1:3]
[[ 5 9]
 [ 6 10]
 [ 7 11]
 [ 8 12]]

c = z[:, 0]
[1 2 3 4]

cc = z[:, 0:1]
[[1]
 [2]
 [3]
 [4]]

To get information on the dimensionality and shape of an array you will find the following attributes useful:

print z.shape # number of elements along each axis (=dimension)
print z.ndim # number of dimensions
print z[:, 0].ndim # return first column as 1d array

(4, 3)
2
1

In numpy, slicing will return a new array that is basically a view on the original array; thus, it doesn't require copying any memory. Indexing (in numpy often called fancy indexing), on the other hand, always copies the underlying memory.

Differences between R and Python

R differentiates between vectors and matrices, whereas in numpy both are unified by the n-dimensional numpy.ndarray class. There are a number of crucial differences in how indexing and slicing are handled in Python vs. R. Note that the examples below require the Python package rpy2 to be installed.

# allows execution of R code in IPython
try:
    %load_ext rmagic
except ImportError:
    print "Please install rpy2 to run the R/Python comparison code examples"

The rmagic extension is already loaded. To reload it, use: %reload_ext rmagic

Python uses 0-based indexing whereas indices in R start from 1:

x = np.arange(5) # arange has excl semantics
x

%%R # tells IPython that the following lines will be R code
x <- seq(0, 4) # seq has incl semantics
print(x)

Python uses exclusive semantics for slicing whereas R uses inclusive semantics:

x[0:2] # doesn't include index 2

%%R
x <- seq(0, 4) # seq has incl semantics
print(x[1:2]) # includes index 2

0 1

Negative indices have different semantics: in Python they are used to index from the end of an array whereas in R they are used to drop positions:

x[-2] # second element from the end

%%R
x <- seq(0, 4) # seq has incl semantics
print(x[-2]) # drop 2nd position, ie 1

0 2 3 4

If you index on a specific position of a matrix, both R and Python will return a vector (i.e., an array with one less dimension).
If you want to retain the dimensionality, R supports a drop=FALSE argument, whereas in Python you have to use slicing instead:

X = np.arange(4).reshape((2, 2)).T # 2d array
X[0:1, :] # still 2d array - slice selects one element

array([[0, 2]])

%%R
X = matrix(seq(0, 3), 2, 2)
print(X[1, , drop=FALSE]) # use drop=FALSE

     [,1] [,2]
[1,]    0    2

pandas provides a key data structure: the pandas.DataFrame; as can be inferred from the name, it behaves very much like an R data frame. Pandas data frames address three deficiencies of arrays:

- they hold heterogenous data; each column can have its own dtype,
- the axes of a DataFrame are labeled with column names and row indices,
- and they account for missing values, which is not directly supported by arrays.

Data frames are extremely useful for data munging. They provide a large range of operations such as filter, join, and group-by aggregation. Below we briefly show some of the core functionality of pandas data frames using some sample data from the website of the book “Introduction to Statistical Learning”:

import pandas as pd # convention, alias ``pd``

# Load car dataset
auto = pd.read_csv("http://www-bcf.usc.edu/~gareth/ISL/Auto.csv")
auto.head() # print the first lines

   mpg  cylinders  displacement  horsepower  weight  acceleration  year  origin  name
0   18          8           307         130    3504          12.0    70       1  chevrolet chevelle malibu
1   15          8           350         165    3693          11.5    70       1  buick skylark 320
3   16          8           304         150    3433          12.0    70       1  amc rebel sst

One of the first things you should do when you work with a new dataset is to look at some summary statistics such as mean, min, max, the number of missing values, and quantiles. For this, pandas provides the convenience method describe().

You can use the dot . or bracket notation to access columns of the dataset. To add new columns you have to use the bracket notation:

mpg = auto.mpg # get mpg column
weight = auto['weight'] # get weight column
auto['mpg_per_weight'] = mpg / weight
print auto[['mpg', 'weight', 'mpg_per_weight']].head()

   mpg  weight  mpg_per_weight
0   18    3504        0.005137
1   15    3693        0.004062
2   18    3436        0.005239
3   16    3433        0.004661
4   17    3449        0.004929

Columns and rows of a data frame are labeled; to access or manipulate the labels use the columns and index attributes:

Index([u'mpg', u'cylinders', u'displacement', u'horsepower', u'weight', u'acceleration', u'year', u'origin', u'name', u'mpg_per_weight'], dtype=object)

Int64Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int64)

Indexing and slicing work similarly to numpy arrays, except that you can also use column and row labels instead of positions:

auto.ix[0:5, ['weight', 'mpg']] # select the first 5 rows and two columns weight and mpg

For more information on pandas please consult the excellent online documentation or the references at the end of this post.
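To make the group-by aggregation mentioned above concrete, here is a small illustrative example on the same Auto dataset (a sketch only; it assumes the auto data frame loaded earlier in this post):

# average miles per gallon for each cylinder count
grouped = auto.groupby('cylinders')
print grouped['mpg'].mean()

# number of cars in each group
print grouped.size()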
Differences between R and Python

The major difference between the data frame in R and pandas from a user’s point of view is that pandas uses an object-oriented interface (i.e., methods) whereas R uses a functional interface:

auto.head(2)

   mpg  cylinders  displacement  horsepower  weight  acceleration  year  origin  name                       mpg_per_weight
0   18          8           307         130    3504          12.0    70       1  chevrolet chevelle malibu  0.005137
1   15          8           350         165    3693          11.5    70       1  buick skylark 320          0.004062

# this command pushes the pandas.DataFrame auto to R-land
%Rpush auto

%%R
auto = data.frame(auto)
print(head(auto, 2))

  mpg cylinders displacement horsepower weight acceleration year origin
0  18         8          307        130   3504         12.0   70      1
1  15         8          350        165   3693         11.5   70      1
                       name mpg_per_weight
0 chevrolet chevelle malibu    0.005136986
1         buick skylark 320    0.004061738

Below is a table that shows some of the methods that a pandas DataFrame provides and the corresponding functions in R:

R                      pandas
head(df)               df.head()
summary(df)            df.describe()
dim(df)                df.shape
nrow(df)               len(df)
merge(df1, df2)        df1.merge(df2)
aggregate(x, by, FUN)  df.groupby(...).aggregate(...)

Plotting using Matplotlib

Like R, there are several different options for creating statistical graphics in Python, including Chaco and Bokeh, but the most common plotting library is Matplotlib. Here is a quick introduction on how to create graphics in Python similar to those created using the base R functions.

%pylab inline
import pandas as pd
import matplotlib.pyplot as plt

data = np.random.randn(500) # array of 500 random numbers

Populating the interactive namespace from numpy and matplotlib

To make a histogram you can use the hist function:

plt.hist(data)
plt.ylabel("Counts")
plt.title("The Gaussian Distribution")

Like R, you can specify various options to change the plotting behavior. For example, to make a histogram of frequency rather than of raw counts you pass the argument normed=True.

You can also easily make a scatter plot:

x = np.random.randn(50)
y = np.random.randn(50)
plt.plot(x, y, 'bo') # b for blue, o for circles
plt.xlabel("x")
plt.ylabel("y")
plt.title("A scatterplot")

Matplotlib supports Matlab-style plotting commands, where you can quickly specify a color (b for blue, r for red, k for black, etc.) and a symbol for the plotting character ('-' for solid lines, '--' for dashed lines, '*' for stars, ...):

s = np.arange(11)
plt.plot(s, s ** 2, 'r--')

There is also a scatter command that creates scatterplots.

Boxplots are very useful to compare two distributions:

plt.boxplot([x, y]) # Pass a list of two arrays to plot them side-by-side
plt.title("Two box plots, side-by-side")

Matplotlib and Pandas

Pandas provides a convenience interface to matplotlib; you can create plots by calling plot and hist directly on a data frame:

# create a scatterplot of weight vs "miles per gallon"
auto.plot(x='weight', y='mpg', style='bo')
plt.title("Scatterplot of weight and mpg")

# create a histogram of "miles per gallon"
plt.figure()
auto.hist('mpg')
plt.title("Histogram of mpg (miles per gallon)")

Finally, pandas has built-in support for creating scatterplot matrices and much more:

from pandas.tools.plotting import scatter_matrix
_ = scatter_matrix(auto[['mpg', 'cylinders', 'displacement']], figsize=(14, 10))

Matplotlib has a rich set of features to manipulate and style statistical graphics. Over the next few weeks we will cover many of them to help you make charts that you find visually appealing, but for now this should be enough to get you up and running in Python.
For a more in-depth discussion of the Python scientific computing ecosystem we strongly recommend the Python Scientific Lecture Notes. The lecture notes contain lots of code examples from applied science such as signal processing, image processing, and machine learning. Wes McKinney, the original author of pandas, wrote a great book on using Python for data analysis. It is not only the primary reference for pandas but also features a concise yet profound introduction to Python, numpy and matplotlib.

This post was written by Peter Prettenhofer and Mark Steadman. Please post any feedback. This post was inspired by the StatLearning MOOC by Stanford.
<urn:uuid:c4d90294-e5c8-4b4a-a1b2-50c642968125>
CC-MAIN-2017-04
https://www.datarobot.com/blog/introduction-to-python-for-statistical-learning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00513-ip-10-171-10-70.ec2.internal.warc.gz
en
0.714203
3,815
3.671875
4
Dan Reed, now with Microsoft Research and a member of the SC Steering Committee, told representatives of the US Congress at a hearing in 2008 that “information technology is a universal intellectual amplifier” while arguing for the need for greater Congressional support for US investments in IT. This is certainly still true today. Reed’s statement carries with it the idea that investments of money, people, and intellectual effort into IT and computing solutions have a force multiplier that other activities do not share: $100M invested in a telescope has an immediate impact on our understanding of the universe, but won’t likely impact our understanding of how to build more effective bridges or prevent the spread of disease. On the other hand, a $100M investment in computing — and especially in computing hardware — can have a much broader impact. There are hundreds of computing centers around the world that host significant supercomputing resources and expertise, used by a diverse community of researchers working in fields as varied as environmental restoration, applied mathematics, and linguistics.

The broad leverage of supercomputing resources across so many scientific and technical disciplines is a powerful argument in favor of HPC. Perhaps more compelling, however, is the fact that researchers use software on supercomputers to study phenomena that would be too dangerous, too expensive, or simply impossible to examine any other way. If you are trying to build new materials atom-by-atom, or need to study the environmental effects of a nuclear reactor leak, you simply don’t have other options.

2010 marks the second year of SC Communities, the body that synthesizes the programs contributing to the vibrancy and diversity of the global supercomputing community. “SC conference organizers have long understood the unique advantages that supercomputing offers humanity,” explains Boston University’s Jennifer Teig von Hoffman, SC10 Communities chair. “They also recognize that it is ultimately not about the hardware alone. We need people to enable the hardware to make a difference. Developing the next generation of HPC talent is the focus of the collection of programs organized as SC10 Communities.”

There is a great need for workforce development to attract students into science, math and computing worldwide, at the same time that research shows the number of students pursuing studies in these domains is declining and student performance needs to improve. The supercomputing community is not immune to these shortages. As pointed out in a recent IDC study, there is a “growing worldwide shortage of HPC talent, due to an aging HPC workforce and a scarcity of new graduates in various HPC fields.” Companies, national labs, and universities are affected by workforce shortages, but they are also in a strategic position to address this problem through science education and the special emphasis on workforce development at this year’s Supercomputing Conference in New Orleans. SC Conference leaders are committed to innovative mechanisms for broadening participation within high-performance computing and the computational sciences. The Student Volunteers Program provides an introduction to, and experience of, the SC Conference.
Student Volunteers are local and international graduate and undergraduate students in a variety of disciplines (including Computer Science, Information Sciences, Applied Mathematics and IT), for whom the conference is an important opportunity to interact with leading researchers in technical fields.

To further encourage discussion of critical issues, a set of panels and papers at SC10 will explore issues and solutions in HPC workforce development, focusing on the specific skills and capabilities needed in the HPC workforce. The conference will also cover new and existing approaches to increasing the skilled workforce, as well as the trends and forces shaping HPC workforce needs and education and training approaches over the next 5 and 10 years.

Because it is so broadly applicable, supercomputing in particular benefits from the broadest possible set of points of view and backgrounds among its practitioners. This means providing opportunities and support to emerging leaders and groups who historically have not had a strong presence in HPC. These include women, students and early-career professionals from under-represented groups, and international attendees. The Broader Engagement Program provides competitive grants to support travel to and participation in the SC10 Technical Program by members of under-represented groups. Participants go on to provide leadership on SC committees and show excellence in SC technical sessions, such as SC posters and the Doctoral Showcase.

As part of the Broader Engagement Program, the SC10 Student Job Fair will be held during the conference. At the SC09 fair, over 100 students met representatives from government and private industry, research labs, academic institutions and recruiting agencies to discuss research and employment opportunities, co-ops and internships.

Shannon Steinfadt is a recently hired engineer who attended SC08 with help from the Broader Engagement Program. In talking about the program’s impact on her career she says, “The cooperative environment, social events, mentorship and tours of the exhibitor hall were so helpful in boosting my confidence. The program enabled me to really participate, and not just observe what could have been an overwhelming experience. I felt that I was taken care of by the Broader Engagement Program.” Shannon also found the 2008 Job Fair valuable. She was contacted by four national laboratories based on the resume she distributed at the Job Fair, and is now part of Los Alamos National Laboratory.

The future of our planet, and the quality of life for all life living on it, may well depend upon how well we address the challenges we face today. Short-term efforts and special projects can win individual battles, but in order to win the war we must capture the world’s best and brightest minds for science, engineering, and computing. The Education Program introduces supercomputing and computational tools, resources, and methods to K-12 educators, and helps them to integrate computational techniques into the classroom. During SC10, the Education Program will host a four-day intensive program that immerses participants in high-performance computing, networking, storage and analysis. The program offers mentorship, focused hands-on tutorials, and formal and informal opportunities to interact with other SC communities and exhibitors.
Although engaging K-12 students is critical to the continued long-term health of the global science community, the immense opportunities to capture students further along in their studies must not be ignored. The Student Cluster Competition is a joint effort between the SC10 Technical Program and SC10 Communities. Teams consisting of six undergraduate students showcase the amazing power of clusters and the ability to utilize open source software to solve interesting and important problems. They compete in real-time on the exhibit floor to run a workload of real-world applications on clusters of their own.

According to Hai Ah Nam, SC10 Technical Program Student Cluster Competition Co-Chair and an alumna of the SC Broader Engagement program, “Student Cluster Competition teams come from Taiwan, Russia and the United States with team sponsors such as AMD, Atlantic Computing, Cray, Dell, HP, IBM, Lockheed Martin, Mellanox, and Microsoft.” Prior to the competition, teams work with their advisor and vendor partners to design and build a cutting-edge commercially available small cluster. Teams must also learn open source competition applications and are encouraged to enlist the help of domain specialists.

Doug Smith, advisor for the team at the University of Colorado Boulder, has used the SC Student Cluster Competition to build the undergraduate curriculum for HPC courses at his institution. “Our team is a very grass-roots effort,” he explains. “We take students from any major regardless of past experience. The majority of our students come from computer science or one of the other engineering disciplines. However, we have had some past students from applied math, physics and astronomy. We will have a team of six undergrads at the SC10 competition. We are being sponsored by the University of Colorado, Lockheed Martin and the HPC Advisory Council, and our hardware will be based on Dell/AMD/Mellanox. I think of the Student Cluster Competition as the Formula 1 race of the computer industry.”

The SC conference series has a long history of fostering HPC/science education and workforce development. National labs and large research centers with significant interests in education, outreach and training have collaborated with SC Communities, spearheading special projects and programs to foster the development of its leaders and participants. “The global computing community’s continued success in applying supercomputing to the great challenges of our time depends in good measure on bringing new people and ideas into the HPC fold; there is no better place to make that happen than SC10,” says Teig von Hoffman. “HPC and advanced networking have become a critical component of a growing worldwide cyberinfrastructure, and it is important for that same global diversity to be reflected in the HPC community.”

About the Author

Linda Barney owns Barney and Associates, a technical and marketing writing and Web firm in Beaverton, Oregon, that provides writing and Web content for the high tech, government, medical and scientific communities. Readers can reach her at firstname.lastname@example.org.
<urn:uuid:44de3da1-c00d-44bd-90dd-f09b28c3b95e>
CC-MAIN-2017-04
https://www.hpcwire.com/2010/10/26/sc10_champions_hpc_education_and_workforce_development/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00237-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947085
1,864
2.6875
3
PROBLEM/SITUATION: Develop an accurate, cost-effective method of remediating radioactive contamination in the Kerr-McGee residential area Superfund site.
SOLUTION: Combine GPS, GIS and radiation-detection technologies to provide rapid, accurate identification, analysis, mapping and cleanup of the site.
JURISDICTION: City of West Chicago, Ill., DuPage County, Ill.
VENDORS: CH2M Hill Inc., Trimble, Sun Microsystems, Oracle, ESRI.
CONTACT: Rebecca Frey, Region 5 project manager, U.S. Environmental Protection Agency, 312/886-4760; Tim Runyon, project manager, Illinois Department of Nuclear Safety, 217/786-6365.

In 1930, the Lindsay Light and Chemical Company plant in West Chicago began milling naturally occurring radioactive thorium and other rare earths for the manufacture of filament coatings, polishing compounds and other products. In 1958, the plant was purchased by American Potash and Chemical, and in 1967, by Kerr-McGee Corp. When it closed in 1973, wastes from the various milling operations covered much of the 43-acre site. Decades of processing, most predating regulatory control of radioactive materials, left a landscape of thorium tailings (residue from the milling process), sediment piles and leach ponds.

Prior to closure -- and before the dangers of thorium tailings were known -- local residents, contractors and the city of West Chicago were allowed to haul away truckloads of the sandy residue to use as fill material in parks, streets, sidewalks, lawns, swimming pools and septic-tank installations. Runoff from heavy rains carried these wastes into sewers that emptied into a creek running through the surrounding residential area. When the creek flooded, waste from the plant site was deposited in people's yards.

To determine the extent of waste dispersal, aerial radiological surveys were conducted by the Nuclear Regulatory Commission in 1977, and again in 1989 by the U.S. Department of Energy and CH2M Hill, an engineering firm under contract to the federal Environmental Protection Agency (EPA) for remediation of Superfund sites. The surveys indicated that the waste had produced radioactive contamination in a 30-square-mile area around the plant site.

SUPERFUND SITE ESTABLISHED

In 1989, the EPA established the Kerr-McGee residential area as a Superfund site. Today it is the largest and most active of four such sites in West Chicago. In 1992, EPA; its prime contractor, CH2M Hill; Illinois Department of Nuclear Safety (IDNS); and Kerr-McGee Corp. began the actual cleanup. Identifying which of the more than 1,200 properties in the area require cleanup is the responsibility of the EPA and the prime contractor. EPA Region 5 project director, Rebecca Frey, described this as the "discovery-characterization phase of the project." It includes surveying 1,200 to 1,400 properties, processing the data, and providing EPA with individual property summaries -- including GIS and GPS information -- and the analytical results from soil sampling. EPA then passes the information to Kerr-McGee, which is under unilateral order to carry out excavation and property restoration. Following excavation, IDNS conducts soil-sample analysis and gamma radiation tests to determine if the property is "clean." If not, IDNS indicates where further excavation is needed. When the results meet EPA clean-up standards, Kerr-McGee is given authority to back-fill and restore the property.
The EPA requires remediation of properties having five picoCuries-per-gram total radium in the soil above the normally occurring background gamma radiation levels. However, naturally occurring background gamma radiation comes from the content of radium and uranium in the soil, and must be determined by lab analysis of random soil samples from the area. A gamma radiation detector alone will not do the job because the instrument is unable to distinguish between naturally occurring soil radiation and that "broadcast" by the concentration of thorium tailings at the nearby plant site.

A conventional approach would be to measure radium levels by manually analyzing soil samples from hundreds of individual properties. "A slow, costly process," said CH2M Hill Quality Assurance Manager John Fleissner. "It produces insufficient data that often results in too much soil being removed or not enough." A different method of measurement had to be developed.

The solution was provided by CH2M Hill Project Director Alta Turner. In the pilot-study phase of the project, Turner developed a calibration that correlated radium concentrations in the soil with gamma radiation measurements. The detector readings could then be used as a "surrogate" measure to determine radium levels. A software program on a tiny chip called an EPROM (Erasable Programmable Read-Only Memory) enabled the combined use of technologies to conduct rapid, accurate and cost-effective radiological soil analyses. The chip translates data from a hand-held Ludlum radiation instrument into ASCII format and sends it to a Trimble Pro XL differential GPS receiver. The receiver sends the data and GPS coordinates to a Trimble TDC-1 datalogger, where they are stored as a single file. In post-processing, the GPS coordinates are differentially corrected to sub-meter accuracy and loaded into an Oracle or Arc/Info database, along with the corresponding gamma readings and a time stamp.

"The purpose of measuring radium content in the soil," Turner said, "is to identify hot spots that must be excavated. As technicians walk over the property, the datalogger records a GPS point, a corresponding gamma reading and time stamp every two seconds. Field crews can survey a 10,000-square-foot property in an hour, achieving on the order of 2,000 x-y gamma data points. The incredible density of data provides maps with a very high degree of resolution of the radiological hot spots."

The original basemaps with parcel lines and owner identification were provided by the DuPage County Processing Department. IDNS contractors added planimetric data, including streets, sidewalks, driveways, etc. The maps were converted to North American Datum (NAD) 83 to provide a common coordinate system for GPS points, corresponding gamma readings and spatial attributes in the various data layers. In Arc/Info, GPS-point coverage and corresponding gamma readings are "layered" over the basemap, or displayed as contours of gamma concentrations. Referring to EPA's requirement for remediation of properties with five picoCuries-per-gram total radium in the soil above the normally occurring background gamma radiation levels, Turner said, "we contour the x-y coordinates and color-code them to indicate three possible conditions: less-than-background plus five; an indiscriminate gray zone; and greater-than-background plus five -- clearly requiring remediation."

BENEFITS OF COMBINED TECHNOLOGIES

"The process is accurate, fast and cost-effective," said Runyon.
"During initial testing, I was able to survey a site in 45 minutes using combined GPS and radiological measuring equipment, post-process the data in 15 minutes, and from there produce a contour map in 30 minutes -- one-and-a-half man hours! If you were to do radiological surveys as we originally did them -- laying out the property in a grid, walking behind the survey technician and writing down gamma readings, then producing an AutoCAD, or other computer graphics display of the site -- you would have at least 24 hours tied up." Runyon concurs that without the capability to rapidly and accurately collect and analyze geographical and radiological data, remediation on such a scale would probably take several years longer and carry an astronomical price tag. Runyon emphasized that having all parties use the same basemaps, technologies and standards reduces overall cost and makes for efficient coordination and smooth operation. "The technology itself makes the process cost-effective for everybody, particularly in the construction-related phases. We can make a pretty quick decision on turnaround, whether a property should be back-filled or have additional work. When you have a piece of heavy equipment sitting on site, or operators and technicians you are paying, you don't want them standing around waiting for a decision," he said. "Another viewpoint we sometimes forget is the home owner's. The sooner CH2M Hill does their work, Kerr-McGee does their excavation, and we do our verification -- the less impact on the home owner." Bill McGarigle is a freelance writer specializing in GIS, GPS, and marine-related topics. E-mail: . Aerial radiological survey -- survey by helicopter equipped with sodium-iodide pods, an instrument that measures pulses created by the interaction of gamma rays with the sodium-iodide medium. Sensitive enough to pick up naturally occurring radiation from granite tombstones in cemeteries. Curie -- basic unit for measurement of radioactivity, equal to 37 billion disintegrations per second, approximately the decay rate of 1 gram of radium 226. Gamma radiation -- high-energy, short-wavelength electromagnetic radiation, capable of penetrating 100mm of lead. Naturally occurring background radiation -- concentrations of radium, thorium and uranium that occur naturally in uncontaminated soil. Radium -- decay (daughter) product of Thorium 232 (Th-232) and Uranium 238 (U-238), generically referred to as radium. The specific radiums are Ra-228 (or Actinium-228) from Th-232, and Ra-226 from U-238. Radium level -- concentration of combined Ra-228 plus Ra-226 in the soil, in picoCuries per gram of soil (pCi/g). Thorium -- Thorium 232, a naturally occurring radioactive ore, processed from monazite sands. The ore is used in the production of gas mantles and various industrial polishing compounds. The comparison between background radiation and contamination levels on and around the Kerr-McGee site can be expressed in either concentrations or gamma exposure rates. For example, in this immediate area, the background concentrations for combined radium (Ra-226 plus Ra-228) is approximately 2.2 pCi/g. The concentration in some of the highly contaminated materials identified in the surrounding residential area is as high as 500-600 pCi/g. The mean background exposure rate as measured with an organic scintillation, micro-Roentgens-per-hour (uR/h) meter is approximately 10 uR/h. Exposure rates as high as 2000 uR/h have been identified. Trimble XL differential GPS receiver. 
4000SSE base station and TDC-1 datalogger. Ludlum 2221 scaler rate meter with sodium iodide detector. Sun Microsystems UNIX workstations. Trimble Pathfinder differential correction.
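The three-zone screening logic described in the article can be illustrated with a short sketch. All numbers below are placeholders, not the project's actual calibration, which correlated detector readings with lab-measured radium concentrations:

BACKGROUND = 10.0   # assumed mean background gamma reading, uR/h
THRESHOLD = 5.0     # assumed surrogate reading equivalent to +5 pCi/g
GRAY_BAND = 2.0     # assumed half-width of the indeterminate zone

def classify(reading):
    excess = reading - BACKGROUND
    if excess < THRESHOLD - GRAY_BAND:
        return "below background + 5: leave in place"
    if excess <= THRESHOLD + GRAY_BAND:
        return "gray zone: confirm with soil-sample analysis"
    return "above background + 5: excavate"

for r in (11.0, 14.5, 25.0):
    print(r, "uR/h ->", classify(r))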
<urn:uuid:2d86ab89-139b-4ee1-962a-939bf78ead1a>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/Technologies-Speed-Radioactive-Clean-Up.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00569-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9091
2,288
3.140625
3
Net Tip: Using pathping

More often than not, knowing that there are connectivity problems between two computers is not enough information to effect a fix. You can use the ping command, which sends four ICMP echo packets to the target and then listens for the replies on the originating machine. The tracert command displays the Fully Qualified Domain Name and IP address of each gateway along the route to a remote computer. Together, you can glean a little information about what's going on, but you're probably still in the dark as to exactly where the problem lies.

Shedding more light on the issue is pathping: a relatively obscure, yet simple and powerful command in the Windows NT/2K/XP arsenal. Basically a hybrid of ping and tracert, pathping one-ups these traditional utilities by providing statistical analysis of results over a period of time -- usually about 5 minutes. (This can vary depending on the number of hops being analyzed en route.) In addition to returning the computer name and IP address for each hop, pathping computes the percentage of lost/sent packets to each router or link. This additional information is what enables you to isolate the cause of a network problem.

Syntax:

pathping [-n] [-h maximum_hops] [-g host-list] [-p period] [-q num_queries] [-w timeout] [-T] [-R] target_name

Parameters:

-n : Do not resolve IP addresses to hostnames.

-h maximum_hops : Maximum number of hops to search for the target. Default is 30.

-g host-list : Allows loose source route along host-list (consecutive computers to be separated by intermediate gateways).

-p period : Wait period milliseconds between pings. Pings are sent to each intermediate hop, one at a time. Therefore, the interval between two pings sent to the same hop is (period) x (number of hops). It is suggested you don't go below the default number, so as to avoid network congestion.

-q num_queries : Number of queries per hop. Default is 100.

-w timeout : Wait timeout milliseconds for each reply. Default is 3000 milliseconds. Multiple pings can be done in parallel, so the amount of time specified in the timeout parameter is not bounded by the amount of time specified for the period parameter for waiting between pings.

-T : Test connectivity to each hop with Layer-2 priority tags. This parameter attaches a layer-2 priority tag (for example, 802.1p) to the ping packets that it sends to each of the network devices along the route. This helps identify network devices that do not have layer-2 priority configured. This parameter must be capitalized.

Enabling layer-2 priority on the host computer allows packets to be sent with a layer-2 priority tag, which can be used by layer-2 devices to assign a priority to the packet. Legacy devices that do not understand layer-2 priority will toss tagged packets, since they will appear as malformed packets. Therefore, a switch that connects to a legacy network should be configured to strip the tag before forwarding the packets. This option helps identify the network elements that are tossing the tagged packets.

-R : Test if each hop is Resource Reservation Setup Protocol (RSVP) aware, which allows the host computer to reserve a certain amount of bandwidth for a data stream. This parameter must be capitalized.

An RSVP reservation message for a non-existent session is sent to each network device along the route. If the device is not configured to support RSVP, it returns an Internet Control Message Protocol (ICMP) unreachable message.
If it is configured to do RSVP, it returns a Reservation Error. Some devices may not return either of these messages. If this happens, pathping returns a timeout message. CrossNodes Net Tips are a new feature of crossnodes.com. If you have a networking tip or trick that you'd like to share, please submit it to the Managing Editor. There can be no financial remuneration, though we will place your byline upon request.
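To tie the switches together, here is a representative invocation; the hostname is a placeholder, and the output (a hop list followed by per-hop loss statistics) will of course vary with your network:

C:\> pathping -n -q 50 -p 500 www.example.com

Here -n skips hostname resolution to speed up the run, -q 50 halves the per-hop query count from its default of 100, and -p 500 waits 500 milliseconds between pings. Expect the statistics phase to take a few minutes; that wait buys you the per-hop loss percentages that neither ping nor tracert can provide.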
<urn:uuid:ab6bc075-e5e5-4022-b967-9fb3ef127bb2>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/netsp/article.php/967051/Net-Tip-Using-ipathpingi.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00385-ip-10-171-10-70.ec2.internal.warc.gz
en
0.85999
875
3.09375
3
As our world becomes increasingly connected via the Internet, it only seems logical that the interconnectivity would eventually permeate our homes. "Smart devices" like alarm systems, locks, thermostats, and more that can be controlled over the Internet are gradually gaining visibility and creating legions of "smart homes." For all the technological advancements, however, it would appear that our houses are simultaneously becoming more vulnerable.

"Everything can be hacked," said Jerry Irvine, CIO of Prescient Solutions and a member of the National Cyber Security Task Force. "Here's the big picture: With Target...they're not saying it with certainty, but supposedly the way the hackers got into their network was through the HVAC network. That's a similar situation with us with our home solutions and our IoT [Internet of Things] environment."

Each of these connections, explained Irvine, is a potential way for hackers to get into your internal network. Once they get into your network and place some sort of virus or even just a sniffer, they can see what's going on and everything becomes hackable. And when he says "everything," he means everything.

"Home security systems, thermostat controls, lock controls, opening and closing the garage, the lights, info on the fire alarms...basically anything that is there can now be controlled via a Wi-Fi network," he said. "In a home environment, the average person doesn't even have a password on their cell phone, which they're going to be connecting to their home systems. It's just not going to happen."

Craig Heffner, a vulnerability researcher at Tactical Network Solutions, expressed similar concern about how much potential smart devices have to be a vulnerable attack surface.

"Anything that connects to the outside internet or listens for outside connections would be of concern," said Heffner. "There are a lot of devices where you can connect to them remotely. You have to consider wireless -- in other words, using your wireless network on its own and it's not secured. Then, someone would just have to be near your home and wouldn't even need a physical address."

He went on to add that obviously if users properly secured their wireless networks, they would be safe from such attacks, but many people either don't do that or don't know how.

How attackers actually gain access to your network via these smart devices can vary. One common scenario, however, is that one of the devices is programmed to listen for connections and an attacker scans for devices that are doing exactly that. Even if the device is an innocuous one, like a temperature sensor, an attacker can hack into it and use it as a "pivot point" once inside. In other words, they can use that device to bounce around and gain access to more important things.

The simple solution, said Heffner, is to not make these devices publicly accessible. "Use reflection. I'm going to keep [these devices] in my network and not configure them to be remotely accessible," he said. "If you browse for a website [on a computer that is connected to the same network], attackers can use your web browser to send requests to devices in your network, since a lot of them have web-based configurations. If it's not secure or there's a vulnerability, that's a problem.

"When they exploit it, they would run code that calls back to them; a server they have control of," he added. "And that gives them remote access."
One would think that, given the threat these devices pose to the networks they are connected to, vendors would release them with security measures included. Unfortunately, it appears they don't come equipped with much beyond a request for credentials.

"[Smart devices] usually have something built in; most devices, whatever admin access they have, will typically be at least password protected," said Heffner. "But there are a couple of problems there. A lot of people don't consider all of the scenarios."

The few security measures these devices do have are not necessarily mandatory to implement; users could, for example, not even bother setting a password. Heffner added that there are also ways for attackers to bypass the login process at some point in the code before the device checks credentials.

"So even if you have configured a secure password, you're not necessarily safe," he said. "Security is not taken as seriously as it is with things like PCs with Windows."

Irvine added that not only is an ID and password typically the extent of the security measures, they're not even that strong, given that passwords often don't even need to be complex.

"It's easy these days to proxy and masquerade as a web device," he said. "You could be a rogue web server, for instance, that these devices would then report to, nullifying the need for a user ID and password."

Even if a user is diligent enough to make the most of the security measures at hand, there's no way to secure what you don't know is vulnerable, Heffner pointed out.

"If there's a vulnerability in a device, most consumers will never hear about it," he said. "Most vendors will just ignore a vulnerability and never patch it at all. It's hard to protect against unknown vulnerabilities."

With so many vulnerabilities, both in the products themselves and as a result of poor user awareness, Irvine and Heffner seemed concerned about attack rates increasing alongside adoption rates. Irvine seemed particularly concerned with the lack of awareness surrounding the vulnerabilities of smart homes.

"I think the security [of these devices] won't improve until there is a major issue," he said. "As the adoption rates increase, so will the attacks. The same thing happened with mobile devices."

Heffner said that with other targets like PCs becoming better protected, attackers are more likely to target newer devices that users haven't properly learned how to secure yet, thereby making them more attractive targets.

"I think an increase in attacks along with adoption rates is pretty inevitable and we're already seeing that," he said. "You're already seeing large exploits targeting things like home routers. Things like that are only going to increase as the number of targets increase and as attackers realize how critical these devices are."

Despite the potential for creating vulnerabilities in one's network by using smart devices in a home setting, both Heffner and Irvine believe that as long as users are responsible, they can be implemented in a safe and secure manner.

"I think there's a lot of work to be done, but it comes back to your threat model," said Heffner. "If your network is reasonably secure and you keep these devices on your network, they're relatively secure even if there's a vulnerability in them. So yes, there are certainly steps users can take to make sure any vulnerabilities are mitigated."

Irvine also argued that the security of the devices, at least as the situation stands now, falls squarely on the shoulders of the users.
Without proper care, people can -- and do -- fall prey to these kinds of attacks.

"There are secure ways to implement home automation systems, [but] I don't believe any of those are being done," said Irvine. "Rather than having your home automation systems on the same Wi-Fi as your PCs and smartphones, I would want a completely different segment that had no direct access to the rest of my network. There are ways to do that."

So if it's up to the user to secure such an enticing attack vector, how can they go about doing that and avoid having their entire networks infiltrated?

"First and foremost is creating user IDs for each account," said Irvine. "Don't use the same email address or user ID for everything, or at least use different information for different categories. In other words, don't use your bank ID for your home automation, as well as Facebook."

The same goes for passwords, he said, which should not only be different, but also complex (alphanumeric, upper and lower case letters, etc.).

Some of Irvine's other advice was equally simple, like keeping both systems and anti-virus updated at all times. "When Microsoft says there's a patch, install it," he said. "These companies have found vulnerabilities in their systems, so they get updated."

Finally, for the especially cautious, he suggested taking a somewhat more involved approach. "If you are connecting to any type of home automation system that allows remote access, do it across a VPN," said Irvine. "Make sure the vendor allows for a totally encrypted connection. That should keep you more secure than the average person."

Network segmentation was another key strategy Irvine proposed. Keeping the devices from communicating with each other and with the rest of the home network is one way to insulate them from outside threats.

"I would segment my alarm systems from my home automation systems," Irvine said as an example. "If someone gets into my AC, I don't want them to be able to turn off my alarm. Users can create segmented networks or VPNs and make these devices unable to communicate with each other. You can also have your router set up so there is a VLAN on the inside, and it only allows these network segments to communicate with those other networks through a VPN."

Heffner was on the same page, suggesting measures that ranged from the simple to the slightly more technical. For one, he said, users should refrain from making their smart devices openly accessible to any remote user. "The device usually has an option to set specific devices or IP addresses that can access it remotely," he said.

As for mobile devices that are linked to smart devices via some sort of mobile app, Heffner said it falls on the software developers to keep the apps up to snuff. The user's responsibility lies in being aware of what exactly they're installing on their phones or tablets.

"Most [mobile apps] will auto-update themselves when updates are available," he said. "But whether you're using a mobile device or not, be careful of what programs you install. Android users should be especially careful. It will go a long way towards keeping people off your network."

This story, "Without Proper Security Measures, Smart Homes Are Just Begging to Be Targets" was originally published by CSO.
It's been about eight years since the United States adopted MPEG-2 as the video coding standard for digital television. But now a better algorithm has come along–actually, two better algorithms. Let's look at how these new technologies might work their way into video distribution and into consumer products.

In 1993, the Grand Alliance was formed by the companies that developed the four competing digital TV systems. They decided to use the MPEG-2 video coding algorithm as part of the combined Grand Alliance system, partly because MPEG-2 was part of a documented international standard. Another feature of the Grand Alliance system was a modulation method that allowed broadcasters to pack 19 Mbps into their 6 MHz channel. And it was determined that MPEG-2 coding at a 19 Mbps rate gave acceptable HDTV picture quality. Standard definition picture quality requires about 4 Mbps with MPEG-2 coding.

Fast-forward to today. Many consumers have purchased HD-ready TV displays. Some consumers have DTV tuners to watch digital broadcasts. Many don't have tuners but use the sets to display DVD movies. Many U.S. broadcasters have installed digital TV transmitters and are delivering HDTV programming, and the rest will soon follow. Both the DVDs and the DTV broadcasts use MPEG-2 video coding. Digital cable programming services use MPEG-2 coding for both HDTV and standard definition. Between DVD players, DTV receivers and digital cable boxes, there are around 80 million MPEG-2 decoders deployed today.

But within the last year or so, two new digital video methods have emerged. One of them is referred to as AVC for Advanced Video Coding, and has two formal names: H.264 (for the ITU-T standards designation) and MPEG-4 Part 10 (for the ISO standards designation). The other is WM9V for Windows Media 9 Video. WM9V will eventually have a formal name, because WM9V has been submitted by Microsoft to SMPTE to become a standard.

Instead of HDTV at 19 Mbps and standard definition video at 4 Mbps, both of the new coding methods seem to provide good HDTV picture quality at around 5 Mbps, and SDTV around 1 Mbps. A four-to-one improvement in capacity makes the engineers pay attention–and the business executives, too. The question is how to achieve these rates while still providing programming to those who bought legacy MPEG-2 receivers.

The broadcast industry has decided on a first step. Broadcasters are in the process of defining an "enhanced" modulation method that allows a portion of their 19 Mbps to be delivered with increased error coding and decreased payload capacity to viewers beyond the reach of their current signal, and perhaps viewers using portable hand-held PDAs. That enhanced signal, receivable only by viewers with "new" DTV sets, could carry programming that was encoded with the "new" coding method. Viewers with legacy DTV receivers would continue to receive the remainder of the 19 Mbps signal that carries legacy MPEG-2 programming.

Under the cable industry's old way of doing business, advanced video coding would have been easy to deploy because the cable operators owned all of the set-top boxes. If an MSO wanted to deploy advanced coding (or any advanced service) in a market, the MSO could simply move the legacy boxes to another market and deploy the "new technology" boxes in the roll-out market. With FCC rules allowing customer ownership of set-top boxes, and with FCC approval of the Plug-and-Play agreement between the cable and consumer electronics industries, MSOs have lost that kind of control.
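A quick aside on the capacity arithmetic: the rates quoted above divide out neatly, and the little program below tabulates them. The 19 Mbps payload and the per-stream rates are the round numbers cited in this column, not measured values, and real multiplexes reserve some overhead, so treat this strictly as a back-of-the-envelope sketch.

    #include <stdio.h>

    /* Approximate per-stream bit rates quoted above, in Mbps. */
    #define CHANNEL_MBPS 19.0   /* broadcaster's 6 MHz channel payload */
    #define HD_MPEG2     19.0
    #define SD_MPEG2      4.0
    #define HD_ADVANCED   5.0   /* AVC or WM9V, roughly */
    #define SD_ADVANCED   1.0

    int main(void)
    {
        printf("MPEG-2:   %d HD or %d SD streams per channel\n",
               (int)(CHANNEL_MBPS / HD_MPEG2),
               (int)(CHANNEL_MBPS / SD_MPEG2));
        printf("Advanced: %d HD or %d SD streams per channel\n",
               (int)(CHANNEL_MBPS / HD_ADVANCED),
               (int)(CHANNEL_MBPS / SD_ADVANCED));
        return 0;
    }

That prints one HD or four SD streams per channel with MPEG-2, versus three HD or 19 SD streams with the newer codecs: the four-to-one improvement that has everyone's attention.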
The cable industry has been studying the new video coding technology but, so far as I know, does not have a roadmap for deploying it.

The consumer electronics folks don't seem to have any problems with building DTV receivers with both MPEG-2 and advanced video decoders, so long as only one advanced method is chosen. They want the broadcast industry to choose between AVC and WM9V, because it costs more money to build in two new decoders than just one new decoder. But the broadcast industry doesn't really have a mechanism for making that selection.

For example, there hasn't been any comparative picture quality testing yet, and some think it's needed. And first, a decision is needed on whether the primary use of advanced video coding will be for extending the coverage area of TV stations, for delivering low bit rate, real-time video to next-generation hand-held PDAs, or simply for increasing the capacity to TV receivers within the existing coverage area. One algorithm might perform better for one purpose, but not the others.

And there are still unanswered questions about licensing and royalty fees. If the picture quality of the two proponents is equal, and if the decoder circuitries have equal cost and complexity, then licensing fees might be the key deciding factor. But standards bodies normally make their decisions on technical criteria, not business factors.

So there are still challenges, but a shift to new video coding technology will take place over the next 5 to 10 years, similar to the MPEG-2 timeline. Now's the time to start planning, particularly for the cable industry.

Have a comment? Contact Jeff via e-mail at: email@example.com
Researchers are looking to develop an intelligent imaging system that can monitor large areas, perhaps miles wide, identify potential threats based on the correlation of events and anomalies it detects, and issue timely alerts with few false alarms.

Such a surveillance system is at the heart of what researchers at the Defense Advanced Research Projects Agency call a Persistent Stare Exploitation and Analysis System (PerSEAS), which could automatically and interactively discover intelligence from optical or infra-red devices in the air -- on drones, for example -- or spread over urban, suburban, and rural environments.

DARPA said it envisions two major applications for such a system. Perhaps most important, the first would use the system in a near real-time mode to issue alerts and warnings so that authorities can react to and avert disasters. For example, if it noticed a number of activities out of the ordinary, such as a gathering of many soldiers and trucks, it could alert local authorities. The second would use archived data from the system to analyze events, such as an attack, to determine the movements and origins of the entities involved, DARPA said.

For both types of applications, DARPA said, the PerSEAS system ideally could receive cues from, or generate cues to, other sensor systems to identify places or people of interest for additional details. Overall, the challenge is to identify potential threats based on the accumulation and correlation of multiple events and anomalies, and issue alerts so military personnel in the field can take quick action or other officials can alert the public to problems, DARPA said.

Specifically, the PerSEAS system will gather data from sensors and feed it into an intelligent software engine supporting algorithms that discover relationships and anomalies indicative of suspicious behavior; algorithms that match previously learned or user-defined threat activity should also be incorporated, DARPA stated.

DARPA notes that in recent years the military has fielded several optical/infra-red systems it calls Wide Area Motion Imagery (WAMI) systems, and is in the process of buying newer, more capable systems that will expand the deployment and development of airborne WAMI devices. One such system, known as SWEEPER -- short for short-range wide-field-of-view extremely-agile electronically-steered photonic emitters -- uses lasers as a super-powered surveillance system.

These new systems persistently monitor fixed geographic locations for long periods of time using electro-optic sensors. Some store their WAMI data onboard and download it at the end of each mission for post-event analysis. Others provide operational support through real-time transfer of the data, DARPA stated.

Current efforts to exploit data from existing sensor systems are mostly manual and require hours to days of painstaking analysis to produce results. The tedious nature of current exploitation capabilities limits the ability to fully utilize the available data. Consequently, critical questions go unanswered and timely threat cues are missed. PerSEAS will automatically discover potential threat activities in near real time, as well as allow analysts to quickly validate the findings, DARPA stated.
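DARPA's announcement does not specify the algorithms, but the "accumulation and correlation" idea can be illustrated with a toy detector: count anomalies that cluster in time and space, and raise an alert when the count crosses a threshold. Everything below -- the event fields, window, radius, and threshold -- is hypothetical, and a real PerSEAS-class system would fuse many sensor tracks with far more sophistication.

    #include <stdio.h>

    typedef struct {
        double t;     /* time of the anomaly, seconds */
        double x, y;  /* position, arbitrary units    */
    } anomaly_t;

    /* Alert if 'threshold' anomalies occur within 'window' seconds
     * before anomaly i and within 'radius' units of it. */
    static int correlated_alert(const anomaly_t *a, int n, int i,
                                double window, double radius, int threshold)
    {
        int count = 0;
        for (int j = 0; j < n; j++) {
            double dt = a[i].t - a[j].t;
            double dx = a[i].x - a[j].x, dy = a[i].y - a[j].y;
            if (dt >= 0.0 && dt <= window &&
                dx * dx + dy * dy <= radius * radius)
                count++;
        }
        return count >= threshold;
    }

    int main(void)
    {
        /* Three anomalies near one spot within two minutes, one far away. */
        anomaly_t log[] = { {0.0, 1.0, 1.0}, {30.0, 1.2, 0.9},
                            {55.0, 0.8, 1.1}, {600.0, 9.0, 9.0} };
        int n = (int)(sizeof log / sizeof log[0]);
        for (int i = 0; i < n; i++)
            if (correlated_alert(log, n, i, 120.0, 1.0, 3))
                printf("alert at t=%.0f near (%.1f, %.1f)\n",
                       log[i].t, log[i].x, log[i].y);
        return 0;
    }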
We know you just want someone to explain in simple, plain English: "What is ITIL?" While there are many ITIL definitions on the Internet (and many articles written), you're often left wondering what it all means.

Your simple introduction to the basics of ITIL

So, what does ITIL stand for? ITIL is the acronym for Information Technology Infrastructure Library. While this longer name is still officially in place, the standard is now more commonly known as just ITIL.

ITIL is a globally recognized set of best practices for Information Technology Service Management (ITSM). The British Government created ITIL when it recognized that the ever-increasing dependence on IT required a set of standard practices. The standard is now published and owned by Axelos, a joint venture between a private company, Capita, and the United Kingdom Cabinet Office.

The official definition of ITIL is: "A set of best-practice publications for IT service management. ITIL gives guidance on the provision of quality IT services and the processes, functions and other capabilities needed to support them. The ITIL framework is based on a service lifecycle and consists of five stages (service strategy, service design, service transition, service operation and continual service improvement), each of which has its own supporting publication. There is also a set of complementary ITIL publications providing guidance specific to industry sectors, organization types, operating models and technology architecture."

But the best way to think about ITIL is as a simple and practical framework that focuses on aligning your Information Technology (IT) services with the wider needs of your business. ITIL is about making smart improvements to your IT service management processes. And it's really practical, not simply a mass of theory (in fact, ITIL was born from practical experience, not in a university classroom). ITIL can help your organization deliver best practice in ITSM, whatever the size of your company or the industry you're in.

Both companies and individuals can comply with ITIL, but only individuals can become certified. The 20000Academy service is aimed at helping organizations become compliant with ITIL.
While not in an area of space considered habitable, the rocky planet known as Kepler-10b is nevertheless significant because it showcases the ability of Kepler to find and track such small exoplanetary movements.

According to NASA, Kepler has found the planet is 1.4 times bigger than Earth and orbits its sun once every 0.84 days. It is more than 20 times closer to its star than Mercury is to our sun and likely has a surface temperature north of 2,500 degrees F, NASA stated.

"Accurate stellar properties yield accurate planet properties. In the case of Kepler-10b, the picture that emerges is of a rocky planet with a mass 4.6 times that of Earth and with an average density of 8.8 grams per cubic centimeter -- similar to that of an iron dumbbell," NASA stated.

According to NASA, Kepler's ultra-precise photometer measures the tiny decrease in a star's brightness that occurs when a planet crosses in front of it. The size of the planet can be derived from the depth of those periodic dips in brightness. The distance between the planet and the star is calculated by measuring the time between successive dips as the planet orbits the star, NASA stated.

"The Kepler team made a commitment in 2010 about finding the telltale signatures of small planets in the data, and it's beginning to pay off," said Natalie Batalha, Kepler's deputy science team lead at NASA's Ames Research Center, in a statement. She is the primary author of a paper on the discovery accepted by the Astrophysical Journal.

Kepler has already made a number of key space discoveries. In August, Kepler discovered two Saturn-sized exoplanets crossing in front of, or transiting, the same star. At the time NASA said that in addition to the two confirmed giant planets, Kepler spotted what appears to be a third, much smaller transit signature in the observations of the sun-like star designated Kepler-9, which is 2,000 light-years from Earth. The planets were named Kepler-9b and 9c.

NASA reported last January that Kepler had spotted five planets orbiting stars beyond our own solar system. The five planets are called "hot Jupiters" because of their high masses and extreme temperatures, NASA said. They range in size from about the same size as Neptune to larger than Jupiter and have orbits ranging from 3.3 to 4.9 days, NASA stated. The planets are unlikely to host living organisms because NASA estimates their temperatures to range from 2,200 to 3,000 degrees Fahrenheit, hotter than molten lava, and all five orbit stars hotter and larger than Earth's sun.

In June, mission scientists announced the mission had identified more than 700 planet candidates that had not yet been confirmed as planets.

The grand prize for Kepler, of course, would be finding a planet similar to Earth, or planets that orbit stars in a warm, habitable zone where liquid water could exist on the surface, according to NASA. Since transits of planets in the habitable zone of solar-like stars occur about once a year, and three transits are required for verification, it is expected to take at least three years to locate and verify an Earth-size planet, NASA stated.

The satellite has been peering at a patch of space, scanning over 150,000 stars, since 2009.

Follow Michael Cooney on Twitter: nwwlayer8
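The two measurements NASA describes map onto two textbook relations: the fractional transit depth is roughly the square of the planet-to-star radius ratio, and Kepler's third law converts the orbital period into an orbital distance. The sketch below applies them to round numbers loosely consistent with Kepler-10b -- a 0.84-day period around a roughly Sun-like star, and a hypothetical 150-parts-per-million dip. It illustrates the method, not the Kepler team's actual analysis.

    #include <stdio.h>
    #include <math.h>

    #define G_CONST 6.674e-11   /* m^3 kg^-1 s^-2 */
    #define M_STAR  1.989e30    /* ~1 solar mass, kg  */
    #define R_STAR  6.957e8     /* ~1 solar radius, m */

    int main(void)
    {
        const double pi = acos(-1.0);

        /* Transit depth: fractional dip ~ (Rp / Rstar)^2 */
        double depth = 1.5e-4;              /* hypothetical 150 ppm dip */
        double rp = sqrt(depth) * R_STAR;   /* planet radius, m */

        /* Kepler's third law: a^3 = G * M * T^2 / (4 * pi^2) */
        double T = 0.84 * 86400.0;          /* 0.84-day period in seconds */
        double a = cbrt(G_CONST * M_STAR * T * T / (4.0 * pi * pi));

        printf("planet radius  ~ %.2e m (%.2f Earth radii)\n",
               rp, rp / 6.371e6);
        printf("orbital radius ~ %.2e m (%.4f AU)\n", a, a / 1.496e11);
        return 0;
    }

With these inputs the planet comes out near 1.3 Earth radii and about 0.017 AU from its star -- roughly 20 times closer than Mercury's 0.39 AU -- consistent with the figures NASA reports.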
Rare earth metals or rare earth elements (REEs) are a relatively abundant group of seventeen elements found in the periodic table. Of the seventeen, fifteen comprise the lanthanide series, with atomic numbers 57 through 71. The rare earths can further be divided into two categories -- heavy rare earth elements (HREEs) and light rare earth elements (LREEs).

The demand for rare earth metals in Asia-Pacific was estimated at 95,695.38 tons in 2012, and is projected to reach 161,491.5 tons by 2018, growing at a CAGR of 9.1% from 2013. The Asia-Pacific rare earth metals market has grown considerably during the past few years and is expected to grow at a more rapid pace over the next five years. Cerium oxide is a major type of rare earth metal and is in huge demand in Asia-Pacific.

These elements are relatively abundant in nature even though their name suggests otherwise. Each REE is more common in the earth's crust than silver, gold or platinum, while cerium, yttrium, neodymium, and lanthanum are more common than lead. Thulium and lutetium are the least abundant REEs, with a crustal abundance of approximately 0.5 parts per million. The radioactive element promethium does not occur freely in nature.

Rare earth metals are widely used in applications such as permanent magnets, metal alloys, glass polishing, glass additives, catalysts, phosphors, and ceramics. The continuous rise in production of end products, for use within the region and for export, drives a huge demand for these materials. Growing demand and policies covering emission control, environmentally friendly products, and the like have led to innovation and development in the industry, making the region a strong chemical hub globally. This growth and innovation, along with industry consolidation, are expected to ensure a bright future for the industry in the region.

China is the major consumer of rare earth metals in Asia-Pacific, accounting for 60% of global rare earth consumption. The key countries covered in the Asia-Pacific rare earth metals market report are China, Japan, and others. The types of rare earth metals studied include lanthanum, cerium, praseodymium, neodymium, samarium, europium, gadolinium, terbium, dysprosium, yttrium, and others.

Further, as part of its qualitative analysis, the report provides a comprehensive review of the important drivers, restraints, opportunities, and burning issues in the rare earth metals market. It also provides an extensive competitive landscape of the companies operating in this market, including company profiles and the competitive strategies adopted by various market players, such as Alkane Resources Ltd (Australia), Arafura Resources Ltd (Australia), Avalon Rare Metals Inc. (Canada), and Baotou Hefa Rare Earth Co. Ltd. (China).

With Market data, you can also customize MMM assessments that meet your company's specific needs.
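Those headline figures follow standard compound-annual-growth-rate arithmetic, and a quick sanity check is worthwhile before the customization options below. Compounding the 2012 tonnage at 9.1% for the six years to 2018 lands within rounding distance of the projected figure:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double start = 95695.38;   /* tons, 2012 estimate   */
        double end   = 161491.5;   /* tons, 2018 projection */
        int years    = 6;

        /* CAGR = (end / start)^(1 / years) - 1 */
        double cagr = pow(end / start, 1.0 / years) - 1.0;
        printf("implied CAGR: %.2f%%\n", cagr * 100.0);

        /* Forward check: grow the 2012 figure at 9.1% per year. */
        printf("2012 tonnage compounded at 9.1%%: %.1f tons\n",
               start * pow(1.0 + 0.091, years));
        return 0;
    }

This prints an implied CAGR of about 9.1% and a compounded figure of roughly 161,400 tons, so the report's numbers are internally consistent.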
Customize to get comprehensive industry standards and deep-dive analysis of the following parameters:

- Market size and forecast (deep analysis and scope)
- Consumption pattern (in-depth trend analysis), by application (country-wise)
- Consumption pattern (in-depth trend analysis), by type of rare earth metal (country-wise)
- Country-wise market trends in terms of both value and volume
- Competitive landscape with a detailed comparison of each company's portfolio, mapped at the regional and country level
- Production data, with information on rare earth metal raw-material suppliers as well as producers at the country level
- Comprehensive data on rare earth metal plant capacities, production, consumption, trade statistics, and price analysis
- Analysis of forward and backward chain integration to understand the prevailing business approach in the Asia-Pacific rare earth metals market
- Detailed analysis of competitive strategies (new product launches, expansions, mergers & acquisitions, etc.) adopted by various companies, and their impact on the Asia-Pacific rare earth metals market
- Detailed analysis of various drivers and restraints, and their impact on the Asia-Pacific rare earth metals market
- Upcoming opportunities in the REE market
- SWOT analysis for top companies in the REE market
- Porter's five forces analysis for the rare earth metals market
- PESTLE analysis for major countries in the rare earth metals market

Please fill in the form below to receive a free copy of the summary of this report. Please visit http://www.micromarketmonitor.com/custom-research-services.html to specify your custom research requirement.
Functions to determine the current geolocation of the device.

This file defines the geolocation service, which provides functions for reading the device's geolocation. To read the geolocation data, the application must have the read_geolocation capability. To grant an application the read_geolocation capability, the bar-descriptor.xml file in the application's project must contain the line "<action>read_geolocation</action>".

Some of these geolocation functions are designed to return boolean values that indicate whether their associated attributes are valid. For example, geolocation_event_is_altitude_valid() indicates whether the altitude from a GEOLOCATION_INFO event is valid. In this context, a valid attribute means that the value of the attribute was included in the last update from the geolocation system.

For example, if the device cannot obtain a GPS fix, but has Wi-Fi connectivity, the geolocation system will report latitude, longitude, and accuracy. The system will not provide values for any other attributes (such as altitude, heading, and so on), and these attributes are marked as not valid. This means that the validity functions for these attributes will return false. Subsequently, if the device obtains a GPS fix, the geolocation system will report values for all attributes, and all attributes are marked as valid. This means that the validity functions for these attributes will return true. If the GPS fix is lost, the attributes other than latitude, longitude, and accuracy are marked as not valid again.
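A minimal consumer of this service might look like the sketch below. It follows the usual BPS pattern -- initialize, request events, block on bps_get_event() -- and demonstrates the validity check described above before trusting the altitude. The calls shown are the geolocation functions named in this documentation plus the standard BPS event-loop calls; this is an illustrative sketch rather than shipping code, so confirm exact signatures against the reference pages, and remember the read_geolocation permission in bar-descriptor.xml.

    #include <stdio.h>
    #include <bps/bps.h>
    #include <bps/geolocation.h>

    int main(void)
    {
        bps_initialize();

        /* Ask for a geolocation report roughly every 5 seconds. */
        geolocation_set_period(5);
        geolocation_request_events(0);

        for (;;) {
            bps_event_t *event = NULL;
            bps_get_event(&event, -1);   /* block until an event arrives */

            if (event && bps_event_get_code(event) == GEOLOCATION_INFO) {
                /* Latitude, longitude, and accuracy are reported for
                 * any kind of fix, GPS or Wi-Fi. */
                printf("lat %f, lon %f, accuracy %f m\n",
                       geolocation_event_get_latitude(event),
                       geolocation_event_get_longitude(event),
                       geolocation_event_get_accuracy(event));

                /* Altitude is only meaningful with a GPS fix, so test
                 * the validity flag before reading the value. */
                if (geolocation_event_is_altitude_valid(event))
                    printf("altitude %f m\n",
                           geolocation_event_get_altitude(event));
            }
        }

        /* Not reached in this sketch; a real application would break
         * out of the loop, then stop events and shut down BPS. */
        geolocation_stop_events(0);
        bps_shutdown();
        return 0;
    }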
A race to restore the voices of the past
By William Jackson - Feb 08, 2012

In the 1880s, Alexander Graham Bell and his associates working in Washington performed some of the earliest experiments with optical transmission and sound recording. Nearly 130 years later, a team of physicists, curators and preservationists is using high-resolution digital imaging to tease the sound out of these and a handful of other experimental recordings.

Six recordings, created by Volta Laboratory Associates between 1880 and 1885, are among nearly 200 recordings Bell deposited with the Smithsonian Institution. They were played back in 2011 using IRENE (for Image Reconstruct, Erase Noise, Etc.), an imaging workstation developed by physicists at the Energy Department's Lawrence Berkeley National Laboratory and programmed to interpret the imaged grooves.

The results are not exactly broadcast quality, said Carlene Stephens, a curator at the Smithsonian's National Museum of American History. "You have to suspend your 21st-century sensibilities of what is good-quality sound," she said. But considering that these recordings were made using a variety of techniques on media including glass, copper, brass and wax, and that no playback equipment was ever created for them, the recovery is remarkable.

The Library of Congress now has two IRENEs, a 2-D and a 3-D version, and is using them to preserve recordings from the library's collection that are in danger of being lost or becoming unplayable. The Library of Congress is the largest library in the world, with more than 147 million items in its collections, and preserving and accessing these recordings is a major challenge. "The bulk of our collections are not books," said Dianne Van der Reyden, the library's director for preservation.

The library's Packard Campus for Audio-Visual Conservation in Culpeper, Va., is home to more than 4 million items, including millions of analog audio recordings in a large variety of formats and media and in varying conditions of preservation. "Few people are researching what to do with this material," Van der Reyden said.

Some media, such as shellac discs, are relatively hardy and can last nearly forever. But some recordings lack playback equipment, some are too fragile to be played, and some are physically deteriorating. "This is a ticking time bomb," she said. "We're in danger of losing much of our culture."

High-energy physics and sound preservation

Because of the danger of losing some recordings forever, "we have really embraced digital preservation," said Peter Alyea, the library's digital conversion specialist. Reformatting old analog recordings to an archival standard can not only preserve them for the foreseeable future but also make them easily available for listening without additional wear and tear to the original. But some way is needed to recover sound from obsolete, damaged or fragile media without damaging them.

Carl Haber, a scientist at Lawrence Berkeley, became intrigued by this challenge in 2000. "I'm a physicist," Haber said. "I work in high-energy physics, and my particular area of interest and expertise is instrumentation" for data collection. He and fellow scientist Earl Cornell put their heads together, and "we saw immediately that there was some relevance of the techniques we were using" to audio conservation.
After some brainstorming and "Saturday experimenting" on the concept, Haber and Cornell came up with promising results and approached the Library of Congress. The library and several other institutions contributed some funding, and Haber, Cornell and the occasional grad student spent the next several years developing IRENE. Additional support came from DOE, the National Archives and Records Administration, the University of California, the Institute of Museum and Library Services, the National Endowment for the Humanities, the Andrew W. Mellon Foundation, and the John Simon Guggenheim Memorial Foundation.

Conceptually, the idea was simple: Use a scanner to produce a high-resolution digital image of the grooves in a record, cylinder or other recording medium, showing the details in three dimensions. Then, clean the images up to compensate for imperfections, wear, damage or errors in the imaging. "With that information, we have algorithms that can calculate how the needle would move through that," Haber said. Those virtual movements then can be used to duplicate the sounds that would be produced by a real needle or stylus.

The evolution of IRENE

"The first demonstration was pretty easy," Haber said. "It followed fairly directly." But the devil was in the details, and making it work well on objects of different shapes with recordings in different formats, without constantly having to tweak the software, was more complex. Although it is not yet as user-friendly as it could be, IRENE has become a versatile tool. "As long as there is something like a groove, we have parameters built into it to adjust for the basic things that characterize it, such as size, depth, et cetera," Haber said.

By 2006, the first 2-D IRENE was installed at the Library of Congress for production use. The 3-D version was installed in 2009. It is capable of producing images of grooves in three dimensions, providing more information on the depth and height of grooves.

IRENE has now been used to digitally read and copy hundreds of rare recordings, preserving them digitally while gathering information on the technology's strengths and weaknesses and its possible uses. "We are in the middle of a broad study of how well we can tune IRENE," Alyea said. It has proved capable of accurately imaging and reproducing many types and shapes of recordings, and because it does this noninvasively, it does not risk the damage that could be done by playing a disc or cylinder with a traditional stylus.

It is not perfect, however. "It doesn't get quite the fidelity you would get out of a turntable and stylus," Alyea said. So, for stable, robust recordings such as shellac discs, and for more recent high-fidelity disc recordings from the 1960s onward, it probably makes more sense to continue playing them the traditional way.

Simple tech, complex art

For the time being, getting the best results out of IRENE remains something of an art. "In some ways it's fairly simple technology, but it is also complex," Alyea said. "Sometimes it works really well," but wear and variations in different types of recordings can degrade results and require additional tuning or tweaking of the software. The software is being refined so that it does not have to be tweaked and tuned for each type of recording and can be used more easily by people without technical expertise. Over the past year, the library has collected 4 terabytes of data with IRENE from recordings of many different formats and conditions.
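The "virtual stylus" step Haber describes can be caricatured in a few lines of code. For a laterally cut groove, the recorded signal is essentially the stylus's sideways velocity, so once image processing has reduced a groove to a sequence of lateral positions along its length, a first difference yields a playable waveform. The sketch below, with made-up displacement samples, illustrates only that final step; it is nothing like IRENE's actual processing chain, which must first find and clean the groove in very large images.

    #include <stdio.h>

    /* Hypothetical groove lateral displacement along its length,
     * one sample per image column, in micrometers. */
    #define N 8
    static const double groove_um[N] = {
        0.0, 1.8, 3.1, 2.4, 0.5, -1.9, -3.0, -1.6
    };

    int main(void)
    {
        /* For velocity-cut media the signal is the first difference
         * of the traced groove position. */
        double audio[N - 1];
        double peak = 0.0;

        for (int i = 0; i < N - 1; i++) {
            audio[i] = groove_um[i + 1] - groove_um[i];
            if (audio[i] > peak)  peak = audio[i];
            if (-audio[i] > peak) peak = -audio[i];
        }

        /* Normalize to a +/-1.0 PCM-style waveform and print. */
        for (int i = 0; i < N - 1; i++)
            printf("%6.3f\n", peak > 0.0 ? audio[i] / peak : 0.0);

        return 0;
    }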
One of the possible additional uses of IRENE is to analyze recordings before they are played with a stylus, so that technicians can tell in advance what the best shape and style of stylus would be. This currently is often determined by trial and error, which adds to the wear and tear on old recordings. "We've had some successes and some failures," Alyea said. "It certainly is getting better."

One of IRENE's successes came in 2008, when it was able to recover the contents of what is believed to be the first sound recording, made on smoked paper in France in the 1860s, well before Thomas Edison's invention of the phonograph in 1877. This recording was made as an experiment to show that sound travels in waves and was never intended to be played back. Using two-dimensional imaging, IRENE was able to read the tracings on the paper and reproduce the sound.

When Stephens read of this, she thought of the 200 Volta Lab recordings locked away at the Smithsonian. "This is what I've been waiting for 35 years" to hear, she said.

Bell's Volta Lab

The Volta Laboratory was established in the Georgetown area of Washington by Alexander Graham Bell, his chemist cousin Chichester Bell, and Charles Sumner Tainter in 1880. Over the next several years, they experimented with the transmission and recording of sound. Using revenue from the lab, Bell was able in 1887 to found the Volta Bureau, an institute to aid people with speech and hearing disabilities.

The 1880s was a period of innovation in recording and intense competition among Bell, Edison and Emile Berliner. To document their work and support patent claims, these inventors deposited about 400 early recordings with the Smithsonian, along with notes and other records of the experiments. Some of the documentation is also housed in the Library of Congress.

Some of the Volta experiments were commercially successful. The graphophone, which recorded on a wax-covered cylinder, became a popular business tool for dictation and eventually evolved into the Dictaphone. "But most of the recordings in the collection predate any kind of commercially available recording or playback systems," Stephens said. "They were mute artifacts," well cared for but incapable of being played.

IRENE, with its noninvasive, format-agnostic approach, offered hope of unlocking the old recordings, and Stephens approached the Library of Congress about the project. The library collaborated with Haber on a pilot program to demonstrate whether it was possible to recover the audio. "Nothing about it was easy," Stephens said. "The challenge was to tune the equipment for the nonstandard formats."

Deciphering the earliest recordings

One of the earliest recordings recovered was from a photographic glass disc made in 1884. Bell had experimented with different techniques to modulate a beam of light by width and intensity, which was recorded in a spiral on the disc. IRENE was able to treat the images like the grooves on a record. The disc's label identified it as a man saying "barometer," but it took a while to identify the sound because the man was saying the word one syllable at a time -- "ba-ro-me-ter."

A wax recording of Hamlet's soliloquy on a brass cylinder was easier to identify, Stephens said. "Right up front you could hear 'To be or not to be.'" A recording of "Mary Had a Little Lamb" was a little harder to make out but still possible to hear.
Six of the old Bell recordings have been played back in the feasibility study, and now Stephens would like to see a broader program to recover early recordings in the Smithsonian's collection. "I don't think it is feasible to do all of them," she said. "Some are too fragile or too partial to get sound from." She estimated that about half of the 200 Volta recordings are good prospects for playback, and some of the others are possibilities.

Some experts maintain that Bell made some recordings himself. "We would love to get confirmation of a recording of Alexander Graham Bell's voice," Stephens said. There now are no known samples of it, so identifying it on a recording would not be easy. But the Library of Congress has the lab notes of Bell and his cousin Chichester, and the Smithsonian has Tainter's notes. "It's a matter of collating the information."
The success of Ethernet has driven a diverse and ever-changing set of applications involving various Ethernet data rates over different media. Historically, the pursuit of higher speeds was paramount, but we have also been working to enable existing Ethernet rates over new distances for specific applications, and to enable new Ethernet rates at intermediate speeds better optimized to developing applications.

By creating standards for different Ethernet data rates that optimize limited resources, like the number of pins, or lanes, on a chip, we're creating a deeper toolkit that provides the network and computing domains with Ethernet rates and reaches that will cost-effectively suit their specific applications.

The work of building out "families" of Ethernet standards continues, and the IEEE Standards Association (IEEE-SA) recently initiated two new IEEE 802.3 projects, as well as the modification of an existing standard. These projects will deliver standards that support additional Ethernet rates over various media in a cost-effective manner for a range of applications in networking and computing. At this point, task forces have been established and technical decisions are being made. Because IT executives must plan well ahead for technology transitions and costs, an update on the status of this work and its implications is due.

The three new projects include:

• IEEE P802.3cc, 25Gbps Ethernet over single-mode fiber
• IEEE P802.3cd, 50Gbps, 100Gbps and 200Gbps Ethernet
• IEEE P802.3bs, 200Gbps and 400Gbps Ethernet

The IEEE P802.3cc project will complete the 25 GbE family of physical layer specifications (PHYs). The data center market for 25 GbE short-reach connections over copper and multi-mode fiber is currently in full swing. The IEEE P802.3cc 25 Gb/s over Single-Mode Fiber Task Force will develop new 10 km and 40 km PHYs over single-mode fiber for 25 GbE. A standard supporting single-lane signaling at 25Gbps will help lower costs for this application. The timeline for completion of this standard is, roughly, the second half of 2017.

The IEEE P802.3cd Task Force will develop the new 50Gbps Ethernet rate as well as a set of PHYs for 50 GbE, 100 GbE and 200 GbE that can cost-effectively leverage common 50Gbps optical and electrical signaling technologies. The timeline for completion of IEEE P802.3cd is, roughly, the first half of 2018.

The IEEE P802.3bs 400Gbps project is expanding its scope. Originally, 400 GbE was defined by creating 50Gbps single-lane technology and multiplexing eight lanes together. The creation of 50Gbps single-lane signaling was the catalyst for the creation of the IEEE P802.3cd project. During the creation of that project, however, it was realized that the synergies between 200 GbE single-mode fiber PHYs and the P802.3bs project meant it made more sense to include them there to accelerate completion. The IEEE P802.3bs project modification therefore expands the project to include the definition of the new 200 GbE rate and 200Gbps single-mode fiber PHYs within its scope. The timeline for completion of the IEEE P802.3bs project is late 2017.

Copper, multimode fiber and single-mode fiber PHYs will be developed for all three Ethernet rates. It's worth noting that the success of IEEE 802 standards has always been based on their open, transparent, inclusive development process, conducted by IEEE-SA with full participation by industry.
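The family-building described above all reduces to lane count times per-lane signaling rate, which is why one common 50Gbps lane pays off across three Ethernet speeds. The breakdowns in the tabulation below reflect the lane counts discussed in the text; the actual standards define many PHY variants per rate, so this is illustrative rather than exhaustive.

    #include <stdio.h>

    typedef struct {
        const char *rate;
        int lanes;
        int gbps_per_lane;
    } phy_t;

    int main(void)
    {
        /* Lane breakdowns discussed in the text (illustrative only). */
        const phy_t phys[] = {
            { "25GbE",  1, 25 },
            { "50GbE",  1, 50 },
            { "100GbE", 2, 50 },
            { "200GbE", 4, 50 },
            { "400GbE", 8, 50 },
        };
        int n = (int)(sizeof phys / sizeof phys[0]);

        for (int i = 0; i < n; i++)
            printf("%-6s = %d lane(s) x %d Gbps = %d Gbps\n",
                   phys[i].rate, phys[i].lanes, phys[i].gbps_per_lane,
                   phys[i].lanes * phys[i].gbps_per_lane);
        return 0;
    }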
These latest IEEE 802.3 Ethernet Working Group projects will address the increasing needs for speeds and reach targeted at specific applications, and help ensure best practices are implemented through the principles of standardization.

If you're an IT executive eyeing your technology roadmap, rest assured that industry participants are working to address your needs, and we now have timetables for the delivery of standards that will enable the solutions you need in your toolkit.

This story, "The IEEE Standards Association initiates three new Ethernet projects" was originally published by Network World.
Infrastructure built during President Abraham Lincoln's administration in the 1860s still provides Washington, D.C., with water -- this includes the water citizens use every day and what firefighters tap into to keep residents and property safe. The district's water assets include approximately 1,800 miles of sewer lines, 36,000 valves and 9,000 public fire hydrants.

Until 1999, the District of Columbia Water and Sewer Authority (WASA) managed these assets the old-fashioned way, too: on paper. The authority had 40 computers and a single e-mail address, said CIO Mujib Lodhi. WASA needed modernization, especially for the community's safety. Although mapping water assets on paper showed authorities what size water main hooks up to a specific fire hydrant -- which dictates the water flow available from the hydrant -- that information couldn't be accessed quickly during an emergency response.

According to a Washington Post article, firefighters waited 40 minutes at a July 2009 residential fire before a WASA representative arrived to direct them to larger water mains. The low water pressure on some hydrants forced firefighters to try hydrants on other blocks and bring in reinforcements from a neighboring county.

"Our fire hydrants were in extreme disarray -- over the years they had been neglected. No maintenance and limited replacement were done on the fire hydrants themselves," said Lt. Sean Egan, hydrant inspection coordinator for the District of Columbia Fire and Emergency Medical Services department. "So we [worked] with the water authority, started building systems and trying to figure out how to correct these measures, implement testing measures, and collect data and exchange it with the water authority."

To help combat information-sharing issues and provide WASA with up-to-date information about its water assets, the authority collaborated with IBM's Global Business Services and Research to integrate analytics with asset-management software. Now WASA and the fire department share hydrant information in near real time, with databases updated hourly.
Black Box Explains Machine Vision

Machine vision technology—the image-based automatic inspection process—has matured greatly and is now becoming an indispensable tool in manufacturing to increase quality and profitability. USB 3.0, with its 5-Gbps throughput and ability to send power and data over the same line, has greatly contributed to this growth.

What is machine vision?

Machine vision is a system often used in assembly line applications that incorporates cameras, computers, software, and other hardware to automatically take pictures and inspect materials. Machine vision uses a small industrial camera and lights mounted near an assembly line to take pictures of product as it passes. The images are then analyzed by software to determine whether various aspects of the product meet acceptable specifications. For instance, if a label is misplaced, the bottle will be rejected. All of this is done at incredibly high speeds—fractions of a second.

Machine vision is an indispensable tool for quality assurance, sorting, and material handling in every industry, including electronics, food processing, pharmaceuticals, packaging, and automotive. It is an economical way to make sure sub-spec product is rejected. Machine vision can be used to inspect for geometry, placement, packaging, labeling, seal integrity, finish, color, pattern, bar codes, and almost any other parameter.

USB 3.0 and machine vision

USB 3.0 brings a number of advantages to machine vision systems. With 5 Gbps of throughput—ten times that of USB 2.0—it provides the stability and low latency needed for image transmission and camera control, and it enables the transmission of higher-resolution, higher-frame-rate video with no loss of quality. USB 3.0 also sends data and power on the same line, enough to power a camera without worrying about a separate power supply or power line. In addition, compared to older systems, USB 3.0 is plug-and-play, making it easy to swap out cameras and other hardware, such as USB 3.0 extenders and hubs.
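At its core, the "analyze and reject" step is a pass/fail predicate computed over pixels. The fragment below shows the shape of such a check: compute a statistic over a region of interest and compare it against spec limits. The buffer layout, thresholds, and reject action are all hypothetical; production systems use calibrated vision libraries, but the decision logic bottoms out in tests like this one.

    #include <stdio.h>
    #include <stdint.h>

    /* Pass/fail check: mean brightness of a region of interest must
     * fall inside spec limits (e.g., "is the label present?"). */
    static int roi_within_spec(const uint8_t *img, int stride,
                               int x, int y, int w, int h,
                               double lo, double hi)
    {
        double sum = 0.0;
        for (int r = 0; r < h; r++)
            for (int c = 0; c < w; c++)
                sum += img[(y + r) * stride + (x + c)];
        double mean = sum / (double)(w * h);
        return mean >= lo && mean <= hi;
    }

    int main(void)
    {
        /* Tiny stand-in for a camera frame: an 8x8 grayscale image
         * with a bright 4x4 "label" patch in the middle. */
        uint8_t frame[8 * 8] = {0};
        for (int r = 2; r < 6; r++)
            for (int c = 2; c < 6; c++)
                frame[r * 8 + c] = 200;

        if (roi_within_spec(frame, 8, 2, 2, 4, 4, 150.0, 255.0))
            printf("PASS\n");
        else
            printf("REJECT\n");  /* a real line would trigger an ejector */
        return 0;
    }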