Heart Facts - Interesting Facts About Human Heart
Let's get straight to the heart of the matter. The heart's job is to move blood. Here is a collection of amazing and interesting facts about the human heart.
Facts About Human Heart
Day and night, the muscles of your heart contract and relax to pump blood throughout your body. When blood returns to the heart, it follows a complicated pathway. If you were in the bloodstream, you would follow the steps below one by one.
- Oxygen-poor blood (often drawn in blue in heart diagrams) flows from the body into the right atrium.
- Blood flows through the right atrium into the right ventricle.
- The right ventricle pumps the blood to the lungs, where the blood releases waste gases and picks up oxygen.
- The newly oxygen-rich blood (usually drawn in red) returns to the heart and enters the left atrium.
- Blood flows through the left atrium into the left ventricle.
- The left ventricle pumps the oxygen-rich blood to all parts of the body.
Do right and left seem backward? That's because heart diagrams show somebody else's heart facing you. To think about how your own heart works, imagine wearing such a diagram on your chest.
Sure, you know how to steal hearts, win hearts, and break hearts. But how much do you really know about your heart and how it works? Read on to your heart's content.
Put your hand on your heart. Did you place your hand on the left side of your chest? Many people do, but the heart is actually located almost in the center of the chest, between the lungs. It's tipped slightly so that a part of it sticks out and taps against the left side of the chest, which is what makes it seem as though it is located there.
Hold out your hand and make a fist. If you're a kid, your heart is about the same size as your fist, and if you're an adult, it's about the same size as two fists.
Interesting Facts About Human Heart
- Your heart beats about 100,000 times in one day and about 35 million times in a year. During an average lifetime, the human heart will beat more than 2.5 billion times.
- Give a tennis ball a good, hard squeeze. You're using about the same amount of force your heart uses to pump blood out to the body. Even at rest, the muscles of the heart work hard - twice as hard as the leg muscles of a person sprinting.
- Feel your pulse by placing two fingers at pulse points on your neck or wrists. The pulse you feel is blood stopping and starting as it moves through your arteries. As a kid, your resting pulse might range from 90 to 120 beats per minute. As an adult, your pulse rate slows to an average of 72 beats per minute.
- The aorta, the largest artery in the body, is almost the diameter of a garden hose. Capillaries, on the other hand, are so small that it takes ten of them to equal the thickness of a human hair.
- Your body has about 5.6 liters (6 quarts) of blood. This 5.6 liters of blood circulates through the body three times every minute. In one day, the blood travels a total of 19,000 km (12,000 miles) - that's four times the distance across the US from coast to coast.
- The heart pumps about 1 million barrels of blood during an average lifetime - that's enough to fill more than 3 super tankers.
- Lub-DUB, lub-DUB, lub-DUB. Sound familiar? If you listen to your heart-beat, you'll hear two sounds. These "lub" and "DUB" sounds are made by the heart valves as they open and close.
Raising new privacy concerns, research shows that the DNA signatures of bacteria transferred to objects by human touch can be used for identification.
Scientists at the University of Colorado at Boulder have found that the bacteria trail left behind on objects like computer keyboards and mice can be analyzed and used to help identify users of those devices.
"Your body is coated with bacteria inside and out," says CU-Boulder assistant professor Noah Fierer in a video on YouTube. "You're basically a walking microbial habitat. And we found that the diversity of bacteria just on the skin surface is really pretty incredible. You habor hundreds of different bacteria species just on your palm, for example. We've also found that everybody is pretty unique. So of those let's say hundred or so bacteria species, very few are of them are shared between individuals."
What Fierer and his colleagues have demonstrated in a new study is that the distinctive combination of bacteria each of us carries and distributes can be used to help identify what we've touched.
Such work may one day help link individuals to malicious computer use or other crimes.
The study, "Forensic identification using skin bacterial communities," appears in the March 15 Proceedings of the National Academy of Sciences. It describes how the researchers swabbed bacterial DNA from the keys of three personal computers and matched them to the bacteria on the fingertips of the owners of the keyboards. It also details a similar test conducted on computer mice that had not been touched for over 12 hours.
The study indicates that the technique is 70% to 90% accurate and Fierer expects that accuracy will improve as the technique is refined. Until accuracy is extremely high, the technique is most likely to be useful as a way to augment more established forensic techniques, like fingerprinting and DNA identification.
"There's still a lot of work we need to do to assess the validity of the technique and how well we can recover bacteria from surfaces and how well we can match objects to the individual how touched that object," Fierer explains in the video.
In a University of Colorado at Boulder news release, Fierer said that the new technique raises bioethical issues, including privacy. "While there are legal restrictions on the use of DNA and fingerprints, which are 'personally-identifying,' there currently are no restrictions on the use of human-associated bacteria to identify individuals," he said.
Packet sniffing is a technique for monitoring network traffic. It is effective on both switched and non-switched networks. In a non-switched network environment, packet sniffing is easy because traffic is sent to a hub, which broadcasts it to every host. Switched networks are completely different in the way they operate.
Switches work by sending traffic to the destination host only. This happens because switches have CAM tables. These tables store information like MAC addresses, switch ports, and VLAN information. Before sending traffic from one host to another on the same local area network, the source host's ARP cache is first checked. The ARP cache is a table that stores both Layer 2 (MAC) addresses and Layer 3 (IP) addresses of hosts on the local network. If the destination host isn't in the ARP cache, the source host sends a broadcast ARP request looking for the host. When the host replies, the traffic can be sent to it. The traffic goes from the source host to the switch, and then directly to the destination host. Because traffic isn't broadcast out to every host but is delivered only to the destination host, it's harder to sniff.
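To make the ARP exchange described above concrete, here is a minimal sketch that passively listens for ARP traffic on the local segment. It uses the third-party Scapy library purely as an illustration (the paper itself does not prescribe any particular tool), and it must run with privileges that allow raw packet capture.

```python
# Minimal sketch (illustration only, not from the paper): observe ARP traffic with Scapy.
from scapy.all import ARP, sniff

def show_arp(packet):
    """Print who-has requests and is-at replies as they appear on the segment."""
    arp = packet[ARP]
    if arp.op == 1:    # who-has (request) - broadcast, visible even on a switch
        print(f"Request: {arp.psrc} asks who has {arp.pdst}")
    elif arp.op == 2:  # is-at (reply) - unicast back to the requester
        print(f"Reply:   {arp.psrc} is at {arp.hwsrc}")

# Capture ten ARP frames and stop; store=False avoids keeping them in memory.
sniff(filter="arp", prn=show_arp, count=10, store=False)
```

Note that on a switched network such a passive listener sees broadcast ARP requests but, as explained above, not the unicast traffic exchanged between other hosts - which is exactly why the methods discussed in the paper are needed.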
This paper discusses several methods that result in packet sniffing on Layer 2 switched networks. Each of the sniffing methods will be explained in detail. The purpose of the paper is to show how sniffing can be accomplished on switched networks, and to understand how it can be prevented.
Pullar D. (DairyCo) and Allen N. (EBLEX). Nutrition Bulletin, 2011.
Meat and milk production generally gets a bad press when it comes to discussions around climate change and food production. However, with a rising population, a balance has to be found between the environmental cost of production and the benefits in terms of food security. Consequently, sustainable production is the byword. The beef, sheep meat, pig meat and dairy industries all have specific challenges in work to reduce their environmental impact. This paper examines some of these challenges, the research which has been undertaken into their scale, routes being exploited to improve efficiency and what gains have already been achieved. It also looks at some of the factors that help offset the environmental costs of dairy and livestock production and ensure we are making the most efficient use of available land.
VanRaden P.M., Null D.J. and Wiggans G.R. (U.S. Department of Agriculture), Sargolzaei M. (University of Guelph), and 10 more authors. Journal of Dairy Science, 2013.
Genomic evaluations for 161,341 Holsteins were computed by using 311,725 of 777,962 markers on the Illumina BovineHD Genotyping BeadChip (HD). Initial edits with 1,741 HD genotypes from 5 breeds revealed that 636,967 markers were usable but that half were redundant. Holstein genotypes were from 1,510 animals with HD markers, 82,358 animals with 45,187 (50K) markers, 1,797 animals with 8,031 (8K) markers, 20,177 animals with 6,836 (6K) markers, 52,270 animals with 2,683 (3K) markers, and 3,229 nongenotyped dams (0K) with >90% of haplotypes imputable because they had 4 or more genotyped progeny. The Holstein HD genotypes were from 1,142 US, Canadian, British, and Italian sires, 196 other sires, 138 cows in a US Department of Agriculture research herd (Beltsville, MD), and 34 other females. Percentages of correctly imputed genotypes were tested by applying the programs findhap and FImpute to a simulated chromosome for an earlier population that had only 1,112 animals with HD genotypes and none with 8K genotypes. For each chip, 1% of the genotypes were missing and 0.02% were incorrect initially. After imputation of missing markers with findhap, percentages of genotypes correct were 99.9% from HD, 99.0% from 50K, 94.6% from 6K, 90.5% from 3K, and 93.5% from 0K. With FImpute, 99.96% were correct from HD, 99.3% from 50K, 94.7% from 6K, 91.1% from 3K, and 95.1% from 0K genotypes. Accuracy for the 3K and 6K genotypes further improved by approximately 2 percentage points if imputed first to 50K and then to HD instead of imputing all genotypes directly to HD. Evaluations were tested by using imputed actual genotypes and August 2008 phenotypes to predict deregressed evaluations of US bulls proven after August 2008. For 28 traits tested, the estimated genomic reliability averaged 61.1% when using 311,725 markers vs. 60.7% when using 45,187 markers vs. 29.6% from the traditional parent average. Squared correlations with future data were slightly greater for 16 traits and slightly less for 12 with HD than with 50K evaluations. The observed 0.4 percentage point average increase in reliability was less favorable than the 0.9 expected from simulation but was similar to actual gains from other HD studies. The largest HD and 50K marker effects were often located at very similar positions. The single-breed evaluation tested here and previous single-breed or multibreed evaluations have not produced large gains. Increasing the number of HD genotypes used for imputation above 1,074 did not improve the reliability of Holstein genomic evaluations.
Atkinson O.C.D. (Dairy Veterinary Consultancy Ltd.). Cattle Practice, 2014.
Mobility scoring cows has two broad functions:
- A measure of lameness prevalence
- A method of finding cows for treatment
There is potentially a third function: a motivator for producers to reduce lameness. Practically, very few other options exist for either measuring lameness or detecting new lame cows early, yet mobility scoring is consistently undervalued by producers, and the practicalities of doing it well are considerable. This short communication (to accompany the workshop) explores some of the challenges vets and producers face implementing mobility scoring, and opportunities for incorporating mobility scoring into a vet practice service offering which can become an integral part of lameness management.
Agency: GTR | Branch: BBSRC | Phase: Research Grant | Award Amount: 546.67K | Year: 2012
Ruminant animals, including cattle, sheep and goats, rely on microbial activity in their digestive tract to digest grass and other forages that they consume. A balanced, stable digestion (fermentation) is essential for good growth or milk production. Most livestock producers require productivity higher than that which can be sustained by forage feeding alone, and include some grain in the diet to increase production rates. Gut microbes produce acids more rapidly from the starch in grain than the cellulose in forages, leading to lower pH values prevailing in grain-fed animals. This has adverse effects on the microbes, which require near-neutral pH to perform optimally. This sub-acute ruminal acidosis (SARA) is a major economic and health issue in ruminant livestock production. Animals suffering SARA are less productive, and they suffer from necrosis of the rumen wall, liver abscesses and laminitis. SARA is often difficult for the farmer to detect - it is sub-acute and can only be detected easily at slaughter. SARA is an under-researched condition, such that only a small number of papers have addressed the dietary and microbiological causes of SARA and its underlying pathology, particularly concerning the role of the large intestine. This project aims to understand why SARA is prevalent on some farms but not others, an observation that is common knowledge but not well documented. Farm management conditions and nutrition will be monitored in these farms, and the animals will be followed to slaughter, when the extent of pathological damage will be assessed. Samples of ruminal digesta and wall tissue will be taken for analysis and tissue necrosis, abscesses and laminitis will be scored. SARA also affects some animals but not others within a herd. Remote motion-sensing technology will be used to externally monitor movements, such as rumination activity, that may alert livestock producers to problematic animals. Post mortem analysis will also be carried out on these animals. The root cause of SARA lies in altered gut microbiology. Digesta samples will be taken forward to describe the microbes that are present in the rumen and intestine in susceptible and non-susceptible animals, with the idea that some microbial species may be particularly important in causing the disease while others may be protective. Candidate probiotic bacteria isolated from non-susceptible animals will be investigated with a view to developing them as feed additives. The role of soluble lipopolysaccharide (LPS) in the inflammation will be investigated. LPS is released when bacteria lyse - it is known as endotoxin in human medicine. Materials that may bind soluble LPS to prevent inflammation will also be investigated as potential feed additives. The overall aims are to explain the underlying mechanism of pathogenesis of SARA, to investigate if microbiome analysis can predict the severity of SARA, and to develop simple, non-invasive methods for monitoring animal behaviour relating to SARA and preventing the condition. Three academic partners, three complementary companies, Quality Meat Scotland and DairyCo are involved in the project. The industrial partners will ensure that relevance to the livestock industry is maintained throughout the project and that the pathway to impact will be short and rapid.
Huxley J.N. and Archer S.C. (University of Nottingham), Atkinson O.C.D. (Dairy Veterinary Consultancy Ltd.), Bell N.J. (Lane College), and 11 more authors. Cattle Practice, 2014.
Lameness in dairy cattle remains at unacceptably high levels. The claw horn lesions (sole haemorrhage/sole ulceration and white line disease) are two of the most important causes of lameness in the UK. Prompt identification and early and effective treatment to reduce the period over which animals are lame is one of the cornerstones of reducing prevalence on farm. This paper will review recent work conducted on the treatment of claw horn lesions in dairy cattle and identify challenges which must be overcome to ensure these important diseases are effectively managed on farm.
Memory and Storage
Lots of Memory
The standard and maximum amount of RAM is a common entry in printer spec sheets, but unless you know what it's used for (holding print jobs in a queue, rasterizing additional pages while other pages print, or something else altogether) it doesn't tell you much. More important, it doesn't tell you what you'll gain from adding the maximum amount.
Hard drives will almost always show up in a spec sheet if a printer includes one, whether as standard or as an option. As with memory, however, drives can be used for any number of different functions. Unless the spec sheet tells you what the printer uses the drive for, it's not telling you anything useful.
Prints from USB Key
Printing files from a USB key is a useful convenience, but it's important to know which file formats the feature works with. Printing JPG files, for example, won't be as useful in most offices as printing PDF files. On the other hand, printing JPGs may be the better choice for businesses that use photos, including, for example, real estate.
Communication Styles and Strategies
Identify your social style and those of others to become a more effective communicator.
There are four basic social styles: driver, analytical, amiable, and expressive. Knowing your dominant social style and the styles of others can help you become a more effective communicator. In this course, you'll learn how to identify your social style and that of others. You'll explore the positive and negative characteristics associated with each social style and the possible conflicts that may arise between people with different social styles. You'll also learn strategies for moving from an overly passive or aggressive communication style to an appropriate level of assertiveness.
Virtual short courses do not include materials or headsets.
The Semantic Web has been talked about for more than a decade. Over those years, several mistaken or misleading ideas about the Semantic Web have repeatedly popped up. This lesson looks at some of the most pervasive of these misconceptions and discusses both the confusion and the reality of the situation.
After completing this lesson, you will know:
- Whether ontologies should always be reused.
- How Semantic Web relates to query federation.
- Whether you need up-front agreement on ontologies or vocabulary to be successful.
- Whether Semantic Web tools necessarily replace existing systems.
- The real relationship between the Semantic Web and Artificial Intelligence.
- How Semantic Web relates to natural-language processing.
As semantic technologies have begun to move more and more into the public sphere, questions have naturally arisen about what exactly Semantic Web technologies are, how they work, and how they might interact with existing technologies. Some of the suggested answers to these concerns are more accurate and helpful than others. Separating fact from fiction may help clarify our understanding of the topic. With that in mind, here is the fact behind some common Semantic Web misconceptions:
Ontology reuse is a double-edged sword. To be sure, being able to reuse others' work to carefully model and define concepts and relationships for a particular topic can definitely have great value in certain circumstances. However, unless the scope and granularity of the information with which you are working lines up almost precisely with the existing ontology, you will have to work to translate your world view to that of the existing ontology. If you are reusing a large ontology, you will most likely find that you have to wade through hundreds of classes and properties in which you are not interested in order to reuse even a small fraction of the ontology.
On the other hand, creating a new ontology from scratch is not necessarily a bad thing. The resulting ontology will be well-tailored to your specific use cases and will align well with the ways in which you wish to present your data. Your application will involve fewer layers of translation from your source data, and the new ontology is more likely to be a true model of the information. The biggest cost of not reusing an ontology is the cost of developing a new one; however, many tools are now available that will do much of the "heavy lifting" for you, particularly if you are starting with existing information in a database, spreadsheet, XML file, or some other structured source.
Keep in mind that at least one situation exists where you should definitely try to reuse an existing ontology: if you find that an ecosystem of 3rd-party tools is available that know how to access and display information for a particular ontology, then it would be best for you to reuse that ontology if at all possible. By doing so, you will be able to apply these tools to your data without any additional work.
Generally speaking, two approaches to ontology development are available: top-down and bottom-up.
In top-down development, you begin by getting agreement on the core concepts in your domain and then build out a single model, one likely to be agreed upon by most people who might use the ontology. Eventually, individual communities of users can specialize those top-down ontologies by extending the concepts within them to meet their particular needs. Top-down ontology development is appealing because if everyone agrees from the beginning, then everyone will be able to reuse the same concepts, and the resulting data and software will all work well together.
Unfortunately, top-down ontology development is usually not practical. For one thing, it requires that you get all of the people who will initially need to buy in to the ontology to the same table. Additionally, it means negotiating the usually delicate balance amongst the many vested and entrenched interests of various people and organizations, which have often invested significant time or money in their various conflicting world views. By the time all is said and done, top-down ontology development can be an expensive proposition that takes months or even years to complete.
Fortunately, Semantic Web ontology standards (such as RDFS and OWL) are designed to also be used in a bottom-up approach. Here, individual users or communities of users can each develop their own small ontologies that suit their current needs. Later, ontology developers can use mechanisms provided by the Semantic Web technology stack to bridge and relate various elements of competing ontologies whenever an application comes along that needs to integrate information that has been modeled using two different ontologies. In this way, bottom-up ontology development effectively amortizes the cost of smoothing over different world views and in doing so allows everyone's ontologies to be developed and used in a quicker and much more agile way.
The origins of this misconception are fairly easy to understand. A major focus of Semantic Web technologies is the attempt to make it possible to integrate heterogeneous data across many sources. Furthermore, information in the Semantic Web is identified by means of a URI. In addition, SPARQL—the query language of the Semantic Web—lets developers pick and choose what sources of information should be searched for the answers to a query. Therefore, it is somewhat natural to conclude that a fundamental characteristic of Semantic Web applications is that they access data via federated (or distributed) queries.
In reality, the choice of data technology (i.e., Semantic Web vs. relational vs. something else) and the choice of integration paradigm (i.e., federation/EII vs. warehouse/ETL vs. something in between) are independent. People can (and do) perform federated data access using relational technology. Moreover, people can (and do) build ETL pipelines that populate Semantic Web warehouses.
Generally speaking, a warehouse/ETL approach provides better interactive query performance, eliminates runtime complexity, and guarantees consistency between information from different data sources. A federated query approach, on the other hand, avoids copying any data prematurely and can preserve source data security contexts. In both cases, choosing a Semantic Web data model gives additional flexibility that simplifies the process of extending and refining the integrated data model.
The association between artificial intelligence and the Semantic Web has a long history. The scenarios put forth in the 2001 Scientific American article that introduced the Semantic Web to the world involved a level of automated decision-making that seemed straight out of an AI textbook. Discussions of ontologies, inference, and description logics merely added to the confusion.
However, to equate Semantic Web with AI is to focus on the semantic aspects while ignoring the Web. In reality, Semantic Web technologies are as much (if not more) about the data as they are about reasoning and logic. RDF, the foundational technology in the Semantic Web stack, is a flexible graph data model that does not involve logic or reasoning in any way. In fact, for many people and applications, RDF is all they need (one example of this scenario is the Linked Data community). Even the parts of the Semantic Web technology stack that deal with reasoning and inference are grounded in well-understood formal semantics and can usually be expressed via straightforward sets of rules. As such, they lack both the complexity and the opacity of artificial intelligence approaches that are based on machine learning and neural models.
For more on this topic, see Applying the Semantic Web: Two Camps.
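To underline the point that RDF by itself is just a flexible graph data model with no reasoning involved, here is a minimal sketch that builds a tiny graph and runs a simple SPARQL query over it. It uses the third-party Python rdflib package and invented example URIs; neither is prescribed by this lesson.

```python
# Minimal sketch: RDF as a plain graph of (subject, predicate, object) triples.
# Uses the third-party 'rdflib' package; the example.org URIs are invented.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")
g = Graph()

g.add((EX.alice, RDF.type, FOAF.Person))
g.add((EX.alice, FOAF.name, Literal("Alice")))
g.add((EX.alice, FOAF.knows, EX.bob))
g.add((EX.bob, FOAF.name, Literal("Bob")))

# A simple SPARQL query: whom does Alice know?
results = g.query("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name WHERE {
        <http://example.org/alice> foaf:knows ?friend .
        ?friend foaf:name ?name .
    }
""")
for row in results:
    print(row.name)  # -> Bob
```

Nothing here infers anything; the query simply walks the graph, which is all that many Linked Data applications need.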
The Semantic Web technology stack is designed to be non-disruptive. This family of technologies provides the flexibility and expressiveness required to integrate a variety of data from a number of different sources; they're not designed to replace existing transactional databases, CRM systems, or XML Web Services. Instead, Semantic Web solutions take an overlay approach that virtualizes information from existing (non-semantic) source systems, imports that information into the Semantic Web data model, and then links together information between various connected systems.
To this end, the Semantic Web technology stack includes standards explicitly developed to help map data in legacy systems to RDF:
- R2RML is a markup language that allows you to specify how to map data from a relational database schema to RDF; the sketch after this list illustrates the row-to-triples idea that such a mapping automates.
- GRDDL is a standard for associating XML documents with transformations that can be automatically run to convert XML into RDF.
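As a rough illustration of what such a mapping accomplishes - and only an illustration, since real R2RML mappings are written declaratively rather than in Python - the sketch below reads rows from an assumed relational table and emits equivalent RDF triples. The database file, table, columns, and URIs are all invented for the example.

```python
# Illustration only: the row-to-triples effect that an R2RML mapping automates.
# The database file, table, columns, and example.org URIs are invented.
import sqlite3
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()

conn = sqlite3.connect("legacy.db")  # assumed existing relational source
for emp_id, name, dept in conn.execute("SELECT id, name, dept FROM employees"):
    subject = EX[f"employee/{emp_id}"]                   # mint a URI for each row
    g.add((subject, EX.name, Literal(name)))             # column value -> literal
    g.add((subject, EX.department, EX[f"dept/{dept}"]))  # foreign key -> resource

print(g.serialize(format="turtle"))
```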
Just as some people mistakenly equate Semantic Web technologies with artificial intelligence, others expect that Semantic Web technologies are all about using text analytics to understand natural language. While a great number of reasons may exist for choosing Semantic Web technologies as a vehicle for implementing NLP solutions, the Semantic Web itself does not deal with unstructured content; instead, it is about representing not only structured data and links but also the meaning of the underlying concepts and relationships. More about the relationship between Semantic Web and natural language can be found in these two Semantic University lessons:
Note: These two articles will be published soon. Stay tuned!
- Semantic Web vs. Semantic Technologies.
- Semantic Web and NLP.
We hope this clears up many of the common misconceptions surrounding Semantic Web technologies.
If you feel that we're missing any, let us know.
In 2005, the Estonian national electoral commission wrote a new chapter in the development of e-Government: for the first time in the world, secure online voting via the Internet was organized on a national scale. In accordance with the law, iVoting (as the scheme for electronic voting is known in Estonia) shall be available at every election and referendum. The main objective of iVoting is to provide voters with an additional channel via which they can express their vote, with the aim of increasing electoral participation through better accessibility.
To vote electronically via the Internet, voters need to have a national ID card or a mobile ID for electronic authentication. Nearly 100% of Estonian residents, aged from 15 to 74 years of age, have a valid ID card. Internet voting does not replace the conventional process of voting with paper ballot slips at polling stations in Estonia, but is a complementary alternative to it.
In 2007, 3.13% of those eligible to vote voted online. For the European elections in June 2009, 15% of those eligible voted online, and in March 2011, 25% of the votes cast were electronic. The procedure takes 2 minutes on average and, in 2011, was strongly adopted among voters over 55 years of age.
In 2015, 30.5% of the population voted online, and voter participation in parliamentary elections rose from 61.9% in 2007 to 64.2% in 2015.
The iVoting application is considered by citizens to be just another online application among others. What most surprises Estonians today is the specific interest that this application is still generating abroad.
Get up to speed on telecom terms with this handy glossary. If your organization has been considering converging your office desk phones with BlackBerry® smartphones with a solution such as BlackBerry® Mobile Voice System, you'll want to know what your colleagues in telecommunications are talking about.
PBX (Private Branch eXchange)
A PBX is analogous to a router for the internal telecom network, and provides the basic foundation of an enterprise communication infrastructure. Common PBX manufacturers include Nortel, Avaya®, Siemens® and Cisco. A PBX provides the ability for users to dial within the enterprise using extension dialing, make external calls, and use central voice functions, such as call transfer, conferencing, and others.
The acronym PABX (Private Automatic Branch eXchange) is used interchangeably with PBX.
There are two main types of on-premise PBXs, TDM (Time Division Multiplexing) PBX and IP (Internet Protocol) PBX:
- TDM PBX (Time Division Multiplexing PBX)
TDM PBXs are the legacy systems still used in many enterprises today.
- IP PBX (Internet Protocol PBX)
IP PBXs use Voice over Internet Protocol (VoIP) to transfer voice as data packets.
Many enterprises with a TDM PBX are in the process of switching to a VoIP PBX system. As a result, many companies operate hybrid environments running both TDM PBX and VoIP PBX systems.
PSTN (Public Switched Telephone Network)
A PSTN is the connection point for the PBX to the outside world. When users dial “9” to make an outside call, for example, they are in essence asking the PBX for permission to use the PSTN.
IP PBX (Internet Protocol Private Branch eXchange)
A PBX that is based on an IP architecture. Most new PBXs support both IP and TDM architecture.
IP (Internet Protocol)
IP uses a packet switching technology instead of the point-to-point technology of TDM (Time Division Multiplexing). In a TDM scenario, callers have exclusive rights to use a physical path all the way from caller to receiver. In an IP scenario, however, data packets are disassembled, transmitted and reassembled when received.
VoIP (Voice over Internet Protocol)
A generic term describing any technology that allows voice traffic to be passed over an IP-based network. Requires a higher quality of service than data-only networks, in order to transmit voice packets from point A to point B in the correct order.
PBX Connection Methods:
ISDN (Integrated Services Digital Network)
ISDN is a basic type of telecommunications circuit. It’s the standard telecom circuit that allows for data traffic to be passed on a separate channel from the voice traffic. Data traffic here refers to a relatively small amount of call data like call setup data, caller ID, caller party name, etc.
PRI (Primary Rate Interface)
PRI is a type of ISDN circuit that allows for 23 voice channels plus one data path. (One data path can hold call data for 23 voice paths.)
SIP (Session Initiation Protocol)
SIP is a standard VoIP circuit connection type. It creates a voice circuit over the IP network that carries traffic between the PBX and the other connection point. In SIP environments, a TDM connection to the PSTN usually exists.
Find out how to mobilize your desk phone functionality. Learn how BlackBerry Mobile Voice System can work with your PBX system.
Eye-Tracking System
By Reuters | Posted 2008-11-04
Tens of thousands of prostate, heart and other procedures are already being performed by robots, and experts predict machines will be used to penetrate deeper into ailing bodies in the years ahead. In a university laboratory behind London's Science Museum, researchers are working on a new generation of hi-tech gadgets to take minimally invasive robotic surgery to the next level.
Sitting at a console and looking at a stereoscopic viewer, Darzi can direct his robot's multi-jointed pincers inside the patient's body using a series of joysticks and foot controls as he conducts gall bladder, cancer and other operations.
They may be state-of-the-art, but these robots are just the start.
"This is the tip of the iceberg -- this is the first car ever invented," Darzi told Reuters. "There is a huge amount of work in this field which will significantly enhance the ability of the surgeon to provide a much more precise, accurate procedure."
One idea that could soon become a reality is a device that uses the surgeon's gaze to direct tools by tracking the light reflected from the user's eyes, making operations simpler and less invasive.
Positive results with the eye-tracking system were presented at the International Conference on Intelligent Robots and Systems in Nice, France in September.
The natural orifice "I-snake" camera and surgery system, which would do away with the need for incisions altogether, is further down the track. The team at Imperial hope to have their oral or rectal access system ready for tests within 3-1/2 years.
Work is also under way on "augmented reality" software. This could use data from past patient scans to help surgeons visualize tumors or other structures underneath living tissue.
Another possibility is artificially stabilizing the image of moving organs, such as a beating heart, by creating robotic instruments that move in tandem with the patient's body.
"Currently, robots are used in relatively simple procedures," said Guang-Zhong Yang, joint head of the Imperial unit. "But in future, you will see them used in more advanced procedures, like beating-heart surgery."
Darzi and Yang are not alone.
In May this year, doctors at the University of Calgary in Canada used a robot called neuroArm to remove a tumor from a 21-year-old woman's brain in the first operation of its kind.
Privately held U.S. firm Satiety Inc, meanwhile, is testing a stomach stapler for obese patients that slides down the throat rather than requiring abdominal surgery.
Researchers at Germany's DLR Institute of Robotics and Mechatronics are working on a lightweight system called MIRO using the same robotic arm technology as is used in space.
And business is booming at Intuitive Surgical, whose installed base of more than 1,030 da Vinci robots at hospitals throughout the world is due to perform at least 130,000 prostatectomies, hysterectomies, heart valve operations and other procedures this year.
(editing by Andrew Dobbie and Sara Ledwith)
© Thomson Reuters 2008 All rights reserved
Relative to most communication channels in use today, fiber optic communication is still comparatively young. It works by sending "pulses of light" through optical fibers from one location to another.
Modern optical communication systems generally include an optical transmitter to convert an electrical signal into an optical signal and send it into the optical fiber; a cable containing bundles of multiple optical fibers, routed through underground conduits and buildings; multiple kinds of amplifiers; and an optical receiver to recover the signal as an electrical signal. The information transmitted is typically digital information generated by computers, telephone systems, and cable television companies.
Though fiber has a relative advantage over copper wire communication, it remains costly to set up and manage. But its extensive applications in networking, medicine, telecommunications and data communications, to name a few, make this technology worth the investment. Its ability to carry data over long distances, even between countries, is what helped usher in the 'information age'.
Of course, let’s also not forget the integral role of fiber optic cables in the process of optics communications. These flimsy, transparent cables are capable of performing tasks way beyond its phsyical attributes, from operation theaters to interior designs. The need for fiber optics
communications is terribly high in this current age of computers.
Fiber optic communication has made a global impact by not leaving any country behind in communications technology. It won't be long before our world is connected by transparent strands of fiber optic cable.
Furthermore, global communications would be quicker, more consistent, and more effective if fiber optics producers made fiber optics readily available and affordable to all. On the brighter side, there are a few fiber optic suppliers who understand the needs of their clients. Go to a reliable fiber provider for all your fiber optic communications needs, from Cisco X2 to Cisco XFP.
FiberStore.com is a worldwide leading manufacturer and supplier of fiber optic products. We have been in this business for almost 7 years. We have our own factory and our own professional R&D department, and we specialize in supplying fiber optic components and network equipment.
The HPC community has been following IBM’s Watson technology since a semi-personified version of the analytics machine became a winning Jeopardy contestant in 2011. Since then Watson has been SaaS-ified, cloud-enabled, and sent to medical school. Most recently the technology popped up in a tool called KnIT (Knowledge Integration Toolkit). Developed by IBM in partnership with Baylor College of Medicine, this prototype system scours the available scientific literature on a given topic to find hidden relationships in the data.
KnIT helped Baylor researchers identify six new proteins to target for cancer research. Considering that in the last 30 years, scientists have uncovered 28 protein targets, the fact that the Baylor team found a half a dozen in a month is an impressive feat.
It’s not that humans couldn’t do what KnIT or Watson does, it’s just that machines can do it so much faster. In a just-published paper, the researchers conclude that society is better at amassing new data than at analyzing what it already has. “This leads to deep inefficiencies in translating research into progress for humanity,” they write.
Consider the sheer number of papers that are published: about 1.5 million each year, growing by about 5 percent annually. The KnIT system employs Watson technology to mine for previously unseen connections in these massive text archives. It then creates graph-based visualizations and suggests hypotheses to help the researchers identify promising targets.
For this study, KnIT analyzed millions of papers that mentioned p53, a tumor suppressor associated with half of all cancers. The Baylor team was interested in a class of enzymes, called kinases, that can interact with p53 by switching it on and off. KnIT was tasked with searching for undiscovered p53 kinases, which could provide pathways to new cancer drugs.
The study, which first employed retrospective analysis to demonstrate the accuracy of the approach, identified six new kinases implicated in p53 activity.
In addition to expanding its use for cancer research, the research team is considering how the tool can be applied to other areas of biology, such as personalized medicine. There may also be a future for KnIT in other scientific domains, like physics, although mining equations rather than text would significantly up the challenge factor.
The researchers are clear that the Watson-based tool is not replacing scientists, but by pointing out the interesting places to look, it is helping to accelerate the discovery process.
Greater Funding and Planning to Fortify Midwest States Water and Wastewater Treatment Market in the United States
The comparatively abundant supply of water from the Great Lakes to the Grand Midwest states of the United States has provided reliable drinking water at competitive rates, while creating opportunities for sustainable use. Greater funding, planning, and engineering will ensure that these water services continue to be offered and the wastewater collection is conducted in an environment-friendly manner. The Midwest states can ensure reliable service and formulate long-term plans once they find solutions to infrastructure challenges, separation of combined sewers, and technology implementation initiatives. The Crypto outbreak of 1993 reminded many utilities that they need to constantly monitor municipal water services. "Investing in technologies will allow the Midwest states to rest assured that the majority of the surface water being used is safe for long-term use," says the analyst of this research. "In addition, technologies will allow for money savings and increased efficiencies."
Meanwhile, in the wastewater segment, the antiquated combined sewers in the Midwest are posing an environment hazard. Frequent sewer overflows, especially during storms, have severely affected drinking water supplies, the aquatic ecosystem, and public health. In fact, many of Ohio’s major permitted facilities violate the Clean Water Act, which implies that many of the state’s waterways are not considered ‘fishable and swimmable’. It is not just the industrial discharges that are to blame but also the municipal discharges and wet weather.
The issues related to wet weather can be resolved to a large extent, as has been proved in Michigan. "In this state, construction of a wet weather flow treatment facility (WWFTF) – to be operated during wet-weather periods to treat peak wet-weather flows or captured combined sewer overflows – is under consideration," notes the analyst. "Even though pricey, the benefit of such a facility will reduce effluent loadings to receiving streams, thereby maintaining both regulatory and environmental/aesthetic standards."
Passwords remain a problem even for tech-conscious consumers. In an F-Secure poll, 43% of respondents report using the same password for more than one important account – a big no-no for proper password hygiene.
58% of poll respondents have over 20 password-protected online accounts, or simply too many to keep track of. 27% have between 11 and 20 password-protected accounts and 15% have under 10. But even with so many accounts, just 40% use a password manager to keep track of them.
Encouragingly, 57% of poll respondents changed passwords after hearing about Heartbleed. Of poor password habits, the most common was using the name of a family member. The next most common poor password habit was using a pet name, and then using generic passwords like “Password” or “123456.”
Post-Heartbleed, it’s especially important to pay some attention to passwords. But getting all one’s passwords in order – setting a unique, strong password for each individual account – can seem like too big a job, which is why many aren’t doing it.
There’s a lot of advice out there on how to generate and manage passwords. What’s the average person to do? Sean Sullivan, Security Advisor at F-Secure shares the one fundamental tip that everyone should remember: “Identify the critical accounts to protect, and then make sure the passwords for those accounts are unique and strong.”
Sullivan’s advice takes into account the fact that many people have accounts for services where little personal information is stored. “If you created an account for some website and there’s hardly anything more in there than your username and password, then that’s probably not a critical account,” he says. “But your Amazon account with your credit card info, your bank account, your primary email accounts, the Facebook account with your life story, these are examples of the critical ones. If you don’t have time or inclination to tackle everything, at least take care of those.”
A prime example of a critical account is an email account that is used as the point of contact for password resets on other accounts. For these “master key” accounts, it’s a good idea to activate two-factor authentication if available.
But how to protect those critical accounts? Use a secure password manager which stores passwords, usernames and other credentials so you can access them through one master password.
This tutorial is aimed at helping you tighten your Windows security and proactively preventing performance degradation by identifying and monitoring critical Windows Events.
The tutorial is made available in two parts, with this first part covering topics focussed on what you need to know as a beginner about Event Logs and why they need to be watched. If you are a seasoned administrator or a network engineer, move on to part II and learn to set up Event Logs monitoring.
Event logs are local files recording all the 'happenings' on the system, including accessing, deleting, or adding a file or an application, modifying the system's date, shutting down the system, changing the system configuration, etc. Events are classified into System, Security, Application, Directory Service, DNS Server & DFS Replication categories. Directory Service, DNS Server & DFS Replication logs are applicable only for Active Directory. Events that are related to system or data security are called security events and their log file is called the Security log.
The following sections provide more details on Windows Event Logs and what mandates their monitoring:
The Event logs are broadly classified into few default categories based on the component at fault. The different components for which events are logged include the system, the system security, the applications hosted on the system etc. Some applications log events in a custom category instead of logging them into the default Applications category.
|Event Log Type||Description|
|Application Log||Any event logged by an application. These are determined by the developers while developing the application. Eg.: An error while starting an application gets recorded in Application Log.|
|System Log||Any event logged by the Operating System. Eg.: Failure to start a drive during startup is logged under System Logs|
|Security Log||Any event that matters about the security of the system. Eg.: valid and invalid Logins and logoffs, any file deletion etc. are logged under this category.|
|Directory Service log||Records events of Active Directory (AD). This log is available only on domain controllers.|
|DNS Server log||Records events for DNS servers and name resolutions. This log is available only for DNS servers.|
|File replication service log||Records events of domain controller replication. This log is available only on domain controllers.|
Each event entry is classified by Type to identify the severity of the event. They are Information, Warning, Error, Success Audit (Security Log) and Failure Audit (Security Log).
|Information||An event that describes the successful operation of a task, such as an application, driver, or service. For example, an Information event is logged when a network driver loads successfully.|
|Warning||An event that is not necessarily significant, but may indicate a possible future problem. For example, a Warning message is logged when disk space starts to run low.|
|Error||An event that describes a significant problem, such as the loss of data or functionality. For example, an Error event is logged if a service fails to load during startup.|
|Success Audit (Security log)||An event that describes the successful completion of an audited security event. For example, a Success Audit event is logged when a user logs on to the computer.|
|Failure Audit (Security log)||An event that describes an audited security event that did not complete successfully. For example, a Failure Audit may be logged when a user cannot access a network drive.|
In the Event Viewer, events are listed with the following header information and a description:
|Date||The date the event occurred|
|Time||The time the event occurred|
|User||The user who was logged onto the computer when the event occurred|
|Computer||The computer where the event occurred|
|Event ID||An event number that identifies the event type. Helps to know more about the event|
|Source||The source which generated the event. It could be an application or system component|
|Type||Type of event (Information, Warning, Error, Success Audit and Failure Audit)|
Double-click an event in the Event Viewer to see its full details, including the description and any event-specific data.
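The same header fields can also be read programmatically, which is the basis for the automated monitoring discussed below. The sketch that follows is one possible approach, using the third-party pywin32 package (this tutorial does not mandate any particular tool); reading the Security log in particular usually requires administrator rights.

```python
# Minimal sketch: read the most recent events from the System log with pywin32.
import win32evtlog
import win32evtlogutil

server, log_type = "localhost", "System"
handle = win32evtlog.OpenEventLog(server, log_type)
flags = win32evtlog.EVENTLOG_BACKWARDS_READ | win32evtlog.EVENTLOG_SEQUENTIAL_READ

events = win32evtlog.ReadEventLog(handle, flags, 0)  # newest events first
for event in events[:10]:
    print(event.TimeGenerated,                       # Date and Time header fields
          event.SourceName,                          # Source
          event.EventID & 0xFFFF,                    # Event ID (low 16 bits)
          win32evtlogutil.SafeFormatMessage(event, log_type)[:80])

win32evtlog.CloseEventLog(handle)
```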
Security is the biggest concern every business faces today. Incidents like hacks and data thefts are continuously on the rise, exposing all segments of business to risk and leaving administrators red-eyed. Various industry studies reveal that the majority of hacks and thefts take place due to illegal authentication attempts. Auditing illegal or failed login attempts could prevent (or reduce) data thefts. That said, it is important that we know what an operating system can provide by way of security and what we must do to implement operating systems with the required security.
Events are not logged by default for many security conditions, which means that your resources are still exposed to attack. You have to configure audit policies so that critical security events are audited and logged.
It is not necessary to configure all the audit policies. Doing so would log each and every action that takes place and inflate the log size. The logs roll over and, depending on the roll-over size configured, older entries are deleted. Configuring only the policies that are really critical to your environment will improve security.
Auditing of critical events is enabled by default on domain controllers. For other Windows devices, configure the audit policies available under Local Security Settings.
The need to adhere to security compliance requirements such as SOX and HIPAA, which apply to publicly traded companies, the health care industry and others, necessitates a security management process that protects against attempted or successful unauthorized access. Securing the information on your network is critical to your business whether or not you have to comply with a particular standard. Windows event logs are one of the sources through which login attempts can be tracked and logged. A manual check on every Windows device is tedious and impractical, which warrants automated auditing and monitoring of event logs on a regular basis.
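As a rough sketch of what such automation can look like, the snippet below shells out to the built-in wevtutil tool and counts failed logons (Event ID 4625 in the Security log) over the last 24 hours. The event ID is the standard failed-logon ID on modern Windows versions, but the threshold, query window and alerting logic are assumptions made for illustration, and reading the Security log requires administrative rights.

```python
# Hypothetical scheduled check: count failed logons (Event ID 4625 in the
# Security log) over the last 24 hours using the built-in wevtutil tool.
import subprocess

THRESHOLD = 25                            # assumed alerting threshold
WINDOW_MS = 24 * 60 * 60 * 1000           # 24 hours, in milliseconds

query = f"*[System[(EventID=4625) and TimeCreated[timediff(@SystemTime) <= {WINDOW_MS}]]]"
result = subprocess.run(
    ["wevtutil", "qe", "Security", "/q:" + query, "/c:1000", "/f:text"],
    capture_output=True, text=True, check=True,
)

# In text mode each matching event is rendered as a block beginning "Event[".
failures = result.stdout.count("Event[")
print(f"Failed logons in the last 24 hours: {failures}")
if failures >= THRESHOLD:
    print("ALERT: unusually high number of failed logons - investigate.")
```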
In the absence of the sun, life as we know it would not exist. In addition to providing just the right amount of heat and light for third planet inhabitants, the sun is responsible for circadian rhythms, vitamin D production and photosynthesis. However, this life-sustaining orb also carries the potential for severe destruction. Via a phenomenon known as solar wind, the sun ejects a sea of protons, electrons and ionized atoms in all directions at speeds of a million miles per hour or more. If these particles were to reach earth, the radiation would threaten human health, while the massive onslaught of charged particles would disrupt power grids, communication networks and electronic devices.
[Figure: 3D global hybrid simulation of Earth's magnetosphere. Magnetic field lines are color-coded based on their origin and termination points. Courtesy: H. Karimabadi and B. Loring. Source: NICS]
Solar wind is just one kind of space weather, the term for environmental conditions in near-Earth space. Solar flares, explosive storms that occur on the surface of the sun, eject blasts of charged particles with a ferocity that's equivalent to 10 million volcanic eruptions. Less frequent, but even more dangerous than solar wind or flares, are coronal mass ejections, or CMEs. These eruptions of plasma from inside the sun's corona can set off space-weather events called geomagnetic storms that can wreak havoc on our planet's inhabitants and its technology.
This non-stop space attack is held in check by a natural shield, a magnetic field known as the magnetosphere. Created by Earth’s magnetic dipole, this field extends out into space for 37,000 miles. The magnetosphere stops most charged particles from entering Earth’s atmosphere. However, it is not a perfect solution. Enough solar particles get through the magnetic net to pose a serious hazard to power grids, communication networks and living creatures.
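The quoted extent of the shield is consistent with a textbook pressure-balance estimate: the dayside magnetopause sits roughly where the solar wind's dynamic pressure equals the magnetic pressure of Earth's compressed dipole field. The quick calculation below, using typical assumed solar-wind values, is only a sanity check on that figure and is not taken from the research described here.

```python
# Chapman-Ferraro-style estimate of the dayside magnetopause standoff distance.
# Balance solar-wind dynamic pressure (rho * v^2) against dipole magnetic
# pressure B(r)^2 / (2*mu0), with B(r) = B0 * (Re/r)^3.
import math

mu0 = 4 * math.pi * 1e-7         # vacuum permeability, T*m/A
B0 = 3.1e-5                      # equatorial surface field, tesla
Re = 6371e3                      # Earth radius, m

n = 5e6                          # assumed solar-wind density, protons per m^3
m_p = 1.67e-27                   # proton mass, kg
v = 400e3                        # assumed solar-wind speed, m/s (quiet conditions)

dynamic_pressure = n * m_p * v**2
standoff = Re * (B0**2 / (2 * mu0 * dynamic_pressure)) ** (1 / 6)

print(f"standoff distance ~ {standoff / Re:.1f} Earth radii "
      f"(~{standoff / 1609.34:,.0f} miles)")
# Typical output is on the order of 8-10 Earth radii, i.e. a few tens of
# thousands of miles on the dayside, in line with the figure quoted above.
```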
Supercomputing to the rescue
A research group led by Homa Karimabadi of the University of California, San Diego, is investigating the effects of space weather on the magnetosphere.
“Earth’s magnetic field provides a protective cocoon, but it breaks during strong solar storms,” explains Karimabadi.
Karimabadi teamed up with visualization specialist Burlen Loring of Lawrence Berkeley National Laboratory (LBNL) to create a topological map of Earth’s magnetosphere, using the supercomputing resources at National Institute for Computational Science (NICS).
“The ‘topomap’ helps us find the location of the magnetic field lines from different sources [for example, the magnetic field of Earth versus the magnetic field of the solar wind],” said Karimabadi.
Currently, researchers can track storms, but there are no tools available for predicting storms. In this first-of-its-kind project, the researchers will leverage the map to build global kinetic simulations of the magnetosphere and space-weather effects.
The simulations are both compute- and data-intensive. A single job can require 100,000 central processing units and take 48 hours or longer to complete. It's only since the advent of petascale supercomputers like Jaguar, Nautilus and Kraken that such complex phenomena can begin to be unraveled. And according to Karimabadi, even the fastest computers of our era still fall short. It's ultimately an exascale problem, he says.
“Handling and analysis of massive datasets resulting from our simulations are quite challenging. Partnering with the [visualizations] group at LBNL has been critical in developing tools to analyze our data sets,” says Karimabadi.
Improved predictive capabilities are crucial in order to prepare for space-weather events. As a feature article on the research notes, "a better understanding of how space weather affects our magnetosphere allows scientists to more accurately predict the impact of solar activity on our planet."
Last July, a geomagnetic superstorm, spawned by a CME, was narrowly avoided. If the storm had taken place only a few days sooner, there would likely have been far-reaching consequences. Such storms have the potential to take down power grids on a national or even international scale.
As previous events (in 1859 and 1989) have taught us, the danger is real. “There is an urgent need to develop accurate forecasting models,” Karimabadi asserts. “A severe space-weather effect can have dire financial and national-security consequences, and can disrupt our everyday lives on a scale that has never been experienced by humanity before.” | <urn:uuid:06cc92c9-975a-43e8-8726-0b6aa2bba8cd> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/08/26/preparing_for_solar_storms/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00282-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918895 | 975 | 3.90625 | 4 |
Definition: An assemblage of items that are randomly accessible by integers, the index.
Formal Definition: Ignoring size, an array may be seen as an abstract data type with the operations new(), set(i, v, A), and get(i, A), where i is a numeric index, v is a value, and A is an array. The operations may be defined with axiomatic semantics as follows: get(i, set(i, v, A)) = v, and get(i, set(j, v, A)) = get(i, A) when i ≠ j.
If the contents of a new array are set to some implicit initial value vi, we could add the following rule for get: get(i, new()) = vi.
Typically arrays have a fixed size and use either 0-based indexing or one-based indexing. Fixed initial size and 0-based indexing may be incorporated by having new(n) return an array of n items and defining get and set only for indexes 0 ≤ i < n.
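As an informal illustration of those rules, the following toy model implements a fixed-size, 0-based array and checks the get/set axioms directly; it is written for exposition, not efficiency.

```python
# Toy model of the array ADT: new(n) creates a fixed-size, 0-based array,
# set_(i, v, A) returns an updated array, and get(i, A) reads an element.
def new(n, initial=None):
    return [initial] * n            # every cell starts at the implicit value

def set_(i, v, A):
    if not 0 <= i < len(A):
        raise IndexError(i)
    B = list(A)                     # functional update, as in the axioms
    B[i] = v
    return B

def get(i, A):
    if not 0 <= i < len(A):
        raise IndexError(i)
    return A[i]

A = new(10)
assert get(3, set_(3, "x", A)) == "x"          # get(i, set(i, v, A)) = v
assert get(7, set_(3, "x", A)) == get(7, A)    # get(i, set(j, v, A)) = get(i, A), i != j
assert get(5, new(10, 0)) == 0                 # get(i, new()) = initial value vi
```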
Specialization (... is a kind of me.)
dynamic array, sorted array.
Aggregate child (... is a part of or used in me.)
array index, one-based indexing, 0-based indexing.
See also associative array, matrix, huge sparse array.
Note: An unordered array must be searched with a linear search. Average search time may be improved using a move-to-front heuristic in some cases. An external index, such as a hash table or inverted index may help make an array search quicker and speed overall processing if the array is not changed often. If the array is kept sorted, a binary search or interpolation search is faster.
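A minimal sketch of the two access patterns mentioned above: linear search works on any array, while binary search requires the array to be kept sorted but inspects only a logarithmic number of elements.

```python
from bisect import bisect_left

def linear_search(a, target):
    """O(n): scan every element until a match is found."""
    for i, x in enumerate(a):
        if x == target:
            return i
    return -1

def binary_search(sorted_a, target):
    """O(log n): repeatedly halve the search interval of a sorted array."""
    i = bisect_left(sorted_a, target)
    return i if i < len(sorted_a) and sorted_a[i] == target else -1

data = [41, 7, 23, 5, 19]
assert linear_search(data, 23) == 2
assert binary_search(sorted(data), 23) == 3   # index within the sorted copy
assert binary_search(sorted(data), 8) == -1
```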
Inserting into an array takes Θ(n) time. If that's too slow, use a balanced tree, skip list, or a linked list. Knuth uses a balanced tree with a RANK field that supports Θ(log n) access by index and Θ(log n) insert and delete. [Knuth98, 3:471, Sect. 6.2.3]
If it takes too long to initialize a big array of size S, a huge sparse array takes time proportional to the number of accesses, at the cost of Θ(S) extra space.
Big Data in 2020
Last year, Big Data became a big topic across nearly every area of IT. IDC defines Big Data technologies as a new generation of technologies and architectures, designed to economically extract value from very large volumes of a wide variety of data by enabling high-velocity capture, discovery, and/or analysis. There are three main characteristics of Big Data: the data itself, the analytics of the data, and the presentation of the results of the analytics. Then there are the products and services that can be wrapped around one or all of these Big Data elements.
The digital universe itself, of course, comprises data — all kinds of data. However, the vast majority of new data being generated is unstructured. This means that more often than not, we know little about the data, unless it is somehow characterized or tagged — a practice that results in metadata. Metadata is one of the fastest-growing subsegments of the digital universe (though metadata itself is a small part of the digital universe overall). We believe that by 2020, a third of the data in the digital universe (more than 13,000 exabytes) will have Big Data value, but only if it is tagged and analyzed (see “Opportunity for Big Data”).
Not all data is necessarily useful for Big Data analytics. However, some data types are particularly ripe for analysis, such as:
- Surveillance footage. Typically, generic metadata (date, time, location, etc.) is automatically attached to a video file. However, as IP cameras continue to proliferate, there is greater opportunity to embed more intelligence into the camera (on the edge) so that footage can be captured, analyzed, and tagged in real time. This type of tagging can expedite crime investigations, enhance retail analytics for consumer traffic patterns, and, of course, improve military intelligence as videos from drones across multiple geographies are compared for pattern correlations, crowd emergence and response, or measuring the effectiveness of counterinsurgency.
- Embedded and medical devices. In the future, sensors of all types (including those that may be implanted into the body) will capture vital and nonvital biometrics, track medicine effectiveness, correlate bodily activity with health, monitor potential outbreaks of viruses, etc. — all in real time.
- Entertainment and social media. Trends based on crowds or massive groups of individuals can be a great source of Big Data to help bring to market the “next big thing,” help pick winners and losers in the stock market, and yes, even predict the outcome of elections — all based on information users freely publish through social outlets.
- Consumer images. We say a lot about ourselves when we post pictures of ourselves or our families or friends. A picture used to be worth a thousand words, but the advent of Big Data has introduced a significant multiplier. The key will be the introduction of sophisticated tagging algorithms that can analyze images either in real time when pictures are taken or uploaded or en masse after they are aggregated from various Web sites.
These are in addition, of course, to the transactional data running through enterprise computers in the course of normal data processing today. "Candidates for Big Data" illustrates the opportunity for Big Data analytics in just these areas alone.
All in all, in 2012, we believe 23% of the information in the digital universe (or 643 exabytes) would be useful for Big Data if it were tagged and analyzed. However, technology is far from where it needs to be, and in practice, we think only 3% of the potentially useful data is tagged, and even less is analyzed.
Call this the Big Data gap — information that is untapped, ready for enterprising digital explorers to extract the hidden value in the data. The bad news: This will take hard work and significant investment. The good news: As the digital universe expands, so does the amount of useful data within it. | <urn:uuid:582629dd-832f-4c41-9440-553631912276> | CC-MAIN-2017-04 | https://www.emc.com/leadership/digital-universe/2012iview/big-data-2020.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00492-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926531 | 806 | 3.296875 | 3 |
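One way to see the size of that gap is to work through the report's own percentages; the figures below are derived solely from the numbers quoted above and are rounded.

```python
# Rough arithmetic on the 2012 figures quoted above (all values in exabytes).
useful_share = 0.23          # share of the digital universe useful for Big Data
useful_eb = 643              # the same share expressed in exabytes
tagged_share = 0.03          # fraction of that useful data actually tagged

digital_universe_eb = useful_eb / useful_share   # implied 2012 universe size
tagged_eb = useful_eb * tagged_share             # data that is actually tagged

print(f"Implied 2012 digital universe: ~{digital_universe_eb:,.0f} EB")
print(f"Useful for Big Data if tagged and analyzed: {useful_eb} EB")
print(f"Actually tagged today: ~{tagged_eb:,.0f} EB")
print(f"Big Data gap: ~{useful_eb - tagged_eb:,.0f} EB of untapped information")
```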
Considering that the cost of high-end supercomputers, the ones found at top academic institutions and national labs, can run into many millions of dollars, it's only natural to want to treat these valuable resources as gingerly as possible. Administrators are prudent to enact conservative permissions schemes and only allow well-vetted applications to be run on these machines. But such protective measures, while understandable and even commendable, can stifle the kind of innovation and tinkering that often leads to better code and time-saving efficiencies. That's where the idea of a test system comes into play. A computational testbed allows developers and researchers to try out new software ideas on smaller-scale, less expensive machinery. This type of resource is implemented with the idea that it's OK if it breaks.
The testbed cluster recently completed at North Carolina State University was the brainchild of Frank Mueller, a computer science professor at NC State. Seeing the need for a more flexible supercomputer, he decided to create his own. Mueller’s team completed work on the ARC cluster March 30. ARC stands for “A Root Cluster” as the computational infrastructure will primarily support research into scalability for system-level software solutions. This will involve making changes to the cluster’s entire software stack, including the operating system. Once Mueller and his team are able to demonstrate the worthiness of a solution, then they can implement it on a big-name system — like the Jaguar supercomputer at Oak Ridge National Labs.
“We can do anything we want with it. We can experiment with potential solutions to major problems, and we don’t have to worry about delaying work being done on the large-scale systems at other institutions.”
Today's generation of supercomputers experiences failures on average a couple of times a day, translating into hours of lost work. But the coming class of exaflop-level machines is anticipated to exhibit one-billion-way parallelism. The implication of all those cores is an exponential increase in the number of failures. That is why it's so important to increase hardware reliability or make systems more error-tolerant. Being able to try out theories on a crash-test cluster will help accomplish these goals.
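The scaling problem is easy to see with a simple mean-time-between-failures (MTBF) model: assuming independent node failures, the system-level MTBF shrinks roughly in proportion to the number of nodes. The node reliability figure below is an assumption chosen only to illustrate the trend.

```python
# Illustrative MTBF model: if each node fails independently with a mean time
# between failures of `node_mtbf_hours`, a system of N nodes fails roughly
# N times as often (node_mtbf / N).
node_mtbf_hours = 5 * 365 * 24        # assume one failure per node every 5 years

for nodes in (1_000, 20_000, 100_000, 1_000_000):
    system_mtbf_hours = node_mtbf_hours / nodes
    print(f"{nodes:>9,} nodes -> system MTBF ~ {system_mtbf_hours:8.2f} hours")
```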
The ARC cluster was made possible by a $549,999 NSF grant, with additional support coming from NVIDIA and NC State. With 1,728 processor cores and 36 NVIDIA Tesla C2050 GPUs on 108 computer nodes (32GB RAM each), it is now the largest academic HPC system in North Carolina. | <urn:uuid:5acbb3f9-dd45-4c68-aa6d-44afb65606d9> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/04/04/nc_state_completes_crashtest_cluster/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00400-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932409 | 532 | 3.0625 | 3 |
When all else failed, they have often had to rely on artificial insemination to ensure the endangered black and white creatures have cubs. On Tuesday, a study suggested the answer may be a lot simpler and, perhaps, more obvious—let the pandas choose their own mates. "Giant pandas paired with preferred partners have significantly higher copulation and birth rates," researchers noted in the journal Nature Communications. Generally, pandas in captivity are presented with a mate chosen by scientists based on the animals' "genetic profile". The goal is to minimise inbreeding and expand the DNA pool. But the result is often frustrating, with the animals having to be coaxed through human intervention to show even the slightest sexual interest in the mate thrust upon them. A team from the United States and China ran a test at the China Conservation and Research Centre for the Giant Panda in Sichuan province, to see if being allowed to choose their own partner might make a difference. Male and female pandas were housed in enclosures with animals of the opposite sex on either side. They were allowed limited physical interaction with their neighbours through cage bars. Scientists measured the animals' "mate preference behaviour", which included different forms of playfulness and bond-forming, as well as sexual arousal. "Negative" interactions could include signs of aggression or a mere lack of interest. The animals were then introduced to each other for mating—with both preferred and non-preferred partners. "The highest reproductive performance was seen when both males and females showed mutual preference," the researchers found. The results should come as no big surprise—ever since Charles Darwin published his theory of sexual selection in 1859, scientists have understood that mate selection is key to animal reproduction. "Mate incompatibility can impede captive breeding programmes by reducing reproductive rates," wrote the study authors. "It is therefore surprising that mate preferences have not figured more prominently in captive breeding programmes." The findings may help China better spend its limited conservation budget, the scientists added. "The future of conservation breeding will not take place in a test tube," they wrote. The most cost-effective way to get captive animals to produce offspring is to breed them naturally, and "to do that requires better understanding of natural mating behaviour", they concluded. "Mate choice has an important role to play in conservation." The authors said their study was the first to "rigorously examine" the effects of mate preference in giant pandas. Pandas have only a brief breeding season from around March to May—and females become fertile only about two to three days a year, producing a cub approximately every 24 months. Conservation group WWF estimates there are only around 1,600 giant pandas left in the wild in south-central China.
News Article | September 7, 2016
China objects to the decision of the International Union for Conservation of Nature (IUCN) to take out giant pandas (Ailuropoda melanoleuca) from the endangered species list. The IUCN Red List is considered the most comprehensive inventory of plants and animals at the global level. In the new update, the IUCN has reclassified giant pandas' status in the Red List from "endangered" to "vulnerable." The latest report of IUCN noted that there were 1,864 giant pandas in the wild, compared to 1,600, which was the population in 2004. That shows a 17 percent growth in the giant panda numbers in the wild. The Switzerland-based body then commended China's efforts at conservation and in contributing to the eventual increase of the panda population. It made special mention of China's measures such as tight regulations against poaching and adding new forest reserves for housing giant pandas. Despite the praises, China was not amused and criticized the IUCN reclassification as a setback and asserted that the black-and-white pandas continue to be "endangered." "If we downgrade their conservation status, or neglect or relax our conservation work, the population and habitats of giant pandas could still suffer irreversible loss and our achievements could be quickly lost," China's State Forestry Administration said. The official Xinhua news agency said IUCN move was a hasty step and quoted Zhang Hemin, of the China Conservation and Research Center for the Giant Panda. "A severely fragmented natural habitat still threatens the lives of pandas; genetic transfer between different populations will improve, but is still not satisfactory," Zhang said. He expressed fear that by lowering the guard on conservation efforts, protection work will suffer and the panda population as well as their habitat will face "irreversible losses." China's reasoning is that the wild giant pandas are facing the threat of diminishing genetic diversity. They are split into 33 isolated groups and some group had only fewer than 10 members. According to Zhang, as many as 18 sub-populations are facing "a high risk of collapse." China's assessment is that the giant panda species could be called less endangered only when the wild population grows steadily without adding captive-bred pandas. Marc Brody, senior adviser for conservation at the China's Wolong reserve also expressed doubts over the wisdom of IUCN's review of the pandas' status. "It is too early to conclude that pandas are actually increasing in the wild," Brody said at the World Conservation Congress in Hawaii. He added that no justifiable reason is in sight to downgrade the listing from endangered to "threatened." Meanwhile, the ABC from Australia said the good news for pandas may not last as a warming planet from excessive fossil fuel burning may wipe out one-third of the pandas' bamboo habitat in the coming decades. "The concern now is that although the population has slowly increased — and it is still very small — several models predict a reduction of the extent of bamboo forests in China in the coming decades due to climate change," Carlo Rondinini, a mammal assessment coordinator at the Sapienza University of Rome, told reporters. © 2016 Tech Times, All rights reserved. Do not reproduce without permission.
News Article | November 7, 2015
Chinese scientists who studied the language of giant pandas at a conservation center in the Sichuan province were able to decipher 13 different vocalizations. Researchers found that male giant pandas make 'baa' sounds like a sheep when wooing mate. The female giant pandas then respond by making bird-like sounds (chirping) when they're interested. Baby pandas (cubs) make 'wow-wow' sounds when they're sad. When they're hungry, the make 'gee-gee' sounds to prompt their mothers into action. Cubs also say 'coo-coo' which translate to 'nice' in human language. The research team recorded the giant pandas' vocalizations in various scenarios which included nursing the cubs, fighting and eating to analyze the voiceprints. "Trust me - our researchers were so confused when we began the project, they wondered if they were studying a panda, a bird, a dog, or a sheep," said China Conservation and Research Center for the Giant Panda head Zhang Hemin, who lead the study. The research team has been analyzing panda linguistics since 2010. Panda cubs learn to bark, shout, chirp, and squeak to express what they want. The researchers found that adult giant pandas are typically unsocial animals, making their mothers the only language teacher they ever had. When a mother panda won't stop making bird-like sounds (chirping), she could be worried about her cubs. Like a dog, she barks when a stranger goes near her babies. In general, barking can be translated as "get out of my place." Understanding how giant pandas communicate can be valuable in their conservation, especially in their natural habitat in the wild. Findings coupled with conservation efforts will benefit future generations. Looking forward, the China Conservation and Research Center for the Giant Panda is looking into the creation of a "panda translator" using a voice-recognition software. The 2014 census of the World Wildlife Fund said there are 1,864 giant pandas living in the wild, majority of which are found in Shaanxi and Sichuan provinces in China. Towards the end of 2013, there were 375 giant pandas living in conservation centers or zoos around the world. Two hundred captive pandas are living at the China Conservation and Research Center for the Giant Panda. Saving giant pandas from the brink of extinction have reached a tipping point. On the other side of the world, scientists gear up to clone one male and one female panda at the Roslin Embryology, a biotechnology firm at Edinburgh Science Triangle in the United Kingdom (UK). Tian Tian and Yang Guang, who live in the 82-acre Edinburgh Zoo, are last two giant pandas left in the UK. The team who successfully cloned Dolly the sheep will also be cloning the two pandas.
Hull V. (Michigan State University), Xu W. (CAS Research Center for Eco Environmental Sciences), Liu W. (Michigan State University), Zhou S. (China Conservation and Research), and 12 more authors.
Biological Conservation | Year: 2011
Protected areas worldwide are facing increasing pressures to co-manage human development and biodiversity conservation. One strategy for managing multiple uses within and around protected areas is zoning, an approach in which spatial boundaries are drawn to distinguish areas with varying degrees of allowable human impacts. However, zoning designations are rarely evaluated for their efficacy using empirical data related to both human and biodiversity characteristics. To evaluate the effectiveness of zoning designations, we developed an integrated approach. The approach was calibrated empirically using data from Wolong Nature Reserve, a flagship protected area for the conservation of endangered giant pandas in China. We analyzed the spatial distribution of pandas, as well as human impacts (roads, houses, tourism infrastructure, livestock, and forest cover change) with respect to zoning designations in Wolong. Results show that the design of the zoning scheme could be improved to account for pandas and their habitat, considering the amount of suitable habitat outside of the core zone (area designated for biodiversity conservation). Zoning was largely successful in containing houses and roads to their designated experimental zone, but was less effective in containing livestock and was susceptible to boundary adjustments to allow for tourism development. We identified focus areas for potential zoning revision that could better protect the panda population without significantly compromising existing human settlements. Our findings highlight the need for evaluating the efficacy of zoning in other protected areas facing similar challenges with balancing human needs and conservation goals, not only in China but also around the world. © 2011 Elsevier Ltd. Source
Mother giant panda Aibang is seen with her newborn cub at a giant panda breeding centre in Chengdu, Sichuan Province, China, May 6, 2016. China Daily/via REUTER BEIJING (Reuters) - It is too soon to downgrade the conservation status of China's giant pandas as they still face severe threats, a leading conservationist said, after the International Union for Conservation of Nature took the species off its endangered list. The giant panda has emerged as a success story for conservation in China whose cause has been championed right up to the highest levels in Beijing, where leaders often give the animal to other countries as a sign of friendship. As of the end of 2015, China had 1,864 giant pandas in the wild, up from about 1,100 in 2000, with 422 in captivity, according to the government. But on Sunday, the International Union for Conservation of Nature reclassified the species as "vulnerable" rather than "endangered", citing growing numbers in the wild due to decades of protection efforts. Zhang Hemin, of the China Conservation and Research Centre for the Giant Panda, known in China as the "father of pandas", told the official Xinhua news agency that this was a hasty move. "A severely fragmented natural habitat still threatens the lives of pandas; genetic transfer between different populations will improve, but is still not satisfactory," Zhang said in a report late on Tuesday. "Climate change is widely expected to have an adverse effect on the bamboo forests which provide both their food and their home. And there is still a lot to be done in both protection and management terms." The wild giant panda population faced a lack of genetic diversity as it was broken up into 33 isolated groups, some of which had fewer than 10 individuals, Zhang said. Of those 18 sub-populations with fewer than 10 pandas, all faced "a high risk of collapse", he added. Only when the wild population could grow steadily without the addition of captive-bred pandas could the species be called less endangered, Zhang said. "If the conservation status is downgraded, protection work might slacken off and both the panda population and their habitat are more likely to suffer irreversible loss," he added. "The present protection achievements will be lost and some small sub-populations may die out." Shi Xiaogang, of the Wolong National Nature Reserve in southwestern Sichuan province, China's main panda conservation centre, said pandas still needed continuous protection, according to Xinhua. It was good China's efforts had been recognized. "But as conservators, we know that the situation of the wild panda is still very risky," Shi said. | <urn:uuid:cabcd0f3-de98-429a-928d-cd7e0a9dbce0> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/china-conservation-and-research-989785/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00456-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952864 | 2,726 | 3.40625 | 3 |
How the Internet-of-Things can help keep the World fed
One of my favorite expressions—one that serves as an elegantly simple reminder in so many situations—is “Waste not, want not.” Plain in sentiment, but complex in execution, to be sure.
When it comes to the business of food, however, the sentiment rises to a level of “words to live by,” more than a mere reminder or suggestion. Population growth is not slowing down, expected to rise to nine billion by 2050, and very real questions are being asked about whether or not the world can produce enough food at affordable prices to keep widespread poverty and hunger at bay.
The issue cuts to two key areas that simply must be made less prevalent, and it so happens that technologies built upon the Internet-of-Things are primed to play a significant role in effecting change. They are:
"Food Waste" and "Food Safety"
As the often irreverent and always inciting John Oliver informed us last year, food waste in developed nations has grown to comically high proportions. To hear the Natural Resources Defense Council tell it, as much as 40 percent of food produced in the United States never gets eaten. That totals out to about 20 pounds of food, per person, per month, going straight to the dumpster.
And if you view that stat up against another, that nearly 50 million Americans lived in a “food insecure” household in 2013, the juxtaposition should ring alarm bells for any human being with an ounce of empathy in their bones. This kind of discrepancy is simply not acceptable, or sustainable.
So, what does the IoT have to do with it? Certainly, the causes of food waste are multifactorial, but the good news, if you can call it that, is that the lion’s share of waste can be attributed to inefficient supply chains. In so many cases, goods do not arrive on store shelves until it is far too close to the “sell by” date, and thus ends up getting glanced over by consumers and discarded.
With modern sensor technology, this is a relatively simple problem to solve. Sensors affixed to pallets or individual product packages open the door to stringent oversight of these items as they make their journey through the supply chain, from farms or factories to wholesale locations, and on to retail distribution. By keeping closer tabs, the theory goes, it will be possible to close loops in transit and storage time, and thus lengthen the time these products spend on actual store shelves, exposed to consumers and available for purchase, before the expiry date arrives.
More to the point, if we can better track these products en route to the stores, it stands to reason we can also track more precisely how often they get purchased, and stores can make cleaner decisions about inventory and back stock. If only a “just exactly perfect” volume of goods make it to the shelves to meet demand, then by rule there is less chance of spoilage and waste. Data begets intelligence.
On the other side of the coin, national foodstuff recalls have of late become a shockingly near-daily occurrence. Big scares like Chipotle and Kraft grab the headlines, but a jaunt to Foodsafety.org quickly reveals there have been no fewer than 20 recalls this month alone, by companies running the gamut from Whole Foods Market to Hormel to Garden of Life.
It turns out that temperature—or, more specifically, the ability to maintain a consistent temperature throughout transport—has the biggest impact on food safety as it makes its way from production to the grocery store or restaurant table. That said, with IoT-connected devices growing more and more conducive as a means to keep tabs on the condition and quality of food as it's produced, transported, stored and prepared, the ground has been laid for better control over ensuring these products stay safe, and certainly for preventing unsafe products from making it into circulation; the data can reveal when and where something goes wrong to compromise the product, and it can be dealt with right then and there.
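As a simple illustration of that idea, the sketch below scans a stream of timestamped temperature readings from a hypothetical pallet sensor and flags any stretch where the cold chain was broken for longer than an allowed grace period; the thresholds and readings are invented for the example.

```python
# Hypothetical cold-chain check: flag periods where a pallet sensor reported
# temperatures outside the safe band for longer than a grace period.
from datetime import datetime, timedelta

SAFE_MAX_C = 4.0                      # assumed safe ceiling for chilled goods
GRACE = timedelta(minutes=30)         # brief excursions are tolerated

readings = [                          # (timestamp, degrees C) - sample data
    (datetime(2016, 5, 1, 8, 0), 3.2),
    (datetime(2016, 5, 1, 9, 0), 3.9),
    (datetime(2016, 5, 1, 10, 0), 6.1),   # excursion starts
    (datetime(2016, 5, 1, 11, 0), 6.4),
    (datetime(2016, 5, 1, 12, 0), 3.5),   # back in range
]

excursion_start = None
for ts, temp in readings:
    if temp > SAFE_MAX_C:
        excursion_start = excursion_start or ts
    elif excursion_start:
        duration = ts - excursion_start
        if duration > GRACE:
            print(f"ALERT: cold chain broken for {duration}, starting {excursion_start}")
        excursion_start = None
```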
It goes without saying, of course, that better moment-by-moment visibility of food during transport translates into a safer food chain. If the food industry can eliminate incidents of having to recall entire lots of food due to a glitch along the way, it will also cut down on the potential for waste. Two birds with one stone. | <urn:uuid:8482a5b0-8f7c-40e1-93e0-1f092e1504f1> | CC-MAIN-2017-04 | http://www.koretelematics.com/blog/food-safety-and-food-waste-how-iot-can-solve-both-problems | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00208-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948583 | 910 | 2.671875 | 3 |
As often as you use email, you should make sure that you are presenting your message in a clear and professional manner. Your email says a lot about you.
Here are some areas you should pay attention to when creating or replying to an email.
1. Your Spelling
You would think this would be obvious. Use the spell check if it is in your email client. Check your email client options to see if the email can be automatically checked for spelling before it is sent, just in case you forget. Nothing says unprofessional like an email with spelling mistakes.
2. Your Grammar
Is your email grammatically correct? Check for proper grammar, misused words and punctuation. This will make sure your message is not misinterpreted by the reader. Along this line, if an email comes from someone who works for you and has spelling or grammatical errors, you should coach them on the subject. Email from direct reports or team members also reflects on you.
3. Your Subject Line
The subject line has a purpose. It is meant to instantly communicate to the reader what the email is about. If you are communicating with someone who receives hundreds of emails per day, the subject line tells the recipient if they need to read it right away or if it can wait. If a message is critical, put "Critical!" in the subject line. Do not become annoying by marking every message critical. It is like the boy who cried wolf once too often.
4. Your Email Format
An email message should be formatted so the reader can quickly read and understand the message. Unless you only have one to three sentences, do not write a continuous paragraph with no breaks. It makes the email hard to read. Just as this article is broken up into different thoughts, so should your email.
5. Your Email Length
Emails should be short and to the point. They are intended to be a quick way of communicating a specific message. Rambling on or including information that is not relevant to the email should be left out. If you have more than one topic to email the recipient about, write a separate email for each. It allows the recipient to respond according to priority and specifically to the topic.
6. Your Email Indicator
Email programs such as Microsoft Outlook have priority indicators. They are there for a purpose and should not be abused. If your email is not critical do not mark it as such. Respect the time of the reader. A critically marked email means the message is vital or contains information that must be acted on immediately.
7. Your Email Signature
Wow, where do I start? First, your email should have a signature block. It should be a short, professional-looking block with your contact information. Someone should not have to look up your contact information if they need to call you about the email. Do not include images, such as the company logo, that are not needed. While they may look nice, they make the email larger than necessary and may be held up by email filtering software because they are seen as attachments. Some people like to use fancy email signatures using HTML or Rich Text. Be aware that some email clients do not render these well.
8. How Will Your Email Be Read?
Today more and more people read email via Blackberry or some other type of mobile device. Consider this when formatting your email message. This is another reason not to include an image as part of your email signature. Portable devices are slower and images become an annoyance.
9. Do You Really Need a Return Receipt?
Unless you are sending a time sensitive or otherwise important enough message that you feel you need to cover yourself or document that you sent it, do not ask for a delivery receipt. It communicates to the recipient that you do not trust them to read your email.
10. Proofread Your Email
You should also proofread your email. Even the short ones. Check for misused words. Make sure the subject line correctly communicates what the message is about. Check your format. Is the message broken up enough so it is easy to read? If you do go back and edit an email message you should proofread it again. You may have taken a word out that destroyed your sentence structure.
11. Answer All Questions
When replying to email you should answer all questions. While your email should be short, a reply should include all of the information the sender is requesting. This prevents a stream of follow-up emails to get all the answers and saves both parties' time.
12. Do Not CC: Unless You Need To
It can be confusing and creates the corporate version of junk mail when Carbon Copy (CC:) is overused. There is no need to add anyone to the email who does not need to see it. If the person you are sending the email to thinks the information needs to be seen by someone else, they can always forward it.
13. Will Only The Recipient Read The Email?
When you create or reply to an email you should keep your personal comments out unless specifically asked for. Do not include any remarks or information you would not want someone else to see. How many times have you seen an email you sent to one person come back to you with 5-10 other people added to the conversation?
14. Email Attachments
Attachments should only be used when needed. Also consider the size of the attachment. If you are sending an email to someone who does not have an inbox big enough for it or their email system has restrictions they will never get the email. If you need to send a large attachment, follow-up on the email to be sure they received it.
15. Do Not Forward Spam or Chain Email
Don’t waste my time, the companies email server and storage space by sending me spam or the cute message you just received. It may sound harsh, but business email should only be about business. Remember in most companies all email is considered property of the company. What may be funny to you may be offensive to someone else. Consider this when viewing email in public. If someone walks by your desk while you are viewing a less than tasteful email they may be offended and complain. Worse, your boss could walk by.
All of this may take a little more time and effort. But it is important to remember that your email communicates not just the message, but who you are. You do not want to come off as lazy and unprofessional. Like all communications, email is remembered. With some care your email will be remembered for the good reasons, not the bad.
Do you have any email pet-peeves? If so, share them with a comment. | <urn:uuid:91832be0-2617-4962-b523-babaf161e983> | CC-MAIN-2017-04 | http://itmanagersinbox.com/206/15-tips-on-business-email-etiquette/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00116-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942456 | 1,353 | 2.75 | 3 |
NASA today said it was looking for technology that could offer green rocket fuel alternatives to the highly toxic fuel hydrazine used to fire up most rockets today.
According to NASA: "Hydrazine is an efficient and ubiquitous propellant that can be stored for long periods of time, but is also highly corrosive and toxic. It is used extensively on commercial and defense department satellites as well as for NASA science and exploration missions. NASA is looking for an alternative that decreases environmental hazards and pollutants, has fewer operational hazards and shortens rocket launch processing times."
NASA said it expects such green fuels would not only decrease environmental pollutants but also reduce propulsion system complexity, create fewer operational hazards, decrease launch processing times and increase performance.
Of course creating and testing such fuels takes money and time. NASA noted it expects to make multiple contract awards for the technology with no single award exceeding $50 million.
This isn't the first trip down the green fuel lane NASA has made. In 2009 the agency and the Air Force said they had successfully launched a 9ft rocket 1,300 feet into the sky powered by aluminum powder and water ice.
Aluminum powder and water ice, or ALICE, has the potential to replace some liquid or solid rocket propellants and is being developed by Purdue University and Pennsylvania State University.
Aside from the environmental impact, ALICE could be manufactured in distant places like the moon or Mars instead of being transported to those locations at high cost, researchers said.
In a paper, scientists said aluminum-water combustion has been studied since the 1960s as a viable means of propulsion because the mixture's reaction liberates a large amount of energy during combustion and produces green exhaust products.
Currently, propellants used for Earth-to-orbit and orbit-to-orbit missions are expensive. Thus, there is quite a need for new-generation propellants which can be used in the booster stage as well as possess characteristics that make them storable in Low Earth Orbit (LEO). ALICE reportedly has a toothpaste-like consistency and is cooled to -30° C (-22° F) 24 hours before flight, researchers said.
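To give a feel for the energy involved, here is a back-of-the-envelope estimate of the heat released by the aluminum-water reaction, using standard tabulated enthalpies of formation. It is a rough textbook calculation for illustration, not a figure from the ALICE program.

```python
# Rough energy estimate for 2 Al + 3 H2O -> Al2O3 + 3 H2 using standard
# enthalpies of formation (kJ/mol): Al2O3 ~ -1675.7, liquid H2O ~ -285.8.
dH_Al2O3 = -1675.7
dH_H2O = -285.8

# Reaction enthalpy per 2 mol of aluminum (products minus reactants):
dH_reaction = dH_Al2O3 - 3 * dH_H2O       # kJ per 2 mol Al (elements are zero)
per_mol_Al = dH_reaction / 2              # kJ per mol Al
per_kg_Al = per_mol_Al / 26.98e-3         # kJ per kg Al (molar mass ~26.98 g/mol)

print(f"Reaction enthalpy: {dH_reaction:.0f} kJ per 2 mol Al")
print(f"~{abs(per_kg_Al) / 1000:.0f} MJ released per kg of aluminum burned")
```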
A Web service uses Internet protocols to provide a service. It is an XML-based protocol transported over SOAP, or a service whose instances and data objects are addressable via URIs.
Access Manager consists of several elements that comprise Web services:
Web Service Framework: Manages all Web services. The framework defines SOAP header blocks and processing rules that enable identity services to be invoked via SOAP requests and responses.
Web Service Provider: An entity that provides data via a Web service. In Access Manager, Web service providers host Web service profiles, such as the Employee Profile, Credential Profile, Personal Profile, and so on.
Web Service Consumer: An entity that uses a Web service to access data. Web service consumers discover resources at the Web service provider, and then retrieve or update information about a user, or on behalf of a user. Resource discovery among trusted partners is necessary because a user might have many kinds of identities (employee, spouse, parent, member of a group), as well as several identity providers (employers or other commercial Web sites).
Discovery Service: The service assigned to an identity provider that enables a Web Service Consumer to determine which Web service provider provides the required resource.
LDAP Attribute Mapping: Access Manager’s solution for mapping Liberty attributes with established LDAP attributes.
For additional resources about the Liberty Alliance specifications, visit the Liberty Alliance Specification page. | <urn:uuid:70a5cce1-7bd0-421e-a319-7f80d763a5a4> | CC-MAIN-2017-04 | https://www.netiq.com/documentation/novellaccessmanager31/identityserverhelp/data/b1yc5c1.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00024-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.837973 | 297 | 2.609375 | 3 |
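To make the consumer-provider interaction concrete, the snippet below shows what a Web service consumer request can look like on the wire: an XML payload wrapped in a SOAP envelope and POSTed over HTTP. The endpoint, namespace and message body are placeholders invented for illustration and do not reflect Access Manager's or the Liberty Alliance's actual message formats.

```python
# Generic illustration of a SOAP call from a Web service consumer to a provider.
# The URL, namespace and body below are placeholders, not a real service.
import urllib.request

ENDPOINT = "https://provider.example.com/soap/personal-profile"  # hypothetical
SOAP_BODY = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header/>
  <soapenv:Body>
    <QueryProfile xmlns="urn:example:profile">
      <UserID>jdoe</UserID>
      <Attribute>emailAddress</Attribute>
    </QueryProfile>
  </soapenv:Body>
</soapenv:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=SOAP_BODY.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": "QueryProfile"},
)
with urllib.request.urlopen(request) as response:   # fails without a real endpoint
    print(response.read().decode("utf-8"))
```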
It is surprisingly difficult to get accurate figures for the amount of spam that is sent globally, yet everyone agrees that the global volume of spam has come down a lot since its peak in late 2008. At the same time, despite some recent small decreases, the catch rates of spam filters remain generally high.
Spam still accounts for a significant majority of all the emails that are sent. A world in which email can be used without spam filters is a distant utopia. Yet, the decline of spam volumes and the continuing success (recent glitches aside) of filters have two important consequences.
The first is that we don't have to fix email. There is a commonly held belief that the existence of spam demonstrates that email (which was initially designed for a much smaller Internet) is somehow 'broken' and that it needs to be replaced by something that is more robust against spam.
Setting aside the Sisyphean task of replacing a tool that is used by billions, proposals for a new form of email tend either to put the bar for sending messages so high as to prevent many legitimate senders from sending them, or break significant properties of email (usually the ability to send messages to someone one hasn't had prior contact with).
Still, if spam volumes had continued to grow, we would have had little choice but to introduce a sub-optimal replacement. The decline in spam volumes means we don't have to settle for such a compromise.
Secondly, current levels of spam mean there is little threat of a constant flow of spam causing mail servers to fall over.
At the same time, one would be hard-pressed to find a user whose email is not filtered somewhere — whether by their employer, their provider, or their mail client.
Thus, looking at the spam that is sent isn't particularly interesting as it provides us with little insight into the actual problem. What matters is that small minority of emails that do make it to the user — whether because their spam filter missed it, or because they found it in quarantine and assumed it had been blocked by mistake.
Equally important is the question of which legitimate emails are blocked, and why — and what can be done to prevent this from happening again in the future.
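Both kinds of misses can be summarized with two simple rates. Given counts of how a sample of messages was classified, the catch rate and false-positive rate below are what actually describe the user's experience; the sample counts are made up for illustration.

```python
# Toy evaluation of a spam filter from a labelled sample of messages.
spam_caught = 9_700        # spam correctly sent to the junk folder
spam_missed = 300          # spam that reached the inbox (false negatives)
ham_blocked = 12           # legitimate mail wrongly quarantined (false positives)
ham_delivered = 19_988     # legitimate mail delivered normally

catch_rate = spam_caught / (spam_caught + spam_missed)
false_positive_rate = ham_blocked / (ham_blocked + ham_delivered)

print(f"Catch rate:          {catch_rate:.2%}")           # fraction of spam stopped
print(f"False-positive rate: {false_positive_rate:.3%}")  # legitimate mail lost
```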
It is tempting to look at all the spam received by a spam trap, or by a mail server, and draw conclusions from that. Such figures certainly help paint a picture, but in the end they say about as much about what users see as the number of shots on target in a football match says about the final result.
Despite the doom predicted by some a decade ago, email is still with us — and we have won a number of important battles against spam. But if we want to win the war, we need to shift our focus.
10 January 2013
WebRTC is a standard under development by the World Wide Web Consortium (W3C) and Internet Engineering Task Force (IETF) designed to enable browser-to-browser applications for audio, video and peer-to-peer file sharing without additional plug-ins.
This represents a significant step forward from the days of the original HTTP technology, where browsers could only make requests to return pages, and could lead to applications like Instagram and Skype within a browser, live video streaming via mobile phones and easy file sharing.
It will also enable a web developer to code RTC capabilities into their web page for web browsers without the problems associated with development and deployment.
Google is one of the main players leading the development of WebRTC, with an open source project to enable developers to easily implement their own RTC web applications. The open source aspect of Google's offering is particularly important, as it has made the technology available to reuse, modify and create derivatives, preventing control by a closed group of engineers or companies.
Google used its acquisition of Global IP Solutions to provide the core components of its WebRTC platform, and the latest version of its browser, Google Chrome, is WebRTC enabled.
Microsoft has also given support to WebRTC but has proposed a different approach named CU-RTC-Web. The IT giant has an interest in the manner in which WebRTC develops, as it will have a major impact on its subsidiary Skype and other messaging applications in its portfolio.
Skype has been working on a browser-based version of its software, but Microsoft is reportedly not keen to support WebRTC on its own browser, Internet Explorer, until the standard has been finalised.
The company also believes the current draft standard for the technology falls short, as it shows no signs of real-world interoperability with existing VoIP phones and mobile phones, from behind firewalls and across routers, and does not allow an application to control how media is transmitted on the network.
Opera and Mozilla Firefox are planning to support WebRTC by implementing the getUserMedia API into their browsers, but Apple, which develops the Safari browser, has so far been quiet on its intentions for WebRTC.
The framework of WebRTC is still under development and it is likely to take some time before the standard becomes widely adopted, but it is expected to disrupt telecoms companies, video conferencing providers and OTT players in the future. This is because the technology is not bound to any legacy infrastructure and is able to use both peer-to-peer and server-based connections.
Phil Edholm, president and founder of PKE Consulting, suggests WebRTC will enable each website to essentially become its own "service provider" without a requirement for any relationship to a party outside of itself and the user it is enabling to communicate.
For carriers, WebRTC can be considered both a threat and an opportunity, as it will disrupt communications services but also present a new source of revenue for those willing to embrace it.
In order to pursue opportunities in WebRTC, Tsahi Levent-Levi, director of business solutions at Amdocs, suggests that carriers should engage with the web developer community and deliver value to WebRTC applications and services. Some of the opportunities he earmarks include session-based charging for WebRTC, merging the carrier instant messaging platform Rich Communications Services (RCS) with WebRTC, quality-of-service assurance premiums on WebRTC communications, and offering server-side infrastructure components to enable WebRTC as a service to customers.
In addition, offering WebRTC termination to PSTN and GSM networks and WebRTC signalling are also considered viable opportunities for carriers to generate revenue.
On the OTT side, services like Skype and Lync are likely to be impacted by the introduction of WebRTC. The majority of OTT business models revolve around reaching as many users as possible but restricting them within the confines of a downloadable application or client, preventing communication between two different services, such as WhatsApp and its competitors.
WebRTC places control firmly in the hands of the user by removing the need to download individual clients for each vendor and to create a user ID, while also removing restrictions imposed by a lack of cross-client compatibility. This will force many of today's OTT players to change their business model to remain competitive.
One carrier that has already invested in the WebRTC space is Telefónica, through its acquisition of TokBox in October 2012. TokBox is a specialist in live video-based communications services through websites and mobile applications, and Telefónica said it plans to use the company's OpenTok Video Platform to offer business and consumer customers cross-platform web-based video communications services.
The Spanish carrier plans to offer the solution both directly through custom solutions and through the provision of APIs and applications, allowing businesses and developers to produce their own services.
Other carriers that are investigating opportunities in WebRTC include AT&T, France Telecom-Orange and Deutsche Telekom.
If you can easily understand how the network works, you will have more time to think about how to improve the system, rather than spending all your energy learning and figuring out the things that are already done. On the other side, this layered model guarantees that a new technology incorporated into a new protocol, or a new version of a protocol, will be able to function and cooperate with all other protocols in the same or other layers. This is why we need the OSI network reference model.
Tag: OSI model
Let’s see what is UDP – User Datagram Protocol. Inside the computer world, term “networking” is denoted to the physical joining of two machines for different purposes like communication and data distribution. But mesh of this hardware and computer’s software can communicated with each other with the help of specially designed protocols. Moreover, computer networks can be categorized on the basis of such communication languages (protocols) used by a network. | <urn:uuid:7a93b7a8-b902-41af-97c9-87ed0fb104d6> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/tag/osi-model | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00503-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930203 | 201 | 3.609375 | 4 |
PKCS #5: Password-Based Cryptography Standard
This document provides recommendations for the implementation of password-based cryptography, covering key derivation functions, encryption schemes, and message-authentication schemes.
- PKCS #5 v2.0 Password-Based Cryptography Standard: MS-Word, Acrobat PDF, PostScript, ps.zip
- PKCS #5 v2.0 Amendment 1: XML Schema for Password-Based Cryptography: Adobe PDF
- XML schema file for PKCS #5 v2.0 Amendment 1
This file, by courtesy of Dr. Stephen Henson (email@example.com), contains three Base64-encoded PKCS #8 EncryptedPrivateKeyInfo values, all making use of PBES2 and PBKDF2 defined in PKCS #5 v2.0. The first key is encrypted with des-cbc, the second with des-ede3-cbc and the third with rc2-cbc. The password in each case is "password" (without quotes). Once decrypted, they should all yield the same private RSA key.
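As a rough illustration of the key-derivation side of PKCS #5, the short Python sketch below runs PBKDF2 with HMAC-SHA1 via the standard hashlib module; the salt, iteration count and key length are illustrative values chosen here, not the parameters used in the test files above:

import hashlib
import os

password = b"password"   # the same example password used in the test files
salt = os.urandom(8)     # PKCS #5 recommends a salt of at least 64 bits
iterations = 100000      # illustrative iteration count
key_length = 24          # e.g. a 192-bit key suitable for des-ede3-cbc

# PBKDF2 as defined in PKCS #5 v2.0: derive a key from the password and salt.
derived_key = hashlib.pbkdf2_hmac("sha1", password, salt, iterations, dklen=key_length)
print(derived_key.hex())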
- 1.1 What is RSA Laboratories' Frequently Asked Questions About Today's Cryptography?
- 1.2 What is cryptography?
- 1.3 What are some of the more popular techniques in cryptography?
- 1.4 How is cryptography applied?
- 1.5 What are cryptography standards?
- 1.6 What is the role of the United States government in cryptography?
- 1.7 Why is cryptography important? | <urn:uuid:f861e6ed-9ec3-49ef-9f7a-32dd64074f1a> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/pkcs-5-password-based-cryptography-standard.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00503-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.799676 | 325 | 2.71875 | 3 |
Security Vulnerability Assessment
The key to any effective enterprise security program is to understand the extent and types of risk your organization is willing to assume. Vulnerabilities are the channel between threats and assets. Assessing the vulnerabilities that expose your company’s assets to elevated risk is a challenging mission. It is a critical component of your organization’s overall risk-management program.
This article provides the information that security professionals need to launch vulnerability assessment projects and initiate the risk analysis process. We also review tools that may be used for security vulnerability assessments.
The Risk Management Context
Depending on the complexity of your organization, performing responsible risk-management due diligence involves a number of tasks:
- Develop a clear understanding of your organization’s core mission, objectives and policies.
- Establish a program to educate senior leaders about information technology risk management.
- Identify and inventory all relevant corporate assets.
- Assess and document vulnerabilities.
- Ensure that senior leaders are aware of these vulnerabilities.
- Solicit input from all decision makers involved in risk-management activities.
- Review and update all relevant security policies (document them if they were not formally published).
Once a rational security policy foundation is in place, it underpins the procedures and technologies used to mitigate risks to vulnerable data and associated systems. This process is referred to as risk management.
Risk analysis is the process of identifying relevant assets and threats, then identifying or engineering cost-effective security and control measures that balance the cost of those measures against the losses that would be expected if they were not in place. Threats and risks are real. Each organization needs to identify and prioritize its risks and threats, and to be rigorous about managing them.
A thorough risk assessment should identify the system vulnerabilities, threats and current controls and attempt to determine the risk based on the likelihood and threat impact. These risks should then be assessed and a risk level assigned, such as high, medium or low.
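One simple way to picture that likelihood-and-impact step is a small scoring function such as the Python sketch below; the 1-3 scales and the thresholds are invented here purely for illustration and are not a prescribed methodology:

def risk_level(likelihood, impact):
    """Map likelihood and impact (each rated 1-3) to a qualitative risk level."""
    score = likelihood * impact   # simple multiplicative risk score
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Example: a vulnerability that is very likely to be exploited (3) with moderate impact (2).
print(risk_level(3, 2))   # -> "high"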
A risk analysis determines what needs to be protected—for example, sensitive business assets and information—what the possible threats are and what the vulnerabilities are. It then determines the likelihood of various security incidents and their impact on the organization.
The key to any effective security program is to understand the risk level in the organization and then to determine how to effectively mitigate that risk. This requires identifying the data that your organization needs to protect and where that data lives and moves. This then provides the basis for security policies, practices and technologies to protect all such data, such as electronic protected health information.
Risk analysis requires understanding the core business functions of the enterprise and then analyzing potential threats and vulnerabilities to assets and information. It helps identify critical business assets and associated risks.
The end result of the risk-analysis process should be a list of vulnerabilities that identify gaps in the security infrastructure that may be exploited. The threat to the infrastructure is serious. CIO Magazine reported that in December 2002, hard drives that contained more than 500,000 social security numbers of members were stolen from the Phoenix office of TriWest, a managed care provider serving the military. This breach resulted in a class action suit.
Business Security Goals
Security professionals understand that business leaders are driven by shareholders, customers, lenders, regulators, lawmakers and others to:
- Ensure the confidentiality, integrity and availability of all sensitive business information, including its creation, receipt, storage and transmission.
- Protect against any reasonably anticipated threats or hazards to the security or integrity of such information.
- Protect against any reasonably anticipated uses or disclosures of such information.
- Ensure compliance with the security policy by all members of the organization’s workforce.
The purposes of a security vulnerability assessment include:
- To assess security technology capabilities as they relate to business objectives.
- To determine security technology limitations (gaps), as they exist today.
- To understand the dissonance between business processes and the IT systems and infrastructure that support them.
- To identify business risks, security requirements and possible vulnerabilities from the business unit’s perspective.
- To identify technical risks, security requirements and possible vulnerabilities from the business unit’s and/or IT personnel’s perspectives.
Businesses need to periodically assess the information security infrastructure with a specific focus on identifying significant vulnerabilities to sensitive systems and data. Once the vulnerabilities have been identified, the risk from them needs to be analyzed, and the costs associated with mitigating those risks need to be determined.
Vulnerability Assessment Tools
A number of tools may be used in assessing the vulnerability of an organization’s systems and networks. Examples of tools that may be used for risk analysis and vulnerability assessment include (but are not limited to): SamSpade Tools, Nmap, Nessus Vulnerability Scanner, Microsoft Baseline Security Analyzer, QualysGuard, STAT Scanner and ISS Internet Scanner. Security professionals need to be familiar with using these tools and understand their capabilities for functions such as reporting.
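At their simplest, such scanners start by discovering which services are listening. The Python sketch below shows only that first step, a basic TCP connect check; the host address is a documentation placeholder, and a probe like this should only ever be pointed at systems you are explicitly authorized to test:

import socket

def open_ports(host, ports, timeout=1.0):
    """Return the subset of ports that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                found.append(port)
    return found

# Example: check a few well-known ports on a lab machine you are permitted to scan.
print(open_ports("192.0.2.10", [22, 80, 443, 3389]))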
There are other scanning and testing tools that may also be run to determine gaps in the enterprise security architecture. These tools fall into the following categories:
- Web Server Vulnerability Scanners: These tools look for common vulnerable scripts and files within Web sites. Hacking Web applications is quickly growing in popularity.
- Network Sniffers: These tools may be used to examine traffic in and out of the network to look for instances where passwords or important information is sent unencrypted.
- War Dialers: These may be used to search for rogue modems on systems.
- Wireless Tools: These may be used to search for rogue access points and to determine the difficulty with which someone outside the company could connect to the wireless network.
Remember that vulnerability assessment tools are simply snapshots of your network. The processes governing access to technology and information are often the most vulnerable to exploitation. When an individual calls a support engineer, administrator or database analyst for access to confidential information, are there processes and controls in place to ensure that only appropriate access is granted? Are there alerts when an individual’s access-control activities violate policy? Are there frequent reports and audit trails in place to track process compliance? It is also important to carefully manage physical and software changes so that vulnerabilities are not injected into your procedural and technical infrastructure. Remain attentive in your use of anti-virus systems throughout the infrastructure.
All of the vulnerability assessment tools mentioned in this article can be misused. Even without misuse, they can impair or interrupt communications or corrupt information held in your networked systems. Learn how to use your vulnerability assessment | <urn:uuid:0ccb14d0-4975-49bf-8b8d-d2c062e6446a> | CC-MAIN-2017-04 | http://certmag.com/security-vulnerability-assessment/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00227-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923324 | 1,351 | 2.53125 | 3 |
NASA today said all systems were go for the Jan. 11 firing of its Mars Science Laboratory spacecraft's thrusters - a move that will more precisely set the ship's trajectory toward the Red Planet.
NASA said the blast is actually a choreographed sequence of firings of eight thruster engines during a period of about 175 minutes beginning at 3 p.m. PST. The maneuver has been planned to use the spacecraft's inertial measurement unit to measure the spacecraft's orientation and acceleration.
"We are well into cruise operations, with a well-behaved spacecraft safely on its way to Mars," said Mars Science Laboratory Cruise Mission Manager Arthur Amador, of NASA's Jet Propulsion Laboratory in a statement. "After this trajectory correction maneuver, we expect to be very close to where we ultimately need to be for our entry point at the top of the Martian atmosphere."
While this firing is planned to be the largest, before its arrival at Mars on Aug. 5, NASA said there are opportunities for five more flight path correction maneuvers, as needed, for fine tuning.
On Jan. 15, NASA said, it will begin a set of engineering checkouts that include tests of several components of the system for landing the rover on Mars and for the rover's communication with Mars orbiters.
Some other interesting facts about the Mars Science Laboratory spacecraft:
- The spaceship's cruise-stage solar array is producing 780 watts.
- The telecommunications rate is 2 kilobits per second for uplink and downlink. The spacecraft is spinning at 2.04 rotations per minute.
- The Radiation Assessment Detector, one of 10 science instruments on the rover, is collecting science data about the interplanetary radiation environment.
- As of 9 a.m. PST on Saturday, Jan. 7, the spacecraft will have traveled 72.9 million miles (117.3 million kilometers) of its 352-million-mile (567-million-kilometer) flight to Mars.
- It will be moving at about 9,500 mph (15,200 kilometers per hour) relative to Earth and at about 69,500 mph (111,800 kilometers per hour) relative to the sun.
NASA calls the laboratory, which is expected to operate for at least two years once it arrives, the biggest astrobiology mission to Mars ever. The Mars Science Laboratory rover Curiosity will carry the biggest, most advanced suite of instruments for scientific studies ever sent to the Martian surface. Curiosity will use an onboard laboratory to study rocks, soils, and the local geologic setting in order to detect chemical building blocks of life.
Check out these other hot stories: | <urn:uuid:f1f74b19-3229-4fa9-9a86-4ef6460ac43f> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2221421/security/nasa-set-for-mars-bound-spacecraft-s-biggest-thruster-blast.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00227-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.909058 | 540 | 3.484375 | 3 |
UNIX® systems have hundreds of utility applications or commands. Some commands manipulate the file system, while others query and control the operating system itself. A healthy number of commands provide connectivity, and an even larger set of commands can generate, permute, modify, filter, and analyze data. Given the long and rich history of UNIX, chances are your system has just the right tool for the task at hand.
Moreover, when a single utility doesn't suffice, you can combine any number of UNIX utilities in a variety of ways to create your own tool. As you've seen previously, you can leverage pipes, redirection, and conditionals to build an impromptu tool immediately on the command line, and shell scripts combine the power of a small, easy-to-learn programming language with the UNIX commands to build a tool you can reuse over and over again.
Of course, there are times when neither the command line nor a shell script is
adequate. For example, if you must deploy a new daemon to provide a new network
service, you might switch to a rich language, such as Python, to write the application yourself. And because so many
applications are freely available on the Internet—freely meaning no
cost, licensed under liberal terms, or both—you can also download, build,
and install a suitable, working solution to meet your requirements.
Many versions of UNIX (and Linux®) provide a special tool called a package manager to add, remove, and maintain software on the system. A package manager typically maintains an inventory of all software installed locally, as well as a catalog of all software available in one or more remote repositories. You can use the package manager to search the repositories for the software you need. If the repository contains what you're looking for, all it takes is one command or a few clicks of the mouse to install a new package on your system.
A package manager is invaluable. With it, you can remove entire packages, update existing packages, and automatically detect and fulfill any prerequisites for any package. For example, if you choose software to manipulate images, such as the stalwart ImageMagick, but your system lacks the library to process JPEG images, the package manager detects and installs what is missing before it installs your package.
Yet, there are also instances where the software you need is available but is not (yet) part of any repository. Given the predominance of package management, most software comes bundled in a form you can download and install using the package manager. However, because any number of versions and flavors of UNIX are available, it can be difficult to offer every application in each package manager format for each particular variation. If your UNIX installation is mainstream and enjoys a large, popular following, chances are better that you'll find the software prebuilt and ready to use. Otherwise, it's time to roll up your sleeves and prepare to build the software yourself.
Yes, young Jedi, it's time to use the source code.
Like lifting an X-wing fighter from a swamp, building software from source might seem intimidating at first, especially if you're not a software developer. In fact, in most cases, the entire process takes but a handful of commands, and the rest is automated.
To be sure, some programs are complex to build—or take hours to build—and require manual intervention along the way. However, even these programs are typically constructed from smaller pieces that are simple to build. It's the number of dependencies and the sequence of construction that complicate the build process. Some programs also have oodles of features that you might or might not want. For instance, you can build PHP to interoperate with the new Internet Protocol version 6 (IPv6) Internet addressing scheme. If your network has yet to adopt IPv6, there's no need to include that feature. Vetting a plethora of options adds effort to the build process.
This month, examine how to build a typical UNIX software application. Before you proceed, make sure that your system has a C compiler, such as the GNU Compiler Collection, and the suite of common UNIX software development tools, including make and awk. In addition, ensure that all the development tools are in your PATH environment variable.
Good things come in software packages
As an illustrative and representative example, let's configure, build, and install SQLite—a small library that implements a Structured Query Language (SQL) database engine. SQLite requires no configuration to use and can be embedded in its entirety in any application, and databases are contained in a single file. Many programming languages can call SQLite to persist data. SQLite also includes a command-line utility aptly named sqlite3 that manages SQLite databases.
To begin, download SQLite (see Resources). Pick the most current source code bundle, and download it to your machine. (As of this writing, the most recent version of SQLite was version 3.3.17, released on 25 April 2007.) This example uses the file stored as http://www.sqlite.org/sqlite-3.3.17.tar.gz.
When you have the file, unpack it. The .tar.gz extension reflects how the archive was constructed. In this case, it's a gzipped, tar archive. The latter extension, .gz, stands for gzip (compression); the former extension, .tar, stands for tar (an archive format). To extract the contents of the archive, simply process the file in reverse order—first extracting it and then opening the archive:
$ gunzip sqlite-3.3.17.tar.gz $ tar xvf sqlite-3.3.17.tar
These two commands create a replica of the original source code in a new
directory named sqlite-3.3.17. By the way, the .tar.gz file format is quite
common (it's called a tarball), and you can unpack a tarball using the
tar command directly:
$ tar xzvf sqlite-3.3.17.tar.gz
This single command is equivalent to the two previous commands.
Next, change the directory to sqlite-3.3.17, and use
ls to list the contents. You should see a manifest like
Listing 1. A manifest of the SQLite package
$ ls Makefile.in contrib publish.sh Makefile.linux-gcc doc spec.template README ext sqlite.pc.in VERSION install-sh sqlite3.1 aclocal.m4 ltmain.sh sqlite3.pc.in addopcodes.awk main.mk src art mkdll.sh tclinstaller.tcl config.guess mkopcodec.awk test config.sub mkopcodeh.awk tool configure mkso.sh www configure.ac notes
The source code and supplemental files for SQLite are well organized and model how most software projects distribute source code:
- The src directory contains the code.
- The test directory contains a suite of tests to validate the proper operation of the software. Running the tests after the initial build or after any modification provides confidence in the software.
- The contrib directory contains additional software that the core SQLite
development team didn't provide. For a library such as SQLite, contrib might
contain programming interfaces for popular languages such as
C, Perl, PHP, and Python. It might also include graphical user interface (GUI) wrappers and more.
- Among the other files, Makefile.in, configure, configure.ac, and aclocal.m4 are used to generate the scripts and rules to build the SQLite software on your flavor of UNIX. If the software is simple enough, a quick compile command might be all that's required to build the code. But because so many variations of UNIX exist—Mac OS X, Solaris, Linux, IBM® AIX®, and HP/UX, among others—it's necessary to investigate the host machine to determine both its capabilities and its implementations. For example, a mail reader application might attempt to determine how the local system stores mailboxes and include support for the format.
Concentrate. Concentrate. Feel the source flow through you.
The next step is to probe the system and configure the software to build properly. (You can think of this step as tailoring a suit: The garment is largely the right size but needs some alteration to fit stylishly.) You customize and prepare for the build with the ./configure local script. At the command-line prompt, type:
The configure script conducts several tests to qualify your system. For instance,
./configure on an Apple MacBook computer (which
runs a variation of FreeBSD® UNIX) produces the following (see
Listing 2. The result of running ./configure on Mac OS X
checking build system type... i386-apple-darwin8.9.1 checking host system type... i386-apple-darwin8.9.1 checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking for a sed that does not truncate output... /usr/bin/sed checking for grep that handles long lines and -e... /usr/bin/grep checking for egrep... /usr/bin/grep -E checking for ld used by gcc... /usr/bin/ld ...
./configure determines the build and host system
type (which can differ if you're cross-compiling), confirms that the GNU
C Compiler (GCC) is installed, and finds the paths to
utilities the rest of the build process might require. You can scan through the
rest of your output, but you'll see a long list of diagnostics that characterize
your system to the extent needed to construct SQLite successfully.
The ./configure command can fail,
especially if a prerequisite—a system library or critical system utility,
say—cannot be found.
Scan the output of
./configure, looking for anomalies,
such as specialized or local versions of commands, that might not be appropriate to
build a general application such as SQLite. As an example, if your systems
administrator installed an alpha version of GCC and the
configure tool prefers to use it, you might choose to
manually override the choice. To see a list (often long) of options you can adjust, run ./configure --help, as shown in
Listing 3. General options for the ./configure script
$ ./configure --help ... By default, `make install' will install all the files in `/usr/local/bin', `/usr/local/lib' etc. You can specify an installation prefix other than `/usr/local' using `--prefix', for instance `--prefix=$HOME'. For better control, use the options below. Fine tuning of the installation directories: --bindir=DIR user executables [EPREFIX/bin] --sbindir=DIR system admin executables [EPREFIX/sbin] --libexecdir=DIR program executables [EPREFIX/libexec] ...
The output of
./configure --help includes general
options used with the configuration system and specific options pertinent only to
the software you're building. To see the latter (shorter) list, type
./configure --help=short (see
Listing 4. Package-specific options for the software to build
$ ./configure --help=short Optional Features: --disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no) --enable-FEATURE[=ARG] include FEATURE [ARG=yes] --enable-shared[=PKGS] build shared libraries [default=yes] --enable-static[=PKGS] build static libraries [default=yes] --enable-fast-install[=PKGS] optimize for fast installation [default=yes] --disable-libtool-lock avoid locking (might break parallel builds) --enable-threadsafe Support threadsafe operation --enable-cross-thread-connections Allow connection sharing across threads --enable-threads-override-locks Threads can override each others locks --enable-releasemode Support libtool link to release mode --enable-tempstore Use an in-ram database for temporary tables (never,no,yes,always) --disable-tcl do not build TCL extension --disable-readline disable readline support [default=detect] --enable-debug enable debugging & verbose explain
In the output of ./configure --help, the text at the
very top indicates that the default installation directory for executables is
/usr/local/bin, the default installation directory for libraries is
/usr/local/lib, and so on. Many systems use an alternate hierarchy to
store non-core software.
For example, many systems administrators choose to use /opt instead of /usr/local
as the locus of locally added or locally modified software. If you want to install
SQLite in a directory other than the default, specify the directory with the
--prefix= option. One possible use—and a common
one if you're the only person using a package or if you don't have root access to
install the software globally—is to install the software in your own
hierarchy within your home directory:
$ ./configure --prefix=$HOME/sw
Using this command, the install portion of the build would recreate the hierarchy of the software in $HOME/sw, as in $HOME/sw/bin, $HOME/sw/lib, $HOME/sw/etc, $HOME/sw/man, and others as needed. For simplicity, this example installs its code in the default targets.
Compile the code
The result of
./configure is a Makefile compatible
with your version of UNIX. The development utility named make uses the Makefile to
execute the steps required to compile and link the code into an executable. You
can open the Makefile to examine it, but don't edit it, because any modifications you make will be lost if you run ./configure again.
The Makefile contains a list of source files to build, and it also includes constants
that enable or disable and choose certain snippets of code in the SQLite package.
For instance, code specific to 64-bit processors might be enabled if the
configure tool detected a suitable chip within your
system. The Makefile also expresses dependencies among source files, so a change
in an all-important header (.h) file might cause recompilation of all the
C source code.
Your next step is to run
make to build the software
(see Listing 5):
Listing 5. Running make
$ make sed -e s/--VERS--/3.3.17/ ./src/sqlite.h.in | \ sed -e s/--VERSION-NUMBER--/3003017/ >sqlite3.h gcc -g -O2 -o lemon ./tool/lemon.c cp ./tool/lempar.c . cp ./src/parse.y . ./lemon parse.y mv parse.h parse.h.temp awk -f ./addopcodes.awk parse.h.temp >parse.h cat parse.h ./src/vdbe.c | awk -f ./mkopcodeh.awk >opcodes.h ./libtool --mode=compile --tag=CC gcc -g -O2 -I. -I./src \ -DNDEBUG -I/System/Lib rary/Frameworks/Tcl.framework/Versions/8.4/Headers \ -DTHREADSAFE=0 -DSQLITE_THREA D_OVERRIDE_LOCK=-1 \ -DSQLITE_OMIT_LOAD_EXTENSION=1 -c ./src/alter.c mkdir .libs gcc -g -O2 -I. -I./src -DNDEBUG \ -I/System/Library/Frameworks/Tcl.framework/Vers ions/8.4/Headers \ -DTHREADSAFE=0 -DSQLITE_THREAD_OVERRIDE_LOCK=-1 \ -DSQLITE_OMIT_L OAD_EXTENSION=1 -c ./src/alter.c -fno-common \ -DPIC -o .libs/alter.o ... ranlib .libs/libtclsqlite3.a creating libtclsqlite3.la
Note: In the output above, blank lines have been added to better highlight each step that make performs. The make utility checks the modification dates of files—header files, source code, data files, and object files—and rebuilds only the C source files that are appropriate. On this first run, make rebuilds everything, because no object files or build targets exist. As you can see, the rules to build the targets include intermediate steps, too, that use tools, such as sed and awk, to produce header files that are used in later steps.
The result of the make command is a finished library and the sqlite3 command-line utility.
Although tests are neither mandatory nor provided in every package, it's a good idea to test the software you just built. Even if your software builds successfully, it's not necessarily an indication that the software functions properly.
To test your software, run
make again with the
test option (see Listing 6):
Listing 6. Testing the software
$ make test ... alter-1.1... Ok alter-1.2... Ok alter-1.3... Ok alter-1.3.1... Ok alter-1.4... Ok ... Thread-specific data deallocated properly 0 errors out of 28093 tests Failures on these tests:
Success! The software built fine and works correctly. If one or more test cases did fail, the summary at the bottom (here, it's blank) would report which test or tests require investigation.
A finished product
If your software works properly, the final step is to install it on your system.
Once again, use
make and specify the
install target. Adding software to /usr/local usually
requires superuser (root) privileges provided by
sudo (see Listing 7):
Listing 7. Installing the software on your local system
$ sudo make install tclsh ./tclinstaller.tcl 3.3 /usr/bin/install -c -d /usr/local/lib ./libtool --mode=install /usr/bin/install -c libsqlite3.la /usr/local/lib /usr/bin/install -c .libs/libsqlite220.127.116.11.dylib /usr/local/lib/libsqlite18.104.22.168 .dylib ... /usr/bin/install -c .libs/libsqlite3.lai /usr/local/lib/libsqlite3.la /usr/bin/install -c .libs/libsqlite3.a /usr/local/lib/libsqlite3.a chmod 644 /usr/local/lib/libsqlite3.a ranlib /usr/local/lib/libsqlite3.a ... /usr/bin/install -c -d /usr/local/bin ./libtool --mode=install /usr/bin/install -c sqlite3 /usr/local/bin /usr/bin/install -c .libs/sqlite3 /usr/local/bin/sqlite3 /usr/bin/install -c -d /usr/local/include /usr/bin/install -c -m 0644 sqlite3.h /usr/local/include /usr/bin/install -c -m 0644 ./src/sqlite3ext.h /usr/local/include /usr/bin/install -c -d /usr/local/lib/pkgconfig; /usr/bin/install -c -m 0644 sqlite3.pc /usr/local/lib/pkgconfig;
The make install process creates the necessary directories (if each doesn't exist), copies the files to the destinations, and runs ranlib to prepare the library for use by
applications. It also copies the
sqlite3 utility to
/usr/local/bin, copies header files that developers require to build software
against the SQLite library, and copies the documentation to the proper place in the installation hierarchy.
Assuming that /usr/local/bin is in your PATH variable, you can now run
sqlite3 (see Listing 8):
Listing 8. SQLite, ready to use
$ which sqlite3 /usr/local/bin/sqlite3 $ sqlite3 SQLite version 3.3.17 Enter ".help" for instructions sqlite>
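With the library and command-line shell installed, you can also exercise SQLite from scripts. The short Python sketch below is one way to sanity-check an installation; note that Python's built-in sqlite3 module may link against its own copy of the library rather than the one you just built, so treat this as an illustration rather than a test of your specific build:

import sqlite3

conn = sqlite3.connect("demo.db")   # a single-file database, created on demand
conn.execute("CREATE TABLE IF NOT EXISTS builds (pkg TEXT, version TEXT)")
conn.execute("INSERT INTO builds VALUES (?, ?)", ("sqlite", "3.3.17"))
conn.commit()

for row in conn.execute("SELECT pkg, version FROM builds"):
    print(row)

print("SQLite library version:", sqlite3.sqlite_version)
conn.close()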
Advice for the apprentice?
A fair majority of software packages build as readily as SQLite. Indeed, you can often configure, build, and install the software with one command:
$ ./configure && make && sudo make install
The && operator runs the latter command only if the former command works without error. So, the command above says, "Run ./configure, and if that works, run make, and if that works, run sudo make install." This one command builds a package
unattended. Just kick it off and go get coffee, a sandwich, or a prix fixe
meal, depending on the size and complexity of the package you're building.
Here are some other helpful tips for building software from source code:
- If the software package you're building requires more than the typical
./configure && make && sudo make install, keep a journal of the steps you followed to build the code. If you must rebuild the same code or build a newer version of the code, you can refer to your journal to refresh your memory. Store the journal in the same directory as the package's README file. You might even adopt a convention for the journal's file name, which makes it easy to recognize what you've built previously.
- Better yet, if the steps required to build the software are repeatable without manual intervention, capture the process in a shell script. Later, if you must rebuild the same code, simply run the shell script. If a newer version of the code becomes available, you can modify the script as needed to add, change, or remove steps.
- You can reclaim disk space after you've installed the software by using
make clean. This rule usually removes the target files and any intermediate files, and it leaves the files required to restart the process intact. Another rule,
make distclean, removes the Makefile and other generated files.
- Keep the source of differing versions of the same code separate. This regimen allows you to compare one release to another, but it also allows you to recover a specific version of the software. Organize the source code into a local repository, say $HOME/src or /usr/local/src, depending on your scope of use (personal or global) and your local conventions.
- Further, you might choose to prevent accidental removal or overwrites by
making the source code globally read-only. Change to the directory of the source
code you want to protect, and run the
chmod -R a-w * command (run chmod recursively, turning off all write permissions).
Finally, there will be instances when source code simply won't build on your system. As mentioned above, the most frequent obstacle encountered is missing prerequisites. Read the error message or messages carefully—it might be obvious what has gone wrong.
If you cannot deduce the reason, type the exact error message and the name of the package you're trying to build into Google. Chances are very good that someone else has encountered and solved the same issue. (In fact, searching the Internet for error messages can be quite illuminating—although you might have to dig a little to find a gem.)
If you get stumped, check the software's home page for links to resources such as an IRC channel, a newsgroup, a mailing list, or an FAQ. Your local systems administrator is an invaluable font of experience, too.
The source is strong with this one
If your system lacks a tool you need, you can ad lib one on the command line, you can write a shell script, you can write your own program, and you can borrow from the enormous pool of code found online. You'll be well on your way to practicing Jedi mind tricks just like me.
"This is the best article I've ever read."
- Speaking UNIX: Check out other parts in this series.
- Check out other articles and tutorials written by Martin Streicher:
- Popular content: See what AIX and UNIX content your peers find interesting.
- Search the AIX and UNIX library by topic:
- AIX and UNIX: The AIX and UNIX developerWorks zone provides a wealth of information relating to all aspects of AIX systems administration and expanding your UNIX skills.
- New to AIX and UNIX?: Visit the "New to AIX and UNIX" page to learn more about AIX and UNIX.
- AIX 6 Wiki: Discover a collaborative environment for technical information related to AIX.
- Safari bookstore: Visit this e-reference library to find specific technical resources.
- developerWorks technical events and webcasts: Stay current with developerWorks technical events and webcasts.
- Podcasts: Tune in and catch up with IBM technical experts.
Get products and technologies
- About SQLite: You can download SQLite from here.
- IBM trial software: Build your next development project with software for download directly from developerWorks.
- Participate in the developerWorks blogs and get involved in the developerWorks community.
- Participate in the AIX and UNIX forums: | <urn:uuid:bcb25ae1-4c68-40fe-a03c-4a14fffdcb84> | CC-MAIN-2017-04 | http://www.ibm.com/developerworks/aix/library/au-speakingunix12/?ca=dgr-lnxw1wdoitunix12S_TACT=105AGX59S_CMP=GRsite-lnxw1w | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00347-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.813258 | 5,523 | 3.15625 | 3 |
The National Oceanic and Atmospheric Administration's Space Weather Prediction Center today issued a geomagnetic storm bulletin for the next 12 hours.
Such storms can cause problems with Global Positioning Systems and power grids.
NOAA stated: "Great anticipation for the first of what may be three convergent shocks to slam the geomagnetic field in the next twelve hours, +/-. The CME with the Radio Blackout earlier today is by far the fastest, and may catch its forerunners in the early hours of August 5 (UTC) -- at earth. Two impacts are expected; G2 (Moderate) to G3 (Strong) Geomagnetic Storming on August 5, and potentially elevated protons to the S2 (Moderate) Solar Radiation Storm condition, those piling up ahead of the shock. The source of it all, Region 1261, is still hot, so more eruptions are possible. New Solar Cycle 24 is in its early phase now, and this level activity is typical for this time interval. Expect increased space weather activity over the next few years as the Sun erupts more frequently. "
More on space: 10 wicked off-the-cuff uses for retired NASA space shuttles
There have been a couple solar blasts this year that have garnered lots of attention. One on Valentine's Day raised a lot of concern but didn't amount to much.
A NASA-funded study in 2009 showed some of the risk extreme weather conditions in space have on the Earth. The study, conducted by the National Academy of Sciences notes that besides emitting a continuous stream of plasma called the solar wind, the sun periodically releases billions of tons of matter called coronal mass ejections. These immense clouds of material, when directed toward Earth, can cause large magnetic storms in the magnetosphere and upper atmosphere, NASA said. Such space weather can impact the performance and reliability of space-borne and ground-based technological systems, NASA said.
This year space weather scientist Bruce Tsurutani at NASA's Jet Propulsion Laboratory in a paper written on Sunspots stated: "Geomagnetic effects basically amount to any magnetic changes on Earth due to the Sun, and they're measured by magnetometer readings on the surface of the Earth. Such effects are usually harmless, with the only obvious sign of their presence being the appearance of auroras near the poles. However, in extreme cases, they can cause power grid failures on Earth or induce dangerous currents in long pipelines, so it is valuable to know how the geomagnetic effects vary with the Sun."
Check out these other hot stories: | <urn:uuid:bf73e11a-a48b-402a-9759-b58e514f4ba9> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2220343/security/geomagnetic-storm-predicted-for-earth-in-next-12-hours.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00255-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93249 | 541 | 2.984375 | 3 |
Mapping the tiny mouse brain is pushing the limits of information technology capabilities.
By the end of the three-year project to map the mouse brain, about one petabyte of data will have been generated, pushing scientists up against a range of technological limitations, according to Mark Boguski, senior director of the Allen Brain Atlas Project, speaking at the recent Bio-IT World Conference.
The staff of 26 scientists and IT specialists went into the work knowing that it would require augmenting existing hardware and software as well as in-house development. This may result in a technology infrastructure that will be useful to other scientists when made publicly available. The goal of the project is to create a 3D molecular map of the mouse brain; it is constructed by painstakingly imaging slices of the brain.
Read the full story at IDG News Service | <urn:uuid:2a30f647-4cfc-4074-9895-6d5455ada708> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Database/RAM-Limitations-Strain-BioIT | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00071-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941795 | 165 | 2.984375 | 3 |
New Invention on the way - Charge your Laptop & Mobile from Air
Have you ever imagined that you could get rid of the power supply cable and make your laptop or mobile phone truly portable while you are on the move? It seems like a dream that you could charge up your laptop, mobile phone or home appliances from the air without even plugging into a power socket. But now the invention looks set to be a great breakthrough, with a group of MIT researchers demonstrating a 60W light bulb powered wirelessly from a power source located two meters away.
WiTricity is based on the simple physics principle that energy can be transferred wirelessly by utilizing magnetically coupled resonance. It involves a pair of copper coils for the energy transmission. One coil, acting as the transmitter, generates a magnetic field oscillating in the MHz range; on the other end, another coil, acting as the receiver, resonates with the generated magnetic field and converts the energy back to electricity, which can then be used to power the laptop continuously.
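To see why the coils are described as oscillating in the MHz range, recall that a coil and its capacitance form a resonant circuit whose frequency is f = 1 / (2 * pi * sqrt(L * C)). The Python sketch below evaluates that textbook formula for illustrative component values, which are not the actual parameters of the MIT apparatus:

import math

def resonant_frequency(inductance_h, capacitance_f):
    """Resonant frequency in Hz of an LC circuit: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values: a 25 microhenry coil with 10 picofarads of stray capacitance.
f = resonant_frequency(25e-6, 10e-12)
print(round(f / 1e6, 1), "MHz")   # roughly 10 MHz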
The advantage of using this method of transferring energy is that it is reasonably efficient (about 40-45%) and less hazardous to the human body than electromagnetic radiation. Furthermore, there is no line-of-sight requirement, which would otherwise degrade efficiency significantly when obstacles are present. Take a look at how the team demonstrated the setup while obstructing the line of sight between the transmit and receive coils in the proof-of-concept stage, and at the BBC feature about WiTricity.
Check out the WiTricity Video
| <urn:uuid:ef16ff5f-f884-4c2f-8ff5-15712c4c30a0> | CC-MAIN-2017-04 | http://www.knowledgepublisher.com/article-407.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00283-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949148 | 400 | 2.78125 | 3 |
After a month on the surface of Mars, NASA rover Curiosity has driven more than the length of a football field and has started gearing for some real scientific work.
"We've been on the surface of Mars for about a month and Curiosity continues to surprise us with how well she's doing with everything we've asked off her," said Mike Watkins, a mission systems manager at NASA's Jet Propulsion Laboratory (JPL). "Now that [Curiosity has] moved, we've reached a point where we want to do a more detailed check of the arm and the tools on it."
During a press conference Thursday, NASA showed off images of the tracks left by Curiosity after its first trek of 358 feet, which leaves it some 269 feet from the landing site.
Curiosity is now heading toward Glenelg, an area of scientific interest because of three different types of terrain that meet there.
At this point, though, the rover is spending about a week in one spot so scientists can run detailed tests of Curiosity's robotic arm.
On Saturday, scientists will start moving the arm, which has five robotic joints, into various positions to make sure it wasn't damaged during its journey from Earth, or during its descent and landing on Mars, said Matt Robinson, lead engineer for Curiosity's robotic arm at JPL.
The team will also use the arm to take pictures of the area and the rover itself, including the first images of the machine's underbelly since it left Earth.
Robinson also noted that the team will test various tools that are attached to the end of the arm. The tests will include firing up the drill, the arm's dust removal tool and its hand-held imager.
Joy Crisp, deputy project scientist for Curiosity, said today that the rover has taken measurements of the atmosphere on Mars. The rover used its Sample Analysis at Mars (Sam) instrument to test for the concentration of different gases.
The results of those tests could come as early as next week, Crisp said.
Crisp noted today that once Curiosity once again picks up its trek to Glenelg, scientists are hoping to find some interesting rocks along the way.
If they find a fine-grained rock that is embedded in the ground and has a horizontal surface, they'll stop and study it. However, scientists figure they won't use the rover's drill until it reaches Glenelg.
The car-sized, nuclear-powered machine is on what NASA hopes will be at least a two-year mission.
It is equipped with 10 scientific instruments and offers the most advanced payload of scientific gear ever used on the surface of Mars, including chemistry instruments, environmental sensors and radiation monitors.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, or subscribe to Sharon's RSS feed . Her e-mail address is firstname.lastname@example.org. | <urn:uuid:3f485014-ad54-4180-81a6-a1599d5c1cc2> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2492152/emerging-technology/nasa-set-to-test-curiosity-mars-rover-s-robotic-arm.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00099-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960693 | 612 | 3.390625 | 3 |
Distributed Denial of Service (DDoS) attacks aren’t infamous for their sophistication; however, their increasing adaptability warrants a fresh look at the evolving anatomy of these dynamic threats.
What is a DDoS attack?
A DDoS attack is an attempt to consume one or more finite resources on a target computer or network of computers. These attack vectors block genuine users from accessing the network, application, or service and exploit detectable vulnerabilities.
What are the types of DDoS attacks?
Although there is a broad spectrum of types of DDoS attacks, they can typically be categorized into one of the following:
- Volumetric
The intent of these attacks is to cause congestion by consuming the bandwidth either within or between the target network/service and the rest of the Internet.
- TCP state-exhaustion
These attacks target web servers, firewalls, and load balancers in an attempt to disrupt their connections, consuming the finite number of concurrent connections the device can support.
- Application layer
Also known as Layer-7 attacks, these threats target vulnerabilities in an application or server with the intent of establishing a connection and exhausting it by monopolizing processes and transactions. They are difficult to detect because it takes very few machines to carry out the attack, generating deceptively low traffic rates, making them the most serious type of DDoS attacks.
The sophisticated attackers of today blend all three DDoS attack types, creating a formidable threat that is even more challenging for businesses to combat.
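Because application-layer floods generate deceptively low traffic volumes, detection often comes down to watching per-client request rates rather than raw bandwidth. The Python sketch below illustrates the idea with a sliding-window counter; the window length and threshold are arbitrary illustrative numbers, not tuning guidance for any real service:

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10    # length of the sliding window
MAX_REQUESTS = 100     # requests allowed per client within the window (illustrative)

history = defaultdict(deque)   # client id -> timestamps of recent requests

def is_suspicious(client_id, now=None):
    """Record one request and report whether the client exceeds the rate threshold."""
    now = time.time() if now is None else now
    window = history[client_id]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()   # discard requests older than the window
    return len(window) > MAX_REQUESTS

# Example: the 101st request inside a single window gets flagged.
for i in range(101):
    flagged = is_suspicious("203.0.113.7", now=1000.0 + i * 0.01)
print(flagged)   # True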
How could DDoS attacks impact my business?
The modern business landscape all but requires websites and applications with uninterrupted performance. DDoS attacks pose a serious threat to maintaining business continuity in today’s web-based world. From “Mafiaboy’s” notorious “Project Rivolta” that brought down the websites of Amazon, CNN, Dell, E*Trade, eBay, and Yahoo! in 2000, to the recent server attack on game developer and publisher Blizzard, the storied history of DDoS attacks speaks for itself.
DDoS: Next Gen Defense
When it comes to addressing DDoS threats, prevention is key (see Cybersecurity: 5-Step Plan to Address Threats & Prevent Liability for detailed tips). Here are the most beneficial DDoS prevention tools:
Training staff to recognize the signs of an attack is essential, vendors advise. They should know what DDoS patterns look like, as well as how to respond if they’re alerted to the website or application being down.
Regularly update and proactively patch servers and other network elements to mitigate potential threats.
- Choose a provider with 24/7 DDoS prevention at the network connection layer– this can be detection and human mitigation, network defense systems (advanced, enterprise level threat protection solutions), or often both.
- A website firewall (or web application firewall) secures connections and maintains data integrity.
- Content Delivery Network (CDN)-based DDoS protection adds another layer of critical defense for websites at the point of contact.
Stay tuned for upcoming DDoSand security features on the Codero Blog, and subscribe to the Codero blog for other helpful tips from our team of hosting experts. | <urn:uuid:14df0305-f2ce-4e70-9517-b691ec085052> | CC-MAIN-2017-04 | http://www.codero.com/blog/adaptive-ddos-attacks-demand-next-gen-defense/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00549-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.904761 | 665 | 3.0625 | 3 |
Rootkits have gone from theoretical to everyday infections in a pretty short time. Tools like TDSSKiller, Sophos Antirootkit, Combofix, and others all help in the battle against these sophisticated infections but the best solution is to avoid getting one installed in the first place. To better understand the threat that rootkits pose, I recommend reading the threat report that the Microsoft Malware Protection Center just published today.
“The Microsoft Malware Protection Center (MMPC) has published a new threat report on Rootkits and how they work. This threat report is recommended reading for those people looking to better understand how malware families use rootkits to avoid detection and how to protect themselves from this type of threat.”
The report covers the purpose of rootkits and their etymology, how attackers use rootkits, the scope of the rootkit problem, notable malware families that use rootkits, and protection against rootkits and malicious/potentially unwanted software.
A rootkit works by essentially inserting itself into a system to moderate – or filter – requests to the operating system. By moderating information requests, the rootkit can provide false data, or incomplete data, to utterly corrupt the integrity of the affected system. This is the key function of a rootkit and explains why rootkits are a serious threat – after a rootkit is installed, it is no longer possible to trust any information that is reported back from the affected computer.
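As a loose analogy for that filtering behaviour - in ordinary user-space Python rather than an actual kernel hook - the sketch below wraps a directory-listing call so that certain entries simply never appear in its answers; the caller has no way to tell, from the results alone, that anything was hidden:

import os

HIDDEN_PREFIX = "secret_"    # illustrative marker for entries the filter conceals

real_listdir = os.listdir    # keep a reference to the genuine call

def filtered_listdir(path="."):
    """Answer directory queries, silently dropping entries the filter wants hidden."""
    return [name for name in real_listdir(path) if not name.startswith(HIDDEN_PREFIX)]

# Interpose on the call the rest of the program uses -- analogous to a rootkit
# moderating requests so that reported results can no longer be trusted.
os.listdir = filtered_listdir

print(os.listdir("."))       # any "secret_*" files are simply absent from the answer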
You can find out more from the Microsoft Security blog, which announced the report’s release. You can jump straight to the report (.pdf) by clicking the image below. | <urn:uuid:2a69ebc4-c462-41e1-9c7b-0e9c8a47bc6d> | CC-MAIN-2017-04 | https://www.404techsupport.com/2012/10/microsoft-malware-protection-center-provides-report-on-rootkits/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00549-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916634 | 339 | 2.5625 | 3 |
Table of Contents
It is important to know how to properly shut down or restart your computer so that you do not lose data or corrupt important Windows files or Registry locations. Many people think that you shut down your computer simply by pressing the power button. On some configurations, this will work as Windows will recognize that you press the power button and shut it down gracefully. On the other hand, if your computer is not configured to do this, when you press the power button the computer will turn off and any unsaved data in Windows will be lost. Shutting down a computer this way could also cause data corruption on your hard drive or within Windows.
With this said, it is important to know how to gracefully shut down or restart Windows so that all your data is saved by Windows before it terminates power to the computer. This tutorial will provide steps on how to shut down or restart Windows XP, Windows Vista, Windows 7, and Windows 8. As each of these operating systems have different ways of performing these tasks, we have broken the tutorial into different sections that correspond to each of the operating systems.
To shut down your computer in Windows XP click on the Start button () to open the Windows XP Start Menu as shown below:
Once the Start Menu is open, please click on the red power button as indicated by the blue arrow in the picture above. This will open up the power options dialog box.
At this screen, to shut down your computer you should click on the Turn Off button and to restart your computer and boot back into Windows you should click on the Restart button.
To shut down or restart your computer in Windows 7 or Windows Vista, please click on the Start button () to open the Windows 7 Start Menu as shown below.
Once the Start Menu is open, click on Shut down button to shut down Windows and your computer. If you would like to restart your computer, click on the arrow next to the Shut down button as indicated by the blue arrow in the picture above. This will open another menu where you can click on Restart to restart your computer and boot back into Windows.
The new interface of Windows 8 introduces new locations for common Windows tasks. One of these tasks is shutting down or restarting the computer. These functions were previously located in the Windows Start Menu, but as the Start Menu does not exist in Windows 8 they have been moved to another location. This section will outline two methods that can be used to shut down or restart Windows 8 and your computer.
Method 1: Using the Control+Alt+Delete keyboard combination:
The easiest method to use is to use the Control+Alt+Delete keyboard combination on your keyboard. The best way to use this shortcut is to put your pinky on the Ctrl key and hold it down, your thumb on the Alt key and hold it down, and then tap the Delete key with your other hand. This will open up a screen where you can perform some basic administrative tasks.
In the lower right-hand corner of this screen you should see a power button as indicated by the red arrow above. Click on the power button and a new screen will appear that provides different power options as shown below.
From the new menu you can now click on the Shut down or Restart options depending on what you would like to do.
Method 2: Using the Windows 8 Charms Bar
You can also shut down or restart your computer from the Windows Start Screen. To do this we need to access the Charms bar which can be opened by putting your mouse cursor in the upper or lower right-hand corners of the screen. This is indicated by the red arrows in the image below.
Once you do this, the Charms bar will appear as shown in the image below.
You should now click on the Settings charm to open up the Settings Charm bar.
At the bottom of the Settings Charm screen you will see an icon that looks like a power button and is labeled Power. Click on the Power button to open the Power options menu as shown in the image below.
You can now select either Shut down or Restart depending on what you wish to do.
Method 3: From the Windows 8.1 Desktop Start Menu
Windows 8.1 introduced a basic Start Menu that now includes the ability to Shutdown or Restart your computer. To access the Windows 8 Start Menu, right-click on the Windows 8.1 Start button () and the Windows 8.1 Start Menu will appear as shown in the image below.
Click on the Shut down submenu as indicated by the blue arrow and you will then see the various Shut down, Restart, Sleep, or Hibernate options.
Now left-click on the option you wish to use.
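If you prefer to script a shutdown or restart, recent versions of Windows also include a shutdown command-line utility, which the Python sketch below calls through subprocess. Treat the flags as something to confirm with shutdown /? on your own system before relying on them, and note that the example call is commented out so the snippet does not actually power off the machine:

import subprocess

def shut_down(restart=False, delay_seconds=0):
    """Ask Windows to shut down (or restart) using the built-in shutdown utility."""
    action = "/r" if restart else "/s"   # /s = shut down, /r = restart
    subprocess.run(["shutdown", action, "/t", str(delay_seconds)], check=True)

# Example: restart the computer after a 60-second grace period.
# shut_down(restart=True, delay_seconds=60)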
When using Windows there may come a time where you will need to close a program or process that is not responding or that you are concerned is a computer infection. This tutorial will walk you through using the Windows Task Manager to close a program when you cannot close it normally. | <urn:uuid:6ac8f419-1032-42e7-803b-377f1fc3d8a2> | CC-MAIN-2017-04 | https://www.bleepingcomputer.com/tutorials/how-to-shut-down-or-restart-windows/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00457-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913849 | 1,332 | 3.203125 | 3 |
The principles of fiber optics are based on the transmission of data by means of light. Fiber optics emerged and grew into a more advanced phase because of requirements from radio and telephone engineers, who needed more bandwidth for data transmission. These engineers had therefore been looking for a medium that could transmit data more reliably and faster than copper cables.
Fiber optics attracted attention because it was analogous in theory to the plastic dielectric waveguides used in certain microwave applications. Eventually a technology evolved that used glass or plastic threads to transmit data. Fiber optic cables contain several bundles of glass threads, each capable of transmitting data in modulated form.
With the arrival of fiber optics and fiber optic cables, data began to transfer faster, as fiber optic cables have greater bandwidth than metal cables and are more resistant to external interference. Lighter and thinner fiber optic cables readily transfer data in digital rather than analogue form. This technology is most useful in the computer industry and now forms an integral part of the telephone, radio and television industries.
Fiber optics yields distortion-free data transmission in digital form. Audio signals transmitted over fiber are delivered accurately. Fiber optics is also useful in the automotive and transportation industry: traffic lights and organized, closely monitored highway traffic control are some of the benefits of applying fiber optics to transportation.
The use of lasers in fiber optics is an important technical milestone. Because of their capability for higher modulation frequencies, lasers were identified as an important means of carrying information. In fiber optic technology, transmitters comprise lasers and modulators: the laser creates the light and injects the signal into the fiber, while the modulator varies the power of the laser light to encode the data to be transmitted.
More information about fiber optic products, please visit FiberStore.com | <urn:uuid:2f15801b-a95c-4ff1-b908-439d29c6fea5> | CC-MAIN-2017-04 | http://www.fs.com/blog/the-world-of-fiber-optics.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00365-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934348 | 429 | 3.59375 | 4 |
The writing is on the wall for storage as we have known it over the past decades - with flash (solid state disk (SSD)) developments the key reason. Yet, to minimise risks and maximise benefits we must think differently about data access.
The advent of SSDs did not immediately revolutionise storage and backup. It helped with some disk latency and access speed problems, but small capacities, high pricing and reliability concerns slowed adoption. This was no bad thing because inserting SSD as a straight replacement for spinning disk was poor usage - as it still carried all the legacy storage layers of abstraction.
This forced vendors to think hard about how and where to best use SSD technology. That only added to confusion because all sorts of flash formats then appeared...but now SSD prices are falling, capacities are multiplying and reliability is surpassing spinning disk. This was emphasised by speaker after speaker at the first annual "Flash Forward" conference in London last month. So, we might summarise storage's direction by saying, "flash is the future".
Yet that leaves organisations, not least those with large amounts of legacy storage, asking how to get from here to there. SSD covers a group of related technologies, all of which speed data access and are becoming ever more mainstream; the challenge is how best to use them, and not only as a performance fix. (For example, they can do nothing for data at rest that is not being accessed.)
3D NAND technology means 30TB SSD devices soon; they will begin to blow away disk, not least because of space, power and heat savings. 3D XPoint is also due for imminent release and, as reported, it has 100 times the speed of NAND read and write, 10 times DRAM density and, most importantly, 1000 times NAND endurance - potentially to perform for 15 years (although further SSD advances may overtake that).
While big-capacity SSD prices may stay high for some time, a life-span of, say, 10 years with immediate power savings hugely alters the cost of ownership calculation. So, every sizable IT storage user should be making or updating its long-term storage plans - now. It will allow for the write-down periods of existing equipment, and may need to include interim use of some flash technology to address short-term performance needs; but a fundamental systems architecture change is needed.
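Before moving on, a rough illustration of how a longer lifespan and lower power draw feed into that cost-of-ownership calculation; every figure below (purchase prices, wattages, electricity tariff, replacement cycle) is an invented placeholder rather than a vendor number:

```python
KWH_PRICE = 0.15            # currency units per kWh -- assumption
HOURS_PER_YEAR = 24 * 365   # device powered around the clock

def total_cost(purchase_price, watts, years):
    """Purchase price plus electricity consumed over the device's life."""
    energy_cost = watts / 1000 * HOURS_PER_YEAR * years * KWH_PRICE
    return purchase_price + energy_cost

# Spinning disk: cheaper to buy, hungrier, assumed replaced once over ten years.
hdd_ten_years = total_cost(purchase_price=150, watts=10, years=5) * 2
# SSD: dearer to buy, frugal, assumed to last the full ten years.
ssd_ten_years = total_cost(purchase_price=400, watts=3, years=10)

print(f"HDD, 10 years: {hdd_ten_years:.0f}")
print(f"SSD, 10 years: {ssd_ten_years:.0f}")
```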
One reason is that a more efficient use of SSD is as memory, as this by-passes existing storage approaches. Where access speed is critical, a short-term fix may be SSD as tier 1 (or an inserted tier zero); but, longer term, tiering can vanish alongside spinning disk (except for a deep archive tier using low-cost tape or long-lasting non-volatile flash). I also doubt that SANs will stay cost-effective. Users of in-memory databases should now see an opportunity to massively multiply their sizes, giving a boost to Big Data - and so on. There are so many new issues.
An obvious question to ask is: what are my primary aims (affecting prioritisation)? The answer(s) will depend on your organisation, and vary even in different parts of it: Is it an operational cost saving, improved IOPS or reduced latency (as SSDs can help all of these)? That means identifying workloads likely to benefit most from flash, and this will form part of a broad evaluation of where to automate to improve productivity and reduce cost.
Remember also that, if storage access is a bottleneck that SSDs remove, this will inevitably expose another infrastructure bottleneck, so the overall performance boost might not be as expected. In this regard, data takes time to travel across a network (a signal covers only about 11 inches in a nanosecond). As data multiplies, this becomes significant. So, one design criterion is to place data as near to where it is processed as possible. In turn this may mean various processors across a network capturing data and processing it there and then.
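To see why distance matters, a back-of-the-envelope propagation calculation is enough; it assumes signals move at roughly two-thirds of the speed of light in copper or fibre, which is a common approximation rather than a measured value:

```python
SPEED_OF_LIGHT_M_PER_S = 3.0e8
PROPAGATION_FACTOR = 0.66            # ~2/3 c in copper/fibre -- assumption

def one_way_delay_microseconds(distance_km):
    return distance_km * 1000 / (SPEED_OF_LIGHT_M_PER_S * PROPAGATION_FACTOR) * 1e6

# Rack, campus, regional and intercontinental distances (km) -- illustrative only.
for distance_km in (0.1, 10, 400, 5000):
    print(f"{distance_km:>7} km -> {one_way_delay_microseconds(distance_km):9.1f} microseconds one way")
```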
Other factors include altering applications. For example, get away from time-wasting locking of data in transactions, and replace with reversible transactions for when, very rarely, two transactions try to update the same record at once. One app at the edge may need only to process its tiny piece of data, and discard all that is not needed, then forward only the key information to the data centre (if this still exists). How about backup? If the media is all SSD, are super-fast snapshots all that will be needed?
I am barely scratching the surface of course. There is also one piece of (potentially) bad news in all this. With the demand for SSDs multiplying, there could be a lack of fab (fabrication plant) capacity to produce the right flash types in the quantities industry will need. That could mean delivery delays and slow down the implementation of plans and/or push up prices again as demand outstrips supply.
Yet, make no mistake. SSD technology needs to be central to your plans going forward. | <urn:uuid:ec0848ec-258c-4d90-92d5-7e0846a3e941> | CC-MAIN-2017-04 | http://www.bloorresearch.com/analysis/storage-with-ssd-key-questions-to-ask-now/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00117-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941266 | 1,014 | 2.515625 | 3 |
Data security has emerged as a critical problem for large and small businesses alike. Corporations are obligated to protect their sensitive information (and the personally identifiable information of their individual clients) against theft and loss. Better security controls, carefully regulated tape storage, and superior authentication and rights management have made the incidence of security faults quite rare.
However, you needn't look hard to find highly publicised examples of lost tapes and hacked files leading to complex and expensive legal problems. Encryption is one means of protecting data against any loss -- even if a tape is lost or a server is hacked, sensitive data cannot be read. Encryption can also help to meet growing regulatory requirements for data protection.
But, encryption strategies are not the same for every organisation. When selecting an encryption scheme, companies should consider several factors: the point where encryption takes place, the amount of data being protected, key management processes, and the corresponding effect on performance and cost.
Over the next few days we'll cover this subject from several angles, but let's start by identifying the core concerns when considering encryption.
Consider exactly what data needs to be encrypted. Not all data needs to be encrypted -- only personally identifiable information (for example, names combined with birth dates and social security numbers) or other sensitive information types that are delineated by industry standards, government regulations, or common business practices. Reducing the encryption load can ease any impact on backup performance or media utilisation. IT should not make this decision in a vacuum; each major department of the company should be involved. For example, a good time to discuss the need for encryption is when setting retention policies for each file type.
Decide where to encrypt. Encryption can be implemented in a specific application when data is actually saved (such as Oracle), though that will only encrypt data for that specific application. The broader form of "source" encryption takes place at the backup server through backup software such as EMC's Legato, Symantec's Veritas NetBackup and IBM's Tivoli Storage Manager. Both types of "source" encryption can impair the server's performance, since encryption is CPU-intensive.
Data can also be encrypted at the media itself. For example, LTO-4 tape drives incorporate AES-256 bit encryption. This eases any performance impact on backup jobs, and provides protected tapes that can be sent offsite.
Finally, data can be encrypted in-flight using a dedicated security appliance such as Decru's DataFort , the StrongBox SecurDB from Crossroads Systems, or the CryptoStor family from NeoScale Systems. While dedicated appliances can be more expensive than software-only solutions, they typically offer superior performance by encrypting/decrypting data at line speed -- imposing little (if any) performance penalty.
Determine the impact of encryption on compression. Compression works by removing redundant elements of information from a data stream. Encryption, however, effectively randomises the data stream and removes all redundancy. If you implement encryption prior to compression, you will lose the compression feature in your drives or backup software. You then need more media to complete the backup, or more time to transfer data across the wire. Increased media requirements will raise the cost and maintenance burden of any backup processes. Reducing the amount of data that is encrypted (e.g., encrypting only selected data) can mitigate this issue, but implementing encryption after the compression process can also help.
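A quick way to see the effect is to compress the same data before and after it has been scrambled. The sketch below uses only the Python standard library; a SHA-256-based XOR keystream stands in for a real cipher such as AES, purely to illustrate why ciphertext no longer compresses:

```python
import hashlib
import itertools
import zlib

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    """Stand-in stream cipher (XOR with a hash-derived keystream). Illustrative only."""
    keystream = itertools.chain.from_iterable(
        hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        for counter in itertools.count()
    )
    return bytes(b ^ k for b, k in zip(data, keystream))

plaintext = b"customer-record;" * 4096        # highly redundant sample data
ciphertext = toy_encrypt(plaintext, b"secret key")

print("plaintext  compressed size:", len(zlib.compress(plaintext)))   # shrinks dramatically
print("ciphertext compressed size:", len(zlib.compress(ciphertext)))  # barely shrinks at all
```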
Remember that encryption can affect performance. Encryption is a mathematical process, and when implemented in software, can demand significant processing power from the host server. This, in turn, can affect performance. Experts suggest that the penalty for software-based encryption products can reach 40-50%, depending on the type of encryption and the files being protected.
By comparison, a dedicated hardware encryption box might only impair performance by 10% or less. This means that encryption will take longer to process backups or conduct remote data transfers, posing a dilemma for storage administrators who already struggle with bloated backup windows and WAN bandwidth limitations. Most storage professionals resolve this quandary by encrypting only the most sensitive data.
Consider the implications of encryption key management. All encryption requires the use of a unique "key," which seeds the encryption algorithm. The key is also needed to decrypt the data later on when files are read from tapes or disks; encrypted data is effectively unreadable without the key. Companies must impose strict controls and policies (such as "key quorums") to ensure that only responsible storage professionals have access to the key. | <urn:uuid:cdf496f4-d57b-4f61-a17a-78d929726ba9> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240021818/Tape-encryption-purchases-Part-One-Essential-issues | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00025-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9233 | 950 | 2.703125 | 3 |
Which statement is true about RSTP topology changes?
Only nonedge ports moving to the blocking state generate a TC BPDU.
Any loss of connectivity generates a TC BPDU.
Any change in the state of the port generates a TC BPDU.
Only nonedge ports moving to the forwarding state generate a TC BPDU.
Answer: Only nonedge ports moving to the forwarding state generate a TC BPDU. A port moving to the blocking state does not trigger a topology change in RSTP.
The IEEE 802.1D Spanning Tree Protocol was designed to keep a switched or bridged network loop free, with adjustments made to the network topology dynamically. A topology change typically takes 30 seconds, where a port moves from the Blocking state to the Forwarding state after two intervals of the Forward Delay timer. As technology has improved, 30 seconds has become an unbearable length of time to wait for a production network to failover or "heal" itself during a topology change.
Topology Changes and RSTP
Recall that when an 802.1D switch detects a port state change (either up or down), it signals the Root Bridge by sending topology change notification (TCN) BPDUs. The Root Bridge must then signal a topology change by sending out a TCN message that is relayed to all switches in the STP domain. RSTP detects a topology change only when a nonedge port transitions to the Forwarding state. This might seem odd because a link failure is not used as a trigger. RSTP uses all of its rapid convergence mechanisms to prevent bridging loops from forming.

Therefore, topology changes are detected only so that bridging tables can be updated and corrected as hosts appear first on a failed port and then on a different functioning port.

When a topology change is detected, a switch must propagate news of the change to other switches in the network so they can correct their bridging tables, too. This process is similar to the convergence and synchronization mechanism: topology change (TC) messages propagate through the network in an ever-expanding wave. | <urn:uuid:1413a865-e72b-4dd5-afac-0f17826101f1> | CC-MAIN-2017-04 | http://www.aiotestking.com/cisco/which-statement-is-true-about-rstp-topology-changes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00330-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.881398 | 451 | 2.875 | 3 |
This course is designed to give business professionals the skills they need to manage product information using the Catalogs tool provided by IBM WebSphere Commerce V7 Feature Pack 7.
The skills developed in this course enable WebSphere Commerce business users to manage product attributes using the features provided by the Management Center. The course explains the different types of product attributes, attribute dictionary attributes, and how they can be associated with products in a Management Center catalog. It begins with an overview of product information, including a business scenario that explains the need to change product information, which results in a catalog update for a store, and introduces students to the key terms, definitions, and Management Center tools involved in product information management. Subsequent units cover the types of product attributes: defining attributes, descriptive attributes, and their values (either predefined or assigned). Other units describe how to update information and how to create and use the attribute dictionary and attribute dictionary attributes to manage product information, and explain features such as merchandising associations and SKU creation. The course also briefly describes how the catalog can be filtered using the Catalogs and Filtering tool.
The scenario-based sample demonstrations help students understand how to log in to the Management Center and use the various features and functions of its Catalogs tool to manage product information. The course includes instructions that can be used to re-create the demonstration tasks. | <urn:uuid:2135307d-94d0-41dc-9fd5-f2007d96d62a> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/course/120868/product-information-management-for-ibm-websphere-commerce-version-7-fep-7/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00448-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920766 | 318 | 2.65625 | 3 |
We are inching closer to a world in which everything -- and everybody -- will be part of some wireless computer network. Or, more precisely, a Specknet. "Specks" are tiny computing devices that can be placed on common objects -- or on people, as they currently are in a Scottish community testing “speckled computing” with respiratory patients. Each individual programmable speck can sense real-world data (temperature and motion, for instance), compute it, and communicate with each other wirelessly. The goal, as laid out by the Centre for Speckled Computing at the University of Edinburgh in Scotland, is no less than to bridge the physical and virtual worlds:
“In our vision of Speckled Computing, the sensing and processing of information will be highly diffused – the person, the artefacts and the surrounding space, become, at the same time, computational resources and interfaces to those resources."
In the case of the Scottish trial patients, small electronic patches (about the size of a man's thumb) placed on their chests monitor respiratory data -- a person’s breathing rate and depth, for example -- and transmit the information wirelessly to doctors monitoring the patients miles away. This community-care pilot project followed successful hospital trials of the new technology in Scotland based on a system created at the Centre for Speckled Computing, headed by Professor D K Arvind. (See interview below.)

Speaking this month at the annual British Science Festival in Aberdeen, Scotland, Dr. Arvind explained that the “inspiration” for the speckled computing model comes not from the virtual or digital world, but from the living world. “A large number of these is almost like a semblance of cells and neurons,” he said. “It’s very similar to how we believe biological computation takes place.”

The vision of a world in which computers are embedded in everyday objects has been around a long time. Way back in 1988, Mark Weiser, the late, renowned Xerox PARC scientist, was talking about this very notion, which he called “ubiquitous computing” and which he predicted would become as common and invisible as electricity. “Hundreds of computers in a room could seem intimidating at first,” Weiser wrote in 1991. “But like wires in the walls, these hundreds of computers will come to be invisible to common awareness. People will simply use them unconsciously to accomplish everyday tasks.”

Speckled computing essentially is ubiquitous computing in granular, dispersed form. But it wouldn’t be possible without advances in Internet technology. As Arvind explained in his inaugural lecture for the Centre for Speckled Computing earlier this year, Internet Protocol 6, only now just being rolled out, eventually will support 35 trillion subnetworks, each able to connect millions of devices.

Much of the early focus on uses for speckled computing has targeted health care. Besides using specks to allow doctors to monitor respiratory patients, researchers are investigating using them to help prevent falls among the elderly and to measure conversational skills in adults with Asperger Syndrome. Beyond healthcare, speckled computing appears to have numerous fascinating potential applications. Among them are:
* Enabling humans to control the actions of robots with their own body motions, so a human raising his speck-covered arm would be able to make a robot tied into the same Specknet raise its arm in an identical manner. This would have potential uses by the military and rescue personnel.
* Increasing the ability of scientists to understand “land-atmosphere interactions” such as fires. Specknets may be used to monitor seasonal cycles and their impact on land surfaces, and can also be connected to simulation models.
* Allowing people “hands-free” and “eyes-free” control of devices via body movements, including head, wrist and foot gestures.
* Measuring stress and forecasting disease for agricultural crops
If you have time and are interested in learning more about speckled computing, Dr. Arvind's inaugural lecture on YouTube is a good place to start. However, if you don't have 70 minutes to spare at the moment, you can learn a lot about speckled computing directly from Dr. Arvind, who was kind enough to answer some of my questions via email:

From the Lab: Specks currently are small, but still easily visible. How much smaller can they get? As small as grains of rice or sand?

Dr. Arvind: The limiting factor in the miniaturisation of specks is not the electronics but the battery sizes required to do anything practical. Improvement in energy density of batteries is quite modest compared to the doubling of transistors every 18 months. Also, as we make the specks smaller the radio frequency for wireless communication has to move up the GHz range (24GHz compared to the current 2.4GHz) so that the antenna size gets smaller at the cost of increased power consumption. An alternative is optical communication using laser/LEDs which is low power but with the disadvantage of fixed specks for line of sight communication and problems of occlusion.

From the Lab: What are the most promising areas for speckled computing?

Dr. Arvind: We are concentrating on three application areas for specks: Healthcare, environmental monitoring and digital media. In healthcare, we have developed the Respire speck which can monitor respiratory rate/flow, heart rate, and activity levels. It has undergone clinical trials in Edinburgh hospitals for postoperative care in hospitals and has been deployed in the community to monitor remotely COPD patients in the comfort of their homes. The Orient speck has been used in mobile gait analysis and remote physiotherapy. On environmental monitoring, we are developing specks with GPS, accelerometer, and magnetometer to analyse the behaviour of wild horses in southwest Spain. The information of interest are the areas visited by the horses, times spent resting, running, feeding and their social behaviour -- which horses spend time with others which is useful to study the spread of disease. The specks are currently undergoing trials in a herd of horses (not wild) in Edinburgh and will be deployed in Spain in 2013. Other applications are the monitoring of the environment in buildings and greenhouses to optimise usage of energy. In digital media, we have developed on-body wireless specks for full-body, 3-D motion capture in real-time. This has applications in 3-D animation, user interfaces, biomechanics, dance, sports (analysis of golf swing).

From the Lab: What are the greatest practical challenges to speckled computing? Cost? Power supply? Wireless reliability?

Dr. Arvind: The greatest challenge is taking great care in the design of the speck architecture, firmware, networking protocols, distributed algorithms so as to optimise energy usage.

From the Lab: Is there any danger of a Specknet being compromised? Are they inherently insecure?

Dr. Arvind: Not necessarily. The data transmitted by specks can be encrypted for wireless transmission which is an overhead, of course.

From the Lab: How long do you think it will be before “ubiquitous computing” truly is as common as electricity?

Dr. Arvind: We have come a long way in the last 10 years. Speckled computing presaged the "Internet of things" by a good many years!
When the research community was talking in terms of wireless sensor networks, it was clear to me that it was not the sensor data but the analysis of the data on the specks, at the edges, to extract information in situ was the key and its connection to the rest of the IP network with the advent of the IPv6 protocol and the accompanying explosion in addresses. I do believe we will get there but cannot be precise when. | <urn:uuid:84d5e3d8-c35c-4c4c-be5f-070cea4df307> | CC-MAIN-2017-04 | http://www.itworld.com/article/2721483/consumer-tech-science/the-tiny--yet-powerful--world-of-speckled-computing.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00384-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951632 | 1,602 | 3.390625 | 3 |
Beyond the developer's toolkit is a universe of other crucial concepts and perspectives.
The blessing, and the curse, of being a software developer is that there's almost no limit on what would be useful to know. As software merges into every product or service, extracurricular knowledge will be the mark of the developer with a career path--not just a job slinging code.
Some have tried to list the things we should understand. Computer science professors Eugene Charniak and Drew McDermott described an ideal background in one of their textbooks: not only programming, data structures and logic, but also calculus, physics and statistics.
With studies in logic, as well as in programming topics such as recursion, we're more apt to write code that does its job (or, at any rate, does no harm) even under nonideal conditions. As more of our code lives on networks whose configuration we can't control, or even know, this skill is becoming more important.
Exception handling in C++, Java and other languages aimed at network-based applications is not just a decoration. Master it and use it.
As more of our tasks involve sophisticated data retrieval, rather than brute-force data manipulation, data structures become more than bookkeeping. The right data structure becomes a huge head start in capturing the right knowledge and making it accessible with flexibility and speed.
In fact, the essence of object-oriented design and development is to think of code as the property of its data, as something that exists to enable that data to contribute to solving your problem. If you're just using objects to collect your procedures in little bags with name tags, you're not getting the picture. The data is in the driver's seat.
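To make the point concrete, here is a small, self-contained sketch contrasting a linear scan of a list with a keyed dictionary lookup for the same retrieval task; the data set and sizes are invented for illustration:

```python
import time

records = [(f"user{i}", i) for i in range(200_000)]   # (name, id) pairs
by_name = dict(records)                               # same data, indexed by name

def lookup_in_list(name):
    for candidate, user_id in records:                # O(n) scan
        if candidate == name:
            return user_id

def lookup_in_dict(name):
    return by_name[name]                              # O(1) average hash lookup

start = time.perf_counter()
lookup_in_list("user199999")
scan_seconds = time.perf_counter() - start

start = time.perf_counter()
lookup_in_dict("user199999")
hash_seconds = time.perf_counter() - start

print(f"list scan: {scan_seconds * 1e6:.0f} us, dict lookup: {hash_seconds * 1e6:.0f} us")
```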
So far, you're probably nodding your head in ready agreement. But calculus? Physics? Statistics? The first is more relevant than it may seem: As the language of things that change, calculus is the path to the best solution when a problem can be solved in many ways.
With software coming out of its back-office role as a necessary cost of doing business, becoming instead a major part of the value of products and services, we have to think like product designers about up-front and recurring costs. A decision to spend more up front, in order to reduce the costs that repeat with every transaction, is a trade-off that has to be stated with precision and determined with accuracy.
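A hedged illustration of stating that trade-off precisely (all figures invented): a few lines of arithmetic give the break-even transaction volume at which the heavier up-front investment starts to pay for itself.

```python
# Option A: cheap to build, costly per transaction. Option B: the reverse.
upfront_a, per_transaction_a = 10_000, 0.050   # illustrative figures only
upfront_b, per_transaction_b = 40_000, 0.020

# Volume at which total costs are equal.
break_even = (upfront_b - upfront_a) / (per_transaction_a - per_transaction_b)
print(f"Option B pays for itself after {break_even:,.0f} transactions")
```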
To deliver solutions that we can be proud of, we have to be part of the dialog that decides what problem we're going to solve. Though you may not see calculus up on the whiteboard during the product planning session, it's a way of thinking that will make your contributions more valuable even if you never mention the word in public. (In fact, a word of advice: Don't mention it in public.)
And physics? It's physics that limits the speed of our processors, the bandwidth of our networks, the reliability of our wireless links. If you don't understand the connections, you're writing code for an ideal world of massless bits that live on noise-free channels. The developer who's writing code for the real world will deliver a better product.
Statistics? Increasingly, software is part of a process of making decisions: calling the signals in the game, not just keeping score. If calculus is the language of change, statistics is the language of the errors and uncertainties that are larger in the presence of rapid change.
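A minimal sketch of that language in use, with invented latency measurements and only the standard library:

```python
import math
import statistics

samples_ms = [38.2, 41.5, 39.9, 44.1, 40.3, 42.7, 39.0, 43.6]   # invented measurements

mean = statistics.mean(samples_ms)
stdev = statistics.stdev(samples_ms)               # sample standard deviation
stderr = stdev / math.sqrt(len(samples_ms))        # uncertainty of the mean

print(f"latency = {mean:.1f} +/- {stderr:.1f} ms (one standard error)")
```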
Again, it's a matter of solving problems in a world of realities instead of a world defined by APIs. Do it and prosper, or leave it to others--and wind up working for them. | <urn:uuid:b85c126c-3ee0-40c1-ae4c-6dc9b3d2e193> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Data-Storage/Great-Code-Comes-from-Knowing-More | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00320-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949124 | 745 | 2.875 | 3 |
We use KEEP to display data contents and PEEK to look at data definition and contents.
PEEK is used to display the value of a variable declared in the DATA DIVISION.
syntax - P (PEEK) < VARIABLE NAME > and press Enter.
The screen automatically scrolls to the DATA DIVISION statement where the variable is defined, inserts a P in column 9, and displays the occurrence and value of that variable.
If you want to see a variable's value while the program is executing, use KEEP.
syntax - K (KEEP) < VARIABLE NAME > and press Enter.
It shows the variable name and its changing value in the top part of the screen while you are debugging. | <urn:uuid:21eb9cfc-9194-4b43-9835-1f85caa9c2ca> | CC-MAIN-2017-04 | http://ibmmainframes.com/about21096.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00228-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.78007 | 150 | 2.671875 | 3 |
After a successful liftoff of the SpaceX Dragon on Friday, the company's engineers were working on a glitch in the spacecraft's thruster system, delaying a Saturday rendezvous with the International Space Station.
NASA reported earlier Friday that three of the Dragon spacecraft's four thruster pods were not working. The problem is caused by a malfunctioning propellant valve.
At 3:30 p.m. ET, the space agency said that while a second thruster pod was brought online, the Dragon spacecraft won't be able to link up with the space station on Saturday as had been planned.
Engineers are working to get two still-malfunctioning thrusters up and running. The spacecraft needs at least three thrusters working to be able to make a series of burns needed to rendezvous with the space station.
The thruster pods enable maneuvering and altitude control.
Shortly after Dragon reached orbit, SpaceX founder and CEO Elon Musk reported on Twitter that there was a problem with the Dragon spacecraft's thruster pods, delaying the deployment of the craft's solar array, which powers it.
"Issue with Dragon thruster pods. System inhibiting three of four from initializing. About to command inhibit override," Musk tweeted. "Holding on solar array deployment until at least two thruster pods are active."
At approximately 11:50 a.m. ET, the Dragon's solar arrays were successfully deployed.
The SpaceX Falcon 9 rocket, carrying the unmanned Dragon capsule, lifted off from Cape Canaveral Air Force Station in Florida at 10:10 a.m. ET today. The spacecraft was scheduled to rendezvous with the space station on Saturday, ferrying 1,268 pounds of scientific experiments and supplies for the space station crew.
Using a robotic arm onboard the space station, two astronauts are set to grab hold of the Dragon capsule and attach it to the station. The capsule will stay attached for about three weeks, returning to Earth on March 25.
Today's launch is the second of 12 SpaceX flights contracted by NASA to resupply the space station. It also will be the third trip by a Dragon capsule to the orbiting laboratory.
After SpaceX made a demonstration flight in May 2012, it then launched the first official resupply mission last October, delivering 882 pounds of supplies.
Another successful commercial launch is an important milestone for NASA, which now depends on commercial flights since retiring the agency's fleet of space shuttles in the summer of 2011. For the foreseeable future, NASA will need commercial missions to ferry supplies, and possibly even astronauts, to the space station, while the space agency focuses on developing robotics and big engines in preparation for missions to the moon, asteroids and Mars.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin. | <urn:uuid:415528bb-305d-4064-89a8-4cb4a5011af2> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2495736/emerging-technology/update--nasa-says-glitch-delays-spacex-dragon-linkup-with-space-station.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00228-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950068 | 616 | 2.515625 | 3 |
Le Cozannet G., Bureau de Recherches Geologiques et Minieres
Garcin M., Bureau de Recherches Geologiques et Minieres
Petitjean L., Bureau de Recherches Geologiques et Minieres and University Paris
and 12 more authors
Journal of Coastal Research | Year: 2013
The climate component of sea level variation displays significant spatial variability, and it is now possible to reconstruct how sea level varied globally and regionally over the past half century. The fact that sea level rose faster than the global mean since 1950 in the central Pacific stimulated a study of decadal shoreline changes in this region. Here, the study of Yates et al. (2013) was extended to two additional atolls (17 islets): Tetiaroa and Tupai in the Society islands. Both atolls remain stable on the whole from 1955 to 2001/02, however with significant differences in shoreline changes among their islets and within the period. A modeling of waves generated by historical cyclonic events in French Polynesia since 1970 reveals consistency between major shoreline changes and cyclonic and seasonal waves. As in previous studies, this suggests that waves' actions are a dominant cause of shoreline dynamics on relatively undeveloped atolls, even if affected by higher sea level rise rates. In such regions, numerous joint analyses of shoreline changes and their potential causes may help to explain the relation between erosion and sea level rise. © Coastal Education & Research Foundation 2013. Source | <urn:uuid:03a66495-eeec-4f12-85b6-903816b114f5> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/center-antilles-guyane-1354482/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00440-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.858917 | 323 | 2.734375 | 3 |
If you live in an industrialized nation, chances are you own a computer and enjoy an inexpensive connection to the Internet. Or, if you lack such personal resources, you most likely have ready access to cycles and bandwidth nonetheless, courtesy of your local library, school, or corner coffee shop. In fact, according to Internet World Stats, 75 percent or more of the populations of first-world countries have regular access to a computer and the Internet. For residents of the United States, Australia, Japan, and Western Europe, connectivity is convenient to the point of seeming ubiquitous.
In stark contrast, less than 25 percent of the world's total population has access to the Internet. In many parts of the world, computers are scarce, and connectivity is even rarer. Indeed, some of these third-world nations have barely any online ingress at all. For example, only one percent of the people of Rwanda can connect to the Internet, and no more than 5 percent of all Africans have access.
Moreover, many countries have makeshift, fragile utility grids, rendering computers and uplinks useless during what are typically interminable outages. Worse, a natural disaster or civil emergency can cause widespread failure of infrastructure—ironically, just as the very same facilities are needed to communicate and coordinate with relief workers and local populations. Shipping containers full of recycled computers from the United States and other world powers do little good without electricity.
Unfortunately—and as with many other modern practices and technologies—the countries of the world are increasingly split into the "computing haves" and the "computing have-nots." Moore's Law has an unfortunate corollary: Innovation widens the digital divide.
Minding the gap
But fortunately many recognize the growing disparity and are taking action to bridge the gap. One Laptop Per Child (OLPC) aims "to create educational opportunities for the world's poorest children by providing each child with a rugged, low-cost, low-power, connected laptop with content and software designed for collaborative, joyful, self-empowered learning." Geekcorps "promotes economic growth in the developing world by sending highly skilled technology volunteers to teach communities how to use innovative and affordable information and communication technologies to solve development problems." The United Nations promotes awareness of the computing inequity each year during World Information Society Day on 17 May.
SolarNetOne is another novel initiative to span the divide. For a relatively small investment, SolarNetOne can deploy a turnkey Internet hotspot—conditioned, renewable power; computers; WiFi; and an uplink—anywhere the sun shines. According to Scott Johnson, founder and lead engineer of the project:
"[SolarNetOne was designed] to go places where there was no existing power infrastructure in place. It's excellent in places where the grid is unreliable or disabled."
Johnson says that he conceived SolarNetOne following a series of conversations with Dr. Vint Cerf, the Internet pioneer and Google's Chief Internet Evangelist. Enamored of Johnson's proposal, Cerf personally funded research and development of the system, and Johnson teamed with Bob Freling of the Solar Electric Light Fund (SELF) and Steve Huter of Network Startup Resource Center (NSRC) to develop a prototype solar-powered network. SELF designs and implements sustainable energy solutions to provide power for water pumping and drip irrigation, health clinics, schools, homes, street lights, microenterprise, and wireless Internet; NSRC assists countries and regions with construction, expansion, and maintenance of Internet infrastructure. See the Resources section for a developerWorks podcast interview with Johnson.
The first SolarNetOne kit was installed at Katsina State University in northern Nigeria in 2007. Since then, the project has refined and commercialized its offering and deployed additional systems in the field.
Uncrate, wire, and go
Each SolarNetOne kit is a self-powered communications network. Energy is produced from a solar array sized to each locale's latitude and predominant weather conditions. The generated power is stored in a substantial battery array, and circuit breakers and electronics protect the gear from overloads and other perturbations.
A basic kit includes five "seats," implemented as thin clients connected through a LAN to a central server. The networking gear also includes a long-range, omnidirectional WiFi access point, and a Session Initiation Protocol (SIP) device. Each kit also includes all the cables and wires required to assemble the system, so few additional materials are required for an installation.
Figure 1 shows the architecture of a SolarNetOne kit. Dashed lines trace power; each solid line represents a network connection.
Figure 1. The construction of the SolarNetOne system
Most of its components are off the shelf and can be replaced easily. For example, the server is an MSI PR210-SEED2 notebook with 2GB of RAM, an 8-GB solid-state hard disk set aside for the operating system, a DVD burner, and a 120-GB external hard disk. An external, extruded heat sink with dual-fan, forced-air cooling significantly lowers the server's operating temperature, ensuring stable operation even in equatorial areas.
The Ethernet hub is a Linksys SR224G. Each terminal is a diskless Sumotech ST166 with 128MB of RAM and a 15-inch VGA LCD. Power for the terminals and monitors is provided from a hybrid 12VDC Power over Ethernet switch through the existing Ethernet wiring, which eliminates the need for extra power drops. The terminals boot via Preboot eXecution Environment (PXE), mount files using Network File System (NFS), and use the X Windows System and the X Display Manager Control Protocol (XDMCP) for remote login to the server.
The diskless thin clients provide many advantages. There is less hardware to fail, and the terminals sip power. Each terminal consumes 4.5 watts while in use, and the LCD consumes an additional 8 watts. (A typical computer consumes 350 watts when in use.) All told, the solar panels for a five-seat implementation of SolarNetOne need to provide only about 600 watts to support eight hours of client terminal operation daily and continuous server operation.
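A rough sketch of the daily energy arithmetic behind those figures; the server draw and the hours of usable sunlight are assumptions for illustration, not published SolarNetOne specifications:

```python
TERMINAL_W, LCD_W, SEATS, HOURS_ON = 4.5, 8, 5, 8
SERVER_W = 20          # assumed average draw for the notebook server
PEAK_SUN_HOURS = 5     # assumed equatorial average

client_wh = SEATS * (TERMINAL_W + LCD_W) * HOURS_ON   # terminal + LCD load per day
server_wh = SERVER_W * 24                              # server runs continuously
daily_load_wh = client_wh + server_wh

harvest_wh = 600 * PEAK_SUN_HOURS                      # nominal yield of a ~600 W array

print(f"daily load ~{daily_load_wh:.0f} Wh")
print(f"nominal harvest ~{harvest_wh:.0f} Wh")
print(f"five 350 W PCs doing the same job: ~{350 * SEATS * HOURS_ON:.0f} Wh per day")
```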
The cost for a kit is US$15,000. Maintenance is inexpensive. The solar panels must be kept clean to work optimally. If the batteries are vented, staff must add distilled water to the cells monthly. If cared for properly, a SolarNetOne system should last 20 years or more, although the batteries will likely require replacement after a decade of use.
Powered by open source
SolarNetOne is based entirely on open source technology. The thin clients are powered by the Linux® Terminal Server Project (LTSP). Both the thin clients and the server run the Ubuntu Linux operating system (version 8.04); Apache, Exim, BIND, and OpenSSH provide Web, e-mail, DNS, and remote access, respectively, and Madwifi provides the software for the wireless access point. System software is easily kept up to date with Debian's own Aptitude utility.
Linux was chosen for a number of reasons. First, it is available at no cost—an ideal price tag when an entire SolarNetOne kit costs less than one subcompact car. Linux lowers the initial cost of each system and allows a site to scale without incurring incremental, per-seat software licensing fees. Because a single SolarNetOne server can support up to 50 thin clients, the savings can be substantial, even recouped to add more or improved hardware.
Linux was also chosen because much of the add-on software available for Linux is similarly free (as in beer). All the daemons mentioned above are available and usable without fee, and additional capabilities such as databases, compilers, and scientific libraries are also available at no cost. Thus, once installed, each SolarNetOne kit can be expanded to serve many constituencies and special interests. For example, the SolarNetOne system at Katsina State University provides compute cycles and wireless access to the entire campus. The terminal lab is rarely idle.
In Johnson's experience, Linux is ideal, because a relatively small system can run lots of software. Johnson observes, "Windows® is entirely too heavy to consider for a project like this." Johnson says remote system administration over low-bandwidth links is a breeze with Linux and the command-line shell.
Further, the freedoms provided by the GPL, the Apache License, and other, similar intellectual property grants allow unencumbered access to the source code of applications: Adaptations are not only possible, but variants are encouraged. SolarNetOne customizes Ubuntu Linux for its client and server hardware—specialization that's typically not possible or not financially feasible with a proprietary operating system.
Linux is also immune to most viruses and malware. Such resilience bolsters uptime and availability—one of the fundamental tenets of SolarNetOne.
Success so far
To date, five SolarNetOne systems have been deployed or are in the works, and interest has increased greatly because of early successes and a handful of positive media reports. Johnson is in serious talks with several groups for deployments of 10 or more seats.
The first SolarNetOne installation in Nigeria remains in continuous operation and is used for e-mail, word processing, and Internet surfing. Except for an initial customs hiccup that stranded the system's power hardware in Germany for several months, the system has suffered no major problems. It's widely regarded as the most stable and reliable system in the region.
In point of fact, corruption is often the most significant impediment to deployment. Often, shady customs officials or other government employees can cause problems. Johnson reports that at least one deployment was scuttled by efforts of proprietary interests seeking greater "developing world market share."
Johnson also recently added a salesperson to field inquiries and vet opportunities. The project is now selling the SolarNetOne system for profit and continues its work with non-governmental agencies and non-profits, such as SELF and the Internet Society, which subsidize purchases. Johnson says SolarNetOne remains unique and especially valuable: "I am unaware of anyone else offering a client-server multiuser system with an integrated, long-range WiFi access point and Internet service provider features, such as HTTP, SMTP, DNS, and more. No other project matches the scale of SolarNetOne."
Certainly, there are many opportunities to make an impact. To find prime locales, one must only glance at the gross domestic products of the world's countries and look at the bottom of the list. Johnson notes, "Africa, South America, and the Pacific islands are all excellent targets. We would like to connect the several billion people on the planet who live in areas without stable power or telecommunications to the Internet," says Johnson.
Questioned by e-mail, Dr. Cerf replied, "SolarNetOne and others like it offer progress towards bringing Internet access to the 77 percent of the world's population that doesn't have it yet."
Johnson says many challenges remain. He wants to continue to reduce power consumption of the system as a whole either to deliver more compute capacity for the same number of watts or to burn less watts for the current capacity. One option, for example, is the use of active-matrix, organic light-emitting diode (AMOLED) displays for the terminals, as AMOLEDs consume less power. Johnson also wants to simplify the components and combine more features into fewer chassis.
Even better, he continues, "We would like to 'push back the envelope' on low-power, sustainable computing and set the standard for power-efficient computing. [The world] must adopt clean power within the next two generations, and this project can lead the transformation to green computing. Most computers waste vast amounts of power, and most wasteful of all is the PC-based network architecture."
Johnson is ambitious, and the project's goals are as lofty as the heavens. "Oh, yes, when I become older and greyer, I would love to see SolarNetOne or its descendants used on other bodies in our solar system."
Now that would make Internet access universal.
If you would like to volunteer your time and expertise to SolarNetOne, or if you would like to donate to the project, contact the SolarNetOne team through its home page listed in the Resources below.
- In this podcast interview with Scott Johnson, founder of SolarNetOne, Johnson discusses the dual vision of a greener planet and remote access to the Internet, his formative conversations with internet pioneer Dr. Vint Cerf, and the critical role open source has played in the SolarNetOne project.
- Read more about the One Laptop Per Child (OLPC) initiative.
- For an overview of the One Laptop Per Child (OLPC) project and details on how to get started developing for it, read Application development for the OLPC laptop (developerWorks, December 2007)
- Learn more about Geekcorps and how you can contribute and volunteer.
- See pictures of SolarNetOne's first installation at Katsina State University, Nigeria.
- Learn more about the Solar Electric Light Fund (SELF), which "designs and implements sustainable energy solutions to improve the health, education, and economic well-being of rural communities in the developing world."
- The Network Startup Resource Center (NSRC) is an organization that "deploys networking technology in various projects throughout Asia/Pacific, Africa, Latin America and the Caribbean, the Middle East, and the New Independent States."
- Read about the goals and features of the Linux Terminal Server Project.
- Join the It's all about Green! group on My developerWorks to connect with other developers about energy-efficient computing. | <urn:uuid:4501acdc-22d4-433b-a802-6e8b61e02f30> | CC-MAIN-2017-04 | http://www.ibm.com/developerworks/linux/library/l-solarnetone/?ca=dth-grn | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00558-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927006 | 2,837 | 3.125 | 3 |
Schools should introduce computer games in class to update the IT curriculum and help drive young people into IT careers, according to experts.
The Council of Industry and Higher Education (CIHE) has released a report claiming the IT curriculum is holding back IT industry growth and should be changed to make it more relevant to school children. This includes teaching computing principles relating to computer games.
"The current curriculum concentrates on word processing and office productivity tools, but fails to educate students about the vital computing principles which underpin games and internet services," said the CIHE in its report.
Bill Mitchell, director of the BCS academy for computing and member of the CIHE report task force, said: "For many years the emphasis has been on training children how to use technology. But they also need to understand computing. Learning about Microsoft Word, Powerpoint and Excel are all important but if that's all children learn about, they will get bored. Computer games are incredibly motivational and embody the fundamental principles of computing."
There has been growing concern about IT education among industry experts, as the declining number of young people taking IT-related courses looks set to leave the supply of IT skills short of demand.
The Department of Education said it is reviewing the curriculum. "The new government has come in and feel [the curriculum] is restrictive and isn't teaching kids what they need for the real world," said a spokesman. | <urn:uuid:838fab3f-9f13-4858-9160-ce92d694c9b5> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/1280093759/Report-urges-national-curriculum-include-computer-games | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00402-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.96874 | 308 | 3.1875 | 3 |
SSL (Secure Sockets Layer) certificates are digital credentials used to encrypt and protect online communications. Many web services have added support for SSL encryption, and with recent revelations of large-scale surveillance and bulk data collection by intelligence agencies, privacy-conscious web users have begun to adopt the technology.
But they aren’t the only ones. According to the Swiss security blog Abuse.ch, cybercriminals also use SSL certificates to encrypt traffic between malware-infected computers and command-and-control servers in an attempt to bypass intrusion prevention and detection systems.
An article in IT World, “SSL Blacklist project exposes certificates used by malware” details a plan by Abuse.ch’s botnet tracking initiative to track and create a list of SSL certificates used in botnet and malware operations.
Abuse.ch has been tracking command-and-control servers for malware threats like Zeus, SpyEye, Palevo and Feodo for several years and lists the IP addresses and domain names associated with those servers in order to help network administrators identify infected computers that attempt to communicate with them. In similar fashion, the outfit has launched a project to list SSL certificates used by some malware programs to hide their communications. “The SSL Blacklist” will list digital certificates — identified by their SHA1 cryptographic fingerprints — that are used by botnets.
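For readers who want to compare a live server against such a list, a certificate's SHA1 fingerprint can be computed with a few lines of standard-library Python; the host below is just an example and the call requires network access:

```python
import hashlib
import ssl

host, port = "example.com", 443                 # any TLS-enabled host
pem = ssl.get_server_certificate((host, port))  # fetch the server certificate (PEM text)
der = ssl.PEM_cert_to_DER_cert(pem)             # fingerprints are taken over the DER bytes
fingerprint = hashlib.sha1(der).hexdigest()

print(f"SHA1 fingerprint of {host}: {fingerprint}")
```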
So far the list contains "127 certificates including some that cybercriminals generated themselves instead of buying from a trusted certificate authority. The majority of certificates are used in the command-and-control operations of KINS, Shylock and Vawtrak, three distinct malware threats that target online banking users," according to IT World. This list will undoubtedly grow and prove to be a valuable tool in identifying cyber threats. | <urn:uuid:ea2d950b-08a6-4bbe-b587-8a417910e5dc> | CC-MAIN-2017-04 | http://www.bsminfo.com/doc/ssl-certificates-used-in-cyber-crime-published-in-blacklist-0001 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00036-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923307 | 358 | 3.28125 | 3 |
Can you rely on the website address to tell if you’re on a phishing site? Not anymore, according to some websites.
It seems that the Internet Corporation for Assigned Names and Numbers (ICANN) has recently allowed non-Latin domain names to be registered. This is in an effort to encourage internet content build-up from other countries.
Reacting to this news, some creative authors have found a way to display common website addresses using a combination of Cyrillic and English letters. For example, a string of Russian Cyrillic characters can be rendered to look exactly like "paypal". Check out the Paypal example here. This IDN homograph phishing attack is nothing new, just a lot easier according to some authors.
Some potential issues have been addressed: depending what type of browser you use, you’ll likely get a warning; IDN implementations won’t allow mixed-script URLs so a nefarious registrant can’t mash up a domain name using multiple scripts. But one can’t help wondering what happens on older browsers, or mobile browsers?
No matter what the case, it’s just become a bit more unreliable to depend on the domain name displayed in the browser address bar. It’s too bad because that’s usually the best way to train non-technical users to be sure they’re on the right website. Of course another way to rely on a website is through the SSL information. But try explaining to your great aunt that she needs to click on the little lock icon at the bottom right of her browser. And with the proliferation of certificates that only validate domain names (DV certificates), many SSL sessions just don’t offer the reliability.
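One way to spot such lookalikes programmatically is to inspect every character in a host name: anything outside plain ASCII deserves a closer look. A minimal sketch, where the sample string mixes Latin and Cyrillic purely for illustration:

```python
import unicodedata

def suspicious_characters(hostname: str):
    """Return (position, character, Unicode name) for every non-ASCII character."""
    return [(i, ch, unicodedata.name(ch, "UNKNOWN"))
            for i, ch in enumerate(hostname) if ord(ch) > 127]

# Latin "paypal" with both a's replaced by Cyrillic U+0430 -- visually identical in many fonts.
lookalike = "p\u0430yp\u0430l.com"

for position, char, name in suspicious_characters(lookalike):
    print(f"position {position}: {char!r} is {name}")
# Flags CYRILLIC SMALL LETTER A twice, even though the string looks like paypal.com.
```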
Browser manufacturers and Certificate Authorities have taken the first step towards making it easier by introducing Extended Validation (EV) certificates. The standardized EV Guidelines specifically mention that:
The CA MUST visually compare any Domain Names with mixed character sets with known high risk domains. If a similarity is found, then the EV Certificate Request MUST be flagged as High Risk. The CA must perform reasonably appropriate additional authentication and verification to be certain beyond reasonable doubt that the Applicant and the target in question are the same organization.
In other words, this problem wouldn’t happen if sites were protected by EV certificates. EV guidelines also dictate that certificate providers validate the company name that owns the website as well as the true website name, and this information is displayed in the “chrome” of the browser, such as in the menu bar. Most browsers provide visual cues for EV certificates. Usually the address or address bar turns green. But is it enough? Would your great aunt know to look for green visual cues and the name of the company? Perhaps. Perhaps not. Perhaps the next step is for browsers to provide more dramatic visual cues. Like “You are about to send information securely to <Insert verified company name here>”. Let’s hope the browser vendors can stay ahead of the criminals on this one. | <urn:uuid:4ea9f66b-8dfc-4859-8304-88631e531540> | CC-MAIN-2017-04 | https://www.entrust.com/is-it-paypal-or-is-it-paypal/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00430-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.905808 | 625 | 2.734375 | 3 |
OTTAWA, ONTARIO--(Marketwired - May 22, 2014) - Earlier today, The Royal Canadian Legion acknowledged Aboriginal Awareness Week as well as the many Aboriginal cultures in Canada, including First Nations, the Inuit and the Métis.
"This week, designed to honour the Canadian Mosaic is a welcomed part of our Canadian Heritage," says Dominion President of The Royal Canadian Legion, Gordon Moore. "We are pleased not only to acknowledge but also participate in this event and recognize many Aboriginals who served and continue to do so in the Canadian Armed Forces and the Royal Canadian Mounted Police," adds Moore.
First Nations, the Inuit and the Métis have an important military history. For example, during the First World War, more than 4,000 Aboriginal Canadians volunteered to join the military. During the Second World War, more than 3,000 Aboriginal Canadians served in our military overseas. A few years later, hundreds volunteered to help the United Nations defend South Korea during the Korean War. This proud history of support to defend this country continues to this day. Likewise, in the early days of the Legion, many Aboriginal Veterans were some of the first members to join this organization and play a key role in the direction the Legion would take in support of all Veterans. Today, many Aboriginals are members of the Legion where they are still engaged in shaping the future of Canada's largest Veterans' not-for-profit organization.
The Legion participated in the AAW by having a kiosk on the main concourse in the MGen. George R. Pearkes Building, 101 Colonel By Drive, Ottawa today. This event is part of a larger Aboriginal Affairs Secretariat (AAS) initiative, in conjunction with Parks Canada, the Department of National Defence and the Canadian Armed Forces to recognize Aboriginals in the public service - including military and Royal Canadian Mounted Police service.
ABOUT THE LEGION
Established in 1926, the Legion is the largest Veterans' and community support organization in Canada with more than 320,000 members. Its mission is to serve all Veterans including serving Canadian Armed Forces and Royal Canadian Mounted Police members as well as their families, to promote Remembrance and to serve our communities and our country.
The Legion's Service Bureau Network provides assistance and representation to all Veterans regarding their disability claims, benefits and services from Veterans Affairs Canada and the Veterans Review and Appeal Board. In communities across Canada it is the Legion that perpetuates Remembrance through the Poppy Campaign and Remembrance Day ceremonies. With more than 1,460 branches, the Legion supports programs for seniors, Veterans' housing, outreach and visitation, youth leadership, education, sports, Cadets, Guides and Scouts.
We Will Remember Them. | <urn:uuid:efd93145-bdc1-49c3-8806-bf6bcf68c79e> | CC-MAIN-2017-04 | http://www.marketwired.com/press-release/legion-supports-aboriginal-awareness-week-1913057.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00338-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960576 | 558 | 2.546875 | 3 |
14 Amazing DARPA Technologies On Tap
Go inside the labs of the Defense Advanced Research Projects Agency for a look at some of the most intriguing technologies they're developing in computing, electronics, communications, and more.
DARPA has already developed radar that can penetrate trees and other foliage. Its Foliage Penetrating Ground Moving-Target Indicator Radar Exploitation and Planning project aims to take that work further by enabling radar to, among other things, estimate the size of groups of enemy soldiers walking on foot, and to distinguish between people exiting vehicles and, say, animals or wind-blown foliage. Image credit: DARPA
6 of 14 | <urn:uuid:77e48f82-342c-4c30-8ead-80ef30584051> | CC-MAIN-2017-04 | http://www.darkreading.com/risk-management/14-amazing-darpa-technologies-on-tap/d/d-id/1106551?page_number=6 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00118-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.812755 | 213 | 2.671875 | 3 |
Master Data Management
Master Data Management (MDM) is the collective application of governance, business processes, policies, standards and tools to facilitate consistency in data definition.
Master Data Management (MDM) refers to the process of creating and managing data that an organization must have as a single master copy, called the master data. Usually, master data can include customers, vendors, employees, and products, but its scope can differ between industries and even between companies within the same industry. MDM is important because it offers the enterprise a single version of the truth. Without clearly defined master data, the enterprise runs the risk of having multiple copies of data that are inconsistent with one another.
MDM has the objective of providing processes for collecting, aggregating, matching, consolidating, quality-assuring, persisting and distributing such data throughout an organization to ensure consistency and control in the ongoing maintenance and applied use of this data.
Traditionally, Master Data could be code tables, a "master file", reference data, or dimensions. Master Data (when established) should feed downstream data systems like Data Marts, data applications, etc.
MDM is typically more important in larger organizations. In fact, the bigger the organization, the more important the discipline of MDM is, because a bigger organization means that there are more disparate systems within the company, and the difficulty of providing a single source of truth, as well as the benefit of having master data, grows with each additional data source. A particularly big challenge to maintaining master data occurs when there is a merger/acquisition. Each of the organizations will have its own master data, and merging the two sets of data will be challenging. Let's take a look at the customer files: The two companies will likely have different unique identifiers for each customer. Addresses and phone numbers may not match. One may have a person's maiden name and the other the current last name. One may have a nickname (such as "Bill") and the other may have the full name (such as "William"). All these contribute to the difficulty of creating and maintaining a single set of master data.
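To make the matching problem concrete, here is a minimal sketch of the kind of comparison an MDM merge might perform. The field names, nickname map and similarity threshold are illustrative assumptions only; real MDM tools apply far richer matching and survivorship rules.

```python
from difflib import SequenceMatcher

# Hypothetical customer records from the two merged companies' systems.
system_a = {"id": "A-1001", "name": "William Smith", "phone": "555-0199"}
system_b = {"id": "B-37",   "name": "Bill Smith",    "phone": "(555) 0199"}

NICKNAMES = {"bill": "william", "bob": "robert", "liz": "elizabeth"}

def normalize_name(name):
    # Lower-case the name and expand common nicknames before comparing.
    return " ".join(NICKNAMES.get(p, p) for p in name.lower().split())

def normalize_phone(phone):
    # Strip punctuation so differently formatted numbers can be compared.
    return "".join(ch for ch in phone if ch.isdigit())

def likely_same_customer(rec1, rec2, threshold=0.9):
    name_score = SequenceMatcher(
        None, normalize_name(rec1["name"]), normalize_name(rec2["name"])
    ).ratio()
    phones_match = normalize_phone(rec1["phone"]) == normalize_phone(rec2["phone"])
    return phones_match and name_score >= threshold

print(likely_same_customer(system_a, system_b))  # True: same person, two systems
```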
At the heart of the master data management program is the definition of the master data. Therefore, it is essential that we identify who is responsible for defining and enforcing the definition. Due to the importance of master data, a dedicated person or team should be appointed. At the minimum, a data steward should be identified. The responsible party can also be a group -- such as a data governance committee or a data governance council.
The key driver for MDM is to resolve inconsistencies in data coming from multiple systems and silos.
Master Data Management vs. Data Warehousing
Based on the discussions so far, it seems like Master Data Management and Data Warehousing have a lot in common. For example, the effort of data transformation and cleansing is very similar to an ETL process in data warehousing, and in fact they can use the same ETL tools. In the real world, it is not uncommon to see MDM and data warehousing fall into the same project. On the other hand, it is important to call out the main differences between the two:
Common topics on MDM include:
- MDM Benefits
- MDM Selection Criteria
- MDM Features
- MDM Maturity
- Master Data Categories
- Master Data Management Solutions
- Master Data Management Technology | <urn:uuid:f2e02063-08af-4560-a78a-e3b99cd02f3d> | CC-MAIN-2017-04 | http://wiki.glitchdata.com/index.php?title=Master_Data_Management | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00055-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931924 | 695 | 2.8125 | 3 |
It was a fierce battle -- probably even more fierce than the fight awaiting the two plastic dinosaurs above -- that ended in the deaths of both combatants 67 million years ago in an area of Montana called Hell Creek. While these deadly duels between prehistoric carnivores and herbivores were common, scientists say the discovery in Montana is only the second time fossilized remains have been found of dinosaurs in combat. But this particular find may have even more significance, as the Irish Independent reports:
At first glance (the meat eater) looked like a smaller version of Tyrannosaurus rex, the apex predator of the Cretaceous era, but there were key differences, in particular its graceful head and large forelimbs.
Scientists believe the fossil provides clear evidence that T. rex shared its habitat with a smaller cousin, Nanotyrannus, in much the same way lions and cheetahs hunt together on the African savannah. The discovery could end the debate that has raged between experts who believe in Nanotyrannus, and others who say the creature's fragmented fossils belong to T. rex's juvenile offspring.
The fossils (there's a picture in the link above) show the carnivore, measuring 20 to 24 feet in length, on the back of the 18-foot Triceratops, its teeth embedded in its prey's neck. Sadly for the meat-eater, its head was bashed in by the strong tail of its intended meal. The sadness may not have ended there, however. Scientists may not be able to examine the huge rock because it is privately owned and scheduled for auction in New York in November, when it could be sold for up to $9 million. According to the Irish Times, the "new species cannot be brought into the scientific literature unless it is brought into public ownership and made available for scientific research." Let's hope the winner of the auction has a healthy respect for science. | <urn:uuid:3d1341a7-22dc-446a-aecc-c0cfbe9d69a1> | CC-MAIN-2017-04 | http://www.itworld.com/article/2704013/hardware/deadly-dinosaur-duel-could-resolve-t--rex-debate.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00083-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965178 | 395 | 3 | 3 |
According to a blog posted by Guy Steele on Oracle’s website, the company will begin winding down the work on Fortress, an experimental open-source programming language designed to make HPC application development more productive. Fortress was originally developed by Sun Microsystems, where Steele led the language R&D effort. The work was partially funded under DARPA’s High Productivity Computing Systems (HPCS) project, which was designed to bring productive multi-petaflop systems to the supercomputing community.
Sun never made the final cut in the HPCS program, but the Fortress work was retained as an area of research at a time when the company still had serious supercomputing aspirations. When Oracle acquired Sun, it inherited the language technology, but not the enthusiasm to pursue the HPC market.
Fortress runs on the Java Virtual Machine (JVM) and could, at least theoretically, be applied to less compute-intensive domains. But since the language syntax and design are focused primarily on highly parallel, math-heavy code, it was likely deemed expendable by the higher-ups at Oracle, who couldn’t rationalize the continued research for its database-focused business.
In his blog post, Steele implies that they had essentially reached the end of the line, in a technical sense, with the technology:
[O]ver the last few years, as we have focused on implementing a compiler targeted to the Java Virtual Machine, we encountered some severe technical challenges having to do with the mismatch between the (rather ambitious) Fortress type system and a virtual machine not designed to support it (that would be every currently available VM, not just JVM). In addressing these challenges, we learned a lot about the implications of the Fortress type system for the implementation of symmetric multimethod dispatch, and have concluded that we are now unlikely to learn more (in a research sense) from completing the implementation of Fortress for JVM.
That leaves just Cray’s Chapel and IBM’s X10 as the surviving members of the HPCS language program. Like Fortress, both languages are being developed in the open-source model. While neither Chapel nor X10 has reached anywhere near mainstream acceptance, both efforts are still active.
Since Fortress was developed as an open-source technology, according to Steele, it will “remain available for the foreseeable future to those who wish to work on it.” He also says that they’ll spend the next few months polishing up the code and language spec and penning some academic papers before shutting down the research effort at Oracle.
You can read the entire Fortress obituary here. | <urn:uuid:278a95ca-d6cc-43b5-8dc9-ef6125da62b0> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/07/24/oracle_ditches_supercomputing_language_project/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00569-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965964 | 548 | 2.546875 | 3 |
A passive optical network (PON) based FTTH access network is a point-to-multipoint, fiber to the premises network architecture in which passive optical splitters are used to enable a single optical fiber to serve multiple premises. The optical splitter can be placed in different locations of the PON based FTTH network, which involves using centralized (single-stage) or distributed (multi-stage) splitting configurations in the distribution portion of the network. In fact, both methods have their own advantages and disadvantages. Which one should you deploy? A comparison between centralized splitting and distributed splitting is provided in this article.
Centralized Splitting Overview
A centralized splitting approach generally uses a combined split ratio of 1:64 (with a 1:2 splitter in the central office, and a 1:32 in a cabinet). These single-stage splitters can be placed at several locations in the network or housed at a central location. But in most cases, the centralized splitters are placed in the outside plant (OSP) to reduce the amount of overall fiber required. The optical line terminal (OLT) active port in the central office (CO) will be connected/spliced to a fiber leaving the CO. This fiber passes through different closures to reach the input port of the splitter, normally placed in a cabinet. The output port of this splitter goes to the distribution network, reaching the homes of potential customers through different closures and indoor/outdoor terminal boxes.
Distributed Splitting Overview
Unlike centralized splitting, a distributed splitting approach has no splitters in the central office. The OLT port is connected/spliced directly to an outside plant fiber. A first level of splitting (1:4 or 1:8) is installed in a closure, not far from the central office. The input of this first level splitter is connected with the OLT fiber coming from the central office. A second level of splitters (1:16 or 1:8) resides in terminal boxes, very close to the customer premises (each splitter covering 8 to 16 homes). The inputs of these splitters are the fibers coming from the outputs of the first level splitters described above.
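As a rough illustration of what these split ratios mean optically, the sketch below estimates the splitting loss of the two arrangements just described. The ideal splitting loss of a 1:N splitter is 10*log10(N) dB; the 1 dB of excess loss per splitter is only an assumed figure for illustration.

```python
import math

def splitter_loss_db(ratio, excess_db=1.0):
    """Ideal splitting loss 10*log10(N) plus an assumed per-splitter excess loss."""
    return 10 * math.log10(ratio) + excess_db

# Centralized: 1:2 in the central office plus 1:32 in a cabinet (1:64 combined).
centralized = splitter_loss_db(2) + splitter_loss_db(32)

# Distributed: 1:4 in a closure near the CO plus 1:16 close to the premises.
distributed = splitter_loss_db(4) + splitter_loss_db(16)

print(f"Centralized 1:2 + 1:32 splitting loss: about {centralized:.1f} dB")
print(f"Distributed 1:4 + 1:16 splitting loss: about {distributed:.1f} dB")
# Both reach 1:64 overall, so the splitting loss is nearly identical; the real
# differences lie in fiber count, flexibility and maintenance, as discussed below.
```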
Centralized Splitting vs Distributed Splitting
From the descriptions of centralized and distributed splitting above, we can see that with centralized splitting all splitters are located in one closure, which maximizes OLT utilization and provides a single point of access for troubleshooting. But since the splitter outputs must be terminated to customers either through individual splices or connectors, the cost of distribution cables will be very high. With distributed splitting, the splitters are located in two or more different closures, which minimizes the amount of fiber that needs to be deployed to provide service. But it may create inefficient use of OLT PON ports and may increase the testing and turn-up time for customers. The advantages and disadvantages of centralized and distributed splitting are summarized in the table below:
|Centralized splitting: advantages|Centralized splitting: disadvantages|
|---|---|
|OLT utilization (pay as you grow)|More distribution fiber|
|Future proof & easy to change technology|Larger network elements in the OSP|
|Monitoring & maintenance|Possibly additional infrastructure|

|Distributed splitting: advantages|Distributed splitting: disadvantages|
|---|---|
|Lower capital expense for customer connection|More actives and more splitters|
|Reduces splitter cabinet requirements|Less flexible network|
|Flexibility in split ratios in serving area|Fewer monitoring & maintenance capabilities|
Before deciding which splitting method to use in a PON based FTTH network, always consider every unique aspect of your network case. Since centralized splitting and distributed splitting both have their pros and cons, the best architecture is the one that meets the requirements and expectations of the provider by reducing capital expense, optimizing long-term operational expense, and making a future-proof network that can cope with new technologies without dramatic changes. FS.COM provides a full series of 1xN or 2xN FBT and PLC splitters which can divide a single/dual optical input(s) into multiple optical outputs uniformly, and offer superior optical performance, high stability and high reliability to meet various application requirements. | <urn:uuid:037941d4-39c7-4c6b-921b-869ca6b62756> | CC-MAIN-2017-04 | http://www.fs.com/blog/centralized-splitting-vs-distributed-splitting-in-pon-based-ftth-networks.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00019-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.919874 | 841 | 2.515625 | 3 |
What is encryption?
Encryption is the practice of encoding communication in such a way that only an authorized party can read it.
That’s pretty much the most simplified explanation we can give.
So why do you need it? Well, there are really two answers to that. The first, and less satisfying, answer is that it will become a minimum security requirement in 2017 and your website will begin to be penalized if you don’t have it.
The second answer, and the one we’re going to spend a little more time on is: security.
We’ll delve into both of those answers in just a minute, but let’s start at the beginning – with the first question we posed – what is encryption?
To understand what encryption is you have to back up a little bit to the way the internet is constructed in general. The internet is built on HTTP, Hypertext Transfer Protocol. When you visit a website, what’s really happening is your browser is making a connection with a web server. The two exchange bits of information, and the browser takes that information and constructs a visualized website. This is done via HTTP.
The problem with HTTP is it’s not secure, which means that anyone who knows what they’re doing can essentially see – for lack of a better term – all the communication between your computer and the server. That means that any information that is exchanged can be intercepted and either stolen or manipulated by a third party.
Encryption prevents that from happening by securing your connection via the SSL/TLS protocol. When encryption is active, it basically scrambles the communication between your computer and the server so that only the other party can unscramble it and read it. To any third party that’s listening in on the connection, the communication is completely unintelligible.
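If you want to see that scrambling in action, here is a tiny sketch using the third-party Python cryptography package (its Fernet recipe performs symmetric encryption). To be clear, this is not what SSL/TLS does on the wire; it is just a hands-on way to watch data become unreadable without the key.

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the shared secret both parties must hold
cipher = Fernet(key)

token = cipher.encrypt(b"card number 4111 1111 1111 1111")
print(token)                     # gibberish to anyone listening without the key

print(cipher.decrypt(token))     # only the key holder recovers the plaintext
```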
So how does SSL work? Well, it starts when you purchase an SSL Certificate and install it on your web server. If you don’t have SSL, you don’t have encryption. Once the certificate is installed, the server needs to be configured so that the correct pages are served over HTTPS, which is the secure version of HTTP.
A quick aside, many people mistakenly think you only need to configure the pages that collect personal information to be served over HTTPS. While that’s certainly a method that has existed for a while, it actually makes more sense just to configure the entire site to be served over HTTPS at this point.
Now, when a user visits a site with SSL installed and properly configured, the user’s web browser is going to see that the site has SSL and begin a verification process known as the SSL handshake. We won’t get too granular here, but there are a few noteworthy things about the SSL handshake.
Namely the speed with which it occurs. A browser will download the SSL Certificate, check its validity, ensure that the server is the rightful owner of the Certificate’s public key, use that public key to encrypt a small bit of communication, wait for the server to use its private key to decrypt the information and send it back, and then finally negotiate the terms of an encrypted connection with the server—all in just a matter of milliseconds!
That’s one hell of a technological feat.
Once the server and the browser have negotiated an encrypted connection they create and exchange symmetric session keys. The two parties can now encrypt and decrypt the communication they exchange without fear of a third party being able to look at it. At the point the session ends, the keys are discarded. New session keys will be exchanged at the start of a new session.
This is the shorthand explanation of how SSL encryption works.
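For the curious, here is a short sketch that uses Python's standard ssl and socket modules to open an encrypted connection and print what was negotiated; the hostname example.com is just a placeholder.

```python
import socket
import ssl

hostname = "example.com"                  # placeholder target site
context = ssl.create_default_context()    # verifies the certificate chain and hostname

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("Protocol:   ", tls.version())   # e.g. TLSv1.2 or TLSv1.3
        print("Cipher:     ", tls.cipher())    # the negotiated cipher suite
        cert = tls.getpeercert()
        print("Issued to:  ", dict(item[0] for item in cert["subject"]))
        print("Valid until:", cert["notAfter"])
```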
Now, it should be pretty obvious why you need encryption if you’ve been following along up until this point. The internet is a dangerous place – as unfortunate as it is to have to say that – there are hackers and cybercriminals looking to take advantage of people at every turn.
If you’re running a website that is collecting personal information, financial information, even login information and passwords—you need to keep that information safe for your users. As we mentioned, the default communication protocol, HTTP, is not secure. Anyone who knows how can readily see all the communication taking place across an HTTP connection.
That alone should be enough to convince you.
But if it’s not, here’s a couple of other things to consider. First of all, if you’re running a business you may think that only the biggest companies have to worry about cybercrime. That’s absolutely false. According to Symantec, 74% of small and medium-sized businesses have been targeted by a cyber-attack in just the last 12 months. And even more terrifying, 60% of the small businesses that fall victim to a cyber-attack go out of business within six months.
Security is important.
Second of all, even if you’re not a business or you’re not collecting what you consider to be vital information from visitors—if your users can login you absolutely need encryption. It doesn’t matter if you’re not selling anything, if users can login—you have to encrypt. The internet is fairly unique in that users can only do so much to protect themselves, a lot of the onus for protecting people falls on the websites they visit. You definitely don’t want to gain a reputation as a site that doesn’t protect its visitors. And beyond that, people’s password hygiene, in general, is atrocious. Meaning, people reuse the same passwords across multiple accounts and seldom change them. A breach on your site might seem innocuous, but if cybercriminals can use those stolen passwords to access other, more important accounts—your users are going to blame you.
And finally, even if nobody is logging in on your website—you still are. That’s right, your website likely has a back-end login. How else are you updating it? Shouldn’t that login be secure? If that information gets compromised so does your site. Can you really afford that?
Even if nothing we’ve said in the past 1,000 words has convinced you, this will: SSL is about to become mandatory. No, Google isn’t going to break into your house and put a gun to your head or anything. But then, Google doesn’t have to. It will just put you out of business.
We’re not kidding.
Over the past couple of years, the browser community – Google, Mozilla, Microsoft, Apple – has been politely suggesting encryption. Now it’s done being polite. In 2017, SSL becomes a requirement.
You see, the browsers are in a unique position to influence the internet. You can’t access the web without a browser, can you? And it goes well beyond that. Browsers can tell users that sites are dangerous. They can block sites entirely. Many browsers are owned by companies that also own search engines and we don’t need to tell you how influential SEO and search rankings are, do we?
Well, the browsers are acutely aware of their positioning in the market and they are more than happy to leverage that position to affect change across the internet. That’s what they’re doing here.
Already Google has been giving an SEO boost to sites with encryption. Right now that boost is worth about 5%, but it can go up at any moment. The browsers have also decided to withhold premium features from unencrypted websites. And then newer advances like the faster, safer HTTP/2 protocol are only for sites with SSL too.
But those are subtle compared to what’s about to happen. In fact, it’s already begun. There are visual indicators that appear in the address bar of every browser. Right now unencrypted sites get neutral indicators while encrypted sites get positive ones. But soon, unencrypted sites will begin getting negative indicators, and the words “not secure” will appear next to their URL.
After that, the browsers will begin issuing warnings to users before visiting unencrypted sites. And that’s where the real pain will begin. Because the majority of internet users will not continue to a site when prompted with a warning about it not being safe.
That’s going to have a huge impact on any site—especially business sites.
Even if you don’t feel like you need encryption for security, you now need it just to stay competitive. Hey, don’t blame us—blame the browsers for trying to make the internet a safer place. The nerve.
So there you have it. Encryption is a practice wherein information is encoded in such a way that only an authorized party can read it. It’s really an integral part of any web security strategy. And now, it’s also a basic requirement on the internet. | <urn:uuid:bfe924db-a0a5-4786-b36b-dd2be66e091f> | CC-MAIN-2017-04 | https://comodosslstore.com/blog/what-is-encryption-and-why-do-you-need-it.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00137-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941084 | 1,879 | 3.3125 | 3 |
NASA has married augmented reality to heads-up displays and come up with a system to let pilots see through fog, glare, darkness and other conditions that contribute to the single most common factor in airline crashes.
Global positioning systems tell pilots exactly where they are, but don't have any information about rolling terrain, buildings, mountains and other obstacles that are as deadly as they are hard to see in challenging conditions.
"If pilots are not familiar with the airport, they have to stop and pull out maps," said Trey Arthur, an electronics engineer at NASA Langley Research Center in Virginia. "This display, in the new world where these routes are going to be digital, can tell them what taxiway they're on, where they need to go, where they're headed, and how well they're tracking the runway's center line."
The augmented-reality headset fits over one eye and displays what looks like the actual runway, taxiways and other directional data while the plane is on the ground, as well as the runway centerline and terrain details during the approach to landing.
The headset is a new product in itself, but doesn’t use any GPS or terrain data that aren't already available, according to NASA. The system works in ways similar to the heads-up displays fighter pilots use, but incorporates data from high-precision efforts to map the Earth's surface such as the Shuttle Radar Topography Mission in 2000.
It has been tested in a unique NASA plane with two cockpits – one normal, with windows, the other totally enclosed so the "blind" pilot has to fly using only data from the augmented reality system.
The setup helped keep test pilots from crashing, but was primarily designed to verify the quality of NASA's terrain and location data, the accuracy of the augmented reality display and the ability of pilots to fly using only a virtual picture of the real world.
NASA has been working on Synthetic Vision systems for civilian use since 1993. This headset, which is designed for airliners, could also be adapted to help drivers in cars, though gathering enough detailed terrain data to make it practical for cars is a huge challenge.
The stakes of landing a plane, not to mention the downside of failing to notice the bit of terrain into which you're about to fly, are incalculably higher. And the number of locations needed to make the device useful for pilots, and the level of detail needed about airports, is far lower than what would be required to provide the same level of detail in a form useful for drivers on any of the tens of thousands of miles of roads in the U.S.
If the headset ever makes it into the consumer market, it won't be for quite a while.
The headset does not yet even have an official name. NASA is still looking for commercial business partners who can bring the headset to market aimed specifically at pilots. The auto version will have to wait until later.
| <urn:uuid:4c8e86ea-21ba-4517-8109-023121ef5ec5> | CC-MAIN-2017-04 | http://www.itworld.com/article/2730782/consumer-tech-science/nasa-set-to-market-headset-that-lets-pilots--and-drivers---see-through-fog.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00441-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958685 | 659 | 3.296875 | 3 |
It's been four years since Apple introduced Siri, putting a mini-secretary into every iPhone owners' pocket.
I know it's made my day-to-day life easier — providing an easy way to reply to an SMS text message while I'm on the move, or letting me set the timer when my fingers are mucky from cooking in the kitchen. But the technology has also been at the heart of a number of security problems, most recently providing a sneaky way to bypass the iOS 9 lock screen.
Well now, French security researchers appear to have uncovered a whole different way in which Siri (and, to be fair, Google Now on Android phones) could potentially be exploited by hackers to hijack control of your smartphone, without you ever realising that any funny business was afoot.
Now, it's important to stress that this is a *potential* problem. Although quite ingenious, when you hear the details of just what the researchers had to do, and how the iPhone has to be prepared before a successful attack can proceed, you will probably decide that this particular threat is not one to lose much sleep about.
But that doesn't make it any less fascinating.
In a technical paper published by IEEE, a team from the French government's Network and Information Security Agency (ANSSI) claims to have discovered "a new silent remote voice command injection technique," which could allow them to control Siri via radio waves from a distance of up to 16 feet, if — and this is crucial — a pair of headphones with an in-built microphone (such as the standard earbuds shipped by Apple) are plugged into the iPhone.
Armed with an amplifier, laptop, antenna and Universal Software Radio Peripheral (USRP) radio, attackers could apparently send surreptitious signals that the headphones' cord would pick up like an antenna and convert into the electrical signals understood as speech by the iDevice's operating system.
Although somewhat unlikely and impractical, it is quite ingenious.
No words have been spoken, and yet Siri has received a command.
Perhaps the most plausible abuse of the French researchers' discovery would be to order iOS to visit a particular website, hosting a malicious exploit that could infect the phone and install malware. Alternatively, unauthorised messages could be sent from the compromised device.
In a demonstration video, the researchers showed how they were able to transmit silent commands to an Android Smartphone, forcing it to visit the ANSSI website.
However, there are additional limitations for the attack to work against iPhones. Not only does Siri need to be enabled with Voice Activation turned on to allow "Hey Siri", and headphones with a built-in microphone plugged into the targeted device, but the hardware required to perform the attack is also not insubstantial.
Indeed, the researchers say that in its smallest form (which can fit inside a backpack) the range is limited to about 6.5 feet. A more powerful version that would require larger batteries could only fit practically inside a car or van, giving a range of 16 feet or more.
Regardless, as Wired reports, the researchers believe that the vulnerability could create a real security headache:
"The possibility of inducing parasitic signals on the audio front-end of voice-command-capable devices could raise critical security impacts," the two French researchers, José Lopes Esteves and Chaouki Kasmi, write in a paper published by the IEEE. Or as Vincent Strubel, the director of their research group at ANSSI puts it more simply, "The sky is the limit here. Everything you can do through the voice interface you can do remotely and discreetly through electromagnetic waves."
I would agree that, potentially, anything that can be said to Siri could, in theory, be sent secretly through radio waves, but I think there's quite a jump between that and describing it as "inducing parasitic signals."
"Parasitic" implies some malware-like component and, as we all know, it has proven to be immensely difficult for hackers to infect iPhones with malicious code without going to the effort of jailbreaking or exploiting the enterprise provisioning feature that Apple provides for companies who wish to roll out their own apps to staff.
That's why threats such as the YiSpecter iOS malware, which managed to creep into the App Store, are so rare and had to use such a convoluted route to get there.
I don't see why the introduction of remote Siri commands necessarily significantly increases the risks of iPhones and iPads becoming infected.
Of course, if you're an Android user — particularly one who has found it problematic to update your operating system with the latest patches, and who might be of interest to intelligence agencies willing to attempt an attack like this — then you may be more at risk.
My advice? If you're concerned, consider turning off Siri when your phone is locked or at least disabling Voice Activation. And, furthermore, unplug your headphones when you're not using them! | <urn:uuid:9abb42de-db30-40b7-987e-cbbcd16af9e2> | CC-MAIN-2017-04 | https://www.intego.com/mac-security-blog/ingenious-attack-shows-how-siri-could-be-hijacked-silently-from-16-feet-away-but-dont-lose-any-sleep/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00559-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956841 | 1,024 | 2.625 | 3 |
Despite the world economic turmoil of the recent severe European crisis, the demand for high performance computing services remains on the rise. Companies and institutions increasingly see computational power as a source of competitive advantage and in many cases as the only viable solution for many scientific and business challenges, from high energy physics to big data. This trend has brought an unprecedented rise in demand for computational power, and it is posing some serious energy and thermal management challenges.
The energy problem in data centres is two sided. On the one hand, data centres have a problem of energy consumption, which inflates bills even in countries where the cost of energy is relatively low. On the other hand, there is a problem of peak power demand, in other words a problem of availability. Megawatt installations are not so uncommon anymore, meaning that a power demand similar to the one that has traditionally belonged to the heavy industry sector is becoming almost the norm, on some occasions requiring special arrangements for power systems and electrical lines.
Thermal management is exacerbated by another trend: density. In many cases, rack powers of 30 kW are well beyond what legacy air cooling can handle. In modern HPC, the high powers in play often leave few options but to resort to some form of water cooling.
Liquid cooling has many advantages, which derive from the much higher heat capacity per unit volume of water compared to air (we are talking about a factor of 3500 times higher). Liquid cooling implies higher densities, energy savings and the possibility to reuse the thermal energy that the water extracts from the IT equipment. Some additional advantages can be found in terms of lower noise levels, less vibrations and close control of electronics temperatures.
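A back-of-the-envelope calculation shows why that heat capacity matters. The sketch below estimates the water flow needed to carry away a rack's heat load using Q = mass flow * specific heat * temperature rise; the 30 kW load and 10 K rise are assumed figures chosen only to match the densities mentioned above.

```python
CP_WATER = 4186.0    # J/(kg*K), specific heat of water
RHO_WATER = 1000.0   # kg/m^3, density of water

def water_flow_lpm(heat_load_w, delta_t_k):
    """Litres per minute of water needed to remove heat_load_w with a delta_t_k rise."""
    mass_flow_kg_s = heat_load_w / (CP_WATER * delta_t_k)
    return mass_flow_kg_s / RHO_WATER * 1000 * 60

# Assumed example: a 30 kW rack cooled with a 10 K inlet/outlet temperature difference.
print(f"{water_flow_lpm(30_000, 10):.1f} L/min")   # roughly 43 L/min of water
```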
The best approach in deciding what type of cooling to implement is to consider alternatives in relation to technical and business needs, the type of air and liquid cooling system available within budget and a series of variables that play an important role in the decision: the desired density versus space availability, new construction versus existing construction, the proximity to natural sources of cold water like rivers and lakes, the local climate, the cost of energy and the thermal energy recovery possibilities.
For instance, high performance, high density requirements may leave little choice other than liquid cooling to efficiently manage the extraction of heat from the supercomputers. Meanwhile, if the data centre has an economizer and the climate is best suited to air-side economizers (mild temperatures and moderate humidity), then an air cooled DC may make more sense.
Deciding on the cooling system may also take into consideration the type of water cooling to be installed. There are solutions that simply create an extension of the existing liquid-cooling loop closer to the IT equipment, as in the case of liquid cooled racks (liquid cooled door, closed-liquid rack). In other solutions, in-row units are embedded in rows of data center cabinets, providing localized air distribution and management. Alternatively, overhead cooling suspended from the ceiling complements a hot aisle/cold aisle arrangement: as hot air rises from the hot aisle, the overhead cooler captures it, conditions it, and releases it back to the cold aisle.
More effective cooling can be achieved when the liquid is brought into close proximity to the electronic components, as in the case of submerged cooling, spray cooling or direct (embedded) cooling.
In the first case, the electronic components are immersed in a dielectric liquid, typically oil, which is kept in circulation through small pumps. In the second, the water is vaporized and tiny drops of water fall on the electronics, evaporating immediately and taking away a lot of heat. In the latter, water is taken through metal plates or micro pipes into direct contact with processors, memory and other components.
Another distinction is normally made between hot and cold liquid cooling. The definition of hot liquid cooling can vary. In Eurotech we think that hot liquid cooling means the technology capable of using a liquid (e.g. water) with a temperature above the server room temperature. We also accept that, pushing the bar up in terms of maximum coolant temperature, hot liquid cooling may take place when the water is hot enough to allow thermal energy reuse.
In any kind of liquid cooling, one aspect that needs careful attention is the risk of leaking. This is an issue because the electronic components are upgraded on a routine basis, resulting in many systems needing to disconnect and reconnect the liquid carrying lines. Also, there is the need to consider whether cooling with water delivers all of its potential. For instance, resorting to chillers to cool the water will allow density, but limit the energy savings that are maximized with hot water cooling technologies, thanks to air conditioning avoidance. However, it is no news that new powerful processors with TDP of 150W may require coolant temperatures lower than the ones guaranteed by free cooling in warm climates. An additional downside of increasing water temperature may be the higher operating temperature of electronic components. This risk needs to be balanced by the advantages coming from levelling temperatures on the mother board and avoiding hot spots at data center level.
Eurotech has developed liquid cooling systems for more than 7 years and was the first in the market to offer hot liquid cooling with high serviceability. Eurotech Aurora supercomputers have been liquid cooled since product one and day one, allowing precious competences and know-how to be woven into the fabric of the organization. This experience helped the development of our idea of liquid cooling.
Eurotech liquid cooling is:
Hot. That means using hot water of 50+ °C, balancing customer needs, density targets, data center temperature and site temperature/humidity profiles. Eurotech delivers to customers the liquid cooling solution that allows utilizing the water at the maximum temperature possible across the year.
Direct. The cooling takes place inside the rack, where aluminum cold plates are put in direct contact with the components, maximizing heat transfer and heat extraction efficacy. A good side effect is that temperatures are levelled out on the board, avoiding hot spots.
Green. Eurotech aims to utilize free coolers (liquid to air heat exchangers) in any climate zone. Solutions are designed to avoid air conditioning, while maintaining the highest density possible, and to exploit, if required and wherever it is possible, thermal energy recovery.
Comprehensive. The “cold plates” cool processors, memory, FPGAs, power supply, switches and any other heat generating component, including GPUs or other accelerators. This means that there is not a single heat source in the rack that is not cooled, preventing hot spots at DC level.
Serviceable. Eurotech Aurora HPC boards are hot swappable despite being water cooled thanks to connectors that seal instantaneously when a node card is extracted for maintenance or management purposes. The node cards are blades that a single person can easily manage.
Safe. Eurotech understand that it is imperative to keep water away from electronics. For this reason we have spent several years to develop a system that doesn’t leak and to mature those competencies that guide our customers into the correct and trouble free maintenance of the liquid cooling infrastructure.
Indeed, one of Eurotech's focuses is on correct liquid cooling operations and maintenance, which is fundamental to preserving system safety and integrity and keeping performance at top levels.
“The maintenance of liquid cooling systems is not a daunting task,” says Paul Arts, Eurotech technical director, “but it requires following guidelines, many of which are conveniently collected by ASHRAE. At Eurotech, we assist our customers in approaching hot water cooling, designing the systems and training the customers in operations and maintenance. If I have to offer my 2 cents, the areas I would focus my attention on are water quality, anti-corrosion precautions, flow rate and dew point temperatures.”
Eurotech has experienced that correct operations maximize the life not only of the cooling system but also of the electronic components, rounding up the advantages of using hot water cooling. Eurotech believes in liquid cooling as an approachable and concrete solution for facing energy and thermal issues, especially in those contexts that are climatically unfavourable. | <urn:uuid:6159dd26-4def-4566-857a-e9916528daca> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/06/11/liquid_cooling_decisions_types_approaches/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00129-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929364 | 1,639 | 3.046875 | 3 |
Modern silicon-based computing is running into some serious problems. Mainly, a limit. This limit is based on the size of transistors that we use; once they reach a certain size, electrons are too large to be useful. States become indistinguishable, the Heisenberg uncertainty principle makes things very uncertain, and the chip gets so hot that it breaks down.
We briefly mentioned this in our recent article on 3D-NAND, the basis of which is that we decided to build upwards instead of shrinking transistor size even more. Even with a third dimension, we will still reach a limit to our data storage density eventually, and likely in the near future.
Moore’s Law states the number of transistors on a computer chip doubles every eighteen months. If this trend is to continue, we must look to alternative methods of data storage and transfer. The topic of this article is computational photonics/optical computing.
Photonics? What’s that?
First off, let’s look at the word photonics vs. electronics. We know silicon-based computing uses electrons to perform logic. Photonics uses photons, the elementary particles of light, in place of electrons. Using photons has some advantages.
Since the speed of light is what we might call the top speed of the universe (for the most part), it makes sense to use light as an information carrier. We would be able to transfer data sets over vast distances in no time at all, since light is capable of traveling around the world about 7.5 times per second.
Fiber optics uses light as an information carrier, and it’s one of the reasons we have seen ISP’s branching into fiber, such as Google Fiber, though only select cities have been graced with it so far.
Without launching into a techno-babble explanation that neither you nor I will understand, here are some basic advantages scientists think optical computing can bring to the table:
- Small in size
- Increased speed
- Scalable to large or small networks
- Less power consumption (debatable, depends on the length of transmission)
- Low heat
- Complex functions performed quickly/simultaneously (photons rarely interact with each other, thus many different beams, signifying different packets of information, could be sent at the same time in the same space)
Optoelectronic computers, hybrids that integrate photonic components with contemporary silicon-based components, are much more realistic than entirely optical computers. We already have prototypes of optoelectronic chips.
It provides some of the advantages of optical computing without completely overhauling the design, so that at least some of the currently available photonic technologies can be utilized. Unfortunately, having to convert from binary to light pulses and back again greatly reduces speed and increases energy consumption.
This is what makes purely optical computers so tantalizing. Without conversion, the information travels near the speed of light the whole way.
There are some drawbacks to optical computers.
The first glaring disadvantage is that we are likely still a long ways off from having a fully functional, scalable, and commercially available optical computer. Unless there is a drastic breakthrough (yes, it’s possible), we still have years and millions of dollars in research ahead of us.
Another is that optical fibers are generally much wider than electrical traces. It seems that most components are currently much larger than their electrical counterparts, though this isn’t surprising given the relative youth of optical computing compared to silicon-based computers.
Finally, we don't have the software to run these computers even if we do manage to create them. Once we get closer to creating a functional optical computer, I'm sure there will be plenty of developers working on it. For now, we just sit and wait.
Despite all this information, there’s no guarantee that optical computers will ever become a big thing.
Scientists are researching molecular computing and quantum computing alongside optical computing, both of which have major potential. Quantum computing in particular has been called “the hydrogen bomb of cyber warfare,” given its potential for solving incredibly complex equations with massive numbers extremely quickly. It would render current methods of encryption useless.
Just like with optical computing, both molecular and quantum present their own barriers to development and have a long way to go before they are very useful.
Until such a time, we’ll have to continue development of 3D NAND to keep up with Moore’s Law. With more research and a lot of luck, we will hopefully see breakthroughs in the next ten to fifteen years regarding these various alternative methods of computing. By that time, Gillware will have to understand the complications involved in data loss for optical computing. While we don’t currently perform data recoveries on optical computers, we would love to help if you have an electronic one. | <urn:uuid:9744b579-8406-4abf-9bd4-bcc748a934c0> | CC-MAIN-2017-04 | https://www.gillware.com/blog/articles/optical-computing-light-and-the-future-of-computing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00513-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939376 | 1,001 | 3.75 | 4 |
Attacks: types and what you need to know about them
When using the internet, one must know that no one in the cyber world is completely secure or immune to attack by hackers. Malicious attacks are constantly carried out, sometimes even at random, in the hope of gathering information that can put privileged data into the hackers' hands for later use. Here are some attacks which are quite famous and are commonly used by hackers:
This attack is often known as the MitM (man-in-the-middle) attack. It is the type of attack in which the attacker makes individual connections with the victims and relays the messages between them, so each victim is made to believe he is talking to the other person directly over a private connection. But the truth is that, in this case, all the information is actually controlled by the hacker, who can intercept the entire conversation that goes on between the two victims and can inject new messages as well. The attacker impersonates each end point convincingly enough to satisfy the other.
One must know that this attack means the distributed denial of service attack. This is the type of attack where many systems are first infected with Trojans and then used to target one sole system, and this causes the DDoS. The victims of this attack are at both ends: the targeted system and the systems which are maliciously used and controlled by the hackers. Also, during this attack, the incoming traffic used for flooding the victim's system comes from many different sources.
This attack is basically an attempt to make a computer unavailable to its users. The ways in which this specific attack is carried out can vary, but they all involve efforts to permanently or temporarily interrupt the services of a host connected to the internet. These threats are pretty common in the business world, and they are sometimes responsible for attacks on websites as well.
This type of attack is also known as the playback attack. Here, a valid data transmission is repeated or delayed fraudulently or maliciously. This is done by an adversary, or by the originator, who intercepts the data and retransmits it. To get rid of this problem, there is a technique which is widely implemented in the banking sector: the use of one-time passwords. As the name suggests, a password is allocated to the client and stays valid only for a short period of time, so the chances of fraud through replayed or delayed data are greatly reduced. Time stamping is another technique used for prevention of these attacks: once a secure protocol is established, the parties' clocks should be kept in sync.
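To give an idea of how one-time passwords defeat replay, here is a compact sketch of a time-based one-time password in the spirit of RFC 6238, using only Python's standard library. The shared secret is a made-up example; real systems provision keys properly and tolerate small clock drift.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password: only valid during the current time step."""
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

shared_secret = b"example-shared-secret"     # assumption: provisioned out of band
print(totp(shared_secret))                   # a captured code is useless after ~30 s
```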
A SMURF attack can be aimed at a simple client too. When it is performed, one can be affected in various ways: one can be the victim or target of the attack, or one can be on the network which is abused to amplify it. This attack can do very serious damage to network services, whether against individual users or corporations. One of the important elements in these attacks is the directed broadcast, so if one wants to avoid this attack and stay safe, one should simply turn off directed broadcasts, which can help a lot; the relevant router ports can be configured so that no network can be abused in this way. There is also another component which is important for this kind of attack: the attacker has to inject packets into the network with a forged IP address. There are functions in routers which can help get rid of this by simply preventing that forgery of the IP, and this will help prevent SMURF attacks from being launched.
This technique is a very common one. Here one simply acts like someone he isn't and can then gain access to data he isn't authorized to access. Spoofing has many types, some of which are GPS spoofing, TCP spoofing, email spoofing, etc. Email spoofing is the very common one that we see every day. There is a From field in emails which shows where the email is coming from; one can easily forge it, and hence spammers hide the source through which they are sending so many emails. Spoofing an email address is done in pretty much the same way as it is done through snail mail. GPS spoofing is another interesting kind: it is an attack attempt made to deceive a GPS receiver.
Email spamming is basically an electronic version of junk mail. It involves sending unwanted messages, normally containing unsolicited advertisements, to recipients in large numbers. It is a serious thing and something to be concerned about, as this method can even be used to deliver Trojans, viruses and other malware. There are some symptoms through which one can easily recognize whether an email is spam or not. For example, the To box may not contain the receiver's email address and might even be empty. Some of these emails may contain really bad language or point to websites without any good content. If someone notices these problems, he can take some measures to prevent them. For example, he can use spam-filtering software to block the spam. Also, when he suspects that an email is spam, he can simply report it, and deleting it is a good option too. Messages sent by people who are not in the friends list should not be replied to. The anti-virus software and other security patches one uses should be kept up to date.
This attack is basically done to get sensitive information such as passwords, usernames, etc. The technique is a pretty simple one: a person is sent an email, he opens it up and then follows a link to some website, which might be a social networking site. When he enters his credentials, the data goes to the hacker. The reason why one would enter his credentials is that the website looks so much like the real website that many people can't even differentiate between the real one and the fake one. Hence one should always be aware of this fact and should check whether the URL is correct or not.
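One concrete way to apply the "check the URL" advice is to compare a link's hostname with the domain you expect, as in the sketch below; the URLs are made-up examples.

```python
from urllib.parse import urlparse

def hostname_matches(url: str, expected_domain: str) -> bool:
    """True only if the URL's host is the expected domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    expected = expected_domain.lower()
    return host == expected or host.endswith("." + expected)

# Made-up examples: the second URL merely contains the brand name in its path.
print(hostname_matches("https://login.example-bank.com/account", "example-bank.com"))      # True
print(hostname_matches("http://evil.example/example-bank.com/login", "example-bank.com"))  # False
```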
The next attack is also one of the dangerous ones, but most probably one won't know its name, since it is not used that commonly, and one might not be able to find adequate data about it either.
Vishing is a combination of two words, voice and phishing. It is a social-engineering technique that uses the telephone system to obtain private and personal information. The hackers exploit the public's trust in telephone services, in which the physical location of a line is known to the telephone company and is associated with a bill payer. This technique is most commonly used to steal important information such as credit card numbers.
This technique is a kind of phishing. It is specifically targeted at an organization and is carried out to gain access to data the attackers are not supposed to have. It can fool someone more easily than ordinary phishing, because the message appears to come from a trusted party. It is a fraudulent, spoofed-email attempt and should be dealt with strictly.
Some stateless firewalls only apply their security policies to packets that have the SYN flag set. This attack's packets do not have SYN set, so they can pass through such security controls easily, and one can become a victim.
This is a type of attack one sees quite often. It is carried out so that traffic meant for one website is redirected to another. It can be done by changing the hosts file on the victim's computer or even by tampering with a DNS server's software.
When there is a bug in the operating system, or it is left in its default configuration, it can lead to this attack, which can result in changes to the resources present on the system.
Malicious insider threat
This is a threat that an organization typically has to face internally. It comes from people who work inside that organization.
DNS poisoning and ARP poisoning
One must know that both ARP and DNS can be poisoned, so one should always install antivirus software to get rid of these threats more easily.
This technique involves first gaining the victim's trust; the attack is then carried out so that security can be bypassed.
These are attacks in which the client interacts with malicious data supplied by a server. If the client interacts with such a server, he is at risk.
Here are some password attacks which are in fashion these days:
This attack can be used whenever data is encrypted: the weak point in the encryption is targeted so that access to otherwise unauthorized data can be gained.
This is an attack technique used to defeat an authentication mechanism. It works by trying to determine the password or passphrase, with many of the possibilities tried one after another in order to break in.
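To give a feel for why long, varied passwords resist this kind of exhaustive guessing, the small sketch below works out the size of the search space; the guess rate is an illustrative assumption, not a measured figure.

```python
# Rough brute-force cost estimate; the guess rate is an assumed figure.
GUESSES_PER_SECOND = 1_000_000_000

def worst_case_seconds(alphabet_size: int, length: int) -> float:
    keyspace = alphabet_size ** length        # every possible combination
    return keyspace / GUESSES_PER_SECOND

# 8 lowercase letters versus 12 characters drawn from ~95 printable symbols.
print(f"8 lowercase letters : {worst_case_seconds(26, 8):,.0f} seconds")
print(f"12 mixed characters : {worst_case_seconds(95, 12) / (3600 * 24 * 365):,.0f} years")
```

Under these assumptions an 8-character lowercase password falls in minutes, while a 12-character mixed password would take millions of years to exhaust.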
As the name suggests, this technique is a mixture of two or more types of attack, and it can be pretty dangerous as well.
This is a type of attack that exploits the mathematics of probability theory. It is used to abuse the communication between parties: a large number of random attempts are made, because the chance of two of them producing a matching value grows surprisingly quickly with the number of attempts.
This attack involves playing with a table. The table is defined by hackers, for hackers, and they use it to gain access to the personal information one holds on one's computer. It is a precomputed table used for cracking password hashes, a hash being the conversion of a plain password into a value of a certain length over a limited character set.
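The toy sketch below shows the idea behind a precomputed lookup table for unsalted password hashes, and why a random per-user salt defeats it; SHA-256 and the tiny candidate list are used purely for illustration.

```python
import hashlib, os

# Precompute hashes for a tiny "dictionary" of candidate passwords.
candidates = ["password", "letmein", "123456", "qwerty"]
table = {hashlib.sha256(p.encode()).hexdigest(): p for p in candidates}

# An unsalted hash leaked from some database can be reversed by simple lookup.
leaked = hashlib.sha256(b"letmein").hexdigest()
print("recovered:", table.get(leaked))       # -> letmein

# With a random per-user salt, the same precomputed table is useless.
salt = os.urandom(16)
salted = hashlib.sha256(salt + b"letmein").hexdigest()
print("salted lookup:", table.get(salted))   # -> None
```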
Typo squatting/URL hijacking
This technique is also known as the fake-URL technique or URL hijacking. It depends on the mistakes users make while typing a URL into the web browser: one might type a slightly incorrect URL, which then opens up a different website.
Watering hole attack
This method involves targeting the sites most commonly visited by the victims one is interested in. The hackers simply compromise the HTML or JavaScript of those sites so that malicious code can be inserted.
Hence, one can see that many types of attack exist and that they differ in nature. One should know about them all in order to mount a good defence. Normally, staying aware of them and having some good antivirus software will do the trick.
Denial-of-service (DoS) attacks have risen 50% over the last six months, and phishing attacks have risen almost 40% over the same period.
The increases are reported in the biannual Symantec Threat Report.
DoS attacks see hackers overloading a network with data until it collapses, while phishing is where remote attackers send e-mails with fraudulent weblinks to encourage users to hand over passwords to on-line bank accounts.
Symantec said DoS attacks were now running at over 1,400 a day, and that there were now almost eight million phishing attempts per day.
Symantec said internet security attacks were now dominated by people looking to make financial gains rather than gain publicity for their malicious skills.
It said the threat of DoS attacks could be used in extortion scams. Symantec said DoS attacks and the rise of phishing were being helped by the increasing use of “bot” or slave computers – infected computers used to spread attacks without their owners’ knowledge.
The government intends to tackle DoS attacks with its proposed justice bill currently before Parliament.
The existing Computer Misuse Act does not directly address DoS attacks and police have had trouble making charges against alleged perpetrators stick. | <urn:uuid:9087c48c-bbe8-4edb-b1b9-71e897869175> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240076808/DoS-attacks-up-50-phishing-up-40 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00028-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967702 | 287 | 2.8125 | 3 |
NASA's long-running Mars rover Opportunity is getting ready for the harsh Martian winter, but this year for the first time in its nearly eight-year history needs a sunnier location to continue its work.
NASA said the rover, which depends on solar power for energy, is sitting just south of Mars' equator and has worked through four Martian southern hemisphere winters. Being closer to the equator than its now defunct twin rover, Spirit, Opportunity has not needed to stay on a Sun-facing slope during previous winters but now its solar panels carry a thicker coating of Martian dust than before. The dust makes it necessary for Opportunity to spend the winter at a Sun-facing site where the rover can tilt its power panels northward about 15 degrees for maximum solar exposure, NASA stated.
Dust has long been one of the biggest challenges for the Mars rovers as huge dust storms are common on Mars.
In one particularly bad storm in 2007 NASA wrote that the dust in the Martian atmosphere over Opportunity was blocking 99% of direct sunlight to the rover, leaving only the limited diffuse sky light to power it. Before the dust storms began blocking sunlight last month, Opportunity's solar panels had been producing about 700 watt hours of electricity per day, enough to light a 100-watt bulb for seven hours. When dust in the air reduced the panels' daily output to less than 400 watt hours, the rover team suspended driving and most observations, including use of the robotic arm, cameras and spectrometers to study the site where Opportunity is located. One day the output from Opportunity's solar panels dropped to 148 watt hours and the next day fell even lower, to 128 watt hours.
NASA said it has selected a piece of Red Planet real estate called Greeley Haven, an outcrop of rock on Mars recently named informally to honor Ronald Greeley, Arizona State University Regents' professor of planetary geology, who died October 27, 2011.
NASA says the spot will give Opportunity the power to continue working. Planned experiments include a radio-science investigation of the interior of Mars, inspections of mineral compositions and textures on the outcrop, and recording a full-circle, color panorama, NASA said.
"Greeley Haven provides the proper tilt, as well as a rich variety of potential targets for imaging and compositional and mineralogic studies," said Jim Bell, lead scientist for the Panoramic Camera (Pancam) on the rover in a statement. "We've already found hints of gypsum in the bedrock in this formation, and we know from orbital data that there are clays nearby, too."
Opportunity, which landed on Mars January 24, 2004, has driven a total of 21 miles (34 kilometers). The rover last summer arrived at the rim of Endeavour Crater, an ancient crater 14 miles wide. Endeavour is of interest to scientists because NASA's Mars Reconnaissance Orbiter satellite has shown the crater to have clay minerals and older geological deposits, the space agency stated. Clay minerals, which form exclusively under wet conditions, have been found extensively on Mars from orbit, but have not been examined on the surface, NASA said.
The paper starts with an introduction to the ICMP protocol. The introduction explains what the ICMP protocol is, its message types, and where and when we should expect to see them. The following chapters are divided into several subjects ranging from Host Detection to Passive Operating System Fingerprinting. An effort was made to offer more illustrations, examples and diagrams in order to explain and illustrate the different issues involved with the ICMP protocol's usage in scanning.
Download the paper in PDF format here. | <urn:uuid:25648f54-99a1-46dd-9cf2-6674f578b244> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2002/04/04/icmp-usage-in-scanning-version-30/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00543-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936606 | 103 | 2.8125 | 3 |
The security vulnerabilities already present and prevalent across the Internet of Things (IoT) leaves a responsibility at the feet of the embedded software architects and software application development professionals building out the technology stacks that will drive the devices we use today, tomorrow and into the immediate future.
Faulty by default?
Surely our architectural approach to embedded software engineering needs to be rethought from the ground up? Stephen Gates, chief research intelligence analyst at NSFOCUS, asks why so many IoT devices use default passwords.
“Simple; when manufacturers build this type of technology they make it as “user-friendly” as possible. Just plug it in and often it works. The real intention of the decision to ship every device with the same username/password is primarily to reduce customer support calls, which cost manufacturers money,” said Gates.
As we know, most IoT devices ship with the username of “admin” and the password is the word “password”.
“Some vendors may use different default combinations, but once you know what vendor does what, it’s easy from there. Manufacturers must do a better job of either insuring that each device has a unique default password, or they must force users to change the password once the default is entered, when the device is first installed,” insists Gates.
OPERATIONAL NOTE: One way of ensuring that each device has a unique password is to etch the devices’ default username and password on the unit itself. Even if a user did not change the default password, a hacker would have to gain physical access to the unit to determine its default username/password combination. This would go a long way to solving that problem if every device shipped with a different combination of login credentials.
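As a rough sketch of the two ideas above, a unique per-device default plus a forced change at first use, the hypothetical fragment below derives the default password from the device serial number using a per-model secret. Every name, file and key in it is invented for illustration; it is not any vendor's actual scheme, and a real device would store only a hash of the credential.

```python
import hashlib, hmac, json

FACTORY_KEY = b"per-model factory secret"   # assumption: provisioned at manufacture
STATE_FILE = "credentials.json"             # hypothetical on-device credential store

def default_password(serial: str) -> str:
    """Derive a unique per-device default that can be printed on the unit's label."""
    digest = hmac.new(FACTORY_KEY, serial.encode(), hashlib.sha256).hexdigest()
    return digest[:10]

def first_boot_setup(serial: str) -> None:
    # Start with the per-device default and refuse normal operation until it changes.
    state = {"user": "admin",
             "password": default_password(serial),   # a real device would store a hash
             "must_change": True}
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def change_password(new_password: str) -> None:
    with open(STATE_FILE) as f:
        state = json.load(f)
    state["password"] = new_password
    state["must_change"] = False                     # only now enable remote access
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)
```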
If this problem is not solved on a global scale, analysts argue that soon we may see DDoS attacks that are capable of taking down major portions of the Internet, as well as causing brownouts, creating intolerable latency, or making the Internet and all the ‘Things’ in it unusable.
The Kappenberger factor
Reiner Kappenberger, global product manager at HPE Security – Data Security, agrees. He says that the IoT space has become a hot market that companies need to enter quickly, with functionality, in order to be considered leaders in the space.
However, with an approach where functionality is the leading indicator comes the risk that security measures are pushed to the back of the development cycle and are frequently dropped in order to release a product. While some of these issues are easy to fix, the problem can put new entrants to the market out of business when security is not given an equal footing with features during development.
“The current lack of guidance and regulations for IoT device security is one of the bigger problems in this area and why we see breaches in the IoT space rising,” said Kappenberger.
“Typically computers have a lifespan of a few years. However IoT devices may be around for 10+ years before being replaced – especially in home networks. Companies working in this market need to consider this fact as over the years we have seen a constant flood of vulnerabilities in the tools being used and those systems need to be updated to patch those security flaws. As shown by this latest development, this is a broad problem that manifests itself on many IoT devices with extremely damaging results,” he added.
Kappenberger asserts that consumers who venture into the IoT space should identify the security measures that have been taken to secure a device and ask about the long-term support for the product.
The developer responsibility to IoT
Many commentators have already discussed the lack of standards across IoT software platforms. Still more have commented that the IoT security war has already been lost before it started and that it now comes down to how well we architect the Application Programming Interface (API) connections between devices — and how carefully software application developers start to ‘couple up’ the decoupled services that exist across the IoT. | <urn:uuid:26868165-37aa-41bd-a05d-6258f634aef1> | CC-MAIN-2017-04 | https://internetofbusiness.com/faulty-default-build-iot-software-safely/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00451-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951813 | 818 | 2.671875 | 3 |
Picking on Weak Browsers
It has also become easier for attackers to use the vulnerabilities in browser programs to build engines on Web servers that detect what type of software an individual is using and then launch malware programs that can take advantage of applications with holes that they have discovered. The malware writers are also using people's IP address information to tailor the content they attempt to deliver to a certain target. "If a malware site such as this sees Internet Explorer 6, they send something different than if they see IE 7; there's a lot of logic in these engines," Ollmann said. "The site will look at the first request the browser makes and then find the right payload to deliver when the browser makes a second request. It happens that fast."

Traditional signature-based anti-virus products, versus behavior-oriented tools, are still failing to stop even those threats aimed at well-known vulnerabilities, according to Ollmann, who noted that the most popular exploit used to infect Web browsers with malware in 2006 was the Microsoft MS-ITS vulnerability, first disclosed in 2004. Over the course of 2006, June was the month that saw the highest volume of new software vulnerabilities, while the week before the Thanksgiving holiday was the busiest week of the year. IBM reported that so-called downloaders, also known as Trojan viruses, which install themselves and attempt to retrieve other malware programs, represented the most popular form of threat seen in 2006, accounting for 22 percent of all attacks.

Among the other findings highlighted in the report was news that the volume of spam increased by 100 percent during the last year, and that the United States, Spain and France were the three top sources of spam worldwide. In a reflection of the number of experienced users and businesses run in Germany, German was the second most popular language for spam e-mails, Ollmann said, but the volume of spam written in English still represents approximately 92 percent of the messages. In a nod to the art of simplicity, the most popular subject line for spam in 2006 was "Re: hi," according to the report. South Korea accounts for the highest source of phishing e-mails, according to the report, and Web sites that host pornographic or sex-related content represented 12 percent of the Internet last year.
The researcher said that malware communities are also sharing lists of IP addresses to find specific sets of targets to assail with their programs, and to help identify accounts used by security software makers to help detect new attacks and code variations. | <urn:uuid:dac6efb2-fd44-450f-8202-c3c6fea0fc4d> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Security/IBM-Researchers-Predict-More-Vulnerabilities-in-07/1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00387-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963606 | 549 | 2.515625 | 3 |
OTTAWA, ONTARIO--(Marketwired - Feb. 4, 2014) - Health Canada
Eve Adams, Parliamentary Secretary of Health, on behalf of the Minister of Health, Rona Ambrose, held a demonstration to remind Canadians of the importance of using a carbon monoxide detector to alert homeowners of unsafe levels of the gas in their homes.
Parliamentary Secretary Adams demonstrated the use of a detector to raise awareness about the dangers of carbon monoxide in the home.
Health Canada encourages residents to protect their health and safety by properly installing and regularly testing carbon monoxide detectors in their home. These detectors provide a warning if carbon monoxide levels pose a threat to health. Carbon monoxide is an odourless, colourless gas, and high levels of carbon monoxide can be deadly. A detector is the only way to identify a problem.
Sources of carbon monoxide include furnaces, water heaters, wood stoves and other household appliances that burn fuel. If these devices are improperly installed or malfunction, they can release carbon monoxide into the home. Other sources include exhaust fumes from vehicles and gas-powered equipment like snow blowers and generators, and fuel-burning cooking appliances like barbeques and camp stoves.
People can maintain safe carbon monoxide levels in their homes by keeping their furnace and other fuel-burning appliances well maintained and inspected regularly; never idling their car or other gas-powered equipment in their garage; and never using a generator indoors or close to a window.
- You can't taste, see or smell carbon monoxide, so a detector is the only way to alert you that levels are high.
- Carbon monoxide reduces the body's ability to carry oxygen in the blood and exposure can cause headaches, fatigue, dizziness, chest pain, and at high levels, coma or death.
- There were 380 accidental carbon monoxide poisoning deaths in Canada from 2000 to 2009 according to Statistics Canada.
- Proper use and maintenance of fuel-burning appliances and other gas-powered equipment is key to keeping carbon monoxide levels in the home low.
"Our Government is encouraging all Canadians to install carbon monoxide detectors in their homes. You can't see, smell or taste carbon monoxide so a detector is the only way to alert you if there's a problem."
Eve Adams, Parliamentary Secretary for Health
"Every year, our team responds to close calls involving carbon monoxide in the home and the consequences can be tragic. Carbon monoxide detectors are just as key to health and safety as smoke alarms, and we appreciate the efforts of the Government to raise awareness on this important issue."
Fire Chief John deHooge, Ottawa Fire Services
"Health care teams are continually on the lookout for symptoms of carbon monoxide poisoning, but prevention is the best defence. A carbon monoxide detector is a valuable ally in protecting your family."
Dr. Charles-Antoine Breau, Emergency physician, Hôpital Montfort
Factsheet - Carbon Monoxide
Health Canada - Carbon Monoxide
Health Canada news releases are available on the Internet at: www.healthcanada.gc.ca/media
What is carbon monoxide?
Carbon monoxide (CO) is a gas that forms whenever you burn fuel like propane, natural gas, gasoline, oil, coal and wood. Because it is colourless, odourless and tasteless, it can't be detected without a carbon monoxide detector. Carbon monoxide can cause health problems before people even notice it is present.
What are the effects of carbon monoxide on health?
When you inhale carbon monoxide, it reduces your body's ability to carry oxygen in your blood. The health effects can be very serious.
Exposure to low levels of CO may cause:
- shortness of breath
- flu-like symptoms
- impaired motor functions (like difficulty walking or problems with balance)
At high levels, or if you are exposed to low levels for long periods of time, symptoms may include:
- chest pain
- poor vision
- difficulty thinking
At very high levels, CO exposure can cause:
- coma
- death
What are the sources of carbon monoxide (CO)?
Sources of CO include furnaces, water heaters/boilers, wood stoves, and other appliances that run on fuels. If these devices are improperly installed or malfunction, they can release CO into your home.
Other sources of CO include:
- exhaust fumes from vehicles or other gas-powered equipment, like lawnmowers, snow blowers, and power generators, used indoors or in your attached garage
- chimneys that are blocked or dirty
- fuel-burning cooking appliances, like propane, natural gas or charcoal grills
- tobacco smoke
How can you reduce your risk?
Take these steps to protect your family from exposure to CO in your home.
Put at least one carbon monoxide detector in your home to warn you if CO levels pose an immediate threat
- Put CO detectors in hallways outside bedrooms where you can hear them.
- Choose CO detectors that are certified by the Canadian Standards Association (CSA) or the Underwriters Laboratories of Canada (ULC).
- Follow the manufacturer's directions for installing, testing and replacing detectors. Store the manual in a handy place.
- If your CO alarm sounds, leave your home right away. Call local authorities (9-1-1) and do not go back home until a professional has fixed the problem.
- Keep in mind that CO detectors and smoke detectors have different purposes. You need both to stay safe.
CO detectors are designed to prevent immediate carbon monoxide poisoning. A carbon monoxide detector is not a substitute for proper installation and maintenance of fuel-burning appliances.
Maintenance is the key to keeping CO levels low
- Make sure fuel-burning appliances, like furnaces, fireplaces and gas stoves, are well maintained and working properly.
- Have a professional inspect appliances and clean chimneys at least once a year. Make sure your chimney is not blocked by snow or ice, bird nests or other debris.
Leave it outside
- Never use a barbecue or fuel-burning camping equipment inside your home, garage, vehicle, camper or tent or close to a window.
- Never use a power generator indoors or in an attached garage (even with the door open) or close to a window.
- Don't use kerosene or oil space heaters or lamps in enclosed areas, unless they are specifically designed for indoor use and in a well-ventilated room.
- Keep your home completely free of tobacco smoke.
- Never let vehicles idle in the garage, even when the garage door is open.
- Never run gas-powered lawnmowers, trimmers, snow blowers or other gas-powered equipment in the garage.
- Keep the door between your house and the garage closed when not needed and seal leaks between the garage and the home. | <urn:uuid:24a3a8ca-f9d0-4545-acdd-ec18a8e600c0> | CC-MAIN-2017-04 | http://www.marketwired.com/press-release/harper-government-reminds-canadians-carbon-monoxide-detectors-can-prevent-illness-save-1875659.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00295-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.905956 | 1,433 | 3.140625 | 3 |
A Mail Relay is a server, normally on the Internet, which can forward mail from its users to other mail servers. Most electronic mail, and most Mail Relays, use the SMTP mail transfer protocol.
Since most mail client programs (e.g., Netscape, Eudora, Outlook) do not fully understand how to deliver mail, they generally rely on a Mail Relay to take care of mail delivery. Consequently, Mail Relays are mandatory for most mail systems.
A potential security problem with Mail Relays is that, if they are not correctly configured, they will agree to relay mail sent by any user, from any network. This allows spammers to forward mail through them, and thus make their spam appear to have originated from the offending Mail Relay.
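One practical consequence of the open-relay problem is that administrators routinely test their own relays. The sketch below, using Python's standard smtplib, asks a server whether it will accept mail from a foreign sender to a foreign recipient without actually sending anything; the host name and addresses are placeholders, and it should only be pointed at servers you are responsible for.

```python
import smtplib

RELAY = "mail.example.net"            # placeholder: the relay being tested
OUTSIDE_FROM = "someone@example.org"  # neither address belongs to the relay's own domain
OUTSIDE_TO = "victim@example.com"

try:
    with smtplib.SMTP(RELAY, 25, timeout=10) as smtp:
        smtp.ehlo()
        code_from, _ = smtp.mail(OUTSIDE_FROM)
        code_to, _ = smtp.rcpt(OUTSIDE_TO)
        smtp.rset()                   # abandon the transaction without sending anything
    if code_from == 250 and code_to == 250:
        print("Accepted a foreign sender and recipient: likely an open relay")
    else:
        print("Refused third-party relaying (RCPT code %d)" % code_to)
except (smtplib.SMTPException, OSError) as exc:
    print("Could not complete the test:", exc)
```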
Valenchon M. (CNRS Physiology of Reproduction and Behaviors; University of Tours; French Institute of Horses and Riding IFCE), Levy F. (CNRS Physiology of Reproduction and Behaviors), and 11 more authors.
Animal Behaviour | Year: 2013
In the present study, we sought to determine the influence of stress and temperament on working memory for disappearing food in horses. After assessment of five dimensions of temperament, we tested working memory of horses using a delayed-response task requiring a choice between two food locations. Delays ranging from 0 to 20 s were tested. The duration of working memory for disappearing food was first characterized without stressors (N=26). The horses were then divided into two groups and their performance was assessed under stressful (exposure to acute stressors prior to testing, N=12) or control conditions (N=12). Results showed that the duration of working memory for disappearing food lasted at least 20 s under nonstressful conditions, and that under stressful conditions this duration lasted less than 12 s. This stress-induced impairment confirms in a nonrodent species that working memory performance is very sensitive to exposure to stressors. In addition, working memory performance in horses is influenced by the temperamental dimension of fearfulness according to the state of stress: fearful horses showed better performance under control conditions and worse performance under stressful conditions than nonfearful horses. These findings are discussed in the context of the Yerkes-Dodson law of stress and performance. © 2013 The Association for the Study of Animal Behaviour. Source
With 2014 marking the 25th anniversary of the internet, how do modern cyber security challenges compare to those of the early days of the World Wide Web?
In the beginning
One of the earliest pieces of recorded malware was the Morris worm, created in 1988. According to its creator, Robert Morris, it was not intended to cause harm. The worm was intended to quietly gauge the size of the internet, however due to a slight miscalculation it nearly brought it down instead.
The worm spread using techniques that are still used by threat actors today, exploiting known vulnerabilities and weak login credentials.
The similarities don't end there though. In 1995, Kevin Mitnick, the FBI's most wanted hacker, was arrested. His crimes were compromising computer systems, databases and telephone networks across the world. So how did one man access all these systems? He just asked nicely.
Kevin had discovered social engineering and that by pretending to be a colleague or IT administrator, he could talk his way into systems without a single software exploit.
Fast forward to 2014
Now, organizations are facing malware and Advanced Persistent Threats every day. Take the latest US breach at ‘Splash Carwash’ where the POS system was compromised, leading to the loss of customer credit card information. It is believed that the attackers exploited known vulnerabilities in an old version of pcAnywhere and default login credentials; clearly a lesson not learned from history.
Another anniversary in 2014 is Edward Snowden's first year in asylum. The former technical assistant for the CIA is well known for releasing top-secret documents detailing global surveillance. So how did one man amass 1.7 million US intelligence files? He asked nicely, reportedly persuading fellow workers that he needed their logins to do his job. As a system administrator, Snowden's influence allowed him to manipulate people and computer systems.
So what conclusions can we draw to protect organizations against the next generation of threats in 2014? We recently discussed next generation solutions in our on-demand webinar with Forrester analyst Chris Sherman. The webinar shares the latest research findings and discusses which controls can combat next generation threats.
The worrying conclusion is that despite wide spread awareness and understanding, there are still lessons to be learned. For example, there are still unpatched systems and increasing numbers of unmanaged admin users.
The best solutions learn the lessons of history and prioritize the controls with the biggest impact. The Council on Cyber Security and analysts such as SANS recommend regular patching, privilege management and application whitelisting as the most effective ‘quick wins’ against real-life attacks.
Ultimately, by learning from cyber history and layering these controls as part of a defense in depth strategy, organizations can mitigate the vast majority of threats and proactively prepare for what’s next in the world of cyber security. | <urn:uuid:d00f43a5-cf02-4340-9ff1-bffa05bdbce1> | CC-MAIN-2017-04 | https://blog.avecto.com/2014/07/lessons-learned-from-25-years-of-the-web/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00259-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952117 | 582 | 2.671875 | 3 |
Ash Ashutosh, CEO of Actifio, says that IoT deployments could be hindering by data copies.
The Internet of Things (IoT) is fast becoming the next major technological revolution. According to Gartner, 6.4 billion connected things will be in use by the end of 2016 and the IoT will support total services spending of $235 billion (£163 billion). With this huge amount of revenue comes the data to match. The impact of the IoT and the data it generates is being felt across the entire IT spectrum, with companies having to upgrade technology and processes to manage this deluge of data efficiently and securely.
For many, Big Data is seen as the Holy Grail for organisations. It will enable them to understand what their customers want and target them to drive sales and growth. The Big Data trend has the potential to revolutionise the IT industry by offering new business insight into the data they previously ignored. To say it’s critical for organisations to harness the potential of Big Data is a huge understatement.
In an age where Big Data is the mantra and terabytes quickly become petabytes, the surge in data quantities is causing the complexity and cost of data management to grow at an alarming rate. At the current rate, by the end of this year the world will be producing more digital information than it can store – incredible. Just look at that mismatch between data and storage – one zettabyte would fill the storage on 34 billion smartphones.
The real challenge with Big Data
The problem of overwhelming data quantity exists because of the proliferation of multiple physical data copies. IDC estimates that 60% of what is stored in data centres is actually copy data – multiple copies of the same thing or out-dated versions. The vast majority of stored data is extra copies of production data created every day by disparate data protection and management tools like backup, disaster recovery, development, testing and analytics.
IDC estimates that up to 120 copies of a given piece of production data may be in circulation within a company, and that the cost of managing this flood of data copies has reached $44 billion worldwide.
Tackling data bloating
While many IT experts are focused on how to deal with the mountains of data that are produced by this intentional and unintentional copying, far fewer are addressing the root cause of data bloating. In the same way that prevention is better than cure, reducing this weed-like data proliferation should be a priority for all businesses.
Copy data virtualisation – freeing organisations’ data from their legacy physical infrastructure just as virtualisation did for servers a decade ago – is increasingly seen as the way forward. In practice, copy data virtualisation reduces storage costs by 80%. At the same time, it makes virtual copies of ‘production quality’ data available immediately to everyone in the business everywhere they need it. That includes regulators, product designers, test and development teams, back-up administrators, finance departments, data-analytics teams, marketing and sales departments. In fact, any department or individual who might need to work with company data can access and use a full, virtualised data set. This is what true agility means for developers and innovators.
Moreover, network strain is eliminated. IT staff – traditionally dedicated to managing the data – can be refocused on more meaningful tasks that can help grow the business. Data management licences are reduced, due to no longer requiring back-up agents, de-duplication software and WAN (wide area network) optimisation tools.
The ‘golden master copy’
By eliminating copy data and working off a ‘golden master copy’, storage capacity is reduced as well – and along with it, all the attendant management and infrastructure overheads. The net result is a more streamlined organisation driving innovation. When you consider all of the ways to tackle the issue of data bloating, the remedies result in cost savings worth millions and millions. It’s one of the main reasons this issue has fast become a key topic discussed at boardroom level.
You’ve heard of both server virtualisation and network virtualisation; two concepts that once seemed outlandish. However, fast-forward to now and the benefits of both have seen them become commonplace within IT departments. Now, it’s the turn of copy data virtualisation. As the IoT spaces continue to grow so significantly at the rate they do, so will the need for businesses to put a data management strategy in place to capitalise on the opportunity presented by Big Data. | <urn:uuid:14e27a7d-702b-4ca5-bb81-57cff88a9c63> | CC-MAIN-2017-04 | https://internetofbusiness.com/iot-big-data-and-why-you-should-care-about-data-copies/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00167-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939082 | 931 | 2.515625 | 3 |
Survey: Most Online Shoppers Value Green-Powered Web Sites
Philadelphia — March 25
Web hosting company 1&1 Internet Inc. released research conducted in association with Wired Magazine that identifies a “green expectation” from consumers: Virtual shops should now run more green operations. The results show that the requirement for all companies to go green is growing, as the traditional methods of using less energy are only a start in the eyes of most consumers today.
The “SMB Green Study” found that a green-powered Web site may be a deciding factor when selecting which retailer to purchase from. More than 60 percent of people admit to being swayed to purchase from an online shop if the Web site identifies itself as using green energy. About 78 percent of consumers say that the environmental practices of even a virtual shop are important to them, and most consumers believe that all businesses should be environmentally responsible.
One way for e-commerce Web sites to take green efforts to the next level is by powering their Web sites from a green data center or servers using renewable energy. More than 70 percent of consumers surveyed believe using a green service provider is an acceptable way to put forth a “green” image.
“Committing to minimizing their impact on the environment has a clear commercial advantage for all types of retailers,” said Oliver Mauss, CEO of 1&1 Internet Inc. “By offering green Web hosting at no extra cost, 1&1 offers an easy way for any Web site to run on green power.”
The research also showed that two-thirds (67 percent) of participants are frequent online shoppers, making Web purchases more than twice a month. The amount of people shopping online has been growing exponentially, which presents a lucrative opportunity for many people setting up shop on the Internet. The data suggests that choosing a green Web host could potentially be a selling point that convinces online browsers to become buyers. | <urn:uuid:9f726449-e5f0-4752-aacf-a2ec03f68339> | CC-MAIN-2017-04 | http://certmag.com/survey-most-online-shoppers-value-green-powered-web-sites/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00469-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936851 | 393 | 2.515625 | 3 |
Hongsanan S. (CAS Kunming Institute of Botany; Institute of Excellence in Fungal Research; Mae Fah Luang University; World Agroforestry Center), and 30 more authors.
Fungal Diversity | Year: 2014
The order Asterinales comprises a single family, Asterinaceae. In this study, types or specimens of 41 genera of Asterinaceae are re-examined and re-described and illustrated by micrographs. Seventeen genera, namely Asterina (type genus), Asterinella, Asterotexis, Batistinula, Cirsosia, Echidnodella, Halbania, Lembosia, Meliolaster, Parasterinopsis, Platypeltella, Prillieuxina, Schenckiella (=Allothyrium), Trichasterina, Trichopeltospora, Uleothyrium and Vizellopsis, are maintained within Asterinaceae. Echidnodes, Lembosiella, Lembosina, Morenoina, and Thyriopsis are transferred to Aulographaceae based on morphological and molecular characteristics. Anariste is transferred to Micropeltidaceae, while Lembosiopsis is transferred to Mycosphaerellaceae. Placoasterella and Placosoma are morphologically close to taxa in Parmulariaceae, where they are transferred. Aulographina is placed in Teratosphaeriaceae, while Asterodothis, Asterinema, Dothidasteromella, Leveillella, Petrakina and Stephanotheca are transferred to Dothideomycetes, genera incertae sedis. Eupelte, Macowaniella, Maheshwaramyces, Parasterinella, and Vishnumyces are treated as doubtful genera, because of lack of morphological and molecular data. Aphanopeltis, Asterolibertia, Neostomella, Placoasterina, and Symphaster are synonyms of Asterina based on morphology, while Trichamelia, Viegasia, and Yamamotoa are synonyms of Lembosia. The characteristics of each family are discussed and a phylogenetic tree is included. © 2014, School of Science. Source
Families of Dothideomycetes: In loving memory of Majorie Phyllis Hyde (affectionately known as Mum or Marj), 29 August 1921-18 January 2013 - Without mum's determination, a character passed on to children, this treatise would never have been completed - K.D. Hyde
Hyde K.D. (CAS Kunming Institute of Botany; World Agroforestry Center; Institute of Excellence in Fungal Research; Mae Fah Luang University), and 101 more authors.
Fungal Diversity | Year: 2013
Dothideomycetes comprise a highly diverse range of fungi characterized mainly by asci with two wall layers (bitunicate asci) and often with fissitunicate dehiscence. Many species are saprobes, with many asexual states comprising important plant pathogens. They are also endophytes, epiphytes, fungicolous, lichenized, or lichenicolous fungi. They occur in terrestrial, freshwater and marine habitats in almost every part of the world. We accept 105 families in Dothideomycetes with the new families Anteagloniaceae, Bambusicolaceae, Biatriosporaceae, Lichenoconiaceae, Muyocopronaceae, Paranectriellaceae, Roussoellaceae, Salsugineaceae, Seynesiopeltidaceae and Thyridariaceae introduced in this paper. Each family is provided with a description and notes, including asexual and asexual states, and if more than one genus is included, the type genus is also characterized. Each family is provided with at least one figure-plate, usually illustrating the type genus, a list of accepted genera, including asexual genera, and a key to these genera. A phylogenetic tree based on four gene combined analysis add support for 64 of the families and 22 orders, including the novel orders, Dyfrolomycetales, Lichenoconiales, Lichenotheliales, Monoblastiales, Natipusillales, Phaeotrichales and Strigulales. The paper is expected to provide a working document on Dothideomycetes which can be modified as new data comes to light. It is hoped that by illustrating types we provide stimulation and interest so that more work is carried out in this remarkable group of fungi. © 2013 Mushroom Research Foundation. Source | <urn:uuid:c9637c85-9b10-461b-a059-0c9c03826c18> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/international-fungal-research-and-development-center-894005/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00405-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.82404 | 1,034 | 2.953125 | 3 |
by Hannes Tschofenig, Nokia Siemens Networks and Henning Schulzrinne, Columbia University
Summoning the police, the fire department, or an ambulance in emergencies is one of the most important functions the telephone enables. As telephone functions move from circuit-switched to Internet telephony, telephone users rightfully expect that this core feature will continue to be available and work as well as it has in the past. Users also expect to be able to reach emergency assistance using new communication devices and applications, such as instant messaging or Short Message Service (SMS), and new media, such as video. In all cases, the basic objective is the same: The person seeking help needs to be connected with the most appropriate Public Safety Answering Point (PSAP), where call takers dispatch assistance to the caller's location. PSAPs are responsible for a particular geographic region, which can be as small as a single university campus or as large as a country.
The transition to Internet-based emergency services introduces two major structural challenges. First, whereas traditional emergency calling imposed no requirements on end systems and was regulated at the national level, Internet-based emergency calling needs global standards, particularly for end systems. In the old Public Switched Telephone Network (PSTN), each caller used a single entity, the landline or mobile carrier, to obtain services. For Internet multimedia services, network-level transport and applications can be separated, with the Internet Service Provider (ISP) providing IP connectivity service, and a Voice Service Provider (VSP) adding call routing and PSTN termination services. We ignore the potential separation between the Internet access provider, that is, a carrier that provides physical and data link layer network connectivity to its customers, and the ISP that provides network layer services. We use the term VSP for simplicity, instead of the more generic term Application Server Provider (ASP).
The documents that the IETF Emergency Context Resolution with Internet Technology (ECRIT) working group is developing support multimedia-based emergency services, and not just voice. As is explained in more detail later in this article, emergency calls need to be identified for special call routing and handling services, and they need to carry the location of the caller for routing and dispatch. Only the calling device can reliably recognize emergency calls, while only the ISP typically has access to the current geographical location of the calling device based on its point of attachment to the network. The reliable handling of emergency calls is further complicated by the wide variety of access technologies in use, such as Virtual Private Networks (VPNs), other forms of tunneling, firewalls, and Network Address Translators (NATs).
This article describes the architecture of emergency services as defined by the IETF and some of the intermediate steps as end systems and the call-handling infrastructure transition from the current circuit-switched and emergency-calling-unaware Voice-over-IP (VoIP) systems to a true any-media, any-device emergency calling system.
IETF Emergency Services Architecture
The emergency services architecture developed by the IETF ECRIT working group is described in and can be summarized as follows: Emergency calls are generally handled like regular multimedia calls, except for call routing. The ECRIT architecture assumes that PSAPs are connected to an IP network and support the Session Initiation Protocol (SIP) for call setup and messaging. However, the calling user agent may use any call signaling or instant messaging protocol, which the VSP then translates into SIP.
Nonemergency calls are routed by a VSP, either to another subscriber of the VSP, typically through some SIP session border controller or proxy, or to a PSTN gateway. For emergency calls, the VSP keeps its call routing role, routing calls to the emergency service system to reach a PSAP instead. However, we also want to allow callers that do not subscribe to a VSP to reach a PSAP, using nothing but a standard SIP user agent (see and for a discussion about this topic); the same mechanisms described here apply. Because the Internet is global, it is possible that a caller's VSP resides in a regulatory jurisdiction other than where the caller and the PSAP are located. In such circumstances it may be desirable to exclude the VSP and provide a direct signaling path between the caller and the emergency network. This setup has the advantage of ensuring that all parties included in the call delivery process reside in the same regulatory jurisdiction.
As noted in the introduction, the architecture neither forces nor assumes any type of trust or business relationship between the ISP and the VSP carrying the emergency call. In particular, this design assumption affects how location is derived and transported.
Providing emergency services requires three crucial steps, which we describe in the following sections: recognizing an emergency call, determining the caller's location, and routing the call and location information to the appropriate emergency service system operating a PSAP.
Recognizing an Emergency Call
In the early days of PSTN-based emergency calling, callers would dial a local number for the fire or police department. It was recognized in the 1960s that trying to find this number in an emergency caused unacceptable delays; thus, most countries have been introducing single nationwide emergency numbers, such as 911 in North America, 999 in The United Kingdom, and 112 in all European Union countries.
This standardization became even more important as mobile devices started to supplant landline phones. In some countries, different types of emergency services, such as police or mountain rescue, are identified by separate numbers. Unfortunately, more than 60 different emergency numbers are used worldwide, many of which also have nonemergency uses in other countries, so simply storing the list of numbers in all devices is not feasible. In addition, hotels and university campuses often use dial prefixes, so an emergency caller in some European universities may actually have to dial 0112 to reach the fire department.
Because of this diversity, the ECRIT architecture decided to separate the concept of an emergency dial string, which remains the familiar and regionally defined emergency number, and a protocol identifier that is used for identifying emergency calls within the signaling system. The calling end system has to recognize the emergency (service) dial string and translate it into an emergency service identifier, which is an extensible set of Uniform Resource Names (URNs) defined in RFC 5031. A common example of such a URN, defined to reach the generic emergency service, is urn:service:sos. The emergency service URN is included in the signaling request as the destination and is used to identify the call as an emergency call. If the end system fails to recognize the emergency dial string, the VSP may also perform this service.
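As a sketch of how a calling device might perform this translation, the hypothetical fragment below maps a few regional dial strings to the generic service URN and forms the start line of the outgoing SIP request. In a real client the dial-string table would be learned per region (for example via LoST) rather than hard-coded.

```python
# Illustrative dial-string table; a real client refreshes this per region (e.g. via LoST).
EMERGENCY_DIAL_STRINGS = {
    "911": "urn:service:sos",
    "112": "urn:service:sos",
    "999": "urn:service:sos",
}

def request_line_for(dialed: str) -> str:
    service_urn = EMERGENCY_DIAL_STRINGS.get(dialed)
    if service_urn is None:
        raise ValueError("not recognized as an emergency dial string")
    # The service URN, not the dialed digits, marks the request as an emergency call.
    return f"INVITE {service_urn} SIP/2.0"

print(request_line_for("112"))   # -> INVITE urn:service:sos SIP/2.0
```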
Because mobile devices may be sold and used worldwide, we want to avoid manually configuring emergency dial strings. In general, a device should recognize the emergency dial string familiar to the user and the dial strings customarily used in the currently visited country. The Location-to-Service Translation Protocol (LoST) , described in more detail later, also delivers this information.
Some devices, such as smartphones, can define dedicated user interface elements that dial emergency services. However, such mechanisms must be carefully designed so that they are not accidentally triggered, for example, when the device is in a pocket.
Emergency Call Routing
When an emergency call is recognized, the call needs to be routed to the appropriate PSAP. Each PSAP is responsible for only a limited geographic region, its service region, and some set of emergency services. For example, even in countries with a single general emergency number such as the United States, poison-control services maintain their own set of call centers. Because VSPs and end devices cannot keep a complete up-to-date mapping of all the service regions, a mapping protocol, LoST , maps a location and service URN to a specific PSAP Uniform Resource Identifier (URI) and a service region.
LoST, illustrated in Figure 1, is a Hypertext Transfer Protocol (HTTP)-based query/response protocol where a client sends a request containing the location information and service URN to a server and receives a response containing the service URL, typically a SIP URL, the service region where the same information would be returned, and an indication of how long the information is valid. Both request and response are formatted as Extensible Markup Language (XML). For efficiency, responses are cached, because otherwise every small movement would trigger a new LoST request. As long as the client remains in the same service region, it does not need to consult the server again until the response returned reaches its expiration date. The response may also indicate that only a more generic emergency service is offered for this region. For example, a request for urn:service:sos.marine in Austria may be replaced by urn:service:sos. Finally, the response also indicates the emergency number and dial string for the respective service.
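To make the exchange concrete, the sketch below builds a findService request along the lines of RFC 5222 for a geodetic location and POSTs it with Python's standard library. The resolver URL and coordinates are placeholders, and the exact XML a given deployment expects may differ.

```python
import urllib.request

LOST_SERVER = "https://lost.example.net/lost"   # placeholder resolver URL

# A findService request in the style of RFC 5222: a geodetic point plus the desired service.
body = """<?xml version="1.0" encoding="UTF-8"?>
<findService xmlns="urn:ietf:params:xml:ns:lost1"
             xmlns:gml="http://www.opengis.net/gml" serviceBoundary="value">
  <location id="loc1" profile="geodetic-2d">
    <gml:Point srsName="urn:ogc:def:crs:EPSG::4326">
      <gml:pos>37.775 -122.422</gml:pos>
    </gml:Point>
  </location>
  <service>urn:service:sos</service>
</findService>"""

req = urllib.request.Request(LOST_SERVER, data=body.encode("utf-8"),
                             headers={"Content-Type": "application/lost+xml"})
with urllib.request.urlopen(req) as resp:
    # The findServiceResponse carries the PSAP URI, the service boundary, a
    # validity interval and the emergency number/dial strings for the region.
    print(resp.read().decode("utf-8"))
```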
The number of PSAPs serving a country varies significantly. Sweden, for example, has 18 PSAPs, and the United States has approximately 6,200. Therefore, there is roughly one PSAP per 500,000 inhabitants in Sweden and one per 50,000 in the United States. As all-IP infrastructure is rolled out, smaller PSAPs may be consolidated into regional PSAPs. Routing may also take place in multiple stages, with the call being directed to an Emergency Services Routing Proxy (ESRP), which in turn routes the call to a PSAP, accounting for factors such as the number of available call takers or the language capabilities of the call takers.
Emergency services need location information for three reasons: routing the call to the right PSAP, dispatching first responders (for example, policemen), and determining the right emergency service dial strings. It is clear that the location must be automatic for the first and third applications, but experience has shown that automated, highly accurate location information is vital to dispatching as well, rather than relying on callers to report their locations to the call taker.
Such information increases accuracy and avoids dispatch delays when callers are unable to provide location information because of language barriers, lack of familiarity with their surroundings, stress, or physical or mental impairment.
Location information for emergency purposes comes in two representations: geo(detic), that is, longitude and latitude, and civic, that is, street addresses similar to postal addresses. Particularly for indoor location, vertical information (floors) is very useful. Civic locations are most useful for fixed Internet access, including wireless hotspots, and are often preferable for specifying indoor locations, whereas geodetic location is frequently used for cell phones. However, with the advent of femto and pico cells, civic location is both possible and probably preferable because accurate geodetic information can be very hard to acquire indoors.
In almost all cases, location values are represented as Presence Information Data Format Location Object (PIDF-LO), an XML-based document to encapsulate civic and geodetic location information. The format of PIDF-LO is described in , with the civic location format updated in and the geodetic location format profiled in . The latter document uses the Geography Markup Language (GML) developed by the Open Geospatial Consortium (OGC) for describing commonly used location shapes.
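The fragment below shows a minimal geodetic PIDF-LO of the kind described here, roughly following the layout of RFC 4119 and RFC 5491, and extracts the coordinates with Python's standard XML parser; the entity and coordinates are made up for the example.

```python
import xml.etree.ElementTree as ET

# A minimal geodetic PIDF-LO, roughly following RFC 4119 / RFC 5491 conventions.
pidf_lo = """<?xml version="1.0" encoding="UTF-8"?>
<presence xmlns="urn:ietf:params:xml:ns:pidf"
          xmlns:gp="urn:ietf:params:xml:ns:pidf:geopriv10"
          xmlns:gml="http://www.opengis.net/gml"
          entity="pres:caller@example.com">
  <tuple id="loc1">
    <status>
      <gp:geopriv>
        <gp:location-info>
          <gml:Point srsName="urn:ogc:def:crs:EPSG::4326">
            <gml:pos>48.2085 16.3721</gml:pos>
          </gml:Point>
        </gp:location-info>
        <gp:usage-rules>
          <gp:retransmission-allowed>no</gp:retransmission-allowed>
        </gp:usage-rules>
      </gp:geopriv>
    </status>
  </tuple>
</presence>"""

ns = {"gml": "http://www.opengis.net/gml"}
pos = ET.fromstring(pidf_lo).find(".//gml:pos", ns)
print("caller position (lat lon):", pos.text)
```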
Location can be conveyed either by value ("LbyV") or by reference ("LbyR"). For the former, the XML location object is added as a message body in the SIP message. Location by value is particularly appropriate if the end system has access to the location information; for example, if it contains a Global Positioning System (GPS) receiver or uses one of the location configuration mechanisms described later in this section. In environments where the end host location changes frequently, the LbyR mechanism might be more appropriate. In this case, the LbyR is an HTTP/Secure HTTP (HTTPS) or SIP/Secure SIP (SIPS) URI, which the recipient needs to resolve to obtain the current location. Terminology and requirements for the LbyR mechanism are available in .
An LbyV and an LbyR can be obtained through location configuration protocols, such as the HTTP Enabled Location Delivery (HELD) protocol or Dynamic Host Configuration Protocol (DHCP) [12, 13]. When obtained, location information is required for LoST queries, and that information is added to SIP messages .
The requirements for location accuracy differ between routing and dispatch. For call routing, city or even county-level accuracy is often sufficient, depending on how large the PSAP service areas are, whereas first responders benefit greatly when they can pinpoint the caller to a particular building or, better yet, apartment or office for indoor locations, and an outdoor area of at most a few hundred meters. This detailed location information avoids having to search multiple buildings, for example, for medical emergencies.
As mentioned previously, the ISP is the source of the most accurate and dependable location information, except for cases where the calling device has built-in location capabilities, such as GPS, when it may have more accurate location information. For landline Internet connections such as DSL, cable, or fiber-to-the-home, the ISP knows the provisioned location for the network termination, for example. The IETF GEOPRIV working group has developed protocol mechanisms, called Location Configuration Protocols, so that the end host can request and receive location information from the ISP. The Best Current Practice document for emergency calling enumerates three options that clients should universally support: DHCP civic and geo (with a revision of RFC 3825 in progress ), and HELD . HELD uses XML query and response objects carried in HTTP exchanges. DHCP does not use the PIDF-LO format, but rather more compact binary representations of locations that require the endpoint to construct the PIDF-LO.
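As an illustration of the HELD side of this, the sketch below sends a locationRequest in the style of RFC 5985 to a Location Information Server and prints whatever comes back. The LIS URL is a placeholder (in practice the LIS is discovered, not configured), and the request options shown are only an example.

```python
import urllib.request

LIS_URL = "https://lis.example.net/held"   # placeholder; in practice the LIS is discovered

# A HELD locationRequest in the style of RFC 5985, asking for geodetic location
# and a location URI (location by reference).
body = """<?xml version="1.0" encoding="UTF-8"?>
<locationRequest xmlns="urn:ietf:params:xml:ns:geopriv:held" responseTime="8">
  <locationType exact="false">geodetic locationURI</locationType>
</locationRequest>"""

req = urllib.request.Request(LIS_URL, data=body.encode("utf-8"),
                             headers={"Content-Type": "application/held+xml"})
with urllib.request.urlopen(req) as resp:
    # The locationResponse embeds a PIDF-LO and/or one or more location URIs.
    print(resp.read().decode("utf-8"))
```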
Particularly for cases where end systems are not location-capable, a VSP may need to obtain location information on behalf of the end host.
Obtaining at least approximate location information at the time of the call is time-critical, because the LoST query can be initiated only after the calling device or VSP has obtained location information. Also, to accelerate response, it is desirable to transmit this location information with the initial call signaling message. In some cases, however, location information at call setup time is imprecise. For example, a mobile device typically needs 15 to 20 seconds to get an accurate GPS location "fix," and the initial location report is based on the cell tower and sector. For such calls, the PSAP should be able to request more accurate location information either from the mobile device directly or from the Location Information Server (LIS) operated by the ISP. The SIP event notification extension, defined in RFC 3265, is one such mechanism that allows a PSAP to obtain the location from an LIS. To ensure that the PSAP is informed only of pertinent location changes and that the number of notifications is kept to a minimum, event filters can be used.
The two-stage location refinement mechanism described previously works best when location is provided by reference (LbyR) in the SIP INVITE call setup request. The PSAP subscribes to the LbyR provided in the SIP exchange, and the LbyR refers to the LIS in the ISP's network. In addition to a SIP URI, the LbyR message can also contain an HTTP/HTTPS URI. When such a URI is provided, an HTTP-based protocol can be used to retrieve the current location.
This section discusses the requirements the different entities need to satisfy, based on Figure 2. A more detailed description can be found in the referenced specification.
Note that this description focuses on the final stage of deployment and does not discuss the transition architecture, in which some implementation responsibilities can be rearranged, with an effect on the overall functions offered by the emergency services architecture. A few variations were introduced to handle the transition from the current system to a fully developed ECRIT architecture.
With the work on the IETF emergency architecture, we have tried to balance the responsibilities among the participants, as described in the following sections.
An end host, through its VoIP application, has three main responsibilities: it has to attempt to obtain its own location, determine the URI of the appropriate PSAP for that location, and recognize when the user places an emergency call by examining the dial string. The end host operating system may assist in determining the device location.
The protocol interaction for location configuration is indicated as interface (a) in Figure 2; numerous location configuration protocols have been developed to provide this capability.
A VoIP application needs to support the LoST protocol in order to determine the emergency service dial strings and the PSAP URI. Additionally, the device needs to understand the service identifiers defined for emergency services.
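For illustration, a LoST findService query that asks which PSAP serves a given geodetic position might look roughly like the XML assembled below; the coordinates, server, and attribute choices are placeholders rather than a normative example.

```python
# Hedged sketch of a LoST <findService> request. If POSTed to a LoST server
# (media type application/lost+xml), the response would identify the PSAP URI
# for this position along with the local emergency dial strings. Coordinates,
# namespace prefixes, and attributes here are illustrative placeholders.
FIND_SERVICE = """<?xml version="1.0" encoding="UTF-8"?>
<findService xmlns="urn:ietf:params:xml:ns:lost1"
             xmlns:gml="http://www.opengis.net/gml"
             serviceBoundary="value">
  <location id="loc1" profile="geodetic-2d">
    <gml:Point srsName="urn:ogc:def:crs:EPSG::4326">
      <gml:pos>40.7128 -74.0060</gml:pos>
    </gml:Point>
  </location>
  <service>urn:service:sos</service>
</findService>"""

print(FIND_SERVICE)
```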
As currently defined, it is assumed that PSAPs can be reached via SIP, but PSAPs may support other signaling protocols, either directly or through a protocol translation gateway. The LoST retrieval results indicate whether other signaling protocols are supported. To support multimedia, different types of codecs may be required; details are available in the referenced specifications.
The ISP has to make location information available to the endpoint through one or more of the location configuration protocols.
In order to route an emergency call correctly to a PSAP, an ISP may initially disclose only the approximate location needed for routing to the endpoint and provide more precise location information later, when the PSAP operator dispatches emergency personnel. The functions required by the IETF emergency services architecture are restricted to the disclosure of a relatively small amount of location information, as discussed in the referenced documents.
The ISP may also operate a (caching) LoST server to improve the robustness and reliability of the architecture. This server lowers the round-trip time for contacting a LoST server, and the caches are most likely to hold the mappings of the area where the emergency caller is currently located.
When ISPs allow Internet traffic to traverse their network, the signaling and media protocols used for emergency calls function without problems. Today, there are no legal requirements to offer prioritization of emergency calls over IP-based networks. Although the standardization community has developed a range of Quality of Service (QoS) signaling protocols, they have not experienced widespread deployment.
SIP does not mandate that call setup requests traverse SIP proxies; that is, SIP messages can be sent directly to the user agent. Thus, even for emergency services it is possible to use SIP without the involvement of a VSP. However, in terms of deployment, it is highly likely that a VSP will be used. If a caller uses a VSP, this VSP often forces all calls, emergency or not, to traverse an outbound proxy or Session Border Controller (SBC) operated by the VSP. If some end devices are unable to perform a LoST lookup, the VSP can provide the necessary functions as a backup solution.
If the VSP uses a signaling or media protocol that the PSAP does not support, it needs to translate the signaling or media flows.
VSPs can assist the PSAP by providing identity assurance for emergency calls, for example by using SIP identity mechanisms, thus helping to prosecute prank callers. However, the link between the subscriber information and the real-world person making the call is weak.
In many cases, VSPs have, at best, only the credit card data for their customers, and some of these customers may use gift cards or other anonymous means of payment.
The emergency services Best Current Practice document discusses only the standardization of the interfaces from the VSP and ISP toward PSAPs and some parts of the PSAP-to-PSAP call transfer mechanisms that are necessary for emergency calls to be processed by the PSAP. Many aspects related to the internal communication within a PSAP, between PSAPs as well as between a PSAP and first responders, are beyond the scope of the IETF specification.
When emergency calling has been fully converted to Internet protocols, PSAPs must accept calls from any VSP, as shown in interface (d) of Figure 2. Because calls may come from all sources, PSAPs must develop mechanisms to reduce the number of malicious calls, particularly calls containing intentionally false location information. Assuring the reliability of location information remains challenging, particularly as more and more devices are equipped with Global Navigation Satellite System (GNSS) receivers, such as GPS and Galileo, allowing them to determine their own location. However, it may be possible in some cases to check the veracity of the location information an endpoint provides by comparing it against infrastructure-provided location information; for example, a LIS-determined location.
So far we have described LoST as a client-server protocol. As with the Domain Name System (DNS), a single LoST server does not store the mapping elements for all PSAPs worldwide, for both technical and administrative reasons. Thus, there is a need to let LoST servers interact with other LoST servers, each covering a specific geographical region. Working together, LoST servers form a distributed mapping database, with each server carrying mapping elements, as shown in Figure 3. LoST servers may be operated by different entities, including the ISP, the VSP, or another independent entity, such as a governmental agency. Typically, individual LoST servers offer the necessary mapping elements for their geographic regions to others. However, LoST servers may also cache mapping elements of other LoST servers, either through data synchronization mechanisms (for example, FTP, exports from a Geographical Information System [GIS], or a specialized synchronization protocol) or through regular use of LoST. This caching improves performance and increases the robustness of the system.
A detailed description of the mapping architecture, with examples, is available in the referenced documentation.
Steps Toward an IETF Emergency Services Architecture
The architecture described so far requires changes both in already-deployed VoIP end systems and in the existing PSAPs. The speed of transition and the path taken vary between different countries, depending on funding and business incentives. Therefore, it is generally difficult to argue whether upgrading endpoints or replacing the emergency service infrastructure will be easier. In any case, the transition approaches being investigated consider both directions. We can distinguish roughly four stages of transition (Note: The following descriptions omit many of the details because of space constraints):
If devices are used in environments without location services, the VSP's SIP proxy may need to insert location information based on estimates or subscriber data. These cases are described briefly in the following sections.
Figure 4 shows an emergency services architecture with traditional endpoints. When the emergency caller dials the Europe-wide emergency number 112 (step 0), the device treats it as any other call without recognizing it as an emergency call; that is, the dial string provided by the endpoint, which may conform to RFC 4967 or RFC 3966, is signaled to the VSP (step 1). Recognition of the dial string is then left to the VSP; the same is true for location retrieval (step 2) and routing to the nearest (or appropriate) PSAP (step 3). Dial-string recognition, location determination, and call routing are simpler to carry out when the device is fixed and the voice and application service is provided through the ISP than when the VSP and the ISP are two separate entities.
There are two main challenges to overcome when dealing with traditional devices: First, the VSP must discover the LIS that knows the location of the IP-based end host. The VSP is likely to know only the IP address of that device, visible in the call signaling that arrives at the VSP. When a LIS is discovered and contacted and some amount of location information is available, then the second challenge arises, namely, how to route the emergency call to the appropriate PSAP. To accomplish the latter task it is necessary to have some information about the PSAP boundaries available.
The referenced approach does not describe a complete and detailed solution but instead uses building blocks specified in ECRIT. Still, this deployment scenario is subject to a number of constraints.
Partially Upgraded End Hosts
A giant step forward in simplifying the handling of IP-based emergency calls is to provide the end host with some information about the ISP so that LIS discovery is possible. The end host may, for example, learn the ISP's domain name by using LIS discovery, or might even obtain a Location by Reference (LbyR) through the DHCP-URI option or through HELD. The VSP can then either resolve the LbyR in order to route the call or use the domain to discover a LIS via DNS.
Additional software upgrades at the end device may allow for recognition of emergency calls based on some preconfigured emergency numbers (for example, 112 and 911) and allow for the implementation of other emergency service-related features, such as disabling silence suppression during emergency calls.
In most countries, national and sometimes regional telecommunications regulators, such as the Federal Communications Commission (FCC) and individual states, or the European Union, strongly influence how emergency services are provided, who pays for them, and the obligations that the various parties have. Regulation is, however, still at an early stage: in most countries current requirements demand only manual update of location information by the VoIP user. The ability to obtain location information automatically is, however, crucial for reliable emergency service operation, and it is required for nomadic and mobile devices. (Nomadic devices remain in one place during a communication session, but are moved frequently from place to place. Laptops with Wi-Fi interfaces are currently the most common nomadic devices.)
Regulators have traditionally focused on the national or, at most, the European level, and the international nature of the Internet poses new challenges. For example, mobile devices are now routinely used beyond their country of purchase and, unlike traditional cellular phones, need to support emergency calling functions. It appears likely that different countries will deploy IP-based emergency services over different time horizons, so travelers may be surprised to find that they cannot call for emergency assistance outside their home country.
The separation between Internet access and application providers on the Internet is one of the most important differences to existing circuit-switched telephony networks. A side effect of this separation is the increased speed of innovation at the application layer, and the number of new communication mechanisms is steadily increasing. Many emergency service organizations have recognized this trend and advocated for the use of new communication mechanisms, including video, real-time text, and instant messaging, to offer improved emergency calling support for citizens. Again, this situation requires regulators to rethink the distribution of responsibilities, funding, and liability.
Many communication systems used today lack accountability; that is, it is difficult or impossible to trace malicious activities back to the persons who caused them. This problem is not new, because pay phones and prepaid cell phones have long offered mischief makers the opportunity to place hoax calls, but weak user registration procedures, the lack of deployed end-to-end identity mechanisms, and the ease of providing fake location information increase the attack surface at PSAPs. Attackers also have become more sophisticated over time, and botnets that generate a large volume of automated emergency calls to exhaust PSAP resources, including call takers and first responders, are not science fiction.
Originally published November 11, 2009
What outputs and results are expected from computerized decision support systems (DSSs)? What outputs do we expect from a business intelligence (BI) system?
Traditionally, one would answer the intended output is "relevant information." In a vague, idealistic way that is correct. But DSS designers and builders need to understand exactly what information a manager needs and wants in a specific decision situation. So more exactly, the output of a computerized decision support system may be quantitative results from models, analyses and displays of historical operating data, displays of facts in various formats, recommendations and relevant documents. Outputs are the direct result of the interaction of user inputs, stored or accessed data and documents and analytical and retrieval processes in the computerized system. One hopes DSS/BI outputs decrease uncertainty in a decision situation and positively impact decisions.
Outputs describe the information that comes from the programmed process in the DSS. An output may be a map, a chart, a tabular data summary, a printed report or a data file. DSS outputs include forms, objects and other representations for inputs and manipulation by a user, and representations that display results from queries, analyses and rules for users. Decision support information may be data points on a chart, text, stored or computer-generated images and even sounds. Decision support system output may be descriptive or prescriptive. Any output must be "suitable for human interpretation" and meaningful to users. Decision support outputs should inform the user of the system about the supported decision situation. According to Wikipedia, the English word inform "comes (via French) from the Latin verb informare, to give form to, to form an idea of." Outputs should inform and give form to data and situations. The outputs provide analysis and context to what we know and think about a situation.
The evidence is substantial that the amount of information that is received impacts strategies for processing information and making choices, the time spent in decision making and decision accuracy/quality. The relation between increasing information and improved decision making is not, however, linear. Rather the relation is an inverted U; that is, decision making improves as more information is received until an inflection point is reached where no improvement occurs, and then more information creates an overload and decision-making performance declines. Decision support systems for providing business intelligence should help manage the enormous information load that confronts managers.
Decision-relevant information is the result of processing, manipulating and organizing data in a way that adds to the knowledge of the person receiving it. Decision relevant outputs must possess utility, value or some meaning for the system user and the consumers of the information. Decision support outputs must be related to truth about a situation, to communicating relevant information and to representing complex relationships.
Managers and their support staffs need to consider what information and analyses are actually needed to support management and business decisions. Some managers need both detailed transaction data and summarized data. Most managers only want summarized data. Managers usually want lots of charts and graphs; a few only want tables of numbers. Many managers want information provided routinely or periodically, and some want information available online and on demand. Certain managers want financial analyses, and some managers want primarily "soft," non-financial or qualitative information.
In general, a computerized data-driven decision support system can provide summarized transaction information, trend analyses and performance monitoring. A model-driven DSS can provide projections and forecasts, sensitivity and "what if" results. Document-driven DSS outputs include relevant documents; knowledge-driven DSSs provide recommendations. Outputs of communications-driven DSSs are interactive messages and information sharing.
A computerized DSS can help managers understand the status of operations, monitor business results, review customer preference data or even investigate and analyze competitor actions. In all of these situations, management information and analyses should have a number of characteristics. Information must be both timely and current. These characteristics mean the information is up to date and available when managers want it. Also, information must be accurate, relevant and complete. Finally, managers want information presented in a format that assists them in making decisions. In general, management information should be summarized and concise; and any decision support system should have an option for managers to obtain more detailed information about underlying data, models or rules.
Decision support and business intelligence systems need to provide current, timely information that is accurate, relevant and complete. A specific DSS must present appropriate information outputs in an appropriate format that is easy to understand and manipulate. The information presented may result from analysis of transaction data or it may be the result of a decision model or it may have been gathered from external sources. Computerized support systems can present internal and external facts, informed opinions and forecasts to managers.
Managers want the right information, at the right time, in the right format, and at the right cost to support their decision making.
Dan Power Blog, "Managing Information Load," November 29, 2007.
Power, D. J., "What is the output of a decision support system?” DSS News, Vol. 9, No. 3, February 10, 2008.
Power, D. J., Decision Support Systems: Concepts and Resources for Managers, Westport, CT: Greenwood/Quorum Books, 2002.
SOURCE: Decision Support System Outputs
Learning the Lesson About Disaster Preparedness
In addition, the government mandated survivability into the network infrastructure, just as it did for everything from building codes to transportation. While the Japanese didn't think of everything (who could possibly imagine a 9.0 earthquake with tsunamis of this magnitude?), they thought of enough. While the human tragedy continues, an intact Internet in Japan means that aid can flow more easily, help can come more quickly and the nation can function more normally where it's possible to do so. Of course, in areas directly affected by the disaster, people probably aren't getting on the Internet much.

What this means to you is a lot. A natural disaster in the area where you are probably won't take out the Internet. What should matter to you is whether you can get to the Internet. Just as is the case in Japan, you need more than the existence of the network; you need to be able to run the infrastructure that gets you to the Internet. This is when you see just how ready your data center is for a disaster, whether it's an earthquake, a monster snowfall or a hurricane. Do you have a source of power that's really reliable? By that I mean power that's not going out in two days because you ran out of diesel fuel or that depends on an ISP without an emergency plan.

You need to confirm that your entire pathway to the Internet will stay functional in spite of the worst of disasters. Then you need to do it again because you need more than one way to get to the Internet. And then you have to test it regularly, just to make sure it will actually work. While there's not a lot you can do if your data center is physically destroyed, except bring your backup data center online, you can make sure that if your data center stays up, you can still reach the outside world. That's what they did in Japan, and obviously it worked.
It takes power to run computers, and even if the ISPs are mostly operating, there will be areas where there isn't service. In addition, there are areas where the undersea cables are out because the landing stations aren't staffed, or because the cables are damaged. After all, the entire island shifted more than eight feet during the quake, and these cables don't necessarily have a lot of stretch left in them.
How does sensitivity analysis differ from "What if?" analysis?
by Dan Power
In the early days of decision support deployment, one of the major "selling points" of vendors and academics was the ability to do "What If?" analysis. In the 1970s, model-driven decision support for sales and production planning helped a manager change a decision variable like the number of units to produce and then immediately get a new result for an outcome variable like profit. As decision support has gotten more sophisticated and become more diverse in its use, "What If?" as a concept has broadened. The decision support community has also introduced more precise terminology from the mathematical modeling literature.
In most decision support and analytic applications, sensitivity and "What If?" analysis refer to quantitative analyses. In some of the decision making and planning literature, "What if analysis" is also discussed as a qualitative, brainstorming scenario approach that "uses broad, loosely structured questioning to investigate contingencies." In the context of business intelligence and data-driven decision support, "What If?" is often used as a descriptor for ad hoc queries of a decision support data base.
According to a vendor website, Applix.com, planners use models to address "What If" questions such as: 1) What profits can we anticipate next year if inflation is 7 percent and we continue current pricing policies? 2) If we open a new plant, what profits can we expect? 3) What if we were to hire 55 people in Sales, 10 in Marketing and 35 in R&D? 4) What is the impact on manufacturing and shipping if the price of oil increases 15% during Q2? and 5) What would be needed for raw material and inventory if the demand of a product went up 20%?
In the decision support literature and in common discourse, we don't have agreement about the difference between "What If?" analysis and sensitivity analysis. Microsoft Excel documentation defines "What-if analysis" as a "process of changing the values in cells to see how those changes affect the outcome of formulas on the worksheet. For example, varying the interest rate that is used in an amortization table to determine the amount of the payments." Four tools in Excel are commonly categorized as "What If?" or sensitivity analysis (Winston, 2004) tools: Data Tables, Goal Seek, Scenarios, and Solver. The simplest type of "What If?" analysis is manually changing a value in a cell that is used in a formula to see the result. Excel experts seem to use the terms sensitivity and "What If?" analysis interchangeably.
To get a better understanding of what is possible, let's briefly examine how one would implement "What If?" or sensitivity analysis using MS Excel tools. First, a data table is a range of cells that summarizes the results of changing certain values in formulas in a model. There are two types of data tables: one input variable tables and two input variable tables. "Two-variable data tables use only one formula with two lists of input values. The formula must refer to two different input cells." In Microsoft's Mortgage Loan Analysis example, a two-variable data table would show how different interest rates and loan terms would affect the mortgage payment amount. The table shows the decision maker how sensitive the payment amount is to the interest rate. The Goal Seek tool is helpful when you know the desired result from a model and want to find the appropriate input or decision variable levels. "What If?" involves incrementally changing an input until the goal is reached. Goal Seek automates this trial and error process. Scenarios let an Excel user construct strategies where multiple decision variables are changed in each scenario. For example, a decision maker may have best case, most likely, and worst case scenarios. Finally, Solver is an optimization tool that includes a sensitivity analysis capability. Monte Carlo simulation in Excel can also be used to assist in "What If?" or sensitivity analysis. Spreadsheets models with probability distributions for inputs can simulate outcomes for a range of input parameters.
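To make these tools concrete outside of Excel, here is a small Python sketch, using the mortgage example mentioned above, that emulates a two-variable data table and a crude Goal Seek. The loan amount, rate grid, and target payment are made-up illustration values, and real spreadsheets or solver libraries handle far more than this.

```python
# A minimal Python analogue of two of the Excel "What If?" tools: a
# two-variable data table (payment across interest rates and terms) and a
# simple Goal Seek by bisection. All numbers are illustrative assumptions.
def payment(principal, annual_rate, years):
    """Standard amortized monthly payment."""
    r, n = annual_rate / 12.0, years * 12
    return principal * r / (1 - (1 + r) ** -n) if r else principal / n

# Two-variable data table: rows = interest rates, columns = terms (years).
rates = [0.04, 0.05, 0.06, 0.07]
terms = [15, 20, 30]
print("rate\\term " + "".join(f"{t:>10}" for t in terms))
for rate in rates:
    row = "".join(f"{payment(250_000, rate, t):>10.2f}" for t in terms)
    print(f"{rate:>9.2%} {row}")

# Goal Seek by bisection: what principal gives a $1,500 payment at 5% over 30 years?
def goal_seek(target, lo=1.0, hi=1_000_000.0, tol=0.01):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if payment(mid, 0.05, 30) < target else (lo, mid)
    return (lo + hi) / 2

print(f"Principal for a $1,500/month payment: {goal_seek(1500):,.0f}")
```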
According to Pannell (1997), "a decision variable is a variable over which the decision maker has control and wishes to select a level, whereas a strategy refers to a set of values for all the decision variables of a model. An optimal strategy is the strategy which maximises the value of the decision maker's objective function (e.g. profit, social welfare, expected utility)." In general, mathematical methods assess the sensitivity of a model's output to the range of variation of one or more inputs. Sensitivity analysis is used to determine which inputs, parameters or decision variables contribute most to the variance in the output of a model and hence are the most important and most "sensitive".
Let's check some definitions from Web sites. Wikipedia (wikipedia.org) notes "What-if analysis of a model considers the question: 'What happens to the result if we make a particular change to a parameter?'. If the change of a parameter is small this is also called sensitivity analysis: 'How sensitive is the result to a small change of a parameter?' Wikipedia also defines sensitivity analysis as "the study of how the variation in the output of a model (numerical or otherwise) can be apportioned, qualitatively or quantitatively, to different sources of variation." Wikipedia shows the differing usage of these terms across various disciplines. The sensitivity analysis entry notes that in a business context "sensitivity analysis can provide information to managers about which elements of the business require more concentration. For example if sales, variable costs, fixed costs or output were to increase or decrease by 10% which would have the most effect on profit?"
The Web Dictionary of Cybernetics and Systems defines sensitivity analysis as "a procedure to determine the sensitivity of the outcomes of an alternative to changes in its parameters (as opposed to changes in the environment; see contingency analysis, a fortiori analysis). If a small change in a parameter results in relatively large changes in the outcomes, the outcomes are said to be sensitive to that parameter. This may mean that the parameter has to be determined very accurately or that the alternative has to be redesigned for low sensitivity. (IIASA)"
Finally, the Michigan Department of Environmental Quality (www.michigan.gov) defines a sensitivity analysis as "the process of varying model input parameters over a reasonable range (range of uncertainty in values of model parameters) and observing the relative change in model response."
Pannell (1997) identifies uses of sensitivity analysis in decision making, communication, understanding systems and model development. Based on his discussion, a model-driven DSS with appropriate sensitivity analysis should help in 1) testing the robustness of an optimal solution, 2) identifying critical values, thresholds or break-even values where the optimal strategy changes, 3) identifying sensitive or important variables, 4) investigating sub-optimal solutions, 5) developing flexible recommendations which depend on circumstances, 6) comparing the values of simple and complex decision strategies, and 7) assessing the "riskiness" of a strategy or scenario.
The most common "What If?" analysis in model-driven DSS is changing an input value in an ad hoc way and seeing the result. This type of analysis has severe limitations. The analysis is likely to be more complete if an input object like a spinner or a slider is used to change values. Such an approach is much faster and easier than typing in individually new input values. A range sensitivity analysis evaluates the effect on outputs by systematically varying one of the model inputs across its entire range of plausible values. According to Frey and Patil "results of nominal range sensitivity are most valid when applied to a linear model."
What are the limitations of "What If?" analysis? If the analysis is ad hoc rather than systematic, the analysis is likely to miss potential problems and solutions. Managers may not understand the assumptions of the sensitivity analysis, e.g. assuming a linear relationship. Also, in general it is impossible to audit the thoroughness of sensitivity and "What If?" analyses and their impact on decision making. My general sense is that systematic sensitivity analysis using a one or two-variable data table should be required in all model-driven DSS based upon algebraic models. Relying on an ad hoc manipulation of single variables in a quantitative model is always problematic and limited.
So "What If?" analysis is used broadly for techniques that help decision makers assess the consequences of changes in models and situations. Sensitivity analysis is a more specific and technical term generally used for assessing the systematic results from changing input variables across a reasonable range in a model. The current frontier is animated sensitivity analysis where a visual display like a chart or graph is sytematically varied showing results of changing model parameters. Check the Planners Lab review (Power, 2006).
As always your comments and questions are welcome.
Alexander, E.R. (1989). Sensitivity analysis in complex decision models, Journal of the American Planning Association 55: 323-333.
Frey, H. C. and S. R. Patil, "Identification and Review of Sensitivity Analysis Methods," NCSU/USDA Workshop on Sensitivity Analysis Methods, http://www.ce.ncsu.edu/risk/abstracts/frey.html
Isukapalli, S.S., "Uncertainty Analysis of Transport-Transformation Models," a dissertation submitted to the Graduate School--New Brunswick, Rutgers, The State University of New Jersey, URL http://www.ccl.rutgers.edu/~ssi/thesis/thesis-node14.html
Microsoft help, http://office.microsoft.com/en-us/assistance/
Pannell, D.J. (1997). Sensitivity analysis of normative economic models: Theoretical framework and practical strategies. Agricultural Economics 16: 139-152, at URL http://cyllene.uwa.edu.au/~dpannell/dpap971f.htm
Power, D., "What is Planners Lab?" DSS News, Vol. 7, No. 11, May 21, 2006.
Sensitivity analysis, from Wikipedia, the free encyclopedia, URL http://en.wikipedia.org/wiki/Sensitivity_analysis
Web Dictionary of Cybernetics and Systems at URL http://pespmc1.vub.ac.be/ASC/SENSIT_ANALY.html
What-if analysis, from Wikipedia, the free encyclopedia, URL http://en.wikipedia.org/wiki/What-if_analysis
Winston, W. L., Microsoft Excel Data Analysis and Business Modeling, Microsoft Press, 2004.
Power, D., "How does sensitivity analysis differ from 'What if?' analysis?" DSS News, Vol. 7, No. 16, July 30, 2006; revised for Decision Support News August 31, 2014.
Last update: 2014-08-30 07:31
Author: Daniel Power
You can bet that if there are little red aliens running around on Mars, or spaceships patrolling other planets in our solar system for that matter, a recently powered-up telescope built by researchers at the Defense Advanced Research Projects Agency might just be able to see them.
The Air Force, which operates the DARPA-developed Space Surveillance Telescope (SST), says the telescope's design, featuring unique image-capturing technology known as a curved charge coupled device (CCD) system as well as very wide field-of-view, large-aperture optics, doesn't require the long optics train of more traditional telescopes. The design makes the SST less cumbersome on its moveable mount, enabling it to survey the sky rapidly, the Air Force says. The telescope's mount uses advanced servo-control technology, making the SST one of the most agile telescopes of its size ever built.
"The SST will give us in a matter of nights the space surveillance data that current telescopes take weeks or months to provide," said Air Force Lt. Col. Travis Blake, DARPA's Space Surveillance Telescope program manager in a statement.
From DARPA: "Beyond providing faster data collection, the SST is very sensitive to light, which allows it to see faint objects in deep space that currently are impossible to observe. The detection and tracking of faint objects requires a large aperture and fast optics. The SST uses a 3.5 meter primary mirror, which is large enough to achieve the desired sensitivity. The system is an f/1.0 optical design, with a large-area mosaic CCD camera constructed from the curved imagers and a high-speed shutter allowing for fast scanning at the high sensitivity."
The SST has a number of missions, chief among them watching for debris in low Earth orbit to help existing satellites avoid collisions; it also tracks objects in deep space and offers astronomers a wide-angle lens for astronomical surveys of stars and comets, DARPA says.
DARPA says the SST produced its first images earlier this year and is still undergoing tests.
Microsoft has been notably forward-thinking in data-center design over the past few years, at least the design of its own data centers.
Last year it announced it was building a data center inside an old barn that was partly open to outside air so it could take advantage of natural air circulation and heat leakage as part of its plan to cool the place.
It ran a prototype data center in a tent for seven months to make sure the idea would work.
Earlier this month it announced it was building one in Wyoming, which isn't that innovative in itself, but how many data centers have you ever visited in Wyoming? (The state offered $10 million in incentives for a data center Microsoft said will cost $112 million to build.)
Amidst a shift from a business model totally dependent on selling software to be installed on a customer's own hardware to one in which Microsoft has to host and maintain SaaS or cloud versions of many of its own apps, to be sold by subscription, the ultimate software company is having to expand its network of data centers rapidly.
It has expanded its facilities in the Seattle area and in Dublin, and located others in Quincy, Wash.; Chicago; San Antonio, Tex.; and southern Virginia.
The "mega data centers" among the new crowd cost as much as $500 million to build.
Smells like renewable power supplies
Microsoft's newest plan is to power a data center partially using heat and biogas generated by landfills and sewage treatment plants.
[Fill in your own joke here]
The design calls for a modular data center in which the hardware and support systems are housed in crates similar to shipping containers.
It will include facilities to collect methane produced by landfills and sewage treatment plants, to be used in fuel cells that will provide electricity for the hardware in the IT PACs (Pre-Assembled Components) – Microsoft's term for data center modules built inside shipping containers.
The theoretical savings in power and carbon-dioxide emissions are impressive, though. According to Microsoft, a 200-kilowatt prototype data center will eliminate more than two million pounds of CO2 emissions per year, an amount Microsoft said is equivalent to 300 Honda Civics.
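As a rough plausibility check of those figures, the short calculation below converts a 200 kW continuous load into annual energy and avoided CO2 under an assumed grid emission factor; both the emission factor and the per-car comparison are assumptions for illustration, not Microsoft's published methodology.

```python
# Back-of-envelope check of the quoted figures. The grid emission factor and
# per-car figure are assumptions chosen for illustration only.
POWER_KW = 200
HOURS_PER_YEAR = 24 * 365
GRID_LBS_CO2_PER_KWH = 1.2      # assumed average grid emission factor

annual_kwh = POWER_KW * HOURS_PER_YEAR             # ~1.75 million kWh
avoided_lbs = annual_kwh * GRID_LBS_CO2_PER_KWH    # CO2 avoided if taken off-grid

print(f"Annual energy: {annual_kwh:,.0f} kWh")
print(f"Avoided CO2:   {avoided_lbs:,.0f} lbs/year")
# With these assumptions the result lands near the ~2 million lbs quoted,
# i.e. roughly 300 cars at about 7,000 lbs of CO2 per car per year.
```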
The "data plant" concept is a long-term strategy, not a plan for any data centers under construction already according to the blog explaining it, written by Christian Belady, Microsoft’s General Manager of Data Center Services.
While the savings in CO2 emissions and electricity are significant, being able to remove all or most of a data center's power needs from the grid also reduces the number of UPSs, back-up generators, power-conditioning gateways, bypass circuits and other layers of emergency support that add complexity and additional points of failure to the design, Belady wrote.
Eco-efficient data centers can save money, be more reliable because the designs are simpler, and take advantage of essentially free resources like landfill methane supplies.
Ultimately they're an accommodation to allow resource-intensive data centers to operate within a national infrastructure often not robust enough to support them, however, he wrote.
Inevitably, the poop/trash/Microsoft connection will generate as many jokes as the prototype generates kilowatts.
If it works, however, the Data Plant design will make eco-friendly data center designs much more mainstream than they are now (which is not at all). So far I know of no other data centers built in barns or tents, but there is more science and a lot more renewable-fuel research and support behind the Data Plant than behind the open-air data center.
It will be interesting to see how well it works (from well upwind, of course).
From Belady:
"A constraint we all need to work with, is the fact that our electrical grid was never methodically planned or engineered for the significant growth we are experiencing today. And it certainly was not engineered to take on the proliferation of data center growth. Independence from the power grid will allow our industry to minimize its impact and ease some of the constriction already taking place. The Data Plant is one way of giving us an ability to manage the growth of our clouds in a thoughtful manner: building in sustainability from the ground-up, so we can run sustainably every day. Our goal is to reduce the impact of our operations and products, and to be a leader in environmental responsibility." –Christian Belady, Microsoft’s General Manager of Data Center Services, Microsoft Global Foundation Services Blog, April 18, 2012 | <urn:uuid:ae9c8c38-9fe1-4f83-bdb1-b8b460095691> | CC-MAIN-2017-04 | http://www.itworld.com/article/2729084/data-center/microsoft-to-power-data-center-from-sewage--landfills.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00553-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952462 | 981 | 2.5625 | 3 |
If you use a computer, read the newspaper, or watch the news, you will know about computer viruses or other malware. These are those malicious programs that once they infect your machine will start causing havoc on your computer. What many people do not know is that there are many different types of infections that are categorized in the general category of Malware.
Malware - Malware is programming or files that are developed for the purpose of doing harm. Thus, malware includes computer viruses, worms, Trojan horses, spyware, hijackers, and certain type of adware.
This article will focus on those malware that are considered trojans, worms, and viruses, though this information can be used to remove the other types of malware as well. We will not go into specific details about any one particular infection, but rather provide a broad overview of how these infections can be removed. For the most part these instructions should allow you to remove a good deal of infections, but there are some that need special steps to be removed, and these won't be covered under this tutorial.
Before we continue it is important to understand the generic malware terms that you will be reading about.
Adware - A program that generates pop-ups on your computer or displays advertisements. It is important to note that not all adware programs are necessarily considered malware. There are many legitimate programs that are given for free that display ads in their programs in order to generate revenue. As long as this information is provided up front then they are generally not considered malware.
Backdoor - A program that allows a remote user to execute commands and tasks on your computer without your permission. These types of programs are typically used to launch attacks on other computers, distribute copyrighted software or media, or hack other computers.
Dialler - A program that typically dials a premium rate number that has per-minute charges over and above the typical call charge. These calls are made with the intent of gaining access to pornographic material.
Hijackers - A program that attempts to hijack certain Internet functions like redirecting your start page to the hijacker's own start page, redirecting search queries to a undesired search engine, or replace search results from popular search engines with their own information.
Spyware - A program that monitors your activity or information on your computer and sends that information to a remote computer without your knowledge.
Trojan - A program that has been designed to appear innocent but has been intentionally designed to cause some malicious activity or to provide a backdoor to your system.
Virus - A program that when run, has the ability to self-replicate by infecting other programs and files on your computer. These programs can have many effects ranging from wiping your hard drive, displaying a joke in a small box, or doing nothing at all except to replicate itself. These types of infections tend to be localized to your computer and not have the ability to spread to another computer on their own. The word virus has incorrectly become a general term that encompasses trojans, worms, and viruses.
Worm - A program that when run, has the ability to spread to other computers on its own using either mass-mailing techniques to email addresses found on your computer or by using the Internet to infect a remote computer using known security holes.
Just like any program, in order for the program to work, it must be started. Malware programs are no different in this respect and must be started in some fashion in order to do what they were designed to do. For the most part these infections run by creating a configuration entry in the Windows Registry in order to make these programs start when your computer starts.
Unfortunately, though, in the Windows operating system there are many different ways to make a program start, which can make it difficult for the average computer user to find manually. Luckily for us, though, there are programs that allow us to cut through this confusion and see the various programs that automatically start when Windows boots. The program we recommend for this, because it's free and detailed, is Autoruns from Sysinternals.
When you run this program it will list all the various programs that start when your computer is booted into Windows. For the most part, the majority of these programs are safe and should be left alone unless you know what you are doing or know you do not need them to run at startup.
At this point, you should download Autoruns and try it out. Just run the Autoruns.exe and look at all the programs that start automatically. Don't uncheck or delete anything at this point. Just examine the information to see an overview of the amount of programs that are starting automatically. When you feel comfortable with what you are seeing, move on to the next section.
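If you are comfortable with a little scripting, the hedged sketch below uses Python's standard winreg module (Windows only) to list one common autostart location, the "Run" keys under HKCU and HKLM. It shows only a tiny slice of what Autoruns reports, and, as with Autoruns at this stage, you should only look; do not delete or change anything until you have researched it.

```python
# Minimal, Windows-only peek at the HKCU/HKLM "Run" registry keys -- one of
# many autostart locations that Autoruns covers far more completely.
import winreg

RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

for hive, path in RUN_KEYS:
    try:
        with winreg.OpenKey(hive, path) as key:
            index = 0
            while True:
                try:
                    name, command, _ = winreg.EnumValue(key, index)
                    print(f"{path}\\{name} -> {command}")
                    index += 1
                except OSError:      # no more values under this key
                    break
    except OSError:
        continue                     # key missing or not accessible
```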
Make sure you are using an anti-virus program and that the anti-virus program is updated to use the latest definitions. If you do not currently have an anti-virus installed, you can select one from the following list and use it to scan and clean your computer. The list below includes both free and commercial anti-virus programs, but even the commercial ones typically have a trial period in which you can scan and clean your computer before you have to pay for it.
It is also advised that you install and scan your computer with MalwareBytes' Anti-Malware and Emsisoft Anti-Malware. Both of these are excellent programs and have a good track record at finding newer infections that the more traditional anti-virus programs miss. Guides on how to install and use these programs can be found below.
After performing these instructions if you still are infected, you can use the instructions below to manually remove the infection.
We have finally arrived at the section you came here for. You are most likely reading this tutorial because you are infected with some sort of malware and want to remove it. With this knowledge that you are infected, it is also assumed that you examined the programs running on your computer and found one that does not look right. You did further research by checking that program against our Startup Database or by searching in Google and have learned that it is an infection and you now want to remove it.
If you have identified the particular program that is part of the malware, and you want to remove it, please follow these steps.
In order to protect yourself from this happening again it is important that you take proper care and precautions when using your computer. Make sure you have updated antivirus and spyware removal software running, install all the latest updates to your operating system, use a firewall, and only open attachments or click on pop-ups that you know are safe. These precautions are a tutorial unto themselves, and luckily, we have one created already:
Please read this tutorial and follow the steps listed in order to be safe on the Internet. Other tutorials that are important to read in order to protect your computer are listed below.
Now that you know how to remove a generic malware from your computer, it should help you stay relatively clean from infection. Unfortunately there are a lot of malware that makes it very difficult to remove and these steps will not help you with those particular infections. In situations like that where you need extra help, do not hesitate to ask for help in our computer help forums. We also have a self-help section that contains detailed fixes on some of the more common infections that may be able to help. This self-help section can be found here:
Windows Safe Mode is a way of booting up your Windows operating system in order to run administrative and diagnostic tasks on your installation. When you boot into Safe Mode the operating system only loads the bare minimum of software that is required for the operating system to work. This mode of operating is designed to let you troubleshoot and run diagnostics on your computer. Windows Safe Mode ...
By default Windows hides certain files from being seen with Windows Explorer or My Computer. This is done to protect these files, which are usually system files, from accidentally being modified or deleted by the user. Unfortunately viruses, spyware, and hijackers often hide there files in this way making it hard to find them and then delete them.
HijackThis is a utility that produces a listing of certain settings found in your computer. HijackThis will scan your registry and various other files for entries that are similar to what a Spyware or Hijacker program would leave behind. Interpreting these results can be tricky as there are many legitimate programs that are installed in your operating system in a similar manner that Hijackers get ...
If you are experiencing problems such as viruses that wont go away, your browser gets redirected to pages that you did not ask for, popups, slowness on your computer, or just a general sense that things may not be right, it is possible you are infected with some sort of malware. To remove this infection please follow these 4 simple steps outlined below. Not all of these steps may be necessary, but ...
Windows 7 hides certain files so that they are not able to be seen when you are exploring the files on your computer. The files it hides are typically Windows 7 system files that if tampered with could cause problems with the proper operation of the computer. It is possible, though, for a user or piece of software to make a file hidden by enabling the hidden attribute in a particular file or ...
When it comes to IT security, the biggest risk is more real—and closer to you—than you might think. Recent studies show that insider security breaches are not only one of the most common types of attack, they are also the most expensive.
An insider attack is one that is carried out by a person or persons who have authorized system access. It could be malicious, or it could be due to an innocent mistake of some sort. One classic example is that of a disgruntled employee who is planning on moving to a competitor company and decides to take confidential customer information along.
In a recent case, a software developer went to extraordinary lengths to fool his employer by outsourcing his own job to a counterpart based in China. Thanks to the coding skills of his Chinese proxy, the employee in question managed to free up his day, enabling him to surf the internet and watch online videos. As a result, he received amazing quarterly reviews and was regarded as the best developer the company had. Eventually he was caught, but what is surprising is that he got away with it for a considerable period of time.
Insider attacks are—and always have been— one of the toughest types of security risks to deal with. Having the right procedures and measures in place is extremely important.
Here are some of the more common methods employed for IT security:
Multifactor authentication (MFA) requires two or more steps in the authentication process, such as a username and password plus a code sent to a cell phone. MFA is not going to protect against an insider who knows what they are doing, but it can help guard against classic mistakes that employees make, like leaving a password lying around or using an access code that is easily deciphered.
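As an illustration of how the "code sent to a cell phone" style of second factor typically works, the sketch below derives a time-based one-time password (TOTP) from a shared secret; the secret shown is made up, and a real deployment provisions it out of band (for example, via a QR code) and verifies it server-side.

```python
# Hedged TOTP sketch: both the server and the user's device derive a short
# code from a shared secret and the current time, so a stolen password alone
# is not enough. Parameters (30-second step, 6 digits) follow common practice.
import base64, hashlib, hmac, struct, time

def totp(shared_secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example with a made-up base32 secret.
print(totp("JBSWY3DPEHPK3PXP"))
```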
Encrypting data can go a long way toward ensuring that information is of no value to third parties, should it fall into the wrong hands. Most IT providers encrypt data in transit, and some also offer the option to encrypt data at rest. The other side of data security is physical: the data center environment should be inconspicuous and secure, and all access should be audited.
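A minimal sketch of encrypting a record at rest, assuming the third-party Python cryptography package is available, might look like the following; key management (where the key itself lives and how it is rotated) is the hard part and is deliberately out of scope here.

```python
# Illustrative only: symmetric encryption of a sensitive record at rest using
# the "cryptography" package (pip install cryptography). The key must itself
# be stored and rotated securely, e.g. in an HSM or key-management service.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # keep this out of the database it protects
f = Fernet(key)

record = b"customer=Jane Doe; account=000-00-0000"
token = f.encrypt(record)            # safe to store; includes IV and integrity check
print(token)
print(f.decrypt(token))              # original bytes, only with the right key
```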
Another way to help thwart inside attacks is by searching for abnormal workflow patterns. This can be a challenge depending on the type of business; however, by examining and recording normal behavior, it becomes clear when workflows go beyond the boundaries of normal practice.
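The idea can be prototyped very simply: establish a per-employee baseline and flag large deviations. The sketch below uses an invented series of daily record-access counts and a three-standard-deviation threshold purely for illustration; production systems use much richer features and models.

```python
# Deliberately simple "abnormal workflow" sketch: baseline an employee's daily
# access counts, then flag a day that sits far outside that history.
from statistics import mean, stdev

history = [112, 98, 105, 120, 101, 95, 118, 108, 99, 110]   # past daily access counts
today = 640                                                  # today's count

baseline, spread = mean(history), stdev(history)
zscore = (today - baseline) / spread if spread else 0.0

if zscore > 3:
    print(f"ALERT: {today} accesses is {zscore:.1f} standard deviations above normal")
else:
    print("Within normal range")
```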
No matter how much effort you put into ensuring outsiders are kept away from your company's confidential information, there will always be a danger of inside attacks. However, the risk can be minimized by placing your IT infrastructure into the hands of service providers that are practiced cyber security professionals. By ensuring that policies and procedures are implemented and monitored, you can substantially reduce the likelihood of insider attack.
While poorly managed IT can contribute to a high risk of insider threat, a well-managed and secure environment offers the latest in defense mechanisms against an age old security risk.
Encryption in Business
Because of legislative requirements and the sensitivity of electronic business information, organizations are increasingly deploying a variety of encryption solutions. While the focus has been on the flow of information between the business perimeter and the outside Internet, businesses also are examining options to better protect data at rest at the core of the infrastructure. Security practitioners continue to integrate virtual private networks (VPNs), secure sockets layer (SSL) and Wi-Fi protected access (WPA) technologies into the infrastructure fabric. All of these use encryption to secure “data in motion.”
This article examines VPN, SSL and WPA technologies, as well as triple data encryption standard (3DES) and the advanced encryption standard (AES). By deploying these technologies, security practitioners can significantly address the challenge of “confidentiality” in the transmission of sensitive information.
California’s Assembly Bill 1950 is an example of legislation that is leading businesses to deploy encryption capabilities from the edge of the network to the inside core. This bill requires businesses to protect information about California residents from unauthorized access, destruction, use, modification or disclosure—encryption is a reasonable way to protect all such information. Any business that comes into sensitive information about California residents will feel the impact of this bill.
Further, wireless communication is transforming the computing infrastructure inside businesses. The number of laptops, PDAs and wireless access points (APs) continues to increase. These systems transmit sensitive information that must be protected.
Businesses also are looking to establish a “network of trust.” Organizations have to be assured that any information transmitted among customers, partners and business associates remains private and protected as it travels between services.
Virtual Private Networks and Secure Sockets Layer
Virtual private networks (VPNs) are a cost-effective way to connect remote sites or branch offices with the corporate infrastructure over the Internet. VPNs are an excellent example of an application of encryption technology. A VPN encrypts all traffic transmitted between its endpoints. Encryption protocols typically supported by VPNs include 3DES, IPSec and AES.
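The tunnel itself is more than one code snippet, but the symmetric cipher at its heart is easy to show. The sketch below uses AES (one of the algorithms named above) via Python's third-party cryptography package; the key handling and payload are illustrative assumptions, not a working VPN.

from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

key = AESGCM.generate_key(bit_length=256)   # shared secret, e.g. negotiated by the tunnel endpoints
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # must be unique per message
packet = b"GET /payroll/report HTTP/1.1"    # whatever the application sends
ciphertext = aesgcm.encrypt(nonce, packet, None)

# Anyone without the key sees only opaque bytes; the far endpoint reverses it.
assert aesgcm.decrypt(nonce, ciphertext, None) == packet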
Secure sockets layer (SSL) encrypts information such as credit card numbers or other data when it is transmitted over the Internet. SSL is commonly used to encrypt HTTP traffic. SSL uses a combination of public key cryptography and secret key encryption to provide confidentiality. When you enter “https://” as the URL in a Web browser, you are using SSL to communicate information securely. SSL supports server as well as client authentication, so both ends of the connection can authenticate their identity.
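For the client side of an "https://" connection, modern languages expose this directly. Here is a minimal Python sketch; the hostname is a placeholder, and current Python negotiates TLS, the successor to the SSL versions discussed in this article.

import socket, ssl

hostname = "example.com"                  # placeholder host
context = ssl.create_default_context()    # loads trusted CAs and enables certificate verification

with socket.create_connection((hostname, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())
        print("Server certificate subject:", tls_sock.getpeercert()["subject"])
        # Everything written from here on is encrypted on the wire.
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200))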
One common use of SSL is to enable business partner access to proprietary information over the Internet. An example is HCR Manor Care’s system to manage legal documents. In many instances, outside law firms are contracted to represent HCR Manor Care’s interests in various types of legal transactions. Rather than allowing these firms to connect directly to its network, HCR Manor Care obtained a certificate from a certificate authority and used it to grant controlled access to an application server in a screened subnet. This application server communicates encrypted case information to outside firms using SSL and with the database server on the internal network via IPSec. Access to sensitive information on the database is controlled, connection to the HCR Manor Care network provides flexibility at a low cost, and the information transmitted to the remote firms retains its confidentiality.
VPN solutions available from vendors today include Web VPN, also known as SSL VPN, which leverages SSL and a Web browser to eliminate the need for a client application. Using the Web browser, an end user at a remote location can connect to the enterprise infrastructure over a secure connection based on SSL. All information is encrypted in the SSL connection that is established between the end points. This provides two potential cost advantages. First, no client-side software, other than a browser that supports SSL, is typically required. This may reduce VPN management costs associated with help desk and client update activities. Second, remote users can connect to information in the corporate data center over any Internet connection. Organizations currently using VPN services provided and managed by a third party, such as one of the interexchange carriers (MCI, AT&T, etc.), may experience a significant return on investment by moving to Web VPN.
As with all emerging technologies, Web VPN has some limitations. For example, there may be a requirement for additional software on the client to support non-Web-enabled applications, thereby increasing total cost of ownership. Organizations looking at Web VPN technology should ensure its compatibility with business requirements and objectives.
From a security perspective, Web VPN provides significant flexibility in controlling remote access to specific data-center services. For example, SSL VPN can restrict a specific user or group of users to the use of a corporate intranet application, while denying access to all other applications in the data center.
The encryption protocols supported by SSL include RC4, the RSA algorithm and the data encryption standard (DES). SSLv3 is the latest version of this protocol.
Wi-Fi Protected Access
The Wi-Fi protected access (WPA) standard was developed by the Wi-Fi Alliance to address security challenges associated with the wired equivalent privacy (WEP) protocol. WPA uses the temporal key integrity protocol (TKIP) to encrypt information and the IEEE 802.1x/extensible authentication protocol (EAP) for authentication.
In June 2004, the IEEE 802.11i standard was ratified. IEEE 802.11i, also being marketed as WPAv2, supports AES for secure transmission over a wireless infrastructure. Security practitioners need to make sure that all wireless communication is encrypted and that the encryption protocol deployed is based on the organization’s security policy.
Although the new wireless encryption standards are improvements over WEP, some organizations may not be ready to deal with the performance issues that often accompany the deployment of AES, for example. In such cases, an interim move to dynamic WEP may be the answer. In a dynamic WEP solution, the WEP key changes at a frequency designated by the security administrator. The re-keying frequency selected should depend on the amount of information moving through the access points and the capabilities of current wireless hacking tools. Although not a perfect defense, this helps to safeguard against intruders by regularly changing the encryption key. It also allows employees to use existing wireless clients without experiencing unacceptable performance degradation due to the implementation of more processor-intensive encryption algorithms.
One example of how 802.1x and dynamic WEP work together to protect information is the introduction of wireless access into corporate conference rooms. Conference rooms are typically unguarded and unlocked. Data jacks in these areas can be gaping holes through an organization’s security perimeter. In addition, most meeting areas do not have sufficient jacks for everyone who needs network access.
HCR Manor Care designed a solution for this problem by installing wireless access points that support 802.1x and dynamic WEP into the conference rooms. A RADIUS server was implemented to authenticate users to the wireless access points before allowing a wireless client onto the network. This is a function of the 802.1x standard. Finally, the network jacks were disabled. The result was a fa | <urn:uuid:674e899a-dc00-409b-9097-84084fd121b8> | CC-MAIN-2017-04 | http://certmag.com/encryption-in-business/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00121-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923592 | 1,467 | 3 | 3 |
Mars in High-Resolution
/ August 14, 2012
The color image above is from NASA's Curiosity rover, and it shows part of the wall of Gale Crater -- it is part of a larger, high-resolution color mosaic made from images obtained by Curiosity's Mast Camera.
This image of the crater wall (higher resolution version here) is north of the location on Mars where the rover landed on Aug. 5, 2012 PDT/Aug. 6, 2012 EDT. Here, a network of valleys believed to have formed by water erosion enters Gale Crater from the outside. This is the first view scientists have had of a fluvial system -- one relating to a river or stream -- from the surface of Mars.
Curiosity is about 11 miles away from this area and the view is obscured somewhat by dust and haze, but the image provides new insights into the style of sediment transport within this system.
Photo courtesy of NASA/JPL-Caltech/MSSS | <urn:uuid:8b3dde2b-5ade-406b-ad36-e6e84f58f0f4> | CC-MAIN-2017-04 | http://www.govtech.com/photos/Photo-of-the-Week-Mars-in-High-Resolution-08142012.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00121-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935257 | 200 | 3.859375 | 4 |
To continue on the topic of passwords: not only should you use a proper iteration count when implementing password hashing in code — the same thing also applies to password safe software such as KeePass.
As strong passwords are a pain to remember, many people opt to use KeePass or other password managers, and then copy the password database to one sync service or another. Passwords can then be available on all devices, whether a desktop, laptop, phone or tablet. However, this brings a potential problem. The password file is more likely to end up in the wrong hands if one of the devices is compromised, stolen or the sync service is hacked.
An obvious defense for this is to use a strong password on the password database file. But strong passwords are a pain to enter on a mobile phone, and that causes many people to use shorter passwords than is wise. A password or passphrase of more than 14 characters is the proper way of doing things, but we all know that most people just won't do it.
One can mitigate the problem of a short password in mobile use by adjusting the key iteration count in the password manager configuration. Common wisdom is to set the iteration count so that it takes about 1 second to verify the password on the slowest device you are using.
For example, if you use KeePass, the default key derivation iteration count is 6,000. On a typical mobile phone you can get about 200,000 iterations per second. So by setting a proper key iteration count you make password cracking ~33 times more expensive for the attacker. Of course, adding one character to your password gives about the same protection, and adding two characters gives about 1,024 times better protection. But that is no reason to leave the key iteration count at a ridiculously low default value.
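The same calibration can be done in a few lines of code. KeePass uses its own AES-based key transformation, but the principle is identical; the Python sketch below uses PBKDF2 from the standard library as a stand-in and estimates how many iterations the current device can afford per second of unlock time (results will vary by hardware).

import hashlib, os, time

def derive_key(password, salt, iterations):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

def calibrate(target_seconds=1.0, probe_iterations=100_000):
    """Estimate an iteration count that takes ~target_seconds on this device."""
    salt = os.urandom(16)
    start = time.perf_counter()
    derive_key("timing probe", salt, probe_iterations)
    per_second = probe_iterations / (time.perf_counter() - start)
    return int(per_second * target_seconds)

print("Suggested iteration count for this device:", calibrate())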
Here's KeePass on a Windows laptop, set to a value of 4,279,296 (screenshot omitted).
And a free tip to anyone who is developing a mobile password manager: the low CPU power of mobile devices seriously limits the key iteration count relative to proper figures, which should be around 4-6 million instead of hundreds of thousands. So how about using the phone's GPU for password derivation? Using that, you could have a proper iteration count for key derivation, and you would have a more level playing field against password crackers which use GPU acceleration. | <urn:uuid:e2d80365-100c-4752-bfdb-d0140e800869> | CC-MAIN-2017-04 | https://www.f-secure.com/weblog/archives/00002382.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00242-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91103 | 462 | 2.5625 | 3 |
Attackers rely on email attacks to gain access to your network and data
What Is a Data Breach?
A data breach is an incident where sensitive, protected, or confidential data is stolen, viewed, or used by an unauthorized party. Data breaches have been in the news frequently in the last year, and many wonder what the reason is for their dramatic rise. This is probably best explained by the value the stolen data has to attackers wishing to mount targeted attacks on large numbers of people.
It is believed that much of the data stolen in breaches is sold and resold, thereby broadly enabling more sophisticated attacks — some of which may be years in the making. Many breaches are the result of intrusions caused by credential theft or malware installation, which in turn is fueled by social engineering and identity deception — and the value of being able to mount targeted attacks.
An example of a data breach is that of Yahoo! where 1 billion Yahoo! credentials were stolen.
Challenges With Preventing Data Breaches
Organizations looking to reduce the risk of data breaches face several challenges.
- Expanding attack surface–With the adoption of new software and cloud computing infrastructures, the traditional network perimeter is no longer the central focus for attackers. There are simply more areas to protect, and more places for an attacker to establish a beachhead for a breach, giving the attacker an unfair advantage.
- Increasingly sophisticated threats–Today’s email-based threats are targeted, sophisticated, and evasive. Traditional detection based approaches that look for malicious content can be bypassed by motivated attackers.
- The effectiveness of social engineering and identity deception–Social engineering campaigns, predominantly using email, are amazingly potent from an attacker’s perspective when it comes to tricking victims into revealing useful information or clicking on malicious URLs. According to the most recent Verizon Data Breach Investigations Report, 30% of phishing messages were opened, and 12% of targets went on to open the malicious attachment or click the link.
Most data breaches are followed up by targeted phishing, when customers are most vulnerable and communication is key.
The Solution: Agari Email Trust Platform
Email is the most popular communication tool and the entry point for up to 95% of security breaches. The Agari Email Trust Platform is the only solution that verifies trusted email identities based on insight into 10 billion emails per day to stop advanced email threats that use identity deception. Agari protects the inboxes of the world’s largest organizations from the number one cyber security threat of advanced email attacks including phishing and business email compromise. | <urn:uuid:5945e182-833c-45d4-bfae-cec0da2eba52> | CC-MAIN-2017-04 | https://www.agari.com/data-breach/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00544-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940831 | 518 | 2.71875 | 3 |
Once you've understood the basics of Semantic Web technologies and how they can be combined with other semantic technologies you can start to really appreciate real-world semantic applications.
This series of lessons helps define what types of use cases benefit from the use of semantics and includes many examples of real-world use cases to underscore the key points.
What Makes a Good Semantic Web Application?—people have used semantic technologies for lots of things, but not all of them are appropriate. What are the key characteristics of a use case that can benefit from semantics?
Applying the Semantic Web - Two Camps—there are two primary kinds of Semantic Web practitioners in the world. Understanding this can help you understand blogs and literature outside of Semantic University.
Example Semantic Web Applications—real-world use cases.
Semantic Web on the Web—applying Semantic Web technologies towards their original goal: the World Wide Web.
Semantic Web in the Enterprise—applying Semantic Web technologies to enterprise use cases behind the firewall. | <urn:uuid:02ba979f-9daf-4887-b9ba-66e594603dbd> | CC-MAIN-2017-04 | http://www.cambridgesemantics.com/semantic-university/semantic-technologies-applied | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00178-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.883427 | 211 | 2.640625 | 3 |
RIP (Routing Information Protocol) is a Distance-Vector routing protocol which uses hop count as its metric. There are three versions of RIP: RIPv1, RIPv2 and RIPng.
RIPv1 was defined in RFC 1058. It is a classful protocol. It uses broadcast messages for hellos/updates, and the default update timer is 30 seconds. RIPv1 does not support authentication. Due to these limitations, RIPv1 is not an ideal choice for use in current networks.
RIPv2, defined in RFC 2453, was modified to support many of the features which were deemed lacking in RIPv1. Some of these added features are:
- CIDR (classless) support
- Multicast updates (224.0.0.9)
- Support for triggered updates
RIPng, defined in RFC 2080, is a version of RIP which was modified to support IPv6.
RIP uses hop count as its metric. It has a limitation of 15 hops for reachability. By default, a router will not use a path if it is over 15 hops away.
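The hop-count logic behind a distance-vector protocol like RIP fits in a few lines. The Python sketch below is a toy illustration with invented routes, not router code; it just shows how a neighbor's advertisement is merged and how 16 hops acts as "unreachable".

INFINITY = 16   # in RIP, 16 hops means "unreachable"

def update_routes(own_table, neighbor, neighbor_table):
    """One distance-vector update: adopt a neighbor's route if going through it is shorter."""
    for destination, neighbor_hops in neighbor_table.items():
        candidate = min(neighbor_hops + 1, INFINITY)            # one extra hop to reach the neighbor
        current = own_table.get(destination, (INFINITY, None))[0]
        if candidate < current and candidate < INFINITY:
            own_table[destination] = (candidate, neighbor)      # (hop count, next hop)
    return own_table

# Hypothetical tables: router A merges an advertisement from neighbor B.
router_a = {"10.0.0.0/8": (1, "connected")}
advert_from_b = {"172.30.0.0/16": 1, "192.168.50.0/24": 15}

print(update_routes(router_a, "B", advert_from_b))
# 172.30.0.0/16 is installed at 2 hops via B; 192.168.50.0/24 would be 16 hops,
# so it is treated as unreachable and never installed, just like real RIP.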
router rip
 version 2
 no auto-summary
 network 10.0.0.0
 network 172.30.0.0
!
interface fa0/0
 ip address 10.1.1.1 255.255.255.0
!
interface fa0/1
 ip address 172.30.0.1 255.255.255.0
| <urn:uuid:ea0c6677-9daf-4887-b9ba-66e594603dbd> | CC-MAIN-2017-04 | http://www.networking-forum.com/wiki/RIP | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00178-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.888354 | 301 | 2.671875 | 3 |
Configuring and Planning a Windows Server Cluster
In the first three installments of this series ( Is a Server Cluster Right for Your Organization? , Choosing the Cluster Type that's Right For You , and Network Load Balancing Clusters ), I discuss the concepts involved in setting up a server cluster. As I do, I discuss some of the differences between the Network Load Balancing (NLB) model and the server cluster model. In this final article in the series, I'll discuss the server cluster model in greater detail.
A Quick Cluster Refresher
Just as a reminder, I'll take just a moment to describe what constitutes a server cluster. On a Windows network, a server cluster is a cluster of two or more machines running Windows 2000 Advanced Server that function as a single machine. Although the machines have separate CPUs and network cards, they are linked to a common storage unit--usually through a fiber channel or SCSI bus. If either unit were to fail, the other unit would keep running, thus providing continuous availability of the application the cluster is hosting.
Keep in mind that not all configurations keep the servers mirrored. Instead, the server cluster model relies on something called a fail-over policy. The fail-over policy dictates the behavior of the cluster during a failure situation. For example, suppose that the first CPU in a cluster were to fail. The fail-over policy on the second CPU would dictate which applications from the failed first CPU would temporarily run on the second CPU. The fail-over policy can also shut down non-critical services and applications on the functional CPU to make way for the extra load it must endure during a failure situation.
Configuring a Server Cluster
There are several different ways to configure a server cluster. Which method is right for you depends largely on what you're trying to accomplish. For example, are you more worried about high availability, load balancing, or both?
If you're the type who wants it all, you'll be happy to know that you can have high availability with load balancing. To do this, you'll have to set the cluster's policies to run some applications or services on one CPU and the remaining applications and services on the other CPU. You must then set the cluster's fail-over policy in such a way that if any of the applications or services fail, they will be run on the other CPU. Obviously, during a failure situation, the functional CPU may become bogged down, because it's performing twice the usual workload. Therefore, you might set the fail-over policy so that if either machine has to take over for a failed CPU, the unnecessary services or applications will be temporarily suspended until the failed unit comes back online. Although this method is tedious to configure, it provides a great mix of performance and availability.
If the idea of having a server bog down during a failure or the thought of shutting down unnecessary services bothers you, there are alternatives. One such alternative is to implement high availability without load balancing. In this implementation, one server basically runs everything. The other server in the cluster is on constant standby as a hot spare. If the first CPU fails, the fail-over policy shifts control of all applications and services to the second CPU. By using this method, your end users will probably never even notice when a problem occurs. When the failed CPU is brought back online, it takes over control of all of the services and applications, and the second CPU goes back into standby mode.
In the past, I've worked for several organizations in which management deemed one or two applications to be mission critical. In these environments, management never wanted to see a network failure of any kind; but if the network did fail, they really didn't care what failed, as long as those essential applications were still running.
In such environments, load shedding is a great configuration. This configuration is especially effective because it not only guarantees that the application will be available under any circumstances, it also ensures that the application's performance won't suffer because of a bogged-down server.
In the load-shedding model, the clustered servers each run their own set of applications, just as you normally would on two separate servers (remember that the cluster is still seen as a single server by the rest of the network). The only difference is that the fail-over policy defines the critical applications. Now, suppose that one of the CPUs fails. During this failure, the second CPU would detect the failure and look at the fail-over policy. The fail-over policy would then tell the CPU to shut down all non-essential applications and to begin servicing any essential applications that were previously running on the failed CPU.
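To make the policy idea concrete, here is a small Python sketch of a load-shedding fail-over decision. The application names and policy format are invented for illustration; a real cluster service evaluates its policy internally rather than in a script like this.

# Hypothetical fail-over policy for a two-node cluster using the load-shedding model.
POLICY = {
    "critical":     ["customer_mgmt", "sales_db"],      # must survive any failure
    "non_critical": ["reporting", "intranet_search"],   # shed to free capacity
}

def on_node_failure(surviving_node_apps, failed_node_apps):
    """Return the application set the surviving node should run after a failure."""
    keep = [app for app in surviving_node_apps if app in POLICY["critical"]]
    take_over = [app for app in failed_node_apps if app in POLICY["critical"]]
    return keep + take_over   # everything non-critical is temporarily shut down

node_a = ["customer_mgmt", "reporting"]
node_b = ["sales_db", "intranet_search"]
print(on_node_failure(surviving_node_apps=node_b, failed_node_apps=node_a))
# ['sales_db', 'customer_mgmt'] - critical apps only, until the failed node returns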
Once you have an idea of which cluster model is right for your environment, you have a lot more planning to do. The first part of this process is to create an exhaustive list of your applications. This list should include things like the current location of each application, any dependencies related to the application, and just how critical the application is. For example, if you have a critical customer management program, you might list the place that the program currently resides and indicate that the program is dependant on the sales database running in the background. Therefore, you'd also want to document the location of the sales database and flag both the program and the underlying database as critical applications. If you're questioning the critical status of the database, consider that the customer management program is critical and can't run without the database; therefore, the database is also critical.
While determining dependencies, you must also look for applications that have common dependencies. For example, suppose that you have two applications that both depend on the same underlying database. Because of the dependency structure, these applications and their dependencies must always be grouped together.
Finally, when designing your fail-over policy, you must consider the impact of that policy. For starters, if you make the second server take over running a critical application, will all the dependencies be in place for the application to run? You must also consider hardware-related issues, such as whether the CPUs have a fast enough processor and enough memory to handle the fail-over policy that you've designed without crashing or bogging down. As you can see, setting up a cluster can be a great way to protect your data or to increase the speed of a Web site. In this article, I've explained the type of clustering environment that's suitable for both situations. //
Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the director of information systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it's impossible for him to respond to every message, although he does read them all. | <urn:uuid:fb129249-c1d2-4114-8686-c18ca0df02f8> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsysm/article.php/624431/Configuring-and-Planning-a-Windows-Server-Cluster.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00444-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949499 | 1,415 | 2.5625 | 3 |
For those of you following the D-Wave story, the designers of “the world’s first commercial quantum computer” have published a revealing blog entry detailing the company’s latest achievements.
When Canadian startup D-Wave announced back in 2007 that it had developed a prototype quantum computing machine suitable for running commercial applications, the technical community paid attention, yet many were skeptical of the claim. Much of the debate has centered on the semantics of the term “quantum computing” and exactly what that means, but the folks at D-Wave were not easily discouraged. Within a few years, they had sold systems to Lockheed Martin and a NASA-Google collaboration.
In May 2013, the Quantum Artificial Intelligence Laboratory, shared by NASA, Google, and Universities Space Research Association (USRA), took delivery of the D-Wave Two Computer backed by the 509-qubit Vesuvius 6 (V6) processor. Since the system went online, it has operated around-the-clock at nearly 100 percent usage – with the majority of time being spent on benchmarking.
According to a recent blog post from D-Wave founder and chief technology officer Geordie Rose, six “interesting findings” have arisen as a result of this extensive benchmarking period.
Rose notes that while some of these results have been published, he wants to provide his own take on what it all means.
The six findings are as follows:
- Interesting finding #1: V6 is the first superconducting processor competitive with state of the art semiconducting processors.
- Interesting finding #2: V6 is the first computing system using ideas from quantum information science competitive with the best classical computing systems.
- Interesting finding #3: The problem type chosen for the benchmarking was wrong.
- Interesting finding #4: Google seems to love their machine.
- Interesting finding #5: The system has been running 24/7 with not even a second of downtime for about six months.
- Interesting finding #6: The technology has come a long way in a short period of time.
Rose provides further thoughts on each of these, but #1 and #4 are especially telling.
With regard to the first point, Rose states that a recently published paper “shows that V6 is competitive with what’s arguably the most highly optimized semiconductor based solution possible today, even on a problem type that in hindsight was a bad choice. A fact that has not gotten as much coverage as it probably should is that V6 beats this competitor both in wallclock time and scaling for certain problem types.”
Finding four is backed by a blog post that the Google team published last week.
“In an early test we dialed up random instances and pitted the machine against popular off-the-shelf solvers — Tabu Search, Akmaxsat and CPLEX. At 509 qubits, the machine is about 35,500 times (!) faster than the best of these solvers,” writes the Google team.
There was earlier discussion of a 3,600-fold speedup, but the Google rep explains that was on an older chip with only 439 qubits.
“This is an important result,” Rose adds. “Beating a trillion dollars worth of investment with only the second generation of an entirely new computing paradigm by 35,500 times is a pretty damn awesome achievement.”
As for the final point – the fast pace of the D-Wave technology – the CTO notes that all of these advances have been completed in the last year. In closing, he says “the discussion is now about whether we can beat any possible computer – even though it’s really only the second generation of an entirely new computing paradigm, built on a shoestring budget.” Rose expects that within the next few generations, the D-Wave processor will do just that. | <urn:uuid:d2dd0fe6-282e-48ef-a3ff-3a188138c96e> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/01/23/d-wave-aims-beat-classical-computer/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00076-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958705 | 810 | 2.515625 | 3 |
As the price of genome sequencing plummets, the output of genomics data is skyrocketing. According to the U.S. National Human Genome Research Institute, over the next few years, the number of human genomes sequenced is expected to explode — from around 30,000 in 2011 to more than a million before 2014. Forget, for a moment, the processing horsepower necessary to transform everyone’s genome into useful medical knowledge; just the storage capacity required to hold all this data is staggering.
As pointed out in a recent post at Technology Review (TR), if every person on the planet had their genome sequenced, it would take up as much digital storage as was available world-wide in 2010, estimated to be just over 721 PB. That’s assuming 100 GB per human genome.
But that 100 GB represents a pretty brute-force storage model. Theoretically, a person’s 3.2 billion base pairs should only take 800 MB (each of the four bases can be packed into 2 bits). The problem, according the TR post, is that a lot of other data is collected about the bases, and the genes are sequenced multiple times for the sake of accuracy.
One solution, at least according to Harvard geneticist George Church, is to only store the differences between a particular genome and some reference genome. According to Church that would reduce the data capacity needed to a mere 4 MB per person. Using this approach, it would take just 28 PB of storage to hold all human genomes.
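The arithmetic is easy to check. A few lines of Python (population figure rounded, storage in decimal units) reproduce the numbers quoted above:

BASES_PER_GENOME = 3.2e9           # base pairs in a human genome
PEOPLE = 7.0e9                     # rough world population at the time of writing

packed_bytes = BASES_PER_GENOME * 2 / 8      # 2 bits per base, packed
diff_bytes = 4e6                             # ~4 MB per person if only differences are stored

print(f"Packed genome: {packed_bytes / 1e6:.0f} MB per person")                 # 800 MB
print(f"Diff-encoded, everyone on Earth: {PEOPLE * diff_bytes / 1e15:.0f} PB")  # 28 PB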
And if that seems like a lot, keep in mind that the Blue Waters supercomputer will have a storage capacity of over 25 PB when it comes online later this year. By the middle of this decade, when petascale supercomputers are apt to be much more commonplace, that 28 PB is probably going to reflect an average-sized storage capacity for hundreds of systems around the world. | <urn:uuid:c4ba01f2-a909-47a3-8f3e-c1c1431d31b0> | CC-MAIN-2017-04 | https://www.hpcwire.com/2012/06/04/it_s_time_to_put_the_squeeze_on_genomic_data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00526-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945115 | 392 | 3.375 | 3 |
One of the key decisions during a XenDesktop design is whether to use the 32-bit (x86) or 64-bit (x64) version of the Windows desktop operating system. I’ve seen a few projects falter because they’ve opted for the 64-bit version without really thinking this decision through and I want to make sure that this doesn’t happen to you.
Question: What’s the benefit from implementing a 64-bit desktop operating system?
The primary benefit of a 64-bit operating system is that it allows you to assign significantly more physical memory to the desktop. With the 32-bit version of Windows XP or Windows 7 you’re restricted to 4GB of physical memory (there are a few techniques available to extend this limit but they do not come without disadvantages – check out Daniel Feller’s blog post for more information). However, with the 64-bit versions this limit is increased to 128GB for Windows XP and 192GB for Windows 7 Professional.
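The 4GB ceiling comes straight from 32-bit addressing arithmetic. A quick Python sketch (illustrative only, run on whatever machine you are sizing) shows the math and a simple way to check what you are actually running:

import platform, struct, sys

print("32-bit address space:", 2**32, "bytes =", round(2**32 / 1024**3), "GB")   # the 4 GB ceiling
print("Pointer size on this machine:", struct.calcsize("P") * 8, "bits")
print("64-bit process:", sys.maxsize > 2**32)
print("Machine type reported by the OS:", platform.machine())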
This sounds great, but how many people really need more than 4GB of RAM? In my experience, this requirement is limited to a very small number of heavy users within the company, for example developers or designers. Most people can get by very well with 2-4GB of RAM. Another benefit of 64-bit operating systems is that they let you run 64-bit applications (applications specifically written for a 64-bit operating system will not run on 32-bit operating systems). However, most 64-bit applications are also available as 32-bit applications.
Question: Even if most people don’t need 4GB of RAM, why not use the 64-bit version so that you have the flexibility to support more than 4GB of RAM in the future? After all, application memory requirements are continuously increasing.
It's a valid point; however, the main issue with 64-bit operating systems is that they can't support 16-bit applications, and most companies still have some 16-bit applications hanging around somewhere. Even 32-bit applications can have elements of 16-bit code in them (and many do!). So what are the options?
1: Deploy a 32-bit operating system limited to 4GB of RAM for the majority of users. Provide power users with a 64-bit operating system so that they can be assigned more than 4GB of RAM.
Note: For future reference, Windows 8 is available in both 32-bit and 64-bit versions.
2: Deploy a 64-bit operating system for everyone and use Microsoft Windows 2008 x86 with Citrix XenApp 5.0 to deliver 16-bit applications.
Note: 5.0 is the last version of XenApp released that supports a 32-bit version of Microsoft Server (Microsoft Server 2008). Windows 2008 R2 is required for XenApp 6.x and is 64-bit only. Mainstream support for Microsoft Server 2008 ends on January 13th, 2015 (extended support ends on January 14th, 2020). End of Life (EoL) for XenApp 5 is also January 13th, 2015 (extended support ends on January 14th, 2020).
3: Deploy a 64-bit operating system for everyone and use VM Hosted Apps to deliver 16-bit applications from 32-bit desktop operating systems.
4: Deploy a 64-bit operating system for everyone and replace / re-engineer all applications to be 32-bit or 64-bit.
Question: Any other disadvantages?
Another disadvantage of selecting a 64-bit desktop operating system is that you will need to find 64-bit drivers. Unfortunately, this is easier said than done, and you may well struggle to find some of the older drivers out there.
Question: So should I just go with 32-bit to be safe?
Unless you’re designing virtual desktops for people that need more than 4GB of memory, the simplest approach is going with a 32-bit desktop operating system (Hosted VDI or Hosted Shared). I also recommend checking out Citrix AppDNA when selecting your desktop and application delivery methods. It will tell you if an application is 16-bit or contains elements of 16-bit code, in addition to a wealth of useful compatibility information. | <urn:uuid:534d02ca-af78-42d9-8eab-8621b409a05e> | CC-MAIN-2017-04 | https://www.citrix.com/blogs/2013/03/29/virtual-desktops-32-bit-or-64-bit-desktop-operating-systems/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00158-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941381 | 879 | 2.734375 | 3 |
Most people just want their smartphones to take great pictures of their dogs or their kids' soccer games.
NASA is a bit more ambitious.
NASA combined multiple photos from the orbiting smartphones, called PhoneSats, to create images of Earth as seen from space.
"During their time in orbit, the three miniature satellites used their smartphone cameras to take pictures of Earth and transmitted these image-data packets to multiple ground stations. Every packet held a small piece of the big picture. As the data became available, the PhoneSat Team and multiple amateur radio operators around the world collaborated to piece together photographs from the tiny data packets."
The three PhoneSats were launched into orbit on April 21 aboard an Antares rocket from NASA's Wallops Flight Facility in Virginia. The smartphones completed what NASA called a successful mission on April 27.
The goal of NASA's mission was to see how capable these tiny nanosatellites are and whether they could one day serve as the brains of inexpensive, but powerful, satellites.
The phones were encased in 4-inch metal cubes and hooked up to external lithium-ion battery packs and more powerful radios for sending messages from space.
The devices went into an orbit about 150 miles above Earth and, after six days, fell back to Earth, burning up in the atmosphere.
In addition to the photos, the three PhoneSats transmitted messages about their functions and condition.
The transmissions were received at multiple ground stations, indicating that they were operating normally.
The three smartphones that NASA launched into orbit to see if they could work as the brains for future inexpensive satellites sent back these photos of Earth. (Image: NASA)
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is firstname.lastname@example.org.
Read more about smartphones in Computerworld's Smartphones Topic Center.
This story, "Space shots: Android phones beam back Earth pix" was originally published by Computerworld. | <urn:uuid:12fff2aa-6cc3-47e6-9816-b80851caed2e> | CC-MAIN-2017-04 | http://www.itworld.com/article/2710301/it-management/space-shots--android-phones-beam-back-earth-pix.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00398-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945443 | 433 | 3.703125 | 4 |
- Remote Repositories
- Managing a Website Using git
- Essential Commands
git is a wicked-powerful distributed revision control system. It is confusing to many, so there are myriad tutorials and explanations online to help people understand it. This one will focus on the fundamental concepts and tasks rather than trying to compete with the documentation.
“I’m an egotistical bastard, and I name all my projects after myself. First Linux, now git.” ~ Linus Torvalds
- Working Directory – the working directory is the directory where you have content that you want to manage with git.
- Commit – a commit is a full snapshot of the contents of your working directory (everything being tracked by git, anyway), and it’s kept track of using a unique 40 character SHA1 hash. This way, the exact state of your project can be referred to, copied, or restored at any time.
- Index – the index can be considered a staging area. It’s where snapshots of changes are placed (using git add) before they get committed. The index is crucial to git because it sits between the working directory and your various commits.
- Branch – a branch is similar in concept to other versioning systems, but in git it’s simply a pointer to a particular commit. As you make additional commits within a branch, the branch pointer moves to point to your latest one. To revert to a previous version of your code, the branch pointer is simply moved to point to another commit within that branch.
Understanding how these components work together is the key to understanding git.
First and foremost, it’s important to understand that git has something called an index that sits between your working directory and your commits. It’s basically a staging area, so when you git add you copy a snapshot of your working directory to the index, and when you git commit you copy that same thing from the index to create a new commit.
It is crucial to understand the intermediate (staging area) nature of the git index in order to grasp the relationship and differences between adding and committing content.
git status is very helpful in understanding git because it shows you the differences between the working directory and the index and previous commits. The "status" refers to the status of the working directory, so if you make a change in your content — say to index.php — and you run git status, it’ll show you what’s changed that isn’t yet staged for a commit (in your index):
$ git status
On branch master
Changed but not updated:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

        modified:   index.php

no changes added to commit (use "git add" and/or "git commit -a")
git diff is similar to git status, but it shows the differences between various commits, and between the working directory and the index.
git diff --cached, on the other hand, shows you the difference between what’s in the index vs. in your last commit.
Branches are just pointers to various commits. Every project has a default pointer called Master. As you continue to create commits within the default branch, the Master pointer follows you along so that it always points to the latest commit in the branch.
Branches are created using the git branch command, which creates a new branch label which points to the current commit. So, until a new commit is made, both the previous and new branch labels will point to the current commit. One way to think of them is "File->Save As Copy" for your codebase.
Using this model, someone can then checkout the Master branch, make changes, and commit them. The result would look like this.
Keep in mind that the "Next Commit" commit would only happen for someone who had the Master branch checked out. If the same person who performed the git branch command immediately made changes, added them, and then committed them, they’d have made another commit along the Test branch instead, like so:
If both branches had been checked out, worked on, and had a commit made, the diagram would look like so:
To change to and start working on a given branch, you have to check it out. This is done with the git checkout command:
$ git checkout <branchname>
Keep in mind that when you do this you will pull the copy of your project that exists at that branch’s latest commit point, and it will overwrite your current working directory with that version of your project.
Creating code branches on a single system is enjoyable enough, but the real purpose of git is allowing people in disparate locations to contribute to a project. You add remote repositories in git by using the git remote command, like so:
$ git remote add remotesite ssh://yourdomain.tld/path/to/remote/gitdirectory.git
[ This is using SSH as the protocol, but git supports many others as well. ]
Then you push your content from your local repository to the remote one, using the name you’ve set up:
$ git push remotesite +master:refs/heads/master
Then, to perform updates, you simply run git push with your target defined:
$ git push remotesite
If you have more than one person working on the project — each doing his own thing — eventually you’re going to have to synchronize.
git handles this quite elegantly by simply moving the necessary branch pointers upon completion.
In the diagram below we merge Test into Master, and we simply end up with a later version of Master, as you might expect — only this version includes the changes that existed in Test as well.
NOTE: You want to make sure you’re fully committed before you attempt a merge. Not doing so invites wrath.
So now that we’ve seen the various components of git in play, let’s see how it all works together by creating a setup that allows us to manage a website remotely.
On Your Local System
Either have, or create, a web directory on your local system. This is the main place you’ll make changes from. Start by changing directory into it.
Create some content.
Add the content to git:
git add index.html
Commit the changes.
On Your Server
Initiate a new git repository on the server.
mkdir /path/website.git && cd /path/website.git && git init --bare
Create your hook that will checkout the code to your actual web directory.
GIT_WORK_TREE=/path/htdocs git checkout -f
chmod +x /path/website.git/hooks/post-update
Back on the Local System
Add the remote directory to the local config.
git remote add website ssh://path/server/website.git
Push the contents of the local repository to the remote one.
git push website +master:refs/heads/master
Then, change things locally, and to upload the changes, simply do a:
git push website
The changes will upload to the remote git repository and trigger the post-update hook, which copies the contents of the repository to the working directory — your live website directory (htdocs).
[ To update from the server side you can execute git commands as usual, but you must provide environment context with each command, like so: ]
GIT_DIR=/path/website.git GIT_WORK_TREE=/path/htdocs git $some_git_command
[ On the server you won’t have to push after you add and commit, as using the environment variables above will mean the committed changes will already be present in the repository. ]
git has a powerful feature called tags, which allow you to define an intuitive name to a given commit — like “Gold Version”, or “Version 3”, or “Before Production”.
These are done using git tag (easy enough). You can then check out that version just like a branch, only this will point to that specific point in time.
git tag Gold
…and later on…
git checkout Gold
There are a good number of git commands, and the documentation is excellent, so I won’t cover many here. Here are some I think are worth mentioning.
- git bisect – allows you to isolate the exact code push that caused a problem by executing git bisect bad to mark the current (broken) location, then restoring to a known-good configuration, and then git will step you through each commit you’ve made in between the two. For each one you then run either git bisect good or git bisect bad until you’ve found your error-causing commit.
- git stash – sets changes you’ve made off to the side in a way that lets you bring them back later. Usage: you are about to check out another branch which would crush your current changes, so you git stash them before doing your checkout.
- git rm / mv – deletes / moves items in the working directory with visibility to git so that you can commit afterwards.
- git reset – resets to a specified state (commit).
- git status – show the status of the working tree — including differences between the index and the current commit, differences between the working tree and the index, and items within the working tree that are not tracked by git.
- git log – show a log of commits.
- git clone – clone a repository into a new directory.
- git clean – remove untracked files from the working directory.
1 Be sure to restrict access to your
.git directory on your server if it resides within the presented content. | <urn:uuid:ddc0dd23-6e0d-42e4-a48a-d38c6d985935> | CC-MAIN-2017-04 | https://danielmiessler.com/study/git/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00306-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.898059 | 2,061 | 3.09375 | 3 |
A Taxonomy of NoSQL Databases
June 6, 2011
Search is morphing. The line between databases and search is thin and in some cases porous. We believe that most readers of Beyond Search are familiar with the Access and JET engines from Microsoft.
However, some individuals find the traditional, decades-old relational database inappropriate for certain tasks. The solution for some is NoSQL databases. We learned in “The Four Categories of NoSQL Databases”:
Most people just see one big pile of NoSQL databases, while there are quite some differences. You couldn’t use a Key-Value store when you need a Graph database for example, while Relational database systems are all quite compatible.
The author identifies four distinct categories of NoSQL databases:
- Key-values—A math method powers this technique implemented in Google’s and its variants’ approach
- Column Family—A columnar oriented method of organization
- Document—Key value method1
- Graph—Node and edge set up.
No database method is without drawbacks. The article points out that most NoSQL approaches eliminate the central, declarative language of SQL to allow for faster processing. Coupled with different architectures, NoSQL gains some advantages for “big data”; that is, large data sets and certain types of processing. But each model described in the article requires its own method of querying, trading a single, simple method of access for more flexible storage. These programs may not embrace the latest methods from Digital Reasoning, Kitanga and others, but this source is definitely worth tucking away for reference.
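A toy Python sketch makes the contrast visible: the same fact shaped for a key-value store, a document store, and a graph store implies three different query styles (the data and structures below are invented for illustration).

import json

# Key-value: opaque value, lookup by key only.
kv_store = {"customer:42": '{"name": "Ada", "orders": [1001]}'}

# Document: the value is structured, so fields inside it can be queried.
doc_store = {"customers": [{"_id": 42, "name": "Ada", "orders": [{"id": 1001, "total": 99.5}]}]}

# Graph: nodes and edges are first-class, good for traversals ("who ordered what?").
graph = {
    "nodes": {"customer:42": {"name": "Ada"}, "order:1001": {"total": 99.5}},
    "edges": [("customer:42", "PLACED", "order:1001")],
}

print(json.loads(kv_store["customer:42"])["name"])                        # fetch by key, parse client-side
print([c["name"] for c in doc_store["customers"] if c["_id"] == 42])      # query into the document
print([dst for src, rel, dst in graph["edges"] if src == "customer:42"])  # follow edges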
Stephen E Arnold, June 6, 2011 | <urn:uuid:1b5b17db-04eb-4732-a7f3-f6033a8ae247> | CC-MAIN-2017-04 | http://arnoldit.com/wordpress/2011/06/06/a-taxonomy-of-nosql-databases/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00516-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.882981 | 344 | 2.84375 | 3 |
SCOTTSDALE, AZ--(Marketwired - Jan 9, 2014) - It's not news that regular physical activity offers a number of health benefits for young people. The U.S. Department of Health and Human Services (HSS) states that regular physical activity helps build and maintain healthy bones and muscles; helps reduce the risk of developing obesity and chronic diseases; reduces feelings of depression and anxiety and promotes psychological well-being; and may help improve students' academic performance. But The Little Gym says that families who want to encourage their young children to make physical activity a lifelong, daily priority should establish active habits as early as possible.
"Making physical activity fun is the best way to encourage children to get and stay active," says Ruk Adams, president and CEO, The Little Gym International. "At The Little Gym we know that it's important to keep the emphasis on success, not winning; and on fun, not 'fitness.' If children are having fun as they tumble, run and play, then healthy habits will follow."
Programs like the ones offered by The Little Gym can also help bridge the shortfall caused by recent budget cuts to school districts across the country. HHS recognizes that schools have traditionally had an important role to play in physical fitness through comprehensive school-based physical activity, recess and elective sports programs. But SPARK, a research-based, public health organization that provides Physical Education (PE) curriculum, training & equipment for Pre-K - 12th grade, states that 44% of schools have had to reduce elective offerings (including PE) and another 70% have increased class sizes -- up to 80 students in a PE class in some cases -- due to budget cuts. Today, only eight states require PE as part of each school day.
"At The Little Gym, we recognize that PE programs have more to teach than physical skills. That's why we offer children much more than just active fun each week," said Adams. "While physical activity is a cornerstone of our programs, we take a 'whole-child' approach, challenging children to test their limits as they gain confidence and build self-esteem. And our small group programs also help kids improve their ability to work as part of a team and to listen to and follow instructions. At a time when fewer schools are able to include PE in their curriculum, our programs help children learn about their physical potential as individuals and as part of a group. That knowledge is a great foundation for a healthy, active lifestyle."
For more information about The Little Gym, please visit www.TheLittleGym.com.
About The Little Gym
The Little Gym is an internationally recognized program that helps children build the developmental skills and confidence needed at each stage of childhood. The very first location was established in 1976 by Robin Wes, an innovative educator with a genuine love for children. The Little Gym International, Inc., headquartered in Scottsdale, Ariz., was formed in 1992 to franchise The Little Gym concept. Today, The Little Gym International has nearly 300 locations in 29 countries. For more information, visit The Little Gym at www.TheLittleGym.com. | <urn:uuid:cd0b6859-a490-45af-a893-86f9f07dea2a> | CC-MAIN-2017-04 | http://www.marketwired.com/press-release/keeping-your-family-focused-on-physical-activity-1867820.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00335-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965841 | 639 | 2.65625 | 3 |
Opinion: Linear programming offers a better way of solving problems.
Lots of folks get all misty-eyed these days about the power of computers to help us express ourselves. The engineer in me recoils, though, from the use of so much power to wait for a user to think of something to say—as the 1GHz CPU of my laptop is waiting for me, right now, to get on with this column.
My own misty-eyed devotion is to the power that computers give us to make better decisions and make better use of resources. I therefore note this month's passing of mathematician George Dantzig, principal developer of the methods of linear programming that are the foundation of modern planning.
To understand the essence of linear programming, imagine a factory that's equipped to make two types of chairs. One type is made of mostly cloth with leather trim; the other is made of mostly leather with cloth trim. Each chair needs labor from leather workers and cloth workers in proportion to its use of those materials. Each chair's cost is the sum of these four component costs, plus other common costs, all with what we'll assume to be linear relationships to the number of chairs produced—that is, making twice as many chairs consumes twice as much material and labor.
The profit to the factory can also be modeled as linear: The sum of the profits from each type of product, proportional to the quantities produced, minus the fixed costs of being in business.
There are limits, though, on the number of hours of labor available from each craft, based on the number of people who can work at any one time in each area of the factory. We could, if we wished, express this as a certain number of regular hours available at one cost and another number of overtime hours available at a higher cost. The "linear program" for this problem is then the statement of an "objective function" (overall profit) to be, in this case, maximized subject to these constraints.
This problem and other similar maximization and minimization problems can be diagrammed and solved with a pencil and a straightedge. We can also graphically determine which resources would be most valuable to increase, or which constraints it would be most worthwhile to loosen.
It's easy to imagine more realistic problems, though, involving dozens or even thousands of such variables and constraints. Dantzig's Simplex algorithm, which he devised in 1947, solves such problems reliably and efficiently; the algorithm therefore lets a planner examine many production-scheduling scenarios.
Dantzig's work was pivotal to the early adoption of computers by business and military planners. "Until computers started to be used for e-mail and the World Wide Web in the 80s and 90s, the single most important use of computers—the biggest user of computer time in the entire world—was running the Simplex algorithm to solve linear programming problems. No large organization can exist or stay in business without the Simplex algorithm," declared Stanford University professor Keith Devlin during National Public Radio's May 21 report on Dantzig's death.
It's perhaps the most dubious achievement, then, of the desktop computing revolution that instead of running Simplex scenarios on mainframes to yield provably optimal solutions, we now throw at least as many CPU cycles at trial-and-error spreadsheet manipulations on PCs that may or may not be finding the best possible choices. In the process, people who could be using their expertise to conceive new production options, or investigating the accuracy of their assumptions, are instead using too much of their time to do what Dantzig's methods have been doing better for more than half a century.
You can do better. You can use the Solver add-in that comes with Excel or add a third-party spreadsheet enhancement such as Frontline Systems' Premium Solver Platform; you can use the linear programming functions in an industrial-strength mathematical tool kit such as Wolfram Research's Mathematica or The MathWorks' Matlab and Optimization Toolbox. One way or another, you can rediscover what's been known for so long about the value of merely thinking about a problem clearly enough to program it—let alone solve it.
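For readers who want to try it, here is a sketch of the chair problem in Python using SciPy's linprog solver, a modern descendant of Dantzig's simplex method. All the coefficients are invented for illustration.

# A made-up instance of the chair problem:
#   x1 = cloth chairs with leather trim, x2 = leather chairs with cloth trim
#   maximize  40*x1 + 55*x2          (profit per chair)
#   subject to  2*x1 + 1*x2 <= 120   (cloth-worker hours available)
#               1*x1 + 3*x2 <= 160   (leather-worker hours available)
#               x1, x2 >= 0
from scipy.optimize import linprog

c = [-40, -55]                       # linprog minimizes, so negate the profits to maximize
A_ub = [[2, 1], [1, 3]]
b_ub = [120, 160]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
x1, x2 = result.x
print(f"Build {x1:.0f} cloth chairs and {x2:.0f} leather chairs for a profit of {-result.fun:.0f}")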
Technology Editor Peter Coffee can be reached at firstname.lastname@example.org.
To read more Peter Coffee, subscribe to eWEEK magazine. | <urn:uuid:3c4855e8-d21f-4b28-869a-823bdd74ab17> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Application-Development/Linear-Programming-Methods-Are-Underused | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00059-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948132 | 878 | 2.75 | 3 |
ISO 27001 risk assessment & treatment – 6 basic steps
Risk assessment (often called risk analysis) is probably the most complex part of ISO 27001 implementation; but at the same time risk assessment (and treatment) is the most important step at the beginning of your information security project – it sets the foundations for information security in your company.
The question is – why is it so important? The answer is quite simple, although not understood by many people: the main philosophy of ISO 27001 is to find out which incidents could occur (i.e. assess the risks) and then find the most appropriate ways to avoid such incidents (i.e. treat the risks). Not only that – you also have to assess the importance of each risk so that you can focus on the most important ones.
Although risk assessment and treatment (together: risk management) is a complex job, it is very often unnecessarily mystified. These 6 basic steps will shed light on what you have to do:
1. Risk assessment methodology
This is the first step of your voyage through risk management. You need to define rules on how you are going to perform risk management, because you want your whole organization to do it the same way – the biggest problems with risk assessment arise when different parts of the organization perform it in different ways. Therefore, you need to define whether you want qualitative or quantitative risk assessment, which scales you will use for qualitative assessment, what the acceptable level of risk will be, etc.
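To make this less abstract, the written rules can be as small as a handful of scales and a threshold. The sketch below shows one possible qualitative scheme in Python; the 0–4 scales, the additive formula, and the threshold are illustrative choices only – ISO 27001 does not prescribe any particular scale or formula.

```python
# An illustrative qualitative risk methodology. None of these values is
# mandated by ISO 27001 - they are simply one possible set of rules.
METHODOLOGY = {
    "impact_scale":     {0: "negligible", 1: "minor", 2: "moderate", 3: "major", 4: "severe"},
    "likelihood_scale": {0: "rare", 1: "unlikely", 2: "possible", 3: "likely", 4: "almost certain"},
    "risk_formula":     "impact + likelihood",  # yields a risk level between 0 and 8
    "acceptable_risk_level": 4,                 # anything above this must be treated
}
```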
2. Risk assessment implementation
Once you know the rules, you can start finding out which potential problems could happen to you – you need to list all your assets, then the threats and vulnerabilities related to those assets, assess the impact and likelihood for each asset/threat/vulnerability combination, and finally calculate the level of risk.
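Assuming the simple additive scales sketched under step 1, the calculation itself is trivial – the real work is in the listing and assessing. The assets, threats, vulnerabilities, and scores below are invented examples only.

```python
# A minimal risk register: one row per asset/threat/vulnerability combination,
# each scored for impact and likelihood on the 0-4 scales defined earlier.
ACCEPTABLE_RISK_LEVEL = 4

risk_register = [
    # (asset,             threat,           vulnerability,         impact, likelihood)
    ("customer database", "data theft",     "weak access control", 4,      3),
    ("sales laptops",     "loss of device", "no disk encryption",  3,      2),
    ("server room",       "power failure",  "no UPS installed",    2,      1),
]

for asset, threat, vulnerability, impact, likelihood in risk_register:
    risk_level = impact + likelihood   # the formula chosen in the methodology
    verdict = "acceptable" if risk_level <= ACCEPTABLE_RISK_LEVEL else "UNACCEPTABLE - must be treated"
    print(f"{asset} / {threat} / {vulnerability}: risk {risk_level} ({verdict})")
```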
In my experience, companies are usually aware of only 30% of their risks. Therefore, you’ll probably find this kind of exercise quite revealing – when you are finished you’ll start to appreciate the effort you’ve made.
3. Risk treatment implementation
Of course, not all risks are created equal – you have to focus on the most important ones, so-called ‘unacceptable risks’.
There are four options you can choose from to treat each unacceptable risk:
- Apply security controls from Annex A to decrease the risks – see this article: ISO 27001 Annex A controls.
- Transfer the risk to another party – e.g. to an insurance company by buying an insurance policy.
- Avoid the risk by stopping an activity that is too risky, or by doing it in a completely different fashion.
- Accept the risk – if, for instance, the cost of mitigating that risk would be higher than the damage itself.
This is where you need to get creative – how to decrease the risks with minimum investment. It would be easiest if your budget were unlimited, but that is never going to happen. And I must tell you that, unfortunately, your management is right – it is possible to achieve the same result with less money; you only need to figure out how.
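Purely as an illustration, the outcome of this step can be recorded as a short list that tags each unacceptable risk with one of the four options above. The risks, options, and controls named here are made-up examples, not recommendations.

```python
# Hypothetical treatment decisions for the unacceptable risks found earlier.
# "option" is one of the four choices listed above.
treatment_decisions = [
    {"risk":    "customer database / data theft / weak access control",
     "option":  "apply controls",
     "details": "role-based access control plus activity logging"},
    {"risk":    "sales laptops / loss of device / no disk encryption",
     "option":  "apply controls",
     "details": "full-disk encryption on all laptops"},
    {"risk":    "warehouse / fire / aging electrical installation",
     "option":  "transfer",
     "details": "covered by a property insurance policy"},
]
```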
4. ISMS Risk Assessment Report
Unlike the previous steps, this one is quite boring – you need to document everything you've done so far. Not only for the auditors – you may also want to check these results yourself in a year or two.
5. Statement of Applicability
This document actually shows the security profile of your company – based on the results of the risk treatment you need to list all the controls you have implemented, why you have implemented them and how. This document is also very important because the certification auditor will use it as the main guideline for the audit.
For details about this document, see article The importance of Statement of Applicability for ISO 27001.
6. Risk Treatment Plan
This is the step where you have to move from theory to practice. Let's be frank – up to now this whole risk management job has been purely theoretical, but now it's time to show some concrete results.
This is the purpose of the Risk Treatment Plan – to define exactly who is going to implement each control, in which timeframe, with which budget, etc. I would prefer to call this document an 'Implementation Plan' or 'Action Plan', but let's stick to the terminology used in ISO 27001.
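As a sketch only, a single line of such a plan might carry fields like these – the owner, timeframe, and budget are exactly the details the plan is supposed to pin down, while the concrete values here are invented.

```python
# One hypothetical entry in a Risk Treatment Plan.
treatment_plan_entry = {
    "control":        "full-disk encryption on all laptops",
    "risk_addressed": "sales laptops / loss of device / no disk encryption",
    "owner":          "Head of IT",    # who is going to implement it
    "deadline":       "2025-09-30",    # in which timeframe
    "budget":         "5,000 EUR",     # with which budget
    "status":         "planned",
}
```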
Once you've written this document, it is crucial to get your management's approval, because it will take considerable time and effort (and money) to implement all the controls that you have planned here. And without their commitment you won't get any of those.
And this is it – you've started your journey from not knowing how to set up your information security all the way to having a very clear picture of what you need to implement. The point is – ISO 27001 forces you to make this journey in a systematic way.
P.S. ISO 27005 – how can it help you?
ISO/IEC 27005 is a standard dedicated solely to information security risk management – it is very helpful if you want to get a deeper insight into information security risk assessment and treatment – that is, if you want to work as a consultant or perhaps as an information security / risk manager on a permanent basis. However, if you’re just looking to do risk assessment once a year, that standard is probably not necessary for you.
Learn about the details of the risk management process in this free ISO 27001 Foundations Online Course.