Dataset columns:
- text: string, lengths 234 to 589k
- id: string, length 47
- dump: string, 62 classes
- url: string, lengths 16 to 734
- date: string, length 20
- file_path: string, lengths 109 to 155
- language: string, 1 class
- language_score: float64, 0.65 to 1
- token_count: int64, 57 to 124k
- score: float64, 2.52 to 4.91
- int_score: int64, 3 to 5
VoIP networks are very popular these days. Supporting communication between traditional PBXs, Cisco IP phones, the analog PSTN, and analog telephones, all over an IP network, requires quite a number of protocols. Some are signaling protocols (for instance, MGCP, H.323, SIP, H.248, and SCCP), used to set up, maintain, and tear down a call. Others deal with the actual voice packets (for example, SRTP, RTCP, and RTP) rather than with signaling information. Some of the most common VoIP protocols are shown and described here.
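To make the signaling/media split concrete, here is a minimal sketch (not from the original article) that packs a bare RTP header in Python; the payload type, sequence number, and SSRC values are arbitrary examples.

```python
import struct

def build_rtp_header(seq: int, timestamp: int, ssrc: int, payload_type: int = 0) -> bytes:
    """Pack a minimal 12-byte RTP header (RFC 3550), with no CSRC list or extensions.

    payload_type 0 = PCMU (G.711 u-law); seq, timestamp and ssrc are caller-chosen.
    """
    version = 2
    first_byte = version << 6          # V=2, P=0, X=0, CC=0
    second_byte = payload_type & 0x7F  # M=0, 7-bit payload type
    return struct.pack("!BBHII", first_byte, second_byte,
                       seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

# Example: the header that would precede one 20 ms G.711 voice frame.
header = build_rtp_header(seq=1, timestamp=160, ssrc=0x12345678)
print(header.hex())  # 80000001000000a012345678
```

Signaling protocols such as SIP negotiate the codec, ports and call state out of band; RTP or SRTP then carry voice frames preceded by headers like this one.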
<urn:uuid:c29356ef-3fc0-4877-8687-cc11007a2dbc>
CC-MAIN-2017-04
https://howdoesinternetwork.com/tag/voice-protocols
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00078-ip-10-171-10-70.ec2.internal.warc.gz
en
0.916589
129
2.71875
3
The microblogging service sent out an email to millions of users saying their account had been compromised. The firm sent an email which read, "Twitter believes that your account may have been compromised by a website or service not associated with Twitter," then reset the passwords of numerous accounts. Twitter said that in situations where they believe an account has been compromised, they reset the password and send an email letting the owner know they’ve been compromised. Twitter users who received the email were prompted to create a new password. "We unintentionally reset passwords of a large number of accounts, beyond those that we believed to have been compromised," said Twitter in a blog post. Twitter explained that what they did is a "routine part of our processes to protect our users." "We apologize for any inconvenience or confusion this may have caused," said the company. Twitter is no stranger to being a target for hackers or fake accounts. A study by Barracuda Labs this year revealed they had found at least 11,283 ‘abusers’ with over 72,000 fake accounts. The average abuser was found to have 48,885 followers and the average fake Twitter account was following at least 1,799 accounts. The study also found that over 60% of fake accounts are less than three months old with the average age being 19 weeks. The oldest fake account dates as far back as January 2007. "Creating fake Twitter accounts and buying/selling followers is against Twitter’s ToS, and gradually erodes the overall value of the social network," said Barracuda Labs.
<urn:uuid:582363ab-3081-4d2a-9867-b8ea170c81d9>
CC-MAIN-2017-04
http://www.cbronline.com/news/twitter-says-sorry-for-fake-hack-warnings
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00316-ip-10-171-10-70.ec2.internal.warc.gz
en
0.973856
325
2.546875
3
Scientists at the University of Sheffield have developed a way to send crime scene fingerprints wirelessly for examination in less than a minute. The system has been approved by the National Fingerprint Board and will be used by all 43 police forces across the country, in a move that is expected to dramatically speed up the identification of crime suspects. The technique involves compressing the fingerprint data collected, then using a small scanner and a wireless enabled laptop to send them to police fingerprint bureaus over mobile phone networks. The method reduces the time taken to lift and despatch the prints for examination from up to 20 minutes to between 30 and 60 seconds and allows instant transmission. Currently, police investigators must often wait for the end of the day for a batch of prints to be sent off, delaying the identification of suspects for hours or days. The university said research had gone into ensuring that there would be no deterioration in identification due to the use of “lossy” image compression. They claimed, “In fact, the reverse is true. The correct form and degree of compression improves identification for poor quality lifts.”
<urn:uuid:5d3e89c7-8903-438d-aea3-3559fb7e5f20>
CC-MAIN-2017-04
http://www.computerweekly.com/news/2240078311/Researchers-crack-mobile-fingerprint-checking
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00224-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940516
329
2.609375
3
Visualizing data means that we can see relationships if they exist. A visualization is a short cut to understanding the underlying pattern in the data. The danger is that the pattern is spurious and that we fail to test for the significance of the relationship. So what are "best practices" for visualizing a data set? Should we start with descriptive information on the data set? Should we have a prior hypothesis that guides our analysis? Should we try a number of techniques and tools to see if we find a relationship? Continue reading at http://dssresources.com/faq/index.php?action=artikel&id=248
<urn:uuid:79c364dd-d5df-41e6-8f49-f634e26d8cf4>
CC-MAIN-2017-04
http://www.b-eye-network.com/blogs/power/archives/2012/12/what_is_best_pr.php
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00132-ip-10-171-10-70.ec2.internal.warc.gz
en
0.869715
148
2.734375
3
Regan J.J.,Centers for Disease Control and Prevention | Traeger M.S.,Indian Health Service Hospital, AZ | Humpherys D.,Indian Health Service Hospital, AZ | Mahoney D.L.,Indian Health Service Hospital, AZ | And 12 more authors. Clinical Infectious Diseases | Year: 2015 Background: Rocky Mountain spotted fever (RMSF) is a disease that now causes significant morbidity and mortality on several American Indian reservations in Arizona. Although the disease is treatable, reported RMSF case fatality rates from this region are high (7%) compared to the rest of the nation (<1%), suggesting a need to identify clinical points for intervention. Methods: The first 205 cases from this region were reviewed and fatal RMSF cases were compared to nonfatal cases to determine clinical risk factors for fatal outcome. Results: Doxycycline was initiated significantly later in fatal cases (median, day 7) than nonfatal cases (median, day 3), although both groups of case patients presented for care early (median, day 2). Multiple factors increased the risk of doxycycline delay and fatal outcome, such as early symptoms of nausea and diarrhea, history of alcoholism or chronic lung disease, and abnormal laboratory results such as elevated liver aminotransferases. Rash, history of tick bite, thrombocytopenia, and hyponatremia were often absent at initial presentation. Conclusions: Earlier treatment with doxycycline can decrease morbidity and mortality from RMSF in this region. Recognition of risk factors associated with doxycycline delay and fatal outcome, such as early gastrointestinal symptoms and a history of alcoholism or chronic lung disease, may be useful in guiding early treatment decisions. Healthcare providers should have a low threshold for initiating doxycycline whenever treating febrile or potentially septic patients from tribal lands in Arizona, even if an alternative diagnosis seems more likely and classic findings of RMSF are absent. © 2015 Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US. Source
<urn:uuid:7a3330d9-2420-4b8e-a57f-cdac6aeb76fc>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/indian-health-service-441116/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00132-ip-10-171-10-70.ec2.internal.warc.gz
en
0.920546
456
2.5625
3
Nations have a responsibility to increase the well being and wealth of their residents. As in the private sector, one of the key factors influencing a state’s ability to pursue this goal is the number of customers (people and businesses) that it can generate or attract. Historically, states have tried to maximize their number of customers by promoting population growth, entrepreneurship and immigration. Estonia realistically has little chance of succeeding with those approaches. But in an increasingly digital world, its experience in enhancing the lives of its citizens with digital services could translate to virtual growth, which in many ways could be better than the real thing. After splitting from the Soviet Union and regaining its statehood in 1991, Estonia clearly saw the impossibility of physically serving a small population spread across a large territory (large in a European context; Estonia is bigger than the Netherlands or Switzerland). It is not realistic to put a bank branch in every small town or have a full-service government office in each village. Both the private and public sectors decided to bet on the development of digital solutions and e-services. Today, 25 years later, Estonia has one of the most developed national digital infrastructures in the world. It is a country where a digital signature is preferable to a physical one, taxes take only a few minutes to file, and online elections have been a fact of life for over a decade. One of the most important foundational components of a functioning digital society is a secure digital identity. When nearly all government services are provided over the Internet, both the state and the private sector need to know who is physically accessing them via a computer or mobile device. This is why, back in 2002, Estonia started issuing its residents mandatory ID cards containing a chip that allows residents to unambiguously identify themselves and authenticate legal transactions and documents through digital signing. A digital signature has been legally equivalent to a handwritten one throughout the European Union — not just in Estonia — since 1999. The Estonian state’s secure digital identity system and e-services facilitated location independence. The state could serve not only its sparsely populated areas, but also the entire Estonian diaspora. Estonians who reside in Silicon Valley, Singapore or South Africa can maintain a connection to their homeland via e-services, contribute to the legislative process and even participate in elections. The ability to serve the diaspora led to a logical question: If it is possible to offer a convenient and effective e-services environment to expatriate Estonians, why not also offer it to non-Estonians, even those who do not reside in Estonia, who need better everyday solutions than those offered by their own states? Is it possible to provide country as a service? In recent years, the world has seen a massive leap in the number of people who offer their skills and knowledge for sale on the global marketplace irrespective of location and national borders. Businessweek estimates this number will reach 100 million in the U.S. alone by 2020. These people are not looking to streamline their finances via tax havens. They have not been engaged in entrepreneurship so far because incorporating and maintaining a company is a major hassle. It is simpler to not take the step and to just continue drawing a salary. 
At the same time, since they are providing their services globally, it does not really matter to them whether their company is a legal entity in their place of residence or a different jurisdiction altogether. The most important thing is that the creation and upkeep of the company be easy and hassle-free. Incidentally, it is also important for these people that, despite being incorporated in a different nation, they remain honest taxpayers as far as their own country is concerned. This is one of Estonia’s target groups. Its offering is a location-independent, hassle-free and fully digital economic and financial environment for anyone who needs it. The company is managed by its owners themselves, not nominal “directors.” Where exactly are the taxes paid, at the end of the day? “Taxes must be paid where the value was created” — that is the principle espoused by the Organization for Economic Co-operation and Development and increasingly adopted by nations. If a location-independent entrepreneur creates a company in Estonia but lives in Singapore, the company is not benefiting from Estonia’s roads, its educational system, its healthcare or any of the other services it provides its residents. The person is using the Singaporean educational system and driving on Singaporean roads, so it is logical that he or she should contribute taxes to the functioning and development of that state’s physical environment. The solution is transparency reporting between Estonia’s tax authority and the Singaporean one. The Estonian Tax and Customs Board has the capability to offer such information and transparency. Potentially, the Estonian side could even collect the taxes and send the money to Singapore. So why is Estonia doing it? The more people and companies that are engaged with the Estonian business environment, the more clients there are for Estonian companies. E-residents will not only establish companies, but they will also likely start using the services of other Estonian companies. They will need bank accounts, international payment service providers, accounting support, legal advice, auditors, asset management, investment opportunities, etc. The more clients Estonian companies gain, the bigger their growth potential will be and therefore also the growth potential of the Estonian economy. Country as a service is the new reality. For example, if the U.K. says unequivocally that it will not issue a secure, government-backed digital identity to its subjects, or if states fail to greatly simplify the machinery of bureaucracy and make it location-independent, this becomes an opportunity for countries that can offer such services across borders. As a small state, Estonia has learned over the years to serve primarily small and micro businesses. To do so profitably, processes must be maximally digitized and automated, and not just in the private sector, but in the public one as well. Estonia’s model is location-independent, which makes it easy to scale without overextending resources. Estonia is a nation of 1.3 million people and its vision is to acquire at least 10 million digital residents (e-residents), in a way that is mutually beneficial by the nation-states where these people are tax residents. Taavi Kotka is the government of Estonia’s CIO, the founder of the e-residency program, and one of Computerworld’s Premier 100 IT Leaders of 2016. This story, "Country as a service: Estonia’s new model" was originally published by Computerworld.
<urn:uuid:195203af-53c9-4237-8d00-ccaf5f068e92>
CC-MAIN-2017-04
http://www.itnews.com/article/3071209/digital-transformation/country-as-a-service-estonia-s-new-model.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00554-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962457
1,368
2.671875
3
The Wild World of Wireless Standards Many IT professionals who have casually expressed an interest in breaking into wireless computing may have been discouraged when they started to study the far-reaching and somewhat Byzantine set of standards the industry operates by. Well, we’re here to help. Inasmuch as it’s possible, we’ll try to break down and simplify some of these standards for you, so that you can get down to the nitty-gritty of wireless networking and avoid getting too bogged down by details. Why So Many? The main reason there are so many standards around wireless is that there are several wireless organizations out there, which are all pulling the field toward their particular technology or idea of what the industry should be doing. Additionally, different protocols address varying aspects of wireless: Some deal with clear communication, others with security and so forth. One of the best-known wireless standards associations is the Institute of Electrical and Electronics Engineers (IEEE), which encompasses more than 365,000 members in approximately 150 countries. (Nearly 40 percent of its membership is outside the United States.) While it was not established—nor is it strictly devoted—to promote wireless criteria, it does develop and advocate the widely accepted 802.11 set of wireless standards. The Bluetooth Special Interest Group (SIG) is another example. This organization is dedicated to the creation and promotion of the Bluetooth wireless specification, which explains how mobile devices like smartphones, laptops and PDAs can be linked to any other device with transceiver chips via short-range, low-cost radio solutions. 802.11: Keeping It in the Family IEEE’s suite of wireless LAN standards includes some that have already been adopted and are used extensively in the field, whereas others—at the time of this writing—are either still highly theoretical and under development by the organization’s task teams, or apply to very specific circumstances. We’ll focus more on the former. Here they are: - 802.11: The granddaddy of them all, this regulation more or less lays the framework for all the ones that follow. 802.11 pertains to transmissions that involve 1 or 2 megabits per second (Mbps) in the 2.4 GHz band using either frequency-hopping spread spectrum (FHSS) or direct sequence spread spectrum (DSSS). - 802.11a: A kind of addendum to 802.11, this standard relates to wireless ATM systems and is used in access hubs. It provides up to 54 Mbps in the 5 GHz band, although transmissions seldom exceed 24 Mbps. Also, instead of FHSS or DSSS, 802.11a uses an orthogonal frequency division multiplexing encoding scheme. - 802.11b: Simply put, this one’s Wi-Fi. 802.11b employs the complementary code keying (CCK) modulation technique, which permits higher data speeds and is less vulnerable to multipath-propagation interference. Its frequency range is between 2.4 GHz and 2.4835 GHz. - 802.11e: This standard, which was actually finalized by IEEE only a few months ago, spans both home and enterprise wireless operating environments. It is intended to enhance 802.11a and 802.11b specifications with quality-of-service (QoS) features and multimedia support guidelines. - 802.11g: Another recently approved standard, this one covers wireless transmissions at up to 54 megabits per second (Mbps) over relatively small areas. 802.11g is compatible with 802.11b, as both of these operate in the 2.4 GHz range.
- 802.11i: This regulation deals with a major concern of many users of wireless technologies: security. 802.11i brings the Advanced Encryption Standard (AES) security protocol to 802.11. It also includes the Temporal Key Integrity Protocol (TKIP), a set of algorithms that enhances encryption capability. Outside of the immediate 802.11 family—cousins, if you will—are 802.15, a specification that involves wireless personal area networks (wPANs) and is compatible with Bluetooth standards, and 802.16, which covers broadband wireless communications principles for metropolitan area networks (MANs). The latter even has its own advocacy group. You may have heard of it: The WiMAX coalition counts tech heavy-hitters like Intel and Nokia among its members. What in the World Is Bluetooth? Bluetooth takes its name from Harald Bluetooth, a relatively obscure 10th century Viking ruler in present-day Denmark who ostensibly facilitated greater communication between people in his time. As a concept in wireless, it’s hard to pin down. Bluetooth is definitely a technology, but it’s also a corporate community that includes Nokia, Motorola, Ericsson and many other companies. What they all have in common is that they follow specifications developed by the organization in using Bluetooth technology in their products. The latest of these is Version 1.2, the fourth generation in Bluetooth standards. All of these regulations are in a checklist format that covers criteria such as protocol and profile provisions and test specifications. These are available for download on the Bluetooth SIG Web site (http://www.bluetooth.org). WAP (Wireless Application Protocol): Developed in 1997 by Ericsson, Motorola, Nokia and Unwired Planet (now Phone.com), the WAP specification concerns the ways in which wireless devices can access the Web and operate in corporate intranets. This standard is designed to help administrators, manufacturers and providers overcome obstacles in differentiating themselves in the market and offering fast and flexible service. WAP encompasses four layers: Wireless Application Environment (WAE), Wireless Session Layer (WSL), Wireless Transport Layer Security (WTLS) and Wireless Transport Layer (WTP). The Wireless Markup Language (WML), an open language that can be accessed and used without any royalty payments, makes WAP possible by allowing users to access text portions of Web sites through cell phones and PDAs. Are We Finished Yet? We’ve only scratched the surface on the existing wireless standards. I’d like to say this is an exhaustive overview of regulations, but there are many, many others out there. Until the entire industry comes under one set of standards (don’t hold your breath), you’ll have to research which wireless products and services your employers or customers use, and adjust your comprehension of particular specifications accordingly. –Brian Summerfield, email@example.com
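For quick reference, the 802.11 figures cited above can be collapsed into a small lookup table. This restates the article's numbers, with two gaps filled in from the standards themselves rather than the article (802.11b's 11 Mbps ceiling and 802.11g's OFDM scheme); it is not an exhaustive or current summary.

```python
# Band in GHz, nominal maximum rate in Mbps, and modulation scheme.
WIFI_STANDARDS = {
    "802.11":  {"band_ghz": 2.4, "max_mbps": 2,  "modulation": "FHSS or DSSS"},
    "802.11a": {"band_ghz": 5.0, "max_mbps": 54, "modulation": "OFDM"},
    "802.11b": {"band_ghz": 2.4, "max_mbps": 11, "modulation": "CCK"},
    "802.11g": {"band_ghz": 2.4, "max_mbps": 54, "modulation": "OFDM"},
}

def summarize(name: str) -> str:
    s = WIFI_STANDARDS[name]
    return f"{name}: up to {s['max_mbps']} Mbps in the {s['band_ghz']} GHz band ({s['modulation']})"

print(summarize("802.11g"))  # 802.11g: up to 54 Mbps in the 2.4 GHz band (OFDM)
```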
<urn:uuid:196d25fd-7af4-4536-a7ea-5261065a5290>
CC-MAIN-2017-04
http://certmag.com/the-wild-world-of-wireless-standards/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00370-ip-10-171-10-70.ec2.internal.warc.gz
en
0.922531
1,354
3.046875
3
The Minister for Broadband, Communications and the Digital Economy, Senator Stephen Conroy, has welcomed the Queensland launch of an innovative online safety program to teach children how to be responsible cyber-citizens. "Raising awareness of online safety is critical in ensuring children enjoy safe and positive internet experiences, and know how to protect themselves from risks," Conroy said. "Cybersmart Detectives is an exciting, interactive online activity where children work online in real-time to solve an internet-themed problem. The Australian Communication and Media Authority (ACMA) will be providing the program free-of-charge to schools." The Australian Government has committed $125.8 million to a comprehensive range of cyber-safety measures, including law enforcement, filtering and education, over the next four years. Measures include: Senator Conroy commended ACMA for its cyber-safety initiatives that provide educational resources, advice and support for children, teachers, families, and library staff across Australia. "Through Cybersmart Detectives, children in their last year of primary school will learn about some of the risks associated with internet use and important internet safety messages, like not giving out personal information online," Conroy said. "This is just one of a number of Australian Government initiatives aimed at creating a safer online environment for all Australian children." Cybersmart Detectives is currently running each week in Western Australia and Victoria.
<urn:uuid:a6011db7-441e-49df-9761-1a848f60a84f>
CC-MAIN-2017-04
http://www.govtech.com/security/School-Program-to-Promote.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00188-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936652
286
2.6875
3
Use Hydra to Remotely Test Password Security With Hydra, you can test the security of passwords on your network from afar. In the last article in this series we looked at how easy it is to carry out a brute force attack on a password file containing the hashes of users' passwords. By using John the Ripper to systematically try out every password from a word list, and then every possible permutation of letters, numbers and other characters for passwords of increasing length, you are bound to find any passwords which are common English words or which are reasonably short. But what about a situation where you don't have access to the file containing hashed passwords? This makes things considerably harder, because instead of having the luxury of taking away the file and subjecting it to password attacks at your leisure, you have to perform an online password attack. This means that every password guess you make has to be sent over the network to the appropriate server (along with a username in most cases.) You then have to wait for a response from the server to see if your guess was successful. If it wasn't (which will be almost all of the time) then you'll have to repeat the process, sending in another guess and waiting for the response. It's often possible to send in multiple guesses concurrently, but even so online password attacks are very, very slow, compared to offline attacks, which are only speed-limited by the power of the computer you are using. Online attacks are more than just slow. There are many security hurdles to overcome. Many servers have security features which limit the number of failed password attempts that are allowed before the account is suspended, your IP address is blocked or the period before a new login attempt can be made is extended. They should also log where failed attempts are coming from and alert administrators. This makes it hard for a hacker to carry out an online attack on your systems. Which is good. The question is how hard? Do the systems work? Would you know if someone was carrying out an online attack, and what would you do about it? The best way to answer these questions is to carry out an online attack yourself, and see how far you get. The open-source tool you'll need for this is called Hydra, available from http://freeworld.thc.org/thc-hydra/. It's available for Windows, handheld ARM-based devices and Palm PDAs precompiled, or as source code which you can compile for MacOS X or your favorite Linux distribution. There's even a GUI for the Linux version. After downloading the Linux version and compiling (following the instructions included in the README.txt file), you can launch the GUI version by typing from the Hydra directory. The Hydra GUI will start, showing the Target tab (see Figure 1). The first thing to choose is what you want to test: Hydra can handle about forty common protocols, including Pop3, telnet, ftp, VNC, SMTP, Cisco auth. You can select the protocol you want from the Protocol dropdown box, and choose a target: either the name or IP address of a single server, or a text file with a list of them. On the Passwords tab, you can then choose to test a single username, in this case hydratest, and specify a Password list that you want to test. (See Figure 2.) The Tuning tab is used for selecting the number of login attempts that are submitted simultaneously, and this number can be quite critical.
Too high and the chances of being detected or locked out of the system are much higher, but too low and it could take days to work through your password list. Once you are ready to launch the attack, click the Start tab, and click Start. In the example illustrated in figure 3, the correct password was found in about three seconds. In the next example, the password was very far down a very long password list. Look what happens: the POP3 server has got fed up with too many failed login attempts and appears to have locked us out of the system. It may be a few minutes or hours before another attempt can be made. If you find you can work through a long list of passwords fairly quickly then it is well worth reconfiguring the security settings on your server to block access after fewer failed login attempts: legitimate users may misspell their passwords a couple of times, but there is no reason why anyone should be entering their password incorrectly ten times on the trot. (See Figure 4.) Because online attacks are susceptible to this kind of lockout, hackers try to make their password lists as targeted as possible. If they can find out any information about the owner of a given username (from Web sites such as FaceBook for example) such as a pet's name, it's likely that that will be included in the list. A tool that they may also use is Wyd, a Perl script which is available from http://www.remote-exploit.org/codes_wyd.html Give Wyd the address of a particular Web site and the tool will extract "useful" words that appear on it to add to a password list. The idea is that for reasons of corporate loyalty or whatever, many people use passwords connected to their work, projects they are working on, places they do business, or other bits of information found on the Web site. It's certainly worth using Wyd to create a corporate password list for your organization from your corporate Web site, to use with Hydra to see if anyone is using an easily guessable password. Once again, checking your servers' security is a matter of putting yourself in the position of a genuine hacker. Many will use bots to carry out online attacks - perhaps on your SMTP server to see if they can guess a password and use your server to send out spam. If you use Hydra to successfully guess a password or two then so can the bots. An hour or so finding out what Hydra can come up with is definitely time well spent.
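The article drives Hydra through its GUI, and command-line options vary by version, so rather than guess at Hydra flags, here is a minimal Python sketch of what an online POP3 password check boils down to, for testing an account you own on a server you are authorized to test. The host name, user name, and word-list path are placeholders.

```python
import poplib
import time

HOST = "mail.example.com"      # placeholder: your own POP3 server
USER = "hydratest"             # an account you are authorized to test
WORDLIST = "passwords.txt"     # one candidate password per line

def try_password(password: str) -> bool:
    """Return True if the POP3 server accepts this password."""
    try:
        conn = poplib.POP3(HOST, timeout=10)
        conn.user(USER)
        conn.pass_(password)      # raises poplib.error_proto on a bad password
        conn.quit()
        return True
    except poplib.error_proto:
        return False

with open(WORDLIST, encoding="utf-8") as fh:
    for line in fh:
        candidate = line.strip()
        if not candidate:
            continue
        if try_password(candidate):
            print(f"Weak password found: {candidate!r}")
            break
        time.sleep(1)   # throttle; real servers may lock the account regardless
```

Every guess costs a full network round trip, which is why online attacks are so much slower than offline ones, and why server-side lockout thresholds and alerting matter.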
<urn:uuid:f680ab78-9b71-4adf-a11b-2a2e83fe7aab>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/netsecur/article.php/3745276/Use-Hydra-to-Remotely-Test-Password-Security.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00306-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954238
1,236
2.765625
3
The people behind the Inspiration Mars Foundation -- which on Wednesday announced plans to send a manned spacecraft on a 510-day fly-by mission to Mars -- say this on their website: "We are steadfastly committed to the safety, health and overall well-being of our crew. We will only fly this mission if we are convinced that it is safe to do." Let's hope that's true, because launching humans on such a long and faraway mission into space before we're technologically capable and reasonably certain about the health effects of such a prolonged journey just isn't worth it, at least in my opinion. The foundation, headed by U.S. multimillionaire and first space tourist Dennis Tito, wants to send a two-person crew ( a man and a woman) to Mars in 2018, when a rare planetary alignment would allow for a relatively short round-trip of about 500 days. The craft wouldn't even go into Mars orbit, but instead would fly within 100 miles and then "sling-shot" its way back toward Earth. The problem is, even while the Inspiration Mars Foundation assures it won't go through with the mission if it is unconvinced it would be safe, Tito tells Space.com that the two-person crew essentially are going to be guinea pigs: SPACE.com: What is the scientific value of a manned mission to Mars, if the crew won't be landing on the planet? Tito: At first, I thought this is not a science mission. This is for inspiration; it's a test flight to show we can get there. You're going to learn a lot about the engineering problems.But then as I started learning more about the life sciences, apparently [the benefits] are huge. There hasn't been really any information on human behavior in this kind of environment. The impact of radiation, the isolation — the academics are all very excited. It'd be a huge scientific value in the life sciences. And let's not forget all the other things that happen to the human body in space. A Russian experiment in which participants lived in the equivalent of deep space for 17 months showed that long trips in space can have drastic effects on sleep patterns and fitness. Given that prolonged sitting can be fatal, this is something to think about. Then there's bone loss, heart atrophy, nausea and headaches -- all conditions of modern space travel. While we're at it, let's throw in the recent NASA-supported study reporting that space travel is harmful to the brain and could accelerate Alzheimer's disease. And the "impact of radiation," as Tito puts it, is described in Wikipedia: The potential acute and chronic health effects of space radiation, as with other ionizing radiation exposures, involve both direct damage to DNA and indirect effects due to generation of reactive oxygen species. ...By one NASA estimate, for each year that astronauts spend in deep space, about one-third of their DNA will be hit directly by heavy ions. Thus, loss of critical cells in highly complex and organized functional structures like the central nervous system (CNS) could result in compromised astronaut function, such as changes in sensory perception, proprioception, and behavior or longer term decrements in cognitive and behavioral functions. So you lift off from Earth as a fully functioning human astronaut and you return (if you return) as ... what? I've said it before, and I'll say it again: As eager as I am to see us explore the stars, rushing into it is only going to lead to unnecessary lives lost. I understand exploration requires risk, but it shouldn't require recklessness. But that's just me. What do readers think? 
In our eagerness to go to Mars, are we rushing into disaster?
<urn:uuid:6a27f128-cc7a-401c-b792-3b8fb6e83503>
CC-MAIN-2017-04
http://www.itworld.com/article/2713145/hardware/wanted--2-human-guinea-pigs-for-premature-flight-to-mars.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00086-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95823
766
2.78125
3
Learn how to implement 3D printing to increase student engagement across the K-12 curriculum. Presenter Ryan Erickson, a Minnesota Maker Space coordinator, outlines 3D printing lessons applicable to daily student life. The webinar focuses on one simple question: “How do we apply 3D printing to K-12 education?” The process doesn’t start with expensive machines and complex software applications. Students are introduced to the technology from the bottom up. Simple iOS apps such as MakerBot PrintShop scan student drawings for immediate upload to CAD for 3D printing. Applying 3D printing to classrooms goes beyond engineering in STEM learning – it redefines creativity entirely. Students can model historical monuments into tangible figures to understand sentiment and context; model sonic waves into visible artifacts; build geometric figures to understand volume and surface area; and map proteins and atoms into connectable models. 3D printing engages students to think creatively; it allows them to craft and build with imagination. For teachers, this technology maximizes the opportunity for impactful learning environments.
<urn:uuid:f8c3bd56-52b6-4b68-bd29-fc1d68420449>
CC-MAIN-2017-04
https://www.brighttalk.com/channel/14481/stratasys
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00086-ip-10-171-10-70.ec2.internal.warc.gz
en
0.912248
211
3.953125
4
Questions for the victim: 1. Who has been notified of the intrusion? Is the security division of the local telephone company aware of the intrusion? If the intruder is using the Internet, who is the victim's local Internet service provider? Is that provider aware of the intrusion? 2. Information about the computer system: What type of computer or computer system was accessed without authorization? Is the computer used: as a PBX? to store proprietary information? 3. Telephone connections: Is the computer connected to external phone lines? What are the numbers for each of those lines? Who is able to dial in on the line(s) used by the intruder? Are those lines reserved for maintenance or other use which is not companywide? If reserved for maintenance, determine whether the intrusion is part of normal maintenance activity. 4. Computer security measures in place before the intrusion: Does the system require a "log-on" identification? Does the system require users to enter a password? Are there restrictions on the type of password which users may select? If the password is reasonably secure (i.e., at least six characters long), it is likely the intruder is a current or former employee, knows an employee, or has obtained the password through "social engineering" of a gullible employee. How often is that password changed? Knowing the date on which the password was last changed may narrow your field of suspects. Does the computer allow different levels of access? The level of access gained by the intruder may provide information about the source of your problem, particularly if the intruder is a former employee. How does the computer record when a user logs on and off the system? Make sure that the victim configures the computer to create logs, and stores those logs off the compromised system. If they are stored on the system, they should be encrypted. Is the computer configured to record the commands typed by the intruder? 5. Information about the intrusion. When did the intrusion begin? How many calls has the intruder made? A series of attempts repeated every few seconds suggests a war dialer. Does the intruder always use the same phone line? This pattern suggests that the intruder knows only that number. If employees know more than one number, your intruder may be a cracker (or a lazy employee). Did anyone unsuccessfully attempt to access the computer within 30 days before the first intrusion? Unsuccessful attempts suggest an intruder guessing a password or probing for a security flaw. No prior attempts suggests that the intruder is a current or former employee, or a cracker who obtained the password by social engineering or by intercepting a password sent by an authorized user. For each access: What was the date and time of that call? How long did the call last? What account(s) was/were accessed? Who are the authorized users who have access to that account? (Some companies allocate accounts for particular work groups which anyone in that group may access.) Interviewing the authorized user: Warning: Interviewing a group of employees who share a password may alert any intruder who is a fellow employee to your investigation. Determine whether the authorized user is your intruder. Does the user have a motive to misuse the computer (e.g., a departing employee stealing proprietary information)? 
Ask whether the authorized user disclosed his or her password to anyone, including seemingly authorized users (i.e., social engineering), or displayed the password on scraps of paper taped to the terminal or left in an unlocked drawer. What is the victim's theory concerning how the intruder was able to access the computer? Consider some of the common security "holes": Electronic mail. Specifically the sendmail program which handles electronic mail in most UNIX systems. Telnet. If crackers have compromised the "calling machine," they can record the passwords typed in by users using that computer to call the victim's computer, thus intercepting those passwords for their own use. TFTP and FTP. Owners may inadvertently place password files in these areas and lose them. Crackers may use the anonymous FTP area to penetrate into "the rest" of the computer. Finally, where the owner has allowed outsiders to place files in the anonymous FTP, crackers and others may store stolen data, illegally copied programs and pornography. Network "spoofing." The victim's computer may have been fooled into believing that it is being "called" by another computer on the network. If you suspect an employee (including a user whose account was penetrated), do you have records documenting which employees were at the victim's facility, and what they were doing? What did the intruder do after gaining access? Are there any new files that were not there before the intruder arrived? If the victim does not have computer security experts on staff, suggest that it hire a consultant to check for back doors, trojan horses, viruses, logic bombs, etc. Gather the following evidence as soon as possible (and after each intrusion). 1. All records of the unauthorized access. Again, make sure that your victim keeps those records in a secure area of the computer, preferably encrypted. Also caution the victim not to use the computer to discuss the intrusion (i.e., by e-mail). 2. All records of system activity on the day (or within a few hours) of the access. 3. Backup tapes of the above. Make an exact copy of that data in the form in which it existed in the computer (i.e., onto a backup tape). Make more than one copy if possible. You should also print out that data to have a hard-copy record which you can display at trial. Create evidence of ongoing intrusions. 1. The law usually allows victims to use their computers to track an intruder's activity. Discuss this issue with your victim at the beginning of your investigation. At a minimum, ensure that the computer is configured to "time-stamp" each log-in and log-off for each account. Track damage to the victim. 1. Advise the victim to keep a log of the time employees spend responding to the intrusion. This includes time spent verifying that the intruder did not damage the computer and that the intruder has not left any "trap doors" behind. Track the intruder. 1. Discuss with the victim whether the risk of damage from allowing the intruder to continue his attack on the system is so great that the victim must eject the intruder. Ejecting the intruder will usually end your investigation. 2. If the victim has the capability and inclination to do so, consider creating a "virtual sandbox" inside the victim's computer to contain the intruder. 3. If the intruder is using dial-up lines, obtain a court order allowing a trap and trace. (See below for ideas on what to do when the intruder is using the Internet.) Some states require a search warrant to authorize a trap and trace. 
The victim usually pays for the installation, and you should discuss this issue with the victim before drafting an order. File the order (or search warrant) under seal. 4. Arrange for the telephone company to install the trap and trace. 5. Assuming that your intruder attacks while your trap and trace is operating, match the calls "trapped" by the trap and trace against the logs of the victim's computer. Look for calls occurring at or about the time of the intrusion. (Remember that the computer's system clock may be anywhere from a few seconds to a few minutes "off" from the telephone company computer's system clock.) 6. Continue obtaining trap and trace orders as necessary to trace the intruder to the source of the phone calls. 7. If the intruder is using the Internet, seek assistance from the victim's Internet service provider. It may be able to track the intruder to the computer he is using. Arrange for the victim (or a consultant) to capture and examine the intruder's data packets for source/destination information. 8. Investigate whether the source of the intrusion as reported by the trap and trace or Internet service provider is the actual location of your intruder. Remember that intruders can route their calls through many different phone companies before reaching their target. They can also use accounts owned by others. If the location returned by your trap and trace is an institution (e.g., a company or a university), contact that institution and seek assistance. If it is a residence, obtain records, such as utility bills, identifying the occupants of that residence. Consider checking whether your local school or police department is familiar with a juvenile living in the residence. 9. If the intruder is using dial-up lines, after obtaining the requisite order or search warrant, install a pen register on the location identified by your trap and trace. Use the results to: Confirm that the intruder is using the telephone number(s) identified by your trap and trace. Remember to account for time zones if your intruder is dialing from out-of-state. Determine whether the intruder is using a war dialer (look for dozens or hundreds of calls spaced every few seconds). Identify other computers under assault by your intruder (look for numbers listed dozens or hundreds of times). Identify the intruder's confederates, caches of stolen data, and pirate bulletin boards. Arrest the intruder. 1. Prepare a search warrant for the intruder's location. You may find it easier to draft the warrant if you collect the following information before you begin: Phone numbers for dial-in ports used by the intruder. Passwords to the victim's computer system used by the intruder (make sure that the victim changes those passwords before you file the warrant). The name of the account used by the intruder. Information unique to the victim's computer system which you would expect the suspect to have downloaded to his computer, such as welcoming banners, the name of the victim, and even the name of the victim's computer (if named by its location, such as "Building 4 computer," or by number, such as "Computer X452"). Messages or commands sent by the intruder to the victim's computer system. A description of software or data which you believe the intruder stole from the victim's computer system. 2. Consider whether you will be able to prove which occupant of that location is your intruder (e.g., which sibling or employee). 3. 
When obtaining a description of the residence to include in the search warrant, drive by the residence and look at the telephone line to make sure that it is not connected to an adjacent residence occupied by your intruder. 4. Arrange for a magistrate to sign the warrant. 5. Before serving the warrant, consider: Do you have enough officers to allow the investigating officer to interview the suspect (after providing appropriate Miranda warnings)? Are you better off serving the warrant when the suspect is not at home? If you are planning to "turn" the suspect into an informant, and are going to serve your warrant when he is not at home, determine his whereabouts in advance. 6. During the search, do not ignore the following items which may appear in plain view: Printouts containing phone numbers, credit card numbers, or any string of numbers which may be access codes. Also look for names of bulletin boards (BBSs) which may reveal data caches. Pads of paper. In addition to strings of numbers and BBSs, look for passwords. Evidence identifying the user of the computer (i.e., your intruder). Look for names inside manuals, or on labels affixed to floppy disks. Evidence of confederates. Magazines relating to cracking (e.g., 2600). Computer manuals for the computer used by your victim. 7. Subject to Miranda, interview the suspect. Ask him whether the computer you find on the premises is rigged. If you are going to use your suspect to cooperate in investigating his friends, secure his cooperation immediately. A long delay (more than a day) before your "turned" suspect returns "online" may warn confederates that he is no longer their ally. Kenneth S. Rosenblatt is a prosecutor in the Office of the Santa Clara County, Calif., District Attorney, and this checklist is excerpted from his book, High-Technology Crime: Investigating Cases Involving Computers (KSK Publications, 1995, 603 pp. plus diskette, $69.95; call 408/296-7072 for more information). The book offers step-by-step instruction in investigating cases involving computers and searching, seizing and analyzing evidence stored within computers.
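Step 5 under "Track the intruder" above notes that the victim's system clock and the phone company's clock rarely agree exactly. A minimal sketch of that matching step, with invented timestamps and a configurable tolerance window for clock skew, might look like this.

```python
from datetime import datetime, timedelta

# Hypothetical example data: call times reported by the trap and trace,
# and intrusion log-in times recorded by the victim's computer.
trap_calls = [datetime(1995, 3, 14, 2, 11, 5), datetime(1995, 3, 15, 1, 58, 40)]
intrusion_logins = [datetime(1995, 3, 14, 2, 13, 0), datetime(1995, 3, 16, 4, 2, 10)]

TOLERANCE = timedelta(minutes=5)   # allow for clock skew between the two systems

def matches(call: datetime, login: datetime) -> bool:
    return abs(call - login) <= TOLERANCE

for call in trap_calls:
    for login in intrusion_logins:
        if matches(call, login):
            print(f"trap-and-trace call at {call} matches log-in at {login}")
```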
<urn:uuid:027b5af0-cfa4-45e0-a1ea-6ec152d2c8da>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/Computer-Intrusion-Checklist.html?page=3
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00023-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933823
2,612
2.875
3
First year engineering students at Dalhousie University received something extra when they started last year – a BlackBerry PlayBook. Professors took advantage of the new teaching opportunities created when every student is connected. “Given its ultra portability and power, the PlayBook has given the students the ability to interact with professors at an unprecedented level,” explains Dalhousie Professor George Jarjoura. Having every student using a PlayBook opened up new ways to teach. Students could follow along with course presentations on their PlayBooks, annotating slides with their own notes. Equally as important, given the size of first-year lectures, was the ability to ask questions and take quizzes electronically, in real time. “For some first year students, participation during the question period is very intimidating, especially in large classes with hundreds of students,” says Professor Jarjoura. “With the PlayBooks, students could ask questions, submit answers and interact with me in real time, as I lecture.”
<urn:uuid:fd996f20-129d-46c2-9d21-50924384bd53>
CC-MAIN-2017-04
http://blogs.blackberry.com/2013/05/playbook-tablets-are-changing-the-way-first-year-dalhousie-engineering-students-learn/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00509-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964797
212
2.53125
3
Energy measurement provides hard data on how IT energy costs impact the bottom line, which fuels the business case for exploring Green IT initiatives. Energy measurement also provides the data necessary to make strategic business decisions. You can’t manage what you don’t measure. If you are already measuring energy usage, this tool provides a means for tracking results over time and determining the associated carbon emissions. If you aren’t measuring yet, use this tool to get started. Specifically, use this tool to do the following: - Calculate total energy usage and costs for specific assets (e.g. servers, cooling system) to identify areas for improvement. - Determine carbon emissions based on energy usage. - Track and report energy and carbon emissions year-over-year to assess progress. As more regions start to introduce carbon taxes and regulations based on emissions, knowing your carbon footprint is critical to assessing how far you will need to go to meet government requirements.
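As a rough illustration of the kind of arithmetic such a tool performs (the wattage, utility rate, and grid emission factor below are placeholder values, not figures from the tool), consider:

```python
def annual_energy_footprint(avg_watts: float, hours_per_year: float = 8760,
                            rate_per_kwh: float = 0.12,
                            kg_co2_per_kwh: float = 0.4) -> dict:
    """Estimate yearly energy use, cost, and emissions for one asset.

    rate_per_kwh and kg_co2_per_kwh depend on your utility and regional grid;
    the defaults here are illustrative only.
    """
    kwh = avg_watts * hours_per_year / 1000.0
    return {
        "kwh": round(kwh, 1),
        "cost": round(kwh * rate_per_kwh, 2),
        "kg_co2": round(kwh * kg_co2_per_kwh, 1),
    }

# Example: a server drawing 450 W on average, running year-round.
print(annual_energy_footprint(450))
# {'kwh': 3942.0, 'cost': 473.04, 'kg_co2': 1576.8}
```

Summing figures like these across servers, storage and cooling gives the asset-level totals the tool tracks year over year.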
<urn:uuid:c03bc72c-51a7-4d1c-83c5-fddb95d68d2e>
CC-MAIN-2017-04
https://www.infotech.com/research/it-energy-carbon-calculator-and-tracking-tool
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00325-ip-10-171-10-70.ec2.internal.warc.gz
en
0.895529
197
2.53125
3
Software Defined Networking, or SDN, may finally help us create a discipline in the networking space the same way it happened in the systems domain, especially software programming with its definition of abstractions, interfaces and modularity. It is interesting to understand what this change is, and why it is needed. While service providers made architectural changes in their networks in the last few years to move to all-IP in the expectation of a more open ecosystem of applications, the reality is that the networks have become more complex, with an abundance of protocols and control plane algorithms and a network that is very difficult to change. The real need is to have a secure network that provides mobility to the users to access the internet and its associated world of applications. The need is to leverage the advancements in computing power, routing protocols, access mechanisms and availability of the rich open source community to create a more open ecosystem that allows rapid changes and innovation to happen, while control moves to centralized software, reducing hardware costs and power while being easy to manage. SDN is precisely defined by three abstractions – distribution, forwarding and configuration. In networking terms, SDN shields the control mechanism from state distribution, providing a “Global Network View” or automated network graph through an API (Application Programming Interface) with a centralized control function. The control program has an abstracted view of the network (called the “Abstract Network Model”) specifying the desired behavior, which in turn is compiled to the underlying topology by a virtualization layer (“Network Virtualization”), and the “Network Operating System” transmits these settings to the physical boxes. Finally, the forwarding abstraction provides CPU abstraction for the management plane (smart but slow) and ASIC abstraction (fast but dumb) with an open interface to the underlying hardware, often achieved in the form of OpenFlow specifications from the ONF (Open Networking Foundation). In simple terms, SDN provides a network transition from vertically-integrated, closed, proprietary, slow-innovation systems to horizontal, open interfaces and rapid innovation. OpenFlow is one of the standards that separate the control plane from the data plane with an open API to the black box networking node, which could be either an L2/L3 switch or a router. OpenFlow is based on an Ethernet switch (or “OpenFlow Switch”) which has an internal Flow Table (for the data path) and a standardized interface toward a controller which programs (control path) the Flow entry in the Flow Table. According to the InformationWeek 2012 SDN Survey (July 2012), a more efficient and flexible network that speeds service delivery and cost savings on hardware are the top two market drivers for SDN adoption. North America continues to be the biggest market for SDN solutions, though APAC will experience increased market traction over the next five years to become the biggest SDN market globally. According to Gartner, SDN stands among the top critical IT trends for the next five years, and MarketsandMarkets predicts the global SDN market to grow from $198 million in 2012 to $2.10 billion in 2017 at a CAGR of 60.43%.
From a service provider standpoint, Infonetics Research emphasizes simplified network provisioning while from an OEM perspective, WCP Research mentions the deteriorating gross margin for hardware-based networking products in data centers as well as the need for lower cost “virtual” switching and routing functions. It is just a matter of time before OEMs implement OpenFlow via embedded functionality on merchant silicon. From an enterprise perspective, ONF trends clearly indicate SDN allowing network resources to be allocated in a more elastic fashion, enabling rapid provisioning of cloud services and more flexible hand-off to the external cloud provider. While the industry gears up for this next revolution called SDN, there are many challenges which need to be dealt with. A few key of them include definition of mature standards covering the depth and breadth of control, data and management aspects and the mutual cooperation of leading players/vendors to evolve and eventually support higher interoperability between different platforms. We view network programmability, open interface, virtualization and interoperability to be the key engineering imperatives of SDN for OEMs and service providers with opportunity to provide solutions that can “accelerate” product launch, add “value” to the eco-system by providing seamless management interface to the cloud and on-premise deployments and test “Technology” co-existence and interoperability. REST API enablement of platforms or Northbound API for business domain abstraction that enables integration with a higher order management system are examples where a solution accelerator can help an OEM reduce development time and cost to accelerate SDN adoption. For more on network engineering services, visit us today!.
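As a rough, vendor-neutral sketch of the forwarding abstraction described earlier (match fields programmed by a controller, actions applied by the switch), not the actual OpenFlow wire protocol, the flow-table idea can be expressed in a few lines of Python; the field names and action strings here are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class FlowEntry:
    match: dict          # e.g. {"in_port": 1, "dst_mac": "aa:bb:cc:dd:ee:ff"}
    actions: list        # e.g. ["output:2"]
    priority: int = 0

@dataclass
class FlowTable:
    entries: list = field(default_factory=list)

    def install(self, entry: FlowEntry) -> None:
        """Control path: the controller programs a flow entry into the switch."""
        self.entries.append(entry)
        self.entries.sort(key=lambda e: e.priority, reverse=True)

    def lookup(self, packet: dict) -> list:
        """Data path: the first (highest-priority) matching entry decides the actions."""
        for entry in self.entries:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.actions
        return ["send_to_controller"]   # table miss: punt to the controller

table = FlowTable()
table.install(FlowEntry({"dst_mac": "aa:bb:cc:dd:ee:ff"}, ["output:2"], priority=10))
print(table.lookup({"in_port": 1, "dst_mac": "aa:bb:cc:dd:ee:ff"}))  # ['output:2']
print(table.lookup({"in_port": 1, "dst_mac": "11:22:33:44:55:66"}))  # ['send_to_controller']
```

The "smart but slow" control path populates the table; the "fast but dumb" data path only does the lookup, which is the separation OpenFlow standardizes.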
<urn:uuid:aa84db30-4b73-4dab-b3d0-429b58cd4794>
CC-MAIN-2017-04
https://www.hcltech.com/blogs/sdn-%E2%80%93-not-just-random-good-idea-here-stay%E2%80%A6
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00051-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915443
970
2.84375
3
Earlier this month, NBC news reporter Richard Engel created a stir by attempting to show how easy it is to get hacked in Russia. He reported that his phone was hacked while accessing WiFi at a coffee shop within 24 hours of arriving in Moscow. Visitors to the 2014 Sochi Winter Games may be left wondering if they, too, are vulnerable. The answer is, to a degree, yes. According to this CBS News article, the chances of encountering malicious software in Russia last year were a staggering 63 percent, versus a mere four percent in the U.S. But encountering malware and getting hacked are not the same thing. Malware, or malicious software, works by disguising itself as a benign application or program that asks a user for certain permissions. These permissions include access to personal data that can then be used for purposes of identity theft and other cybercrimes. However, if the user identifies the download as malicious and does not grant it access to their data, they remain safe. So what does this mean for international guests at the Olympics? Just be smart. Don’t download suspicious files or enter login credentials on an untrusted website. Following the same precautions you would at home should be enough to avoid hacking. And by using Keeper, you can keep your identity safe at home or abroad.
<urn:uuid:5ffd6d30-a5b6-4152-bd9d-75ddd35084b8>
CC-MAIN-2017-04
https://blog.keepersecurity.com/2014/02/22/cybersecurity-at-sochi-2014/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00225-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938327
265
2.625
3
Lascelles B.G.,BirdLife International Cambridge UK | Taylor P.R.,Center for Conservation Science | Miller M.G.R.,James Cook University | Dias M.P.,BirdLife International Cambridge UK | And 8 more authors. Diversity and Distributions | Year: 2016 Aim: Enhanced management of areas important for marine biodiversity are now obligations under a range of international treaties. Tracking data provide unparalleled information on the distribution of marine taxa, but there are no agreed guidelines that ensure these data are used consistently to identify biodiversity hotspots and inform marine management decisions. Here, we develop methods to standardize the analysis of tracking data to identify sites of conservation importance at global and regional scales. Location: We applied these methods to the largest available compilation of seabird tracking data, covering 60 species, collected from 55 deployment locations ranging from the poles to the tropics. Methods: Key developments include a test for pseudo-replication to assess the independence of two groups of tracking data, an objective approach to define species-specific smoothing parameters (h values) for kernel density estimation based on area-restricted search behaviour, and an analysis to determine whether sites identified from tracked individuals are also representative for the wider population. Results: This analysis delineated priority sites for marine conservation for 52 of the 60 species assessed. We compiled 252 data groupings and defined 1052 polygons, between them meeting Important Bird and Biodiversity Area criteria over 1500 times. Other results showed 13% of data groups were inadequate for site definition and 10% showed some level of pseudo-replication. Between 25 and 50 trips were needed within a data group for data to be considered at least partially representative of the respective population. Main conclusions: Our approach provides a consistent framework for using animal tracking data to delineate areas of global conservation importance, allowing greater integration into marine spatial planning and policy. The approaches we describe are exemplified for pelagic seabirds, but are applicable to a range of taxonomic groups. Covering 4.3% of the oceans, the sites identified would benefit from enhanced protection to better safeguard the threatened species populations they contain. © 2016 John Wiley & Sons Ltd. Source
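As a rough illustration of the kernel density step summarized in this abstract, the sketch below smooths a handful of tracking locations with a fixed bandwidth h and evaluates relative use over a grid. The coordinates, the h value and the use of scikit-learn's KernelDensity are assumptions for illustration only; the study itself derives species-specific h values from area-restricted search behaviour and applies further representativeness tests.

# Toy kernel density estimate over projected tracking locations (illustration only).
import numpy as np
from sklearn.neighbors import KernelDensity

# Hypothetical projected foraging locations (x, y) in kilometres for one data group.
locations = np.array([[12.1, 4.3], [12.4, 4.1], [11.9, 4.6], [15.0, 7.2], [14.8, 7.5]])

h = 1.5  # smoothing parameter in data units; the paper selects this per species
kde = KernelDensity(kernel="gaussian", bandwidth=h).fit(locations)

# Evaluate relative density over a grid; high-use cells would feed into site delineation.
xs, ys = np.meshgrid(np.linspace(10, 16, 60), np.linspace(3, 9, 60))
grid = np.column_stack([xs.ravel(), ys.ravel()])
density = np.exp(kde.score_samples(grid)).reshape(xs.shape)
print("peak relative density:", density.max())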
<urn:uuid:3f21874a-df06-4e16-8e4b-55758f7e455e>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/birdlife-international-cambridge-uk-1804764/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00345-ip-10-171-10-70.ec2.internal.warc.gz
en
0.86837
445
2.796875
3
Achard F.,European Commission - Joint Research Center Ispra | Stibig H.-J.,European Commission - Joint Research Center Ispra | Eva H.D.,European Commission - Joint Research Center Ispra | Lindquist E.J.,Forest Assessment | And 3 more authors. Carbon Management | Year: 2010 This article covers the very recent developments undertaken for estimating tropical deforestation from Earth observation data. For the United Nations Framework Convention on Climate Change process it is important to tackle the technical issues surrounding the ability to produce accurate and consistent estimates of GHG emissions from deforestation in developing countries. Remotely-sensed data are crucial to such efforts. Recent developments in regional to global monitoring of tropical forests from Earth observation can contribute to reducing the uncertainties in estimates of carbon emissions from deforestation. Data sources at approximately 30 m × 30 m spatial resolution already exist to determine reference historical rates of change from the early 1990s. Key requirements for implementing future monitoring programs, both at regional and pan-tropical regional scales, include international commitment of resources to ensure regular (at least yearly) pan-tropical coverage by satellite remote sensing imagery at a sufficient level of detail; access to such data at low-cost; and consensus protocols for satellite imagery analysis. © 2010 Future Science Ltd. Source Eberenz J.,Wageningen University | Herold M.,Wageningen University | Verbesselt J.,Wageningen University | Wijaya A.,Center for International Forestry Research | And 5 more authors. 2015 8th International Workshop on the Analysis of Multitemporal Remote Sensing Images, Multi-Temp 2015 | Year: 2015 This study predicts global forest cover change for the 1980s and 1990s from AVHRR time series metrics in order to show how the series of consistent land cover maps for climate modeling produced by the ESA climate change initiative land cover project can be extended back in time. A Random Forest model was trained on global Landsat derived samples. While the deforestation was underestimated by the model, major global patterns were effectively reproduced. Compared to reference data for the Amazon satisfying accuracies (>0.8) were achieved, but results are less promising for Indonesia. © 2015 IEEE. Source Sanou H.,Sotuba BP 258 | Sanou H.,Copenhagen University | Angulo-Escalante M.A.,Investigacion en Alimentacion y Desarrollo A.C. | Martinez-Herrera J.,Jatro Bio Energy and Oilseeds SPR de RL | And 7 more authors. Crop Science | Year: 2015 Jatropha curcas L. has been promoted as a “miracle” tree in many parts of the world, but recent studies have indicated very low levels of genetic diversity in various landraces. In this study, the genetic diversity of landrace collections of J. curcas was compared with the genetic diversity of the species from its native range, and the mating system was analyzed on the basis of microsatellite markers. The genetic diversity parameters were estimated, and analysis of molecular variance, principal coordinate analysis, and unrooted neighbor-joining tree were used to describe the relationship among populations. Results confirmed very low genetic diversity in African and Asian landraces. Mexican populations from the regions of Veracruz, Puebla, and Morelos were also found to have low levels of diversity (mostly monomorphic), while populations from Chiapas were polymorphic with an expected heterozygosity between 0.34 and 0.54. 
Bayesian analysis showed differentiation according to geographic locations, which was confirmed by principal coordinate analysis and neighbor-joining tree. Estimations of outcrossing rate of individual families from Chiapas showed that some mother trees were mainly outcrossing. Mating system could not be estimated in the landraces from Mali and populations from Veracruz, Puebla, and Morelos (Mexico), as these were highly monomorphic. The observed low level of genetic diversity in some of the populations and landraces suggests that breeding programs should test for genetic variation and heritability in relevant quantitative traits and estimate if sufficient gain can be expected from traditional testing and selection. Diversification of the local gene pools may be considered for breeding and selection. © Crop Science Society of America. Source Kaeslin E.,Forest Assessment Unasylva | Year: 2010 The article presents an overview of conservation issues affecting the successful coexistence of forests, people, and wildlife. Forest wildlife likewise offers both products and ecosystem services. Forests and wildlife together offer a basis for commercial and/or recreational activities like hunting, photography, hiking and birdwatching. There are two main drivers behind these threats. The increasing consumption of wealthier populations, which stimulates agricultural and industrial production, resource extraction, and tourism, leads to degradation of forests. As a result of faunal depletion, the remaining primary tropical and subtropical forests, which still provide good habitat for wild animals, are widely becoming empty of large vertebrate. The Convention on Biological Diversity (CBD) Liaison Group on Bushmeat defines bushmeat hunting as the harvesting of wild animals in tropical and subtropical forests for food and non-food purposes. Source Potapov P.,South Dakota State University | Hansen M.C.,South Dakota State University | Gerrand A.M.,Forest Assessment | Lindquist E.J.,Forest Assessment | And 3 more authors. International Journal of Digital Earth | Year: 2011 To collect and provide periodically updated information on global forest resources, their management and use, the United Nations Food and Agriculture Organization (FAO) has been coordinating global forest resources assessments (FRA) every 5-10 years since 1946. To complement the FRA national-based statistics and to provide an independent assessment of forest cover and change, a global remote sensing survey (RSS) has been organized as part of FAO FRA 2010. In support of the FAO RSS, an image data set appropriate for global analysis of forest extent and change has been produced. Landsat data from the Global Land Survey 1990-2005 were systematically sampled at each longitude and latitude intersection for all points on land. To provide a consistent data source, an operational algorithm for Landsat data pre-processing, normalization, and cloud detection was created and implemented. In this paper, we present an overview of the data processing, characteristics, and validation of the FRA RSS Landsat dataset. The FRA RSS Landsat dataset was evaluated to assess overall quality and quantify potential limitations. © 2011 Taylor & Francis. Source
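The Eberenz et al. abstract above mentions training a Random Forest on Landsat-derived samples to predict forest cover change from AVHRR time-series metrics. The sketch below only illustrates that generic train-and-predict pattern with made-up arrays; the feature values, sample sizes and labels are placeholders, not the authors' data.

# Minimal Random Forest train/predict pattern (placeholder data only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((200, 6))              # e.g. per-sample time-series metrics
y_train = rng.integers(0, 2, 200)           # 1 = forest loss, 0 = stable (Landsat-derived labels)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

X_new = rng.random((10, 6))                 # metrics for unlabeled locations
print(model.predict(X_new))                 # predicted change class per location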
<urn:uuid:f7334898-8a9a-4fce-9154-7c8d716b8fb4>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/forest-assessment-147033/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00069-ip-10-171-10-70.ec2.internal.warc.gz
en
0.905644
1,365
2.59375
3
Even though the U.S. hasn’t suffered an attack on manufacturing, power production or public transit, the risk is growing. Indeed, the number of ICS-related cybersecurity incidents reported to U.S. authorities rose 20 percent in the last year. ICS solutions and protocols were originally designed to work within isolated environments. They monitor and control industrial processes in critical infrastructure sectors such as electric grids and water treatment facilities, as well as in heavy industry. As more organizations connect their infrastructures to the Internet, companies are retrofitting this older equipment to work in modern networked environments. The concern is that many of these systems were designed and installed before the emergence of the commercial Internet. What often results is a hodgepodge of modern and legacy elements where cybersecurity falls through the cracks. Indeed, a recent report on publicly accessible ICS hosts found that 91 percent of public-facing ICS components were remotely exploitable. Cybercriminals would then be free to carry out attacks against control system protocols by modifying packets in transit, or worse. Something old, something new ICS vulnerabilities were highlighted when cyberintruders manipulated the access systems at a German steel mill in late 2014 and prevented managers from shutting down a blast furnace. The breach resulted in what investigators would later describe as "massive" damage. It seems that the intruders launched the attack by sending a spear-phishing email that executed malicious code on an employee’s computer to gain access to the control systems. Despite their spectacular nature, ICS attacks aren’t unique. They represent many of the familiar challenges that security executives now face as they navigate a sometimes fraught transition updating legacy infrastructure to join the Internet of Things. As they get connected to the Internet, IT and ICS networks are going to become increasingly intermingled. Even if they can’t prevent every cyberattack, operators of critical infrastructure can still reduce their exposure by doing the basics, recognizing that industrial control systems face many of the same cybersecurity threats that also target corporate networks. Recommendations for some of the basic risks follow. Third parties. One of the blunt realities of the IoT era is the growing security risk posed by third parties. Partners can no longer be automatically trusted, and system security often depends on the security hygiene of the weakest member of the supply chain network. That’s why the Cyber Emergency Response Team suggests that peer links be restricted behind firewalls to specific hosts and ports. Firewalls should also separate the business LAN from the control system LAN. Patch management. Another weakness in ICS security should be easy to fix. Patch management of ICS software for critical infrastructure has paradoxically been found to be inconsistent at best and nonexistent at worst. Secure networks. If industrial control systems can’t run in a physically isolated environment, organizations should at least surround them with controls and then monitor network security for any communications abnormalities. Organizations can further insulate their infrastructure by reducing the number of remote connections available to employees. Clearly, industrial control systems pose particular security challenges. 
But adopting these and other common-sense, risk-focused approaches can go a long way toward managing the risks. Charles Cooper has covered technology and business for the past three decades. All opinions expressed are his own. AT&T has sponsored this blog post.
<urn:uuid:61f78bcf-3c3e-4aac-b5a1-e7f1f80e68a4>
CC-MAIN-2017-04
http://www.csoonline.com/article/3124766/internet-of-things/where-manufacturers-could-lose-cybercontrol.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00307-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95664
687
2.75
3
Chris Cline on Wednesday, February 1, 2012 What is PHI and ePHI? PHI stands for Protected Health Information. It includes any information that identifies an individual, i.e. includes either the individual's name or any other information that could enable someone to determine their identity and relates to at least one of the following: - The past, present or future payments for health care services; - The provision of health care to the individual; - The individual's past, present or future physical or mental health. ePHI stands for Electronic Protected Health Information. ePHI is all Protected Health Information which is stored, accessed, transmitted or received electronically. Where can ePHI be found? There are many places on a network where ePHI can be found. A few locations such as servers, workstations, laptops, iPads and email are at the forefront when thinking about ePHI, but there are many other possible locations which are also important to take into consideration. In addition to locations mentioned above, ePHI can also be found on smartphones, phone systems, in the form of recorded calls or voicemails, faxes, removable media such as USB keys, CD and DVDs, backup tapes, external hard drives, etc., and even multifunction devices. Why is it important to know where my ePHI is? Knowing where ePHI exists on your network is a critical step in avoiding a breach of information. What can I do to locate ePHI on my network? Collect an Inventory of Your Computing Infrastructure An inventory of hardware and software can provide a clear picture of the potential locations for ePHI. Implement a Data Loss Prevention Product Data Loss Prevention (DLP) products can scan servers, workstations and laptops/tablets for ePHI. Most of these products have policies that perform certain actions when ePHI is found on a device. The most common actions for these products are to report, destroy or encrypt. Perform a Security Risk Assessment A security risk assessment can help discover any gaps that could potentially create a breach. Security risk assessments should be performed any time when major system changes occur in your infrastructure, as well as on a recurring basis with the schedule being determined by the outcome of previous risk assessments. Implement Policies and Procedures Create and implement written policies that determine where ePHI is allowed to exist. Communicate these policies to your staff as part of your regulatory compliance training. Use security risk assessments, data loss prevention products, system inventories or other automated systems to audit that these policies are being followed. Chris Cline is a Senior Sales Engineer at mindSHIFT Technologies, Inc., based in our Morrisville, NC office.
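As a rough illustration of what the simplest form of a data loss prevention scan does, the Python sketch below walks a directory and flags files containing patterns shaped like U.S. Social Security numbers. The directory path and the single regular expression are hypothetical; real DLP products use far broader pattern sets, contextual validation, and policy actions such as encryption or quarantine.

# Toy DLP-style scan: flag files containing SSN-like patterns (illustration only).
import os
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # naive pattern; real products validate context

def scan_directory(root):
    findings = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as handle:
                    for lineno, line in enumerate(handle, 1):
                        if SSN_PATTERN.search(line):
                            findings.append((path, lineno))
            except OSError:
                continue   # skip unreadable files rather than failing the scan
    return findings

for path, lineno in scan_directory("/data/shared"):    # hypothetical file share to audit
    print("Possible identifier in", path, "at line", lineno)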
<urn:uuid:206d06f1-cf9f-4911-94c6-24fa5a69fdf0>
CC-MAIN-2017-04
http://www.mindshift.com/Blog/2012/February/Do-you-know-where-your-ePHI-is.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00335-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940403
570
3.015625
3
We may think that stopping the spread of malicious software is the job of security system vendors. There are, however, a number of things we can do to prevent malware from spreading and causing damage. Let’s look at a few simple ones. The inadvertent source – website owners Do you own or run a website? Do you develop websites for friends, family or clients? Are you sure that any of these websites you own or manage do not host malicious content? Hackers often use websites that are not maintained or are configured incorrectly as a host for malware. As a result, you may be inadvertently giving your Internet readers more than they bargained for – or wanted in the first place. How can you prevent this from happening? There are several steps to take: 1. Always use the latest version of the content management system (CMS). Whether you are using OpenCMS, Joomla, WordPress, Drupal, Magento, DotNetNuke, Kentico, or any other popular CMS, ensure it is fully patched and updated so that vulnerabilities are kept to a minimum. 2. Use the latest version of third party plug-ins (such as forums, shopping carts, newsletters, templates). Just like your CMS, plug-ins may have vulnerabilities. By running the latest versions, you greatly reduce the risk these vulnerabilities can be exploited. 3. Ask your host to help you secure your website. 4. Use Google WebMaster Tools to monitor your website’s health. Google WebMaster Tools will advise you immediately if your website is infected with malware. 5. Do not make use of pirated content management systems, templates, plug-ins, or anything coming from unreliable sources. These may carry malicious code and the price you pay is often far higher than if you had to purchase the original software. Websites using old software are a primary source of infection on the web. Vulnerabilities in old software versions are exploited to infect visitors to your website. At times, even opening a website is enough for a machine to be infected – no download or user interaction is necessary. There are several tools that facilitate the exploitation of these security loopholes. The middleman – IT admins IT admins have many tools at their disposal to ensure safe and secure browsing for users. Traditional security mechanisms, such as firewalls and anti-virus software, whilst important, are simply not enough. Let’s look at how these can help: 1. Use a corporate anti-virus solution to protect all your endpoints 2. Use Web security software to block security threats before they reach your users 3. Use anti-spam and email security software 4. Use vulnerability assessment and patch management software to keep all software updated and patched. All these solutions are available in different delivery models – on-premise, cloud or hybrid. The best solution is that which fits your needs and IT environment. Computer users can also take steps to ensure they are not the victim of a malware attack. First, make sure that potentially vulnerable computer software is updated; closing holes makes it harder for a threat to cause damage. It only takes a couple of minutes to install software updates. When you are prompted to do so, resist the temptation to click “ignore” or “later”. It takes longer to remove an infection or to format a machine. There are other actions to take too: 1. Enable Windows® updates as these will address commonly exploited bugs on your computer. 2. Enable the Java browser plug-in ONLY if you need it. The Java plug-in is one of the biggest threats to your machine. 
Use the latest browsers, such as Chrome, which will allow you to enable the plug-in if and when necessary. 3. Make sure Adobe Reader and Flash are always updated, and that auto-updates are enabled. 4. Uninstall ALL browser plug-ins you don’t really need, to keep your browser lean and clean. 5. Keep other browser plug-ins updated. If they aren’t, only enable them when you fully trust the website you are visiting. 6. Do not switch off or disable auto-updates on any software, as these exist for a very valid reason. Software vendors provide updates to ensure you have the most stable and secure version of their software. 7. Do not use pirated software, as it is often booby-trapped. Tracking Malware in the Wild Stopbadware.org has created a fun video on malware. It conjures up images of the late Steve Irwin, creating a “Crocodile Hunter-style” explanation of “Tracking Malware in the Wild”.
<urn:uuid:c0da3436-b354-4ae6-85b8-a6828a583a83>
CC-MAIN-2017-04
https://techtalk.gfi.com/stopping-the-spread-of-malware/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00151-ip-10-171-10-70.ec2.internal.warc.gz
en
0.906896
1,008
2.796875
3
As more and more of our lives migrate online, Congress, state legislatures and local governments are grappling with how to address the issues and challenges this presents. The problem is that, by the nature and structure of the Internet itself, content and commerce on the Internet are virtually impossible to regulate. The federal government is better positioned than a state or local government, and even it has a very difficult time. Internet servers and mirror sites located all over the world allow people to operate businesses from whatever jurisdiction they choose. If tightly controlled countries like China and Iran can’t completely stop their citizens from accessing websites they deem objectionable, it is virtually impossible in a free, democratic society like the United States. Internet gambling is an excellent case study of how difficult it is to effectively regulate many aspects of the Internet. Just as Internet poker was gaining in popularity in the early 2000s, Congress passed the Unlawful Internet Gambling Enforcement Act (UIGEA), which it thought would put an end to Internet gambling. But a 2010 study by the respected Internet gaming research and data firm H2 Gambling Capital found that Internet poker was booming despite the law. The study found there were between 10 million and 11 million people playing Internet poker for money in the United States. Leaving aside personal feelings about gambling and whether Internet gambling should be legal, it is virtually impossible to enforce a law that 11 million people are violating in the privacy of their own homes. Between 2006 (when UIGEA passed) and 2011, billions of hands of poker were played, and billions of dollars changed hands. And for every hand of poker played, the companies operating the games took a percentage. But none of that money went to U.S. companies, and no governmental entity in the United States earned any tax revenue from it. Instead of ending Internet gambling in the U.S., UIGEA simply forced it offshore. Companies that were operating in the U.S. market withdrew, and were replaced by companies that were less interested in respecting the law. Places like Gibraltar, the Isle of Man and Alderney became home to multi-million dollar businesses. Their governments and regulators created a friendly regulatory structure and welcomed the businesses. Many of these countries had a very small GDP. Revenue from the new gambling enterprises provided a huge boost in tax revenue, and they had no incentive to enforce U.S. law. In 2011, after years of trying, the United States Department of Justice was able to seize the domain names and shut down the U.S. operations of the world’s two biggest Internet poker companies. But the enforcement actions did not arise from great police and detective work, nor did they come from tightly written, effective statutes. They came from an informant whom the FBI was able to arrest because of a spat between the informant and his employers. According to a survey by Poker Voters of America (disclosure: they are a former client), there were at least 532 Internet poker sites in operation in 2006. The closure of two of those sites, through complete luck, hardly constitutes a triumph of government’s ability to regulate the Internet. And if proof is needed, less than 24 hours later, it was easy to find many sites ready and willing to accept wagers from U.S.-based players. The biggest action that slowed down Internet poker in the U.S. 
was the decision by ESPN to stop accepting advertising from Internet poker companies -- a decision that came in the wake of the Department of Justice’s actions, but was completely voluntary. Fast forward to 2013. Federal law remains unchanged; it is still illegal (at least on paper) to gamble for money over the Internet. Despite that, anyone with a computer and a credit card can be playing Internet poker within a few minutes. For that matter, if blackjack is your game, you can find that too. The same is true with slots, craps, bingo and roulette. You can even play backgammon for money if you want. Gambling has a long history and seems to be about as certain in society as death and taxes. Neither the Internet nor gambling is going away. And when the two are combined, it is a matter of "where" the games will take place, not "if" they will take place. The only question is, will they take place on computer servers owned by foreign companies located in foreign territories? Or will they take place on servers located in the United States that are owned and operated by companies based here? This story was originally published by Techwire.net. (Editor's note: Then-Assemblymember Lloyd Levine was the first state legislator in the country to introduce legislation to legalize Internet gaming at the state level. Since that time, he has served as a consultant in the Internet gaming industry, been a featured panelist, speaker and moderator at many Internet gaming conferences, and is a frequent contributor to gaming publications around the world. In part two of this series, Levine will look at the current efforts by various state legislatures to legalize intrastate Internet poker.)
<urn:uuid:402e1d8d-bf2f-4941-ba50-89a5673ed4bb>
CC-MAIN-2017-04
http://www.govtech.com/internet/Internet-Gaming-Law-vs-Reality-2.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00455-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9746
1,032
2.578125
3
There's a lot of space debris caught within the Earth's orbit. This includes everything from dead satellites to rocket parts, and when two pieces of debris collide, they can break up into even more, even smaller pieces of space trash. We've seen a few concepts for dealing with all this junk, including recycling the dead satellites or capturing them with a giant net, but we are practically blind as to where all this space trash is going. We covered Lockheed Martin's plans to track space junk in the past, but the company just activated its prototype radar system that can monitor our entire sky and track more than 200,000 objects in orbit. Lockheed's ground-based radar system can detect any space object that's larger than 0.8 inches across. So far the system has detected over 20,000 objects. The radar also catalogs, tracks, and predicts the course of each piece of space debris. The radar incorporates solid-state S-band technology, which operates at a higher frequency than the VHF band used by the Air Force Space Surveillance System. Lockheed's radar allows the company to detect much smaller--and many more--objects in space than other systems. The eventual goal of the Space Fence project is to replace the Air Force's aging system that has been in place since 1961. The scientists say the system could dramatically improve our "space situational awareness." The prototype radar would also prove to be extremely useful in protecting the International Space Station and our other working satellites from collisions long before they even happen. Both Lockheed Martin and Raytheon are currently competing to win a contract from the US Air Force for the Space Fence project. The Air Force hopes to award the final production contract within the year, and expects to have the first Space Fence site operational by 2017. Personally, I'm hoping they discover hidden alien probes or spaceships out there... This story, "Lockheed Martin develops a 'space fence' to track orbiting space trash" was originally published by PCWorld.
<urn:uuid:2c34df8f-f0c8-4156-bd55-bb1785f33a7d>
CC-MAIN-2017-04
http://www.itworld.com/article/2730744/security/lockheed-martin-develops-a--space-fence--to-track-orbiting-space-trash.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00144-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931285
466
2.78125
3
There is no such thing as a truly secure password. There are only more secure or less secure passwords. Passwords are currently the most convenient and effective way to control access to your accounts. Most people aren’t aware of the numerous common techniques for cracking passwords: Dictionary attacks: There are free online tools that make password cracking almost effortless. Dictionary attacks rely on software that automatically plugs common words into password fields. So, don’t use dictionary words, slang terms, common misspellings, or words spelled backward. Avoid consecutive keyboard combinations such as qwerty or asdfg. Cracking security questions: When you click the “Forgot Password” link within a webmail service or other website, you’re asked to answer a question or series of questions to verify your identity. Many people use names of spouses, kids, other relatives, or pets in security questions or as passwords themselves. These types of answers can be deduced with a little research, and can often be found on your social media profile. Don’t use traceable personal information in your security questions or passwords. Simple passwords: When 32 million passwords were exposed in a breach last year, almost 1% of victims were using 123456. The next most popular password was 12345. Other common choices are 111111, princess, qwerty, and abc123. Avoid these types of passwords, which are easily guessed. Reuse of passwords across multiple sites: When one data breach compromises passwords, that same login information can often be used to hack into users’ other accounts. Two recent breaches revealed a password reuse rate of 31 percent among victims. Reusing passwords for email, banking, and social media accounts can lead to identity theft. Social engineering: As previously described, social engineering is the act of manipulating others into performing certain actions or divulging confidential information, and can be used as an alternative to traditional hacking. Social engineering can be employed to trick targets into disclosing passwords. One day we will develop a truly secure password, perhaps a cross-pollination of various access control tools such as biometrics, dynamic-based biometrics, image-based access, and multi-factor authentication. In the meantime, protect your information by creating a secure password that makes sense to you, but not to others. Use different passwords for each of your accounts. Be sure no one watches as you enter your password. Always log off if there are other people in the vicinity of your laptop or other device. It only takes a moment for someone to steal or change your password. Use comprehensive security software and keep it up to date to avoid keystroke loggers and other malware. Avoid entering passwords on computers you don’t control, such as at an Internet café or library. These computers may have malware that steals passwords. Avoid entering passwords when using unsecured Wi-Fi connections, such as at an airport or in a coffee shop. Hackers can intercept your passwords and other data over this unsecured connection.
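To put rough numbers behind why dictionary words fail and why length matters, the short calculation below compares a dictionary-attack search space with brute-force search spaces for a few password lengths. The wordlist size and character-set counts are illustrative assumptions, and real attackers also apply mangling rules, so treat the figures as orders of magnitude only.

# Back-of-the-envelope comparison of password search spaces (illustrative numbers).
dictionary_words = 200_000            # a generous wordlist; tools can try these in seconds
lowercase_only = 26
mixed_charset = 26 + 26 + 10 + 32     # lower + upper + digits + common symbols

print("dictionary word: about {:.1e} guesses".format(dictionary_words))
for length in (6, 8, 12):
    print("length {}: lowercase {:.1e} guesses, mixed charset {:.1e} guesses".format(
        length, lowercase_only ** length, mixed_charset ** length))

# A single dictionary word, however long it looks, stays near 2e5 guesses; twelve random
# mixed characters push the search space past 1e23, which is the gap that actually matters.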
<urn:uuid:106895de-b58c-4737-80c3-a968b8d08dcb>
CC-MAIN-2017-04
http://infosecisland.com/blogview/22695-What-Makes-My-Passwords-Vulnerable.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00538-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92975
626
3.09375
3
Early applications of geographic information systems were for land and environmental assessment, and in some regions the technology's reputation remains linked to environmental matters such as managing and monitoring vast tracts of land. But that is the old picture of GIS. Today's systems are used in countless applications that address specific needs in countries throughout the world. GIS now permeates government operations in transportation, health, housing, justice and public safety, even fiscal management. The popularity of desktop computing enabled more ubiquitous uses of GIS and eventually made the systems faster and easier to access, according to ESRI state and local government solutions manager Chris Thomas. Around the globe, the technology was adapted to regional uses that help to fuel everyday government activities. However, 9-11 catapulted GIS to new heights in the United States as mapping and imaging systems were used as emergency management and assessment tools in the aftermath of the terrorist attacks. Bill Gentes, executive director of the Urban and Regional Information Systems Association (URISA), said New York City used GIS to create thousands of maps, which were used throughout the response and cleanup. "Maps went from being forgotten to being one of the most important things," he said. "It's amazing what they were mapping -- rubble piles, hot spots, gas leaks, lidar for heat sensing, measuring shifting piles of rubble -- everything, every facet of the emergency used maps and GIS." (Lidar is light detection and ranging technology that operates on the same general principle as radar.) A Household Tool Still, it is the routine use of GIS tools that supports government operations. In Portsmouth, England, the City Council depends on GIS to monitor asset management, community facilities, average birth weights, crime patterns, emergency planning and employment demographics. Portsmouth's highway management application has proven particularly useful in providing wide-ranging data from disparate sources, according to Jac Cartwright, with the Portsmouth City Council's IT department. "The tools have enabled our city desk to deal with calls on a whole range of issues such as street lighting, abandoned vehicles and missed refuse collections -- normally the domain of individual specialists," he explained. "We also have plans for future Web delivery of much more information about highway works, traffic conditions and local events." With a GIS model of the highway system and associated assets, users are able to see data that is integrated from several sources. For example, they can look at the number of accidents in a particular location and, at the same time, see that location's road conditions, street lighting or historical data. This flexible management system has been so successful that Portsmouth has formed a consortium with other city councils to share the applications. "Portsmouth pioneered the approach and basic information tools but needed more resources to develop the concept further," Cartwright said. "A small but growing cooperative of like-minded authorities have contributed to a communal pot and financed development." In Berlin, Germany, the Sanitation Department uses GIS to manage the pick-up of 500,000 garbage bins and the clean-up of 12,500 miles of streets and sidewalks, in addition to managing winter services over 344 square miles. The GIS maps allow department staff to go from an overview of sections of the city to exact locations and individual objects. 
At the same time, information specific to that site, such as clean-up plans, can be accessed. Gothenburg City, a community of 450 in Sweden, has put GIS to work in dozens of management arenas, including one that examines new home developments to determine how many children will be needing schooling and which classrooms those youngsters will attend. The city was an early adopter of GIS and, since 1991, has built some of its own applications. Consequently, mapping and associated data is used in most departments, from traffic to health,
<urn:uuid:a05dad5f-5344-4f4e-a144-b08307e70ac5>
CC-MAIN-2017-04
http://www.govtech.com/e-government/Picture-the-World.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00354-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959953
769
3
3
Video presentation for this tutorial can be found here. Though firewalls are necessary when your computer is connected to the Internet, they can cause problems when trying to get Internet-aware programs working properly. For example, if you wanted to host a game server on your computer, unless you configure your firewall correctly, outside users would not be able to connect to your server. This is because by default a firewall blocks all incoming traffic to your computer. This causes a problem, because programs that require incoming connections will not be reachable. To fix this, we need to open the specific Internet port that the program expects to receive incoming connections on. This tutorial will cover how to open specific ports for programs, or to open these ports globally, using Zone Alarm. The safest method when opening ports with Zone Alarm is to specify the particular incoming ports you would like allowed on a per-program basis. In this method, you explicitly allow incoming traffic on certain Internet ports for a particular program rather than globally for the whole computer. This is safer, because if we allow a specific port for a program, only that program will be allowed incoming access on the port, rather than any program. A popular question we see in the forums is how to open up ports or port forward in order to get BitTorrent to work with a firewall. For this tutorial we will use as an example how to open up the incoming port for the popular BitTorrent program uTorrent. These methods, though, can apply to any program that requires a specific inbound port to be opened, such as games, web servers, applications, etc. The first step is to determine what port we need to open or port forward to our computer. For uTorrent, we will find this port by going into the uTorrent program's Preferences and writing down the port shown in the Connection screen, as shown in Figure 1 below. Figure 1. uTorrent Port Settings Simply write down the port you see in the Port used for incoming connections field, in our example 15697, and make sure the checkbox for randomizing ports is unchecked. Now we know the port we need to allow incoming access to your computer when uTorrent is running. For any other program, you simply need to determine the port it needs for incoming connections and use that number in the following steps. These ports can generally be found by Googling for the name of the program and the word firewall. For example: VNC firewall To start the process we double-click on the Zone Alarm icon in your task bar to open up the main console screen as shown in Figure 2 below. Figure 2. Main Screen of Zone Alarm Now click on the category in the left navigation bar called Program Control. This will open the program control screen where you can configure how you want Zone Alarm to secure the programs installed on your computer. Once in the Program Control screen, click on the Programs tab and you will see a list of your installed programs and their settings as shown in Figure 3 below. Figure 3. Zone Alarm Program Control Program Listings In the list of programs, find the program we want to allow incoming access for, and select it by clicking once on the program name in the list. In our example, we are looking for uTorrent, so we scroll through the list and click once on utorrent.exe when we find it. We then click on the Options button and in the new screen that opens, click on the Expert Rules tab. This will present you with a screen similar to Figure 4 below. Figure 4. 
Expert Rule Settings for a program in Zone Alarm Click on the Add button to start adding the ports that we want to allow incoming access to. When you click on the Add button you will be presented with a screen where you can enter the rule. In the Name field you must provide a name for the rule, like uTorrent, or you will not be able to continue. In the Comments field enter a description of what this rule is going to do. This screen can be seen in Figure 5 below. Figure 5. Adding a rule in Zone Alarm Still in the same screen, click on the Modify button under the Protocol box and select Add Protocol, and then select Add Protocol again as shown in Figure 5 above. You will now be presented with a new screen where you can enter the specific ports you want to allow access to. Figure 6. Add Incoming Ports Rule in Zone Alarm Select the protocol type, in our example TCP, enter a Description for the rule in the Description field (this is required to save the rule), change Destination Port to Other, and then enter the incoming ports you want to allow in the field to its right. Since we only want to allow the one incoming port, 15697, we enter that port. If there were a range of ports we wanted to allow, we could add the range like 15680-15697. This would allow incoming access for all TCP ports between, and including, 15680 and 15697. When done, simply press the OK button. This will bring you back to the Add Rule page, where you will press the OK button again. You will now be at the Expert Rules page, where you will see your new rule listed. To save and activate this rule, press the Apply button and then the OK button. Now you can close the Zone Alarm console, and Zone Alarm will allow incoming access to these ports when the uTorrent program is running. These steps will work for any other program that requires incoming Internet connections. There are times when you want Zone Alarm to allow system-wide access to certain ports on your computer, rather than on a per-program basis. For example, if you are a developer whose program listens on a particular port, and the name or location of your program keeps changing, then opening the ports your program uses globally makes it easier for you. The steps to accomplish this are very similar to the ones above, but this time you configure these rules via the Firewall screen rather than the Program Control screen. As the steps are essentially the same as the information provided previously, I will just summarize the steps here instead. The ports should now be open globally on your computer and can be used by any program that uses these ports. Now that you know how to allow incoming connections via Zone Alarm, getting programs to work that require incoming access from the Internet should no longer be a problem. Just remember to do your research as to what ports need to be opened for various programs and then simply plug them into your Zone Alarm settings as described above. As always, if you have questions about this tutorial, or about configuring your firewall, feel free to ask about them in the Anti Virus, Firewall and Privacy Products and Protection Methods forum.
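If you want to double-check that your new rule works, a quick test is to see whether the port accepts connections from another computer on your network. The short Python sketch below attempts a TCP connection to the host and port you opened; the address shown is only an example, and the program you opened the port for (uTorrent in this tutorial) must be running for the connection to succeed.

# Quick TCP port check: run this from another computer on your network.
import socket

host = "192.168.1.50"   # example address of the computer running Zone Alarm
port = 15697            # the incoming port you allowed in the expert rule

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)      # don't hang forever if the firewall silently drops packets
try:
    sock.connect((host, port))
    print("Port", port, "on", host, "accepted the connection; the rule is working.")
except OSError as err:
    print("Could not connect to", host, "port", port, "-", err)
    print("Check that the rule is applied and that the program is running.")
finally:
    sock.close()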
<urn:uuid:330012b0-e419-4e11-9ee8-543cf5c09169>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/tutorials/how-to-open-ports-in-zone-alarm-professional/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00262-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915361
1,767
3.125
3
Botnets are flourishing with new packaging, new methods and new business models. ZeroAccess, the world’s fastest-growing botnet, infected millions of computers in 2012, using them to commit large-scale click fraud and Bitcoin (a digital currency) mining. Zeus, which is also a banking trojan, causes millions of dollars in loss to its victims by siphoning money from their online bank accounts. F-Secure Antibot disinfects devices that are infected on a network by guiding the users through a simple self-cleaning process, cutting out the need to call the operator helpdesk. “Anywhere from 6% to 20% of people, depending on the study, still don’t use antivirus software,” says Mikko Hypponen, Chief Research Officer at F-Secure. “These computers are the ones most likely to get infected – a problem for operators wanting a clean network. Antibot helps solve this problem because it works whether or not the computer has antivirus.” A bot (short for robot) is a malware-infected PC or device that is remotely controlled by cybercriminals, and a botnet is a whole network of those infected devices. Cybercriminals can use the device to make money by sending spam, displaying and clicking ads or in the case of a smartphone, sending text messages to premium numbers. Or they can take the device hostage, requiring a ransom to be paid before ceding control. Criminals also use botnets to launch DDoS attacks that bring down organizational websites. Typically, users don’t even realize their computer is part of a botnet, says Hypponen. “You’re living your life and meanwhile, your computer is part of an army of zombies, carrying out the orders of cybercriminals.” Botnets can impact device and Internet performance, slowing down connections and affecting usability. They also pose a risk to consumer privacy. Private credentials like passwords can be stolen, giving access to online bank accounts, social media accounts, and other personal data. Operators are significantly affected by the burden of botnets. Helpdesk call volumes increase when customers experience slow connections or other problems, and infected devices that send spam take up bandwidth that slows down the network for everyone. By cleaning infected devices and restoring their performance, Antibot’s automated cleaning capabilities will turn a negative user experience into a positive one, and the reductions in volume and length of calls to operator support will result in considerable savings for operators. Minimized bandwidth-hogging by botnets will reduce unnecessary load on network infrastructure. Antibot will work across platforms, supporting Windows, Android and later this year OS X. A smooth user experience with few interaction steps keeps Antibot light for the consumer, and its “always up-to-date” status and capability to remove complex malware make it the most effective product of its kind. Antibot is co-brandable and the end user messaging is fully customizable.
<urn:uuid:6f1a6207-6570-4334-9798-dd05235a02f3>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2013/02/19/antibot-network-based-botnet-removal-tool/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00080-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929385
624
3
3
Michigan Governor Rick Snyder signed a law that defines how self-driving cars can be used on public roads in testing and commercial deployment, the Michigan Economic Development Corp. said in a statement. The law allows public road testing of vehicles without steering wheels, gas or brake pedals or any need for human control. It lets auto and tech companies operate driverless ride-sharing services and also lays out rules for how self-driving cars can be sold to the public once the technology has been tested and certified. “Michigan is the global center for automotive technology and development,” Snyder said in the statement. “By establishing guidelines and standards for self-driving vehicles, we’re continuing that tradition.” Michigan business leaders and politicians are keen to keep Detroit at the center of automaking as Silicon Valley heavyweights such as Alphabet Inc.’s Google, Apple Inc. and Uber Technologies Inc. are accelerating research into robot rides. The state is developing a 335-acre (136-hectare) testing facility for driverless cars on the site of a World War II bomber factory, and the University of Michigan has opened a proving grounds for such vehicles on its campus. U.S. regulators also have proposed rules for testing and deploying driverless autos. In preparing the legislation, Michigan lawmakers received input from General Motors Co., Ford Motor Co., Fiat Chrysler Automobiles NV, Toyota Motor Corp., Google, Uber and Lyft Inc., according to the economic development agency, which is financed by public and private funding. The companies “helped inform the final legislation” so that “any new policy would not impact the autonomous vehicle industry’s ability to evolve,” the agency said. “By creating a more in-depth framework for how self-driving vehicle technology can be researched, tested and used, we’re building a structured plan that takes into account the needs of private industry,” said Steve Arwood, the agency’s chief executive officer.
<urn:uuid:e33f8003-411a-438b-9f05-953c54d9511d>
CC-MAIN-2017-04
http://www.ioti.com/transportation/michigan-enacts-first-law-testing-sale-driverless-cars
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00528-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937106
408
2.578125
3
This blog post is also featured as a guest blog post on the WEF blog. The UK government recently admitted that foreign states hacked and attacked its national infrastructure, including the country’s gas, water and electricity supplies. National defense systems have also been targeted by cyberwar; the most well-known instance is Stuxnet, malware designed to sabotage Iran’s nuclear program, discovered back in 2010. In the US, the Navy’s internal IT system faces a reported 110,000 cyber attacks every hour, according to HP, the company that manages the Navy Marine Corps Intranet. More recently, Red October, a cyber espionage campaign, was identified infiltrating both the PCs and phones of international diplomats. Political, military and business leaders know quite well that cyberwar is already among us. This is not to say that security has been an afterthought or wholly inadequate. In order to roll out today’s web-enabled infrastructures, our governments and industries have invested massive amounts of resources in security, and this will only continue to grow. Currently, the US Department of Defense invests more than $3 billion annually in cybersecurity; its cyber-forces are 6,000 strong and could add another 1,000 over the next year. Last week, Neelie Kroes at the European Commission put a stake in the ground about cybercrime in an effort to curb data breaches, putting data and privacy requirements on companies that run large databases. The world is paying attention. How do we reap the benefits of the connected world and simultaneously protect what is necessary? In energy, defense, transportation and communications, we’ve experienced vast improvements due to the power of the Web. For example, we absolutely need the societal benefits of the smart electricity grid, and the protection of this system is paramount. We can’t have our energy grid turned off. At The Forum this year, building cyber resilience and identifying new approaches to reinforce and protect critical infrastructure will be top of mind. Lookout is approaching this massive problem by using big data to predict and prevent future threats. We don’t expect this brave new world to unfold quietly. The opportunities of new technologies come with responsibility. Businesses, policymakers and individuals must set a baseline of norms for what we expect from the organizations and companies we put our trust in, and hold ourselves accountable to it. Security cannot be a sunk cost; it must be considered as the infrastructure is being built. Intuitive and simple security should be a priority for governments, companies and developers creating in this space.
<urn:uuid:9d84d2e8-15ca-411b-a2e8-18d25ca1b2fc>
CC-MAIN-2017-04
https://blog.lookout.com/blog/2013/01/28/building-a-stronger-cyber-resilience/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00464-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94952
520
2.578125
3
IPv6 Extension Headers There are a lot of similarities between IPv4 and IPv6. There are also a lot of differences, including some that may have security implications for network engineers who deploy IPv6. In this blog posting, I will discuss IPv6 extension headers and why network and security engineers may want to pay closer attention to them. The IPv6 specification(1) supports what are known as extension headers, which have varying uses. The good thing about extension headers is that they are seldom seen in general Internet usage, except in specific situations, such as where packets must be handled in a specific manner that cannot be described in the standard IPv6 header. The bad thing about extension headers is that end nodes (such as user computers) and intermediate nodes (such as routers, firewalls and other security devices) generally need to be aware of and be able to handle them. Perhaps the most frequent and important extension header is the fragment extension header (which will be discussed in a later post). Other extension headers defined in the IPv6 specification include hop-by-hop options, destination options and routing. The authentication and the encapsulating security payload headers, defined in separate RFCs, support IPsec in IPv6.(2) Source routing in IPv4 has been problematic because of opportunities for denial of service attacks, and routers are usually configured to ignore source routing options.(3) Because of its similarity to IPv4 source routing and its even greater potential for facilitating denial of service attacks, the IPv6 routing extension header type 0 was deprecated by the IETF in December 2007. In packets that contain a type 0 routing header (also known as RH0), the routing header must be ignored or the packet must be dropped.(4) Extension headers force the packet byte offset of the layer 4 header (typically a TCP or UDP header) to be shifted from its usual position immediately after the main header. As a result, the layer 4 header can appear at a variety of offsets into the packet. In IPv4, if options are present, the layer 4 header will be at the offset indicated by the header length field. In IPv6, if a network device is to correctly detect the layer 4 header of a packet with extension headers, it must parse the extension header chain until it reaches the layer 4 header. Some network devices are incapable of traversing the list of extension headers, with the result that the device either incorrectly identifies what it thinks is the layer 4 header or does not identify the layer 4 header information at all. In some cases, the network device may only be capable of parsing a specific number of extension headers, or it may stop evaluating extension headers past a specific byte offset.(5) Filters that make decisions based upon layer 4 information may fail if the network device or its software cannot parse extension headers. Other devices may evaluate packets with extension headers in a slower software processing mode, which can reduce the packet processing capabilities of the device. When purchasing a network device, it is important to confirm that the device can identify layer 4 headers at a variety of offsets in the packet, and to confirm what performance penalties may occur if the device has to examine extension headers. 
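To illustrate the parsing work described above, here is a simplified Python sketch that walks an IPv6 extension header chain to find the layer 4 protocol and its byte offset. It is a teaching sketch under stated assumptions, not production parsing code: it handles only hop-by-hop, routing, destination options, fragment and AH headers using the RFC 2460/4302 length rules, gives up at ESP or anything unknown (much as many devices do), and performs no bounds or loop checking.

# Simplified walk of an IPv6 extension header chain (teaching sketch only).
HOP_BY_HOP, ROUTING, FRAGMENT, DEST_OPTS = 0, 43, 44, 60
AH, ESP = 51, 50
TCP, UDP, ICMPV6 = 6, 17, 58

def find_layer4(packet: bytes):
    """Return (protocol_number, byte_offset) of the layer 4 header, or (None, None)."""
    next_header = packet[6]          # Next Header field of the fixed IPv6 header
    offset = 40                      # the fixed IPv6 header is always 40 bytes
    while True:
        if next_header in (TCP, UDP, ICMPV6):
            return next_header, offset
        if next_header in (HOP_BY_HOP, ROUTING, DEST_OPTS):
            # Hdr Ext Len counts 8-octet units, not including the first 8 octets.
            ext_len = (packet[offset + 1] + 1) * 8
        elif next_header == FRAGMENT:
            ext_len = 8              # the fragment header is a fixed 8 octets
        elif next_header == AH:
            # AH length is counted in 4-octet units minus 2 (RFC 4302).
            ext_len = (packet[offset + 1] + 2) * 4
        else:
            return None, None        # ESP or an unknown header: give up, as many devices do
        next_header = packet[offset]
        offset += ext_len

# Example: IPv6 header (Next Header = 60) followed by an 8-byte destination
# options header whose own Next Header field points at TCP (6).
pkt = bytes([0x60, 0, 0, 0, 0, 8, 60, 64]) + bytes(32) + bytes([6, 0]) + bytes(6) + bytes(20)
print(find_layer4(pkt))              # -> (6, 48)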
The Netflow v9 standard, as described in IETF RFC 3954(6), describes an “IPv6 option headers” field that is used for encoding which extension headers were observed in the packet. Not all router vendors implement this field. IPFIX provides a similar field and is described in IANA document “IP Flow Export (IPFIX) Entities”. Some router operating systems are unable to report layer 4 information when one or more extension headers are present. In theory, extension headers allow for adding new functionality to IPv6. In reality, it is hard to create and implement new extension headers because of the lead time that network device implementers need to update their hardware and software. Many network devices will drop any packets with extension headers that aren’t recognized by the device. There will be a continual trade-off between the introduction of new extension headers to IPv6 and the cost to network device creators and maintainers of adding support for the new extension header. Around 2002, I attended a presentation prepared by a systems engineer for a router manufacturer, in which the presenter argued that due to the obscure nature of extension headers and the inherent opportunities for problems, that extension headers should be eliminated. I have not heard anyone seriously suggest outright elimination of extension headers since, because some extension headers provide essential functionality in IPv6. But extension headers are still generally despised for how they complicate the IPv6 protocol. 1. IETF RFC 2460, “Internet Protocol, Version 6 (IPv6) Specification,” Deering, S and Hinden, R, December 1998. 2. IETF RFC 4302, “Authentication Header,” Kent, S, December 2005 and IETF RFC 4303, “IP Encapsulating Security Payload (ESP),” Kent, S, December 2005. 3. “Old IPv4 flaws resurface with IPv6,” Iljitsch, Ars Technica, May 2007. http://arstechnica.com/hardware/news/2007/05/old-ipv4-flaws-resurface-with-ipv6.ars 4. IETF RFC 5095, “Deprecation of Type 0 Routing Headers in IPv6”, Abley, J, Savola, P, and Neville-Neal, G, December 2007. 5. Arguably, from a purely functional point of view, routers, don’t need to parse extension headers; however where routers implement packet filtering based on header content, routers really must be able to parse extension headers. 6. IETF RFC 3954, “Cisco Systems NetFlow Services Export Version 9,” Claise, B, Editor, October 2004
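To make the traversal concrete, here is a minimal Python sketch of walking an IPv6 extension-header chain to locate the layer 4 header, the same work a filtering device must do before it can apply a TCP or UDP rule. It is illustrative only: it handles the common chainable headers (hop-by-hop options, routing, fragment, destination options) and treats anything else as the upper-layer protocol; AH, ESP, and malformed chains would need extra care in real code.

```python
# Minimal sketch of walking an IPv6 extension-header chain to find the
# layer 4 header. Assumes a raw packet that starts with the fixed 40-byte
# IPv6 header; AH/ESP and error handling are omitted for brevity.

# IANA protocol numbers for the common chainable extension headers.
EXT_HEADERS = {
    0:  "hop-by-hop options",
    43: "routing",
    44: "fragment",
    60: "destination options",
}
LAYER4 = {6: "TCP", 17: "UDP", 58: "ICMPv6"}

def find_layer4(packet: bytes):
    """Return (protocol name, byte offset of the layer 4 header)."""
    next_header = packet[6]          # Next Header field of the IPv6 header
    offset = 40                      # fixed IPv6 header length
    while next_header in EXT_HEADERS:
        if next_header == 44:        # fragment header is always 8 bytes
            length = 8
        else:                        # others: (Hdr Ext Len + 1) * 8 bytes
            length = (packet[offset + 1] + 1) * 8
        next_header = packet[offset] # Next Header field of this extension header
        offset += length
    name = LAYER4.get(next_header, f"protocol {next_header}")
    return name, offset
```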
<urn:uuid:a50499bb-47ef-48d0-a4fa-f943e8982930>
CC-MAIN-2017-04
https://www.arbornetworks.com/blog/asert/ipv6-extension-headers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00190-ip-10-171-10-70.ec2.internal.warc.gz
en
0.893226
1,259
2.953125
3
Definition: A recursive algorithm, especially a sort algorithm, where dividing (splitting) into smaller problems is time consuming or complex, and combining (merging) the solutions is quick or trivial. Generalization (I am a kind of ...): divide and conquer. Aggregate parent (I am a part of or used in ...): radix sort, quicksort, bucket sort, selection sort. See also easy split, hard merge. Note: Although the notion is widespread, I first heard this term from Doug Edwards about 1994. Called the "Conquer form" of using divide and conquer in [ATCH99, page 3-3]. Cite this as: Paul E. Black, "hard split, easy merge", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 27 October 2005. Available from: http://www.nist.gov/dads/HTML/hardSplitEasyMerge.html
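A conventional illustration of the idea is quicksort, sketched below in Python: partitioning around a pivot is where the work happens (the hard split), while combining the sorted pieces is plain concatenation (the easy merge).

```python
# Quicksort as a "hard split, easy merge" divide and conquer algorithm:
# the work is in partitioning around a pivot; combining the sorted
# halves is just concatenation.

def quicksort(items):
    if len(items) <= 1:
        return items
    pivot = items[0]
    # Hard split: scan the list and divide it around the pivot.
    smaller = [x for x in items[1:] if x < pivot]
    larger  = [x for x in items[1:] if x >= pivot]
    # Easy merge: the combined result is simply the pieces in order.
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]
```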
<urn:uuid:60277219-94b1-4dd8-9ffa-120da871eab9>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/hardSplitEasyMerge.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00006-ip-10-171-10-70.ec2.internal.warc.gz
en
0.826508
258
3.34375
3
Chapter 9A – Review of Basic Electronics A traditional presentation of this material might have begun with a review of the basic concepts of direct current electricity. Your author has elected to postpone that discussion until this point in the course, at which time it is needed. A Basic Circuit We begin our discussion with a simple example circuit – a flashlight (or “electric torch” as the Brits call it). This has three basic components: a battery, a switch, and a light bulb. For our purpose, the flashlight has two possible states: on and off. Here are two diagrams. Light is Off Light is On In the both figures, we see a light bulb connected to a battery via two wires and a switch. When the switch is open, it does not allow electricity to pass and the light is not illuminated. When the switch is closed, the electronic circuit is completed and the light is illuminated. The figure above uses a few of the following basic circuit elements. describe each of these elements and then return to our flashlight example. The first thing we should do is be purists and note the difference between a cell and a battery, although the distinction is quite irrelevant to this course. A cell is what one buys in the stores today and calls a battery; these come in various sizes, including AA, AAA, C, and D. Each of these cells is rated at 1.5 volts, due to a common technical basis for their manufacture. Strictly speaking, a battery is a collection of cells, so that a typical flashlight contains one battery that comprises two cells; usually AA, C, or D. An automobile battery is truly a battery, being built from a number of lead-acid cells. A light is a device that converts electronic current into visible light. This is not a surprise. A switch is a mechanical device that is either open (not allowing transmission of current) or closed (allowing the circuit to be completed). Note that it is the opposite of a door, which allows one to pass only when open. The Idea of Ground the above circuit, which suggests a two-wire design: one wire from the battery the switch and then to the light bulb, and another wire from the bulb directly to the battery. One should note that the circuit does not require two physical wires, only two distinct paths for conducting electricity. Consider the following possibility, in which the flashlight has a metallic case that also conducts electricity. Physical Connection Equivalent Circuit Consider the circuit at left, which shows the physical connection postulated. When switch is open, no current flows. When the switch is closed, current flows from the battery through the switch and light bulb, to the metallic case of the flashlight, which serves as a return conduit to the battery. Even if the metallic case is not a very good conductor, there is much more of it and it will complete the circuit with no problem. electrical terms, the case of the battery is considered as a common ground, so that the equivalent circuit is shown at right. Note the new symbol in this circuit – this is the ground element. One can consider all ground elements to be connected by a wire, thus completing the circuit. In early days of radio, the ground was the metallic case of the radio – an excellent conductor of electricity. Modern automobiles use the metallic body of the car itself as the ground. Although iron and steel are not excellent conductors of electricity, the sheer size of the car body allows for the electricity to flow easily. To conclude, the circuit at left will be our representation of a flashlight. 
The battery provides the electricity, which flows through the switch when the switch is closed, then through the light bulb, and finally to the ground through which it returns to the battery. As a convention, all switches in diagrams will be shown in the open position unless there is a good reason not to. The student should regard the above diagram as showing a switch which is not necessarily open, but which might be closed in order to allow the flow of electricity. The convention of drawing a switch in the open position is due to the fact that it is easier to spot in a diagram. Voltage, Current, and Resistance It is now time to become a bit more precise in our discussion of electricity. We need to introduce a number of basic terms, many of which are named by analogy to flowing water. The first term to define is current, usually denoted in equations by the symbol I. We all have an intuitive idea of what a current is. Imagine standing on the bank of a river and watching the water flow. The faster the flow of water, the greater the current; flows of water are often called currents. In the electrical terms, current is the flow of electrons, which are one of the basic building blocks of atoms. While electrons are not the only basic particles that have charge, and are not the only particle that can bear a current; they are the most common within the context of electronic digital computers. Were one interested in electro-chemistry he or she might be more interested in the flow of positively charged ions. All particles have one of three basic electronic charges: positive, negative, or neutral. Within an atom, the proton has the positive charge, the electron has the negative charge, and the neutron has no charge. In normal life, we do not see the interior of atoms, so our experience with charges relates to electrons and ions. A neutral atom is one that has the same number of protons as it has electrons. However, electrons can be quite mobile, so that an atom may gain or lose electrons and, as a result, have too many electrons (becoming a negative ion) or too few electrons (becoming a positive ion). For the purposes of this course, we watch only the electrons and ignore the ions. An electric charge, usually denoted by the symbol Q, is usually associated with a large number of electrons that are in excess of the number of positive ions available to balance them. The only way that an excess of electrons can be created is to move the electrons from one region to another – robbing one region of electrons in order to give them to another. This is exactly what a battery does – it is an electron “pump” that moves electrons from the positive terminal to the negative terminal. Absent any “pumping”, the electrons in the negative terminal would return to the positive region, which is deficient in electrons, and cause everything to become neutral. But the pumping action of the battery prevents that. Should one provide a conductive pathway between the positive and negative terminals of a battery, the electrons will flow along that pathway, forming an electronic current. To clarify the above description, we present the following diagram, which shows a battery, a light bulb, and a closed switch. We see that the flow of electrons within the battery is only a part of a larger, complete circuit. Materials are often classified by their abilities to conduct electricity. Here are two common types of materials. Conductor A conductor is a substance, such as copper or silver, electrons can flow fairly easily. 
Insulator An insulator is a substance, such as glass or wood, that offers significant resistance to the flow of electrons. In many of our circuit diagrams we assume that insulators do not transmit electricity at all, although they all do with some resistance. The voltage is the amount of pressure supplied by the electron pump. It is quite similar to water pressure in that it is the pressure on the electrons that causes them to move through a conductor. Consider again our flashlight circuit. The battery provides a pressure on the electrons to cause them to flow through the circuit. When the switch is open, the flow is blocked and the electrons do not move. When the switch is closed, the electrons move in response to this pressure (voltage) and flow through the light bulb. The light bulb offers a specific resistance to these electrons; it heats up and glows. As mentioned above, different materials offer various abilities to transmit electric currents. We have a term that measures the degree to which a material opposes the flow of electrons; this is called resistance, denoted by R in most work. Conductors have low resistance (often approaching 0), while insulators have high resistance. In resistors, the opposition to the flow of electrons generates heat – this is the energy lost by the electrons as they flow through the resistor. In a light bulb, this heat causes the filament to become red hot and emit light. An open switch can be considered as a circuit element of extremely high resistance. We have discussed four terms so far. We should now mention them again. Charge This refers to an unbalanced collection of electrons. The term used for denoting charge is Q. The unit of charge is a coulomb. Current This refers to the rate at which a charge flows through a conductor. The term used for denoting current is I. The unit of current is an ampere. Voltage This refers to a force on the electrons that causes them to move. This force can be due to a number of causes – electro-chemical reactions in batteries and changing magnetic fields in generators. The term used for denoting voltage is V or E (for Electromotive Force). The unit of voltage is a volt. Resistance This is a measure of the degree to which a substance opposes the flow of electrons. The term for resistance is R. The unit of resistance is an ohm. Ohm's Law and the Power Law One way of stating Ohm's law (named for Georg Simon Ohm, a German teacher who discovered the law in 1827) is verbally as follows. The current that flows through a circuit element is directly proportional to the voltage across the circuit element and inversely proportional to the resistance of that circuit element. What that says is that doubling the voltage across a circuit element doubles the current flow through the element, while doubling the resistance of the element halves the current. Let's look again at our flashlight example, this time with the switch shown as closed. The chemistry of the battery is pushing from the positive terminal, denoted as "+" through the battery towards the negative terminal, denoted as "–". This causes a voltage across the only other element in the circuit – the light bulb. This voltage placed across the light bulb causes current to flow through it. In algebraic terms, Ohm's law is easily stated: E = I·R, where E is the voltage across the circuit element, I is the current through the circuit element, and R is the resistance of the circuit element. Suppose that the light bulb has a resistance of 240 ohms and has a voltage of 120 volts across it. Then we say E = I·R or 120 = I·240 to get I = 0.5 amperes. 
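The arithmetic of Ohm's law is trivial to check in code. The short Python fragment below simply recomputes the flashlight example; the 120-volt and 240-ohm figures are the worked example from the text, not measurements of any particular bulb.

```python
# Ohm's law, E = I * R, applied to the light bulb example above.

def current(voltage, resistance):
    """Return the current in amperes through a resistance."""
    return voltage / resistance

E = 120.0   # volts across the bulb
R = 240.0   # ohms of resistance
I = current(E, R)
print(f"I = {I} amperes")   # I = 0.5 amperes
```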
As noted above, an element resisting the flow of electrons absorbs energy from the flow it obstructs and must emit that energy in some other form. Power is the measure of the flow of energy. The power due to a resisting circuit element can easily be calculated. The power law is stated as P = E·I, where P is the power emitted by the circuit element, measured in watts, E is the voltage across the circuit element, and I is the current through the circuit element. Thus a light bulb with a resistance of 240 ohms and a voltage of 120 volts across it has a current of 0.5 amperes and a power of 0.5 · 120 = 60 watts. There are a number of variants of the power law, based on substitutions from Ohm's law. Here are the three variants commonly seen: P = E·I, P = E²/R, and P = I²·R. In the above example, we note that a voltage of 120 volts across a resistance of 240 ohms would produce a power of P = (120)² / 240 = 14400 / 240 = 60 watts, as expected. The student will notice that the above power examples were based on AC circuit elements, for which the idea of resistance and the associated power laws become more complex (literally). Except for a few cautionary notes, this course will completely ignore the complexities of alternating current circuits. There are many interesting combinations of resistors found in circuits, but here we focus on only one – resistors in series; that is, one resistor placed after another. In this figure, we introduce the symbol for a resistor. Consider the circuit above, with two resistances of R1 and R2, respectively. One of the basic laws of electronics states that the resistance of the two in series is simply the sum: thus R = R1 + R2. Let E be the voltage provided by the battery. Then the voltage across the pair of resistors is given by E, and the current through the circuit elements is given by Ohm's law as I = E / (R1 + R2). Note that we invoke another fundamental law: the current through two circuit elements in series must be the same. Again applying Ohm's law, we can obtain the voltage drops across each of the two resistors. Let E1 be the voltage drop across R1 and E2 be that across R2. Then E1 = I·R1 = R1·E / (R1 + R2), and E2 = I·R2 = R2·E / (R1 + R2). It should come as no surprise that E1 + E2 = R1·E / (R1 + R2) + R2·E / (R1 + R2) = (R1 + R2)·E / (R1 + R2) = E. If, as is commonly done, we assign the ground state as having zero voltage, then the voltages at the two points in the circuit above are simple. 1) At point 1, the voltage is E, the full voltage of the battery. 2) At point 2, the voltage is E2 = I·R2 = R2·E / (R1 + R2). Before we present the significance of the above circuit, consider two special cases. In the circuit at left, the second resistor is replaced by a conductor having zero resistance. The voltage at point 2 is then E2 = 0·E / (R1 + 0) = 0. As point 2 is directly connected to ground, we would expect it to be at zero voltage. Now suppose that R2 is much bigger than R1. Let R1 = R and R2 = 1000·R. We calculate the voltage at point 2 as E2 = R2·E / (R1 + R2) = 1000·R·E / (R + 1000·R) = 1000·E/1001, or approximately E2 = (1 – 1/1000)·E = 0.999·E. Point 2 is essentially at full voltage. A Resistor and Switch in Series We now consider an important circuit that is related to the above circuit. In this circuit the second resistor, R2, is replaced by a switch that can be either open or closed. (Figures: The Circuit; Switch Closed; Switch Open.) The circuit of interest is shown in the figure at left. 
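The series-resistor formulas and the power-law variants above are easy to verify numerically. The Python sketch below computes the divider voltage at point 2 and the power for the worked example, including the two special cases just described, R2 = 0 (which models a closed switch) and R2 much larger than R1 (which models an open switch), leading directly into the circuit discussed next. The values are illustrative only.

```python
# Series resistors as a voltage divider, plus the power-law variants,
# following the formulas above. Values are illustrative only.

def divider_voltage(E, R1, R2):
    """Voltage at point 2, between R1 and R2 (R2 connects to ground)."""
    return R2 * E / (R1 + R2)

def power(E=None, I=None, R=None):
    """Power in watts from any two of voltage, current, resistance."""
    if E is not None and I is not None:
        return E * I          # P = E * I
    if E is not None and R is not None:
        return E * E / R      # P = E^2 / R
    return I * I * R          # P = I^2 * R

E = 120.0
print(power(E=E, R=240.0))                    # 60.0 watts, as in the text
print(divider_voltage(E, R1=1.0, R2=1000.0))  # ~119.88 V: point 2 near full voltage
print(divider_voltage(E, R1=1.0, R2=0.0))     # 0.0 V: point 2 tied to ground
```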
What we want to know is the voltage at point 2 in the case that the switch is closed and in the case that the switch is open. In both cases the voltage at point 1 is the full voltage of the battery. When the switch is closed, it becomes a resistor with no resistance; hence R2 = 0. As we noted above, this causes the voltage at point 2 to be equal to zero. When the switch is open, it becomes equivalent to a very large resistor. Some say that the resistance of an open switch is infinite, as there is no path for the current to flow. For our purposes, it suffices to use the more precise idea that the resistance is very big, at least 1000 times the resistance of the first resistor, R1. The voltage at point 2 is the full battery voltage. we present our circuit, we introduce a notation used in drawing two wires that appear to cross. If a big dot is used at the crossing, the two wires are connected. If there is a gap, as in the right figure, then the wires do not connect. Here is a version of the circuit as we shall use it later. In this circuit, there are four switches attached to the wire. The voltage is another circuit that is not important at this time. If all four switches are open, then the voltage monitor registers full voltage. If one or more of the switches is closed, the monitor registers zero voltage. This is the best way to monitor a set of switches. to Tri–State Buffers We use the above verbiage to present a new view of tri–state buffers. Consider the following two circuits, which have been used previously in this chapter. Suppose that the battery is rated at five volts. In the circuit at left, point A is at 5 volts and point B is at 0 volts. In the circuit at right, point B is clearly at 0 volts, but the status of point A is less clear. What is obvious about the circuit at right is that there is no current flowing through it and no power being emitted by the light bulb. For this reason, we often say that point A is at 0 volts, but it is better to say that there is no specified voltage at that point. This is equivalent to the third state of a tri–state buffer; the open switch is not asserting anything at point A. Perhaps the major difference between the two circuits is that we can add another battery to the circuit at right and define a different voltage at point A. As long as the switch remains open, we have no conflict. Were the switch to be closed, we would have two circuits trying to force a voltage at point A. This could lead to a conflict. Here is a more common use of tri–state buffers. Suppose a number of devices, each of which can signal a central voltage monitor by asserting logic zero (0 volts) on a line. Recalling that a logic AND outputs 0 if any of its inputs are 0, we could implement the circuit as follows. Suppose we wanted to add another device. This would require pulling the 4–input AND gate and replacing it with a 5–input AND gate. Continual addition of devices would push the technology beyond the number of inputs a normal gate will support. The tri–state solution avoids these problems. This circuit repeats the one shown above with the switches replaced by tri–state buffers, which should be viewed as switches. note that additional devices can be added to this circuit merely by attaching another tri–state switch. The only limit to extensibility of this circuit arises from timing considerations of signal propagation along the shared line. 
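The behavior of the shared line can be captured in a few lines of code. The Python sketch below models only the logic, a pull-up holding the line at full voltage unless some enabled device pulls it to ground, and ignores the timing considerations just mentioned; the four-device example mirrors the circuit above.

```python
# Behavioral sketch of the shared line with a pull-up resistor and any
# number of tri-state devices attached. A device that is not enabled
# contributes nothing; any enabled device asserting 0 pulls the whole
# line to 0. This models the logic only, not timing.

def line_level(devices):
    """devices: list of (enabled, value) pairs; return the line's logic level."""
    for enabled, value in devices:
        if enabled and value == 0:
            return 0          # a zero-resistance path to ground wins
    return 1                  # nothing pulling down: the pull-up holds the line high

print(line_level([(False, 0), (False, 0), (False, 0), (False, 0)]))  # 1 (full voltage)
print(line_level([(False, 0), (True, 0), (False, 0), (False, 0)]))   # 0 (one device signals)
print(line_level([(True, 0), (True, 0), (False, 0), (False, 0)]))    # 0 (two devices: same result)
```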
Analysis of the Four–Tristate Circuit In order to analyze the circuit at the bottom of the previous page, we refer back to the circuit on the page before that. We need to understand the voltage at the monitor, which is assumed to be the input to a digital gate in the control logic of the CPU. While a precise discussion of this circuit involves treating resistors in parallel, such precision is not needed here. First suppose that none of the tri–states is enabled. In that case, the circuit is equivalent to the one in the next figure. The voltage at point 2 is essentially the full battery voltage: since the resistance R2 between that point and ground is very large, E2 = E / (1 + R1/R2) ≈ E·(1 – R1/R2) ≈ E, because R1/R2 ≈ 0. Next consider the situation in which one of the tri–state buffers is enabled. Tri–state 2 has been chosen arbitrarily. Now there is a direct path of zero resistance between point 2 and ground. The voltage at that point drops to 0, with the entire voltage drop being across the resistor R. Finally, consider the situation in which more than one of the tri–state buffers is enabled. As before, the choice is arbitrary. Again there is a direct path of zero resistance between point 2 and ground. The fact that there are two such paths has no practical consequences. The only criterion is that there be one or more paths of zero resistance.
<urn:uuid:e9a45ef0-aa68-4acc-9693-b273f93fb958>
CC-MAIN-2017-04
http://edwardbosworth.com/CPSC2105/MyTextbook2105_HTM/MyText2105_Ch09A_V06.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00492-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933952
4,473
4.5
4
As ten million people prepare to complete their tax returns online in January, British citizens are being bombarded with scams. Forty per cent have received phishing emails which appeared to be from HMRC, and identity fraud is rife – with many people still unaware of the potential risks involved, according to Miracl. The research, which surveyed the attitudes of 1,000 UK consumers about their personal security online, revealed that a fifth of UK consumers, or their close friends or family, have been the victim of data theft or identity fraud. But despite these clear risks, there is still a lack of awareness among many in the UK who seem to have no idea how dangerous this kind of data theft can be. Of those who have filled in a tax return online, almost half (48%) are not at all worried about the potential risks of losing their personal and financial information. In addition, when asked which online activity made them most nervous about their personal and financial information being stolen, the majority were most worried about shopping online (51%), with just over a third most concerned about online banking (36%), and only 14% most concerned about using online government services, such as applying for a driving licence or filling in a tax return. “Consumers are surprisingly laid back about the potential risks of filling in their tax returns online. It’s true that you could lose money if your financial details were stolen while online shopping, but the volume of data involved in filling out a tax return online makes this a far greater risk. With all the financial data involved in a tax return, a criminal could potentially take out a mortgage in your name. Data theft and identity fraud is a multi-billion dollar business on the dark web, and so consumers must be vigilant,” said Brian Spector, CEO at Miracl. This lack of awareness could be because people are being lulled into a false sense of security, by thinking that using stronger passwords will protect them. Over two-thirds of those surveyed said that they create stronger passwords in order to keep their personal and financial data safe online, such as using a combination of letters and numbers, or substituting numbers for letters. High profile data breaches such as the TalkTalk hack have made most people (61%) feel more nervous about providing their personal and financial information online, and as a result, the majority (51%) think it is only a matter of time before they are affected. The research found that most people would welcome the chance to use tighter security to protect themselves when using online services. Three-quarters (77%) said that they would feel better about providing their personal and financial details online if the website had stronger security procedures, such as multi-factor authentication. Spector continues, “High profile data breaches such as TalkTalk understandably make people nervous about their personal security online. But we don’t have to be part of the weekly announcements about mass data breaches. The underlying issue is that the username and password system is old technology that simply cannot secure the deep information and private services that we all store and access online today. By contrast, new, secure methods of two-factor authentication can eliminate password risk and at the same time be user-friendly.” “Database hacks, password reuse, browser attacks and social engineering can all be a thing of the past in the authentication space. 
Customers are rightly demanding to be protected when they submit their valuable personal information on the web, and online services need to respond appropriately by contributing to the restoration of trust on the internet and removing the password from their systems altogether,” Spector concluded.
<urn:uuid:4d006e94-8306-42c3-ba5a-b3a0add68cc3>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2016/01/06/scammers-target-citizens-filing-tax-returns-online/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00308-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963383
731
2.75
3
Definition: The maximal connected subgraphs of an undirected graph. Generalization (I am a kind of ...) See also connected graph, biconnected component, undirected graph, subgraph, clique, strongly connected components. Note: If a graph is connected, it has only one connected component. Often the term "component" is used, with the "connected" property understood. Let G=(V, E) be a graph and G1=(V1, E1), ..., Gm=(Vm, Em) be its connected components. Every vertex is in exactly one connected component; that is, the components partition V. Formally, for all i ≠ j, Vi ∩ Vj = ∅. Further, V = V1 ∪ ... ∪ Vm and E = E1 ∪ ... ∪ Em. Cite this as: Alen Lovrencic, "connected components", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 2 September 2014. Available from: http://www.nist.gov/dads/HTML/connectedComponents.html
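As an illustration of the definition, the following Python sketch finds the connected components of a small undirected graph with a breadth-first search; each vertex lands in exactly one component, so the components partition V as noted above.

```python
# Finding the connected components of an undirected graph with a
# breadth-first search.
from collections import deque

def connected_components(vertices, edges):
    adjacent = {v: set() for v in vertices}
    for u, v in edges:
        adjacent[u].add(v)
        adjacent[v].add(u)
    seen, components = set(), []
    for start in vertices:
        if start in seen:
            continue
        queue, component = deque([start]), set()
        seen.add(start)
        while queue:
            u = queue.popleft()
            component.add(u)
            for w in adjacent[u] - seen:   # only unvisited neighbors
                seen.add(w)
                queue.append(w)
        components.append(component)
    return components

print(connected_components("abcde", [("a", "b"), ("b", "c"), ("d", "e")]))
# two components: {'a', 'b', 'c'} and {'d', 'e'}
```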
<urn:uuid:13131ea5-9dbb-418e-86c5-22e36b62a3fd>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/connectedComponents.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00034-ip-10-171-10-70.ec2.internal.warc.gz
en
0.857678
303
2.671875
3
There is a lot of news about Big Data. Technology experts talk about how it will impact the data center industry and help ensure that data centers fill up. The critical question, however, is how green it actually is. The term Big Data has come up only in the recent past; before that, it was simply called analytics. That service was delivered to high-value clients by most companies. Today, open source software, the cloud, and cloud-based services have brought the capability to almost anybody, and data centers contribute to this easier availability by making hardware cheaper too. Even though smart initiatives are being implemented to manage areas such as public transportation systems, the question of changing human behavior is just as important. On one hand, Big Data analytics can get into the minutest details and deliver data and calculations; but it is up to us to use those results to make significant changes in energy use.
<urn:uuid:9dc90d09-c4bc-4aa7-9512-03aef19d8d7d>
CC-MAIN-2017-04
http://www.datacenterjournal.com/big-data-energy-and-its-impact/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00393-ip-10-171-10-70.ec2.internal.warc.gz
en
0.968849
195
2.734375
3
Adapting Communication Styles for Audiences One of the most basic and effective strategies trainers can employ in physical or virtual classrooms is to adapt their delivery method or communication style to accommodate the various learning styles in the audience. Doing so increases the chance the audience will fully absorb the message and practice what is taught. Virtual classrooms require specialized trainer communication techniques, so this article will focus on physical, instructor-led training considerations. First, get an understanding of the different learning styles. This knowledge will help you analyze and better communicate with your audience. There are a variety of theories regarding learning styles or the way people perceive and process information, but most prescribe at least partially to educational theorist David A. Kolb’s take on experiential learning. The four basic learner types based on Kolb’s Learning Style Inventory are accommodators, divergers, assimilators and convergers. · Accommodators easily adapt knowledge learned to new situations. One of an accomodator’s favorite questions might be, “What if?” · Divergers view an idea or fact from multiple, divergent perspectives. Thus, they are often great at brainstorming. · Assimilators easily and holistically integrate knowledge from multiple pieces of information. They value logic and order, and they often want just the facts. · Convergers like to make quick decisions or to come to one right answer. They value common sense and want to know, “How does this work, or how can I use it?” If that’s too complicated, there is the more standard classification that people learn in three ways: 1. Visual (They need to “see” what they’re learning) 2. Auditory (They need to “hear” the information and facts) 3. Kinesthetic (They need hands-on training to learn by doing. This might include taking notes or writing down the important parts or steps of a task.) Next, consider the best delivery method or approach for the intended message. It’s tough to get around the fact that learners with different learning styles often frustrate one another in group learning or work situations. Therefore, a trainer might want to consider not just how different learning styles complement and antagonize one another but what delivery style or styles will best suit the intended lesson or message. Ultimately, the training message is the most important consideration. What is the go
<urn:uuid:adad3c8a-8e27-4548-9eac-639beaa09804>
CC-MAIN-2017-04
http://certmag.com/adapting-communication-styles-for-different-audiences/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00447-ip-10-171-10-70.ec2.internal.warc.gz
en
0.910351
505
3.1875
3
The Connected States of America maps communities Borders that define cities, states, and nations are products of politics, culture, and geography. Once drawn, they rarely change even as the world transforms around them. Over time, do borders remain relevant to the way people actually interact and communicate? The data and data analysis In a project designed to understand the intersection of people’s self-formed communities with administrative borders, researchers at AT&T Labs - Research, IBM Research, and MIT SENSEable City Laboratory collaborated on mapping communities based on the strength of communication ties among members. Using millions of anonymized records of cell phone data, researchers were able to map the communities that people form themselves through personal interactions. The cell phone data included both calls and texts and was collected over a single month from residential and business users. All communication data was aggregated by county, with researchers looking at the home counties of the caller and recipient involved in each communication; home counties were determined by the caller’s and recipient’s most frequently used cell tower, which was assumed to be near their residence (see sidebar). The actual locations of the caller and recipient were used in a later stage of the project. No personally identifiable information was used. Researchers were interested in numbers and overall patterns, not individuals; they simply wanted to know which counties communicated most closely. Only anonymized communications between two AT&T customers were considered to ensure complete location data for both ends of the calls and texts. Counties with insufficient data were excluded from the study. On the anonymized communication data, researchers applied a modularity algorithm to find strong county-to-county links, normalized for population. Counties with strong ties were assigned a similar color. A change in color between neighboring counties indicates a boundary and a falling off in the strength of communication ties. Comparing communities and states Once the counties were organized into communities according to communication patterns, researchers compared the map of communities with established state borders. This map shows communities drawn from call data. From a high level, several phenomena are immediately apparent: States with splits: California, Illinois, New Jersey, all showed north-south splits, with Pennsylvania showing an east-west divide. States that seamlessly merge with neighboring states: Louisiana-Mississippi, Alabama-Georgia, New England. The pull of cities Large cities often pulled in counties from across state lines, sometimes splitting states in the process. This accounts for the north-south divide in New Jersey--with northern counties gravitating toward New York City, and southern ones toward Philadelphia—and for the split in Wisconsin, where two large cities pull from opposite directions: Chicago from the southeast, and Minneapolis/St. Paul from the north and west. St. Louis draws counties from southern Illinois, while Chicago exerts a strong pull on northern Illinois counties. How strong depends on the type of communication. St. Louis’s area of influence diminishes when the community map is drawn from texting data only. One third of Illinois counties (19 in all) that align with St. Louis when only calls are considered exhibit a tighter relationship with the rest of Illinois when only texts are considered. 
What can be inferred when communities form differently for texts than for calls? Comparing the two maps from the national perspective shows that many of the merged states that result from call data no longer exist in the communities formed from texting data (Georgia and Alabama, Kentucky and Tennessee, Oklahoma and Arkansas). Texas shows remarkable consistency for both calls and texts, even though the state's large cities (Dallas, Houston, Austin, San Antonio) could potentially form their own communities. Questions for demographers These examples illustrate some of the insights that can be inferred from anonymized and aggregated communication patterns. The data can be endlessly examined by region, community, or by type of communication: Will a map based on business communications show what role businesses play in pulling together communities? Other information may also one day be considered, including the length of a call. Do longer calls suggest closer personal and family ties, or are constant, short calls more indicative of close family and friend relationships? Especially useful will be comparing the aggregate cell phone data with census data and with studies previously done in the areas of commuting behavior, urban living, and community planning. In examining communication patterns for New Jersey counties, researchers found that the north-south split—consistent between call and texting data—differed on the map drawn from mobility data, a pattern that could be explained from commuting and migration data for New Jersey. But other findings from this research project may challenge conventional thinking that is based on previous studies, and it’s in resolving these cases that demographers, sociologists, statisticians, and other experts will learn the most. About the project Researchers from AT&T Labs - Research (Alexandre Gerber, DeDe Paul, James Rowland, Christopher Rath) along with researchers at MIT SENSEable City Laboratory and IBM Research examined anonymous connections from AT&T cell phone networks across the US and analyzed how these aggregated county-to-county connections determine regional boundaries. This research was originally highlighted in the global edition of TIME Magazine on the 11th of April in the series on Intelligent Cities, funded by the Rockefeller Foundation and in partnership with the National Building Museum, IBM, and TIME. The research was also featured in the Opinion section of the New York Times on July 3, 2011. Home-based, actual, and mobility communities With today’s highly mobile communications, researchers could look at a single communication from three location perspectives: home-based, actual, and mobility.
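For readers who want to experiment with the general approach, the sketch below shows how a modularity-based community detection step might look on a toy county-to-county graph using the networkx library. The county pairs and edge weights are invented for illustration, and networkx's greedy modularity routine merely stands in for whatever algorithm the researchers actually applied to the anonymized AT&T data.

```python
# Illustrative sketch of the community-detection step: counties are nodes,
# edge weights represent call/text volumes normalized for population, and
# a modularity-based algorithm groups strongly tied counties.
import networkx as nx
from networkx.algorithms import community

G = nx.Graph()
# (county_a, county_b, normalized communication strength) -- made-up numbers
ties = [("Cook IL", "Lake IL", 9.0), ("Cook IL", "Kenosha WI", 4.0),
        ("Lake IL", "Kenosha WI", 3.0),
        ("St. Louis MO", "St. Clair IL", 8.0), ("St. Louis MO", "Madison IL", 6.0)]
G.add_weighted_edges_from(ties)

communities = community.greedy_modularity_communities(G, weight="weight")
for i, counties in enumerate(communities):
    print(i, sorted(counties))
```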
<urn:uuid:08e86439-8776-4c81-9fd1-15b2d61c6b70>
CC-MAIN-2017-04
http://www.research.att.com/articles/featured_stories/2011_06/201106_connected_states_America_project_no_links.html?fbid=HqTkGyyHT4U
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00081-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952545
1,159
3.0625
3
A local government uses a centralized customer service system - sometimes called 311 - so residents can call a centralized government phone number, place requests for service and are assigned tracking numbers to monitor their requests. Though a centralized customer service system is valuable for residents, local governments benefit too. Some big cities - Baltimore, Las Vegas, Chicago, New York, Houston and Dallas - have implemented these systems to ease the burden on 911 emergency systems, and they seem to be doing the trick. The International City/County Management Association recently conducted a Local Government Customer Service Systems (311) national survey. Funded by the Alfred P. Sloan Foundation, the survey explored successful 311 implementations and how they're used to respond to citizen needs and strengthen local government-constituent relationships. Of 710 survey respondents, only 104 reported they use a centralized system. But the results show that not only large cities and counties are using them: Thirty-two local governments that use a centralized system have a population under 30,000. Although that number of adopters seems low, twice as many local governments are considering installing a system. For local governments that lack systems, the major concerns were cost and the process of obtaining a 311 designation. But implementation leads to demonstrable savings, such as reduced calls to 911, and improved customer service, information, reporting and management. There are also alternatives to a 311 designation, such as an easy-to-remember, seven-digit number.
<urn:uuid:9f01ec4f-1987-4b68-94db-fadbff0b3940>
CC-MAIN-2017-04
http://www.govtech.com/e-government/311-Survey-Customer-Service-Systems-Spread.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00567-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932427
298
2.75
3
As one of the oldest applications of the Internet, e-mail has never been known for having top-notch security. This reputation isn't completely undeserved: even today, anyone who knows how to bring up the preferences of a mail program can send out messages with any "From:" address they please. Ironically, such forged messages may travel to and from mail servers over encrypted connections. This helps make sure that nosy types with big WiFi antennas don't get to see your mail or passwords, but it doesn't keep your mail safe from equally nosy mail server admins—or subpoenas by nosy governments. But not all hope is lost for e-mail. Secure/Multipurpose Internet Mail Extensions (S/MIME) can secure your mail by encrypting a message at the source and only decrypting it once it's in the hands of the receiver. S/MIME also supports digital signatures, so you can know for sure who sent the message and that it wasn't changed in transit. (Big caveat: the nosy governments could still be in cahoots with the certificate authorities, so we make no promises there.) In the past, we've written about GPGMail, a plug-in that lets Apple's Mail.app use GNU GPG encryption. Unfortunately, GPG is a pretty unwieldy system and GPGMail could take a very long time to be updated for a new Mac OS X release. (There is currently a stable version available for Snow Leopard and an alpha version for Lion.) The advantage of S/MIME is that it's built into Mail on the Mac and, as of last week, also in iOS. (I've only tried this using iOS 5 on an iPhone 4, but I assume things work much the same on iPads and iPod touches.) GPG and S/MIME us the same public key encryption as their underlying technology. Every user has two keys: a public and a private one. The public key can be used by anyone to encrypt messages or check signatures, while only the private key allows for decrypting messages encrypted with the matching public key, and creating signatures. However, key and trust management is very different: with GPG, this is done in a decentralized fashion, while S/MIME requires obtaining a certificate from a certificate authority. Receiving signed messages The good news is that because your computer or iDevice already knows many of these certificate authorities, it can check signatures without the need for additional information. If you receive a signed e-mail, Mail will show a "Security" line in the mail headers—as long as you haven't hidden all headers—with a checkmark icon and the name of the person who signed the message. (Note that most certificate authorities only check the e-mail address, not the name of the person requesting a certificate.) If something is wrong with the certificate or the message was changed after it was signed, Mail displays a big yellow banner telling you there is a problem. Click on the checkmark icon to see the sender's certificate. For some strange reason, Apple has chosen to not indicate that a message was signed in the standard configuration under iOS. To enable this feature, you have to go into the Settings > Account > Advanced for each e-mail account, and then enable S/MIME. (Be careful not to tap "cancel" when traversing back the menus.) The iOS Mail application will then show a little checkmark after the sender's name if a message was signed. If there is a problem, the name is shown in red followed by a question mark. A common reason for signature failures is people using self-signed certificates or using CAcert, which isn't considered a trusted authority by Apple and others. 
You can tap a name to see more information. That's all there is to receiving signed messages. But if you want to be able to sign messages yourself and receive encrypted ones, you need a certificate. S/MIME or e-mail certificates are cheap (or even free), but that means that the certificate authority only checks whether the person requesting a certificate is actually in control of the e-mail address in question, with no actual identity checking. Because these certificates are so cheap, not all certificate vendors bother with them or, if they do, they don't give this service prominent placement. I purchased a certificate that's valid for a year from VeriSign for $20. They also offer a 60-day free trial; just leave the payment information empty in order to do this. After jumping through a hoop or two, I ended up with a .p7s file on my system, which can be opened with the Keychain Access utility. This will install the file in your keychain so Mail can use it. From Keychain Access, you can then also export the certificate as a password-protected .p12 file for installation on your iOS device. The proper way to do that is probably using the iPhone Configuration Utility, but mailing the file to yourself—or storing it in a draft mail message on the mail server—is a lot simpler. Opening the file will make iOS install the certificate. In my case, it said the cert was untrusted at first, for reasons that I couldn't determine. Once you've installed a certificate under Mail on the Mac, you can then compose a message with a From: address that matches the e-mail address in the cert. Here, you have the option of encrypting and/or signing your new message. Encryption is toggled using a lock icon and signing with a checkmark icon. Note that these settings carry over to subsequent messages. Signing requires access to your private key, so depending on your keychain settings, you may have to type your password. Encrypting, on the other hand, (only) requires the certificate of the person you're e-mailing. Mail automatically adds all the certificates found in signed messages you've received to your keychain. So if you don't have someone's cert, just ask them to send you a signed message. Under iOS, there is no way to toggle signing and encryption on a per-message basis. Instead, you enable (or disable) these functions in the S/MIME settings for each account. Additionally, iOS doesn't automatically remember the certs of people who have e-mailed you. It's not even smart enough to pick up the cert from a signed message you're replying to. Instead, when someone has sent you a signed message, you have to tap their name and then you can install their certificate for future use. If you try to send a message to someone you don't have a certificate for while encryption is enabled, their name turns red to alert you to the problem. A lock icon indicates that a message was encrypted. The Comodo and Diginotar incidents have shown that authority-based security has its limitations, but it's still much better than trusting random mail headers. So turn on S/MIME on your iPhone to enable signature checking, and consider signing your business e-mail. Hopefully at some point big business will also catch on and start sending signed mail so we can finally tell legit messages from misspelled fishing.
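Under the hood, both signing and encryption rest on the public/private key operations described earlier. The Python sketch below, using the third-party cryptography package, shows only that primitive: signing a message with a private key and verifying it with the matching public key. Real S/MIME wraps this in a CMS/PKCS#7 message structure and uses a certificate issued by a CA rather than a freshly generated key, so treat this as a conceptual illustration, not a way to produce S/MIME mail.

```python
# Conceptual sketch of the sign/verify primitive underlying S/MIME
# signatures, using the "cryptography" package.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# In real S/MIME the private key is yours and the public key comes from
# a CA-issued certificate; here we just generate a throwaway key pair.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Quarterly figures attached."
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Verification raises cryptography.exceptions.InvalidSignature if the
# message was altered in transit or signed with a different key.
public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
print("signature verified")
```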
<urn:uuid:3ad4b449-e205-4551-8cdb-f7277233ade1>
CC-MAIN-2017-04
http://arstechnica.com/apple/2011/10/secure-your-e-mail-under-mac-os-x-and-ios-5-with-smime/?comments=1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00383-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945055
1,482
2.640625
3
Internet of Things (IoT) is entering our life gradually, driven by the advanced technology and deployment of FTTH (fiber to the home). Smart home, as an importance part of IoT, allows home owners to control devices including lighting, heating, air conditioning, TVs, computers, entertainment audio & video systems, security, and camera systems by phone or Internet, regardless of whether anyone is home. A smart home can provide home owner comfort, security, energy efficiency (low operating costs) and convenience. Smart home is no longer strange to most of us, as there are already a wide range of related products provided in the market and many of our devices can be connected to the Internet. However, if you want to enjoy the benefits of smart home, a systemically cabling network should be built in your house. Apparently, we depend on WiFi for many things and many of our devices in our home can be connected to the Internet via WiFi. It’s necessary that a smart home should have WiFi signal. However, if the whole house is going to wireless, you may not be able to get the top performance of smart home. Why? This is because WiFi is not as strong as you think. It is common that your WiFi signal bar changes, if you are in different places of your home. WiFi signals can be dramatically reduced, when they travel through walls. In addition, wireless has bandwidth limits even in a perfect situation. Thus, WiFi cannot supply enough bandwidth for devices like HD TV. Then cables are suggested to connect these devices to the Internet. Devices in our home are generally powered by electricity and most of them have RJ45 interfaces. Thus, copper cables are suggest to connect household electrical appliances. Here offers an example of smart home network. The above picture shows the floor plan of a typical two-story residential house and the required functions that each room should have. Introducing Fiber to Home As shown in the floor plan, the fiber optic distribution point is near the basement. And the basement is not a frequently used area in a house. Thus, fiber optic cable is suggested to get into the home from basement. Basement is also suggested as a distribution point of the whole smart home network. The following picture illustrates how to introduce the fiber optic cable to the home. A flexible plastic conduit is installed from the outside termination point to the basement to protect the fiber optic cable. Design the Smart Home Network The design and cabling of the smart home network should be based on both the functions of each room and the future possibilities. The following is a table listing the functions of the rooms that need wiring. |living room||TV, Telephone, Media Console, Computer| |bedroom3||TV, Telephone, Computer| |office||TV, Telephone, Computer| According to the list, it is recommended to install at least two RJ45 type jacks on the same faceplate for phone and Internet on the occupied room. As to the cable choice, It depends on the Ethernet requirement. Cat5e—a high quality copper cable can support 1 Gigabit Etherent over a distance of 100 m and Cat6—a higher quality copper which can support up to 10 Gigabit are suggested. Both of them are economical solutions for smart home cabling. At the basement, where the smart home network distribution point located, an ONT (optical network terminal) is installed to connection the outside fiber optic network with the smart home network. 
For this house, as shown in the above picture, an ONT with 4 RJ45 Gigabit Ethernet ports and built-in wireless is used. An additional 12-port patch panel is used to distribute voice and Ethernet throughout the home over the wired network. Finally, the whole cabling system of the smart home network is complete; the above picture shows the details. With almost every occupied room connected to the smart home system by cable and wireless, the householder can enjoy the convenience the smart home provides.
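When planning a similar build, it can help to tally the jacks and patch-panel ports before pulling cable. The small Python sketch below does that for the rooms listed earlier, assuming at least two RJ45 jacks per occupied room; the room names and device lists are taken from the example table, and the two-jack minimum is the guideline suggested above.

```python
# Rough port-count planning from the room/function table above: at least
# two RJ45 jacks per occupied room, plus one patch-panel port per run.
rooms = {
    "living room": ["TV", "Telephone", "Media Console", "Computer"],
    "bedroom3":    ["TV", "Telephone", "Computer"],
    "office":      ["TV", "Telephone", "Computer"],
}

total_jacks = 0
for room, devices in rooms.items():
    jacks = max(2, len(devices))      # two jacks minimum per faceplate
    total_jacks += jacks
    print(f"{room}: {jacks} jacks for {', '.join(devices)}")

print(f"patch panel needs at least {total_jacks} ports "
      f"(a 12-port panel {'suffices' if total_jacks <= 12 else 'is too small'})")
```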
<urn:uuid:f5e957d8-e64a-40d8-87f2-c95fc1f63678>
CC-MAIN-2017-04
http://www.fs.com/blog/diy-your-own-smart-home-network.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00503-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940141
810
2.953125
3
This is pretty cool: A team of Canadian and British scientists have discovered a source of water in an Ontario mine that has been isolated for a billion years or more. While the researchers say their study of the water and any possible life found in it could help determine if there was -- or possibly is -- life on Mars, (I'm more interested in something else: Does billion-year-old water taste the same as modern water? Seriously, I want to know.) Scientists say the water contains elevated levels of hydrogen and methane, both of which are essential to support life. The water was found in a Canadian mine 2.4 kilometers underground. The discovery was reported in the journal Nature, in which author Jessica Marshall writes: Micrometre-scale pockets in minerals billions of years old can hold water that was trapped during the minerals’ formation. But no source of free-flowing water passing through interconnected cracks or pores in Earth’s crust has previously been shown to have stayed isolated for more than tens of millions of years.“We were expecting these fluids to be possibly tens, perhaps even hundreds of millions of years of age,” says Chris Ballentine, a geochemist at the University of Manchester, UK. He and his team carefully captured water flowing through fractures in the 2.7-billion-year-old sulphide deposits in a copper and zinc mine near Timmins, Ontario, ensuring that the water did not come into contact with mine air. After analyzing isotopes of noble gases in the water, the researchers determined that it has been trapped underground -- unexposed to our planet's atmosphere -- for anywhere from 1 billion to 2.64 billion years. Earth itself is about 4.54 billion years old, so that water could have been there for more than half of the planet's existence. That's some old water. NASA continues to search for signs of previous life on Mars. Ballentine said the composition of the Red Planet's rocks are similar to Earth's, so there is "no reason to think the same interconnected fluids systems do not exist there," he tells Nature. Which leads to another question: What would Martian water taste like? I'll never find out, but someday maybe someone will. Now read this:
<urn:uuid:164b4a58-0253-4403-83b4-e257060846de>
CC-MAIN-2017-04
http://www.itworld.com/article/2710732/enterprise-software/scientists-discover-billion-year-old-water-in-canadian-cave.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00227-ip-10-171-10-70.ec2.internal.warc.gz
en
0.972262
458
3.203125
3
In March 2016, the Anti-Phishing Work Group detected 123,555 websites and 229,265 unique email campaigns dedicated to phishing. The lure? Spam. The attacks pose a serious threat to businesses—and they’re beginning to change. Instead of mass-distributing thousands of bulk email messages, phishers are now researching their targets and narrowing their attack vectors. If you’re unfamiliar with the ways that phishing relates to spam, we need to talk. Typically, a phishing attack starts with an email message. This message might include a persuasive call-to-action, such as “Your email address has been reported as a source of spam . Click here to view the report.” The link either opens an attachment or directs the user to a website. Either way, once the person completes the action, malware attempts to infect the user’s system. The system could simply be enrolled in a botnet , or the intent could be more devious—such as installing an app that sends sensitive trade secrets to a black-market dealer. To protect against phishing attacks, it’s helpful to look at each point in the email process as a weakness, or potential point of failure. For example, when: In most phishing attacks, cyber-criminals try to send as many emails at once as possible. Spam blockers—which scan for combinations of attachments, bulk recipients and known keywords—aren’t perfect; as we’ve all discovered, the occasional email gets missed. However, even if spam blockers only filter out 99.9 percent of malicious phishing and spam attacks, as Google says its system does, they’re a great first line of defense. A spam email may get through the filter and land in a user’s inbox. That person should know how to recognize it and what to do with it—so train users ahead of time. Stress that unsolicited emails, especially those with attachments, links, a compelling call-to-action, and bad spelling or grammar are often attacks. Instruct users to report those messages as spam (most email apps have a button to do so), which will help improve your spam filter. Oops, a new employee wasn’t aware of the spam policy and accidentally clicked on a phishing link. This is where a proxy server or website monitoring app comes in handy. If the site is malicious—it has known flash exploits, or an IP address associated with phishing—the app or proxy server can block access and notify the user (and, hopefully, the IT department). If the attack comes from an attachment, group policies can be configured to prevent users from installing the malicious code (similar to Windows UAC). Maybe it was a clever attack: you suspect a legitimate email was intercepted, compromised and forwarded, and you didn’t realize it. Chances are, you’d be in damage control mode at that point. To remedy the situation, you would want to be using network-monitoring apps, modeled on traffic-pattern analysis, to watch for suspicious behavior. Once you find a device or service behaving unusually, you can track down the malicious software—and eliminate it, as soon as possible.
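The first line of defense described above, scanning for combinations of keywords, attachments and bulk recipients, can be illustrated with a toy scoring function. The Python fragment below is deliberately naive: real filters such as the ones mentioned combine reputation data, statistical models and machine learning, and the phrases and thresholds here are invented for the example.

```python
# Toy version of the heuristic described above: score a message on
# keywords, attachments and bulk recipients, and flag it past a threshold.

SUSPECT_PHRASES = ["reported as a source of spam", "click here", "verify your account"]

def spam_score(subject, body, num_recipients, has_attachment):
    score = 0
    text = (subject + " " + body).lower()
    score += sum(2 for phrase in SUSPECT_PHRASES if phrase in text)
    if num_recipients > 50:
        score += 2                     # bulk distribution
    if has_attachment and "invoice" in text:
        score += 1                     # a common lure combination
    return score

msg_score = spam_score(
    "Action required",
    "Your email address has been reported as a source of spam. Click here.",
    num_recipients=900, has_attachment=False)
print(msg_score, "-> quarantine" if msg_score >= 4 else "-> deliver")
```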
<urn:uuid:b961bd15-56be-4253-919b-36efe4281393>
CC-MAIN-2017-04
http://getnerdio.com/blog/4-ways-protect-business-email-spam/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00135-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945332
669
2.984375
3
The U.S. military relies heavily on distributed, wireless networks to communicate in combat zones. Now DARPA is looking for ideas on how to keep bad actors off these networks. In a post on its blog on Monday, DARPA - the U.S. military's advanced research group - said that it was seeking proposals for new technologies to "help make wireless networks more resilient to unforeseen scenarios and malicious compromise." The "Wireless Network Defense program" is intended to develop new protocols that enable military wireless networks to "remain operational despite inadvertent misconfigurations or malicious compromise of individual nodes." Various technologies already exist to secure communications over wireless local area networks. But DARPA said that its goal is larger than just securing individual nodes, or the communications between them. Instead, the organization envisions something like the reputation system used by credit card companies to spot fraudulent transactions, according to Dr. Wayne Phoel, a DARPA program manager. "We need to change how we control wireless networks by developing a network-based solution for current and future systems that acknowledges there will be bad nodes and enables the network to operate around them," Phoel said. Winning solutions won't involve new hardware or software. Rather, DARPA is looking for a way to make existing and future wireless networks more robust and resilient to compromise. That might include the creation of new protocols that can assess the "viability and trustworthiness" of neighboring nodes on a wireless network. Suspicious or compromised nodes would be ignored and have traffic sent around them. Compromises of military networks in combat zones have become an issue of great importance to the Pentagon. In August, Marine Lt. Gen. Richard P. Mills told an audience in Baltimore, Maryland, that U.S. commanders considered cyber weapons an important part of their arsenal, and that U.S. military command in Afghanistan had to defend its networks against "almost constant incursions" and efforts to "get inside my wire, to affect my operations." DARPA will host a Proposer's Day on April 1, 2013 in Arlington, Virginia.
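The credit-card analogy suggests a simple mental model. The following sketch is purely illustrative, not DARPA's protocol, but it shows the general idea of scoring neighbors by observed forwarding behavior and routing around any node whose reputation falls below a threshold. The scoring rule and threshold are assumptions made up for the example.

class Node:
    def __init__(self, name):
        self.name = name
        self.reputation = 1.0   # start fully trusted

    def observe(self, forwarded, dropped):
        """Update reputation from a neighbor's observed forwarding behavior."""
        total = forwarded + dropped
        if total:
            delivery_ratio = forwarded / total
            # weight recent behavior heavily so misbehaving nodes drop quickly
            self.reputation = 0.2 * self.reputation + 0.8 * delivery_ratio

def trusted_route(candidates, threshold=0.5):
    """Pick next hops whose reputation is above the threshold."""
    return [n for n in candidates if n.reputation >= threshold]

a, b, c = Node("A"), Node("B"), Node("C")
a.observe(forwarded=19, dropped=1)
b.observe(forwarded=2, dropped=18)     # B silently drops most traffic
c.observe(forwarded=10, dropped=0)
print([n.name for n in trusted_route([a, b, c])])   # B is routed around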
<urn:uuid:730fd089-3eb3-47fb-903a-844ad5bc5643>
CC-MAIN-2017-04
http://www.itworld.com/article/2713583/security/darpa-looking-for-way-to-spot-attackers-on-wireless-networks.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00071-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936888
434
2.515625
3
Naming Conventions for creating Database Objects When creating database objects developers can choose a naming method that follows the traditional IBM i behavior with the System Naming mode (*SYS) or that conforms to the SQL Standard rules with the SQL Naming convention (*SQL). The main difference between DB2 for i and other database management systems (DBMS) is that DB2 for i is integrated into the operating system. This integrated aspect allows IBM i users to directly access DB2 for i databases with their operating system user profile and the associated access authorities. Other databases are not integrated in an operating system, therefore specific database users with individual access authorities must be defined. The default naming used to create database objects with SQL depends on the environment where the SQL DDL (Data Definition Language) commands are executed. The default naming for all server-side SQL environments, such as STRSQL (Start SQL Interactive Session) or RUNSQLSTM (Run SQL Statements), but also embedded SQL in an HLL (High Level Language) program such as RPG or COBOL is System Naming. The default naming value that is used on client-based SQL environments, such as System i Navigator, IBM Rational Developer for Power Systems Software (RDp), middleware (ODBC, JDBC, etc.) or third-party SQL tools is typically SQL Naming. To avoid a mismatch in object authorities and access methods, you need to decide whether using System Naming or SQL Naming will work best in your application environment. You may need to change the default naming in some environments to match the naming conventions you are using in your application environment. System i Navigator Interface If you want to use the System i Navigator Interface to create your database objects, you can predefine the naming to be used as follows: Open your connection, right-click on the Database icon and select the Preferences task as shown in Figure 1. System i Navigator – Set Preferences The Preferences window offers 3 options. The Connection (all Systems) option allows you to predefine the naming convention to be used for future connections. This setting will also be used as the default naming value for future Run SQL Scripts and Generate SQL executions, but will NOT affect any existing windows. Figure 1. System i Navigator – Set Preferences System i Navigator's Run SQL Scripts Tool If you want to use the Run SQL Scripts Tool to execute a SQL script stored in a file or entered interactively, the naming convention is controlled by clicking on the Connection pull-down menu and selecting the JDBC Setup task. The naming convention can be set on the Format tab. Figure 2. System i Navigator – Run SQL Scripts – Set Naming Conventions RUNSQLSTM – Run SQL Statements If you want to execute SQL statements stored in either a source physical file member or an IFS (Integrated File System) file through RUNSQLSTM (Run SQL Statements), the naming convention can be specified on the RUNSQLSTM command with the Naming parameter as shown in the following example. The specified SQL script will be executed by using SQL Naming. Listing 1. RUNSQLSTM setting Naming Conventions RUNSQLSTM SRCFILE(MYSCHEMA/QSQLSRC) SRCMBR(MYSCRIPT) NAMING(*SQL) Embedded SQL in an HLL Program If you want to use embedded SQL in an HLL program such as RPG or COBOL to either process data in your tables or to create new database objects, the default naming setting is System Naming. 
If you want to use SQL Naming, you can predefine the naming convention in the compile command (CRTSQLRPGI, CRTSQLCBLI or CRTSQLCI, depending on the programming language) as shown in the following example: Listing 2. Create an embedded SQL program using SQL Naming CRTSQLRPGI OBJ(MYPGMLIB/MYSQLPGM) SRCFILE(MYSRCLIB/QRPGLESRC) SRCMBR(MYMBR) OPTION(*SQL) Instead of specifying the naming method in the compile command, it is also possible to include it in your source code by adding a SET OPTION statement (as shown in the next example). The SET OPTION statement must be the first SQL statement in your source code and include all options you want to set. Listing 3. SET OPTION Statement for setting Naming Conventions /Free Exec SQL Set Option Commit=*NONE, Naming=*SQL, DatFmt=*ISO, CloSQLCsr=*ENDACTGRP; //All other source code including embedded SQL statements /End-Free IBM i Access for Windows ODBC Driver The naming convention can be specified for ODBC connections by using the IBM i Access for Windows - ODBC Administration interface or connection keywords. The next figure shows the Naming convention being controlled on the ODBC Administration interface on the Server tab. Figure 3. ODBC Setup For JDBC access, the naming convention can be controlled by specifying JDBC driver connection properties in the connection URL. The naming connection property supports the values sql and system. SQL Naming is the default. When using System Naming, a library list can be predefined with the libraries property as shown in the following example. Listing 4. JDBC setting Naming Conventions conn = DriverManager.getConnection("jdbc:db2:*local: ... ... naming=system;libraries=MYLIBA,MYLIBB,MYLIBX"); IBM i Access for Windows ADO.NET Provider When using ADO.NET, the naming convention and the library list for System Naming can be set when establishing the connection. The iDB2Connection object connects to DB2 for i. The naming convention is provided as a connection string property. The following code shows how System Naming and the library list can be set with the iDB2Connection object: Listing 5. ADO.NET setting Naming Conventions iDB2Connection conn = new iDB2Connection("DataSource=abc; userid=XXX;password=YYY; Naming=System; LibraryList=*USRLIBL,MYLIB"); SQL CLI - Call Level Interface When using SQL CLI, the naming convention is a connection attribute that can be set by executing the SQLSetConnectAttr function. To set the naming convention to System Naming, the SQL_ATTR_DBC_SYS_NAMING constant must be passed for the attribute parameter and the SQL_TRUE constant for the attribute value parameter as shown in the next example. Listing 6. Set Connection Attributes using SQLCLI rc = SQLSetConnectAttr(ConnHandle: SQL_ATTR_DBC_SYS_NAMING: SQL_TRUE: 4); STRSQL – Start SQL Interactive Session If you want to change the naming used to run your SQL statements in interactive SQL, execute the STRSQL CL command, press function key F13=Services and then select option 1 (Change Session Attributes). Schema – Container to hold Database Objects A schema is a container used to store database objects. On IBM i, the term schema is used as an equivalent of a library. Schemas or libraries can be created with either the CRTLIB (Create Library) CL command or the CREATE SCHEMA SQL statement.
While the CRTLIB command only creates an empty container, the SQL statement automatically adds a journal, a journal receiver and a couple of catalog views with information about all database objects that are located in this schema. When creating a library with the CRTLIB command, the owner of the library will be either the user profile that creates the library, or the group profile. Whether the user or a group profile becomes the owner depends on the OWNER option setting for the user profile. If the OWNER option is set to *GRPPRF, the user profile specified in the GRPPRF option will become the owner of all objects created by this user, otherwise the user profile becomes the object owner. The following example shows the CHGUSRPRF (Change User Profile) command to be used to set the owner of all objects in future created by the PGMRGRP2 user profile to the QPGMR group profile. Listing 7. Change User Profile Command setting Owner = Group Profile CHGUSRPRF USRPRF(PGMRGRP2) GRPPRF(QPGMR) OWNER(*GRPPRF) All example database objects that are created in this article will be created by the user profile named PGMRGRP2. This user profile is associated with the QPGMR group profile. Based on this the QPGMR group profile will be the owner of all the database objects created by PGMRGRP2. Creating a Schema using System Naming When creating a schema with the CREATE SCHEMA statement using System Naming the following rules apply: - The owner of the schema is the user profile or the group profile depending on the OWNER option setting in the user profile definition. - The owner has *ALL object authority while the *PUBLIC object authority is based on the QCRTAUT (Create Default Public Authority) system value whose default value is *CHANGE. Creating a schema or library with either the CRTLIB command or the CREATE SCHEMA statement with System Naming results in the same ownership and identical object authorities. The PGMRGRP2 user profile creates two schemas (PGMRUSR2 and PGMRXXX2) with the following SQL statements using System Naming: Listing 8. Create Schema Example CREATE SCHEMA PGMRXXX2; CREATE SCHEMA PGMRUSR2; The owner for both schemas is the QPGMR group profile. The group profile has *ALL object authority while the *PUBLIC authority is set to *CHANGE depending on the QCRTAUT system value. Figure 4. CREATE SCHEMA with System Naming The object owner and the assigned object authorities can be displayed, set or removed with either the EDTOBJAUT (Edit Object Authority) command or the System i Navigator Permission interface. With System i Navigator this interface can be accessed by right-clicking on a database object and selecting the Permissions task. The object ownership can be changed with the CHGOBJOWN (Change Object Owner) CL command. There is no SQL statement or System i Navigator interface to change the object owner with SQL. Creating a Schema using SQL Naming When creating a schema with SQL Naming in effect the rules are more complicated: - If a user profile with the same name as the schema exists, the owner of the schema and all objects created into this schema is that user profile. For example, a developer creates the schema WEBERP for a new web-based application. There happens to be an employee named Weber Peter whose user profile is also WEBERP. The user profile WEBERP becomes the owner of the WEBERP schema. - If the schema name does not match a user profile name, the owner of the schema is the user profile of the job executing the CREATE SCHEMA statement. 
When creating a schema with SQL Naming, the OWNER option setting of the user profile definition is ignored. The owner is the only user profile having any authority to the schema. If other users require object authority to the schema, the owner or a user profile with security administration authority (*SECADM) or all object authority (*ALLOBJ) can grant authority to the schema using the GRTOBJAUT (Grant Object Authority) CL command. There is no SQL statement available to grant object authority for a schema. - For database objects created with SQL Naming the *PUBLIC object authority is always set to *EXCLUDE. The QCRTAUT system value is ignored. To compare the differences between system and SQL Naming, the schemas previously created with System Naming are dropped and recreated by the same user with SQL Naming. When comparing the schemas with regard to the ownership and object authorities we will discover several differences: - The owner of schema PGMRXXX2 is PGMRGRP2, the creator of the schema. The owner setting for the PGMRGRP2 user profile is ignored. - The owner PGMRGRP2 gets *ALL object authority, while the *PUBLIC object authority is set to *EXCLUDE. Contrary to System Naming where the *PUBLIC object authority depends on the QCRTAUT system value. Consequently, a different developer who is also member of the QPGMR group profile is not allowed to modify the schema or to create an object in this schema. This behavior may be problematic for a company that works intensively with group profiles and where the owner of all objects created by any developer has to become the group profile. - The owner of schema PGMRUSR2 is PGMRUSR2, because there is an existing user profile with this name. Previously, when creating the schemas using System Naming the owner of both schemas was the QPGMR group profile. - The owner of the PGMRUSR2 schema, PGMRUSR2, gets *ALL object authority, while the *PUBLIC authority is set to *EXCLUDE. Even though the user PGMRGRP2 was able to create the schema PGMRUSR2, that user does not have any authority on the schema. PGMRGRP2 cannot modify the schema nor create or change any object within this schema. The following screen shots show the permissions (also known as authorities) for the schemas created with SQL Naming. Figure 5. CREATE SCHEMA with SQL Naming Table, Views and Indexes – Objects to maintain Data Tables are objects to store persistent user data in multiple columns and rows. Views and indexes are database objects associated with a table but do not contain any data. Creating Tables, Views and Indexes with System Naming The rules for determining the ownership and applying object authorities match the rules that are used for creating schemas. The owner is either the creator of the object or the group profile and the *PUBLIC object authority is set to the QCRTAUT system value. For the next example (Figure 6. CREATE TABLE with System Naming) the table EMPLOYEE is created with System Naming in two different schemas, PGMRUSR and PGMRXXX, using the following SQL statement: Listing 9. Create Table EMPLOYEE Create Table MySchema/Employee (FirstName VarChar(50) Not NULL Default '', Name VarChar(50) Not NULL Default '', Street VarChar(50) Not NULL Default '', ZipCode VarChar(15) Not NULL Default '', City VarChar(50) Not NULL Default '', Country Char(3) Not NULL Default '', Birthday Date Not NULL); Both schemas were previously created with the CREATE SCHEMA statement using System Naming by the user profile PGMRGRP2. 
The owner of both schemas is the QPGMR group profile, based on the OWNER setting of the PGMRGRP2 user profile. The group profile is the owner of the table created in schema PGMRUSR even though there is a user profile PGMRUSR. The owning profile, QPGMR, has *ALL object authority while the *PUBLIC authority is set to *CHANGE (based on the QCRTAUT system value). Consequently, all users that are associated with the QPGMR group profile are allowed to access, modify and even delete the EMPLOYEE table in both schemas, PGMRXXX and PGMRUSR. Figure 6. CREATE TABLE with System Naming Creating Tables, Views and Indexes with SQL Naming When using SQL Naming, different rules apply: - If a user profile with the same name as the schema into which the table, view or index is created exists, the owner of the table is that user profile. - If there is no user profile with the same name as the schema, the owner will be either the user profile or group profile depending on the OWNER option setting in the user profile definition. - When creating database objects other than schemas with SQL Naming, the OWNER option setting in the user profile definition is considered and the group profile will become the owner of the database object. Figure 7. CREATE TABLE with SQL Naming in a Schema that does not match a User Profile displays the authority results for the EMPLOYEE table created in the schema PGMRXXX2 by user PGMRGRP2. The owner of the EMPLOYEE table is the QPGMR group profile. The owner QPGMR has a value of *ALL for object authority while the *PUBLIC authority is set to *EXCLUDE. As a result, all users that are associated with the QPGMR group profile are not only allowed to access the EMPLOYEE table, but are also allowed to modify or delete the table. Figure 7. CREATE TABLE with SQL Naming in a Schema that does not match a User Profile In the next example (Figure 8. CREATE TABLE with SQL Naming in a Schema that does match a User Profile) the user PGMRGRP2 tries to create the EMPLOYEE table in the PGMRUSR2 schema. The schema was previously created with the CREATE SCHEMA statement with SQL Naming by the user PGMRGRP2. Because there is a user profile named PGMRUSR2, this user profile became the owner of the schema and got *ALL object authority for the schema while *PUBLIC authority was set to *EXCLUDE. The execution of the CREATE TABLE statement fails with an SQL State value of 42501, because PGMRGRP2 is not authorized to the PGMRUSR2 schema, even though that user created the schema. (Figure 5. CREATE SCHEMA with SQL Naming demonstrates the lack of authority that user PGMRGRP2 has on the PGMRUSR2 schema). To allow PGMRGRP2 to create a table or any object in the PGMRUSR2 schema with SQL Naming, that user profile or the associated QPGMR group profile must be explicitly authorized to the schema, by executing either the GRTOBJAUT or EDTOBJAUT command. Figure 8. CREATE TABLE with SQL Naming in a Schema that does match a User Profile Assuming that the QPGMR group profile was explicitly authorized to the PGMRUSR2 schema, the PGMRGRP2 user would be able to create the EMPLOYEE table in this schema. Because the table is created using SQL Naming and PGMRUSR2 is an existing user profile, this user profile again becomes the owner of the EMPLOYEE table, with *ALL object authority while the *PUBLIC object authority is set to *EXCLUDE as shown in Figure 9. Permissions for Schema = User Profile and Table. In this situation, the PGMRGRP2 user is able to create the table, but is not allowed to use it in any way.
The PGMRGRP2 user or the QPGMR group profile must be explicitly authorized to get access to the object they previously created. Figure 9. Permissions for Schema = User Profile and Table Potential Problem Situation Creating database objects for existing applications using SQL Naming may cause unexpected problems on the IBM i. Assume that all of the database objects for the existing material management application are stored in a library called MAWI. This library was created long ago with the CRTLIB command. The owner of library MAWI is the QPGMR group profile. *PUBLIC object authority for library MAWI is *CHANGE. On this system all user profile names are formed from the first 2 characters of the last name and the first 2 characters of the first name. A data entry clerk in the HR department is named Willy Maier, so accordingly his user profile is MAWI. If a developer creates a new table or view in library MAWI using SQL Naming, the owner of this new table will be Willy Maier, because he has a user profile matching the name of the library. Only the MAWI user profile will have access authorities for the new table or view. The developer and any other user are excluded, due to SQL Naming forcing the *PUBLIC access authority to be set to *EXCLUDE by default. SQL Routines are executable SQL objects similar to high-level language (HLL) programs. The term SQL Routine is used to refer to a stored procedure, a trigger or a user-defined function (UDF). These routines are written in either SQL or a high-level language such as RPG or COBOL. In either case a Stored Procedure or a UDF is created with one of the following SQL statements: - CREATE PROCEDURE - CREATE FUNCTION Ownership and Object Authorities for SQL Routines The rules for determining the object ownership and authority match the rules that are used for creating tables, views or indexes with either System or SQL Naming. The SQL statements embedded in the routine are executed based on the naming convention used when creating the routine, even though the SQL routine might be invoked in a runtime environment that uses a different naming mode. For example, a stored procedure was created from an interface where SQL Naming was used. If this stored procedure is called from an RPG program with Embedded SQL where System Naming is used by default, the embedded SQL statements in the RPG program will use System Naming while the SQL statements within the stored procedure are executed with SQL Naming. The ownership and access authorities of the routine object are only used for calling this routine. The object ownership and authority values may or may not be applied to the SQL statement executed by the routine itself. The authorization-ID (or user profile) that is applied to the SQL requests run by the routine depends on the naming convention used at the time the routine was created and whether the SQL statements executed by the routine are static or prepared dynamically. When using System Naming, DB2 utilizes the user profile that calls the routine. When using SQL Naming to execute static SQL statements within the routine, DB2 uses the routine's owner by default to perform its authorization processing on the static SQL statements. The user profile that called the routine is always applied by default to the execution of dynamic SQL statements by the routine, independent of whether System or SQL Naming is used.
The user profiles applied to the security validation and execution of static and dynamic SQL statements can be manually controlled in the SET OPTION statement by specifying the USRPRF (User Profile for static SQL statements) and DYNUSRPRF (User Profile for dynamic SQL statements) options. The USRPRF option can be set to one of the following values: - *NAMING: *USER is used for System Naming and *OWNER for SQL Naming - *OWNER: Static SQL Statements are executed with the owner's authorities - *USER: Static SQL Statements are executed with the user's authorities Option DYNUSRPRF can be set to: - *USER: Default value for both System and SQL Naming. Dynamic SQL statements are executed with the user's authorities - *OWNER: Dynamic SQL statements are executed with the owner's authorities If you are using SQL Naming and want your dynamic SQL statements to be executed by the same user profile as your static SQL statements, you would need to set both of these options, USRPRF and DYNUSRPRF, to either *OWNER or *USER. The following SQL statement shows the abridged source code for a SQL stored procedure that will be created using SQL Naming. At runtime all of the static and dynamic SQL statements embedded in the procedure will be executed by the *OWNER based on the values specified on the SET OPTION clause. Listing 10. CREATE PROCEDURE Create Procedure PGMRUSR2.HSINFO (In Parm1 Integer) Dynamic Result Sets 1 Language SQL Set Option DYNUSRPRF = *OWNER, USRPRF = *NAMING Begin /* Routine code goes here */ End; Triggers are a special kind of SQL routine. Trigger programs are linked to either a table or a SQL view and are activated by the database manager for a specified event (Insert, Update or Delete). The ownership of trigger programs is determined in the same way as for all other SQL routines, but the object and execution authorities are set differently by the CREATE TRIGGER statement. *PUBLIC object authority is set to *EXCLUDE, independent of whether System or SQL Naming is used. For all other SQL objects created with System Naming the *PUBLIC object authority is set to the QCRTAUT system value. The next example shows the source code for a SQL Trigger to be created with System Naming. Listing 11. CREATE TRIGGER CREATE TRIGGER PGMRUSR/TRGEMPLOYEE BEFORE INSERT ON PGMRUSR/EMPLOYEE REFERENCING NEW ROW AS N FOR EACH ROW MODE DB2ROW BEGIN ATOMIC /* Source code goes here */ END; Figure 10 displays the permission chart for this trigger program created with System Naming. The owner is the QPGMR group profile and the owner has all object authorities while the *PUBLIC object authority is set to *EXCLUDE. Figure 10. Trigger created with System Naming To create a trigger program with SQL Naming in a schema with the same name as an existing user profile, the creator must either be explicitly granted authority to the table or view or have one of the special authorities *ALLOBJ or *SECADM. The owner of the trigger program will be either the user with the same name as the schema or the creator's user profile or its associated group profile depending on the OWNER option setting in the creator's user profile definition. The following figure (Figure 11) shows the permission chart for the trigger program created by user PGMRGRP2 into schema PGMRUSR2 using SQL Naming. Because PGMRUSR2 is also an existing user profile, this user profile becomes the owner of the trigger program. The owner PGMRUSR2 has *ALL object authority while *PUBLIC object authority is set to *EXCLUDE. Figure 11.
Trigger created with SQL Naming The trigger will always be activated with the adopted authority of the owner of the trigger program, independent of the naming convention used to create the trigger. GRANT / REVOKE Authorities Whether your database objects are created with System or SQL Naming, the ownership and object authorities must be checked carefully. If the default behavior does not meet your security requirements, the GRANT or REVOKE SQL statements can be used to adjust settings. Object authorities for any user or group profile and even for *PUBLIC use can be set with the GRANT statement. If object authorities must be removed, the REVOKE statement can be used. The GRANT and REVOKE statements can be used in composition with all database objects that can be accessed or executed with the exception of schemas and triggers. It is also possible to use the GRTOBJAUT (Grant Object Authority) and EDTOBJAUT (Edit Object Authority) CL commands to modify object authorities for database objects. However, there are some differences in providing authorities with either CL commands or SQL statements. SET SESSION AUTHORIZATION The SET SESSION AUTHORIZATION and SET SESSION USER statements can impact object ownership and authorities when working with SQL Naming. After a connection has been established, the user profile can be switched to a different user profile (authorization id) to adopt the access authorities of this user profile by using the SET SESSION AUTHORIZATION or SET SESSION statements. You have already learned how the user profile value is applied to object ownership and authorities when creating objects with SQL Naming. You should now have a good understanding of why DB2 objects created with System or SQL Naming have ownership and access authorities assigned differently. Because of these different behaviors, you should decide on a single naming convention method for all your database objects created with SQL (or at least for all database objects located in a single schema). - If you intend to design an application that has the ability to run on different database systems, SQL Naming is the right method to achieve maximum portability. - If you are working only with DB2 for i and have to maintain older applications with a mix of DDS based objects and SQL database objects and use IBM i specific object authorities (such as group profiles), System Naming is the better solution. And now have fun in planning, designing, creating and maintaining database objects with either system or SQL Naming.
<urn:uuid:053068ad-5820-4918-ad6f-e5b8707b260e>
CC-MAIN-2017-04
http://www.ibm.com/developerworks/ibmi/library/i-sqlnaming/index.html?ca=drs-
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00557-ip-10-171-10-70.ec2.internal.warc.gz
en
0.869766
6,140
3.265625
3
AlphaGo and What it Means for Artificial Intelligence In the artificial intelligence and machine learning community the big story a few weeks ago was the “Go” competition between Lee Sedol and the computer program AlphaGo built by Google DeepMind. As of today, March 15, 2016, AlphaGo is winning the match against Sedol with a score of 3 games to 1. AlphaGo is the next progression in game-playing artificial intelligence, following in the footsteps of its ancestors IBM Deep Blue (the chess-playing computer) and IBM Watson (the Jeopardy-playing computer). The difference between AlphaGo and its predecessors is the complexity of the game Go. Go is played on a very large board and allows for an extremely large number of moves each turn, which quickly makes many other game-playing algorithms infeasible. The achievement of AlphaGo and the DeepMind team is incredible and needs to be applauded. The solution they arrived at for an extremely complicated problem has achieved exceptional results and has done wonderful things for artificial intelligence research. However, many people are considering this the last great game for machines to conquer, but I think this couldn't be further from the truth. Every time we write a program to solve a problem we increase our own understanding of that problem, and the outcome of that program helps us better understand and make discoveries we hadn’t previously realized. In the case of Deep Blue and chess, the game has never been the same, but the players today are better than they have ever been before because the theoretical and practical understanding of the game has increased significantly due to artificial intelligence. This is the same with Go and all areas where artificial intelligence has been or can be applied. Programs written by people only reflect what we already know, applied in a way that our own minds have trouble matching. Artificial intelligence and machine learning give us just another tool to research a topic and grow our understanding. In the case of BPM, the introduction of new analytical technology in the space will only help us better understand business problems and help us make better decisions. These new technologies aren’t competitors, as many people suggest, but tools to be used to help us grow. There is no doubt still room to grow and learn within most games computers have been used to optimize, and there are still an uncountable number of problems in the real world that artificial intelligence can be used to help better understand. The world of game theory and artificial intelligence goes far beyond the bounds of board games, and I wouldn’t consider any of the great board games to be completely solved yet.
<urn:uuid:50254913-fa1f-4baf-8674-4803087c7346>
CC-MAIN-2017-04
http://www.bp-3.com/blogs/2016/04/alphago-and-what-it-means-for-artificial-intelligence/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00007-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964072
505
2.828125
3
In 1961, a new Digital Equipment Corporation PDP-1 mini-computer was installed at M.I.T. In order to demonstrate its full capabilities, including its cathode ray tube display, students Martin Graetz, Steve Russell, and Wayne Wiitanen created a game which simulated a battle between two spaceships. They called it Spacewar! and it enabled two users to each control a spaceship using either the keyboard or a joystick. It soon had a cult following and play at M.I.T. had to be banned during working hours. While it wasn’t the first-ever computer game, Spacewar! was the first shooter game and the first one to require quick reflexes. Today you can play Spacewar! using a browser-based PDP-1 emulator.
<urn:uuid:e3dfcf97-68bb-4428-b778-535d77e3ad4c>
CC-MAIN-2017-04
http://www.cio.com/article/2369254/education/154003-Busy-Beavers-10-things-M.I.T.-computer-scientists-have-given-the-world.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00217-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958167
164
2.65625
3
This time, we’ll use the shortcuts to solve some different types of problem. Instead of “find the mask”, this time the challenge is to “find the subnet”. Ready? Here’s the first problem: given a host address of 192.168.1.100 and a subnet mask of “/27”, determine the host’s subnet, the legal range of host addresses on that subnet, and the broadcast address for the subnet. Since we know we’ll need it, here’s the powers of two chart: The first step is to determine the subnet increment. Since there are 5 host bits (32 total bits in the mask minus 27 network-type bits), the increment is 32 (from the chart, 2 to the 5 is 32). Thus, the subnets (incrementing by 32 in the last octet) are: Since 100 lies between 96 and 128, the host is on the 192.168.1.96/27 subnet. Based on the shortcuts we developed before, we can determine that the range of hosts on that subnet is 192.168.1.97 – 126 (one more than the current subnet to two less than the next subnet), and that the broadcast address is 192.168.1.127 (one less than the next subnet). Done! Let’s do another … given a host address of 10.50.100.200 and a mask of 255.255.255.192, find the host’s subnet, the range and the broadcast address. Recall that when given a mask in dotted-decimal, the subnet increment can be determined three ways: - Subtract the value in the octet of interest from 256 - Two to the number of host bits in the octet of interest. - The value of the least significant subnet bit In the case of a 255.255.255.192 mask, the first approach gives: 256 – 192 = 64. For the the second approach, a 255.255.255.192 mask is a “/26” (that is, six host bits), and two to the sixth is 64. In the third approach, a decimal 192 is equivalent to 11000000 in binary, and the least significant subnet bit (the rightmost one) is the 64 bit. No matter how you do it, the increment is 64. Since the increment is 64 (in the last octet), the subnets are: Since 200 is greater than 192 (the last subnet), the host is on the 10.50.100.192/26 subnet, the range of hosts on that subnet is 10.50.100.193 – 254, the broadcast address is 10.50.100.255, and we’re done! Let’s do another … the address and mask are 172.16.1.203/29. Based on the mask, we can see that the subnets increment by eight (because there are three host bits). While it’s possible to count by eight’s all the way to 203, for most people that would take a while (and be error prone). A quicker approach is to divide 203 by eight, which we can round down to 25*. Now just multiply 25* by 8 (the increment) to get 200, which is the subnet (172.16.1.200/29, to be exact). Since the increment is eight, the next subnet is 208 (actually 172.16.1.208/29), the range of addresses on the host’s subnet is 172.16.1.201 – 206, and the broadcast address is 172.16.1.207 for the host’s subnet. Okay, since you insist, just one more … the address is 10.1.2.153, and the mask is 255.255.255.252 (that is, “/30”). Based on the mask, we can see that the subnets increment by four. Just take 256 and subtract 252, or use the fact that there are two host bits, or that the rightmost one in 11111100 (252 written in binary) is worth four. While it’s possible to count by fours all the way to 153, that would be boring, so let’s just divide 153 by 4, giving 38 (if we round down). Now multiply 38 times 4, which is 152. Thus, the subnet is 152 (10.1.2.152/30), the range is 10.1.2.153 – 154, and the broadcast is 10.1.2.155 (the next subnet is 10.1.2.156). Next time, we’ll use the shortcuts to solve more complex subnet masking problems. 
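If you would rather check these shortcuts with code than with the powers-of-two chart, the short script below reproduces the arithmetic (increment, subnet, host range, broadcast) for the examples worked above. It is an illustrative sketch, not part of the original lesson, and it only handles masks that subnet the last octet, as in these examples.

def subnet_info(last_octet, prefix):
    """Subnet, host range and broadcast for a mask applied to the last octet."""
    host_bits = 32 - prefix
    increment = 2 ** host_bits                    # e.g. /27 -> 32
    subnet = (last_octet // increment) * increment
    next_subnet = subnet + increment
    first_host = subnet + 1
    last_host = next_subnet - 2
    broadcast = next_subnet - 1
    return increment, subnet, first_host, last_host, broadcast

# 192.168.1.100/27 -> increment 32, subnet .96, hosts .97-.126, broadcast .127
print(subnet_info(100, 27))
# 10.50.100.200/26 -> increment 64, subnet .192, hosts .193-.254, broadcast .255
print(subnet_info(200, 26))
# 172.16.1.203/29  -> increment 8, subnet .200, hosts .201-.206, broadcast .207
print(subnet_info(203, 29))
# 10.1.2.153/30    -> increment 4, subnet .152, hosts .153-.154, broadcast .155
print(subnet_info(153, 30))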
Authors: Al Friebe *Correction to a typo in the original post
<urn:uuid:1691c491-302b-4521-9b75-382903c744c5>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2009/04/07/subnetting-shortcuts-part-5/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00429-ip-10-171-10-70.ec2.internal.warc.gz
en
0.890937
1,043
3.59375
4
A new report by the EU’s cyber security agency ENISA analyses the conditions under which online security and privacy seals help users to evaluate the trustworthiness of a web service. The report underlines the need for clear icons, standards, assessment and evaluation methodology. Furthermore, a second report addresses the framework, methodology and evaluation for security certification and provides a qualitative analysis of certification practices in the EU. Numerous policy documents identify marks, seals, logos, icons (collectively referred to as “seals”). These help users to judge the trustworthiness of services offered on the web. But there are many obstacles for users to use these seals, as it is not clear how the seals are granted to the services. ENISA analyses the current situation and identifies key challenges, solutions, and recommendations for online seals. The two reports deal with (1) how users can use seals to base their trust in a service, and (2) what we can learn from other certification initiatives to improve these seals. Some of the key challenges and corresponding recommendations are: Users suffer from information overload. Therefore, web designers need to develop clearer privacy icons, which are based on research, including cultural and legal differences. Users are not sufficiently aware of what seals mean. Educational material should be provided to spread knowledge of the existence and meaning of seals. Seals are not checked by the user. Service providers and web developers need to provide and implement seals that can be automatically checked. Transparency. Policy makers should demand reliable statistics on certification and seals. The bodies issuing certificates/seals should keep updated, public records on certificates/seals that they have issued. Reduction of burden. Standardization bodies and responsible stakeholders should develop best practices and standards merging the requirements for security and data protection in order to reduce burden. Enforcement. The national policy makers should ensure enforcement of such requirements for genuine compliance, for instance by applying sanctions and/or ad-hoc assessments carried on by third parties. The Executive Director of ENISA, Professor Udo Helmbrecht remarked: “The effectiveness of trust signals must be improved. Regulatory bodies at the EU and national level should set incentives for service providers to obtain better online security and privacy protection”.
<urn:uuid:c4f4ccab-7b15-4bbc-a86e-2d7f3114e818>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2014/01/15/when-can-you-trust-web-services-to-handle-your-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00429-ip-10-171-10-70.ec2.internal.warc.gz
en
0.920655
460
2.59375
3
Microservice architecture is an emerging programming solution that major companies like Netflix, Twitter, PayPal, and Amazon are already using to improve their services. Microservices are independent processes within a larger application that communicate with each other to perform a larger overarching task. This alternative programming philosophy focuses on getting things to work correctly quickly instead of taking elegant, elaborate approaches to adding and debugging functionality. It works well for business solutions where a monolithic approach is ill-equipped to handle the immediate changes both the business and customers demand. Microservices work well within a larger application that sports many complex features. The design notion caters well to complexity, but this can also have a negative side: keeping track of all the different moving parts can be difficult. The architecture approach also works well when the application interfaces with web apps, mobile apps, and traditional HTTP browser interfaces. Companies may find the variety in user interfaces and platforms is a powerful motivator to transition away from all-inclusive programming solutions in favor of the more versatile microservices model. Microservices offer incredible upgradability potential for continuous delivery. The decoupled, modular approach breaks individual features into separate components. This means the components can be worked on individually without much risk of impacting outside features. The approach lets small teams and single developers work on individual components, so they can work very quickly. Developers that are using microservices find it relatively simple to push new code as soon as it’s ready. This means that if a customer reports a bug, the developers can immediately step in, address the issue, and push the fix. This nimble approach also means it favors speed of development, so the disjointed code can become difficult to follow and may encounter efficiency issues. A Relationship, Not a Book on the Shelf The microservice model works to change and adapt with a business’s needs: microservice-based platforms are an ongoing relationship concerning the development process. The quick upgradability of the platform makes it possible for developers to quickly integrate new features and fixes that end-users request as they come up. This differs from other solutions platforms that are treated as a complete product; the development team takes ownership of the software in an ongoing relationship. While this approach can work with singular-structured services, the all-in-one nature of those setups makes them less compatible with continuous delivery. Monitoring Is Essential Microservices rely substantially on communicating with other parts of the same application via platform-agnostic communication standards like REST and SOAP. Additionally, each service handles its own data, so the platform will use decentralized storage. This means that one part of the platform can be working while a specific microservice fails because the services it depends on are unavailable. It’s wise, then, for developers to build microservices with contingencies for when other parts of the platform aren’t working. However, even with fallback contingencies in place, the application will encounter problems as a whole when part of it goes down. Your company can avoid downtime using a web application load testing service that helps identify problems before the system fails.
Monitoring ensures you’re aware of the hardware and software capabilities necessary to keep your business’s microservice system running at 100 percent.
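A concrete way to see the "contingency" idea is a service call wrapped in a timeout and a fallback, so one dependency being down degrades a single feature instead of taking the whole application with it. The sketch below is illustrative only; the endpoint name, timeout, and fallback value are invented, and real systems typically reach for a circuit-breaker library rather than hand-rolled code like this.

import json
import urllib.error
import urllib.request

FALLBACK_RECOMMENDATIONS = ["bestsellers"]   # safe default when the service is down

def get_recommendations(user_id, timeout=0.5):
    """Call a (hypothetical) recommendation microservice, falling back on error."""
    url = "http://recommendations.internal/api/v1/users/%s" % user_id
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except (urllib.error.URLError, ValueError, OSError):
        # Dependency unreachable, slow, or returning garbage: degrade gracefully
        # instead of propagating the failure to the caller.
        return FALLBACK_RECOMMENDATIONS

print(get_recommendations("42"))   # prints the fallback unless the service exists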
<urn:uuid:01216c93-c127-4d91-a1b7-42f45785be5d>
CC-MAIN-2017-04
https://www.apicasystem.com/blog/microservice-architecture/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00153-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933272
664
2.703125
3
Joseph Schumpeter, one of my favorite economists, coined the term "creative destruction" to describe the way in which innovation disrupts how things are done, and in the process, gives rise to new companies and new ways of operating. What's been called the Internet of Things -- the rapidly proliferating connection of all devices, sensors, machines and people -- is set to create disruption on a huge scale. This ups the ante significantly for analytics and real-time computing. The driverless car is an excellent example as a disruptive innovation that impacts both consumers and businesses. For instance, when driverless cars become common, not only will they change commuters' experiences, they are expected to lessen the incidence of traffic accidents, improve the density of road use, smooth subsequent planning for maintenance, ease long-term planning for other transportation systems such as light rail, and much more. What makes all of this possible? The Internet of Things' flow of data between the cars, street lights, people, radios, cellphones, etc, And the real-time analytics that makes the important real-time decisions for the driverless cars. It is only human nature. Once consumers and businesses have a taste of the Internet of Things and real-time analytics benefits, they'll want more of it. In fact, it has been said that along with the influx of data, by 2017 more than 50 percent of analytics implementations will make use of event data streams generated from instrumented machines, applications, and/or individuals. How can companies keep up with this real-time analytics demand? By changing how analytics is currently done to fit the new digital need, including: - Analytics of vast amounts of data will increasingly be performed in the cloud or on devices themselves. - New ways of distributing analytics will be used. Currently, a lot of analytics applications are large and run on servers. In the next few years we'll start seeing more and more limited and targeted "apps" running on small sensors embedded in devices. These will have to be updated remotely, as it will be too expensive to distribute the analytics any other way. - The analytics conducted on servers and laptops today will start being performed on sensors and chips, which will allow decisions to be made far from where the code was originally written. For example, the personal devices that monitor and analyze individuals' health or the success - or otherwise - of their workout offer real time, minute-to-minute performance insights and suggestions, telling their wearer how to achieve the fitness goals they've set. We'll increasingly see those immediate insights and recommendations extended to many more areas of life and business. Important to note is that for businesses to jump into the Internet of Things and truly take advantage of the real-time analytics benefits it can offer, organizations must look beyond the existing data and analytics and approach a larger strategy to enable success. Elements of the strategy should involve test and learn pilots, a data governance program, and a technology infrastructure that supports mobile and big data. If you think that the world of driverless cars, robots carrying out maintenance in hazardous locations like oilrigs, or advertising that reads and responds to individuals' unique facial expressions sound like science fiction, it's time to think again. These are all developments happening today and they're prompting a new exciting phase in analytics that needs to be addressed now. 
Those that embrace the data will be more likely to be surfing on top of the wave of creative destruction, instead of having it crash down on top of them.
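To make the "analytics on the device itself" idea concrete, here is a deliberately tiny sketch of the kind of logic that could run on a sensor or gateway: it keeps a rolling window of readings and flags values that drift far from the recent average, so a decision can be made locally without shipping every data point to a server. The window size, warm-up length, and threshold are arbitrary illustrative choices, not recommendations.

from collections import deque

class RollingAnomalyDetector:
    def __init__(self, window=20, threshold=3.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def add(self, value):
        """Return True if the new reading looks anomalous versus the window."""
        flagged = False
        if len(self.readings) >= 5:   # wait for a short warm-up period
            mean = sum(self.readings) / len(self.readings)
            var = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            std = var ** 0.5
            if std > 0 and abs(value - mean) > self.threshold * std:
                flagged = True
        self.readings.append(value)
        return flagged

detector = RollingAnomalyDetector()
stream = [20.1, 20.3, 19.9, 20.0, 20.2, 20.1, 35.7, 20.0]   # simulated sensor feed
print([detector.add(v) for v in stream])   # the 35.7 spike is flagged locally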
<urn:uuid:978fdc0b-916b-4ae0-a336-cdf52d84d79e>
CC-MAIN-2017-04
http://www.computerworld.com/article/2475818/business-intelligence/the-internet-of-things-and-real-time-analytics.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00181-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951502
709
2.671875
3
The truth has a funny way of coming out when there’s a camera in the sky taking pictures of, well, everything. Google Earth has revealed a lot since it was introduced in 2001, from nude backyard sunbathers to North Korea’s expansive prison camps where an estimated 200,000 live and 400,000 have died. Most recently, researchers used Google Earth to uncover that many countries around the Persian Gulf have been under-reporting how many fish they’re catching to the U.N. “Underreporting fish catches can jeopardize a country’s food security, economy, not to mention impact entire marine ecosystems,” a researcher told Quartz. “This is particularly important in the case of the Persian Gulf, where fisheries are the second most important natural resource after oil.” Scientists used Google Earth’s ruler tool to measure the size of each trap and then calculated the daily catch based on historical records, length of the fishing season and composition of fish species present at each location. The scientists reported that every country in the region, except Kuwait, has been grossly under-reporting their catches, in some cases reporting well under half of what is actually being caught. Google Earth is also used to identify illegal logging in locations like the Amazon rainforest. In 2007, the tribal leader of the Surui people in Brazil learned of Google Earth during a visit to an Internet café and has since started using Google Earth to ensure a sustainable future for his people. “Google’s technology plays an important role in helping build a better future -- a future with a conscience,” said Chief Almir Surui on one of Google’s promotional pages. Google Earth isn’t all prison camps and rainforest devastation, though. In 2006, a group of students from Oregon State University made a 220-foot-wide crop circle in the shape of the logo for the Mozilla Firefox Web browser. The team contracted a plane and a helicopter to help document their progress. “Maybe the Google Earth cameras picked it up!” the project’s page reads. Google Earth did pick it up and it’s still viewable today. Like a crop circle, the U.S. Naval Amphibious Base in Coronado, Calif., looks unremarkable from the ground. But in 2006, a view from Google Earth’s cameras showed that its four L-shaped buildings form the shape of a swastika. Former host of The Power Hour radio program Dave von Kleist voiced his objection to the buildings on air. ”I’m concerned about symbolism,” von Kleist told the Los Angeles Times. “This is not the type of message America needs to be sending to the world.” More complaints followed, and in 2007 the Navy said it would do something about it. But with an estimated $600,000 price tag for a redesign, the structure has still not been changed. Built in the late 1960s, the Navy contended that the swastika-shaped design was unintentional. Others have pointed out, however, that standard architectural practice would involve the creation and review of both two- and three-dimensional plans. It seems unlikely that a $2.3 million project (in 1969 dollars) would be built without at least glancing at some blueprints. In 2003, the S.S. Jassim, a Bolivian cargo ferry, ran aground off the coast of Sudan. Not the only shipwreck visible on Google Earth, the 264-foot ship is, however, one of the largest. The Davis-Monthan Air Force Base just outside Tucson, Ariz., is home to more than 4,000 military aircraft. Aircraft are parked there and broken down to be salvaged or scrapped. 
Though the base was never really a secret, not many people knew about it before Google Earth came along and revealed the spectacle. While its imagery has shed light on some unpleasant truths, Google Earth also reveals beautiful things. Stratocam.com allows users to view and rate the most striking imagery captured by Google Earth around the globe. Google also offers tours of various cities, buildings, environments, UNESCO sites and even Mars and the moon through using Google Earth, Google Maps and Street View.
<urn:uuid:f7594239-eb71-476c-bc3e-fce9a7617570>
CC-MAIN-2017-04
http://www.govtech.com/internet/Secrets-Uncovered-by-Google-Earth.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00089-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949658
877
3
3
As integrated circuits get smaller, past the current 20nm-22nm process technology, they increasingly come up against quantum mechanics quirks such as electron tunneling and current leakage. Chip designers and academic researchers mainly pursue the next nanometer threshold, in adherence to the ITRS (the International Technology Roadmap for Semiconductors), using all kinds of clever workarounds to get there. However, another segment of ambitious researchers have chosen to focus on fundamental redesigns to create circuits at the atomic scale. (For comparison’s sake, let’s recall that one nanometer is 50,000th of a hair width and a silicon atom has a diameter of .22 nanometers.) Researchers from the University of Rochester and Duke University have achieved a breakthrough in this exciting space, using a bi-layered molecular interface to send an electric charge across a circuit one molecule wide. Their work appears in the April edition of the journal Advanced Material Interfaces. Led by Alexander Shestopalov, an assistant professor of chemical engineering at the University of Rochester with a focus on unconventional nanoscale electronics, the team used a single layer of organic molecules to connect the positive and negative electrodes in a molecular-junction OLED (organic light-emitting diode). One of the main problems that scientists face in developing circuits at the atomic scale is how to control the current flowing through the circuit. Shestopalov responded to the challenge by adding a second, inert layer of molecules. This inert layer acts like the plastic casing on electric wires that insulates the live wires from the surrounding environment. “Until now, scientists have been unable to reliably direct a charge from one molecule to another,” said Shestopalov in an official release. “But that’s exactly what we need to do when working with electronic circuits that are one or two molecules thin.” The inert layer is comprised of a microscopic chain of organic molecules. On top of that is the active layer, a one-molecule thin sheet of organic material. Following the wire analogy, the top layer conducts the charge while the lower inert layer insulates it, thus reducing interference. Shestopalov was able to control the current by making small changes to the organic molecules’ functional groups – using some functional groups to accelerate the charge transfer and others to slow it down. The ability to alter the functional group enables fine-tuning of the charge to support different applications. For example, an OLED may need a faster charge transfer to output a certain luminescence, while a biomedical injection device may require a slower rate for delicate procedures. The accomplishment is a significant milestone for molecular electronic devices, however there is still work ahead, namely with respect to durability. “The system we developed degrades quickly at high temperatures,” said Shestopalov. “What we need are devices that last for years, and that will take time to accomplish.” The applications for such nanoscale circuitry are numerous, ranging from solar cells and other photovoltaics to drug delivery and bioimaging – not to mention the potential for atomic-scale computing.
<urn:uuid:e4af8142-6a21-4c24-92da-3d8631abe205>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/04/23/research-holds-promise-atomic-scale-circuitry/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00483-ip-10-171-10-70.ec2.internal.warc.gz
en
0.899489
648
3.3125
3
What is the minimum number of disks required in a RAID5 array?
A server has three disks of 80GB each and must manage a database with 4 million records of 30KB each. The best configuration for this server, with the criteria being performance, configurability and flexibility, is:
The correct command sequence to create logical volumes on a Linux system is:
What is the usable disk space of a RAID 5 array of five 18GB drives with one drive dedicated as a spare?
You decide to use the logical volume manager (LVM) to manage four 4GB disk drives. After creating the volume group, how would you create a 10GB logical volume called big-app?
What is the purpose of vgextend?
Which RAID level provides the most redundancy?
Which pseudo-file contains information on the status of software RAID devices?
What information does the file modules.dep provide?
Before compiling a new kernel, what needs to be done?
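Several of these questions come down to simple arithmetic. As an illustrative aid (not part of the original question set), the snippet below computes the usable capacity of a RAID 5 array, which is the capacity of (n - 1) members once any hot spare is set aside, and roughly checks whether a requested logical volume fits in a volume group. The numbers mirror the questions above; verify against the official answers.

def raid5_usable_gb(disks, disk_gb, spares=0):
    """RAID 5 spends one member's worth of space on parity; spares hold no data."""
    members = disks - spares
    if members < 3:                      # RAID 5 needs at least three active members
        raise ValueError("RAID 5 requires at least 3 active disks")
    return (members - 1) * disk_gb

def lv_fits(vg_disks, disk_gb, lv_gb):
    """Rough check: does the requested LV fit in the volume group's capacity?"""
    return lv_gb <= vg_disks * disk_gb

# Five 18GB drives, one dedicated as a spare -> 4 members -> 3 x 18GB usable
print(raid5_usable_gb(disks=5, disk_gb=18, spares=1))    # 54

# Four 4GB drives in one volume group, 10GB logical volume requested
print(lv_fits(vg_disks=4, disk_gb=4, lv_gb=10))          # True (16GB available)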
<urn:uuid:d6954262-d6df-4285-8ed0-e5fa33041ab1>
CC-MAIN-2017-04
http://www.aiotestking.com/linux/category/exam-117-201-lpi-level-2-exam-201-advanced-level-linux-certification-part-1-of-2-update-april-9th-2016/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00302-ip-10-171-10-70.ec2.internal.warc.gz
en
0.881614
203
2.515625
3
A group of researchers from Indiana University and Microsoft Research have recently published a paper detailing the risk of cross-origin attacks on two of the most popular mobile operating systems today – iOS and Android – and have introduced an origin-based protection mechanism of their own design. Unlike modern browsers, which enforce the same-origin policy that prevents the dynamic web content of one domain from directly accessing the resources of a different domain, today's mobile OSes do not have origin-based security policies that would control the cross-origin communications between apps, and between an app and the web, the researchers note. "[Cross-origin] attacks are unique to mobile platforms, and their consequences are serious: for example, using carefully designed techniques for mobile cross-site scripting and request forgery, an unauthorized party can obtain a mobile user's Facebook/Dropbox authentication credentials and record her text input," they point out. "Mobile apps essentially play the same role as traditional web browsers at the client side. However, different from conventional web applications, which enjoy browse-level protection for their sensitive data and critical resources (e.g., cookies), apps are hosted directly on mobile operating systems (e.g., Android, iOS), whose security mechanisms (such as Android's permission and sandbox model) are mainly designed to safeguard those devices' local resources (GPS locations, phone contacts, etc.)," the researchers explained. "This naturally calls into question whether the apps' web resources are also sufficiently protected under those OSes." During their research, they came across five separate cross-origin issues in popular SDKs (software development kits) and high-profile apps such as Facebook and Dropbox – and they discovered that they can be easily exploited to steal users' authentication credentials and other confidential information. They also concluded that fixing cross-origin flaws would be difficult for app developers, and that origin-based protection must be supported by the OS. In order to prove their point, they designed a protection mechanism they dubbed "Morbs", which "labels every message with its origin information, lets developers easily specify security policies, and enforce the policies on the mobile channels based on origins."
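The quoted description suggests a simple mental model: every inter-app or app-to-web message carries an origin label, and a policy layer checks that label before delivery. The sketch below is not the authors' Morbs implementation – it is only a hypothetical illustration of the idea, with invented origins and policies.

```python
from dataclasses import dataclass

@dataclass
class Message:
    origin: str       # who sent it, e.g. "https://api.example.com" or "app://com.example.mail"
    destination: str  # which app or channel should receive it
    payload: bytes

# Hypothetical per-destination allow-lists a developer might declare.
POLICIES = {
    "app://com.example.bank": {"https://bank.example.com"},
    "app://com.example.mail": {"https://mail.example.com", "app://com.example.contacts"},
}

def deliver(msg: Message) -> bool:
    """Deliver the message only if its labeled origin is allowed for the destination."""
    allowed = POLICIES.get(msg.destination, set())
    if msg.origin not in allowed:
        print(f"blocked: {msg.origin} -> {msg.destination}")
        return False
    print(f"delivered: {msg.origin} -> {msg.destination}")
    return True

# A cross-origin request from an untrusted page is rejected before it reaches the app.
deliver(Message("https://evil.example.org", "app://com.example.bank", b"transfer"))
```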
<urn:uuid:71b84bfa-8a82-4fb3-93ea-8636c89bca5f>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2013/08/29/new-protection-mechanism-prevents-mobile-cross-app-content-stealing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00420-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938903
458
3.28125
3
Because grid computing puts a layer of virtualization, or abstraction, between applications and the operating systems (OS) those applications run on, it can be used to tie together all a corporation's CPUs and use them for compute-intensive application runs without the need for stacks and stacks of new hardware. And because the grid simply looks for CPU cycles that are made available to the grid through open grid services architecture (OGSA) APIs, applications simply interact with the CPU via the grid's abstraction layer regardless of OS, said Tom Hawk, IBM's general manager of Grid Computing. In this way, Windows applications can run on Unix and Unix applications can run on Windows and so on. Basically, grid can be thought of as similar to the load balancing of a single server but extended to all the computers in the enterprise. Everything from the lowliest PC to the corporate mainframe can be tied together in a virtualized environment that allows applications to run on disparate operating systems, said Hawk. "The way I like to think about it really simply is the internet and TCP/IP allow computers to communicate with each other over disparate networks," he said. "Grid computing allows those computers to work together on a common problem using a common open standards API." Some companies in the insurance industry, for example, are utilizing grid to cut the run-time of actuarial programs from hours to minutes, allowing this group to use risk analysis and exposure information many times a day versus just once. In one example, IBM was able to cut a 22-hour run-time down to just 20 minutes by grid enabling the application, said Hawk. But any large, compute-intensive application, such as those used in aerospace or the auto industry to model events or in the life sciences industry, can be (and often is) grid-enabled to take advantage of a company's unused CPU cycles, said Ed Ryan, vice president of products for perhaps the oldest commercial grid company, Platform Computing. By doing so, a company can reduce its hardware expenditures while raising productivity levels through the faster analysis and retrieval of critical information. By utilizing the compute resources of the entire enterprise, CPU downtime is put to productive work running programs that once had to wait until nightfall before enough CPU time was available. Servers, which typically have a very low CPU utilization rate, can be harnessed to run more applications more frequently and faster. But this can get addictive, said Ryan. "Our biggest customers go into this to drive up their asset utilization and what ends up happening is their end-user customers get hooked on having more compute power to solve their problems," he said. What this means to the average CIO, who typically has stacks of hardware requests waiting for attention in the inbox, is they can provide this power while throwing most of the new hardware requests into the circular file. Even data retrieval and integration is being targeted by at least one firm for grid enablement. Avaki is taking grid to a new level by using it as an enterprise information integration (EII) engine that can either work with or bypass altogether current EII efforts, said Craig Muzilla, vice president of Strategic Marketing for Avaki. In fact, Avaki's founder is confident grid will become so pervasive in the coming years it will be commoditized as just a standard part of any operating system. That is why Dr. Andrew Grimshaw founded Avaki as an EII vendor.
"For the CPU cycles it's maybe a little bit more straightforward," said Muzilla. "Instead of having to go buy more servers to speed things up or do analysis faster, to run the application faster I can go harvest the untapped CPU cycles. We think eventually that kind of compute grid technology will be embedded in the operating system so we don't think long-term it's that attractive for ISVs." Grid also plays right into the hands of companies looking to implement on-demand, utility or service-orientated architectures (SOA) since it enables the integration of disparate, heterogeneous compute resources by its very nature. Therefore, on-demand environments can piggy-back on the grid to achieve the integration and productivity promises of those methodologies, said IBM's Hawk. "Right now, I'd say the No. 1 reason customers are deploying this technology is to gain resolution or to fix specific business problems they're having around either computing throughput or customer service," he said. "The real cool thing here, long-term, is about integration and about collaboration and that's why I keep harping on this concept of productivity."
<urn:uuid:2d2fe269-20f7-4ec2-a82d-58e003305b37>
CC-MAIN-2017-04
http://www.cioupdate.com/trends/article.php/3286361/Making-the-Case-for-Grid.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00448-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949472
930
3.015625
3
Liu J.,Chongqing Center for Environmental Monitoring | Liu J.,Chongqing Key Laboratory Of Urban Atmosph Environmental For Integrated Observ And Pollution Prevention And Control | Jiang C.,Chongqing Center for Environmental Monitoring | Jiang C.,Chongqing Key Laboratory Of Urban Atmosph Environmental For Integrated Observ And Pollution Prevention And Control | And 4 more authors. Chongqing Daxue Xuebao/Journal of Chongqing University | Year: 2014 Based on observation data for ozone, its precursor compounds and meteorological factors in the near-surface layer of the Chongqing urban area, this paper focuses on analyzing the temporal and spatial distribution characteristics of ozone in summer. The correlations between ozone and its precursor compounds, such as NO, NO2 and CO, and related meteorological factors are also analyzed. Results show that the concentration of ozone in the urban area is lower than that in the surrounding regions. Only one peak in ozone concentration appeared, at about 4 p.m. Ozone concentration has an obvious negative correlation with its precursor compounds and an obvious positive correlation with meteorological factors such as solar radiation and temperature. High ozone concentrations resulted from high-pressure weather conditions, especially high solar radiation, light breeze, low relative humidity and high temperature. Elevated ozone concentrations are also closely related to the amplitude of decreases in atmospheric pressure; higher ozone concentrations appear when the decrease amplitude is 0.4 kPa. © 2014, Editorial Board of Journal of Chongqing University, Chongqing University Journals Department. All rights reserved.
<urn:uuid:10b481e1-09b2-4ce1-b8a5-f0dcb647819d>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/chongqing-key-laboratory-of-urban-atmosph-environmental-for-integrated-observ-and-pollution-prevention-and-control-821658/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00448-ip-10-171-10-70.ec2.internal.warc.gz
en
0.868677
332
2.625
3
It doesn't matter if you're using iOS, Android, Windows Phone or BlackBerry: these tips apply to every mobile device that connects to the Internet. "How susceptible a device is to malware depends on its operating system and how users interact with their device. There are far less threats targeting iOS than there are targeting Android, but a few iOS threats do exist. Scam sites that are designed to steal your personal information, for example, exist on both platforms. Hacks often occur on the server side and users who don't have a security application installed on their device will not be aware that their information could be misused," Filip Chytry, Mobile Malware Analyst at AVAST, told us. Here are some essential tips that will increase the security of your mobile device:
1. Lock your device. It may sound like a no-brainer to some, but a great deal of users leave their devices permanently unlocked for the sake of convenience. It doesn't matter if your phone was stolen or you simply misplaced it, an unlocked device is easier to misuse. If you have the remote wipe function enabled as well, a locked device can buy enough time for you to remove private data from prying eyes.
2. Install software and operating system updates. If your device can be updated, always keep the software up to date to eliminate a variety of vulnerabilities. Users with older devices that cannot be updated should be especially cautious when downloading apps. Always download apps from trusted developers and markets like Google Play. Also, check the app's permissions to make sure it's only granted access to functions it truly requires.
3. Install anti-malware software. AVAST's mobile malware team collects 2,500 new samples of mobile malware every day. It's therefore essential to protect yourself against mobile malware such as ransomware and spyware, which are both on the rise. If you're running Android, try a solution like the free avast! Mobile Security, which protects against malware, privacy intrusion, theft and data loss.
4. Secure your Internet connection. Most of us are used to being connected all the time and don't think twice about using an open wireless connection for browsing the Internet, using social networks, and even Internet banking. The overwhelming majority of open Wi-Fi access points are not secure and you should tread carefully while using them. One way to make sure your connection remains private is to use a VPN solution like avast! SecureLine.
5. Back up your data. Your device can malfunction, it can get stolen, you can lose it. By frequently backing up your data you also don't have to worry about accidentally deleting files. It's easier to replace a device than to replace the data, so remember to save contacts, text messages, emails, call logs and photos.
<urn:uuid:6b67a5fd-6d30-4d12-aa47-d0965e1b66c6>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2014/06/30/5-essential-mobile-security-tips/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00136-ip-10-171-10-70.ec2.internal.warc.gz
en
0.917472
590
2.5625
3
The wireless world is evolving rapidly in response to the explosion of intelligent devices, applications and data, and the IEEE 802.11ad standard, commonly known as WiGig, is poised to help. WiGig is a step change from the 802.11 evolution we have witnessed over the last 10 years, adding a new frequency band, 60 GHz, to existing products: 802.11n operates in the 2.4-GHz and 5-GHz bands, while newer 802.11ac products operate in the 5-GHz band. By enabling the emergence of tri-band products -- such as 802.11n/ac + 802.11ad -- WiGig opens the door to exciting applications and makes it more possible for all intelligent devices to be connected through a high speed Wi-Fi network. But, we get ahead of ourselves. First, let's see what WiGig is all about.
* Speed. The WiGig spec is defined to support speeds up to 7Gbps, but this is really just a start. By using some of the basic techniques we see today in .11n and .11ac, up to 100Gbps is achievable within the next few years. How?
- While .11ac can go up to 8x8 MIMO, 256 QAM modulation and channel bond four 40 MHz channels, .11ad is able to achieve the same speeds with one spatial stream, 64 QAM modulation and a single channel.
- Using all of the techniques that 802.11 has used in the last decade to achieve higher performance in the legacy bands (channel bonding, MIMO and higher modulation), the Wi-Fi industry can easily achieve much faster speeds.
- Using an industry analogy, 60 GHz technology today is in the ".11b phase" of the technology evolution. We are just at the beginning of an exciting technical road map progression that will take WiGig to new heights using proven techniques, as well as new ones yet to be discovered.
* High capacity. 16-32 antenna element arrays generate real spatial separation and make high capacity wireless deployments a reality.
- Approximately the same area as a single 2.4 GHz antenna module can contain 32 or more 60GHz antennas.
- Beamforming with such a large antenna array allows for highly directive communication, letting multiple devices work side by side in the same room with minimal effect on devices around them.
- Interference-free transmissions allow capacity to be additive. In most omni-directional radio technologies, the overall bandwidth is divided by the number of users, which is dilutive rather than additive.
* Most power efficient Wi-Fi technology yet. Running multiple gigabits per second of real throughput at only hundreds of milliwatts total system power is achievable with 802.11ad, making it far and away the most power efficient technology in the Wi-Fi portfolio.
- Power splits differently between 60GHz radios and traditional Wi-Fi products in legacy bands.
- Receiver data processing is the biggest energy consumer, rather than transmit power amplifiers.
- A full system, generating a 20+ dBm effective isotropic radiated power (EIRP) and sustaining a 4Gbps link, can be built at 0.5 W (1/2 Watt), aligning well with phone/tablet integration.
* Great free space rate/range. Path loss offset by antenna gain and a wide channel enables high speed, even in low signal-to-noise ratio (SNR) situations. 2Gbps at 100 feet is "easy."
- One of the biggest misconceptions of 60GHz is that it is short range.
- 60 GHz does not have a range problem. In fact, in free space line-of-sight, 60GHz has the best rate/range profile of any Wi-Fi technology.
- But 60GHz does have a blockage problem. It doesn't go through most walls or through people. Rather, it reflects.
Therefore, while 60GHz has the best rate/range, it often must use this range to find a reflective path in-room to get to its target.
* Coverage. When discussing WiGig performance, we need some new terminology. Standard rate/range graphs have become familiar but do not apply here.
- One can be 100 feet away and get 2Gbps or, due to blockage and required reflections, one can be 10 feet away but only get 1Gbps.
- For WiGig, we talk about coverage: in a typical room, what percent of the time can I achieve a specific rate? For example, in a conference room, 65% of the locations can achieve 4Gbps, 80% can achieve 2Gbps, and 95% can achieve 1Gbps.
* Very low latency. Around ~10 microseconds (us) round trip is real, comparable to wire latencies.
- WiGig was designed from the ground up to be extremely low latency -- ~10 us round trip, comparable to wired-equivalent latencies. This is important, because now the latencies are close enough that you can trick the system into thinking it is running over a wire. And if the system thinks there is a wire, you can reuse all of the software that has been developed for that environment. For example, WiGig Wireless Bus Extension (WBE) is able to run the PCIe bus over the air, and therefore seamlessly reuse the last decade's worth of host controllers and device drivers that have already been developed.
In summary, WiGig offers unrivaled raw speed, interference resistance, good range, high capacity networking, multi-gigabit real throughput in a handheld power envelope, and near-wire equivalent latency. Given these benefits, WiGig is well-suited for a broad range of applications, from tri-band networking (2.4/5/60 GHz) to wireless storage and edge caching to wireless docking. Products integrating WiGig are currently available in the market, including multiple Ultrabook SKUs and a wireless docking station, and more products are on the horizon. Wilocity is the leading developer of 60 GHz multi-gigabit wireless chipsets, allowing revolutionary performance and capacity by enabling a tenfold increase compared to legacy Wi-Fi. The company is committed to making wireless the ubiquitous connectivity solution for mobile platforms everywhere where wireless is simply faster and easier than wires -- in the office, at home, and on the road. Wilocity has partnered with industry leaders to provide greater wireless performance in short-range wireless networking, docking, point-to-point outdoor links, and more. This story, "Understanding Where 802.11ad WiGig Fits Into the Gigabit Wi-Fi Picture" was originally published by Network World.
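The free-space rate/range claim is easy to sanity-check with a back-of-the-envelope link budget. The sketch below (rough illustrative arithmetic, not vendor data) computes free-space path loss at 60 GHz versus 5 GHz over 100 feet and shows how the roughly 21 dB penalty at 60 GHz can be offset by beamforming gain; the 32-element array figure is an idealized assumption that ignores element gain and implementation losses.

```python
import math

C = 3.0e8  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

d = 100 * 0.3048  # 100 feet in meters

loss_5ghz = fspl_db(d, 5e9)
loss_60ghz = fspl_db(d, 60e9)
array_gain_db = 10 * math.log10(32)  # idealized gain of a 32-element array

print(f"FSPL at  5 GHz over 100 ft: {loss_5ghz:.1f} dB")
print(f"FSPL at 60 GHz over 100 ft: {loss_60ghz:.1f} dB")
print(f"Extra loss at 60 GHz:       {loss_60ghz - loss_5ghz:.1f} dB")
print(f"Idealized 32-element array gain: {array_gain_db:.1f} dB")
```

With beamforming gain at one or both ends, most of that extra path loss disappears, which is consistent with the article's point that 60 GHz's real limitation is blockage, not free-space range.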
<urn:uuid:222ec586-6677-4c56-a94b-c1bb66df8ded>
CC-MAIN-2017-04
http://www.cio.com/article/2380475/wireless-networking/understanding-where-802-11ad-wigig-fits-into-the-gigabit-wi-fi-picture.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00530-ip-10-171-10-70.ec2.internal.warc.gz
en
0.919003
1,411
2.765625
3
Researchers are looking to build self-configuring network technology that would identify traffic, let the network infrastructure prioritize it down to the end user, reallocate bandwidth between users or classes of users, and automatically make quality of service decisions. The system will have a minimum of 32 levels of prioritization. These prioritization levels will be configurable and changeable at the system level in an authenticated manner. Data with a higher priority will be handled more expeditiously than traffic with a lower priority. If that sounds like a major undertaking, it is, but consider who wants to develop such a beast: the Defense Advanced Research Projects Agency (DARPA). This advanced prioritization system is part of DARPA's Military Networking Protocol (MNP) program, which is looking to develop an authenticated and attributable identification system for packet-based military and government data networks, the agency said. Military or government data sent with the MNP will be compatible with normal Internet equipment to allow MNP traffic to pass through legacy network or encryption equipment, DARPA said. Not only should the prioritization scheme be radically advanced, but the system should also be extremely difficult to spoof or inject false traffic into, DARPA said. At the heart of the system, though, is the priority level setting. Some meatier MNP description from DARPA goes like this: The MNP system will be able to change the priorities within the system in a trusted and authenticated manner by network administrators and/or unit commanders. For operational reasons, it is highly desirable that these changes may be made from more than one location within a single administered network or network domain. It is desirable that these changes be made while interacting with a Network Controller and not directly from a user level device. There may be times when a Network Controller's network configuration is missing or incorrect. In this case, the router, or Network Controller as DARPA calls it, will seek and discover other Network Controllers, exchange authentication tokens, and retrieve and load an appropriate network level configuration. It is desirable that a centralized network level configuration repository not be used for operational purposes. Multiple MNP domains will eventually be linked together. MNP performers must develop technology to have these different MNP domains interact with each other, exchange configuration and prioritization data, and correct and alert network administrators to problems with the joined MNP domains. Connection mistrust is a network or domain administrator tunable parameter. Vendors must provide protocol implementations that replace or modify both the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) for the user level devices and the Network Controllers. There may be times when it is not desirable to alter either the operating systems or other software/hardware of user level devices or servers. This prioritization technology is one of many leading edge network systems DARPA has outlined. For example, in August the agency gave BBN Technologies $4.4 million to develop the advanced network monitoring technology it is building for the military. The high-tech firm is set to develop novel, scalable attack detection algorithms; a flexible and expandable architecture for implementing and deploying the algorithms; and an execution environment for traffic inspection and algorithm execution.
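As a rough illustration of what a 32-level prioritization scheme means in practice (this is not DARPA's design, just a generic strict-priority queue sketch), packets tagged with a lower priority number are dequeued ahead of higher-numbered ones, with arrival order breaking ties:

```python
import heapq
import itertools

NUM_LEVELS = 32  # priority 0 is most urgent, 31 least

class PriorityScheduler:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # preserves FIFO order within a level

    def enqueue(self, packet: bytes, priority: int) -> None:
        if not 0 <= priority < NUM_LEVELS:
            raise ValueError("priority must be in [0, 31]")
        heapq.heappush(self._heap, (priority, next(self._counter), packet))

    def dequeue(self) -> bytes:
        priority, _, packet = heapq.heappop(self._heap)
        return packet

sched = PriorityScheduler()
sched.enqueue(b"routine telemetry", priority=20)
sched.enqueue(b"commander's order", priority=1)
print(sched.dequeue())  # b"commander's order" leaves the queue first
```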
<urn:uuid:d4223a49-c908-4982-bedb-c4960316ecaf>
CC-MAIN-2017-04
http://www.networkworld.com/article/2233630/security/researchers-seek-advanced-network-prioritization--security-technology.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00164-ip-10-171-10-70.ec2.internal.warc.gz
en
0.89875
660
2.640625
3
New regulations could change data center efficiency efforts Monday, Aug 26th 2013
Data centers consume a considerable amount of energy and may now be affected by new laws to reduce greenhouse gas emissions, including those generated from electric plants. Legislation from California, along with upcoming federal initiatives, could change the way that data centers strategize their energy use. In California, data centers have some of the highest electricity consumption rates in the state, driving the motivation behind the new law to improve energy-efficiency efforts. Under the new law, the state is required to reduce emissions to 1990 levels by 2020 in increments of 2 percent to 3 percent a year, according to Network World. Allowances will be given for every metric ton of gases emitted, and the permits can be bought and sold to accommodate plants that have difficulty cutting back. How will data centers be affected? The trickle-down effect for data centers may show up as more energy-efficient hardware, virtualization and new cooling strategies. Policies set by California and the Obama administration's national climate action plan would raise operating expenses, although efficiency investments could offset energy cost increases, according to The Data Center Journal. The new regulations could also place more stress on businesses and consumers with higher compliance costs. Although alternative energy could generally help the electric grid, certain forms are intermittent and could cause harm to the data center if there are significant variations. With the added rules, it could be more difficult to manage a data center while remaining compliant. Here are a few ways to help the equipment be in the proper conditions while meeting the new standards set by the laws:
- Environmental control systems. Now that the plan has been put into place, environmental control systems may be the answer to regulating the center. Not only could they ensure that the hardware is in the proper conditions, but they could also help organizations comply with the new regulations. The new recommended temperature has increased to 80 degrees Fahrenheit; however, the servers must still be in a cool enough environment to function. Having it be too warm could influence the humidity level and short the system or cause a fire.
- Virtualize servers. Another way to cool down the space is by virtualizing servers. This would take up less space with fewer physical servers and increase utilization to 40 or 50 percent, according to Network World. Having too many servers with low utilization can waste a lot of energy, but by putting multiple virtual servers in a physical one, it will create more efficient power use.
- Proper cooling strategies. Putting a lot of servers together in one space can cause significant cooling increases; however, using proper techniques will help keep the data center at the right temperature. Using outside ventilation to circulate the air can help take out the heat from the room and resupply the servers with optimal humidity and temperature conditions. Temperature monitoring will also give accurate measures for better decision making and increased efficiency.
With new laws coming to the forefront of energy-efficiency initiatives, their emergence will affect data centers and the way they are run. Making strategic utilization of environmental controls, virtualization and cooling practices will help organizations comply with the new rules without compromising how they run the data center.
<urn:uuid:157136ce-53ad-4534-97af-b347cfd6d694>
CC-MAIN-2017-04
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/new-regulations-could-change-data-center-efficiency-efforts-496701
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00310-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937519
632
2.875
3
Best Computing Practices 101: Improve your password security Is your most terrifying dark secret that you’ve used the same password on every website you’ve visited for the past 10 years? Are you worried about the security of your financial, e-mail and social media accounts? Hackers have tools at their disposal that allow them to gain access to your password, but they all require that you first make a mistake. By following some simple password security practices, you can protect the security of your accounts from prying eyes. How Do Hackers Steal Passwords? If you’ve watched too many Hollywood movies, then you might have a mental image of a hacker sitting behind a computer screen running software that automatically guesses each character in your password until it locks in on the correct value. Fortunately, this isn’t really possible because almost any website will lock out your account after repeated login failures. How, then, do hackers gain access to the accounts of unsuspecting users? One of the easiest ways for hackers to learn your password is to simply ask you for it. Don’t think that you’d fall victim to this type of attack? The data says otherwise. In Microsoft’s most recent Computer Safety Index, released in 2014, the company estimated that phishing attacks affected 15 percent of adults last year. Phishing attacks have become increasingly sophisticated over the past five years. Emails originating from Eastern Europe written in broken English are a relic of the past. Today’s phishing attack is well written, uses logos and brand markings of legitimate companies and redirects users to a carefully crafted decoy website that appears legitimate. Once users enter their password, hackers use it immediately on the legitimate site, sell it on the black market or store it away for future use. Another common way that hackers gain access to passwords is to eavesdrop on your network communications. If you use a public wi-fi network without using encryption technology, then your password is open to interception. Think carefully the next time you access an account from an airport, coffee shop or other public location. If you’re not connecting to the website via an HTTPS connection, someone across the room could be eavesdropping and stealing your password. Hackers don’t need to trick you to obtain your password — they might be able to obtain it directly from the website where you created an account. Websites need to store your password in a database so that they can verify your login attempts. If a hacker gains access to the website’s database, they can steal thousands or millions of passwords at a time. This is the type of attack that results in headlines like “Millions of passwords compromised.” To protect against this type of attack, well-designed websites don’t directly store passwords in their databases. Instead, they store a copy of the password that is irreversibly encrypted using a technology known as hashing. When you attempt to log in to the website, the site hashes the password you provided and compares it to the hashed value in the database. Using this technology slows hackers down, but they still may be able to determine your password by hashing millions of possible passwords and comparing those values to the hashed value. Protecting Your Passwords The situation may sound bleak. Indeed, hackers have many tools and techniques at their disposal that help them gain access to your secret passwords. 
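To make the hashing discussion concrete, here is a minimal sketch of how a site might store and verify a salted, deliberately slow password hash using only the Python standard library. The iteration count and salt size are illustrative choices, not values taken from the article; slow, salted hashing is exactly what makes the offline guessing described above expensive for an attacker.

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow, to make brute-force guessing costly

def hash_password(password, salt=None):
    """Return (salt, derived_key) for storage; the plaintext password is never stored."""
    if salt is None:
        salt = os.urandom(16)  # a fresh random salt defeats precomputed hash tables
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password, salt, stored_key):
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_key)  # constant-time comparison

salt, stored = hash_password("1+rILCitt!")
print(verify_password("password12", salt, stored))  # False
print(verify_password("1+rILCitt!", salt, stored))  # True
```

Even with protections like these on the server side, a weak or reused password is still easy to guess, which is where the advice that follows comes in.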
Fortunately, there are steps that you can take to protect your accounts from unauthorized access. First and foremost, you should always use strong passwords. Choose a password that is at least eight characters long and consists of a mixture of letters, numbers and symbols. Avoid using dictionary words, names, telephone numbers or other values that might be easily guessed. All of these measures dramatically increase the number of guesses that an attacker will need to make before successfully stumbling upon your password. It's time to throw out all of those "password12" passwords that you created years ago and replace them with something more like "1+rILCitt!". Think that's hard to remember? Use a mnemonic device like "One positive reason I like cheese is the taste!" Next, you need to use a different password on every website you visit. Burdensome as it is, there's really no way around this requirement. The simple truth is that hackers know people reuse passwords all over the place. When hackers steal a password database from a low-security website, the first thing they do is try those username and password combinations on high-value sites. If the same password protects your social media accounts and your online banking accounts, you're a victim in the making. Protecting your passwords also means protecting yourself from phishing messages. If you receive a strange email from an organization you do business with, think twice before clicking the link it contains. If you suspect that it's fraudulent, visit the company's website by typing the URL directly into your web browser. You can also pick up the phone and give them a call to verify a suspicious message. The ultimate way to protect the security of your accounts is to supplement your passwords with other authentication mechanisms. Many websites, including banks, Twitter, LinkedIn and Google, now offer two-factor authentication technology as a free optional service. When you enable two-factor authentication, the website prompts you for your username and password in the normal fashion. After you successfully provide your password, the website sends a code to your phone that you must type into the website before gaining access. If you enable this feature, hackers will not be able to gain access to your account unless they have knowledge of your password and physical possession of your phone. Can't Remember All Those Passwords? We've known for years that using unique, complex passwords greatly enhances Internet security. So why don't people use them? The most common complaint is that it's simply too hard to remember a long list of complex passwords. Fortunately, technology can come to the rescue here as well. Password managers centralize the management of your passwords across many different accounts. They store the passwords in an encrypted database protected by a master password. The password manager automatically generates complex passwords for each website that you access and then fills those passwords in for you, saving you from the burden of remembering many different passwords. If you'd like to give this technology a try, some of the products that you may wish to evaluate include LastPass, OnePass and KeePass. If you decide to use a password manager, remember that the password used to protect access to that account truly provides the "keys to the kingdom." If someone learns that password, they will then not only have access to all of your passwords, they will also have a laundry list of all of the sites where they may be used!
For this reason, you should always use a very strong master password that is completely unrelated to any other password that you’ve ever used. Commit it to memory and guard it carefully! Better yet, consider using two-factor authentication to protect your password manager account. Hackers do have a wide variety of tools that they can use when attempting to gain access to your passwords. By following the advice in this article, you can reduce the risk that you’ll fall victim to their attacks and protect the security of your online accounts.
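As a practical companion to the strong-password advice above, here is a small sketch of how a password manager (or a user) might generate random, complex passwords with Python's standard `secrets` module; the length and character set are arbitrary choices for illustration, not a recommendation from the article.

```python
import secrets
import string

SYMBOLS = "!@#$%^&*()-_=+"

def generate_password(length: int = 16) -> str:
    """Generate a random password mixing letters, digits and symbols."""
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep only candidates that mix character classes, per the advice above.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in SYMBOLS for c in candidate)):
            return candidate

print(generate_password())  # different every run, e.g. 'q7W%tZr)2mK+fa9X'
```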
<urn:uuid:fabc4482-3519-40af-8e28-ff82c11e074c>
CC-MAIN-2017-04
http://certmag.com/improve-your-password-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00154-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929437
1,517
3.296875
3
Docker is an open source framework that provides a lighter-weight type of virtualization, using Linux containers rather than virtual machines. Built on traditional Linux distributions such as Red Hat Enterprise Linux and Ubuntu, Docker lets you package applications and services as images that run in their own portable containers and can move between physical, virtual, and cloud foundations without requiring any modification. If you build a Docker image on an Ubuntu laptop or physical server, you can run it on any compatible Linux, anywhere. In this way, Docker allows for a very high degree of application portability and agility, and it lends itself to highly scalable applications. However, the nature of Docker also leans toward running a single service or application per container, rather than a collection of processes, such as a LAMP stack. That is possible, but we will detail here the most common use, which is for a single process or service.
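To illustrate the single-service-per-container pattern described above, here is a minimal sketch using the Docker SDK for Python (the `docker` package). It assumes a local Docker daemon is running; the image name is just an example.

```python
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run one short-lived service per container; the image is pulled if missing.
output = client.containers.run(
    "ubuntu:latest",                     # example image -- substitute your own
    "echo hello from a container",
    remove=True,                         # clean up the container when it exits
)
print(output.decode().strip())
```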
<urn:uuid:3ae51c79-b6dc-481f-b33b-ffd375c26a3f>
CC-MAIN-2017-04
http://www.computerworld.com/article/2491451/app-development/how-to--get-started-with-docker.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00154-ip-10-171-10-70.ec2.internal.warc.gz
en
0.911168
244
2.734375
3
Big changes are coming to data protection laws in the European Union. Are you ready? On Saturday, the U.K. will begin to enforce the amended Directive on Privacy and Electronic Communications--better known as the E-Privacy Directive-which it passed last year. Meanwhile, all 27 member nations of the economic and political confederation are debating much broader draft legislation, introduced by the European Commission (E.C.) in January, which would reform and harmonize data protection laws across the E.U. The E-Privacy Directive, which the U.K.'s Information Commissioner will begin to enforce on May 26, requires consent for all non-essential tracking of individuals as they traverse the Web, whether that tracking involves tags, cookies or other tracking technology. In other words, Websites must inform consumers in detail about any tracking that takes place on the site and obtain consent before proceeding with the tracking. Updating the Data Protection Directive Like many other European data protection laws, the U.K.'s implementation of the E-Privacy Directive is an outgrowth of the Data Protection Directive, adopted by the E.C. in 1995 and intended to apply a set of common rules and safeguards for personal data throughout the member countries of the E.U. But as a 'directive' rather than a 'regulation,' it was up to the individual member countries to implement specific laws. In the 17 years since the E.C. adopted the Directive, E.U. member states have adopted a patchwork quilt of data protection laws that vary in both language and the penalties for non-compliance. For example, when it comes to the E-Privacy Directive, some of the member countries adopted opt-in laws, others adopted opt-out laws and still others have considered annual consent procedures. In effect, organizations operating in Europe have had to deal with a dizzying array of laws governing the holding and processing of personally identifiable information (PII). Additionally, the Data Protection Directive was drafted in the early days of the public Internet: Hotmail did not yet exist and the public had yet to know what the term "Google search" meant. The directive did not anticipate the changes to computing that would come from software-as-a-service (SaaS) and other forms of cloud computing. "Currently, we have 27 member states in Europe, and each one of those member states have taken it upon themselves to create their own version of the Data Protection Act" says Jason Currill, CEO of Ospero, a provider of global hosting, infrastructure and platform services. "Most of them are pre-cloud, based on the Data Protection Directive formulated in 1995. Everything has now changed. Geopolitical barriers have been smashed by the cloud. There are data privacy issues and data sovereignty issues that didn't exist back in 1995," Currill says. In January of this year, the E.C. published a first draft of a new legislative package intended to both harmonize the data protection laws across the E.U. member states and update them to address the new technological reality. "17 years ago, less than 1% of Europeans used the Internet," E.U. Justice Commissioner Viviane Reding, the E.C.'s vice president, said in a statement in January when the draft was released. "Today, vast amounts of personal data are transferred and exchanged across continents and around the globe in fractions of seconds. The protection of personal data is a fundamental right for all Europeans, but citizens do not always feel in full control of their personal data. 
My proposals will help build trust in online services because people will be better informed about their rights and in more control of their information. The reform will accomplish this while making life easier and less costly for businesses. A strong, clear and uniform legal framework at E.U. level will help to unleash the potential of the digital single market and foster economic growth, innovation and job creation." Scope of the Data Protection Legislation The new legislation is expected to have a substantial effect on all organizations that operate or focus on Europe, say Ulrich Bäumer and Stephanie Ostermann of the International Law Office, an online legal update service for companies and law firms worldwide. Bäumer and Ostermann say the new laws, as currently written, would increase the regulatory burden on organizations with European operations; increase the amount of time, money and personnel required to achieve compliance; and raise the stakes in terms of potential fines and brand damage arising from non-compliance. "The new law will apply to anyone processing data in the European Union, as well as those outside Europe which are offering goods or services to E.U. citizens," they wrote in a paper about the new regulation. "For a multinational organization, the location of its European headquarters will determine which E.U. member state's laws will apply and which regulatory authority will have jurisdiction. That said, individuals will be given a wider range of powers to bring personal action against an organization (either in the country where a non-compliant organization is located or in the individual's local courts). Trade associations will also be empowered to bring class actions on behalf of their members. For the first time, data processors will share equal responsibility and liability for compliance with the new laws, raising the stakes for IT service suppliers." Cloud Service Providers Would Feel the Impact One of the new provisions most likely to affect non-European businesses attempting to do business in the E.U., or European businesses seeking to use non-European cloud service providers, revolves around data transfer to non-E.U. countries. The extant data protection laws already prohibit data transfers to countries outside the E.U. that don't have data protection laws of the same strength as the E.U.'s laws -- the U.S., for instance -- unless specific compliance steps have been taken. "Prospective E.U. customers of SaaS services face significant legal hurdles if they wish to make use of third-party vendor software that runs through a Web browser and involves the hosting of the customer's data -- including personal data -- outside Europe," Graham Hann and Sally Annereau of Taylor Wessing wrote in a white paper commissioned by VMware and Ospero. They noted that the hurdles include security rules for diligence and oversight of outsourced processing, rules restricting exports of personal data outside of the E.U. and threats from overseas regulator 'long arm' requests for personal data. "Concerns about the difficulty in overcoming these hurdles, worries about compliance risks leading to regulator enforcement litigation and damage to reputation, coupled with uncertainty about the future shape of proposed E.U. law protecting personal data, have made E.U. business wary of switching to cloud-based SaaS solutions hosted outside of Europe," they say.
The proposed legislation would give organizations more options for dealing with this prohibition, specifically with regard to binding corporate rules (BCR), which govern multinational businesses. Ospero's Currill says that he's in favor of the new legislation because it will give companies one set of regulations they must adhere to rather than the many different laws currently in place. Ospero has, in fact, already positioned itself to prosper from the E.U.'s data transfer laws by taking a cue from the physical world's warehouse distribution model. "A lot of these issues kind of go away if you just embrace the local culture that you're trying to do business in," Currill says. "The pitch to a German, to a French person, to an Italian, they're all completely different. The simplest thing to do is to embrace the local jurisdiction and embrace the local customer." To do that, Ospero is marketing its data centers as "compliance hubs" that allow customers to operate in a country without the compliance issues involved in data transfer. Essentially, Currill says, customers host an image of their application in an Ospero data center in the country in which they wish to do business, while Ospero manages the data and the application without it ever leaving Europe. The new legislation would also put strict restrictions in place with regard to consent requirements. It would require that consent for the use of PII be obtained in advance on an opt-in basis before it could be used, and would require parental consent for individuals age 13 and younger. It also mandates data portability, giving individuals the right to demand that an organization transfer any information about them to a third-party organization in a format determined by the individuals. Under the new legislation, organizations would be required to prove they undertake regular data protection audits and privacy impact assessments. Additionally, all private sector companies with more than 250 employees, all private sector companies whose core activities involve regular monitoring of individuals and all public authorities would be required to formally appoint a data protection officer (DPO). "The data protection officer must be empowered by the organization to act as an independent assessor of its compliance with data protection laws and report to the board of directors in doing so," say Bäumer and Ostermann. "The E.U. regulation specifically requires the data protection officer to coordinate data protection by design and privacy impact assessment initiatives and to be responsible for data security initiatives generally. Responsibility for training staff is also mentioned as important. In short, the data protection officer must ensure that his or her organization has adopted good data governance policies and procedures." The new legislation would also obligate organizations to notify data protection authorities of data breaches within 24 hours of discovering a breach, or to explain to authorities why it is not possible to provide full details of the breach. To give teeth to the new legislation, the E.C. has proposed hefty fines for non-compliance. A provision would allow national supervisory authorities to send a warning letter for first offenses, but serious violations (like processing sensitive data without an individual's consent) would allow those supervisory authorities to impose penalties of up to €1 million or up to 2% of a company's global annual turnover.
Bäumer and Ostermann recommended a number of steps that organizations can take to prepare themselves for compliance with the new regulations. Implement Good Data Protection Governance Measures They recommend that organizations review their policies and procedures to ensure they reflect a serious focus on data protection issues. "An organization's policies and procedures are a key benchmark against which its compliance is judged by regulators," they say. "The thought that has been given to both indicates how seriously data privacy compliance is taken. Information provided in policies, whether staff or customer facing, and the practices which they encourage are also at the heart of achieving compliance with two frequently breached principles of data protection law, namely: data security obligations which require "appropriate technical and organizational measures" to be in place to prevent data loss and unauthorized access to data (in other words, companies need to be well organized when it comes to information security); and knowledge/consent obligations which require an organization to inform its staff, customers and suppliers what data it processes about them, and what it uses that data for (again, internal and externally facing policies provide a key mechanism for supplying that information)." Bäumer and Ostermann also recommend regular and well-thought-out training programs for staff that handle valuable data. In addition, they recommend organizations make a point of taking compliance seriously by running regular audits and privacy impact assessments before introducing any new significant data processing activities. With regard to data transfer compliance, Bäumer and Ostermann recommend adding an assessment of an organization's data transfer compliance to any compliance review of potential third-party partners. And because organizations are responsible and liable for the compliance acts and omissions of their suppliers, they recommend organizations adopt four mitigation measures, as follows: - Encryption. One of the first steps regulators often take following a data breach is to require the adoption of encryption technology. Organizations can sidestep the expense and difficulty of implementing encryption on short notice by implementing it now. - Service levels. The data protection laws require companies to have strong written service levels in place with suppliers that are given access to PII. Bäumer and Ostermann note that regulators will look poorly on companies that suffer a data breach if they do not have strong SLAs in place. - Data breach notifications. Some European countries already have data breach notification laws in place, and some sectors (like financial services and telecom) are also already broadly subject to such laws. But the new legislation would extend those requirements to all organizations in the E.U. Bäumer and Ostermann recommend company management determine whether their organization is ready to meet the new requirements. - Supplier due diligence. They note that in the event of a security incident, regulators will look closely at the pre-contract due diligence undertaken on the supplier. Regulators are likely to look more favorably upon organizations which undertake such due diligence. The new legislation would update the existing E-Privacy Directive to require that opt-in consent be obtained before implementing any device or Internet usage tracking technology. 
Bäumer and Ostermann say that the biggest challenge many businesses would face is how to explain and obtain consent for the usage of such cookies or other tracking technologies without putting off visitors to their Websites. They recommend companies undertake an audit of their cookies and other tracking technologies to assess what they are used for and why. In addition, they suggest companies review their privacy policies with regard to tracking technologies and present notices to users. Thor Olavsrud covers IT Security, Open Source, Microsoft Tools and Servers for CIO.com. Follow Thor on Twitter @ThorOlavsrud. Follow everything from CIO.com on Twitter @CIOonline and on Facebook. Email Thor at email@example.com. This story, "Security: Prepared for the EU's new data protection regulation?" was originally published by CIO.
<urn:uuid:00a616b8-91d4-48e6-a0c7-6dc58a9d345b>
CC-MAIN-2017-04
http://www.itworld.com/article/2727017/security/security--prepared-for-the-eu-s-new-data-protection-regulation-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00395-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94678
2,833
2.65625
3
A couple of months ago, Lawrence Livermore National Laboratory's Sequoia supercomputer broke the million core barrier for a real-world application; now it's done it again. This time, not only has the Sequoia surpassed the 1.5 million mark, but researchers have successfully harnessed all 1,572,864 of the machine's cores for one impressive simulation.
[Figure: OSIRIS simulation on Sequoia of the interaction of a fast-ignition-scale laser with a dense DT plasma. The laser field is shown in green, the blue arrows illustrate the magnetic field lines at the plasma interface and the red/yellow spheres are the laser-accelerated electrons that will heat and ignite the fuel.]
Frederico Fiuza, a physicist and Lawrence Fellow at LLNL, used what are known as particle-in-cell (PIC) code simulations on Sequoia as part of a fusion research project. The simulations provide a detailed look at the interaction of powerful lasers with dense plasmas. According to LLNL: These simulations are allowing researchers, for the first time, to model the interaction of realistic fast-ignition-scale lasers with dense plasmas in three dimensions with sufficient speed to explore a large parameter space and optimize the design for ignition. Each simulation evolves the dynamics of more than 100 billion particles for more than 100,000 computational time steps. This is approximately an order of magnitude larger than the previous largest simulations of fast ignition. The code used in these simulations is OSIRIS, a PIC code that was developed over the last 10 years through a collaboration between the University of California, Los Angeles and Portugal's Instituto Superior Técnico. Extending the code to all 1.6 million cores of Sequoia was an exercise in extreme scaling and parallel performance. There are two ways to implement scaling. Increasing the number of cores for a problem of fixed size is called "strong scaling." Using this approach, OSIRIS obtained 75 percent efficiency on the full machine. The second method, called "weak scaling," involves increasing the total problem size in proportion to the number of cores – this approach led to 97 percent efficiency. To illustrate the principle in real-world terms, Fiuza states that "a simulation that would take an entire year to perform on a medium-size cluster of 4,000 cores can be performed in a single day [on Sequoia]….Alternatively, problems 400 times greater in size can be simulated in the same amount of time." Thermonuclear fusion research, which essentially seeks to recreate the power of the sun in a laboratory setting, is considered the holy grail of green energy. It's a very difficult proposition but the payoff is huge. If such a feat is achievable, and scientists generally agree that it is, it will require these kinds of advances. As Fiuza reports: "The combination of this unique supercomputer and this highly efficient and scalable code is allowing for transformative research." Sequoia is a National Nuclear Security Administration (NNSA) machine, installed at Lawrence Livermore National Laboratory. With a top theoretical speed of 20.1 petaflops and a benchmarked (Linpack) performance of 16.32 petaflops, the IBM Sequoia supercomputer replaced the K computer as the world's fastest in June 2012. In November 2012, Sequoia dropped into second place, ousted from the top spot by the 17.6 petaflop (Linpack) Titan installed at Oak Ridge National Laboratory.
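The quoted year-to-a-day claim is easy to check with a quick back-of-the-envelope calculation. The sketch below is illustrative arithmetic only, not LLNL data; it compares the core counts and shows the parallel efficiency the claim implies.

```python
small_cores = 4_000
sequoia_cores = 1_572_864
runtime_small_days = 365.0   # "an entire year" on the small cluster
runtime_sequoia_days = 1.0   # "a single day" on Sequoia

speedup = runtime_small_days / runtime_sequoia_days   # 365x faster
core_ratio = sequoia_cores / small_cores               # ~393x more cores
efficiency = speedup / core_ratio                      # fraction of ideal scaling

print(f"Speedup:     {speedup:.0f}x")
print(f"Core ratio:  {core_ratio:.0f}x")
print(f"Implied scaling efficiency: {efficiency:.0%}")
```

The result, roughly 93 percent, sits between the 75 percent strong-scaling and 97 percent weak-scaling figures reported above, which is about what you would expect for a claim of this kind.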
<urn:uuid:801f29e8-6041-4026-ab77-c9fd6a457d05>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/03/20/sequoia_goes_core-azy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00211-ip-10-171-10-70.ec2.internal.warc.gz
en
0.90405
741
2.8125
3
Most people just want their smartphones to take great pictures of their dogs or their kids' soccer games. NASA is a bit more ambitious. NASA combined multiple photos from the orbiting smartphones, called PhoneSats, to create images of Earth as seen from space. "During their time in orbit, the three miniature satellites used their smartphone cameras to take pictures of Earth and transmitted these image-data packets to multiple ground stations. Every packet held a small piece of the big picture. As the data became available, the PhoneSat Team and multiple amateur radio operators around the world collaborated to piece together photographs from the tiny data packets." The three PhoneSats were launched into orbit on April 21 aboard an Antares rocket from NASA's Wallops Flight Facility in Virginia. The smartphones completed what NASA called a successful mission on April 27. The goal of NASA's mission was to see how capable these tiny nanosatellites are and whether they could one day serve as the brains of inexpensive, but powerful, satellites. The phones were encased in 4-inch metal cubes and hooked up to external lithium-ion battery packs and more powerful radios for sending messages from space. The devices went into an orbit about 150 miles above Earth and, after six days, fell back to Earth, burning up in the atmosphere. In addition to the photos, the three PhoneSats transmitted messages about their functions and condition. The transmissions were received at multiple ground stations, indicating that they were operating normally. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is firstname.lastname@example.org.
<urn:uuid:5b7d25c9-fe8a-447c-881e-1130505decc2>
CC-MAIN-2017-04
http://www.computerworld.com/article/2496931/smartphones/space-shots--android-phones-beam-back-earth-pix.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00119-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949157
362
3.546875
4
What is Application Switching?

What is it? A way of handling network traffic by identifying and analyzing packets of information before they reach the server. Also known as content switching, application switching can identify legitimate requests, deny attacks and, in some cases, rank traffic by priority. A financial transaction or important database query, for example, might be given priority over a page-surf.

How does it work? The International Organization for Standardization's (ISO) model divides networks into seven layers, each handling a specific aspect of network communications. Layer 1 is where physical operations take place. Layer 4 handles data connections and transfers between clients and servers; the protocols here examine the header messages of requests to learn how to steer the requests to the right server. Applications directly involving the user are executed at Layer 7. Switches act as guides for the different layers, which have evolved to reflect the increasingly sophisticated ways in which networks are being used. The typical Layer 7 application switch sits between the user's network and the server from which the user is requesting information. The switch reviews and analyzes incoming information packets in order to block false requests, guide legitimate requests to servers and prioritize traffic. These switches may also compress and encrypt data, thus ensuring security and privacy.

What's the business benefit? A network that's faster, more secure, and easier and cheaper to manage. Layer 7 switches can analyze more of the information that accompanies a packet of data, such as the type of device or application that sent the packet, than can switches based on lower layers. Because of this, these switches are able to follow more-complicated business rules. An application switch also conducts more checks on requests before they reach the server; it can catch a greater number of false requests and lighten the load the server bears. Those checks also look for signs of security breaches or attacks. With fewer requests to process and less likelihood of an attack, server administration becomes simpler. And, of course, users notice an improvement because application switches are better able to handle traffic surges and usage spikes than their predecessors.

Doesn't my network already do these things? Perhaps, but probably not well. The explosion in Web-based applications prompted many vendors and network managers to graft new functions onto the infrastructure at Layer 4 and lower. But those layers are ill-prepared to examine packet information deeply or to make complicated decisions. Accelerator cards and the like promise higher speeds but still fail to make use of all the information available at Layer 7. Earlier approaches also have trouble sorting out the flurry of requests in a traffic surge, and managing the add-ons makes administering the network more difficult.

Who's using it? High-traffic Web sites such as MSN, Yahoo and eBay, along with telecommunications firms such as NTT, are the first to put the switches to work. Established application-switch vendors, including Cisco, NetScaler, Redline, Nortel, Radware, and F5, are offering products ready to install from the low $20,000s to the upper $70,000s.
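To make the idea of content-based decisions concrete, here is a simplified sketch in Python of the kind of rule a Layer 7 switch applies. The pool names and rules are invented for illustration; real application switches evaluate such rules at wire speed in specialized hardware or firmware.

```python
def route_request(method: str, path: str, headers: dict) -> str:
    """Pick a back-end pool from application-level details of the request."""
    # Reject requests that use verbs the application never serves.
    if method not in {"GET", "POST", "HEAD"}:
        return "reject"
    # Give transactional traffic priority over ordinary page views.
    if path.startswith("/checkout") or path.startswith("/api/orders"):
        return "high-priority-pool"
    # Send image requests to servers tuned for static content.
    if headers.get("Accept", "").startswith("image/"):
        return "static-content-pool"
    return "default-pool"

print(route_request("POST", "/checkout/submit", {"Accept": "text/html"}))
# -> high-priority-pool
```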
<urn:uuid:9cd9bf01-8029-4ea8-b915-9de1792405fe>
CC-MAIN-2017-04
http://www.baselinemag.com/storage/Primer-Application-Switching
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00421-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929904
639
3.125
3
In today's connected world, authentication is ubiquitous. Whether it's a website, mobile app, laptop, car, hotel door lock, retail kiosk, ATM machine, or video game console, security is essential to all networked systems. Even individuals must use authentication through state-issued ID cards to validate their identities within the network of a city or state. Whether virtual or physical, the improper access obtained from failed authentication has tangible effects ranging from stolen identities and fraudulent transactions to intellectual property theft, data manipulation, network attacks, and state-sponsored espionage. These consequences have the potential to cost companies millions of dollars, ruin reputations of individuals, and disrupt business. Authentication in the Internet Age Let's be honest. Historical forms of authentication were never meant for the networked landscape we live in today. The first passwords were adequate authentication solutions only because the systems they secured were isolated. Unfortunately, the isolated systems that pervaded the early days of the computer revolution have set the foundation for authentication in the Internet Age. Within just a few years, the global computer market transitioned from a disconnected world of isolated computers to a fragmented world connected by the cloud. Not only are computers now interconnected; the devices themselves and the applications running on them are as mobile as the users who own them. No longer are applications restricted to specific machines or data centers; they can be distributed, dispersed, or local to mobile devices. The security of any individual system or user now affects the security of those systems networked to it. The Internet has been ingrained in global culture and commerce to such a drastic degree that every new day increases the risk and impact of improper authentication. And with the impending Internet of Everything — that is, the millions or billions of devices, sensors, and systems that will connect to the Internet — not only is the need for secure authentication exponentially rising, the landscape is also changing. Today, the tempo of security breaches directly related to stolen passwords and bypassed authentication is increasing along with the severity of their consequences. Further compounding these issues, past breaches are creating a snowball effect, resulting in subsequent attacks being easier, quicker, and more widespread than their predecessors. A new approach to authentication and authorization is required to face the new generation of modern security challenges. Houston…We Have a Password Problem I believe that passwords aren't simply used incorrectly today; they're fundamentally insecure and present problems for device authentication in the future. Traditionally, the primary form of user authentication in networked systems has been the username and password combination. More recently, the concept of strong authentication has become popular, whereby an additional factor of authentication is used on top of the password layer for added assurance. Unfortunately, neither passwords nor strong authentication built on top of passwords are bulletproof solutions for today's security challenges. As we begin to consider an Internet of Things (IoT) world of connected devices, it's easy to see how passwords are incompatible with the vast majority of smart objects that constitute the future of our networked world.
The in-band centralized nature of passwords requires that users input credentials into the requesting application. However, most devices, such as sensors, door locks, and wearables don’t have an attached keyboard for password input. This means that authentication must happen out of band. Instead of the user supplying a device with credentials, that device must obtain authorization externally in a decentralized manner. The Problem with Two-Factor Authentication Security experts have long recommended strong authentication to compensate for the weakness of passwords. While strong authentication is the correct approach to take, the traditional method, known colloquially as two-factor authentication, is inadequate. Let’s take a look at a few of the key issues: Shared secret architectures involve a token or one-time password (OTP) that is sent to a mobile phone or fob that the end user owns. This OTP is compared with a token generated by the application being secured. The symmetric key cryptography that this process relies on is an inferior security approach because if either the user’s device or the application is compromised, the shared secret can be obtained, thereby allowing an attacker to generate their own correct token. Additionally, since the user’s token must be transposed or delivered back to the application for comparison, there is a risk that the token can be intercepted by a hacker, malware, or observer in a man-in-the-middle (MITM) attack. Password Layer Remains Unresolved Traditional two-factor authentication retains the in-band password layer which means the core password problems remain unresolved. The application still holds on to the “bait” that hackers and malware are after, keeping the application layer in the crosshairs of any attack on the authentication layer. Poor User Experience Transposing tokens that quickly expire may be considered an annoying user experience that many users will opt to avoid in lieu of a smoother authentication flow. OTP flows that rely on SMS are unreliable and inconsistent. End users’ preference towards convenience over security means traditional two-factor authentication implementations like OTP may go unused. For organizations and applications, traditional two-factor authentication means sending their users outside of the branded experience that they control. Additionally, traditional two-factor authentication approaches involve sending the end user to a third-party application. Often, this involves a company or online service forwarding their users to mobile apps or hardware with unaffiliated branding and user experience. Such an approach is often unacceptable, especially for consumer-facing organizations. Use Cases Are Limited Authentication is integral to more use cases than login forms. Whether a user wants to approve a purchase in real time, sign for a package, verify their identity, or access a secure corporate office, authentication plays a critical role. In many of these scenarios, an input form to submit credentials like a password and OTP token isn’t available, thereby placing such scenarios out of the scope of traditional two-factor authentication. Many two-factor authentication solutions represent a tangible cost and logistical burden. A single hardware token can cost as much as $100 or more, making a two-factor authentication solution that only satisfies a limited subset of use cases unrealistic. Time to Move Beyond the Password Password-based authentication is no longer capable of meeting the demands of modern security. 
Passwords are inherently insecure as a method of authentication, and their efficacy relies on end users, developers, system administrators, and the applications themselves, all of which are vulnerable to a wide variety of attack vectors currently being exploited by cyberattacks around the world today. Traditional strong authentication methods like two-factor authentication built on top of passwords does nothing to address the liability and risk of the insecure password layer, while their shared secret architecture (e.g. OTP) is cryptographically inferior, vulnerable to many attack vectors, and creates a cumbersome experience that users dislike and often avoid. Furthermore, both passwords and the strong authentication built on top of them are incompatible with many of the devices and remote “things” that will require user authentication in the future, but lack the requisite input mechanisms like keyboards and forms to use them. Organizations and applications must remove the vulnerability and liability that passwords have created while implementing more secure authentication methods that account for an evolving and diversified landscape of use cases, end users, and threats. About the Author: Geoff Sanders is Co-Founder and CEO of LaunchKey
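To make the shared-secret weakness described above concrete, here is a simplified, HOTP/TOTP-style sketch in Python. The point to notice is that the verifying server must hold the same secret the user's device holds, so compromising either side is enough to mint valid codes. This is an illustration only; real deployments follow RFC 4226/6238 and add counter handling, time windows, and throttling.

```python
import hashlib, hmac, struct, time

def hotp_like_code(shared_secret: bytes, counter: int, digits: int = 6) -> str:
    """Both the user's device AND the server compute this from the same secret."""
    msg = struct.pack(">Q", counter)                        # 8-byte big-endian counter
    digest = hmac.new(shared_secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"example-shared-secret"        # stored on the device and on the server
counter = int(time.time()) // 30         # 30-second time step, TOTP-style
print(hotp_like_code(secret, counter))
```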
<urn:uuid:06d7d4ae-eb50-494c-a9b7-3afb2bfa7937>
CC-MAIN-2017-04
http://infosecisland.com/blogview/24634-Businesses-Should-Take-a-Pass-on-Traditional-Password-Security.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00541-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925246
1,575
3.015625
3
NEW YORK -- Sixty years from now, we'll look back on today's 3D-printed tissue and organ technology and think it's as primitive as the iron lung seems to us now. Six decades out, replacing a liver or a kidney will likely be a routine procedure that involves harvesting some of the patient's cells, growing them and then printing them across artificial scaffolding. Dr. Anthony Atala, director of the Wake Forest Institute for Regenerative Medicine, spoke at the Inside 3D Printing Conference here about where the technology is today, and what hurdles it still must overcome. The biggest hurdle is being able to 3D print supportive vascular structure so that tissue can receive the oxygen critical to its survival once it's implanted into a patient.

Today, scientists can replicate tissue in small amounts, beginning with the simplest: flat human skin tissue. Researchers have been able to create tubular blood vessels, and even parts of hollow organs of the body, such as the bladder. But it's the solid, hard organs, such as the liver and the kidney, that are more complex and require far more vascular support to recreate successfully. "We're working very hard to make sure we get there someday," Atala said. It's not as insurmountable a goal as it may seem. For one, scientists don't need to replicate an entire organ. In fact, up to 90% of an organ can fail before it seriously affects the health of a patient, Atala said. "Imagine you're playing tennis one weekend and you get chest pains. You go and get an x-ray and they find 90% of your heart vessel is occluded. The fact is you never had chest pain when your heart vessel was 80% occluded," he said. "It's the same thing with kidneys. You don't get into kidney failure until about 90% of your kidney is gone. So you have to burn 90% of your reserve before you get in trouble." The current strategy of bioengineers is to create enough tissue to boost an organ that's failing while not completely replacing it, Atala said.

Typically, the maximum distance between tissue and the vascular structures that support them is 3mm. That means that for every 3mm of tissue, physicians will have to be able to construct capillaries to support them. Today, 3D bioprinting constructs tissue in a series of layers held up by artificial scaffolding, like the iron beams in a building. First a layer of scaffolding is laid down, and then a layer of cells is laid down on top of that. In the past, scientists have had to separately create the scaffolding and then coat it with the living cells. That not only takes longer to complete, but it also places the living cells in danger of dying before they can even be implanted. 3D printing allows the scaffolding and living tissue to be printed together. In order to construct veins and capillaries, they first print out a tubular construct made of dissolvable material; they then coat the outside of that tube with muscle cells and the inside with venous barrier cells. A heart valve can be constructed in the same way: first by using dissolvable scaffolding, followed by outer muscular cells and interior barrier cells.
<urn:uuid:9942bb9d-c9e0-4b74-979d-aeb96d221026>
CC-MAIN-2017-04
http://www.computerworld.com/article/2489548/emerging-technology/3d-printing-a-new-face--or-liver--isn-t-that-far-off.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00569-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9734
672
3.46875
3
Multiprotocol Label Switching (MPLS) is a technology that has received a great deal of attention in recent years. The IETF alone has produced over 300 Internet Drafts and numerous RFCs related to MPLS and continues its work on refining the standards. So, what is MPLS all about? We asked Bill Stallings to give us a basic tutorial. The tragic events of September 11, 2001 have focused attention on the stability and robustness of the Internet. The Internet played an important role in the aftermath of the terrorist attacks. While popular news Web sites initially appeared overloaded, a great deal of private traffic in the form of instant messaging and e-mail took place. Companies directly or indirectly affected by the events in New York and Washington were quick to use the Web as a way to disseminate important information to their clients as well as to their employees. In many cases, the Internet was used in place of an overloaded telephone network. With this in mind, The Internet Corporation for Assigned Names and Numbers (ICANN) has decided to re-focus its next meeting to address issues of Internet stability and security, particularly with regard to naming and addressing. (See " ") To provide some background information, we bring you the article "A Unique, Authoritative Root for the DNS," by M. Stuart Lynn, the president and CEO of ICANN. Since this article has been posted for public comment, you are encouraged to address your feedback to: We would like to remind our readers to send us postal address updates. The computer-communications industry is one where people change jobs and locations often. While we do receive some address changes automatically when mail is returned to us, it is much more reliable to send us e-mail with the new information. In the near future, readers will be able to make address changes and select delivery options through a Web interface which will be deployed at . Until then, please send your updates to Ole J. Jacobsen, Editor and Publisher
<urn:uuid:e0b35e65-0b63-45f1-b028-359e1a3243f3>
CC-MAIN-2017-04
http://www.cisco.com/c/en/us/about/press/internet-protocol-journal/back-issues/table-contents-10/from-editor.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00477-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961408
408
3.0625
3
RAID: Faster and Cheaper with Linux

Welcome to today's thrilling howto on implementing Linux software RAID with no expense other than however many hard disks you wish to use, whether they be inexpensive ordinary PATA (IDE) drives, expensive SCSI drives, or newfangled serial ATA (SATA) drives. RAID is no longer the exclusive province of expensive systems with SCSI drives and controllers. In fact it hasn't been since the 2.0 Linux kernel, released in 1996, which was the first kernel release to support software RAID.

What RAID Is For

A RAID array provides various functions, depending on how it is configured: high speed, high reliability, or both. RAID 0, 1, and 5 are probably the most commonly used. RAID 0, or "striping," writes data across two or more drives. RAID 0 is very fast; data are split up in blocks and written across all the drives in the array. It will noticeably speed up everyday work, and is great for applications that generate large files, like image editing. It is not fault-tolerant -- a failure on one disk means all data in the array are lost. That is no different than when a single drive fails, so if it's speed and more capacity you want, go for it. RAID 1, or "mirroring," clones two disks. Your storage space is limited to the size of the smaller drive, if your two drives are not the same size. If one drive fails, the other carries on, allowing you to continue working until it is convenient to replace the disk. RAID 1 is slower than striping, because all writes are done twice. RAID 5 combines striping with parity checks, so you get speed and data redundancy. You need a minimum of three disks. If a single disk is lost, your data are still intact. Losing two disks means losing everything. Reads are very fast, while writes are a bit slower because the parity checks must be calculated. You may use disks of different sizes in all of these, though you'll get better performance with disks of the same capacity and geometry. Some admins like to use different brands of hard disks on the theory that different brands will have different flaws.

What RAID Is Not

It is not a substitute for a good backup regimen, backup power supplies, surge protectors, and other sensible protections. Linux software RAID is not a substitute for true hardware SCSI RAID in high-demand mission-critical systems. But it is a dandy tool for workstations and low- to medium-duty servers. PATA (or IDE) drives are not hot-swappable, but you can set up an array with standby drives that automatically take over in the event of a disk failure. If you don't want to use standby drives, your downtime is limited only to the time it takes to replace the drive, because the system is usable even while the array is rebuilding itself. Hardware RAID controllers come in a rather bewildering variety. Mainboards come with built-in IDE RAID controllers, and PCI IDE RAID controller cards can be had for as little as $25. Most of these are like horrid Winmodems, in that they require Windows drivers to work and have Windows-only management tools. I wouldn't bother with IDE RAID controllers -- Linux software RAID outperforms them in every way, and costs nothing. A true hardware RAID controller operates independently of the host operating system. You'll find a lot of choices for SATA and SCSI drives. SATA controllers cost from $150 to the sky's the limit, depending on how many drives they support, how much onboard memory they have, and other refinements that take the processing load away from the system CPU.
Good SCSI controllers start around $400 and have an even higher sky. Both SATA and SCSI controllers should support hot-swapping, error handling, caching, and fast data-transfer speeds. A good-quality hardware controller is fast and reliable, but finding such a one is not so easy. Many an experienced admin has lost sleep and hair over flaky RAID hardware. Something to keep in mind for the future: as SATA support in Linux matures, and the technology itself improves, it should be a capable SCSI replacement for all but the most demanding uses. (For more information see the excellent pages posted by the maintainer of the kernel SATA drivers, Jeff Garzik.)

Software RAID Advantages

Linux software RAID is more versatile than most hardware RAID controllers. Hardware controllers see each drive as a single member of the RAID array, and handle only one type of hard disk. Most hardware controllers are picky about the brand and size of hard disk -- you can't just slap in any old disks you want, but must carefully choose compatible disks. And it's not always documented what these are. Linux RAID is a separate layer from Linux block devices, so any block device can be a member of the array -- a particular partition, any type of hard drive, and you can even mix and match. Endless debates rage over which offers superior performance, hardware or software RAID. The answer is "it depends." An old slow RAID controller won't match the performance of a modern system with a fast CPU and fast buses. The number of drives on a cable, the types of drives and cabling, the speed of the data bus -- all of these affect performance in addition to the speed of the CPU and the demands placed on it. One disadvantage is that hot-swap ability is limited and not entirely reliable.

Converting An Existing System To RAID

First of all, your power supply must be capable of powering all the drives you want to run on the system. Adding as many drives as you want is easy and inexpensive. If you're going to purchase new hard disks, you might as well get SATA, because the cost is about the same as PATA. SATA drives are faster and use less cabling, and will soon supplant PATA drives. PCI controller cards for additional PATA and SATA disks cost around $40, and will run two disks each. The built-in IDE channels on mainboards can handle two disks each, but you should run only one disk per channel. You'll get better performance and minimize the risk of a fault taking out both hard disks. Next, install the raidtools2 and mdadm packages. If you want your RAID array to be bootable, you'll need RAID support built into the kernel. Or use a loadable module and use an initrd file, which to me is more trouble than rebuilding a kernel. Next week in Part 2 we'll cover how to do all of this. You may get a head start by consulting the links in Resources.
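As a rough companion to the RAID level descriptions above, here is a small back-of-the-envelope helper in Python. It is illustrative only: the function is mine, it conservatively assumes every member contributes only as much as the smallest disk, and mdadm reports the real figures once an array is actually built.

```python
def raid_usable_capacity(level: int, disk_sizes_gb: list) -> float:
    """Approximate usable space for RAID 0, 1, and 5 (conservative estimate)."""
    n = len(disk_sizes_gb)
    smallest = min(disk_sizes_gb)
    if level == 0:                 # striping: no redundancy, roughly smallest * n
        return smallest * n
    if level == 1:                 # mirroring: one disk's worth of space
        return smallest
    if level == 5:                 # striping + parity: one disk's worth lost to parity
        if n < 3:
            raise ValueError("RAID 5 needs at least three disks")
        return smallest * (n - 1)
    raise ValueError("level not covered by this sketch")

print(raid_usable_capacity(5, [200, 200, 250]))   # -> 400 (GB usable)
```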
<urn:uuid:f3da6edc-c71c-4099-9db0-dc8baec68fa5>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/netsysm/article.php/3504136/RAID-Faster-and-Cheaper-with-Linux.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00201-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934667
1,390
2.78125
3
Kaspersky Lab presents an article entitled Mass Defacements: The Tools and Tricks by David Jacoby, a Senior Security Researcher with the company's Global Research & Analysis Team. During his research, David identified over a hundred web servers that had been infected by 'defacers', including web servers belonging to some high-profile companies. These infections resulted in confidential data from sites, as well as information on how to infect them, being sold on the black market. The cybercriminals were also able to use the web servers to launch DDoS attacks or conduct spam mailings. Everyone knows that the Internet is plagued by hackers and criminals cracking websites, turning computers into nodes in botnets, and openly selling spyware and stolen passwords to user accounts on the black market. One type of cybercrime, however, remains a bit of a mystery. Defacement attacks change the content or visual appearance of random websites; while the attackers are not doing this to make a direct profit, such attacks can damage the reputation of the organisations targeted, or cause financial losses. There is a defacer community whose groups and members compete with each other to see who can crack and deface the most sites. There are a number of online archives where defacers can see how many times and by whom a particular site has been modified. These archives include the names of high-profile sites belonging to some major companies. A PHP backdoor, deployed to a cracked site from within, is central to any defacement attack. The backdoors have a range of functionality, but most of them will have methods to bypass PHP security functions, steal information, read/modify files, access SQL databases, crack passwords, execute arbitrary commands and escalate privileges. Moreover, the server where the site is located can be used to send out spam or to carry out DDoS attacks. Defacers generally use scanners to find vulnerable servers, checking for Remote and Local File Inclusion and SQL injection vulnerabilities, among others. "One major problem in combating defacements is that defacers aren't only exploiting technical vulnerabilities, they are also exploiting ignorance. Most people who work with web servers today do not understand the importance of having a system which is up-to-date and fully patched," says David Jacoby. "Companies and organisations often put a lot of time and effort into teaching their IT personnel about how SQL injection and buffer overflows work, and how they can be exploited, when it would be more sensible to focus on ensuring that systems are fully patched and configured properly." The full version of the article is available at www.securelist.com.
<urn:uuid:5f159a07-325e-416a-bd48-01283887139f>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2010/Mass_Defacement_of_Websites_Hacker_Fun_that_Threatens_Business
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00201-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95469
524
2.59375
3
HTTP Verb Tampering is an attack that exploits vulnerabilities in HTTP verb (also known as HTTP method) authentication and access control mechanisms. Many authentication mechanisms only limit access to the most common HTTP methods, thus allowing unauthorized access to restricted resources by other HTTP methods. Many Web server authentication mechanisms use verb-based authentication and access controls. Such security mechanisms include access control rules for requests with specific HTTP methods. For example, an administrator can configure a Web server to allow unrestricted access to a Web page using HTTP GET requests, but restrict POSTs to administrators only. However, many implementations of verb-based security mechanisms enforce the security rules in an insecure manner, allowing access to restricted resources by using alternative HTTP methods (such as HEAD) or even arbitrary character strings. For example, Java Platform Enterprise Edition (Java EE) supports verb-based authentication and access control through the web.xml configuration file. In Java EE, one can limit access to the admin/ directories to "admin" users by adding a security constraint to web.xml that names the admin/ URL pattern, lists the GET and POST methods, and restricts them to the admin role. These security rules ensure that GET or POST requests to admin/ directories from non-admin users will be blocked. However, HTTP requests to admin/ directories other than GET or POST will not be blocked. While a GET request from a non-admin user will be blocked, a HEAD request from the same user will not. Unless the administrator explicitly configures the Web server to deny all methods other than GET and POST, the access control mechanism can be bypassed simply by using different methods that are supported by the server. Other examples of Web servers that are affected by this issue include IIS 6.0, Apache 2.2.8, and Tomcat 6.0. In some Web servers (for example, Apache 2.2/PHP) it is even possible to bypass the access control mechanism by using arbitrary character strings for HTTP methods. Such Web servers implement default handlers for requests that are not bound to a specific HTTP method. Unlike an HTTP Servlet, where a GET request is only handled if a doGet() is defined, some Web servers attempt to process any and all methods, including unknown methods. Thus, by replacing a legitimate method with an arbitrary one (MPRV instead of GET), the attacker can exploit vulnerabilities in the internal processing logic and bypass the access control mechanism.

HTTP Verb Tampering Prevention

Verb tampering attacks exploit either configuration flaws in the access control mechanism or vulnerabilities in the request handlers' code. As presented in the example above, blocking requests that use non-standard HTTP methods is not enough, because in many cases an attacker can use a legitimate HTTP method like HEAD. Imperva SecureSphere combines two mitigation techniques to detect and stop verb tampering attacks. In the first, SecureSphere learns which methods are allowed for each URL. Any attempt to use HTTP methods that are not part of the application's normal usage will be detected and blocked. The second technique detects non-standard HTTP methods and blocks requests using such methods. In cases where the application uses non-standard methods normally, this mechanism can be easily updated with the allowed methods.
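One way to check a deployment for this weakness from the outside, in the spirit of the HEAD bypass described above, is to probe a protected URL with several verbs and compare the responses. The sketch below uses Python's third-party requests library; the URL and verb list are placeholders, and such probes should only be run against systems you are authorized to test.

```python
import requests  # third-party: pip install requests

def probe_methods(url: str, methods=("GET", "POST", "HEAD", "PUT", "MPRV")):
    """Report how a protected URL responds to different HTTP verbs.
    If GET is denied (401/403) but HEAD or an arbitrary verb is not,
    the access control is likely verb-based and may be bypassable."""
    results = {}
    for method in methods:
        resp = requests.request(method, url, allow_redirects=False, timeout=10)
        results[method] = resp.status_code
    return results

for verb, status in probe_methods("https://example.com/admin/").items():
    print(f"{verb:5s} -> {status}")
```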
<urn:uuid:37c59347-0fd4-4c58-8407-6c698d0af51b>
CC-MAIN-2017-04
https://www.imperva.com/resources/glossary?term=http_verb_tampering
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00201-ip-10-171-10-70.ec2.internal.warc.gz
en
0.872667
627
3.21875
3
Argonne National Laboratory recently published several sessions from its Summer 2013 Extreme-Scale Computing program to YouTube. One of these is a lesson on combining performance and portability presented by Argonne Assistant Computational Scientist Jeff Hammond. For some reason the video image does not match the lecture, but you will find a link to Hammond’s slide deck here. The strongest point that Hammond makes is the admonition to never forget the essence of portable performance: use libraries. There is no reason to roll your own unless something does not exist. If a library does not exist, Hammond’s suggestion is to write it and actively solve it the right way and then share it. In nearly all cases, you’ll want to trade performance for portability, says Hammond. “In general, your code will outlive the machine,” he adds. “The most successful supercomputers make it just over five years; the most successful scientific applications may reach five decades.” The way to ensure longevity is through portability. But if you require something non-portable stick it in something that is portable (encapsulation). Hammond compares this approach to using virtual machines to test potentially unsafe code. The next section of the lecture is devoted to portable MPI communication. While the standard is perfect, the implementation is not, says Hammond. Wrapping MPI may cost cycles, but has huge payoffs in many instances, enabling bugs and performance quirks as well as other issues to be addressed with minimal headache. The rest of this one-hour lecture gets pretty technical, but should be required viewing for any programmers who are considering a career in HPC.
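Here is a minimal sketch of the wrap-the-API idea in Python, using the mpi4py bindings for brevity. The shim names and the choice of mpi4py are illustrative assumptions, not Hammond's code; production HPC applications would typically wrap the MPI C API directly.

```python
from mpi4py import MPI  # assumes an MPI library and the mpi4py bindings are installed

_comm = MPI.COMM_WORLD

def par_sum(value):
    """Every reduction in the application goes through this one shim.
    If a given MPI implementation has a slow or quirky allreduce, only this
    function needs to change; the rest of the code stays portable."""
    return _comm.allreduce(value, op=MPI.SUM)

def par_rank():
    return _comm.Get_rank()

if __name__ == "__main__":
    total = par_sum(par_rank())      # sum of ranks 0..N-1 across all processes
    if par_rank() == 0:
        print("sum of ranks:", total)
```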
<urn:uuid:94acca3d-0b3b-41e9-bfe6-b2a832e5934a>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/07/24/portability-mandate/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00505-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924623
341
2.84375
3
BIG DATA NAMED THE #2 BUZZ WORD OF 2012 Most simply, it’s a way to describe the crazy amount of data generated these days. Information is churned out in racing streams by everything from social networks to weather balloons: we live in an era of big data. More than 90% of Big Data is BIG DATA TRANSFORMS BUSINESS One quadrillion bytes. 1 PB = 1,000,000,000,000,000 B, 350,000 digital pictures, or a mile-long stretch of beach. One quintillion bytes. 1 EB = 1,000,000,000,000,000,000 B, half the information generated worldwide in 1999, or a beach from Maine to North Carolina. One sextillion bytes. 1 ZB = 1,000,000,000,000,000,000,000 B, unimaginable, or a beach as big as all the coastlines in the world. We live in a world of Big Data, where ever-increasing quantities of information are measured in petabytes, exabytes, and zettabytes. Businesses, governments, and other organizations must now deal with huge amounts of structured data—banking transactions, airline reservations, customer information, patient records, and more—as well as increasing amounts of unstructured data from tweets, texts, smartphone photos, Facebook posts, and other social media content. AMOUNT OF DATA CREATED AND REPLICATED GLOBALLY Total for 2012: Projected by 2020: BIG DATA PRESENTS BIG OPPORTUNITIES Big Data enables businesses to identify new and better ways to interact, innovate, and improve, distancing themselves from competitors. But traditional technology approaches and architectures can’t support Big Data’s volume, variety, and velocity. EMC offers a comprehensive Big Data solution with: EMC PROTECTS DATA By making sure that: Data is not lost or leaked. In the last 3 years, 5 billion files have been analyzed. Critical data can be recovered. 100 petabytes of data are backed up by EMC. You really are who you say you are when shopping or banking online. More than 40 million transactions by 19+ million card users are protected every month. Currently, less than a third of the world’s data is properly protected. EMC ANALYZES DATA While some might see the proliferation and use of Big Data as an intrusion into personal privacy, there’s plenty of evidence to show that Big Data can play a significant role in empowering consumers to make better purchasing decisions, and in helping organizations to manage their operations more efficiently. EMC helps businesses perform 2.3 trillion complex analytical queries every year... ...while customers for EMC’s data analysis have increased their data volumes by 300% each year. EMC technology and services—Isilon and Atmos storage, Greenplum, Documentum xCP, NetWitness, and EMC Consulting—store, analyze, and protect Big Data, enabling businesses and organizations to achieve new levels of efficiency and agility. Market-leading scale-out storage A business-process modeling tool A unified analytics platform Application development services EMC STORES DATA Here’s what’s creating the deluge of information: Smart electrical grids Mobile phone sensors In the last decade, EMC shipped (11,600,000,000,000,000,000 bytes of storage). That’s 24% of all external storage shipped. SUSTAINABILITY & BIG DATA Big Data means smarter decisions for: EMC PROTECTS DATA through end-of-life, tracking millions of drives and other data-bearing media throughout the disposal process. 
Public Service Delivery THE HUMAN FACE OF BIG DATA While EMC leads in powering technology that brings Big Data to life, we’re also the lead sponsor of The Human Face of Big Data project, a vast global snapshot of how Big Data affects our lives not just in the future, but today. Every order that is handled by the NYSE Euronext markets in the U.S. is analyzed and archived using EMC software. In 2011, this averaged more than 2 billion orders per day. The Broad Institute uses 10 petabytes of EMC storage to perform gene sequencing. Data volumes at gene sequencing company Ambry Genetics are growing by 100% per year. Legend3D (2-D-to-3-D media conversions) helped produce film hits Transformers, Smurfs, Hugo, and Spiderman. Four hundred artists generate over 100 terabytes each week during movie production. The Associated Press speeds access to HD video with EMC. Data volumes will grow from 800 to 2.5 petabytes over the next two years. With help from EMC, SilverSpring analyzes data from more than one million smart grid meters in under one minute. The national Baseball Hall of Fame Museum runs on an EMC platform containing 500,000 photographs, 12,000 hours of audio and video content, three million documents, and 40,000 three-dimensional artifacts. English soccer team Fulham stores all closed-circuit video on EMC gear. The resolution of the 27 stadium cameras is high enough to read a number plate at 200 feet. eBay has 9 million users. More than 500 million objects are stored and managed every day using EMC infrastructure. The JFK Archive runs on EMC. It includes 8.4 million pages of personal, congressional, and presidential papers and 40 million pages from individuals associated with the administration. The archive also contains 400,000 still photographs, 9,000 hours of audio recordings, and 1,200 hours of video recordings. Stereo D and Deluxe Entertainment use EMC to facilitate 3-D rendering. In the future, 3-D movies are expected to consume up to 10 petabytes of data. Due to the expanded market for interactive audio and video content, 200-year-old publisher Wiley and Sons has, over the past two years, increased its data volume from 15 to 150 terabytes of EMC storage. The Library of Congress runs on EMC. 750,000 to 1 million items are being digitized annually. To develop the new Land Rover Evoque, Jaguar Land Rover used EMC to build its Virtual Reality Cave. In its image library, Digital Globe stores over 1.87 billion square kilometers of earth imagery, using 2 petabytes of EMC storage. LinkedIn members made nearly 4.2 billion professionally oriented searches on the EMC platform in 2011 and surpassed 5.3 billion searches in 2012. With EMC, comScore processes over 1 trillion customer records per month, compared with 473 billion monthly records just a year ago.
<urn:uuid:61f181d9-da95-4b45-8112-273ce5c74c07>
CC-MAIN-2017-04
https://www.emc.com/corporate/annual-report/big-data.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00321-ip-10-171-10-70.ec2.internal.warc.gz
en
0.868165
1,428
2.796875
3
The advent of low-power mobile processors and cloud delivery models is changing the economics of computing. But just as an economy car is good at different things than a full-size truck, an HPC workload still has certain computing demands that neither the fastest smartphone nor the most elastic cloud cluster can fulfill. While some supercomputers today use low-power processors that were originally designed to power smartphones, that does not mean that a smartphone is a supercomputer. It should go without saying, but people really love their phones today and imagine they possess super powers. "I often hear the saying, 'My cellphone is a supercomputer because it's just as fast, or faster, than a supercomputer was 30 years ago,'" says Barry Bolding, Cray's vice president of storage and data management and corporate marketing, in a blog post on the Cray website. Not so fast, Bolding says. "Believe it or not, you still cannot predict a tornado on a cellphone; it just won't work. You need a supercomputer for that type of scientific prognostication. While today's smartphones are, well, smart, they're not that smart." The same holds true for cloud clusters, Bolding says. While today's new cloud clusters can amass much more computing power than an iPhone, and perhaps deliver more CPU cycles than some supercomputers, they still don't work well for HPC workloads. It all comes down to finding the best tool for the HPC job, Bolding says. The Cray exec says that HPC systems like the Cray XC30, Cray XC30-AC, and Cray CS300 are targeted at heavy-duty scientific and engineering workloads. "What makes them supercomputers is that they can do work that cloud-clusters either cannot do, or are clumsy and inefficient for," Bolding says. "Weather prediction, turbulent airflow in aircraft engines, and high resolution seismic modeling are supercomputer workloads," says Bolding, who has been with Cray for 21 years and holds a Ph.D. in chemical physics from Stanford University. "Grinding out the most accurate forecast every few hours, simulating accurate combustion and investing in the right oil/gas drilling site are all critical, production workloads. This is where Cray's computing solutions shine." Bolding uses a car analogy to illustrate the differences. A ZipCar may be great for somebody who only occasionally needs a car, but it doesn't make sense for somebody who needs a car every day. If you need to move lots of people but still don't want to buy a car, a bus (i.e. cloud cluster) provides lots of capacity at low cost, he says. For somebody who needs their own car every day, neither the bus nor the ZipCar makes much sense. Once one decides they need a car, there are many to choose from. A Toyota Prius, for example, excels at economical transportation of people, while a Toyota Tundra excels at transporting lots of stuff. Supercomputers are similar. "Fit the right computing model to the right application and you will be happy. Try to fit a production supercomputing application into a cloud-cluster and you will be sorely disappointed," Bolding concludes. "Remember, the supercomputer of 30 years ago is a terrible cellphone and the cloud-cluster of today is not a supercomputer."
<urn:uuid:e6bee04c-ed43-488c-91a5-9f1137578c34>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/06/17/supercomputers_still_the_king_of_the_hpc_hill/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00375-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942271
743
2.78125
3
Introduce yourself to Linux, and advance your proficiency, through a spectrum of self-paced tutorials. With these tutorials, you can build fundamental skills in Linux systems administration at your level of expertise. Exam 101 is the first of two junior-level system administrator certification exams offered by the Linux Professional Institute (LPI). Both exam 101 and exam 102 are required for junior-level certification, or LPIC-1. New developerWorks articles corresponding to the April 2009 objectives for exam 101 and exam 102 are in progress. The developerWorks articles below are part of the new series. They help you prepare for the topics in LPI exam 101:
- LPI exam 101 prep: Hardware and architecture, Topic 101. Learn to configure your system hardware with Linux. By the end of this tutorial, you will know how Linux configures the hardware found on a modern PC and where to look if you have problems.
- LPI exam 101 prep: Linux installation and package management, Topic 102. In the five new articles for this topic, covering the latest (April 2009) LPI exam 101 objectives, learn how to design a hard disk layout, install a boot manager, manage shared libraries, use Debian package management, and use RPM and YUM package management.
- LPI exam 101 prep: GNU and UNIX commands, Topic 103. In the eight new articles for this topic, covering the latest (April 2009) LPI exam 101 objectives, learn how to work on the command line; process text streams using filters; perform basic file and directory management; use streams, pipes, and redirects; create, monitor, and kill processes; modify process execution priorities; search text files using regular expressions; and perform basic file editing operations using vi.
- LPI exam 101 prep: Devices, Linux filesystems, and the Filesystem Hierarchy Standard, Topic 104. Get acquainted with Linux devices, filesystems, and the Filesystem Hierarchy Standard. By the end of this tutorial, you will know how to create and format partitions with different Linux filesystems and how to manage and maintain those systems.
See all LPI exam-prep tutorials on developerWorks.
<urn:uuid:4349924d-03d6-429f-b05e-bb7f6e268329>
CC-MAIN-2017-04
http://www.ibm.com/developerworks/linux/lpi/101.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00009-ip-10-171-10-70.ec2.internal.warc.gz
en
0.855444
430
2.640625
3
One unfortunate aspect of human psychology involves how people tend to deal with potential threats. As long as the threats are more abstract than actual, all too often we reason that there’s no rush to build defenses against them. Only after a threat materializes and does actual harm do we start to really take it seriously. Anyone who thought the dangers associated with the Internet of Things (IoT) were more bark than bite was disabused of that notion on October 21. That morning, a distributed denial of service (DDoS) attack against Dyn, a dynamic DNS service, significantly reduced the availability of dozens of major websites and internet services. The source of the attack: tens of millions of IoT devices ranging from closed-circuit cameras to home DVRs. The devices had been compromised with the Mirai malware program, which used password guessing to infect them. On the morning of the attack, the malware directed the devices to send a crippling deluge of requests to the Dyn servers. Prior to this attack, there were plenty of warnings about the risks posed by poorly protected IoT devices. An AT&T Cybersecurity Insights report released earlier in the year, for example, explored IoT-based threats in depth and recommended a number of best practices to limit exposure to these threats. Among the recommendations: don’t permit easy-to-hack default passwords for IoT devices. Just that one recommendation, if followed by device vendors, would likely have prevented the October attack. IoT-based threats, of course, are only one form of “new” vulnerabilities organizations must address as both technology and business operations transition through rapid changes. Two other fast-emerging sources of cyberthreats are cloud computing and mobile computing. The new AT&T Cybersecurity Insights report, “The CEO’s Guide to Navigating the Threat Landscape,” examines all three of these emerging cyberthreat sources – IoT, cloud and mobile. An AT&T survey cited in the report found companies storing more than half of their data in the cloud report higher frequencies of malware, ransomware, advanced persistent threats, information theft and unauthorized access. Even so, the report cautions that data stored in corporate servers may not be any safer than data stored in the cloud. Meanwhile, about 40 percent of the cybersecurity professionals surveyed by AT&T reported that their organizations’ mobile devices were compromised in the prior 12 months. Evidence suggests attackers are increasingly targeting app stores to distribute mobile apps infected with malware, and free Wi-Fi networks continue to pose significant risks to corporate users and enterprise data. Long story short, organizations must ensure that their security infrastructure keeps pace with the ways technology is evolving and is used. Meeting that objective requires companies to conduct regular risk assessments, to continuously educate their employees, and to deploy security controls tailored to meet both established and emerging threats. Most fundamentally, it means taking new categories of cyberthreats seriously, even if they haven’t yet materialized as actual attacks. There’s too much at risk – intellectual property, customer confidence, legal liability and your businesses’ bottom line – to fall victim to the “it can’t happen to us” line of thinking. Dwight Davis has reported on and analyzed computer and communications industry trends, technologies and strategies for more than 35 years. All opinions expressed are his own. AT&T has sponsored this blog post.
<urn:uuid:c33de793-18b9-44dd-834f-c86b60d2db17>
CC-MAIN-2017-04
http://www.csoonline.com/article/3144043/techology-business/emerging-cybersecurity-vulnerabilities.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00311-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952772
701
2.578125
3
Before we proceed with news about the research showing how important bass is to music -- and thus the vital role filled by bass players -- here are some of our favorite bass player jokes, courtesy of TalkBass.com: How many Pop bass players does it take to change a light bulb? None. The keyboard player does it with his left hand. Why can't bass players get through a door? He either can't find the key, or he doesn't know when to come in! What do you call a bass player without a girlfriend? Homeless. How do you get a bass player to turn down? Put sheet music in front of him. And our favorite (not from TalkBass.com): Why did the bass student miss his third lesson? He had a gig. We kid the bass players, but the truth is they have an important and underrated job. For as Mic.com's Tom Barnes argues, "[T]here's scientific proof that bassists are actually one of the most vital members of any band. There are powerful neurological and structural reasons why our music needs bass." Last year McMaster University researchers demonstrated that the brain hears low tones more clearly, meaning it also can detect mistakes in low tones more easily. This means the bass plays a major role in setting a song's rhythm. And a study at Northwestern University showed that "bass-heavy music is far more effective at inspiring feelings of power and drive in listeners," Barnes writes. This story, "All About That Bass, Indeed" was originally published by Fritterati.
<urn:uuid:ea923bd7-675d-4e64-8494-a1b40b388f46>
CC-MAIN-2017-04
http://www.itnews.com/article/2934784/all-about-that-bass-indeed.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00037-ip-10-171-10-70.ec2.internal.warc.gz
en
0.978243
322
2.59375
3
We hope all of you had a wonderful Earth Day this past Monday, and challenged yourself to tackle one of the behavior changes we recommended in our last blog. We received an outpouring of ideas on other ways to reduce your personal carbon footprint, and wanted to share those with you in this follow-up article. Create a compost area at your house. Back in the day, well before disposals, this was a common practice. Was a tremendous fertilizer for the garden, too. – Megan For more information about composting and how to get started, visit this link. Ditch the plastic water bottles. Buy a reusable water bottle instead; it can go a long way in reducing your carbon footprint. Even though some plastic water bottles are made from recycled material, most are not. – Kurt Here is a collection of some reusable BPA-free water bottles that we love. Keep your cat indoors. Did you know that domestic cats kill over a billion small birds and animals every year? (One outdoor car averages about 40 kills per year). This upsets the natural predator/prey balance that keeps our environment sound. – Chris Here are some other good reasons to keep your cat indoors. Shutdown your laptop when it is not in use. Leaving your computer running is a waste of electricity and energy. – Sampath Here are some more tips on how to save power on a laptop. Energy-proof your home. This post offers 101 easy ways to save energy, and all it takes is a few minutes each month. Start with an energy audit and analyst your home’s structure, appliances, insulation and overall family lifestyle. – Christian Turn your thermostat up to 78*F/25*C in the summer. Even better, use a system like Nest that monitors your daily behavior and adjusts your thermostat accordingly. On the flip side, you can also use blankets to warm up in the winter rather than your home heating system. – Johnny Thanks for the all of the wonderful green ideas! For more, visit our Facebook page or submit your own in the comment box below.
<urn:uuid:069138ed-fae1-47c7-b04c-83eee8f40286>
CC-MAIN-2017-04
http://www.lifesize.com/video-conferencing-blog/reduce-your-carbon-footprint/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00523-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932285
431
2.53125
3
Kuwait is one of the driest countries in the GCC and MENA regions. Receiving no more than an average of 110 mm of rainfall per annum, the country is devoid of fresh water streams. Over-depletion of aquifers has led to reliance on desalination for almost all the needs of the country (90% of all water used in the country is derived from desalination). The country has a very high consumption of water at 500 liters per person per day. While the government is implementing awareness programs, it is unlikely that consumption will fall, with the population increasing at almost 1.62% annually. As of 2015, the desalination market in Kuwait was worth USD X.X billion. The market size is expected to grow at a CAGR of XX.XX%. The depleting natural precipitation and ground-water levels and increasing population are the major drivers of the sector in the region. A continued effort at increasing diversification of government income from hydrocarbons is another factor that has led to an increase in construction projects, industries, manufacturing plants, etc., leading to more demand for fresh water. Moreover, the government is supporting and encouraging the establishment of desalination plants to meet the nation's demands.

Restraints and Challenges

The biggest challenge of desalination is the cost. As per a study, the cost of desalinated water per cubic meter was USD 1.04, 0.95 and 0.82 for MSF, MED, and RO, assuming a fuel cost of USD 1.5/GJ. Moreover, energy accounts for approximately three-fourths of the supply cost of desalination. Transportation cost is also added to the overall cost, making desalination a very costly process. Another negative impact of desalination is on the environment, with the treatment of brackish water leading to pollution of fresh water resources and soil. Discharge of salt on coastal or marine ecosystems also has a negative impact. The country's adverse business climate due to rifts between the National Assembly and the executive branch has led to a slow pace of economic diversification. However, efforts pushed forth by way of a planned USD 104 billion investment have borne fruit only partly because of the uncertain political climate. The relative increase in economic diversification has led to a more open economy, engendering an increase in the number of businesses, services, and industries, all implying an increase in the need for water. It is estimated that consumption will increase from 405 million imperial gallons per day in 2011 to an estimated 780 million imperial gallons per day by 2020. Currently, all desalination plants in Kuwait are owned and managed by the government. However, going forward, the emphasis will be on setting up new desalination plants as public-private partnerships in the Integrated Water and Power Plants format. Towards this end, the MEW (Ministry of Electricity and Water) is currently interested in issuing tenders. Hence, there are many opportunities in the sector in Kuwait. About the Market
<urn:uuid:de8cfb63-97e7-41a1-8163-da8a032e7f58>
CC-MAIN-2017-04
https://www.mordorintelligence.com/industry-reports/desalination-industries-in-kuwait-industry
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00523-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950049
628
3.078125
3
In this blog series, we will shed light on the legislative framework of mobile application development in major countries and regions across the globe. The fourth and last part of the series is an analysis of Chinese regulations that are of concern to application developers.

China is generally believed to be the 'Wild West' for mobile application developers. There is virtually no government oversight, and apps can be put online without meeting any requirements. The app industry is skyrocketing, with nearly a billion smartphone owners in the country. There is a downside to this remarkable growth in the app market: dozens of Chinese Android App Stores exist, many of them swarming with malicious or pirated applications.

In November 2012, the Ministry of Industry and Information Technology (MIIT) proposed a law to set up an evaluation system for smartphone applications in order to tackle the problem. Every single app would have to be approved by the Chinese government before release. This plan raised a lot of protest, due to fear of censorship, favoritism, or even large-scale corruption. However, the main argument of the opponents was that MIIT regulation would delay release dates by an enormous amount of time, due to the Ministry's sheer inability to review all apps. According to them, this could severely damage the development of the Chinese mobile app ecosystem. Nonetheless, there is no all-encompassing regulation as of yet.

The Chinese government took a first step towards regulation in 2014, when it issued rules that limit the role of mobile messaging services in spreading news. Mobile chatting apps in particular, such as the enormously popular WeChat app, were targeted by this law. Under the new regulations, only news agencies and other groups with official approval are authorized to publish political news allowed by the government. Moreover, users of these mobile messaging services have to register with their official IDs.

The second and provisionally last step to tighten the government's control over the mobile app market was last year's proposal, named 'Draft Regulation for Mobile Smart Device Applications for Pre-installation and Distribution'. If enacted, manufacturers of mobile devices may not commit actions that infringe the legal interests of users or compromise cyber-security, such as omitting essential information on pre-installed applications in the user manual, forcibly imposing applications irrelevant to the basic functions of the device, or collecting and using personal information without the explicit notification and consent of the users.

In conclusion, there is practically no governmental control over the Chinese application market. Chinese legislators have taken their first steps towards regulation, but the Chinese Android App Stores are still rife with malware and pirated applications.
<urn:uuid:6bcdcda7-0505-4545-9354-c1140dc4f7a4>
CC-MAIN-2017-04
https://www.guardsquare.com/en/blog/legislative-framework-application-development-china
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00091-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952856
509
2.5625
3
It is a safety riddle that neither the government nor the private sector is figuring out very well. As high-tech gadgets for cars proliferate and consequently increase the opportunity for accidents, collision avoidance systems and other advanced equipment are being developed to keep drivers out of harm's way. But are the advanced safety systems a help or a hindrance?

In a 107-page report critical of how the federal Department of Transportation handles the onrush of advanced technology for drivers and the car itself, the Government Accountability Office said technology-based trends present both an opportunity to reduce highway fatality rates and a threat that, if not countered, may result in increased fatalities.

The bad news is that from now through 2015, more complex devices will be developed for use in cars. The advanced and rapid evolution of complex products, including new applications and detailed screens, will increase driver distractions, the GAO said. This includes portable devices such as cell phones with text messaging and more advanced features, as well as MP3 players, the report stated.

Now and in the future, teens may have the highest risks, the GAO stated. A recent National Highway Traffic Safety Administration-sponsored study noted that cell phone use is increasing, cited findings that younger, and in some cases novice, drivers are "leading the way" in using various new devices, and noted that the combination of distraction and lack of "fully developed driving skills" suggests accelerating risks for this group, the GAO stated. Additionally, driver use of portable phones with touch screens can now be facilitated with dashboard holders with swivel mounts for landscape and portrait viewing. Motorcycle helmet equipment is also now available to facilitate phoning while riding. Finally, wireless Internet is becoming available in cars, which will become "a moving WiFi hotspot with Internet access."

From 2015 through 2020, problems could increase if young drivers continue texting and middle-aged drivers continue voice calling as they age, and if new cohorts of teen drivers text or use newer complex devices at levels similar to or higher than today's teens, the GAO stated. According to NHTSA, 19 states and the District of Columbia have implemented cell phone bans. One objective of further research could be to describe whether states with cell bans are using, encouraging, or considering new technologies for the future, such as devices that could help police detect ongoing calls in passing cars, or in-car equipment to track, record, and report - to either parents or police - a driver's use of a portable phone, the GAO stated.

The good news is that a number of systems are being developed - and some are being deployed in higher-end cars - that promise to help drivers avoid problems. For example, the Cooperative Intersection Collision Avoidance System (CICAS) is a DOT-sponsored effort that could help reduce accidents by alerting drivers when they or other vehicles are projected to violate traffic control devices and telling drivers about potential problems at intersections - such as someone possibly running a red light.
CICAS consists of:
- Vehicle-based technologies and systems: sensors, processors, and driver interfaces within each vehicle
- Infrastructure-based technologies and systems: roadside sensors and processors to detect vehicles and identify hazards, plus signal systems, messaging signs, and/or other interfaces to communicate various warnings to drivers
- Communications systems: dedicated short-range communications to carry warnings and data between the infrastructure and equipped vehicles

According to the DOT, intersection-related crashes take a heavy toll on lives, productivity, and the economy. In 2003 alone, 8,569 people died and more than 1.4 million suffered injuries as a result of intersection-related crashes. Intelligent intersection systems such as CICAS offer a significant opportunity to improve safety by enhancing driver decision-making at intersections, which will help drivers avoid crashes, the DOT stated.

Other overarching research systems being developed include the Vehicle Infrastructure Integration (VII) and the Effectiveness of Vehicle Safety Communications Applications (EVSCA) programs. EVSCA is an NHTSA program to evaluate whether the effectiveness of vehicle-to-vehicle communications (either alone or in combination with stand-alone crash-avoidance technologies) could benefit from technologies allowing the vehicle to communicate with roadside or other sensors. Both programs are designing technologies to alert drivers to hazards and decrease fatality rates, the GAO stated.

The conundrum arises from the fact that such systems can cause problems too. For example, older drivers may be helped by crash avoidance technologies such as backup warning systems and night vision assistance, but those systems would also enhance the mobility of older drivers and "might encourage older adults to continue driving well beyond when they would ordinarily cease operating vehicles," thus raising risks, the GAO said. This fact is important because, by 2025, the annual number of road fatalities for older drivers may be double what it was in 2005. The main reason for the projected increase is that the first members of the baby boom generation will reach their 65th birthday in 2011, and the number and percentage of Americans older than 65 will steadily increase for several years, the GAO said.

Crash avoidance technologies could mitigate the negative effects of drivers using cell phones or other distracting devices, but drivers using a portable touch-screen phone and examining a dashboard screen image at the same time could be further distracted. Such systems could also create complacency that could exacerbate dangers. Representatives of the automobile industry have said that consumer training in the use of new technologies could be key to maximizing safety benefits, the GAO stated.

Last year the National Institute of Standards and Technology said that two car collision warning systems it is testing have passed most, but not all, performance tests. According to NIST, the systems passed most of the more than 30 tests conducted this fall, but they had some problems detecting whether forward vehicles were in-lane or out-of-lane on curves or during lane changes. NIST also measured significant warning delays that resulted in test failures. Such problems are common in automotive crash warning systems that must operate in real time, at highway speeds, and use multiple low-cost sensors to measure complex three-dimensional scenes, NIST said.
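Neither the GAO nor NIST publishes the warning logic of the systems under test, but a simplified sketch of the kind of calculation such systems perform (time-to-collision against a warning threshold) helps explain why warning delays and in-lane/out-of-lane classification matter. The 2.5-second threshold, the function names, and the inputs below are illustrative assumptions, not part of any DOT or NIST specification.

# Simplified illustration of a forward collision warning decision.
# Real systems fuse radar/vision data and account for lane position,
# road curvature, and sensor latency -- exactly the areas where the
# NIST tests found problems.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if both vehicles hold their current speeds."""
    if closing_speed_mps <= 0:        # not closing on the lead vehicle
        return float("inf")
    return gap_m / closing_speed_mps

def should_warn(gap_m: float, own_speed_mps: float, lead_speed_mps: float,
                lead_in_lane: bool, threshold_s: float = 2.5) -> bool:
    if not lead_in_lane:              # misclassifying this is one failure mode
        return False
    ttc = time_to_collision(gap_m, own_speed_mps - lead_speed_mps)
    return ttc < threshold_s

# 30 m gap, ego at 25 m/s, lead at 15 m/s -> TTC = 3.0 s, no warning yet.
print(should_warn(30.0, 25.0, 15.0, lead_in_lane=True))   # False
print(should_warn(20.0, 25.0, 15.0, lead_in_lane=True))   # True (TTC = 2.0 s)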
It's obvious a lot more work needs to be done and likely some new laws enacted to ensure future driving safety.
<urn:uuid:1803ce6a-c996-4f6d-b4d8-dab1de1ec80d>
CC-MAIN-2017-04
http://www.networkworld.com/article/2346605/security/high-technology-trends-causing-auto-safety-conundrum.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00028-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950239
1,275
2.8125
3
Editor’s note: This article is part of a series examining issues related to evaluating and implementing big data analytics in business.

Most, if not all, big data applications achieve their performance and scalability through deployment on a collection of storage and computing resources bound together within a runtime environment. In essence, the ability to design, develop, and implement a big data application is directly dependent on an awareness of the architecture of the underlying computing platform, both from a hardware and, more importantly, from a software perspective. One commonality among the different appliances and frameworks is the adaptation of tools to leverage a combination of four key computing resources:

- Processing capability, often referred to as a CPU, processor, or node. Generally speaking, modern processing nodes often incorporate multiple cores, which are individual CPUs that share the node's memory and are managed and scheduled together, allowing multiple tasks to be run simultaneously; this is known as multithreading.
- Memory, which holds the data that the processing node is currently working on. Most single-node machines have a limit to the amount of memory.
- Storage, providing persistence of data - the place where datasets are loaded, and from which the data is loaded into memory to be processed.
- The network, which provides the "pipes" through which datasets are exchanged between different processing and storage nodes.

Because single-node computers are limited in their capacity, they cannot easily accommodate massive amounts of data. That is why high performance platforms are composed of collections of computers in which the massive amounts of data and requirements for processing can be distributed among a pool of resources.

A General Overview of High Performance Architecture

Most high performance platforms are created by connecting multiple nodes together via a variety of network topologies. Specialty appliances may differ in the specifics of the configurations, as do software appliances. However, the general architecture distinguishes the management of computing resources (and the corresponding allocation of tasks) from the management of the data across the network of storage nodes, as is seen in the figure below.

In this configuration, a master job manager oversees the pool of processing nodes, assigns tasks, and monitors the activity. At the same time, a storage manager oversees the data storage pool and distributes datasets across the collection of storage resources. While there is no a priori requirement that data and processing tasks be co-located, it is beneficial from a performance perspective to ensure that a thread processes data that is local to the node on which the thread executes, or that is stored on a node close to it. Reducing data access latency through co-location improves performance.

To get a better understanding of the layering and interactions within a big data platform, we will examine aspects of the Apache Hadoop software stack, since the architecture is published and open for review. Hadoop is essentially a collection of open source projects that are combined to enable a software-based big data appliance. We begin with a core aspect of Hadoop's utilities, upon which the next layer in the stack is propped: HDFS, the Hadoop Distributed File System.
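Before turning to HDFS, a small sketch makes the co-location point above concrete. This is not Hadoop's actual scheduler, whose logic is far more involved; the data structures and node names are assumptions made purely for illustration of locality-aware task assignment.

# Toy illustration of locality-aware task assignment: prefer a worker node
# that already stores the data block a task needs, and fall back to any
# free node otherwise.

block_locations = {          # block id -> nodes holding a replica
    "block-1": {"node-a", "node-b"},
    "block-2": {"node-b", "node-c"},
}
free_nodes = {"node-a", "node-c"}

def assign(block_id: str) -> str:
    local = block_locations.get(block_id, set()) & free_nodes
    chosen = next(iter(local)) if local else next(iter(free_nodes))
    free_nodes.discard(chosen)
    return chosen

print(assign("block-1"))   # node-a: data-local, no network transfer needed
print(assign("block-2"))   # node-c: data-local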
How HDFS Works

HDFS enables the storage of large files by distributing the data among a pool of data nodes. A single name node (sometimes referred to as NameNode) runs in a cluster, associated with one or more data nodes, and provides the management of a typical hierarchical file organization and namespace. The name node effectively coordinates the interaction with the distributed data nodes. A file created in HDFS appears to be a single file, even though HDFS blocks the file into "chunks" that are stored on individual data nodes.

The name node maintains metadata about each file as well as the history of changes to file metadata. That metadata includes an enumeration of the managed files, properties of the files and the file system, and the mapping of blocks to files at the data nodes. The data node itself does not manage any information about the logical HDFS file; rather, it treats each data block as a separate file and shares the critical information with the name node.

Once a file is created, data written to it is actually cached in a temporary file. When the amount of data in that temporary file is enough to fill a block in an HDFS file, the name node is alerted to transition that temporary file into a block that is committed to a permanent data node, which is also then incorporated into the file management scheme.

HDFS provides a level of fault tolerance through data replication. An application can specify the degree of replication (that is, the number of copies made) when a file is created. The name node also manages replication, attempting to optimize the marshaling and communication of replicated data in relation to the cluster's configuration and the corresponding efficient use of network bandwidth. This is increasingly important in larger environments consisting of multiple racks of data servers, since communication among nodes on the same rack is generally faster than between server nodes in different racks. HDFS attempts to maintain awareness of data node locations across the hierarchical configuration.

In essence, HDFS provides performance through distribution of data and fault tolerance through replication; the result is a level of robustness for reliable massive file storage. Enabling this level of reliability is facilitated through a number of key tasks for failure management, some of which are already deployed within HDFS while others are not currently implemented:

- Monitoring: There is continuous "heartbeat" communication from the data nodes to the name node. If a data node's heartbeat is not heard by the name node, the data node is considered to have failed and is no longer available. In this case, a replica is employed to replace the failed node, and a change is made to the replication scheme.
- Rebalancing: This is a process of automatically migrating blocks of data from one data node to another when there is free space, when there is an increased demand for the data and moving it may improve performance (such as moving from a traditional disk drive to a solid-state drive that is much faster or can accommodate increased numbers of simultaneous accesses), or when there is an increased need for replication in reaction to more frequent node failures.
- Managing integrity: HDFS uses checksums, which are effectively "digital signatures" associated with the actual data stored in a file (often calculated as a numerical function of the values within the bits of the files) that can be used to verify that the data stored corresponds to the data shared or received. When the checksum calculated for a retrieved block does not equal the stored checksum of that block, it is considered an integrity error. In that case, the requested block will need to be retrieved from a replica instead.
- Metadata replication: The metadata files are also subject to failure, and HDFS can be configured to maintain replicas of the corresponding metadata files to protect against corruption.
- Snapshots: This is incremental copying of data to establish a point in time to which the system can be rolled back. This is not currently supported.

These concepts map to specific internal protocols and services that HDFS uses to enable a large-scale data management file system that can run on commodity hardware components. The ability to use HDFS solely as a means for creating a scalable and expandable file system for maintaining rapid access to large datasets provides a reasonable value proposition from an Information Technology perspective: decreasing the cost of specialty large-scale storage systems, reliance on commodity components, the ability to deploy using cloud-based services, and even lowered system management costs.
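As a conceptual illustration of the integrity mechanism described above, the sketch below stores a checksum alongside each block replica, validates it on read, and falls back to another replica on a mismatch. It uses a plain MD5 digest and an in-memory dictionary for brevity; HDFS's actual checksum format and block protocol differ, so treat this purely as a sketch of the idea.

import hashlib

def checksum(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

stored_blocks = {}   # (block id, replica location) -> (data, checksum)

def write_block(block_id: str, replica: str, data: bytes) -> None:
    stored_blocks[(block_id, replica)] = (data, checksum(data))

def read_block(block_id: str, replicas: list) -> bytes:
    for replica in replicas:
        data, stored_sum = stored_blocks[(block_id, replica)]
        if checksum(data) == stored_sum:
            return data              # checksum matches: data is intact
        # integrity error: try the next replica instead
    raise IOError(f"all replicas of {block_id} failed checksum validation")

write_block("blk_0001", "node-a", b"some file contents")
write_block("blk_0001", "node-b", b"some file contents")
print(read_block("blk_0001", ["node-a", "node-b"]))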
<urn:uuid:bfc81b15-3cd8-4908-8df5-c794f14327e6>
CC-MAIN-2017-04
http://data-informed.com/understanding-the-big-data-stack-hadoops-distributed-file-system/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00358-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933555
1,653
3.015625
3
With new security threats cropping up every day, network managers are understandably protective of their computing assets. Enhanced security measures, however, can inflict significant hardships on legitimate users and can lead to frustration, productivity losses, and dangerous attempts at circumvention of restrictions. Equipping yourself with proper tools for connectivity can make your tasks easier while still maintaining network security and integrity.

One of the most valuable tools in the IT toolkit is Secure Shell (SSH). Using SSH as a replacement for Telnet is familiar to most users, but many are unaware of its other benefits. When properly configured, a server and client can connect and carry virtually any service over the SSH link. And, since SSH is inherently more secure than Telnet, FTP, or other unencrypted protocols, most network managers are accommodating to requests to open ports and allow SSH communications through the firewall. The following is a summary of the techniques that I have used to make connectivity through firewalls both effective and secure.

Setting up an SSH client

What you need to start:
- A Windows® host (referred to as the SSH client) and SSH client software (such as PuTTY).
- An SSH server (referred to as the SSH server) -- typically inside a firewall-protected network zone. The servers are usually based on AIX®, Linux®, or another UNIX® variant, but Windows-based SSH servers can also be used.
- Additional server processes (Server Message Block (SMB), Virtual Network Computing (VNC), and so forth), as appropriate.

Setting up an SSH server

Most current distributions of UNIX-type operating systems include some variant of the OpenSSH software. If your particular distribution did not come with it, you can check the OpenSSH site (see Resources) for binaries or source code. In most cases, installing the server is as simple as installing an RPM (Red Hat Package Manager) package and verifying the configuration settings. The exact installation method varies, depending upon the specifics of your environment. In practice, OpenSSH is installed by default in most modern UNIX systems.

Given its popularity, the following description uses the free SSH client, PuTTY, as a reference. The configuration is relatively generic, however, and the choice of clients is up to the user. The following configuration steps assume a familiarity with basic SSH connections and provide detail on establishing connections to tunneled services. Figure 1 illustrates the basic concepts of forwarding a basic VNC connection over an SSH tunnel.

Figure 1. Forwarding a basic VNC connection over an SSH tunnel

- Ensure that no services are running on the local port that corresponds to the port you will be connecting to on the remote server. In subsequent steps, you will use an SSH client to define a virtual service on your local client that corresponds to the services you would like to access on the remote systems; therefore, the local ports that correspond to the remote ports must be available. For example, if you will be attempting to access an SMB share on a remote system using tunneling, you must disable File and Printer Sharing for Microsoft® Networks on the local system. Similarly, if you intend to attach to a VNC server on a remote host, you must ensure that any VNC server sessions running on the local system are not on the same port as those on the remote system to which you are connecting.
- On the client workstation, create a standard SSH connection profile to connect to the remote host. The following example assumes that you are using the free PuTTY SSH client; however, the principle is the same regardless of the client you choose.

Figure 2. Alternate PuTTY configuration

- After entering the information in Figure 2 above, be sure to select the Session item from the Category tree and save the session for future use.
- On the client workstation, open the session created above. You will be prompted to confirm the addition of the server's key to your local key store the first time a connection is established between a client and server. Once you have reviewed and accepted the remote server's key, supply the user ID and password for the remote server. When you receive a command prompt on the remote system, your SSH connection and tunnel are ready for use.
- On the client workstation, launch the VNC viewer software, enter localhost:5900 in the VNC server field, as shown in Figure 3 below, and then click Connect. The connection window's appearance will vary depending upon the version of VNC in use; however, the principle is the same.

Figure 3. VNC options

- In a few seconds, you should see the VNC window. The systems are now conducting a VNC session through the SSH link. (An equivalent OpenSSH command-line invocation is sketched at the end of this article.)

Once you have established the basic connection detailed above, you can repeat the pattern to tunnel any service for which you can identify the port configuration. In addition, forwarding can be configured to access multiple services and additional servers within the secured zone using a single link. In the following example, the SSH client connects to the SSH server, just as in the previous example. In addition, the SSH client is configured to forward an additional port (139) to another remote server. Note that the remote server needs no special configuration or software for this configuration to function, and the configuration is independent of the operating system: the SSH function serves merely as a conduit for the communication. The SSH server configuration from the previous example is depicted below, with the new configuration items in blue. This allows the client to access several servers and services through the use of a single, secure port, simultaneously. Figure 4 illustrates how such a connection can be used.

Figure 4. SSH server configuration

Two of the more common ports are:
- Windows File Sharing: 139
- Windows Remote Desktop Protocol: 3389

This is not an exhaustive list; as noted above, most network services for which a port can be identified can be forwarded in this manner.

- Following the steps in the previous section, add an additional forwarded port to the PuTTY Configuration window, as depicted below. Figure 5 illustrates the definition of a tunnel for Windows file sharing.

Figure 5. Windows file sharing

- After defining the tunnel and saving the profile, open the SSH session and log in to Server 1.
- On the local Windows client, open a Windows Explorer session, enter \\localhost\sharename_on_Server2, and press Enter.
- After a few seconds, you should receive a password prompt requesting login credentials for the remote share. Enter the username and password for the share on Server 2.
- After a few seconds more, the window will refresh and present a listing of the files present on the share.

One of the more esoteric uses of SSH, X11 forwarding, allows the graphics portion of an application to be rendered on the SSH client while the logic executes on a remote server.
By using such a method, users can avoid the network overhead of forwarding an entire desktop over the link and receive only the relevant portions of the display. Figure 6 depicts the basic scenario for X11 forwarding.

Figure 6. X11 forwarding

What you need:
- One Windows host (referred to as the SSH client) with an X server and SSH client (such as Cygwin-X)
- One SSH server (referred to as the SSH server) with SSH server software installed and operating (*IX: OpenSSH)
- Any graphical server applications

To configure an X server on your local Windows workstation, access the Cygwin setup program. The default installation installs the Cygwin-X packages. Once the default installation completes, execute the following steps to enable forwarding of the GUI display to your local system.

- On the remote SSH server, find the sshd_config file. Typically, this file will reside in /etc/ssh/, but the location might vary depending upon your specific operating system or distribution.
- Edit sshd_config and find the line containing X11Forwarding, similar to the example below:

...
#AllowTcpForwarding yes
#GatewayPorts no
X11Forwarding yes
#X11DisplayOffset 10
#X11UseLocalhost yes
#PrintMotd yes
...

Ensure that the line is identical to the example above when you finish editing it. It should not be preceded by a comment mark (#), and its value should be set to Yes. If it is necessary to make changes to the sshd_config file, you must restart the SSH subsystem.

- On the local SSH client, find the ssh_config file. On systems configured with Cygwin-X, the file resides in /cygdrive/c/cygwin/etc/.
- Edit ssh_config and find the following line:

...
#Host *
#ForwardAgent no
ForwardX11 yes
#RhostsRSAAuthentication no
#RSAAuthentication yes
...

Ensure that the line is identical to the example above when you finish editing it. It should not be preceded by a comment mark (#), and its value should be set to Yes. If it is necessary to make changes to the ssh_config file, you must restart any active SSH sessions for the changes to go into effect.

- On the local SSH client, start the X server by executing <cygwin home>\usr\X11R6\bin\startxwin.bat. After a few seconds, a command shell window appears.
- In the command shell window, initiate a standard SSH connection to the SSH server that you have configured in the previous steps. The command takes the form ssh email@example.com.
- Once you have successfully logged in and have access to the remote machine, you can then execute any application with a graphical component, and the graphical portion will display on the SSH client's display. To verify quickly that the link has been correctly established, type xclock to launch the standard X Window clock utility. You should immediately see a window with an analog clock face appear, as shown in Figure 7.

Figure 7: The xclock and the DB2® Control Center

Although you cannot eliminate the inconveniences associated with increased security measures, you can reduce their impact on productivity. Through the use of commonly available or open source tools, such as SSH, PuTTY, and Cygwin, users can create simple, secure connections to almost any resource they need to access. By encouraging the use of the techniques described above, network managers can ensure that users comply with security requirements while permitting them a higher level of control over their daily activities.

- Deploying SSH on AIX (developerWorks, September 2002): Take this tutorial to learn how to deploy SSH on AIX.
- The AIX and UNIX developerWorks zone provides a wealth of information relating to all aspects of AIX systems administration and expanding your UNIX skills.
- developerWorks technical events and webcasts: Stay current with developerWorks technical events and webcasts.
- AIX 5L Wiki: Visit this collaborative environment for technical information related to AIX.
- Podcasts: Tune in and catch up with IBM technical experts.
- Browse the technology bookstore for books on these and other technical topics.

Get products and technologies
- OpenSSH for AIX: Get the latest version of OpenSSH for AIX.
- Download the following products:
- IBM trial software: Build your next development project with software for download directly from developerWorks.
- Participate in the AIX and UNIX forums:
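As promised in the walkthrough above, here is the OpenSSH command-line equivalent of the PuTTY tunnels, launched from a short Python sketch. The host names, share name, and the choice of subprocess are illustrative assumptions; the PuTTY configuration described in the article achieves the same result.

import subprocess

# Launch the OpenSSH equivalent of the tunnels described above: forward
# local port 5900 to the VNC server and local port 139 to a second host
# behind the firewall. "jump.example.com" and "server2" are placeholder
# names -- substitute your own SSH server and target hosts.

tunnels = [
    "-L", "5900:localhost:5900",      # VNC on the SSH server itself
    "-L", "139:server2:139",          # SMB share on another internal host
]

# -N: do not run a remote command, just hold the tunnel open.
subprocess.run(["ssh", "-N", *tunnels, "user@jump.example.com"], check=True)

# With the tunnel up, point the VNC viewer at localhost:5900 and Windows
# Explorer at \\localhost\sharename, exactly as in the walkthrough.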
<urn:uuid:9bd08a1d-978d-4af8-836d-4d7c250cc010>
CC-MAIN-2017-04
http://www.ibm.com/developerworks/aix/library/au-tunnelingssh/?ca=dgr-lnxw01TUNNELSSH
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00084-ip-10-171-10-70.ec2.internal.warc.gz
en
0.887521
2,382
2.953125
3
If you went up to a person on the street 50 years ago and told them, “In 50 years, we’ll be flying to work on great flaming jetpacks,” they might have believed you. After all, we went to the moon 47 years ago. If you told them, instead, that in 50 years’ time we will carry in our pockets a shared global library of all the world’s information, they would have roared with laughter. Who could carry the weight of a book containing all the world’s wisdom?

Yet today, more than three billion people now have access to the Internet, which is not only a shared global library, but also a global community, meeting space, marketplace, and more. It’s an all-around awesome invention that is often taken for granted. How did it get here? Who built it? It’s a mystery to many, much like the Pyramids of Giza. Who should we thank for the state of routing and switching? Here, we take a look at a few of the technological heroes that made the magic of the Internet possible.

Telephone and the Telegraph

Yes. Routing and switching begins with the telephone. It makes sense, right? Telephone switchboard? A fellow by the name of Tivadar Puskas invented the first telephone exchange system in 1877 while working for Thomas Edison. The telephone took a good few decades to mature into a viable public service. But once it became widespread, the groundwork for much of the internet was complete before the internet even existed. Thank you, Tivadar. You couldn’t have had any idea that your first switchboard, built from “carriage bolts, handles from teapot lids, and bustle wire,” would lay the groundwork for the internet.

Strictly speaking, the internet isn’t one big network. It’s a network of networks. It enables you to send a message along one network, through another, and another, until your message arrives at the right address, regardless of location. Network sorcery, in other words. During the rise of the telephone age, the problem of sending calls to the right addresses was solved by human operators who manually switched connections all day. It wasn’t an ideal solution by any means. With the birth of computers, the problem was more easily solved by giving each computer an IP address, which is like a phone number for a computer, and then letting machines carry the message along the right path for a connection. We now call these machines routers and switches, and the first real router ever created was a special computer called the IMP (Interface Message Processor), which sold for just under $10,000 in 1969, when work began on the Internet’s experimental predecessor, ARPANET. The IMP was known as a “gateway,” and it was designed to route messages between disparate networks across the US. It was big, expensive, and offered basic functionality, but the concept of decentralized routing via small, semi-intelligent demons — er, devices — placed between computer networks has worked pretty well for the internet so far. Thank you to the entire IMP team.

The Domain Name System

Remember what you had for lunch yesterday? Don’t worry, most people don’t. The Domain Name System is a giant table of records existing on decentralized servers around the world that act as a mental crutch for our fallible human memory. DNS works by changing the words you type in your browser (e.g., google.com) into the IP address of the computer hosting that website or resource. Simple. But brilliant. We could thank many people for DNS, but let’s give big thanks to Jon Postel, an internet pioneer, who originally manually assigned addresses to ARPANET computers.
Yes. He had a text document with every single domain name in the world. Think about that.

Bonus: You should take some time to read more about Jon Postel. His fingerprints are all over the history of the internet.

Thunk up by computer scientists Donald Davies and Paul Baran in the 60s, packet switching is a clever method of routing messages that we use every day on the internet. With packet switching, a message to be sent over the Internet is chopped up into smaller packets of data, each with their own header (containing info on where the packet is going) and payload (a part of the entire message). Once the message is split into packets, each packet can merrily go on its way to its final destination, taking whichever network route is operational, or fastest. The fortunate twist is that most packets don’t take the same route, so the network load is distributed across multiple networks.

So, why do we care? Because if there were no packet switching, traffic flowing through the Internet would slow to a standstill. Thank you, Paul and Donald.

TCP/IP is truly the meat-and-potatoes of what makes networking work so well. These are the core protocols that lay down the laws of communication between computers, network hardware, and software on the Internet. Serious thought and theory went into the making of TCP/IP back when the ARPANET was being developed, and we can thank Vint Cerf and Bob Kahn for authoring TCP/IP in 1974, along with DARPA’s Information Processing Technology Office (IPTO). Long live IPv4!

Thank you, Vint Cerf. You’re still looking good in a vest. And thank you, Bob Kahn, for leading the IPTO to victory.

If you’re looking for the perfect gift for yourself (or your Internet history-minded friends), pick up Katie Hafner’s Where Wizards Stay Up Late from your favorite bookseller. It goes into the lives and times of Jon Postel, Vint Cerf, Robert Kahn, Paul Baran, and the rest of the Internet pioneers.

We’re thankful for all the technology that brought us to modern-day routers and switches! Show your gratitude to the pioneers of this amazing technology by sneaking in a little training in their honor during your Thanksgiving holiday. Not a CBT Nuggets subscriber? Start your free week now.
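To ground the packet-switching description above, here is a toy sketch that chops a message into header-plus-payload packets and reassembles them regardless of arrival order. It is purely illustrative; real IP packets carry far richer headers (addresses, TTL, checksums), and the destination address below is just a placeholder.

# Toy illustration of packet switching: split a message into small packets,
# each with a header (sequence number, destination) and a payload, then
# reassemble them even if they arrive out of order.

def packetize(message: str, dest: str, size: int = 8):
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"seq": n, "dest": dest, "payload": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(packets) -> str:
    return "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize("Thanks, Paul and Donald!", dest="198.51.100.7")
packets.reverse()                      # simulate out-of-order arrival
print(reassemble(packets))             # -> "Thanks, Paul and Donald!"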
<urn:uuid:282d084f-d82e-49ee-9007-d0f68407649d>
CC-MAIN-2017-04
https://blog.cbtnuggets.com/2016/11/technology-were-thankful-for-routers-and-switches/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00478-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946629
1,313
3.109375
3
Traditional cybersecurity approaches revolve around building a defensive posture. Cybercriminals come up with new, inventive ways to break into networks, and cybersecurity professionals scramble to stop them. But what if you flipped this approach on its head? What if, rather than a defensive approach, you went on the offensive? Is that even possible? It is – with threat intelligence. It involves applying data capture and analytics techniques in the context of identifying and preventing cyber threats.

Threat intelligence will play a bigger and bigger role in security strategies as risks continue to multiply, partly as a result of the Internet of Things, which aims to interconnect all objects that can be connected. As data capture and analysis methods become more sophisticated, cybersecurity models will move beyond prevention to include a critical predictive component. These models will be a digital manifestation of the maxim that the "best defense is a good offense."

Current Threat Intelligence

Currently, threat intelligence consists primarily of subscription-based information feeds provided by security vendors. Some threat intelligence feeds provide a generalist view of current threats while others drill down into specific areas of risk and groups of threat actors. The better feeds are updated frequently, put information in context, and provide clues on how to avoid the threats. They tell you about threat actors' tactics and techniques, and the formats they exploit to deliver malicious payloads. The information is collected from endpoints, malware-detection engines, and various other sources. Effective threat intelligence collection goes beyond signature-based malware detection by looking for code traits, patterns, behavior, and anomalies that hint at the presence of malicious code for which no malware signatures exist yet. In so doing, threat intelligence adds a critical layer of protection by helping to identify new malware variants, the websites that house and distribute them, and the methods employed by attackers to hack into systems and spread infections.

Down the Pike

The future of threat intelligence is predictive. It will largely be about determining the probability of threats and where they originate. Just as organizations are starting to leverage data analytics for predictive maintenance of equipment and systems, security professionals will do the same to predict where cyber attacks will come from and who is likely to execute them. Call it the digital version of "know thy enemy." The more information you can gather about those intent on harming you, and the methods they employ, the better you can prepare to defeat them.

"Organizations with a sophisticated approach to cybersecurity are no longer satisfied with locking the doors after the robbery has been committed," consulting firm Deloitte explained in a recent report, Analytics Trends 2016: The Next Evolution. "Organizations such as these are beginning to employ more predictive approaches to threat intelligence and monitoring – in short, going on the offensive."

What does that mean? Monitoring IRC (internet relay chat) and social media "chatter" by shady groups and individuals suspected of cybercriminal activity. Analyzing past hacks and breaches to build predictive models of impending new threats. Regularly testing corporate defenses to prevent cybercriminals from finding security holes before you do. Predictive threat intelligence will not eliminate cybercrime, but it will certainly help prevent a lot of attacks.
And more importantly, it will turn cybersecurity strategies from a primarily defensive endeavor to more of an offensive effort. Read more about Reducing Dwell Time with Behavioral Analytics
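The article does not show what consuming a threat intelligence feed looks like in practice, so here is a minimal, hypothetical sketch of matching observed network events against feed indicators and ranking the hits. The feed format, field names, IP addresses, and scores are invented for illustration and do not reflect any vendor's actual schema.

# Minimal sketch of applying a threat intelligence feed: match observed
# events (e.g., outbound connections) against known bad indicators and
# rank the hits by the feed's confidence score. Real feeds (STIX/TAXII,
# vendor APIs) are richer and updated continuously.

feed = {
    "203.0.113.7":   {"actor": "group-A", "confidence": 0.9},
    "198.51.100.22": {"actor": "group-B", "confidence": 0.6},
}

observed_connections = ["192.0.2.10", "203.0.113.7", "198.51.100.22"]

hits = [{"dest": ip, **feed[ip]} for ip in observed_connections if ip in feed]

for hit in sorted(hits, key=lambda h: h["confidence"], reverse=True):
    print(f"ALERT dest={hit['dest']} actor={hit['actor']} confidence={hit['confidence']}")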
<urn:uuid:1e501b9f-45df-40da-a9f6-a0af5af5ed49>
CC-MAIN-2017-04
https://blog.iboss.com/executives/the-future-of-cyber-threat-management
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00386-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927422
677
2.5625
3
The launch of the Johns Hopkins University Center for Advanced Modeling in the Social, Behavioral and Health Sciences brings the power of computer-based simulation to bear on a host of real-world hardships, such as disease, economic turbulence and catastrophic disasters. This is the subject of a feature piece at The Gazette, the Johns Hopkins University newspaper.

Heading the project is Joshua Epstein, a professor in the Department of Emergency Medicine and a leader in the burgeoning field of "agent-based" simulation modeling. This modeling approach uses virtual worlds populated by "agents" that act like real people. The agents are programmed to respond to a variety of real or imagined threats, such as a disease outbreak or a chemical spill. By shedding light on these what-if scenarios, the models help scientists predict and better prepare for critical real-life situations.

Agent-based models, or ABMs, are similar to what is seen in war games. Epstein explains that the simulation models are "highly visual and spatially realistic," with the agents moving much like real people do, from home to work to school, even travelling long distances. The application is similar to a video game, but instead of providing entertainment value, the goal here is to address specific scientific challenges.

What makes this research ground-breaking is that it draws from both the social sciences and the physical/computational sciences. For example, Epstein developed 3D video simulations to demonstrate the effect a sudden toxic chemical cloud would have on a crowded urban center like Los Angeles or New Orleans. This modeling method is the first to combine fluid dynamics (airborne chemical dispersion) and agent behavior.

Epstein, who holds a doctorate from the Massachusetts Institute of Technology, outlines his vision for the center: "I see this as a place where the top professors and researchers from around the country, indeed the world, will want to come and work on collaborative projects, participate in symposia or develop entirely novel lines of research. I want it to be an intensely collaborative environment…which welcomes students and faculty to come and brainstorm, collaborate on papers, attend seminars, come up with brilliant new ideas. I see all kinds of innovative, exciting work coming out of this center, work that pushes important interdisciplinary research forward in a really dynamic way."

The Center for Advanced Modeling, or CAM, will be a multi-disciplinary meeting ground, where the best minds from a diversity of fields — emergency medicine, disaster health, social behavior, supercomputing and economics — gather in pursuit of "practical, novel scientific solutions to the many complex medical, social and institutional problems that society faces today." CAM's distinguished partner institutions include the Santa Fe Institute (where Epstein is an external professor), Pittsburgh National Center for Supercomputing Applications, Virginia Bioinformatics Institute at Virginia Tech, the National Center for Computational Engineering at Tennessee and ETH Zurich.
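The center's models are far richer than anything that fits in a few lines, but a toy agent-based simulation conveys the basic mechanic the article describes: simple agents following simple rules, producing population-level outcomes such as an epidemic curve. The grid size, transmission probability, and recovery time below are arbitrary illustrative choices, not values from any CAM model.

import random

random.seed(1)
SIZE, P_TRANSMIT, RECOVERY_STEPS = 20, 0.3, 10

class Agent:
    def __init__(self):
        self.x, self.y = random.randrange(SIZE), random.randrange(SIZE)
        self.state = "S"          # S(usceptible), I(nfected), R(ecovered)
        self.sick_for = 0

    def step(self):
        # Random walk on a wrap-around grid; infected agents eventually recover.
        self.x = (self.x + random.choice((-1, 0, 1))) % SIZE
        self.y = (self.y + random.choice((-1, 0, 1))) % SIZE
        if self.state == "I":
            self.sick_for += 1
            if self.sick_for >= RECOVERY_STEPS:
                self.state = "R"

agents = [Agent() for _ in range(200)]
agents[0].state = "I"             # seed one infection

for _ in range(50):
    for a in agents:
        a.step()
    cells = {}
    for a in agents:
        cells.setdefault((a.x, a.y), []).append(a)
    for group in cells.values():  # transmission between agents sharing a cell
        if any(a.state == "I" for a in group):
            for a in group:
                if a.state == "S" and random.random() < P_TRANSMIT:
                    a.state = "I"

print({s: sum(a.state == s for a in agents) for s in "SIR"})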
<urn:uuid:688547cd-1a07-4fba-855d-0069a0fc3047>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/01/31/game-like_simulations_boost_disaster_preparedness/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00202-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918395
599
2.734375
3
This course teaches core process modeling skills, project development methodology, implementation fundamentals, and delivery best practice patterns that improve the speed and quality of process definition and implementation efforts. This course is designed for users who have purchased any of the IBM BPM software packages, including basic, standard, and advanced.

In this course, students use the IBM BPM Process Designer component to create a business process definition (BPD) from business requirements that are identified during process analysis. Students learn how to work within the parameters of the BPM life cycle methodology to maximize the functionality of IBM BPM and to follow project development best practices, such as meeting the target playback goal and validating model functionality.

This course begins with an overview of BPM and process modeling. Students learn how to make team collaboration more efficient by enabling all team members to use standard process model elements and notation, which makes expressing and interpreting business requirements consistent throughout the BPM life cycle. The course also teaches students how to build an agile and flexible shared process model that can be understood by key business stakeholders, implemented by developers, and adjusted to accommodate process changes.

The course continues with the implementation of the process model, providing an overview of the architecture of IBM BPM and describing the use of process applications and toolkits within the tool. Students create variables, implement gateways, and enable swim lanes to demonstrate process flow on their diagrams. Students also build customized Web 2.0 user interfaces (Coaches) to enable business and process data flow throughout a process model. The course emphasizes the concepts of reuse, ease of maintenance, and the use of best practices.

The course uses an interactive learning environment, with hands-on demonstrations, class activities to reinforce concepts and check understanding, and labs that enable hands-on experience with BPM tasks and skills. This course is designed to be collaborative, and students can work in teams to perform class activities.
<urn:uuid:90bf79b2-bd32-49fb-9800-e7ed7379fdae>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/course/118848/process-modeling-and-implementation-with-ibm-business-process-manager/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00046-ip-10-171-10-70.ec2.internal.warc.gz
en
0.919454
380
2.734375
3
A close look at vulnerabilities in about 15,000 websites found that 86% had at least one serious hole that hackers could exploit, and "content spoofing" was the most prevalent vulnerability, identified in over half of the sites, according to WhiteHat Security's annual study published today.

"'Content spoofing' is a way to get a website to display content from the attacker," says Jeremiah Grossman, CTO at WhiteHat, an IT security vendor. A criminal might do this to steal sensitive customer information or simply to embarrass the owners of a website. In any event, in content spoofing the fake content is not actually on the website as it would be in a web defacement, but simply appears to be there, Grossman points out.

The Open Web Application Security Project (OWASP) group says content spoofing is also sometimes referred to as "content injection" or "virtual defacement," and it's an attack made possible by an injection vulnerability in a web application that does not properly handle user-supplied data.

The content spoofing attack can supply content to a web application that is reflected back to the user, who is presented with a modified page under the context of the trusted domain, according to OWASP. It is said to be similar to a cross-site scripting attack but uses other techniques to modify the page for malicious reasons.

The annual WhiteHat Website Security Statistics Report examined vulnerabilities found over the course of 2012 in the 15,000 websites of 650 companies and government agencies for which it provides web application vulnerability assessments. These range from financial services, manufacturing, technology, entertainment, and energy to media and government.

The top 15 vulnerability classes for websites are said to be cross-site scripting; information leakage; content spoofing; cross-site request forgery; brute force; insufficient transport layer protection; insufficient authorization; SQL injection; session fixation; fingerprinting; URL redirector abuse; directory indexing; abuse of functionality; predictable resource location; and HTTP response splitting.

Grossman says there were a few unexpected findings related to how quickly organizations fixed vulnerabilities when taking into account how much they had invested in application security training for their programmers. Emphasis on training was correlated with 40% fewer website vulnerabilities and a 59% faster rate of resolving them than in organizations that didn't do training. But the actual remediation rate to close all the holes related to the vulnerabilities was 12% less than in organizations without training.

Grossman says WhiteHat's analysis indicates that the poorest rates of remediation overall are associated with organizations where regulatory compliance requirements are the No. 1 driver for resolving vulnerabilities. If a vulnerability wasn't tied to compliance, it was ignored. "When organizations' website vulnerabilities go unresolved, 'compliance' was cited as the #1 reason, closely followed by 'risk reduction,'" according to the WhiteHat study. The study also found the best remediation rates occurred when customers or partners demanded it.
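To make the top-ranked vulnerability concrete before turning to the remaining findings, here is a small hypothetical example of the reflection flaw that enables content spoofing, together with the standard fix. It is a generic sketch, not code from any site WhiteHat assessed.

import html

# Hypothetical illustration of the injection flaw behind content spoofing:
# user-supplied input is reflected into the page unmodified, so an attacker
# can make the trusted site display arbitrary content. Escaping the input
# before rendering is the usual fix.

def render_vulnerable(user_message: str) -> str:
    # BAD: the attacker's markup becomes part of the trusted page.
    return f"<p>Notice: {user_message}</p>"

def render_safe(user_message: str) -> str:
    # Better: encode the input so it is displayed as text, not interpreted.
    return f"<p>Notice: {html.escape(user_message)}</p>"

attack = '<h1>Site closed - send payment details to attacker.example</h1>'
print(render_vulnerable(attack))   # spoofed content appears as real page content
print(render_safe(attack))         # markup is neutralized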
Other findings in the 2012 website vulnerability study show:
- 85% of organizations use some variety of application security testing in pre-production website environments
- 55% have a Web Application Firewall in some state of deployment
- In the event of a website data or system breach, 79% said the "Security Department" would be accountable
- 23% experienced a data or system breach as a result of an application-layer vulnerability

Ellen Messmer is senior editor at Network World, an IDG publication and website, where she covers news and technology trends related to information security. Twitter: MessmerE. E-mail: email@example.com.
<urn:uuid:0999571d-f23c-447b-923b-4c377cbeab85>
CC-MAIN-2017-04
http://www.networkworld.com/article/2165851/security/-content-spoofing--a-major-website-vulnerability--study-finds.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00532-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925527
832
2.859375
3
More and more often, the terms data virtualization, data federation, and data integration are used. Unfortunately, these terms have never been defined properly. And, as can be expected, this leads to confusing discussions, misuse of the terms, vendors using the terms in whatever way benefits them, and so on. Some regard them as synonyms, others see them as overlapping concepts, and there are those who prefer to see them as opposites. Barry Devlin also referred to this discussion in his recent blog published at BeyeNetwork.com: Virtualization, Federation, EII and other non-synonyms.

It looks as if everyone assigns their own personal meaning to these terms. This meaning is probably based on personal background, experience with certain products, and on how he or she interprets the words virtualization and federation. Wikipedia is not helping us either with its definition: "Data virtualization is a method of data integration and is often referred to as data federation, enterprise information integration (EII) or data services." This would imply that data virtualization and data federation are the same.

All this confusion is, as we all understand, not very productive. We need clear definitions. This article, therefore, proposes definitions for these three related terms. I am interested in hearing your reaction, so if you have any comments, please let me know. Let's see if, together, we can come up with generally accepted definitions.

Virtualization is not a new concept in the IT industry. It all started years ago when virtual memory was introduced in the 1960s using a technique called paging. Memory virtualization was used to simulate more memory than was physically available in a machine. Nowadays, almost everything can be virtualized, including processors, storage, networks, and operating systems. In general, virtualization means that applications can use a resource without concern for where it resides, what the technical interface is, how it has been implemented, the platform it uses, and how much of it is available.

Based on the definitions of those other forms of virtualization, we propose the following definition for data virtualization: Data virtualization is the process of offering data consumers a data access interface that hides the technical aspects of stored data, such as location, storage structure, API, access language, and storage technology.

Data virtualization provides an abstraction layer that data consumers can use to access data in a consistent manner. A data consumer can be any application retrieving or manipulating data, such as a reporting or data entry application. This abstraction layer hides all the technical aspects of data storage. The applications don't have to know where all the data has been stored physically, where the database servers run, what the source API and database language is, and so on.

Technically, data virtualization can be implemented in many different ways. Here are a few examples:
- With a federation server, multiple data stores can be made to look like one. The applications will see one large data store, while in fact the data is stored in several data stores. More on data federation next.
- An enterprise service bus (ESB) can be used to develop a layer of services that allow access to data. The applications invoking those services will not know where the data is stored, what the original source interface is, how the data is stored, and what its storage structure is. They will only see, for example, a SOAP or REST interface. In this case, the ESB is the abstraction layer.
- Placing data stores in the cloud is also a form of data virtualization. To access a data store, the applications will see the cloud API, but they have no idea where the data itself resides. Whether the data is stored and managed locally or whether it's stored and managed remotely is transparent.
- In a way, building up a virtual database in memory with data loaded from data stored in physical databases can also be regarded as data virtualization. The storage structure, API, and location of the real data are transparent to the application accessing the in-memory database. In the business intelligence (BI) industry, this is now referred to as in-memory analytics.
- Organizations could also develop their own software-based abstraction layer that hides where and how the data is stored.

In most cases, if the term federation is used, it refers to combining autonomously operating objects. For example, states can be federated to form one country. If we apply this common explanation to data federation, it means combining autonomous data stores to form one large data store. Therefore, we propose the following definition: Data federation is a form of data virtualization where the data stored in a heterogeneous set of autonomous data stores is made accessible to data consumers as one integrated data store by using on-demand data integration.

This definition is based on the following concepts:
- Data virtualization: Data federation is a form of data virtualization. Note that not all forms of data virtualization imply data federation. For example, if an organization wants to virtualize the database of one application, no need exists for data federation. But data federation always results in data virtualization.
- Heterogeneous set of data stores: Data federation should make it possible to bring data together from data stores using different storage structures, different access languages, and different APIs. An application using data federation should be able to access different types of database servers and files with various formats; it should be able to integrate data from all those data sources; it should offer features for transforming the data; and it should allow the applications and tools to access the data through various APIs and languages.
- Autonomous data stores: Data stores accessed by data federation are able to operate independently; in other words, they can be used outside the scope of data federation.
- One integrated data store: Regardless of how and where data is stored, it should be presented as one integrated data set. This implies that data federation involves transformation, cleansing, and possibly even enrichment of data.
- On-demand integration: This refers to when the data from a heterogeneous set of data stores is integrated. With data federation, integration takes place on the fly, not in batch. When data consumers ask for data, only then is data accessed and integrated. So the data is not stored in an integrated way, but remains in its original location and format.

The third term we want to define is data integration. According to SearchCRM, integration (from the Latin word integer, meaning whole or entire) generally means combining parts so that they work together or form a whole. If data from different data sources is brought together, we talk about data integration: Data integration is the process of combining data from a heterogeneous set of data stores to create one unified view of all that data.
Data integration involves joining data, transforming data values, enriching data, and cleansing data values. What this definition of data integration doesnít enforce is how the integration takes place. For example, it could be that original data is copied from its source data stores, transformed and cleansed, and subsequently stored in another data store. This is the approach taken when using ETL tools. Another solution would be if the integration takes place live. For example, a federation server would do most of the integration work on demand. Another approach is that the source data stores are modified in such a way that data is transformed and cleansed. Itís like changing the sources themselves in such a way that almost no transformations and cleansing are required anymore when data is brought together. A term that is used in relationship to the three above is enterprise information integration (EII). I have one remark on this term. There is an essential difference between data and information. Data is what is stored and processed in our systems. Users determine whether the data they receive is information or not. Conclusion, we donít integrate information, we integrate data, which could lead to information. Therefore, the term should have been enterprise data integration. That said, EII is a synonym for data integration. We summarize with a few closing remarks. Data virtualization might not need data integration. It depends on the number of data sources being accessed. Data federation always requires data integration. For data integration, data federation is just one style of integrating data. Hopefully, these definitions are acceptable to most of you, and as indicated, I appreciate any comments to improve them. Recent articles by Rick van der Lans
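To make the "on-demand integration" aspect concrete, here is a minimal sketch in Python of a federation-style unified view over two autonomous stores. It is not any vendor's federation server, and the table layout, column names, and sample records are invented for the example; the only point is that each store keeps its own format and location, and integration happens at the moment a consumer asks for data.

import csv
import io
import sqlite3

# Autonomous store 1: a relational database (kept in memory for this sketch).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT, country TEXT)")
db.executemany("INSERT INTO customers VALUES (?, ?, ?)",
               [(1, "Alda", "NL"), (2, "Brix", "US")])

# Autonomous store 2: a flat file with a different structure and access method.
csv_data = io.StringIO("customer_id,total_order_value\n1,250.00\n2,99.50\n")

def unified_view():
    # On-demand integration: nothing is copied or stored in advance; both
    # stores are read and joined only when a consumer calls this function.
    csv_data.seek(0)
    orders = {int(row["customer_id"]): float(row["total_order_value"])
              for row in csv.DictReader(csv_data)}
    for cid, name, country in db.execute("SELECT id, name, country FROM customers"):
        yield {"id": cid, "name": name, "country": country,
               "total_order_value": orders.get(cid)}

for record in unified_view():
    print(record)

A real federation server adds query pushdown, transformation rules, caching, and security on top of this, but the essential property is the same: the consumer sees one integrated data set while the sources remain autonomous.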
<urn:uuid:504e905a-a509-4846-b6ba-5032ba5c01ca>
CC-MAIN-2017-04
http://www.b-eye-network.com/view/14815
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00258-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923457
1,739
2.828125
3
Grazed from MIT Technology Review. Author: Christopher Mims. Which are precisely the technologies used to build apps that run in the Web browser. This means that Google, with its Chrome OS and Chromebook laptops, and Microsoft are now both concentrating on making it easy for Web developers to create for their platforms. Competition with Apple makes for strange bedfellows, indeed. Based on everything that Microsoft has said so far, it does not appear that the new class of apps that will run on Windows 8 will be true Web apps, as is the case with Google's Chromebook. And the unique interface Microsoft recently showed off means that porting them directly to the Web would be unrealistic. But apps built on the same foundation as web apps mean that Redmond may in the future rely increasingly on the giant pool of Web and app developers who are now coding up a storm for Android and iOS. (This has many loyal Microsoft developers freaking out.) Maybe it's too much to ask for a code-once, deploy-everywhere future. But a code-once, tweak-slightly-to-port-to-every-platform future? If Google and Microsoft can get their payment systems and app stores in order, it's hard to see how any other app platform could compete with their combined might. Economics alone could drive coders into an inherently cloud-centric development environment built on open standards. Does this mean the death of native apps? No—but it does mean the rise of an entirely new species, an adaptable and hardy breed that can live anywhere, and is as much at home in the OS as it is in the browser. As Microsoft developer Andres Aguiar put it: To which I'd add, Google pioneered this "solution," in the form of a Web app store. Apple, meanwhile, won't budge until it's forced to.
<urn:uuid:fde8ace8-3219-4859-af50-fbadc3d364cc>
CC-MAIN-2017-04
http://www.cloudcow.com/content/windows-8-proves-web-apps-are-future-computing
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00102-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952712
378
2.671875
3
Dissecting the CompTIA A+ Exam

Editor’s Note: This post is outdated. For an updated guide see Jed Reisner’s A+ 220-801 and 220-802 guide.

Because of the huge amount of ground it covers, A+ requires candidates to pass two exams—one on core PC hardware, the other on operating systems (OS) technologies. It’s designed to identify entry-level PC technicians with at least six months of experience who possess the skills and knowledge necessary to install, configure, diagnose and maintain PCs, and who also understand basic networking. Given the ubiquitous use and importance of PC technology in IT nowadays, it’s no surprise that A+ certification is one of the most popular credentials around. With more than 600,000 individuals now holding this certification, it’s well known and well recognized around the world.

CompTIA last updated the objectives for the A+ exam in November 2003. As with other exam updates, part of the motivation was to make these exams more current in terms of the platforms, hardware components and technologies they cover. Despite a fair amount of new technical content on both exams, CompTIA states, “changes incorporated in the 2003 upgrade were not major.” That said, the A+ FAQ does state that the exams now cover “…basic network and Internet connectivity, dial-up, DSL and cable…” and “…the latest memory, bus, peripherals, operating systems (Windows Me and XP) and wireless technologies in addition to technologies already represented in the 2001 objectives” (such as Windows 9x and 2000).

Core Hardware Exam Objectives

A survey of working PC professionals drove the creation of the A+ Core Hardware exam and its content. These responses were used to identify and weight the knowledge domains covered in this exam, as follows:
- Installation, Configuration and Upgrading: This begins with the ability to recognize and identify key system hardware components. It also includes understanding what’s involved in adding or removing replaceable modules in desktop systems, including motherboards, storage devices, power supplies, cooling system components, CPUs, memory, display devices, input devices and adapter cards; likewise for portable components, including storage devices, power sources, memory, input devices, PC cards and mini PCI devices, docking stations and port replicators, LCD panels and wireless networking devices. Candidates must also understand low-level device configuration, including IRQs, DMA and I/O addresses, and how to work with these settings when installing or configuring devices. This part of the exam also includes coverage of names and characteristics of peripheral ports, connectors, cables and so forth, including visual identification, as well as working with IDE and SCSI devices. Issues related to hardware optimization or improvements are also covered, as well as planning and steps involved in performing hardware upgrades. Much of this part of the exam is scenario-based.
- Diagnosing and Troubleshooting: This domain seeks to establish candidates’ knowledge and skills regarding common troubleshooting tasks related to all PC system modules and interfaces, including I/O ports and cables, motherboards, peripherals, PC case, storage devices, cooling systems and so forth. It also covers basic troubleshooting tools, techniques and procedures, including customer support tasks like documenting user environments, dealing with symptoms or error codes and understanding the problem context.
- Preventive Maintenance: This includes coverage of cleaning techniques, hard disk maintenance and power regulation/conditioning equipment. It also deals with standard safety measures and precautions when working on PCs, such as avoiding problems related to electrostatic discharge (ESD) and safety procedures to avoid high-voltage hazards when working on power supplies, CRTs or other high-voltage equipment. Covers environmental concerns related to PC repair and disposal, including how to handle batteries, CRTs, chemical solvents and cans and material safety data sheets (MSDSs).
- Motherboards/Processors/Memory: This requires candidates to identify and understand popular CPU chips in terms of voltage, speeds, cache, sockets or slots, and voltage regulator modules (VRMs). It also requires them to identify and understand memory types, form factors and characteristics, including those associated with Extended Data Output RAM (EDO RAM), Dynamic RAM (DRAM), Static RAM (SRAM), Video RAM (VRAM), SDRAM, DDR and RAMBUS, as well as various form factors and operational characteristics (parity versus non-parity, error correction and so forth). Motherboard topics include types of motherboards, onboard components, memory, cache, bus types and characteristics (ISA, PCI, AGP), and chipsets, as well as CMOS settings, configuration and behavior.
- Printers: Requires understanding print technologies (laser, inkjet, dot matrix and so forth) and printer interfaces, plus common options and upgrades. Also requires knowledge of common printer problems and related resolution or repair techniques, including printer drivers, firmware updates, print/output errors, memory or configuration problems and so on. Also includes issues related to safety precautions, print job management, preventive maintenance and consumables.
- Basic Networking: This covers basic network medium types (coax, twisted pair, fiber-optic, wireless) and connectors. It also includes understanding of basic networking concepts including addressing, bandwidth, status indicators, protocols, half- versus full-duplex transmission, networking models and so forth. Internet connectivity topics covered include LAN, DSL, cable modem, ISDN, dial-up, satellite and wireless connections, in terms of communications technologies, bandwidth and connection types.

OS Technologies Exam Objectives

As with the Core Hardware exam, a survey of working PC professionals drove the creation of the A+ OS Technologies exam and its content. These responses were used to identify and weight the knowledge domains covered, as follows:
- OS Fundamentals: This is where one must distinguish among Windows desktops included on the exam: Windows 9x/Me, NT 4.0 Professional, 2000 Professional and XP (Home and Professional). This includes coverage of the registry, file systems and virtual memory, as well as key interfaces (Windows Explorer, control panel, consoles, system tools, command line, task bar, start menu, device management and more). Candidates must also understand names and functions of key system files, command line functions and utilities, disk partitions, file systems and directory structures, as well as OS utilities (disk, system and file management tools in particular).
- Installation, Configuration and Upgrading: This means knowing how to install and make operational all the various Windows versions mentioned, including hardware compatibility, OS installation options and types, disk preparation, setup utilities, device driver configuration and standard install troubleshooting. It also means understanding how to upgrade from one Windows version to another, including valid upgrade paths, upgrade startup utility, hardware and application compatibility, OS service packs, patches and updates, and installing additional Windows components. For all versions candidates must also be familiar with boot sequences and boot methods, including emergency boot or repair disk preparation and boot modes. Likewise, candidates must know how to install devices, including working with device drivers (manual and PnP installation and configuration), working with permissions and installing additional Windows components. OS optimization and tuning topics covered include disk defragmentation, virtual memory management, tuning files and buffers, working with various cache settings and managing temporary files.
- Diagnosing and Troubleshooting: This covers a wide range of topics, from error codes rel
<urn:uuid:0c7a1b60-0a9c-4bdf-95a3-5be5a12b848e>
CC-MAIN-2017-04
http://certmag.com/dissecting-the-comptia-a-exam/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00404-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915986
1,539
2.875
3
One of our Web Security Analysts — Chu Kian — came across a relatively old threat this week. It was during his day-to-day work that he encountered a VBS malware, Virus.VBS.Confi. It's not something new (detection was added in 2005), but it still works and it can still infect unpatched systems that browse websites with the malware code present. Visiting an infected website with the malicious code will prompt for a Java virtual machine component installation, shown below: On one of our test machines, after selecting to download, the sample displayed a script error. Luckily, Windows Script Debugger was open to catch scripting errors, and so up came the actual decoded script of the malware. Inspecting the decoded script shows that it will try to save the downloaded file as KERNEL.DLL or KERNEL32.DLL (detected as Virus.VBS.Confi) depending on where WSCRIPT.EXE is located. This downloaded file is also referenced in the startup registry key as well as in the malware's shell-spawning routine, which is achieved by modifying the registry key used for opening DLL files. It can also infect files that have extensions of HTM, HTML, ASP, PHP, and JSP. Taking a look at the infected website and viewing the page source, we saw that the site is actually embedded with the malware code. Perhaps the website owner is unaware of this, which is why it's still there. (We've now sent abuse messages regarding this.) Having come across one site, we looked further using Google. You can easily discover more websites that contain the same malware code. Here are some sample search results: So even though most of today's threats live and die within a few days, some old script malware still exists and can still infect unwary travelers.
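Site owners who suspect this kind of injection can do a rough first pass over their document root and flag pages that contain the indicators described above. The sketch below is only an illustration, not F-Secure's detection logic: the indicator strings are examples taken from the behavior described in this post, and a real scanner would rely on proper signatures and a current antivirus engine.

import os

# File types the malware is described as infecting, plus example indicator strings.
EXTENSIONS = (".htm", ".html", ".asp", ".php", ".jsp")
INDICATORS = ("kernel.dll", "kernel32.dll", "wscript.shell")  # illustrative only

def scan(root):
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.lower().endswith(EXTENSIONS):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as handle:
                    text = handle.read().lower()
            except OSError:
                continue
            hits = [marker for marker in INDICATORS if marker in text]
            if hits:
                print(path, "contains suspicious markers:", hits)

scan(".")  # point this at the web server's document root

Flagged files still need manual review, since legitimate pages can mention these strings; the scan only narrows down where to look.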
<urn:uuid:11919c64-be41-479b-858d-d35835757d83>
CC-MAIN-2017-04
https://www.f-secure.com/weblog/archives/00001518.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00460-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948347
388
2.671875
3
“I think that it is just a matter of making the offenders’ behavior and personality a part of the process,” Clausius said. “If you go overseas and see the intelligence being done, they look at a lot of offender behavior and offender profiles.” Douglass agreed. He expects nontraditional information for predictive policing will come from more study of social behaviors. It’ll just take some time to make it a reality in the U.S. For example, Douglass said it took years for law enforcement to realize that 80 percent of homicides are done by people who know the victim. That revelation was 25 years ago. Only recently, police officers have started to realize that many homicides in big cities are connected to others in the same vicinity going back a decade. Douglass said that in Kansas City, investigators were able to trace back a string of 40 or 50 murders over a 15-year period to one specific incident. “Many of these homicides are located in a geographical area amongst a group of people who are simply retaliating back and forth in a culture where they don’t tell the police what is going on,” Douglass said. “That becomes the remedy, and consequently all these homicides are related.” “I think that the social scientist will be able to help us determine social patterns that we will be able to take advantage of,” he added. Social networks and virtual environments are another source of unexploited data that experts believe will impact predictive policing in the future. Platforms such as Twitter and Facebook are based on the concept of sharing details — information that law enforcement is hoping it can capitalize on. Leonard Scott, former police chief of Corpus Christi, Texas, thinks the data gleaned from observing social media will fundamentally alter the way commanders assign patrols to certain areas. Instead of officers being dispatched to a particular location in response to an event, the information taken from virtual existences will be used to assign a “flex unit” that will move into an area within a half mile of a particular location and watch for various disturbances. Those units are an extension of predictive policing based on social media data streams. Clausius agreed, but said mining social media will be more difficult as time goes on. Many people are locking down their social media accounts so that data isn’t as readily accessible, but she says law enforcement still must figure out how to tap deeper into the information that social networks can provide. One might assume that criminals would be smart enough to vary where they spend their time, particularly if cops are homing in on new sources of information that may pinpoint the likelihood of a crime occurring in an area. But Colleen McCue, senior director of social science and quantitative methods for GeoEye, a geospatial services firm, said it’s unlikely. McCue, author of Data Mining and Predictive Analysis: Intelligence Gathering and Crime Analysis, explained that humans are aware of a vast majority of their behaviors, but location preferences tend to be subtle and unconscious in many cases. For example, at a grocery store, next to the bananas, you might see a display of Nilla Wafer cookies, which go well with the fruit. McCue described that type of product placement as a method of optimizing decision-making. Criminals have the same type of decision process that is largely unconscious. “Even if they are aware of what they are doing, it is very difficult to bypass some of those unconscious decision processes,” McCue said. 
“It is very difficult to engage in truly random behavior, and it is that fact that makes the whole crime analyst thing work.” Virtual gaming is another arena Clausius believes will be a gold mine for data in the next decade or two. From gambling sites to independent virtual identities to trade money for crime, Clausius thinks cyberspace is ripe for the picking when it comes to data to improve predictive policing efforts. “I don’t think law enforcement and public safety have even tapped into that as far as a data source or intelligence,” she said. “There are all kinds of games
<urn:uuid:92321e9c-bd78-4710-b927-43aa57fd7186>
CC-MAIN-2017-04
http://www.govtech.com/Behavioral-Data-and-the-Future-of-Predictive-Policing.html?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00000-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958154
860
2.546875
3
Anyone in business today is familiar with seeing timelines to help visualize data. Sales figures are plotted over time to show rates of growth or signs of trouble. Wall Street practically lives for comparing a company's results on timelines with its emphasis on comparing numbers quarter-to-quarter and year-over-year. Product development schedules wouldn't exist without them. And when we review market forecasts from researchers like Gartner and IDC, it's impossible to imagine their reports without seeing data plotted along timelines. They're everywhere. In their splendid book, Cartographies of Time: A History of the Timeline, Daniel Rosenberg and Anthony Grafton reveal that people have been visualizing the passing of time in many creative ways through tables and illustrations since the ancient Greeks. However, the timeline itself is a relatively new phenomenon. The model we follow today, the authors claim, stems from 1765 with A Chart of Biography created by Joseph Priestley, who, among other accomplishments, was among the first to discover oxygen in its gaseous state. Timelines have always been popular as ways to display the march of history. Sebastian Adams's 5-meter-long Synchronological Chart from 1871 sold well to the public and was hung on many a (long) wall. Another popular world history timeline was the 1.5-meter-long Histomap published in 1931 by John Sparks, which is still in print. But timelines were not destined to be just lovely tools for history buffs. They were quickly adopted by business. For example, in the 19th century railroads popularized them by using timelines as the basis for train schedules. And at the turn of the 20th century, Marconi Telegraph published timelines of ships crossing the ocean and their approximate positions during their voyages to "depict the shifting wireless communication network linking the North Atlantic." This is an excellent book for anyone involved in visualizing data. It shows that humans have a preference, even a passion, for understanding their world through their eyes. While timelines are only one way to visualize business data, they are so ubiquitous that it is almost impossible to conceive of our understanding a company or market without them. That's why this book is not just enjoyable and beautiful, but important as well.
<urn:uuid:027fff46-1057-404c-8dd2-9994e78a5cc9>
CC-MAIN-2017-04
http://www.itworld.com/article/2722108/big-data/humans-like-seeing-data-arranged-chronologically--and-always-have.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00000-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950662
576
2.90625
3
Milky Way in High Resolution / February 7, 2012
A team of scientists led by the Max Planck Institute for Astrophysics (MPA) has produced the highest-resolution map of the Milky Way's magnetic field by using more than 41,000 measurements from 26 projects, as reported by Gizmag.com. Each of the map’s 41,330 individual data points represents a Faraday depth measurement, which is a value of magnetic field strength along a particular line of sight. “Polarized light from radio sources in space is observed for the Faraday effect, which describes the rotation of the plane of polarization,” Gizmag.com reported. “The degree and direction of rotation are determined, and from this the magnetic field strength in a given direction is established.” Above is a view of the Milky Way by Bala Sivakumar from our point of view; below is the actual map, whose red areas indicate the parts of sky where the magnetic field points toward the observer and blue areas indicate parts of sky where the magnetic field points away from the observer.
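For readers wondering how a rotation of the polarization plane turns into a field measurement, the standard textbook relation (not quoted in the article, so stated here as background rather than as part of the MPA result) is

\Delta\theta = \mathrm{RM}\,\lambda^{2}, \qquad \mathrm{RM} \propto \int n_{e}\, B_{\parallel}\, \mathrm{d}l,

where \Delta\theta is the change in polarization angle, \lambda the observing wavelength, n_{e} the free-electron density, and B_{\parallel} the magnetic field component along the line of sight. Fitting the measured angle against \lambda^{2} at several radio frequencies yields the rotation measure (the Faraday depth mapped here), and its sign tells whether the field points toward or away from the observer, which is exactly the red/blue coding described above.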
<urn:uuid:f174a3c5-b95d-4b95-b1a2-c885c3bedfb3>
CC-MAIN-2017-04
http://www.govtech.com/photos/Photo-of-the-Week-Milky-Way-in-High-Resolution-02042012.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00121-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935673
228
3.46875
3
State government agency Land and Property Information has launched a new initiative that lets users explore NSW geospatial datasets from within Google Earth. LPI's Google Earth-based project lets users enable information overlays within Google Earth. The project is dubbed "NSW Globe". "The tool allows users to find out more about their property or local area, and provides access to historical information including aerial photographs of Sydney from the 1940s, as well as flood maps from places like Bourke, Moree and Wagga," NSW minister for finance and services Andrew Constance said in a statement. NSW Globe includes medium and high resolution aerial and satellite imagery from government and private sources, terrain data, historic images, boundary information, including suburbs and electorate information, roads and rail routes, and addresses. LPI is also intending to launch another project that will "enable access to full GIS capabilities without the need to be a GIS professional". "That will allow complex business problems to be visualised via spatial information," a media statement said. In November, the NSW government introduced a policy of "open by default" for government datasets.
<urn:uuid:31d18a8e-53bf-4fbb-af61-e92e2299f9d5>
CC-MAIN-2017-04
http://www.computerworld.com.au/article/537306/google_earth_adds_new_dimension_nsw_data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00029-ip-10-171-10-70.ec2.internal.warc.gz
en
0.928845
230
2.546875
3
Understanding the Threat of Insider Misuse

This according to the Computer Economics study Insider Misuse of Computing Resources, which analyzes 14 forms of insider misuse in detail. The study shows a number of ways that violation of an organization's acceptable use policy may result in harm. Making insiders aware of these threats is an important part of mitigating the risk of insider misuse, as we discuss later in the full study. A sister report, Malicious Insider Threats, addresses threats where the insider intends to harm the organization or acts in a purposeful way that threatens the organization's interests.

There is sometimes a fine line between malicious intent and mere misuse. For example, an employee downloading music or video files to a desktop computer would not usually be doing so with intent to harm the organization. But if the files being downloaded are pirated, the employee is putting the organization at risk. Furthermore, if the employee is using a peer-to-peer file-sharing program to download music, his behavior could inadvertently give outsiders access to confidential files on the computer. The employee may not intend to harm the organization, but his actions put the organization at risk. Nevertheless, we find it useful to separate threats arising from insider misuse from threats posed by insiders with malicious intent. Furthermore, many of the countermeasures against insider misuse are also useful to counter malicious insiders.

How Serious Is It?

Before delving into our analysis of each threat, it is useful to examine them in total. For this analysis, we look at all types of insider misuse and rank them according to the perceived seriousness of each threat. In our survey, we asked respondents to rate the seriousness of each category of insider misuse as no threat, a minor threat, a moderate threat, or a major threat. We recognize that the word seriousness has no formal definition in risk management. Typically, risk management professionals quantify risks by their severity (potential harm) and the likelihood of experiencing an incident within a given time frame. However, because many forms of insider misuse are not readily quantifiable, we use the word seriousness to gauge how concerned IT security professionals are with each threat. We believe the seriousness level provides a useful measure of the perceived importance of each threat, while being mindful that perception and reality are not always consistent. In assessing the seriousness of each category, we asked respondents to consider all forms of potential damage to the organization, such as effect on system availability or integrity, network performance, legal liability, disclosure of confidential information, loss of worker productivity, and damage to the organization's reputation. In addition, we asked respondents to evaluate these threats without consideration of any countermeasures their organizations were taking to deter misuse.

Interestingly, the 14 categories of insider misuse fall into two distinct groups. The first eight categories form one group, where at least 40% of our respondents view each as a major threat. The first group includes:
- Unauthorized copying of files to portable storage devices;
- Downloading unauthorized software;
- Use of unauthorized P2P file-sharing programs;
- Remote access programs;
- Rogue wireless access points;
- Downloading of unauthorized media; and
- Use of personal computing devices for business purposes.

What do these forms of misuse have in common? They all pose a threat primarily in terms of loss of information, security breaches, and legal liability. For example, unauthorized copying of files is a threat as it may lead to loss of confidential information. An employee using his own laptop for business purposes may inadvertently take confidential information home at night or retain this information when he leaves the organization. Downloading unauthorized software or using P2P programs may introduce malware into the organization, leading to theft of information or loss of system availability. It is not difficult to envision the seriousness of the threats that these forms of misuse pose to the organization.

There is a significant gap between this first group and the bottom six categories. Only 25% or fewer of our respondents considered these as major threats. This group includes:
- Unauthorized blogging or participating in message boards concerning the organization's business;
- Instant messaging using personal accounts;
- Non-work-related Web browsing; and
- Using the organization's email system for personal matters.

The forms of misuse in this second group are perceived as less serious threats than those in the first group. The perceived threat in the second group is primarily loss of worker productivity. One may argue that some of these forms of misuse also lead to loss of confidential information. For example, an insider blogging about the organization's business without authorization could disclose trade secrets. Or an insider using a personal instant messaging account through the corporate network could introduce malware into the organization. Nevertheless, our respondents do not view these forms of misuse as being as serious as those in the first group. Whether these forms of misuse should be treated more seriously is a subject for analysis in the full report.

Sample of Key Findings

The points below summarize some of the findings of the full study:
- Unauthorized copying of files to portable storage devices is the most serious threat and a major source of information leakage from organizations. The majority of organizations categorize it as a major threat, yet approximately one-third make no attempt to deter such activity.
- Downloading unauthorized software is a close second in perceived threat level, and nearly 90% of organizations have policies forbidding this activity.
- Unauthorized P2P file-sharing programs are considered a major threat by more than half of organizations, but one-quarter make no mention of P2P programs in their acceptable use policies.
- Use of unauthorized remote access programs and services rounds out the top four perceived threats, with 17% reporting widespread violations of policy.
- Downloading of unauthorized media content such as video and music is not judged as serious as the preceding four threats. The majority of organizations nevertheless give verbal warnings to insiders who violate organizational policy against unauthorized downloading.
- Unauthorized authorship of blogs concerning the organization's business is not addressed in the policies of most organizations. Similarly, most organizations make no attempt to deter insiders from making unauthorized postings to message boards concerning the organization's business.
- More than one-third of organizations have no policy concerning instant messaging using personal accounts.
- The majority of organizations view use of personal email accounts from within the corporate network to be a moderate or major threat, but 29% either have no policy or take no action when policy violations are detected.
- More than half of organizations consider non-work-related Web browsing to be a moderate or major threat, but one-third explicitly allow insiders to browse the Web from within the corporate network. This may be because the majority of companies have specific controls in place to monitor or block inappropriate web browsing, though there are significant variations in the types of sites restricted.
- More than half of the study respondents view use of business email for personal matters as a moderate or major threat, but one-third do not address this behavior in their acceptable use policies or make any attempt to deter it. Nearly half of all organizations report widespread violations of corporate policy.
- To deter or detect insider misuse, most organizations have email monitoring policies in place, and the majority of organizations examine insider computer files or monitor insider Internet traffic when misuse is suspected. Few log insider keystrokes, however.
<urn:uuid:0388408a-442e-46dd-bac7-5591c4749842>
CC-MAIN-2017-04
http://www.cioupdate.com/print/research/article.php/11052_3811896_2/Understanding-the-Threat-of-Insider-Misuse.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00515-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937534
1,457
2.5625
3
The technical breakthrough set an internet speed record too fast to be of use with present-day computers, but could open the way for scientists to share and ship massive databases around the world, researchers said. In a recent trial, a team of scientists at the Stanford Linear Accelerator Center, California Institute of Technology, Dutch research institute NIKHEF, and the University of Amsterdam sent the equivalent of four hours of DVD movies nearly 7,000 miles across fiber-optic lines in less than a minute. The uncompressed data sped along at 923 megabits per second for 58 seconds from Sunnyvale, California, to Amsterdam via Chicago during the test. Findings from the trial may be applied in networks over the next one to two years for scientists working in the data-rich field of particle physics, said Les Cottrell, assistant director of Stanford Linear Accelerator Center's Computing Services. "People will no longer have to ship large planeloads of packages around the world," Cottrell said. "It brings to people's attention that the way we do science today and the way we conduct business could change radically," he said. "Scientists will be able to really collaborate without ever having to leave their homes." Cottrell said researchers are conducting further trials in a bid to set even higher transmission speeds. For their recent trial, the researchers set up their network with personal computers in Sunnyvale and Amsterdam running the Linux operating system and connected locally to network routers at one gigabit per second. Additionally, routers in Sunnyvale, Chicago and Amsterdam were connected to each other with 10-gigabit fiber-optic links. The cost of the trial was about $2.2m, showing the investment needed to create such a network is within the reach of many businesses, though such a network's capacity is likely more than what most businesses require, Cottrell said. "It shows you can do this today with standard off-the-shelf components and the best of today's networks," Cottrell said. "We didn't have to do any magic to do it."
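As a rough back-of-the-envelope check on those figures (our own arithmetic, using a typical DVD bit rate rather than numbers from the trial report): 923 Mbit/s sustained for 58 seconds moves about 923 × 58 = 53,534 Mbit, or roughly 6.7 GB. Four hours of video at a typical standard-definition DVD rate of about 4 Mbit/s comes to around 4 × 14,400 s = 57,600 Mbit, or roughly 7.2 GB, so the "four hours of DVD movies in under a minute" description is consistent with the quoted throughput.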
<urn:uuid:a8228791-b3f2-46f6-a651-92287d21c81d>
CC-MAIN-2017-04
http://www.computerweekly.com/news/2240049830/Stanford-researchers-set-internet-speed-record
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00203-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955923
460
2.5625
3
The rising intensity and sophisticated nature of cyber-attacks has created a hostile and precarious environment for businesses across all industries. Malware has evolved from large-scale massive attacks to include Targeted Attacks and Advanced Persistent Threats that cannot be stopped by antivirus alone. To be successful, Enterprise IT Security systems must implement a number of different techniques, including those described below. A Condo Protego security specialist will help you choose the right security strategy and the right technologies – the ones most appropriate for your organization, as one size does not fit all.

Once identified, many attacks have specific signatures that are used to detect and mitigate a threat before it is allowed to take any action on the targeted endpoint device.

The concept of sandboxing involves taking an untrusted application and allowing it to run in a very limited environment. The application is allowed to run and perform its function without access to the complete system or to other locally running services.

Host Intrusion Detection Systems (HIDS) and Host Intrusion Prevention Systems (HIPS) work hand in hand with signatures; these systems can first scan a specific resource for a recognizable threat signature and, alongside this, pass it through a heuristic analyzer/engine that looks for odd behaviors that are not expected from the resource. The major distinction between detection and prevention is that a HIDS will detect and alert a user/administrator to the potential threat but not perform any further automatic action, whereas a HIPS has a mechanism for automatically mitigating the detected threat.

The concept of a firewall is rather simple: don’t allow unexpected traffic into a device. For many endpoints it is rare to offer a service or to expect traffic (above layer 2) without first initiating the connection; because of this, it is common for a device to lock down all inbound network ports and only allow inbound traffic if the device initiated the connection first. It is the function of the firewall to perform this locking down and to keep track of ongoing sessions, ensuring that allowed traffic is permitted without disrupting the user experience while unpermitted traffic is blocked.

There can be times when a specific site or file is labeled as a threat but still needs to be accessed. In this situation a whitelist can be used to automatically permit traffic from that specific site or allow a specific file to run. On the opposite end, there can be times when a specific site or file is not listed as a threat, but it is considered a threat by an organization. In these situations a blacklist can be used to specifically disallow traffic from the threat location or disallow the ability to run a specific file.

A rootkit is a tool that is used by an attacker to take control of part or all of a device. There are several types of rootkits, but as with viruses their threat level can range from almost no real threat to the loss of complete control of a device, giving the attacker the equivalent of root/administrative access.

Data Execution Prevention (DEP) operates by only allowing code to run from memory regions marked as executable, thus blocking threats that exploit the non-executable (data) parts of memory.
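To make the signature and whitelist/blacklist ideas concrete, here is a small illustrative sketch in Python. It is not Condo Protego's engine or any vendor's product logic; the hash value and list entries are placeholders invented for the example, and real products combine far richer signature formats, reputation feeds, sandboxing, and behavioral analysis.

import hashlib

# Placeholder policy data; a real deployment would pull these from managed feeds.
KNOWN_BAD_SHA256 = {"0" * 64}                 # placeholder hash, not a real signature
WHITELIST = {"c:/tools/approved_backup.exe"}  # explicitly allowed files
BLACKLIST = {"c:/temp/cracktool.exe"}         # explicitly disallowed files

def verdict(path):
    # Return "allow", "block", or "inspect" for a file, checking the explicit
    # lists first and then a simple hash-based signature match.
    norm = path.lower()
    if norm in WHITELIST:
        return "allow"
    if norm in BLACKLIST:
        return "block"
    try:
        with open(path, "rb") as handle:
            digest = hashlib.sha256(handle.read()).hexdigest()
    except OSError:
        return "inspect"   # unreadable file: leave it to other controls
    if digest in KNOWN_BAD_SHA256:
        return "block"     # signature hit
    return "inspect"       # unknown: a candidate for sandboxing and heuristics

print(verdict("c:/temp/cracktool.exe"))

The "inspect" outcome is where the sandboxing, HIDS/HIPS heuristics, and behavioral analysis described above would take over; signatures and static lists alone cannot classify everything.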
<urn:uuid:9e873c91-0385-4dce-a821-366ca1cead57>
CC-MAIN-2017-04
https://www.condoprotego.com/datacenter-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00415-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940684
665
2.546875
3