As soon as new concerns emerged that cell phones might pose a radiation risk to users, scammers were sharpening their online skills to take advantage of the situation. That's why the Federal Trade Commission today told cell phone users to avoid products that supposedly "shield" users from cell phone emissions.

According to the FTC, there is no scientific proof that so-called shields significantly reduce exposure to cell phone emissions. In fact, products that cover only part of the phone, such as the earpiece, are totally ineffective because the entire phone emits electromagnetic waves. By interfering with the phone's signal, phony shields may cause it to draw even more power and possibly emit more radiation, the FTC said.

Health studies on any relationship between cell phone emissions and health problems are ongoing. But for consumers who want to limit their exposure, the FTC offered these tips:

- Use an earpiece or the speakerphone feature.
- Consider texting more, and keep calls brief.
- Wait for a good signal. When you have a weak signal, your phone works harder and emits more radiation. Phones emit more radiation when transmitting than when receiving, so tilt the phone away from your head when you're talking.
- Before you buy a phone, research its specific absorption rate (SAR), which tells how much radiation the body absorbs while using the phone. Different phones emit different amounts of radiation. In the U.S., a phone's SAR cannot exceed 1.6 watts per kilogram. The Federal Communications Commission has SAR information for cell phones produced and marketed within the last two years, accessible using the phone's FCC ID number (usually found on the phone's case) and the FCC's ID search form.

The FTC has gone after such radiation shield scammers in the past.
E-skills blames an uninspiring curriculum for putting young people off the subject early on. The organisation also says "misperceptions" contribute to a lack of interest, with students "assuming that IT is concerned with computers not people, is dull and repetitive, and is not well paid".

Margaret Sambell, director of strategy at e-skills, said: "We believe that a radical review of the technology curriculum in schools is essential. In order to compete in the technology-intensive globalised economy, we need an inspiring curriculum in schools that attracts increasing numbers of talented students into technology-related degrees and careers." She added that the curriculum needs to be rebuilt with input from employers and universities.

It is hoped the new IT diploma, developed in conjunction with employers, will encourage more students to take IT. The number of students taking computing A-level has plummeted 50% in the last five years, and just 5,068 pupils took the course this year, 10% fewer than in 2007. GCSE IT suffered a 14% decline compared to 2007, with 85,599 students sitting the exam. Universities have experienced similar problems, with 50% fewer students taking IT-related degrees than in 2003.
The new technology could hugely benefit the development of delivery drones. A birdlike drone that can perch on an electricity line has been developed that could revolutionise the way in which unmanned aerial vehicles are powered. The single-motor glider, created by scientists at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), could lead to UAVs recharging their batteries using the magnetic fields emitted by power lines.

The device has a complex control system that automatically directs it to slow down, tip its wings, and hook onto a line, even in moderate wind conditions. Previous versions required wall-mounted cameras and a separate computer, but CSAIL's latest iteration has on-board sensors and electronics that can plan and execute manoeuvres in real time.

PhD student Joe Moore said that when his team first thought about how to improve UAV agility, they decided it would be helpful to take cues from birds. They spent hours researching eagles' and pigeons' ability to stall, a complex manoeuvre that involves flaring their wings, angling their bodies, maintaining high velocity, and accurately judging the trajectory needed to perch. Creating a computer model to execute a stall manoeuvre has typically been computationally difficult. As described in a 2010 MIT News article, the angles needed to pull it off result in airflow over the wings that is difficult to predict, which is why engineers have designed conventional planes to land the way they do: a long descent, gradual braking, and a mile-long runway.

Moore said: "It's challenging to design a control system that can slow down a fixed-wing aircraft enough to land on a perch. Our strategy accomplishes this and can do so in outdoor environments using only on-board sensors."
January 04, 2012

The Japanese government collaborated with Fujitsu to create a virus which detects malware and collects info on the hackers. A virus to fight a virus? I see a few problems with this approach:

- As we all know, most malware and attacks are distributed through non-involved third parties. Obviously the "fight back" mechanism is going to affect these bystanders rather than the actual attackers. There are of course tools that can be developed to try to track the actual source of an attack, but I don't see a reason to distribute them as a virus at end-points rather than take a honeypot approach. I remember that back in the late 90s there was a trend of "fighting back", mainly trying to automatically break into the computer that sent an attack (or allegedly sent an attack) and take it down (or DDoS it). It quickly turned out to be a disaster in terms of going after the wrong people.
- Deliberately introducing viral code into end-points is one of those things that will only end in tears. Any misconfiguration or vulnerability in the "protection" code will allow attackers to efficiently introduce their own code into every end-point in the organization.
At the Hot Chips 22 conference this week at Stanford University, IBM engineers shed some light on the interconnect hub chip that connects the Power7 nodes in their PERCS supercomputing system. PERCS (Productive, Easy-to-use, Reliable Computing System) is IBM's contribution to DARPA's High Productivity Computing Systems (HPCS) program, whose goal is to deliver highly productive multi-petaflop supercomputing systems for government and industry.

Although both IBM's and Cray's HPCS designs rely on general-purpose processors (Power7 in the case of IBM, Opteron in the case of Cray), the hub controllers are proprietary devices that turn these machines into elite supercomputers. Likewise, SGI's Altix UV hub is the secret sauce that makes the shared memory capability on the company's new UV machines possible.

The IBM PERCS hub module contains 48 10Gbps optical links and delivers more than 1.1 terabytes per second of bandwidth. A hub connects to each Power7 quad-chip module (QCM), with each Power7 drawer consisting of 8 QCMs and 8 hubs. Presumably this is the same setup going into the PERCS-class Blue Waters system at NCSA. Rick Merritt covered the IBM presentation at Hot Chips 22 for EE Times and has a nice writeup, along with a video interview of Baba Arimilli, the chief architect of the hub chip.
The Wi-Fi industry seems dominated by discussions of the ever-increasing bandwidth capabilities and peak speeds brought by the latest product offerings based on 802.11ac. But while industry marketing touts gigabit-capable peak speeds, the underlying factors affecting WLAN performance have changed little. 802.11ac does bring modest gains in speed through higher-order modulation with 256-QAM, but the practical limitations of its use greatly reduce its benefit. The bulk of recent improvements in peak speeds are not due to some magical advancement in RF capabilities that grants us new bandwidth or capacity; they stem from the more mundane fact that we are simply using the spectral resources we have in a different, re-arranged fashion through wider channel widths.

With wider channel widths we are effectively "robbing Peter to pay Paul." That is, we are stealing spectrum from neighboring APs in order to increase the potential peak speed of any single AP. This arrangement can work well in consumer, home, and rural applications where AP density is low enough that wider channels let us utilize dormant spectrum we weren't using before. However, in enterprise environments where all available unlicensed spectrum is already being utilized, we have no such luxury as dormant spectrum. Whether wider channel widths make sense depends on a number of factors, ultimately boiling down to the resulting effect on medium contention. Medium contention is the true driver of the success or failure of a WLAN, and we must understand its effect on WLAN performance in order to design and optimize our networks.

A Framework for Network Performance

Let's begin by providing a frame of reference for network telecommunications performance in general. The two largest factors in network performance are bandwidth and latency (also referred to as delay). The two are inextricably tied together. We increase bandwidth through improvements that allow higher speeds (through lower packet serialization delay) and more data in transit at once (for example, leveraging lower end-to-end delay with larger TCP window sizes). We reduce latency by minimizing geographic delay, serialization delay, and contention delay (as Martin Geddes has expertly explained). Stated another way, we increase bandwidth by reducing the amount of time it takes to send bits from one point to another.

[Figure: Network Sources of Loss and Delay, courtesy of Martin Geddes]

We are at a point where we've squeezed out most of the geographic delay we can, since communications are transmitted at near the speed of light. We have also reduced serialization delay (and improved bandwidth) by enormous amounts, to the point at which there is little gain left to be realized. We can see an example of this with 802.11ac: the use of 256-QAM over a highly variable and lossy wireless link is extraordinary, but the practical use of this higher serialization rate (what Wi-Fi engineers refer to as the modulation rate) is limited to a range of a few meters from the access point. Very little additional gain can be realized in this area.

[Figure: Reducing Serialization Delay, courtesy of Martin Geddes]

So what is left to improve network performance is the reduction of variable contention delay.

Factors Affecting WLAN Contention

In order to optimize WLAN performance we need a thorough understanding of the factors affecting medium contention.
This requires Wi-Fi engineers to focus on evaluating airtime demand by clients, optimizing cell density (clients per radio), using all available spectrum, and carefully designing frequency re-use. Factors of Wi-Fi contention include:

- Airtime Demand – the amount of time that each client or AP requires in order to transmit the data required by one or more applications. This is largely a function of AP and client capabilities, application throughput and packetization characteristics, and the resulting spectral efficiency.
- Cell Density – the number of transmitters within a Wi-Fi contention domain (frequency or channel) and their airtime demand (probability of transmission) affect the frame error and retransmission rate. The goal is to optimize cell density to efficiently utilize AP and channel resources without overloading a Wi-Fi cell and causing significant contention-induced performance degradation.
- Spectrum Inventory – the number and width of Wi-Fi channels available to segment users into different contention domains so they avoid sharing airtime and capacity. This one is pretty simple to understand: the more spectrum we have, the more we can segment users, co-locate APs if necessary, and increase aggregate WLAN capacity. A more nuanced examination includes analysis of the trade-offs between the number of channels and channel width to optimize WLAN performance and capacity.
- Frequency Re-Use – the ability to effectively re-use Wi-Fi channels to avoid co-channel interference (CCI), another form of contention-induced performance degradation.

We use capacity planning to model airtime demand, provide the appropriate quantity of access points to optimize cell density (clients per radio) and prevent contention-induced performance degradation, and perform what-if scenario analysis related to spectral efficiency (such as channel width permutations). This is the foundation of the Revolution Wi-Fi Capacity Planner tool. We use RF planning to leverage our spectrum inventory into a design that provides optimal frequency re-use, allowing APs to co-exist without causing co-channel interference. This includes channel and transmit power planning, AP placement, and appropriate antenna selection to focus signal propagation. Finally, we must integrate capacity and RF planning in an iterative design approach to achieve a final WLAN design that provides sufficient coverage and capacity for each unique network environment. In the next article, I'll dive deeper into the mechanics of client airtime demand in order to better understand WLAN capacity planning; the sketch below gives a first taste of that arithmetic.
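To give a rough feel for airtime-demand estimation, here is a minimal back-of-the-envelope sketch in Python. All of the client numbers and the single mac_efficiency fudge factor are illustrative assumptions, not measurements, and a real capacity planner (such as the Revolution Wi-Fi tool mentioned above) models far more detail.

```python
# Back-of-the-envelope airtime demand for one Wi-Fi channel.
# All figures below are assumed for illustration only.

def airtime_fraction(throughput_mbps, phy_rate_mbps, mac_efficiency=0.6):
    """Fraction of channel airtime a client needs for its traffic.

    mac_efficiency lumps preambles, ACKs, and contention overhead
    into one assumed factor.
    """
    return throughput_mbps / (phy_rate_mbps * mac_efficiency)

clients = [
    # (application throughput Mbps, negotiated PHY rate Mbps)
    (2.0, 144.4),  # e.g. laptop streaming HD video
    (0.5, 72.2),   # e.g. phone browsing the web
    (1.0, 28.9),   # e.g. distant legacy client at a low rate
]

utilization = sum(airtime_fraction(t, r) for t, r in clients)
print(f"Estimated channel utilization: {utilization:.1%}")
# Beyond roughly 70-80% utilization, contention delay dominates and
# per-client performance degrades sharply.
```

Andrew von Nagy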
When I think about all the hype for Green IT, I always ask myself: where is the beef? Or, where is the real green, as in greenbacks? Considering the following trends, I am starting to believe that Green IT is for real.

Google, Yahoo and Microsoft are deploying datacenters in the Columbia River Basin of Washington State in order to draw on renewable hydro power, and places like British Columbia are promoting their hydro power. My friend Mike Hrybyk, president of BCNET, tells me how he is looking to harness the "green power" of BC Hydro. CANARIE, the Ottawa-based agency that supervises standards for the Canadian Internet backbone, initiated a $3 million RFP, called the Green IT Pilot Program, to encourage zero-carbon datacenter deployments in Canada by 2011. I wonder what the impact will be on hockey rinks.

In the US, for the first time ever, new power capacity brought online from renewable energy sources in 2008 was greater than half of all new capacity (reaching 60 percent), thanks to solar, wind, hydro-dam, and geothermal sources. As recently as 2005, new renewable energy sources accounted for only 15 percent of marginal capacity. The US government is spending $4 billion from its economic stimulus package on smart grid initiatives, which, in my opinion, is a much better way to spend our tax dollars.

Peak demand has exceeded current capacity (while electricity costs have skyrocketed), and the problem is expected to worsen in the US. According to projections from the US Energy Information Administration, electricity generation around the world will nearly double, from about 17.3 trillion kilowatt-hours (kWh) in 2005 to 33.3 trillion kWh in 2030. The US power grid simply will not be able to keep up with the growth and demand for additional power using conventional means. In the US, it's estimated that reducing peak demand by a mere five percent would yield savings of about $66 billion over 20 years, to say nothing of the accompanying reduction in greenhouse gas emissions. Now we are talking money and real greenbacks!

Over the last five years, no marginal power capacity has been added with nuclear plants, due to prohibitive regulations and a general public attitude of "not in my backyard" (a trend likely to continue), so what does all this mean for IT and facilities professionals? Green IT practices are more than just designing greener datacenters and installing the most energy-efficient IT equipment. IT and facilities professionals need to think about powering datacenters with renewable energy sources, such as windmill farms, hydro dams, geothermal, or solar-powered sources. It's no longer enough, nor cost-competitive, to focus solely on the energy efficiency of new IT gear. Finally, energy rebates offered by utilities for green datacenters will be maximized when a more holistic approach is taken that considers IT equipment, facilities, and the actual sources of power.
The Japanese Defense Ministry is creating a computer virus capable of tracking, identifying and disabling sources of cyberattacks, according to reports. Development of the virtual cyberweapon was launched in 2008, and since then the weapon has been tested in a closed network environment. "The most distinctive feature of the new virus is its ability to trace cyber-attack sources. It can identify not only the immediate source of attack, but also all 'springboard' computers used to transmit the virus."
Does Size Matter? Picking a Sane Password Policy

If you choose a password made up of 60 random characters, it would take a hacker billions of years to crack it by brute force. Pretty good security, all in all. But since a password like that would be impossible to remember, it's not really practical for most end-user applications. So how long should your corporate password policy specify that a password should be?

In the first piece in this series we looked at the desirability of choosing passwords made up of random characters drawn from as large a pool as possible, preferably including upper and lower case letters, numbers and special characters such as punctuation marks and symbols. The SANS Institute recommends passwords should be at least 15 characters long, which effectively means that these passwords can't be carried around in end users' heads. Let's take a look at how secure a password this long would be.

In a scenario in which user passwords are made up of upper and lower case letters and numbers, each password character can be one of 62 possible characters. A fifteen-character password thus has 62^15, or more than 750 million million million million, possibilities. That's a lot. Even with a pool of a million computers working on the problem, checking them all would take on the order of millions of years (the exact figure depends on how many guesses per second each machine can make).

A healthy dose of realism is clearly in order. "A lot of guidance about password length and complexity is just a sticking plaster over an underlying problem with passwords," says Dr Ant Allan, a research vice president at Gartner. "It's important to remember that if you increase length or complexity you are only defending against some kinds of attacks anyway," he says. "If the end user's machine is infected with spyware then the password will still be discovered, regardless. And a long password does nothing to prevent a hacker getting a password using social engineering. These types of policies are beloved of auditors, trotting out established ideas."

Fifteen is an arbitrary figure for password length, so what would happen if shorter ones were used? They would certainly be easier to remember, and since, as Dr Allan points out, security is only as good as its weakest point, the reduction in security would not be as great as it might at first appear. The passwords might be a little easier to crack, but since even a ten-character password would still take a great deal of time to crack, it's still far more likely that any security breach would come from an internal attacker, a social engineer, or a malware attack than from a successful brute-force attack.

Over time computers get more powerful, and the time needed to crack passwords of a given length goes down. Increasing password length by a single character is surprisingly effective at counteracting several years of advances in technology: if the extra character is drawn from a pool of a hundred possibilities, adding one random character makes the password roughly 100 times harder to crack.
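To make the arithmetic above concrete, here is a short Python calculation of keyspace size and exhaustive-search time. The guess rate and machine count are illustrative assumptions; real attack speeds vary by many orders of magnitude depending on how the passwords are hashed, which is exactly why published time-to-crack figures differ so widely.

```python
# Keyspace and brute-force time for random passwords drawn from a
# 62-character pool (upper and lower case letters plus digits).
# GUESSES_PER_SEC and MACHINES are assumptions for illustration.

POOL = 62
GUESSES_PER_SEC = 1e9   # assumed per-machine rate (fast, unsalted hash)
MACHINES = 1e6

for length in (8, 10, 15):
    keyspace = POOL ** length
    seconds = keyspace / (GUESSES_PER_SEC * MACHINES)
    years = seconds / (3600 * 24 * 365)
    print(f"{length:2d} chars: {keyspace:.2e} combinations, "
          f"~{years:.1e} years to exhaust")

# Each extra character multiplies the keyspace by 62, so one added
# character buys back years of hardware improvement.
```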
Password Change Intervals

Password change intervals are usually also specified in corporate password policies, and the SANS Institute recommends that end-user passwords be changed every four months. The rationale behind this is not clear: with this policy in force, a hacker would still have an average of two months to exploit any password he acquired, which is more than enough time to do some harm. Given that users forget passwords more often when they are changed regularly, and that there is usually a significant cost involved in providing a help desk to reset large numbers of user passwords, you could argue that changing passwords is a fairly pointless but rather expensive exercise. "There has certainly been an argument around for a few years now that changing passwords is more trouble than it is worth," says Dr Allan. "People argue that it prevents employees who leave an organization from exploiting their passwords after they have left, but this is just a cover for poor administration."

One possible solution to the problem of using passwords which are difficult to remember is to use a password manager. These applications encrypt and store passwords securely so they don't need to be written down, and ensure they can only be accessed by the user after entering a master password. The virtue of these systems is that users are only expected to remember a single password instead of numerous different ones. In the final piece in this series we'll take a closer look at this type of application.
It's Fun And Games Until An Identity Gets Stolen

The variety and quantity of video games available to children is, quite frankly, overwhelming. How many times has your child asked you to enter your Apple ID into iTunes so they can download another app to their iPad?

Identity and Safety Risks

Not knowing how to navigate the social risks associated with careless gaming could easily lead to identity theft or, worse, your child being contacted by an adult posing as a child, soliciting personal information, or attempting to arrange a place to meet. And hackers are innovative when it comes to developing new malware (computer viruses, trojan horses, worms, etc.) to steal credentials and other valuable personal information. The risks are real, and Stay Safe Online offers an exhaustive list of gaming tips for parents, children, and teens and tweens. Here are some tips to help you keep your identity intact and your computer hardware virus-free:

- Avoid executable (.exe) add-ons that promise to add extra functionality to a game, as these may be infected with keylogging software or viruses designed to steal your login credentials.
- Remember that Game Masters (GMs) will never ask for your login information.
- Create a strong password that includes upper and lowercase letters, numbers and symbols that only you could know (a generator sketch appears at the end of this article).
- Be skeptical of third-party applications, especially knock-off games; do your research.
- Be wary of links and attachments delivered to you via email or instant message that say your password has been compromised; they could be phishing attempts that direct you to imposter sites. Verify by contacting the site through established channels.
- Before purchasing a game online, make sure the URL, or web address, begins with "https", and use a credit card instead of a debit card.
- Like a toothbrush, don't share your computer: a less savvy person could easily and unknowingly download malware.
- Keep your anti-virus software updated.
- Review your credit card and banking statements on a monthly basis.
- Stay abreast of the latest hacker methods by reading sites such as ours.
- Talk with your children about the risks associated with over-sharing; let them know that you're not trying to be a "downer," but have their best interests at heart.

Here's a clever Minecraft YouTube video created by some students for a school project. It could be a great way to get the conversation started. Do you know what online games your child plays, and whether they're interacting or chatting with strangers on public servers? Protect yourself and your children by taking control of your security and having those difficult conversations. To learn more about privacy and identity theft issues related to video games, please read our article on Minecraft.
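As a concrete illustration of the strong-password tip above, here is a minimal generator using Python's secrets module, which is designed for security-sensitive randomness. The symbol set is just an example; adjust it to whatever the site accepts.

```python
# Generate a strong random password: mixed case, digits and symbols.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"  # example set

def make_password(length=16):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(make_password())  # e.g. 'r7!Kd2_pQx9+Bm4Z'
```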
Nanocomputers implanted in our brains that connect to the cloud will usher in a new era of critical thinking and human advancement, Google's Director of Engineering predicts in a TED Talk. Ray Kurzweil says that by the 2030s, nanobots embedded through the bloodstream into our brains will create hybrid minds that combine the current power of our brain with the almost limitless processing capacity of cloud-based computers. These micro-computers will help us get quick answers to complex problems and will provide the extra juice needed to come up with creative new ideas. Check out the full TED Talk here.

This new cloud-based capacity we can unlock for our brains would not be the first vast expansion of the processing ability of our minds. Some 200 million years ago, our mammalian ancestors developed a new area of the brain, the neocortex, the outer layer that sits behind our foreheads and gives us the ability to think critically and creatively. It has allowed humans, but no other species, to develop advanced language and thinking. But the neocortex is limited in its growth by the size of our skulls. That's where nanocomputers come in, providing a link between our minds and the almost limitless capacity of the computers sitting in a data center, that is, the cloud.

Computers are getting smart enough to provide supplemental processing power and contextually relevant assistance. Natural-language reasoning, predictive analytics and artificial intelligence are all advancing quickly. The proof is in the pudding: IBM's Watson was able to handily beat the best human players of Jeopardy, a game that demands advanced natural-language processing. In the coming decades, the continual shrinking of technology hardware will allow tiny bots to be placed in our bodies that connect with computers in the cloud. Hybrid thinking that combines our own reasoning with the capacity of cloud-based computers when we need it will usher in a new era of unknown advancements, Kurzweil predicts. The last time our brains gained a new processing capacity, the eventual result was speech, critical thinking and great technological advancement. With the power of the cloud supplementing our brains, Kurzweil says the potential is limitless.
Thanks to the power of supercomputing, scientists from the Universities of Göttingen and Copenhagen now have a better understanding of one of the most ancient stars in the universe. The results of their study were published in the July 21st edition of Astrophysical Journal Letters.

The group used high-resolution computer simulations to model the formation of the oldest known star, which was discovered right in our Milky Way galaxy. The star, which has the abbreviated name SM0313, was born 13.6 billion years ago, just 100 to 200 million years after the Big Bang. The astrophysicists performed cosmological simulations on a supercomputer of the North-German Supercomputing Alliance to uncover the dynamics of gas and dark matter as well as the chemical evolution. The thing that sets SMSS (SkyMapper Southern Survey) J031300.36−670839.3 apart is its chemical composition, which can be seen in its spectral lines. The scientists expect this simulation to shed light on the transition from the first to the second generation of stars in the universe.

So-called first-generation stars were formed out of a primordial gas composed of hydrogen and helium. They were far more massive than our Sun, with ten to five hundred times its mass. Nuclear processes deep inside these stars formed heavy elements like iron, silicon, carbon, and oxygen. These stars eventually perished in supernova explosions, and the heavy elements they ejected seeded second-generation stars. A star with very few heavy elements indicates that not many earlier stars contributed to its birth. Such is the case with SM0313.

"Even for the oldest-known star in the Milky Way galaxy, our simulations indicate that the gas efficiently cools due to the presence of heavy elements," says Dr. Stefano Bovino at the Institute for Astrophysics Göttingen, lead study author. Such conditions favor the formation of low-mass stars and suggest that the transition to the second generation resulted from a supernova explosion. "The heavy elements provide additional mechanisms for the gas to cool, and it is very important to follow their chemical evolution," explains co-author Dr. Tommaso Grassi from the Center for Star and Planet Formation at the University of Copenhagen. The new simulations were enabled by a chemistry package called KROME, developed through a joint effort led by the University of Copenhagen. A video of the computer simulations can be viewed at vimeo.com/101191120.
"...number of quick wins." NASCIO's Robinson predicted green IT will gain prominence in the public sector as energy costs climb. Part of government's sluggishness on the issue stems from the fact that many agency data centers are renovated state office buildings that weren't built to be data centers. Many of these buildings aren't conducive to the rack-mounted servers, blade servers and storage area network arrays that energy-efficient data centers contain.

Due to the bureaucracy involved in building their own data centers, many governments may get greener by outsourcing data center operations to energy-efficient contractors, said Robinson. "It's a capital construction project, and it can take many years to get it approved," Robinson explained. Governments unable to approve those projects would likely outsource to vendors with green data centers.

Many state and local governments remain quiet about their green IT research for now, according to James Costa, vice president of government industry for IBM. He said many don't want to publicize their efforts without completing their needs assessments. "If you're talking 12 months from now, you'll see that three or four state governments have major efforts in this area they've actually had results from," Costa said.

Di Maio said energy-efficient data centers would be an easy first step for green IT in government. Green data centers already have well-established designs, and the cost savings are obvious. The next challenge will be implementing green initiatives that don't necessarily reduce energy bills but promote green values. An example would be a more environmentally friendly disposal process for computers.

Green IT will have different meanings for different government agencies, based on what produces each agency's "carbon footprint," said Di Maio. For example, the carbon footprint of an agency mostly composed of employees using computers would come from electricity consumption; energy-efficient data centers and computers would be the focus of green activities for such agencies. On the other hand, the overall carbon footprint of an agency focused on managing fleets of trucks comes from internal combustion engines, so rather than reducing data center energy consumption, that agency might deploy software that helps it use vehicles more efficiently. Financial motivations are a start, Di Maio added, but only a cultural change will make government IT truly green.
Borg16 is a circuit designed to drive 16×16 LEDs. It was designed by Suschman of Das Labor and consists mainly of 256 LEDs, an Atmel ATmega32, UDN2981AN drivers, 74HCT164 shift registers and a few additional components. They provide their own software, but I wanted to learn more about programming microcontrollers, so I wrote a program to display pictures streamed over an RS-232 connection to the device with 16 brightness levels. It could be used to watch videos (like mplayer with aalib output), visualize music, or just play simple games actually running on the computer (which has more resources than the Atmel µC). At a serial speed of 57600 baud, I was able to display about 22 frames per second, though the rate also depends on the host's serial driver. The software was written to be compiled using GCC/avr-libc and to be uploaded using foodloader, but adapting it to other compilers should be pretty straightforward. You will likely have to adapt it to your use anyway.
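The page doesn't document the frame format, but the numbers are suggestive: at 57600 baud with standard 8N1 framing the link moves about 5,760 bytes per second, and one byte per LED (256 bytes per frame) works out to roughly 22.5 frames per second, matching the ~22 fps reported above. Under that assumption, a host-side streamer might look like the following pyserial sketch; the real Borg16 protocol may well differ.

```python
# Hypothetical host-side frame streamer for a 16x16 display over RS-232.
# Assumes one byte per LED, with the 16 brightness levels in the low nibble.
import serial  # pyserial

WIDTH, HEIGHT = 16, 16

def stream_frames(frames, port="/dev/ttyUSB0"):
    with serial.Serial(port, baudrate=57600) as link:
        for frame in frames:  # frame: 256 brightness values, 0-15
            link.write(bytes(v & 0x0F for v in frame))

# Example: about one second (~22 frames) of a horizontal gradient.
gradient = [x % 16 for _ in range(HEIGHT) for x in range(WIDTH)]
stream_frames([gradient] * 22)
```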
Apparently no longer content to rely on the proven quick thinking, digital dexterity and dogged determination of the nation's youth, Dutch officials are abandoning a 150-year-old flood-control methodology in favor of one fashioned upon technology's flavor of the month: big data.

From an IDG News Service story on our site, which notes that 55% of the Dutch population lives in flood-prone areas:

A big data project called Digital Delta aims to investigate how to transform flood control and the management of the entire Dutch water system and save up to 15 percent of the annual Dutch water management budget. IBM will collaborate with Rijkswaterstaat, the part of the Dutch Ministry of Infrastructure and the Environment that is responsible for the design, construction, management and maintenance of the waterways and water systems in the Netherlands. The project also involves the University of Delft, local water authority Delfland and the Deltares Science Institute, the organizations said in a joint news release Tuesday.

Noted Dutch flood-control expert Hans Brinker was unavailable for comment.
Most discussions of Big Data have centred on how to get to the stuff: how to capture it via Hadoop clusters, for instance, and process and access it using MapReduce. That's all well and good, but compiling Big Data is the start, not the end, of the process. In fact, the emphasis on building Big Data datasets overlooks the most important question of all: "What are you going to do with the data?"

The truth is there isn't a 'one size fits all' answer. Why? Because Big Data's usage depends on your organisation's specific requirements and goals. Success lies in recognising the multiple types of Big Data sources, identifying the most appropriate technologies in each case, and then unlocking the riches within. Having successfully organised your data, you are then in a position to analyse, visualise and operationalise those precious insights according to your unique business aims, to get the value (namely, better decisions) that improves your bottom line. To make explicit what is involved in this Big Data journey, here are the main data source types and the corresponding analysis and visualisation techniques that might be applied to find the Big Data "gold" you're after.

User profiles from social networking sites, search engines or interest-specific social sites may be mined for individual profiles and target group demographics. Technology-wise, this involves API integration. Another potential and increasingly influential data source comprises contributions from reporters, analysts and subject experts to articles, user forums, blogs and Twitter; also user feedback from Facebook, catalogue and review sites, plus user-review-based sites like Amazon and so on. The mining technique here has to involve natural language processing and/or text-based search to assess the evaluative nature of comments and derive usable insights.

The next big source area is activity-generated data from computer and mobile logs, and increasingly data generated by processors within vehicles and video games (and soon household appliances, as the Internet of Things becomes a reality). Here, parsing technologies such as Splunk may well help make sense of these semi-structured text files and documents. Cloud data from SaaS applications such as salesforce.com may require distributed data integration technology, in-memory caching and API integration. There is also a wealth of publicly available data from the likes of Microsoft DataMarket, Wikipedia, etc., that you may wish to incorporate in your Big Data bucket. These resources require the same types of text-based search, distributed data integration and parsing technologies mentioned above. Finally, there are all those filing cabinets full of original, print-only documents. Parsing and transforming this semi-structured legacy content to prepare it for analysis can be aided by specialist document management tools, e.g. Actuate's Xenos.

We have been talking about sources and the analysis and visualisation techniques that can assist you in your Big Data task. Let's consider other technologies that should form part of this conversation. The next-generation Hadoop- and MapReduce-style tools for handling and parallel parsing of data from logs, Web posts, etc. promise to create new generations of data (the sketch below illustrates the basic pattern). Plus, don't forget that older data warehouse appliances, such as Teradata, Netezza and Plumtree, have been busy for years collecting internal, transactional data. These should all become integration targets for your Big Data architecture.
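To ground the MapReduce model referenced above, here is a minimal single-process sketch of the map/shuffle/reduce pattern using the classic word count; a real Hadoop job distributes exactly these phases across a cluster.

```python
# Minimal in-process MapReduce: map emits key-value pairs, the shuffle
# groups values by key, and reduce aggregates each group.
from collections import defaultdict

def map_phase(record):
    for word in record.lower().split():
        yield word, 1

def reduce_phase(key, values):
    return key, sum(values)

def mapreduce(records):
    groups = defaultdict(list)
    for record in records:
        for key, value in map_phase(record):
            groups[key].append(value)      # the "shuffle" step
    return dict(reduce_phase(k, v) for k, v in groups.items())

logs = ["error disk full", "warn retry", "error timeout error"]
print(mapreduce(logs))  # {'error': 3, 'disk': 1, 'full': 1, ...}
```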
Meanwhile, Cassandra and other distributed query processing and packet evaluation applications, as well as email parsers, are also technologies that fill gaps in Big Data environments and will help deliver the goods. Finally, there are many useful tools, such as BIRT (Business Intelligence and Reporting Tools), the Eclipse open source project that serves as the foundation for the ActuateOne product suite, that help your Big Data mission.

In conclusion, as an industry we have yet to appreciate that it's not only how well we capture Big Data, but what we do with it that matters. As ever, "Why do we want to do this?" is the only really interesting discussion business and IT should have; we need to empower both sides to have that conversation about Big Data. It's an effort worth undertaking. Imagine life when Big Data starts making its mark: weather forecasts that are actually predictive, useful restaurant or accommodation recommendations on your phone when you reach your holiday destination, a fridge that could talk you through a recipe based on its contents and your meal preferences; or we could start to learn something about the fundamentals of life with all that genome information. With Big Data, the possibilities are genuinely exciting. It's time to move beyond the enthusiasm and froth to the real business benefits, a process that can only happen with a pragmatic, properly thought-out implementation strategy that takes your business through the organising, visualising and operationalising stages of effective Big Data management.

Nobby Akiha is Senior Vice President of Marketing at Business Intelligence (BI) specialist Actuate.
With the long dry spells leading into seasons of drought, one's thoughts turn to the long-held promise of desalination. Abutting California, which is facing one of the most severe droughts on record, is the vast Pacific Ocean, if only the salt could be separated from the valuable H2O in an economically feasible way.

Enter graphene. The ultra-thin carbon material that has so tantalized the computer industry as a potential semiconductor material might have another use: transforming salt water into salt-free potable water. According to new research from Oak Ridge National Laboratory and Rensselaer Polytechnic Institute, a hybrid material called graphene oxide frameworks, or GOFs, could provide a big advantage over the inefficient desalination processes currently in use.

"This is basically sheets of oxidized graphene connected by specific chemical linkers from some of the oxidation sites," explains ORNL's Bobby Sumpter. "Because it's composed mainly of strongly bonded carbon, it doesn't decompose in water and has good mechanical properties. It's an exciting material with potential for numerous applications."

Sumpter and RPI's Vincent Meunier were initially drawn to GOFs' tunable electronic properties, but with the help of supercomputer simulations performed at RPI's Center for Computational Innovations they soon recognized the material's potential as a desalination membrane capable of removing contaminants such as salt ions from water.

Among the current techniques for getting salt out of salt water, reverse osmosis is used in about 40 percent of desalination plants. The method pushes saltwater through a semi-permeable membrane to generate fresh water, but with reverse osmosis, speed is a limiting factor: the membrane can only handle a certain water pressure. "You can have a great membrane material but if you can treat only a cup of water a day, that's not going to be useful or cost-effective," says Meunier.

Sumpter, Meunier and RPI's Adrien Nicolaï created computational models at the atomic level and then set out to determine the ideal configuration for a GOF desalination membrane. High-performance computers were used to simulate different elements of the design. The team examined how layer thickness, the density of the linking pillars, and applied pressure all affect the material's performance. The sweet spot for removing salt as efficiently as possible involved balancing the selectivity and permeability of the membrane. The simulations showed where fine-tuning the GOF structure would boost its ability to handle a greater load. The new technology is thought to operate approximately 100 times faster than the materials currently used as reverse osmosis membranes. The addition of water-repellent graphene as part of the porous membrane further boosts performance.

"Water is trying to avoid being in contact with graphene, so you can design it in such a way that you're forcing the water not to be close to one layer but also not to be close to the other," Meunier said. "This effect creates channels, which direct water through the system very quickly."

While this research focused on salt ions, the GOF material has other applications, including acting as a filtration membrane for contaminants such as bacteria. Furthermore, GOFs are made from abundant, inexpensive materials through a standard fabrication process, setting the stage for more affordable desalination efforts.
"We believe it's scalable, that the chemical engineering industry could potentially produce it in bulk," Sumpter said. The results of the research have been published in the journal Physical Chemistry Chemical Physics. Over the last few years the field of materials research has grown by leaps and bounds thanks to computational modeling, and the researchers point out how interdisciplinary collaborations and nanoscience are making a big impact in areas that benefit humanity.
I recently conducted a penetration test of a web application. Because of design decisions, I was able to bypass CAPTCHA to brute force user accounts and, ultimately, bypass file upload restrictions to upload malware onto the web server and into the internal network environment. The owner had taken a healthy view of security, had conducted frequent vulnerability assessments, and wanted a full pentest. As expected, a quick web app vulnerability scan showed minimal findings: a possible DOM-based XSS that turned out to be a false positive and some scattered SSL/TLS certificate findings that presented minimal risk to this business. I set aside the vuln scanner and actually looked at the code. Upon doing so, I found several critical vulnerabilities that needed to be addressed immediately. All of the issues had a common theme: the web application relied solely on Client Side Security.

What is Client Side Security?

Client Side Processing, in general, is an application design principle that moves app functionality onto the client, or user's, computer. In a client-server relationship of this type, the client conducts some level of application processing on the user's computer before sending information to the server for further processing. Client Side Processing has many benefits, including relieving processing stress on the server and creating a customized user experience. When security is placed client side, a problem arises: the security controls are now in the hands of the user. Most users are neither malicious nor technically capable of exploiting this control, but a skilled malicious user can easily bypass controls that are placed in their hands. Multiple vulnerabilities existed in this web app, but two serve as good examples of bypassed Client Side Security: CAPTCHA with Client Side invocation, and file upload functionality with Client Side file restrictions.

CAPTCHA with Client Side Invocation

The web application allowed anyone to create a user profile. Once created, the user profile was protected with a username and password. After two unsuccessful login attempts, a CAPTCHA was generated. The CAPTCHA was designed so that a human could read the image text and submit the text along with the user credentials. Conversely, the CAPTCHA image was designed to be difficult for a computer to read. As such, a CAPTCHA is a very good security measure against automated brute force login attempts: if a computer cannot read the CAPTCHA image, it cannot submit the text and cannot attempt a login.

The web application's CAPTCHA, itself, worked just fine. The CAPTCHA was generated and processed on the server side. The invocation of the CAPTCHA, however, was Client Side: the code to keep track of unsuccessful login attempts was maintained and sent via Client Side session cookies. As a proof of concept attack, a test user account was created. The POST request for the login page, without CAPTCHA, was copied and sent to the web application server 1,000 times with different, incorrect password attempts. At the end of the 1,000 password attempts, the correct password was attempted and login was successful. The entire proof of concept took less than a minute. While the CAPTCHA itself worked as designed, its invocation was kept Client Side. I bypassed the invocation by sending the same initial POST request over and over: the session cookie was never updated with the unsuccessful login count, the CAPTCHA was never invoked, and the security mechanism was rendered useless.
As the web application profiles contained sensitive financial user information, a malicious actor could have exploited this vulnerability to brute force multiple accounts using frequently used passwords, or to brute force a single account until the correct password was guessed.

File Upload with Client Side File Restrictions

While it was out of scope for this assessment, the company was well aware that this sort of vulnerability could allow a malicious actor to upload various malware and hacking tools, including ransomware and remote shell tools.

Conclusion and Recommendations

None of this is to say that Client Side Security is worthless. Proper implementation of Client Side Security paired with Server Side Security can prevent unintentional and unskilled attacks. A reliance on Client Side Security without Server Side Security, however, is easily bypassed with a modicum of skill. In the case of these specific vulnerability examples, the count of unsuccessful login attempts should have been tracked by the server, which should also have invoked the CAPTCHA. And while the file upload functionality could keep the Client Side Security checks, a second set of checks should have been performed by the server. Client Side Security relies on trust in the user: trust that the user has no ill intention, will make no mistakes, and lacks the skill to attack. If the web application designers trust the user implicitly, no security is required. As business requires proper security, Client Side Security should not be relied upon, and Server Side Security checks should always be in place.
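As a minimal sketch of the Server Side tracking recommended above: keep the failed-attempt counter in server state keyed by account, so replaying the same POST request can neither reset nor bypass it. The threshold and helper functions here are illustrative stubs, not the tested application's code.

```python
# Server-side failed-login tracking; replayed requests cannot bypass it
# because the counter never leaves the server.
from collections import defaultdict

CAPTCHA_THRESHOLD = 2
failed_attempts = defaultdict(int)  # production: shared store such as Redis

def captcha_is_valid(response):              # stub for illustration
    return response == "expected-captcha-text"

def credentials_are_valid(user, password):   # stub for illustration
    return (user, password) == ("demo", "s3cret")

def handle_login(user, password, captcha_response=None):
    if failed_attempts[user] >= CAPTCHA_THRESHOLD:
        if not captcha_is_valid(captcha_response):  # checked server side
            return "captcha_required"
    if credentials_are_valid(user, password):
        failed_attempts[user] = 0
        return "ok"
    failed_attempts[user] += 1
    return "invalid_credentials"
```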
Definition: A quantum algorithm that determines whether a function is constant or balanced, that is, returns 1 for half the domain and 0 for the other half. For a function taking n input qubits, first do Hadamards on n 0's, forming all possible inputs, and on a single 1, which will be the answer qubit. Next, run the function once; this exclusive-ors the result with the answer qubit. Finally, do Hadamards on the n inputs again, and measure them. If all n input qubits are 0, the function is constant; otherwise it is balanced.

See also quantum computation.

Note: The algorithm needs only one (quantum) evaluation of the function. A classical algorithm to answer the same question must, in the worst case, examine one more than half the domain. See Arthur O. Pittenger, "An Introduction to Quantum Computing Algorithms", page 41, or Michael A. Nielsen and Isaac L. Chuang, "Quantum Computation and Quantum Information", pages 34-36.

Entry modified 17 December 2004.

Cite this as: Paul E. Black, "Deutsch-Jozsa algorithm", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. Available from: http://www.nist.gov/dads/HTML/deutschJozsaAlgo.html
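A small classical simulation can make the algorithm tangible. The sketch below uses the standard phase-kickback simplification, in which the answer qubit held in the |-> state turns the oracle into a sign flip (-1)^f(x) on each basis state, so only the n input qubits need to be simulated explicitly. This is an illustrative simulation, not an efficient quantum execution.

```python
# Classical simulation of Deutsch-Jozsa for f: {0,1}^n -> {0,1},
# using the phase-oracle (phase-kickback) form.
import numpy as np

def deutsch_jozsa(f, n):
    N = 2 ** n
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):                     # n-fold tensor power of H
        Hn = np.kron(Hn, H)
    state = np.ones(N) / np.sqrt(N)            # H^n |0...0>: uniform superposition
    phases = np.array([(-1) ** f(x) for x in range(N)])
    state = Hn @ (phases * state)              # oracle sign flips, then H^n again
    p_all_zero = abs(state[0]) ** 2            # measure the n input qubits
    return "constant" if np.isclose(p_all_zero, 1.0) else "balanced"

print(deutsch_jozsa(lambda x: 1, 3))       # constant function -> 'constant'
print(deutsch_jozsa(lambda x: x & 1, 3))   # parity of low bit -> 'balanced'
```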
<urn:uuid:94adf410-c6d7-4ff6-b13b-7781a8792f8c>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/deutschJozsaAlgo.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00277-ip-10-171-10-70.ec2.internal.warc.gz
en
0.793288
340
3.328125
3
On July 5, 2012, the U.N. Human Rights Council adopted a resolution on the promotion, protection, and enjoyment of human rights on the Internet. The U.N. General Assembly established the Human Rights Council in 2006 to replace the former U.N. Commission on Human Rights. The Council consists of 47 U.N. member states from all geographic regions that are elected by the General Assembly for three-year terms. Current members include China, the Russian Federation, and the United States. The Internet resolution was co-sponsored by more than 80 countries, both members and non-members of the Council, including Sweden, the United States, and Brazil. The Council adopted the resolution by consensus during its 20th regular session, which ran from June 18 through July 6, 2012, in Geneva. The resolution “[a]ffirms that the same rights that people have offline must also be protected online, in particular freedom of expression, which is applicable regardless of frontiers and through any media of one’s choice, in accordance with articles 19 of the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights.” The resolution also “[c]alls upon all States to promote and facilitate access to the Internet and international cooperation aimed at the development of media and information and communications facilities in all countries.” In a press statement, U.S. Secretary of State Hillary Rodham Clinton called the “landmark resolution” a “welcome addition in the fight for the promotion and protection of human rights and fundamental freedoms online.”
<urn:uuid:274214bf-4c55-4dc1-ba69-4c767c93f6ec>
CC-MAIN-2017-04
https://www.insideprivacy.com/international/un-human-rights-council-addresses-human-rights-on-the-internet/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00185-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956441
325
2.625
3
A dental practice in Florissant (a suburb of St. Louis, Missouri) has revealed that a recent data breach could involve 10,000 people. The medical data breach was possible because patient data encryption software was not used to secure laptops that were stolen during a burglary. According to stltoday.com, an attorney representing the orthodontist's office confirmed that "extensive investigation[s]" had to be performed to see who was affected by the burglary, although he did mention that "most of the patients were probably teenagers," which makes sense when you consider who generally gets orthodontic treatment (think: braces).

HIPAA rules do not discriminate based on age, however: since the computers were not protected with disk encryption software – but only with password-protection, which is easily "crackable" – Olson & White are forced to report the data breach not only to patients but to the Department of Health and Human Services (HHS). In this case, because more than 500 people are affected, the HHS has to be contacted immediately. Furthermore, certain other rules may apply, such as having to contact a media outlet to get the news out.

Why does the use of encryption software give a medical organization a way out of reporting a data breach? Legally, it's because the Breach Notification Rule (found under the HITECH amendments to HIPAA) offers safe harbor from reporting a medical data breach if encryption is used. From a technical standpoint, it's because encryption offers one of the best ways of protecting digital information. Strong encryption – like the AES-256 algorithm – is considered unbreakable with modern computing tools. Testing by cryptologists, which continues today, has upheld this theory so far. Under the circumstances, chances are that PHI encryption can easily prevent data on stolen or lost laptops from falling into the wrong hands.

So why don't all medical organizations use it? Simply put, many demur at the use of encryption because of cost. Not only financial cost – like actually paying for the encryption licenses – but also other costs, such as opportunity costs. For example, if facing a tight budget, money diverted towards non-performing expenses like security software could mean having to give up on hiring a dental technician or the latest x-ray machine that could speed up consultations and treatment.

Furthermore, there is the added problem of hidden cost when deploying encryption: most encryption providers only list the cost of licenses (usually per machine or device to be protected, sometimes per user, regardless of how many devices are involved), but the encryption budget needs to cover things like central management servers, the software that is required to ensure such servers can do their job (the underlying operating system, for example), space for the server in a data center, etc. Hidden costs can also include the hours worked by an IT technician as well as any ongoing operational and maintenance costs.

Since data breaches may not affect a medical organization for an extended period of time, many myopically decide to forgo encryption, possibly thinking that it won't happen to them, or promising that they'll do it "soon." Of course, it doesn't have to be that way. AlertBoot FDE complies with HIPAA encryption requirements (namely, it's a FIPS 140-2, NIST validated solution) and states all costs upfront.
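As a rough illustration of what "strong encryption" means in practice, the snippet below encrypts a buffer with AES-256 in GCM mode using the Python cryptography package. It is a toy sketch, not a substitute for the full disk encryption products discussed above, which work at the block-device level.

```python
# Toy AES-256 example (authenticated encryption with AES-GCM).
# Full disk encryption operates on whole volumes, but the underlying
# cipher strength is the same idea.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # a 256-bit AES key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # never reuse a nonce per key

ciphertext = aesgcm.encrypt(nonce, b"patient chart #1234", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"patient chart #1234"
# Without the key, recovering the plaintext from the ciphertext is
# considered computationally infeasible with modern tools.
```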
<urn:uuid:a1b71c02-0656-481a-8c75-228c0c05e601>
CC-MAIN-2017-04
http://www.alertboot.com/blog/blogs/endpoint_security/archive/2013/09/02/dentist-encryption-olson-amp-white-orthodontics-reports-10-000-affected-by-data-breach.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00303-ip-10-171-10-70.ec2.internal.warc.gz
en
0.966537
672
2.59375
3
Following on from our current series of articles exploring cell phone hacking, this time we will look at how phones can be attacked. There are 4 main ways in which cell phones can be compromised:
- Spyware can be loaded onto a phone. This, in turn, can activate the phone as a bugging device with full remote control available to an eavesdropper. Advanced spyware has a number of features, including voice-activated microphones to save on battery life and the ability to auto forward SMS messages and the contact list on a phone.
- GSM encryption can be hacked. A number of attacks have been demonstrated and, in theory, given suitable resources, mobile phone encryption could be compromised. This is a passive attack and is undetectable, as the signals are received using a specialised radio which is both portable and easy to hide.
- Cell phone "capture". This attack exploits a couple of design weaknesses found within GSM cell phones. The first is that, whilst a cell phone needs to authenticate itself to a network, the network itself is not authenticated by the cell phone. Couple this with the design requirement for cell phones to connect to the most local base station, based on signal strength, and a fake base station can be set up and all local call traffic captured. As mobile phone calls are only encrypted from the phone to the base station, a fake base station will be able to process calls "in the clear". This is called an active attack and, whilst it may appear complicated, a number of commercial products are available to authorised agencies and government departments. In early 2010, active attacks were demonstrated using hardware and software that can be purchased for around $1000, a small fraction of the cost of commercially available solutions. The widespread availability of home base stations, such as Vodafone SureSignal, has provided a source of equipment that could be adapted for this type of intercept. In reality this attack does have limitations. As the cell phone is using a fake base station it is not registered with the cell phone network, so any incoming calls will be diverted to voice mail or receive a "cell phone unavailable" message. More sophisticated versions of this attack provide two connections: one to the compromised phone and one to the network base station. Using this man-in-the-middle approach, the cell phone is able to connect to the authentic network, albeit via a fake base station that will intercept the traffic, so "normal" two-way calls can be initiated whilst the call and data flow is being monitored. 3G phones utilise mutual authentication between the phone and the network, so aspects of these attacks will no longer be valid when networks are exclusively 3G and above. Until then, the sharing of GSM and 3G systems in support of broader network coverage means 3G phones can still be subject to compromise using this approach.
- Inside threat. Threats to information security systems often emanate from inside an organisation. These can take the form of knowledgeable insiders being bribed or bullied into supplying relevant cell phone data, or can even be an employee planted by a security agency. In June 2010, a technician who worked at a Lebanese mobile phone operator was arrested for being an Israeli spy and for giving access to phone calls for 14 years. Because of the man's role on the technical side of the cell phone network's operations, it was assumed that the entire national network had been compromised.
The good news is that there are some steps you can take to help protect your phone:
- Most obviously, keep your phone with you at all times, and don't be fooled into allowing someone else to use it. It can take a matter of seconds for a hacker to compromise your phone by switching out a SIM card or downloading an application. Consider using a PIN to prevent unauthorised access, but make sure you change it from the default setting and guard it as you would a banking PIN.
- Be aware of your environment when using a mobile phone. Despite all the hi-tech ways in which a phone can be compromised, simply eavesdropping on a conversation remains the most common way of obtaining information. Consider techniques such as hiding your lips to prevent lip reading if you are particularly concerned.
- 3G networks may provide a better level of security than 2G if they implement A5/3 encryption, but be aware that a 3G network may degrade calls to 2G in some areas without you realising it. Some targeted attacks will deliberately downgrade a 3G cell phone connection to an easier-to-attack 2G connection without the user realising it.
- Consider the country that you are calling from, and remember that there may be different attitudes to privacy and confidentiality than in your home country. It has been reported that some countries record all phone calls as a matter of policy, so this is especially important when you know that you are dealing with sensitive commercial, political or industrial intellectual property in these areas.
- Watch out for malware. This may take the form of applications, SMS messages, service messages or email attachments in smart phones. A seemingly innocent game or applet could easily be a piece of Trojan software carrying a phone bugging application. An unguarded Bluetooth connection can also be a route into your phone, so switch it off if you are at all concerned. A number of vendors are starting to provide anti-malware for mobile and smart phones, which may help.
- If you are concerned that your phone has been compromised, turn it off and remove the battery. It is possible to have your phone examined by a forensic expert, but it may be cheaper and quicker to remove your SIM card and get a new phone. Remember to back up your phone contacts to another device so that you can quickly copy them to any new phone.
- Don't leave voicemail messages, as these systems can be targeted by interceptors. If you do need to use voicemail, ensure that your PIN is changed from the default, as voicemails can be accessed from any phone. Deleting messages after you have received them is good practice.
In the next article we will look at voice encryption technologies.
<urn:uuid:9415791d-03b1-4e45-aa2d-8172e2f0a932>
CC-MAIN-2017-04
http://www.bloorresearch.com/analysis/cell-phone-hacking-attacks-a-real-and-present-danger-part-p1-p1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00303-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95174
1,224
3.015625
3
For example, during Sandy, FEMA accessed more than 150,000 geo-tagged photos from the Civil Air Patrol, which helped the agency perform assessments and make better decisions. "All that big data helped us very quickly come to a very definitive answer on how many people were affected," said FEMA Geospatial Information Officer Chris Vaughan. "It helped us determine who was exposed and where there were structural damages so we could do a better job of providing assistance to disaster survivors faster than we have ever done before."

Social media is a different story. Ole Mengshoel, associate research professor of electrical and computer engineering at Carnegie Mellon University's Silicon Valley campus, said restrictions on the availability of public data on social media sites could slow progress in using it as a reliable tool in the big-data arena. Users who protect their tweets and make their Facebook postings private limit the amount of data available and therefore affect the data's scope and dependability. From an academic point of view, Mengshoel said it "would be a pity" if big data's potential based on social media data streams wasn't reached because the companies were too protective of it. Although there are privacy and proprietary concerns with sharing some of that information, Mengshoel said that for emergency managers to truly harness the power of social media data, they'll need the ability to sample or access it.

GIS and sensor data may be easier to come by, but presenting that data in a useful form can be a daunting task. Vaughan said it is "insane" how many layers of information can be embedded on a Web-based map. The real challenge, said Vaughan, lies in putting the data in an easily understood format for emergency managers. "The faster we can provide imagery to the right person or group of people with the right assessments, it helps us streamline and make better decisions," he said.

Despite the challenges, Pardo feels the attention on big data will eventually benefit the industry. She believes that because there's so much new data being generated, decision-makers will get more confident leveraging analytical information in policy development, program evaluation and delivery. Pardo called big data's exposure in the last few years a mutually reinforcing process that draws attention to the need for a higher level of capability to use data more generally in the emergency management community, be it big or small. Event simulation is one area Pardo felt big data could help improve. She said that, as a major part of responders' preparation activities, disaster simulations can at times suffer from a lack of statistical information to fuel predictive models. So where earthquakes, hurricanes or even shoreline erosion events are being trained for, large-volume data sets could help increase the accuracy and reliability of those models. "We're in the phase right now where there's a lot of very obvious and relatively straightforward ways to use these large-volume data sets," Pardo said. "But we're just beginning to develop new analytical tools and techniques to leverage that data."

While Splunk4Good has made some inroads, Botterrell said, improving efficiency using big data could take some time. Actual emergency situations aren't the best times to test the quality of data and do experiments because lives are usually at stake, he explained. Exposing people to large data sets doesn't mean decision-making will be more accurate, Okada said.
He said that a small fraction of a larger set of trends can be overlooked, and that oversight can lead to a bad decision during a disaster. Instead of relying solely on data, Okada said a three-pronged approach can help protect decision-makers from the pitfalls of information overload. He referenced a principle from Robert Kirkpatrick, director of the Global Pulse initiative of the United Nations Secretary-General, a program that aims to harness the power of big data, as one way to prevent mistakes. Kirkpatrick advocates using the power of analytics combined with the human insight of experts and leveraging the wisdom of crowds. "That kind of data triangulation can help protect us going forward," Okada said.
<urn:uuid:7a94d859-bfbb-4e54-a029-1cd3f7c8c017>
CC-MAIN-2017-04
http://www.govtech.com/How-Emergency-Managers-Can-Benefit-from-Big-Data.html?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00121-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939107
876
3.03125
3
Introduction to Web Cookies

Because HTTP is a stateless protocol, it cannot internally distinguish one user from another. To address this issue, cookie technology was invented in 1994. By using cookies, servers instruct browsers to save a unique key and then send it back with each request made to the server. When a request is sent from a browser to a website, the browser checks whether it has a stored cookie that belongs to that website. While carrying out this process, it checks whether the properties and flags of the cookies (domain, path, secure) match the requested website's data. If they match, the browser sends the relevant cookies along with the request.

Cookie Misuse Can Lead to Cross-site Request Forgery

This behavior is repeated in the same way for requests made by third parties through the browser. By "third parties" we mean other websites that we don't visit directly. The critical point from a web application security perspective is that when you visit website A, all cookies kept in the browser for site B will be added to any request that site A initiates toward site B. So, a session that belongs to B in the browser can be used, and even abused, in this way. In security terminology, abusing this browser behavior to misuse a session belonging to an authorized user is known as Cross-site Request Forgery (CSRF).

This browser behavior can also be misused for other purposes, like tracking users or advertising. When you enter a site, for instance example.com, your browser may make a few requests to different sites because of the HTML elements placed on the page of example.com, for example Facebook Like buttons, Google Analytics code, etc. Along with these requests, the cookies in the browser that belong to these other sites will also be sent. Therefore those third parties can track and log your activity by using Cookie and Referrer information.

Should You Block Cross-site Requests to Prevent CSRF?

Normally, it is possible to avoid tracking like this in the Firefox and Chrome browsers. However, when these browsers block tracking, they prevent the sending of cookies along with requests made by any third party website, and by doing so they make your browsing experience very poor. So by blocking third-party cookies you can totally prevent CSRF, but is it worth the consequences?

Introducing the Same-Site Cookie Attribute to Prevent CSRF Attacks

Thanks to a new cookie attribute, which Google Chrome started supporting on the 29th of March and which the other popular browsers have since followed, there is now a solution. It is called the Same-Site cookie attribute. Developers can now instruct browsers to control whether cookies are sent along with requests initiated by third party websites by using the SameSite cookie attribute, which is a more practical solution than blocking the sending of cookies outright.

Setting a Same-Site attribute on a cookie is quite simple. It consists of adding just one instruction to the cookie. Simply adding 'SameSite=Lax' or 'SameSite=Strict' is enough!

Set-Cookie: CookieName=CookieValue; SameSite=Lax;
Set-Cookie: CookieName=CookieValue; SameSite=Strict;

Differences Between the Strict and Lax SameSite Cookie Attributes

Strict: As the name suggests, this is the option in which the Same-Site rule is applied strictly. When the SameSite attribute is set to Strict, the cookie will not be sent along with requests initiated by third party websites. Setting a cookie as Strict can affect the browsing experience negatively.
For example, if you click on a link that points to a Facebook profile page, and if Facebook.com has set its cookie as SameSite=Strict, you cannot continue navigation on Facebook (view the Facebook page) unless you log in to Facebook again. The reason is that Facebook's cookie was not sent with this request.

Lax: When you set a cookie's SameSite attribute to Lax, the cookie will be sent along with GET requests initiated by third party websites. The important point here is that, for a cookie to be sent with a GET request, the GET request must cause a top-level navigation. Only then will a cookie set as Lax be sent. Let me explain further: resources can be loaded by iframe, img and script tags. These requests also operate as GET requests, but none of them cause top-level navigation; basically, they don't change the URL in your address bar. Because these GET requests do not cause a top-level navigation, cookies set to Lax won't be sent with them. See the table below for more clarification:

| Request Type | Example Code | Cookies sent |
|---|---|---|
| Link | <a href="..."></a> | Normal, Lax |
| Prerender | <link rel="prerender" href=".."/> | Normal, Lax |
| Form GET | <form method="GET" action="..."> | Normal, Lax |
| Form POST | <form method="POST" action="..."> | Normal |

Does this really mean "goodbye" to CSRF?

Yes, it looks like the SameSite cookie attribute is an effective security measure against CSRF attacks. You can avoid sending your cookies with requests initiated by third parties by using this feature. Let me clarify with an example: let's say you are logged in to the website www.badbank.com. Using a phishing attack, an attacker tricks you into opening www.attacker.com in another browser tab. Using code on www.attacker.com, the attacker tries to transfer money from your account by posting a FORM to www.badbank.com. Your browser sends the cookie belonging to www.badbank.com with this request. If the form on www.badbank.com lacks CSRF tokens to prevent a CSRF attack, your session can be exploited by the attacker. If the cookie of www.badbank.com had been set to SameSite=Lax, the cookie in the browser would not have been sent with the POST request and the attack would not be successful.

CSRF Popularity is Going Down

CSRF attacks were at number 5 in the OWASP Top 10 list published in 2010, but they declined to number 8 in the OWASP Top Ten in 2013. People suggested that the reason for this was increased awareness of CSRF and the common use of Anti-CSRF tokens by frameworks.

Preventing CSRF Vulnerabilities

Although we now have the SameSite cookie attribute, we should still be cautious! We should make all state changes with POST requests instead of GET. GET is designed for navigational purposes, not for state changes; in HTTP terms it is supposed to be a "safe" method that changes nothing on the server. When we are performing actions (such as ordering a product, changing a password, or editing profile information), using POST requests is much safer. There are 3 important reasons for this:
- When the parameters are carried by GET, they stay in the browser history. They will also be placed in server logs and in the Referrer header of requests made toward third parties.
- Another reason for not using GET requests is that cookies set to Lax are still sent along with GET requests, giving attackers another opportunity to exploit users.
- Lastly, exploiting a CSRF vulnerability by using GET is much easier. To exploit a CSRF vulnerability in a form using GET, an attacker does not have to own a site.
An attacker can simply inject the payload into a forum message, a post comment or an image tag.

How Does Netsparker Report This?

At Netsparker, we are constantly paying attention to the latest security developments and adding new features and security checks to our engine. In fact, just a few weeks after the technical details of the Same-Site cookie attribute were released, we implemented the check for it in both Netsparker Desktop and Netsparker Cloud, so the web vulnerability scanner will alert you if cookies lack this attribute.
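For developers who would rather set the attribute in application code than in a raw header, here is a minimal sketch using Flask; it assumes a Werkzeug version recent enough to accept the samesite argument, and the route and cookie names are illustrative.

```python
# Illustrative Flask route that issues a session cookie with SameSite.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    resp = make_response("logged in")
    # Lax: sent on top-level GET navigations (links, form GETs) but
    # withheld on cross-site POSTs, blocking the classic CSRF form post.
    resp.set_cookie("session", "opaque-token",
                    secure=True, httponly=True, samesite="Lax")
    return resp

if __name__ == "__main__":
    app.run()
```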
<urn:uuid:94ebd249-5426-405c-bd43-52efc07a8369>
CC-MAIN-2017-04
https://www.netsparker.com/blog/web-security/same-site-cookie-attribute-prevent-cross-site-request-forgery/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00029-ip-10-171-10-70.ec2.internal.warc.gz
en
0.913331
1,681
3.578125
4
Norway under CyberAttack: The Norwegian National Security Authority said that industrial secrets from the oil, energy and defense industries had been stolen. At least 10 different attacks, mostly aimed at the oil, gas, energy and defense industries, were discovered in the past year, but the agency said it has to assume the number is much higher because many victims have yet to realize that their computers have been hacked. This is the first time Norway has unveiled such an extensive and widespread espionage attack.

Spokesman Kjetil Berg Veire added that it is likely more than one person is behind the attacks. The methods varied, but in some cases individually crafted e-mails, armed with viruses, would sweep recipients' entire hard drives for data and steal passwords, documents and other confidential material. The agency said in a statement that this type of data theft was "cost-efficient" for foreign intelligence services and that "espionage over the Internet is cheap, provides good results and is low-risk." Veire would not elaborate, but said it was not clear who was behind the attacks. Important Norwegian institutions have been targeted by hackers before.
<urn:uuid:009b7bea-b981-46c6-8eca-e7b06ea52d7b>
CC-MAIN-2017-04
http://www.ehackingnews.com/2011/11/norway-under-cyberattack-hackers-steal.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00423-ip-10-171-10-70.ec2.internal.warc.gz
en
0.981519
239
2.578125
3
Solid-state drives are all the rage lately, thanks to their high transfer speeds and ultrafast access times, but most people still use cheap, spacious mechanical hard drives. Unfortunately, mechanical hard drives also constitute one of the most significant performance bottlenecks in modern computer systems. Even when paired with the fastest processors and lots of memory, a slow hard drive will drag down a system's overall performance and responsiveness, which is why upgrading to an SSD usually yields such significant performance gains.

If upgrading to a solid-state drive isn't in the cards for you right now, you can improve the performance of your hard drive through a technique colloquially known as "short stroking." In simple terms, short stroking a drive means partitioning it so as to use its highest-performing sectors. Hard drives perform differently depending on where data is stored on their platters. Knowing where the fastest sections of the drive are and partitioning the drive to take advantage of them are the keys to optimizing it.

Finding the Sweet Spot

Generally, the smaller you make the initial, primary partition on a hard drive, the better that volume will perform. But no one likes to be limited by a tiny volume size, so it's very useful to be able to determine where transfer rates begin to drop off on a hard drive. With that information in hand, you can tune your partition to balance overall performance against volume size.

To measure a hard drive's performance, you'll need access to a system that already has a fully functional OS installation on another drive. Connect the drive you want to test to this system as a secondary volume, and then run a benchmark tool such as HD Tune. You'll notice that performance starts at a relatively high level and then gradually tapers off. For this article, we tested a 1TB Western Digital Velociraptor drive and initially saw transfer rates in the vicinity of 210 megabytes per second, which gradually slowed to about 116 MBps. Similarly, access times were fastest in the early part of the test and grew slower as the test progressed.

This phenomenon occurs because hard drives are fastest when they access data from the outermost tracks on their platters. Given a constant spindle speed (10,000 rpm, in the Velociraptor's case), more of the platter passes under the read/write heads per rotation at the outer edge, so the drive can cover a larger area in a shorter amount of time there, resulting in better performance. For optimal system performance, you need to place your OS and all of your most commonly used applications and files in the fastest areas on the drive. Accomplishing this goal involves creating a primary partition of the correct size on the drive and then installing your OS and apps there. You can partition and use the remainder of the drive, too, but you should store only infrequently accessed data there.

With the Velociraptor hard drive we tested, performance began to drop noticeably at about the 200GB mark, as the HD Tune graph above indicates. By the 300GB mark, transfer rates had fallen by about 50 MBps from their initial speed, and they continued to decline from there. 200GB is plenty of space for a primary partition, so that's the size we'd make ours.
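If you'd like to approximate HD Tune's transfer-rate sweep yourself, the rough sketch below samples sequential read speed at a few offsets across a raw disk. The device path is a placeholder, it needs administrator privileges, and it ignores page-cache effects that a careful test would defeat (for example with direct I/O), so treat its numbers as indicative only.

```python
# Rough read-speed sampler across a disk, mimicking a transfer-rate
# sweep. Run against an idle, non-OS disk; requires root privileges.
import os
import time

DEV = "/dev/sdb"            # placeholder device under test
CHUNK = 64 * 1024 * 1024    # read 64 MB per sample point

def read_speed_mbps(offset):
    fd = os.open(DEV, os.O_RDONLY)
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        start = time.time()
        remaining = CHUNK
        while remaining > 0:
            data = os.read(fd, min(remaining, 1 << 20))
            if not data:
                break
            remaining -= len(data)
        return (CHUNK - remaining) / (time.time() - start) / 1e6
    finally:
        os.close(fd)

disk_bytes = 1000 * 10**9   # assume a ~1TB drive for illustration
for frac in (0.0, 0.2, 0.5, 0.8):
    speed = read_speed_mbps(int(disk_bytes * frac))
    print(f"{frac:.0%} into the disk: {speed:.0f} MB/s")
```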
Once you've identified the sweet spot on your drive, create a primary partition of the optimal size. You can do this either during the initial setup phase (when installing the OS) or while the drive is connected to a system whose OS is already installed.

To create a partition during a fresh installation of Windows, follow the on-screen prompts during the first phase of the setup process until you reach the point of choosing a target drive. Then click Drive Options (advanced), select your drive on the resulting screen, and specify the partition size. To create a partition on a drive connected to a system that already has Windows installed, connect the drive, boot into Windows, click the Start button, type Disk Management in the Search/Run field, and press Enter. The Disk Management utility will open and, if it detects a new blank drive, will usually launch a wizard. If no wizard launches, right-click the entry for the drive in the list at the bottom of the window, and choose the option to create a new volume.

Because Windows uses binary measurements in megabytes to specify partition sizes, 1 gigabyte contains 1024 megabytes. Consequently, in specifying our 200GB partition, we had to identify a partition size of 204,800MB (200 × 1024).

To gauge the performance benefits of short-stroking a hard drive, we ran a couple of popular benchmarks--HD Tune 5.0 and PCMark 7--on our 1TB Velociraptor hard drive, first with a single partition that spanned the entire drive and a second time with a primary partition consisting of the drive's highest-performance first 200GB of space. The key HD Tune read-test results were as follows (the full results table, which also covered maximum transfer rate and PCMark 7's Secondary Storage workloads such as Windows Media Center, did not survive formatting; the recoverable figures are below):

| HD Tune 5.0 (read test) | 1TB partition | 200GB partition |
|---|---|---|
| Average transfer rate | 164.1 MBps | 194.4 MBps |
| Minimum transfer rate | 116.2 MBps | 181.1 MBps |
| Average access time* | 7.13 ms | 5.43 ms |

* On this measure, lower scores indicate better performance.

Test system: Intel Core i7-2700K, Asus P8Z68-V Pro (Z68 Express), 8GB DDR3-1600, Western Digital Raptor 150GB (OS), Nvidia GeForce GTX 285, Microsoft Windows 7 Ultimate 64-bit

A hard drive's access times and minimum transfer rates benefit most from short stroking, though the average transfer rate will also jump significantly. According to HD Tune, our drive's minimum transfer rate increased from 116.2 MBps to 181.1 MBps, a boost of more than 56 percent. Also, our drive's average access time decreased from 7.13 ms to 5.43 ms, an improvement of about 23.8 percent. And the drive's average transfer rate saw a nice gain of 18.46 percent, from 164.1 MBps on the 1TB partition to 194.4 MBps on the optimized 200GB partition.

PCMark 7's Secondary Storage benchmark--a suite of trace-based tests that measure the performance of simulated real-world workloads, rather than raw transfer speeds and access times (as HD Tune does)--tells a somewhat different story. Though the gains reported by PCMark 7 are less dramatic than those identified by HD Tune, system performance improved nearly across the board. The drive's overall score increased by 1.63 percent after short stroking, with the biggest gain coming in the Windows Defender test, which saw an improvement of 4.06 percent.

Ultimately, short-stroking a hard drive won't raise your hard drive's performance to the level of a solid-state drive. Nevertheless, the right partition configuration can yield tangible gains, as our test results show. A fast storage subsystem usually delivers perceptible performance improvements for the end user, so if you're stuck with a hard drive in your system, why not ensure that it's configured for peak performance?

This story, "How to partition your hard drive to optimize performance" was originally published by PCWorld.
<urn:uuid:31847e8b-8f5e-4591-a25b-b63e01831b85>
CC-MAIN-2017-04
http://www.itworld.com/article/2726650/storage/how-to-partition-your-hard-drive-to-optimize-performance.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00331-ip-10-171-10-70.ec2.internal.warc.gz
en
0.907515
1,483
3.046875
3
Unix Security: How Do You Know When You've Been Owned?

So you suspect that something strange is happening with a server, but you're not quite sure what. Perhaps it's been compromised, perhaps not. Let's talk about some methods for figuring out what's up on Unix-based operating systems. It's not always straightforward. If you're running a Web server or user login server, it's even more difficult to tell whether you've experienced a security incident. Seemingly strange traffic could be legitimate; you just don't know. If your server is blasting out UDP packets as fast as possible, that's a pretty good indicator that it's being used in a DOS attack, but we're talking about more subtle issues.

First, we need to understand the difference between a simple user account compromise and a root compromise. Unlike on Windows, you don't need to format and reinstall the OS if a user account gets compromised in Unix. An attacker with an unprivileged account can't do anything too harmful, unless perhaps you're behind on kernel patches. Most of the time, if there's no evidence of root-level activity, you'll probably be fine with just disabling the affected user account.

Is a Web page compromise a system compromise? Sometimes, yes. Most of the time a compromised Web page is just that: defacement. Frequently spammers will fill a page with tons of links and other strange-looking data. Likewise, botnet owners may post exploit code so that other hosts can download it. These types of activities are relatively harmless: just restore the page from backup and try to fix the attack vector so it doesn't happen again. However, spammers will sometimes launch PHP or Perl scripts on your web server, which will then start sending out spam. This type of compromise is easy to track down: there will be a process running as the Web server user. Unfortunately, most of these exploits will download their code to /tmp and then delete themselves once they're running, so you don't know how they were able to get in in the first place. This is where your Web server logs come in handy.

The lsof command is your friend. When you first find that a strange process is running, the first thing to do is check what files it has open. You may have discovered this process from your network people, who told you that a Unix server was joining (or running, gasp!) a botnet server via port 8881, for example. You need to figure out what process has that port open, and then see what other files it's using. Most of the time you'll find an exploit written in Brazilian Portuguese or Russian stashed away somewhere in /tmp. Chances are that they downloaded it from another site via a vulnerable Web page. The Web server logs will show you exactly what script, most likely PHP, was involved. But chances are good that if they're running from /tmp it's nothing more than a user-level compromise. Repair the entrance point and get on with life.

What about a root compromise? This is where things get a bit more difficult. Most of the time a guessed user account will lead to an attacker getting a local account, after which they will attempt to run any number of root exploits locally. This is a dangerous place to be, but it isn't the end of the world. With properly configured logging, you may notice that a certain username is running a process that crashes via a segmentation fault (attempting to access memory that isn't its own). Inspect that account immediately. If a remote root exploit is used, you're compromised at the get-go. Your first step is to find out what they've done, and next you get to figure out how they did it.
The old method is to start searching for setuid files. Unfortunately, this is less than useful. There are setuid files all over the place on a standard Unix install, and if you don't know what's "normal," you won't know what's "strange." Finding a root-owned setuid file in /tmp, buried in /dev, or in someone's home directory is a pretty good indicator, though.

Next, check log files and wtmp. If someone has logged in to your server from an unknown location, you know something is up. Also check for open ports, and try to telnet to them. Root kits often include a 'bindshell,' which simply listens on a port and provides a root shell for anyone who tries to connect. The chkrootkit program is very useful for detecting root kits, assuming you don't have a brand new root kit. It inspects all kinds of things, more thoroughly than I can explain here.

Be careful running commands as root; they might have been tampered with. Most operating systems' package management systems include some sort of checksum. You can verify that programs haven't changed easily enough, though beware that it's certainly possible to alter a package checksum database. Of course, it'd be nice to know beyond the shadow of a doubt whether or not system binaries have been tampered with. This is where host intrusion detection software is useful. Products like Tripwire store a database of every file's checksum on a central server. It's a bit of a pain to configure initially, since the normal operation of a server results in hundreds of files changing daily. Once a sane list of files to monitor has been established, Tripwire becomes very useful.

There are many aspects of a server to check when a compromise is suspected. Each new clue will lead the investigator off in strange and unpredictable directions. A good, but dated, starting point is outlined on a CERT Web page. Programs like chkrootkit will uncover most of these items, but it's still a useful review of the "common" items to check. Every compromise is different, and the hardest part is discovering the attack vector. You want to prevent the intruders from compromising other servers that may be vulnerable to the same thing, so do some investigation before reinstalling. Yes, you must reinstall a Unix server if root was compromised: root kits are tricky, and you can never be 100 percent certain that you've repaired the system. Happy Hunting.
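As a footnote to the Tripwire discussion above, the checksum-baseline idea is simple enough to sketch in a few lines. The directories and baseline path below are illustrative, and unlike Tripwire this toy keeps its baseline on the same host, where an intruder could tamper with it; store the real thing on a separate, trusted server.

```python
# Toy host-integrity check: record SHA-256 sums of system binaries,
# then re-run to flag anything that changed since the baseline.
import hashlib
import json
import os
import sys

WATCH = ["/bin", "/sbin", "/usr/bin"]    # directories to monitor
BASELINE = "baseline.json"               # keep the real one off-host!

def checksums():
    sums = {}
    for root in WATCH:
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as f:
                        sums[path] = hashlib.sha256(f.read()).hexdigest()
                except OSError:
                    pass   # unreadable or vanished; skip it
    return sums

current = checksums()
if not os.path.exists(BASELINE):
    with open(BASELINE, "w") as f:
        json.dump(current, f)
    sys.exit("baseline written")

with open(BASELINE) as f:
    old = json.load(f)
for path, digest in sorted(current.items()):
    if path in old and old[path] != digest:
        print("CHANGED:", path)
```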
<urn:uuid:18283d6c-bd32-444a-8e43-072cb637dfcc>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/netsecur/article.php/3684851/Unix-Security--How-Do-You-Know-When-Youve-Been-Owned.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00147-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958938
1,270
2.71875
3
Trading Parallelism for Performance

It is a common belief that only sequential applications need to be adapted for parallel execution on multicore processors. However, many existing parallel algorithms are also a poor fit. They have simply been optimized for the wrong design parameters. In the past we have been striving for algorithms to maximize parallelism and at the same time minimize the communication between the threads. For multicore processors, however, the cost of thread communication is relatively cheap as long as the communicated data resides in a cache shared by the threads. Also, the amount of parallelism that can be exploited by a multicore processor is limited by its number of cores multiplied by the number of threads running on each core. Instead, a third parameter is gaining importance for parallel multicore applications: memory usage.

In this, the second article of the series, we contrast the behavior of a highly parallel state-of-the-art algorithm with that of a moderately parallel algorithm in which some of the parallelism has been traded for lower DRAM bandwidth demands. We show that the latter outperforms the highly parallel algorithm by a factor of three on today's multicore processors. The techniques used and some of the performance numbers are summarized here. A more detailed description of the algorithms discussed in this paper was presented at ICS 2006 together with colleagues and students from Uppsala University.

Highly Parallel Algorithm

The Gauss-Seidel algorithm (GS) is used to smooth an array with NxN elements. The original GS algorithm is pictured in Figure 1a. The new value (yellow) for each element of an NxN array is calculated as the average of its own and its four neighbors' values. The elements of the array are updated row-wise. The element numbers in the figure refer to their iteration age. At the end of each iteration, convergence is checked and, if the condition is not met, the array will be iterated again. Typically, the array is iterated 10–30 times before convergence is met.

The red arrows in Figure 1a indicate the data dependencies of this algorithm. The new values to the left of and above the yellow element have to be calculated before the yellow value can be calculated. These data dependencies make the original algorithm hard to parallelize. Figure 1b shows the popular red/black variation of the algorithm, where only every other element is updated in a sweep of the array (the update of red elements is shown in Figure 1b). In a second sweep, the other (black) elements are updated. Unlike the original scheme, this red/black algorithm has no data dependencies during sweeps, since red elements do not depend on any other red elements. In other words, all the elements of a sweep can theoretically be updated in parallel: its parallelism is N²/2.

Figure 1c shows how two cores may divide the work. This scheme keeps the communication between the cores at a minimum: only the values of the elements on the border between the threads need to be communicated, and the threads only need to synchronize once per sweep. So, according to the old definition of a good algorithm, the red/black algorithm is close to perfect: plenty of parallelism and a minimum of communication. There is only one drawback: it runs slowly on a multicore processor, as shown in Figure 2.

Typically, the array size used with Gauss-Seidel is too large to fit in a multicore processor cache. Each iteration will force the entire array to be read from memory.
Actually, for the red/black scheme, the array will have to be read twice per iteration, first during the red updates and then during the black updates. This will quickly saturate the DRAM bandwidth and limit the performance on a multicore processor.

Finding the Door in the Memory Wall

Instead of just maximizing parallelism, we could try to minimize DRAM bandwidth usage for a GS implementation. If we apply a blocking scheme to the original GS algorithm, we can keep an active subset of the array, called a block, in the cache and reuse these elements many times before the data are evicted from the cache. Because of the data dependencies of the original GS algorithm (the red arrows in Figure 1a) we have to apply a sliding blocking technique, shown in Figure 3a. The active block inside the red frame shown includes three rows. Once the next-iteration values for all the elements in the block have been updated, as shown in 3a, the block is slid down one row, as shown in Figure 3b, and the next-iteration values for those elements are updated. This improves the reuse of element values while they reside in the cache.

Using this scheme, each element of the array will advance three iterations per sweep, which means that the array is only read from DRAM every third iteration. This implies that only one sixth of the DRAM bandwidth is needed compared with the red/black algorithm, which reads the array twice per iteration. If the number of rows in the active block increases, even less DRAM bandwidth will be needed. Figure 4 shows the relationship between bandwidth usage and block size, as shown by the ThreadSpotter tool, for different cache sizes. A typical last-level multicore cache is in the 2-12 Mbyte range. Bandwidth demand can be reduced in this range by more than an order of magnitude using the blocking GS scheme instead of the red/black GS scheme.

Figure 3c shows a parallel version of the blocked GS. A drawback is that the threads will have to synchronize row-wise to make sure the thread to the left stays slightly ahead of the thread to the right. In sum, the blocked GS algorithm produces about an order of magnitude more thread communication than red/black GS. Also, its parallelism is much worse: on the order of N parallel threads (one per column) can help out simultaneously, compared with the N²/2 parallelism of the red/black algorithm. Still, it outperforms the red/black algorithm by a factor of three on a dual-socket quad-core system thanks to its much lower DRAM bandwidth needs. Similar results have been observed when comparing 3D versions of the algorithms.

Figure 5 compares the performance of the two algorithms when running on a two-socket quad-core system. The red/black algorithm saturates the bandwidth with only two active cores, while the sliding GS algorithm scales well even on a two-socket system without any special thread placement applied.

In Part 1 of this article series, we saw how a throughput workload created a superlinear slowdown on a multicore architecture due to increased cache pressure, and in this article — Part 2 of the series — we were forced to change "the ideal" highly parallel, low-communication algorithm in order for it to run well on a multicore processor. This once more drives home the point made by Sanjiv Shah a couple of weeks ago: only focusing on parallelism is not always the best way to get good performance on a multicore architecture. In Part 3, we will take a look at various techniques for identifying when optimizations are needed and compare a few simple optimization tricks.
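To make the contrast concrete, here is a much-simplified, single-threaded Python sketch of the two update schemes. It glosses over boundary handling and all threading, and the row counts are illustrative; the point is only to show why the sliding block streams the array from DRAM far less often.

```python
import numpy as np

def gs_update_rows(u, rows):
    # Gauss-Seidel update of the given interior rows: each element
    # becomes the average of itself and its four neighbours.
    for i in rows:
        for j in range(1, u.shape[1] - 1):
            u[i, j] = (u[i, j] + u[i - 1, j] + u[i + 1, j]
                       + u[i, j - 1] + u[i, j + 1]) / 5.0

def plain_gs(u, iters):
    # Whole-array iteration: every iteration streams the full array
    # through the cache, so DRAM traffic grows linearly with `iters`.
    for _ in range(iters):
        gs_update_rows(u, range(1, u.shape[0] - 1))

def sliding_block_gs(u, sweeps, block_rows=3):
    # Sliding-block GS: a band of `block_rows` rows stays cache-resident
    # while it is updated, and each element advances `block_rows`
    # iterations per sweep, so the array is read from DRAM only once
    # per `block_rows` iterations instead of once per iteration.
    n = u.shape[0]
    for _ in range(sweeps):
        for top in range(1, n - 1):
            gs_update_rows(u, range(top, min(top + block_rows, n - 1)))

u = np.random.rand(64, 64)
sliding_block_gs(u, sweeps=10)   # roughly 30 iterations' worth of smoothing
```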
About the Author

Erik Hagersten is chief technology officer at Acumem, a Swedish-based company that offers performance analysis tools for modern processors. He was the chief architect for high-end servers at Sun Microsystems (the former Thinking Machines development team) for six years before moving back to Sweden in 1999. Erik was a consultant to Sun until Acumem started in 2006. Since 2000 his research team at Uppsala University has developed the key technology behind Acumem.
<urn:uuid:b18b3fd7-8562-4330-bdaf-ee068ac85985>
CC-MAIN-2017-04
https://www.hpcwire.com/2009/03/23/finding_the_door_in_the_memory_wall_part_2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00480-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93004
1,552
3.046875
3
Adding server power only improves site server performance to a point: once server power is maxed out, the physical distance between a site's host server and the visitor becomes a key component in how long a page takes to load. Content Delivery Networks (CDNs) are a widely used solution to improve load times and performance on websites by decreasing the physical distance data travels. Not every CDN is created equal, however. Your business may opt to measure latency and acceleration to determine which CDN, if any, best suits its audience.

A server's "megabits per second" performance only tells part of the tale when it comes to load speed. Bandwidth measures how much data moves at once, while latency measures how long that data takes to move from the source to the destination. Ookla Speedtest explains the situation with a pipe metaphor: latency measures how much time it takes for water to enter a pipe and reach the end of that pipe, while bandwidth measures the pipe's diameter. Moving large amounts of data, such as app updates, isn't time-sensitive in terms of when it starts, so latency is not an issue there. However, latency is an extremely important performance metric for things like loading web pages, which should take at most a few seconds.

How CDNs Work

CDNs utilize a network of servers across multiple geographical locations that mirror website content from the original source. When website visitors access a web page, their devices can receive the information from a physically closer server, reducing the time it takes for site data to reach the viewer. It's a lot like buying milk from the corner store down the street from your house instead of driving out to a rural farm to get it. However, CDNs don't improve performance for everyone. For example, if the person trying to buy the metaphorical milk lived closer to the dairy farm, going to the store would take longer. Additionally, if the dairy farm and corner store were equidistant from the milk shopper, they would not see a performance boost. CDNs can also help with capacity and bandwidth management.

Calculating just how much a CDN improves a site's performance in one location is straightforward: measure how long a page takes to load before and after utilizing the CDN. In practice, however, testing how well a site and CDN are performing across different regions is tricky without designated testing stations. The acceleration test is often accomplished by measuring how long it takes to download files of various sizes, in each tested geographical location, from the mirror server and from the host server.

According to cloud services provider Radware, picking the most effective CDN often requires some market research to identify where a site's users are located. For example, a site that's hosted out of Boston and does most of its traffic in the Northeastern and Western United States would benefit more from a CDN that improves load times in Los Angeles, Portland, and Seattle than one that boosts load times in Boston, Beijing, and London. Acceleration testing data can help businesses make smart decisions when it comes to CDNs. Find out more about how Apica can support your CDN testing process from more than 83 countries and 2,600 monitoring nodes across the globe on our website.
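As a rough illustration of the acceleration test described above, the sketch below times downloads of the same asset from an origin host and from a CDN hostname. Both URLs are placeholders, and a real test would repeat this from vantage points in each target region rather than from a single machine.

```python
# Rough before/after acceleration test: time repeated downloads of the
# same file from the origin host and from the CDN hostname.
import time
import urllib.request

URLS = {
    "origin": "https://origin.example.com/static/app.js",
    "cdn":    "https://cdn.example.com/static/app.js",
}

def best_fetch_ms(url, runs=5):
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        samples.append(time.perf_counter() - start)
    return min(samples) * 1000   # best run filters out transient noise

for name, url in URLS.items():
    print(f"{name}: {best_fetch_ms(url):.0f} ms")
```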
<urn:uuid:68b2fb01-45c1-4b7c-b839-7be47696b3bb>
CC-MAIN-2017-04
https://www.apicasystem.com/blog/cdn-companies-measure-latency-acceleration/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00140-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934052
679
2.734375
3
Physicists studying the results of tests at the Large Hadron Collider (LHC) at CERN, the Swiss nuclear research laboratory outside Geneva, have a lot of data to ponder. In fact, they have nearly 50% more than they had originally estimated. Initially, the LHC was expected to generate 15 petabytes of usable data each year. Recent reports have raised that number to more than 22 petabytes annually. However, CERN says that its tests produce vastly more data than gets studied. Some experiments can create up to one petabyte of data per second. Lucky for the data storage managers, not to mention the tired-eyed physicists, on average all but 200 Mbytes per second of that petabyte are deemed "uninteresting data" and discarded by the system.

It's Big Data quantities like those we see at the LHC that helped prompt the U.S. government late last month to announce $200 million in research and development funds specifically for scientists confronting the data deluge.

I am optimistic that those of us on the technology side will be up to the task of handling the data needs of science. But even I was daunted by recent news of the Square Kilometer Array (SKA). Headquartered in Manchester, UK, with a target completion date in 2024, the proposed 20-nation radio telescope research project dwarfs all other Big Data initiatives I've seen so far. This single project is currently estimated to produce one exabyte of data every day, or the equivalent of six weeks' worth of the total volume of data traversing the Internet in 2011.

Large-scale, multi-nation collaborative science projects, such as the LHC and SKA, are relatively small in number but confront Big Data problems on a scale few of us can imagine. They bear watching because almost all of us can learn from their Big Data solutions.
<urn:uuid:134c0e72-3564-40d8-adb4-6ebe97cfd45b>
CC-MAIN-2017-04
http://www.itworld.com/article/2725667/big-data/cern--us--uk-projects-push-big-data-limits.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00534-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933084
507
2.625
3
Because of the compute and power density of petascale systems, all new supercomputer facilities are being built with energy efficiency in mind. This includes the new supercomputer center at the University of California at San Diego and the facility under construction at the University of Illinois at Urbana-Champaign. The latter is being built to house the multi-petaflop "Blue Waters" supercomputer in 2011. Both datacenters will route chilled water directly into the computer housing, a much more efficient cooling method than forced air.

State-of-the-art cooling for petascale machines is now a given, but even industrial HPC datacenters are going green. This includes the CFD centers being built for Formula One racecar design. While racecars aren't exactly known for their fuel efficiency, there are plenty of opportunities to save energy when developing them. Most serious F1 teams now use high performance computers to help design these cutting-edge autos, so choosing the right HPC system and housing it in a well-designed facility can go a long way toward minimizing environmental impact.

At SC08 this week, Appro announced it had completed the final deployment of a 38-teraflop Xtreme-X supercomputer for the ING Renault F1 Team. The new system embodies pretty much the latest generation of cluster technology, with AMD quad-core nodes lashed together with DDR InfiniBand.

The Appro machine represents a new level of commitment to HPC by the F1 team at Renault. Its previous machine was a 1.8 teraflop cluster housed in a conventional forced-air computer room. The new system lives in a brand new Computational Aerodynamics Research Centre located in the English countryside, north of Oxford. The facility was built green, not just in terms of energy efficiency but also in regards to overall environmental impact. According to Graeme Hackland, the CFD center's IT manager, they were committed to operating an environmentally responsible facility from the start, and lessons learned from their previous computing facility led them to develop a much more energy-efficient plan.

Since the facility was built in the countryside, they had to negotiate with local farmers to bring the electric cable across their fields, while also working with Scottish and Southern Energy to get the energy onsite. "The cost of upgrading energy on this site is going to be huge, so the more we can do to reduce waste, the better it is," explained Hackland.

The whole structure, which includes the offices and the computer room, was built underground. Undoubtedly, this was more expensive to build than an above-ground structure, but it was still just one-fourth the cost of building a new wind tunnel, even taking into account the cost of the computer hardware. The unconventional design also presented another immediate advantage: the underground nature of the building meant they had no planning restrictions. The request for the new structure passed on its initial application. In the UK, where land is especially precious, there are many more land-use restrictions than in the US, so getting past the local planning commission is a big deal.

The other nice attribute of an underground facility is an evenly cool temperature. Once you get into the subsoil, the temperature varies very little from season to season, since the soil acts as an enormous thermal buffer. In the middle of England, the temperature below ground is about 10 degrees Celsius (50 degrees Fahrenheit).
While this may be a bit chilly for humans, it’s pretty much perfect for sweaty supercomputers. Of course, you can’t rely on the ambient temperature of the room to cool a multi-teraflop cluster, even at 50 degrees Fahrenheit. The Appro machine is water cooled, using APC’s InfraStruXure solution, which allows them to cool just the hot aisle instead of the whole room. No forced air is used at all, saving even more energy. Furthermore, the CFD center operators have plans to recycle some of the waste heat for use in the rest of the facility.

Presently the CFD center is using about 40 percent of its allotted power, so they have some room for further expansion. They’re also counting on increases in performance per watt as new processors and systems are rolled out. Since the size of the datacenter is static, computational density is also important. Here again, they’re counting on Moore’s Law and clever system engineers to keep shrinking computers.

So is the Appro cluster performing as expected in its new digs? It’s probably too soon to tell. The Renault engineers have only had access to the machine for production work since late summer. They’ve already used the system for some design mods for two of the races for this year’s R28 F1 racecar, but the 2008 circuit is coming to a close. Most of the CFD design work is now being applied toward next year’s R29. The first physical iteration of that car is expected before Christmas.

Wayne Glanfield, the CFD Analysis project leader, says that with the larger system they’re able to run more simulations concurrently, vastly improving turnaround time for design explorations. They’re also able to run much more refined simulations than they could with the 1.8-teraflop machine. On the old system, only 10 percent of the aerodynamic design was done on the cluster; the remainder was accomplished with physical modeling in the wind tunnel. With the new system they’re aiming for a 50-50 split. “We’re currently running about three times the size of the model we were previously running,” said Glanfield. “Our option was to build a second wind tunnel, or to do this — to go for CFD in a really big way.”
<urn:uuid:d3346027-96a5-4285-aa26-79036ce946f2>
CC-MAIN-2017-04
https://www.hpcwire.com/2008/11/19/the_greening_of_renaults_formula_one_cfd_program/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00168-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961212
1,203
2.875
3
Since 1982, 37,000 people, including 7,000 Americans, have survived potentially disastrous incidents because of the COSPAS-SARSAT rescue network. That record earned the satellite system an induction into the Space Technology Hall of Fame today.

The honor recognizes technologies originally developed for space applications that ultimately improve life on Earth, and few technologies rival COSPAS-SARSAT in life-preserving metrics. In 2013 alone, COSPAS-SARSAT’s network of satellites that detect and locate distress signals from emergency beacons led to the rescue of 253 people from potentially deadly situations.

The network involves numerous satellites, including the National Oceanic and Atmospheric Administration’s geostationary and polar-orbiting satellites. Altogether the program comprises 43 countries and organizations. “The technology on NOAA satellites is not just for gathering environmental intelligence and weather forecasting, it also saves lives thanks to our role with COSPAS-SARSAT,” said Mary Kicza, assistant administrator for NOAA’s Satellite and Information Service.

The United States, Canada, France and the Soviet Union teamed up in 1979 to form COSPAS-SARSAT. Notably, the project managed to survive the political divisiveness of the Cold War in the 1980s and continued to grow. Hundreds of thousands of aircraft, ships and other off-terrain vehicles are now outfitted with emergency beacons that, when activated, set into motion a chain of events that ultimately ensures emergency rescues can take place.

“It is an honor for COSPAS-SARSAT to receive this prestigious distinction. It is high praise, not only for the creators of the technology and the team of scientists and technicians behind the scenes, but the brave first responders, who make the rescues,” said Chris O’Connors, program manager for NOAA SARSAT.
<urn:uuid:03eb4961-9c02-4733-89f8-73464c88ef2a>
CC-MAIN-2017-04
http://www.nextgov.com/cloud-computing/2014/05/satellite-rescue-network-gets-space-technology-hall-fame-recognition/85174/?oref=ng-dropdown
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00076-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92011
388
3.265625
3
In this entry we introduce block ciphers in a general way, as well as their modes of operation. Further, we'll see how to generate message authentication codes (MAC) using block ciphers.

As we already said in the previous entry, block ciphers are symmetric ciphers which encrypt fixed-length blocks. A block cipher generally applies a series of operations combining the input block and the secret key (which isn't necessarily the same length) to obtain the output block (ciphertext). Since they are symmetric, the decryption primitive uses the same key as the encryption primitive, and applies the operations needed to get back the plaintext at its output.

Most block ciphers can be classified as product ciphers or iterative block ciphers, based on a series of basic operations (rounds) which are repeated a number of times. These rounds provide confusion and diffusion to the cipher, two concepts identified by Shannon in his famous treatise on communication theory. Confusion refers to obscuring the relationship between ciphertext and key as much as possible, while diffusion refers to destroying the statistical characteristics of the message source. Shannon identified these concepts and established the need for a secure cipher to provide them.

These kinds of ciphers are generally Substitution-Permutation Networks (SPN), where several permutations (scrambling) and substitutions (changing values for others) take place one after the other, using a key, trying to achieve the goal: destroy the statistical properties of the source and obtain a secure cipher. In subsequent entries we'll see how DES and AES, two well-known symmetric encryption standards, work. The remainder of this article treats block cipher modes of operation and how to authenticate messages using these ciphers.

Modes of operation

We'll see now some constructions that allow the use of a block cipher to encrypt texts larger than the block length. Some of them can be viewed as stream ciphers in which a key stream is generated and gets mixed with the plaintext.

First, we'll look at the simplest way of using a block cipher, the construction that first comes to mind: divide the plaintext into blocks of the suitable length and encrypt each of them. This is what we call Electronic Codebook Mode (ECB), and as can easily be observed, it maintains the structure of the plaintext at the block level (not inside blocks): two identical plaintext blocks produce the same ciphertext block under the same key.

After ECB, one of the most famous modes is Cipher Block Chaining (CBC). In this case, the plaintext is also divided into several blocks, but each of them is XORed with the previous ciphertext block before being encrypted with the secret key:

C_i = E_K(P_i XOR C_(i-1))

where C_0 would be the so-called Initialization Vector (IV), which can be different each time but doesn't need to be secret. Actually, it's usually known, either being a fixed value defined in the concrete protocol's specifications or sent together with the message as a header. In this way, each encrypted block depends on every previous block. A single bit change in one of the plaintext blocks produces a cascade effect and makes all the following ciphertext blocks completely different. Clearly, message structure at the block level is not revealed.

But CBC is not the only option. For instance, the Output Feedback Mode (OFB) generates a bit stream to be used as a key, in the purest stream-cipher style.
In OFB, the cipher is initialized with an IV in the same way as CBC, but the IV itself is encrypted using the secret key. The resulting block provides the initial k bits of key stream, which are XORed with the plaintext to produce the ciphertext. To generate the next keystream bits, the previous output block is encrypted again. Using the usual notation:

O_0 = IV
O_i = E_K(O_(i-1))
C_i = P_i XOR O_i

Obviously, decryption is performed by calculating the same keystream and XORing it with the ciphertext. This construction creates a stream cipher, and as with other stream ciphers, if one bit is flipped in the plaintext, it will also be flipped in the ciphertext (and the other way around) due to the usage of XOR.

Another quite common mode is the counter mode (CTR), in which a counter is used at the input of the block cipher, and the output is used as keystream in the same way as in OFB. These are not all the existing modes, but the intention is simply to provide an overview of the options and to refer the interested reader to other sources. See for instance the famous Applied Cryptography by Bruce Schneier, or the Handbook of Applied Cryptography.

Message Authentication Codes

One of the problems that cryptography has tried to solve is the authentication of the data origin. This is, trying to assure that a message has been actually created by a certain person, machine or, more in general, entity. The solution to this problem based on symmetric crypto is known as Message Authentication Codes, or MACs.

These codes are just a short block of bits generated by some algorithm using a secret key and a plaintext message. The most common construction for generating these codes is based on using a block cipher in CBC mode, but taking just the last block as the MAC. As we've seen previously, this last block depends on all the previous blocks, as well as on the key. Therefore, this code is bound to the complete message (providing message integrity) as well as to the entity with whom the secret key is shared (providing data origin authentication). Thus, the receiver of the message, who shares a secret key with the source, is able to check whether the message was actually generated by the expected entity and that it has not been altered.
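To make the CBC chaining and the CBC-MAC construction concrete, here is a minimal Python sketch. The "block cipher" below is just a keyed XOR standing in for a real primitive such as AES, so it is completely insecure, and padding and error handling are omitted for brevity; only the chaining and last-block MAC logic mirror the description above.

    def toy_block_cipher(block, key):
        # Stand-in for a real block cipher like AES -- NOT secure.
        return bytes(b ^ k for b, k in zip(block, key))

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def cbc_encrypt(plaintext, key, iv, block_size=16):
        assert len(plaintext) % block_size == 0  # padding omitted for brevity
        prev, blocks = iv, []
        for i in range(0, len(plaintext), block_size):
            # C_i = E_K(P_i XOR C_(i-1)), with C_0 = IV
            prev = toy_block_cipher(xor_bytes(plaintext[i:i + block_size], prev), key)
            blocks.append(prev)
        return b"".join(blocks)

    def cbc_mac(message, key, block_size=16):
        # CBC-MAC: run CBC with a zero IV and keep only the last ciphertext block.
        ciphertext = cbc_encrypt(message, key, bytes(block_size), block_size)
        return ciphertext[-block_size:]

    key = b"0123456789abcdef"
    iv = b"fedcba9876543210"
    message = b"attack at dawn!!" * 2  # two 16-byte blocks

    print(cbc_encrypt(message, key, iv).hex())
    print(cbc_mac(message, key).hex())

Flipping any plaintext byte changes the resulting tag, which is exactly what lets the receiver detect tampering.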
<urn:uuid:e833a296-6b0c-4e2a-ab1a-c5711ff2d48c>
CC-MAIN-2017-04
https://www.limited-entropy.com/crypto-series-block-ciphers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00076-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942974
1,193
4.21875
4
A very common question we see here at Bleeping Computer involves people concerned that there are too many SVCHOST.EXE processes running on their computer. The confusion typically stems from a lack of knowledge about SVCHOST.EXE, its purpose, and Windows services in general. This tutorial will clear up this confusion and provide information as to what these processes are and how to find out more information about them.

Before we continue learning about SVCHOST, let's get a quick primer on Windows services. Services are Windows programs that start when Windows loads and that continue to run in the background without interaction from the user. For those familiar with Unix/Linux operating systems, Windows services are similar to *nix daemons.

For the most part Windows services are executable (.EXE) files, but some services are DLL files as well. As Windows has no direct way of executing a DLL file, it needs a program that can act as a launcher for these types of programs. In this situation, the launcher for DLL services is SVCHOST.EXE, otherwise known as the Generic Host Process for Win32 Services. Each time you see a SVCHOST process, it is actually a process that is managing one or more distinct Windows DLL services.

Outlined below are several methods, depending on your Windows version, to see what services a SVCHOST.EXE process is controlling on your computer, as well as some advanced technical knowledge about svchost for those who are interested.

Process Explorer, from Sysinternals, is a process management program that allows you to see the running processes on your computer and a great deal of information about each process. One of the nice features of Process Explorer is that it also gives you the ability to see what services a particular SVCHOST.EXE process is controlling. First you need to download Process Explorer from the Sysinternals site. Download the file and save it to your hard drive. When it has finished downloading, extract the file into its own folder and double-click on procexp.exe to start the program. If this is your first time running the program, it will display a license agreement. Agree to the license agreement and the program will continue. When it is finished loading you will be presented with a screen containing all the running processes on your computer as shown in the figure below. Remember that the processes you see in this image will not be the same as what is running on your computer.

Process Explorer Screen

Scroll through the list of processes until you see the SVCHOST.EXE process(es). To find out which services are running within a particular SVCHOST.EXE process we need to examine the properties for the process. To do this, double-click the SVCHOST.EXE entry in Process Explorer and you will see the properties screen for the process, like in the image below. Finally, to view the services running in this process, click on the Services tab. You will now see a screen similar to the one below. This window displays the services that are being managed by this particular SVCHOST.EXE process. As you can see, the SVCHOST.EXE that we are currently looking at in this tutorial is managing the DCOM Server Process Launcher and Terminal Services. Using this method you can determine what services a SVCHOST.EXE process is controlling on your computer.
For those who like to tinker around in a Windows command prompt/console window, and have Windows XP Pro or Windows 2003, there is a Windows program called tasklist.exe that can be used to list the running processes, and services, on your computer. To use tasklist to see the services that a particular SVCHOST.EXE process is loading, just follow these steps:

1. Click on the Start button and then click on the Run menu command.
2. In the Open: field type cmd and press enter.
3. You will now be presented with a console window. At the command prompt type tasklist /svc /fi "imagename eq svchost.exe" and press the enter key. You will see a list of the processes on your computer as well as the services that a SVCHOST.EXE process is managing. This can be seen in the image below.

TaskList /svc output

When you are done examining the output, you can type exit and press the enter key to close the console window.

Windows Vista and Windows 7 have enhanced the Windows Task Manager, and one of its features allows us to easily see what services are being controlled by a particular SVCHOST.EXE process. To start, simply open the Task Manager by right-clicking on the task bar and then selecting Task Manager. When Task Manager opens, click on the Processes tab. You will now be presented with a list of processes that your user account has started, as shown in the image below.

Windows 7's Current User Processes

We, though, need to see all of the processes running on the computer. To do this, click on the button labeled Show All Processes. When you do this, Windows may prompt you to allow authorization to see all the processes, as shown below.

Show all Processes Confirmation

Press the Continue button and the Task Manager will reload, but this time showing all the processes running in the operating system. Scroll down through the list of processes until you see the SVCHOST processes, as shown in the image below.

All Windows 7 Processes

Right-click on a SVCHOST process and select the Go to Service(s) menu option. You will now see a list of services on your computer with the services that are running under this particular SVCHOST process highlighted. Now you can easily determine what services a particular SVCHOST process is running in Windows Vista or Windows 7.

The Windows 8 Task Manager makes it much easier to find what services are running under a particular SVCHOST.exe instance. To access the Task Manager, type Task Manager from the Windows 8 Start Screen and then click on the Task Manager option when it appears in the search results. This will open the basic Task Manager, as shown in the screenshot below. To see the list of processes, click on the More details option. Scroll down until you see the Windows Processes category and look for the Service Host entries, as shown in the image below. Next to each Service Host row will be a little arrow. Click on this arrow to expand that particular Service Host entry to see what services are running under it. Under the expanded Service Host, you will now see the list of services running under it. This allows you to easily determine what services a particular SVCHOST process is managing in Windows 8.

Now that we know that a single SVCHOST.EXE process can load and manage multiple services, what determines which services are grouped together under a SVCHOST instance? These groups are determined by the settings in the following Windows Registry key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Svchost

Under this key are a set of values that group various services together under one name.
Each group is a REG_MULTI_SZ Registry value that contains a list of service names that belong to that group. Below you will see two of the standard groups found in XP Pro:

Group | Services in the group
LocalService | Alerter, WebClient, LmHosts, RemoteRegistry, upnphost, SSDPSRV
netsvcs | 6to4, AppMgmt, AudioSrv, Browser, CryptSvc, DMServer, DHCP, ERSvc, EventSystem, FastUserSwitchingCompatibility, HidServ, Ias, Iprip, Irmon, LanmanServer, LanmanWorkstation, Messenger, Netman, Nla, Ntmssvc, NWCWorkstation, Nwsapagent, Rasauto, Rasman, Remoteaccess, Schedule, Seclogon, SENS, Sharedaccess, SRService, Tapisrv, Themes, TrkWks, W32Time, WZCSVC, Wmi, WmdmPmSp, winmgmt, TermService, wuauserv, BITS, ShellHWDetection, helpsvc, xmlprov, wscsvc, WmdmPmSN

Each of the service names in these groups corresponds to a service entry under the Windows Registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services

Under each of these service entries there is a Parameters subkey that contains a ServiceDLL value which corresponds to the DLL that is used to run the service.

When Windows loads, it begins to start services that are enabled and set to automatic startup. Some services are started using the SVCHOST.exe command. When Windows attempts to start one of these types of services and there is currently no svchost instance running for that service's group, it will create a new SVCHOST instance and then load the DLL associated with the service. If, on the other hand, there is already a SVCHOST process running for that group, it will just load the new service using that existing process. A service that uses SVCHOST to initialize itself provides the name of the group as a parameter to the svchost.exe command. An example would be:

C:\WINDOWS\system32\svchost.exe -k DcomLaunch

In the above command line, the svchost process will look up the ServiceDLL associated with the service name from the DcomLaunch group and load it.

This can be confusing, so let's use an example. There is a Windows service called Distributed Link Tracking Client which has the service name TrkWks. If we examine the table above, we can see that the TrkWks service is part of the netsvcs group. If we look at the Registry key for this service we see that its ServiceDLL is %SystemRoot%\system32\trkwks.dll. Therefore, using this information and what we learned above, we know that the executable command for the TrkWks service must be:

C:\WINDOWS\system32\svchost.exe -k netsvcs

When the TrkWks service is started, Windows will check to see if there is a SVCHOST process for the netsvcs group already created. If not, it will create an instance of one to handle services in the netsvcs group. The SVCHOST process for netsvcs will then start the service by executing %SystemRoot%\system32\trkwks.dll. Once the DLL has been loaded by SVCHOST, the service will be in a started state.

Now that you understand what SVCHOST.EXE is and how it manages certain Windows services, seeing multiple instances in your process list should no longer be a mystery or a concern. It is not uncommon to see numerous SVCHOST entries, sometimes upwards of 8 or 9 entries, running on your computer. If you are concerned with what is running under these processes, simply use the steps described above to examine their services. If you are unsure what a particular service does and need help, feel free to ask any question you may have in one of our Windows forums.
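If you want to inspect these groups on your own machine, a short read-only sketch along the following lines should work with Python's standard winreg module (no administrative rights are needed to read the key; the output formatting is just illustrative):

    import winreg

    SVCHOST_KEY = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Svchost"

    # Enumerate the REG_MULTI_SZ values: each value name is a service group
    # and its data is the list of service names belonging to that group.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SVCHOST_KEY) as key:
        index = 0
        while True:
            try:
                name, data, value_type = winreg.EnumValue(key, index)
            except OSError:
                break  # no more values under the key
            if value_type == winreg.REG_MULTI_SZ:
                print(f"{name}: {', '.join(data)}")
            index += 1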
<urn:uuid:46085231-0312-4eeb-9a86-dfdf3a0effc4>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/tutorials/list-services-running-under-svchostexe-process/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00470-ip-10-171-10-70.ec2.internal.warc.gz
en
0.914383
2,691
2.5625
3
June 6 — A University of Wyoming professor of mechanical engineering is among the authors of a new report that offers guidance to NASA regarding the computational resources it will need to design aircraft and spacecraft in the future.

Dimitri Mavriplis, head of UW’s lab in computational fluid dynamics (CFD), joined researchers from Stanford University, the Massachusetts Institute of Technology, Boeing, Pratt & Whitney, and the National Center for Supercomputing Applications to produce the report for NASA. Titled “CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences,” the report includes recommendations for the nation’s aeronautical research agency to continue advancing aerospace design.

“Sustaining future advances in CFD and related multidisciplinary analysis and optimization tools will be critical for achieving NASA’s aeronautics goals, invigorating NASA’s space program, keeping industry competitive, and advancing aerospace engineering in general,” the authors wrote. “The improvement of a simulation-based engineering design process in which CFD plays a critical role is a multifaceted problem that requires a comprehensive long-term, goal-oriented research strategy.”

CFD is a branch of fluid mechanics that uses computer-based numerical methods and algorithms to analyze and solve fluid flow and aerodynamic problems. Use of computer simulations to test aircraft designs, instead of costly and time-consuming wind-tunnel tests, has revolutionized aeronautical engineering.

Mavriplis, who will mark a decade at UW this year, is one of the nation’s leaders in the field. Before coming to UW, he worked for 16 years at NASA’s Langley Research Center in Hampton, Va. He is one of UW’s major users of the National Center for Atmospheric Research Wyoming Supercomputing Center (NWSC) in Cheyenne, which contains one of the world’s most powerful supercomputers. It can operate at 1.5 petaflops (equal to 1.5 quadrillion computer operations per second).

The report for NASA notes that further technological advances are expected, with a move to exascale computing — machines 1,000 times faster than today’s fastest supercomputers. That will require a rethinking of current CFD algorithms and software, along with more power-efficient computing hardware. The report also says that NASA’s investment in basic research and technology development for simulation-based analysis and design has declined significantly in the last decade “and must be reinvigorated if substantial advances in simulation capability are to be achieved.”

Mavriplis’ personal research goals include achieving more accurate aerodynamic simulations by resolving increasingly fine details of turbulence; being able to run simulations for more realistic and complex aerospace vehicle configurations; and being able to run such simulations faster, eventually enabling engineers to perform such computations on desktop computers or tablets.

Source: University of Wyoming
<urn:uuid:08ec344b-2427-46ba-a589-cd3d59841c3d>
CC-MAIN-2017-04
https://www.hpcwire.com/off-the-wire/uw-professor-advises-nasa-supercomputing-future/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00194-ip-10-171-10-70.ec2.internal.warc.gz
en
0.914611
614
2.765625
3
New imaging technology emerging out of one of the largest non-profit integrated healthcare systems in the country, the University of Pittsburgh Medical Center (UPMC), now promises to speed up the process of digitizing images from x-rays, CT scans and MRIs. UPMC has created a completely "filmless," all-digital system for handling x-ray and other medical images. The hospital system does more than 1 million exams requiring medical images each year.

With iSite, doctors at UPMC no longer need to order x-ray films from storage the day before a patient is scheduled to be seen. Instead, they can now instantly call up the x-rays from any desktop computer at ten of the hospitals in the UPMC system. That improves patient care, says Dan Drawbaugh, UPMC's CIO. Doctors can look at all of a patient's images, comparing current x-rays, for example, with ones taken a year ago. Putting medical images online also removes the need for expensive courier services to shuttle x-rays, or -- as sometimes happens when doctors are pressed -- patients themselves carrying x-rays between doctor's offices. It also cuts radiology costs, which have been climbing steadily and now represent about six percent of health care costs nationwide, according to Drawbaugh.

Medical orphans: X-rays

While growing numbers of medical facilities have put written patient records online, only about 10% of hospitals today have digital image systems, which the industry refers to as PACS, for Picture Archiving and Communication Systems, says Drawbaugh. That can make things awkward for doctors, according to Guy Creese, research director at the Aberdeen Group. "As more and more written medical information has gone online," he says, "x-rays, CAT scans and other digital images have become 'orphans' of sorts. If I go to my doctor, and he has my medical records online, but not my x-rays, he has to flip back and forth between reading my history online and looking at the physical x-rays."

Advances in technology are now making it easier to put medical images online. "One of the things that makes this possible now is the plummeting price of disk drives," says Creese. "Ten years ago, you would have had the recent images on disk, but older ones would have been on optical drives, because it was cost prohibitive to do anything else. But now it's not." Now, says Creese, the commodity in high demand is bandwidth, rather than storage. One of the keys to the Stentor system, he says, is that UPMC "has figured out compression algorithms that make it not as onerous to move images around."

Keeping costs down -- and reducing mistakes

For the health care industry, these technology advances are arriving just in time. A number of forces -- including the federal government -- are pushing hard to digitize medical records. The Health Insurance Portability and Accountability Act of 1996, or HIPAA, spelled out requirements for protecting the confidentiality of patient records, including being able to establish an audit trail and establish who has had access to a patient's medical history. Since it's much easier to establish an audit trail electronically, that by itself would probably be sufficient to nudge hospitals and insurance companies towards digital records. But HIPAA goes further, requiring the health care industry to begin exchanging patient information electronically, using national standards such as EDI. HIPAA is not the only force driving the adoption of digital records in health care, however.
"Everyone recognizes that electronic patient records are one way to keep costs down," says Creese. "They also decrease mistakes." That leads UPMC's Drawbaugh to conclude that PACS will become a pervasive technology within healthcare. "It's inevitable," he says. The paperless hospital At one new hospital, the all-digital future that Drawbaugh envisions is already a reality. The Indiana Heart Hospital, a new 88 bed cardiac care center in Indiana that opened its doors just this week, was designed from the start to be an all-digital facility. The hospital has no files, and no medical records department. The hospital uses an all-digital medical information system from GE Medical Systems called Centricity, which integrates patient information from every care area of the hospital into a single electronic record. "We're so totally committed to a paperless, filmless and wireless environment that we don't even have nursing stations," says the hospital's CEO, David Veillette. "Instead, all our caregivers can input and retrieve patient information right at the bedside." The system is expected to reduce medical errors by up to 80 percent, according to the hospital. The Institute of Medicine, part of the National Academy of Sciences, says medical errors cause up to 98,000 deaths in hospitals each year.
<urn:uuid:677e0b38-31b2-4695-bf59-3cb4d2f0a96d>
CC-MAIN-2017-04
http://www.cioupdate.com/trends/article.php/1587251/For-Hospitals-an-Inevitable-Path-Towards-an-All-Digital-Future.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00250-ip-10-171-10-70.ec2.internal.warc.gz
en
0.963439
993
2.609375
3
A rootkit is basically a technology that hides an infection. Rootkits emerged almost five years ago and have become a common, standard part of malware. Rootkits within malware can hide infected files, registry keys and open ports. Even an advanced user may be unable to tell if a machine is infected, as rootkits hide the cause of infection. A rootkit can be used to create a 'backdoor' into the system for the hacker's use, alter log files, attack other machines on the network, and alter existing system tools to escape detection.

SearchSecurity.in: Can you give us some rootkit removal tips?

There are two rootkit removal options after you detect them. The first way is to restore a known clean backup. If you maintain regular backups, you don't have to clean anything as such. If you don't have a full backup, then you should remove rootkits by undoing all the system changes performed by the malware. This can be a complicated process. The most complicated rootkits that we have seen cannot be removed from within Windows. So you should basically reboot to a different operating system (from a CD-ROM or a USB stick), and then perform the cleaning.

SearchSecurity.in: How much of a serious threat are botnets and rootkits in Asia, especially in India?

In India, the average internet connection speed is slower compared to the US or Europe, a primary factor that has a direct impact on activities undertaken through an infected computer. If a machine does not have enough bandwidth, the attackers are not interested. They need machines with fast connections and sufficient bandwidth to send spam and malicious emails, so this works to the advantage of Indian users. We have observed rootkit-enabled Trojans which are complicated in structure and target online banking transactions. Targeted corporate espionage attacks are also on the rise; however, they are few in number.

SearchSecurity.in: In what ways can organizations detect and mitigate bot attacks within their networks?

Organizations must strengthen individual workstations to block, prevent and detect the infection. With network traffic monitoring, IDS and IPS, companies should be able to locate in-house infections created by botnets. An administrator who monitors firewall logs can also manually detect bots by keeping an eye on user activity. If user PCs connect to servers used by known botnets, they can identify infected machines on the corporate network. After detecting bots, administrators can disconnect the computer and manually clean it to avoid re-infection of other machines.

SearchSecurity.in: Can you provide us with some best practices to avoid bot attacks?

Training, education and a strong user policy are the first best practices. Users should be trained about the infection mechanisms and best practices to avoid such attacks. A good way to avoid infection is to establish a policy where users can use their work computers only for business purposes. Most infections come through the Web, and much of that exposure comes from work and recreational browsing. People conducting Google searches end up on pages that often infect their computers.

SearchSecurity.in: What other trends do you see in usage of bots?

Botnets on mobile platforms are on the rise. We have already seen two mobile phone botnets so far, and it can only get worse. We saw a botnet running on a Symbian-based mobile device and another on an Apple iPhone.
Smart phones have access to the Internet and can be targeted by hacks similar to computer-based attacks. The attacker benefits through mobile malware by making money quickly and easily, since he can place calls and send messages to expensive premium-rate numbers straight from the phone.
<urn:uuid:fa6e9281-0faf-48ad-98d5-d99096020ef5>
CC-MAIN-2017-04
http://www.computerweekly.com/news/1381070/A-botnet-and-rootkit-removal-101
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00516-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946743
762
2.640625
3
Telnet is a text-based tool that you can use at the command prompt to connect to another computer on the Internet. Unfortunately, telnet is not enabled by default in Windows 8, but it's easy and fast to set up telnet on your computer. Here's how you can turn on telnet in Windows 8:

1. Open the Control Panel.
2. Click on Programs and Features.
3. Click on Turn Windows Features On or Off.
4. Check the Telnet Client box in the Windows Features dialog box.
5. Click OK and telnet will be installed.
6. You can then access telnet from the command prompt by typing "telnet" without the quotes.

This video from KC Sahu on YouTube will walk you through each step of enabling telnet in Windows 8 and Windows 8.1:

Hat tip: Install and Enable Telnet in Windows 8 and 8.1 | Sysprobs
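If you prefer the command line, the same Windows feature can usually be enabled with the built-in DISM tool from an elevated (run as administrator) command prompt -- note that the feature name is case-sensitive:

    dism /online /Enable-Feature /FeatureName:TelnetClient

Once the command completes, telnet should be available right away, without a reboot.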
<urn:uuid:ba6b45e0-f372-47e0-aca5-b8ecec16d8e3>
CC-MAIN-2017-04
http://www.itworld.com/article/2693375/enterprise-software/enable-telnet-in-windows-8.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00240-ip-10-171-10-70.ec2.internal.warc.gz
en
0.78863
195
2.625
3
Manufacturing Breakthrough Blog
Friday November 18, 2016

Limitations of the Standard Cost System

Srikanth and Umble tell us that many of the problems that plague American industries today are a direct result of the application of standard cost principles throughout the organization. They further explain that many manufacturing problems derive from the view that manufacturing management's goal is to control and reduce the standard cost of each individual operation. To illustrate the problems of the standard cost approach, Srikanth and Umble demonstrate how this approach might be misapplied to a typical investment decision. The following case study shows how they illustrate these points.

Suppose the plant manager of a manufacturing firm is considering a proposal to purchase a new and faster stamping machine. The basic information available to analyze the decision is as follows:

- The old machine is able to process material at a rate of 100 units per hour (one every 36 seconds). The new machine is three times as fast, producing material at a rate of 300 units per hour (12 seconds per unit). This saves 24 seconds, or 0.00666 hours, per unit processed.
- The stamping machine is operated by one machinist, who is available to work approximately 2,000 hours per year (40 hours per week for 50 weeks). This is true for either the new machine or the old machine.
- Approximately 150,000 units per year are processed at the work station where the stamping process is performed.
- The cost of direct labor is $15 per hour.
- The overhead factor for this process is 280% of direct labor.
- The net cost of the new machine is $27,000 (this includes the salvage value of the old machine).

The standard cost approach would be to determine whether or not the proposed investment has a sufficiently high return. In most cases, the return is measured in terms of cost savings, and the projected savings would be compared to the initial investment to calculate the payback period. The expected savings from the purchase of the new machine would typically be calculated in the following way:

Annual direct labor cost savings = reduction in process time per unit x units produced per year x direct labor cost

Standard cost procedures today generally charge overhead to an area based on the amount of direct labor consumed in that area. Therefore, any savings in direct labor cost for an area will eventually result in less total overhead being charged to that area. Thus, any projected savings in direct labor cost can further be projected to reduce the overhead charged to that area. The projected amount of annual overhead cost savings can be calculated by applying the appropriate overhead factor:

Annual overhead cost savings = annual direct labor cost savings x overhead factor

And the total annual cost savings for the area are calculated as follows:

Total annual cost savings = annual direct labor cost savings + annual overhead cost savings

While the exact procedure for the above calculations may differ from firm to firm, the fundamental approach is the same. Continuing with the illustration of the stamping machine:

Annual direct labor cost savings = 0.00666 hours per unit x 150,000 units per year x $15 per hour = $15,000 per year

Annual overhead cost savings = $15,000 x 280% = $42,000 per year

Total annual cost savings = $15,000 + $42,000 = $57,000 per year

Since the net cost of the new machine is $27,000, the direct labor cost savings of $15,000 translates to a payback period of 1.8 years.
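As a quick sanity check, the arithmetic above can be reproduced in a few lines of Python; the figures are taken directly from the example, and the variable names are just illustrative:

    # Reproduce the stamping-machine savings arithmetic from the case study.
    time_saved_per_unit_hr = 24 / 3600   # 36 s -> 12 s saves 24 s per unit
    units_per_year = 150_000
    labor_rate = 15.0                    # dollars per hour of direct labor
    overhead_factor = 2.80               # 280% of direct labor
    machine_cost = 27_000.0              # net of the old machine's salvage value

    labor_savings = time_saved_per_unit_hr * units_per_year * labor_rate
    overhead_savings = labor_savings * overhead_factor
    total_savings = labor_savings + overhead_savings

    print(f"Direct labor savings: ${labor_savings:,.0f}/yr")     # ~$15,000
    print(f"Overhead savings:     ${overhead_savings:,.0f}/yr")  # ~$42,000
    print(f"Total savings:        ${total_savings:,.0f}/yr")     # ~$57,000
    print(f"Payback (labor only): {machine_cost / labor_savings:.1f} years")  # ~1.8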
In my next post, we will complete our decision-making analysis on whether or not to purchase the new machine. We will then look at a more realistic way to appraise the information to make a better decision. As always, if you have any questions or comments about any of my posts, leave me a message and I will respond. Until next time.

L. Srikanth and Michael Umble, Synchronous Management – Profit-Based Manufacturing for the 21st Century, Volume One, 1997, The Spectrum Publishing Company, Wallingford, CT
<urn:uuid:bcd2182b-6146-4348-871c-9a65bbaa324a>
CC-MAIN-2017-04
http://manufacturing.ecisolutions.com/blog/posts/2016/november/problems-with-traditional-management-accounting-part-2.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00148-ip-10-171-10-70.ec2.internal.warc.gz
en
0.928058
852
2.8125
3
It’s garnered a truckload of consumer media buzz recently because developers claim it’s the most sophisticated augmented reality headset in development so far. Company literature plays up the stereoscopic 3D view that presents unique, parallel three-dimensional images for each eye, the same way our eyes perceive images in the real world. The imagery spans approximately 100 degrees, stretching beyond a person’s peripheral view.

The consumer gaming community probably won’t be able to enjoy the Rift until at least later this year, but gamers aren’t the only ones interested. Facebook bought Oculus VR earlier this year for $2 billion, presumably to enhance the social networking experience beyond instant messaging and random status updates. But around the world, the military’s already adapted the technology for training and intelligence purposes, indicating that the Rift’s reach will extend to the highest levels of government. Here are three places where trainees are using the headset to hone their skills:

- The Norwegian Army is testing the Rift’s application in tank driving scenarios. M-113 drivers navigated using Rift goggles that were connected to image processing software and external cameras that captured tank surroundings. Thanks to the set-up’s situational awareness, vehicle operators could maneuver independently without needing verbal commands. Testers experienced some dizziness and noticed that the goggles lacked the screen resolution to see well at distances, but they believe these bugs can be fixed in time.
- The United States Navy is using the Rift in a similar fashion to train sailors in Project Blue Shark. Future war fighters could drive or repair ships with three-dimensional awareness while communicating with others in real time thousands of miles away.
- It’s no secret that the American military is all about drones these days, and the private sector’s experimentation with the Rift could enhance the drones’ functionality. According to Digital Trends, many camera-equipped unmanned aerial vehicles (UAVs) need two people to operate: one to drive the drone, and one to control the camera. Norwegian researchers, however, figured out a way to equip a drone with two cameras that forwarded images to the Rift and moved based on the headset wearer’s head movements. The process is still in its early stages, but refinements could lead to a wave of VR-enabled and controlled drones.
<urn:uuid:4f4e76e2-9286-4e51-a25a-042d6e2648a3>
CC-MAIN-2017-04
http://www.govtech.com/videos/3-Ways-the-Oculus-Rift-Could-Change-the-Military.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00361-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930881
475
2.703125
3
This Sunday (July 20, 2014) marks the 45th anniversary of when astronauts Neil Armstrong and Buzz Aldrin landed on the surface of the moon. To help mark the anniversary, NASA today released a new animation/video that shows where the lunar module landed with photos that correlate where equipment was placed and the tracks of the astronauts’ footprints, etc. The images were created by NASA’s Lunar Reconnaissance Orbiter (LRO), which NASA says “makes it possible to visit the landing site in a whole new way by flying around a three-dimensional model of the site.” By using a stereo pair of images, scientists were able to use software that could “infer the shape of the terrain, similar to the way that left and right eye views are combined in the brain to produce the perception of depth.”

Of course, the tin-foil hat brigade will likely bring up the concepts of Photoshop, video manipulation software and all sorts of other theories as to why “the moon landing was faked”, but you can ignore those folks and enjoy the photos/video. Heck, even Weird Al Yankovic makes fun of the tin-foil hat brigade in his latest video, “FOIL” (parody of Lorde’s song “Royals”):

Happy Friday, folks! Moon landing video and Weird Al, all in one post!
<urn:uuid:f7cc623f-f847-4c89-886a-294c58648372>
CC-MAIN-2017-04
http://www.itworld.com/article/2696513/consumer-tech-science/nasa-animation-shows-apollo-11-landing-site.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00481-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930396
346
2.53125
3
During Hurricane Sandy’s landfall in the U.S. and the time immediately after it, about 15 million search queries were made through Google about the storm. In contrast, the Hurricane Sandy pages on FEMA.gov were viewed fewer than 1 million times — a DHS report put the total page views at about 740,000.

While important and even life-saving information is housed on government websites, the way it’s posted can impact how quickly and easily platforms like Google can locate and highlight the data. Technology leaders told members of Congress in June that posting emergency-related information online in open, machine-readable formats is necessary for allowing it to be quickly disseminated to the public. But what does that really mean for emergency managers? Nigel Snoad, product manager for crisis response and civic innovation at Google, told Emergency Management how response and preparedness organizations can post information online in ways that make it easily searchable and shareable.

1. Post free text.

Snoad said PDFs and images are the worst offenders. Don’t put text in an image, which requires a person to read the information to make sense of it. Google’s Web-crawling bot, or Googlebot, searches the Web for new and updated sites, automating the information-seeking process. Similarly, Bing has four crawlers that handle different search needs. These Web crawlers search and collate the information, allowing for the search engine sites to make use of it. “A machine-readable format allows us to structure the data and use it in a really relevant way,” Snoad said. While PDFs that have text in them are discoverable, PDFs that contain images or PDFs of maps make the information difficult to share.

In written testimony to a House Homeland Security subcommittee, Matthew Stepka, Google vice president of technology for social impact, advised publishing alerts using open Web formats like Atom and RSS. In addition, live feeds can be published using the Common Alerting Protocol, GeoRSS for encoding location information and KML for maps. This makes the data available to Google — and other search engines — and its tools within seconds of publication. “When we set up our Hurricane Sandy crisis map, we had to spend time copying and pasting information about public hazards from a PDF,” Stepka said. “After we did so, the data quickly became obsolete, and we had to ask for an updated version.”

2. Don’t lock up data in licenses.

Snoad said they see many websites that say the information is not for commercial use, cannot be distributed and/or is fully copyrighted — even with emergency data. “During a crisis this doesn’t make much sense,” he said. Government agencies and organizations should consider how they want their information to be used; choose a data license that allows the information to be reused in a way that helps the public and allows for wide distribution. “Having the data available via an open license means someone like Google or any other company or citizens can take it and share it and make it really useful for everybody else,” Snoad said. A PDF that a user downloads or views from a website that is vague about reuse also isn’t helpful. “A lot of people will do the right thing and use it appropriately, but if it’s an emergency evacuation notice that’s attached to a page that says ‘copyright, not allowed to copy this information,’ that’s just kind of silly,” Snoad said. “People need to be thoughtful about who the users are.”

3. Use open, commercial tools to share and save data.
Even if information is published in a way that allows for easy reuse and machine readability, the servers that store the data may crash during an emergency. “You don’t want your evacuation zone maps to be on a server that gets overwhelmed by the public trying to look at it — that can cause a tragedy unfortunately,” Snoad said. He recommends using open, commercial tools to share important messages. This includes posting on social media sites, saving data in the cloud and publishing data on open mapping systems. And what are the benefits to emergency managers who follow this guidance? Google seeks to consolidate and discover relevant information for users. “It’s our core mission to organize information and make it accessible and useful during a crisis,” Snoad said. “Information is absolutely critical; it’s lifesaving — and not just for first responders but for citizens.” The data helps power tools like Google’s Crisis Maps, while helping search engines highlight useful information for citizens, including shelter locations, evacuation maps and emergency alerts.
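As a minimal illustration of tip 1, an alert can be published as a machine-readable Atom entry with a GeoRSS location using nothing more than Python's standard library. Everything in this sketch — the alert ID, title, timestamp and coordinates — is a made-up placeholder, and a real deployment would follow the full Atom and CAP specifications:

    import xml.etree.ElementTree as ET

    ET.register_namespace("", "http://www.w3.org/2005/Atom")
    ET.register_namespace("georss", "http://www.georss.org/georss")

    ATOM = "{http://www.w3.org/2005/Atom}"
    GEORSS = "{http://www.georss.org/georss}"

    # Build one Atom entry describing a hypothetical evacuation alert.
    entry = ET.Element(ATOM + "entry")
    ET.SubElement(entry, ATOM + "id").text = "urn:example:alert:0001"
    ET.SubElement(entry, ATOM + "title").text = "Evacuation order: Zone A"
    ET.SubElement(entry, ATOM + "updated").text = "2013-06-18T14:05:00Z"
    ET.SubElement(entry, ATOM + "summary").text = (
        "Residents of Zone A should evacuate north via Route 9."
    )
    # GeoRSS point (latitude longitude) so crawlers can index the location.
    ET.SubElement(entry, GEORSS + "point").text = "40.67 -73.94"

    print(ET.tostring(entry, encoding="unicode"))

Because the output is plain structured XML served over the open Web, a crawler can pick up the alert's text and location within seconds of publication instead of waiting for someone to transcribe a PDF.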
<urn:uuid:b3ffb9b2-5fe2-4dc2-bc07-2830007d81df>
CC-MAIN-2017-04
http://www.govtech.com/internet/3-Tips-for-Posting-Emergency-Information-Online.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00389-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92686
989
2.921875
3
Bellwork Tuesday 10-18-16 / Wednesday 10-19-16
1. What is quantum energy and how is it related to an energy change of matter?
2. What is the “ground state” of an atom?
3. In a nutshell, what’s Schrodinger’s WAVE FUNCTION?
Get out your POGIL: Electron Configuration packet & homework.

In 1900, Max Planck found: matter can gain or lose energy only in small, specific amounts called quanta. A quantum is the minimum amount of energy that can be gained or lost by an atom.

Which led Albert Einstein to say*: “Hey! Light carries quanta of energy in PHOTONS.” (*probably) This explained the photoelectric effect.

Then Bohr suggested that an electron moves in circular quantum orbits, based on the light coming from heated elements. But it didn’t explain all the elements, so Louis de Broglie flipped Einstein’s WAVE/particle duality to find that all moving matter makes waves.

Ground state = lowest energy state. Excited state = anything above ground.

Meanwhile, Heisenberg was busy determining that you can’t find electrons without disrupting a) their location or b) their speed, called the Uncertainty Principle, which also, by default, knocked out Bohr’s circular-orbit idea.

…and Schrodinger combined both de Broglie’s and Heisenberg’s work by treating electrons as waves and introduced WAVE FUNCTIONS. He used a cute(?) cat story to explain that all of the possible states coexist until you go looking for a particle. WAVE FUNCTIONS show us ATOMIC ORBITALS.

Students will know: the concept of wave-particle duality; properties of the quantum mechanical model; wave functions get us orbitals.
Students will be able to: write electron configurations for a given atom’s electrons; write noble gas configurations.

Agenda: Orbitals, 3 rules, P.T. map; E- Config. Notes & Practice; E- config partner activity; Extension Questions #16-19, E- Config. POGIL.

Please take out your notes and periodic table…

Another “map” within your table: On your Periodic Table, label these areas with a highlighter.

Electron Configuration (E.C.)
Formally: “the representation of the arrangement of electrons distributed among the orbital shells and subshells”
Informally: an address for electrons within the e- cloud.
Last class you looked at Orbital Diagrams (the boxes/apt. building); E.C. is the “manager’s code” for those pics.

Decoding E.C., biggest to smallest areas:
Energy level – 1, 2, 3, etc. (each row)
Sublevel – type of area within each level – s, p, d, f
Orbitals – rooms within sublevels – determined by the wave functions for electrons.
d and f sublevels fill below the valence shell level: d is 1 lower, while f is 2 lower.

Sublevel | Starts at this level | Has this many orbitals | Can hold # e- TOTAL
s | 1 | 1 | 2
p | 2 | 3 | 6
d | 3 | 5 | 10
f | 4 | 7 | 14

What do sublevels (orbitals) look like?

What were the 3 rules for filling electron orbitals?
Aufbau principle – electrons will always occupy the lowest energy orbital available.
Pauli Exclusion principle – only 2 electrons may occupy a single orbital, and they must have opposite spins.
Hund’s Rule – equal-energy orbitals must be filled with same-spin single e- before any orbitals can have 2 e-.

Orbital Diag. vs. Electron Config. Here’s the thing: Do you need to know all of these rules when completing an ORBITAL DIAGRAM?
Do you need to know all of these rules to complete Electron Configuration?
Orbitals ALWAYS fill from left to right, top to bottom, following the Periodic Table. And by default, they follow the 3 rules! *Will you still be asked about it on a test?
Go back and put in the energy levels. *Why does 4s fill before 3d? Because it is lower in energy…

Noble Gas Notation
Accounts for full inner shells: 8 electrons, also called an octet. Helium is the first noble gas, so there is no noble gas notation for He.
Try one… of each notation.

Mid-Unit Quiz (Ch. 5!)
You may write on this piece of paper. Answer every question, no blanks. You may use your notes and Periodic Table.

Now that you’ve read Chapter 5.3 (and really, all of Chapter 5 at this point):
Do Problems 5.3 #21a,c,e, 22-24, 26-28. Read Section 6.1 & 6.2.

Hydrogen’s single electron is in the n = 1 orbit in the ground state. When energy is added, the electron moves to the n = 2 orbit.
Remember s & p are the same as the row – but d fills 1 energy level lower and f is 2 lower.
Need more? Do K, N, Ga, or W. Try Barium and Lead also.
Need at least 10 minutes, up to 15 given.
<urn:uuid:4887213d-c348-43fb-8e11-a9a1080a69c5>
CC-MAIN-2017-04
https://docs.com/msvchem/1090/10-18n19-pertbl-d3-notes
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00389-ip-10-171-10-70.ec2.internal.warc.gz
en
0.820984
1,256
3.140625
3
To install Tenup on a Linux or Mac system, first identify whether you have a 32-bit or 64-bit version of Linux. Then, open the corresponding folder (e.g., if you have a 32-bit copy, open the lin32 folder, and if you have a 64-bit copy, open the lin64 folder). Then, copy the Tenup executable (e.g., either tenup32 or tenup64) from inside that folder to the desired installation path. To install on a Mac, follow the same procedure as the Linux instructions above, except copy the executable file from the Tenup/osx directory instead of one of the Linux directories.

In order to copy the file to a system location, you will most likely need administrative access on your computer. You can also add the directory where Tenup is stored to your $PATH variable; this method doesn't require you to have administrative access.

To run Tenup as a command from any directory, simply add the directory where Tenup is saved to your $PATH by following these steps:

- Go to your home directory with the following command:
  cd ~
- Open your bash profile with the following command:
  vim .bash_profile
  Note: vim is just one text editor that is commonly available; if you're not familiar with Vim, you can use your favorite text editor.
- Add the following command to the next empty line of the file, substituting the directory where you saved Tenup for the placeholder path:
  export PATH="$PATH:/path/to/tenup"
- Save the changes to your file.

Once you've added the Tenup directory to your $PATH, you can invoke Tenup from any directory on your computer. Note: You may have to close your terminal and re-open it before the changes to your $PATH take effect.

Once you've copied the file to its correct location, you will be able to run Tenup as a shell command. However, the exact command is dependent on the version of the operating system and ODBC driver combination you're using. The next section will cover all the basic information you need to perform a basic extract and load with Tenup.
<urn:uuid:25dc1903-7043-4fc8-bdaf-45a01f132bcf>
CC-MAIN-2017-04
https://www.1010data.com/downloads/tenup/doc/InstallingTenup.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00297-ip-10-171-10-70.ec2.internal.warc.gz
en
0.887059
420
2.546875
3
Gaining significant knowledge from the growing tons of information available about big picture topics such as biology, economics, astronomy, health or climate is a challenge beyond most human minds and computer programs. But the scientists at the Defense Advanced Research Projects Agency (DARPA) want to change that with a program called Big Mechanisms that they say could gather all existing data about a particular topic, keep it up to date and develop new conclusions or research directions. "Having big data about complicated economic, biological, neural and climate systems isn't the same as understanding the dense webs of causes and effects, what we call the big mechanisms, in these systems," said Paul Cohen, DARPA program manager. "Unfortunately, what we know about big mechanisms is contained in enormous, fragmentary and sometimes contradictory literatures and databases, so no single human can understand a really complicated system in its entirety. Computers must help us." The Big Mechanism program might bring about new ways to understand complicated systems, DARPA said. "Today's researchers read deeply but struggle to keep up with relentless streams of relevant publications. To stay current, a researcher must specialize, becoming expert in a small part of something much bigger. The vision for the Big Mechanism program is fundamentally different: Every publication would immediately become part of a public, computer-maintained, causal model of a complicated system, a big mechanism, and every aspect of a big mechanism would be tied to the data that supports it or contradicts it. To the extent that we can automate the construction of Big Mechanisms, we can change how science is done," DARPA said. In a nutshell, the Big Mechanism program will develop technology to read research abstracts and papers to extract fragments of causal mechanisms, assemble these fragments into more complete causal models, and reason over these models to produce explanations. DARPA said it will aim the Big Mechanism program at cancer research first, specifically cancer pathways, or the molecular interactions that cause cells to become and remain cancerous. From DARPA: The program has three primary technical areas: Computers should read abstracts and papers in cancer biology to extract fragments of cancer pathways. Next, they should assemble these fragments into complete pathways of unprecedented scale and accuracy, and should figure out how pathways interact. Finally, computers should determine the causes and effects that might be manipulated, perhaps even to prevent or control cancer. "The language of molecular biology and the cancer literature emphasizes mechanisms," Cohen said. "Papers describe how proteins affect the expression of other proteins, and how these effects have biological consequences. Computers should be able to identify causes and effects in cancer biology papers more easily than in, say, the literatures of sociology or economics." Actually building the Big Mechanism system sounds complicated, as you might imagine. According to DARPA: "The Big Mechanism program will require new research and the integration of several research areas, particularly statistical and knowledge-based Natural Language Processing (NLP); curation and ontology; systems biology and mathematical biology; representation and reasoning; and quite possibly other areas such as visualization, simulation, and statistical foundations of very large causal networks.
"Machine reading researchers will need to develop deeper semantics to represent the causal and often kinetic models described in research papers. Deductive inference and qualitative simulation will probably not be sufficient to model the complicated dynamics of signaling pathways and will need to be augmented or replaced by probabilistic and quantitative models." For a look at what exactly DARPA will be looking for go here. Check out these other hot stories:
<urn:uuid:e76c4027-f1a7-4233-a8cf-94892ddfaa12>
CC-MAIN-2017-04
http://www.networkworld.com/article/2226393/applications/darpa-wants-to-automate-big-data-findings-to-solve-complicated-problems.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00021-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93512
741
2.953125
3
Two weeks ago, Isis Wenger took the social media world by storm and coined the term “#ILookLikeAnEngineer” in a Twitter post challenging sexist and discriminating comments she faced in the workplace. Since then, there have been thousands of posts supporting the cause, created by men, women, engineers, and other professionals. As a female engineer, it’s exciting to see the tremendous amount of support that people are starting to provide toward this cause. I’ve always been passionate about being a technical woman, and have made it a goal to ensure that young females interested in engineering receive the support that they need to help them be successful. But amongst all of the support that has been generated, there has been some confusion about what it means to be an “engineer”. Across the Web and even in the workplace, I have heard and seen many people state that some individuals who have claimed to be engineers aren’t actually engineers. For example, a group of female engineers at Axcient gathered to take a photo supporting #ILookLikeAnEngineer. To my surprise, the photo was met with claims that not all of the women pictured were engineers. This instance, along with other instances I have witnessed online, have led me to wonder – what does it really mean to be an engineer? To many people, the term “engineering” refers to those who apply math, science, and technology to complete tasks in their professional life. For example, this might include writing code to develop computer programs, or mixing chemicals to create the next breakthrough in medicine. But to me, there is much more that goes into being an engineer than simply a job title and knowledge of STEM concepts. Although I come from a technical background and was trained in engineering throughout my college career, I believe the main reason I am an engineer is because everyday, I set out with the intention to solve the problems that my colleagues and our customers are experiencing to make their lives easier. I may not have a job title that explicitly says I’m an engineer at Axcient, but I am certainly applying engineering principles like critical thinking and creative problem solving to the work I am doing on a daily basis. Being an engineer simply means that you are actively building and creating solutions for the everyday problems that people face. Although in most cases, this does involve the application of science, technology, or math, none of these elements are the main ingredient in the recipe for solving real world issues. They are useless without ingenuity, or the ability to imagine and implement creative solutions. Ingenuity is what allows humans to solve the world’s most challenging problems – without it, we would be stuck living in the status quo, trying without luck to solve problems with the same, unsuccessful solutions. When we look at the true meaning behind #ILookLikeAnEngineer, it is meant to discourage the idea that all engineers fit into some kind of cookie cutter stereotype. It’s a statement that tells us that anyone can be an engineer – without regard for one’s appearance, cultural background, education, training, or job title. As long as you’re trying to solve problems and better the lives of others, you are practicing the principles of engineering and your ideas should be welcomed. The intent of #ILookLikeAnEngineer is to provide a safe and welcoming place for innovation to thrive, but questioning the identity of those who claim to be engineers does the opposite. 
It causes us to fall into the very cycle of discrimination and judgment that we are trying to combat. Although allowing everyone with an innovative idea to call him or herself an engineer might subtract from the prestige of the title, I believe that there are bigger problems in the world that need to be solved. If we all have the same goal of solving real world problems, does it matter whether the person who came up with the idea was an engineer or not? Instead of making judgments about people and worrying about job titles, let's each do our part in creating a community where all are welcomed to contribute and the true values of engineering are upheld. At Axcient, I am grateful to be surrounded by people who truly believe in these principles – innovation, creativity, and community. My experience as a female engineer has been nurturing, yet competitive, so that I am held to the same standard as other members of our company. I am excited to see how #ILookLikeAnEngineer will inspire change for all women to have the same kind of positive experience in the workplace.
<urn:uuid:733071a8-5f6f-403a-ba2e-27867583a8fe>
CC-MAIN-2017-04
https://axcient.com/blog/what-ilooklikeanengineer-really-means/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00415-ip-10-171-10-70.ec2.internal.warc.gz
en
0.973508
928
2.84375
3
11 May 2011 – Ford researchers are harnessing the power of cloud computing, analytics and Google innovation to identify technologies that could make tomorrow’s vehicles smart enough to independently change how they perform to deliver optimal driveability and fuel efficiency. Ford researchers are applying Google’s Prediction API to more than two years of their own predictive driver behavior research and analysis. The Google API can convert information such as historical driving data – where a driver has traveled and at what time of day for example – into useful real-time predictions, such as where a driver is headed at the time of departure. How it works Ford is hoping to use these types of cloud-stored data to enable a vehicle essentially to optimize itself and perform in the best manner determined by a predicted route. This week, Ford researchers are presenting a conceptual case of how the Google Prediction API could alter the performance of a plug-in hybrid electric vehicle at the 2011 Google I/O developer conference. Here’s how the technology could work: • After a vehicle owner opts in to use the service, an encrypted driver data usage profile is built based on routes and time of travel. In essence, the system learns key information about how the driver is using the vehicle • Upon starting the vehicle, Google Prediction will use historical driving behavior to evaluate given the current time of day and location to develop a prediction of the most likely destination and how to optimize driving performance to and from that location • An on-board computer might say, “Good morning, are you going to work?” If the driver is in fact going to work, the response would be, “Yes,” and then an optimized powertrain control strategy would be created for the trip. A predicted route of travel could include an area restricted to electric-only driving. Therefore, the plug-in hybrid could program itself to optimize energy usage over the total distance of the route in order to preserve enough battery power to switch to all-electric mode when traveling within the EV-only zone Because of the large amount of computing power necessary to make the predictions and optimizations, an off-board system that connects through the cloud is currently necessary. Work is now underway to study the feasibility of incorporating other variables such as driver style and habits into the optimization process so Ford can further optimize vehicle control systems, allowing car and driver to work together to maximize energy efficiency.
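The prediction step can be pictured as a simple conditional-frequency model: given the hour of departure and the starting location, pick the historically most common destination. A toy Python sketch of that idea follows; it is only an illustration, since Ford's actual system uses the Google Prediction API and far richer data:

```python
from collections import Counter, defaultdict

# Encrypted driver usage profile, reduced to toy records:
# (hour_of_departure, start_location, observed_destination)
history = [
    (8, "home", "work"), (8, "home", "work"), (8, "home", "gym"),
    (18, "work", "home"), (18, "work", "home"),
]

model = defaultdict(Counter)
for hour, start, dest in history:
    model[(hour, start)][dest] += 1   # count destinations per context

def predict(hour, start):
    """Most likely destination for this departure time and place."""
    counts = model.get((hour, start))
    return counts.most_common(1)[0][0] if counts else None

dest = predict(8, "home")
print(dest)  # 'work' -> the car could now pre-plan an EV-only-zone strategy
```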
<urn:uuid:b6b55080-d44a-46eb-a4a6-15846bb1941b>
CC-MAIN-2017-04
http://www.machinetomachinemagazine.com/2011/05/13/ford-uses-google-prediction-api-for-navigation-systems/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00233-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939997
489
2.890625
3
Current wireless networks have a problem: The more popular they become, the slower they are. Researchers at Fudan University in Shanghai have just become the latest to demonstrate a technology that transmits data as light instead of radio waves, which gets around the congestion issue and could be ten times faster than traditional Wi-Fi. In dense urban areas, the range within which Wi-Fi signals are transmitted is increasingly crowded with noise—mostly, other Wi-Fi signals. What’s more, the physics of electromagnetic waves sets an upper limit to the bandwidth of traditional Wi-Fi. The short version: You can only transmit so much data at a given frequency. The lower the frequency of the wave, the less it can transmit. But what if you could transmit data using waves of much higher frequencies, and without needing a spectrum license from your country’s telecoms regulator? Light, like radio, is an electromagnetic wave, but it has about 100,000 times the frequency of a Wi-Fi signal, and nobody needs a license to make a light bulb. All you need is a way to make its brightness flicker very rapidly and accurately so it can carry a signal.
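At its simplest, "flickering brightness to carry a signal" is on-off keying: bright means 1, dim means 0, toggled far faster than the eye can follow. The toy encoder/decoder below illustrates only that bit-mapping idea; real visible-light systems use far more sophisticated modulation:

```python
def to_light_pulses(data: bytes):
    """Encode bytes as a list of 1/0 brightness states (on-off keying)."""
    pulses = []
    for byte in data:
        for bit in range(7, -1, -1):      # most significant bit first
            pulses.append((byte >> bit) & 1)
    return pulses

def from_light_pulses(pulses):
    """Decode brightness states back into bytes."""
    out = bytearray()
    for i in range(0, len(pulses), 8):
        byte = 0
        for bit in pulses[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

pulses = to_light_pulses(b"hi")
assert from_light_pulses(pulses) == b"hi"
print(pulses[:8])  # 'h' = 0x68 -> [0, 1, 1, 0, 1, 0, 0, 0]
```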
<urn:uuid:051617b0-302f-4199-a690-06414fea054c>
CC-MAIN-2017-04
http://www.nextgov.com/emerging-tech/2013/10/plan-turn-every-lightbulb-ultra-fast-alternative-wi-fi/72251/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00351-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93501
240
3.40625
3
Application Programming Interface (API)
An Application Programming Interface (API) is a specification, or interface, that allows different elements of software to more easily communicate with each other. Developers frequently use APIs in a mobile context to hook third-party applications into the wireless capabilities of an operating system or other application. Wireless operators use them to allow for easy access to their network resources, including subscriber data, location and quality of service functions, so developers can build more robust applications. (A minimal illustration of such a call appears after the list below.) Today, only 9 percent of all Web and 5 percent of mobile apps use APIs, according to Research and Markets, but the firm expects that to grow by 68 percent by the end of 2016 as vendors, carriers and developers invest more in APIs. For more on APIs and the companies involved: - Alcatel-Lucent Nurtures Its API Grove - Alcatel-Lucent Puts Its APIs to Work - Apigee Unleashes an API Free-for-All - AT&T Opens DevLab Program for Developers - Deutsche Telekom Grows an M2M Developer Garden - Wave Goodbye to WAC - Vodafone Leads Open Global App-Store Push - CashFlows Offers Smartphone API - Photos: Inside Verizon's Developer Playground - Verizon Rallies Developers for LTE - Sprint Opens Up More APIs - Applications Unbound: Can Telcos Learn to Dance?
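In practice, "hooking into" a wireless operator's API today usually means an authenticated HTTP call, as in the sketch below. The endpoint, path, and token are entirely hypothetical placeholders, standing in for whatever a given operator actually exposes:

```python
import requests  # third-party HTTP library: pip install requests

# Hypothetical operator API, for illustration only.
BASE = "https://api.example-carrier.com/v1"
TOKEN = "YOUR_API_TOKEN"

resp = requests.get(
    f"{BASE}/subscribers/12345/location",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"lat": ..., "lon": ..., "accuracy_m": ...}
```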
<urn:uuid:b8fcb030-5502-481e-9f49-2fadddebbb63>
CC-MAIN-2017-04
http://www.lightreading.com/document.asp?doc_id=698296
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00077-ip-10-171-10-70.ec2.internal.warc.gz
en
0.852255
295
2.8125
3
Who uses virtualization, and why? Virtualization is used widely in small to large data centers by corporations, government entities, service providers, and ISVs, as well as within SMB IT environments. Fundamental to cloud computing, virtualization enables IT organizations to pool and share resources across multiple users and deploy quickly without overprovisioning. More efficient resource utilization results in lower equipment, space, and power and cooling costs. Virtualization also helps reduce complexity and management overhead, improve application availability and disaster recovery, and increase IT security.

How this technology works
Virtualization in IT environments encompasses several forms, including:
Server virtualization utilizes a thin software layer called a hypervisor to create "virtual machines" (VMs): isolated software containers with an operating system and application inside. The VM is called a guest machine and is completely independent, allowing many to run simultaneously on a single physical "host" machine. The hypervisor allocates host resources (CPU, memory) dynamically to each VM as needed.
Storage virtualization has many forms, spanning block, file, disk and tape. Physical storage is hidden and presented as logical volumes, including different mediums (e.g., tape as disk). Storage virtualization enables pooling devices and provisioning capacity to users as logical drives. Advanced solutions enable arrays to be managed as one logical unit and capacity provisioned from one logical pool.
Thin provisioning allocates shared physical resources (memory, CPU, disk) based on actual need rather than the amount that appears to be available. This allows more resources to be allocated than are physically available, called oversubscription, and avoids resources being left unused.
Network virtualization enables network resources (hardware and software) to be deployed and managed as logical vs. physical elements. Multiple physical networks can be consolidated into a single logical network, or a single physical network can be segmented into separate logical networks. Network virtualization also includes software emulating switching functionality between virtual machines.
Virtual Desktop Infrastructure (VDI) decouples the desktop from the physical machine. In a VDI environment, the desktop O/S and applications reside inside a virtual machine running on a host computer, with data residing on shared storage. Users access their virtual desktop from any computer or mobile device over a private network or internet connection.

Benefits of virtualization
IT virtualization provides numerous benefits, including:
- Rapid application deployment
- Higher application service levels and availability
- Greater utilization of infrastructure investments
- Fast and flexible scalability
- Lower infrastructure, energy, and facility costs
- Less administrative overhead
- Anywhere access to desktop applications and data
- Enhanced IT security
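Thin provisioning and oversubscription are easiest to see with numbers: capacity is promised logically but consumed physically only on write. The toy allocator below is my own illustration of that accounting, not any vendor's implementation:

```python
class ThinPool:
    """Toy thin-provisioned pool: promise more than you physically have."""
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.promised_gb = 0      # sum of logical volume sizes
        self.used_gb = 0          # physically written capacity

    def provision(self, logical_gb):
        self.promised_gb += logical_gb   # no physical space consumed yet

    def write(self, gb):
        if self.used_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted; add physical capacity")
        self.used_gb += gb               # space is consumed only on write

pool = ThinPool(physical_gb=10)
for _ in range(5):
    pool.provision(4)                    # promise 20 GB against 10 GB
pool.write(3)
print(pool.promised_gb / pool.physical_gb)  # 2.0 -> 200% oversubscribed
```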
<urn:uuid:12ec5bf2-b5b3-4780-bc0c-bc9b73d1bb41>
CC-MAIN-2017-04
https://www.emc.com/corporate/glossary/virtualization.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00563-ip-10-171-10-70.ec2.internal.warc.gz
en
0.89713
540
3.28125
3
Ever been sitting around waiting for the latest wacky cat video to finish buffering, wondering what the holdup is? In these days of high speed Internet access, it seems like we shouldn't ever have to wait for such important things, yet we still do, at least every now and again. Recently, researchers looked at data on Internet congestion in an effort to better understand where the slowdowns occur, and their work has provided some interesting findings. The study, Where in the Internet is congestion? by Daniel Genin and Jolene Splett of the National Institute of Standards and Technology, was released last week. It's based on publicly available data from the FCC's ongoing Measuring Broadband America study, which was begun in 2010. The study has deployed more than 10,000 wireless routers with custom firmware in homes and businesses, spanning 16 ISPs. These routers collect performance data hourly and send them to a handful of geographically distributed servers. Genin and Splett looked at two metrics to evaluate Internet congestion: a download throughput benchmark (based on multi-threaded TCP download speed tests) and website download speed (based on the time to download the homepages, including all code and images, of ten of the most popular websites). While the authors were primarily concerned with finding where the congestion occurs (i.e., closer to home or farther upstream), they had some interesting findings, particularly relating to the performance of cable versus DSL networks. Looking at data on roughly 3,000 Internet connections via cable or DSL ISPs between March and June of 2011, Genin and Splett found that your slow access may be because: You're using DSL On average, people using cable had faster download speeds than those on DSL; the average measured download throughput speed for cable was 13.5 Mbps, while it was 5.4 Mbps for DSL. You're using cable While cable ISPs offered higher average download speeds than DSL, there was also more congestion on their networks. Specifically, roughly 30% of cable connections experienced "recurrent congestion," defined as failing to reach 80% of the measured average download speed more than 20% of the time. On the other hand, only about 10% of DSL connections experienced recurrent congestion. You're using the wrong ISP While Genin and Splett found that congestion rates were similar across DSL ISPs, they found that they varied greatly among cable providers. Unfortunately, individual ISPs were not identified, so we can't say which ones to avoid. There's congestion far away from your house In determining where along the path of content delivery congestion occurred, the researchers found that, in the majority of instances, it was happening before the "last mile" of the trip. That is, most congestion doesn't happen at the edge of the network (i.e., close to the house or business), but rather closer to the core, such as where the ISP connects to the public Internet. The whys of all this congestion, and the differences between cable and DSL networks, remain to be determined. More information should be forthcoming as the FCC's study continues and, hopefully, as ISPs decide to be more open about sharing their traffic data so the patterns and causes of congestion can be better understood. In the meantime, you'll just have to be patient while that video of skydiving cats buffers. Oh well.
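The study's definition of "recurrent congestion" is concrete enough to compute directly: a connection qualifies if it fails to reach 80% of its measured average download speed more than 20% of the time. A small Python translation, with the thresholds taken from the article and the sample numbers invented for the demo:

```python
def recurrently_congested(samples_mbps, speed_frac=0.8, time_frac=0.2):
    """Apply Genin & Splett's definition to hourly throughput samples."""
    avg = sum(samples_mbps) / len(samples_mbps)
    slow = sum(1 for s in samples_mbps if s < speed_frac * avg)
    return slow / len(samples_mbps) > time_frac

# Hypothetical hourly measurements for one cable connection.
cable = [13.5, 14.0, 4.0, 13.8, 3.9, 13.2, 4.1, 13.6]
print(recurrently_congested(cable))  # True: below threshold >20% of the time
```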
<urn:uuid:d384234b-af2d-4086-b53c-0414f01cd232>
CC-MAIN-2017-04
http://www.itworld.com/article/2707737/networking/4-reasons-why-your-cat-videos-are-buffering.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00223-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956709
762
2.515625
3
MIME is an Internet standard that extends the e-mail format to support text in character sets other than ASCII, non-text attachments such as images, sounds, and movies, and messages organized into multiple parts. MIME matters because SMTP, the basic e-mail transmission protocol, supports only the 7-bit ASCII character set, while MIME, operating at the presentation layer, describes how 8-bit binary content of these kinds can be carried in an e-mail message.

MIME also specifies a set of e-mail headers that describe additional attributes of a message, including its content type and transfer encoding, and it defines rules for encoding non-ASCII characters in message headers. Because MIME attributes are carried in RFC 822-style headers, and the MIME headers themselves are optional, plain-text e-mail continues to flow in both directions between existing mail servers and clients without any changes.

The following RFCs are worth knowing for a better understanding of the subject. RFC 1426 defines 8-bit MIME transport. RFC 1847 specifies the MIME security multiparts, while RFC 3156 describes MIME security with OpenPGP. RFC 2183 covers the Content-Disposition header, which communicates presentation information in e-mail messages. RFC 2387 defines the MIME multipart/related content type, and RFC 1521 specifies the mechanisms for describing the format of Internet message bodies. MIME is extensible by design: it includes a scheme for registering fresh content types and attribute values.

Typical header fields of MIME
Typical header fields are as follows:
- MIME-Version, for example "MIME-Version: 1.0". MIME headers may include comments enclosed within parentheses; for instance, a comment such as "(generated by application 1.3)" can be added to the MIME version specification.
- Content-Type. This field is not mandatory for a document conforming to RFC 2045, but a MIME parser requires a top-level Content-Type. Its syntax is "Content-Type: type/subtype; parameters": type and subtype together define the kind of content, and any optional parameters follow, separated by semicolons.

The following table lists some common values of Content-Type.
|text/xml|Used along with SwA (SOAP with Attachments).|
|application/xml|Used for XML data that is application specific.|
|multipart/signed|Used for a body part accompanied by its digital signature (RFC 1847).|
|multipart/mixed|Used for several independent parts within one message.|

- Content-Transfer-Encoding. This header field indicates the transformation used to encode the data type into a 7-bit format.
- Content-ID, an optional field that labels parts.
- Content-Description, an optional field that describes parts.

MIME encoding is performed with base64 or quoted-printable encoding, per RFC 1521. Base64 divides the data into groups of 3 octets, each encoded as 4 printable characters, while quoted-printable encoding suits data consisting mostly of printable characters: bytes in the ranges 33-60 and 62-126 pass through unencoded.
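Python's standard library can build a compliant message, which makes the header fields above tangible. A small sketch using the stdlib email.mime package:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

msg = MIMEMultipart("mixed")          # Content-Type: multipart/mixed
msg["Subject"] = "MIME demo"
msg.attach(MIMEText("Hello, this part is plain text.", "plain"))

# Binary attachment: the library base64-encodes it and sets
# Content-Transfer-Encoding: base64 automatically.
payload = MIMEApplication(b"\x00\x01\x02", _subtype="octet-stream")
payload.add_header("Content-Disposition", "attachment", filename="blob.bin")
msg.attach(payload)

print(msg.as_string()[:400])
```

Printing the message shows the MIME-Version header, the multipart boundary parameter, and the base64 Content-Transfer-Encoding on the attachment, exactly the fields described above.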
<urn:uuid:73040a28-9b72-400f-bca3-db512eeff74f>
CC-MAIN-2017-04
https://howdoesinternetwork.com/2012/mime
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00159-ip-10-171-10-70.ec2.internal.warc.gz
en
0.874772
754
3
3
A BBC story with over 450 comments outlines the push to make software programming a basic for British schoolchildren, as Latin once was. Can public schools teach coding? The article, Coding – the new Latin, says Britain could become a major programming center for video games and special effects if the educational system provided better early training. Dropping enrollments in university computer science courses are being blamed, at least partially, on the public schools teaching introductory computer classes for clerical skills rather than technology. While making coding a foundation for further technology study is a nice goal, making computer science cool will take more work. Geeky myths are getting worse: the share of British computer science applicants who are male has risen from 84 percent to 87 percent over the past eight years. The percentage of computer science students during that time has dropped from five percent down to three percent of university students. Can earlier exposure to better technology courses reverse that trend?

Slap the schools
"My experience with IT education in the UK is genuinely atrocious. During secondary school we made a spreadsheet in Excel and a couple of Word documents." – pseudonimble on news.ycombinator.com
"We are teaching kids skills that they don't necessarily need to the detriment of skills that really are needed." – Sue Denim on bbc.co.uk
"most IT teacher's can't code themselves so couldn't teach it effectively. Hopefully, that will change!" – LosOjos on edugeek.net
"When I think about all of the problems with the education system I am not sure that 'teaching coding' would top that list." – tpatke on news.ycombinator.com

Slap the government
"Let me guess....the government buying hundreds of thousands of PCs at £4,000 each, and buying the required software at double what it would cost from even PC World." – Aidy on bbc.co.uk
"'Coding - the new Latin' is a marketing slogan. And if it breeds misunderstanding, or has to be explained, then it's not a good marketing slogan." – delinka on news.ycombinator.com

My advice is …
"The simplest way would be to shove computing under the maths curriculum." – lkclarkmichalek on news.ycombinator.com
"I'd say language is not overly important, and if anything, teach them two or three similar OO so they learn flexibility..." – sonofsanta on edugeek.net
"I think we might have all missed an important health and safety issue first too - do we really want to be teaching our kids to get hooked on coffee?" – localzuk on edugeek.net

The real question is which programming language could get by the controversial Texas schoolbook selection process.
<urn:uuid:4f4249b6-7c45-4aff-bf23-eef239b2c8fb>
CC-MAIN-2017-04
http://www.itworld.com/article/2734882/enterprise-software/new-school-curriculum--reading--writing--programming-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00279-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951616
570
2.8125
3
5 levels of hazard controls in OHSAS 18001 and how they should be applied Within the planning phase, the OHSAS 18001 standard has a requirement for organizations to establish a hierarchy of controls. During the process of identification of occupational health and safety hazards (for more information, see How to identify and classify OH&S hazards), the organization needs to identify if there are any existing controls and whether they are adequate for the identified hazard. When defining controls or making changes to the existing ones, the organization needs to take into account the hierarchy of the controls. What does it mean? Hierarchy of the controls does sound a little confusing, and I think “hierarchy” is not the best term. It basically means the priority in selection and implementation of controls related to occupational health and safety hazards. There are several groups of controls that can be established to eliminate or decrease the occupational health and safety hazards: elimination, substitution, engineering controls, signage/warnings, administrative controls, and/or personal protective equipment. But, the problem is that the effects of these groups of controls are not the same, and some of them don’t really eliminate or decrease the risk of a hazard in the most satisfying way. And that is why the hierarchy is introduced, to encourage the organization to try to implement the better controls and really eliminate the hazard, if possible. How does it work? Once you have completed a risk assessment and taken account of existing controls, you should be able to determine whether existing controls are adequate or need improving, or if new controls are required. If new or improved controls are required, their selection should be determined by the principle of the hierarchy of controls, i.e., the elimination of hazards where practicable, followed in turn by risk reduction (either by reducing the likelihood of occurrence or potential severity of injury or harm), with the adoption of personal protective equipment (PPE) as a last resort. Basically, this hierarchy defines the order of considering the controls; you may choose to implement one or a combination of several kinds of controls. Here is where you need to start when planning the controls: Elimination – modify a design to eliminate the hazard; e.g., introduce mechanical lifting devices to eliminate the manual handling hazard; Substitution – substitute a less hazardous material or reduce the system energy (e.g., lower the force, amperage, pressure, temperature, etc.); Engineering controls – install ventilation systems, machine guarding, interlocks, sound enclosures, etc.; Signage, warnings, and/or administrative controls – safety signs, hazardous area marking, photo-luminescent signs, markings for pedestrian walkways, warning sirens/lights, alarms, safety procedures, equipment inspections, access controls, safe systems of working, tagging, and work permits, etc.; Personal protective equipment (PPE) – safety glasses, hearing protection, face shields, safety harnesses and lanyards, respirators, and gloves. Although the first three levels are the most desirable, they are not always possible to implement. In applying the hierarchy, you should consider the relative costs, risk reduction benefits, and reliability of the available options. 
The work of establishing and selecting controls is still far from over, as there are still a lot of things to consider:
- The need for a combination of controls, combining elements from the above hierarchy (e.g., engineering and administrative controls),
- Establishing good practice in the control of the particular hazard under consideration, adapting work to the individual (e.g., to take account of individual mental and physical capabilities),
- Taking advantage of technical progress to improve controls,
- Using measures that protect everyone (e.g., by selecting engineering controls that protect everyone in the vicinity of a hazard rather than using individual personal protective equipment (PPE)),
- Human behavior and whether a particular control measure will be accepted and can be effectively implemented,
- Typical basic types of human failure (e.g., simple failure of a frequently repeated action, lapses of memory or attention, lack of understanding or error of judgment, and breach of rules or procedures) and ways of preventing them,
- The need to introduce planned maintenance of, for example, machinery safeguards,
- The possible need for emergency/contingency arrangements where risk controls fail,
- The potential lack of familiarity with the workplace and existing controls of those not in the direct employment of the organization, e.g., visitors or contractor personnel.
Let's make it work
Once the controls have been determined, the organization can prioritize its actions to implement them. In the prioritization of actions, the organization should take into account the potential for risk reduction of the planned controls (a simplified scoring sketch appears at the end of this section). It is preferable that actions addressing a high-risk activity or offering a substantial reduction of risk take priority over actions that have only limited risk-reduction benefit. In some cases, it is necessary to modify work activities until risk controls are in place, or to apply temporary risk controls until more effective actions are completed; for example, the use of hearing protection as an interim measure until the source of noise can be eliminated, or the work activity segregated to reduce the noise exposure. Temporary controls should not be regarded as a long-term substitute for more effective risk control measures. Legal requirements, voluntary standards, and codes of practice can specify appropriate controls for specific hazards. In some cases, controls will need to be capable of attaining "as low as reasonably practicable" (ALARP) levels of risk. Selection and implementation of controls is the most important part of the Occupational Health and Safety Management System, but that alone is not enough to make it work. The effects of the implemented controls must be monitored to determine whether they achieve the desired results, and the organization should always pursue new controls that are more effective and less costly. The cost of controls can be very high in some cases, but the most expensive controls are the ineffective ones.
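Prioritizing actions by risk-reduction potential can be reduced to scoring and sorting. The sketch below is a deliberately simplified illustration; the 1-5 scales, the example hazards, and the tie-breaking rule are my own, not taken from OHSAS 18001:

```python
# hazard -> (likelihood 1-5, severity 1-5, planned control level 1-5;
# 1 = elimination ... 5 = PPE, per the hierarchy above)
hazards = {
    "manual handling of drums": (4, 3, 1),   # eliminate via lifting device
    "noise near compressor":    (5, 2, 5),   # interim PPE: hearing protection
    "solvent vapour exposure":  (3, 4, 3),   # engineering control: ventilation
}

def priority(item):
    _, (likelihood, severity, control_level) = item
    risk = likelihood * severity
    # Higher risk first; among equal risks, prefer work that applies a
    # control nearer the top of the hierarchy (lower level number).
    return (-risk, control_level)

for name, _ in sorted(hazards.items(), key=priority):
    print(name)
```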
<urn:uuid:1e50101e-8647-4e00-95e9-5c583b63f8f0>
CC-MAIN-2017-04
https://advisera.com/18001academy/blog/2015/09/02/5-levels-of-hazard-controls-in-ohsas-18001-and-how-they-should-be-applied/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00187-ip-10-171-10-70.ec2.internal.warc.gz
en
0.916979
1,251
3.140625
3
What is ransomware? It's a malware attack that encrypts specific files on your computer and on your mapped drives! How does ransomware spread? The most common way is via a mail attachment. Specific file types on your network drives and local computer will get encrypted when you open the ransomware attachment from your mail. What is the impact of ransomware? You won't be able to access the files which are encrypted. Think about this from an enterprise perspective: most of our machines have at least a couple of network drives/file shares mapped to them. All those files (with specific file types) will get encrypted, and to decrypt those files you need to pay ransom money to the hackers! These kinds of attacks are increasing day by day! Altaro is organizing a webinar to explain what ransomware is, how to prevent it from hitting your Hyper-V file servers, what methods exist to recover impacted Hyper-V hosts (file servers) from ransomware, and real-world infections and resolutions (and failures!). The free webinar is scheduled for 23rd Aug 2016, 2PM CEST / 1PM BST (RoW) or 10AM PDT / 1PM EDT (US).
<urn:uuid:31a16ee6-9066-45e8-a3b0-1cab3f025a03>
CC-MAIN-2017-04
https://www.anoopcnair.com/what-is-ransomware-and-attend-webinar-to-know-more-about-it/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00123-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940465
254
2.65625
3
Life After DES
Burt Kaliski, Chief Scientist
Ever since DES was first proposed in the 1970s, it has been criticized for its short key size. Proposals for "brute-force" DES crackers have been circulated every so often, though until recently no DES cracker is known publicly to have been built. But the message has been the same for the past two decades: a 56-bit key would someday no longer be sufficient for security in many applications. As a result, researchers have proposed a number of replacements for DES over the past 20 years. Not surprisingly, many proposed replacements have been broken. Indeed, it was not until the early 1990s that researchers in the open crypto community began to appreciate the design principles behind DES. The work of Eli Biham and Adi Shamir on differential cryptanalysis (which DES designer Don Coppersmith acknowledged was known previously to the DES design team) unlocked an entire series of research results on block cipher design. Still, no block cipher has emerged as a standard rivaling DES. The announcement in 1997 that NIST would be developing an Advanced Encryption Standard (AES) (http://csrc.nist.gov/encryption/aes/aes_home.htm) has changed the landscape significantly. As the U.S. government's replacement for DES, the AES process gives a focal point for the research on alternative algorithms. Previously, the many proposals were distributed among research conferences and analysis was primarily of an academic nature. Now, 15 AES candidates (including one from RSA Laboratories, http://www.rsasecurity.com/rsalabs/rc6/) are being analyzed together by an international group of experts. The AES is at least a year and possibly more from completion, so developers who wish to move away from DES do not yet have a clear place to go. The financial services industry has developed ANSI X9.52, a standard for "triple-DES" encryption, as one interim solution. In triple-DES, each 64-bit block of a message is encrypted with three successive DES operations rather than one, and the operations involve two or three different keys. Triple-DES offers an effective key size of 112 bits in typical applications, as opposed to 56 bits for DES, but the encryption and decryption time per block is three times that of DES. Another interim solution, a kind of "lightweight" triple-DES, is DESX, an algorithm developed in the 1980s by Ron Rivest for RSA Data Security. In DESX, secret values are exclusive-ored with a message block before and after an ordinary DES operation (the X stands for exclusive-or). DESX provides an effective key size of about 120 bits against exhaustive search, with essentially no impact on encryption and decryption time. DESX also offers greater resistance to certain other types of attack than DES, though triple-DES is even stronger. An analysis of DESX can be found in an article by Phillip Rogaway in the Summer 1996 issue of RSA Laboratories' CryptoBytes newsletter (http://www.rsasecurity.com/rsalabs/cryptobytes/). Some perspective on the choices involved in triple-DES, as well as on DES and AES, can be found in the Summer 1998 CryptoBytes article by Eli Biham and Lars Knudsen. Both triple-DES and DESX are appropriate interim steps while waiting for AES. Also, by building in support for alternate algorithms, designers can pave the way for the eventual transition to AES. The AES will culminate two decades of research in block cipher design with open discussion unlike anything during the development of DES.
With its key size of 128 bits and larger, AES will have more than enough security against any brute-force search envisioned. While analysis of the basic design of the AES will need to continue as a means of providing assurance about the security of AES against other kinds of attack, an "AES Challenge" based on brute-force search is not something we're likely to see any time soon.
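The DESX construction described above is compact enough to show directly: ciphertext = K2 XOR DES_K(plaintext XOR K1). Here is a single-block sketch using the third-party pycryptodome package; key handling, padding, and modes of operation are deliberately omitted, and the keys shown are placeholders:

```python
from Crypto.Cipher import DES  # pip install pycryptodome

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def desx_encrypt_block(block, k, k1, k2):
    """DESX on one 8-byte block: whiten, DES-encrypt, whiten again."""
    assert len(block) == len(k) == len(k1) == len(k2) == 8
    inner = DES.new(k, DES.MODE_ECB).encrypt(xor(block, k1))
    return xor(inner, k2)

def desx_decrypt_block(block, k, k1, k2):
    inner = DES.new(k, DES.MODE_ECB).decrypt(xor(block, k2))
    return xor(inner, k1)

k, k1, k2 = b"8bytekey", b"whitenA!", b"whitenB!"   # placeholder keys
ct = desx_encrypt_block(b"ATTACK8!", k, k1, k2)
assert desx_decrypt_block(ct, k, k1, k2) == b"ATTACK8!"
```

The extra XOR steps cost almost nothing, which is exactly the "essentially no impact on encryption and decryption time" trade-off described above.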
<urn:uuid:b1803e0a-5ffb-4b54-b79f-375e56cdb44e>
CC-MAIN-2017-04
https://www.emc.com/emc-plus/rsa-labs/historical/life-after-des.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00123-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957239
833
2.671875
3
During the last couple of years, I have run into more and more questions regarding encryption and encryption key management than I thought existed. Over the years, I have collected my thoughts based on all of the questions and developed this distilled and very simplified version of guidance for those of you struggling with encryption. For the security and encryption purists out there, I do not represent this post in any way, shape, or form as the “be all, to end all” on encryption. Volumes upon volumes of books and Web sites have been dedicated to encryption, which is probably why it gets the bad reputation it does as the vast majority of these discussions are about as esoteric as they can be. In addition, this post is written in regards to the most common method of encryption used in encrypting data stored in a database or file and that is the use of an encryption algorithm against a column of data or an entire file. It does not cover public key infrastructure (PKI) or other techniques that could be used. So please do not flame me for missing your favorite algorithm, other forms of encryption or some other piece of encryption minutiae. There are all sorts of nuances to encryption methods and I do not want to cloud the basic issues so that people can get beyond the mysticism. This post is for educating people so that they have a modicum of knowledge to identify hyperbole from fact. The first thing I want to clarify to people is that encryption and hashing are two entirely different methods. While both methods obscure information, the key thing to remember is that encryption is reversible and hashing is not reversible. Even security professionals get balled up interchanging hashing and encryption, so I wanted to make sure everyone understands the difference. The most common questions I get typically revolve around how encryption works. Non-mathematicians should not need to know how an encryption algorithm works, that is for the experts that develop and prove that they work. In my opinion, unless you are a mathematician studying cryptography, I recommend that people trust the research conducted by the experts regarding encryption algorithms. That is not to say you should not know strong cryptography from weak cryptography. I am just suggesting that the underlying mathematics that defines a strong algorithm can be beyond even some mathematicians, so why we expect non-mathematicians to understand encryption at this level is beyond me. My point is that the algorithms work. How they work is not and should not be a prerequisite for management and even security professionals to using encryption. This leads me to the most important thing people need to know about encryption. If you only take away one thing from this post, it would be that strong encryption comes down to four basic principles. - The algorithm used; - The key used; - How the key is managed; and - How the key is protected. If you understand these four basic principles you will be miles ahead of everyone else that is getting twisted up in the details and missing these key points. If you look at PCI requirement 3, the tests are structured around these four basic principles. On the algorithm side of the equation, the best algorithm currently in use is the Advanced Encryption Standard (AES). AES was selected by the United States National Institute of Standards and Technology (NIST) in 2001 as the official encryption standard for the US government. AES replaced the Data Encryption Standard (DES) that was no longer considered secure. 
AES was selected through a competition where 15 algorithms were evaluated. While the following algorithms were not selected as the winner of the NIST competition, Twofish, RC6 and MARS were finalists and are also considered strong encryption algorithms. Better yet, for all of you in the software development business, AES, Twofish and MARS are open source. Other algorithms are available, but these are the most tested and reliable of the lot. One form of DES, Triple DES (3DES) with 168-bit key strength, is still considered strong encryption. However, how long that will remain the case is up for debate, as 3DES 168-bit was recently broken for up to six-character key lengths using the Amazon EC2 cloud. I have always recommended staying away from 3DES 168-bit unless you have no other choice, which can be the case with older devices and software. If you are currently using 3DES, I highly recommend you develop a plan to migrate away from using it as soon as you can, as it is just a problem waiting to happen. New implementations of encryption should never even consider 3DES as an option. This brings up another key takeaway from this discussion. Regardless of which algorithm is used, none of them is perfect. Over time, encryption algorithms are likely to be shown to have flaws or be breakable by the latest computing power available. Some flaws may be annoyances that you can work around, or you may have to accept some minimal risk of their continued use. However, some flaws may be fatal and require the discontinued use of the algorithm, as was the case with DES. The lesson here is that you should always be prepared to change your encryption algorithm. Not that you will likely be required to make such a change on a moment's notice. But as the experience with 3DES shows, what was considered strong in the past is no longer strong or should not be relied upon. Just because you use AES or another strong algorithm does not mean your encryption cannot be broken. If there is any weak link in the use of encryption, it is the belief by many that the algorithm is the only thing that matters. As a result, we end up with a strong algorithm using a weak key. Weak keys, such as a key comprised of the same character, a series of consecutive characters, an easily guessed phrase or a key of insufficient length, are the reasons most often cited as why encryption fails. In order for encryption to be effective, encryption keys need to be strong as well. Encryption keys should be a minimum of 32 characters in length. However, in the encryption game, the longer and more random the characters in a key the better, which is why you see organizations using 64 to 256 character long random key strings. This brings us to the topic of encryption key generation. There are a number of Web sites that can generate pseudo-random character strings for use as encryption keys. To be correct, any Web site claiming to generate a "random" string of characters is only pseudo-random. This is because the character generator algorithm is a mathematical formula and by its very nature is not truly random. My favorite Web site for this purpose is operated by Gibson Research Corporation (GRC). It is my favorite because it runs over SSL and is set up so that it is not cached or processed by search engines to better guarantee security. Using such a site, you can generate keys or seed values for key generators. You can combine multiple results from these Web sites to generate longer key values.
In addition, you can have multiple people individually go to the Web site, obtain a pseudo-random character string and then have each of them enter their character string into the system. This is also known as split key knowledge as individuals only know their part of the total key. Just because you have encrypted your data does not mean your job is over. Depending on how your encryption solution is implemented, you may be required to protect your encryption keys as well as periodically change those keys. Encryption key protection can be as simple as storing the keys on paper in a sealed envelope or on an encrypted USB thumb drive in a safe to as complex as investing in a key management appliance. Finally, key changes are where a lot of organizations run into issues. This is because key changes can require that the information be decrypted using the old key and then encrypted with the new key. That decrypt/encrypt process can take days, weeks even years depending on the volume of data involved. And depending on the time involved and how the decrypt/encrypt process is implemented, cardholder data can potentially be decrypted or exposed because of a compromised key for a long period of time. The bottom line is that organizations can find out that key changes are not really feasible or introduce more risk than they are willing to accept. As a result, protection of the encryption keys takes on even more importance because key changes are not feasible. This is another reason why sales of key management appliances are on the rise. That is encryption in a nutshell, a sort of “CliffsNotes” for the non-geeky out there. In future posts I intend to go into PKI and other nuances to encryption and how to address the various PCI requirements in requirements 3 and 4. For now, I wanted to get a basic educational foundation out there for people to build on and to remove that glassy eyed look that can occur when the topic of encryption comes up. Cross-posted from PCI Guru
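The key advice above translates directly into code on a modern system: use a cryptographically secure generator rather than a plain pseudo-random one. A sketch with Python's standard library secrets module, including a simplified version of the split-knowledge practice just described:

```python
import secrets  # CSPRNG-backed, standard library (Python 3.6+)

# 256 bits of key material, e.g. for AES-256.
aes_key = secrets.token_bytes(32)

# A 64-character hex string for places that need printable keys.
key_string = secrets.token_hex(32)
print(key_string)

# Split knowledge: two custodians each contribute half the material;
# neither alone knows the final key.
part_a = secrets.token_bytes(32)
part_b = secrets.token_bytes(32)
combined = bytes(x ^ y for x, y in zip(part_a, part_b))
```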
<urn:uuid:3a88fab0-f4ac-4bb7-89a2-fc62bc2c05a7>
CC-MAIN-2017-04
http://www.infosecisland.com/blogview/19690-Encryption-Basics-Its-Not-a-Mystical-Science.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00517-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958312
1,817
2.828125
3
Carrier Grade Network Address Translation Network Address Translation (NAT) is a technology that has been used for a long time and by now has a ubiquitous presence in firewalls and Internet gateways. Carrier Grade NAT (CGN), also known as Large Scale NAT (LSN) is now becoming the new standard. Initially, traditional NAT was used for translating the address ranges between two networks. In the last decade, NAT has been used for virtually every household or enterprise connection, as part of a home Internet router. The main contribution to NAT's popularity is the ability to share a global (public) IP address among multiple local (private) IP addresses. IP addresses have become increasingly scarce over the last decade; ISPs would only hand out one IP address per home subscriber. The depletion has gotten even worse recently: In 2011, the Internet Assigned Numbers Authority (IANA) issued the last remaining /8 address blocks to the Regional Internet Registries (RIR). NAT can help in alleviating the IPv4 address shortage by oversubscribing the remaining global IP addresses. The problem with NAT is that it breaks the end-to-end principle of networking. Applications such as peer-to-peer (P2P), VoIP, video streaming, tunneling or any application that uses IP addresses in the payload, suffer from this. NAT behavior is not fully standardized among network equipment vendors, though there are IETF RFCs that help make a NAT more transparent and deterministic. Evolution To CGN Carrier Grade NAT (CGN/CGNAT), also known as Large Scale NAT (LSN), is the next level for NAT implementations; it aims to provide a solution for Internet Service Providers (ISPs) and carriers, but also is a good replacement for NAT devices in an enterprise network. CGN enables these organizations to deliver transparent IPv4 connectivity and a seamless user experience while oversubscribing their limited global IPv4 addresses. Carriers can assign local (private) IPv4 addresses in their access network, and use a centralized device to manage the address translation to the global (public) Internet. This setup has one level of NAT, and is also referred to as NAT44. CPE NAT devices create a second translation layer; this setup is also referred to as NAT444. - Transparent connectivity (EIM/EIF) - User Quotas CGN provides the most transparent NAT connectivity for a device because it has features such as Endpoint Independent Mapping (EIM), Endpoint Independent Filtering (EIF) and Hairpinning. Traditional NAT implementations do not allow any traffic that is initiated from the outside (EIM, EIF), or for inside protocols to loop their traffic back to the inside (Hairpinning). Another important aspect of CGN is the ability for an administrator to limit the amount of TCP and UDP ports that can be used by a single subscriber. This is crucial in order to maintain fairness in sharing port resources among subscribers. "Botnets" used in Distributed Denial of Service (DDoS) attacks use a large amount of connections per end device, which rapidly depletes port availability. If left unregulated, the overall connectivity for other subscribers can easily be compromised by external individuals. While CGN provides the most transparent NAT connectivity, some protocols require special consideration, for example they may use separate control and data IP/port combinations in their communications, which have to be translated. An Application Layer Gateway (ALG) provides deep-packet inspection to identify and allow correct NAT traversal for these applications. 
Because the local private IP address is not shown to the public Internet, logs are another major aspect of CGN that have to be considered. All devices that connect to the Internet produce a multitude of sessions. Tracking all sessions produces a vast amount of log messages. A CGN device must provide various advanced techniques that help reducing the volume of logs, such as Port Batching, Zero-Logging, compact logging and others. CGN is designed for larger scale global IP address oversubscription, while providing the most transparent connectivity for a user. This means it is not only a solution for ISPs and carriers, but for enterprises as well. This is why LSN and CGN are terms that are often used interchangeably. The industry is gravitating towards the term CGN. Typically, CGN devices handle large amounts of concurrent connections, and high bandwidth throughput. Note that when a NAT device (such as a firewall or legacy load balancer) claims to be carrier grade because it is able to handle large volumes of traffic, does not mean it is a Carrier Grade NAT device, as some vendors try to make their customers believe. CGN Use Cases A10 has many customers worldwide that have successfully deployed CGN as part of their IPv6 migration strategy. For example, a deployment at one of the nation's largest mobile carriers uses A10's CGN solution to maintain IPv4 connectivity for the ever growing mobile and smartphone market. The A10 devices provide a feature-rich CGN solution, and superior High Availability (HA) because of active session synchronization. This means that all active sessions remain intact if a single A10 device were to lose its power, for example. The A10 devices leave the competition behind with large number of features supported, superior processing power, while being extremely cost-efficient (typically 10x to 100x less per subscriber cost versus traditional network vendors). One single A10 device provides more power than multiple hyper-expensive, chassis-based processing cards that are part of large networking vendor's NAT solutions. More features and more power out of the box means A10's CGN solution can fit in and adapt to any growing network. The A10 devices can be easily clustered together, combining the processing power in a way that is easy to administer.
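The per-subscriber port quota mentioned above is essentially bookkeeping: refuse new translations once a subscriber holds its share of the shared public ports. The toy allocator below illustrates only that accounting; the quota size and addresses are invented:

```python
import itertools
from collections import defaultdict

PORT_QUOTA = 1024                       # max ports any one subscriber may hold
public_ports = itertools.count(1024)    # toy stand-in for the shared port pool
allocated = defaultdict(list)           # subscriber -> ports in use

def allocate_port(subscriber):
    """Hand out a public port unless the subscriber's quota is exhausted."""
    if len(allocated[subscriber]) >= PORT_QUOTA:
        raise RuntimeError(f"{subscriber}: port quota exceeded")
    port = next(public_ports)
    allocated[subscriber].append(port)
    return port

allocate_port("10.0.0.5")               # a normal subscriber
for _ in range(PORT_QUOTA + 1):         # a misbehaving host (e.g. bot traffic)
    try:
        allocate_port("10.0.0.99")
    except RuntimeError as err:
        print(err)                      # the quota protects the shared pool
        break
```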
<urn:uuid:4e221599-1067-41a7-9dc2-8df76686403f>
CC-MAIN-2017-04
https://www.a10networks.com/resources/glossary/carrier-grade-network-address-translation
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00241-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932661
1,191
3.234375
3
Net-Worm:W32/Sasser refers to a small family of worms that spread to new hosts over the Internet by targeting the known MS04-011 (LSASS) vulnerability, which is caused by a buffer overrun in the Local Security Authority Subsystem Service.

Allow F-Secure Anti-Virus to disinfect the relevant files. For more general information on disinfection, please see Removal Instructions.

Caution: Manual disinfection is a risky process; it is recommended only for advanced users. To manually disinfect an infected system, first apply the Microsoft patch MS04-011, then use Task Manager to kill the "avserve.exe" process, then delete the file AVSERVE.EXE from your Windows directory and reboot. For step-by-step instructions, see Microsoft's site.

Sasser will affect all machines that:

- Run Windows XP or Windows 2000
- Have not been patched against the known MS04-011 (LSASS) vulnerability
- Are connected to the Internet without a firewall

This vulnerability has been addressed and patched; please refer to the Microsoft Bulletin (http://www.microsoft.com/technet/security/bulletin/MS04-011.mspx) for more details.

Signs of infection are the existence of a file named 'C:\win.log' and frequent crashes of 'LSASS.EXE'. Sasser generates traffic on TCP ports 445, 5554 and 9996.

Sasser was written in Visual C++. The first variant, Sasser.A, spreads as a single executable which is packed and protected with several envelopes. A later variant, Sasser.B, uses the filename AVSERVE2.EXE.

When the worm enters the system, it creates a copy of itself in the Windows directory as 'avserve.exe'. This copy is added to the Registry as:

- [SOFTWARE\Microsoft\Windows\CurrentVersion\Run] "avserve.exe" = "%WinDir%\avserve.exe"

To ensure that only one copy of the worm is running, it creates a mutex named 'Jobaka3l'.

Sasser exploits the MS04-011 (LSASS) vulnerability to gain access to remote systems. The worm starts 128 scanning threads that try to find vulnerable systems at random IP addresses. Computers are probed on port 445, which is the default port for Windows SMB communication on NT-based systems. The probing might crash unpatched computers; the original advisory showed the resulting Windows error messages for both Windows 2000 and Windows XP.

When attacking, the worm first determines the version of the remote operating system, then uses the appropriate parameters to attack the host. Different parameters are used for:

- Windows XP (universal exploit)
- Windows 2000 (universal exploit)
- Windows 2000 Advanced Server (SP4 exploit)

Other operating systems, such as Windows Me and NT, are not infected by this worm.

If the attack is successful, a shell is started on port 9996. Through the shell port, Sasser instructs the remote computer to download and execute the worm from the attacking computer using FTP. The FTP server listens on port 5554 on all infected computers, with the purpose of serving the worm to other hosts that are being infected. Transactions through the FTP server are logged to 'C:\win.log'.

Summary of TCP ports used by the worm:

- 445/TCP: The worm attacks through this port
- 5554/TCP: FTP server on infected systems
- 9996/TCP: Remote shell opened by the exploit on the vulnerable hosts
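The infection signs listed above are easy to test for programmatically. This is a hedged Python sketch for a Windows host: the file path, registry value and port numbers come straight from the description, but the script is purely illustrative and no substitute for a real scanner.

```python
# Check a Windows machine for the Sasser signs described above:
# C:\win.log, the avserve.exe Run key, and the worm's listening ports.

import os
import socket
import winreg   # Windows-only standard library module

def has_win_log():
    return os.path.exists(r"C:\win.log")

def has_run_key():
    path = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run"
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            winreg.QueryValueEx(key, "avserve.exe")
            return True
    except FileNotFoundError:
        return False

def port_open(port):
    with socket.socket() as s:
        s.settimeout(0.5)
        return s.connect_ex(("127.0.0.1", port)) == 0

signs = {
    "C:\\win.log present": has_win_log(),
    "avserve.exe Run key": has_run_key(),
    "FTP server on 5554":  port_open(5554),
    "shell on 9996":       port_open(9996),
}
for name, hit in signs.items():
    print(f"{name}: {'FOUND' if hit else 'not found'}")
```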
<urn:uuid:5b34dc70-cdb3-449b-9be4-1bff8b9d4a4c>
CC-MAIN-2017-04
https://www.f-secure.com/v-descs/sasser.shtml
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00572-ip-10-171-10-70.ec2.internal.warc.gz
en
0.845825
768
2.671875
3
University of Texas at Austin researcher George Biros has received a $2.85 million grant from the Department of Energy to develop methods for estimating uncertainty in large-scale computer simulations. The project has three main thrusts of particular interest to the DOE: the melting of continental ice sheets in Antarctica; complex fluid flows (such as what is observed in potential algae biofuels); and complex multiscale models.

Prototype simulation of the dynamics of the Antarctic ice sheet

The modeling and simulation of complex natural or engineered systems can require billions of parameters, each of which involves a degree of uncertainty, notes Biros, a mechanical engineering professor at the Institute for Computational Engineering and Sciences (ICES). "Estimating the overall uncertainty of the outcome can be quite challenging," he added. Biros explains that the mathematical structure that underlies simulations of physical systems is well understood; it's the input values that are the source of uncertainty. The larger the scale of the simulation, the more uncertainty is introduced. Even small unknowns can have a big impact on accuracy. One way the effects of uncertainty can appear is the cone-like shape of predicted hurricane paths. "You typically see the cone opening as you look into the future," Biros said, "which means whatever small perturbation that you have gets amplified, so you're more and more uncertain about the future." So far, uncertainty research has mainly been confined to small-scale systems with relatively few parameters, using software that can run on a laptop. Biros and his team are studying more sophisticated models, which require the advanced processing power of supercomputers. While this work has direct implications for energy applications, it can also serve as a model for other complex systems. If this project is successful, Biros believes that, with modifications, it should be transferable to other systems. Well-known in HPC circles, George Biros is a two-time recipient of the Gordon Bell Prize. Awarded by the Association for Computing Machinery, the prize has been referred to as the Nobel Prize for supercomputing.
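The "opening cone" Biros describes is easy to reproduce with a toy Monte Carlo experiment: push many samples of a slightly uncertain input through a sensitive model and measure how the output spread grows. The model below (a chaotic logistic map) and every number in it are invented purely to illustrate the amplification effect.

```python
# Toy Monte Carlo uncertainty propagation: a chaotic model amplifies a
# tiny input uncertainty into a large output uncertainty (the "cone").

import random
import statistics

def model(x0, steps=20, r=3.9):
    # Chaotic logistic map: nearby trajectories diverge exponentially.
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

input_std = 1e-4
samples = [model(random.gauss(0.2, input_std)) for _ in range(10_000)]
print(f"input std:  {input_std}")
print(f"output std: {statistics.stdev(samples):.3f}")  # orders of magnitude larger
```

Real DOE-scale problems replace this scalar map with PDE solves over billions of parameters, which is why brute-force sampling alone does not scale and smarter methods, run on supercomputers, are needed.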
<urn:uuid:26dc60df-0085-4d54-b280-6dd6c2253b6c>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/10/09/quantifying-uncertainty-at-scale/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00572-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936577
446
2.828125
3
This white paper examines the technology behind Web Services and Web Services security: how the system is made available to the user, and the way connections are made to back-end (and therefore sensitive) data. These different elements come together to make Web Services a portal for users to access data, but they also provide different entry points which may be exploited for illegitimate purposes. These security flaws bring about the need for an added security-assessing component in the Acunetix WVS solution. Support for Web Services vulnerability scanning is now provided by a dedicated component which is specifically designed to detect exploitable entry points in a Web Services system.

The web services building blocks

The Web Service architecture comprises different technologies which enable a client to obtain data from a server using the SOAP protocol. SOAP originally stood for Simple Object Access Protocol; however, it is now a free-standing acronym, since the W3C deemed the expansion misleading. A Web Service provides a web API (application programming interface) which enables two applications to communicate using XML over the web or a network connection. Web Services were created to act as a middle agent, solving the problem of application-to-application integration. A Web Service may be developed in any language and deployed over any platform, but most importantly it may be accessed by any other application regardless of the language used to develop it. SOAP uses XML to carry the specific message together with the service, the interface or port type, and the service binding (the binding contains information about the service, such as its hosting redirector and access point).

Technologically defined, the word "service" describes a resource which is utilized by an application and not by a person. Following that definition, a Web Service is a server-oriented system: it operates on the server side and performs a task when it is called upon by an application. Like any service, a Web Service requires an API to provide an interface which allows it to be called by another application. Just as a service on a common personal computer is registered with the operating system, which allows applications to locate the specific service needed for a specific task, a Web Service is registered in a Web Service registry, which an application uses to call the specific service it requires. As mentioned earlier, a Web Service is not language- or platform-dependent; it uses XML to communicate with other services or applications, and just like any internet web-based system it does not require a specific platform on which to operate.

XML (Extensible Markup Language) is a versatile language which was designed to enable different systems to share information and instructions in a universal manner. Web Services use a format of XML developed to describe network services as a set of components which exchange messages containing procedure or document descriptive data. This language is known as WSDL (Web Services Description Language), and it is a format of XML because of XML's flexibility as a markup language. A WSDL file contains information about the different components and their respective messages, the message format being used, and the network protocol over which the messages are being communicated.
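To ground these pieces, here is a minimal sketch of what a client-side SOAP 1.1 call looks like on the wire. The endpoint URL, namespace and operation name are invented for illustration; only the envelope structure and headers follow the SOAP 1.1 convention.

```python
# Hand-rolled SOAP 1.1 request for a hypothetical currency-conversion
# service (endpoint, namespace and operation names are invented).
import urllib.request

ENDPOINT = "http://example.com/soap/converter"   # hypothetical

envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <ConvertCurrency xmlns="http://example.com/converter">
      <From>USD</From>
      <To>EUR</To>
      <Amount>100</Amount>
    </ConvertCurrency>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": '"http://example.com/converter/ConvertCurrency"',
    },
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))   # the SOAP response envelope
```

Production clients normally generate this plumbing from the WSDL rather than writing envelopes by hand, but the bytes on the wire look much like the above.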
Simply put, the WSDL file is the key communicative agent between the various entities, carrying service messages and instructions between them. An essential element of the Web Services architecture is the central directory which contains all the service descriptions. A service-oriented system must have a registry which associates the right service with the request being processed, and which also functions as a discovery system, letting the requester identify the correct service. The mechanism which performs this task is the UDDI provider. UDDI stands for Universal Description, Discovery and Integration. The UDDI provider hosts a standardized record which holds the profiles of registered services, and through this standardized profile it is possible to match a particular request with its corresponding service. International and publicly available business service descriptions are hosted in a directory known as a Public Business Registry…

Web services in action

After becoming familiar with the key elements responsible for making Web Services work, one needs to see how these elements interact with the whole system, from the client requesting a service to perform a task, to the service being executed, to data delivery. A simple example is a Web Service which allows a client to convert one currency to another. The web application used as the front end contains a simple form which allows the user to select the starting currency and the currency to convert to. The user submits this data, and the application contacts the UDDI provider to look up the service required to perform this conversion. The UDDI provider then creates the binding, which associates the message with the requested service and its location. The UDDI provider then returns a WSDL file to the client, which the application completes as a SOAP message. The SOAP message then gets sent to the application server which hosts the Web Service needed to execute the currency conversion; this is done using the binding details in the WSDL file from the UDDI. Using the SOAP instructions, the Web Service can correctly execute the task according to the parameters it was given, and deliver the processed currency conversion back to the requestor.

Web services security concerns

Fundamentally, Web Services operate on the same structure used by normal web applications. The beginning of the chain is a request forwarded by an application viewed in a web browser, which for Web Services is a SOAP request over HTTP. Since attacker-supplied SOAP data is consumed by the server, the threat is aimed primarily at the server itself. The following are methods of attack, and how Web Services can be exploited to carry them out:

XML payload and parser attacks. Common effects: DoS (Denial of Service), data corruption, malicious code execution. An attacker can craft XML data that causes the XML to call upon itself repetitively, constantly increasing in size; this can cause a memory overflow or trigger error messages which reveal information about the application. A DoS attack can also be caused by forcing a server to parse an abnormally long XML file: parsing such a file consumes far more resources than generating it, and can crash the application. Another type of attack consists of sending a block of data to an application, which is stored in a buffer of insufficient size.
This block of data can then overwrite genuine data and cause a function return which gives control to the malicious code in the hacker's data block.

Injection attacks. Common effects: command execution, data theft and deletion, schema poisoning. SQL injection is a high-risk exploit which may be performed using SOAP messages. If a server does not validate data correctly, a SOAP message can easily be used to craft XML data which inserts a parameter into an SQL query and has the server execute it with the rights of the Web Service. SQL injection is only one of the threats a server is exposed to if data is not validated; another such example is schema poisoning. A schema file is what an XML parser uses to understand the XML's grammar and structure, and it contains essential pre-processor instructions. An attacker may damage the XML schema or replace it with a modified one, which would then allow the parser to process malicious SOAP messages and specially crafted XML files that inject OS commands on the server or database.

Session hijacking. Common effects: obtaining user privileges within the application or network. Session hijacking involves gaining illegal control of a legitimate user's session state. It occurs when an attacker steals a valid session ID (a valid session cookie) and uses it to gain that particular user's privileges in the application. By intercepting or sniffing SOAP messages, an attacker can hijack a user's session in the same ways as with normal web application attacks; however, once a hacker is authenticated as a valid user, he may perform more dangerous activities.

Summary and Conclusions

The internet as we know it is quickly surpassing the simple need to obtain information with ease through web applications, and is now evolving into a multitude of systems which perform tasks, calculations, accurate searches, and many other complex operations. Web Services are the perfect example of a solution to the need for a simple system which allows many different technologies to collaborate and communicate with each other. Being available to the end user over the internet, Web Services will keep increasing in popularity due to their functionality, and this popularity will also increase the threat to the servers hosting them.

Over the past year, there has been increased concern among developers and security analysts searching for a tool to reveal the vulnerabilities associated with Web Services. That concern has not yet raised enough awareness about the risks which threaten the security of the servers hosting Web Services and the data which could be compromised.

Acunetix is a feature-packed solution for detecting vulnerabilities and securing web applications. The Web Services security scanning tool allows you to run an automated vulnerability assessment against a Web Service with a more accurate and improved version of the same scanning engine which until now assessed web applications. Another new addition is the Web Services Security Editor, which extends the functionality of the Web Services scanner by allowing deeper analysis of XML responses, WSDL structure and WSDL XML, with syntax highlighting for all coding languages and regular-expression searching. These new features make Acunetix a complete solution for securing web applications and now also Web Services.
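On the defensive side, two of the attacks above have well-known mitigations: refuse suspicious XML before parsing it, and bind SOAP-supplied values into SQL instead of splicing them into query text. A hedged Python sketch follows; defusedxml is a third-party package, and the element name, table and query are illustrative.

```python
# Defensive sketch: reject entity-expansion tricks (recursive/"bomb" XML)
# before parsing, and use a parameterized query so SOAP-supplied values
# can never rewrite the SQL. Requires: pip install defusedxml

import sqlite3
from defusedxml import ElementTree as SafeET
from defusedxml import DefusedXmlException

def handle_soap_body(raw_xml, db):
    try:
        root = SafeET.fromstring(raw_xml)     # raises on forbidden constructs
    except DefusedXmlException:
        return "rejected: suspicious XML payload"
    account = root.findtext(".//AccountId")   # illustrative element name
    # Parameterized query: the value is bound, never concatenated.
    return db.execute(
        "SELECT balance FROM accounts WHERE id = ?", (account,)
    ).fetchone()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (id TEXT, balance REAL)")
db.execute("INSERT INTO accounts VALUES ('42', 99.5)")
print(handle_soap_body("<Body><AccountId>42</AccountId></Body>", db))
```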
<urn:uuid:6a13f74e-d484-4db1-aeba-262d36ce378a>
CC-MAIN-2017-04
https://www.acunetix.com/websitesecurity/web-services-wp/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00482-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923878
1,921
2.921875
3
Missel J., Texas A&M University | Mortari D., Aerospace Engineering | Journal of Guidance, Control, and Dynamics | Year: 2013

Low Earth orbit is overcluttered by rogue objects. Traditional satellite missions are not efficient enough to collect an appreciable amount of debris due to the high cost of orbit transfers. Many alternate proposals are politically controversial, costly, or dependent on further technological advances. This paper proposes an efficient mission structure and bespoke hardware to deorbit debris by capturing and ejecting it. Capture and ejection are executed through plastic interactions, and the momentum exchanges they produce assist the satellite in transferring to subsequent debris with a substantial reduction in fuel requirements. The proposed hardware also exploits existing momentum to save fuel. Capturing debris at the ends of a spinning satellite, adjusting the angular rate, and then simply letting go at a specified time provides a simple mechanism for redirecting the debris to an Earth-impacting trajectory or a lower perigee. This paper provides analyses of the orbit, hardware functionality and aspects of the control for debris collection. Copyright © 2012 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.

The research team have developed the first demonstration of 3D printing of composite materials. Ultrasonic waves produce a pattern of microscopic glass fibres which gives the component increased strength. A laser cures the epoxy resin and creates the component. Credit: Matt Sutton, Tom Llewellyn-Jones and Bruce Drinkwater

3D printing techniques have quickly become some of the most widely used tools to rapidly design and build new components. A team of engineers at the University of Bristol has developed a new type of 3D printing that can print composite materials, which are used in many high-performance products such as tennis rackets, golf clubs and aeroplanes. This technology will soon enable a much greater range of things to be 3D printed at home and at low cost. The study, published in Smart Materials and Structures, creates and demonstrates a novel method in which ultrasonic waves are used to carefully position millions of tiny reinforcement fibres as part of the 3D printing process. The fibres are formed into a microscopic reinforcement framework that gives the material strength. This microstructure is then set in place using a focused laser beam, which locally cures the epoxy resin and then prints the object. To achieve this, the research team mounted a switchable, focused laser module on the carriage of a standard three-axis 3D printing stage, above the new ultrasonic alignment apparatus. Tom Llewellyn-Jones, a PhD student in advanced composites who developed the system, said: "We have demonstrated that our ultrasonic system can be added cheaply to an off-the-shelf 3D printer, which then turns it into a composite printer." In the study, a print speed of 20mm/s was achieved, which is similar to conventional additive-layer techniques. The researchers have now shown the ability to assemble a plane of fibres into a reinforcement framework. The precise orientation of the fibres can be controlled by switching the ultrasonic standing-wave pattern mid-print. This approach allows the realisation of complex fibrous architectures within a 3D printed object.
The versatile nature of the ultrasonic manipulation technique also enables a wide range of particle materials, shapes and sizes to be assembled, leading to the creation of a new generation of fibrous reinforced composites that can be 3D printed. Bruce Drinkwater, Professor of Ultrasonics in the Department of Mechanical Engineering, said: "Our work has shown the first example of 3D printing with real-time control over the distribution of an internal microstructure and it demonstrates the potential to produce rapid prototypes with complex microstructural arrangements. This orientation control gives us the ability to produce printed parts with tailored material properties, all without compromising the printing." Dr Richard Trask, Reader in Multifunctional Materials in the Department of Aerospace Engineering, added: "As well as offering reinforcement and improved strength, our method will be useful for a range of smart materials applications, such as printing resin-filled capsules for self-healing materials or piezoelectric particles for energy harvesting." More information: Thomas M Llewellyn-Jones et al. 3D printed components with ultrasonically arranged microscale structure, Smart Materials and Structures (2016). DOI: 10.1088/0964-1726/25/2/02LT01

Once unpacked, Commander Scott Kelly will attach the satellite to the JEM slide table interfaced with CYCLOPS, a mechanism used to robotically deploy satellites from the ISS. The CYCLOPS Experiment Attachment Fixture (EAF) is attached to the large cylindrical CYCLOPS standoff on the bottom of AGS4, and will be used to lock AGS4 onto the deployment table which will release the satellite from the ISS. The deployment activities scheduled for Friday include capturing CYCLOPS with the JEM Remote Manipulator System, maneuvering CYCLOPS to the deployment location, and final deployment of AGS4 from CYCLOPS. There are four switches, embedded on the CYCLOPS EAF, that inhibit AGS4 from turning on. The first event that will occur after deployment is the release of these inhibits. Once these inhibits are removed, the Electrical Power System (EPS) starts and initiates a 10-minute timer. After the timer ends, the Command and Data Handling System starts and initiates a checkout of every system on AGS4. When complete, AGS4 will begin sending a signal to Earth with its Low Data Rate (LDR) radio, indicating that it is alive and well. The team expects to start receiving signals from AGS4 on Friday evening. Several days after the release of AGS4, and upon verification that all systems are running correctly, the satellite will power on its torque coils and detumble itself. The torque coils generate a strong magnetic field that will orient AGS4 along the magnetic field of the Earth. There are three torque coils on board, giving full 3-axis motion. The detumble process will negate any rotation imparted on AGS4 by the CYCLOPS deployment mechanism and orient AGS4 for optimal data downlinking. Once this is done, data from the satellite can be downloaded faster and all necessary software patches can be uplinked to AGS4. The AggieSat4 team has been busy preparing for the deployment, setting up the AggieSat ground station at Texas A&M University's Riverside campus. Three antennas have been installed that will be used to communicate with AGS4 while it orbits Earth and collects valuable mission data.
One of the LDR antennas has been tested and was able to receive a signal from the same type of handheld radio that is on board the satellite. This antenna will be the main receiver of data transmitted from AGS4 while in orbit. The team is working to ensure the other two antennas transmit and receive properly, and will then raise them to the top of the truss structure they are currently mounted on. Dr. Helen Reed, professor in the Department of Aerospace Engineering, and the AggieSat team members will be at NASA Johnson Space Center in Houston for the installation and deployment activities. Team members Dexter Becklund and Andrew Tucker will sit on console in Mission Control to assist the astronauts with unpacking, assembly, procedures and any queries. AggieSat team members include Adelin Destain, David Alfano, Dexter Becklund, Ryan Campbell, Jake Cooper, Daniel Ghan, Michelle Gilbert, Hyder Hasan, Andy Holm, Alex Hutson, Mitchel McDonald, Sig Salinas, Robert Singletary and Andrew Tucker.

Say hello to Nadine, a “receptionist” at Nanyang Technological University (NTU Singapore). She is friendly, and will greet you back. Next time you meet her, she will remember your name and your previous conversation with her. She looks almost like a human being, with soft skin and flowing brunette hair. She smiles when greeting you, looks you in the eye when talking, and can also shake hands with you. And she is a humanoid. Unlike conventional robots, Nadine has her own personality, mood and emotions. She can be happy or sad, depending on the conversation. She also has a good memory, recognizing the people she has met and remembering what they said before. Nadine is the latest social robot developed by scientists at NTU. The doppelganger of its creator, Professor Nadia Thalmann, Nadine is powered by intelligent software similar to Apple’s Siri or Microsoft’s Cortana. Nadine could be a personal assistant in offices and homes in future, and she could serve as a social companion for the young and the elderly. A humanoid like Nadine is just one of the interfaces where the technology can be applied. It can also be made virtual and appear on a TV or computer screen, becoming a low-cost virtual social companion. With further progress in robotics sparked by technological improvements in silicon chips, sensors and computation, physical social robots such as Nadine are poised to become more visible in offices and homes in future. Professor Thalmann, the director of the Institute for Media Innovation who led the development of Nadine, said these social robots are among NTU’s many exciting new media innovations that companies can leverage for commercialization. “Robotics technologies have advanced significantly over the past few decades and are already being used in manufacturing and logistics. As countries worldwide face the challenges of an aging population, social robots can be one solution to address the shrinking workforce, become personal companions for children and the elderly at home, and even serve as a platform for healthcare services in future,” explained Professor Thalmann, an expert in virtual humans and a faculty member at NTU’s School of Computer Engineering. “Over the past four years, our team at NTU have been fostering cross-disciplinary research in social robotics technologies — involving engineering, computer science, linguistics, psychology and other fields — to transform a virtual human, from within a computer, into a physical being that is able to observe and interact with other humans.
“This is somewhat like a real companion that is always with you and conscious of what is happening. So in future, these socially intelligent robots could be like C-3PO, the iconic golden droid from Star Wars, with knowledge of language and etiquette.”

Telepresence robot lets people be in two or more places at once

Nadine’s robot-in-arms, EDGAR, was also put through its paces at NTU’s new media showcase, complete with a rear-projection screen for its face and two highly articulated arms. EDGAR is a telepresence robot optimized to project the gestures of its human user. By standing in front of a specialized webcam, a user can control EDGAR remotely from anywhere in the world. The user’s face and expressions will be displayed on the robot’s face in real time, while the robot mimics the person’s upper-body movements. EDGAR can also deliver speeches by autonomously acting out a script. With an integrated webcam, he automatically tracks the people he meets to engage them in conversation, giving them informative and witty replies to their questions. Such social robots are ideal for use at public venues, such as tourist attractions and shopping centers, as they can offer practical information to visitors. Led by Associate Professor Gerald Seet from the School of Mechanical & Aerospace Engineering and the Being There Centre at NTU, this made-in-Singapore robot represents three years of research and development. “EDGAR is a real demonstration of how telepresence and social robots can be used for business and education,” added Professor Seet. “Telepresence provides an additional dimension to mobility. The user may project his or her physical presence at one or more locations simultaneously, meaning that geography is no longer an obstacle. “In future, a renowned educator giving lectures or classes to large groups of people in different locations at the same time could become commonplace. Or you could attend classes or business meetings all over the world using robot proxies, saving time and travel costs.” Given that some companies have expressed interest in the robot technologies, the next step for these NTU scientists is to look at how they can partner with industry to bring them to market.

The Soret Coefficient in Crude Oil experiment will measure how hydrocarbon molecules redistribute when the temperature is not uniform. Learning how complex liquids behave is of interest to the petroleum industry and academia, who can apply the data to model the real-life conditions of oil reservoirs deep underground. These measurements can only be performed in weightlessness. Set for launch on China's SJ-10 satellite on 6 April local time, the experiment consists of six sturdy cylinders, each containing just one millilitre of crude oil compressed to up to 500 times normal pressure at sea level on Earth – making it one of the highest-pressure items ever launched into space. Lifting off from China's Jiuquan site in the Gobi desert, the satellite will spend almost two weeks in orbit before it returns to Earth. After landing in Sichuan province, the team will retrieve the experiment for detailed analysis. The experiment is a partnership between ESA, China's National Space Science Centre, France's Total oil company and China's PetroChina oil company. "The experiment is designed to sharpen our understanding of deep crude oil reservoirs up to 8 km underground," explains Antonio Verga, overseeing the project for ESA. "Imagine a packet of cornflakes – over time the smaller flakes drop to the bottom under gravity.
On a molecular scale this experiment is doing something similar, but looking at how temperature causes fluids to rearrange in weightlessness," says ESA's Olivier Minster. "Deep underground, crushing pressure and rising temperature as one goes down are thought to lead to a diffusion effect – petroleum compounds moving due to temperature, basically defying gravity. "Over geological timescales, heavier deposits end up rising, while lighter ones sink. "The aim is to quantify this effect in weightlessness, to make it easier to create computer models of oil reservoirs that will help guide future decisions on their exploitation." The experiment's crude oil sits in six small titanium cylinders. One end of each cylinder is warmed while the other end is cooled. Before the capsule returns to Earth, a valve is closed to prevent the liquid from remixing during reentry. Sending such a high-pressure device into space is not to be taken lightly, and the cylinders were built to withstand more than double the pressure they will see during normal operations – 1000 times atmospheric pressure. A specialist company, Sanchez Technology in France, worked for the prime contractor QinetiQ Space in Belgium. The electronic unit was developed and built by the Shandong Institute of Aerospace Engineering at Yantai. The experiment passed testing with the SJ-10 spacecraft at the China Academy of Space Technology in Beijing last year – including thermal cycling to reproduce the extreme changes in temperature the experiment will be subjected to during its orbits of Earth, as well as vibration and shock testing to simulate launch and reentry. Two weeks ago, ESA and QinetiQ staff took the 8.5 kg flight unit – about the size of a desktop computer – on a four-day drive to the remote launch site in Gansu province. The experiment with its oil-filled cells is now ready for its journey to space tomorrow.
<urn:uuid:013ccded-4648-408d-b620-349ea0330142>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/aerospace-engineering-691888/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00206-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925465
3,209
3.234375
3
10 Household items that are already part of the Internet of Things

As the internet, mobile phones, and electronics in general become embedded into virtually every facet of our lives, connectivity between devices, and the additional layer of functionality it entails, is becoming an increasingly important focus of manufacturers, software developers, and technology innovators. The "Internet of Things" (IoT) is the most recent iteration of a decades-long push towards greater inter-device integration and automation, aimed at improving the accuracy, efficiency, and affordability of the devices individuals and businesses use on a daily basis.

What is the Internet of Things?

The Internet of Things is essentially a network of devices (such as appliances, phones, computers, personal electronics and more) that have internet connectivity built into them, allowing them to send and receive data autonomously. The "things", or connectivity-enabled devices, in the IoT all have unique identifiers that allow them to transmit data and communicate with one another without human input. The IoT extends far beyond laptops and mobile phones, and includes a wide array of devices, appliances, and machinery with built-in internet connectivity and data-sharing functions, tremendously increasing their utility and functionality.

The IoT and its vast potential for improving the efficacy and efficiency of the objects we interact with every day have provided the impetus for an explosion in the number of internet-enabled devices currently being manufactured. Researchers estimate that there are more than 12 billion devices that can currently connect to the internet and that, by 2020, there will be 26 times as many connected things as individual humans on the planet. A study published by Gartner, an international research firm, estimates that, by 2020, there will be roughly 20.8 billion internet-enabled devices that store, analyze and transmit data to us and one another. In 2016, almost 5.5 million things will be added to the IoT every single day. Many of these objects will be household items that you interact with on a daily basis, and many of them are things that you may not even know are already internet-connected and already part of the IoT.

Light Bulbs — Philips Hue LED light bulbs have a "bridge" that allows you to connect your smartphone or tablet to your LED light bulbs. Users can program their lighting according to their preferences remotely and can fine-tune it with remarkable precision. Whether it's dimming the lights at certain times of the day or automatically coordinating the lighting in your room with the Netflix program you're watching, Philips' IoT-enabled light bulbs allow users ultimate customization via their Wi-Fi capabilities and the bridge that connects to users' smartphones.

Shirts — The PoloTech shirt by Ralph Lauren has silver fibers woven directly into its fabric. These silver fibers allow the shirt to connect with your iPhone or Apple Watch and receive instantaneous, real-time workout data. From determining your heart rate to measuring your energy output, IoT-enabled T-shirts are revolutionizing the way people work out and track their athletic performance. Internet-enabled T-shirts and the apps they use to display the data they aggregate often offer user-specific health advice to improve athletic performance and optimize workouts according to individuals' needs and preferences.
Refrigerators — From specialized tech companies to industry giants such as LG, internet-enabled "smart" refrigerators are the next iteration in food storage technology and are poised to drastically improve its efficiency and ease. These smart fridges automatically keep track of food stocks and expiration dates by scanning barcodes and sending the data to an app on the user's phone, letting him or her know exactly what is in the fridge and when it is set to expire. Often, these IoT-enabled refrigerators are programmed to independently determine when a particular foodstuff needs to be restocked.

Television Sets — "Connected" TVs are IoT-enabled television sets that have integrated Web 2.0 and internet features. These connected TVs allow users access to internet applications, user-generated content, streaming providers, and interactive services. While many TVs become IoT-enabled via external machines such as game consoles or digital media players — devices that already have internet and data-sharing capabilities — many television sets currently being produced go further, offering integrated internet connectivity and custom operating systems that give users total internet integration in their TV.

Washing Machines — Many newer washing machines come with robust internet integration, providing users with an unprecedented level of control over their washing. Most smart washing machines have built-in Wi-Fi and software that transmits data to downloadable apps on the user's phone. The apps allow users to remotely control washing cycles, receive finishing alerts, and track water and energy usage to plan washing cycles more efficiently. IoT-enabled machines even have integrated troubleshooting solutions that instantly inform users of any issues the machine may be experiencing.

Security Systems — Many homes have security systems, and with the advent of the IoT, a lot of new security systems have integrated internet capabilities, allowing users direct control over their home security from their smartphone or tablet device. Most of these security systems have dedicated hubs that connect to a home's router, allowing the hub to transmit all relevant data to whatever device the user prefers.

Air Conditioners — Air conditioners can be incredibly costly and, at the same time, are absolutely necessary in certain climates. IoT-compatible air conditioners feature integrated internet solutions that connect to your home's router and allow users remote control over their air conditioning systems via smartphones or tablet devices. Users can develop tailor-made, totally automatic air conditioning settings based on personal schedules, temperature preferences, and budgeting concerns.

Lawn Sprinkler Systems — Companies like Lono allow users to connect home and business sprinkler systems to the internet and manage them remotely. Users simply install an application onto their phone that connects to Lono's Wi-Fi-enabled hardware and can manage each aspect of their sprinkler system and their lawn care preferences directly from their phones. Another impressive feature is that Lono's hardware also connects to various weather monitoring services to determine optimal sprinkler settings for the various parts of your property.

Cooking Appliances — Cooking is an inherently time-sensitive task, which is why leading manufacturers of cooking appliances are increasingly making their devices internet-compatible.
IoT-enabled cooking appliances connect to a home's router and can be managed remotely from a user's smartphone, allowing users to adjust heat settings, monitor cooking status, and turn appliances on and off remotely.

Thermostat Systems — Internet-enabled smart thermostats are perhaps the most well-known recent innovation in the IoT. Most smart thermostats, the most famous of which is Nest, have integrated Wi-Fi and internet connectivity that relay all relevant information to a device of your choosing. They are pre-programmable and can be controlled remotely, allowing users to heat their homes according to their exact preferences and adjust settings to maximize the cost-effectiveness of their heat usage.

The IoT, and the increasing trend of internet-enabling the devices and products we interact with on a daily basis, is a rapidly growing economic phenomenon that is affecting every sector of the global economy. Given the increases in efficiency and convenience, as well as the unprecedented level of control that IoT-enabled devices give to users, it is unlikely that this trend will wane anytime soon. Furthermore, with increased rates of adoption and ever-growing public enthusiasm for IoT-compatible devices, many of the IoT products that are out of most people's budgets today will become affordable for average households in the near future, well placed to revolutionize the way we work, live, and play.
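Most of the devices above share the same control pattern: a small HTTP API, reachable on the local network or via a cloud relay. As a concrete sketch, this is roughly how the Philips Hue bridge mentioned earlier is driven; the bridge address and the "devuser" API username are placeholders you would obtain when pairing with your own bridge.

```python
# Hedged sketch of controlling a smart light through the Hue bridge's
# local REST API. BRIDGE and USER are placeholders from your own setup.

import json
import urllib.request

BRIDGE = "192.168.1.10"      # placeholder bridge address
USER = "devuser"             # placeholder API username from pairing

def set_light(light_id, on, brightness=None):
    state = {"on": on}
    if brightness is not None:
        state["bri"] = brightness          # 1-254 on the Hue API
    url = f"http://{BRIDGE}/api/{USER}/lights/{light_id}/state"
    req = urllib.request.Request(
        url, data=json.dumps(state).encode(), method="PUT"
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)             # bridge echoes success/error

# Dim light 1 to roughly half brightness for movie night.
print(set_light(1, on=True, brightness=127))
```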
<urn:uuid:0f979eb3-f0cf-4a59-9d8a-d4760f3e5b56>
CC-MAIN-2017-04
http://certmag.com/10-household-items-already-part-internet-things/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00508-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934213
1,591
2.65625
3
Assuming we are talking about the same thing, a callback is a procedure called from a procedure as a means of invoking some function in the caller.

For example, in Scott Klement's HTTPAPI he uses callbacks for XML parsing: one is called each time the parser sees the start of an XML element, and one each time it sees the end of an element.

You call Scott's HTTPAPI functions and you request that the XML that gets returned be parsed. To do that you create procedures in your RPG that receive a given set of parameters: one for the start <ELEMENT> and one for the end </ELEMENT>. Each time the XML parser sees a start element it calls the procedure you told it to use for start elements, and when it sees an end element it calls the procedure you defined for end elements. Thus the term callback: it calls back to your program to do the processing. Very powerful tool.

Over the years I have used them for various purposes. One was a state transition parser. The parser would parse a FedEx string returned from a FedEx server. Every time it found an action it needed to process (PutChar, SaveString, etc.) it would call back to the calling program to perform the action requested.

In this example, I am invoking only the end element procedure. Note the line %pAddr(EndTagProcessing). This tells Scott's program to call procedure EndTagProcessing each time it sees the end of an element.

// Call Scott's API to get xml.
ReturnCode = HTTP_Url_Get_XML(URL :

The procedure that is called back looks like this:

d EndTagProcessing...
d                 pi
d  InPath                      1024a   varying
d  InTagName                  24567a   varying
d  InTagValue                 65535a   varying
 // parameter names above are restored from the procedure body; the
 // archived copy dropped them, along with the attribute-array parameter.
d AttributeValue  s              64a   varying

If InPath = '/WeatherBank/Body/Current' And InTagName = 'Temperature';
  If %Trim(InTagValue) = 'Missing';
    InOutData.TemperatureMissing = cTrue;
  EndIf;
  AttributeValue = %Str(InAttributeArray(2));
  If AttributeValue = InOutData.TemperatureType;
    InOutData.Temperature = %Dec(%Trim(InTagValue):5:1);
    InOutData.TemperatureSeen = cTrue;
  EndIf;
EndIf;
// (EndIf structure restored; indentation was lost in the archive.)

On Mon, Oct 15, 2012 at 2:50 PM, Richard Reeve <richreeve@xxxxxxxxx> wrote:
Has anyone ever heard of a callback as it is related to the IBM i? I was asked to explain a callback during an interview and I'd never even heard of it. Can any of you explain to me what a callback is and how it is used? I tried google but didn't get a good explanation.

This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list.
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options, email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives.
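For readers outside of RPG, the same start/end callback pattern exists in most languages. Here is a small Python sketch using the standard library's expat parser, which mirrors what HTTPAPI's XML parser does: you register your procedures, and the parser calls back into your code. The XML snippet is made up for the demo.

```python
# Callback-driven XML parsing: register handlers for start and end
# elements, then let the parser call back into your code for each one.
import xml.parsers.expat

def start_element(name, attrs):
    print(f"start of <{name}>, attributes: {attrs}")

def end_element(name):
    print(f"end of </{name}>")

parser = xml.parsers.expat.ParserCreate()
parser.StartElementHandler = start_element   # like %pAddr(StartTagProcessing)
parser.EndElementHandler = end_element       # like %pAddr(EndTagProcessing)

parser.Parse("<Current><Temperature unit='F'>72.5</Temperature></Current>", True)
```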
<urn:uuid:8460d40a-b0fc-4edf-b31f-5c2e311dfbac>
CC-MAIN-2017-04
http://archive.midrange.com/midrange-l/201210/msg00633.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00234-ip-10-171-10-70.ec2.internal.warc.gz
en
0.751179
650
3.109375
3
There are several advantages to implementing a route-based VPN (a.k.a. tunnel interface VPN) instead of a site-to-site one. While both establish a secure tunnel between appliances, a route policy controls the traffic that passes through the tunnel, giving you more flexibility in the services (ports) you want to open across the tunnel, as well as redundancy to reroute traffic in case of an outage between the appliances.

Let's say you have built tunnel interfaces between three sites: New York, Los Angeles, and Houston. They all have route policies directly to each other, and you must build a backup policy to reroute the traffic if the direct tunnels go down. Consider this scenario: the New York tunnel interface to Los Angeles goes down, but the interfaces between New York and Houston and between Houston and Los Angeles are still up. You can reroute traffic from New York to Los Angeles via Houston. You accomplish this by having a second route policy in New York, with a different metric, whose destination network is still Los Angeles. But you must use the tunnel interface policy that sends traffic to Houston, by making that selection under the Interface field. Houston, seeing that the destination network is actually Los Angeles, will use its own tunnel to Los Angeles to route the traffic onward. The same thing happens with traffic from Los Angeles back to New York.

A site-to-site VPN does not give you that type of redundancy, since the network is configured in the policy itself. A tunnel interface VPN offloads that source-network-to-destination-network configuration to a route policy.

A tunnel interface also has the ability to turn on advanced routing, which utilizes either the RIP or OSPF routing protocol. In the Advanced tab of a tunnel interface policy, you will find a check box for advanced routing. Once that's on, you can go to the Network Routing window and switch the view to Advanced Routing. There, you will see the tunnel interface policy, which will allow you to turn on RIP, a distance-vector routing protocol that uses the path with the fewest hops between points, or OSPF, a link-state routing protocol that uses a metric based on link speed to determine the best path between points. Once RIP or OSPF is configured, the appliances will advertise their routes to each other, which avoids the need to build static route policies between the tunnel interface VPNs. Routing becomes dynamic, which is a definite advantage over site-to-site.

For details on configuring redundant routes for route-based VPN, take a look at the SonicWALL Knowledge Base website, article ID 7902. For info on configuring OSPF, reference article ID 8086.
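The failover behavior described above boils down to one rule: among the routes to a destination whose tunnel interface is up, pick the one with the lowest metric. A hedged Python sketch of that selection logic follows; the site names and metric values are illustrative, and this is not SonicWALL configuration syntax.

```python
# Two route policies to the same destination with different metrics;
# traffic follows the lowest-metric route whose tunnel interface is up.

routes = [
    # (destination, tunnel interface, metric)
    ("LA-network", "tunnel-NY-to-LA",      1),   # preferred direct path
    ("LA-network", "tunnel-NY-to-Houston", 10),  # backup via Houston
]

interface_up = {"tunnel-NY-to-LA": False,        # direct tunnel is down
                "tunnel-NY-to-Houston": True}

def pick_route(destination):
    candidates = [r for r in routes
                  if r[0] == destination and interface_up[r[1]]]
    return min(candidates, key=lambda r: r[2], default=None)

print(pick_route("LA-network"))   # falls back to the Houston tunnel
```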
<urn:uuid:9752cf0c-a556-4066-85c4-4931e6b12ad6>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2013/10/21/advantages-of-using-dell-sonicwall-route-base-vpn-instead-of-site-to-site/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00050-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936453
545
2.609375
3
Things are, literally, looking up in Japan more than a year after the magnitude-9.0 earthquake and tsunami of March 11, 2011 ("3.11") hit its northeastern coastline, causing the worst nuclear accident since the rupture and explosions at the Chernobyl RBMK reactor in Ukraine in April 1986. This week, the nation known as the Land of the Rising Sun announced plans to build a solar power complex with a total generating capacity of 100 megawatts (MW) that will replace the crippled Fukushima Daiichi Nuclear Power Plant — making it the biggest solar project ever in Japan.

The Tokyo-based multinational electronics corporation Toshiba said it will spend around 30 billion yen (US$379.6 million) to construct several utility-scale solar plants in Minamisoma, about 16 miles north of the original Fukushima generation sites. Toshiba said it will start building the plants this year and start operations in 2014. The project surpasses an earlier plan by Kyoto-based solar system manufacturer Kyocera Corp., which, in partnership with two companies headquartered in Tokyo — heavy machinery maker IHI Corp. and Mizuho Corporate Bank — proposed to launch a 70-MW plant in southern Japan.

Toshiba's announcement followed closely upon the Japanese government's approval of new incentives for renewable energy that will be effective as of July 1 — including the introduction of feed-in tariffs (FITs), a move calculated to unleash billions of dollars in clean-energy investment. Indeed, according to Reuters, Japan is poised to overtake Germany and Italy to become the world's second-biggest market for solar power, as incentives drive sales for equipment makers — from Yingli Green Energy Holdings Co. to Sharp Corp. to Kyocera Corp. To take advantage of the subsidies, Yingli, based in Baoding, China, has made plans to start operations in Japan.

Under the new program, utilities will buy solar, biomass, wind, geothermal and hydro power. All costs will be passed on to consumers in the form of surcharges, which the government today said will average out at 87 yen (about US$1.00) a month per household. The government's previous average estimate was 100 yen (about US$1.25). The measures expand on a program launched in late 2009 that requires utilities to buy solar power that the generator doesn't need. That policy expanded the market for rooftop residential panels. The new incentives will encourage utility-scale projects, including those already planned by Toshiba and Kyocera. Solar stocks rallied upon release of the news.

In related news, several days ago a power company in western Japan was given the go-ahead by the government to begin work to restart two reactors in Ohi town, a process that is expected to take several weeks. Despite lingering safety concerns, the restart could speed the resumption of operations at more reactors across the country. All of Japan's 50 nuclear reactors are currently offline for maintenance or safety checks.

Want to learn more about the latest in communications and technology? Then be sure to attend ITEXPO West 2012, taking place Oct. 2-5 in Austin, TX. ITEXPO offers an educational program to help corporate decision makers select the right IP-based voice, video, fax and unified communications solutions to improve their operations. It's also where service providers learn how to profitably roll out the services their subscribers are clamoring for – and where resellers can learn about new growth opportunities.
For more information on registering for ITEXPO, visit the event website. Stay in touch with everything happening at ITEXPO by following us on Twitter.

Edited by Amanda Ciccatelli
<urn:uuid:346a8b86-239c-4ff6-ba45-c1441dab3be9>
CC-MAIN-2017-04
http://www.iotevolutionworld.com/topics/smart-grid/articles/2012/06/20/295715-japan-utility-scale-solar-plants-rise-from-fukushima.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00500-ip-10-171-10-70.ec2.internal.warc.gz
en
0.93371
776
2.765625
3
On June 28, researchers on the International Space Station received an urgent warning that space junk was on its way—and that they needed to take cover and prepare. The astronauts rushed to take refuge in Russian Soyuz lifeboats while a hunk of cosmic debris hurtled just 260 meters past them.

As space exploration continues and we keep sending satellites and other probes out of Earth's atmosphere, the threat of space junk looms. A recent endeavor using high-performance computing and advanced visualization is hoping to create a next-gen system to track this galactic garbage via a "Space Fence" that will be operational by 2015.

According to NASA and U.S. Strategic Command, the 20,000 pieces of drifting space junk that have been categorized and tracked represent a small fraction of what is out there. They estimate that there are somewhere between 100,000 and 500,000 objects larger than half an inch currently in orbit around the Earth. Since these travel in orbit at around 17,500 mph, satellites, ships, probes and other equipment could easily be obliterated. As a report from MSNBC noted:

"The next-generation Space Fence will also rely on high-performance computing to identify and keep track of orbital paths for what's likely to be hundreds of thousands of bits of orbiting junk. That should provide better 'predict-ahead ability,' Burgess said. Right now, NASA's rules call for the space station's crew to take evasive action — or prepare to abandon ship — if a piece of debris is projected to fly within an imaginary 'pizza box' that's about 15 miles (25 kilometers) on each side and a half-mile (0.75 kilometers) above and below the station. The Space Fence would reduce the margin of error."
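The quoted "pizza box" rule reduces to a simple geometric test. Below is a hedged sketch of one reading of those dimensions (a 25 km by 25 km box in the orbital plane, extending 0.75 km above and below the station); the miss-distance values fed in are invented for illustration.

```python
# Station-relative "pizza box" proximity check, using the dimensions
# quoted in the MSNBC excerpt. dx/dy are in-plane offsets, dz is the
# radial (above/below) offset, all in kilometers.

def inside_pizza_box(dx_km, dy_km, dz_km):
    return (abs(dx_km) <= 25 / 2 and
            abs(dy_km) <= 25 / 2 and
            abs(dz_km) <= 0.75)

# Predicted miss distances for two pieces of debris (illustrative values).
print(inside_pizza_box(3.0, -8.0, 0.2))    # True  -> prepare to maneuver
print(inside_pizza_box(30.0, 1.0, 0.1))    # False -> no action needed
```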
<urn:uuid:12132c6c-1c31-4b5a-b43e-216f6a66bea3>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/07/12/cosmic_debris_faces_new_enermy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00408-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935978
382
3.34375
3
Survey results released Tuesday, Aug. 9, by the Pew Internet and American Life Project found that 92 percent of all adults who go online use e-mail and search engines — leaving one to wonder what, exactly, the remaining 8 percent are doing.

"E-mail and search form the core of online communication and online information gathering, respectively," according to the Pew findings. "And they have done so for nearly a decade, even as new platforms, broadband and mobile devices continue to reshape the way Americans use the Internet and Web."

By comparison, in 2002, "more than eight in 10 online adults were using search engines, and more than nine in 10 online adults were e-mailing," Pew reported.

If there's a difference between now and a decade ago, it's that Americans are e-mailing and searching online more regularly: about 61 percent of online adults now do both on an average day, whereas in 2002 just 49 percent of online adults used e-mail during the average day and only 29 percent used a search engine. Pew surveyed nearly 2,300 adults within the past year for the research.

While governments integrated e-mail into their everyday work from the very beginning, they seem to finally be catching on that search engines are the dominant means of information gathering on the Internet. Some, like Texas and Utah, are redesigning their .gov Web portals to make search the primary way to navigate the site.

The Pew research didn't address online usage trends among children. A comScore report from early 2011 found that during the preceding 12 months, time spent using Web-based e-mail had dropped 59 percent among 12- to 17-year-olds, and was also down among those ages 18 to 54. Conversely, mobile e-mail usage is increasing, with 70.1 million users accessing e-mail on a mobile device — an increase of 36 percent year over year.
<urn:uuid:af94df61-9489-4edd-ab0b-793edbf7e0ff>
CC-MAIN-2017-04
http://www.govtech.com/wireless/Eight-Percent-Those-Online-Dont-Use-E-Mail-Search.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00316-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950101
407
2.703125
3
As frames are switched across the internetwork, it is important for the switches to keep track of all the different kinds of frames and to know how to handle them on the basis of their hardware addresses. Frames are handled differently depending on the kind of link they are passing through. In a switched environment there are two kinds of links: access links and trunk links.

Access links

An access link is part of only one VLAN, which is referred to as the native VLAN of the port. Any device connected to an access link is unaware of its VLAN membership—the device does not know about the physical network topology and simply assumes that it is part of one broadcast domain. Switches remove all VLAN information from the frame before it reaches an access-link device. Access-link devices cannot communicate with devices outside their VLAN; such communication is possible only when the packet is routed through a router.

Trunk links

The term "trunk" is borrowed from the telephone system, where trunks carry a number of conversations at once. Similarly, trunk links can carry multiple VLANs. A trunk link is a 100- or 1000Mbps point-to-point link between a switch and a router, between two switches, or between a server and a switch. A single trunk can carry traffic for from 1 to 1005 VLANs at a time. Trunking cannot be run on 10Mbps links.

Trunking permits you to make a single port part of multiple VLANs simultaneously. This can be really beneficial: for example, you can set up a server in two broadcast domains at the same time, so users can log in and access it without crossing a layer-3 device (router). There is one more advantage to trunking when you are connecting switches: a trunk link can carry some or all VLAN information across the link, whereas if the switches are not trunked, only VLAN 1 information will be carried across the link by default. For this reason, all VLANs are carried on a trunked link unless an administrator removes them manually.

In the figure you can see how the different links are used in a switched network. The trunk link between the two switches makes communication possible for all VLANs. By contrast, an access link between switches permits only a single VLAN. You can also see that the hosts use access links to connect to the switch, which means each of them can communicate in a single VLAN only.
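How does a trunk link actually carry the VLAN membership? On an 802.1Q trunk, a 4-byte tag is inserted after the source MAC address: a TPID of 0x8100 marks the frame as tagged, and the low 12 bits of the following field hold the VLAN ID. The Python sketch below parses that tag; the frame bytes are fabricated for illustration.

```python
# Parse the 802.1Q VLAN tag that a trunk link inserts into a frame.
import struct

def vlan_of(frame: bytes):
    """Return the VLAN ID of a tagged Ethernet frame, or None if untagged."""
    tpid, tci = struct.unpack_from("!HH", frame, 12)   # after dst+src MACs
    if tpid != 0x8100:          # no 802.1Q tag -> untagged (access) frame
        return None
    return tci & 0x0FFF         # 12-bit VLAN ID (PCP/DEI bits masked off)

# 12 zero bytes stand in for the two MAC addresses; tag says VLAN 10.
frame = bytes(12) + struct.pack("!HH", 0x8100, 10) + b"payload"
print(vlan_of(frame))           # 10 -> this frame belongs to VLAN 10
```

This also shows why an access-link device stays oblivious: the switch strips these 4 bytes before forwarding, so the host only ever sees a plain Ethernet frame.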
<urn:uuid:61d465af-da97-4a3d-abe7-f1a2a07fd8a2>
CC-MAIN-2017-04
https://howdoesinternetwork.com/2012/vlan-trunk-access
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00160-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954085
561
3.015625
3
The way students learn about subjects from the Middle East to Mars and from history to health care is undergoing a remarkable transformation. No longer will faculty be limited by pictures in a textbook or videos on classroom televisions. A new wave of technological innovation is redefining what it means to learn — bringing to life new places, topics and experiences in ways that will revolutionize learning for students, no matter where they live. Today colleges and universities are exploring the power of immersive learning technologies to solve real learning challenges, offering benefits for students in fields from nursing education and engineering to construction and surveyor training. This is being done through immersive learning, as facilitated through virtual reality and new technologies such as Microsoft HoloLens, an “augmented” or “mixed” reality device that allows users to see and interact with holograms in their own environment. An Explanation of Immersive Technologies Augmented/Mixed Reality — In Augmented Reality (AR), learners can still see the environment around them, but digital content is overlayed into their space. Mixed Reality (MR) is a subset of AR and is powered by a headset – usually a 3D holographic model – which is superimposed over the user’s current surroundings. MR allows the user to walk around and interact with said model and analyze it from angles or select specific areas with which to interact (see image). Mobile-based AR allows users to view digital content via a handheld device (see image). The user can be guided by voiceover located in the headset and only needs to use their hands and own body movements to control interactivity within the environment. Virtual Reality — Virtual Reality (VR) is a completely immersive experience in which users are taken from their real world surroundings and transported virtually into an entirely new digital and game-like environment. The user can look around and see a full panoramic view of what is happening in the virtual space, and can listen to accompanying audio, and interact with things that they see. In being unable to see what is happening outside of the headset, the user is fully transported into this virtual world, allowing us to use visualization in new and previously unimagined ways. 360 Content – This is a full panoramic video or photographic view of a real environment – similar to VR but with video. This 360 content can be viewed in a headset or via PC. Immersive Learning in Action at Colleges and Universities Around the World Pearson is collaborating with Microsoft to explore the power of mixed reality to solve real challenges in areas of learning, ranging from online tutoring and coaching, nursing education, and engineering to construction and surveyor training. With Microsoft HoloLens, the world’s first self-contained holographic computer, Pearson is developing and piloting mixed reality content at colleges, universities and secondary schools in the United States and around the world. HoloLens embraces virtual reality and augmented reality to create a new reality – mixed reality. With virtual reality, the user is immersed in a simulated world. Augmented reality overlays digital information on top of the real world. Mixed reality merges the virtual and physical worlds to create a new reality whereby the two can coexist and interact. By understanding the user’s environment, mixed reality enables holograms to look and sound like they are part of that world. 
This means learning content can be developed for HoloLens that provides students with real world experiences, allowing them to build proficiency, develop confidence, explore and learn. For example, at Bryn Mawr College, a women’s liberal arts college in Pennsylvania, faculty, students, and staff are exploring various educational applications for the HoloLens mixed reality devices. They are testing Skype for HoloLens for connecting students with tutors in Pearson’s 24/7 online tutoring service, Smarthinking. If successful, this out-of-the-box solution could provide struggling students with richer, more personalized just-in-time support from expert tutors as if they were sitting side-by-side. Bryn Mawr will also experiment with using holographs and mixed reality to explore 3D content and concepts in a number of academic disciplines, including physics, biology, and archaeology. Texas Tech University Health Sciences Center in Lubbock and San Diego State University are both part of a Pearson mixed reality pilot aimed at leveraging mixed reality to solve challenges in nursing education. Today many nursing programs hire and train actors to simulate scenarios nurses will face in the real world — a process that is hard to standardize and even harder to replicate. As part of the mixed reality pilot, faculty at the two universities’ schools of nursing are collaborating with Pearson to improve the value and efficacy of the types of simulations in which students participate. To develop the content for this pilot, Pearson will use Microsoft’s holographic video capture capability, filming actors to simulate patients with various health concerns and then transferring that video into holograms for the student nurses to experience in a clinical setting. When student nurses participate in the simulations using HoloLens, they will have a real world experience diagnosing patients, building the confidence and competence that they will need in their careers. Pearson’s work with mixed reality and HoloLens isn’t limited to higher education. The company is in the early stages of evaluating the impact of holographic learning at the late grammar school stage. At Canberra Grammar School in Australia, Pearson is working with teachers in a variety of disciplines to develop holograms for use in their classrooms. The University of Canberra is partnering with Pearson to provide support for the project and evaluate the impact these holograms have on teaching and learning. It’s exciting to see how these technologies are being leveraged to create high-quality immersive learning experiences designed to meet specific learning needs in higher education and vocational training. With the addition of effective faculty training to help educators become more confident with the use of these technologies in the classroom, immersive learning can make a measurable difference in the lives of students and instructors.
<urn:uuid:f2ca6b14-14d6-499e-940a-c07fa4142133>
CC-MAIN-2017-04
https://techdecisions.co/mobility/can-immersive-technologies-improve-learning-higher-ed/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00096-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937151
1,310
3.75
4
The definitive definition of ‘The Cloud’, from the experts for the kids (and adults, too).

1. Richard Davies, CEO, Elastichosts
"Once upon a time, everyone’s computers used to feed off a box in their office, but the more and more we used, the more boxes we needed, until there was no room for people anymore! Now there is a man called Mr Cloud who lives in the sky and looks after all the boxes for us; Mr Cloud has magic powers which means he can give people the exact amount of power they need, whenever they need it and from wherever they need it!"

2. Founder of CloudView, James Wickes
"Clouds are really safe places where we keep all our important information so that we can quickly and easily access it from anywhere under the sun with an iPhone, iPad or computer. When we have used the information we can store it safely back in the clouds."

3. Simon Antoniou, operations manager at Evercom
"Cloud Computing is the Internet’s baby. If the Internet was God, then Cloud is Jesus and he has risen to create a common ground and level playing field for small businesses, business people and entrepreneurs trying to compete with the big boys."

4. Bengt Höjer, cloud manager, UNIT
"Cloud computing is like electricity, but instead of plugging into the wall to receive power you plug into the Internet to receive computing services; with both you only pay for what you use."

5. Izak Oosthuizen, MD of Exec Sys
"With the cloud, you can use a whole load of technologies directly from the internet in the same way that you access games like Minecraft!"
<urn:uuid:81d8e233-20f4-4dfd-ac39-e905c836e39b>
CC-MAIN-2017-04
http://www.cbronline.com/news/cloud/aas/20-ways-to-explain-cloud-computing-to-a-five-year-old-4283091
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00547-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938236
358
2.59375
3
Magistr follows in the footsteps of the infamous "Chernobyl" virus Cambridge, United Kingdom, April 6, 2001 - Because of the significant increase in the number of infections caused by the recently discovered "Magistr" virus, Kaspersky Lab, an international data-security software-development company, is issuing a second warning about the threat this malicious code poses, and recommends that computer users perform a full virus-check of their computers using Kaspersky™ Anti-Virus with the latest updates installed and maximum scanning options switched on. As is known, exactly one month after the day it first infects a computer, Magistr deletes all files from local and network disks, discards the CMOS memory settings and destroys data in FLASH BIOS microchips. Taking into account that the first reports about infection with this virus were received in the middle of March, Kaspersky Lab expects a real avalanche of destructive incidents caused by Magistr to happen in the middle to end of April. As a result, Magistr can cause the loss of important information and damage computer hardware. "We classify Magistr as a so-called 'sleeping' virus that usually insidiously operates on infected computers until the time when it activates its destructive payload," said Denis Zenkin, Head of Corporate Communications for Kaspersky Lab. The same thing happened with the "Chernobyl" virus about two years ago: many people refused to believe in this virus' existence, and some anti-virus vendors even tried to accuse Kaspersky Lab of spreading virus hysteria. However, soon the prediction was confirmed, and in April 1999, the virus disabled hundreds of thousands of unprotected computers worldwide, causing data loss and hardware faults. Kaspersky Lab has full reason to believe that the same thing could happen with the Magistr virus because of the significantly increased number of reports about infections resulting from the virus. "The distinctive feature of Magistr is that it contains e-mail addresses of the last ten computers that were infected, and possibly still are infected, before reaching the current destination," said Eugene Kaspersky, Head of Anti-Virus Research. "A study of the list of previously infected computers demonstrates the extremely wide spreading capability of the virus that includes Poland, the United States and United Kingdom, Brazil, Slovakia, the Czech Republic, Spain, Russia, Ukraine, France, Switzerland and many other countries." After a detailed analysis of the existing details regarding Magistr's prevalence, Kaspersky Lab estimates the number of computers that are still infected with the virus at about 5,000 units. "It is important to emphasize that this number is only the tip of the iceberg, while the real scope of the virus epidemic is nearly impossible to calculate," added Eugene Kaspersky. Kaspersky Lab has performed an internal test of the most popular anti-virus software, and has come to the realization that not all of them are capable of detecting and effectively removing an infection with such a technologically advanced polymorphic virus as Magistr. Therefore, we recommend users of other anti-virus programs download a FREE demo version of Kaspersky Anti-Virus, enable the latest virus-signature updates and perform a comprehensive virus-check of your computer. Kaspersky Anti-Virus can be purchased in the Kaspersky Lab online store or from a worldwide network of Kaspersky Anti-Virus distributors and resellers.
<urn:uuid:0b42c2a4-e690-497c-b428-8d9ee06e5dad>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2001/A_Time_Bomb_Called_Magistr_
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00455-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927673
708
2.625
3
Tech companies will spend a whopping $45 billion by 2016 to make data centers green, according to a report by Pike Research. But Facebook wants to take the concept of "green IT" a step further, by developing biodegradable data centers. The social network is sponsoring an Open Compute Foundation contest with Purdue University's College of Technology to develop a more sustainable server chassis. Servers, according to Purdue, are replaced about every four years, which results in a lot of waste. Does the contest seem a little far-fetched to you? Here’s a bit more on the compost concept behind it: "Open Compute wants to change [the amount of waste] starting with the server chassis. These are typically made of steel, which is recyclable, but even recycling generates waste. What would happen if these chassis could be placed in compost instead?" Purdue's participating students will receive a server to use to test new designs, and the winners will attend the Open Compute Summit to present the design and have "a chance to be a rock star in the open source hardware movement." Should the designs be successful, could we expect to see more biodegradable tech hit the market? Given the fast-paced innovation and how quickly new tech becomes outdated, it certainly seems plausible. Would you buy easily compostable gadgets?
<urn:uuid:c87e9c8c-38cf-43f7-b173-2dbb5d742113>
CC-MAIN-2017-04
http://www.cio.com/article/2370943/data-center/facebook-wants-to-build-dissolving-data-centers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00087-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950892
276
2.890625
3
9.5 Recovering from a Disaster
Use snapshots of the nodes to recover from a disaster. It is important to take snapshots of each node in the cluster regularly so you do not lose information.
To recover from a disaster:
1. On a regular basis, take snapshots of the nodes in the cluster.
   a. Power off the working node, then take a snapshot.
   b. Take a snapshot of the running node, including the virtual machine’s memory.
   c. Repeat Step 1.a for each node in the cluster, within a short time.
2. When a failure happens, restore the master node snapshot first.
3. Restore the other nodes in the cluster.
Use these steps only for disaster recovery. Never restore one snapshot. Access Gateway for Cloud contains a database that is time-sensitive. Restoring one node only and not the others causes corruption in the appliance.
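A minimal sketch of the ordering described above is shown below. The HypervisorClient class and its method names are hypothetical placeholders rather than a real vendor API; the point is only the sequence: snapshot every node within a short window, and on recovery restore the master node before the others, never a single node alone.

```python
import datetime

class HypervisorClient:
    """Stand-in for whatever API or CLI actually controls the cluster's virtual machines."""
    def snapshot(self, node, name, include_memory=False): ...
    def restore(self, node, name): ...

def snapshot_cluster(client, master, other_nodes):
    # Snapshot every node within a short window so the copies stay consistent.
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
    name = f"dr-{stamp}"
    for node in [master, *other_nodes]:
        client.snapshot(node, name, include_memory=True)
    return name

def restore_cluster(client, master, other_nodes, snapshot_name):
    # Restore the master node first, then the rest. Never restore a single node
    # on its own: the appliance's time-sensitive database would become corrupted.
    client.restore(master, snapshot_name)
    for node in other_nodes:
        client.restore(node, snapshot_name)
```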
<urn:uuid:7650ca4d-299a-4151-9dda-2a5373608ed8>
CC-MAIN-2017-04
https://www.netiq.com/documentation/accessgatewaycloud/install_config/data/maintenance_disaster.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00207-ip-10-171-10-70.ec2.internal.warc.gz
en
0.824693
174
2.546875
3
Microsoft revealed Project Natick this weekend – an effort to test data centers that are deployed underwater. Is it really viable for the cloud to live in the ocean? The rationale for Project Natick makes a lot of sense. Microsoft says half of Americans live within 200 km of the ocean. Locating data centers near end users reduces latency. Since dry ground is expensive near large population areas, why not throw the data centers in the ocean? Microsoft has been developing and testing Project Natick since 2013, when the idea was initially conceived by an employee who used to work on a US Navy submarine. Microsoft built a miniature data center, enclosed it in a waterproof steel vessel and sank it off the coast of California (see photo above). Here’s my question: What happens when something breaks? “It’s kind of like launching a satellite for space,” Project Natick research engineer Jeff Kramer says, emphasizing my point exactly. “Once you’ve built it, and you hand it to the guys with the rocket - or in our case the guys with the crane – you can’t do anything about it if it screws up.” Here’s the problem: the cloud screws up a lot. Amazon.com CTO Werner Vogels has a famous saying: “Everything fails. All the time.” What happens when a server inexplicably fails, or a router in the underwater pod goes awry? Microsoft says Project Natick data centers are very resilient: With the end of Moore’s Law, the cadence at which servers are refreshed with new and improved hardware in the datacenter is likely to slow significantly. We see this as an opportunity to field long-lived, resilient datacenters that operate “lights out” – nobody on site – with very high reliability for the entire life of the deployment, possibly as long as 10 years. An FAQ on the Project Natick website goes on to explain that the data centers are designed to be deployed for five years, then reloaded with new computers and redeployed. That all sounds fine and good. But it still doesn’t answer what will happen when – not if – something breaks inside. Those pods would need to be coming back up to the surface a lot sooner than every five years. Let’s say Microsoft does figure that out. Then they have to still worry about sharks eating their underwater cables, as Google knows all too well.
<urn:uuid:5f662be5-0fa9-4f63-beb3-c35d434d8706>
CC-MAIN-2017-04
http://www.networkworld.com/article/3028615/cloud-computing/take-microsoft-s-underwater-data-centers-with-a-grain-of-salt.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00115-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945682
532
2.78125
3
Nestled between the crashing waves of the Pacific and the sprawling city of Los Angeles, the Hyperion wastewater treatment plant has served residents for more than a century. In operation since 1894, the plant has evolved to become a modern infrastructure marvel. Originally the plant collected sewage from the city and deposited it in the Santa Monica Bay, destroying most of the marine life in the area. These practices led L.A. to start a program in 1980 to heal the bay, in which the Hyperion plant also played a role. Today, thanks to cutting-edge water treatment technology, water returned to the ocean is 95 percent free of biosolids. The biosolids extracted also help power the facility. Photo courtesy of California State University, Long Beach
<urn:uuid:984935c8-7e3a-45ce-b083-b8ac69228753>
CC-MAIN-2017-04
http://www.govtech.com/hyperion-wastewater-treatment-plant-021511.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00235-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957781
152
3.0625
3
Peng L.,Geology Environment Geology Survey Institute | Ma X.,Geology Environment Geology Survey Institute | Jin J.,Geology Environment Geology Survey Institute | Tian H.,Geology Environment Geology Survey Institute | And 2 more authors. Journal of Natural Disasters | Year: 2014 The 14 April 2010 Yushu earthquake loosened mountain slopes, blocked channels, and triggered many landslides, avalanches, debris flows and other geological disasters. The spatial distribution of geological disasters is affected by topography, lithology, human engineering activities and other factors, but for geological disasters triggered by earthquakes, the spatial distribution is mainly controlled by the seismogenic faults. This paper studies the Batang River-Jiegu town zone, where geological disaster development is concentrated. It is found that the Yushu earthquake fault affects the geological hazards as follows: (1) the distribution of geological disaster development differs on the two sides of the earthquake fault: strong earthquake-triggered geological disasters are developed within 2,500 m on both sides of the fault zone, with the 500 m range being the most developed area; (2) geological disasters triggered by the Yushu earthquake occur mainly in districts of Intensity VII; (3) the main slip direction of the Yushu earthquake landslides and avalanches is NE-SW, which is approximately perpendicular to the strike of the seismogenic fault. Source
<urn:uuid:83f0bc3e-0b7b-467b-aa4b-ae35a8e210da>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/geology-environment-geology-survey-institute-1722297/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00565-ip-10-171-10-70.ec2.internal.warc.gz
en
0.921672
299
2.828125
3
Enterprises employ several mechanisms to optimize the performance of their networks in order to ensure high availability. Clustering for failover (Active/Passive mode) and load balancing (Active/Active mode) is a commonly adopted technique that supports redundancy, session or database replication, and load balancing of requests across the servers in the cluster. Some networks also have software or hardware load-balancers outside the cluster to increase horizontal scalability. Most business happens over the Internet, and enterprises prefer clusters or invest in load-balancers because of the fault-tolerant architecture. Critical servers and applications such as database servers, Exchange servers, etc., are hosted in clustered environments for high availability. Further, load balancers like BIG-IP are plugged into the network to distribute the load on servers to address the primary need of uninterrupted availability at all times. Service interruptions, planned or unplanned, are costly affairs and are unacceptable for businesses of any size. Administrators exhaust almost every resource within their means to keep their networks happy and available.

We are all huge consumers of the various services the Internet supports. Imagine a situation where you are accessing your bank account to transfer funds to a friend's for an emergency, and the site simply says, 'Oops! Sorry, the service is unavailable!'. The message concludes politely, requesting you to try again after a little while. As end users, we would hate to be in this situation. That you can call customer service and vent, or look out for an ATM or a branch office, is some respite. The damage caused to the bank is beyond economic repair, and the man behind the show, the administrator/network engineer, gets to listen to variety music for this huge goof-up. This despite having invested in a cluster for a reliable service!

The job is only half-done by employing clustering for failover or load balancing. Unless an administrator has clear visibility of the good, bad, and ugly components of his network, including the ones in that critical cluster, or the important resources on the expensive load-balancers, it's impossible to have an alert-free holiday!

Example 1: A two-node SQL cluster in an enterprise: The reasons for an Active node to fail over to a standby could be either a system or an application failure, or both. An administrator must have clear visibility into system and application performance, which is possible only when they are proactively monitored. In the scenario discussed above, it is possible that the clustering controller instance has failed, causing the whole system to fall apart! Or, despite the Active node successfully failing over to the Passive, the Passive too fails due to insufficient resources! This mix-up could have been avoided, or identified a little earlier to reduce the damage, by monitoring the basic resources on these systems.

Example 2: A load-balancer distributing requests across a few servers: Despite redundant servers set up for load balancing, imagine a hardware resource failure on the load-balancer leading to service unavailability! The user requests never make it to the server even when the service is up and running fine! The purpose of clustering is lost if the resources are not constantly monitored.
Even as the administrator tries to ensure that end-users do not 'feel' any service failure, he must quickly identify the cause of the failover from the active to the passive node, or why the load exerted on a particular server is high. So, all the components and resources that need to run for the clustering to work well need monitoring. This includes the Cluster service on the nodes, the dependent services, the system resources on the load-balancer, the response time of the individual devices, etc. Constant automated monitoring of key components helps reduce the damage and helps realize the goal of ensuring high availability at all times. The key resources include:

- Availability of the nodes: A detailed availability report indicating whether the node is unavailable due to a dependent device failure or because the node is pulled down for maintenance.
- Response time of the nodes: The response time of each node at any given time, and its average response time, indicating the load on it.
- Service availability and response time: Availability of the cluster service and its related services on the nodes.
- System resource utilization: A constant check on the performance of the hardware resources, because the last thing you want is insufficient resources rendering a critical service unavailable!
- Service parameters: Critical parameters of a service that can lead to a potential failure.
- System events pertaining to the cluster: Keeping a tab on the system events, including the application events, so that there are no sudden surprises and all avenues of fault are watched.
- Availability and performance of the load balancer: Ensuring the basic availability and responsiveness of the load balancer.
- System resources on the load balancer: Monitoring the critical resources on the load-balancers to identify problem indicators well ahead of time.
- Cluster Groups (Business Views): A holistic view of the nodes in a cluster with the ability to drill down to the root cause. This provision to visualize a cluster helps in understanding the health of the cluster at a glance.

ManageEngine OpManager is network monitoring software that monitors all the resources on your LAN and WAN. The performance and fault management capability of OpManager helps identify performance bottlenecks quickly. Its ability to drill down to the root cause of a fault, and its extensive customization capability, make OpManager a preferred solution among thousands of network administrators world-wide. A few useful plug-ins and add-ons, such as the NCM Plug-in, NetFlow Plug-in and VoIP add-on, and the provision to easily integrate with other applications in the ManageEngine suite such as ServiceDesk Plus, make it a one-stop shop for all your network and IT management needs.
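As a minimal illustration of the availability and response-time checks in the list above, the sketch below polls a TCP service on each node. The hostnames, ports and thresholds are made-up placeholders, and a production environment would normally rely on a monitoring product rather than a hand-rolled script.

```python
import socket
import time

NODES = {"sql-node-1": ("10.0.0.11", 1433), "sql-node-2": ("10.0.0.12", 1433)}
RESPONSE_THRESHOLD_MS = 500  # assumed acceptable response time

def check_node(host, port, timeout=2.0):
    """Return (reachable, response_time_ms) for a single TCP service check."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, (time.monotonic() - start) * 1000
    except OSError:
        return False, None

def poll_cluster():
    for name, (host, port) in NODES.items():
        up, ms = check_node(host, port)
        if not up:
            print(f"ALERT: {name} unreachable - possible failover in progress")
        elif ms > RESPONSE_THRESHOLD_MS:
            print(f"WARN: {name} responding slowly ({ms:.0f} ms)")
        else:
            print(f"OK: {name} {ms:.0f} ms")

if __name__ == "__main__":
    poll_cluster()
```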
<urn:uuid:27aa3170-13ed-44bd-9a53-f6482bce2a7c>
CC-MAIN-2017-04
https://www.manageengine.com/network-monitoring/monitoring-clusters-loadbalancers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00041-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915691
1,191
2.65625
3
Codecs like H.264 reduce bandwidth by only sending full frames every so often, mixing them with partial frames that capture only the changes in between the full ones. The full frames are called 'I' frames because they are intra-coded (complete) frames, and they are followed by 'P', or predictive, frames.* Note: if you are not familiar with codecs, please read our Surveillance CODEC Guide before continuing.

I Frame Questions

Since I frames require much more bandwidth than P frames (frequently 10 or 20x more), some will argue that reducing the rate of I frames will reduce overall bandwidth significantly. For instance, instead of having an I frame each second, reduce it to 1 every 5 seconds. On the other hand, some will argue that reducing I frames can result in quality problems, because it can be harder for the processor to continue to faithfully update and represent the image if it has changed significantly since the last I frame. We seek to answer these two questions:

- How much bandwidth savings can you achieve by reducing the I frame rate?
- How much quality degradation can occur by reducing the I frame rate?

The Tests Conducted

In order to answer these questions, we used five 720p cameras at various price points and performance levels:

- Avigilon H3 1MP
- Axis M1114
- Axis Q1604
- Bosch NBN-733V
- Dahua HF3101

We aimed these cameras at a toy train set to create consistent motion, and varied I-frame rates from a default of one per second to as high as five and as low as one every four seconds.

*Some versions of H.264 also support 'B' or bidirectionally predictive frames, but these are less common in surveillance cameras and are therefore excluded from this study.
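To get a feel for why the savings flatten out as I frames become less frequent, the back-of-the-envelope estimate below uses the 10-20x I-to-P size ratio mentioned above. The absolute P-frame size, the frame rate and the chosen ratio are assumptions for illustration, not measurements from this test.

```python
# Rough model: a group of pictures (GOP) is one I frame plus P frames until the
# next I frame. All sizes below are assumed, only the I:P ratio comes from the text.

FRAME_RATE = 10     # frames per second (assumed)
P_FRAME_KB = 4      # average P-frame size in kilobytes (assumed)
I_TO_P_RATIO = 15   # an I frame is ~15x larger than a P frame (assumed)

def bitrate_kbps(i_frame_interval_s):
    """Average bitrate for a stream with one I frame every i_frame_interval_s seconds."""
    frames_per_gop = int(FRAME_RATE * i_frame_interval_s)
    gop_kb = P_FRAME_KB * I_TO_P_RATIO + P_FRAME_KB * (frames_per_gop - 1)
    return gop_kb * 8 / i_frame_interval_s  # kilobits per second

for interval in (0.2, 0.5, 1, 2, 4):
    print(f"I frame every {interval:>3} s -> ~{bitrate_kbps(interval):.0f} kbps")
```

Under these assumptions, going from one I frame per second to one every four seconds cuts the estimated bitrate well under half, while further lengthening the interval yields progressively smaller savings.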
<urn:uuid:401528c0-b9a3-4e0c-b845-9e3542735870>
CC-MAIN-2017-04
https://ipvm.com/reports/test-i-frame-rate
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00161-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936109
366
3.421875
3
JavaScript is a user interface scripting language developed by Netscape for its Navigator and Communicator World-Wide Web browsers. Microsoft has developed a compatible language, called JScript, for its Internet Explorer browser. While the syntax of the JavaScript programming language resembles that of Java, the two languages are actually unrelated. JavaScript source code is embedded in HTML documents and is interpreted by a World-Wide Web browser. Java source code is compiled into bytecode, stored in a separate file, which the World-Wide Web browser downloads and executes separately from an HTML page.
<urn:uuid:e53c0bd8-485d-4ec4-a7f3-59ebaba1f6be>
CC-MAIN-2017-04
http://hitachi-id.com/concepts/javascript.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00555-ip-10-171-10-70.ec2.internal.warc.gz
en
0.865359
123
3.21875
3
Picture This: A Visual Guide to Failure Values As you study for a variety of certification exams (such as CompTIA’s Cloud+, A+, Security+, etc.), there are a number of “failure”-related values you will need to learn. Typically, these are associated with hardware components such as the hard drive. You will be tested repeatedly on the same set of failure values — or a subset thereof. The formulas for computing the values, where they exist, are given in the descriptions that follow, along with other pertinent information. Make sure that you thoroughly understand the definition for each of the values, and the differences between these topics, before writing any exam. The images are a play upon the cards used in Monopoly (© 1936 by Parker Brothers) and are fitting because the first rule of any game is to know that you’re in one. Trying to keep IT systems up and running, with availability high and failures at a minimum, can indeed be viewed as a game — and a taxing one at that. Mean Time To Restore (MTTR) The simplest of the values to understand is MTTR: The first three letters always represent Mean Time To and the “R” changes between Restore, Repair and Recovery. Regardless of which word is used for the last one in the acronym, MTTR is the measurement of how long it takes to repair a system or component once a failure has occurred. If the MTTR is 20 minutes for a particular entity, then it takes — on average — 20 minutes to fix that entity when it breaks and move on. While MTTR is considered a common measure of maintainability, you need to be careful when evaluating it during negotiations with a vendor on a service level agreement (SLA), because it doesn’t always include the time needed to acquire a component and have it shipped to your location. I once worked with a national vendor who thought the MTTR acronym stood for mean time to respond. A technician would show up on site within the time the contract called for but would only begin to look at the problem and then make a list of any needed supplies as well as get coffee, make a few phone calls, and so on. That vendor’s actual time to restore always far exceeded the contracted MTTR number. Make sure any contract agreement you commit to spells out exactly what your client can expect. In general, the lower this number, the better. Figure One: MTTR is the average time it takes to correct the failure. The time it takes to resume normal operations can be dependent upon a wide number of variables but two of the biggest factors are the availability of substitutes and knowledge. If a hard drive in a rack fails, for example, and there is a spare nearby, then it can take less than one minute to swap one for the other. If there is no spare, or no one working at 3 a.m. who knows how to swap the drives, then the time to repair naturally takes longer. To reduce this number, keep spare parts on hand and have a knowledgeable person who can respond to problems — and fix them — readily available. When faced with exam questions on this variable, common sense is your best guide. Mean Time To Failure (MTTF) The Mean Time To Failure (MTTF) is the average time to a nonrepairable failure of a component. It can sometimes be used in place of MTBF (discussed next), but that is an improper use and there is a distinct difference between them. The easiest way to think of MTTF is to equate it with the life of the item. Almost every computing component has an MTTF associated with it and devices that commonly fail include hard drives, power supplies and memory. 
Devices that fail a little less commonly would include network cards, controllers, fans, motherboards and the like. The higher the MTTF on a device, the better. Figure Two: The Mean Time to Failure (MTTF) represents time when the system is running in the absence of a problem. The more components you are working with, the more you decrease MTTF. As a simplified example, assume that in your entire organization — a little one-man office — you have only one hard drive and it is a SATA with a MTTF of 1 million hours. When you first start out, the odds of your hard drive failing in the first year are only slightly greater than 0.8 percent. Those odds get worse as the hard drive ages, but still stay fairly small. Now assume that business needs grow and you move from the single hard drive to an array of 32 drives, each with the same rating. Since any one of those drives could fail, the odds of failure have now increased 32-fold, and you’ve gone from less than 1 percent likelihood of a problem during the year to 25 percent. In short: the more interconnected components you have, the more possibilities exist that something can go wrong. Mean Time Between Failure (MTBF) Often confused with MTTF, the Mean Time Between Failures (MTBF) is the measure of the anticipated incidence of failure for a system or component, or how frequently a component will fail. The word “between” implies that the failures are recoverable. This measurement is often used in the industry for hard drives, but failures there are not recoverable and MTTF is the more accurate variable. If the MTBF of a cooling system is one year, you can anticipate that the system will last for a one-year period; this means you should be prepared to rebuild the system once a year. If the system lasts longer than the MTBF, your organization receives a bonus. Like the other variables, MTBF is helpful in evaluating a system’s reliability and life expectancy. Figure Three: Like MTTF, MTBF decreases as you add more possible things to go wrong. Logically, MTBF is a superset of both MTTF and MTTR. Put another way, the time between recoverable failures is equal to the time to the failure of a nonrecoverable component plus the time to fix it. For example, consider a server in which the power supply fails. The MTBF is equal to all the time the power supply functioned properly (it’s MTTF) and the time it took to replace it (its MTTR). Recovery Time Objective (RTO) The Recovery Time Objective (RTO) is the maximum amount of time that a process or service is allowed to be down and the consequences still be considered acceptable. Beyond this window, the break in business continuity is considered to negatively affect business. The RTO is agreed on during the process of business impact analysis (BIA) creation. Figure Four: The RTO is a goal you’d like to achieve in terms of time to returning to operations. The BIA is simply a study of the possible impact if a disruption to a business’s vital resources were to occur. This analysis isn’t typically concerned with external threats or vulnerabilities, but focuses on the impact a loss would have on the organization. It can be useful in identifying the true loss potential and may help you in your fight for a more substantial IT budget. Recovery Point Objective (RPO) The Recovery Point Objective (RPO) is similar to RTO, but it defines the point the system would need to be restored to. 
It can be expressed in terms of time: if RPO can be two days before a crash occurred, then whip out the old backup tapes, but if it needs to be five minutes before the crash occurred, then you need to rely on journaling. As a general rule, the closer the RPO matches the item of the crash, the more expensive it is to be able to obtain. Figure Five: RPO represents an amount of loss you are willing to live with. RPO and RTO are quite often expressed together. For example, in the event of a failure, it may be that the BIA specifies you need to be back up and operating within two hours (RTO) with a maximum loss of the last 10 minutes of transactions (RPO). Summing it Up These five variables — MTBF, MTTF, MTTR, RPO and RTO — are used to express/calculate variables associated with the failure of systems and/or components. All five are prominently featured in a number of certification exams, most notably a number of the entry level certification tests from CompTIA.
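The arithmetic behind these values is simple enough to check by hand. The sketch below uses the common constant-failure-rate (exponential) simplification, which is an assumption rather than something stated above, and reproduces the single-drive and 32-drive probabilities from the MTTF example.

```python
import math

HOURS_PER_YEAR = 8760

def prob_failure(mttf_hours, period_hours=HOURS_PER_YEAR):
    """Probability that a single component fails within the period (exponential model)."""
    return 1 - math.exp(-period_hours / mttf_hours)

def prob_any_failure(mttf_hours, count, period_hours=HOURS_PER_YEAR):
    """Probability that at least one of `count` identical components fails."""
    return 1 - (1 - prob_failure(mttf_hours, period_hours)) ** count

def mtbf(mttf, mttr):
    """Time between recoverable failures = time to the failure + time to repair it."""
    return mttf + mttr

print(f"1 drive,   MTTF 1M h: {prob_any_failure(1_000_000, 1):.2%} chance of a failure in a year")
print(f"32 drives, MTTF 1M h: {prob_any_failure(1_000_000, 32):.2%} chance of a failure in a year")
print(f"MTBF for MTTF=2000 h, MTTR=4 h: {mtbf(2000, 4)} h")
```

The first two lines print roughly 0.9% and 24-25%, matching the single-drive and 32-drive figures quoted above.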
<urn:uuid:3a456796-093e-4d92-ab86-609aee5c9bc5>
CC-MAIN-2017-04
http://certmag.com/picture-visual-guide-failure-values/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00491-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941649
1,756
2.6875
3
Ramirez-Villegas J.,International Center for Tropical Agriculture | Ramirez-Villegas J.,CGIAR Research Program on Climate Change | Ramirez-Villegas J.,University of Leeds | Boote K.J.,Agronomy Dep. | And 2 more authors. Agricultural and Forest Meteorology | Year: 2016 Common bean production in Goiás, Brazil is concentrated in the same geographic area, but spread across three distinct growing seasons, namely, wet, dry and winter. In the wet and dry seasons, common beans are grown under rainfed conditions, whereas the winter sowing is fully irrigated. The conventional breeding program performs all varietal selection stages solely in the winter season, with rainfed environments being incorporated in the breeding scheme only through the multi environment trials (METs) where basically only yield is recorded. As yield is the result of many interacting processes, it is challenging to determine the events (abiotic or biotic) associated with yield reduction in the rainfed environments (wet and dry seasons). To improve our understanding of rainfed dry bean production so as to produce information that can assist breeders in their efforts to develop stress-tolerant, high-yielding germplasm, we characterized environments by integrating weather, soil, crop and management factors using crop simulation models. Crop simulations based on two commonly grown cultivars (Pérola and BRS Radiante) and statistical analyses of simulated yield suggest that both rainfed seasons, wet and dry, can be divided into two groups of environments: highly favorable environment and favorable environment. For the wet and dry seasons, the highly favorable environment represents 44% and 58% of production area, respectively. Across all rainfed environment groups, terminal and/or reproductive drought stress occurs in roughly one fourth of the seasons (23.9% for Pérola and 24.7% for Radiante), with drought being most limiting in the favorable environment group in the dry TPE. Based on our results, we argue that even though drought-tailoring might not be warranted, the common bean breeding program should adapt their selection practices to the range of stresses occurring in the rainfed TPEs to select genotypes more suitable for these environments. © 2016 Elsevier B.V. Source Aina O.,Agronomy Dep. | Quesenberry K.,Agronomy Dep. | Gallo M.,Agronomy Dep. Crop Science | Year: 2012 Arachis paraguariensis Chodat & Hassl. is a potential source of novel genes for the genetic improvement of cultivated peanut (Arachis hypogaea L.) because some of its accessions show high levels of resistance to early leaf spot caused by Cercospora arachidicola Hori. In this study, induction of high frequency shoot regeneration from quartered-seed explants was accomplished for six plant introductions of A. paraguariensis under continuous light on Murashige and Skoog (MS) medium containing 4.4 mg L⁻¹ thidiazuron (TDZ) in combination with 2.2 mg L⁻¹ 6-(γ,γ-dimethylallylamino)purine (2iP). Recovery of a moderately high number of plantlets per quarter seed cultured was also achieved on medium containing 4.4 mg L⁻¹ thidiazuron in combination with 1.1 to 4.4 mg L⁻¹ 6-benzylaminopurine (BAP), with bud formation occurring as early as 1 wk after culture initiation. There were no differences in seed production or in early leaf spot incidence between plants of two genotypes of A. paraguariensis derived from seeds vs. in vitro tissue culture derived plants; however, cultivated peanut cv. Florunner had a higher incidence of early leaf spot.
© Crop Science Society of America. Source
<urn:uuid:2044e371-679b-42ca-abfe-b9b01aa9fe84>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/agronomy-dep-2028859/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00243-ip-10-171-10-70.ec2.internal.warc.gz
en
0.919263
810
2.890625
3
Motorola has found a way to keep your mobile phone charged using only sunlight. The company recently received a patent for an LCD (liquid crystal display) that includes solar cells capable of charging the battery in a mobile phone or other portable device. The patent, which offered no hint of commercial product plans or timing, also outlines how solar cells can be added to OLED (organic light-emitting diode) and touch-screen displays. The basic premise has been proposed before: A display screen is stacked over one or more solar cells, which are charged by the light passing through the display. But earlier designs allowed a relatively small amount of light to reach the solar cells, so little power was generated even in the best light conditions, Motorola researchers said in the patent. The ultimate goal is to develop a device that could remain charged indefinitely, without requiring users to plug into a socket or carry external chargers. Until now, the major obstacle has been the LCD’s polarizer and reflective screen, which sends light back to the viewer. In earlier designs, the reflective screen allowed less than 6 percent of the available light to reach the solar cells, Motorola said. To solve this problem, Motorola proposed using either cholesteric liquid crystal or polymer-disbursed liquid crystal in the display, instead of super-twisted nematic liquid crystals. This change in materials eliminates the need for both a reflective screen and polarizer in the LCD screen. As a result, Motorola claims as much as 75 percent of available light is able to reach the solar cells, providing a sufficient amount of power to charge the battery of a mobile device. Motorola rival Nokia also recently applied for a unique U.S. patent: Nokia is working on technology to warn cell phone users of impending lightning strikes.
<urn:uuid:25720cea-cfe6-451e-8a4e-4e959afb3d12>
CC-MAIN-2017-04
http://www.cio.com/article/2438746/energy-efficiency/solar-power-to-fuel-cell-phones.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00088-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92373
362
2.609375
3
September 2, 2013

In a previous article, I discussed the somewhat pedantic question: “What’s the difference between EtherChannel and port channel?” The answer, as it turns out, is none. EtherChannel is mostly an IOS term, and port channel is mostly an NXOS term. But either is correct. But I did get one thing wrong. I was using the term LAG incorrectly. I had assumed it was short for Link Aggregation (the umbrella term of most of this). But in fact, LAG is short for Link Aggregation Group, which is a particular instance of link aggregation, not the umbrella term. So wait, what do we call the technology that links links together? LAG? Link Aggregation? No wait, LACP. It’s gotta be LACP. In case you haven’t noticed, the terminology for one of the most critical technologies in networking (especially the data center) is still quite murky. Before you answer that, let’s throw in some more terms, like LACP, MLAG, MC-LAG, VLAG, 802.3ad, 802.1AX, link bonding, and more. The term “link aggregation” can mean a number of things. Certainly EtherChannel and port channels are a form of link aggregation. 802.3ad and 802.1AX count as well. Wait, what’s 802.1AX?

802.3ad versus 802.1AX

What is 802.3ad? It’s the old IEEE designation for what is now known as 802.1AX. The standard that we often refer to colloquially as port channel, EtherChannels, and link aggregation was moved from the 802.3 working group to the 802.1 working group sometime in 2008. However, it is sometimes still referred to as 802.3ad. Or LAG. Or link aggregation. Or link group things. Whatever.

What about LACP?

LACP is part of the 802.1AX standard, but it is neither the entirety of the 802.1AX standard, nor is it required in order to stand up a LAG. LACP is also not link aggregation. It is a protocol to build LAGs automatically, versus statically. You can usually build an 802.1AX LAG without using LACP. Many devices support static and dynamic LAGs. VMware ESXi 5.0 only supported static LAGs, while ESXi 5.1 introduced LACP as a method as well. Some devices only support dynamic LAGs, while some only support static. For example, Cisco UCS fabric interconnects require LACP in order to set up a LAG (the alternative is to use pinning, which is another type of link aggregation, but not 802.1AX). The discontinued Cisco ACE 4710 doesn’t support LACP at all; instead, only static LAGs are supported. One way to think of LACP is that it is a control-plane protocol, while 802.1AX is a data-plane standard.

Is Cisco’s EtherChannel/port channel proprietary?

As far as I can tell, no, they’re not. There’s no (functional, at least) difference between 802.3ad/802.1AX and what Cisco calls EtherChannel/port channel, and you can set up LAGs between Cisco and non-Cisco gear without any issue. PAgP (Port Aggregation Protocol), the precursor to LACP, was proprietary, but Cisco has mostly moved to LACP for its devices. Cisco Nexus kit won’t even support PAgP. Even in LACP, there’s no method for negotiating the load distribution method. Each side picks which method it wants to use. In fact, you don’t have to have the same load distribution method configured on both ends of a LAG (though it’s usually a good idea). There are also types of link aggregation that aren’t part of 802.1AX or any other standard. I group these into two types: pinning, and fake link aggregation. Or FLAG (Fake Link Aggregation). First, let’s talk about pinning. In Ethernet, we have the rule that there can’t be more than one way to get anywhere.
Ethernet can’t handle multi-pathing, which is why we have spanning-tree and other tricks to prevent there from being more than one logical way for an Ethernet frame to get from one source MAC to a given destination MAC. Pinning is a clever way to get around this. The most common place we tend to see pinning is in VMware. Most ESXi hosts have multiple connections to a switch. But it doesn’t have to be the same switch. And look at that, we can have multiple paths. And no spanning-tree protocol. So how do we not melt down the network? The answer is pinning. VMware refers to this as load balancing by virtual port ID. Each VM’s vNIC has a virtual port ID, and that ID is pinned to one and only one of the external physical NICs (pNICs). To utilize all your links, you need at least as many virtual ports as you do physical ports. And load distribution can be an issue. But generally, this pinning works great. Cisco UCS also uses pinning for both Ethernet and Fibre Channel, when 802.1AX-style link aggregation isn’t used. It works great, it is a fantastic way to get active/active links without running into spanning-tree issues, and it doesn’t require 802.1AX. Then there’s… a type of link aggregation that scares me. This is FLAG. Some operating systems such as FreeBSD and Linux support a weird kind of link aggregation where packets are sent out various active links, but only received on one link. It requires no special configuration on a switch, but the server is oddly blasting out packets on various switch ports. Transmit is active/active, but receive is active/standby. What’s the point? I’d prefer active/standby in a more sane configuration. I think it would make troubleshooting much easier that way. There’s not much need for this type of fake link aggregation anymore. Most managed switches support 802.1AX, and end hosts either support the aforementioned pinning or they support 802.1AX well (LACP or static). So there are easier ways to do it. So as you can see, link aggregation is a pretty broad term, covering more than just what falls under the umbrella of 802.1AX, as it also includes pinning and Fake Link Aggregation. LAG isn’t a good term either, since it refers to a specific instance, and isn’t suited as the catch-all term for the methodology of inverse-multiplexing. 802.1AX is probably the best term, but it’s not widely known, and it also includes the optional LACP control plane protocol. Perhaps we need a new term. But if you’ve found the terms confusing, you’re not alone.
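Since the load distribution method came up, here is a generic sketch of how a LAG member link might be chosen by hashing flow identifiers. Real switches use vendor-specific hash inputs and algorithms; the fields and the CRC32 hash below are arbitrary choices for illustration. It also shows why the two ends of a LAG can pick their distribution methods independently: each side only ever decides its own egress link.

```python
import zlib

def pick_member(links, src_mac, dst_mac, src_ip="", dst_ip=""):
    """Hash flow identifiers to one member link so a given flow stays on one path."""
    key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}".encode()
    return links[zlib.crc32(key) % len(links)]

lag = ["eth1", "eth2", "eth3", "eth4"]
# Same flow always lands on the same member; different flows spread across members.
print(pick_member(lag, "aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"))
print(pick_member(lag, "aa:bb:cc:00:00:01", "aa:bb:cc:00:00:03", "10.0.0.1", "10.0.0.9"))
```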
<urn:uuid:b9c0bcc1-b23a-4433-a283-8572571e565f>
CC-MAIN-2017-04
https://datacenteroverlords.com/category/gifmadness/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00024-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936089
1,527
2.890625
3
Compression is very important. While resolution gets the attention, compression is critical and can be a silent killer - both for quality and bandwidth. Regardless of resolution, all surveillance video is compressed. And even if two cameras have the same resolution, their compression levels can be much different. [See our compression / quality tutorial for background.] Thankfully, compression in H.264 is standardized on a scale of 0 to 51, as shown in the image below: However, camera manufacturers almost never disclose the Q levels used. Instead, they use a variety of homemade scales and naming systems. Here is a sample of ones we tested inside: So you can have two manufacturers' cameras with the same resolution but significantly different compression levels, and therefore varying image quality and bandwidth consumption. An industry first, IPVM has analyzed each of these manufacturers and answered these key questions:

- What is the real H.264 quantization level for each camera manufacturer's default settings? How do they vary? Who defaults the lowest and highest?
- To normalize the H.264 quantization levels so that each manufacturer had the same compression, what camera settings should be used?
- How does the range of compression levels used for each manufacturer map to H.264 quantization levels?
- What is the impact on bandwidth as H.264 quantization / compression levels are varied for different manufacturers?

If you really care about image quality and optimizing bandwidth / storage use, this is a critical report.
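To make the 0-51 quantization scale a bit more concrete, the sketch below applies the common rule of thumb that H.264 bitrate roughly halves for every increase of 6 in the quantization parameter. Both that rule of thumb and the baseline bitrate are assumptions for illustration, not results from this report.

```python
BASELINE_QP = 28
BASELINE_KBPS = 2000  # assumed bitrate for a given scene at QP 28

def estimated_bitrate(qp, baseline_qp=BASELINE_QP, baseline_kbps=BASELINE_KBPS):
    """Estimate bitrate at a new QP using the halving-per-6-QP heuristic."""
    return baseline_kbps * 2 ** ((baseline_qp - qp) / 6)

for qp in (22, 26, 28, 32, 36):
    print(f"QP {qp}: ~{estimated_bitrate(qp):.0f} kbps")
```

Even a difference of a few QP points between two vendors' defaults translates into a large gap in bandwidth and storage, which is why normalizing quantization matters when comparing cameras.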
<urn:uuid:52e04c01-2cda-4dcf-986b-9eb56faccb23>
CC-MAIN-2017-04
https://ipvm.com/reports/ip-camera-compression-comparison
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00418-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931249
300
2.734375
3
Security is more important today than it has ever been in human history. It seems every week there are several new headlines regarding a firm that was razed to the ground in light of a successful attack by hackers. Though these headlines merit plenty of attention, most people don’t realize that hackers target the common Internet user far more often than they do gigantic corporations. Whether you knew it or not, chances are a nefarious hacker somewhere on the Internet has had their eyes on your data at some point in your life. Don’t believe me? Then believe Edward Snowden, who revealed how the NSA had been harvesting unbelievably massive stores of data from domestic US firms such as Google, Yahoo!, Microsoft, and others. That’s right – some of your data has likely already wound up somewhere on a storage server deep within the bowels of Ft. Meade due to their PRISM program. Countless attacks happen to regular people every single day, and the saddest part is that they could have been easily avoided. Understanding how to protect yourself from other hackers will make you a better hacker, because it will help you identify weak points in security. Let’s take a closer look at how you can better protect yourself from other hackers.

- Password Security
Despite the warnings we’ve heard time and time again, you need to make sure that your passwords are 100% secure. It’s always best to store them in encrypted format, because they could easily be captured by a hacker if they are stored in plain text. Fortunately, software like KeePass will protect your passwords with incredibly strong encryption that is impossible to break. In addition, it will generate random passwords that are extremely complex, which brings me to my next point. You can’t afford to use weak passwords anymore. Instead of using your birthdate, address, or the name of your dog as your password, it’s time to get more creative. Try to make passwords a minimum of 8 characters long (though they really should be longer), and you must make sure they contain lowercase letters, uppercase letters, numbers, and special symbols. Otherwise, chances are much higher that your password can be hacked with a dictionary-based attack. While strong passwords can eventually be hacked with a brute force attack, they take much, much longer.

- Stay Away from P2P
I can already hear all the Bit Torrent users groaning, but the fact is that P2P networks are incredibly insecure. Even with a VPN tunnel, you still aren’t 100% safe. Without a VPN tunnel, other Bit Torrent users will be able to see your IP address, which by itself is insecure. But consider that you don’t always know if the file you’re downloading has been compromised. The integrity of files is easily compromised by hackers, who know how to slip code like viruses, Trojans, keyloggers, and other nefarious applications into Bit Torrent files to attack users. Even though prudent torrent users know to first scan downloaded files with antivirus software, they still aren’t perfect. If you really want to protect yourself from hackers that use P2P networks to facilitate their attacks, remember that an ounce of prevention is worth a pound of cure. Stay away from P2P networks.

- Don’t Use the Internet without Encryption
I’ve got three letters for you: V-P-N. In the information age, everyone – and I mean everyone – should use a VPN tunnel every time they connect to the Internet. With the exception of websites that only provide HTTPS connections, your data is frequently sent in plain text.
This makes it tremendously easy for ISPs, governmental agencies such as the NSA, and all kinds of hackers to capture and read your data. The NSA has made statements in the past that claimed they only wiretap communications that are one end domestic and one end foreign in an attempt to combat terrorism. However, it seems that they were capturing domestic data as well by harvesting information from Google, Microsoft, Yahoo, and other popular technology companies. Though there is little we can do to prevent a cloud service provider from mishandling information, we certainly have the power to make our data impossible to read while it is in transit through the Internet. Once your data is encrypted, it is impossible to read. But there are other benefits as well. VPN tunnels provide anonymity by masking your IP address, making it nigh on impossible for websites and online services to run audits that trace back to your unique IP address. Privacy is a large concern these days, and VPN tunnels will protect your data and your identity. Use them as much as possible!

- Use Off-Shore Services in Countries without Governmental Surveillance
If your online storage provider or VPN provider is based in the United States, you really have no way of knowing for sure if the government has coerced the service provider into letting them steal your information. Though most VPN providers have no-logging policies, the government still may force them to forfeit information regarding user activities. The PRISM program by the NSA left people all around the world distrusting US-based services, and for good reason. Most users don’t understand how to protect themselves from hackers, making it as easy as taking candy from a baby to steal personal information. Becoming a strong hacker means that you need to understand security, and the best ways to increase personal security and anonymity on the Internet include strong password creation, discretion when downloading files from P2P networks, securing data with encryption, and using online services that are located outside the United States. If you fail to obey these basic principles, you make it that much easier to become hacked.
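Returning to the password advice in the first section, the sketch below shows how quickly the brute-force search space grows with length and character variety, and how a strong random password can be generated with Python's standard secrets module. The example lengths and the pure brute-force attacker model are simplifying assumptions.

```python
import secrets
import string

def search_space(length, alphabet_size):
    """Number of candidate passwords a pure brute-force attacker must consider."""
    return alphabet_size ** length

lowercase_only = search_space(8, 26)
mixed_classes = search_space(12, len(string.ascii_letters + string.digits + string.punctuation))
print(f"8 chars, lowercase only : {lowercase_only:.2e} candidates")
print(f"12 chars, mixed classes : {mixed_classes:.2e} candidates")

def generate_password(length=16):
    """Random password drawn from upper, lower, digits and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```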
<urn:uuid:215b4a54-b863-44a1-8b01-bad4bd5f3625>
CC-MAIN-2017-04
https://www.hackingloops.com/category/hacking-news/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00354-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953533
1,168
2.640625
3
Despite iOS being traditionally regarded as the safest platform, there are a number of reasons why that assumption may be becoming outdated. Firstly, occurrences of ransomware, malware, rotten apps on the iTunes store, and social engineering have been coming into the news far more often in recent times. Then there is the question of the iPhone's encryption being closed-source firmware, meaning that any reliance on it is based on trust. On the other hand, it is hard not to admit that for people with a limited understanding of the risks, Android can be somewhat of a digital gauntlet. With torch apps, for example, having been found to demand permissions that allow for snooping, there would have to be prejudice involved not to admit that the platform has its problems. Why is it, then, that Android feels like it has a fighting chance of holding its own in the battle between the platforms?
Apple asked to 'jailbreak' its own system
Widely reported in recent news, Apple was asked by the FBI to help break the passcode on the San Bernardino shooter’s phone. In fact, what really happened is that the FBI asked Apple not to decrypt the phone but rather to ‘jailbreak’ its iOS operating system; to remove the time delays between failed attempts that make its 4-digit passcode ‘relatively’ safe. So, what does that mean? With a 4-digit passcode on that particular iPhone, there are 10,000 unique possibilities for unlocking the phone. These days, that would actually be very easy for cracking software to defeat with brute force (trying each possible option one at a time). It’s for this reason that, in order to make a 4-digit passcode work, there must be a time-delay security feature of exponentially increasing waiting periods between failed attempts. When a person (or cracking software) fails five attempts at guessing the four-digit passcode, the phone imposes a 20-minute cool-off period. Get it wrong another five times and there is a 40-minute wait, followed by 80 minutes and so on. It’s this security feature the FBI wanted Apple to ‘jailbreak’ (and not the encryption itself), because not having to wait for the delay between attempts would allow the phone to be unlocked vastly more quickly. Apple refused. Cue John McAfee, a computer programmer famed for his antivirus company. He has offered to decrypt the phone in question for the FBI with social engineering. Many people were quick to dismiss the claims, calling McAfee mad for suggesting it, but in reality social engineering could indeed help to unlock the phone. That is because ‘social engineering’ could, in this case, be as simple as using all available information on the San Bernardino shooter to decide which four-digit codes to attempt first. These could include family birthdays, former house numbers, dates of graduating from schools, or perhaps numbers that relate to the man’s cultural history and religion - all of which can be considered part of a social engineering hacking technique for narrowing the data field. McAfee (or anybody else trying to hack the phone) would still be left with the problem of the passcode’s extra security feature of an ever-increasing delay time between failed attempts. There are, of course, ways around this problem. Firstly, as the FBI asked Apple to do, the phone could be ‘jailbroken’ of this added security, and perhaps McAfee believes he could have done this with the resources at his disposal.
Another theoretical possibility is to make multiple copies or ‘emulations’ of the phone’s iOS within another computer: virtual machines. Running a thousand emulations of that phone's iOS, you would only have to wait for the delay between tries a handful of times per copy; this would allow the cracking software to break into the iPhone much more quickly. Make ten thousand virtual versions of the phone’s iOS and you would be able to crack it in one go. A combination of social hacking (for more precise targeting) and the implementation of virtual emulations (or a ‘jailbroken’ delay) would allow the phone to be cracked more quickly still.
What about Android?
Firstly, Android users can make use of third-party apps that are open source. This means that the encryption can be independently reviewed, so far less trust is involved – unlike with Apple. With Android, it is also much more likely that third-party developers will implement higher-end encryption sooner, to gain a foothold in an encryption software market estimated to be worth $4.82bn (£3.46bn) by 2019. Apple, on the other hand, is a multinational conglomerate that is more concerned with commercial marketing (delivering profits to shareholders) and with ruling over its proprietary software with a tight corporate fist. It is for this reason that, as time passes, we can perhaps expect to see higher-end encryption on Android sooner than on iOS. The big question that we are left with, as to the future of the two platforms, is this: Will Apple continue to make users rely on its word, and insist on making people trust that it is implementing safe encryption? Or will it dare to step into the world of open-source, peer-reviewed technology that would allow people to trust iOS encryption, and perhaps allow Apple to regain its place as the distinctively better platform on the market? If Apple clings to corporate secrecy, on the other hand, it could allow Android to leap ahead by benefiting from the surge of encryption. Ray Walsh, Cyber Security Analyst, BestVPN
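The arithmetic behind splitting the keyspace across emulated copies is easy to sketch. The figures below assume, as the article does, that a waiting period is imposed after every block of five failed attempts; the exact delay lengths don't matter for the point being made, which is that enough parallel copies reduce the number of enforced waits per copy to zero.

```python
import math

def delay_periods(codes_to_try: int, attempts_per_block: int = 5) -> int:
    """How many enforced waiting periods are hit while trying `codes_to_try`
    passcodes, if a delay is imposed after every block of failed attempts."""
    return max(0, math.ceil(codes_to_try / attempts_per_block) - 1)

total_codes = 10_000  # all possible 4-digit passcodes
for copies in (1, 1_000, 10_000):
    per_copy = math.ceil(total_codes / copies)
    print(f"{copies:>6} emulated copies -> {per_copy:>5} codes each, "
          f"{delay_periods(per_copy)} waiting periods per copy")
```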
<urn:uuid:c05129ac-d8cf-4904-b51f-610bb9ab8589>
CC-MAIN-2017-04
http://www.itproportal.com/2016/02/25/android-vs-ios-the-great-security-debate/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00382-ip-10-171-10-70.ec2.internal.warc.gz
en
0.966031
1,158
2.671875
3
The companies have created a prototype which focuses on rapid reconfiguration of terabit networks. IBM along with AT&T and ACS have developed a software-defined networking prototype (SDN) technology which can transfer large amounts of data over the cloud in case of disaster. Funded by the Defense Advanced Research Projects Agency (DARPA)’s CORONET programme, the new prototype drastically reduces time to set up a cloud-to-cloud high-speed network through a service provider from days into seconds. The SDN prototype is a resource management system that will enable cloud service providers to access extra bandwidth in case of disasters. AT&T Labs executive director of network evolution research, Robert Doverspike, said: "These shifts have driven the need to develop rapid and high rate bandwidth-on-demand in the Wide Area Network (WAN). "By combining software defined networking (SDN) concepts with advanced, cost-efficient network routing in a realistic carrier network environment, we have successfully demonstrated how to address this need." IBM provided the cloud platform and intelligent cloud data centre orchestration technologies to support the dynamic provisioning of cloud-cloud communications, while AT&T developed the networking architecture for the SDN, with ACS providing network management and optical layer routing of the cloud networking architecture. During the demonstration, the setup time clocked less than 40 seconds, and the companies were able to get results within seconds by using reconfigurable optical add-drop multiplexer (ROADM) equipment, which increases bandwidth. IBM Research member Douglas Freimuth said: "This technology not only represents a new ability to scale big data workloads and cloud computing resources in a single environment but the elastic bandwidth model removes the inefficiency in consumption versus cost for cloud-to-cloud connectivity."
<urn:uuid:4e2a1605-156d-43e4-abb1-12d6e8d50ef9>
CC-MAIN-2017-04
http://www.cbronline.com/news/data-centre/ibm-att-and-acs-develop-technology-for-faster-relocation-of-data-during-disaster-310714-4331449
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00400-ip-10-171-10-70.ec2.internal.warc.gz
en
0.91119
374
2.78125
3
I received several direct questions on my latest blog posting regarding the row-oriented data page. Before I move on to more discussion of columnar, I thought I'd answer those questions here.
1. How does the DBMS know where fields begin when they come after variable length fields, since they will not be in the same place for every record?
It's correct that any field that follows a variable length field will not be in the same position on each record. The DBMS reads the field length of the variable length field from its first 2 bytes and uses that to "jump" to the start of the next field. All subsequent fields, provided they are not variable length as well, will begin at fixed offsets from the end of the variable length field. This is why you should put variable length fields at or towards the end of the record, and why you should only use variable length fields when the field size really does vary. Variable length fields save space, but keep in mind that 2 bytes are added to each such field for the length bytes.
2. If a field is null, wouldn't it be 0 or 1 bit in length?
First of all, nothing inside an actual field is a single bit in length. Everything in the DBMS is done on byte (8 bit) boundaries. So there will be at least 1 byte for each field, even if that field were a decimal(1), which could only contain values up to 9 and could be held in 4 bits. For nulls, there is an ADDITIONAL byte prepended to the field. If that byte is 'on', the field is null and the DBMS will ignore whatever leftover data may happen to be in the field. If the byte is 'off', the value is not null and the field value is legitimate. So, if a value is set to null, space is not saved! For any nullable field, you actually have to ADD a byte to storage. However, nullability does not make the column variable length (see answer #1 above). Of course, a variable length column can be nullable.
3. What happens when a record is deleted?
Not much really happens when a record is deleted. There is a bit in the record header that is on or off depending on whether it's a valid record or not. That bit is set to 'off' when a record is deleted. I'll refer to invalid records as holes. Holes just sit there taking up space. Holes are actually linked together within the page! Why? Because if this is the page where the DBMS wants to insert a record, it can use an existing hole or compress holes together to make a bigger hole for the new record. If you have a clustering index, the DBMS MUST put the records in order. However, that 'order' is only according to the Row ID map. Physically on the page, they can still be out of order. This is fixed, and all holes removed, during reorganization processes, either explicitly called or, in the case of some DBMSs, run as background processes.
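To make the layout concrete, here is a small Python sketch of how a reader might walk one such record: a fixed-width field, then a nullable fixed-width field with a one-byte null indicator prepended, then a variable-length field whose first 2 bytes hold its length (which is what lets the DBMS "jump" to the next field). The specific field layout is invented for illustration; real page formats differ between products.

```python
import struct

def read_record(buf: bytes) -> dict:
    offset = 0
    # Field 1: fixed-width 4-byte integer, always at the same offset.
    (order_id,) = struct.unpack_from("<i", buf, offset)
    offset += 4
    # Field 2: nullable 4-byte integer; one extra indicator byte comes first.
    null_flag = buf[offset]
    offset += 1
    qty = None
    if null_flag == 0:                     # 0 = value present, 1 = null
        (qty,) = struct.unpack_from("<i", buf, offset)
    offset += 4                            # space is consumed even when null
    # Field 3: variable-length text; first 2 bytes give its length,
    # which tells us where the *next* field would begin.
    (length,) = struct.unpack_from("<H", buf, offset)
    offset += 2
    note = buf[offset:offset + length].decode("ascii")
    offset += length
    return {"order_id": order_id, "qty": qty, "note": note, "next_offset": offset}

raw = (struct.pack("<i", 42) + bytes([0]) + struct.pack("<i", 7)
       + struct.pack("<H", 5) + b"hello")
print(read_record(raw))
```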
<urn:uuid:b8fbf7b0-ebc5-497f-8397-cac7288e3526>
CC-MAIN-2017-04
http://www.b-eye-network.com/blogs/mcknight/archives/2010/04/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00308-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935521
658
2.515625
3
Sustainability is an emerging topic as governments large and small struggle to handle the impact of shrinking resources, global warming and increased energy costs. In New Bedford, a mid-sized city along Massachusetts’ southern coast, we are taking aggressive steps to reduce our carbon footprint while increasing energy efficiency — steps that could easily be duplicated in towns and municipalities across the country. Our goal is to reduce our government’s energy use by 20 percent within the next five years. Using a blueprint developed by a Sustainability Task Force we created four years ago, New Bedford is moving forward with a far-sighted plan that will lower our use of fossil fuels, reduce costs and help make our city and our planet a better place for our children and grandchildren. The plan includes building solar systems on city-owned sites, buying fuel-efficient vehicles for our municipal fleet, installing electric charging stations for fishing vessels that dock in our port, encouraging the private sector to install solar systems, converting oil-heated buildings to natural gas, promoting business and residential weatherization, and installing super-efficient LED lighting at our fishing piers. Our energy sustainability program has four clear goals: In some ways, New Bedford owes its existence to the last great global warming event in earth history. The retreat of the glaciers at the end of the ice age 12,000 years ago helped create the deepwater port and plentiful supplies of water that are the foundation of the modern New Bedford economy. By the 1800s, we were the home of the country’s whaling industry — Herman Melville left from New Bedford on his way to sea to write Moby Dick. Today we remain the No. 1 commercial fishing port in the United States. New Bedford’s dependence on water and the earth’s natural resources makes us extra sensitive to the impact that our city — both our government and our citizens — have on the environment every day. That is why we were very proud to announce last month that New Bedford, in cooperation with Consolidated Edison Solutions Inc. and BlueWave Capital LLC, will build solar panels on city-owned sites with the goal of producing 10 megawatts of clean and renewable energy. That is enough to power 1,500 homes. This program will add value to our city’s facilities and underutilized space and, we hope, encourage local businesses to follow our lead and install solar panels on their properties. The program was funded, in part, with an $80,000 federal grant authorized by the American Recovery and Reinvestment Act of 2009. The rooftop and ground-based photovoltaic units will be installed on buildings, schools and other municipal land parcels and will be operational by 2013. By decreasing our reliance on fossil fuels, our solar panels could produce up to 25 percent of the electricity consumed by city-owned facilities and could generate savings as high as $10 million by 2033. ConEdison Solutions will own the solar installations and enter into long-term power purchase agreements with New Bedford. The firm will also be responsible for installation, ongoing operations and maintenance — and provide financing for the projects. ConEdison Solutions is already a partner with the city in controlling its energy costs. We buy our energy through the SouthCoast Electric Power Group, which purchases its energy from ConEdison Solutions. Our solar initiative will not only save on energy costs, but also boost our local economy. 
BlueWave will work with us to bring solar energy to local businesses and residences, creating job and apprenticeship opportunities. And ConEdison Solutions has agreed to maximize its use of local construction contractors as it builds its solar installations. Our solar initiative is only the most recent innovative program in New Bedford’s drive to become a national leader in energy efficiency and renewable energy. Our energy program targets municipal, commercial, residential, transportation and port-related end-users. At New Bedford’s Harbor’s Wharves, we have begun an initiative, funded by the commonwealth of Massachusetts and the U.S. Environmental Protection Agency, to reduce emissions along the waterfront. We are installing 42 dockside electric pedestals — each with four outlets — so boats using our port can replace their fossil fuel with electricity. That means our port will use 310,000 fewer gallons of diesel fuel every year. This effort is crucial to maintaining the port as a vital element of our local economy. The port currently generates more than $1 billion a year in economic activity and is used by some 500 fishing boats. We are also moving to make our government operations more energy efficient. In addition to installing solar panels at government-owned sites, we will switch 5 percent of our municipal vehicle fleet to highly efficient vehicles within five years, convert our oil-heated buildings to natural gas, and investigate the feasibility of biomass conversion for one of our oil-heated buildings. Other facets of our energy sustainability program include: I believe that local governments must be proactive in responding to climate change to protect our natural resources and conserve energy. New Bedford’s energy program creates a floor upon which future generations can build more far-reaching initiatives that will protect the planet, grow our economy and sustain our community far into the future. Scott W. Lang is the mayor of New Bedford, Mass.
<urn:uuid:e73e8a55-4e1e-4ab7-8f30-165803f0c4ee>
CC-MAIN-2017-04
http://www.govtech.com/technology/Energy-Sustainability-Plays-Key-Role-in-New-Bedford-MA.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00126-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939486
1,078
2.65625
3
This post looks at how an attacker can intercept and read emails sent from one email provider to another by performing a DNS MX record hijacking attack. While our research on the state of email delivery security indicates that this attack is less pervasive than the TLS downgrade attack, it is equally effective at defeating email in-transit encryption. This article explains how this attack works, how it can be mitigated and to what extent it also affects the security of a website. Before delving into how this attack works and its countermeasures, I will briefly summarize DNS and DNS MX records for readers who are not familiar with this aspect of the Internet. If you are familiar with this topic, you can skip the next two sections.
Understanding DNS records
DNS records are used to translate a domain address, let’s say www.elie.net, into an Internet address, commonly known as an IP address. This translation is needed because computers only know how to communicate with an IP address and not a domain address. This translation is also helpful because it allows multiple servers and IP addresses to share the same domain address, which provides redundancy and scalability. It also helps make the Internet faster by allowing big services and CDNs to host the same content on servers in many different countries and return the IP address of the closest server to the client when the client looks up the domain address. This technique is called geoIP load balancing.
Understanding DNS MX records
DNS MX records are a specific form of DNS record that tells us which IP address to use when sending an email to a given domain. As visible in the diagram above, when Alice wants to send an email to Bob (firstname.lastname@example.org), her server (smtp.source.com) needs to resolve the IP address of Bob’s mail provider server. To do this, her mail server asks the DNS server for the MX record of the domain destination.com. The server replies with the IP address to which Alice’s server will connect to deliver the email to Bob. In our example, Bob’s server has the IP address 220.127.116.11.
DNS MX record hijacking
DNS hijacking attacks work as follows. The attacker poses as, or compromises, the DNS server used by Alice’s mail server to find out where to deliver Alice’s email to Bob. Instead of returning the legitimate IP address, the DNS server returns the IP address of a server owned by the attacker, as illustrated in the diagram above. Alice’s server believes this IP address is the legitimate one for Bob’s server and delivers the email to the rogue server. The attacker reads the email and, to make the attack invisible, forwards it to the real server. This attack is possible because DNS was not designed with security in mind; as a result, there is no default security mechanism baked into it to authenticate that a DNS answer really comes from the rightful owner of the domain. This shortcoming will eventually be fixed with the deployment of DNSSEC and DANE. This deployment and other ways to mitigate this type of attack are discussed at the end of this post.
Are websites vulnerable as well?
Can an attacker use DNS hijacking to defeat HTTPS and read or intercept web pages? At the moment (2015), the answer is complicated, but hopefully in a few years it will be a straightforward no. Like email, until DNSSEC is deployed and enforced, websites are vulnerable to DNS hijacking.
However, there are a few mitigations that make such attacks significantly harder than for emails, at least until almost the same mitigations are deployed for email in transit, which is what Gmail and the other big email providers are working on. Here are the two key differences that make DNS attacks harder against websites.
HTTP vs HTTPS separation: In the web world, the non-encrypted version (HTTP) and the encrypted version of the protocol (HTTPS) use different addresses and are treated differently by browsers (same origin policy). When you enter a URL starting with https, e.g. https://www.elie.net, you are instructing your browser to establish an encrypted connection. In that context, carrying out a DNS hijacking attack does not help the attacker, because they would still need a valid certificate for the domain; the browser will refuse to establish the connection otherwise. So, if you type a URL starting with https or click on a link with the https prefix, you are safe.
HTTP Strict Transport Security (HSTS): This specification helps mitigate what happens when you don’t specify whether you want to talk to the server in the clear (http) or encrypted (https). Typing the URL directly in a browser is common, for example www.elie.net instead of https://www.elie.net. In that case, the browser has no idea whether you want the encrypted version of the site or not. For backward compatibility reasons, as some sites don’t support HTTPS yet, your browser will default to the unencrypted version. HSTS aims to mitigate this issue by allowing websites to tell browsers that they should only connect over HTTPS. Technically, a website sets HSTS by sending an HTTP header to the browser. Once this header is received by the browser, every subsequent request to the site (and possibly its subdomains) will be automatically upgraded to HTTPS by the browser. Therefore, this also protects against the set of attacks in which the attackers supply their victims with a link that starts with http:// in an attempt to intercept the communication, since the browser will upgrade it to HTTPS before the request is sent over the network.
Preventing DNS hijacking attacks
The long-term solution to this issue is the deployment and enforcement of DNSSEC, which will hopefully make DNS hijacking an obsolete attack by requiring DNS records to be signed with the domain owner’s private key. This will guarantee that an attacker won’t be able to send a spoofed DNS record to the client because they can’t forge the signature. This will protect every protocol, including SMTP and HTTP, against those attacks. In the shorter term, mail providers are working on developing a technology similar to HSTS but for SMTP traffic. This “SSTS” protocol (the name is yet to be defined) will allow a provider to pin a certificate and enforce that all emails are sent encrypted. This will prevent both MX hijacking attacks and TLS downgrades for providers that deploy it. This protocol is still at an early stage of specification, but hopefully deployment is not too far in the future. Today, signing emails with DKIM and enforcing signing with DMARC help alleviate the issue by preventing an attacker from modifying intercepted emails. The attackers don’t have access to the legitimate DKIM private key, so when the receiving server checks for the presence of DKIM and verifies the email signature, any email that was modified in any way will be rejected.
DMARC also helps in detecting attacks against your domain by allowing you to supply an email address where you will receive a statistical report of how many emails have failed the DKIM signature check.
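As a small, hedged illustration of the record type being attacked, the sketch below looks up a domain's MX records and flags any mail host that is not on an expected list. It assumes the third-party dnspython package (dns.resolver.resolve is the dnspython 2.x call; older releases use dns.resolver.query); the domain and allowlist are placeholders, and a check like this is only a monitoring aid, not a substitute for DNSSEC.

```python
import dns.resolver   # third-party package: dnspython

# Hypothetical allowlist; MX hostnames come back fully qualified with a trailing dot.
EXPECTED_MX_SUFFIXES = (".example-mail-provider.com.",)

def check_mx(domain: str) -> None:
    answers = dns.resolver.resolve(domain, "MX")
    for record in sorted(answers, key=lambda r: r.preference):
        host = record.exchange.to_text()
        ok = host.endswith(EXPECTED_MX_SUFFIXES)
        print(f"{domain}: priority {record.preference:>3} -> {host} "
              f"{'ok' if ok else 'UNEXPECTED - possible MX hijack'}")

check_mx("example.com")
```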
<urn:uuid:2fb19190-a887-45ed-b521-f762e4ecb0c8>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2016/01/18/how-email-in-transit-can-be-intercepted-using-dns-hijacking/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00428-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935465
1,482
2.609375
3
Is PHP Secure? In a classic watering hole attack, hackers compromised a well-known, respected high-traffic Website and planted malware in a bid to infect unsuspecting visitors. On Oct. 24, Google began to flag PHP.net as being a site hosting malware, i.e., potentially a watering hole. PHP.net is the community home for the open-source PHP programming language that powers hundreds of millions of Websites around the world (including this one). Initially, PHP.net administrators claimed that Google was wrong and that it was some kind of false-positive scan for malware. Within hours, PHP.net administrators realized that in fact multiple PHP servers and domains were compromised. Among the compromised sites are www.php.net, static.php.net, git.php.net and bugs.php.net. That's a very big deal with huge implications. Is PHP itself now at risk? What about all the people who visit php.net? According to PHP.net, malware was served to "a small percentage" of users between Oct. 22 and Oct. 24. More importantly, PHP.net administrators verified that the core Git repository was not compromised, though the site is currently now in a read-only mode. Git is the version control system used by PHP (and countless other software projects today). If the PHP Git server was compromised, the implication is that malicious code could be inserted into the PHP language itself. In a way this reminds me of the kernel.org attack from 2011 in which hackers somehow breached the main site behind Linux kernel development. In that case, malware wasn't being delivered to users, but attackers were likely trying to get at Linux kernel code. It's an attack that caused some minor delays in Linux kernel development but ultimately was found to be unsuccessful in creating a malware infection. Kernel.org also uses Git as its version control system. The way that Git works is that it's a distributed system, making it easy for developers to identify where things come from and when something has been altered. The highly distributed nature also means that there isn't really a single point of failure in the system either. So in the case of the PHP.net attack, I personally see little risk in any malware injection into PHP itself as being something that wouldn't be found quickly. On the other hand, the malware/watering hole attack is a bigger problem in my view. Although PHP itself is not likely at risk, developers who visited the php.net site were potentially compromised. In response, PHP.net is resetting user passwords and is now also in the process of getting a new SSL certificate for php.net. In my opinion, the fact that malware got onto the site in the first place means there is a weak spot on php.net in terms of either access control, input validation or code scanning. Perhaps just a single user (perhaps compromised, perhaps maliciously) was able to put a file on the site that wasn't validated by some form of automated scanner that might have caught the issue. The integrity of a site like php.net relies on developers as much as it does on the server in some cases. The official post mortem on the incident is expected next week. While this incident is alarming, it's also reassuring. The PHP.net community has responded rapidly to the issue and the nature of Git limits the risk such that the security integrity of the PHP language is likely still intact. Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.
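A short sketch helps show why tampering with a Git repository is hard to hide. Git names every object by a SHA-1 hash of its contents, so silently altering a file changes its blob ID and, transitively, every tree and commit ID built on top of it. The snippet below reproduces how Git computes a blob's object ID; it is illustrative only and says nothing about how php.net actually audited its repository.

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    """Compute the object ID Git assigns to a file's contents (a 'blob')."""
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

original = b"<?php echo 'hello'; ?>\n"
tampered = b"<?php echo 'hello'; /* injected */ ?>\n"

print(git_blob_id(original))   # any change to the bytes...
print(git_blob_id(tampered))   # ...yields a completely different object ID
```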
<urn:uuid:546b3cf6-5d6e-43af-906b-e1837dea8a29>
CC-MAIN-2017-04
http://www.eweek.com/blogs/security-watch/is-php-secure.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00152-ip-10-171-10-70.ec2.internal.warc.gz
en
0.965967
735
2.953125
3
GIS in the Pen The Nevada Department of Corrections (NDOC) announced in June 2006 that it will begin using a geographic information system (GIS) to track inmates in correctional facilities by the end of the year. The Southern Nevada Correctional Center, where the new system will be set up, is one of the state's nine major correctional institutions. Nevada operates 21 detention centers of various categories. The NDOC expects the system -- TRaCE -- to pay for itself by letting officials immediately improve allocation of available human resources. TRaCE provides wireless identification and location of inmates, correctional officers and staff members wearing the transmitters both inside and outside the prison buildings. The system also features officer-duress and man-down alerts that let officials know when correctional officers are in trouble. This is the sixth TRaCE installation in the United States. The system is already being used in Minnesota, California and Ohio, and in some European facilities. -- Elmo-Tech In a few years, telesurgery performed by multi-armed robots remotely controlled by real surgeons located hundreds or thousands of miles away will become commonplace. Canadian doctors from the Centre for Minimal Access Surgery are developing the technology for NASA. Their goal is to build a portable robotic unit to be used in space missions, war zones and remote areas within five years. The experiments done so far in Canada and for NASA are extremely encouraging. -- ZDNet The Defense Advanced Research Project Agency (DARPA) is funding research into minefields that intelligently react to enemy actions. A minefield is usually set up to stop enemy tanks from entering strategic areas. In the old days, enemy foot soldiers breached minefields by using mine-detection systems to clear paths for tanks. As a result, minefields consisted of both antipersonnel mines for the soldiers and regular mines for the tanks. DARPA's Self-Healing Minefield uses "intelligent," mobile antitank mines connected to each other via a wireless network. The technology allows the minefield to reconfigure itself to prevent enemy attack. Once scattered across an area, the mobile mines create an ad hoc, wireless network to establish their locations via geographic positioning system information, communicate with each other and monitor enemy attempts to breach the minefield. Once the minefield detects a breach, the mines calculate how to respond, and then individual mines hop to new locations to fill in the lane opened by the enemy. -- DARPA Cramped housing conditions and air pollution in Athens, Greece, have given rise to a "super breed" of mosquito that's larger, faster and more adept at locating human prey. Athens-based mosquitoes can detect humans at a distance of 25-30 yards unlike their counterparts elsewhere in the country that only smell blood at 15-20 yards. Unlike mundane mosquitoes, the super skeeters can also distinguish colors. The "super mosquitoes" of the Greek capital also beat their wings up to 500 times a second, compared to 350 beats for other variations. -- Agence France-Presse A government consultant, using computer programs easily found on the Internet, cracked the FBI's classified computer system, gaining 38,000 employees' passwords, including that of FBI Director Robert Mueller. 
The consultant -- who was working on a computer upgrade project for the agency -- broke into the system four times in 2004 and accessed records in the Witness Protection program and details on counter-espionage activity, according to documents filed in U.S. District Court in Washington, D.C. As a direct result, the bureau was forced to temporarily shut down its network and commit thousands of man-hours and millions of dollars to ensure no sensitive information was lost or misused. Joseph Thomas Colon pleaded guilty in March to four counts of "intentionally accessing a computer while exceeding authorized access and obtaining information from any department of the United States." The government does not allege that Colon intended to harm national security. Colon said in court filings that he used the passwords and other information to bypass bureaucratic obstacles and help the FBI install its new computer system. He also said agents in the Springfield, Ill., office approved his actions. -- The Washington Post Although breakdown rates for desktop and notebook computers improved in 2005-2006 compared with 2003-2004, a recent Gartner survey still found that one in every six laptops needs repair within one year of purchase, and motherboards and hard drives cause most notebook problems. Most U.S. broadband households will adopt voice over Internet protocol (VoIP) phone service in the next few years, but researchers differ on the timing. International Data Corp. projects there will be 44 million VoIP subscribers in 2010, while a Jupiter Research report, Broadband Telephony: Leveraging Voice Over IP to Facilitate Competitive Voice Services forecasts VoIP usage will grow to only 12.1 million by 2009. Web users in northern Scandinavian countries of Europe love to shop online more so than the rest of the continent, with more than 70 percent of Internet users in both Denmark and Sweden having purchased goods or services online, versus only 44 percent in both Spain and Italy, according to InSites Consulting.
<urn:uuid:6bb99e7f-fc26-4f36-a859-0158652427f3>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/100494064.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00393-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92518
1,046
2.890625
3
Precisely what can be done to restore mobile communications in flooded areas without electricity is obviously a big question in the wake of Hurricane Sandy. Some now argue that more power generators are needed at mobile tower sites. As a practical matter, that might not have helped at locations where towers were flooded, as the generators would have been under water and inoperable. A related question is how users can recharge mobile devices, and get messages out, when the towers are out of commission. Time Warner Cable has a partial answer to the latter question: it is sending out trucks equipped with mobile charging stations and free Wi-Fi access points to areas in Manhattan and Staten Island without electrical power. In addition to recharging, the Wi-Fi will allow users to send messages. That doesn't address the question of how to better harden the mobile network, though. Adding more generators would help maintain service in the event of widespread utility power outages, but might do nothing in cases of major flooding.
<urn:uuid:2745c5c4-e550-4e7a-ac56-09d3891e0e79>
CC-MAIN-2017-04
http://www.mobilitytechzone.com/topics/4g-wirelessevolution/articles/2012/11/02/314416-what-be-done-protect-mobile-infrastructure.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00301-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953436
215
2.703125
3
Bill Scott, PE, PMP, PgMP - Global Knowledge Instructor
A lot of project professionals have heard of earned value - most of the time as a method, as in the "earned value method." In fact, many project professionals use the earned value method as a tool to show project status (cost variances, schedule variances, cost efficiency, and schedule accomplishment efficiency) and to forecast the future. One word of caution about using the earned value method for determining schedule status and forecasting schedule: the earned value method normally ignores critical path considerations. So, the earned value method could show that, in general, you are ahead of schedule, while at the same time a critical path analysis could show you are behind schedule. The purpose of this article is to discuss how one determines earned value (EV) - one of the three data points needed to use the earned value method. First off, let's establish what earned value is. Earned value is a measurement of work performed. In other words, earned value, or EV, is the dollar value of work accomplished in a defined period of time. This should be reasonably easy to determine (calculate), except for the most complex of projects. The term "earned" refers to how many dollars of work were completed by the project during the evaluation period. In other words, I "earned" the value of project work actually accomplished. For example, suppose I use the "percent complete" method of calculating earned value:
[Charts: activities and values planned for project planning period one and project planning period two]
The two charts show what activities (from the network diagram and/or schedule) and what values (from the bottom-up cost estimate) the project planned to accomplish during period one and period two. There would be similar charts for all of the remaining periods of the project. At the evaluation time, this project would receive credit for all work accomplished during planning period one and any work that was completed outside of planning period one, such as activities I, J, and M from planning period two. So, the dollar value of work "earned" during planning period one is $23,000 ($19,000 + $4,000). The $4,000 from period two is included because the associated activities were actually accomplished in period one (ahead of schedule). The above example used percentage of activity completion as the method to determine earned value. There are several other methods for determining earned value, each with its own advantages and disadvantages. The project cost management plan should have told you which method would be used on which activities, based on the type of project. Other factors that may apply in determining which method to use include:
1. Ability to measure discrete effort
2. Activity size (value)
3. Activity duration
4. Number of measurement periods an activity spans
5. A combination of #2 and #4
Project personnel have many tools and techniques at their disposal to determine earned value. The method or methods picked are usually dictated by the project's characteristics. Once determined - and several methods can be used on a given project - one should use the same method for like activities throughout the project. In today's world, there are many software packages that will calculate earned value if you feed them:
1. The schedule
2. Activity cost estimates
3. The EV method to be used
4. Actual completion data
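The period-one numbers above feed directly into the standard earned value formulas. In the sketch below, EV is the $23,000 from the example; the planned value, actual cost and budget at completion are invented placeholders purely to show how the variances, indices and a common estimate at completion are derived - and, per the caution above, the schedule figures say nothing about the critical path.

```python
EV = 23_000    # earned value from the example above
PV = 19_000    # planned value for period one (assumed for illustration)
AC = 25_000    # actual cost of the work performed (assumed for illustration)
BAC = 150_000  # budget at completion (assumed for illustration)

cost_variance     = EV - AC   # negative -> over budget
schedule_variance = EV - PV   # positive -> ahead of schedule (in EV terms only)
cpi = EV / AC                 # cost performance index
spi = EV / PV                 # schedule accomplishment efficiency index
eac = BAC / cpi               # one common estimate-at-completion formula

print(f"CV={cost_variance}, SV={schedule_variance}, "
      f"CPI={cpi:.2f}, SPI={spi:.2f}, EAC={eac:,.0f}")
```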
<urn:uuid:72758a12-ef30-4ab1-a0be-50f024e45005>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/content/articles/earned-value/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00301-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959706
710
3.28125
3
Optical communication is a type of communication in which light is used to carry the signal to the remote end instead of electrical current. Optical communication relies on fiber optic cables to carry the light signals to their destinations. A complete optical communication network also includes a modulator/demodulator, a transmitter/receiver, a light signal and a transparent channel; these are the building blocks of the optical communications system. Furthermore, there are two ways to increase the bandwidth of the optical path: one is to increase the single-channel transmission rate of the fiber, and the other is to increase the number of wavelengths transmitted over a single fiber. The latter is the basis of wavelength division multiplexing (WDM) technology; electrical transmission equipment, by contrast, is really only suitable over the last few kilometres of the path. Compared to electrical transmission, optical communication has many advantages and has now largely replaced copper wire communications in the core networks of many developed countries. Optical communication will remain the trend for some time to come. Optical fiber is the most common type of channel for optical communications. Beyond manufacturing process, material composition and optical properties, optical fiber is often classified by purpose, dividing it into communication fiber and sensing fiber. As a transmission medium, optical fiber comes in two kinds, general-purpose and special-purpose; functional-device fiber refers to fiber used to perform functions such as amplification, splitting, multiplying, modulation and oscillation of light waves, and it often forms part of some kind of functional device. The principle of optical communication can be described simply as follows: first, the sender converts the information (e.g., voice, image or data) into an electrical signal, which is then modulated onto a laser beam so that the intensity of the light varies with the amplitude (frequency) of the signal; the beam is then sent over the optical fiber. At the receiving end, a detector receives the optical signal and converts it back into an electrical signal, which is demodulated to restore the original information. The ultimate goal of the transmission network is to build all-optical networks spanning the access network, the MAN, the backbone network and so on - in a word, to fully realize optical transmission in place of copper wires. The backbone network has the most demanding requirements for speed, distance and capacity. Applying ASON technology in backbone networks is an important step towards the intelligent optical network; the basic idea is to introduce an intelligent control plane into the optical transport network so that resources can be allocated on demand. DWDM also shows its strengths in the backbone network and may in the future completely replace SDH in order to achieve IP over DWDM. The MAN is where operators will face bottlenecks in providing bandwidth and services, and at the same time it will present the largest market opportunities. SDH-based MSTP technology is mature, has good compatibility, and can flexibly and effectively support a variety of data services. For the access network, FTTx (fiber to the x) is the ideal long-term solution.
The FTTx evolutionary path gradually pushes fiber closer to the user: from FTTN (fiber to the neighborhood) to FTTC (fiber to the curb) and FTTB (fiber to the building), and finally to FTTH (fiber to the home) or FTTP (fiber to the premises). Of course, it will still take a long time for this to apply universally. During that transition, fiber access will coexist with ADSL/ADSL2+.
<urn:uuid:df5aa678-c09c-49ab-98cb-d0712753d1ea>
CC-MAIN-2017-04
http://www.fs.com/blog/optical-communications.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00329-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937678
732
3.71875
4
Traditionally, the conversation surrounding identity theft has been focused solely on individuals. In the past, identity theft typically referred to a third party gaining access to an individual’s personal data such as credit card and social security information. Over the past few years, the identity landscape has expanded dramatically. Due to the rise in cloud usage, applications, objects and devices, the number and variety of identities that an individual has to keep track of has skyrocketed. A person, in other words, no longer has a single identity. A typical person might have upwards of 15 identities distributed across social media accounts, applications, cloud services, mobile and physical devices. Naturally, as a result of the identity explosion, there are now more cyber threats than ever before. With so many profiles scattered throughout cyberspace, and each holding sensitive personal information, consumers are at a greater risk now for identity fraud than in the past. Here is a list of some of the common threats that exist in the identity landscape today:
- MITB / MITM
- Session Riding / Token Stealing
- ZITMO / MITMO
- Key Logging
These types of threats all target identities but with different goals and attack vectors. For instance, a man-in-the-browser (MITB) or man-in-the-middle (MITM) attack will compromise a person’s online identity. On the other hand, a Zeus-in-the-mobile (ZITMO) or man-in-the-mobile (MITMO) attack will compromise the person’s mobile device identity. Yet as varied as each attack is, there is one common goal amongst all threats, regardless of how they are deployed: to compromise or steal a person’s digital identity. These identities are then used to access items such as intellectual property, trade secrets and funds. Identities can be used to gain access to entities that, if compromised, could cause a great deal of harm. Think of the damage that could ensue, for instance, if unauthorized individuals were to gain access to identities that would allow them to make changes to critical infrastructure. Aside from critical infrastructure protection, there are also threats related to private information. For consumers and security decision makers, it is important to recognize the growing number of threats that exist today and understand that traditional identity security solutions, such as passwords, are no longer effective.
<urn:uuid:43ccba87-c8c4-48d8-97f6-eadbe7bde743>
CC-MAIN-2017-04
https://www.entrust.com/many-identities/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00329-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946649
492
2.953125
3
You may not be aware of this, but merely erasing your data the regular way does not make it disappear for good. Some of it can still be retrieved with the use of recovery tools. This means that your personal information is at risk not only if your computer is stolen or broken into, but also if you simply sell it before getting a new one. If your hard drive was not wiped clean, you’ve potentially given another person access to a variety of personal information. To illustrate the magnitude of this threat: in 2003 Simson Garfinkel and Abhi Shelat published an article in “IEEE Security & Privacy Magazine” reporting on an experiment in which they purchased 158 used hard drives on the secondary market (most of them from different sellers on eBay) and checked whether the drives still contained readable data. To their astonishment, around one third of the drives appeared to hold information that was highly confidential and should definitely have been erased prior to the drive’s resale. They acquired a total of 75 Gbytes of data, consisting of 71 Gbytes of uncompressed disk images and 3.7 Gbytes of compressed tar files. One of these drives was most likely used in an ATM machine in Illinois, and no effort had been made to remove any of the drive’s financial information. The log contained account numbers, dates of access, and account balances. In addition, the hard drive had all of the ATM machine software. Another drive contained 3,722 credit card numbers (some of them repeated) in a different type of log format. In order to make sure that your data is erased properly, I’d recommend using one of the programs listed below; each is for a different operating system (Windows, Linux or Mac OS X).
Eraser (Windows) – Free
Eraser is an advanced security tool for Windows, which allows you to completely remove sensitive data from your hard drive by overwriting it several times with carefully selected patterns.
ShredIt X (Mac OS X) – Shareware
ShredIt is the file shredder / hard drive cleaner that offers all the features you need to clean a hard drive, wipe a file and more – as well as the ease of use and safety features you really want from data file shredder software.
dcfldd (Linux) – Free
dcfldd is an enhanced version of GNU dd with features useful for forensics and security. Based on the dd program found in the GNU Coreutils package, dcfldd has the following additional features:
- Hashing on-the-fly – dcfldd can hash the input data as it is being transferred, helping to ensure data integrity.
- Status output – dcfldd can update the user on its progress in terms of the amount of data transferred and how much longer the operation will take.
- Flexible disk wipes – dcfldd can be used to wipe disks quickly and with a known pattern if desired.
- Image/wipe verify – dcfldd can verify that a target drive is a bit-for-bit match of the specified input file or pattern.
- Multiple outputs – dcfldd can output to multiple files or disks at the same time.
- Split output – dcfldd can split output to multiple files with more configurability than the split command.
- Piped output and logs – dcfldd can send all its log data and output to commands as well as files natively.
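For a feel of what such tools do under the hood, here is a minimal sketch of a single-file overwrite pass in Python: it overwrites the file's bytes with random data several times before unlinking it. This is an illustration only - it does not handle filesystem journaling, copy-on-write snapshots, or SSD wear-levelling, which is exactly why purpose-built tools (and full-drive wipes) are recommended instead.

```python
import os

def shred_file(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents in place with random bytes, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b", buffering=0) as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                chunk = min(remaining, 1024 * 1024)   # write in 1 MiB chunks
                f.write(os.urandom(chunk))
                remaining -= chunk
            f.flush()
            os.fsync(f.fileno())                      # push the pass to disk
    os.remove(path)

# Example (destructive - only run on a file you really want gone):
# shred_file("old_statement.pdf")
```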
<urn:uuid:0ebadac9-54c1-48a0-9ab3-95a4d326a5a1>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2007/11/27/privacy-erase-your-hard-drive/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00329-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949332
722
2.59375
3
Finding green IT projects in U.S. government or abroad that reach beyond rote "environmentally friendly" talking points is difficult. More often, green IT initiatives focus on reducing hardware's electricity consumption to cut costs -- and being green is a secondary goal. A prime example is data center consolidation, which currently is the largest green agenda item for state and local IT departments. Consolidation modernizes equipment, saves money and slashes energy usage. The latter outcome just so happens to reduce government's carbon footprint -- the measure of human-caused carbon emissions, which most scientists say is a contributor to climate change. But it begs the question: Does government IT have a green role to play beyond energy-efficient hardware? We may find out in roughly nine months. That's when Seoul, South Korea, should be able to report conclusively on the progress and success of its Smart Transportation Program, said Simon Willis, senior director of the Global Public Sector Internet Business Solutions Group for Cisco Systems Inc. (No English-speaking city representatives from Seoul were available to be interviewed for this story.) The transportation program aims to increase public transit ridership with flexible, distance-based fares and Web-based technology for determining public transit routes from any city location. Seoul is one of seven cities enlisted in Connected Urban Development (CUD), a partnership with Cisco that commits those cities to creating IT projects that cut carbon emissions by reducing traffic congestion. The six other cities are Amsterdam, Netherlands; Birmingham, England; Madrid, Spain; Hamburg, Germany; Lisbon, Portugal; and San Francisco. Many of the cities plan to use IT to make public transportation more attractive to citizens, and Seoul's project is the furthest along, Willis said. American metropolises tangled by traffic difficulties would be wise to keep an eye on Seoul's progress. The project is moving from the planning stage to execution, according to Cisco. "It's pretty early in the project," Willis said. "This is cutting-edge stuff." Convenient Public Transit If you work in a big city and don't use public transportation, there's a good chance that's because it's a hassle to use. Seoul wants to make its system more accommodating. The city will create a platform of Web-delivered software, in conjunction with several vendors, which will combine information from the city's various transportation fleets and the Korean National Railroad. The platform will offer real-time traffic information and determine a user's most efficient public transit route based on wherever he or she happens to be located in the city. The platform will combine different modes of public transportation, such as trains and buses, and it will analyze traffic on each route and the available parking. The user will be able access the system from an iPhone, BlackBerry or wireless Internet connected laptop. Seoul's project targets everyday drivers as well as public transit commuters. For example, imagine a driver is stuck in a traffic jam on his route to work. That driver could access Seoul's Web-based traffic platform from a smartphone and quickly learn that a nearby train or bus routed toward his destination was picking up passengers near the traffic jam, Willis said. The system could also report whether the corresponding train or bus station had available free parking. 
Willis said the next version of Seoul's traffic platform also might enable citizens to remotely book seats on trains and buses. In addition, users will be able to access the system while riding public transit to find out the expected wait times at connecting bus or train stations. Seoul officials expect riders to value these new features because it would empower them to more accurately organize their days around public-transit schedules. But not all users will have laptops or smart phones handy, so the city is considering installing devices for accessing the system at bus stops and train stations. The travel data necessary for Seoul's unified platform sits in agency silos, as it does in many U.S. cities. IT departments in Seoul's various agencies that maintain information on bus and train routes, traffic congestion and parking availability will each transmit that data to private-sector partners charged with building the technology. As is true of local agencies in America, coordinating all of those silos for a centralized project is a cumbersome task, said Willis. Most IT workers who have participated in multiagency IT projects can attest to the importance of a "higher" government power mandating them. Cisco always sells a city's mayor on a CUD plan first to establish a clear mandate that agencies can't ignore. One long-term goal for Seoul's Smart Transportation Project is futuristic. Someday the system might be able to automatically route buses and trains where riders are waiting and bypass vacant stops, Willis said. Citizens would use their smartphones or on-site bus stop devices to alert the system they need a ride. Willis said that could shorten waits. "The whole routing of buses would become a more flexible and demand-affected process. The first thing you need to do is connect all the buses and other transport assets so you can see where they all are and communicate with all of them. Even that step has not been taken by most cities yet," Willis said.
<urn:uuid:312b6c25-b8be-432d-b9b0-81cdb5389934>
CC-MAIN-2017-04
http://www.govtech.com/featured/South-Korea-Uses-Web-Technology-to.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00319-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952252
1,045
2.71875
3
Administrators: privileged network deities, or just a type of ordinary network user much the same as anyone else? Years into an age where IT security has become a mainstream topic, this remains the sort of polarizing question that can provoke one of two reactions: shock or relief. Those in the ‘shock’ camp will probably have grown up used to the traditional divide in which there were only two types of network being: the queen bees at the center of a chaotic and uncertain network who needed absolute power and were called ‘network admins’. Everyone else was mortal and had to make do with a support number stating the hours of service. In too many organizations, the power of admins was seen not only as natural but as necessary - a benign dictatorship of those ‘in the know’. This model persists, especially in smaller organizations, but it is obsolete because, quite simply, it creates unquantifiable risk. For anyone who agrees with this analysis, the realization that admins are just a specialized type of user is more likely to elicit the second response… that of relief. The arguments that justify the second world view are myriad. Privilege management for users is a cornerstone of good IT governance; an essential mechanism for making the actions of each and every employee visible regardless of job role. Everyone is a risk, and handing out unaccountable rights to any network user is dangerous because it creates a single point of failure. Privilege management introduces accountability, which benefits everyone, admins included. Organizations that ignore such principles risk adding their names to the long and dark catalogue of anecdotes about unhappy admins running amok on networks for one reason or another, or about errors that caused botched configuration changes with embarrassing consequences. So much for the theory… but what about making privilege management work on a practical level? The basic mechanism of control for all network users remains the old-fashioned login, which for standard users means access to applications and data, and for admins means access to the datacenter servers where those resources are located. Introducing privilege management such as that offered by Avecto’s Defendpoint into this setup allows admins to be granted on-demand elevation of rights to a server, as well as verified elevation where access is best authorized by a second admin. This adds a layer of authentication for mission-critical resources – those on which the organization depends – and does so while creating an audit trail of access through the Enterprise Reporting Pack. Server access can then be divided very strictly by responsibility so that, in the heat of the ‘admin moment’, individuals aren’t tempted to stray onto servers in ways that might have unintended consequences. All server access is visible through comprehensive dashboards. The old world of the admin worked satisfactorily at a time when organizations were still working out how IT was going to be used in their business model. These days, IT is more likely to be the business model, and the risk calculation has been turned on its head. Admins, users, applications and data are the four corners of a secure network and they are all equal. This is how grown-up organizations work.
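To illustrate the pattern being described - on-demand elevation scoped by responsibility, verified elevation requiring a second admin, and an audit trail for every decision - here is a deliberately tiny Python sketch. It is a generic toy, not Defendpoint's API, and every name, server and file in it is invented.

```python
import json
import time
from typing import Optional

AUDIT_LOG = "elevation_audit.jsonl"   # hypothetical audit-trail file

def request_elevation(admin: str, server: str, reason: str,
                      approver: Optional[str] = None) -> bool:
    """Grant elevation only for servers the admin is responsible for;
    mission-critical servers also require approval from a second admin."""
    responsibilities = {"alice": {"web-01"}, "bob": {"db-01"}}   # invented mapping
    critical_servers = {"db-01"}
    granted = server in responsibilities.get(admin, set())
    if server in critical_servers:
        granted = granted and approver is not None and approver != admin
    with open(AUDIT_LOG, "a") as log:   # every request is recorded, granted or not
        log.write(json.dumps({"ts": time.time(), "admin": admin, "server": server,
                              "reason": reason, "approver": approver,
                              "granted": granted}) + "\n")
    return granted

print(request_elevation("alice", "web-01", "deploy hotfix"))         # True
print(request_elevation("bob", "db-01", "schema change"))            # False: needs a second admin
print(request_elevation("bob", "db-01", "schema change", "alice"))   # True
```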
<urn:uuid:786c979e-6e5e-4978-a8ce-114886833052>
CC-MAIN-2017-04
https://blog.avecto.com/2013/08/whose-job-is-it-to-watch-the-admins/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00227-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954332
636
2.515625
3