Universities across the nation are facing more challenges than ever before. According to Mehdi Maghsoodnia of University Business, shrinking budgets are contrasted with higher costs and aging facilities. The government is decreasing funding while getting more involved from a regulatory standpoint. Demand is up, enrollment is unpredictable, and graduation rates are down. Schools are fighting for ways to cope, retain students, and help them graduate on time. In response to these challenges, many schools are implementing massive open online courses (MOOCs) offered by companies like Coursera, edX, and Udacity. Virtually attended by students around the world, these online courses offer access, often at no cost, to top-notch classes. MOOCs are, ultimately, a product. Schools are using them along with other adaptive learning applications — like flipped classrooms and e-textbooks — to deliver education, leaving schools with the task of developing and delivering potentially hundreds of digital learning tools to thousands of students on hundreds of different devices. To set themselves up to succeed in this new environment, institutions need to embrace technology at a fundamental level and join a platform — a holistic software solution — that could enable any product to be integrated into the school’s educational framework. This platform would act like a switchboard, providing the ability to adopt and integrate many solutions like MOOCs, gamification, adaptive learning, and e-books. The type of data that flows into a university from MOOCs is different from that of traditional classes. Without some sort of framework in place to analyze and assess this data, it’s rendered meaningless. Here, universities can imitate corporations, most of which have put a common framework in place. Without such a framework, organizations could not have taken advantage of advancements like ERP and CRM tools, or even simple workplace productivity software like word processing. This infrastructure required the standardization of hardware, operating systems and software frameworks. The need for a common framework for educational content is just as significant. Without this framework, the experience for the educator and student could be severely fragmented, making it harder to adopt new, innovative tools into the academic framework. IT solutions providers can support MOOCs with cloud-based technology, security, e-commerce support, and data capture and analytics to assist colleges in integrating MOOCs and other types of educational technology into the learning environment.
If you've checked just about any mainstream news source today, you'll know that July 1 marks the 150th anniversary of the beginning of the Civil War's bloodiest conflict between the North and the South -- the Battle of Gettysburg. Fought over a period of three days in the farmlands of southern Pennsylvania near the Maryland border, Gettysburg was the turning point of the Civil War. While in terms of casualties the battle was roughly a draw -- both sides had more than 23,000 soldiers killed, wounded, captured or missing -- it was the beginning of the end for the Confederacy. Not only did General Robert E. Lee lose nearly one-third of his officers, Lee's aura of invincibility -- established over the previous two years in one remarkable victory after another against larger and better-equipped armies -- was permanently shattered. If you're interested in Gettysburg and the Civil War in general, there are a number of databases online with a wealth of information about that horrific period in our history. I've highlighted a couple of them below: American Civil War Research Database -- Created by a Massachusetts-based company called Historical Data Systems, this relational database focuses not on "senior military officers and major battles," but on "an analytical look at the War from the perspective of the individual soldier": "Using information from each soldier's military and civilian experiences to build a database from 'ground up' rather than 'top down,' Historical Data Systems has created the only database of its kind that can be used for statistical and analytical examinations of the War. It is now possible to examine and measure the impact these individual soldier experiences had upon regimental effectiveness." Information in the database includes regimental comparisons, regimental rosters, assignments, casualty analysis by battle fought, historical overview, and regimental combat effectiveness calculated by combining regimental casualty analysis, soldier entry/exit rates and battle experience. Users also can follow a soldier through the war and get information about the regiments he served in and, in many instances, information about his home town. This database isn't free, but it's not expensive -- $25 a year, or $10 for a seven-day visitor pass. The Civil War Homepage -- This site features a Civil War photo database and gallery, battle maps and official records, including battle reports written by commanding generals such as Lee, Ulysses S. Grant, George McClellan, Ambrose Burnside and Joshua Lawrence Chamberlain. Created in 1997, the Civil War Home Page is owned, managed and personally funded by a gentleman named Michael Frosch.
The purpose of this text is to introduce or better explain the art of social engineering. This is one of the most difficult things to explain, but I found a rather easy method of doing it until you perfect your skills. To begin social engineering you must know how to cross-analyze someone. Cross-analyzation is basically determining someone’s personality by looking at them. This can be fairly easy because people make the way they live so obvious to others. For example, how would you determine someone as being a homeless drunk? Well, by their clothes, hygiene, smell, a bottle of alcohol in their hand, etc. Another way cross-analyzation can be used is over the phone, but this method is not reliable in some cases. Still, it is a very useful tool when social engineering. Look for things such as their tone of voice, the way they answer the phone, mumbling, stuttering, etc. You can also see how dumb someone is if you tell them that they won a sweepstakes and you want to collect some information about them and they don’t hang up on you. Information gathering these days is very easy. For example, if you wanted to find out the phone # of your next door neighbor, simply log onto the internet, go to portal.cyberarmy.com, and type in what you know, whether it be the phone number, address, or full name. Another method would be snooping through their mail and trash. The best method (in my opinion) is to run a tap on their line. This way you can use what you heard to make a better cross-analyzation, and you might gain some valuable information such as usernames and passwords (which could doubtfully be obtained by social engineering, or you wouldn’t be reading this), and maybe you’ll even be able to get some dirt on them. Another method would be to get the old binoculars out and watch as they enter passwords for e-mail on their computer or as they chat with people. Another method I have used in the past is hacking their voice mail box or answering machine. Now if you do all of this you will have a shit load of information on them. Now to get to the engineering. Before calling your victim you might want to get a caller ID spoofer or go box someone else’s line. Another useful tool is a voice changer. If you do use a voice changer, make it sound like you are a woman, because most men will listen to and believe what women tell them more than they would other men. You might also want to see what kind of counter measures they have by snooping around the building or office that is your target. If it’s just someone’s house, then you can assume that they have caller ID, and some have anonymous call rejection. You might also want to route your call through a PBX if you have access to one. That way if you call out of area it’s billed to them. Another tip would be to call your target up to see what you have to say or do to get the information you want from a person. For example, if you were trying to get an operator to tell you the number you are dialing from, you might want to imitate a telco guy and say you are calling from a trunk (which might work). You might also wanna find out about that person’s boss so that you could imitate him/her. This is a very good idea if you can imitate their tone of voice and their slang. And if they give you some shit, just give them shit back and say you’re gonna fire them!!! Now it’s time to put all that effort to work. This should actually be the easiest part if you successfully completed the other stuff. But if not, you are gonna have to do some serious bullshitting.
You might wanna rehearse what you are gonna say as best as you can, but you must expect the unexpected. If the target is on a PBX then hack into an account (such as the boss’s) and call them from that. Or you could box the boss’s house if you know where he/she lives. This is only the first version of this text. I will be releasing more tips and tricks of the trade. This may be a short text, but it is a lot more informative than the other ones out there. Anything to add or change? Send it to firstname.lastname@example.org
Configure support for voice
Questions derived from the 642-812 – Building Converged Cisco Multilayer Switched Networks (BCMSN) Cisco Self Test Software Practice Test.
Objective: Configure support for voice
SubObjective: Describe the characteristics of voice in the campus network
Item Number: 642-8184.108.40.206
Multiple Answer, Multiple Choice
Which network problem areas are addressed with the implementation of QoS features? (Choose three.)
Quality of service (QoS) is a set of tools and services that attempts to provide improved network services for voice, video, and data applications in terms of loss, delay, and jitter. QoS provides the ability to predict and manage the network performance for specific types of traffic. At optimum times when bandwidth is plentiful, QoS is not an issue. In times of congestion, the QoS processes include classifying traffic, setting traffic priorities, allocating dedicated bandwidth, and managing congestion. An application’s network requirements are usually addressed in terms of loss, delay, and jitter. Data applications tend to be more sensitive to loss and less sensitive to minor delay or jitter. In comparison, interactive or “real-time” applications such as voice and video can tolerate intermittent loss of packets, but delay and jitter can render the application unusable. Delay is the amount of time it takes a packet to arrive at the final network destination after being transmitted into the network. A variety of factors cumulatively contribute to the overall delay of a transmission, including:
- Packetization delay: This is the amount of time required to segment the original data stream into supported packets.
- Serialization delay: This is the amount of time it takes to encode the bits of the packet onto the media (wire, fiber, etc.).
- Propagation delay: This is the amount of time it takes to send a bit from one end of the media to the other.
- Processing delay: This is the amount of time it takes to get the packet from the input queue to the output queue of a device.
- Queuing delay: This is the amount of time that a packet sits in a queue.
Jitter is the variation in the amount of delay one packet experiences versus the next. Loss is the difference between how many packets made it through the network and how many were sent.
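To make the delay components and the jitter definition above concrete, here is a small Python sketch (not part of the exam material; the packet size, T1 link speed, distance and delay samples are illustrative assumptions):

```python
# Illustrative estimates of two delay components plus jitter for a voice stream.
# All figures below (packet size, link speed, distance, sample delays) are
# assumed values for demonstration, not numbers from the practice test.

def serialization_delay_ms(packet_bytes: int, link_bps: float) -> float:
    """Time to clock a packet's bits onto the wire."""
    return packet_bytes * 8 / link_bps * 1000

def propagation_delay_ms(distance_km: float, speed_km_per_s: float = 200_000) -> float:
    """Time for a bit to cross the medium (~2/3 the speed of light in copper/fiber)."""
    return distance_km / speed_km_per_s * 1000

def jitter_ms(one_way_delays_ms):
    """Jitter: the variation in delay from one packet to the next."""
    return [abs(later - earlier)
            for earlier, later in zip(one_way_delays_ms, one_way_delays_ms[1:])]

if __name__ == "__main__":
    voip_packet_bytes = 214  # assumed: 160-byte G.711 payload plus RTP/UDP/IP/Ethernet overhead
    print(serialization_delay_ms(voip_packet_bytes, 1_544_000))  # ~1.1 ms on a T1 link
    print(propagation_delay_ms(1000))                            # ~5 ms over 1,000 km
    measured = [21.0, 23.5, 22.0, 30.0, 24.5]                    # assumed end-to-end delays in ms
    print(jitter_ms(measured))                                   # delay variation between consecutive packets
```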
Mad Dog 21/21: New Moth
by Hesh Wiener
Something funny happened when the 1816 edition of the Farmer's Almanac, also known as the Old Farmer's Almanac, was being prepared. Due to some kind of fluke, the Almanac forecast a snowstorm in New England for July 13. And that's exactly what happened! It's no wonder, then, that readers of the Almanac, which has been published continuously since 1792, take its weather forecasts seriously. But you don't need that venerable publication to forecast this: For the foreseeable future, you're going to spend a lot more for the electricity to heat and cool your computer. You heat up your computer with electricity every time you turn it on. Just about all the power the machine consumes turns into heat, although a little bit may become light or sound. If your computer is a laptop, the heat your computer generates might not be all that much, something in the vicinity of 20 watts when it's working hard, which means it's throwing off heat at roughly 68 BTUs per hour. If you have a workstation with a large CRT display, you're probably burning up 150 watts, maybe more, and kicking out more than 500 BTUs per hour. If you have a big iSeries, such as a 9119-59X, you can pull down more than 20 KW of electricity and kick out more than 77,000 BTUs of heat per hour. IBM's largest mainframe, the new System z9 "Danu", uses less power and generates less heat, but it doesn't have internal disks like the iSeries does. With outboard disks, a Danu mainframe will use more power and give off more heat than the big iSeries. Most business sites are between these extremes. They use desktop computers that burn about 100 watts and servers that eat up several kilowatts. But if that typical site has 200-odd seats, it could be eating 25 KW and kicking out 80,000 BTUs an hour, with or without the server. And that doesn't count the lighting for the offices where the computers happen to be or the power cost of HVAC systems to keep the place comfortable.
Old Farmer's Almanac: Like the old farmer himself, it is outstanding in its field.
All the power used for heating the machines and then cooling the rooms they are in has been costly all along. Now, with energy costs half again as high as they were a year ago, companies are going to start looking at the power portion of their physical plant costs . . . and possibly point some fingers at the computers. The good old days, when most companies didn't connect electric bills to computer usage, are gone. It's going to become fashionable to talk about the amount of juice computers use, and in some cases there will be arguments that only add more heat to the environment without shedding much light on possible options. The people charged with managing computers, which might mean you, poor reader, are going to have to have some answers ready, or at least to be able to explain, coherently, why there might not be any answers right this second. You'll have to explain this to bean-counters, whose initial take on the situation is about as useful as a weather forecast based on wooly bear caterpillars. Wooly bear caterpillars, the larvae of Isabella tiger moths, are supposed to know what's coming down the weather channel this winter. According to the folklore, these caterpillars, with black ends and a brown stripe in the middle, have a broader brown stripe if the winter is going to be mild and a narrower one if the winter is going to be a real humdinger. I don't know a lot about the grubs, but I can tell you this: Don't believe a word they say.
The caterpillars, generally speaking, change color as they age. They start out mostly black. They molt, maybe half a dozen times, as they grow during the summer and fall. With each molt, if they are eating well and are otherwise in good fettle, the brown stripe gets longer. You might say that the black ends stay the same and the caterpillar gets longer in the middle, but that's not exactly the case. Eventually winter does come, to wooly bears as it does to people, and the caterpillars hide somewhere and enter the insect equivalent of a state of hibernation, a state that may cause your laptop to crash until you go through two or three scary BIOS updates. The caterpillar, unlike your laptop, doesn't crash, it crashes out, sleeping its way through the winter and living off fat it built up when its favorite veggies were in season. It can survive extreme cold, perhaps 50 or more degrees below zero. In the spring it will wake up, pupate, and become a very pretty moth. At that point it has to start a family, because once it gets rid of its fur coat and grows wings it cannot live through a winter. A similar fate may lie ahead for computer users who plan big applications that depend on powerful client-side software to work. When the applications are in their larval stage and can run on skinny workstations, the computer folk can probably ride out an energy crisis that lasts a couple of seasons. Once the full-blown suite of software is deployed, and workstations upgraded to handle all the new code, PC power consumption might jump from 75 watts a seat to 150 (depending on how much bigger the PCs get and how intensively they are run), and cooling costs may rise, too. Meanwhile, back at the server ranch, those client-heavy applications are still going to need more server power, more storage, and maybe more networking apparatus, all of it burning power and bleeding heat. If your servers are X86 machines, you might find yourself studying the new Sun Galaxy products, which use AMD CPU chips instead of Intel processors. When it comes to doing more computing per watt, AMD is way ahead of Intel. Intel has vowed to close the gap or maybe even do better than AMD, but you can't plug its plans into your network. If you're using any other kind of server, you're just plain stuck; you won't have any nice answers for the bean counters. But if, like most companies, you use a lot more power on desktops than in the glass house, you may have options. Moreover, the computer vendors that already have been rolling out workstations that use less juice are going to continue along that path. So, too, are the display makers who want you to dump your CRTs and switch to flat screens that use a lot less power and, incidentally, yield better financial results for their manufacturers.
Wooly Bear: The insect world's answer to Punxsutawney Phil, the groundhog.
Of the big vendors, Hewlett-Packard seems to have the lead in low power desktop systems that use AMD chips and come in configurations that include fewer power-hungry components, but perversely has pressed for a complicated solution involving blade PCs and thin clients. Dell, a favorite in the corporate world, is staying loyal to Intel, and as a result the best it can do is come up with some new products that take advantage of laptop technology, where Intel does indeed provide superior low power chips.
All the vendors can do a lot more than they have, if they can persuade customers to buy conservation-flavored client machines, which might cost more up front even if they save money over the long haul. One example is the disk drive in every PC. Smaller drives use less power. A typical drive used in desktop computers, a 120 GB Seagate Barracuda 7200, uses 12.5 watts when it's seeking data. A typical disk drive used in desktop replacement laptops, an 80 GB Toshiba MK8026GAX, uses less than 3 watts while seeking. Both drives use much less power when idling. Moreover, client machines in most offices don't need anywhere near this amount of disk capacity. Microsoft says they could get by with Windows XP with no more than 1.5 gigabytes of disk space. Office XP in a typical configuration uses another quarter of a gigabyte. So a client with, say, 5 GB of disk storage might be more than ample. A machine with such low disk requirements could even use the 1.8-inch disks that show up in music players and which use less power than laptop drives, although the relatively slow speed of very small disk drives might interfere with user productivity. But wait, there's flash memory. It might cost a lot more than a hard drive, but it hardly uses any power at all. Flash can pull peak currents of more than 2 amps per gigabyte (at 3 volts, which is typical for common flash chips), but it uses that current for only a small fraction of its active cycle; when flash is idling, it draws only leakage current that's barely measurable. Flash could cut the power cost of permanent storage on a 5 GB client to levels well below the power levels a disk drive requires. And because it's very fast, it could allow client machines to be built that use less dynamic memory (that can draw 2 or 3 watts per stick) and more swap space on flash. I would not be surprised if the companies that now sell keychain flash drives start offering models that live inside PCs, imitate hard drive interfaces, and run a lot faster than they could off a USB port. What I cannot predict, any more than a caterpillar can guess the way a winter will work out, is how the marketplace would react to a PC with no moving parts that consumes less power than a laptop and still provides a nice big display suitable for office work and all the capacity any reasonable software stack requires. BitMicro Networks, Adtron, and M-Systems already make solid-state flash drives in 2.5-inch and 3.5-inch form factors that look like ordinary SCSI, IDE, or ATA disks as far as software and systems are concerned; these are used primarily for military applications.
Isabella tiger moth: The air carrier of the wooly bear, one of the few not in Chapter 11.
Here's the new math: Computers don't use as much power when they are idle as they do when they are in active use, but it's not hard to reckon that a PC using 100 watts when it's really working can consume 1,000 watt-hours a day or 365 kilowatt-hours a year. At 10 cents a KWH, that's something like $36, and it's possible to bring that to $60 when related HVAC costs are added in; it can cost nearly as much to get rid of heat as it does to create it in the first place. Now if that same power costs 20 cents per KWH, the direct cost jumps to $72 per PC and maybe $120 or so with HVAC thrown in.
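A quick sketch of that back-of-the-envelope math, using the article's own illustrative figures (100 W active draw, roughly 10 hours of active use a day, 10 to 20 cents per kWh); the HVAC multiplier is an assumption standing in for "nearly as much again to remove the heat":

```python
# Reproduces the rough per-PC energy math above. Inputs are the article's
# illustrative figures plus an assumed HVAC overhead multiplier; nothing here
# is a measured value.

WATTS_TO_BTU_PER_HOUR = 3.412  # standard conversion factor

def annual_cost_usd(watts, hours_per_day, usd_per_kwh, hvac_multiplier=1.0):
    """Yearly electricity cost for one machine, optionally including cooling overhead."""
    kwh_per_year = watts * hours_per_day * 365 / 1000
    return kwh_per_year * usd_per_kwh * hvac_multiplier

pc_watts = 100
print(pc_watts * WATTS_TO_BTU_PER_HOUR)           # ~341 BTU/hr of heat while active
print(annual_cost_usd(pc_watts, 10, 0.10))        # ~$36/yr direct, at 10 cents/kWh
print(annual_cost_usd(pc_watts, 10, 0.10, 1.67))  # ~$60/yr with cooling roughly folded in
print(annual_cost_usd(pc_watts, 10, 0.20, 1.67))  # ~$120/yr at 20 cents/kWh with cooling
```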
In big cities like New York, where the population of computers is very high, juice already costs more than a dime per KWH for residential and commercial customers, and the rates on record are based in part on oil costs that were half what they are today. If you can cut that power cost by 80 percent, which seems possible, by moving to machines that use only 200 watt-hours a day, there's a possibility that cleverly configured low power PCs, whether they use small disks or flash, can make it into the marketplace. All that's required is a cost of ownership per box similar to the price users pay for the more familiar PCs that use more familiar amounts of electricity. And if these low power PCs were shrunk to bumps on the back of displays, which they could be, they might even be able to sell at a premium to their older cousins in locations where size matters. The market shift that has made laptop clients as popular and nearly as affordable as desktop models might turn out to be the larval stage of a broader trend, one towards smaller and less power hungry machines. The first signs of a move in that direction would most likely appear in showcase settings, like bank lobbies and open plan offices at trendy firms. Back room operations, even if they have more workstations and servers, won't change as quickly. If things go the way I think they will, there will be no end of arguments about whether cool running PCs are really practical, or just the PC--meaning politically correct--thing for corporations to do. The bets won't be settled by debates. They'll be settled by the nature of new machines the computer industry offers, the long-term trends in power costs, and by the willingness of the people who lay out computing strategies to examine unfamiliar issues. You won't have to wait long to see which way things are moving. The way prospects for winter power bills are looking, the idea of ultra low power PCs could, like tiger moths, take off in the spring.
It Takes a Lot of Supercomputing to Simulate Future Computing
October 27, 2016 Nicole Hemsoth
The chip industry is quickly reaching the limits of traditional lithography in its effort to cram more transistors onto a piece of silicon at a pace consistent with Moore’s Law. Accordingly, new approaches, including using extreme ultraviolet light sources, are being developed. While this can promise new output for chipmakers, developing this technology to enhance future computing is going to take a lot of supercomputing. Lawrence Livermore National Lab’s Dr. Fred Streitz and his teams at the HPC Innovation Center at LLNL are working with Dutch semiconductor company ASML to push advances in lithography for next-generation chips. Even as a physicist, he says what is required from extreme ultraviolet lithography is stunning, if not unbelievable. In essence, in order to keep miniaturizing and adding transistors at the 7nm level and below, the light wavelength has to be reduced as well, which is no small feat. “You can’t create something that is 12 or 14nm wide if you’re using light at a wavelength of 40nm, for instance. Right now, EUV sources are down in the 10-12nm range and under. The trick is creating light at that wavelength, but that is its own great challenge.” This type of problem is exactly what U.S.-based company Cymer focuses on—they make the light sources used by ASML, which used to contract for that technology before it purchased Cymer over a year ago. At that time, Streitz and team were already working with Cymer, and now, with ASML, they are seeking to refine the process through complex multi-physics simulations running on unclassified machines at LLNL, including one of the last-standing IBM BlueGene machines, the “Vulcan” supercomputer, which, while reaching the end of its lifespan, still sees 90-95% utilization at the lab. It might sound simple to change the wavelength of light for this purpose, at least until you understand what has to happen to create the light—and do so at the ideal wavelength. The light is made by spitting tiny droplets of molten tin and smacking those with a pre-pulse from a laser to flatten them into tiny pancakes. Along comes a much larger laser, which turns that blob into a plasma pancake. That plasma then radiates light at the desired wavelength. Collector optics then gather it, focus it, and turn it into the light required for lithography to pattern transistors onto the chip. Again, this is what Cymer did—now under the ASML banner. And it’s not simple. “Since smacking small bits of molten metal with lasers and then understanding both the physics of creating a plasma in the generation of light and doing it efficiently—they are spitting out micron-sized drops at kilohertz rates, getting smacked with the laser, blown into plasmas, and then trying to get the light back out over and over again, there are a lot of ways this can go wrong or not be done efficiently,” Streitz tells The Next Platform. “In terms of doing this efficiently, that light has to be as bright as possible to make lithography efficient. Looking downstream, if you’re a company building a chip, you want bright light so you’re only making a single pass—time is money and the brighter the light, the fewer number of passes.” Modeling this process to meet the real-world needs of science and industry is what supercomputers are made to do, and Streitz’s group at the HPC Innovation Center jumped at the chance.
After all, as a weapons laboratory that handles both classified and unclassified workloads, simulating the interaction of plasmas with materials is nothing new, and neither are complex multi-physics codes. “This was all quite a bit more complex than the folks at Cymer or ASML realized. Our mission here at the lab overall is certifying the reliability and safety of our stockpile, which means dealing with complex physics. This wasn’t crazy for us to do and has some similarities to things we do for the National Ignition Facility,” he explains. Ultimately, research and simulation work like this is of incredible value to competitiveness—and it takes large-scale computing resources and expertise to do this. For Cymer in particular, this sort of research is what has powered their business for decades, but they reached the limits internally for solving the problems associated with EUV, which pushed them to look to other experts. What is interesting here is that this is a case of U.S.-funded supercomputers working for private industry. This is more frequent in Europe and in many Asian countries, particularly China, where the dividing line between public and private funds isn’t always simple. There are programs in the United States that match supercomputing sites with private industry (for example, the INCITE program at Oak Ridge National Lab) to help advance certain areas, but there is an extensive vetting process. In short, the industry leaders need to approach those programs with a problem that might benefit their work, but they have to wrap it in the cloak of solving a wider scientific problem. Streitz stresses that this is a beneficial model and program, but for some areas, specifically the plasma interactions they studied to assist Cymer and ASML, they simply solve problems. They are not committed to publishing the results publicly as science problems. In short, Cymer came to the lab with a problem, the HPC Innovation Center had the expertise, and they paid simply for the resources they needed. This is an alternate model—and one Streitz hopes will catch on. As head of the HPC Innovation Center, Streitz says he had to look at how other countries navigate the public/private partnership waters. For the lab in particular, which is a weapons facility and therefore under tight lock and key, especially to outsiders from companies and foreign nations, having a center that is not under such strict guard where companies can bring real-world problems is of great value. Finding expertise to understand and model mission-critical industry problems on world-class supercomputers is not easy. Streitz’s group is working to change that—and without requiring companies to make their problem a science problem with published results. “This is the benefit of being a lab but also a not-for-profit organization,” he details. While even these advancements might not be enough to rescue Moore’s Law in the long term, seeing cooperation between private industry and one of the few places that can model such problems at scale is important. At a time when crunched budgets are the norm, it also emphasizes the role of supercomputing in the future of computing—a future we all depend on in nearly every aspect of our lives.
The rapidly increasing interaction of consumers with social online networks, mobile phones and other intelligent devices has brought about significant lifestyle benefits that are under a serious threat from cybercriminals, according to an international virus analyst. Addressing the audience of Kuwait’s ICT Security Forum, Stefan Tanase, Malware Analyst at the EEMEA Research Center, Kaspersky Lab Global Research and Analysis Team, said that in 2009 social networking sites will be used by around 80 per cent of all Internet users, the equivalent of more than one billion people. "The growing popularity of social networking sites has not gone unnoticed by cybercriminals; last year, such sites became a hotbed of malware and spam and yet another source of illegal earnings on the Internet. The Kaspersky Lab collection contained more than 43,000 malicious files relating to social networking sites in 2008 alone," said Tanase. "Malicious code distributed via social networking sites is 10 times more effective than malware spread via email. Social networks have, approximately, a 10 per cent success rate in terms of infection compared to less than 1 per cent for malware spread via email,” he said. Stolen names and passwords belonging to the users of social networking sites can be used to send links to infected sites, spam or fraudulent messages such as a seemingly innocent request for an urgent money transfer. "Generally, users of social networking sites trust other users and accept messages sent by someone on their friends list without thinking; this makes it easy for cybercriminals to use such messages to spread links to infected sites. Various means are used to encourage the recipient to follow the link contained in the message and download a malicious program." According to the Kaspersky Lab expert, major Web 2.0 platforms such as Facebook or Twitter are highly vulnerable to malware attacks, and end users need to be aware of the risks and be ready to take precautionary measures to protect themselves. During his presentation, Tanase also highlighted the rapid spread of mobile phone hacking. "In the last week alone we have found five new Trojans which send such money transfer requests without the permission or knowledge of the phone's owner. The goal is to transfer large quantities of small sums in the hope that while individual users might not notice the leak, the overall sum of transfers will be significant. "There is a rise of the number of attacks targeting mobile phones and a more clear shift towards methods for monetization of these attacks."
About Kaspersky Lab
Kaspersky Lab is the largest antivirus company in Europe. It delivers some of the world’s most immediate protection against IT security threats, including viruses, spyware, crimeware, hackers, phishing, and spam. The Company is ranked among the world’s top four vendors of security solutions for endpoint users. Kaspersky Lab products provide superior detection rates and one of the industry’s fastest outbreak response times for home users, SMBs, large enterprises and the mobile computing environment. Kaspersky® technology is also used worldwide inside the products and services of the industry’s leading IT security solution providers. For the latest on antivirus, anti-spyware, anti-spam and other IT security issues and trends, visit www.viruslist.com.
The annual cost of cybercrime is either staggering, or a mere blip on the world’s economic bottom line, depending on how you look at it. It is notoriously difficult to quantify, since a majority of cybercrime incidents go unreported, some companies don’t even realize they have been compromised and many are not able to put a dollar value on intellectual property (IP) that they still have, but is now also in the hands of a competitor, a thief or another nation state. But most estimates put global losses in the hundreds of billions of dollars. One report released last month, by the Center for Strategic and International Studies (CSIS) and titled “Net Losses: Estimating the Global Cost of Cybercrime,” puts it between $375 billion and $575 billion. That, on the high end, would make it more than the U.S. defense budget. It would be more than the entire economies of many countries. And the report’s authors say while it is possible they have overestimated that cost, they believe it is far more likely they have underestimated it. Even so, the losses for most individual countries, including the U.S., amount to less than 1% of gross domestic product (GDP). For the U.S. it is estimated at 0.64%. The worst of the G20 countries is Germany, at 1.6%. By some reckoning, that could be viewed simply as another minor cost of doing business. That, in essence, is the view of Jason Healey, director of the Cyber Statecraft Initiative of the Atlantic Council. “When I hear about the massive cybercrime problem, I want to know what specifically do you mean?” he said. “If we are going to take the IP loss as seriously as they want us to take it, we need to know how it was actually used.” Healey said that estimating the real economic cost of cybercrime has been almost impossible for decades. He said it has had a range of two orders of magnitude since 1988. “We really don’t have a good answer,” he said. But he does agree with other experts and with reports that say the raw number matters less than the trend, which is that losses from cybercrime are increasing. TK Keanini, CTO of Lancope, is among them. “The important point here is that it is trending in the wrong direction and the rate is increasing year over year,” he said. He added that some companies were damaged so badly by cybercrime that they are no longer in business. So, for individual companies, “that is a much greater number than 0.64% in my book,” he said. More worrisome is that a majority of companies, while their leaders express heightened concern about cyber attacks, are not taking security measures that have been recommended by experts for years. A second report by PwC, also released in June, titled, “US Cybercrime: Rising Risks, Reduced Readiness” (CSO is a cosponsor of the report, along with the CERT Division of the Software Engineering Institute at Carnegie Mellon University and the U.S. Secret Service), did not attempt to estimate total global or U.S. losses, but found that, “7% of U.S. organizations lost $1 million or more due to cybercrime incidents in 2013, compared with 3% of global organizations; furthermore, 19% of US entities reported financial losses of $50,000 to $1 million, compared with 8% of worldwide respondents.” There are a number of reasons suggested for the growth in cybercrime. One is that defenders are, effectively, outgunned. The PwC report, based on a survey of more than 500 U.S. executives, security experts, and others from the public and private sectors, was blunt: “The cybersecurity programs of U.S. 
organizations do not rival the persistence, tactical skills, and technological prowess of their potential cyber adversaries,” it said. According to the CSIS report, the incentives are with the attackers. “Cybercrime produces high returns at low risk and (relatively) low cost for the hackers,” it said, while for companies, it is a business decision based on their perception of their risk. “The problem with this is that if companies are unaware of their losses or underestimate their vulnerability, they will underestimate risk,” the report said. Many are indeed unaware of their risk, according to PwC, which reported that, “the FBI last year notified 3,000 US companies – ranging from small banks, major defense contractors, and leading retailers – that they had been victims of cyber intrusions.” In other words, they didn’t discover the intrusions on their own. And that lack of awareness apparently leads to broad failures to implement even fundamental security practices – practices that have been recommended by the U.S. Commerce Department’s National Institute of Standards and Technology (NIST). The PwC survey found that 54% of respondents don’t provide security training for new hires, and only 20% train on-site first responders to handle potential evidence. Only half reported having a plan to respond to insider threats, and fewer than 40% reported that they have a mobile security strategy, encrypt devices and have mobile device management. It found that many organizations, including utilities and operators of other critical infrastructure, are using outdated software like Windows XP, which is no longer supported, even though the warnings about the end of support were issued six years in advance. And relationships with third parties are lax, and getting worse. The survey found that only 44% of companies have a process for evaluating third parties before they launch business operations with them. That is down from 54% the previous year. Only 31% reported including security provisions in contracts with external vendors and suppliers, and a mere 27% conduct incident-response planning with supply chain providers. To counter, or even slow the growth of cybercrime, experts agree that a much larger percentage of organizations need to implement those basics – what most of them call “security hygiene.” Tom Bain, senior director at CounterTack, said it is important to remember that much cybercrime is not all that sophisticated, such as SQL injection and basic malware, “like a Trojan that has been around in millions of variants for years. It doesn't always have to be a sophisticated attack, or executed with precision and stealth,” he said. But beyond that, Bain said companies could actually turn the tables by “applying stealth methods of monitoring, and doing that at-scale, so that organizations can essentially spy on attackers.” Keanini recommended “treating cybercrime as a business problem – as a competitor or disrupter to one's business continuity” as the first step. “Attackers are more than anything beating defenders by their innovation and creativity,” he said. “It is time that defenders meet them on these terms and outplay them for once.” Healey believes that the market, not government regulation, has the best chance of making companies take cybersecurity seriously, and that the most effective way to achieve it is through shareholder pressure. In a recent column in U.S.
News & World Report, he argued that the road to real reform should start in Omaha, Nebraska, home to the iconic “Oracle of Omaha” Warren Buffett; and then proceed to Sacramento, Calif., home to one of the nation’s most activist investor groups – CalPERS (California Public Employees Retirement System). If Buffett, famously risk averse, were to reject investments in companies that didn’t take cybersecurity seriously, “every other investor, corporate board director and executive would take notice,” he wrote. “Perhaps not even President Obama could command such attention on the issue.” CalPERS, he said, even when it is a minority shareholder, has been effective in a grassroots way in pressing companies to change policies or actions that they believe will hurt the long-term value of its shares. “I think that’s a great approach,” Healey said. “Convince shareholders that they’re at the risk of losing.” Companies are much more likely to respond to that kind of pressure than to another round of government regulations, he said. “I say let’s start with market solutions,” he said. This story, "Cybercrime: still only a tiny percentage of GDP, but it’s growing" was originally published by CSO.
Pros and Cons of Wireless Networks
Many people today do not realize that the first 802.11 wireless standards were ratified back in the late 1990s, with wireless equipment appearing shortly after. Unfortunately, because of the high cost of the equipment necessary for implementing a wireless network, very few were deployed in the beginning. But as more companies and users purchase laptops with integrated wireless network cards, and the cost of wireless devices decreases, wireless networks are becoming much more common. For those considering installation of a wireless network, the pros and cons must be weighed before making the final decision. The biggest advantage of wireless networks is the convenience of such networks — cables are no longer required to go to each machine on the network. With proper planning and design, a wireless network can provide coverage to vital areas with limited cabling requirements. Often, the only network cabling necessary is an Ethernet cable to the access point itself. In addition, some access points can take advantage of Power over Ethernet (PoE), either by use of a power injector or a switch that provides PoE. This eliminates the need to ensure a power outlet is near the access point. Of course, with the convenience of wireless come the problems, including interference to the network from outside sources. Potential sources of interference include some microwave ovens, cordless phones or even other wireless networks on the same channel or an overlapping channel. These sources will affect the wireless network’s reliability by reducing the network’s range or effectiveness, possibly blocking access to the network altogether. Another potential problem is due to the nature of a wireless network — because the signal travels through the air, often leaking outside the office or building where the network is, this presents the possibility of unauthorized access to the network. To help reduce this risk, various wireless encryption standards have been developed in an effort to protect the network from unauthorized access and protect the data sent over the wireless network. The first of these encryption standards was Wired Equivalent Privacy (WEP). Unfortunately, WEP was found to have serious issues that allowed an attacker to easily break the encryption used. Wi-Fi Protected Access (WPA) was created in an effort to solve the issues discovered in WEP. This was followed later by Wi-Fi Protected Access 2 (WPA2), which improved upon the encryption by using the Advanced Encryption Standard (AES) to encrypt the data. The final — and probably biggest — potential problem with wireless networks is bandwidth. One of the first wireless standards, 802.11b, supports up to 11 Mbps speeds, with a typical speed of about 6.5 Mbps. The 802.11a standard, ratified alongside 802.11b, and the later 802.11g standard both support speeds of up to 54 Mbps, with about 24 to 25 Mbps being typical. By contrast, the new 802.11n standard is being developed with a maximum speed of 540 Mbps and a typical speed of 200 Mbps. All these wireless standards share one major problem, however: the available bandwidth is shared among all devices on the network. This means that even with an 802.11n network, the more users there are on the network, the less bandwidth is available for each user. Also, because the bandwidth is shared, one device on the network can potentially use all the available bandwidth, thus limiting other devices or preventing them from connecting altogether.
This can become a serious problem when transferring large files over the wireless network. Although wireless networks can be quite useful, serious consideration should be given to their intended use, and steps should be taken to alleviate the potential pitfalls. With proper design and by setting appropriate expectations, wireless networks can become an essential part of any business’ network.
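To put the shared-bandwidth caveat in rough numbers, here is a small sketch; the "typical" throughput figures are the ballpark values cited above, the even split among users is a best-case simplification, and the 1 GB file size is an arbitrary example:

```python
# Rough illustration of how shared wireless bandwidth divides among users.
# Typical throughputs are the ballpark figures cited in the article; the even
# split and the 1 GB file size are simplifying assumptions.

def per_user_mbps(typical_mbps: float, active_users: int) -> float:
    """Best-case even split of the shared medium among active devices."""
    return typical_mbps / active_users

def transfer_minutes(file_megabytes: float, share_mbps: float) -> float:
    """Time to move a file at a given share of the bandwidth."""
    return file_megabytes * 8 / share_mbps / 60

for standard, typical in [("802.11b", 6.5), ("802.11g", 25.0), ("802.11n", 200.0)]:
    for users in (1, 5, 20):
        share = per_user_mbps(typical, users)
        print(f"{standard}: {users:2d} active users -> {share:6.2f} Mbps each, "
              f"1 GB file in ~{transfer_minutes(1000, share):.0f} min")
```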
It goes without saying that cybersecurity is a serious concern, especially as internet and online services become more ingrained in our lives. Since the advent of the Internet of Things (IoT), the number of connected devices in our homes, offices and on our person is growing at a fast pace. Connected devices already outnumber human beings, and continue to propagate at a chaotic pace across many fields, including healthcare, home appliances, industrial control systems (ICS) and vehicles. The rise of IoT brings huge advantages to businesses, consumers, government agencies and researchers in different sectors. Energy savings, better customer service, enhanced health data, improved vehicle performance and accurate crash analysis are just some of the benefits of IoT technology. But the benefits it brings to malicious hackers and cybercriminals are enormous as well, and the IoT security nightmare has already become a cause of serious concern. In this post, I will explain how IoT security is different from the traditional cybersecurity we’ve all come to know and love (or loathe, if you like), and why it should be taken more seriously. IoT devices generate a lot of data. Some of this data, such as health-related information, is quite confidential and intimate, and is subject to laws and regulations such as HIPAA. Other data, such as that generated by your connected toaster or light bulb, might not be very sensitive per se, but when combined with data from your smart lock, smart fridge, motion sensors… it can give away much about your life patterns and habits. Moreover, the storage and distribution of the generated data are the subject of much debate. For most devices, the data is stored on cloud servers, and is later used by service providers to make assumptions about user interaction with devices and make decisions that will improve user experience (or at least that’s what they say). However, the regulations that are in place pertaining to the boundaries of ownership of data are not nearly enough to address the issues we’re facing with the explosion of data generation and consumption. What kind of data can vendors collect exactly (does anyone remember the connected TVs that spy on users or the Hello Barbie dolls that record children’s interactions)? How much authority do vendors have over the data they collect from their consumers? Whom can they share it with? How long can they store it? What are the encryption and storage protection laws that apply to IoT data? These are just some of the questions tech experts and legislators will have to deal with very soon. And the inconsistencies in data privacy rules across different countries only add dimensions to the IoT privacy Rubik’s Cube.
Network security issues
A considerable percentage of IoT devices are lacking proper means to protect themselves against network breaches. In some cases, this can be critical, such as a smart lock that is remotely compromised and unlocked by a malicious actor, or vulnerable baby monitors that allow hackers to pick up a live feed of your children. In other cases, such as smart sensors or connected kettles, it might not be a big deal, you might argue. Or is it? Cyber criminals usually grab at every opportunity to exploit a vulnerability. And as far as they’re concerned, IoT security issues aren’t a “let me hack your light bulb and turn it on and off at my own will” situation (though I do admit that such an occurrence would be annoying) but rather an “I’ll compromise your light bulb and gain access to your network” opportunity.
See where it’s leading? The problem is each new connected device can become a path into the network, which we call an “attack vector” in cybersecurity jargon. Compromised devices can become beachheads for more serious attacks, allowing hackers to move laterally across the network and gain access to more critical information and devices. Smart kettles that give away Wi-Fi passwords and smart fridges that give away Gmail credentials are a testament to the case. Of special concern are smart homes, which lack the IT security infrastructure that organizations and tech firms are equipped with, house some of the most vulnerable devices, and can become attractive targets for malicious actors. IoT security issues go beyond simple data theft, network manipulation hacks, and financial losses. In many cases, it has to do with the health and safety of real human beings or the functionality of critical infrastructure that affects the lives of thousands and millions of people. Smart rifles that can be hacked to designate new targets remotely, drug infusion pumps that can be compromised to harm – or kill – the patient through dosage change, cars that can be shut down remotely while driving at 70 mph, and entire power grids that can be brought offline are just some of the cases that have surfaced in the recent year. The IoT is now responsible for many critical functionalities in the home, office and across the entire metropolitan life. And with the forecasts made by Gartner, it will only grow larger and more prominent in the coming years. It can easily run out of control and pave the way for a new wave of totally different acts of terrorism and felony. Just think about the spooky opportunities that’ll arise when driverless cars become mainstream. Remote abductions and car crashes are two things that come to mind. I don’t know about you, but it gives me the shivers. As we approach singularity, more and more of our identities are being digitized and sent into the cloud, thanks in large part to IoT. IoT is the future, and it is one of the biggest things that has happened in the history of the internet. We have to prepare ourselves for the worst if we want to take advantage of the best. Taking IoT security seriously will be an important factor in this regard.
In video surveillance applications, lenses generally take a 'back seat' to their sexier camera counterparts. However, selecting a proper lens for an application helps ensure the highest quality image. Specify a lens too narrow for your application and you will miss out on capturing the entire area of interest. By contrast, a lens that is too wide will result in poor video quality of the subjects monitored. A lens calculator can determine the appropriate lens focal length required for a desired field of view and therefore optimize system performance. Before using any lens calculator tool it is important to understand a few key concepts:
- Sensor Size or Format - Cameras have an imaging sensor that comes in a variety of sizes, such as 1/4", 1/3", 1/2.5", and 1/2". Calculator tools usually provide a means to select one of these values for the calculation. This sensor size or format is easily determined by looking at the specifications sheet of the camera being considered. Note that whether the sensor is a CCD or CMOS has essentially no bearing on calculations.
- Focal Length (mm) - Lenses generally fall into two categories: fixed focal length and vari-focal length lens types. The focal length is essentially the distance between the lens and the imaging sensor. This distance determines how wide or narrow the scene captured will be. Shorter focal lengths are able to capture relatively wider scenes, while longer focal lengths provide tighter viewing angles for capturing more distant objects. Vari-focal lenses have the advantage of providing a range of focal lengths for added flexibility and control.
- Horizontal Field of View (FoV) - The horizontal field of view (HFoV) is probably the most practical and well recognized property of the camera/lens system. It represents the width in units of length (feet or meters) of the camera scene at a specified distance from the lens. This is a measurement typically obtained first-hand, perhaps during a site survey. It can also be theoretically calculated given a 'distance from lens' parameter and a 'lens angle' parameter, as shown in the sketch below.
IPVM offers a camera/lens calculator for working through these parameters.
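As a sketch of the geometry behind such calculators (a standard thin-lens/pinhole approximation, not IPVM's actual calculator code; the nominal sensor widths and the example focal lengths and distances are assumptions, since active sensor dimensions vary by camera model):

```python
# Thin-lens / pinhole approximation of horizontal field of view.
# The sensor widths below are common nominal values and should be treated as
# assumptions; check the camera's spec sheet for the real active-area size.
import math

SENSOR_WIDTH_MM = {'1/4"': 3.6, '1/3"': 4.8, '1/2.5"': 5.76, '1/2"': 6.4}

def hfov_angle_deg(sensor_format: str, focal_length_mm: float) -> float:
    """Horizontal angle of view for a given sensor format and focal length."""
    width = SENSOR_WIDTH_MM[sensor_format]
    return math.degrees(2 * math.atan(width / (2 * focal_length_mm)))

def scene_width_ft(sensor_format: str, focal_length_mm: float, distance_ft: float) -> float:
    """Horizontal field of view (scene width) at a given distance from the lens."""
    return distance_ft * SENSOR_WIDTH_MM[sensor_format] / focal_length_mm

print(hfov_angle_deg('1/3"', 4.0))        # ~62 degree angle with a 4 mm lens
print(scene_width_ft('1/3"', 4.0, 30))    # ~36 ft wide scene at 30 ft (wide view)
print(scene_width_ft('1/3"', 12.0, 30))   # ~12 ft wide scene at 30 ft (tighter view)
```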
<urn:uuid:12ee73fc-ec98-4324-980e-c64780f1bac5>
CC-MAIN-2017-04
https://ipvm.com/reports/training-lens-calculators
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00159-ip-10-171-10-70.ec2.internal.warc.gz
en
0.892468
454
2.96875
3
A new EMC study reveals how the emergence of wireless technologies, smart products and software-defined businesses is playing a central role in catapulting the volume of the world's data. Due, in part, to the Internet of Things, the digital universe is doubling in size every two years and will multiply 10-fold between 2013 and 2020 – from 4.4 trillion gigabytes to 44 trillion gigabytes.
- The amount of information in the digital universe would fill a stack of iPad Air tablets reaching 2/3 of the way to the moon (157,674 miles/253,704 kilometers). By 2020, there will be 6.6 stacks.
- Today, the average household creates enough data to fill 65 iPhones (32 GB) per year. In 2020, this will grow to 318 iPhones.
- Today, if a byte of data were a gallon of water, in only 10 seconds there would be enough data to fill an average house. In 2020, it will only take 2 seconds.
The Internet of Things comprises billions of everyday objects that are equipped with unique identifiers and the ability to automatically record, report and receive data – a sensor in your shoe tracking how fast you run or a bridge tracking traffic patterns. According to IDC, the number of devices or things that can be connected to the Internet is approaching 200 billion today, with 7% (or 14 billion) already connected to and communicating over the Internet. The data from these connected devices represents 2% of the world's data today. IDC now forecasts that, by 2020, the number of connected devices will grow to 32 billion – representing 10% of the world's data.
The Internet of Things will also influence the massive amounts of "useful data" – data that could be analyzed – in the digital universe. In 2013, only 22% of the information in the digital universe was considered useful data, but less than 5% of the useful data was actually analyzed – leaving a massive amount of data lost as dark matter in the digital universe. By 2020, more than 35% of all data could be considered useful data, thanks to the growth of data from the Internet of Things, but it will be up to businesses to put this data to use.
This phenomenon will present radical new ways of interacting with customers, streamlining business cycles and reducing operational costs, stimulating trillions of dollars in opportunity for businesses. Conversely, it presents significant challenges as businesses look to manage, store and protect the sheer volume and diversity of this data. For example, IDC estimates that 40% of the data in the digital universe requires some level of protection, from heightened privacy measures to fully encrypted data. That said, only half of that data – just 20% of the digital universe – is actually protected.
Other key findings:
Emerging markets are producing more data: Currently, 60% of data in the digital universe is attributed to mature markets such as Germany, Japan, and the United States, but by 2020, the percentage will flip, and emerging markets including Brazil, China, India, Mexico and Russia will account for the majority of data.
Data is outpacing storage: The world's amount of available storage capacity (i.e., unused bytes) across all media types is growing slower than the digital universe. In 2013, the available storage capacity could hold just 33% of the digital universe. By 2020, it will be able to store less than 15%. Fortunately, most of the world's data is transient (e.g., Netflix or Hulu streams, Xbox One game interactions, digital TV) and requires no storage.
Data touched by the cloud will double: In 2013, less than 20% of the data in the digital universe was "touched" by the cloud. By 2020, that percentage will double to 40%.
Consumers create data but enterprises are responsible for it: Two-thirds of the digital universe bits are created or captured by consumers and workers, yet enterprises have liability or responsibility for 85% of the digital universe.
Vernon Turner, Senior Vice President, IDC, comments: "The Digital Universe and The Internet of Things go hand in hand. As sensors become connected to the Internet, the data that they generate becomes increasingly important to every aspect of business, transforming old industries into new relevant entities. Traditional storage services will be elevated to new levels of resiliency and tolerance to support the Digital Universe, which can only be guaranteed in a software-defined environment."
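As a quick sanity check on the growth arithmetic cited at the top of this piece (a sketch only; the 4.4 and 44 trillion gigabyte figures are IDC's, the code simply compounds the stated doubling every two years):

    # Doubling every two years from 2013 to 2020
    start_tgb = 4.4                   # trillion gigabytes in 2013 (IDC figure)
    years = 2020 - 2013
    growth = 2 ** (years / 2)         # one doubling per two-year period
    print(round(growth, 1))           # ~11.3x, in line with the "10-fold" claim
    print(round(start_tgb * growth))  # ~50 trillion GB, the same order as IDC's 44 trillion

The compound-doubling estimate lands slightly above IDC's forecast, which is consistent with "multiply 10-fold" being a rounded figure.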
<urn:uuid:cfa99afb-edb7-4c08-9a8a-6ce550c5e52e>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2014/04/11/the-digital-universe-will-reach-44-trillion-gigabytes-by-2020/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00159-ip-10-171-10-70.ec2.internal.warc.gz
en
0.921278
915
3
3
When future historians look back on 2011, they'll certainly conclude that we were a society obsessed with video games, minicomputers masquerading as phones and an endless supply of online distraction. But in a few years, many technologies developed in service of these functions may be repurposed in extraordinarily sensible ways. Motion control, for example, is driving a revolution in video gaming, but may soon help doctors diagnose patients via video conference. Augmented reality, used on smartphones to track down bars, might soon make police officers smarter and safer. In two decades, unmanned aerial vehicles plying the skies might be mundane. The following five emerging technologies are poised to go from amazing to ordinary — and the change will most certainly benefit us.
Whether you play video games or not, you've no doubt heard of the Nintendo Wii. Launched in 2006, the video game console sparked a revolution in interactive entertainment. Now Sony, Microsoft and others have leapt into the motion control market with more powerful and accurate motion controllers. In Microsoft's case, the premise of the Xbox 360 maker's new Kinect peripheral is that you are the controller. The technology not only opens the door for innovative video games, but also can transform how people work in the classroom, the operating room or even on the battlefield. "Right off the bat, areas outside of gaming that have sparked the most interest for the use of Kinect and our natural user interfaces are health care and education," said Chris Niehaus, director of innovation for Microsoft Public Sector. Kinect uses a 3-D depth sensor and a highly sensitive microphone to isolate a user's movements and voice. This allows Kinect to respond to both gestures and verbal commands. "I think public safety would be one you would think about right away for that sort of biometric recognition ability," he said. "In the next few months, you'll be seeing more announcements and pieces of our technology coming forward around speech recognition." Niehaus said Microsoft is refining the Kinect technology's sensitivity to pick up subtle movements like hand tremors and fluttering eyelids — a capability that will make Kinect technology a tool for doctors conducting telemedicine. "If [a doctor] is doing a video conference with someone in the living room, the Kinect sensor is not only providing a video link so that you're seeing and talking to the other person, but it's also watching different movements to determine if those movements are indicative of pain or side effects," Niehaus said. "That's going to assist with early diagnosis and evaluation." Microsoft, he says, has talked with the U.S. Department of Defense about using the technology for rehabilitation therapy for wounded veterans. On the education front, Niehaus said there's interest from schools in creating interactive curricula using Kinect. "There is a big trend toward gamification [adding game mechanics to otherwise traditional activities] and personalized learning," he said. "There are some education-based games already available for the Xbox — and a lot of them are really STEM (science, technology, engineering and math) focused." For example, 20 Chicago-area public school districts are experimenting with Xbox and Kinect in their classrooms and after-school programs, Niehaus said. "We're getting a lot of support from organizations like Get up and Move, Play 60 and different nonprofit programs that are focusing on getting kids up and moving, active and keeping them engaged.
When you combine that with education, it is really taking off.” A common problem on the front lines — be it in war, a disaster or any other emergency — is a lack of communication. In the years since walkie-talkies made their debut, technology has evolved, making it easier for soldiers and first responders to communicate. But most communications improvements have hinged on fixed, physical infrastructure to transmit voice and data over distance. In remote areas, this usually requires personnel to erect radio repeater towers atop geographical high points to facilitate communication over a wide area. In a battle, such personnel are vulnerable to the enemy. In a forest fire, crews risk getting caught behind fire lines. What if that troublesome tower could be replaced with a balloon? You’d have what Chandler, Ariz.-based Space Data calls a balloon-borne repeater platform — a floating communications hub that can be deployed in minutes by personnel miles from danger. The company offers what are essentially weather balloons loaded with radio repeater gear. The StarFighter model, already used by troops in Afghanistan, facilitates two-way radio communication up to 500 miles while floating at 80,000 feet, safely away from enemy fire. The StarFighter soon may be used by emergency responders. “It’s a platform that you really can put any type of communication on,” said Gerald Knoblach, CEO of Space Data. “It takes 15 to 20 minutes to prepare it for launch, and the platform rises at about 1,000 feet per minute, so it gets to 90,000 feet in an hour and a half, and then it levels off there and starts relaying voice and data traffic across this big footprint.” Knoblach said the balloons suspend for about 12 hours and operate at one-tenth the cost of communications aircraft. The range, responsiveness and interoperability of the balloons might make them ideal for emergency responders who suddenly find gaps in their communication networks after a disaster. “This is tailored for wide communications and really complex terrain,” Knoblach said. “We all see how fast phones are becoming smaller and more capable; we can take all that kind of consumer technology and put it inside this, and every year get more capacity.” Augmented reality apps are popular for iPhones and Androids. Generally such apps ask users to point their in-phone camera at a horizon, and the software then overlays the image with restaurant and bar information, or it provides walking directions and other details. A few apps use augmented reality to make a video game of the real world, using large smartphone screens to place digital bad guys on an otherwise normal cityscape. It’s one of those nascent technologies that gets many people excited about future possibilities. Public safety officials say augmented reality can make public safety personnel more effective while keeping them safer, something Motorola is exploring. “We spend a lot of time trying to understand the needs of public safety officers and folks in the federal government police and security forces. And it is just not understanding what their needs are today, but also understanding what they are tomorrow,” said Curt Croley, Motorola’s senior director of Innovation and Design. 
"What piqued our curiosity, and what we are very much watching, is the augmented reality space." The confluence of data analytics, high-speed wireless data and sophisticated end-user devices is enabling significant developments in augmented reality, a lot of which is being developed in the consumer world, said Craig Siddoway, Motorola's director of Advanced Radio Concepts, Innovation and Design. "And we can learn a lot from that. The challenge here is to really allow [a police officer] to focus on what he is trying to do, and that obviously changes under certain conditions." An approach might be to provide officers with lightweight glasses that flash different colors in the officer's peripheral vision indicating danger, or display simple data gleaned from a license plate. The trick is to provide data via augmented reality that improves situational awareness without overwhelming or distracting an officer. "The context always has to be, first and foremost, the safety of the officer," Siddoway said. "If he is at a traffic stop, there might be a covert alert that is either a vibration, audible via earpiece or something visual by glasses that suggests, 'Heads up. Something is going on.'" Other public safety applications for augmented reality include speech or facial recognition to find suspects in a crowd. Building inspectors could be equipped with 3-D maps of a structure. These capabilities could all be made available in a smart handheld device or even a heads-up display. But there's only so much data a human can process at once. "There are variables that we have to understand," said Motorola CTO Paul Steinberg. "How much information can you present before users shift their focus from something that is more important?" While no amount of augmented reality is going to lead to a real-life Robocop, the future of augmented reality is so bright you've got to wear data-analyzing, situationally aware shades.
Photo: Augmented reality technology may safeguard officers while making them more effective. Photo courtesy of Motorola
Smart infrastructure, intelligent transportation systems, even the so-called "Internet of things" — all add up to an environment that's more than meets the eye. But there's at least one common fixture few of us give a second thought to, yet it's uniquely positioned to deliver an array of high-tech services — the humble streetlight. A company called Illuminating Concepts transforms typical streetlights into highly intelligent network nodes that do more than fend off darkness. The Farmington Hills, Mich., company launched a product called Intellistreets that adds lighting control, wireless communication, audio, video and digital signage to any standard streetlight. Ron Harwood, president and founder of Illuminating Concepts, said Intellistreets can help cities save energy and enhance citizen safety, while even turning a small profit. For instance, restaurants could pay to run advertising messages on downtown intersections equipped with digital signage. Cities also could use visual or audio messages for emergency communications or to guide citizens to emergency evacuation routes. "It's unbelievable how much more the cities can communicate with pedestrians," Harwood said. The wireless mesh network capability of Intellistreets also means the streetlights could display — or tell — people bus or train schedules, information on Amber Alerts, that an emergency vehicle is approaching, or help reroute drivers during road closures.
Outfitting a streetlight with Intellistreets costs about $500, according to Harwood. Each fixture operates individually and includes a microprocessor, a dual-band radio system, audio amplifier, digital sound processor, video output and HD video card. He said the technology is an affordable option to implement smarter streetlights. "Los Angeles and Seattle are spending a lot of money in retrofitting streetlights, and departments of transportation in all 50 states are experimenting with LED fixtures," Harwood said. "There is a lot of awareness in the cities around retrofitting, but for many, there's just too little money available for it to happen."
Most of us are familiar with unmanned aerial vehicles (UAVs) — at least the variety used by American military forces to wage war in the air without risking pilots' lives. But some might wonder why UAVs aren't being used for mundane activities. It's because UAVs have been federally regulated since their inception, meaning the marketplace hasn't had the freedom to conjure up new ideas for these revolutionary machines, said James Grimsley, president and CEO of Norman, Okla.-based Design Intelligence Inc., a company that develops technology for unmanned aerial systems. "We call them unmanned aircraft, and we're not describing them in terms of potential, we're describing them in terms of what we see is missing, which is the man," Grimsley said. "But that's going to be changing in the next two to five years." That change will be possible thanks to an evolution in how the Federal Aviation Administration (FAA) regulates UAVs. The FAA's website states, "To address the increasing civil market and the desire by civilian operators to fly UASs [unmanned aircraft systems], the FAA is developing new policies, procedures, and approval processes." But the agency says these changes aren't anticipated until at least 2015. There are many potential uses for UAVs, Grimsley said, including package delivery. Think for a moment about sending a package overnight. It often means the package is put aboard a piloted airplane. It might then be loaded onto a truck and driven miles to a remote destination. "UPS charges you $15 to deliver a package, and they have to deliver it overnight regardless of the cost for them," Grimsley said. "If we had planes that could handle 10 or 20 pounds of cargo that would fly to these small areas and regional hubs, we could move mail and very small cargo and packages. Small vehicles don't require big airports, they don't require the infrastructure planes do, and they're cheaper and safer." Grimsley points out all the problems that accompany manned flight just to deliver packages: safety devices, life-support systems, and the destruction that can occur if a large plane crashes. By using UAVs, these problems could be circumvented and things like organ delivery could be streamlined. UAVs may also soon be used as communications relays. Instead of incurring the high cost of launching a satellite, solar-powered UAVs could stay aloft for years and serve the same function as orbiting satellites. Another practical use for UAVs, Grimsley said, would be monitoring municipal assets. "Cities often buy large amounts of equipment that are all over the place, like tractors and trucks. Those things can be stolen, and it can take quite a while before the government will even realize they're gone," he said.
"They can be implanted with RFID tags, and you could have a UAV flying around mapping all of these vehicles, and when one shows that it's no longer within the map, you can go looking for it." In the end, the development of the next generation of UAVs will primarily be driven by safety. Just as NASA came to accept robots as far superior for exploration in terms of safety, cost and efficiency, so too will everyday UAVs come to be accepted on the same grounds. "We typically think of the sexy and exciting things first, but they don't necessarily turn into big financial opportunities," Grimsley said. "What turns into big opportunities are mundane things like delivering mail, cargo, packages — almost a sort of railroad-in-the-sky type thing. That's what will really turn into major drivers and economic opportunities."
<urn:uuid:c2bf1c89-f96b-40e7-916f-4fa1e613e175>
CC-MAIN-2017-04
http://www.govtech.com/featured/Five-Emerging-Technologies-Soon-to-Hit-the-Government-Market.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00093-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942011
3,076
2.59375
3
In my previous post about steganography and rainbow tables, I explained a technique to hide data in a rainbow table. The disadvantage of this method is that there is a way, albeit costly, to detect the hidden data. This is because we replace the random bytes that make up the start of the chain with the data we want to hide, thereby breaking the chain. A broken chain can be detected by recalculating the chain and comparing the recalculated hash with the stored hash. If they differ, the chain is broken. But if we know that we are breaking chains, why don't we fix them? We can proceed as follows:
- replace the start of the chain (random bytes) with the data we want to hide
- recalculate the chain
- replace the hash of the chain with the new hash we calculated
This way, there are no more broken chains that give away our hidden secret. But now there is another telltale sign that the rainbow table has been modified to hide data: the hashes aren't sorted anymore. Remember that a rainbow table has to be sorted (the sort key is the index of the hash) to be useful. It is very unlikely that our new hash is greater than (or equal to) its predecessor and smaller than (or equal to) its successor. Detecting an unsorted rainbow table is much easier than finding broken chains. OK, so if the new rainbow table is unsorted, why don't we just sort it again? Well, if we resort the rainbow table, we destroy the order in which we stored our hidden data, so we lose the hidden data itself. You could keep the original order of the hidden data by creating an index: another file that indexes the chains with hidden data. For example, you could make a list of all the hashes with hidden data. This list will then allow you to retrieve all chains with hidden data in the correct order. And the fact that you have such a list of chains isn't necessarily suspicious, it's just a list of hashes you want to crack…
But there is a simple way out of the unsorted rainbow table problem. Rainbow tables generated with the rtgen program are unsorted. In fact, you have to sort them with the rtsort command after generating them, before they can be used by the rtcrack program. The solution is to adapt the rtgen program to generate a rainbow table with hidden data, and keep this unsorted rainbow table. And this is not so difficult. We add this method to the chain class:

    void CChainWalkContext::InjectHiddenData(FILE *fFile, int bytes)
    {
        unsigned char *byteInject;
        int iIter;
        int iChar;

        // m_nIndex holds the chain's start index, normally filled with random bytes;
        // overwrite its low-order bytes with data read from the file
        byteInject = (unsigned char *) &m_nIndex;
        for (iIter = 0; iIter < bytes; iIter++)
        {
            if ((iChar = fgetc(fFile)) == EOF)
                break; // stop when the data file is exhausted (the exact EOF handling here is an assumption)
            byteInject[iIter] = iChar;
        }
    }

The arguments are a file handle to the file with data we want to hide, and the number of bytes per chain we use to hide data. We call the InjectHiddenData method in the rtgen program just after having generated random data (cwc.GenerateRandomIndex();, line 206 of file RainbowTableGenerate.cpp). Our modified rtgen program allows us to generate an unsorted rainbow table with hidden data. The only way to detect this hidden data is with statistical analysis, provided that the hidden data doesn't appear random. There are no broken chains that indicate hidden data, unlike with the previous method. The disadvantage of this method is that you'll have to generate a new rainbow table to hide your data, which is a lengthy process. To extract the data file, use the same program as for the previous method, rtreveal.
If you don't feel comfortable using an unsorted rainbow table to hide data, I probably have two more techniques for you.
One technique creates a sorted rainbow table without broken chains and it is fast. The disadvantage is that it stores much less hidden data. But you’ll have to wait a bit before I publish this technique. I’ve submitted an article about this steganographic technique to 2600 Magazine, and I can only release it after it gets published or refused. The other technique also creates a sorted rainbow table without broken chains and it is fast, but I still have to work on it. It works, but it might be detectable. I’ll publish it when I’ve finished working on it.
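The rtreveal extraction tool itself isn't shown here, but as a rough sketch of what such an extractor could look like (assuming the usual .rt chain record of an 8-byte start index followed by an 8-byte end index, with the byte order of the machine that ran rtgen; none of this is confirmed by the post):

    def extract_hidden(rt_path, out_path, bytes_per_chain):
        CHAIN_RECORD_SIZE = 16  # assumed: 8-byte start index + 8-byte end index
        with open(rt_path, "rb") as rt, open(out_path, "wb") as out:
            while True:
                record = rt.read(CHAIN_RECORD_SIZE)
                if len(record) < CHAIN_RECORD_SIZE:
                    break
                # The hidden bytes sit in the low-order bytes of the start index,
                # exactly where InjectHiddenData wrote them (little-endian assumed)
                out.write(record[:bytes_per_chain])

Because a table usually has far more chains than the hidden file has bytes, the output would also contain trailing random data; how the end of the hidden file is marked is not specified and would have to be agreed between the two parties.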
<urn:uuid:1ce981e9-0d58-4eb9-ab4b-6a7d9867a3ce>
CC-MAIN-2017-04
https://blog.didierstevens.com/2007/05/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281226.52/warc/CC-MAIN-20170116095121-00479-ip-10-171-10-70.ec2.internal.warc.gz
en
0.877346
963
2.65625
3
Insidious unknown planets lurking behind the sun ready to slam into Earth, a supernova set to engulf the planet and giant, unseen asteroids screaming toward our globe are all theories espoused across the Internet as to how we will meet our demise next week on 12/21/2012. Do any of these theories hold out even a scintilla of evidence that they could happen? Not even remotely, if you look at the material NASA has put out lately, which pretty much debunks any and all of the notions being floated across the cybersphere. The latest video from NASA is dated 12.22.12, and its date and content are intended to further poke a hole in those theories that would have us think civilization will shortly be ending. Not much has changed from an earlier Layer 8 post on NASA's thoughts on the doomsday scenario, but if you are interested, here is what NASA said earlier this year: "There apparently is a great deal of interest in celestial bodies, and their locations and trajectories at the end of the calendar year 2012. Now, I for one love a good book or movie as much as the next guy. But the stuff flying around through cyberspace, TV and the movies is not based on science. There is even a fake NASA news release out there..." posted Don Yeomans, a NASA senior research scientist, on a NASA website. On its main website NASA posted: "Contrary to some of the common beliefs out there, the science behind the end of the world quickly unravels when pinned down to the 2012 timeline." NASA posted its own FAQ on the topic of 2012 doomsday scenarios. It goes like this:
Q: Are there any threats to the Earth in 2012? Many Internet websites say the world will end in December 2012.
A: Nothing bad will happen to the Earth in 2012. Our planet has been getting along just fine for more than 4 billion years, and credible scientists worldwide know of no threat associated with 2012.
Q: What is the origin of the prediction that the world will end in 2012?
A: The story started with claims that Nibiru, a supposed planet discovered by the Sumerians, is headed toward Earth. This catastrophe was initially predicted for May 2003, but when nothing happened the doomsday date was moved forward to December 2012. Then these two fables were linked to the end of one of the cycles in the ancient Mayan calendar at the winter solstice in 2012 -- hence the predicted doomsday date of December 21, 2012. Just as the calendar you have on your kitchen wall does not cease to exist after December 31, the Mayan calendar does not cease to exist on December 21, 2012. This date is the end of the Mayan long-count period but then -- just as your calendar begins again on January 1 -- another long-count period begins for the Mayan calendar.
Q: Could phenomena occur where planets align in a way that impacts Earth?
A: There are no planetary alignments in the next few decades, Earth will not cross the galactic plane in 2012, and even if these alignments were to occur, their effects on the Earth would be negligible. Each December the Earth and sun align with the approximate center of the Milky Way Galaxy but that is an annual event of no consequence.
Q: Is there a planet or brown dwarf called Nibiru or Planet X or Eris that is approaching the Earth and threatening our planet with widespread destruction?
A: Nibiru and other stories about wayward planets are an Internet hoax. There is no factual basis for these claims.
If Nibiru or Planet X were real and headed for an encounter with the Earth in 2012, astronomers would have been tracking it for at least the past decade, and it would be visible by now to the naked eye. Obviously, it does not exist. Eris is real, but it is a dwarf planet similar to Pluto that will remain in the outer solar system; the closest it can come to Earth is about 4 billion miles.
Q: What is the polar shift theory? Is it true that the earth's crust does a 180-degree rotation around the core in a matter of days if not hours?
A: A reversal in the rotation of Earth is impossible. There are slow movements of the continents (for example Antarctica was near the equator hundreds of millions of years ago), but that is irrelevant to claims of reversal of the rotational poles. However, many of the disaster websites pull a bait-and-switch to fool people. They claim a relationship between the rotation and the magnetic polarity of Earth, which does change irregularly, with a magnetic reversal taking place every 400,000 years on average. As far as we know, such a magnetic reversal doesn't cause any harm to life on Earth. A magnetic reversal is very unlikely to happen in the next few millennia, anyway.
Q: Is the Earth in danger of being hit by a meteor in 2012?
A: The Earth has always been subject to impacts by comets and asteroids, although big hits are very rare. The last big impact was 65 million years ago, and that led to the extinction of the dinosaurs. Today NASA astronomers are carrying out a survey called the Spaceguard Survey to find any large near-Earth asteroids long before they hit. We have already determined that there are no threatening asteroids as large as the one that killed the dinosaurs. All this work is done openly with the discoveries posted every day on the NASA NEO Program Office website, so you can see for yourself that nothing is predicted to hit in 2012.
Q: Is there a danger from giant solar storms predicted for 2012?
A: Solar activity has a regular cycle, with peaks approximately every 11 years. Near these activity peaks, solar flares can cause some interruption of satellite communications, although engineers are learning how to build electronics that are protected against most solar storms. But there is no special risk associated with 2012. The next solar maximum will occur in the 2012-2014 time frame and is predicted to be an average solar cycle, no different than previous cycles throughout history.
Q: How do NASA scientists feel about claims of pending doomsday?
A: For any claims of disaster or dramatic changes in 2012, where is the science? Where is the evidence? There is none, and for all the fictional assertions, whether they are made in books, movies, documentaries or over the Internet, we cannot change that simple fact. There is no credible evidence for any of the assertions made in support of unusual events taking place in December 2012.
Last year NASA also debunked the theory of a giant supernova engulfing Earth in 2012. From NASA: "Given the incredible amounts of energy in a supernova explosion - as much as the sun creates during its entire lifetime - another erroneous doomsday theory is that such an explosion could happen in 2012 and harm life on Earth. However, given the vastness of space and the long times between supernovae, astronomers can say with certainty that there is no threatening star close enough to hurt Earth. Astronomers estimate that, on average, about one or two supernovae explode each century in our galaxy.
But for Earth's ozone layer to experience damage from a supernova, the blast must occur less than 50 light-years away. All of the nearby stars capable of going supernova are much farther than this."
<urn:uuid:79150b2e-10c0-4e45-a928-21c978a78513>
CC-MAIN-2017-04
http://www.networkworld.com/article/2223700/security/nasa-on-full-court-press-to-deflate-doomsday-prophecies.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00507-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956479
1,501
2.9375
3
Case Study: Hess Taps Linux for Profits
How one company is using Linux to drill for cost savings and get smarter about how to pump up profits.
A map of the Gulf of Mexico covers an entire wall of geophysicist John Potter's office at Amerada Hess Corp.'s Houston-based R&D lab. It is as complex as it is large, charting thousands of square miles of underwater terrain. Hand-drawn concentric circles and lines separate, into hundreds of square lease units, thousands of miles of ancient rock formations and subterranean cliffs available for offshore oil exploration. A much more detailed map, this one digital, is stored on Potter's desktop computer. Created from sound waves and complex mathematical algorithms, it measures the density and composition of bedrock located miles beneath the Gulf floor that dates back, in some cases, to the days when dinosaurs walked the earth. But what's most striking about Potter's map and the computerized one that he and his colleagues in Hess' Geophysical Group use is not so much the age of the underwater landscape but Hess' detailed knowledge of it. Potter's maps include precise measurements not only of the thickness of a rock layer in one area versus another, but also 3-D images viewable from all angles to look for clues about any undiscovered oil that might lie within. Such maps would not have existed even five years ago: By necessity, oil exploration has been a high-stakes guessing game of the highest order. Supercomputers, a relatively recent phenomenon, have helped fine-tune the analysis, but at a hefty price, which has limited their use by some companies and made it cost-prohibitive to conduct sustained, ongoing number-crunching and digital depth analysis. But times are changing. Thanks to the emergence of low-cost computing power in the form of Linux computing clusters, Hess and other oil companies now can run algorithms they never could have dreamed of running before. At far more affordable prices, Hess can now extract terabytes more information about what lies beneath the earth's surface than it could with a supercomputer. Indeed, Linux cluster technology has "dropped our cost of computing by an order of magnitude or two," says Hess CIO Richard Ross. "One of our guys wrote a program that allows him to interactively work with multiple terabytes of data. All the books stored in the Library of Congress would equal 20 terabytes. Just think about working with that much information in real time." Better yet, Hess' Linux clusters have nearly doubled in power each year since 1998, enabling Hess engineers to process data more often and in a wider variety of formats. Ultimately, this gives Hess executives a continuously improving stream of information with which to make crucial decisions about which oil fields to lease, where to drill and how much money to bid for a particular field. The difference is like night and day: CIO Ross says today's seismic images put Hess' old maps to shame. "It's like the difference between looking at a low-resolution image, where you can barely make out two human figures, and a high-resolution image, where you realize there's a man and a woman holding flowers and candy," he says. "Because we can now process more data going into our bids for oil leases, our risk of doing something stupid is lower."
<urn:uuid:c97d9e53-aabf-4c79-b730-a87ce9d9d35d>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Enterprise-Applications/Case-Study-Hess-Taps-Linux-for-Profits
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00195-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953273
667
2.546875
3
It may come as no surprise to those who know NASA's penchant for coming up with amazingly cool solutions to major problems, but it's still pretty interesting when you see a major innovation pulled off. This may be the case with NASA's planet-hunting space telescope Kepler, which has been out of commission since May and was thought to be kaput. But this week the space agency said it has come up with a way to make use of the Sun and Kepler's orbit around it to stabilize the craft and let it start taking images of space again. The story goes that in May, Kepler lost the second of four gyroscope-like reaction wheels, which are used to precisely point the spacecraft for extended periods of time, ending new data collection for the original mission. The spacecraft required three working wheels to maintain the precision pointing necessary to detect the signal of small Earth-sized exoplanets, which are planets outside our solar system, orbiting stars like our sun in what's known as the habitable zone -- the range of distances from a star where the surface temperature of a planet might be suitable for liquid water, NASA stated. With the failure of a second reaction wheel, the spacecraft could no longer precisely point at the mission's original field of view where it would look for these exoplanets. NASA Kepler and Ball Aerospace engineers say they have developed a way of recovering this pointing stability by maneuvering the spacecraft so that solar pressure -- the pressure exerted when the photons of sunlight strike the spacecraft -- is evenly distributed across the surfaces of the spacecraft. NASA says by orienting the spacecraft nearly parallel to its orbital path around the sun, which is slightly offset from the ecliptic, the orbital plane of Earth, it can achieve spacecraft stability. The ecliptic plane defines the band of sky in which lie the constellations of the zodiac. This technique of using the sun as the 'third wheel' to control pointing is currently being tested on the spacecraft and early results look good, NASA said. During a pointing performance test in late October, a full frame image of the space telescope's full field of view was captured showing part of the Sagittarius constellation. "Photons of light from a distant star field were collected over a 30-minute period and produced an image quality within five percent of the primary mission image quality, which used four reaction wheels to control pointing stability. Additional testing is underway to demonstrate the ability to maintain this level of pointing control for days and weeks," NASA said. NASA says the Kepler "Second Light" concept has been presented to NASA Headquarters and a decision on whether or not to proceed with it could come by the end of December. So why is Kepler so important? Some of the newer stats on Kepler findings include:
- From the first three years of Kepler data, it has spotted 3,583 planet candidates. Recently released analysis led by Jason Rowe, research scientist at the SETI Institute in Mountain View, Calif., determined that the largest increase, of 78%, was found in the category of Earth-sized planets. Rowe's findings support the observed trend that smaller planets are more common.
- A research team led by Erik Petigura, a doctoral candidate at the University of California, Berkeley, using statistical analysis of nearly all four years of Kepler data, suggests that one in five stars like the sun is home to a planet up to twice the size of Earth, orbiting in a temperate environment.
- Kepler's mission is to determine what percentage of stars like the sun harbor small planets of approximately the size and temperature of Earth. For four years, the space telescope simultaneously and continuously monitored the brightness of more than 150,000 stars, recording a measurement every 30 minutes. More than a year of the collected data remains to be fully reviewed and analyzed.
<urn:uuid:bb4d17ac-110a-467e-8477-45591254d4fd>
CC-MAIN-2017-04
http://www.networkworld.com/article/2225895/wireless/nasa-may-salvage-its-planet-hunter-spacecraft-after-all.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00103-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94713
774
3.234375
3
Introduction to Local File Inclusions
File inclusions are part of every advanced server-side scripting language on the web. They are needed to keep web applications' code tidy and maintainable. They also allow web applications to read files from the file system, provide download functionality, parse configuration files and do other similar tasks. If not implemented properly, however, they can become an exploitable web vulnerability which malicious attackers can take advantage of.
How do Local File Inclusions Work?
Usually the path of the file you want to open is sent to a function which returns the content of the file as a string, or prints it on the current web page, or includes it into the document and parses it as part of the respective language.
Typical Scenarios Where Local File Inclusions Are Used and Their Risks
Scenario 1: Including Files to be Parsed by the Language's Interpreter
To keep a website's code readable and modular, the code is usually split into multiple files and directories, ideally separated into logical pieces. To tell the interpreter where those files are you have to specify a correct file path and pass it to a function. This function will open the file and include it inside the document. This way the parser sees it as valid code and interprets it accordingly. You create several different modules for one page, and to include them you use a GET parameter with the filename of the respective module, such as:
The Risks of Introducing a Local File Inclusion Vulnerability
If the developer fails to implement sufficient filtering, an attacker could exploit the local file inclusion vulnerability by replacing contact.php with the path of a sensitive file, which will then be parsed, letting the attacker see its content, such as:
In such a scenario the malicious hacker could also inject code from somewhere else on the web server and let the parser interpret it as instructions to exploit the LFI vulnerability. A good way to do that is a picture upload functionality with an image containing malicious code in its source, such as:
Scenario 2: Including Files that are Printed to a Page
Sometimes you need the output of a file to be shared across different web pages, for example a header file. This comes in handy especially if you want changes to such a file to be reflected on all the pages where it is included. Such a file could be plain HTML and does not have to be interpreted by any parser on the server side. It can also be used to show other data such as simple text files. You have a collection of .txt files with help texts and want to make them available through a web application. These files are reachable through a link such as:
In this scenario the content of the text file will be printed directly to the page without using a database to store the information.
The Risks of Introducing a Local File Inclusion Vulnerability
If no proper filtering is implemented, an attacker could change the link to something such as https://example.com/?helpfile=../secret/.htpasswd to retrieve the password hashes of a .htpasswd file, which typically contains the credentials of all users that have access to restricted areas of the web server. The attacker might also be able to access and read the content of other hidden configuration files containing passwords and other sensitive information.
Scenario 3: Including Files that are Served as Downloads
Some files are automatically opened by web browsers when accessed, such as PDF files.
If you want to serve files as downloads instead of showing them in the browser window, you have to add an additional header instructing the browser to do so. You can include the header Content-Disposition: attachment; filename=file.pdf in the response, and the browser will download the file instead of opening it. You have the company brochures in PDF format and the web application visitors use this link to download them:
The Risks of Introducing a Local File Inclusion (LFI) Vulnerability
If there is no sanitization of the request, the attacker could request the download of files that make up the web application, therefore being able to read the source code and possibly find other web application vulnerabilities or read sensitive file contents. For example the attacker can use the same function to read the source code of the file connection.php:
If the attacker finds the database user, host and password, he can connect to the database remotely with the stolen credentials. At this stage the malicious hacker can execute database commands and compromise the web server if the database user has file write privileges.
Impacts of an Exploited Local File Inclusion Vulnerability
As shown above, the impacts of exploiting a Local File Inclusion (LFI) vulnerability vary from information disclosure to complete compromise of the system. Even in cases where the included code is not executed, it can still give an attacker enough valuable information to be able to compromise the system. Even though old ways of exploiting the first scenario no longer work on most modern systems, e.g. including the access.log file, there are still methods that can lead to a complete system compromise through evaluated script code.
Preventing Local File Inclusion Vulnerabilities in Your Web Applications
Tips for Letting Users Read or Download Files Securely
- Save the file paths in a database and assign an ID to each of them. By doing so, users only see the ID and are not able to view or change the path (a rough sketch of this approach appears after these lists).
- Use a whitelist of filenames and ignore every other filename and path.
- Instead of including files on the web server, store their content in databases where possible.
- Instruct the server to automatically send download headers and not execute files in a specific directory such as /download/. That way you can point the user directly to the file on the server without having to write additional code for the download. An example link could look like
What You Should NOT Do to Avoid LFI Vulnerabilities
- Blacklisting filenames; attackers have a variety of filenames to include for information disclosure or code execution. Maintaining such a list is practically not possible. It also is not enough to blacklist files commonly used for testing against LFI, like /etc/passwd or /etc/hosts.
- Removing or blacklisting character sequences. There are known bypasses for removing or blacklisting those.
- Encoding the file path with base64, bin2hex or similar functions, as this can be reversed relatively easily by an attacker.
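To make the ID-mapping and whitelist tips concrete: the examples in this article are PHP-flavored, but the advice is language-agnostic. Below is a minimal sketch in Python; the IDs, file names and base directory are invented for illustration and are not part of the original guidance.

    import os

    # Opaque IDs exposed to users; the real file names stay server-side
    DOWNLOADS = {
        "brochure": "brochure.pdf",
        "help-intro": "help/intro.txt",
    }
    BASE_DIR = os.path.realpath("/var/www/static")

    def resolve_download(file_id):
        name = DOWNLOADS.get(file_id)
        if name is None:
            return None  # unknown ID: refuse instead of touching the file system
        path = os.path.realpath(os.path.join(BASE_DIR, name))
        # Defense in depth: even a whitelisted entry must resolve inside BASE_DIR
        if not path.startswith(BASE_DIR + os.sep):
            return None
        return path

The web framework would then stream the resolved path with the appropriate Content-Disposition header, or return a 404 when resolve_download returns None, so user-supplied input never reaches a file-system call directly.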
<urn:uuid:faf71c5a-d5d2-4ab6-971a-3494fcd7fc11>
CC-MAIN-2017-04
https://www.netsparker.com/blog/web-security/local-file-inclusion-vulnerability/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00316-ip-10-171-10-70.ec2.internal.warc.gz
en
0.89026
1,321
3.203125
3
Unbreakable encryption remains a pipe dream, even on a quantum Internet
By John Breeden II
May 08, 2013
The goal of unbreakable encryption has been a dream of governments since time immemorial. The ancient Greeks sent coded messages by way of a Scytale, which consisted of cloth wrapped around rods on which messages were written. The cloth was unwrapped during transit. An authorized viewer would then re-wrap the cloth around an identically sized rod to read the complete message. Believe it or not, the Scytale, though easy to break, is in some ways similar to quantum encryption, which is likely unbreakable. In a quantum encryption system, an object like a photon has its constantly changing state measured. The state of the photon is the encryption key, which is sent along with a message. Any attempt to monitor this state disturbs it, which ruins the key and makes it very obvious on the other end that someone is trying to tap into the feed. Cambridge University and Toshiba have put this quantum theory into practice, and they've been fairly successful in laboratory settings. The problem, which is where the Scytale has the advantage, is that these unbreakable encryption set-ups are point to point in nature. One computer can send data to another that is pre-programmed to get the signal, and that's it. The Toshiba/Cambridge setup has a maximum limit of 56 miles too. The reason for the limitation is that if the signal is sent through a router, that router has to read at least part of the message to know where to forward it. And that is no different from someone trying to eavesdrop on the line. It corrupts the data about the quantum state ever so slightly, but more than enough to ruin the key and destroy (and therefore protect) the message. Recently, MIT Technology Review reported that scientists at the Los Alamos National Labs in New Mexico have been running a quantum Internet for almost two years, with all computers on the network able to send and forward secure messages to every other one. How are they able to do this? Simple. They set up a series of point-to-point connections between computers and a specialized router. Computer A is not sending a quantum-protected signal to Computer B. It's sending it to the hub. The hub then converts that message back to normal, sees where it's supposed to go and then sets up a second quantum-state-protected communication to its destination. It's not Computer A to Computer B. It's Computer A to hub and then hub to Computer B, or C, or D. The problem with a system like that is twofold. First, the hub interjects a non-secure element into the communications. The message can be snooped, at least in theory, while it sits in its unencrypted and unprotected state at the hub before being sent off to its destination. Second, all of the connections are pre-programmed, which works fine in what is really a Los Alamos-based intranet, but could not be set up on the Internet, where destinations are constantly in flux. There would have to be many hubs to send a quantum-secured message across the country, and every one would need to know every possible destination. But the system at Los Alamos is a good start. Perhaps secure routers could be created and implemented along paths, giving users the option to send a quantum-state secured message if a path is available. For government, this is even more attractive right now.
Imagine the Pentagon setting up all of its systems on a completely secure network, something that would easily be possible within a single building, or even a small campus. John Breeden II is a freelance technology writer for GCN.
<urn:uuid:021e5bec-6efb-4644-a5ea-a83db8a2d6ce>
CC-MAIN-2017-04
https://gcn.com/articles/2013/05/08/unbreakable-encryption-quantum-internet.aspx?admgarea=TC_SECCYBERSSEC
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00160-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958975
792
2.71875
3
AS9100: What it is and how it relates to ISO 9001
AS9100 is a set of guidelines for implementing a Quality Management System for use by aviation, space and defense organizations (often referred to as the aerospace industry). The standard is produced by the International Aerospace Quality Group, which includes representatives from aerospace companies worldwide. The document is sometimes mistaken as "ISO 9100"; however, AS9100 is not maintained by the International Organization for Standardization (ISO). Instead, it builds on the requirements for a Quality Management System as defined in the ISO 9001 Quality Management System requirements. The current version of the document, released in 2009, is AS9100C, which is updated to incorporate the requirements of the ISO 9001 version issued in 2008.
AS9100: How it is structured
The AS9100 standard follows exactly the clauses in the ISO 9001 standard. The content of the standard is identical to that of ISO 9001 with no deletions; however, additional requirements have been added that relate to the needs of stakeholders in the aerospace industry. In order to make the additions easy to recognize, they are in bold and italics in the document.
What additions have been made?
The main additions in AS9100 occur in the primary sections on "Product Realization" and "Measurement, Analysis and Improvement." The main sections added are for Project Management, Risk Management, Configuration Management and Control of Work Transfers. Additionally, there are many updates to the requirements for the Design and Development, Purchasing, Production and Non-conforming Product processes. The main point to remember on this standard is that it is designed by the aerospace industry specifically for aerospace companies and has little application outside this industry.
<urn:uuid:730ae9e4-d942-49d0-acfe-6f4aac34a9c3>
CC-MAIN-2017-04
https://advisera.com/9001academy/knowledgebase/as9100-what-it-is-and-how-it-relates-to-iso-9001/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00462-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948422
360
2.5625
3
Check out this quick time-lapse video of the city of Las Vegas as it grows from a small desert town in the 1970s to the modern urban city that it is today: The videos were taken by NASA's Landsat 5 satellite, which today celebrates its 28th birthday. According to NASA, "Landsat data have been instrumental to our increased understanding of forest fires, storm damage, agricultural trends, and urban growth." In this Vegas video, NASA says the large red areas are actually green space, mostly golf courses and city parks. In addition, around 1984, the images got much better, as new instrument designs increased the sensitivity of the satellite images. In the video, you can clearly see the distance that McCarran Airport was from "The Strip" initially, and now it's practically right on top of Las Vegas Boulevard, as the number of new casinos grew towards the airport. Pretty cool stuff if you've also seen the growth of Vegas on the ground, not from space.
<urn:uuid:3e0d5067-6475-44be-9b66-95f568ec849d>
CC-MAIN-2017-04
http://www.itworld.com/article/2730278/it-management/watch-las-vegas-grow-from-space.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00094-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943999
260
2.6875
3
VoIP Case Study #1
Voice over Internet Protocol, commonly referred to by the acronym VoIP, is a communications technology originally designed to displace the public switched telephone network. Telephony's evolution beyond POTS and PBXes has opened a wide spectrum of services able to marry voice and data, of which VoIP is but one. The primary selling point for this technology is the potential for saving organizations money in toll calls and beyond. But interest extends well beyond that single factor. "There are customers out there converging their networks and taking advantage of VoIP," says Alan Eng, product manager for the access communications group at Cisco Systems, Inc. "Cost saving was the first easy application. The converged solution really addresses that, plus it has improved productivity. And the technology is really evolving." Although some pundits would argue that VoIP is still maturing, corporate users are extremely interested in implementing the technology, creating exponential growth. Within the last four years, VoIP minutes increased from less than 0.5 to 2 percent of outbound international calls, according to research from TeleGeography. Additionally, predictions as to the size of the market itself vary, with Allied Business Intelligence projecting the VoIP market to grow from $3.7 billion in 2000 to $12.3 billion in 2006 and Synergy Research Group projecting the VoIP equipment market to grow to $13.3 billion by 2005. There are many misconceptions about VoIP. It's not just for mammoth Fortune 500 companies. Governments and non-profit organizations now use these telephony services, as do retail establishments like restaurants and car dealerships, even banks. Eng says some of that misconception can be traced to the high-profile, "household name" organizations such as Merrill Lynch and Dow Chemical that were among VoIP's early users. "We've actually seen activity not only across business sizes, but also across different verticals," he says. These include education, financial services, insurance, retail and government. "We've also seen a lot of activity in the midmarket," says Eng. "We have a lot of customers in the less than 100 user configurations. All businesses have the same needs. They have different price sensitivities and scalability issues as well as other internal operating guidelines." The only difference between these different-sized businesses seems to be, Eng says, that larger companies tend to have more resources to deploy VoIP. What doesn't change is the cost savings. Less money is spent on equipment -- infrastructure, telephones and switches -- as well as toll calls. Administrators quickly point out they're spending less time and money on training as well. What's also becoming more apparent is it doesn't take a complete overhaul to implement VoIP effectively. The City of Daytona is an excellent case in point. If Gene McWilliams, the City of Daytona, Florida's manager of information services, isn't in his office when you phone him, it's as easy to talk with him as punching "5" on your telephone keypad. There's no lengthy message with pager and alternate telephone numbers for callers to wait through. That simple gesture links your call to his cellular telephone over the city's VoIP network. The system, designed and implemented within the last year, eliminated a hodge-podge of telephone systems at 23 offices throughout the city. "We had been planning to put data in these," explains McWilliams.
"These locations ranged from fire stations throughout the city to two- and three-man offices. We could have either used single line services through OPX -- very expensive -- or key sets -- also very expensive. The city chose Voice over IP. "It works out to our advantage in that the phones ride free. We already had the data connection. The 23 offices we serviced had incompatible key sets on our central phone switch or were individual one FBs. These gave us none of the features we needed." With VoIP, he explains, the city was able to centralize and integrate voice and data communication services. They're enjoying features including the aforementioned digital forwarding and are able to give users full- featured PBX options on a single line. McWilliams says the city eliminated 17 independent systems. They still have their PBX, but worked with Nortel to integrate VoIP into the traditional switch. This, he says, "gives us a lot more robustness. And, it's the reason we went with the Nortel. ... That's what I like about it." Another deciding factor was the ease of adding voice and data services for remote, temporary offices. "Five times a year our population swells into in excess of a million [people] for weeks at a time," McWilliams says. "We've set up temporary police precincts. Those are very expensive." Typically, they would install 1FB line and dial up modems to the network. "Now, with this integrated system, we just run voice over IP over fiber." The obvious advantage is enhanced public safety. Officers stationed in these locations for Bike Week or the Daytona 500 can access records, wants and warrants without any problem. They also have full telephone services. "It gives them full communication without having to learn something new. They use the same type of phones. To me that's one of the key features of this system." They have also been able to extend city telephone services to city officials' remote or home offices with the new VoIP system. The city did eliminate one of its telephone operators, but the cost-savings has been nothing less of phenomenal. They've not only gotten their telephone directory listing down to a single published number, they also saved "right off the bat," a quarter of a million dollars. Additionally, another quarter of a million was saved by bringing all the telephone maintenance in house as well as reducing the system from 17 various switches to one single switch. McWilliams ticks off other various savings. The response has been positive thus far. So much so that information services was made its own independent city department. Typically in government agencies, technology resides under the umbrella of city or county finance departments. McWilliams advises network administrators to disregard any existing misconceptions about VoIP they may have heard circulating. "With today's money crunch we need to look at all avenues to save money and give users the same or better services at a reduced cost. [VoIP] really drives it home. Granted, we're not having to worry about toll calls, but across state or across town, it'll work really well." The services will be expanded. Videoconferencing was eliminated from the initial implementation. McWilliams hope that will be added in the next budget year. He is cryptic about other projects in the works, but says several ideas are being tossed around. City-administered VoIP pay phones for all those Daytona tourists? You never know. It could be quite possible. 
What seems to be the ideal scenario is starting from scratch to build a combined voice and data network, which is exactly what the West Virginia University Foundation was able to do when it recently moved to new facilities. We'll examine its situation tomorrow.
<urn:uuid:85c39edf-ec11-4915-b092-4cee4734ea8f>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/netsysm/article.php/957361/VoIP-Case-Study-1.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00398-ip-10-171-10-70.ec2.internal.warc.gz
en
0.974937
1,457
2.609375
3
this letter, that I ask for clearing my fundamentals: At the software level: say we have a lot of information, for example a variable/object (anything) called "A"; its attributes are defined, it is being used, accessed from somewhere -- but it all boils down to the fact that everything is held in the memory. Again we find structures (logical) of the various forms of memory, like cache memory, RAM, ROM. We think of stacks, queues and so on when we explain programs, etc. My question is: what is a variable, or an object, or a pointer, or a delegate, or a method, beneath the veneer of the logical construct of the memory? How are such hundreds of pieces of information about a single element, like a pointer, et al., all held in actuality? How is it that we carry such an avalanche of voltages in a laptop, we drop the laptop (not intentionally of course), or a hard disk drive, and things do not become explosive? When we code thousands of lines, how are they made real? How does the battery-health feature identify that the battery needs charging? And how does the O/S minister to all of them without getting polarised? May you name a book / study materials / websites clearing these doubts? You may even reply with concrete answers, as usual. Edited by hamluis, 23 February 2013 - 07:08 PM. Moved to Gen Chat from Internal Hardware - Hamluis.
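One way to make the question concrete: beneath the abstractions, a variable is just a group of bytes at a numeric address, and a pointer is just another value whose bytes happen to encode such an address. The snippet below is a minimal, purely illustrative demonstration using Python's ctypes module; it does not attempt to answer the hardware-level parts of the question.

```python
import ctypes

# A named variable is, underneath, just some bytes at a numeric address.
x = ctypes.c_int(42)        # reserve 4 bytes and store the value 42 in them
addr = ctypes.addressof(x)  # the numeric address where those bytes live
raw = bytes(x)              # the raw byte pattern actually held in memory

# A pointer is not mysterious: it is another small value whose own bytes
# encode the address of x.
p = ctypes.pointer(x)

print(f"value {x.value} lives at {hex(addr)} as bytes {raw.hex()}")
print(f"the pointer refers to address {hex(ctypes.addressof(p.contents))}")
```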
<urn:uuid:1f94a9f8-cf3d-4443-b9e5-e37a204bddce>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/forums/t/486398/underneath-programs/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280825.87/warc/CC-MAIN-20170116095120-00306-ip-10-171-10-70.ec2.internal.warc.gz
en
0.891014
310
2.875
3
CHICAGO, IL--(Marketwired - Apr 24, 2014) - As spring begins to bud around us, the pollen forecast is looking heavy. With the significantly snowy winter that we all experienced, we can expect full-on pollen production. When you start to feel that tickle in your nose, you need to be able to decide "is it really allergies, or have I caught a cold?" "Between the runny nose, sneezing, coughing, and post-nasal drip, sometimes it is difficult to distinguish a cold from allergies," explains Chicago allergist Brian Rotskoff, MD. "Colds are more commonly associated with wintertime ailments, but the springtime cold does occur. At the early onset of symptoms, it's important to care for your symptoms to get the most effective results." What is a cold? What are allergies? Colds are caused by germs. If you think you have a cold, you more than likely caught it from someone else, and you should try to keep your distance from others. Allergies are caused by an overactive immune system, and are not contagious. If you have allergies, your body reads certain foreign substances, such as dust or pollen, as germs. When the body fights these perceived germs, it produces defensive chemicals known as histamines. These histamines cause a runny nose, coughing and sneezing. Thus bringing you the joy of allergies. Timing is everything When assessing symptoms, it is important to consider how long the symptoms have persisted and how quickly they came on. "The biggest difference between the two is that allergies won't go away as quickly as a cold," explains Dr. Rotskoff. A cold typically lasts 3-14 days; allergy symptoms will stick around for as long as the allergen is present. That means if you are allergic to a springtime allergen, such as ragweed pollen, you can experience symptoms for anywhere from days to months. "Another key indicator is how quickly the symptoms have set in," explains Dr. Rotskoff. "With a cold, symptoms take some time to begin to show. Allergy symptoms, on the other hand, start to rear their heads as soon as a person is exposed to the allergen." One more indicator? "The color of your mucus," says Dr. Rotskoff. "If you see a lot of yellow mucus being produced, then you have a cold and it's time to get some rest." Treating a cold or allergies "With either a cold or allergies, there are treatment options," says Dr. Rotskoff. "For a cold, you should look to make yourself more comfortable as it runs its course. Get some decongestants, pain relievers, humidifiers and rest. You can also let your allergies run their course, but that is a long course to run." While allergy sufferers can take antihistamines, eye drops and nasal sprays for temporary relief, allergy shots provide a permanent fix. Dr. Rotskoff specializes in immunotherapy, which is a regimen of allergy shots or drops over a period of time that gradually builds an immunity to allergens. Over time, a person will no longer be affected by allergens. If you have experienced symptoms for longer than two weeks, then it is time to see Dr. Rotskoff at the Clarity Allergy Center in Chicago. Knowing that your allergies are not a cold is the first step in finding relief.
<urn:uuid:f07d9f35-b011-4e3b-87c4-a2c2cf0be69b>
CC-MAIN-2017-04
http://www.marketwired.com/press-release/cold-or-allergies-a-guide-to-knowing-the-difference-1902841.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00426-ip-10-171-10-70.ec2.internal.warc.gz
en
0.967603
734
2.625
3
Consumers should understand that optical fiber cables come in several kinds. This diversity compels them to choose the appropriate type depending on the application, since one type of cable may be more suitable than another for a given situation. In every case, optical fiber is used to transfer data to various devices and, more importantly, it is used to transmit data across land and under the sea. Optical fibers have become very important in today's communication systems. There are single mode and multimode fiber cables, both basically performing a similar task, and the names immediately tell the difference between the two. A single mode fiber cable carries only one beam, or one color, of light. Because this type of cable sends data through a single beam of light, interference is avoided, which means a single mode cable can send data over a long distance. Single mode optical fiber cable is the more popular of the two in the market. It comes in various lengths and is more adaptable, being able to perform a wider range of tasks. A multimode cable carries several beams of light at the same time; each beam is sent at a different angle so that interference between beams is kept to a minimum. Transmission of this nature is sensitive, and because of the sensitivity of the multiple-beam mechanism, multimode cables are usually used only for short distance data transmission to avoid disturbances. The two cables have roughly similar designs, though. There are other types of optical fiber cables as well; it is how they are made that makes them different. No matter which type of cable you use, its main function is to transmit data. All optical fiber cables are basically the same in terms of essential components and function. It is important to note, however, that because of these differences one cable may be more appropriate than another on occasion. In addition to single mode and multimode cable, there is also loose tube cable. As the name suggests, these have no insulation: the optical fibers are bare or are only covered thinly, and they may only be insulated against water or chemicals by a special insulating gel. Loose tube cables are used for outdoor or underground transmission systems. Tight pack cables are composed of a large bunch of optical fiber wires that are sorted into several sets. These are covered in the usual cable insulation, but each fiber set has no protection of its own, so a special termination unit is required for every fiber set to terminate connections. This type of cable is only apt for short distance transmission purposes. A simplified version of a tight pack cable is the simplex cable, which makes use of one or two big optical fibers. This allows simple operation and lower energy usage. Outdoor and heavy-duty simplex cables are coated with hard-wearing insulation. These types of cable can confuse and trouble carefree homeowners, but please note that your devices may already use fiber optic cable made in China. If you are enjoying your computer and the Internet, you are most likely already enjoying the benefits brought by optical fiber. For more information about optical fiber cable, or to buy fiber optic cable, please visit our website.
<urn:uuid:9de43805-388f-48ca-9b21-ee20881143f4>
CC-MAIN-2017-04
http://www.fs.com/blog/types-of-optical-fiber-cables-available-in-the-market.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00334-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944417
619
3
3
MERX -- a new, electronic tendering system -- is an electronic covenant that bonds the Canadian federal government and 7 of 10 provinces. In the United States, that's the equivalent of having the federal government and 35 states agree to do something important, do it the same way, do it using the latest technology, and do it using the same external supplier! On Oct. 27, 1997, the Canadian government instituted MERX, which has tremendous potential for electronic commerce and facilitating procurement reform -- a hot item in the United States. MERX is a Web-based system used by buyers to post opportunities (requests for proposals, requests for quotations, etc.) and used by vendors to identify business opportunities that match their capabilities. WHAT DOES IT DO? Like other electronic tendering systems, MERX connects suppliers of goods and services to purchasers. Many states have these systems, but MERX is a private-sector business serving most of the public sector, not simply one state or city. It is the latest in an evolving system that began with the Canadian federal government and was first outsourced in 1991. From the buyers' perspective, MERX is a system designed to manage opportunities. Using the system, a buyer can post procurement announcements to the Web site, electronically transfer documents from the buyer's computer to MERX, update bid and award information, view all of the posted opportunities and see which suppliers have requested documents. From the suppliers' perspective, new marketing opportunities can be quickly and easily located by browsing or specifying selection criteria. They can then electronically order and pay for documents and have the documents downloaded or have them sent from a regional distribution center if -- like blueprints, for example -- not available in electronic form. They can also be notified by e-mail or fax when new documents are added to the system, if these opportunities match their predefined selection criteria. From the perspective of the participating governments, the system improves access to government business, increases competition and provides a level playing field for all businesses competing for government work. A LITTLE HISTORY ... In 1991, the Canadian government contracted out the notification and distribution of procurement documents through a centralized procurement service, called the Open Bidding Service (OBS). While this earlier system provided value, it had some shortcomings. In April 1997, a report was issued by the Standing Committee on Government Operations, House of Commons, on the topic of government contracting. This report states, "witnesses from the private sector told our committee they did not use the OBS because they found it too expensive in terms of time and cost; it was not 'user friendly' ... it lacked transparency, feedback and responsiveness to unsuccessful bidders; it did not meet the needs of the construction industry; and in some business sectors, it was not readily or directly accessible." The committee went on to conclude that OBS "[did] not adequately serve the needs of both private and public sectors; for issuing firms, there are difficulties of access, cost, transparency and fairness. There is a clear need to revamp the current open-bidding system to ensure its universal application, and furthermore, [that] it obtains best value for the Crown." In May 1996, an RFP was issued for a new service provider. 
This RFP was prepared by a committee established under a trade agreement among the federal, provincial and territorial governments. Cebra -- a company dedicated to electronic commerce and owned by the Bank of Montreal -- was selected from eight proposals as the company with the best solution. The Cebra proposal addressed the shortcomings of the Open Bidding System and was a Web-based solution. It incorporated the use of powerful search tools, Internet technology, national marketing initiatives, regionalization of the service offering and low overall cost. WHO USES IT? Currently, the entire federal government and seven of the provinces have signed on. All departments within these organizations are required to use the system exclusively. There are about 1,500 buyers using the system, loading 200 documents per day. There are also 20,000 registered suppliers, and this is increasing at 300 per week. In November, they were getting 1,500 orders per day for documents, and there were about 2,000 active documents (competitions that were still open) on the system. In 1998, MERX will expand by adding municipalities, academic institutions, school boards and hospitals for these provincial governments. In a year or so, according to Bob Binns, the executive responsible for MERX, they will have between 5,000 and 10,000 buyers and up to 100,000 suppliers throughout North America. MERX is used to post opportunities as required by the North American Free Trade Agreement. It therefore represents a simple, inexpensive way for U.S. firms to identify business opportunities throughout Canada with every level of the public sector. Clearly U.S. companies will benefit from this service. The prospects for electronic commerce using MERX are exciting. In the short term, it will continue to expand the number of distribution centers, the number of buyers and the number of suppliers. In the longer term, MERX will be a player in streamlining procurement via electronic submission of responses, distribution of award notices, supplier conferences and by providing other new services. OBS vs. Merx 1991-Oct. 26, 1997 Open Bidding System Oct. 27-Dec. 31, 1997 Federal government and six provinces Expanded to municipalities, boards and hospitals Centralized in Ottawa Phone-in to Ottawa, ship documents Local access, print and pick up Limited download capabilities CA$130 per year plus CA$430 for bid matching CA$8.95 per month 100,000 subscribers forecast for 1998 WHAT DOES IT COST? MERX doesn't cost government anything. Using MERX means that an organization has outsourced document management related to procurement at zero cost. This alone represents a savings of millions. MERX does not charge suppliers to browse. They charge suppliers who frequently use the system CA$8.95 per month plus a page-based charge when a document is downloaded or sent to them from the distribution center. Charges for paper documents are CA20 cents per page; electronic (downloaded) documents cost CA8 cents per page. The opportunity matching service costs CA60 cents per notice via e-mail and CA50 cents per notice via fax. ELECTRONIC COMMERCE IS HERE! In 1997, the Organization For Economic Cooperation and Development published Electronic Commerce, Opportunities and Challenges for Government, which communicates the views of the leading international users of electronic commerce. 
The report states, "Governments must act in concert with each other, and with private-sector users and suppliers of electronic commerce facilities, to create a commercial environment that is responsive to technical change." In addressing transaction management, the report declares, "many of the separate steps that normally intervene between a buyer and a seller in a commercial transaction can be integrated and automated electronically." The report also states, "another major incentive is the desire to achieve greater production and distribution efficiencies." Although MERX is in its infancy, it is clearly a solid example of electronic commerce on a large scale and seems positioned for success: 1. Marketing efforts are directed at significantly expanding the number of buyer organizations in Canada and the number of suppliers throughout North America. 2. It's a Web-based transaction-driven electronic commerce system. Most documents can be downloaded when ordered. Payment is handled electronically and automatically using a credit card. 3. MERX is a well-funded initiative dedicated to electronic commerce. 4. MERX is committed to providing other value-added products on its Web site for both buyers and suppliers. It will soon be adding substantial reference material -- such as The RFP Report, a quarterly newsletter of checklists, ideas and information -- as another product that is downloaded when ordered. MERX is the largest electronic tendering system in Canada. It has the potential to become Canada's national public-sector procurement system and a leader in electronic commerce and procurement reform. To access MERX, visit its Web site at . Michael Asner is a consultant specializing in procurement and information technology. He also publishes "The RFP Report." Two of his books, "The Request For Proposal Process" and "Handling Supplier Complaints and Protests" are available from Government Technology Press.
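As a rough worked example of the MERX fee schedule quoted above (CA$8.95 per month, CA20 cents per paper page, CA8 cents per electronic page, CA60 cents per e-mail notice and CA50 cents per fax notice), a supplier could estimate a month's bill along these lines; the function and the sample figures are illustrative only.

```python
def estimate_monthly_cost(paper_pages=0, electronic_pages=0,
                          email_notices=0, fax_notices=0,
                          subscription=8.95):
    """Rough monthly MERX cost in Canadian dollars, using the 1998 rates
    quoted in the article (illustrative only)."""
    return (subscription
            + 0.20 * paper_pages
            + 0.08 * electronic_pages
            + 0.60 * email_notices
            + 0.50 * fax_notices)

# e.g. a supplier downloading 150 electronic pages of documents and receiving
# 20 matched opportunities by e-mail in a month:
print(f"CA${estimate_monthly_cost(electronic_pages=150, email_notices=20):.2f}")
```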
<urn:uuid:7e9b69fd-0cf6-4a82-9a95-842d348040f2>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/Tendering-Advances-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00150-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941152
1,725
2.65625
3
Types of Cloud Deployments Cloud computing is a big force in IT today, and it isn't going away. In fact, cloud adoption is going up geometrically, both for end users (think apps on your phone or tablet) and for organizations of all sizes. Indeed, many smaller organizations may not have any on-premises infrastructure at all, other than networking infrastructure to get connected to the cloud. With this transformation in IT, it behooves all of us in the industry to understand it and adapt or risk being out of a job, like punch card operators. This white paper will discuss the basics of cloud computing, including a brief discussion on the location of the resources, followed by a review of the characteristics of cloud computing and the types (models) available. We will also briefly compare and contrast the various models. This document is the first in a series of white papers that will discuss each of these cloud-computing models in further detail, with a separate document for each type. If you are already familiar with cloud computing, you may wish to skip this white paper and jump directly to the particular type(s) you are interested in. Cloud Computing Locations While not very relevant to the cloud-computing models available (as each model is available at any of the possible locations), the locations nevertheless will be mentioned in this and future white papers and thus will be briefly defined here. The National Institute of Standards and Technology (NIST), an arm of the US federal government, has defined much of what cloud computing is (at least to them, but as they are a standards organization, many others have followed their definitions). We will refer to them throughout this series of white papers for consistency. Note that NIST doesn't call them cloud locations, but rather "Deployment Models." The public location means that the resources (servers, storage, networking, and/or applications) you will be accessing are usually located on the Internet (hence publicly available and the name of this type). This is not always true as there are some specialized networks (such as those used by the government) that may have restricted access, but for the most part, the resources you want to access are reached via the Internet. The broader definition is that the resources are owned by a third party (the cloud provider) and are rented from it in some fashion (either by paying directly or via the ads you are shown). The resources are located at one or more datacenters of the provider. The private location means that the resources are (usually) owned by and accessible through a private network, but in any case always for the exclusive use of a single entity or company. Typically, the idea is that an IT department at an organization owns the resources and makes them available to employees of the company in the various ways that cloud-computing offers. This doesn't have to be the case, as a third party could own them and make them accessible to just that organization. One of the biggest advantages is that the company owns, or at least controls, all the resources and can optimize them any way they wish and deploy them much more quickly than traditional methods provided; the (potential) downside is that the company must purchase all the resources. Another advantage of this model is that the company has complete control over all the security aspects of the deployment.
Hybrid is simply some combination of the previous two locations, where some resources are located within the organization's datacenters and some are accessed publicly. This doesn't have to be the case, as it is possible to federate several private clouds or public clouds as well, but this is a far less common scenario. Use cases for this model include the following: - Development and testing, where resources can be quickly provisioned as needed and just as quickly deprovisioned when the project is complete. Sensitive company data may be stored in the private cloud onsite. - Cloud bursting, where the normal load is handled by the company's own resources, but during period of peak demand (such as during the holidays for an e-commerce site), when the company's own resources are fully utilized, additional capacity is rented as needed to maintain desired performance levels. The advantage is that it is generally cheaper to own something than to rent or lease it if it will be used most or all of the time, but if needed for a short duration, renting is cheaper. This provides the best of both worlds, minimizing the total cost required to meet required performance levels. - Backup/Disaster Recovery, where data may be kept onsite, but backed up offsite somewhere, similar to the way that tapes used to be shipped offsite. It can also be used for companies that need a disaster recovery location, but only have a single datacenter for all their resources and need someplace they can run temporarily in the event of an emergency, much like companies like Sun Guard (now Sun Guard Availability Services) provided in physical datacenters in the past. In other words, they kept servers in a datacenter that could be powered up in the event of a disaster. These servers were available to multiple customers.
<urn:uuid:aecdce10-f275-4a70-9df6-b33471037e5e>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/resources/resource-library/white-paper/types-of-cloud-deployments/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00150-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964805
1,056
2.671875
3
Autonomous driving is not an all-or-nothing affair. In reality, automation of driving functions has a long history and has been expanding steadily for decades. The use of speed control with a centrifugal governor dates back to the 1900s and 1910s, while modern cruise control was invented in 1948. Anti-lock brakes were first used for aircraft in 1929. The growing use of automation of driving functions is also apparent in newer features like intelligent parking assist and lane keeping assist systems. Here, we take a look at the growing use of automation and present a model that captures the transition to truly self-driving cars based on work by the U.S. government and SAE International.
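For reference, the SAE J3016 taxonomy that this kind of model draws on is commonly summarized as six levels, 0 through 5. The sketch below paraphrases those levels; it is not a reproduction of the article's own framework, which may group or label them differently.

```python
from enum import IntEnum

class SAEAutomationLevel(IntEnum):
    """SAE J3016 driving-automation levels (wording paraphrased)."""
    NO_AUTOMATION = 0           # the human driver does everything
    DRIVER_ASSISTANCE = 1       # one assist at a time, e.g. adaptive cruise OR lane keeping
    PARTIAL_AUTOMATION = 2      # combined steering and speed control; driver supervises
    CONDITIONAL_AUTOMATION = 3  # system drives in some conditions; driver must take over on request
    HIGH_AUTOMATION = 4         # no driver needed within a limited operational domain
    FULL_AUTOMATION = 5         # no driver needed anywhere
```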
<urn:uuid:94ec94df-7a64-4add-9727-a6850d615405>
CC-MAIN-2017-04
http://www.ioti.com/transportation/what-are-5-levels-autonomous-driving
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00452-ip-10-171-10-70.ec2.internal.warc.gz
en
0.966392
141
3.3125
3
Reynier M.V.,University City | Tamega F.T.S.,Marine Biodiversity Institute | Tamega F.T.S.,Almirante Paulo Moreira Marine Research Institute | Daflon S.D.A.,University City | And 6 more authors. Environmental Toxicology and Chemistry | Year: 2015 Discharge of drill cuttings into the ocean during drilling of offshore oil wells can impact benthic communities through an increase in the concentrations of suspended particles in the water column and sedimentation of particles on the seafloor around the drilling installation. The present study assessed effects of water-based drill cuttings, barite, bentonite, and natural sediments on shallow- and deep-water calcareous algae in short-term (30d) and long-term (90d) experiments, using 2 species from Peregrino's oil field at Campos Basin, Brazil: Mesophyllum engelhartii and Lithothamnion sp. The results were compared with the shallow-water species Lithothamnion crispatum. Smothering and burial exposures were simulated. Oxygen production and fluorescence readings were recorded. Although less productive, M. engelhartii was as sensitive to stress as Lithothamnion sp. Mesophyllum engelhartii was sensitive to smothering by drill cuttings, barite, and bentonite after 60d of exposure and was similarly affected by natural sediments after 90d. These results indicate that smothering by sediments caused physical effects that might be attributable to partial light attenuation and partial restriction on gas exchange but did not kill the calcareous algae in the long term. However, 1-mo burial by either natural sediments or drill cuttings was sufficient after 60d for both species to reduce oxygen production, and the algae were completely dead under both sources of sediments. Environ Toxicol Chem 2015;34:1572-1577. © 2015 SETAC. Source
<urn:uuid:2a46aa9e-9c6b-454f-a9f6-19c15cf8d6fb>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/almirante-paulo-moreira-marine-research-institute-2719589/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00452-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934759
419
2.640625
3
An operator working in the Rochester, N.Y., Office of Emergency Communications had such acute pain in her fingers, she was forced to leave her job. And she wasn't alone; at one point, more than one-third of Rochester 911's work force was diagnosed with musculo-skeletal disorders that included numbness in the hands, wrists and elbows. The culprit is ergonomics, or lack thereof. Staff members handling emergency calls in the office were using new technology without ergonomically designed furniture or equipment. The situation was resolved when the workers' union and city management collaborated to design an entirely new facility that accommodated workers' physical needs, including workstations with adjustable keyboards and screens. Not every computer-related health problem becomes so extreme, nor do all situations require a top-to-bottom overhaul of everything from workstations to lighting. But illness and injury from computers is an all-too-real problem that's costing workers their health, while government loses productivity and tax dollars. "Poor ergonomics is a significant issue in the workplace," said Hank Austin, senior vice president for ErgoTeam, a consulting firm specializing in ergonomics. "People working with computers can develop a wide range of problems that affect every part of the body." It's not just happening in high-stress work situations, but in any government agency with computers. For example, nearly 96 percent of public employees who are members of the American Federation of Teachers (AFT) use computers or technology equipment at work. Of those, 26 percent have developed health problems using computer-related equipment. The Bureau of Labor Statistics (BLS) reported that more than 600,000 workers suffered serious workplace injuries caused by ergonomic hazards in 1999, the most recent year for statistics. The National Academy of Sciences puts the injuries from repetitive stress at 1 million annually. Neither government agency breaks down the number of injuries due directly to computer use, but in 1999 the BLS reported about 28,000 cases of carpal tunnel syndrome, which is often related to computers. The debilitating condition, which can occur over years, is also one of the most costly. Because so many people with carpal tunnel receive surgery, it is the leading cause of lost workdays, and the average cost is more than $13,000 per case. But ergonomic experts say the true cost is triple that amount. Austin has nearly 20 years of experience in the field of ergonomics and worker safety. To spot ergonomic troubles in any office, he suggests taking a look at the workers. "See how many are rubbing their wrists, how many have small pillows behind their backs while they sit. That will begin to give you an idea of the ergonomic conditions," he said. Wrists and backs aren't all that hurt when workers use computers -- vision blurs, and hips, thighs and even ankles throb with pain. Less obvious problems with poor computer ergonomics include what Austin calls "psycho-social issues," which arise when workers are in constant discomfort or pain. The psychological effect of poor ergonomics can be especially acute for workers in customer service, or those who feel they have little control over their situation. "The impact can start with lost workdays as workers stay home to recuperate mentally and physically," Austin said. The problems can grow into morale issues and eventually lead to valued employees quitting their jobs. 
Unfortunately, managers are often the last to realize what's going on because workers are reluctant to complain about a sore wrist or fatigue from using a computer. Meanwhile, public-sector employees report more stress and stress-related illnesses today than they did 25 years ago, according to the AFT. Most people think poorly designed keyboards or computer monitors that are too high, too low or too close to the workers are causes. Nonadjustable chairs and desks also are reasons workers suffer while using computers. Other factors that contribute include poor lighting and ventilation.
<urn:uuid:7cce2762-c145-4925-ab78-49a90c89bf9d>
CC-MAIN-2017-04
http://www.govtech.com/e-government/The-Price-of-Progress.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00178-ip-10-171-10-70.ec2.internal.warc.gz
en
0.971638
806
2.703125
3
Thanks to the U.S. Department of Veterans Affairs (VA) and its electronic medical records (EMRs) system, all veterans' medical records remained intact. In disaster situations, current and available medical information is crucial to the safety and health of affected populations. By linking data across all its medical facilities and departments, the VA guarantees both access and high-quality care to veterans when they need it -- no matter their location. The VA is also experimenting with other ways technology can help improve the delivery of health care, such as alleviating the amount of paperwork by switching to EMRs and completing a system overhaul, each of which will improve the agency's efficiency and effectiveness in terms of providing care. The VA has embarked on a sweeping effort to use IT throughout its health care system. The federal agency's response during Katrina is but one striking example of what happens when IT is integrated with a mission-critical business need of a government agency. The VA, the second largest Cabinet-level department in the United States based on staff size, operates a nationwide system of health care for military veterans via the Veterans Health Administration (VHA), headquartered in Washington, D.C. With more than 7.5 million participants enrolled, the VHA is the nation's largest integrated health-care provider. Many health-care systems countrywide suffer from excess paperwork and too little information sharing. The same held true for the VA before the mid-1990s, when paper records were the norm and hospitals did not "talk to each other." During that time, the VA set out to improve quality of care for veterans, and transform the business processes of an unwieldy and inefficient system. The agency saw a 25 percent reduction in costs, and dramatic improvements in quality of care and patient satisfaction between 1995 and 2000, according to the VA. Since World War II, the veterans' health-care system experienced marked growth in its patient population and incorporated additional services when former President Ronald Reagan established it as the Department of Veterans Affairs. In addition to providing medical care for veterans, the VHA subsequently added education, training, research, emergency management and care for homeless veterans to its mission. By the mid-1990s, the VA's bloat reached critical mass. That was until two people stepped in to transform -- via IT -- the agency's services. One of them, Dr. Robert Kolodner, the VHA's chief health informatics officer, has served with the VA for 28 years -- 15 years as a psychiatrist, and more recently in the IT field. "Until the early 1990s, we had a hospital information system that jumped from department to department," Kolodner said. Operational and managerial problems were common, and the VA was forced to transform itself, from the inside out, to remain a viable government agency. "Major systemic change clearly was needed," said Dr. Ken Kizer, the VA's former undersecretary of health from 1994 to 1999, and the administration's other change agent. "The first few years of the decade were spent diagnosing what was wrong with the health-care system and consensus-building," Kizer said. "The VHA transformation sought to correct the fragmentation of service delivery by a set of initiatives that aimed to structurally, procedurally coordinate health-care services." Most IT reforms occurred during Kizer's tenure, and he oversaw the EMR's establishment throughout the VA health-care system in the 1990s. 
This information sharing technology proved a key element in the VA's IT transformation, and revolutionized the way doctors and nurses cared for their patients. Tool of Reform The VA's current front-line clinical system, which Kolodner compared to a set of Legos, is a product of reforms that began in 1995. Kizer's team formed an interdisciplinary clinical group to address the burdens of paperwork, and resulting backlogs and inaccuracies. The VA then began to release experimental software specifically aimed at record keeping for front-line clinicians. "The Health Summary, which we leveraged from the Indian Health Service, pulled patient information together in a report format," Kizer said. "We were able to make it even more useful by providing both an online or paper version, rather than just a paper version spit out by the computer." The system was choice-based, where the clinician could select the information needed for each patient and its order. Different reports could be generated for outpatients, inpatients or specialty clinics. "It was very flexible, didn't take new programming and was part of the setup that could be done at the local facility," Kolodner said. "This was the first of the tools designed primarily for the clinician across the disciplines." The VA later rolled out the EMR system while simultaneously striving to change the workplace culture. "Exceptions taught us that you need to have clinicians who buy into this at the local level and can be the champions," he continued. "We needed to have an ongoing group of people who were there to facilitate the clinicians initially to get over the first uses. And we needed to configure the system and understand the workflow of the individuals, and give software that fit what they needed." To illustrate this approach, managers worked with each individual specialty during testing and implementation to determine their most common orders. Doctors and nurses didn't have to change the code, Kolodner said, but used what was already in the system, which became more useful to clinicians. Creating a culture of ownership in the electronic system did not happen instantly, however. "We had our share of doctors who were reluctant to use the system," Kolodner said. "But a large proportion of nurses, social workers, pharmacists and other employees were using the system and happy with it, and word caught on." Because the VA has hundreds of test sites around the country, the agency has had more opportunities to hone and improve the system. "There is a lot of local innovation and creativity, and we've been fortunate to take advantage of that on the national level," he said. "Some of our most successful applications were developed locally when a doctor or administrator teamed up with an IT person. It's been an evolution at each facility as it catches on." The EMR rollout was gradual but methodical, Kolodner remembered. "We found that when 40 percent to 50 percent of clinicians were using it, leadership could step in and set a time frame when everyone would need to use it. Once leadership makes that commitment, the new technology can roll out," he continued, "We did round-the-clock support the first few weeks of the rollout for doctors. At that time, we learned that if you have the right combination of leadership, support and configuration, you can be successful. And once we had the tools in place we could do more." 
Quality of Care The VA has an integrated outpatient and inpatient information system called the Veterans Health Information Systems and Technology Architecture (VistA), in which patient records are centralized and accessible regardless of a person's residence or location of his/her hospital, and are viewable in all 1,400 VA facilities. Work on VistA began in 2001 and is expected to be complete by 2012. At the end of fiscal 2005, the VA had spent $514 million on VistA, according to the General Accountability Office. The system has had an impact on health care's biggest issue: error rates. Automatic alerts built into the system remind doctors and nurses when a patient needs a particular procedure, test or follow-up visit. "We are trained to administer certain types of care to certain patients, but in busy clinics, the computers always remind us and raise the standards of care we want to practice," Kolodner said. In busy hospitals, EMRs can reduce human error and prompt health-care professionals to double-check their work and patients' specific conditions and needs. Since work on VistA began, the system has saved 6,000 lives by improving vaccination rates, according to the VA, which also stated that the system has reduced hospital stays for certain illnesses, cutting costs by $40 million per year. In addition, VistA costs the VA approximately $87 per patient per year to maintain. There are still opportunities to refine and experiment with EMRs in individual locations. For example, one VistA tool creates templates that are envisioned as boilerplate text for creating progress notes. A local hospital took them and created elaborate interactions with clinicians, capturing other information and using logic to make sure clinicians were properly administering the right medication to the right patients. The VHA is currently revamping its enrollment information system to automate health-care eligibility, enrollment and case management, and more effectively maintain VHA eligibility policies and procedures for veterans and their families, said VHA spokesman M. Jay Eigenbrode. For instance, the agency's Health Eligibility Center uses ILOG JRules -- a business rule management system -- to streamline the enrollment process with a "central bank" of rules and a platform for cross-agency collaboration, Eigenbrode said. JRules allows VA health-care administrative staff and clinical providers to access the latest eligibility and benefit level information on all patients, in real time, as it's being updated in the system. "[With the system], policymakers and business analysts can access and modify the rules more quickly, react to change easier, and ultimately improve the speed and quality of the decision-making eligibility processes," he said. Northrop Grumman built the first Bidirectional Health Information Exchange (BHIE) for EMR for the VA -- and the Department of Defense (DoD). BHIE provides immediate and updated information on outpatient prescriptions, drug and food allergies, laboratory results, radiology text results, and demographic data on shared patients from the VA's Computerized Patient Record System. The EMR system, although electronic and Internet-based, also works efficiently for senior veterans, who can authorize the creation of their personal health record and perhaps give their children permission to look at the records. Seniors then discuss their medical needs over the phone and avoid going on the computer. In Other News ... 
The VA also leads in bar-code scanning technology, which is used on patients at the bedside. Ken Kleinberg, senior director of marketing and health-care solutions for Symbol Technologies, works with the VA on enterprise mobility, wireless technologies and bar-code scanning. The VA's deployment of mobile technology, he said, is catching on in the health-care sector. In the past, nurses and doctors entered data into a desktop or laptop computer at a separate nursing station; now they scan a wristband to give medication or track the patient. The VA has also adopted PDA mobile computers with added capabilities, such as collecting specimens and any other bedside data. "The new excitement at [the] VA is the use of mobile computers," said Kleinberg. "Many organizations started out with desktop applications and pushed the cart around with full-sized keyboards and screens, but those were very unwieldy, and were left in the hallways. You had to worry about cleaning them, and they are expensive." Nurses now carry mobile devices that enable them to administer medication throughout their shifts, and newer ones have VoIP capability. Over time, Kleinberg expects the laser bar-code scanner to keep pace and move to newer units as they are developed. EMRs have been shown to decrease the error rate for medication administration, remove needless laboratory tests, cut costs and improve overall delivery of medical care. "Out in the community, presenting at conferences and helping people understand it, the VA has done more to promote these technologies than any other organization I know of," Kleinberg said. Also, as CEO of Medsphere, an IT provider for health-care organizations, Kizer has taken specifics about the VA's system to the commercial sector and is working to promote its use. Other federal and state agencies are looking at how an EMR system like VistA can be applied in their jurisdictions. In 2005, the U.S. Department of Health and Human Services established the Certification Commission for Healthcare Information to certify health-care IT products and help speed adoption of EMRs. "[The] VA is effectively the world's leading user of information technology in hospitals," Kleinberg said. "[The] DoD is looking to VistA software as a benchmark for how to do it right and cost-effectively. The VA is viewed as the leader by governments as well as the private sector." And the change in recordkeeping for VA medical professionals has been transformative. "Just having doctors able to e-mail each other, share notes and find computer records of appointments is key," said disabled veteran and government contractor Bruce Fenton. "Investment in IT systems has already helped the VA a great deal, and will continue to help them deal with the stresses on the system. The VA has a unique mission among federal agencies because its primary work is to care for all these veterans -- a mission of large scope and scale." Other reasons that have made it easier for the VA to adopt health IT, according to Kleinberg, are that a large population of patients give it economies of scale; and the physicians are employed by the VA, not by a typical hospital, giving the agency greater control over the system's users. This reduces political and cultural barriers to accepting new technologies and the changes that come with it. Kolodner chalks up a lot of the VA's success to numerous factors, but one in particular stands out, he said. 
"Leadership has played an active role at all levels, with ongoing testing in the laboratory and test sites, feedback and making sure software really is useful. What we are achieving now is much better than 10 years ago and on many more levels because it's tailored and provided only when patients need it. "The VA is a laboratory for the nation," Kolodner continued. "True, we have some alignment of incentives and no barriers of state laws, but we can demonstrate there isn't a technical barrier to better care -- the level and quality we all deserve. So with the VA being that laboratory, we do what we can so the rest of the nation can use it too."
<urn:uuid:79a62a1b-6ed7-417a-98c5-2806e1a0937a>
CC-MAIN-2017-04
http://www.govtech.com/magazines/pcio/Prescription-for-Improvement.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00232-ip-10-171-10-70.ec2.internal.warc.gz
en
0.968752
2,928
2.5625
3
The term unicast refers to the network distribution method of sending messages from a sender to a single destination. For this to happen, the forwarding device - in this case, a switch - needs to have an entry in its forwarding table that maps the MAC address of the destination device to the port where it resides. When a switch receives a packet whose destination MAC address is not present in its forwarding table, its solution is to "flood" the packet out of all of its ports that belong to that layer 2 broadcast domain (VLAN). In addition to being a potential security risk, this can significantly reduce network efficiency if it occurs frequently. Unicast flooding can also be a sign of a MAC flooding attack. Monitoring your network for unicast flooding can help you pinpoint network problems as soon as they occur and protect the security of your system and users. When Is Unicast Flooding a Problem? Unicast flooding, while usually an innocuous and expected behavior, can indicate problems in your network configuration if it occurs in excess. High levels of flooding can degrade network performance in terms of latency and packet loss, especially for low-bandwidth systems. Here are a few of the most common causes of unicast flooding: 1. Asymmetric Routing If there are two paths through the network connecting hosts 1 and 2, and packets sent from each host take different paths, unicast flooding would occur when either host sends a packet to the other. Specifically, if host 1 connects to switch A via VLAN1, and host 2 connects to switch B via VLAN2, when host 2 sends a packet to host 1, that packet will be flooded to VLAN1, because switch B doesn't know which port to use to reach host 1. The same situation would occur for packets sent in the opposite direction. To limit unicast flooding in this situation, set your router's ARP timeout slightly shorter than the timeout for the switch's address table so that most entries are relearned before the switch ages them out. 2. Forwarding Table Overflow If a switch's address table is full and it receives a packet destined for any host whose MAC address isn't in its table, it is forced to flood the packets until there is space in the address table to store the new address associations. This can also be caused by a type of network attack called a MAC flooding attack. This attack involves a malicious host flooding the switch with frames that have fake MAC addresses to completely fill the address table so that all other traffic will be flooded from all ports and can be observed by the attacker. Methods of preventing MAC flooding attacks include enabling port security on a switch to limit the number of MAC addresses learned on certain ports and increasing the timeout period of known addresses in the table, as well as verifying MAC addresses against an AAA server. 3. Network Topology Changes If a link on a network goes down or if a new port opens for forwarding, address tables need to be updated since paths to some destinations may have changed. Since the length of time that entries in the table normally take to age out is longer than would be ideal for these events, a Topology Change Notification (TCN) is triggered on networks using the Spanning Tree Protocol (STP) to age out table entries more quickly so the forwarding table stays up to date. A side effect of aging out entries is increased unicast flooding until addresses are relearned. Unicast flooding due to TCNs doesn't usually last long. 
However, it can be problematic if multiple TCNs are occurring over a short period of time. On Cisco switches, you can use the PortFast command for ports that are only connected to end stations which go up and down often. This causes the port to go directly to forwarding mode, bypassing the learning and listening states and preventing a TCN when the port goes up or down. For a more detailed explanation of topology changes, check out this article on the topic. 4. Unidirectional Protocols Unidirectional or connectionless networking protocols such as the User Datagram Protocol (UDP) can also lead to unicast flooding since they don't require acknowledgements or responses from the destination application. In a situation where one system sends packets to another over a long period of time without getting any packets in return from the receiver, the switch connecting the two systems will eventually age out its entry for the destination, so any further packets from the source to that destination will result in unicast flooding. The Transmission Control Protocol (TCP), a connection-oriented protocol, avoids this potential pitfall by requiring acknowledgements to be sent back to the source for data that the destination receives successfully. How You Can Monitor Unicast Flooding If you already use ExtraHop, you can download the Unicast Flooding Detection Bundle in our bundle gallery. The bundle utilizes a trigger to capture metrics about suspected instances of unicast flooding and displays the information on an easy-to-understand dashboard that also matches flooded packets with their intended destination. With the bundle, you can correlate other network metrics with instances of suspected unicast flooding. If you're not an ExtraHop user yet, check out our interactive online demo and see its capabilities for yourself.
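To make the mechanism behind the causes above concrete, the toy model below learns source MAC addresses per port, ages entries out, and floods any unicast frame whose destination is unknown or has expired. It is a deliberately simplified illustration of switch behavior, not how any real switch (or the ExtraHop bundle) is implemented.

```python
import time

class ToySwitch:
    def __init__(self, ports, aging_seconds=300):
        self.ports = ports
        self.aging = aging_seconds
        self.table = {}  # MAC -> (port, last_seen)

    def receive(self, in_port, src_mac, dst_mac, now=None):
        now = time.time() if now is None else now
        self.table[src_mac] = (in_port, now)            # learn/refresh the source address
        entry = self.table.get(dst_mac)
        if entry and now - entry[1] <= self.aging:
            return [entry[0]]                           # known destination: forward out one port
        return [p for p in self.ports if p != in_port]  # unknown destination: unicast flooding

sw = ToySwitch(ports=[1, 2, 3, 4], aging_seconds=300)
print(sw.receive(1, "aa:aa", "bb:bb", now=0))    # bb:bb unknown -> flooded to ports 2, 3, 4
print(sw.receive(2, "bb:bb", "aa:aa", now=1))    # aa:aa known   -> forwarded to port 1 only
print(sw.receive(1, "aa:aa", "bb:bb", now=400))  # bb:bb entry aged out -> flooded again
```

The third call shows why one-way traffic (the UDP case) and aggressive aging (the TCN case) both end in flooding: the entry for the silent destination simply expires.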
<urn:uuid:fcd42ed5-2469-459b-b798-aad7747da250>
CC-MAIN-2017-04
https://www.extrahop.com/community/blog/2016/find-network-configuration-issues-by-monitoring-unicast-flooding/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00444-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936926
1,070
3.359375
3
While it may seem like science fiction, implantable medical devices such as pacemakers, insulin pumps, and even devices designed for the treatment of epilepsy could be hacked. While it would be nice to think that devices buried deep within the body are secure, security research released at DefCon demonstrated otherwise. While it has been known for many years that various devices such as microwaves and iPods can interfere with communications of biomedical devices, no one looked at this as a means of attack since these devices don't offer a method of direct communication and send data to the outside world by means of radio frequency telemetry. Just as with cordless phones, Bluetooth, and WLAN technology, these devices can be eavesdropped on. What's even worse, there's potential that someone could send rogue instructions to an implanted device by intercepting the device's wireless signal and then broadcasting a different signal. When a computer fails, you reboot it, but when a pacemaker fails, someone may die. While this may seem far-fetched, attackers will always think outside the box. Such activity could be carried out as an act of sabotage to inflict financial or personal injury, as a way to target government officials or military leaders, or as an attack by a disgruntled employee. More care must be put into building strong security into all electronic devices as we increase our dependency on them.
<urn:uuid:626b17b0-0b5e-470d-b194-031083648686>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2011/08/17/implantable-devices-are-susceptible-to-hacking/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00352-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964243
280
2.8125
3
Pain D., WWT | Green R., Conservation Science Group | Clark N., Bto Inc.
British Birds | Year: 2011
The Spoon-billed Sandpiper Eurynorhynchus pygmeus is thought to be one of the most endangered birds in the world. The latest information suggests that the population is in free fall and, if current trends continue, could be at such low levels that extinction through random events could happen within 5-10 years. Habitat loss at key staging posts on the bird's 8,000-km migration route to and from its southern and southeast Asian wintering grounds is one factor in this charismatic wader's decline, but recent research suggests that trapping on the wintering grounds may be a key reason for the recent acceleration in the rate of decline. Conservation priorities for the species are outlined and the feasibility of a conservation breeding programme, either to support an existing small population or to re-establish one that has become extinct, is discussed.

Coad L., Conservation Science Group | Coad L., Imperial College London | Coad L., Environmental Change Institute | Abernethy K., University of Stirling | And 4 more authors.
Conservation Biology | Year: 2010
Bushmeat hunting is an activity integral to rural forest communities that provides a high proportion of household incomes and protein requirements. An improved understanding of the relationship between bushmeat hunting and household wealth is vital to assess the potential effects of future policy interventions to regulate an increasingly unsustainable bushmeat trade. We investigated the relationship between hunting offtake and household wealth, gender differences in spending patterns, and the use of hunting incomes in two rural forest communities, Central Gabon, from 2003 to 2005. Households in which members hunted (hunting households) were significantly wealthier than households in which no one hunted (nonhunting households), but within hunting households offtakes were not correlated with household wealth. This suggests there are access barriers to becoming a hunter and that hunting offtakes may not be the main driver of wealth accumulation. Over half of the money spent by men in the village shop was on alcohol and cigarettes, and the amount and proportion of income spent on these items increased substantially with increases in individual hunting offtake. By contrast, the majority of purchases made by women were of food, but their food purchases decreased actually and proportionally with increased household hunting offtake. This suggests that the availability of bushmeat as a food source decreases spending on food, whereas hunting income may be spent in part on items that do not contribute significantly to household food security. Conservation interventions that aim to reduce the commercial bushmeat trade need to account for likely shifts in individual spending that may ensue and the secondary effects on household economies. © 2010 Society for Conservation Biology.

Fourcade Y., University of Angers | Fourcade Y., University of East Anglia | Fourcade Y., Swedish University of Agricultural Sciences | Richardson D.S., University of East Anglia | And 5 more authors.
Biological Conservation | Year: 2016
Understanding patterns of genetic structure, gene flow and diversity across a species range is required to determine the genetic status and viability of small peripheral populations. This is especially crucial in species distributed across a large range where spatial heterogeneity makes it difficult to predict the distribution of genetic diversity. Although biogeographical models provide expectations of how spatially structured genetic variation may be at the range scale, human disturbance may cause strong deviations from these theoretical predictions. In this study, we investigated genetic structure and demography at a pan-European scale in the corncrake Crex crex, a grassland bird species strongly affected by agricultural changes. We assessed population structure and genetic diversity, as well as demographic trends and direction of gene flow, in and among 15 contemporary populations of this species. Analyses revealed low genetic structure across the entire range with high levels of genetic diversity in all sites. However, we found some evidence that the westernmost populations were, to a very limited extent, differentiated from the rest of the European population. Demographic trends showed that population numbers have decreased in western Europe and remained constant across eastern Europe. Results may also indicate asymmetric gene flow from eastern to western populations. In conclusion, we suggest that the most likely scenario is that contrasting demographic regimes between eastern and western populations, driven by heterogeneous human activity, has caused not only asymmetric gene flow that has buffered small peripheral populations against genetic diversity loss, but also erased any genetic structure that may have existed. Our study not only highlights the need for coordinated action at the European scale to preserve source populations of the corncrake, but also to ensure persistence of the most threatened sites. Only by doing so will we avoid losing adaptive potential and prevent over-reliance on eastern source populations whose future may be uncertain. © 2016 Elsevier Ltd.

Groves C.R., Conservation Science Group | Game E.T., Conservation Science Group | Anderson M.G., The Nature Conservancy | Cross M., Wildlife Conservation Society | And 10 more authors.
Biodiversity and Conservation | Year: 2012
The principles of systematic conservation planning are now widely used by governments and non-government organizations alike to develop biodiversity conservation plans for countries, states, regions, and ecoregions. Many of the species and ecosystems these plans were designed to conserve are now being affected by climate change, and there is a critical need to incorporate new and complementary approaches into these plans that will aid species and ecosystems in adjusting to potential climate change impacts. We propose five approaches to climate change adaptation that can be integrated into existing or new biodiversity conservation plans: (1) conserving the geophysical stage, (2) protecting climatic refugia, (3) enhancing regional connectivity, (4) sustaining ecosystem process and function, and (5) capitalizing on opportunities emerging in response to climate change. We discuss both key assumptions behind each approach and the trade-offs involved in using the approach for conservation planning. We also summarize additional data beyond those typically used in systematic conservation plans required to implement these approaches. A major strength of these approaches is that they are largely robust to the uncertainty in how climate impacts may manifest in any given region. © 2012 The Author(s).

Balmford A., Conservation Science Group | Green R., Conservation Science Group | Phalan B., Conservation Science Group
Proceedings of the Royal Society B: Biological Sciences | Year: 2012
Farming is the basis of our civilization yet is more damaging to wild nature than any other sector of human activity. Here, we propose that in order to limit its impact into the future, conservation researchers and practitioners need to address several big topics-about the scale of future demand, about which crops and livestock to study, about whether low-yield or high-yield farming has the potential to be least harmful to nature, about the environmental performance of new and existing farming methods, and about the measures needed to enable promising approaches and techniques to deliver on their potential. Tackling these issues requires conservationists to explore the many consequences that decisions about agriculture have beyond the farm, to think broadly and imaginatively about the scale and scope of what is required to halt biodiversity loss, and to be brave enough to test and when necessary support counterintuitive measures. © 2012 The Royal Society.
<urn:uuid:598b3dd4-da6c-4208-9258-523a75349c77>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/conservation-science-group-485660/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00260-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930436
1,506
2.8125
3
Who uses computerized decision support? by Dan Power Many people use computerized decision support for work and in recent years to aid in personal decision making. Identifying the targeted or intended users for computerized decision support helps differentiate the specific system. Knowing who does or will use a capability provides useful information about how the content and design of the application might or should differ. This discussion provides examples of job titles and occupations of targeted users for decision support, business intelligence and analytic systems. Let's go back in time to the "first" DSS developed by Michael S. Scott Morton (1971). That system was designed to support the market planning manager, the production manager and the marketing manager of a consumer product division of a large multi-business firm. "Every month they developed both a production plan and a sales plan for the following twelve months (p. 43)." In 1978, Keen and Scott Morton described six diverse DSS: a DSS to help investment managers (Gerrity, 1971) with a stock portfolio, a DSS used by the president of a small manufacturing company to evaluate an acquisition prospect, an interactive DSS used by product planners for capacity planning, a model-driven DSS used by a brand marketing manager for making marketing allocations (using J.D.C. Little's BRANDAID), a DSS (IBM's Geodata Analysis and Display System called GADS) used by police officers and commanders in teams of four to redesign police beats, and also GADS used by school officials to explore and define alternative school district boundaries. Holsapple and Whinston (1996) identify many management users of DSS. For example, the management staff of the distribution department at Monsanto used a DSS for ship-scheduling decisions, a DSS helped managers with vehicle fleet-planning decisions, cargo planners used a DSS for scheduling ship unloading in Rotterdam, plant supervisors at Dairyman's Cooperative used a PC-based DSS to optimize daily production planning, maintenance planners at American Airlines used a DSS, and analysts and executives in the U.S. Coast Guard used a document-driven DSS to help make procurement decisions. Turban and Aronson (1998) also identify DSS used by staff for special studies. Staff at Group Health Cooperative used a data warehouse and statistical analysis tools to generate periodic reports and for monitoring key performance indicators and staff at Siemens Solar Industries constructed a simulation model DSS of a "cleanroom" to explore alternative design options. DSSResources.com has 46 case studies that identify many users including managers, staff, customers, the general public, and workers in business, government and not-for-profit organizations. Job titles of users include: engineers, loan officers, salesmen, fire department commanders, examiners in the Pennsylvania Department of Labor and Industry, business and financial analysts, and emergency management professionals. A web search identifies even more uses and users. Medical doctors using a web-based clinical decision support system. According to http://www.openclinical.org/dss.html, Clinical Decision Support Systems are "active knowledge systems which use two or more items of patient data to generate case-specific advice" (cf., Wyatt, J. & Spiegelhalter, D., 1991). 
Other Web documents focus on DSS for trainee lawyers and mediators, a DSS for crop rotation targeted to farmers and agricultural policy makers, and an example from Scotland of judges using a Sentencing Information System. The first international Workshop on Judicial Decision Support Systems was held in Melbourne, Australia in 1997 (cf., http://www.globalcourts.com/text/jdss.html). The U.S. Marine Corps (USMC) needed an application that allowed Marine Command staff to import, manipulate, and analyze terrain data relative to their operations. Road maintenance supervisors evaluated a Maintenance Decision Support System (MDSS) during the winter of 2003 in Central Iowa. DSS are used for air traffic monitoring. Also, a decision support system is used by staff to facilitate manpower planning for the U.S. Marines. Military analysts use a Financial Data Mart at the Military Sealift Command at the Navy Yard in Washington, D.C. This list can probably go on for many pages. The last system I'll mention is TIAA-CREF's decision support system for more than 160 billion US dollars of daily equity investment. This on-line system supports portfolio managers of the world's largest pension fund with over 250 billion USD in assets.

Eric Siegel (Taylor blog) identified seven innovative uses of predictive analytics, including: 1) improving text mining, 2) predicting ad quality, 3) sending targeted follow-up emails, 4) analyzing satisfaction surveys to drive operational decisions, 5) using reliability modeling to predict when things will need repair and then scheduling proactive maintenance, 6) predicting the success of startups, and 7) detecting anomalies for fraud detection so follow-up decisions can be made.

Fico.com cites many uses of predictive analytics by companies. The company website claims "Predictive analytics is widely used to solve real-world problems in business, government, economics and even science—from meteorology to genetics." Managers and staff implement and use analytics and especially predictive analytics in credit scoring, underwriting, collecting past due accounts, increasing customer retention and up-selling, and fraud detection.

So who uses computerized decision support, including analytics and business intelligence systems? Managers, knowledge workers and staff specialists in a wide variety of professions, occupations, industries and disciplines. Decision support users include internal and external stakeholders of an organization. Ultimately, anyone who makes decisions and has access to a computer is a potential user of computer-based decision-aiding applications.

References

Fico.com, "Who uses predictive analytics?" at URL http://www.fico.com/en/Communities/PredictiveAnalytics/Pages/who-uses-predictive-analytics.aspx

Gerrity, T.P., Jr., "The Design of Man-Machine Decision Systems: An Application to Portfolio Management," Sloan Management Review, vol. 12, no. 2, pp. 59-75, Winter 1971.

Holsapple, C. W. and A. B. Whinston, Decision Support Systems: A Knowledge-based Approach, St. Paul, MN: West Publishing, 1996.

Keen, P. G. W. and M. S. Scott Morton, Decision Support Systems: An Organizational Perspective, Reading, MA: Addison-Wesley, 1978.

Power, D. J., Decision Support Systems: Concepts and Resources for Managers, Westport, CT: Greenwood/Quorum, 2002.

Scott Morton, M. S., Management Decision Systems, Boston: Division of Research, Graduate School of Business Administration, Harvard University, 1971.

Taylor, J., "7 Innovative uses of analytics," February 16, 2010 at URL http://jtonedm.com/2010/02/16/7-innovative-uses-of-analytics-pawcon/

Toigo, J. W., "Decision Support Systems Make Gains in Government," Washington Technology, Vol. 14, No. 4, 05/24/99 at URL http://www.washingtontechnology.com/news/14_4/tech_features/530-1.html

Turban, E. and J. Aronson, Decision Support Systems and Intelligent Systems (5th edition), Upper Saddle River, NJ: Prentice Hall, 1998.

Wyatt, J.C. and D.J. Spiegelhalter, "Evaluating medical expert systems: what to test and how?", Medical Informatics, 1991.

Author's note

My email had a number of interrelated questions related to "who uses DSS?" Afria King asked "What are some DSS products related to business administration?" I replied "Most DSS are targeted for use by managers." Janine Engledoe wrote "What are the different applications of DSS?" Lynn Oelke asked "What are the major DSS products specifically used by Health Care Administrators?" Emily Bell wondered about the "cost associated with different DSS?" Nastaran Razavi asked me to "please specify some commonly used DSS software". Wong Soon Chen sent me an essay question: "'Managers need computerized decision support and supporting technologies to do their job better.' Justify the above statement with relevant facts and figures." Finally, Road Runner writes "Find information on the use of computers to support decisions versus TPS. Each group member collects an application in a different industry (e.g., banking, insurance, food services, etc.). The group then summarizes the findings, points out similarities and differences of the applications." Some weeks I receive multiple emails about who uses decision support. Some emails I answer quickly, but I usually wait a few months to answer "examination" type questions in an Ask Dan! column. Even then I sometimes change the focus or combine questions. What do the above questions have in common? The questions suggest that there are different DSS for people performing different jobs. Also, they suggest the costs and functionality of DSS differ. I agree.

This column is a revised version of Power, D., "Who uses DSS?" DSS News, Vol. 7, No. 12, June 4, 2006 at URL http://dssresources.com/newsletters/163.php, updated August 31, 2012.

Last update: 2012-09-02 05:13
Author: Daniel Power
<urn:uuid:a6b3f183-3f1a-4da9-8564-b0ccce806bef>
CC-MAIN-2017-04
http://dssresources.com/faq/index.php?action=artikel&id=118
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00133-ip-10-171-10-70.ec2.internal.warc.gz
en
0.894656
1,982
2.515625
3
We've all heard of writer's block: when a literary type just can't figure out what to write. If you're a programmer, though, chances are you've experienced the cousin of writer's block (Cousin Oliver, if you will), coder's block.

What's that, you non-programmers say? How could coders get blocked? They just have to follow instructions, right? Doesn't somebody else figure out what an application is supposed to do and programmers just have to, you know, make it work? It's not a creative process like writing, now is it? So how could programmers get "blocked"?

Like writers, programmers can have days (or longer) where they have trouble writing code, or writing good code, or just don't feel like they're "in the zone". Programming isn't just about following instructions and making something work. There's often a lot of creativity involved in how you make something work, and a near-infinite number of paths to implementing a given functionality. Some ways are good, some ways not so good, and some ways downright bad. It's a much more creative process than most non-programmers think. Plus, programmers, like anybody else, can have lots of different projects on their plate, and sometimes simply deciding what project to tackle first can be daunting and can prevent you from getting started on any one chunk of work. Ergo, henceforth and QED your average coder can be just as blocked staring at his or her favorite code editor as a writer can staring at a blank Word document.

It's a real problem for some. How to deal with coder's block is a topic that comes up regularly in developer forums. The suggested remedies are often familiar and, truth be told, not all that different from those offered to break through writer's block. You'll often see suggestions like:
- Get some exercise to clear your head
- Take a nap
- Treat yourself to ice cream
- Just write something, anything, no matter how crappy it is

Then there are suggestions that are fairly programmer specific, that probably wouldn't have worked for, say, Hemingway, such as
- Write some unit tests
- Take a break to learn a new programming language
- Phone a fellow developer

The big question, though, is what to do if you've tried these old chestnuts and you still can't get your brain in gear enough to spit out some brilliant code? Well, then you may need to get a little more creative to bust out of your rut. Try something like:
- Doing a triathlon - Maybe you just didn't exercise enough. Try swimming 2 miles, biking 100 miles and running a marathon, right in a row. That oughta clear your brain, though - WARNING - it might also kill you, especially if your normal solution to coder's block is to eat ice cream. Seriously, consult your doctor (and ease up on the ice cream).
- Start looking for a new job - Your coder's block may be your subconscious telling you that your current gig is going nowhere. Start perusing career sites; you may see lots of exciting opportunities, which may inspire you to code again; or, there may be nothing out there, in which case you'd better snap out of it, pronto, so you don't lose your current job, crappy though it may be.
- Going around the office and removing the trademark symbol from all of the corporate signage - Oops, I'm sorry - this is a method for breaking CEO's block. That's a whole different thing.

How about you? Have you suffered from coder's block? How did you break out of it? Do tell.
<urn:uuid:93298096-3a6a-47ff-b55c-4876bb340ea2>
CC-MAIN-2017-04
http://www.itworld.com/article/2721123/it-management/breaking-coder-s-block.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00133-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940344
823
2.53125
3
By: Steve Mullins The Virtualization Boom Datacenter server virtualization saves space, power and hardware cost for thousands of enterprises by consolidating physical machines. The reduction in the number of physical machines is achieved by increasing hardware (CPU and memory) utilization from a typical 10-15% to as much as 75-85%. In addition to the savings on hardware purchases, there are reduced cooling requirements and maintenance cost savings associated with fewer machines. Energy cost savings have been estimated to be in the range of $300 to $600 per year for each server that is eliminated by virtualization. The total savings due to virtualization can be in the millions of dollars per year for large enterprises. This is why 60% to 80% of IT departments have server consolidation projects underway, according to analyst reports. Server virtualization has broken the bonds of legacy datacenter IT architecture in which a single application and a single operating system (OS) run on each server. In the virtual datacenter, multiple applications and operating systems can run securely on one server. It is this capability that allows hardware utilization to increase dramatically. The trend is to take it a step further and create pools of shared hardware resources that include not only multiple servers (compute resources), but also I/O and storage resources, that can be efficiently and dynamically allocated to many virtual machines. This virtual infrastructure provides increased flexibility, high availability, and scalability to meet today’s enterprise datacenter needs. Server Virtualization—A “New Again” Paradigm for Computing While server virtualization is one of the hottest trends in IT today, the idea dates back to the 1960’s IBM mainframes. Server virtualization allows multiple software instances of a computing platform to run concurrently on one physical machine. These virtual machines or “VMs” are capable of running an operating system and a set of applications. Each VM may run a different OS—Windows, Linux, UNIX, etc.—or different versions of the same OS, depending on the needs of the software applications. This provides tremendous flexibility and security. The dominant approach to server virtualization is through a thin software layer—hypervisor—between the physical machine and the VMs, as shown in Figure 1. The hypervisor is installed on the “bare metal” of the server, taking the place of the traditional OS. Hypervisors dynamically allocate hardware resources to each VM. This is the approach taken by the leading server virtualization solutions from VMware (ESX Server), Citrix (Xen) and Microsoft (Hyper-V). Virtualization and License Compliance Challenges While there are many benefits to virtualization, there are also a few challenges. An often overlooked aspect of the virtual datacenter is increased risk of software license non-compliance. There are two key drivers. First, it’s easy to create new virtual machines running copies of operating systems and software applications. Second, software publishers have adopted software licensing rules for virtual environments that add significant complexity to the already complicated task of managing software licenses. Datacenter software is generally the biggest slice of the application investment pie, with typical costs for licenses in the tens to hundreds of thousands of dollars per server. 
Therefore, it's critical to understand whether software is properly licensed on virtual machines to avoid unexpected true-up costs and prevent under- or over-buying. Enterprises should implement software asset management (SAM) programs that provide license reconciliation between what was purchased and what applications are installed on both physical and virtual machines, from the desktop to the datacenter. A quick look at license complexity reveals that a next-generation software asset management solution, commonly known as Enterprise License Optimization, is required. For example, some vendor licenses require knowledge of the number of VMs associated with a given physical server. In one case, an application is entitled to be installed on up to 4 VMs per physical server and still consumes only one license. Additional copies of the application running on other VMs on that same physical server each require an additional license. Other types of licenses require knowledge of the underlying physical hardware such as the processor speed, number of processors, and/or the number of cores. This can be problematic because the physical hardware may be hidden from the virtual environment by the hypervisor.

Dynamic virtualization, where running VMs can be moved from one physical host to another, further complicates software license compliance. Software licensing that is bound to physical host processors may result in an enterprise drifting out of licensing compliance if a VM is relocated to a different physical host with more CPUs. Some software vendors place license restrictions on the frequency of application transfers from one server to another (mobility restrictions), thereby compounding the risk of compliance drift. Since applications are contained within a VM, it's easy to violate this mobility rule and drift out of license compliance.

| License Type or Term | Data Required for License Entitlement |
| --- | --- |
| Maximum Number of VMs/server | VM properties from hypervisor or underlying OS |
| Mobility restrictions | VM/server identification information |
| Processor (CPU or Core) | Hardware parameters: number of CPUs/cores per server |
| Processor Value Unit (PVU) | Hardware parameters: CPU class |

Software Asset Management Tool Requirements

Software asset management tools that can meet these virtualized datacenter license compliance challenges should have the following capabilities:
- Automatically discover virtual servers (e.g., VMware ESX Servers and Virtual Center Servers) on the network.
- Be able to correlate VMs to physical host machines and determine the number of VMs per server.
- Collect the hardware resource data (number of processors, processor type, number of cores, speed, etc.) from the hypervisor; collect hardware resource allocations per VM.
- Collect software inventory and usage data for each VM—typically this would be done via an agent that has been installed on the VM. Examples include Microsoft's SMS/SCCM agent.

Just like the software asset management tools used in physical environments, tools for virtual environments also need to be able to translate the raw software inventory data into a recognized set of applications installed on each VM. This application recognition process may take into account various types of inventory data, including file evidence, add/remove program information, and WMI data.
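To show how a VM-count rule like the one described above turns into a reconciliation calculation, here is a small, hypothetical Python sketch. The "4 VMs per license" rule, the host and VM names, and the purchase count are illustrative assumptions, not any specific vendor's terms.

```python
import math
from collections import defaultdict

# Hypothetical installation records: (physical_host, vm_name) pairs where the application is installed.
installs = [
    ("esx-host-01", "vm-web-1"), ("esx-host-01", "vm-web-2"),
    ("esx-host-01", "vm-app-1"), ("esx-host-01", "vm-app-2"),
    ("esx-host-01", "vm-db-1"),                                      # 5 VMs on host 01
    ("esx-host-02", "vm-report-1"), ("esx-host-02", "vm-report-2"),  # 2 VMs on host 02
]

VMS_PER_LICENSE = 4  # assumed rule: up to 4 VMs per physical server consume one license

def licenses_required(install_records, vms_per_license=VMS_PER_LICENSE):
    """Count licenses consumed per physical host, then sum across hosts."""
    vms_per_host = defaultdict(set)
    for host, vm in install_records:
        vms_per_host[host].add(vm)
    return sum(math.ceil(len(vms) / vms_per_license) for vms in vms_per_host.values())

purchased = 2
required = licenses_required(installs)
print(f"Licenses required: {required}, purchased: {purchased}, "
      f"{'compliant' if purchased >= required else 'OUT OF COMPLIANCE'}")
```

In this example, host 01 with five VMs consumes two licenses and host 02 with two VMs consumes one, so three licenses are required against two purchased: the kind of drift a SAM tool is meant to surface.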
Software asset management tools should also be able to reconcile the list of installed applications with software purchase data, license type, and associated conditions of use to generate a detailed license compliance report and out-of-compliance alerts. The following figure shows a block diagram of a software asset management solution for both physical and virtual environments that incorporates: virtual server discovery, an inventory importer, application recognition, license reconciliation, a contract and PO manager, as well as reporting capabilities. A robust next generation software asset management solution enables IT organizations to keep track of what applications are installed and in use throughout the physical and virtual enterprise, allowing them to optimize their software investment, reduce costs, and avoid vendor audit surprises.
<urn:uuid:14707dca-d6be-46d4-a724-a976b6bf56b5>
CC-MAIN-2017-04
http://blogs.flexerasoftware.com/elo/2010/07/virtualization-software-license-management.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00161-ip-10-171-10-70.ec2.internal.warc.gz
en
0.897685
1,440
2.796875
3
The US government today took a bold step toward perhaps finally getting some offshore wind energy development going, with $50 million in investment money and the promise of a renewed effort to develop the energy source. The Department of the Interior and Department of Energy have teamed on what they call the joint National Offshore Wind Strategy: Creating an Offshore Wind Industry in the United States. The plan focuses on overcoming three key challenges that have made offshore wind energy practically non-existent in the US: the relatively high cost of offshore wind energy; technical challenges surrounding installation, operations, and grid interconnection; and the lack of site data and experience with project permitting processes.

Layer 8 Extra: 10 hot energy projects that could electrify the world

In support of this plan, the DOE announced three projects that will be funded up to $50.5 million over 5 years to develop breakthrough offshore wind energy technology and to reduce specific market barriers to its deployment:
- Technology Development (up to $25 million over 5 years): DOE will support the development of innovative wind turbine design tools and hardware to provide the foundation for a cost-competitive and world-class offshore wind industry in the United States. Specific activities will include the development of open-source computational tools, system-optimized offshore wind plant concept studies, and coupled turbine rotor and control systems to optimize next-generation offshore wind systems.
- Removing Market Barriers (up to $18 million over 3 years): DOE will support baseline studies and targeted environmental research to characterize key industry sectors and factors limiting the deployment of offshore wind. Specific activities will include offshore wind market and economic analysis; environmental risk reduction; manufacturing and supply chain development; transmission planning and interconnection strategies; optimized infrastructure and operations; and wind resource characterization.
- Next-Generation Drivetrain (up to $7.5 million over 3 years): DOE will fund the development and refinement of next-generation designs for wind turbine drivetrains, a core technology required for cost-effective offshore wind power.

Meanwhile, the DOI said it has identified four Wind Energy Areas offshore, an approach that uses designated areas, coordinated environmental studies, large-scale planning and expedited approval processes to speed offshore wind energy development. The areas, on the Outer Continental Shelf offshore Delaware (122 square nautical miles), Maryland, New Jersey, and Virginia, will receive early environmental reviews that will help to lessen the time required for review, leasing and approval of offshore wind turbine facilities. The department said that by March it expects to identify Wind Energy Areas off of North Atlantic states, including Massachusetts and Rhode Island, and launch additional NEPA environmental reviews for those areas. A similar process will occur for the South Atlantic region, namely North Carolina, this spring, the agency stated.

Under the National Offshore Wind Strategy, the Department of Energy is pursuing a scenario that includes deployment of 10 gigawatts of offshore wind generating capacity by 2020 and 54 gigawatts by 2030. In a report last fall, the DOE said that if wind is ever to be a significant part of the energy equation in this country we'll need to take it offshore - into the deep oceans.

Offshore wind could harness more than 4,000 GW of electricity, according to the DOE. The DOE noted that while the United States has not built any offshore wind projects, about 20 projects representing more than 2,000 MW of capacity are in the planning and permitting process. Most of these activities are in the Northeast and Mid-Atlantic regions, although projects are being considered along the Great Lakes, the Gulf of Mexico, and the Pacific Coast. The deep waters off the West Coast, however, pose a technology challenge for the near term. "Although Europe now has a decade of experience with offshore wind projects in shallow water, the technology essentially evolved from land-based wind energy systems. Significant opportunities remain for tailoring the technology to better address key differences in the offshore environment. These opportunities are multiplied when deepwater floating system technology is considered, which is now in the very early stages of development," the report states.

Last year Google said it wants a big part of the energy that could be generated from offshore wind farms. The company said it inked "an agreement to invest in the development of a backbone transmission project off the Mid-Atlantic coast that offers a solid financial return while helping to accelerate offshore wind development-so it's both good business and good for the environment. The new project can enable the creation of thousands of jobs, improve consumer access to clean energy sources and increase the reliability of the Mid-Atlantic region's existing power grid."

The project, known as the Atlantic Wind Connection (AWC) backbone, will be built across 350 miles of ocean from New Jersey to Virginia and will be able to connect 6,000 MW of offshore wind turbines. That's equivalent to 60% of the wind energy that was installed in the entire country last year and enough to serve approximately 1.9 million households, Google stated. "The AWC backbone will be built around offshore power hubs that will collect the power from multiple offshore wind farms and deliver it efficiently via sub-sea cables to the strongest, highest capacity parts of the land-based transmission system. This system will act as a superhighway for clean energy. By putting strong, secure transmission in place, the project removes a major barrier to scaling up offshore wind, an industry that despite its potential, only had its first federal lease signed last week and still has no operating projects in the U.S.," Google stated.

Follow Michael Cooney on Twitter: nwwlayer8
<urn:uuid:25095e7b-f036-40d1-82d6-7aa87e3b5fd1>
CC-MAIN-2017-04
http://www.networkworld.com/article/2228458/security/us-tries-to-fire-up-mighty-offshore-wind-energy-projects.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00161-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938149
1,132
2.609375
3
Hughes Satellite Technology Provides Critical Communications in the Aftermath of Superstorm Sandy

On October 29, 2012, Superstorm Sandy struck the eastern seaboard, impacting states from Florida to Maine. According to the National Hurricane Center, Superstorm Sandy was the largest Atlantic hurricane recorded in history, and the second most expensive after Katrina in 2005. Destroying parts of the Caribbean, Mid-Atlantic, and Northeastern United States, Sandy caused approximately $20 billion in damage and an estimated loss of $50 billion in revenue from interruption to businesses.[1]

The telecommunications infrastructure in these areas was also severely damaged. In fact, Superstorm Sandy knocked out approximately 25 percent of all cellphone communications across 10 states.[2] An estimated 800,000[3] New Yorkers lost power during the storm, and the city's telecom network experienced downed landlines and cell towers. Adding to the situation, a key hub for a major telecom provider located in lower Manhattan flooded, so it was unable to provide Internet or voice communications to its customers. In fact, thousands of people remained without service even six months after the storm.[4]

But in the aftermath of Sandy, vital communications were rapidly restored in some areas by using satellite technology. Unlike terrestrial technologies that rely on ground-based infrastructure, such as cell towers that are vulnerable to being disabled or knocked out when disaster strikes, satellite provides a true alternate and robust communications path that is easy to deploy virtually anywhere using small dish antennas. Several examples highlight the important role it played.

The Rockaway and Far Rockaway areas of Queens, New York—home to between 175,000 and 200,000 people—were hit hard during the storm and had little or no communications. The Federal Emergency Management Agency (FEMA) opened Disaster Recovery Centers (DRCs) in the area, providing much-needed information about recovery services, such as housing/rental assistance and referrals to other assisting agencies (e.g., Department of Veterans Affairs, Social Security Administration, Small Business Administration). With terrestrial lines down, volunteers and disaster victims couldn't make calls or apply for services online. Hughes responded promptly by providing 20 DRCs with its satellite broadband terminals and high-speed connectivity, including Voice-over-IP (VoIP), to help people obtain needed services.

More than 100 homes were lost in the Breezy Point area of New York from a six-alarm fire that ensued during Superstorm Sandy. Habitat for Humanity set up a command center nearby to help coordinate the rebuilding efforts; however, no terrestrial communications were available. The Global VSAT Forum—the voice of the satellite industry—put out a call to its members, and Hughes joined in the recovery effort by providing key communications capabilities, including broadband services. "I can still remember the day that the truck pulled up in front of our makeshift office at a gutted-out church," said Jim Killoran, Executive Director, Habitat for Humanity of Westchester, New York. "It was like the cavalry arriving." Hughes provided broadband service through its Internet Access Solutions, powered by the EchoStar XVII® satellite with JUPITER™ high-throughput technology—a next-generation Ka-band satellite system with fast Internet speeds of up to 15 Mbps.

"Satellite connectivity again proved its resilience in the face of a disaster," said Tony Bardo, Assistant Vice President of Government Solutions at Hughes. "And we were honored to have played an important part in supporting Habitat for Humanity's noble efforts to help families rebuild their lives."

[1] NOAA's National Weather Service Newport/Morehead City, NC, Event Summaries/Case Studies, Oct. 29, 2012
[2] Peter Svensson, "Sandy Takes Out 25 PCT of Cell Towers," AP, Oct. 30, 2012
[3] Hurricane Sandy After Action Report and Recommendations to Mayor Michael R. Bloomberg, May 2013
[4] Phillip Dampier, "Six Months Later, Still No Verizon Service in the Rockaways," Stop the Cap!, April 8, 2013

© Copyright Hughes Network Systems LLC. All Rights Reserved. The HUGHES logo is a registered trademark of Hughes Network Systems, LLC, an EchoStar company. All other logos and trademarks are the property of their respective trademark owners. ® and ™ denote registered trademarks in the United States and other countries.
<urn:uuid:b844cedc-69ec-4524-b385-f76ebc2f52ec>
CC-MAIN-2017-04
http://www.hughes.com/resources/superstorm-sandy-1?locale=en
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00279-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934785
890
3.359375
3
Black Box Explains... SCSI-1, SCSI-2, SCSI-3, and SCSI-5

There are standards…and there are standards applied in real-world applications. This Black Box Explains illustrates how SCSI is interpreted by many SCSI manufacturers. Think of these as common SCSI connector types, not as firm SCSI specifications. Notice, for instance, there's a SCSI-5, which isn't listed among the other approved and proposed specifications. However, for advanced SCSI multiport applications, SCSI-5 is often the connector of choice.

SCSI-1
Supports transfer rates up to 5 MBps and seven SCSI devices on an 8-bit bus. The most common connector is the Centronics® 50 or a DB50. A Micro Ribbon 50 is also used for internal connections. SCSI-1 equipment, such as controllers, can also have Burndy 60 or 68 connectors.

SCSI-2
SCSI-2 introduced optional 16- and 32-bit buses called "Wide SCSI." Transfer rate is normally 10 MBps, but SCSI-2 can go up to 40 MBps with Wide and Fast SCSI. SCSI-2 usually features a Micro D 50-pin connector with thumbclips. It's also known as Mini 50 or Micro DB50. A Micro Ribbon 60 connector may also be used for internal connections.

SCSI-3
Found in many high-end systems, SCSI-3 commonly uses a Micro D 68-pin connector with thumbscrews. It's also known as Mini 68. The most common bus width is 16 bits with transfer rates of 20 MBps.

SCSI-5
SCSI-5 is also called a Very High-Density Connector Interface (VHDCI) or 0.8-mm connector. It's similar to the SCSI-3 MD68 connector in that it has 68 pins, but it has a much smaller footprint. SCSI-5 is designed for next-generation SCSI connections. Manufacturers are integrating this 0.8-mm design into controller cards. It's also the connector of choice for advanced SCSI multiport applications. Up to four channels can be accommodated in one card slot. Connections are easier where space is limited.
<urn:uuid:43f75c77-684d-4bee-aca0-85ffa147857f>
CC-MAIN-2017-04
https://www.blackbox.com/en-ca/products/black-box-explains/black-box-explains-scsi-1-scsi-2-scsi-3-and-scsi-5
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00095-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933512
474
3.078125
3
In a paper set to be published this week in the scientific journal Nature, IBM researchers are claiming a huge breakthrough in spintronics, a technology that could significantly boost capacity and lower power use of memory and storage devices. Spintronics, short for "spin transport electronics," uses the natural spin of electrons within a magnetic field in combination with a read/write head to lay down and read back bits of data on semiconductor material. By changing an electron's axis in an up or down orientation - all relative to the space in which it exists -- physicists are able to have it represent bits of data. For example, an electron on an upward axis is a one; and an electron on a downward axis is a zero. Spintronics has long faced an intrinsic problem because electrons have only held an "up or down" orientation for 100 picoseconds. A picosecond is one trillionth of a second (one thousandth of a nanosecond). One hundred picoseconds is not enough time for a compute cycle, so transistors cannot complete a compute function and data storage is not persistent. In the study published in Nature, IBM Research and the Solid State Physics Laboratory at ETH Zurich announced they had found a way to synchronize electrons, which could extend their spin lifetime by 30 times to 1.1 nanoseconds, the time it takes for a 1 GHz processor to cycle. The IBM scientists used ultra short laser pulses to monitor the evolution of thousands of electron spins that were created simultaneously in a very small spot, said Gian Salis, co-author of the Nature paper and a scientist in the Physics of Nanoscale Systems research group at IBM Research. Usually, such spins find electrons randomly rotating and quickly losing their orientation. In this study, IBM and ETH researchers found, for the first time, how to arrange the spins neatly into a regular stripe-like pattern -- the so-called persistent spin helix. The concept of locking the spin rotation was originally proposed as a theory back in 2003, Salis said. Since then, some experiments found indications of such locking, but the process had never been directly observed until now, he added. "These rotations of direction of spin were completely uncorrelated," Salis said. "Now we can synchronize this rotation, so they don't lose their spin but also rotate like a dance, all in one direction." "We've shown we completely understand what's going on there, and we've proven that the theory works," he added. The IBM researchers have been using gallium arsenide, a material commonly used today in electronics, diodes and solar cells, as their primary semiconductor material. Today's computing technology encodes and processes data by the electrical charge of electrons. However, researchers say the technique becomes limited as semiconductor dimensions shrink to the point where the flow of electrons can no longer be controlled. For example, NAND flash products already use circuitry that is less than 20 nanometers in width, which is approaching atomic size. Spintronics could surmount this memory impasse by harnessing the spin of electrons instead of their charge. The new understanding of spintronics can not only give scientists unprecedented control over the magnetic movements inside devices, but also opens new possibilities for creating more energy efficient electronic devices. IBM is not alone in its pursuit of spintronics technology research. 
Three years ago, physicists from the Institute of Materials Physics and Chemistry in Strasbourg, France, built new laser technology on the foundation of spintronics and won the 2007 Nobel physics prize for the effort. The French physicists discovered a way to use lasers to accelerate storage I/O on hard discs by up to 100,000 times current read/write methods. A problem with spintronics had been the slow speed of magnetic sensors that are used to detect bits of data. But according to the 2007 French study, published in the scientific journal Nature Physics, the team used a "femtosecond" laser, which produces super-fast laser bursts, to alter electron spin, speeding up the read/write process.

IBM's researchers said their breakthrough opens the door for efforts to create transistors and non-volatile storage that would use considerably less power than today's NAND flash technology. However, one rather large sticking point is that researchers haven't been able to produce their results at room temperature, an important requirement for producing a viable processor or memory device. Currently, experiments take place at very low temperatures of 40 degrees Kelvin, or -233 Celsius (-387 Fahrenheit).

"There's no device for this yet, but it's a breakthrough in that we now know how to increase the electron's spin lifetime in channel," Salis said. "Next, one thing we'd really like to do is increase that [spin lifetime] by a factor of 30."

Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld. Follow Lucas on Twitter at @lucasmearian, or subscribe to Lucas's RSS feed. His e-mail address is firstname.lastname@example.org.
<urn:uuid:20ffb31d-be87-4ad1-8033-b5cd9ae5d1c7>
CC-MAIN-2017-04
http://www.computerworld.com/article/2505658/emerging-technology/ibm-claims-spintronics-memory-breakthrough.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00399-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941279
1,058
3.578125
4
Enterprise Applications: 20 Things You Might Not Know About COBOL (as the Language Turns 50)

The name COBOL was selected during a meeting of the Short Range Committee, the organization responsible for submitting the first version of the language, on Sept. 18, 1959. This committee, formed by a joint effort of industry, major universities and the U.S. government, was known as CODASYL (Conference on Data Systems Languages). CODASYL completed the specifications for COBOL as 1959 ended. These were approved by the Executive Committee in January 1960 and sent to the Government Printing Office, which edited and printed these specifications as COBOL 60. COBOL was developed within a six-month period, and yet is still in use more than 50 years later.
<urn:uuid:9b00e738-6e5a-4d40-a500-8d7326245f27>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Enterprise-Applications/20-Things-You-Might-Not-Know-About-COBOL-As-the-Language-Turns-50-103943
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00399-ip-10-171-10-70.ec2.internal.warc.gz
en
0.975294
161
2.921875
3
In today's networking systems, LANs are becoming larger and more complicated, and people are looking for equipment that is cost-effective, flexible and easy to manage. This is where the fiber media converter comes in: it can connect different types of media effectively and seamlessly. The fiber media converter is one of the key components in modern networking. Its high bandwidth capacity, long-distance operation and reliability make fiber optics the most desirable channel for data communications.

A fiber media converter (in many places also known simply as a fiber converter) converts the short-distance electrical signals carried over twisted-pair copper into optical signals that can travel long distances, while ensuring the smooth transmission of Ethernet data packets between the two network segments. It extends the network transmission distance limit from the copper wire's 100 meters to more than 100 kilometers (over single-mode fiber). Media converters are typically deployed at the access layer of broadband metropolitan area networks, and they also play a large role in helping fiber reach the last mile and the outer layer of the network.

What is inside a media converter?

A media converter is composed of two transceivers, or MAUs (Media Attachment Units), that can transmit data to and receive data from each other, and a power supply. Each transceiver (MAU) has a different industry-standard connector to join the different media: one media type goes in and the other media type comes out. The connectors comply with IEEE standard specifications and use standard data encodings and link tests. Fiber media converter types vary from small standalone devices and PC card converters to high port-density chassis systems that offer many advanced features for network management.

Working distances of fiber optic converters differ. A typical multimode fiber converter has a maximum working distance of about 2 km, while a single-mode media converter can reach 20 km, 40 km, 60 km, 80 km and up to 120 km.

Here I would like to introduce the fiber-to-Ethernet media converter. Ethernet fiber media converters are often mounted on the wall near or directly over a telephone jack and do not need to be tampered with once installed. They provide a fiber optic connection's extremely high speeds without having to install a complicated series of fiber optic cables. Ethernet fiber media converters usually have their own power adapter and can transfer several gigabytes of data at a time. In fact, fiber-to-Ethernet media converters sold for commercial purposes can house and manage up to 19 different connections simultaneously. Fiber optic media converters can be used in any part of the network, including between routers, servers, switches and hubs. It is even possible for them to be integrated alongside your workstation. Media converters make the configuration of any network more flexible.
<urn:uuid:48040728-e86c-490e-af95-fda44245f8f2>
CC-MAIN-2017-04
http://www.fs.com/blog/guide-to-fiber-media-converter-technology.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00335-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930222
588
2.875
3
An underground fiber optic cable can be accidentally cut. The most common cause is a backhoe digging through it; if that happens to you, you can simply look for the backhoe and find the cut cable. If the damage is caused by moles, however, it is much harder to locate, and you will need some equipment. Here are some simple tips for repairing a cut underground fiber optic cable.

The first thing you have to do is find the break in the cable. Technicians usually use an instrument called an optical time domain reflectometer, or OTDR. It works like radar: it sends a light pulse down the cable, and the pulse is reflected back to the device when it encounters a break, which tells the technician where the break is located.

After locating the break, dig up the damaged section of cable. Strip around 9 feet of the cable using the cable's rip cord. Peel the jacket gently so that the fiber optic tubes are exposed, and remove the excess jacket. Clean off the cable gel with cable gel remover and cut away any sheath and yarn. Separate the fiber tubes, taking care not to damage the strength member, since it is needed to hold the cable in the splice enclosure. Next, strip about 2 inches of the coating from each fiber with a stripping tool and clean the exposed fiber. Trim any damage on the fiber ends using a high-precision fiber cleaver. If you plan to perform a fusion splice, slide a fusion splice protector onto the fiber first. From this point on, keep the fibers clean using lint-free fiber wipes soaked in alcohol. If you are making a mechanical connection instead, attach a quick-connect fiber optic connector to each fiber after cleaning it with alcohol and a lint-free wipe. Make sure the bare fiber does not touch anything.

For a fusion splice, place the fibers to be spliced in the fusion splicer and fire the splicer according to its manual. Then move the fused joint into the heat-shrink oven and press the button to shrink the protector. In general, a fusion splice is better than a mechanical splice because its signal loss is under 0.1 decibels (dB), whereas a mechanical splice has a signal loss under 0.5 dB.

Finally, verify the repaired connection with the OTDR. Then place the splices back into the splice enclosure, close the enclosure and rebury the cable.
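To put those splice-loss figures in context, here is a rough, hypothetical Python sketch of a loss budget for a repaired span. The attenuation coefficient, span length and splice count below are assumptions chosen only for illustration; only the 0.1 dB and 0.5 dB splice-loss values come from the text above.

```python
# Rough link-loss comparison for a repaired span, using the splice-loss figures above.
FIBER_LOSS_DB_PER_KM = 0.35   # assumed single-mode attenuation at 1310 nm
SPAN_KM = 10.0                # assumed length of the repaired span
SPLICES_PER_REPAIR = 2        # one splice on each side of the repaired section

def total_loss(splice_loss_db: float) -> float:
    """Fiber attenuation plus the loss added by the repair splices."""
    return SPAN_KM * FIBER_LOSS_DB_PER_KM + SPLICES_PER_REPAIR * splice_loss_db

print(f"Fusion repair:     {total_loss(0.1):.2f} dB")   # <= 0.1 dB per fusion splice
print(f"Mechanical repair: {total_loss(0.5):.2f} dB")   # <= 0.5 dB per mechanical splice
```

The difference of roughly 0.8 dB between the two repair methods on this assumed span is one reason fusion splicing is usually preferred when the equipment is available.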
<urn:uuid:d99d80b7-d8a7-4946-b676-79145e50d0da>
CC-MAIN-2017-04
http://www.fs.com/blog/how-to-repair-a-cut-underground-fiber-optic-cable.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00545-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925121
555
2.515625
3
Cracking a password may seem like a next to impossible task, but you'd be surprised how easy it can be. There are dozens of password cracking programs on the market, each with their own special recipe, but they all basically do one of two things: create variations from a dictionary of known common passwords or attempt every possible combination using a method called a brute force attack. Let's look at how each technique works and how to protect against them.

It's important to understand at the outset that professional password crackers aren't looking to log in to your PayPal account. That process is slow to begin with, and most services will lock out repeated login attempts anyway. Rather, the pros work against password files that they download from breached servers. These files are usually easy to access from the root level of most server operating systems or are maintained by individual applications. These files may be protected with weak encryption algorithms, which are not much of an impediment to the determined hacker.

Once criminals obtain a password list they can take as many shots as they like to break it. Their goal generally isn't to crack an individual password, but to run tests against the entire file, knocking down their targets one by one. Modern graphics hardware makes this incredibly fast. For example, some commercial products can test trillions of passwords per second on a standard desktop computer using a high-end graphics processor. This table of password recovery speeds is truly scary. It shows that a seven-character password composed of upper and lower case letters and digits has 3.5 trillion permutations. While that may sound like a lot, today's speedy desktop computers can test all of them in an hour or two. An engineering workstation, or several PCs strung together, can finish the task in 10 seconds.

Let's look at the two most common password-cracking techniques.

Dictionary Attack

This technique uses lists of known passwords, word list substitution and pattern checking to find commonly used passwords, or those that are discoverable with a bit of personal information. It isn't difficult to find lists of compromised passwords. Sites like PasswordRandom.com publish them, and much larger lists are available on the dark web at little cost. A criminal can probably unlock 10% to 20% of a password file using just the 10,000 most common passwords. In fact, it has been estimated that about 75% of online adults have used one or more of the 500 most popular passwords.

After decrypting the password file, a dictionary attack uses text strings and variations thereof to test different combinations. For example, many people append numbers to their names or user names, which may be stored in plain text. If a user named Robert has the password "Robert123," a dictionary attack will figure that out in seconds. The software simply cycles through every possible combination to identify the ones that work. If a little information is known about people in the database, the job is even easier. For example, people frequently use the names of children, addresses, phone numbers, sports teams and birthdays as passwords, either alone or in combination with other characters. Since most people append characters to the end of the password, it's easy for dictionary cracks to cycle through all of those likely possibilities. Social media is an attacker's dream. People freely post personal information in their profiles or tweet repeatedly about the sports teams or celebrities they follow. These are natural paths for a dictionary crack to pursue.
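As a deliberately tiny illustration of the variation-generating idea behind a dictionary attack, here is a hypothetical Python sketch. The base words, suffixes and use of an unsalted MD5 hash are assumptions chosen only to show the mechanics against the "Robert123" example above; real tools use far larger word lists and smarter mangling rules.

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# A toy "breached" entry: in reality this would come from a leaked password file.
leaked_hash = md5_hex("Robert123")

base_words = ["password", "robert", "Robert", "letmein"]   # tiny stand-in for a real word list
suffixes = ["", "1", "123", "2016", "!"]

def dictionary_guesses(words, suffix_list):
    """Yield simple variations: word + common suffix, mirroring the 'name plus numbers' pattern."""
    for word in words:
        for suffix in suffix_list:
            yield word + suffix

for guess in dictionary_guesses(base_words, suffixes):
    if md5_hex(guess) == leaked_hash:
        print("Cracked:", guess)
        break
```

Even this toy loop recovers the example password in a handful of guesses, which is why appending a birthday or "123" to a name adds almost no real strength.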
These are natural paths for a dictionary crack to pursue.
Brute Force Crack
This is just what it sounds like: a technique to reveal those stubborn passwords that can't be unlocked by a dictionary. Today's multi-core processors and graphics processing units have made brute force tactics more practical than they used to be. Machines that can be purchased for less than $1,000 are capable of testing billions of passwords per second.
Short passwords are easiest to guess, so attackers typically use brute force tactics to unscramble the five- and six-character passwords that didn't yield to the dictionary approach, a process that might only take a few hours. For longer passwords, brute force and dictionary techniques may be combined to narrow the realm of possible combinations. Some brute force cracking software also uses rainbow tables, which are precomputed tables that map hashed or encrypted values back to the original text and can sometimes shortcut the guessing process.
How vulnerable are password files to brute force attacks? In 2013 the tech news site Ars Technica gave an editor who had no experience with password cracking a list of 16,000 encrypted passcodes and challenged him to break as many as possible. Within a few hours, he had deciphered nearly half of them. The same list was then given to some skilled hackers, one of whom cracked 90% of the codes in about 20 hours.
Some Good News and Some Bad News
If some of the statistics cited above are intimidating, rest easy. The biggest problem with password protection is that many people don't use strong passwords. The laws of mathematics dictate that longer passwords are harder to break than short ones, and passwords that contain random combinations of characters are more secure than those that conform to a known pattern. A 13-character password that mixes alphanumeric characters and punctuation symbols is considered impractical to break with today's technology. Unfortunately, few people can remember a random 13-character string, much less multiple strings for different logins. Equally unfortunate – from a security perspective – is that computers are getting faster and cracking algorithms are getting better. Five years ago, an eight-character password was considered strong enough. Five years from now, 18 characters may be too weak.
This is where password management software is valuable. Password managers store passwords of any length and can regularly generate new passwords without the user having to bother to remember them. They can also be protected by two-factor authentication, which is considered to be almost unbreakable in any context.
By the way, in case you're wondering why password-cracking programs aren't illegal, it's because there are perfectly valid and legal reasons to use them. Security professionals employ these tools to test the strength of their own software, and password crackers are widely used by law enforcement agencies to fight crime. As with any technology, these tools can be used for evil, as well as for good.
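To make the two techniques concrete, here is a minimal, hedged sketch in Python. It assumes a leaked list of unsalted SHA-256 hashes, a tiny wordlist and a few common suffixes, all of which are illustrative stand-ins; real tools such as Hashcat or John the Ripper work the same way in principle but run highly optimized GPU code against far larger dictionaries and rule sets.
```python
import hashlib

def dictionary_attack(target_hashes, wordlist, suffixes=("", "1", "123", "!")):
    """Try common words plus simple variations against a set of unsalted SHA-256 hashes."""
    cracked = {}
    for word in wordlist:
        for suffix in suffixes:
            candidate = word + suffix
            digest = hashlib.sha256(candidate.encode()).hexdigest()
            if digest in target_hashes:
                cracked[digest] = candidate
    return cracked

def brute_force_keyspace(alphabet_size, length):
    """How many combinations a brute force attack has to cover."""
    return alphabet_size ** length

if __name__ == "__main__":
    # Hypothetical leaked hash list; "Robert123" stands in for a weak real password.
    leaked = {hashlib.sha256(b"Robert123").hexdigest()}
    print(dictionary_attack(leaked, ["password", "letmein", "Robert"]))

    # Seven characters drawn from upper/lower case letters and digits (62 symbols):
    print(f"{brute_force_keyspace(62, 7):,} permutations")  # ~3.5 trillion, as cited above
```
The keyspace arithmetic is also why every extra character helps so much: each added position multiplies the number of combinations by the size of the alphabet again.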
<urn:uuid:491dd0f2-68b3-46c8-b4d7-5e9dcad1a6d1>
CC-MAIN-2017-04
https://blog.keepersecurity.com/page/2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00087-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94707
1,297
2.6875
3
What is network security? How does it protect you? How does network security work? What are the business benefits of network security? You may think you know the answers to basic questions like, What is network security? Still, it's a good idea to ask them of your trusted IT partner. Why? Because small and medium-sized businesses (SMBs) often lack the IT resources of large companies. That means your network security may not be sufficient to protect your business from today's sophisticated Internet threats. What Is Network Security? In answering the question What is network security?, your IT partner should explain that network security refers to any activities designed to protect your network. Specifically, these activities protect the usability, reliability, integrity, and safety of your network and data. Effective network security targets a variety of threats and stops them from entering or spreading on your network. What Is Network Security and How Does It Protect You? After asking What is network security?, you should ask, What are the threats to my network? Many network security threats today are spread over the Internet. The most common include: - Viruses, worms, and Trojan horses - Spyware and adware - Zero-day attacks, also called zero-hour attacks - Hacker attacks - Denial of service attacks - Data interception and theft - Identity theft How Does Network Security Work? To understand What is network security?, it helps to understand that no single solution protects you from a variety of threats. You need multiple layers of security. If one fails, others still stand. Network security is accomplished through hardware and software. The software must be constantly updated and managed to protect you from emerging threats. A network security system usually consists of many components. Ideally, all components work together, which minimizes maintenance and improves security. Network security components often include: - Anti-virus and anti-spyware - Firewall, to block unauthorized access to your network - Intrusion prevention systems (IPS), to identify fast-spreading threats, such as zero-day or zero-hour attacks - Virtual Private Networks (VPNs), to provide secure remote access What are the Business Benefits of Network Security? With network security in place, your company will experience many business benefits. Your company is protected against business disruption, which helps keep employees productive. Network security helps your company meet mandatory regulatory compliance. Because network security helps protect your customers' data, it reduces the risk of legal action from data theft. Ultimately, network security helps protect a business's reputation, which is one of its most important assets.
<urn:uuid:362a4493-f6c7-4f2a-aec1-5577e03ad055>
CC-MAIN-2017-04
http://www.cisco.com/cisco/web/UK/solutions/small_business/resource_center/articles/secure_my_business/what_is_network_security/index.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00297-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948037
537
2.78125
3
Since academics first began studying communication, they've been trying to figure out who we talk to and how those networks change with the invention of new mediums of interaction. Who you could talk to, and even what you might talk about, obviously differed between the eras of the covered wagon and the cell phone. And now we have an instantaneous, global and (mostly) free platform for talking to virtually anyone: the Internet. So how has it altered the real-world geography of communication?
Some previous efforts to address this question have come out of the workplace (researchers can't query Google for all of our Gmail data, but large international companies can do this with their own employees). There's only so much to be learned, however, from the email correspondence between a company man in L.A. and his coworker in China.
"The holy grail has been to look at people in real life, to look at people outside the workplace, to see when people are on their own, communicating as part of their general course of life, how has the electronic revolution changed that?" says Kalev Leetaru, a University Fellow at the University of Illinois Graduate School of Library and Information Science. "That has been a difficult thing to look at because there hasn't been much data."
Now, however, there is more data than most computers can process coming from public social networking platforms like Twitter. We wrote in late 2011 about some early research suggesting that many Twitter users in fact follow other people located within their same city, evidence, Richard Florida wrote, that the Internet is reinforcing the value of place instead of eliminating it. But now that Twitter is a few years older – and considerably more global – Leetaru and several colleagues have conducted a massive new analysis of the site that suggests the opposite: "In effect," Leetaru says, "location plays a much lesser role now in terms of who we talk to, what we talk about, and where we get our information."
<urn:uuid:05569259-ad90-4253-a087-cf9efa09409b>
CC-MAIN-2017-04
http://www.nextgov.com/big-data/2013/05/how-twitter-changing-geography-communication/63206/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00233-ip-10-171-10-70.ec2.internal.warc.gz
en
0.979594
414
3.109375
3
Information Security is a fast-changing field. The techniques of the attackers are constantly changing, so it is necessary to study attack methods and adapt when necessary.
Security Operations and Security Event Analysis effectiveness can be greatly improved through visualizing security event data. While some people take great pleasure in looking at long lists of statistics from firewalls, intrusion detection systems and other security-related logs, most find it not only boring but also ineffective. Visualizing the data can help an analyst spot patterns and trends that may otherwise be missed. It also makes your reports look pretty. 🙂
|Mapping Web Attacks with Splunk||Quickly map web application attacks such as the WordPress Timthumb attack using Splunk and Geolocation plugins.|
|SSH Blacklist Visualization||Using SSH blacklist data, we plot the location of the different blacklisted IPs based on an IP geolocation lookup and display them on a Google Maps visualisation.|
|Tor Exit Node Visualization||Tor is a network of relays that are able to provide anonymity to its users. It is used by people all around the world; often by those who are living under oppressive regimes. An exit node is where the action is: this is where the traffic comes out of the encrypted tunnels and really hits the internet. This visualisation shows a breakdown of those exit nodes.|
Tutorials and Guides
Introductory tutorials and guides for building, installing and using Open Source security solutions on your own systems.
|Nmap Tutorial||A basic tutorial for installing Nmap and getting started using this powerful tool.|
|SQLmap Tutorial||With SQLmap you can go from initial discovery of SQL Injection to complete database and server compromise. This tutorial will get you started.|
|Nikto Tutorial||Install Nikto and scan web servers with this simple tutorial.|
|XSS Tutorial||An introductory tutorial to cross site scripting (XSS). Understand the basics of how XSS works in order to understand the risk.|
|Port Scanner Guide||Knowing how a Port Scanner benefits your security testing is an essential step in building secure systems.|
|10 years of SQL Injection||A compilation of the largest SQL Injection attacks over the past 10 years. A good reminder of the need for secure web application development!|
|Firewall Ubuntu with UFW||Configure an IP Tables Firewall on Ubuntu with UFW in this tutorial.|
Passive Website Analysis
Looking at the technology behind the most highly trafficked websites in the world (the top one million sites) provides insight into Internet trends, including Internet Security, where our particular interests lie. Over 12 months ago, we did an analysis of the Top 1 Million websites that included details of the web servers, hosting companies, web applications and locations of the sites. We are working on expanding this research into new areas and building a new set of data for 2012.
|100K Top Websites powered by WordPress||In this post we look at the top 100,000 WordPress sites, digging a bit deeper to pull out the Hosting Provider, Theme Name and Web Server the sites are running on. Download the full list of sites in .csv format to perform your own analysis or perhaps to see where you are sitting in the list.|
|WordPress WooThemes Framework Updates||WooThemes is one of the most successful theme development shops on the planet. In this analysis we look at how well webmasters apply security updates to the WooThemes Framework.
Theme updates are just as important as WordPress Core and Plugin updates when maintaining a WordPress installation.|
|WordPress Theme Usage||WordPress is now hitting over the 16% mark in the top 1 million websites. This analysis breaks down the most popular commercial and free themes.|
|HTTP Headers for Security||With a number of different http headers available for protecting the end user, we performed some analysis to find out how prevalent the configuration of these headers is in the top websites (a minimal version of this kind of check is sketched below the table).|
|IPv6 Infographic||During March we conducted analysis that involved looking for the presence of IPv6 AAAA records for the sites in the Top 1 Million. Through this analysis we found only 1.1% of all sites have made the move towards the new IP addressing technology.|
|WordPress Infographic||WordPress is the world's most popular content management system. With around 15% of the top websites, this Infographic explores the hosting, security updates and operating systems of those sites.|
|Hosting Report 2011||During March 2011 we examined the hosting providers of the top 1 million sites, top 100,000 sites and the top companies.|
|CMS Survey Summary||Content management systems (CMS) run many of the world's websites, both at the high end in the top 100,000 sites in the world and right down to personal blogs. This study has a look at the breakdown of the different systems.|
|Web Server Survey Summary||This analysis shows a breakdown of the web servers used by the most popular sites in the world.|
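The research index above stops at summaries, so as a rough illustration of the kind of check that sits behind an analysis like "HTTP Headers for Security", here is a small, hedged Python sketch using only the standard library. The header list and the example URL are illustrative choices, not HackerTarget's actual survey methodology.
```python
import urllib.request

# Response headers commonly checked when gauging how well a site protects its visitors.
SECURITY_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Frame-Options",
    "X-Content-Type-Options",
]

def check_security_headers(url):
    """Fetch a URL and report which common security headers are present."""
    with urllib.request.urlopen(url) as response:
        return {name: response.headers.get(name) for name in SECURITY_HEADERS}

if __name__ == "__main__":
    for name, value in check_security_headers("https://example.com").items():
        print(f"{name}: {'present' if value else 'missing'}")
```
Run across a large list of sites (with politeness delays added), the same loop produces the kind of prevalence numbers the survey summaries describe.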
<urn:uuid:fc510633-346e-4425-ae2b-bf4d4dd76da8>
CC-MAIN-2017-04
https://hackertarget.com/research/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00537-ip-10-171-10-70.ec2.internal.warc.gz
en
0.860646
1,039
2.546875
3
Sasser Worm Spreading Rapidly
The worm exploits a security flaw, but this can be prevented with a Microsoft patch. It typically shuts down the computer, then automatically reboots it and repeats this process several times, but is not thought to cause lasting damage.
Alfred Huger, senior director of engineering at Symantec, said the worm "breaks into your computer and then attempts to break into others. It chooses its victims randomly."
Mikko Hyppoenen, an anti-virus expert at Finnish internet security company F-Secure, said "It was probably some hobbyist, a teenager who has the skills and wants to show off. We expect things to get much worse when people bring their laptops into the office after the weekend."
Graham Cluley, senior technology consultant for Oxfordshire-based software security company Sophos, said home users were especially vulnerable. "They are often not running the latest anti-virus protection, haven't downloaded the latest security patches from Microsoft, and may not be running a personal firewall," he added.
<urn:uuid:d61571d8-2292-4b54-897b-b499eb0aa9d2>
CC-MAIN-2017-04
http://www.cioupdate.com/print/news/article.php/3348441/Sasser-Worm-Spreading-Rapidly.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00077-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960164
222
2.625
3
Hackberry trees and saguaro cactuses fade into the darkness on the valley floor as the sun drops behind the McDowell Mountains, whose faint outline will be invisible in a few more minutes. The Cessna Citation V jet is flying low -- dangerously low over such terrain at night were it not for a color screen on the pilot console that displays the mountains, an even closer jumble of brown hills and a blue stream of running water as if it were high noon. It is not a photograph on the screen that Honeywell Aerospace pilot Sandy Wyatt is monitoring, along with traditional needles and gauges, to make sure he is navigating safely. The image, which is constantly updated during the flight from takeoff to landing, is artificially produced to mimic the view a pilot would see outside the cockpit windscreen on a perfect blue-sky day. Yet the simulation is an accurate three-dimensional representation of the world outside, derived from a terrain database of Earth and onboard equipment that use the satellite-based global positioning system (GPS) to track the plane's path on the topographical map. This may sound like science fiction, but synthetic vision is on aviation's horizon. Proponents -- including the FAA, NASA and some airplane manufacturers -- say combining synthetic vision with technologies that improve real vision will help pilots envision and instantaneously understand what they otherwise couldn't see in bad weather, at night or in challenging flying environments that are filled with natural and manmade obstacles. "There is a lot of progress being made," said John McGraw, manager of the Federal Aviation Administration's flight technologies and procedures division. "We feel you will end up getting the best of both worlds by fusing synthetic vision with enhanced vision technologies that use external sensors on the aircraft to present the most accurate and reliable image of the real world outside." Today, pilots must study their instruments and conjure up a mental picture of where they are and what the aircraft is doing based on the aeronautical map in the pilot's lap and cockpit gauges showing airspeed, altitude, course heading and aircraft pitch and roll in relation to the horizon. Those mental calculations take time and talent, adding significantly to pilot workload. Many accidents occur because the pilot, who should always be thinking about what's coming next -- it's called "flying ahead of the plane" -- fails to keep up with current demands. With synthetic vision and enhanced vision, which uses infrared or millimeter wave technologies to improve low-vision situations, boosting pilots' awareness of their surroundings is expected to help reduce the two leading causes of fatal aviation accidents -- flying into terrain and loss of control during flight. More than 3,600 people have been killed over the last 20 years worldwide in accidents in those two categories, according to the Commercial Aviation Safety Team, a joint government-industry group working to reduce the fatal accident rate. One of the more notable loss-of-control accidents was the 1999 crash of John F. Kennedy Jr.'s Piper Saratoga in the Atlantic. An accident involving a plane flying into terrain happened in 1995 when an American Airlines Boeing 757 crashed into mountains near Cali, Colombia, killing 160 people. Besides improving safety, the advances in synthetic and enhanced vision research also hold the eventual promise of expanding air travel to hundreds of small and medium U.S. 
airports that lack the landing-guidance equipment necessary in severe weather. Over the Arizona desert, the Citation V executive business jet is flying level at 3,500 feet -- too low to clear the more than 6,000-foot mountaintops of the McDowells straight ahead, bathed in darkness. "Right here it is telling me that I don't want to stay on this tack very long," said Wyatt, development pilot of flight operations for Honeywell, which is one of several companies working on the computer-generated synthetic vision technology. Distance-range rings incorporated onto the synthetic vision display show the plane's proximity in miles to the exposed bedrock, which changes color on the screen to red, marking the first of several escalating alerts to the pilot. The most extreme caution is an aural command -- "Terrain! Terrain! Pull up! Pull up!" -- that goes off beginning 60 seconds before a potential collision. But the alert won't be necessary here because the synthetic vision display has given Wyatt plenty of advanced warning. "It says I am less than 10 miles away so I'd better do something," Wyatt said. He throttles up the jet's twin turbines and dials a new altitude setting of 7,500 feet. "I am going to miss it," Wyatt assures his passenger on the flight deck. A pitch reference indicator on the screen confirms the plane is traveling fast and high enough to safely vault over the peaks. The FAA has approved two versions of synthetic vision for use in general aviation aircraft. The products are made by Chelton Flight Systems and Universal Avionics Systems Corp. Gulfstream Aerospace Corp., which manufactures business jets, plans to offer the Honeywell version of synthetic vision on its 2007 models, said Gulfstream spokesman Robert Baugniet. He said Gulfstream, which provides enhanced vision on some of its aircraft, expects the FAA to approve the Honeywell system soon. "Making the Gulfstream planes a tool for the business executive is the ability to go into any airport," said Tom Horne, a Gulfstream senior experimental test pilot on synthetic vision. "Synthetic and enhanced vision gives pilots a much higher comfort level and an immediate awareness if things aren't going the way they should." No companies have yet sought FAA certification for a synthetic vision system geared to use in the airline industry. There are several reasons, including the still-emerging nature of the technology. The FAA wants further assurances that the terrain database is without faults. Any potential inaccuracies in the global database that is used to form the synthetic vision presentation of the terrain ahead could lead to pilots receiving inaccurate displays. The FAA also is concerned that pilots might tend to rely too much on the synthetic vision tool, using it for more than the intended functions and procedures, said the FAA's McGraw. In addition, the financially struggling airlines are trying to cut costs, not add to their expenses. Several major airlines have gone to synthetic vision vendors seeking demonstrations, industry sources said. Future safety developments may dictate how fast the technologies find their way into airliner cockpits. The FAA mandated collision-avoidance systems on commercial airplanes after midair collisions between aircraft in the late 1970s, and the agency required ground-proximity terrain warning systems after the 1995 Cali crash. "The airlines and the FAA are the toughest customers," said Sergio Cecutta, who markets the synthetic vision system at Honeywell. (c) 2006, Chicago Tribune. 
Distributed by McClatchy-Tribune Information Services via Newscom. Image courtesy of Honeywell.
<urn:uuid:097d24d8-96fd-4ead-9cd7-a67098d6921e>
CC-MAIN-2017-04
http://www.govtech.com/geospatial/Synthetic-Vision-Systems-Will-Improve-Aircraft.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00077-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941519
1,411
3.234375
3
IBM and four universities are planning to study cognitive computing in a research project whose goal is to develop computers that operate in a manner that's similar to the way the human mind works. The researchers hope to create systems that outperform Watson, the IBM supercomputer that, among other things, famously competed on the TV game show Jeopardy! The company said its partners will be Carnegie Mellon University, MIT, New York University and Rensselaer Polytechnic Institute. The project will involve the study of big data, to find new ways to use computers to process large volumes of structured and unstructured data. Topics to be explored include how applications can boost group decision-making, how processing power and algorithms apply to artificial intelligence, how systems should be designed for more natural interaction and how deep learning impacts automated pattern recognition. Another IBM project seeks to imitate how neurons receive sensory input and connect with each other. That effort is called Synapse, for Systems of Neuromorphic Adaptive Plastic Scalable Electronics.
<urn:uuid:8f26ebe5-c1d6-4cf2-9608-d8935decc495>
CC-MAIN-2017-04
http://www.computerworld.com/article/2486277/vertical-it/ibm-and-four-universities-plan-to-study-cognitive-computing.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00289-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923727
208
3.21875
3
C++'s Standard Template Library (STL) is uniquely powerful and extensible, and it facilitates the creation of very efficient code. However, the design behind the STL is unconventional and can be counterintuitive. Developers who fail to grasp the design often produce software that looks reasonable, but that's actually brittle, inefficient, and home to subtle bugs. In this course, you will learn about the architecture that underlies the STL, including its fundamental concepts, components, and how they relate to one another. You will learn specific guidelines for making effective use of the STL and its architecture. Note: You are required to bring your own laptop. Train your entire team in a private, coordinated professional development session at the location of your choice. Receive private training for teams online and in-person. Request a date or location for this course.
<urn:uuid:c90064f4-145f-46a9-97ee-6f39bbf68c01>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/course/121221/concepts-and-architecture-of-the-stl/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280761.39/warc/CC-MAIN-20170116095120-00197-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9391
174
3.203125
3
Big Data, like most things that you read about on the internet or hear about on TV, can be thought of as both a nebulous buzzword, and a real, functional concept with a definition. The buzzword, also like most, comes with grand expectations and proportional misunderstanding, while the reality is utilitarian, and somewhat less exotic. So really, what is Big Data, and what should this term mean to a business? In a technical sense Big Data generally refers to data sets that are unfit for traditional relational databases, due to a combination of excessive size (in the terabyte or petabyte range) or format that doesn’t fit the classic table structure (like JSON data, raw unstructured text, etc.). Essentially, for those reasons Big Data is differentiated by the fact that it can’t be stored, processed, or manipulated via usual means. In more common conversation though, Big Data refers to the fact that companies are dealing with a rapid explosion of information that is being collected, driven by recent technological innovation. As connected digital devices have become more common and people increasingly live and work online (and in-app), the amount of data that we have at our fingertips has grown exponentially. With numbers being thrown around like ‘90% of the world’s data has been created in the last two years’ or ‘Every day, we create 2.5 quintillion bytes of data,’ it’s not hard to see how this subject can quickly become overwhelming, but don’t worry. The reality is that most companies won’t need to scale that much, that fast, so the journey from ‘small’ data to Big Data will be gradual, and you are likely already on your way. What’s Your Point? All of that generalization and background is fine, but how is this all relevant to a small- to medium-sized business in the real world? Think of it this way: Your business has always kept some information on its customers, things like: - Contact information (name, address, phone number) - Billing information (credit card number, payment type preference) - Transaction information (what was purchased, and when, and for how much) Way back when, maybe it was all kept in a ledger, by a person who still knew how to write in cursive and sharpen a quill pen. Then it made its way into paper files and a rolodex. Eventually it grew into spreadsheets, then MS Access, then a full on database. So don’t think of this as a new world, just the same continuous evolution that has been going on forever. It’s just that now, you have a website, probably Google Analytics, maybe an app, a CRM, a Facebook page, and a digital product. Every time that a person interacts with any part of your business, on any platform, from anywhere in the world, scripts start running, pixels start firing, and servers hum like angels. All of these events are generating data, on your customers and your business. Whether or not your data meets some arbitrary threshold to be considered BIG DATA is beside the point, what matters is that there has recently become a lot more of it available, and it would be best to do something with it. Like What, and How? We’ve already discussed all kinds of ways that your data might be collected, but no amount of information will do your business any good if it simply blinks out of existence, or ends up in a place that is prohibitively hard to get to. If you want to ride the wave of Big Data and get value out of it, you need to think about three main things: The good news is, while technology has gotten us into this situation, it also offers solutions. 
In the past, we had to pick and choose which data to keep because there were significant limitations on storage in terms of cost. That's why the biggest advance in the Big Data ecosystem over the last few years has likely been the rise of enterprise cloud computing options. This is the big point of this entire piece, and if you take nothing else away, let it be that every company now has the ability to leverage data cheaply and efficiently, thanks to the cloud.
A few of the biggest full-service cloud platforms available today are:
Services like these enable any company to get access to virtually limitless cheap storage capacity without maintaining their own hardware, and they all allow you to configure and deploy Hadoop clusters (or Spark), or build containers and microservices on top of your data using technologies like Docker.
While there will still be a need for data scientists to work with the data that a company collects, they will rely on the IT professionals who will be called upon to build and maintain the increasingly important data pipelines and warehouses, plus the DevOps automation that will connect them all. But with everyone in a hurry to unlock the profit potential of their data, only companies with highly trained IT teams will have the key to turn Big Data from a buzzword into a reality.
So don't wait, start learning about some of the most in-demand skills in the IT field today! Not a CBT Nuggets subscriber? Start your free week today.
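The post stays at the conceptual level, so here is a small, hedged sketch of what "deploying Spark on top of your data" can look like in practice. The bucket path and column names are invented for illustration, and the session configuration normally comes from whatever managed cluster (EMR, Dataproc, HDInsight and the like) runs the job.
```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer-events").getOrCreate()

# Hypothetical JSON event logs collected from a website or app and landed in
# cheap object storage; s3://, gs:// and wasbs:// paths all read the same way.
events = spark.read.json("s3://example-bucket/events/2016/*.json")

# A basic aggregation: how many events of each type occurred per day.
summary = (
    events
    .withColumn("day", F.to_date("timestamp"))   # assumes a "timestamp" column
    .groupBy("day", "event_type")                # assumes an "event_type" column
    .count()
    .orderBy("day")
)

summary.show(20)
```
The same few lines scale from a laptop to a multi-node cluster, which is the practical payoff the article is pointing at.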
<urn:uuid:39089c1b-c11e-4ebe-a429-a705f27ab70f>
CC-MAIN-2017-04
https://blog.cbtnuggets.com/2016/11/so-what-is-big-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00318-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952397
1,184
2.84375
3
For example, the United States has fallen from 4th to 11th in broadband penetration and stands to fall even further. Broadband penetration in Latin America varies greatly from country to country, but as a whole, the region's progress towards having widespread high-speed Internet use remains sluggish. Moreover, with few exceptions, most of the broadband infrastructure available today consists of relative slow connections in the 500kbps to 3Mbps range. A few countries, such as Korea and Japan, have plans to take their broadband infrastructure from current levels to next generation speeds of 100Mbps. In Japan, it is already a reality as many Japanese consumers can now get a 100mbps connection for approximately US$38 per month. Because telecommunications is one of the most intensely regulated industries and it has a legacy of decades of government involvement, regulatory policy significantly affects broadband infrastructure investment. Regulators and government impact investment through the application of legacy regulation to new technologies, through attempts to artificially create competition, through spectrum allocation, through universal service subsidy systems and through direct government investment and tax incentives. Impact to Business In the US, industry has encouraged further deregulation and market driven policies to promote competition. Efforts to stimulate broadband deployment have been hindered by discussion of how and who to regulate (or not regulate), as well as legal hurdles, such as right-of-way issues for deployment across multiple legal jurisdictions. More recently, however, a major hurdle to broadband deployment in the US has been the issue of digital rights management - technological protections to prevent copying of digital content carried over the Internet. Several legislative initiatives are underway in the new Congress to stimulate broadband deployment through deregulation and targeted tax incentives. The U.S. Federal Communications Commission (FCC) released its most recent report on Broadband penetration in the United States on September 9, 2004. At the end of 2003 in the US, there were 28.2 million broadband access lines in service with a speed of at least 200k. The majority of FCC Commissioners also concluded that broadband deployment was proceeding at an adequate pace considering other technology adoption rates. Two of the Commissioners, on the other hand, dissented by pointing out that the US is 11th in the world in broadband penetration and that the FCC's 200k definition of broadband is too slow given advancement in technology. On March 26, 2004, President Bush set a national goal of making broadband available to all Americans by 2007. Cisco's President and CEO John Chambers applauded President Bush for his vision of universal, high-speed broadband access in the United States and his recognition of the positive impact that broadband could have on the nation's education and healthcare systems - two of our country's top priorities. Cisco believes that having affordable broadband for all Americans will help ensure the nation's competitiveness for decades to come. On June 24, 2004, President Bush reiterated that goal and added an additional goal of making America #1 in broadband penetration, up from its current #11 international ranking. Additional world leaders are incorporating broadband into their national platforms. On September 28th, 2004 U.K. 
Prime Minister Tony Blair, speaking at the Annual Labour Party Conference, said that a third term for his government would mean "ending the digital divide by bringing broadband technology to every home in Britain that wants it by 2008." On the European Union (EU) level, the European Commission continues to recognize the importance of broadband deployment. For example, the eEurope 2005 Action Plan seeks to accelerate the rollout of broadband. Countries have been instructed to use the EU's existing Structural Funds (regional and social funds, etc.) to facilitate broadband access in remote and rural areas. In addition, "the EU countries should eliminate legislative barriers [and] promote investments in broadband notably by easing 'right of way' restrictions." All Member States submitted national broadband plans by the end of 2003. They now need to provide an update of their national strategies to promote the use of broadband by the end of 2005. As such, national broadband strategies in Europe are led by a common approach.
Over the past few years, the European Union has made significant progress in raising participation in the Information Society. In the past year, for instance, the number of broadband connections has nearly doubled and now reaches nearly 25 million households. The EU, including the accession countries, is the second largest broadband region globally. Belgium, the Netherlands and Denmark lead the way. These countries have a cable platform that competes with the DSL provided by the incumbent, and have broadband penetration rates close to 15%. Finland and Sweden are also high performers. The UK, while a slow starter, was among the fastest-growing markets in Europe in the first half of 2004. France is showing strong growth in DSL connections over the last few months, largely due to regulatory intervention to decrease the price of unbundled local loops. Germany, among others, is clearly falling behind. DSL in Europe continues to represent around 74% of the total connections, with cable around 22%, while other technology platforms are still in the early stages.
EU countries are feverishly working to catch up to Asia Pacific's dominant share of the global broadband market. Market penetration figures indicate that broadband penetration in Asia-Pacific continues to be extremely strong. The latest figures confirm that South Korea maintains its global lead in market penetration. It is considered the greatest deployment success story because of a series of government investments and market-enabling policy efforts. Through this effort, the South Korean government successfully stimulated one of the highest broadband penetration rates worldwide - a whopping 75 percent. The Government has made about $77 million in loans available to fund private networks since 1998. The South Korean government plans to invest $926 million by 2005 to continue the broadband deployment effort. Hong Kong ranks second in the world by penetration, with Taiwan at sixth and Japan at ninth.
Leading countries must develop comprehensive national broadband infrastructure deployment plans and advance policies to implement them. A national broadband plan should include policies to:
Vital regulatory policies to implement these goals include:
<urn:uuid:1f15949a-ad79-4ec4-887c-6968465130d2>
CC-MAIN-2017-04
http://www.cisco.com/c/en/us/about/government-affairs/high-tech-policy-guide/accelerating-broadband/broadband.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00464-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954589
1,201
2.78125
3
Wjst M.,Institute of Lung Biology and Disease BMC Medical Ethics | Year: 2010 Background. Large-scale genetic data sets are frequently shared with other research groups and even released on the Internet to allow for secondary analysis. Study participants are usually not informed about such data sharing because data sets are assumed to be anonymous after stripping off personal identifiers. Discussion. The assumption of anonymity of genetic data sets, however, is tenuous because genetic data are intrinsically self-identifying. Two types of re-identification are possible: the "Netflix" type and the "profiling" type. The "Netflix" type needs another small genetic data set, usually with less than 100 SNPs but including a personal identifier. This second data set might originate from another clinical examination, a study of leftover samples or forensic testing. When merged to the primary, unidentified set it will re-identify all samples of that individual. Even with no second data set at hand, a "profiling" strategy can be developed to extract as much information as possible from a sample collection. Starting with the identification of ethnic subgroups along with predictions of body characteristics and diseases, the asthma kids case as a real-life example is used to illustrate that approach. Summary. Depending on the degree of supplemental information, there is a good chance that at least a few individuals can be identified from an anonymized data set. Any re-identification, however, may potentially harm study participants because it will release individual genetic disease risks to the public. © 2010 Wjst; licensee BioMed Central Ltd. Source Hecker M.,Justus Liebig University | Zaslona Z.,Justus Liebig University | Kwapiszewska G.,Justus Liebig University | Niess G.,Justus Liebig University | And 14 more authors. American Journal of Respiratory and Critical Care Medicine | Year: 2010 Rationale: Idiopathic pulmonary arterial hypertension (IPAH) is characterized by medial hypertrophy due to pulmonary artery smooth muscle cell (paSMC) hyperplasia. Inflammation is proposed to play a role in vessel remodeling associated with IPAH. IL-13 is emerging as a regulator of tissue remodeling; however, the contribution of the IL-13 system to IPAH has not been assessed. Objectives: The objective of this study was to assess the possible contribution of the IL-13 system to IPAH. Methods: Expression and localization of IL-13, and IL-13 receptors IL-4R, IL-13Rα1, and IL-13Rα2 were assessed by real-time reverse transcription-polymerase chain reaction, immunohistochemistry, and flow cytometry in lung tissue, paSMC, and microdissected vascular lesions from patients with IPAH, and in lung tissue from rodents with hypoxia- or monocrotaline-induced pulmonary hypertension. A whole-genome microarray analysis was used to study IL-13-regulated genes in paSMC. Measurements and Main Results: Pulmonary expression of the IL-13 decoy receptor IL-13Rα2 was up-regulated relative to that of the IL-13 signaling receptors IL-4R and IL-13Rα1 in patients with IPAH and in two animal models of IPAH. IL-13, signaling via STAT3 and STAT6, suppressed proliferation of paSMC by promoting G0/G1 arrest. Whole-genome microarrays revealed that IL-13 suppressed endothelin-1 production by paSMC, suggesting that IL-13 controlled paSMC growth by regulating endothelin production. 
Ectopic expression of the il13ra2 gene resulted in partial loss of paSMC growth control by IL-13 and blunted IL-13 suppression of endothelin-1 production by paSMC, whereas small-interfering RNA knockdown of il13ra2 gene expression had the opposite effects. Conclusions: The IL-13 system is a novel regulator of paSMC growth. Dysregulation of IL-13 receptor expression in IPAH may partially underlie smooth muscle hypertrophy associated with pathological vascular remodeling in IPAH. Source Leppkes M.,Friedrich - Alexander - University, Erlangen - Nuremberg | Maueroder C.,Friedrich - Alexander - University, Erlangen - Nuremberg | Hirth S.,Johannes Gutenberg University Mainz | Nowecki S.,Friedrich - Alexander - University, Erlangen - Nuremberg | And 20 more authors. Nature Communications | Year: 2016 Ductal occlusion has been postulated to precipitate focal pancreatic inflammation, while the nature of the primary occluding agents has remained elusive. Neutrophils make use of histone citrullination by peptidyl arginine deiminase-4 (PADI4) in contact to particulate agents to extrude decondensed chromatin as neutrophil extracellular traps (NETs). In high cellular density, NETs form macroscopically visible aggregates. Here we show that such aggregates form inside pancreatic ducts in humans and mice occluding pancreatic ducts and thereby driving pancreatic inflammation. Experimental models indicate that PADI4 is critical for intraductal aggregate formation and that PADI4-deficiency abrogates disease progression. Mechanistically, we identify the pancreatic juice as a strong instigator of neutrophil chromatin extrusion. Characteristic single components of pancreatic juice, such as bicarbonate ions and calcium carbonate crystals, induce aggregated NET formation. Ductal occlusion by aggregated NETs emerges as a pathomechanism with relevance in a plethora of inflammatory conditions involving secretory ducts. Source Sanchez-Antequera Y.,TU Munich | Sanchez-Antequera Y.,Institute of Lung Biology and Disease | Mykhaylyk O.,TU Munich | Van Til N.P.,Erasmus Medical Center | And 8 more authors. Blood | Year: 2011 Research applications and cell therapies involving genetically modified cells require reliable, standardized, and cost-effective methods for cell manipulation. We report a novel nanomagnetic method for integrated cell separation and gene delivery. Gene vectors associated with magnetic nanoparticles are used to transfect/transduce target cells while being passaged and separated through a high gradient magnetic field cell separation column. The integrated method yields excellent target cell purity and recovery. Nonviral and lentiviral magselectofection is efficient and highly specific for the target cell population as demonstrated with a K562/Jurkat T-cell mixture. Both mouse and human enriched hematopoietic stem cell pools were effectively transduced by lentiviral magselectofection, which did not affect the hematopoietic progenitor cell number determined by in vitro colony assays. Highly effective reconstitution of T and B lymphocytes was achieved by magselectofected murine wild-type lineage-negative Sca-1 + cells transplanted into Il2rg-/- mice, stably expressing GFP in erythroid, myeloid, T-, and B-cell lineages. Furthermore, nonviral, lentiviral, and adenoviral magselectofection yielded high transfection/transduction efficiency in human umbilical cord mesenchymal stem cells and was fully compatible with their differentiation potential. 
Upscaling to a clinically approved automated cell separation device was feasible. Hence, once optimized, validated, and approved, the method may greatly facilitate the generation of genetically engineered cells for cell therapies. © 2011 by The American Society of Hematology. Source Yildirim A.O.,University of Marburg | Yildirim A.O.,Institute of Lung Biology and Disease | Muyal V.,University of Marburg | John G.,Justus Liebig University | And 5 more authors. American Journal of Respiratory and Critical Care Medicine | Year: 2010 Rationale: Emphysema is characterized by destruction of alveoli with ensuing airspace enlargement and loss of alveoli. Induction of alveolar regeneration is still a major challenge in emphysema therapy. Objectives: To investigate whether therapeutic application of palifermin (ΔN23-KGF) is able to induce a regenerative response in distal lung parenchyma after induction of pulmonary emphysema. Methods: Mice were therapeutically treated on three occasions by oropharyngeal aspiration of 10 mg ΔN23-KGF per kg body weight after induction of emphysema by porcine pancreatic elastase. Measurements and Main Results: Airflow limitation associated with emphysema was largely reversed as assessed by noninvasive head-out body plethysmography. Porcine pancreatic elastase-induced airspace enlargement and loss of alveoli were partially reversed as assessed by design-based stereology. ΔN23-KGF induced proliferation of epithelium, endothelium, and fibroblasts, which was associated with enhanced differentiation as well as increased expression of vascular endothelial growth factor, vascular endothelial growth factor receptors, transforming growth factor (TGF)-β1, TGF-β2, (phospho-) Smad2, plasminogen activator inhibitor-1, and elastin as assessed by quantitative reverse transcriptase-polymerase chain reaction, Western blotting, and immunohistochemistry. ΔN23-KGF induced the expression of TGF-β1 in and release of active TGF-β1 from primary mouse alveolar epithelial type 2 (AE2) cells, murine AE2-like cells LA-4, and cocultures of LA-4 and murine lung fibroblasts (MLF), but not in MLF cultured alone. Recombinant TGF-β1 but not ΔN23-KGF induced elastin gene expression in MLF. Blockade of TGF signaling by neutralizing antibody abolished these effects of ΔN23-KGF in LA-4/MLF cocultures. Conclusions: Our data demonstrate that therapeutic application of ΔN23-KGF has the potential to induce alveolar maintenance programs in emphysematous lungs and suggest that the regenerative effect on interstitial tissue is linked to AE2 cell-derived TGF-β1. Source
<urn:uuid:60961963-4d61-45f3-b65f-f3da629727e1>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/institute-of-lung-biology-and-disease-485205/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00006-ip-10-171-10-70.ec2.internal.warc.gz
en
0.886427
2,214
2.546875
3
File Anti-Virus intercepts all file operations (such as reading, copying, starting) using the klif.sys driver and scans the files being accessed. If the file is infected, the operation is blocked, and the file is either disinfected or deleted by default. Even if the Mail Anti-Virus and the Web Anti-Virus components are disabled, the user cannot run an infected file received via e-mail or downloaded from the Internet, because once the file is saved on the hard drive, it will be detected and blocked by the File Anti-Virus. You cannot run a file from an e-mail attachment or from a web site without saving it to the hard drive. So, File Anti-Virus is of primary importance for file system protection, which at the same time makes it the most important protection component in general.
File Anti-Virus uses the following scanning technologies:
Signature analysis. A virus detection method that uses signatures. A signature is a part of executable code, a checksum or some other binary string, which helps detect whether the file is infected by the corresponding virus. Consecutive file checks against the signatures of known viruses return the verdict of whether the file is infected. This scanning method is very reliable, but only allows detecting the viruses whose signatures have been added to the Anti-Virus databases.
Heuristic analysis. This scanning method applies only to executable files. Kaspersky Endpoint Security starts the scanned file in a virtual environment, isolated from the operating system, and analyzes its behavior. This method requires more time when compared with signature analysis, but allows the detection of some new viruses.
Check against KSN lists. This method also applies to executable files only. A checksum is calculated for every scanned file, which is compared with the records in the local KSN database. Further, the following alternatives exist:
If neither signature nor heuristic analysis has detected an infection, the decision is made based on the information available in the local KSN cache on the client computer. If the local cache lacks information about this file, access to the file is allowed, and a background request is simultaneously sent to the KSN cloud. If the answer is received that the file is dangerous, File Anti-Virus scans it again. If KSN returns information that the file is harmless or if KSN servers cannot be reached, file scanning is finished.
If either signature or heuristic analysis has detected that the file is infected, File Anti-Virus sends a request to KSN. If the local database lacks information about the file, File Anti-Virus will wait for the answer from the KSN cloud. If KSN considers the file to be clean, it is treated as non-infected despite the verdicts of signature and heuristic analysis. If the verdict is reaffirmed or information cannot be received from KSN (connection with KSN servers cannot be established), the file is processed as an infected one.
As you can see from the scanning algorithm, the check against the KSN database complements the signature analysis and helps to decrease the probability of false positives. File Anti-Virus settings that define the protection scope and other scanning parameters are gathered in the Security level group of parameters. In the policy, these parameters have a common lock, that is, they are locked or unlocked together. Considering the importance of the File Anti-Virus, the users should not be allowed to change the scanning parameters, and the lock in the Security level area should be closed.
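As a rough, hedged sketch of how these layers combine, the Python below models a local signature check followed by a cached reputation lookup. The hash set, byte pattern and cache contents are invented stand-ins; a real engine ships millions of signatures, runs behavioral emulation, and queries the KSN cloud asynchronously.
```python
import hashlib
from pathlib import Path

KNOWN_BAD_HASHES = set()                                   # would hold hex digests of known-bad files
KNOWN_BAD_PATTERNS = [b"#HYPOTHETICAL-MALWARE-MARKER#"]    # byte-string signatures (illustrative)
REPUTATION_CACHE = {}                                      # checksum -> "clean" or "dangerous" (KSN stand-in)

def scan_file(path):
    data = Path(path).read_bytes()
    checksum = hashlib.sha256(data).hexdigest()

    # Signature analysis: full-file checksum match or byte-pattern match.
    local_verdict = "infected" if (
        checksum in KNOWN_BAD_HASHES or any(p in data for p in KNOWN_BAD_PATTERNS)
    ) else "clean"

    # Reputation check: a cached cloud verdict can confirm or overrule the local result.
    cloud_verdict = REPUTATION_CACHE.get(checksum)
    if cloud_verdict == "dangerous":
        return "infected"
    if cloud_verdict == "clean" and local_verdict == "infected":
        return "clean"   # per the text above, a clean KSN verdict overrides local analysis
    return local_verdict
```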
By default, the Protection scope of the File Anti-Virus includes:
All removable drives
All hard drives
All network drives
In other words, all drives from which malware can be run. A protection area allows adding individual drives and folders instead of drive groups. However, disabling any standard scan scope considerably decreases the protection level. That is why this group of settings should be modified very cautiously. For example, if Cisco NAC or Microsoft NAP guarantees that all network nodes are protected with Anti-Viruses, All network drives can be removed from the protection scope. In this case, if a file from a network drive is accessed, it will be scanned by the Anti-Virus installed on the computer where the drive is located.
Types of files to be scanned
The File types setting can take one of three values:
All files (every file is scanned regardless of its format or extension)
Files scanned by format—i.e. files that can contain executable malware code; in this case the file format is determined as the result of the file header analysis rather than by the file extension
Files scanned by extension—i.e. files with extensions characteristic of infected formats
The optimum value for the File Anti-Virus is the middle one. Scanning of all files requires considerably more resources without a dramatic improvement of protection. Scanning based on file extensions is fraught with risk: a renamed malware object or a non-typical extension may result in such a file being skipped and subsequently opened or even run.
Heuristic analysis parameters are configured in the Scan methods group. Heuristics levels—Light, Medium or Deep—define the period of observing the object in the virtual environment. In the context of the File Anti-Virus operation this means an increased delay when a program is run. Therefore, completely disabling heuristic analysis within the File Anti-Virus component is acceptable.
The Scan only new and changed files option ultimately decreases the number of scans performed by File Anti-Virus. If an object was scanned and has not been modified ever since, it will not be scanned again. Kaspersky Endpoint Security receives information about the changes using the iSwift and iChecker technologies, whose settings are located on the Additional tab.
It is not recommended to scan compound files using File Anti-Virus. Unpacking these files consumes a lot of resources, and they do not pose any direct threat. Even if an archive contains a virus, you cannot run any infected file without unpacking it. During unpacking it will be detected and blocked as a regular file. It is sufficient to scan compound files with on-demand scan tasks.
iSwift and iChecker
The iSwift and iChecker scanning technologies are responsible for collecting data about the changes made to files. The iSwift technology extracts the data about changes from the NTFS file system. Therefore, the iSwift technology is used for the files located on NTFS drives. The iChecker technology is efficient for executable files located on drives with non-NTFS file systems, for example, FAT32. The iChecker technology calculates and saves the checksums of the scanned executable files. If the checksum remains the same at the next check, it means that the file has not been changed. Both technologies save information about the file scan date and the version of the databases used for the scanning. If the Scan only new and changed files option is enabled, the iSwift Technology and iChecker Technology checkboxes are of no importance.
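A hedged sketch of the iChecker idea, in Python: cache each file's checksum together with the database version it was scanned with, and skip rescanning when neither has changed. The cache layout is invented for illustration; the real implementation works at the driver level and also draws on NTFS change data for iSwift.
```python
import hashlib

scan_cache = {}   # path -> {"checksum": ..., "db_version": ...}

def file_checksum(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def needs_scan(path, current_db_version):
    """True if the file changed or has never been scanned with the current databases."""
    checksum = file_checksum(path)
    cached = scan_cache.get(path)
    if cached and cached["checksum"] == checksum and cached["db_version"] == current_db_version:
        return False    # unchanged and already scanned with these databases
    return True

def record_clean_scan(path, current_db_version):
    scan_cache[path] = {"checksum": file_checksum(path), "db_version": current_db_version}
```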
Even if you clear them, these technologies will still be used, because without them Kaspersky Endpoint Security will not be able to determine which files have already been scanned and which of them have not been changed since the last scanning. If the Scan only new and changed files setting is disabled, the iSwift Technology and iChecker Technology settings are relevant. In this case, a certain quarantine or trusted period is associated with each file. During the quarantine period the file will be scanned even if it has not been modified, while during the trusted period the file will not be scanned.
The quarantine period is assigned to all files which have not been scanned yet or which have changed since the last scanning. During the quarantine period, the file will not be scanned if it was already scanned with the same database version. For this purpose, the iSwift and iChecker technologies register the version of the anti-virus databases used for the scanning. In all other cases, standard scanning is performed. Once the quarantine period is over, the trusted period is assigned to the file. During the trusted period, the file is not scanned if it has not changed. Once the trusted period is over, the file is scanned once again when the necessity arises, and if it is not infected, a new trusted period is assigned, longer than the previous one. In case of any change, the file gets a quarantine period and everything begins from scratch. When the Scan only new and changed files setting is enabled, the trusted period is not restricted in time. The trusted period expires only if the file is changed.
Disabling the iSwift and iChecker technologies makes no sense in File Anti-Virus. This will either have no effect (if the Scan only new and changed files feature is enabled) or will lead to more scans and a general decrease in computer performance.
The Scan mode determines the file operations that trigger scanning. It is simpler to describe them in the reverse order of their appearance:
On execution—only executable files are scanned and only when they are started. Copying an infected executable file will remain unnoticed. Switching File Anti-Virus into this mode decreases the security level considerably.
On access—files are scanned when they are opened for reading or execution. The user may download malicious code from a website but will not be able to do anything with this file.
On access and modification—files are scanned when any operation is performed on them. This is the safest mode, yet the most resource-consuming.
Smart mode—the order of operations performed with the file is analyzed. If a file is opened for writing, the scan will be performed after it is closed and all changes to it are made. Intermediate changes made to the file are not analyzed. If a file is opened for reading, it will be scanned once on opening, but will not be rescanned on intermediate read operations until the file is closed.
Essentially, Smart mode ensures the same protection as On access and modification, but consumes fewer resources. Therefore it is recommended for most computers. On access or On execution modes can be used on computers where efficiency is more important than security, understanding that the probability of infection or virus spreading increases.
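To tie the period logic together, here is a loose, hedged state-machine sketch in Python. The period lengths and the doubling rule are purely illustrative assumptions; the documentation above does not state the actual durations the product uses.
```python
import time

QUARANTINE_PERIOD = 7 * 24 * 3600          # illustrative: one week "on probation"
INITIAL_TRUSTED_PERIOD = 14 * 24 * 3600    # illustrative: two weeks of trust

class FileTrustState:
    """Tracks one file's quarantine/trusted period as described above."""

    def __init__(self):
        self.state = "quarantine"
        self.since = time.time()
        self.trusted_period = INITIAL_TRUSTED_PERIOD

    def should_scan(self, changed, scanned_with_same_db):
        now = time.time()
        if changed:
            # Any modification restarts the cycle with a quarantine period.
            self.state, self.since = "quarantine", now
            self.trusted_period = INITIAL_TRUSTED_PERIOD
            return True
        if self.state == "quarantine":
            if now - self.since > QUARANTINE_PERIOD:
                self.state, self.since = "trusted", now     # promote after a clean quarantine
                return False
            return not scanned_with_same_db                 # rescan only with newer databases
        # Trusted state: unchanged files are left alone until the period expires.
        if now - self.since > self.trusted_period:
            self.trusted_period *= 2                        # each new trusted period is longer
            self.since = now
            return True
        return False
```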
Pausing File Anti-Virus
File Anti-Virus can be paused while a resource-consuming operation is performed using the settings in the Pause task area:
By schedule—the schedule (daily only) is set by specifying the time when the File Anti-Virus is to be paused and when it is to resume its normal operation. The time is specified in hours and minutes
At application startup—File Anti-Virus will pause when the specified program loads in the memory and will resume its operation when this program is unloaded from the memory
Standard security levels
The security levels can be managed using the three-position switch: Low, Recommended and High. If any setting is modified, the security level is changed to Custom. In order to return to the standard level, click the By Default button. When an infected object is detected, File Anti-Virus can try to disinfect or delete it. Most infected files cannot be disinfected, because they contain nothing but the infected code. Before a file is disinfected or deleted, its copy is placed into the backup storage. That way, if it contains important information or is deleted because of a false positive, the file can be recovered. In some cases, it is impossible to say whether the file is definitely infected. If a threat is detected by heuristic analysis or the KSN database, or the object merely resembles a virus signature, the object receives the "suspicious" verdict. Instead of being disinfected, suspicious files are moved from their original location into a separate repository called Quarantine. The files in the quarantine can be rescanned so as to update their status. If the Roll back malware actions during disinfection option is enabled within the properties of the System Watcher component, after deleting an infected object, Kaspersky Endpoint Security rolls back its actions. Malware detected by File Anti-Virus should not be left unprocessed. That is why the settings that regulate File Anti-Virus actions should be locked. The optimal choice is to disinfect and, if disinfection is impossible, delete infected files.
Exclusions for objects
Sometimes File Anti-Virus erroneously returns the “infected” verdict. Such cases are rare, and usually concern tailor-made software. This problem is mitigated by creating exclusion rules for objects. Exclusions are configured in a separate group of settings, which are used by all protection components. An exclusion rule for objects consists of three attributes:
Object—the name of the file or folder to which the exclusion applies. The name of the object may include environment variables (systemroot, userprofile and others) and also “*” and “?” wildcard characters
Threat type—the name of the threat to be ignored (usually corresponds to a malware name), which can also be specified using wildcard characters
Component—the list of protection components to which the rule applies
Of the three attributes, at least one of the first two must be specified, and the component list is always mandatory. You can create a full-fledged exclusion rule for a separate file or folder without specifying the threat type—the selected components will ignore any threats in the objects specified. And, conversely, you can create an exclusion rule for some threat types, for example, for the UltraVNC remote administration tool, so that the selected components would not respond to this threat regardless of where it was detected. All three attributes can also be specified simultaneously. For example, the exclusion list contains a set of rules for widespread remote administration tools: UltraVNC, RAdmin, etc.
In these rules, both the threat type and the object (typical location of the executable file) are specified. In this case, Kaspersky Endpoint Security would not respond to the administration tools run from Program Files, but if the user runs UltraVNC from another folder, Kaspersky Endpoint Security would consider it a threat. Exclusions for applications Security level settings can be adjusted so as to achieve the optimal performance-reliability balance for an average computer. But if the computer runs resource-consuming programs, their operation can be slowed down by the File Anti-Virus. This is especially true for the programs that perform numerous file operations, for example, backup copying or defragmentation. To avoid slowdowns, special measures can be taken. The first thing to do is to configure an exclusion so that File Anti-Virus ignores file operations performed by the program. When adding exclusions under Trusted applications, within the Exclusions for Application window, specify the full or partial path to the executable file of the program and select the action—Do not scan opened files. If the program has many processes, and the data files are located in one directory, it might be worthwhile to exclude this directory from the File Anti-Virus scan scope: Under Exclusion rules, add the rule, specify the necessary directory as its object, do not specify any threat type, and select File Anti-Virus in the list of components to apply the rule. If the desired effect cannot be achieved by setting up exclusions, as a last resort, configure pausing File Anti-Virus while the program runs (in the Security Level settings, on the Additional tab). Exclusion settings should be locked. Users are often unable to properly configure their exclusions and may abuse such an ability and considerably weaken the protection of the computer. When a policy is applied, all local exclusions are disabled and replaced with centralized ones. The default exclusions configured in the standard policy apply only to the remote administration tools; moreover, they are disabled. Therefore, in order to create a useful set of exclusions, the administrator should find out which exclusions are required to minimize impact to the users, and to set them up in the policy. The best way to do this is to create exclusions in the local Kaspersky Endpoint Security interface and then import them into the policy.
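The way the three exclusion attributes combine lends itself to a small illustration. The rule format below is a hypothetical representation rather than the product's actual storage format, and the threat pattern and folder paths are made up for the example; it simply shows how an object pattern, a threat-type pattern and a component list are evaluated together.

```python
import fnmatch
import os

# Hypothetical representation of exclusion rules; the threat names, paths and
# field names are made up for the example and are not the product's format.
EXCLUSIONS = [
    {"object": r"%ProgramFiles%\UltraVNC\*",     # ignore this tool only here
     "threat": "*RemoteAdmin*UltraVNC*",
     "components": {"File Anti-Virus"}},
    {"object": r"C:\Builds\*",                   # ignore anything in this folder
     "threat": None,
     "components": {"File Anti-Virus"}},
]

def is_excluded(detection, component):
    path = os.path.expandvars(detection["path"]).lower()
    for rule in EXCLUSIONS:
        if component not in rule["components"]:
            continue
        if rule["object"] is not None:
            pattern = os.path.expandvars(rule["object"]).lower()
            if not fnmatch.fnmatch(path, pattern):
                continue
        if rule["threat"] is not None:
            if not fnmatch.fnmatch(detection["threat"].lower(), rule["threat"].lower()):
                continue
        return True    # a rule matched: this component ignores the detection
    return False

print(is_excluded({"path": r"C:\Builds\app.exe", "threat": "EICAR-Test-File"},
                  "File Anti-Virus"))           # True
```

As in the remote administration example above, the first rule only suppresses detections of that tool in its usual folder; the same binary run from anywhere else would still be treated as a threat.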
<urn:uuid:c09b617d-32c6-4c0f-8774-c0a37530d9cc>
CC-MAIN-2017-04
http://support.kaspersky.com/learning/courses/kl_102.98/chapter2.2/section1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00400-ip-10-171-10-70.ec2.internal.warc.gz
en
0.917249
3,298
2.8125
3
The Challenge of VoIP Security Voice over Internet Protocol (VoIP) is about carrying voice calls over an Internet Protocol (IP) network. It results in the digitization and packetization of voice streams. Security practitioners need to better understand VoIP technologies, protocols and standards and develop a policy to address VoIP security to ensure that these technologies are not the “gaps” exploited by hackers. Voice communications must be real-time, so high performance is critical. A delay of a few seconds in data transmission for a VoIP infrastructure renders the system unacceptable to users. Both performance and security are significant challenges that must be addressed in the planning of a VoIP infrastructure. VoIP Components and Protocols The components of VoIP include call processors/call managers, gateways, routers and firewalls. There are also specialized protocols associated with VoIP, as well as specialized end-user equipment. VoIP systems typically support standards, such as H.323, the session initiation protocol (SIP), as well as media gateway control protocol (MGCP) and Megaco/H.248. Voice packets use real time protocol (RTP). H.323 is the International Telecommunication Union (ITU) specification for audio and video communication across packetized networks. This specification includes several protocols, such as H.225, H.245 and others. H.323 is a wrapper for a suite of ITU media control recommendations. Each protocol in the H.323 specification has a specific role in the call setup process. An H.323 network typically includes a gateway and possibly a gatekeeper, multipoint control unit (MCU) and back end service (BES). The purpose of the gateway is to serve as a bridge between the H.323 network and the external network of non-H.323 devices, such as SIP or traditional PSTN networks. The gateway also supports address resolution and bandwidth control. The MCU is an optional component that facilitates multipoint conferencing and other communications between more than two endpoints. Gatekeepers are also optional, and their main purpose is to optimize network tasks. If a gatekeeper is present, then a BES may exist to support functions, such as maintaining data about endpoints, including permissions, services and configuration. Almost all H.323 traffic is routed through dynamic ports. This is especially challenging for stateless firewalls that cannot comprehend H.323 traffic. Thus, organizations need to configure a stateful firewall that supports VoIP, especially H.323. Network address translation (NAT) is another serious issue because the external IP address and port specified in the H.323 headers and messages are not the actual IP address and port numbers used internally. Security practitioners will need to make sure that H.323 traffic is read and modified by authorized systems, so that the correct address/port numbers are sent to the endpoints establishing a call connection. Session Initiation Protocol (SIP) The Session Initiation Protocol (SIP) provides similar functionality to H.323. SIP is specified by the Internet Engineering Task Force (IETF) for initiating a two-way VoIP communication session. SIP is a text-based protocol, while H.323 is based on ASN.1. SIP is an application-layer protocol that can use the services of UDP or TCP. The SIP network consists of endpoints, a proxy or redirect server, a location server and a registrar. The user initially reports its location to a registrar, which may be integrated into a proxy or redirect server. 
This information is stored on an external location server. Messages from endpoints are routed through a proxy or redirect server. Redirect servers obtain the actual address of the destination from the location server and return this information to the original sender, which then sends the message directly to the resolved address. Media Gateway Control Protocol (MGCP) Decomposed VoIP gateways consist of media gateways (MGs) and media gateway controllers (MGCs). They appear to the outside as a single VoIP gateway. MGCP is used to communicate between the separate components of a decomposed VoIP gateway. MGs handle the audio signal translation function, performing conversion between the audio signals carried on telephone circuits and data packets carried over the Internet. The MGC handles the signaling data between the MGs and the other network components, such as the H.323 gatekeeper or the SIP server. A single MGC can control multiple MGs. Real Time Protocol (RTP) Real Time Protocol (RTP) is used to transport voice packets over the Internet. RTP packets are encapsulated with UDP packets. RTP packets have special fields that hold data needed to correctly re-assemble the packets into a voice signal at the other end. How Does It Work? With VoIP, the user enters the phone number, and this phone number needs to connect (map) to an IP address. A number of protocols are used to determine the IP address that corresponds to the phone number. Once the call has been established and the party answers, the voice must be converted into a digitized form, resulting in a stream of packets created for transmission. It all starts with the analog voice signals converted into digital using an analog-digital converter. A compression algorithm is used to reduce the number of bits transmitted because digitized voice generates a large number of bits. The UDP protocol is then used with RTP to transmit voice packets on the Internet. Once the packet reaches the destination, the packets are disassembled and put in the right sequence. The digitized voice data is then extracted from the packets and uncompressed. The digitized voice is processed by a digital-to-analog converter, and the result is an analog signal that is transmitted to the phone system. VoIP Network Design Separate DHCP servers should be considered for voice and data. The firewall systems that are deployed must be designed for VoIP traffic—through either application level gateways (ALGs) or firewall control proxies. For example, in a SIP-based VoIP network, firewall systems must be stateful and monitor SIP traffic to determine which RTP ports are to be opened and made available to which addresses. Further, IPSec or Secure Shell (SSH) should be used for remote management and auditing access. Periodically, a detailed analysis of voice and network components should be conducted. This includes a thorough and comprehensive review of voice gateways, remote-access devices, firewalls, intrusion detection systems and routers. VoIP-Based Firewall Systems Security practitioners need to understand the performance of the firewall system in terms of how fast it can handle VoIP packets. Most VoIP traffic is UDP-based. Since numerous RTP ports (which are dynamic UDP ports) may be open at any time, it is recommended that all PC-based phones be placed behind a stateful firewall to monitor VoIP media traffic. Otherwise, there will be degradation in the quality of service (QoS). 
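As an illustration of the RTP-over-UDP encapsulation described above, the sketch below builds a minimal 12-byte RTP header and sends one packet per 20 ms audio frame. The destination address, port and payload are placeholders; in a real call these are negotiated through SIP or H.323 signaling, which is also why a firewall must track the session to learn which dynamic ports to open.

```python
import socket
import struct

def rtp_packet(payload, seq, timestamp, ssrc, payload_type=0):
    # Minimal 12-byte RTP header (RFC 3550): version 2, no padding,
    # no extension, no CSRC list, marker bit clear.
    vpxcc = 2 << 6                     # version = 2
    m_pt = payload_type & 0x7F         # payload type 0 = PCMU (G.711 u-law)
    header = struct.pack("!BBHII", vpxcc, m_pt, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)
    return header + payload

# Placeholder audio: three 20 ms G.711 frames (160 samples each at 8 kHz).
voice_frames = [b"\xff" * 160 for _ in range(3)]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq, frame in enumerate(voice_frames):
    pkt = rtp_packet(frame, seq=seq, timestamp=seq * 160, ssrc=0x1234ABCD)
    sock.sendto(pkt, ("203.0.113.10", 40000))   # example address and port
```

Because each 20 ms frame travels as its own small UDP datagram, a single call generates a steady stream of packets, which is exactly the inspection load discussed next.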
The large number of small RTP packets also impacts the performance of firewall systems in a VoIP environment. The firewall has to inspect each packet. As the number of packets increases, it puts a strain on the firewall CPU. This problem is further compounded by NAT, which significantly complicates media traffic control in VoIP networks. Security practitioners must closely review the firewall architecture and NAT and the impact both have on VoIP QoS. Application-level-gateway types of firewall systems are ideal for VoIP. These firewalls can parse and understand H.323 or SIP and dynamically open and close necessary ports. Potential increases in productivity, mobility and resilience will lead to more deployments of VoIP networks. The challenge is to ensure that the IP telephony infrastructure is secure and protected from disruption.
<urn:uuid:7c298320-9565-493b-9f9e-3590d948f47f>
CC-MAIN-2017-04
http://certmag.com/the-challenge-of-voip-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00216-ip-10-171-10-70.ec2.internal.warc.gz
en
0.916934
1,604
3.109375
3
During the 2013 NVIDIA GPU Technology conference, the Swiss National Supercomputing Center (CSCS) revealed that its Cray XC30 “Piz Daint” supercomputer was on track to become Europe’s fastest GPU-accelerated number-cruncher, and the first Cray machine to be equipped with Intel Xeon processors and NVIDIA GPUs. Now, one year later, the revved-up Piz Daint is officially cleared for research. On March 21, Fritz Schiesser, the president of the ETH Board, Ralph Eichler, the president of ETH Zurich, and luminaries from research, politics and industry gathered together in Lugano to inaugurate the upgraded Piz Daint – the flagship Swiss supercomputer that is the fastest in Europe.
Prof. Dr. Ralph Eichler, Prof. Dr. Thomas Schulthess and Dr. Fritz Schiesser
Named for one of the highest mountains in the Swiss Alps, Piz Daint was built by the Swiss National Supercomputing Center (CSCS) to provide scientists with extreme-scale computing resources for a wide variety of disciplines, including climate science, geoscience, chemistry, physics, biology, and materials research. The center has particularly emphasized the expected benefit for its weather and climate modeling work. According to Thomas Schulthess, director of CSCS, the new and improved Piz Daint enables a weather prediction application to run three times faster with seven times less energy compared to its predecessor, the CPU-only “Monte Rosa.” The hybrid Piz Daint is also expected to improve energy-efficiency by a factor of three compared to a GPU-less version. Piz Daint was built and configured in the fall of 2013 in time for the latest TOP500 and the Green 500 lists that were published in November 2013. Within the coveted top 10 zone of each of these lists, Piz Daint is the only machine that appears on both – where it is the sixth most powerful and the fourth most energy-efficient, respectively. In addition to nabbing this double honor, Piz Daint is also one of only two petafloppers to crack the top 10 on the Green 500. With the upgrade, Piz Daint ballooned from twelve computer cabinets to twenty-eight, stuffed with a mix of Intel Xeon E5 processors and NVIDIA Tesla K20X GPUs. The number of compute nodes increased from 2,256 to 5,272, tightly bound by Cray’s Aries interconnect. The system touts 6.2 petaflops of performance as measured by the LINPACK Benchmark and reaches a theoretical peak performance of 7.8 petaflops. In terms of energy efficiency, Piz Daint with 3.2 gigaflops per watt was the first petascale-class system to break the 3 gigaflops per watt barrier. The combination of GPUs and CPUs helped make Piz Daint the most energy-efficient supercomputer in the petaflop club. To secure approval from manufacturer Cray, the renovated Piz Daint has been put through its paces. “Thanks to the close and dedicated collaboration between hardware producers, researchers and CSCS staff, we have succeeded in opening ‘Piz Daint’ up for research in record time after the upgrade,” says Schulthess in a statement. Scientists from all disciplines can apply for computer time on Piz Daint every six months through the regular CSCS user program. For larger projects, CHRONOS puts out an annual call for proposals. Computer time is allocated by an independent committee of specialists. Schulthess highlighted the fact that natural sciences cannot be satisfied solely with more powerful supercomputers; investments also need to be made to advance computing algorithms and software. 
For this reason, the platform for High-Performance and High-Productivity Computing (HP2C) was launched. Under this program, application developers have spent the last four years collaborating with applied mathematicians and computer scientists to create more efficient simulation systems. Piz Daint is the result of this collaboration. “For the first time at CSCS, a new supercomputing system has been co-designed along with key scientific application codes,” Schulthess states. “Building the hybrid ‘Piz Daint’ supercomputer in such a short time was only possible thanks to an excellent collaboration with Cray, NVIDIA, and application scientists at Swiss universities.”
<urn:uuid:495940ad-1998-4bde-990f-f32b76006a16>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/03/24/swiss-hybrid-petaflopper-opens-research/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00216-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937812
937
2.703125
3
They might seem small and relatively insignificant, but cheap wireless web cams deployed in houses and offices (and connected to home and office networks) might just be the perfect way in for attackers. Researchers from the Vectra Threat Lab have demonstrated how easy it can be to embed a backdoor into such a web cam, with the goal of proving how IoT devices expand the attack surface of a network. They bought a consumer-grade D-Link WiFi web camera for roughly $30, and cracked it open. They dumped the content of the camera’s flash memory chip, went through it and discovered a boot loader, a Linux kernel and a filesystem image. After accessing the Linux image filesystem, they unearthed a binary that performs verification and update of the firmware (it checks whether the file opened correctly, its size and signature, whether the update is newer than the current one, and whether the file checksum is correct). “At this point, adding a backdoor roughly devolves to adding a service inside a Linux system – in our case, all we want is a simple connect-back Socks proxy. This can either be accomplished with a srelay and netcat in the startup script or more optimized C code, or one could go with a simple callback backdoor with a shell using netcat and busybox which are already present on the system,” the researchers explained. “While we are making the modification, we can also remove the capacity to reflash the device in the future. This would prevent an administrator-initiated firmware update which would remove our backdoor.” Repackaging the backdoored flash image and fixing the file checksum was trivial, and once the update was implemented, the backdoor worked beautifully. “Using the telnetd / busybox / netcat we can bring back a telnet socket to an outside host to have remote persistence to the webcam. With the webcam acting as a proxy, the attacker can now send control traffic into the network to advance his attack, and likewise use the webcam to siphon out stolen data,” they noted. Limitations to this type of attack are obvious: attackers must be skilled enough to create a backdoored flash image, and find a way to deliver it to the device – either by “updating” an already deployed device, or by getting their hands on it before it’s installed. The advantages are obvious: “Putting a callback backdoor into a webcam, for example, gives a hacker full-time access to the network without having to rely on infecting a laptop, workstation or a server, all of which are usually under high scrutiny and may often be patched,” they explained. “On a tiny device, there is no anti-virus and no endpoint protection. In fact, no one thinks of the device as having software on it at all. This makes these devices potentially inviting for persistent attackers who rely on stealthy channels of command-and-control to manage their attacks.” “The irony in this particular scenario is that Wi-Fi cameras are typically deployed to enhance an organization’s physical security, yet they can easily become a network security vulnerability by allowing attackers to enter and steal information without detection,” pointed out Vectra Networks CSO Gunter Ollmann.
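The reason "fixing the file checksum was trivial" is worth spelling out: an unkeyed checksum or plain hash can be recomputed by anyone who modifies the image, so it only protects against accidental corruption. The sketch below is illustrative; the actual D-Link update format is not described in the article, so the layout and function names are assumptions.

```python
import hashlib
import zlib

# Illustrative only: the actual D-Link image layout and verification fields are
# not given in the article, so the names here are assumptions.
def unkeyed_checksum(image: bytes) -> int:
    # Anyone who edits the image can simply recompute this value, so it only
    # detects accidental corruption, not deliberate tampering.
    return zlib.crc32(image) & 0xFFFFFFFF

def digest(image: bytes) -> bytes:
    # Also recomputable by an attacker unless the digest itself is signed.
    return hashlib.sha256(image).digest()

# A robust update check would verify a vendor signature over the digest with a
# public key stored in the boot loader, conceptually:
#     vendor_public_key.verify(signature, digest(image))
# Without that signature step, recomputing the checksum after modifying the
# image is exactly as easy as the researchers describe.
```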
<urn:uuid:ea34ccbb-4aa0-48b8-b53e-8123f3747b6a>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2016/01/14/cheap-web-cams-can-open-permanent-difficult-to-spot-backdoors-into-networks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00032-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939303
687
2.59375
3
Crudely classifying these two: "LENGTH" is an intrinsic function that operates on variables, while "LENGTH OF" is a special register. Coding either FUNCTION LENGTH(Variable-Name) or LENGTH OF Variable-Name would return the same result: the length of Variable-Name in bytes. Whereas the LENGTH function can only be used where arithmetic expressions are allowed, the LENGTH OF special register can be used in a greater variety of contexts. For example, the LENGTH OF special register can be used as an argument to an intrinsic function that allows integer arguments. (An intrinsic function cannot be used as an operand to the LENGTH OF special register.) The LENGTH OF special register can also be used as a parameter in a CALL statement.
<urn:uuid:7f9593b2-d943-48f1-9046-9015754a5f98>
CC-MAIN-2017-04
http://ibmmainframes.com/about11297.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00510-ip-10-171-10-70.ec2.internal.warc.gz
en
0.74594
156
2.90625
3
Definition: A model of computation proposed by A. K. Chandra, L. Stockmeyer, and D. Kozen, which has two kinds of states, AND and OR. The definition of accepting computation is adjusted accordingly. See also time/space complexity, Turing machine. Note: First proposed as a model for parallel computation, it has been widely used to prove complexity bounds on problems. A. K. Chandra and L. J. Stockmeyer, Alternation, pages 98-108, and D. Kozen, On Parallelism in Turing Machines, pages 89-97, both in Proc. Seventeenth Annual IEEE Symposium on Foundations of Computer Science, 1976. If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 14 December 2005. HTML page formatted Mon Feb 2 13:10:39 2015. Cite this as: Sandeep Kumar Shukla, "alternation", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 14 December 2005. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/alternation.html
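The phrase "the definition of accepting computation is adjusted accordingly" can be unpacked with a small sketch. Here a configuration of the alternating machine is modeled simply as a labeled tree of successor configurations, an illustrative simplification rather than the formal definition: an OR (existential) configuration accepts if some successor accepts, while an AND (universal) configuration accepts only if all successors do.

```python
# A configuration is modeled as either a boolean (halting: accept/reject) or a
# pair (kind, successor_configurations) with kind "AND" or "OR". This is an
# illustrative simplification, not the formal machine definition.
def accepts(config):
    if isinstance(config, bool):
        return config
    kind, successors = config
    if kind == "OR":                       # existential state: some branch accepts
        return any(accepts(c) for c in successors)
    return all(accepts(c) for c in successors)   # "AND": every branch must accept

example = ("OR", [("AND", [True, False]), ("AND", [True, True])])
print(accepts(example))   # True: the second universal branch accepts on all paths
```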
<urn:uuid:f560ce6b-1aa3-43b1-b998-6ef1f1519729>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/alternation.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00510-ip-10-171-10-70.ec2.internal.warc.gz
en
0.876254
258
2.578125
3
Definition: A curve representing the limit of a function. That is, the distance between a function and the curve tends to zero. The function may or may not intersect the bounding curve. See also asymptotic upper bound, asymptotic lower bound. Note: After Ben Podoll <email@example.com> 28 August 2003. If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 17 December 2004. HTML page formatted Mon Feb 2 13:10:39 2015. Cite this as: Paul E. Black, "asymptotic bound", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/asymptoticBound.html
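One common way to state the definition formally, reading "the distance between a function and the curve tends to zero" literally, is the following (a hedged formalization, not part of the original entry):

```latex
g \text{ is an asymptotic bound for } f
\quad\Longleftrightarrow\quad
\lim_{x \to \infty} \bigl( f(x) - g(x) \bigr) = 0,
\qquad\text{i.e. } f(x) = g(x) + o(1) \text{ as } x \to \infty .
```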
<urn:uuid:c7706e55-5225-4fbf-b219-c46eafd77c30>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/asymptoticBound.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00510-ip-10-171-10-70.ec2.internal.warc.gz
en
0.805941
201
2.515625
3
Public health planners have a new tool to help them prepare for one of the most daunting public health emergencies: an influenza pandemic. PandemicPractices.org, launched Monday by the Center for Infectious Disease Research & Policy (CIDRAP) at the University of Minnesota and the Pew Center on the States (PCS), a division of The Pew Charitable Trusts, brings together more than 130 peer-reviewed promising practices from four countries, 22 states and 33 counties. Compiled as a resource to save communities and states time and resources, the database enables public health professionals to learn from each other and to build on their own pandemic plans. "The federal government has a national plan in place for a flu epidemic. But that plan will be useless unless states and local communities are ready and able to handle a public health emergency on the ground," said Jim O'Hara, managing director of Health and Human Services Policy at The Pew Charitable Trusts. "Communities across the country are facing the challenge of translating broad requirements into local action, often with limited resources. This database is an excellent tool to help public health officials inform their own pandemic planning and may save valuable time and resources that would be spent crafting strategies from scratch." Every winter, seasonal flu kills approximately 36,000 Americans and hospitalizes more than 200,000. Occasionally, a new flu virus emerges for which people have little or no immunity. Such a virus will spread worldwide, causing illnesses and deaths far beyond the impact of seasonal flu, in an event known as a pandemic. A severe flu pandemic will last longer, sicken more people, and cause more death and disruption than any other health crisis. In addition to the human toll, a flu pandemic will take a serious financial toll. One report predicts a range -- from a global cost of approximately $330 billion in a mild pandemic scenario, to $4.4 trillion worldwide under a 1918- like scenario. Planning for a flu pandemic represents a challenge in public health. No one can predict the severity of the next pandemic, and there is a shortage of data from past pandemics to help guide planning. Despite the hard work of professionals across the public health community, America is unprepared for even a moderate pandemic. "It is crucial that states, counties and cities continually enhance their preparedness for pandemic influenza," said Michael Osterholm, PhD, MPH, CIDRAP director. "This online database represents an important step by providing concrete, peer-reviewed materials to further public health preparedness." PandemicPractices.org highlights approaches that communities across America have developed to address three key areas: altering standards of clinical care, communicating effectively about pandemic flu and delaying and diminishing the impact of a pandemic. Users can easily find practices applicable to their communities. The database can be searched by state or topic, as well as by area of special interest, such as materials translated into multiple languages, materials for vulnerable populations, or toolkits for schools. "Communities across America are looking for information and resources to help them plan for a flu pandemic. This database will be a vital contribution to those efforts," said Isaac Weisfuse, MD, MPH, deputy commissioner, New York City Department of Health and Mental Hygiene, who served as an Advisory Committee member and reviewer on this project. 
Planners can examine and download pandemic flu planning materials and use or adapt them to fit local needs. The database allows cities, counties, states, hospitals, clinics and community organizations to find materials that may enhance their pandemic preparedness. Even agencies whose work is included can benefit from the work of others. For example, communities that have developed strong risk communications practices can learn from their peers who have focused on expanding the health care workforce to meet the needs of an influx of patients.
<urn:uuid:29cbeaa2-674f-48df-a219-c2ecd4cfb5c6>
CC-MAIN-2017-04
http://www.govtech.com/e-government/Online-Database-Showcases-Local-County-and.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00234-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944474
784
3.28125
3
The enthusiasm for bring-your-own-device (BYOD) programs has spread beyond private sector companies, with employees at government agencies also interested in making use of their personal phones and tablets. A new report by InformationWeek found that the principles of BYOD have gone one step further, with schools turning to user-brought devices to deepen the educational experience. The source suggested that the introduction of personal phones and tablets could completely change the way education is administered. According to the news source, education-focused companies have plans for mobile device use that allow teachers to access analyses of student-input data. In higher education especially, smart devices are becoming more widespread. Finding a way to tap into students’ attachment to their phones and tablets could help administrators increase engagement and act as a helpful way to collect and aggregate digital data with no scanning or conversion. InformationWeek noted that schools will likely take on complex and in-depth reforms, rather than simply transposing standard textbooks to function in a mobile environment. The source noted that analytics technology can now measure the quirks of individual learning styles, allowing leaders to customize the curriculum more effectively to each student. Work performed on tablets or recorded with phones could give the information needed for this method, and BYOD could ensure projects take off without schools investing heavily in hardware expenditures. Students may not be the only ones connecting to educational networks from afar. The addition of a digital framework and tools to a school’s offerings could have massive implications for the staff, allowing them full access to data at any time through BYOD. Educators with their own devices can harness zero client access systems to log in to their sophisticated new systems at any time and from any device. No longer dependent on physical access to servers or a desktop system, staff members can increase their own engagement with the data, even as the data itself becomes more descriptive and helpful. According to Brookings research, a digital focus could change the way teachers understand their students. The source noted that modern classrooms can include a focus on learning that is based on real-time data accumulation. While educators have to wait for tests and quizzes to gather an understanding of student accomplishment under the current system, an approach more rooted in digital technology could make assessment a constantly updated process. Industry news brought to you by Ericom Software, leaders in Industry IT solutions.
<urn:uuid:01383c64-aa17-4697-94b7-07ef2268de79>
CC-MAIN-2017-04
https://www.ericom.com/communities/blog/byod-tactics-spread-education
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00354-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959319
476
2.640625
3
Device fingerprinting, i.e., collecting information from a device for the purposes of identification, is one of the main techniques used by online services for mobile fraud detection. The goal is to recognize “bad” devices used by fraudsters, such that they can be identified even when other attributes (such as user names or IP addresses) change. In the browser era, device fingerprints typically took the form of browser and OS configuration information and/or persistent HTTP cookies. However, as more and more online services shift to a “mobile-first” or “mobile-only” strategy, device fingerprinting technology also took on an entirely new form. The tracking entity (often the mobile app) now resides on the device itself, rather than remotely as in web communications. They hence have access to mobile device identifiers, sensor readings, and other contextual information that enable more accurate device fingerprints — and potentially better mobile fraud detection solutions. This did not deter fraudsters. Device fingerprinting may be good at identifying known bad mobile devices, but there is little they can provide on never-before-seen devices that lack reputation history. Fraudsters exploit this information void to their advantage. By simulating the appearance of multiple distinct mobile devices, they can conduct large-scale attack campaigns that look as if they are from unique legitimate users. This allows them to avoid detection and reap gains from fraudulent transactions, ad campaigns, mobile fraud and mass registered fake accounts, just to name a few. Mobile Device Flashing A common technique for simulating the appearance of multiple new, distinct mobile devices is called device flashing. On mobile devices, the operating system initializes and controls the system configuration (for legacy reasons, the operating system on mobile devices is sometimes also called “firmware” or “ROM image”). By “flashing,” or overwriting, the current version of the operating system with a custom version, it is possible to reset the device to its factory state. This effectively erases all stored data, and forces a new device identifier to be generated. For example, ANDROID_ID, the unique identifier for Android phones, is randomly generated when the user initializes the device. Recently, the DataVisor team observed mobile device flashing used in a mobile fraud attack to perform fraudulent purchases within a popular mobile game app. The fraudsters acted as “brokers” to purchase virtual items on the gamers’ behalf, leveraging stolen credit cards and virtual currency arbitrage to make a profit. Device flashing is used here to avoid raising suspicion from having too many accounts (the gamer accounts) associated with the same device (the fraudster “broker’s” phone). The table below shows this mobile fraud attack in action. Each row corresponds to an event logged by the mobile game app. The attacker repeatedly logged on as different users (gamer IDs) to make purchases, without generating any other types of events indicative of actual game play. As shown in the “DEVICE_ID” column, they also switched out their device identifiers frequently – after every couple of users – such that each “device” will only be used by a very small number of users, similar to legitimate devices. Spoofing via Intercepting System Calls In addition to mobile device flashing, spoofing is another way of simulating the appearance of multiple devices. 
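A simple way to see why the pattern in the table stands out is to group events by device identifier and source address. The heuristic below is purely illustrative; the field names and thresholds are assumptions made for the example and this is not DataVisor's actual detection logic.

```python
from collections import defaultdict

# Purely illustrative heuristic; the event fields and thresholds are assumptions.
def flag_flashing_sources(events, max_users_per_device=3, min_devices=10):
    """events: iterable of dicts with 'ip', 'device_id', 'user_id', 'event_type'."""
    users_per_device = defaultdict(set)
    devices_per_ip = defaultdict(set)
    only_purchases = defaultdict(lambda: True)

    for e in events:
        users_per_device[e["device_id"]].add(e["user_id"])
        devices_per_ip[e["ip"]].add(e["device_id"])
        if e["event_type"] != "purchase":
            only_purchases[e["device_id"]] = False

    suspects = []
    for ip, devices in devices_per_ip.items():
        # Many short-lived device IDs behind one source, each tied to only a few
        # accounts and generating nothing but purchases, matches the broker
        # pattern in the table above.
        disposable = [d for d in devices
                      if len(users_per_device[d]) <= max_users_per_device
                      and only_purchases[d]]
        if len(disposable) >= min_devices:
            suspects.append(ip)
    return suspects
```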
This abuses the fact that apps obtain device identifiers and other system information through system calls. On jailbroken/rooted devices, or for apps that have been maliciously repackaged, these calls can be intercepted and given a fake value – whether it’s device identifiers, sensor readings, or any other contextual information about the device’s surroundings. As an example of spoofed identifiers, the figure below shows invalid “MAC addresses” observed by a mobile app that are not even in hexadecimal representation, which is a base 16 system that should only contain the symbols 0-9 and A-F (or a-f). The table below shows examples of fake signup events at an online social network app, which offers newly registered users a limited number of virtual currency that they can use to purchase virtual items or trade with other users. In this attack, hundreds of users registered from the same IP subnet, each with a different MAC address and randomly generated usernames. By mass registering fake accounts this way, the attackers can harvest virtual currency to resell for profit, all while evading standard detection techniques based on unique device identifiers. The Uses and Limits of Mobile Device Fingerprinting Device fingerprinting technology has advanced greatly in the mobile era, primarily due to the many fine-grained “identifiers” available on mobile devices that were not previously accessible on PCs. However, as we show in the above examples, fraudsters have adapted their techniques to circumvent mobile device fingerprinting – and the security solutions that rely on them. With these sophisticated obfuscation techniques, a group of bot accounts (controlled by the same attackers) can appear to originate individually from different devices and geolocations, just like legitimate users. More importantly, these attacks illustrate the importance of understanding the caveats and limitations of mobile device fingerprinting. “Unique” identifiers may not always be what they seem, and even then, recognizing a returning device is not the same as identifying fraud. As the security landscape continues to evolve with new technology, online services also need to be aware of these new threats and be prepared to deal with them.
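A basic sanity check catches the malformed "MAC addresses" described above: a real MAC is six hexadecimal octets. Note the limits of such a check, however: a value that passes says nothing about authenticity, since a well-formed address can still be spoofed, while a value that fails is only a cheap signal of careless spoofing. The separator handling below is an assumption about how an app might report the value.

```python
import re

# A real MAC address is six hexadecimal octets; the ":" or "-" separators here
# are an assumption about how the app reports the value.
MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}$")

def plausible_mac(value: str) -> bool:
    return bool(MAC_RE.match(value.strip()))

print(plausible_mac("8C:1A:BF:42:00:9E"))   # True
print(plausible_mac("ZZ:99:GG:12:34:56"))   # False: not hexadecimal, so spoofed or garbled
```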
<urn:uuid:09311973-b7d0-40f4-bede-5670e44b77ef>
CC-MAIN-2017-04
https://www.datavisor.com/threat-blogs/mobile-fraudsters-gone-in-a-device-flash/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00078-ip-10-171-10-70.ec2.internal.warc.gz
en
0.91902
1,133
3.015625
3
Astronomers Discover 'Earth-Like' Planet / December 6, 2011 The most Earth-like planet yet has been discovered outside our solar system -- and its surface temperature averages a pleasant 72 degrees. Researchers announced Monday at NASA's Ames Research Center in Moffett Field, Calif., that the planet was discovered by the Kepler space telescope and orbits a star about 600 light years away – within its "habitable" zone (and considered close by astronomical standards). "It is right smack in the middle of the habitable zone," Kepler scientist Natalie Batalha told USA Today. The planet, named Kepler-22b, could harbor oceans on its surface, like Earth does, since liquid water is considered vital for the development of life. The Kepler telescope also has discovered more than 1,000 new planet candidates, all of which require follow-up observations to verify they are actual planets. Kepler-22b is about 2.4 times the radius of Earth, and scientists don't yet know if it has a predominantly rocky, gaseous or liquid composition. Previous research hinted at the existence of near-Earth-size planets in habitable zones, but clear confirmation proved elusive. Two other small planets orbiting stars smaller and cooler than our sun recently were confirmed on the very edges of the habitable zone, with orbits more closely resembling those of Venus and Mars. "This is a major milestone on the road to finding Earth's twin," said Douglas Hudgins, Kepler program scientist at NASA Headquarters in Washington, in a press release. "Kepler's results continue to demonstrate the importance of NASA's science missions, which aim to answer some of the biggest questions about our place in the universe."
<urn:uuid:b77b8d0e-74e8-48c4-9c5a-5166cf6b9b8c>
CC-MAIN-2017-04
http://www.govtech.com/photos/Photo-of-the-Week-Astronomers-Discover-Earth-Like-Planet-12062011.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00564-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940963
350
3.5625
4
Researchers from the University of Washington (UW) have created a new Passive Wi-Fi system that consumes 10,000 times less power than conventional Wi-Fi and 1,000 times less than existing energy efficient methods, such as Bluetooth Low Energy (LE). This could be good news for IoT. The technology has also been named one the year’s 10 breakthrough technologies by MIT Technology Review and it could have a huge impact on the Internet of Things (IoT) industry. Although the ubiquity of Wi-Fi is one of its strongest advantages, the power it consumes is an issue, particularly as more connected devices begin to enter the market. Passive Wi-Fi transmits signals at a bit rate that is lower than the maximum Wi-Fi speed but 11 times higher than Bluetooth, but all devices with Wi-Fi connectivity are capable of decoding signals sent by Passive Wi-Fi. Reducing energy consumption for IoT The researchers at UW were able to vastly reduce the power consumption of their Wi-Fi signals by decoupling the digital and analogue components of radio communications. The power-intensive analogue functions are assigned to a single device plugged into the wall, which produces the Wi-Fi signal. The passive sensors then simply reflect and absorb that signal using a digital switch, meaning energy consumption is kept to a minimum. “All the networking, heavy-lifting and power-consuming pieces are done by the one plugged-in device,” explained Vamsi Talla, co-author of a paper detailing the technology. “The passive devices are only reflecting to generate the Wi-Fi packets, which is a really energy-efficient way to communicate.” Passive Wi-Fi will face competition from other low energy communications, all seeking to dominate the Internet of Things market. Long Range Radio, or LoRa, is already being trialled in several world cities, while a number of businesses demonstrated their Bluetooth IoT offerings at Mobile World Congress last month. However, Passive Wi-Fi does hold some advantages, namely that Wi-Fi is already present in so many homes, meaning that integration between IoT networks becomes less of a problem. The technology has also demonstrated impressive range, with researchers able to connect smartphones and passive sensors at distances of up to 100 feet.
<urn:uuid:97f5702d-3c33-4ae1-b1d6-3b51809c2e78>
CC-MAIN-2017-04
https://internetofbusiness.com/passive-wi-fi-could-solve-iot-power-problems/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00227-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950565
465
3.203125
3
University researchers are studying the brains of honey bees in an attempt to build an autonomous flying robot. By creating models of the systems in a bee's brain that control vision and sense of smell, scientists are hoping to build a flying robot that can do more than carry out pre-programmed instructions. Such a robot would be able to sense and act as autonomously as a bee. Researchers at the University of Sheffield and the University of Sussex in England are teaming up to take on what they call one of the major challenges of science today -- building a robot with artificial intelligence good enough to perform complex tasks as well as an animal can. If that's possible, the flying robot would be able to use its "sense of smell" to detect gases or other odors and then home in on the source. "The development of an artificial brain is one of the greatest challenges in artificial intelligence," said James Marshall, lead project researcher at the University of Sheffield. "So far, researchers have typically studied brains such as those of rats, monkeys and humans. But actually simpler organisms, such as social insects, have surprisingly advanced cognitive abilities." The universities are using GPU accelerators, donated by Nvidia, to perform the massive calculations needed to simulate a brain using a standard desktop PC, instead of a far more expensive supercomputer. Mixing brain and robotic research isn't new. Duke University researchers reported in 2008 that they had worked with Japanese scientists to use the neurons in a monkey's brain to control a robot. Scientists hoped the project would help them find ways to give movement back to people suffering from paralysis. That research came on the heels of work done in 2007 at the University of Arizona, where scientists successfully connected a moth's brain to a robot. Linked to the brain of a hawk moth, the robot responded to what the moth was seeing and was able to move out of the way when an object approached the moth. Scientists working on the moth project five years ago predicted that people will be using "hybrid" computers -- a combination of hardware and living organic tissue -- sometime between 2017 and 2022. In the research on bees' brains, the scientists said they hope their findings can be used to build flying robots that could, for example, be used in search and rescue missions, perhaps to gather information that rescue teams could use to make decisions about how to proceed. "Not only will this pave the way for many future advances in autonomous flying robots, but we also believe the computer modeling techniques we will be using will be widely useful to other brain modeling and computational neuroscience projects," said Thomas Nowotny, project leader at the University of Sussex. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin and on Google+, or subscribe to Sharon's RSS feed . Her email address is firstname.lastname@example.org.
<urn:uuid:4007751e-7feb-4a51-83f3-b8fdf659e331>
CC-MAIN-2017-04
http://www.computerworld.com/article/2491852/emerging-technology/researchers-study-bee-brains-to-develop-flying-robots.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00043-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95012
602
4.09375
4
In fiber optic communication, the visible-light or infrared (IR) beams carried by a fiber are attenuated as they travel through the material. This is where the fiber optic amplifier comes in: it is used to compensate for the weakening of the signal during transmission. Amplifiers are inserted at specific places to boost optical signals in a system where the signals are weak. This boost allows the signals to be successfully transmitted through the remaining cable length. In large networks, a long series of optical fiber amplifiers are placed in a sequence along the entire network link. Common fiber optical amplifiers include the Erbium-Doped Fiber Amplifier (EDFA), the Raman fiber amplifier, and the semiconductor optical amplifier (SOA). The erbium-doped fiber amplifier is the major type of fiber amplifier used to boost the signal in WDM fiber optic systems; as we know, it is WDM that increases the capacity of the fiber communications system, and it is the erbium-doped fiber amplifier that makes WDM transmission possible. Fiber amplifiers are developed to support Dense Wavelength Division Multiplexing (DWDM), in which case they are called DWDM EDFA amplifiers, and to expand to the other wavelength bands supported by fiber optics. There are several different physical mechanisms that can be used to amplify a light signal, which correspond to the major types of optical amplifiers. In doped fiber amplifiers and bulk lasers, stimulated emission in the amplifier's gain medium causes amplification of incoming light. In semiconductor optical amplifiers (SOAs), electron-hole recombination occurs. In Raman amplifiers, Raman scattering of incoming light with phonons in the lattice of the gain medium produces photons coherent with the incoming photons. Parametric amplifiers use parametric amplification. When light is transmitted through matter, part of the light is scattered in random directions. A small part of the scattered light has frequencies removed from the frequency of the incident beam by quantities equal to the vibration frequencies of the material scattering system. Raman fiber optic amplifiers function within this small scattering range. If the initial beam is sufficiently intense and monochromatic, a threshold can be reached beyond which light at the Raman frequencies is amplified, builds up strongly, and generally exhibits the characteristics of stimulated emission. This is called the stimulated or coherent Raman effect. An EDFA functions by adding erbium, a rare-earth element, to the fiber core material as a dopant, typically in levels of a few hundred parts per million. The fiber is highly transparent at the erbium lasing wavelength of about 1.55 microns. When pumped by a laser diode, optical gain is created, and amplification occurs. A silicon or semiconductor optical amplifier functions in a similar way to a basic laser. The structure is much the same, with two specially designed slabs of semiconductor material on top of each other, with another material in between them forming the “active layer”. An electrical current is set running through the device in order to excite electrons which can then fall back to the non-excited ground state and give out photons. The incoming optical signal stimulates emission of light at its own wavelength. A fiber optic repeater can also re-amplify an attenuated signal, but it can only function on a specific wavelength and is not suitable for WDM systems. That is the reason why the optical fiber amplifier plays a much more important role in communication systems.
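A back-of-the-envelope calculation shows why amplifiers are "inserted at specific places." The numbers below are typical textbook values (roughly 0.2 dB/km of loss at 1550 nm and about 20 dB of EDFA gain), not figures from the text, so treat the result as an order-of-magnitude sketch.

```python
# Back-of-the-envelope sketch with typical textbook values, not vendor figures.
FIBER_LOSS_DB_PER_KM = 0.2
EDFA_GAIN_DB = 20.0
RECEIVER_SENSITIVITY_DBM = -28.0

def amplifier_spacing_km(launch_power_dbm=0.0, margin_db=3.0):
    """Distance after which the signal needs a boost to stay above the
    receiver sensitivity, given the assumed loss per kilometre."""
    allowed_loss_db = launch_power_dbm - RECEIVER_SENSITIVITY_DBM - margin_db
    return allowed_loss_db / FIBER_LOSS_DB_PER_KM

span = amplifier_spacing_km()
print(f"place an amplifier roughly every {span:.0f} km")   # about 125 km
print(f"each EDFA restores about {EDFA_GAIN_DB:.0f} dB of that loss")
```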
<urn:uuid:285bfdcb-c306-40aa-9488-6cf8c48a96af>
CC-MAIN-2017-04
http://www.fs.com/blog/technology-of-fiber-optic-amplifiers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00429-ip-10-171-10-70.ec2.internal.warc.gz
en
0.921476
709
3.6875
4
The fiber optic coupler is a widely used type of passive component whose basic function is to distribute optical power, and in some designs wavelengths, among fibers. The single-mode fiber coupler is a passive component with very broad application in optical fiber communication systems, fiber optic sensors, fiber optic measurement techniques and signal processing systems. We use electronic couplers constantly, like a telephone coupler that lets you connect both a phone and a fax machine to the same telephone line. Optical couplers have similar functionality: they split the signal to multiple points (devices). Fiber optic couplers are needed for tapping (monitoring the signal quality) or for more complex telecommunication systems that require much more than simple point-to-point connections, for example ring, bus and star architectures. Fiber optic couplers can be either passive or active devices. The difference is that a passive coupler redistributes the optical signal without optical-to-electrical conversion, while active couplers are electronic devices that split or combine the signal electrically and use fiber optic detectors and sources for input and output. There are mainly three manufacturing technologies for fiber optic couplers: micro optics, planar waveguide and fused fiber. Micro optics technologies use individual optical elements such as prisms, mirrors and lenses to construct an optical route that functions like a coupler. This is an expensive approach and not as common as the other two. Planar waveguide couplers are more like semiconductors, such as PLC splitters: a planar wafer is used to create the waveguides, and they are more often employed for high port count couplers with, for instance, 12, 24 or 36 output ports. Fused-fiber couplers, or FBT couplers, make use of the simplest material, optical fibers. Multiple fiber cores are melted together, which lets light transfer among them. The fused technique melts two optical fibers together and stretches them until their cores couple optically. The fiber splicing equipment is the most critical piece of hardware, and the fusing itself is the most important step. Some significant steps are handled by the machine itself, but after fusing, the device still has to be packaged manually. This method has clear advantages in production efficiency and product performance, and these days it is the primary method for manufacturing optical fiber couplers; the properties of couplers produced this way are significantly better than before. Nonetheless, with the large number of applications in the military, aerospace and other high-tech fields, the requirements placed on fiber couplers for insertion loss flatness, polarization sensitivity, device reliability, bandwidth and power handling have become increasingly demanding. These practical needs place correspondingly higher demands on the manufacturing process, and scientists have done a great deal of research into various manufacturing techniques.
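The insertion loss mentioned above follows directly from the powers measured at the coupler ports. The values in this sketch are assumed figures for a roughly 50/50 1x2 fused coupler, used only to show the arithmetic; real datasheet values will differ.

```python
import math

# Assumed measurements for a roughly 50/50 1x2 fused coupler.
p_in_mw = 1.00                                # power launched into the input port
p_out_mw = {"port A": 0.48, "port B": 0.47}   # power measured at the two outputs

def insertion_loss_db(p_in, p_out):
    return 10 * math.log10(p_in / p_out)

for port, p in p_out_mw.items():
    print(f"{port}: insertion loss = {insertion_loss_db(p_in_mw, p):.2f} dB")

total_out = sum(p_out_mw.values())
excess_loss_db = 10 * math.log10(p_in_mw / total_out)
split_a = 100 * p_out_mw["port A"] / total_out
print(f"excess loss = {excess_loss_db:.2f} dB, split ratio = {split_a:.0f}/{100 - split_a:.0f}")
```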
<urn:uuid:3b041406-2df0-4af9-9638-a4036d438236>
CC-MAIN-2017-04
http://www.fs.com/blog/manufacturing-techniques-of-fiber-optic-coupler.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00547-ip-10-171-10-70.ec2.internal.warc.gz
en
0.913794
598
3.578125
4
A microcontroller can be considered as a self-contained system with a processor, memory, and peripherals and can be used as an embedded system. Most of the microcontrollers used today are embedded in other machinery, such as automobiles, telephones, appliances, and peripherals for computer systems. Microcontrollers can be found equipped in a large number of products these days. If your microwave oven has an LED or LCD screen and a keypad, it contains a microcontroller. All modern automobiles contain at least one microcontroller, and can have as many as six or seven: The engine is controlled by a microcontroller, as are the anti-lock brakes, the cruise control, and so on. Any device that has a remote control almost certainly contains a microcontroller. TVs, VCRs and high-end stereo systems all fall into this category. Basically, any product or device that interacts with its user has a microcontroller equipped. Latin American Microcontroller market report segments the market by application and components. On the basis of components market is segmented into ROM, RAM, and EEPROM. On the basis of application, the market is segmented into Automation, Building Technology, Communications and Networking, Computer and Personal Multimedia, Energy and Smart Grid, Healthcare and Wellness, Home Appliances and Power Tools, LED and General Lighting, Motor Control, Multimedia Convergence, Power Supplies and Converters, and Transportation. The market is further segmented on the basis of countries, such as Brazil and Argentina. The current and future market trends for each country have been analyzed in this report. Porter’s five force model analysis, along with the market share of leading players and competitive landscaping are included in the report. This report also includes the market share analysis, by revenue, of the leading companies. The market share analysis of these key players is arrived at, based on key facts, annual financial information, and interviews with key opinion leaders, such as CEOs, directors, and marketing executives. In order to present an in-depth understanding of the competitive landscape, the report on Latin American Microcontroller market provides company profiles of the key market players. With market data, you can also customize MMM assessments that meet your company’s specific needs. Customize to get comprehensive industry standard and deep dive analysis of the following parameters: 1. Data from Manufacturing Firms - Fast turn-around analysis of manufacturing firms with response to recent market events and trends - Opinions from various firms about different applications where Microcontroller can be used - Qualitative inputs on macro-economic indicators and mergers & acquisitions in each country 2. Shipment/ Volume Data - Value of components shipped annually in each geography tracked 3. Trend analysis of Application - Application Matrix, which gives a detailed comparison of application portfolio of each company, mapped in each geography 4. Competitive Benchmarking - Value-chain evaluation using events, developments, market data for vendors in the market ecosystem, across various industrial verticals and market segmentation - Seek hidden opportunities by connecting related markets using cascaded value chain analysis Please fill in the form below to receive a free copy of the Summary of this Report Please visit http://www.micromarketmonitor.com/custom-research-services.html to specify your custom Research Requirement
<urn:uuid:c3395de3-7196-4d4a-994f-90bcd2c9525b>
CC-MAIN-2017-04
http://www.micromarketmonitor.com/market/latin-america-microcontroller-7821036931.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00547-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930519
687
2.703125
3
The practice of altering Caller ID information, whether for fraudulent purposes or otherwise, has come to be referred to as "spoofing." Caller ID spoofing refers to the alteration of caller ID information (number and/or name) by the originator of a telephone call. Caller ID spoofing is not, in itself, illegal and it may have legitimate uses: - Some businesses with a large number of employees may alter the Caller ID information in order to provide a single number for those customers returning calls placed by its employees. - Telemarketers are required to transmit caller ID information, but they are allowed to substitute the name of the seller on behalf of which the telemarketing call is placed and the seller's customer service telephone number. Unfortunately, to the detriment of some telephone subscribers, the ability to manipulate Caller ID information enables a practice known as "vishing." Vishing is the practice of leveraging IP-based voice message technologies to socially engineer the intended victim into providing personal, financial or other confidential information for the purpose of financial reward. Caller ID spoofing is used in support of vishing: - By changing Caller ID data, this can help the vishers reinforce their social engineering story as well as make it more difficult to track the source of an attack. Vishing is expected to have a high success rate because: - Telephone systems have a much longer record of trust than newer, Internet-based messaging. - A greater percentage of the population can be reached via a phone call than through email. - There is widespread adoption and general acceptance of automated phone validation systems. - The telephone makes certain population groups, such as the elderly, more reachable. - Timing of message delivery can be leveraged to increase odds of success. - The telephone allows greater personalization of the social engineering message. - Increased use of call centers means that the population is more accepting of callers from foreign countries asking for confidential information. The most profitable uses of the information gained through a vishing attack include: - Controlling the victim's financial accounts - Purchasing luxury goods and services - Identity theft - Making applications for loans and credit cards - Transferring funds, stocks and securities - Hiding criminal activities, such as money laundering - Obtaining personal travel documents - Receiving government benefits Caller ID spoofing is relatively easy to accomplish. For individuals with little to no computer knowledge, spoofing services are readily available over the Internet from such providers as SpoofCard.com, CallerIDFaker.com, PhoneGangster.com, telespoof.com, and numerous others. Spoofing practices will likely vary considerably depending upon the spoofer's purpose and scale of activity, whether the spoofers are casual pranksters forging identities or whether they are more committed, organized, large-scale criminal operators. The Office of the Attorney General (OAG) recommends that the MN Public Utilities Commission continue its effort to determine whether a technological solution to Caller ID Spoofing is feasible. It should be the Commission, not the industry, which determines whether the costs associated with a technological solution outweigh the public benefit of accurate and reliable Caller ID information. 
Until such a solution is implemented, the OAG recommends that regulatory entities, as well as Caller ID providers, educate consumers about the severe limitations of Caller ID service.
<urn:uuid:f790f90a-7963-40c3-8949-5ab4652c7a97>
CC-MAIN-2017-04
https://www.consolidated.com/support/alerts/caller-id-spoofing-awareness
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00181-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915238
693
2.625
3
Intel researchers say that even tiny steps can make huge improvements in computers and security.
SAN FRANCISCO -- Intel researchers are envisioning a new way to de-worm PCs. The computer chip giant here at its Intel Developer Forum on Thursday discussed technology designed to head off computer worm and virus attacks in PCs by stopping the agents before they can begin to spread and attack other systems.
Generally, Intel Corp.'s researchers are attacking a broad range of subjects inside its various labs, ranging from new microprocessor features such as advanced transistors to software and communications technologies. One effort, for example, is working on integrating radios into a broad range of chips so as to create ad hoc networks. Often, the labs' projects, whose time frames range from a year to 15 years, become part of Intel's products, including its processors and their enabling chip sets for PCs. At other times, the company contributes technology to the broader computer market through licensing or by contributing it to standards bodies.
Justin Rattner, director of Intel corporate technology, demonstrated several of the company's latest efforts, including the Manageability Engine, hardware that serves to augment protective software by helping to quickly detect the beginnings of an infection in a PC and cut that machine off from the network before the worm can spread to other machines.
"The problem is worms and viruses propagate so quickly that if you're not able to respond in a matter of minutes the situation [gets] out of control," Rattner said during his Thursday IDF keynote address. Thus "we've been working on technologies that will help systems protect themselves and not harm the environment around them."
The Manageability Engine essentially works by measuring activity associated with worms and viruses, such as the number of connections per second a PC is attempting to make to a computer network. Because it looks for a pattern of behavior, it can recognize new attacks, which might not have been seen before. Upon sensing worm-like activity, the engine can work with elements of a PC's operating system to respond, ensure protections are not circumvented and, if needed, break the network connection.
"We think this is really exciting research," Rattner said. "If we can create systems with this kind of feature -- the ability to do no harm, here in the sense of not spreading a virus or worm to another system -- the benefit to users will be enormous."
Rattner demonstrated the technology running on a network with 50 systems during the keynote speech. The technology, which is still only running in Intel's labs, could be added into future Intel hardware, such as a network connector.
Rattner, in his keynote address, also demonstrated:
- The Diamond Project, with interactive data exploration for search. In the demonstration, a photo of Rattner was found from among 85,000 photos by searching on faces and then the color blue, which was the color of the shirt he was wearing in the photo.
- Precision Location Technology, a Wi-Fi-specific technology that can help add security to wireless networks by triangulating where users are located. A user outside a given boundary, such as the walls of a home or a business, could be denied access. Intel aims to lend the technology to the 802.11 standards body, Rattner said. Meanwhile, there is "no fundamental reason" the approach couldn't be extended to other wireless technologies as well, he told attendees at a post-keynote Q&A.
- Finer power management, using a faster voltage regulator that shifts up and down in fractions of microseconds, saving wasted electricity. Intel's demonstration paired a processor, chip set and voltage regulator together on a daughter card, which could be added to notebooks fairly easily. The company estimated the better voltage regulation could reduce power consumption by 15 percent to 30 percent compared to today's notebooks without affecting performance. Ultimately, the technology might be applied to a multicore processor to regulate power for its individual cores, Rattner said. "There's an example of technology working at a very deep level in the system in order to deliver user value at the top level," he said.
Check out eWEEK.com for the latest news in desktop and notebook computing.
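The connection-rate heuristic attributed to the Manageability Engine above can be illustrated with a short sketch. This is a generic rate-based detector in Python, not Intel's implementation; the one-second window and the 20-connection threshold are illustrative assumptions, as is the way connection attempts are fed in.

```python
import time
from collections import deque

# Illustrative sketch of rate-based worm detection: count outbound connection
# attempts in a sliding one-second window and raise an alarm when the rate
# exceeds a threshold. Window and threshold values are assumptions, not
# figures from Intel's Manageability Engine.

WINDOW_SECONDS = 1.0
MAX_ATTEMPTS_PER_WINDOW = 20   # hypothetical policy value

class ConnectionRateMonitor:
    def __init__(self, window=WINDOW_SECONDS, limit=MAX_ATTEMPTS_PER_WINDOW):
        self.window = window
        self.limit = limit
        self.events = deque()          # timestamps of recent connection attempts

    def record_attempt(self, now=None):
        """Record one outbound connection attempt; return True if it looks worm-like."""
        now = time.monotonic() if now is None else now
        self.events.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.limit

monitor = ConnectionRateMonitor()
for i in range(25):                     # simulate a burst of connection attempts
    if monitor.record_attempt(now=i * 0.01):
        print(f"suspicious burst detected at attempt {i}; isolating host")
        break
```

In a real system the response step (cutting the machine off from the network) would be handled by hardware or the operating system rather than by the detector itself.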
<urn:uuid:9ed95090-ca2e-44e6-be80-8957729d57f8>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Desktops-and-Notebooks/Researchers-Chase-Away-Worms-WiFi-Bandits-at-Intel
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00391-ip-10-171-10-70.ec2.internal.warc.gz
en
0.955559
874
2.75
3
The international competition to build an exascale supercomputer is gaining steam, especially in China and Europe, according to Peter Beckman , a top computer scientist at the U.S. Department of Energy's Argonne National Laboratory. An exascale system will require new approaches in software, hardware and storage. It is why Europe and China, in particular, are marshaling scientists, research labs and government funding on exascale development. They see exascale systems as an opportunity to build homegrown technology industries, particularly in high-performance computing, according to Beckman. An exascale system is measured in exaflops; an exaflop is 1 quintillion (or 1 million trillion) floating point operations per second. It is 1,000 times more powerful than a petaflop system, the fastest systems in use today. The Department of Energy (DOE) is expected to deliver to Congress by Feb. 10 a report detailing this nation's plan to achieve exascale computing. The government recently received responses from 22 technology firms to its request for information (RFI) about the goal to develop an exascale system by 2019-2020 that uses no more than 20 megawatts (MWs) of power. To put that power usage in perspective, a 20-petaflop system being developed by IBM, which will likely be considered one of the most energy efficient in the world, will use seven to eight MWs. Beckman, the director of the Exascale Technology and Computing Institute at DOE's Argonne National Laboratory, talked with Computerworld about current developments in exascale. Excerpts from that interview follow: The Department of Energy wants an exascale system by 2019-2020, and one that operates on no more than 20MW. What did DOE learn from the tech industry responses? About 22 companies replied. [DOE isn't disclosing the names of the responding companies.]They had a wide range of types of companies. Some were integrators; some were chip designers, software companies. All of them said that this is a great challenge and that we think we can make fantastic progress on this, but it will be really hard. We're setting pretty lofty goals, hard things. But if you start out saying that 100MW will be just fine, then you're not really pushing the envelope. The 20MW is very difficult to achieve, but we want to see new technology to make that happen, and so all of them, universally, said that was hard. Did they ask you to adjust the 20MW requirements? All the responders said it would be a difficult target to reach without a strong investment. If we allowed them twice as much power, 40MW or 50MW, then it is much simpler. They also said that the system software and the whole software stack required an integrated approach. Most of the responses, I would say, were light on the data challenges. People know that data is a challenge, but they really focused, in the responses, on the computing. What is the exascale data challenge? If we imagine that we have a machine that is an exascale, exaflop machine, generating petabytes and petabytes of data, it becomes its own, in some sense, computation problem. We can't solve the bandwidth storage problem by just buying more disks. A multi-level plan is what will have to evolve, including NVRAM and even novel technologies such as phase change memory . But there has to be a comprehensive data solution that includes analysis. It can't be, 'Oh, we just need to be able to store the data.' We need to look up the architecture necessary to analyze the data. 
If you look at Google and the other web-based technologies, they have come up with ways to store and analyze data -- a way in which you have a programming model where the storage and analysis are very close. In computing we haven't done that yet. We've always had the model where the data is over here, the computing is over [there]; you ask for the data, you get a copy of it, you put it in the computer, you work on it a lot, and then you put it back. And so as we move to exascale, where this computing becomes really more powerful and the data sets become bigger, sloshing this back and forth is way too costly in terms of power and performance -- power, especially. It's movement that cost a lot of electrical power. We need to find to ways to compute and then analyze and do the storage and analysis closer together. Is there anything out there like that today? Some types of data lend themselves to spreading out the computation though the data -- satellite images and other things. People have had this sort of capability for certain types of data sets. But we really need to think broadly about the problem. What you want to do is figure out ways to slice and dice the data, and do analysis on the data in an integrated architecture. And that's something that will become more important at exascale that we haven't addressed very well, yet. What about the February exascale report due to Congress? What's that about? Congress asked the DOE for a written plan for exascale and it is to be delivered no later than Feb. 10. In the last couple of years, the labs, the scientists, have been driving this exascale discussion, because of a need to do the science, and these are big challenges: power, resilience, how to program these things. What hasn't happened is, in some sense, a formal plan from DOE for reaching exascale... [the] plan for getting us there. Is this report the gateway to funding? Congress is not going to fund an exascale initiative without a clear plan, so real funding is gated on convincing through this plan, and through discussions, of the importance of this for the nation. What's going on internationally to develop exascale computing? A year and half ago, the Europeans got together as part of working in this space and said, 'We need to put together a European plan.' They created this plan over the last year. In October, I was at the meeting in Barcelona when they presented the plan to the European Commission and said, 'This is what we need for exascale -- two-to-three billion Euros.' In addition to presenting this to the European Commission, which is favorably disposed, they have already boot-strapped three projects. It is a step along the way, but it is bold and it is already started and people are already working on it. If they are successful, it paves the way to put more funding into that and go take it to the next level and eventually look at building a system. Why is it so important for Europe to develop its own system? A good way to look at this is Airbus and Boeing. An IDC report ( download PDF ) said to the Europeans: You have all this technology but its spread out through all of Europe. If you were to bring it together, you could, like Airbus, compete quite well. I don't want to put too much emphasis on this, but I think it's pretty clear that the Europeans want to develop a platform that can be sold at their supercomputer centers and sold back to us. What about the Chinese? The Chinese are moving full speed ahead. 
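As an aside to the power discussion earlier in the interview, a quick back-of-the-envelope comparison (my own illustrative arithmetic, not figures from the DOE report or from Beckman) shows how large the efficiency gap is between the cited 20-petaflop, seven-to-eight-megawatt system and an exaflop within 20MW:

```python
# Back-of-the-envelope energy-efficiency comparison (illustrative arithmetic only).
# Power figures are approximations taken from the article; the IBM system's
# "seven to eight MW" is represented by its midpoint.
systems = {
    "Exascale target (2019-2020)": (1e18, 20e6),    # flop/s, watts
    "IBM 20-petaflop system":      (20e15, 7.5e6),
}

for name, (flops, watts) in systems.items():
    gflops_per_watt = flops / watts / 1e9
    print(f"{name}: {gflops_per_watt:,.1f} GFLOPS per watt")

improvement = (1e18 / 20e6) / (20e15 / 7.5e6)
print(f"Required efficiency improvement: {improvement:.1f}x")
```

On these rough numbers, the exascale target implies about 50 GFLOPS per watt, close to a 19-fold efficiency improvement over the 20-petaflop machine mentioned above.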
They have a machine that is very similar in character to some of our machines. It's a water-cooled machine, with 16 cores on a die in a socket at about a petaflop in about nine racks. It's a pretty amazing feat and they are in it to win it. If you look at their investment in people, they're training up the scientists and building platforms to continue that innovation so that they can have their own homegrown industry as well; where they will own all the technology from the chip all the way to the software stack to top. What does winning look like? Right now, if you look at China, a lot of their machines are still made from components in the U.S. However, this one machine that they built, the 16 core has its own interconnects, uses Chinese technology. What they would like to do, just like any country, is to be able to reap the benefits of developing that technology across their entire infrastructure, so that everything that's in their cell phones all the way up to their to supercomputers is jobs in China. And of course once that happens, they will be selling this back to Brazil to South America, to India. Whether or not they can sell it back into to the U.S. is a good question, but the rest of the markets are open. Intel says it can deliver an exascale system by 2018, ahead of U.S. government's requested date. What do you think about that? I think Intel's technology is pretty exciting and they have mapped an aggressive roadmap. They have unmatched technology in the chip and in process, and if they want to go after this new piece, I think they will do very well. Nvidia believes 2019 is possible, but also says government help will be needed. Given how far out in the future we are looking it's pretty hard to predict what date people will finish their products by. We know that there are certain things that both companies (Nvidia and Intel) would not address unless we give them government funding. For example, for scientific computing resilience is something we think is a really big issue. If you are selling a laptop you don't need to make it a 1,000 times more fault resistant, but if you put it in an exascale system, you do. That will not be developed unless the government invests in it. The second one is power. Most people around the planet are going to buy a couple of dozen racks. For them the price sensitivity, whether it's a couple of hundred kilowatts or twice that, [is not a] big deal. But when you are talking about a machine as big as ours, that is a big deal. So putting the investment in power, in making it extraordinary lower, there probably isn't a market driver in their short-term time frame except in government exascale. Patrick Thibodeau covers cloud computing and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov or subscribe to Patrick's RSS feed . His e-mail address is firstname.lastname@example.org . Read more about mainframes and supercomputers in Computerworld's Mainframes and Supercomputers Topic Center. This story, "Exascale now a global race for tech" was originally published by Computerworld.
<urn:uuid:e5e23fdf-eb2d-4d34-9251-0e06bb5b9e53>
CC-MAIN-2017-04
http://www.itworld.com/article/2734781/data-center/exascale-now-a-global-race-for-tech.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00511-ip-10-171-10-70.ec2.internal.warc.gz
en
0.970304
2,213
2.921875
3
Level of Govt: Local Function: Law Enforcement Problem/Situation: The Brevard County, Fla., Sheriff's Office is responsible for a large area with few facilities in between jurisdictions. Solution: A laptop-based communications system that allows for data exchange between officers. Jurisdiction: Brevard County, Fla. BREVARD COUNTY, Fla - Today, the Brevard County Sheriff's Office network and law enforcement information system are among the most innovative in Florida and are attracting the attention of officials from across the state. Located east of Orlando, Fla., and stretching along the Atlantic coast, Brevard County is a geographic nightmare for its sheriff's office. The Brevard County Sheriff's Office (BCSO) is responsible for enforcing the law in all county areas that do not fall under the authority of municipal law enforcement agencies. Consequently, its jurisdiction is as large as the county itself, some 80 miles long and 40 miles wide. In addition to its headquarters in Titusville, BCSO operates precincts in the central and southern parts of the county and runs the county's detention center in Sharpes, a small town 15 miles south of Titusville. With such a large jurisdiction and long distances between facilities, BCSO considers electronic communications and data exchange critical to maintaining public safety. In 1989, BCSO initiated a plan to develop a laptop-based computer network that would allow its 240 agents and deputies to file reports from the field. At the time, deputies were spending nearly half of their time writing reports, and BCSO required nearly two weeks to process them. Agents traveled to headquarters to thumb through paper-based files to conduct their investigations. BCSO believed the new laptop network would save time, reduce costs and vastly improve its ability to enforce the law. The BCSO was correct. With an initial deployment of 20 laptops and remote terminals in 1989, the terminal network was off and running, and demand for the system quickly grew. But problems soon arose: the undersized mainframe struggled to keep up with the increased volume, and the existing data communications network proved to be unreliable. The terminal network used low-speed leased telephone lines to connect copper wire-based local and remote terminal installations at administration headquarters and other BCSO facilities. This configuration was unreliable, because the communications network was dependent on the local phone company for service. The configuration also was susceptible to the powerful effects of lightning storms, which are common in central Florida and can easily knock out copper-wire networks. Ray Dils BCSO's MIS Manager "It became evident that our current system couldn't handle the workload," said Ray Dils, BCSO MIS Manager. "The multiplexers and leased lines were operating at full capacity, and we were constantly being hit by lightning. Instead of only upgrading the mainframe, we decided to go with a whole new setup." Unisys Network Enable, a Unisys organization that specializes in the design, implementation, and support of advanced integrated open-systems networks, provided BCSO with a turnkey, open systems solution. Unisys engineers conferred with Dils, reviewed the requirements, and designed and implemented a solution that met BCSO's current and anticipated network needs. The solution included a high-speed radio area network (RAN) that supported TCP/IP protocol and connected BCSO's four sites. 
For its headquarters and detention center, BCSO standardized on Ethernet as a transport over a fiber optic-cabling topology. These networks are extremely flexible and easily adapted to changing end user requirements. A Unisys UNIX-based client/server system provided the processing power and ensured interoperability with existing and future systems. "I didn't have the expertise to install the fiber-optic lines and radio area network. It was all new to me," said Dils. "Unisys took responsibility for it all: project coordination, hardware and material procurement, system installation, testing, documentation, and user training." The Network Enabled RAN solution eliminated the recurring costs of leased telephone lines and provided the bandwidth capacity required for high-speed video and data transmission. The solution is also highly reliable, since it is neither dependent on the local telephone company nor susceptible to the effects of lightning. More than 200 laptops and terminals are connected to the system, with capacity for future growth. Shortly after activating the new network, BCSO began to experience intermittent communication problems. If the source of the problem was not quickly identified, the situation would become very serious for BCSO. The agents and deputies relied heavily on the network, but they would lose confidence in the new system if it wasn't reliable, said Dils. Network Enable specialists and technicians returned to the site, and after a thorough investigation found that a faltering transmitter belonging to a nearby beeper company was interfering with BCSO's RAN microwave frequency. The beeper company was notified, its transmitter was repaired, and interference with BCSO's network was eliminated. The new RAN permits BCSO administrators, agents and deputies to use the Brevard Uniform Laptop Law Enforcement Tracking (BULLET) system, a multipurpose information management system. Together, the network and BULLET integrate the entire BCSO complex, providing the expedient and reliable flow of information among headquarters, the detention center and the precincts. The system saves time, reduces costs and provides better information faster, said Joan Heller, BCSO's public information officer. Deputies now spend 50 percent less time filling out reports, giving them more time to patrol the streets. With the records department automated, labor is reduced and the expense of purchasing paper forms has been nearly eliminated. Heller estimated that the new system saves Brevard County more than $1 million each year. Agents no longer travel to headquarters for case information - they retrieve it from connected terminals at their precincts. The system provides online access to a number of databases, such as criminal history and incident reports, and allows electronic searches. Mug shots are also available over the network. Because the RAN provides very high-speed data transmission, agents can access digitized video images of suspects who have been booked at the county detention center. Specialized terminals located at each BCSO facility allow the agents to print out high-quality color photographs in minutes. "We've come out of the stone age and into the modern world," says BCSO Chief Deputy Ron Clark. "I couldn't be more pleased."
<urn:uuid:13f196a9-2fbb-4faf-900a-87882fde9dc6>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/Radio-Net-Connects-Sheriffs-Laptops.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00539-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959219
1,331
2.734375
3
University labs, fueled with millions of dollars in funding and some of the biggest brains around, are bursting with new research into computer and networking technologies. Wireless networks, computer security and a general focus on shrinking things and making them faster are among the hottest areas, with some advances already making their way into the market. Here's a roundup of 25 such projects that caught our eyes: This free website, Duolingo, from a pair of Carnegie Mellon University computer scientists serves double duty: It helps people learn new languages while also translating the text on Web pages into different languages. CMU's Luis von Ahn and Severin Hacker have attracted more than 100,000 people in a beta test of the system, which initially offered free language lessons in English, Spanish, French and German, with the computer offering advice and guidance on unknown words. Using the system could go a long way toward translating the Web, many of whose pages are unreadable by those whose language skills are narrow. Von Ahn is a veteran of such crowdsourcing technologies, having created online reCAPTCHA puzzles to cut down on spam while simultaneously digitizing old books and periodicals. Von Ahn's spinoff company, reCAPTCHA, was acquired by Google in 2009. Duolingo, spun off in November to offer commercial and free translation services, received $3.3 million in funding from Union Square Ventures, actor Ashton Kutcher and others. Princeton University Computer Science researchers envision an Internet that is more flexible for data center operators and more useful to mobile users. Princeton's open source Serval system is what Assistant Professor of Computer Science Michael Freedman calls a Service Access Layer that sits between the IP Network Layer (Layer 3) and Transport Layer (Layer 4), where it can work with unmodified network devices. Serval's purpose is to make Web services such as Gmail and Facebook more easily accessible, regardless of where an end user is, via a services naming scheme that augments what the researchers call an IP address set-up "designed for communication between fixed hosts with topology-dependent addresses." Data center operators could benefit by running Web servers in virtual machines across the cloud and rely less on traditional load balancers. Serval, which Freedman describes as a "replacement" technology, will likely have its first production applications in service-provider networks. "Its largest benefits come from more dynamic settings, so its features most clearly benefit the cloud and mobile spaces," he says. If any of this sounds similar to software-defined networking (SDN), there are in fact connections. Freedman worked on an SDN/OpenFlow project at Stanford University called Ethane that was spun out into a startup called Nicira for which VMware recently plunked down $1.26 billion. WiFi routers to the rescue Researchers at Germany's Technical University in Darmstadt have described a way for home Wi-Fi routers to form a backup mesh network to be used by the police, firefighters and other emergency personnel in the case of a disaster or other incident that wipes out standard cell and phone systems. The proliferation of Wi-Fi routers makes the researchers confident that a dense enough ad hoc network could be created, but they noted that a lack of unsecured routers would require municipalities to work with citizens to allow for the devices to be easily switched into emergency mode. 
The big question is whether enough citizens would really allow such access, even if security was assured.
University of Tulsa engineers want to slow everything down, for just a few milliseconds, to help network administrators avoid cyberattacks. By slowing traffic, the researchers figure more malware can be detected and then headed off via an algorithm that signals at hyperspeed to set up defenses. Researcher Sujeet Shenoi told the publication New Scientist, though, that it might not be cheap to set up such a defense system, given the caching system and reserved data pipes needed to support the signals.
University of Washington researchers have created a card game called Control-Alt-Hack that's designed to introduce computer science students to security topics. The game, funded in part by Intel Labs and the National Science Foundation, made its debut at the Black Hat security conference in Las Vegas over the summer. The tabletop game involves three to six players working for an outfit dubbed Hackers, Inc., that conducts security audits and consulting, and players are issued challenges, such as hacking a hotel mini bar payment system or a wireless medical implant, or converting a robotic vacuum cleaner into a toy. The game features cards (including descriptions of well-rounded hackers who rock climb, ride motorcycles and do more than sit at their computers), dice, mission cards, "hacker cred tokens" and other pieces, and is designed for players ages 14 and up. It takes about an hour to play a game. No computer security degree needed. "We went out of our way to incorporate humor," said co-creator Tamara Denning, a UW doctoral student in computer science and engineering, referring to the hacker descriptions and challenges on the cards. "We wanted it to be based in reality, but more importantly we want it to be fun for the players."
Another effort, focused on nixing malware like Flame that spreads from computer to computer via USB storage drives, got its start based on research from Sebastian Poeplau at Bonn University's Institute of Computer Science. Now it's being overseen by the broader Honeynet Project. The breakthrough by Poeplau and colleagues was to create a virtual drive that runs inside a USB drive to snag malware. According to the project website: "Basically, the honeypot emulates a USB storage device. If your machine is infected by malware that uses such devices for propagation, the honeypot will trick it into infecting the emulated device." One catch: the security technology only works on 32-bit Windows XP, for starters.
IP over Xylophone Players (IPoXP)
Practical applications for running IP over xylophones might be a stretch, but doing so can teach you a few things about the truly ubiquitous protocol. A University of California Berkeley researcher named R. Stuart Geiger led this project, which he discussed earlier this year at the Association for Computing Machinery's Conference on Human Factors in Computing Systems. Geiger's Internet Protocol over Xylophone Players (IPoXP) provides a fully compliant IP connection between two computers. His setup uses a pair of Arduino microcontrollers, some sensors, a pair of xylophones and two people to play the xylophones. The exercise provided some insights into the field of Human-Computer Interaction (HCI). It emulates a technique HCI specialists use to design interfaces called umwelt, which is a practice of imagining what the world must look like to the potential users of the interface.
This experiment allowed participants to get the feel for what it would be like to be a circuit. "I don't think I realized how robust and modular the OSI model is," Geiger said. "The Internet was designed for much more primitive technologies, but we haven't been able to improve on it, because it is such a brilliant model." Making software projects work San Francisco State University and other researchers are puzzling over why so many software projects wind up getting ditched, fail or get completed, but late and over budget. The key, they've discovered, is rethinking how software engineers are trained and managed to ensure they can work as teams. The researchers, also from Florida Atlantic University and Fulda University in Germany, are conducting a National Science Foundation-funded study with their students that they hope will result in a software model that can predict whether a team is likely to fail. Their study will entail collecting information on how often software engineering students - teamed with students at the same university and at others -- meet, email each other, etc. "We want to give advice to teachers and industry leaders on how to manage their teams," says Dragutin Petkovic, professor and chair of SF State's Computer Science Department. "Research overwhelmingly shows that it is 'soft skills,' how people work together, that are the most critical to success." Ultra low-power wireless Forget about 3G, 4G and the rest: University of Arkansas engineering researchers are focused on developing very low-power wireless systems that can grab data from remote sensors regardless of distortion along the network path. These distortion-tolerant systems would enable sensors, powered by batteries or energy-harvesting, to remain in the field for long periods of time and withstand rough conditions to monitor diverse things such as tunnel stability and animal health. By tolerating distortion, the devices would expend less energy on trying to clean up communications channels. "If we accept the fact that distortion is inevitable in practical communication systems, why not directly design a system that is naturally tolerant to distortion?" says Jingxian Wu, assistant professor of electrical engineering. The National Science Foundation is backing this research with $280,000 in funding. University of Waterloo engineering researchers have developed a way for wireless voice and data signals to be sent and received simultaneously on a single radio channel frequency, a breakthrough they say could make for better performing, more easily connected and more secure networks. "This means wireless companies can increase the bandwidth of voice and data services by at least a factor of two by sending and receiving at the same time, and potentially by a much higher factor through better adaptive transmission and user management in existing networks," said Amir Khandani, a Waterloo electrical and computer engineering professor, in a statement. He says the cost for hardware and antennas to support such a system wouldn't cost any more than for current one-way systems. Next up is getting industry involved in bringing such technology into the standards process. Next steps require industry involvement by including two-way in forthcoming standards to enable wide spread implementation. 
The Waterloo research was funded in part by the Canada Foundation for Innovation and the Ontario Ministry of Research and Innovation.
Researchers at Rice University in Houston have developed a prototype spray-on battery that could allow engineers to rethink the way portable electronics are designed. The rechargeable battery boasts similar electrical characteristics to the lithium ion batteries that power almost every mobile gadget, but it can be applied in layers to almost any surface with a conventional airbrush, said Neelam Singh, a Rice University graduate student who led a team working on the technology for more than a year. Current lithium ion batteries are almost all variations on the same basic form: an inflexible block with electrodes at one end. Because they cannot easily be shaped, they sometimes restrict designers, particularly when it comes to small gadgets with curved surfaces, but the Rice prototypes could change that. "Today, we only have a few form factors of batteries, but this battery can be fabricated to fill the space available," said Singh. The battery is sprayed on in five layers: two current collectors sandwich a cathode, a polymer separator and an anode. The result is a battery that can be sprayed onto plastics, metal and ceramics. The researchers are hoping to attract interest from electronics companies, which Singh estimates could put it into production relatively easily. "Airbrushing technology is well-established. At an industrial level it could be done very fast," she said.
Mobile Mosh pit
Two MIT researchers formally unveiled over the summer a protocol called State Synchronization Protocol (SSP) and a remote log-in program using it dubbed Mosh (for mobile shell) that's intended as an alternative to Secure Shell (SSH) for ensuring good connectivity for mobile clients even when dealing with low bandwidth connections. SSP and Mosh have been made available for free, on GNU/Linux, FreeBSD and OS X, via an MIT website. SSH, often used by network and system admins for remotely logging into servers, traditionally connects computers via TCP, but it's that use of TCP that creates headaches for mobile users, since TCP assumes that the two endpoints are fixed, says Keith Winstein, a graduate student with MIT's Computer Science and Artificial Intelligence Lab (CSAIL), and Mosh's lead developer. "This is not a great way to do real-time communications," Winstein says. SSP uses UDP, a connectionless, stateless transport mechanism that could be useful for stabilizing mobile usage of apps from Gmail to Skype.
Researchers from MIT, California Institute of Technology and University of Technology in Munich are putting network coding and error-correction coding to use in an effort to measure capacity of wired, and more challengingly, even small wireless networks (read their paper here for the gory details).
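Returning to the Mosh project above: a rough way to see why SSP's choice of UDP helps roaming clients is sketched below. This is not the real SSP/Mosh protocol, just a toy server that keys session state to a client-supplied session ID instead of to a TCP connection, so datagrams keep being accepted even after the client's IP address changes. Host, port and message format are made up for illustration.

```python
import socket

# Toy illustration of a UDP service that survives client address changes by
# keying state on a session ID instead of the transport connection.
# This is NOT the actual SSP/Mosh protocol.

HOST, PORT = "127.0.0.1", 9999
sessions = {}   # session_id -> latest known state (here, just an update counter)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((HOST, PORT))
print(f"listening on {HOST}:{PORT}")

while True:
    data, addr = sock.recvfrom(1024)
    try:
        session_id, payload = data.decode().split(":", 1)
    except ValueError:
        continue                      # ignore malformed datagrams
    state = sessions.get(session_id, 0) + 1
    sessions[session_id] = state
    # Reply to whatever address the client is using *now*; a roaming client
    # that kept its session ID picks up exactly where it left off.
    sock.sendto(f"{session_id}: update {state} from {addr}".encode(), addr)
```

A TCP-based equivalent would drop the session the moment the client's address changed, which is exactly the headache Winstein describes.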
<urn:uuid:48200ea3-8da1-4d36-82a4-3546579f7803>
CC-MAIN-2017-04
http://www.itworld.com/article/2720467/mobile/25-of-today-s-coolest-network-and-computing-research-projects.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00263-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947927
2,588
2.671875
3
In case you were wondering if these new-fangled Chinese GPU-powered supercomputers can do anything useful, Thursday’s announcement about the latest exploits of the Tianhe-1A system should give you some idea of the significance of these petascale beasts. On Thursday, researchers from the Chinese Academy of Sciences’ Institute of Process Engineering (CAS-IPE) claimed to have run a molecular simulation code at 1.87 petaflops — the highest floating point performance ever achieved by a real-world application code. The simulation is being used to help discern the behavior of crystalline silicon, a material used in solar panels and semiconductors. According to NVIDIA, the application used just 2,000 lines of CUDA to accelerate the simulation — not an inconsequential amount of source code, but considering the result, a pretty impressive ROI. In addition, all the reported FLOPS for this application were attributed to GPUs, in this case, 7,168 of them. The three-hour simulation modeled the behavior of 110 billion atoms, beating out the previous record for a molecular simulation code, which modeled 49 billion atoms at 369 teraflops. The latter was performed on Roadrunner, the original petaflop super, accelerated by IBM’s souped up Cell processors, the PowerXCell 8i. The 1.87 petaflop performance is quite an achievement for the top-ranked Tianhe-1A, especially considering the current number two system, the CPU-only Jaguar at Oak Ridge Lab, manages just 1.76 petaflops on Linpack, an artificial benchmark designed to show off a system’s floating point muscles. In 2008, Jaguar delivered it own sustained petaflop for a real-world application, in this case a superconductor simulation code, which hit 1.35 petaflops*. That work nabbed the application team at Oak Ridge the Gordon Bell Prize that year. Whether the CAS-IPE team wins any trophies for its molecular simulation application remains to be seen. The researchers will be presenting their work at the upcoming the NVIDIA GPU Technology Conference (GTC) in December in Beijing, and also next May in San Jose, California at the US GTC event. Over and above the impressive FLOPS is the larger significance of using the technology to propel science and engineering forward. Last year, NVIDIA Tesla GM Andy Keane, penned an opinion piece warning that the lagging adoption of GPU in HPC could threaten the country’s competitive edge. While that editorial could easily be construed as self-serving for his employer’s interests, the fact is that the US and Europe have lagged countries like China and Japan in adopting this technology for their most elite systems. Those nations saw the revamped graphics chip as the most economical path to petascale machines. Of course, there are valid reasons to be wary GPU computing for HPC — programmability difficulties, over-hyping of performance, proprietary software, etc. — leading many in the HPC community to be extra careful about adopting the technology. But the negative backwash from the original flood of hype can be as ill-informed as the initial exaggerations. In the current issue of HPCwire, Stone Ridge Technology CEO and GPU enthusiast Vincent Natoli, offers a nice set of rebuttals to the major objections to GPU computing. If you’re a GPGPU fence-sitter, it’s definitely worth a read. Beyond the significance of GPU usage, the application work demonstrates that the Chinese are not just building these big machines for national prestige. 
Simulations such as these support basic science research that can be applied to designing and manufacturing better solar energy panels and semiconductor devices. These types of high-tech commercial applications are exactly what the US and other industrialized countries envision as the basis for their future economic growth, and their ability to compete in the global marketplace. In that sense, even though today’s announcement won’t appear on the front page of the New York Times, as did the Tianhe-1A TOP500 news, this development is arguably much more significant. It’s also best to see this achievement in the larger context of what the Chinese scientific community is doing. A recent article in Forbes points out that China is quickly catching up to US in scientific output, and in some cases surpassing it: In 2009, for the first time, Chinese researchers published more papers in information technology than those in the U.S., with both countries churning out more than 100,000 info-tech publications. In clean and alternative energy, Chinese researchers have likewise been publishing up a storm, not surpassing U.S. researchers but coming close. The bottom line is that the US is in danger of losing its technological edge, which it has basically enjoyed, unchallenged, since the end of World War II. It’s not that GPU computing is the magic bullet here. But news like this should be a wake-up call to American HPC’ers and policy-makers that sometimes being extra careful is the riskiest proposition of them all. *The same superconductor simulation subsequently achieved 1.9 petaflops on the upgraded Jaguar supercomputer.
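For a sense of scale, the figures quoted in this piece imply the following per-GPU numbers for the CAS-IPE run. This is simple arithmetic on the reported totals, nothing more:

```python
# Rough per-GPU throughput implied by the reported Tianhe-1A run
# (illustrative arithmetic on figures quoted in the article).
total_flops = 1.87e15        # 1.87 petaflops, all attributed to GPUs
gpu_count   = 7168
atoms       = 110e9
hours       = 3

per_gpu_gflops = total_flops / gpu_count / 1e9
atoms_per_gpu  = atoms / gpu_count

print(f"~{per_gpu_gflops:.0f} GFLOPS sustained per GPU")   # roughly 260
print(f"~{atoms_per_gpu/1e6:.1f} million atoms per GPU")   # roughly 15
print(f"simulation wall time: {hours} hours")
```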
<urn:uuid:c9f9fc40-e6c7-40c1-9ade-ea4561191372>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/06/09/chinese_super_breaks_world_record_in_application_performance/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00383-ip-10-171-10-70.ec2.internal.warc.gz
en
0.922946
1,081
2.640625
3
NASA's Hubble Space Telescope has found the oldest, most distant supernova ever discovered, which experts say could help scientists better understand the evolution of the universe. A supernova is the explosive death of a star, which emits a dramatic burst of light. This latest discovery, Supernova UDS10Wil, or SN Wilson, exploded more than 10 billion years ago, NASA said. SN Wilson, named after President Woodrow Wilson, is categorized as a Type Ia supernova, which are prized astronomical finds because they provide a consistent level of brightness that can be used to measure the expansion of space. Type Ia supernovas also offer clues to the nature of dark energy, a mysterious and largely unknown force thought to accelerate the expansion of the universe. "This new distance record holder opens a window into the early universe, offering important new insights into how these stars explode," said David Jones, an astronomer at Johns Hopkins University. "We can test theories about how reliable these detonations are for understanding the evolution of the universe and its expansion." NASA reported that the discovery came out of a three-year Hubble project that focused on studying distant Type Ia supernovae to determine whether they have changed during the 13.8 billion years since the birth of the universe. Astronomers used Hubble's Wide Field Camera 3 to search for the supernovae and calculate their distance with spectroscopy. The supernova discovery was aided by a 2009 repair and upgrade to the Hubble Space Telescope, which was launched in 1990 and orbits about 350 miles above the Earth's surface. Astronauts on the space shuttle Atlantis, which carried 22,500 pounds of equipment for the telescope, spent 11 days restoring a broken-down, wide-field imaging camera, while also installing a new, more powerful one. They also did the same with Hubble's Space Telescope Imaging Spectrograph. On the same trip, the astronauts replaced all six of the Hubble's gyroscopes and all six of its batteries, along with a computer unit that had failed months earlier. During its 23 years in orbit, Hubble's discoveries have been so important that they have forced academics to rewrite astronomy text books. It has taken deep photographs of the universe and captured images of the birth and death of stars. It also played a key role in discovering that the universe, believed to be driven by dark energy, is expanding at an accelerating rate. And Hubble also showed that most galaxies in the universe contain massive black holes. At the time, NASA scientists said they thought the repairs and upgrades would keep the Hubble running until 2014, if not until 2016 or 2017. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, or subscribe to Sharon's RSS feed . Her e-mail address is firstname.lastname@example.org.
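As background to the "consistent level of brightness" point above: a Type Ia supernova's roughly uniform peak luminosity lets astronomers turn an observed apparent magnitude into a distance via the distance modulus. The sketch below uses generic textbook-style values, not measurements of SN Wilson, and ignores redshift corrections.

```python
# Standard-candle distance estimate from the distance modulus:
#   m - M = 5 * log10(d_pc) - 5
# Values are generic illustrations, not measurements of SN Wilson; the result
# is a luminosity distance and ignores redshift/K-corrections.
ABSOLUTE_MAG_TYPE_IA = -19.3    # commonly quoted peak absolute magnitude
apparent_mag = 24.0             # hypothetical observed peak apparent magnitude

distance_pc = 10 ** ((apparent_mag - ABSOLUTE_MAG_TYPE_IA + 5) / 5)
distance_ly = distance_pc * 3.26156

print(f"distance ≈ {distance_pc:.2e} parsecs ≈ {distance_ly:.2e} light-years")
```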
<urn:uuid:1d6f405f-7c63-4e08-bea7-e9ef6a21be61>
CC-MAIN-2017-04
http://www.computerworld.com/article/2496298/emerging-technology/nasa-s-hubble-telescope-finds-distant-supernova--clues-to-universe.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00228-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94274
597
3.78125
4
Research yields options for arthritis treatment
Monday, Sep 9th 2013
In conditions where environmental control systems are crucial, scientists can make considerable breakthroughs to help society. There have recently been two research reports showing key elements for beating arthritis. One claims to have isolated a protein that causes the ailment, opening the door to medication that subdues it, while the other shows a homegrown remedy using broccoli to deter the ailment.
Arthritis creates a lot of pain for people with the condition. Although the ailment typically affects people as they age, any individual can get it. Australian researchers have narrowed the affliction down to a protein, MLKL, that triggers necroptosis in dying cells, which instructs the immune system to respond with inflammation, according to the Daily Express. This breakthrough provides the first proof that the protein is one of the direct causes of the ailment, making it easier to target treatment for the condition. Once the scientists narrow the protein down to a molecular image, they can begin testing treatments that target it, and potentially develop medicines for other chronic inflammatory diseases.
"We discovered that MLKL needs to be switched on before it can activate necroptosis," said Dr. James Murphy of the Walter and Eliza Hall Institute of Medical Research in Melbourne. "MLKL could therefore be a perfect target for treatments because it is different from almost every other cell-signaling protein, making it easier to develop highly specific drugs and limiting potential side effects."
Is broccoli a deterrent?
While individuals with arthritis may have to wait for the medication to be produced, they can eat broccoli to prevent the condition from spreading. According to new research from the U.K.'s University of East Anglia, the vegetable may be able to slow down damage from arthritis and potentially even stop it from developing altogether, the Daily Express reported. Broccoli produces a high amount of sulforaphane, which can block a molecule known to cause inflammation. Researchers have also suggested that the chemical has cancer-deterring properties; however, there have been no previous studies of its effects on joint health.
"The results from this study are very promising," lead scientist Professor Ian Clark told the source. "We have shown this works in the three laboratory models we have tried, in human cartilage cells, cow tissue and mice."
As this method can be used immediately until a treatment has been solidified, there are a few best practices for storing broccoli. Fresh, raw broccoli should be kept in an environment between 41 degrees Fahrenheit and 50 degrees Fahrenheit, according to the University of California - Davis Postharvest Technology Center. Using a temperature monitor can help ensure that consumers are getting the most out of their produce. Many typically store broccoli unwashed in a plastic bag, where it can remain fresh 5-14 days depending on the fridge's conditions. Freezing the broccoli can keep it consumable for up to 18 months; however, the produce is more susceptible to bacterial decay.
Both new research reports have promised short-term and potential long-term solutions for the arthritis ailment that has affected many individuals. Although it may take a while for the medication to be developed to treat the condition, eating broccoli as a regular part of the diet may be an easy solution in the meantime.
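Since the storage guidance above comes down to keeping produce inside a fixed temperature band, a monitoring check can be as simple as the sketch below. This is a generic illustration; the threshold values come from the 41-50 degrees Fahrenheit range cited above, and the sample readings are made up (it is not an ITWatchDogs API).

```python
# Generic high/low temperature check for produce storage (illustrative only).
LOW_F, HIGH_F = 41.0, 50.0     # recommended band for fresh broccoli, per the article

def check_reading(temp_f: float) -> str:
    """Classify a single temperature reading against the recommended band."""
    if temp_f < LOW_F:
        return f"ALARM: {temp_f:.1f}F is below the recommended {LOW_F}F minimum"
    if temp_f > HIGH_F:
        return f"ALARM: {temp_f:.1f}F is above the recommended {HIGH_F}F maximum"
    return f"OK: {temp_f:.1f}F is within range"

for reading in (39.5, 44.2, 52.8):     # sample sensor readings
    print(check_reading(reading))
```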
<urn:uuid:82291f53-717b-423c-b83a-f6c865dde1f3>
CC-MAIN-2017-04
http://www.itwatchdogs.com/environmental-monitoring-news/research-labs/research-yields-options-for-arthritis-treatment-503844
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00044-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945745
670
3.21875
3
How can malware steal data from an infected system that has no Internet connection? You might think it is impossible. Computer scientists say it is possible.
German researchers at the Fraunhofer Institute for Communication, Information Processing, and Ergonomics say that malware can transmit data using inaudible sounds. It can exfiltrate confidential data or keystrokes using nothing more than ordinary speakers and microphones, without any Internet connection.
Security researchers often suggest keeping systems that hold sensitive data disconnected from the Internet so that cyber criminals can't reach them. But if malware can leak data over audio without a network connection, what now?
Should we remove the audio devices, then? The researchers say the attack can be prevented by switching off audio I/O devices. If audio devices are needed, the inaudible communication can still be blocked "by application of a software-defined lowpass filter".
The researchers describe their idea in a paper entitled "On Covert Acoustical Mesh Networks in Air". You can find the research paper here. (h/t: Ars Technica)
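The mitigation the researchers mention, a software-defined low-pass filter, can be prototyped in a few lines. The sketch below uses SciPy to attenuate everything above a cutoff before audio reaches the speakers; the 18 kHz cutoff and 48 kHz sample rate are assumed values for illustration, and this is not the filter from the paper.

```python
import numpy as np
from scipy.signal import butter, lfilter

# Illustrative software low-pass filter: attenuate near-ultrasonic content
# (where a covert acoustic channel would operate) while passing normal audio.
# Cutoff and sample rate are assumptions, not values from the Fraunhofer paper.

SAMPLE_RATE = 48_000     # Hz
CUTOFF_HZ   = 18_000     # pass speech/music, block near-ultrasonic carriers

def lowpass(samples: np.ndarray, cutoff=CUTOFF_HZ, rate=SAMPLE_RATE, order=6):
    b, a = butter(order, cutoff / (rate / 2), btype="low")
    return lfilter(b, a, samples)

# Demo: a 20 kHz "covert" tone is strongly attenuated, a 1 kHz tone passes.
t = np.arange(0, 1.0, 1 / SAMPLE_RATE)
signal = np.sin(2 * np.pi * 1_000 * t) + np.sin(2 * np.pi * 20_000 * t)
filtered = lowpass(signal)
rms = lambda x: float(np.sqrt(np.mean(x ** 2)))
print(f"RMS before: {rms(signal):.3f}  after: {rms(filtered):.3f}")
```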
<urn:uuid:67855413-215e-4495-9ea7-d50a9a00cccb>
CC-MAIN-2017-04
http://www.ehackingnews.com/2013/12/malware-steals-data-using-speakers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00346-ip-10-171-10-70.ec2.internal.warc.gz
en
0.911467
232
2.84375
3
Using the SMTP/SMTPS monitor in Anturis Console, you can set up monitoring of general availability and response time for any remote email server connected to the internet (such as a Microsoft Exchange server). It also enables you to set up a notification when a certificate for a secure TLS/SSL connection is about to expire. You can send requests either from one of the components in your infrastructure, or use one of the available Public Agents that are maintained by Anturis in different geographical locations.
Simple Mail Transfer Protocol (SMTP) is used by email servers for sending messages to other email servers, and by email clients for sending messages to email servers. To retrieve messages from servers, email clients use either the Post Office Protocol (POP) or the Internet Message Access Protocol (IMAP). This is why email clients require you to specify both an incoming and an outgoing email server.
SMTP is a delivery protocol, designed to push email messages to a target server. The default port number that an SMTP server listens on is either 25 or 587 (for mail submission). The server requires a username and password to authenticate a client. However, SMTP does not encrypt messages, so your credentials may be read by a third party involved in the connection. To provide an encrypted connection, SMTP can be used over the Transport Layer Security (TLS) protocol, which was previously known as Secure Sockets Layer (SSL). When SMTP is used over a TLS/SSL layer, this is called an SMTPS connection, and it is directed through port 465 by default. As an alternative, you can establish a TLS/SSL connection over the standard SMTP submission interface on port 587 using the STARTTLS extension.
TLS/SSL are cryptographic protocols for secure communication over computer networks. They are based on the exchange of X.509 certificates and public keys for encrypting and decrypting messages. Digital certificates are issued by a certificate authority (CA) trusted by both parties involved in the communication. A certificate binds the public key to a person or organization for a predetermined period of time (until the certificate expires).
By regularly sending SMTP requests and tracking the time it takes for a response to be returned (also known as round-trip delay time or latency), you can ensure the availability and performance of your critical email servers. This directly affects the quality of your service, because your clients or employees rely on email every day. The sooner you are able to detect a possible issue, the faster you will be able to react to it. If the server uses TLS/SSL security, it is also important to monitor the certificate expiration date.
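A minimal version of these checks, connecting, measuring the round trip, and reading the certificate expiry over a STARTTLS connection, can be scripted with Python's standard library alone. This is a generic sketch, not Anturis's implementation; the host name and port are placeholders.

```python
import smtplib
import ssl
import time

HOST, PORT = "mail.example.com", 587      # placeholder submission endpoint

def check_smtp(host=HOST, port=PORT, timeout=10):
    """Measure SMTP round-trip time and report TLS certificate expiry (sketch)."""
    start = time.monotonic()
    with smtplib.SMTP(host, port, timeout=timeout) as server:
        server.ehlo()
        server.starttls(context=ssl.create_default_context())   # upgrade via STARTTLS
        server.ehlo()
        rtt_ms = (time.monotonic() - start) * 1000
        cert = server.sock.getpeercert()                         # dict for verified certs
    expires_at = ssl.cert_time_to_seconds(cert["notAfter"])
    days_left = int((expires_at - time.time()) // 86400)
    return rtt_ms, days_left

if __name__ == "__main__":
    rtt, days = check_smtp()
    print(f"handshake completed in {rtt:.0f} ms; certificate expires in {days} days")
```

A monitoring agent would run something like this on a schedule and alert when the round-trip time exceeds a threshold or the remaining certificate lifetime drops below a chosen number of days.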
<urn:uuid:5bc18549-ede7-4821-8176-84d7748f2e5a>
CC-MAIN-2017-04
https://anturis.com/monitors/smtp-monitor/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00346-ip-10-171-10-70.ec2.internal.warc.gz
en
0.913855
550
2.546875
3
Joined: 25 Jan 2004 Posts: 160 Location: Toronto, Canada Try reading some books on assembler. This is a small abstract: One major reason for the base and displacement concept is multiprogramming. There is an additional reason for utilising this concept. Accessing an address in terms of a base register expressed as a single hex digit (0-F) and a three-position displacement (000-FFF) uses two fewer digits than a standard IBM 6-digit address. Since typical programs consist of hundreds of instructions with dozens of storage addresses that need to be accessed, this saving is substantial and significant.
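A small worked example may make the saving clearer. The sketch below is purely illustrative (the base register contents and displacement are invented) and shows how a 4-bit base register number plus a 12-bit displacement resolve to an effective address.

```python
# Illustrative only: resolving a System/360-style base-displacement pair.
def effective_address(base_register_value, displacement):
    assert 0 <= displacement <= 0xFFF      # displacement is 12 bits, so 0x000-0xFFF
    return (base_register_value + displacement) & 0xFFFFFF   # classic 24-bit addressing

# Suppose base register 12 holds 0x008000; displacement 0x1A4 then addresses 0x0081A4.
# The instruction only has to encode "register 12" (one hex digit) and "1A4" (three digits)
# instead of the full six-digit address 0081A4.
print(hex(effective_address(0x008000, 0x1A4)))
```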
<urn:uuid:1a124312-1180-4619-9525-b2fada9e4204>
CC-MAIN-2017-04
http://ibmmainframes.com/about1164.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00254-ip-10-171-10-70.ec2.internal.warc.gz
en
0.897863
124
2.984375
3
Are digital images submitted as court evidence genuine or have the pictures been altered or modified? We developed a range of algorithms performing automated authenticity analysis of JPEG images, and implemented them into a commercially available forensic tool. The tool produces a concise estimate of the image’s authenticity, and clearly displays the probability of the image being forged. This paper discusses methods, tools and approaches used to detect the various signs of manipulation with digital images. How many kittens are sitting on the street? If you thought “four”, read along to find out! Alexey Kuznetsov, Yakov Severyukhin, Oleg Afonin, Yuri Gubanov © Belkasoft Research 2013 Today, almost everyone has a digital camera. Literally billions of digital images have been taken. Some of these images are used for purposes other than family photo albums or Web site decoration. With the rise of digital photography, manufacturers of graphic editing tools quickly gained momentum. The tools are becoming cheaper and easier to use – so easy in fact that anyone can use them to enhance their images. Editing or post-processing, if done properly, can greatly enhance the appearance of the picture, increase its impact on the viewer and better convey the artist’s message. But where is the point when a documentary photograph becomes a fictional work of art? While for most purposes editing pictures is more than okay, certain types of photographs are never to be manipulated. Digital pictures are routinely handed to news editors as part of event coverage. Digital pictures are presented to courts as evidence. For news coverage, certain types of alterations or modifications (such as cropping, straightening verticals, adjusting colors and gamma etc.) may or may not be acceptable. Images presented as court evidence must not be manipulated in any way; otherwise they lose credibility as acceptable evidence. Today’s powerful graphical editors and sophisticated image manipulation techniques make it extremely easy to modify original images in such a way that any alterations are impossible to catch by an untrained eye, and can even escape the scrutiny of experienced editors of reputable news media. Even the eye of a highly competent forensic expert can miss certain signs of a fake, potentially allowing forged (altered) images to be accepted as court evidence. Major camera manufacturers attempted to address the issue by introducing systems based on secure digital certificates. The purpose of these systems was the ability to prove that images were not altered after being captured by the camera. Obviously aimed at photo journalists and editors, this system was also used in legal cases to establish images as genuine court evidence. The approach looks terrific on paper. The only problem: it does not work. A Russian company was able to easily forge images signed by Canon and then Nikon digital cameras. The obviously faked images successfully passed the authenticity test by the respective manufacturers’ verification software. Which brings us to the question. If human experts are having a hard time determining whether a particular image was altered, and if existing certificate-based authenticity verification systems cannot be relied upon, should we just give up on the very issue? This paper demonstrates a new probabilistic approach allowing automatic authenticity analysis of a digital image.
The solution uses multiple algorithms analyzing different aspects of the digital image, and employs a neural network to produce an estimate of the image’s authenticity, or providing the probability of the image being forged. 1. What Is a Forged Image? What constitutes a manipulated image? For the purpose of this paper, we consider any modification, alteration or “enhancement” of the image after the image left the camera made with any software, including RAW conversion tools to constitute an altered image. That said, we don’t consider an image to be altered if only in-camera, internal conversions, filters and corrections such as certain aberration corrections, saturation boost, shadow and highlight enhancements and sharpening are applied. After all, the processing of raw pixel data captured from the digital sensor is exactly what the camera’s processor is supposed to be doing. How many umbrellas? Read along to find out! But is every altered image a forged one? What if the only things done to the image were standard and widely accepted techniques such as cropping, rotating or applying horizon correction? These and some other techniques do alter the image, but don’t necessarily forge it, and this point may be brought before the editor or a judge, making them accept an altered image as genuine . Therefore, the whole point of forgery analysis is determining whether any changes were made to alter meaningful content of the image. So we’ll analyze an image on pixel level in order to detect whether significant changes were made to the actual pixels, altering the content of the image rather than its appearance on the screen. Considering all of the above, it’s pretty obvious that no single algorithm can be used to reliably detect content alterations. In our solution, we are using multiple algorithms which, in turn, fall in one of the two major groups: pixel-level content analysis algorithms locating modified areas within the image, and algorithms analyzing image format specifications to determine whether or not certain corrections have been applied to the image after it left the camera. In addition, certain methods we had high hopes for turned out to be not applicable (e.g. block artifact grid detection). We’ll discuss those methods and the reasons why they cannot be used. 2. Forgery Detection Algorithms Providing a comprehensive description of each and every algorithm used for detecting forged images would not be feasible, and would be out of scope of this paper. We will describe five major techniques used in our solution to feed the decisive neural network (the description of which is also out of scope of this paper). The algorithms made it into a working prototype, and then to commercial implementation. At this time, forgery detection techniques are used in the Forgery Detection plugin [http://forensic.belkasoft.com/en/forgery-detection], an extension of a forensic tool Belkasoft Evidence Center. The plugin can analyze images discovered with Belkasoft Evidence Center, and provide the probability of the image being manipulated (forged). 2.1. JPEG Format Analysis JPEG is a de-facto standard in digital photography. Most digital cameras can produce JPEGs, and many can only produce files in JPEG format. The JPEG format is an endless source of data that can be used for the purposes of detecting forged images. The JPEG Format Analysis algorithm makes use of information stored in the many technical meta-tags available in the beginning of each JPEG file. 
These tags contain information about quantization matrixes, Huffman code tables, chroma subsampling, and many other parameters as well as a miniature version (thumbnail) of the full image. The content and sequence of those tags, as well as which particular tags are available, depend on the image itself as well as the device that captured it or software that modified it. In addition to technical information, JPEG tags contain important information about the photo including shooting conditions and parameters such as ambient light levels, aperture and shutter speed information, make and model of the camera and lens the image was taken with, lens focal length, whether or not flash was being used, color profile information, and so on and so forth. The basic analysis method verifies the validity of EXIF tags in the first place in an attempt to find discrepancies. This, for example, may include checks for EXIF tags added in post-processing by certain editing tools, checks for capturing date vs. the date of last modification, and so on. However, EXIF tags can be easily forged; so easily in fact that while we can treat existing EXIF discrepancies as a positive sign of an image being altered, the fact that the tags are “in order” does not bring any meaningful information. Our solution makes an attempt to discover discrepancies between the actual image and available EXIF information, comparing the actual EXIF tags against tags that are typically used by a certain device (one that’s specified as a capturing device in the corresponding EXIF tag). We collected a comprehensive database of EXIF tags produced by a wide range of digital cameras including many smartphone models. We’re also actively adding information about new models as soon as they become available. In addition to EXIF analysis, we review quantization tables in all image channels. Most digital cameras feature a limited set of quantization tables; therefore, we can discover discrepancies by comparing hash tables of the actual image against those expected to be produced by a certain camera. EXIF tags of this image are a clear indication of image manipulation. The “Software” tag displays software used for editing the image, and the original date and time does not match last modification date and time. 2.2. Double Quantization Effect This algorithm is based on certain quantization artifacts appearing when applying JPEG compression more than once. If a JPEG file was opened, edited, then saved, certain compression artifacts will inevitably appear. In order to determine the double quantization effect, the algorithm creates 192 histograms containing discrete cosine transform values. Certain quantization effects will only appear on these histograms if an image was saved in JPEG format more than once. If the effect is discovered, we can definitely tell the image was edited (or at least saved by a graphic editor) at least once. However, if this effect is not discovered, we cannot make any definite conclusions about the image as it could, for example, be developed from a RAW file, edited in a graphic editor and saved to a JPEG file just once. The first two histograms represent a typical file that was only saved once. The other two demonstrate what happens to a JPEG image if it’s opened and saved as JPEG once again. These two images look identical, although the second picture was opened in a graphic editor and then saved. The following histograms make the difference clear. 2.3. 
Error Level Analysis This algorithm detects foreign objects injected into the original image by analyzing quantization tables of blocks of pixels across the image. Quantization of certain pasted objects (as well as objects drawn in an editor) may differ significantly from other parts of the image, especially if either (or both) the original image or injected objects were previously compressed in JPEG format. While this may not be a perfect example, it still makes it very clear which of the four cats were originally in the images, and which were pasted during editing. Quantization deviation is significantly higher for the two cats on the left. This effect will be significantly more pronounced if the object being pasted would be taken from a different image. 2.4. Copy/Move Forgery and Clone Detection An extremely common practice of faking images is transplanting parts of the same image across the picture. For example, an editor may mask the existence of a certain object by “patching” it with a piece of background cloned from that same image, copy or move existing objects around the picture. Quantization tables of the different pieces will look very similar to the rest of the image, so we must employ methods identifying image blocks that look artificially similar to each other. The second image is fake. Note that the other umbrella is not simply copying and pasting: the pasted object is scaled to appear larger (closer). The third image outlines matching points that allow detecting the cloned image. Our solution employs several approaches including direct tile comparison across the image, as well as complex algorithms that are able to identify cloned areas even if varying transparency levels are applied to pasted pieces, or if an object is placed on top of the pasted area. 2.5. Inconsistent Image Quality JPEG is a lossy format. Every time the same image is opened and saved in the JPEG format, some apparent visual quality is lost and some artifacts appear. You can easily reproduce the issue by opening a JPEG file, saving it, closing, then opening and saving again. Repeat several times, and you’ll start noticing the difference; sooner if higher compression levels are specified. Visual quality is not standardized, and varies greatly between the different JPEG compression engines. Different JPEG compression algorithms may produce vastly different files even when set to their highest-quality setting. As there is no uniform standard among the different JPEG implementations to justify resulting visual quality of a JPEG file, we had to settle on our own internal scale. This was inevitable to judge the quality of JPEG files processed by the many different engines on the same scale. This is the same image, only the last three pictures are saved from the original with 90%, 70% and 50% quality respectively. The higher the level of compression is the more visible blocking artifacts become. JPEG is using blocks sized 8×8 pixels, and these blocks become more and more clearly visible when the image is re-saved. According to our internal scale, JPEG images coming out of the camera normally have apparent visual quality of roughly 80% (can be more or less, depending on camera settings and JPEG compression engine employed by the camera processor). As a result, we expect an unaltered image to fall approximately within that range. 
However, as JPEG is a lossy compression algorithm, every time a JPEG image is opened and saved as a JPEG file again, there is loss of apparent visual quality – even if the lowest compression / highest quality setting is used. The simplest way to estimate the apparent visual quality of an existing JPEG file would be applying certain formulas to channel quantization tables specified in the file’s tags. However, altering the tags is all too easy, so our solution uses pixel-level analysis that can “see through” the quantization matrix. 3. Non-Applicable Algorithms Some techniques sound great on paper but don’t work that well (if at all) in real life. The algorithms described below may be used in lab tests performed under controlled circumstances, but stand no chance in real life applications. 3.1. Block artifact grid detection The idea is also based on ideas presented in and . However, the algorithm analyzes the result of discrete cosine transform coefficients calculated on a bunch of 8×8 JPEG DCT chunks. Comparing coefficients to one another can supposedly identify foreign objects such as those pasted from another image. In reality these changes turned out to be statistically insignificant and easily affected by consecutive compression when saving the final JPEG image. In addition, discrepancies can easily arise in the original image on the borders of different color zones. 3.2. Color filter array interpolation Most modern digital sensors are based on the Bayer array. This algorithm makes use of the fact that most modern digital cameras are using sensors based on a Bayer array. Pixel values of color images are determined by interpolating readings of adjacent red, green and blue sub-pixels . Based on this fact, a statistical comparison of adjacent blocks of pixels can supposedly identify discrepancies. In reality, we discovered no statistically meaningful differences, especially if an image was compressed and re-compressed with a lossy algorithm such as JPEG. This method would probably give somewhat more meaningful results if lossless compression formats such as TIFF were widely used. In real-life applications, the lossy JPEG format is a de-facto standard for storing digital pictures, so color filter array interpolation algorithm is of little use in these applications. The algorithms described in this paper made it to a commercial product. They were implemented as a plugin to a forensic tool Belkasoft Evidence Center [http://forensic.belkasoft.com/]. The plugin enables Evidence Center to estimate how genuine the images are by calculating the probability of alterations. The product is aimed at forensic audience, allowing investigators, lawyers and law enforcement officials validate whether digital pictures submitted as evidence are in fact acceptable. Using Evidence Center equipped with the Forgery Detection plugin to analyze authenticity of digital images is easy. The analysis is completely automated. Sample report looks like the following: The plugin is available at http://forensic.belkasoft.com/en/forgery-detection. 5. Conclusion and Further Work We developed a comprehensive software solution implementing algorithms based on statistical analysis of information available in digital images. A neural network is employed to produce the final decision, judging the probability of an image of being altered or original. Some algorithms employed in our solution are based on encoding and compression techniques as well as compression artifacts inherent to the de-facto standard JPEG algorithm. 
Most alterations performed to JPEG files are spotted right away with high probability. Notably, our solution in its current state may miss certain alterations performed on uncompressed images or pictures compressed with a lossless codec. Let us take, for example, scenario in which an editor pastes slices from one RAW (TIFF, PNG…) image file into another losslessly compressed file, and then saves a final JPEG only once. In this case, our solution will be able to tell that the image was in fact modified in some graphic editing software, but will be likely unable to detect the exact location of foreign objects. However, if the pasted bits were taken from a JPEG file (which is rather likely as most pictures today are in fact stored as JPEGs), then our solution will likely be able to pinpoint the exact location of the patches. 6. About the Authors Alexey Kuznetsov is the Head of Department of GRC (Governance Risk Complience) in International Banking Institute. Alexey is an expert on business process modeling. Yakov Severyukhin is Head of Photoreport Analysis Laboratory in International Banking Institute. Yakov is an expert in digital image processing. Oleg Afonin is Belkasoft sales and marketing director. He is an expert and consultant in computer forensics. Yuri Gubanov is a CEO of Belkasoft. Yuri is a renowned computer forensics expert. He is a frequent speaker at industry-known conferences such as CEIC, HTCIA, FT-Day, ICDDF, TechnoForensics and others. The authors can be contacted by email at firstname.lastname@example.org 1. Protecting Journalistic Integrity Algorithmically http://lemonodor.com/archives/2008/02/protecting_journalistic_integrity_algorithmically.html#c22564 2. Detection of Copy-Move Forgery in Digital Images http://www.ws.binghamton.edu/fridrich/Research/copymove.pdf 3. John Graham – Cumming’s Clone Tool Detector http://www.jgc.org/blog/2008/02/tonight-im-going-to-write-myself-aston.html 4. Demosaicking: Color Filter Array Interpolation http://www.ece.gatech.edu/research/labs/MCCL/pubs/dwnlds/bahadir05.pdf 5. Retrieving Digital Evidence: Methods, Techniques and Issues http://forensic.belkasoft.com/en/retrieving-digital-evidence-methods-techniques-and-issues
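As a rough, hedged illustration of the error level analysis idea from section 2.3 (this is a generic sketch, not Belkasoft's implementation; the file name and the re-save quality of 90 are placeholders), one common way to visualize error levels is to recompress the image once more and inspect the per-pixel differences:

```python
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    original = Image.open(path).convert("RGB")
    resaved_path = path + ".ela.jpg"
    original.save(resaved_path, "JPEG", quality=quality)   # recompress one more time
    diff = ImageChops.difference(original, Image.open(resaved_path))
    # Stretch the usually faint differences so regions that recompress differently
    # (often pasted or redrawn areas) become visible for inspection
    max_diff = max(high for _, high in diff.getextrema()) or 1
    return diff.point(lambda value: value * 255 // max_diff)

error_level_analysis("photo.jpg").save("photo_ela.png")
```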
<urn:uuid:32367258-3429-45d4-8749-efd66ac1764b>
CC-MAIN-2017-04
https://articles.forensicfocus.com/2013/08/22/detecting-forged-altered-images/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00558-ip-10-171-10-70.ec2.internal.warc.gz
en
0.920292
3,943
2.578125
3
Decisions will be made by artificial intelligence in the Information Generation Easier access to advanced, cloud-based analytical tools will transform decision making by turning data into useful insights. Linked data, machine intelligence, predictive analytics, autonomous algorithms, and computational and virtual simulations will enhance the relevance and meaning from the massive amounts of data generated in today’s digital transformation. Additionally, new interfaces that can understand spoken and programing languages will support an increased role of smart automation in all kinds of work. Artificial Intelligence (AI) Resource Deployment Hong Kong has one of the world’s best subway systems. When maintenance or engineering work is needed, an AI program runs a simulated model of the entire system to find the best schedule to do the work. An algorithm that can see in a way no human can identifies the opportunity to combine and share resources and manages each task. The AI overseer incorporates knowledge gathered from human engineering and maintenance experts into its algorithm to help inform recommendations. Deep Learning Software MetaMind uses deep learning, a type of AI, to train systems on volumes of information derived from images, audio, unstructured text and other data, and then extract inferences about it in response to new information. The company’s technology enables powerful text classification, Twitter sentiment analysis, summarization of data, question answering, and custom tasks such as identifying images based on their composition. The software can also be used to extract signals hidden in financial reports or analyze chat messages from people seeking customer support from a company. Augmented decision making support is set to be incorporated into many forms of knowledge work in the near future. A 2013 Oxford University report concluded that 45% of American jobs are at high risk of being taken over by computers within the next two decades. Expect knowledge work to be transformed in the coming decade by the emergence of even more robust and accessible augmented decision making tools. Without social and education policy interventions, the next wave of technological innovation may lead to higher unemployment and rising inequality. In many domains, analytics tools have the potential to not only augment human decision making but replace it altogether. This may eventually lead to mass displacement of white-collar jobs, creating new economic, policy and social challenges. Success will lie in the correct blend of human and machine augmented decision making capabilities. As we increasingly come to rely on these decision making technologies, it will be important to frequently re-examine how algorithmic biases and assumptions impact decisions. Business leaders will need to operate with full transparency, sharing with their employees and perhaps even the public the assumptions built into the algorithms they are using to make strategic decisions.
<urn:uuid:a72ea464-c6f6-43f2-8d4f-7a1d671896b4>
CC-MAIN-2017-04
https://www.emc.com/information-generation/augmented-decision-making.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00466-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918125
595
2.828125
3
Your Database on ACID: Set of Rules for Data Safety Acid might screw up your brain, but it won’t hurt your database. Not the drug, of course. Here we’re talking about atomicity, consistency, isolation and durability (ACID). It’s a key concept in database theory, and if you’ve worked with a database in an enterprise setting, there’s a good chance you’ve heard of it. If not, no worries. It’s easy to grasp, and that’s good, because it’s something you’ll need if running a database (or merely the network that supports an enterprise-grade database) is part of your job. ACID refers to the way a database processes transactions. A transaction is simply a logical operation, and it’s often one that’s composed of many steps. For instance, a common transaction in an airline database would include a passenger’s flight change. First, the passenger will be removed from his old flight, then added to his new flight. Each step will be logged to complete the transaction. In fact, it’s far more complex — even the simplest transactions can involve a dozen steps or more. So, how does each element of ACID apply to our hapless passenger, who’s trying to switch his flight from New York to New Zealand? Let’s take a look. Atomicity: Simply put, atomicity means “all or nothing.” If a part of a transaction fails, all of it fails. A database with perfect atomicity won’t keep the first part of a transaction intact if the second part fails because of drive corruption, a network brownout or, frankly, any other reason. Applied, our hapless passenger won’t be deleted from his flight to New York, unless he’s also added to a flight to New Zealand. If, for some reason, he’s been deleted from his New York flight but can’t be added to New Zealand (because the flight is full, perhaps), the New York deletion will be undone — “rolled back” in database lingo. Consistency: Every database has its own rules. For instance, the airline database might have a rule that says no passenger can change flights unless he’s already paid for his first flight. In the ACID formula, consistency is just a nice way of saying no transaction can flout the rules of the database in which it takes place. If you try to book a passenger on a flight to New Zealand when he’s not yet paid for the flight to New York (say, because his credit card was declined), the transaction will fail. Isolation: A good transaction knows little — or even nothing — about the transactions that come before and after it or even at the same time. This is the principle of isolation: No two transactions can interfere with each other. What’s more, a transaction can’t use data from a transaction that’s in progress. Rather, it can use data only from a transaction that’s complete. As one transaction alters data, another transaction can’t see — or use — that altered data to avoid complications. Hence, other New York/New Zealand transactions won’t see your passenger after he’s been deleted from his New York flight and before he’s been added to the New Zealand flight. Durability: Changing a database won’t do much good if the change disappears. Thus, the notion of durability: Any successful transaction should be permanent. It can’t be undone unless the user specifies it, which would entail a new transaction. No hardware error, network brownout or act of God should keep a completed transaction from staying complete. So, if you reroute that passenger from New York to New Zealand, you’ll be confident he’ll end up in Auckland and not LaGuardia or JFK. 
And it’s a good thing too — New York gets chilly this time of year. David Garrett is a Web designer and former IT director, as well as the author of “Herding Chickens: Innovative Techniques in Project Management.” He can be reached at editor (at) certmag (dot) com.
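To make the all-or-nothing behavior described above concrete, here is a small hedged sketch using SQLite in Python. It is not how an airline reservation system is actually built; the table layout and names are invented for illustration, but it shows both steps of a flight change committing together or not at all.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bookings (passenger TEXT, flight TEXT);
    INSERT INTO bookings VALUES ('A. Passenger', 'NYC-101');
""")

def change_flight(conn, passenger, old_flight, new_flight):
    try:
        with conn:  # opens a transaction; commits on success, rolls back on any error
            conn.execute("DELETE FROM bookings WHERE passenger = ? AND flight = ?",
                         (passenger, old_flight))
            conn.execute("INSERT INTO bookings VALUES (?, ?)", (passenger, new_flight))
    except sqlite3.Error:
        # The deletion is rolled back automatically: all or nothing
        print("Flight change failed; the original booking is preserved")

change_flight(conn, "A. Passenger", "NYC-101", "AKL-205")
print(conn.execute("SELECT * FROM bookings").fetchall())
```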
<urn:uuid:14f6ae0e-0ad1-4c75-b12e-fd4afa3fcabe>
CC-MAIN-2017-04
http://certmag.com/your-database-on-acid-a-simple-set-of-rules-for-data-safety/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00374-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932741
910
2.578125
3
Tapping Into A Homemade Android ArmyBlack Hat speaker will detail how security researchers can expedite their work across numerous Android devices at once. In the Android development world, fragmentation has been the bane of the typical app coder's existence for as long as the platform has been running devices. With so many different devices to account for, it's difficult to troubleshoot and ensure apps run uniformly across them. That same frustration is actually amplified in mobile security research, because as white hat hackers dive under the hood of Android devices they find that not only do different devices behave differently, but sometimes even devices advertised under the same name may sport different processors and totally different architectures. "Each device is kind of like a unique snowflake," says Joshua Drake, director of research science at Accuvant Labs. "Even if we both had a Samsung Galaxy S3, and, say, you had one from Verizon and I had one unlocked, those phones are almost completely different on the inside. Samsung makes the processor for the unlocked one, and Qualcomm's processor runs Verizon's. That core of a change will change a lot of things." Consequently, understanding how certain vulnerabilities may cut across devices and manufacturers becomes a very difficult nut to crack -- or, at the very least, requires a long nut-cracking process. However, at Black Hat USA next month Drake plans to help the security community save time and focus on finding bugs and reaching other important security conclusions by building what he terms a homemade "Android Army." His talk will discuss how a simple hardware hack, combined with an open-source toolkit he's been refining, can make it easier for researchers to scale their exploration across many different devices at once. Drake came up with the idea as he was writing and researching the Android Hacker's Handbook. As he explains, the typical way a researcher interacts with an Android device is through the device hooked up via USB and the Android Debug Bridge (ADB) running on a PC. "That tool works fine, but it is not really designed to be one where you're operating on lots of devices," he says. "I thought to myself: Wouldn't it be great if I could somehow have ADB but add in this extra layer of something that will run across a whole bunch of devices?" And so, Drake figured out the most expeditious way to nest together multiple USB ports to get dozens of devices running on a PC at once and started working on the scripts that would eventually make up what he calls the Android Cluster Toolkit. Already available as an open-source project, the toolkit makes it easier, not only for the user to identify devices hooked into a computer by human-friendly names rather than long serial codes, but to also run commands on multiple devices at once. Drake says he personally has built up a cluster of about 55 devices but that it is possible for a researcher to cram up to 127 devices at once on a single PC's root USB hub. "It can be helpful, not just if you are auditing and looking through some source code and trying to connect that to real devices, but also if there has been a vulnerability that's already been identified and disclosed -- then you can quickly get an idea of which devices out there that are actually affected. Most of what the software part of this toolkit was designed to do was to help me find a way to type less and get more done." Ericka Chickowski specializes in coverage of information technology and business innovation. 
She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.
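As a rough illustration of what such a toolkit automates (this is not the actual Android Cluster Toolkit code), the following Python sketch fans a single adb shell command out to every attached device. It assumes adb is on the PATH and that the devices are already authorized for debugging.

```python
import subprocess

def list_serials():
    out = subprocess.run(["adb", "devices"], capture_output=True, text=True, check=True).stdout
    # Skip the "List of devices attached" header and keep only fully connected devices
    return [line.split()[0] for line in out.splitlines()[1:] if line.strip().endswith("device")]

def run_on_all(command):
    for serial in list_serials():
        result = subprocess.run(["adb", "-s", serial, "shell"] + command,
                                capture_output=True, text=True)
        print(f"[{serial}] {result.stdout.strip()}")

# Example: collect the Android version of every connected device in one pass
run_on_all(["getprop", "ro.build.version.release"])
```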
<urn:uuid:d3460f51-e80e-4e87-a902-aa076d184aea>
CC-MAIN-2017-04
http://www.darkreading.com/mobile/tapping-into-a-homemade-android-army/d/d-id/1297309
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00374-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964881
741
2.625
3
Python Coding & Scripting Python Coding & Scripting course from CODEC Networks will teach you how to quickly write your first program in Python! You will also learn how to create custom modules and libraries. This comprehensive course covers the basics of Python as well as the most advanced aspects such as debugging and handling files. Python is a functional and flexible programming language that is powerful enough for experienced programmers to use, but simple enough for beginners as well. Python is a well-developed, stable and fun-to-use programming language that is adaptable for both small and large development projects. Programmers love Python because of how fast and easy it is to use. Python cuts development time in half with its simple-to-read syntax and easy compilation feature. Debugging your programs is a breeze in Python with its built-in debugger. Using Python makes programmers more productive and their programs ultimately better. Who Should Attend Candidates should have prior programming experience and be familiar with basic concepts of C/C++. Prior exposure to object-oriented programming concepts is not required, but definitely beneficial. The course is targeted at candidates who wish to start from the basics or improve their Python skill level, as well as developers and system administrators who wish to be able to develop, automate, and test applications and systems using the Python programming language. Specifically, attendees will gain relevant knowledge in: - Design, develop and implement powerful unit testing within their Python applications. - Create Python scripts that use expressions, variables, conditionals, loops, lists, sets, functions, objects and exceptions. - Understand and leverage Object Oriented programming techniques in their Python applications. - Develop Python applications that utilize file handling, pickling and archiving (zip and tar). - Utilize Python to interact with SQL databases. - Implement email objects. - Create/Develop generators and decorators. - Utilize introspection, multi-threading, and multi-processing techniques. - Alter or augment the operation of existing or inherited Python code using decorators. - Apply powerful regular expression matching and manipulation techniques. - Generate and send a complex email with multiple MIME parts and attachments. - Fast Track : 5 Days (6 Hours/Day) - Regular Track : 6 Weeks (3 Hours/Day) - Pre-course technical evaluation - Training Material (E-Books) - Discount Vouchers up to 15 - 25% for further training - Certificate of Appreciation from CODEC Networks Post Training Program (CODEC Networks Specialty) - One Live Project Work - Hand-over Labs & Practicals Checklist for review
<urn:uuid:ddd9c690-35d0-434f-82cb-aad59b70bb59>
CC-MAIN-2017-04
http://www.codecnetworks.com/python-coding.php
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00190-ip-10-171-10-70.ec2.internal.warc.gz
en
0.862877
541
2.65625
3
2016 marks the 60th anniversary of the venerable Hard Disk Drive (HDD). While new computers increasingly turn to Solid State Disks (SSDs) for main storage, HDDs remain the champions of low-cost, high-capacity data storage. That’s a big reason why we still use them in our Storage Pods. Let’s take a spin in the Wayback Machine and take a look at the history of hard drives. Let’s also think about what the future might hold. It Started With RAMAC IBM made the first commercial hard disk drive-based computer and called it RAMAC – short for “Random Access Method of Accounting And Control.” Its storage system was called the IBM 350. RAMAC was big – it required an entire room to operate. The hard disk drive storage system alone was about the size of two refrigerators. Inside were stacked 50 24-inch platters. For that, RAMAC customers ended up with less than 5 MB – that’s right, megabytes of storage. IBM’s marketing people didn’t want to make RAMAC store any more data than that. They had no idea how to convince customers they’d need more storage than that. IBM customers forked over $3,200 for the privilege of accessing and storing that information. A MONTH. (IBM leased its systems.) That’s equivalent to almost $28,000 per month in 2016. Sixty years ago, data storage cost $640 per megabyte, per month. At IBM’s 1956 rates for storage, a new iPhone 7 would cost you about $20.5 million a month. RAMAC was a lot harder to stick in your pocket, too. Plug and Play These days you can fit 2 TB onto an SD card the size of a postage stamp, but half a century ago, it was a very different story. IBM continued to refine early hard disk drive storage, but systems were still big and bulky. By the early 1960s, IBM’s mainframe customers were hungry for more storage capacity, but they simply didn’t have the room to keep installing refrigerator-sized storage devices. So the smart folks at IBM came up with a solution: Removable storage. The IBM 1311 Disk Storage Drive, introduced in 1962, gave rise to the use of IBM 1316 “Disk Packs” that let IBM’s mainframe customers expand their storage capacity as much as they needed (or could afford). IBM shrank the size of the disks dramatically, from 24 inches in diameter down to 14 inches. The 9-pound disk packs fit into a device about the size of a modern washing machine. Each pack could hold about 2 MB. For my part, I remember touring a data center as a kid in the mid-1970s and seeing removable IBM disk packs up close. They looked about the size of a container you’d use to carry a birthday cake: Large, sealed plastic containers with handles on the top. Computers had pivoted from expensive curiosities in the business world to increasingly essential devices needed to get work done. IBM’s System/360 proved to be an enormously popular and influential mainframe computer. IBM created different models but needed flexible storage across the 360 product line. So IBM created a standard hard disk device interconnect. Other manufacturers adopted the technology, and a cottage industry was born: Third-party hard disk drive storage. The PC Revolution Up until the 1970s, computers were huge, expensive, very specialized devices only the biggest businesses, universities and government institutions could afford. The dropping price of electronic components, the increasing density of memory chips and other factors gave rise to a brand new industry: The personal computer. Initially, personal computers had very limited, almost negligible storage capabilities.
Some used perforated paper tape for storage. Others used audio cassettes. Eventually, personal computers would write data to floppy disk drives. And over time, the cost of hard disk drives fell enough that PC users could have one, too. In 1980, a young upstart company named Shugart Technology introduced a 5 MB hard disk drive designed to fit into personal computers of the day. It was a scant 5.25 inches in diameter. The drive cost $1,500. It would prove popular enough to become a de facto standard for PCs throughout the 1980s. Shugart changed its name to Seagate Technology. Yep. That Seagate. In the space of 25 years, hard drive technology had shrunk from a device the size of a refrigerator to something less than 6 inches in diameter. And that would be nothing compared to what was to come in the next 25 years. The Advent of RAID An important chapter in Backblaze’s backstory appears in the late 1980s when three computer scientists from U.C. Berkeley coined the term “RAID” in a research paper presented at the SIGMOD conference, an annual event which still happens today. RAID is an acronym that stands for “Redundant Array of Inexpensive Disks.” The idea is that you can take several discrete storage devices – hard disk drives, in this case – and combine them into a single logical unit. Dividing the work of writing and reading data between multiple devices can make data move faster. It can also reduce the likelihood that you’ll lose data. The Berkeley researchers weren’t the first to come up with the idea, which had bounced around since the 1970s. They did coin the acronym that we still use today. RAID is vitally important for Backblaze. RAID is how we build our Storage Pods. Our latest Storage Pod design incorporates 60 individual hard drives assembled in 4 RAID arrays. Backblaze then took the concept a step further by implementing our own Reed-Solomon erasure coding mechanism to work across our Backblaze Vaults. With our latest Storage Pod design we’ve been able to squeeze 480 TB into a single chassis that occupies 4U of rack space, or about 7 inches of vertical height in an equipment rack. That’s a far cry from RAMAC’s 5 MB of refrigerator-sized storage. 96 million times more storage, in fact. Bigger, Better, Faster, More Throughout the 1980s and 1990s, hard drive and PC makers innovated and changed the market irrevocably. 5.25-inch drives soon gave way to 3.5-inch drives (we at Backblaze still use 3.5-inch drives designed for modern desktop computers in our Storage Pods). When laptops gained in popularity, drives shrank again to 2.5 inches. If you’re using a laptop that has a hard drive today, chances are it’s a 2.5-inch model. The need for better, faster, more reliable and flexible storage also gave rise to different interfaces: IDE, SCSI, ATA, SATA, PCIe. Drive makers improved performance by increasing the spindle speed, the speed of the motor that turns the hard drive. 5,400 revolutions per minute (RPM) was standard, but 7,200 yielded better performance. Seagate, Western Digital, and others upped the ante by introducing 10,000-RPM and eventually 15,000-RPM drives. IBM pioneered the commercial hard drive and brought countless hard disk drive innovations to market over the decades. In 2003, IBM sold its storage division to Hitachi. The many Hitachi drives we use here at Backblaze can trace their lineage back to IBM. Solid State Drives Even as hard drives found a place in early computer systems, RAM-based storage systems were also being created.
The prohibitively high cost of computer memory, its complexity, size, and requirement to stay powered to work prevented memory-based storage from catching on in any meaningful way. Some very specialized, expensive systems did find use in the supercomputing and mainframe computer markets, though. Eventually non-volatile RAM became fast, reliable and inexpensive enough that SSDs could be mass-produced, but only by degrees. They were incredibly expensive. By the early 1990s, you could buy a 20 MB SSD for a PC for $1,000, or about $50 per megabyte. By comparison, the cost of a spinning hard drive had dropped below $1 per megabyte, and would plummet even further. The real breakthrough happened with the introduction of flash-based SSDs. By the mid-2000s, Samsung, SanDisk and others brought to market flash SSDs that acted as drop-in replacements for hard disk drives. SSDs have gotten faster, smaller and more plentiful. Now PCs and Macs and smartphones all include flash storage of all shapes and sizes and will continue to move in that direction. SSDs provide better performance, better power efficiency, and enable thinner, lighter computer designs, so it’s little wonder. The venerable spinning hard drive, now 60 years old, still rules the roost when it comes to cost per gigabyte. SSD makers are getting closer to parity with hard drives, but they’re still years away from hitting that point. An old fashioned spinning hard drive still gives you the best bang for your buck. We can dream, though. Over the summer our Andy Klein got to wondering what Seagate’s new 60 TB SSD might look like in one of our Storage Pods. He had to guess at the price but based on current market estimates, an SSD-based 60-drive Storage Pod would cost Backblaze about $1.2 million. Andy didn’t make any friends in Backblaze’s Accounting department with that news, so it’s probably not going to happen any time soon. As computers and mobile devices have pivoted from hard drives to SSDs, it’s easy to discount the hard drive as a legacy technology that will soon fall by the wayside. I’d encourage some circumspection, though. It seems every few years, someone declares the hard drive dead. Meanwhile hard drive makers keep finding ways to stay relevant. There’s no question that the hard drive market is in a period of decline and transition. Hard disk drive sales are down year-over-year. Consumers switch to SSD or move away from Macs and PCs altogether and do more of their work on mobile devices. Regardless, innovation and development of hard drives continue apace. We’re populating our own Storage Pods with 8 TB hard drives. 10 TB hard drives are already shipping, and even higher-capacity 3.5-inch drives are on the horizon. Hard drive makers constantly improve areal density – the amount of information you can physically cram onto a disk. They’ve also found ways to get more platters into a single drive mechanism by filling it with helium. This sadly does not make the drive float, dashing my fantasies of creating a Backblaze data center blimp. So is SSD the only future for data storage? Not for a while. Seagate still firmly believes in the future of hard drives. Its CFO estimates that hard drives will be around for another 15-20 years. Researchers predict that hard drives coming to market over the next decade will store an order of magnitude more data than they do now – 100 TB or more. Think it’s out of the question?
Imagine handing a 10 TB hard drive to a RAMAC operator in 1956 and telling them that the 3.5-inch device in their hands holds two million times more data than that big box in front of them. They’d think you were nuts.
<urn:uuid:5015ee88-63fb-4536-93fa-3aec0cdd2ecf>
CC-MAIN-2017-04
https://www.backblaze.com/blog/history-hard-drives/?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00098-ip-10-171-10-70.ec2.internal.warc.gz
en
0.952247
2,418
3.484375
3
Real-time voice and video communication can be achieved with the internet standard SIP (Session Initiation Protocol). It was developed by the IETF (Internet Engineering Task Force) and is published as RFC 3261. SIP is a signaling protocol used to establish voice and video calls for live communications: within an IP network, one or more participants can create, modify, or end sessions with it. As one of the Voice over IP protocols, it is central to explaining how VoIP technology works. Before going further with SIP, you first have to understand the term "session" in a communication network. In the simplest case, a session is a straightforward two-way phone call; a multimedia conference session, on the other hand, can consist of many participants.
<urn:uuid:d8f550c8-d476-4b94-9bc9-a3cd8bf951eb>
CC-MAIN-2017-04
https://howdoesinternetwork.com/tag/session-initiation-protocol
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00008-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926956
184
3.03125
3
Think of an atlas, and the image of a dusty, oversized book perched atop a library shelf likely comes to mind. Filled with maps, it was the key to many a grade-school report. Using the Internet, today's student geographers have a wealth of information at their fingertips. And you should see what the adults have to play with. Tom Paese, Pennsylvania secretary of the Governor's Office Administration and Charles Gerhards, Pennsylvania deputy secretary of information technology. In Pennsylvania, officials with the Governor's Office of Information Technology (OIT) took cartography a step further last December, when they unveiled the Pennsylvania Technology Atlas, an online program that allows users to create GIS-based maps detailing technology resources ranging from installed fiber to telemedicine facilities. Classroom to Boardroom An offshoot of Gov. Tom Ridge's $132 million Link-to-Learn educational technology initiative, the atlas was originally created to track existing technology infrastructure so state grants weren't wasted duplicating IT. By knowing which schools were located near existing fiber or wireless networks, officials could earmark Link-to-Learn funds for other areas, those lacking high-speed infrastructure. Since then, the atlas has evolved into a valuable economic development tool. Officials quickly realized that putting the information on a Web site would give bandwidth-seeking public and private organizations an easy way of viewing the expanding infrastructure. With a potential hit on their hands, officials expanded the database to include hospitals, libraries, utilities and government agencies. "Technology is generally driving a lot of economies, and in Pennsylvania, we believe we have a lot to capitalize on," said Tom Paese, secretary of the Governor's Office of Administration. "We have to attract more and more businesses; we have to retain those that are spinning out of the universities, and if the atlas allows them to easily identify where capabilities in technology exist, the better we're able to build jobs and retain them." Developed by the University of Pittsburgh's School of Information Sciences using a Link-to-Learn grant of $600,000, the atlas contains more than 400 million bytes of information, enough to fill 10,000 printed pages. Teams led by university educators in each of seven state regions collected data using local surveys and interviews. The teams continuously update data from more than 10,000 telecommunications companies, universities, local governments, school districts and other public and private organizations that own, rent or lease technology assets. Beginning with a base map of Pennsylvania showing county borders, users can create customized displays by selecting mapping layers that include geographical features, organizations and telephone and utility companies. A user wanting to display public high schools and their location relative to Sprint's fiber network would select the "Public High School" and "Sprint" mapping layers from a frame along the left side of the mapping window. Zoom buttons allow the user to display the selected mapping criteria over the entire commonwealth or focus in on a highlighted area. The resulting map can be downloaded and printed. The geographical context of the map can be user-defined, allowing the display of technology resources by school district, congressional district, area code or ZIP code. The list of organizations that can be displayed include public and private universities and state agencies. 
The fiber networks of local utility companies and such telecommunications heavyweights as AT&T, Bell Atlantic, GTE and MCI can be displayed to show their proximity to any organization listed. The atlas' map-generating tool is supported by links to the online Link-to-Learn database that drives the project. A technology atlas snapshot of the past year's data is being developed, which will allow users to compare data from year to year and observe trends in infrastructure growth. A Sharper Image Pennsylvania officials turned to the atlas recently for help in forming a procurement plan for the state's annual $80 million telecommunications system. Based on a long-held belief that Pennsylvania's northern tier was short on installed fiber, procurement officers were prepared to budget millions for new fiber projects in that region. However, data collected for the atlas painted a different picture. "Once we had the Technology Atlas, we were able to see that, in fact, the northern tier does have a lot of installed fiber, although it's from non-traditional service providers like utility companies. That completely changed the way we went about procuring," said Charlie Gerhards, Pennsylvania's deputy secretary of information technology. "We had less emphasis on the traditional service providers, and we were looking at partners like utility companies and others who already had the fiber in the ground." And, during budget preparations last year, researchers used the atlas to show that by spending $7 million to finish Internet wiring to all public libraries, 95 percent of Pennsylvania's residents would be less than 20 minutes from a public Internet access location. No Secrets Revealed While confident that the data presented is accurate for planning purposes, Pennsylvania officials acknowledge that with competing companies sharing space in the atlas, strategic information such as spare fiber quantity, specific technologies in use or plans for future expansion is likely to be left off its pages. "We realize that beneath the information on the site, users may not see information that is proprietary," Paese said. "What we've really tried hard to say to businesses is that as other companies grow around you, that will help your business ultimately, particularly in regions that are hard hit. And we've been pretty surprised at the willingness of the companies to talk." OIT officials plan to spend $150,000 a year to maintain and update the atlas and the 12,000 files occupying its database. A CD-ROM version will be released annually for those who don't have access to the Internet. Gerhards said future enhancements to the atlas will depend on feedback received from users later this year. A free executive brief describing the development and capabilities of the Technology Atlas is available by calling 717/705-4636. Tom Byerly is a writer in Elk Grove, Calif. Email
<urn:uuid:22ca407a-1433-4916-b893-c7b37fd16c70>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/You-Are-Here.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00034-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948004
1,233
2.578125
3
Diebold Election Systems voting machines are not secure enough to guarantee a trustworthy election, and an attacker with access to a single machine could disrupt or change the outcome of an election using viruses, according to a review of Diebold's source code. "The software contains serious design flaws that have led directly to specific vulnerabilities that attackers could exploit to affect election outcomes," read the University of California at Berkeley report, commissioned by the California Secretary of State as part of a two-month "top-to-bottom" review of electronic voting systems certified for use in California. The assessment of Diebold's source code revealed an attacker needs only limited access to compromise an election. "An attack could plausibly be accomplished by a single skilled individual with temporary access to a single voting machine. The damage could be extensive -- malicious code could spread to every voting machine in polling places and to county election servers," it said. The report, titled "Source Code Review of the Diebold Voting System," was apparently released Thursday, just one day before California Secretary of State Debra Bowen is to decide which machines are certified for use in California's 2008 presidential primary elections. The source-code review identified four main weaknesses in Diebold's software, including: vulnerabilities that allow an attacker to install malware on the machines, a failure to guarantee the secrecy of ballots, a lack of controls to prevent election workers from tampering with ballots and results, and susceptibility to viruses that could allow attackers to influence an election. "A virus could allow an attacker who only had access to a few machines or memory cards, or possibly to only one, to spread malicious software to most, if not all, of a county's voting machines," the report said. "Thus, large-scale election fraud in the Diebold system does not necessarily require physical access to a large number of voting machines." The report warned that a paper trail of votes cast is not sufficient to guarantee the integrity of an election using the machines. "Malicious code might be able to subtly influence close elections, and it could disrupt elections by causing widespread equipment failure on election day," it said. The source-code review went on to warn that commercial antivirus scanners do not offer adequate protection for the voting machines. "They are not designed to detect virally propagating malicious code that targets voting equipment and voting software," it said. In conclusion, the report said Diebold's voting machines had not been designed with security as a priority. "For this reason, the safest way to repair the Diebold system is to reengineer it so that it is secure by design," it said. The Diebold source-code review and several other documents, including a review of source code used in other voting systems, had earlier been withheld from release by the Secretary of State, even as other reports related to the review of voting machines were released on July 27. An explanation posted on the Secretary of State's website on July 27 noted the source-code review and other reports had been submitted on time. "Their reports will be posted as soon as the Secretary of State ensures the reports do not inadvertently disclose security-sensitive information," the website said.
The delayed release of the source-code review meant that David Wagner, an associate professor of computer science at the University of California at Berkeley and an author of the report, was not able to present his findings at a public hearing held on July 30 to discuss the results of the voting system review.
<urn:uuid:f89bd1b8-62a9-4392-9c13-609045c9f939>
CC-MAIN-2017-04
http://www.cio.com/article/2438314/social-media/diebold-voting-machines-vulnerable-to-virus-attack.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00430-ip-10-171-10-70.ec2.internal.warc.gz
en
0.969891
704
2.59375
3
Some applications require representations that uniquely identify an object, a scene, or even a face during content acquisition. When these representations are used to identify a person or verify a person's identity, they are generally called biometrics, whereas more general representations are often referred to as metadata. Visual biometrics like faces, fingerprints, and iris patterns require visual processing techniques (i.e., computer vision) to compute and verify a biometric, whereas other biometrics like voice-print identification (used in speaker recognition) or DNA analysis rely on other information sources. Aside from the ability to quickly recall personal information from a few biometrics, the ability to index and retrieve content with these cues can greatly improve a user's experience with his or her content. Have you ever wanted to scan all of your personal photos and videos to find all of the pictures of a friend? Have you tried to find all of the recent speeches given by a political figure or aspiring actor? If so, the additional metadata provided by visual biometrics like faces would be an ideal way to organize, search, and filter your personal content. While exact face recognition is quite an active research topic that poses many challenges (lighting changes, database scale, accuracy, etc.), face clustering embraces a more gradual approach that requires no training stage, is applicable to large-scale databases, and can easily be improved with user feedback or a secondary analysis with a more rigorous recognition algorithm. In prior work, a person is represented by two image regions: the face and torso. Given the output of a face detector, commonly the Viola-Jones boosted cascade of simple features, the torso region can be approximated from the size and position of the detected face region, as illustrated in the figure on the right. Low-level features like color, texture, and edge information are concatenated and analyzed in an agglomerative clustering routine that repeatedly iterates over content until it reaches a pre-determined stopping condition. Experimentally, the torso region not only aids the clustering algorithm in disambiguating people's faces, but it can also be used as an index for people wearing similar clothing with different non-frontal views of a face or, conversely, for the different clothing that one person wears throughout a piece of content, as illustrated above. Continuing research in this area focuses on alternative representations (e.g., 3D, semantic) and higher-precision features for face recognition, such as those used in the content-based copy detection framework. Further, as mobile technology and capabilities continue to advance, we are also investigating methods for acquisition and analysis of biometric data on mobile devices, as in the LipActs project. Increasingly, businesses and consumers rely on passive video feeds, like fixed security cameras, to provide peace of mind for their stores and homes. In a business's public locations (e.g., retail stores), it is often helpful to understand who customers may be at certain times of day or after large promotional campaigns. Similarly, in a personal environment, a homeowner may have more peace of mind if there is a visual record of known visitors or solicitors. While it is unreasonable or too costly to ask each visitor to a home or business to identify himself or herself, passive analysis can provide an estimate of information about these visitors across many different categories.
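For readers who want to experiment with the face-plus-torso idea described above, a minimal sketch using OpenCV's implementation of the Viola-Jones detector might look like the following. The torso-box ratios, histogram settings, and clustering cutoff are illustrative assumptions, not the values used in the AT&T system.

    # Minimal sketch: detect faces with an OpenCV Viola-Jones cascade, derive an
    # approximate torso box from each face box, and cluster color histograms of
    # the combined regions with an agglomerative routine. All ratios and
    # thresholds here are illustrative assumptions.
    import cv2
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    CASCADE = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def face_and_torso_boxes(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        boxes = []
        for (x, y, w, h) in CASCADE.detectMultiScale(gray, 1.1, 5):
            # Assume the torso is roughly twice the face size, directly below it.
            tx, ty = max(0, x - w // 2), min(img.shape[0] - 1, y + h)
            tw = min(2 * w, img.shape[1] - tx)
            th = min(2 * h, img.shape[0] - ty)
            boxes.append(((x, y, w, h), (tx, ty, tw, th)))
        return boxes

    def region_histogram(img, box, bins=8):
        x, y, w, h = box
        patch = img[y:y + h, x:x + w]
        hist = cv2.calcHist([patch], [0, 1, 2], None, [bins] * 3,
                            [0, 256] * 3).flatten()
        return hist / (hist.sum() + 1e-9)

    def person_features(img, face_box, torso_box):
        # Concatenated face + torso color histograms stand in for the richer
        # color/texture/edge features described in the article.
        return np.concatenate([region_histogram(img, face_box),
                               region_histogram(img, torso_box)])

    def cluster_people(feature_vectors, cutoff=0.4):
        # Agglomerative clustering over features gathered across a collection;
        # the distance cutoff stands in for the stopping condition in the text.
        Z = linkage(np.vstack(feature_vectors), method="average", metric="cosine")
        return fcluster(Z, t=cutoff, criterion="distance")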
Stemming from techniques similar to those used in facial biometrics, this kind of passive, anonymous visitor information is often referred to as Content Analytics and is commonly used in aggregate to spot new trends or singular anomalies. As a service provider, AT&T is uniquely poised to offer identity authentication for both online transactions (e.g., a banking website or hotel check-in) and in-person transactions (e.g., buying coffee or a book, or using a vending machine) using tokens that are unique to a person, like his or her phone, a PIN code, or a fingerprint. Biometrics like fingerprints offer personalized tokens that are hard to emulate. Now that many mobile devices have at least one forward-facing camera, it is possible to leverage a person's facial activity as one of the strongest tokens of identity. While prior face recognition systems can be fooled with a color printout, it is much harder to emulate the facial mannerisms and lip movements of a person's speech. Here we discuss the discovery of optimal settings for recognizing a person's lip-based actions, or LipActs, for use in verification and retrieval scenarios. In the field of computer vision, activity recognition (waving hands, jumping, running, etc.) and lip recognition (to improve speech recognition with visual cues) have been studied independently for quite some time. Innovations in these two fields have led to the creation of recognition systems that can read lips and those that detect suspicious activity in public places. Like the image-based scale-invariant feature transform (SIFT) representation for content-based copy detection, the histogram of oriented gradients (HOG) feature representation is increasingly popular for human activity detection. Often referred to as local features, as opposed to global features like color and texture, these representations work so well because they capture information from a single point in an image or video keyframe in a highly efficient way. For example, HOG features can describe content in a 6x6 pixel square (36 pixels x 3 colors) with only 9 real values such that they are highly separable from other image squares in the same image! For LipActs, HOG features were analyzed with different time and space settings over both a personal (i.e., mobile phone) dataset and a debates (i.e., public, kiosk-like) dataset. For temporal settings, both the sampling rate (τ, or tau) from the video and the number of frames (t) to be pooled for feature extraction were varied. For spatial settings, HOG descriptors are first quantized into intermediate word features within representative vocabularies of different sizes (N). Next, using one of these vocabularies, the word features are aggregated into different regions by their location in a frame to create a probabilistic histogram of words. The figure below provides a high-level overview of the feature creation process used in the LipActs work. Through an iteration over the different time and space settings above, equal error rate (EER) performance was improved by almost 50% over unoptimized LipAct features for both datasets. Continued research focuses on dimensionality reduction, synchronization of LipActs features and audio features, and opportunities for deployment on mobile platforms for augmented speaker verification. In the LipActs experiments, two datasets were collected in the fall of 2010 and are described below.
To facilitate experiments and extensions of the LipActs work outside of AT&T, each video in the debates dataset and its MD5 hash are recorded in this text file. The format of the text file is simply two columns, an MD5 hash and a filename, which can be verified in any Unix-like environment with the command below.
    md5sum --check <text file>
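To make the spatio-temporal feature pipeline described above more concrete, here is a rough sketch using scikit-image HOG descriptors and a k-means vocabulary. The vocabulary size, grid, and cell settings are placeholder values; the actual LipActs features and parameters are not spelled out in this summary.

    # Rough sketch of a LipActs-style bag-of-words pipeline: dense HOG cells are
    # quantized against a learned vocabulary and pooled into per-region word
    # histograms over a short stack of frames. Parameter values are placeholders.
    import numpy as np
    from skimage.feature import hog
    from sklearn.cluster import KMeans

    def hog_cells(gray_frame, cell=6):
        # One 9-bin HOG descriptor per 6x6-pixel cell, as in the text.
        d = hog(gray_frame, orientations=9, pixels_per_cell=(cell, cell),
                cells_per_block=(1, 1), feature_vector=True)
        return d.reshape(-1, 9)

    def build_vocabulary(training_frames, n_words=64):
        descriptors = np.vstack([hog_cells(f) for f in training_frames])
        return KMeans(n_clusters=n_words, n_init=10).fit(descriptors)

    def pooled_histogram(frames, vocab, grid=(2, 2)):
        # Split each frame of the stack into grid regions, quantize each region's
        # HOG cells, and accumulate a normalized histogram of words per region.
        n_words = vocab.n_clusters
        hist = np.zeros(grid[0] * grid[1] * n_words)
        for frame in frames:
            h, w = frame.shape
            for r in range(grid[0]):
                for c in range(grid[1]):
                    region = frame[r * h // grid[0]:(r + 1) * h // grid[0],
                                   c * w // grid[1]:(c + 1) * w // grid[1]]
                    words = vocab.predict(hog_cells(region))
                    for word in words:
                        hist[(r * grid[1] + c) * n_words + word] += 1
        return hist / (hist.sum() + 1e-9)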
<urn:uuid:71c3bf31-0f6e-426a-929a-03e11796d0da>
CC-MAIN-2017-04
http://www.research.att.com/projects/Video/VisualBiometrics/index.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00430-ip-10-171-10-70.ec2.internal.warc.gz
en
0.934357
1,447
3.203125
3
Increasingly, it's a Big Data world we live in. Just in case you've been living under a rock and need proof of that, a major retailer can use an unimaginable number of data points to predict the pregnancy of a teenage girl outside Minneapolis before she gets a chance to tell her family. That's just one example, but there are countless others that point to the idea that mining huge data volumes can uncover gold nuggets of actionable proportions (although sometimes they freak people out – for example that girl's father). We're still at the dawn of this Big Data era and, as the market is showing, one-size-fits-all data processing is no longer adequate. To take the next step in this evolution, specialized Big Data software can improve not only by using cloud computing, but also by utilizing specialized networking infrastructure, InfiniBand, from the supercomputing community. Before understanding why, though, you need to understand the history of how we got to this Big Data world in the first place. How Did We Get Here? The Birth of the Relational Database 1970 isn't just the year of the Unix Epoch, it's also the year that the granddaddy of all Relational Database (RDB) papers was written. IBM Researcher E. F. Codd wrote "A Relational Model of Data for Large Shared Data Banks" for Communications of the ACM magazine in June of that year, and it became the defining work on data layouts for decades. Codd's model would be refined over the next 40 years, but what he proposed evolved into a generic toolset for structuring and manipulating data that was used for everything from managing bank assets to storing food recipes. This general-purpose data analysis software also ran exceptionally well on general-purpose computing hardware. The two got along great, actually, since all you really needed was a disk big enough to handle the structured data and enough CPU and RAM to perform the queries. In fact, some hardware manufacturers such as Hewlett-Packard would give away database software when you purchased the hardware to run it on. For the Enterprise especially, the Relational Database was the killer app of the data center hardware business. At this point, everybody was happily solving problems and making money. Then something happened that changed everything and completely disrupted this ecosystem forever. It was called Google. Then Google Happened During the Nixon Administration, copying the entire Internet was not a difficult problem given its diminutive size. But this was not so by the late 1990s, when the first wave of search engines like Lycos and AltaVista had supposedly solved the problem of finding information online. Shortly thereafter, Google happened and disrupted not only the online search industry but also data processing. It turns out that if you can keep a copy of the modern Internet at all times, you can do some amazing things in determining relevance and, therefore, return better search results. However, you can't use a traditional RDB to tackle that problem for several reasons. First of all, to solve this problem you need to store a lot of data. So much so that it becomes impractical to rely solely on vertical scaling by adding more disk/CPU/RAM to a system, and an RDB does not scale horizontally very well. Adding more machines to an RDB does not improve its execution or ability to store more data. That disk/CPU/RAM marriage has been around for 40 years and it's not easy to break apart. Further, as the size of the data set in an RDB gets larger, the query speed generally degrades.
For a financial services company querying trends on stock prices, that may be acceptable, since that influences the time of a handful of analysts who can do something else while that processing is going on. But for an Internet search company trying to deliver sub-3-second responses to millions of customers simultaneously, that just won't fly. Finally, given the large data volumes and the query speed required for Internet searches, the necessity for data redundancy is implied since the data is needed at all times. As such, the simple master-slave model employed by most RDB deployments over the last four decades is a lot less bullet proof than what is needed when you are trying to constantly copy the entire Internet. One big mirror simply won't cut it. Distributed File Systems and Map/Reduce Change Everything If Codd's seminal RDB paper had grandchildren, they would be a pair of papers released by Google that described how they conquered their data problem. Published in 2003, "The Google File System" by Sanjay Ghemawat, Howard Gobioff, and Shun-tak Leung described how a new way of storing data across many, many different machines provided a mechanism for dealing with huge volumes in a much more economical way than the traditional RDB. The follow-up paper from 2004, "MapReduce: Simplified Data Processing on Large Clusters" by Jeffrey Dean and Ghemawat, further revealed that Google performs queries across its large, distributed data set by breaking up the problem into smaller parts, sending those smaller parts to nodes out on the distributed system (the Map step), and finally assembling the results of the smaller solution (the Reduce step) into a whole. Together, these two papers created a data processing renaissance. While RDBs still have their place, they are no longer the single solution to all problems in the data processing world. For problems involving large data volumes in particular, solutions derived from these two papers have emerged over the past decade to give developers and architects far more choice than they had in the RDB-exclusive world that existed previously. Hadoop Democratizes Big Data; Now Where Are You Going to Run It? The next logical step in this evolution in an era of Open Source programming was for somebody to take the theories laid out in these Google papers and transform them into a reality that everyone could use. This is precisely what Doug Cutting and Michael J. Cafarella did, and they called the result Hadoop. With Hadoop, anyone now had the software to tackle huge data volumes and perform sophisticated queries. What not everybody could afford, however, was the hardware to run it on. Enter cloud computing, specifically Infrastructure as a Service (IaaS). Pioneered by Amazon with its Amazon Web Services offering, IaaS meant anyone could lease the hundreds if not thousands of compute nodes necessary to run big Hadoop jobs instead of purchasing the physical machines necessary for the job. Combine that idea with orchestration software from folks like OpsCode or Puppet Labs and you could automate the creation of your virtualized hardware, the installation and configuration of the Hadoop software, and the loading of large data volumes to minimize the costs of performing these queries. Again, everybody is happily solving problems and making money. But we aren't done. There's another step to this evolution, and it's happening now. InfiniBand: Making Hadoop Faster and More Economical Processing Hadoop and other Big Data queries on IaaS produces results, but slowly.
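The Map and Reduce steps described above are easiest to see in the canonical word-count example. The toy, single-process sketch below only shows the shape of the computation; Hadoop distributes the map calls across many nodes and shuffles the intermediate pairs over the network before reducing.

    # Toy, single-process word count in the MapReduce style. Real Hadoop shards
    # the map work across nodes and shuffles intermediate pairs before reducing.
    from collections import defaultdict

    def map_step(document):
        # Emit an intermediate (key, value) pair for every word.
        return [(word.lower(), 1) for word in document.split()]

    def reduce_step(key, values):
        # Combine all values that share a key into one result.
        return key, sum(values)

    def map_reduce(documents):
        grouped = defaultdict(list)
        for doc in documents:                  # Map: independent per document
            for key, value in map_step(doc):
                grouped[key].append(value)     # Shuffle: group by key
        return dict(reduce_step(k, v) for k, v in grouped.items())   # Reduce

    print(map_reduce(["big data needs infiniband", "big data big clusters"]))
    # {'big': 3, 'data': 2, 'needs': 1, 'infiniband': 1, 'clusters': 1}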
This combination of Hadoop and IaaS is praised for the answers it can find, but at the cost of reduced speed. We saw a data processing revolution sparked by different software approaches than those pioneered in the 1970s. Better-performing Hadoop clusters, with all the network traffic they produce in their Map and Reduce steps, can be achieved by taking a similar approach with a different network infrastructure. Ethernet, the most widely used network infrastructure technology today, has followed a path similar to that of RDBs. Invented in 1980, Ethernet uses a hierarchical structure of subnets to string computers together on a network. It is so common that, like RDBs 10 years ago, most people don't think they have a choice of something different. The performance problem with Ethernet comes in its basic structure. With hierarchies of subnets connected by routers, network packets have exactly one path they can traverse between any two points on the network. You can increase the size of the pipe between those two points slightly, but fundamentally you still just have the one path. Born in the supercomputing community during the 21st Century, InfiniBand instead uses a grid system that enables multiple paths for network packets to traverse between two points. Smart routing that knows what part of the grid is currently busy, akin to automobile traffic reporting found on smartphone map apps, keeps the flow of traffic throughout the system working optimally. A typical Ethernet-based network runs at 1 Gigabit per second (Gb/s), and a fast one runs at 10 Gb/s. A dual-channel InfiniBand network runs at 80 Gb/s, making it a great complement to the Map/Reduce steps on a Hadoop cluster. We've seen how a software revolution getting us past the exclusive use of RDBs has enabled data mining that was previously unimaginable. Open Source and cloud computing have made Big Data approachable to a wider audience. Better speed, resulting in shorter query times and reductions in the time needed to lease IaaS space, is achievable using public cloud providers that offer InfiniBand. This is the next step in the data processing revolution, and the next generation of Cloud Computing services (also known as Cloud Computing 2.0) brings InfiniBand to the public cloud. ProfitBricks is the first provider to offer supercomputing-like performance in the public cloud at an affordable price. Data is becoming democratized, and now High Performance Computing is as well.
<urn:uuid:2bdee5a2-4898-47f9-bf11-ca72b1346045>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/04/01/why_big_data_needs_infiniband_to_continue_evolving/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00430-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947258
1,986
3.046875
3
As the global energy economy makes the transition from fossil fuels toward cleaner alternatives, fusion becomes an attractive potential solution for satisfying growing energy needs. Fusion energy, which is the power source for the sun, can be generated on Earth, for example, in magnetically-confined laboratory plasma experiments (called "tokamaks") when the isotopes of hydrogen (e.g., deuterium and tritium) combine to produce an energetic helium "alpha" particle and a fast neutron – with an overall energy multiplication factor of 450:1. Building the scientific foundations needed to develop fusion power demands high-physics-fidelity predictive simulation capability for magnetically-confined fusion energy (MFE) plasmas. To do so in a timely way requires utilizing the power of modern supercomputers to simulate the complex dynamics governing MFE systems — including ITER, a multi-billion dollar international burning plasma experiment supported by 7 governments representing over half of the world's population. Unavoidable spatial variations in such systems produce microturbulence, which can significantly increase the transport rate of heat, particles, and momentum across the confining magnetic field in tokamak devices. Since the balance between these energy losses and the self-heating rates of the actual fusion reactions will ultimately determine the size and cost of an actual fusion reactor, understanding and possibly controlling the underlying physical processes is key to achieving the efficiency needed to help ensure the practicality of future fusion reactors. The goal here is to gain new physics insights on MFE confinement scaling by making effective use of powerful world-class supercomputing systems such as the IBM Blue-Gene-Q "Mira" at the Argonne Leadership Computing Facility (ALCF). Associated knowledge gained addresses the key question of how turbulent transport and associated confinement characteristics scale from present generation devices to the much larger ITER-scale plasmas. This involves the development of modern software capable of using leadership-class supercomputers to carry out reliable first principles-based simulations of multi-scale tokamak plasmas. The fusion physics challenge here is that the key decade-long MFE estimates of confinement scaling with device size (the so-called "Bohm to Gyro-Bohm" "rollover" trend caused by the ion temperature gradient instability) demand much higher resolution to be realistic/reliable. Our important new fusion physics finding is that this "rollover" is much more gradual than established earlier in far lower resolution, shorter-duration studies, with the magnitude of transport now reduced by a factor of two. The basic particle method has long been a well-established approach that simulates the behavior of charged particles interacting with each other through pair-wise electromagnetic forces. At each time step, the particle properties are updated according to these calculated forces. For applications on powerful modern supercomputers with deep cache hierarchy, a pure particle method is very efficient with respect to locality and arithmetic intensity (compute bound). Unfortunately, the O(N^2) complexity makes a particle method impractical for plasma simulations using millions of particles per process. Rather than calculating O(N^2) forces, the particle-in-cell (PIC) method, which was introduced by J. Dawson and N. Birdsall in 1968, employs a grid as the medium to calculate the long-range electromagnetic forces.
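GTC-P itself is a large gyrokinetic code, but the generic particle-in-cell loop it builds on (deposit charge to a grid, solve for the field, gather the field back to the particles, push the particles) can be sketched in one dimension. The electrostatic toy below is only meant to show that loop and a structure-of-arrays particle layout; it carries none of the gyrokinetic physics, geometry, or parallelism of the production code, and its normalization is arbitrary.

    # Minimal 1D electrostatic particle-in-cell sketch on a periodic domain with
    # nearest-grid-point deposition. Illustrative only; it shares the generic
    # deposit/solve/gather/push structure with codes like GTC-P and nothing else.
    import numpy as np

    def pic_step(x, v, q_over_m, n_grid, length, dt):
        dx = length / n_grid
        cells = (x / dx).astype(int) % n_grid
        # 1) Deposit particle charge onto the grid (nearest grid point).
        rho = np.bincount(cells, minlength=n_grid).astype(float)
        rho -= rho.mean()                      # neutralizing background
        # 2) Solve Poisson's equation for the potential in Fourier space.
        k = 2 * np.pi * np.fft.fftfreq(n_grid, d=dx)
        k[0] = 1.0                             # avoid division by zero at k = 0
        phi_hat = np.fft.fft(rho) / k**2
        phi_hat[0] = 0.0
        efield = np.real(np.fft.ifft(-1j * k * phi_hat))
        # 3) Gather the field at particle positions and push the particles.
        v = v + q_over_m * efield[cells] * dt
        x = (x + v * dt) % length
        return x, v

    # Structure-of-arrays layout: one array per particle property.
    rng = np.random.default_rng(0)
    n_particles, n_grid, length = 10_000, 64, 2 * np.pi
    x = rng.uniform(0, length, n_particles)
    v = rng.normal(0.0, 1.0, n_particles)
    for _ in range(100):
        x, v = pic_step(x, v, -1.0, n_grid, length, dt=0.05)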
Depositing particles onto a grid in this way reduces the complexity from O(N^2) to O(N + M log M), where M is the number of grid points and is usually much smaller than N. Specifically, the PIC simulations are being carried out using "macro" particles (~10^3 times the radius of a real charged ion particle) with characteristic properties, including position, velocity and weight. However, achieving high parallel and architectural efficiency is very challenging for a PIC method due to potential fine-grained data hazards, irregular data access, and low arithmetic intensity. The issue gets more severe as the HPC community moves into the future to address even more radical changes in computer architectures as the multicore and manycore revolution progresses. Machines such as the IBM BG/Q Mira demand at least 49,152-way MPI parallelism and up to 3 million-way thread-level parallelism in order to fully utilize the system. While distributing particles to at least 49,152 processes is straightforward, the distribution of a 3D torus-shaped grid among those processes is non-trivial. For example, first consider the 3D torus as being decomposed into sub-domains of uniform volume. In a circular geometry, the sub-domains close to the edge of the system will contain more grid points than those near the core. This leads to potential load imbalance issues for the associated grid-based work. Through a close collaboration with the Future Technologies Group at the Lawrence Berkeley National Laboratory, we have developed and optimized a new version of the Gyrokinetic Toroidal Code ("GTC-Princeton" or "GTC-P") to address the challenges in the PIC method for leadership-class systems in the multicore/manycore regime. GTC-P includes multiple levels of parallelism, a 2D domain decomposition, a particle decomposition, and loop-level parallelism implemented with OpenMP – all of which help enable this state-of-the-art PIC code to efficiently scale to the full capability of the largest extreme-scale HPC systems currently available. Special attention has been paid to the load imbalance issue associated with domain decomposition. To improve single-node performance, we select a "structure-of-arrays" (SOA) data layout for particle data, align memory allocation to facilitate SIMD intrinsics, bin particles to improve locality, and use loop fusion to improve arithmetic intensity. We also manually flatten irregular nested loops to expose more parallelism to OpenMP threads. GTC-P features a two-dimensional topology for point-to-point communication. On the IBM BG/Q system with a 5D torus network, we have optimized communication with customized process mapping. Data parallelism is also being continuously exploited through SIMD intrinsics (e.g., QPX intrinsics on IBM BG/Q) and by improving data movement through software pre-fetching. Simulations of confinement physics for large-scale MFE plasmas have been carried out for the first time with very high phase-space resolution and long temporal duration to deliver important new scientific insights. This was enabled by the new "GTC-P" code, which was developed to use multi-petascale capabilities on world-class systems such as the IBM BG-Q "Mira" @ ALCF and also "Sequoia" @ LLNL. (Accomplishments are summarized in the two figures below.)
Figure 1: Modern GTC-Princeton (GTC-P) Code Performance on World-Class IBM BG-Q Systems
Figure 2: Important new scientific discoveries enabled by harnessing modern supercomputing capabilities at extreme scale
The success of these projects was greatly facilitated by the fact that a true interdisciplinary collaborative effort with Computer Science and Applied Math scientists has produced modern C and CUDA versions of the key HPC code (originally written in Fortran-90, as is the case for the vast majority of codes in the FES application domain). The demonstrated capability to run at scale on the largest open-science IBM BG-Q system ("Mira" at the ALCF) opened the door to obtain access to NNSA's "Sequoia" system at LLNL – which then produced the outstanding results shown in Figure 1. More recently, excellent performance of the GPU version of GTC-P has been demonstrated on the "Titan" system at the Oak Ridge Leadership Computing Facility (OLCF). Finally, the G8-sponsored international R&D advances have enabled this project to gain collaborative access to a number of the top international supercomputing facilities — including the Fujitsu K Computer, Japan's #1 supercomputer. In addition, these highly visible accomplishments have very recently enabled this project to begin collaborative applications on China's new Tianhe-2 (TH-2) Intel-MIC-based system – the #1 supercomputing system worldwide.
RESEARCH TEAM: Bei Wang (Princeton U), Stephane Ethier (PPPL), William Tang (Princeton U/PPPL), K. Ibrahim, S. Williams, L. Oliker (LBNL), K. Madduri (Penn State U), Tim Williams (ANL)
Link to SC13 conference: http://sc13.supercomputing.org/schedule/event_detail.php?evid=pap402
<urn:uuid:72cd8127-82b2-4ce9-a747-c6672e6273e7>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/11/16/sc13-research-highlight-extreme-scale-plasma-turbulence-simulation/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00430-ip-10-171-10-70.ec2.internal.warc.gz
en
0.903889
1,816
3.578125
4
NASA's Lunar Atmosphere and Dust Environment Explorer (LADEE), set to launch late Friday morning, should give scientists insights into the moon's atmosphere that will be helpful for lunar exploration and lunar-based astronomy. S. Pete Worden, director of NASA’s Ames Research Center, where the LADEE spacecraft was constructed, talked with The Washington Post about the mission and a few other things. He cites one particular area of interest to scientists at NASA regarding the LADEE project. "We learned a lot about the Moon when we went there with Apollo, but it actually opened more questions," Worden tells the Post. "One of the key things has been what is the environment on the Moon. It has a very, very tenuous atmosphere which is actually called an exosphere. There was some evidence from Apollo there might even be things like dust storms caused by interactions with solar wind." This, he says, "would have a big impact on some of our human activities or large scale robotic activities there." Why? Not to get all scientific on you, but dust just messes up everything! Worden says another goal of LADEE is to study the moon in its pristine state before we start seeing McDonald's restaurants littering the lunar surface. "There’s some urgency about that," he says. "In fact, at the end of the year the Chinese are supposed to land on the Moon. That alone will probably disrupt the exosphere considerably. So we really want the pristine state." But it was the mission's third objective that caught my eye. From Worden to the Post: "One of our big problems with any space mission is communications. Today we use radio with these giant radio dishes in the deep space network, some of which are hundreds of feet across, and that’s expensive and limited. "Lasers, because they’re a much tighter beam than a radio beam, offer an exciting new way to get a lot more data down. That technology has advanced considerably so we added a laser communications test." "As times go on we expect that we may eventually be able to get something you might call an interplanetary internet and this will be the first step in demonstrating we can do that. This will give us close to a gigabit per second from the Moon, which is pretty impressive – that’s more connectivity than most companies." Was that a little dig at the private sector? Hard to tell. Either way, Worden tells the Post that "a solar system wide broadband is our ultimate objective." And why not? It would be unfair to deprive our Mars colonists of the dubious pleasure of watching Miley Cyrus twerk on YouTube. Though it might make them glad they can't get back to Earth.
<urn:uuid:e16fddb4-84de-435b-ac0f-0c1995715552>
CC-MAIN-2017-04
http://www.itworld.com/article/2703759/enterprise-software/think-your-internet-connection-is-weak--try-logging-on-from-saturn.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00090-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956361
565
3.3125
3
3.5.4 How do elliptic curve cryptosystems compare with other cryptosystems? The main attraction of elliptic curve cryptosystems over other public-key cryptosystems is the fact that they are based on a different, hard problem. This may lead to smaller key sizes and better performance in certain public key operations for the same level of security. Very roughly speaking, when this FAQ was published, elliptic curve cryptosystems with a 160-bit key offered the same security as the RSA system and discrete logarithm based systems with a 1024-bit key. As a result, the length of the public key and private key is much shorter in elliptic curve cryptosystems. In terms of speed, however, it is quite difficult to give a quantitative comparison, partly because of the various optimization techniques one can apply to different systems. It is perhaps fair to say the following: Elliptic curve cryptosystems are faster than the corresponding discrete logarithm based systems. Elliptic curve cryptosystems are faster than the RSA system in signing and decryption, but slower in signature verification and encryption. For more detailed comparisons, see the survey article [RY97] by Robshaw and Yin. With academic advances in attacking different hard mathematical problems, both the security estimates for various key sizes in different systems and the performance comparisons between systems are likely to change.
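As a purely illustrative aside, the group operation that underlies all of these systems is point addition on a curve y^2 = x^3 + ax + b over a finite field. The toy sketch below uses a deliberately tiny prime field; production systems use standardized curves of roughly the key sizes quoted above and carefully hardened, constant-time implementations.

    # Toy elliptic curve arithmetic over a tiny prime field (y^2 = x^3 + ax + b
    # mod p). For illustration only; real cryptosystems use standardized curves
    # of the key sizes quoted above and constant-time implementations.
    P, A, B = 97, 2, 3          # deliberately tiny, insecure parameters

    def point_add(p1, p2):
        if p1 is None:
            return p2
        if p2 is None:
            return p1
        (x1, y1), (x2, y2) = p1, p2
        if x1 == x2 and (y1 + y2) % P == 0:
            return None          # the point at infinity (group identity)
        if p1 == p2:
            s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
        else:
            s = (y2 - y1) * pow(x2 - x1, -1, P) % P
        x3 = (s * s - x1 - x2) % P
        return x3, (s * (x1 - x3) - y1) % P

    def scalar_mult(k, point):
        # Double-and-add. Recovering k from k*G is the elliptic curve discrete
        # logarithm problem whose hardness the text refers to.
        result = None
        while k:
            if k & 1:
                result = point_add(result, point)
            point = point_add(point, point)
            k >>= 1
        return result

    G = (3, 6)                   # on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
    print(scalar_mult(20, G))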
<urn:uuid:4dcb84a7-9936-4525-9a74-4c8565e1c1d2>
CC-MAIN-2017-04
https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/compare-with-other-cryptosystems.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00090-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949624
290
2.734375
3
A 12-million-digit prime number, the largest such number ever discovered, has landed a volunteer math research group a $100,000 prize from the Electronic Frontier Foundation (EFF). The number, known as a Mersenne prime, is the 45th known Mersenne prime, written shorthand as 2 to the power of 43,112,609, minus 1. A Mersenne number is a positive integer that is one less than a power of two, the group stated. The computing project called the Great Internet Mersenne Prime Search (GIMPS) made the discovery on a computer at the University of California, Los Angeles (UCLA) Mathematics Department. Computing manager Edson Smith installed and maintained the GIMPS software at UCLA, and thousands of other volunteers also participated in the computation. According to the GIMPS Web site the massive prime was first verified on June 12th by Tony Reix of Bull SAS in Grenoble, France using the Glucas program running on Bull NovaScale HPC servers, one featuring Itanium2 CPUs and another featuring Nehalem CPUs. The prime was later independently verified by Rob Giltrap of Sun Microsystems using Ernst Mayer's Mlucas program running on a Sun SPARC Enterprise M9000 Server. The $100,000 prize will be awarded during EFF's Pioneer Awards ceremony on October 22nd in San Francisco. EFF's first Cooperative Computing Award, given for a prime number of at least a million digits, was awarded nearly 10 years ago. Two Cooperative Computing Awards are still up for grabs: EFF will award $150,000 to the first individual or group who discovers a prime with at least 100 million digits, and $250,000 for a prime with at least a billion digits, EFF stated. The huge prime number discovery comes close on the heels of another massive number finding. A group of researchers in September said they, through a technique for multiplying large numbers, have figured out congruent numbers up to a trillion. Apparently no one had taken them beyond a billion for some reason. In case you were wondering, the first few congruent numbers are 5, 6, 7, 13, 14, 15, 20, and 21. Many congruent numbers were known prior to the new research. For example, every number in the sequence 5, 13, 21, 29, 37, ..., is a congruent number. But other similar looking sequences, like 3, 11, 19, 27, 35, ..., are more mysterious and each number has to be checked individually. The calculation found 3,148,379,694 of these more mysterious congruent numbers up to a trillion, the researchers said in a statement. The problem, which was first posed more than a thousand years ago, concerns the areas of right-angled triangles. The difficult part is to determine which whole numbers can be the area of a right-angled triangle whose sides are whole numbers or fractions. The area of such a triangle is called a "congruent number." For example, the 3-4-5 right triangle which students see in geometry has area 1/2 × 3 × 4 = 6, so 6 is a congruent number. The smallest congruent number is 5, which is the area of the right triangle with sides 3/2, 20/3, and 41/6, researchers stated.
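For readers curious what the underlying test looks like, the Lucas-Lehmer test that GIMPS relies on is short to state; programs such as Glucas and Mlucas implement the same recurrence with heavily optimized FFT-based multiplication, and the plain version below is practical only for small exponents.

    # The Lucas-Lehmer test for Mersenne primes: 2^p - 1 (p an odd prime) is
    # prime exactly when the recurrence s -> s^2 - 2 (mod 2^p - 1), started at 4,
    # reaches 0 after p - 2 steps. GIMPS runs this with FFT-based big-number
    # arithmetic; the plain version below is only practical for small p.
    def is_mersenne_prime(p):
        if p == 2:
            return True
        m = (1 << p) - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    # Exponents of the Mersenne primes with p up to 127:
    print([p for p in range(2, 128) if is_prime(p) and is_mersenne_prime(p)])
    # [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127]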
<urn:uuid:b4b3affc-03c7-4944-aca3-f9f61009eea9>
CC-MAIN-2017-04
http://www.networkworld.com/article/2232082/security/12-million-digit-prime-number-sets-record--nets--100-000-prize.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00026-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94517
712
3.0625
3
A word of caution about editing entries "anonymously" in Wikipedia: a tool has been developed that can show who made the changes. Virgil Griffith, who will be a graduate student at the California Institute of Technology starting in September, has developed Wikipedia Scanner, a search tool that traces the IP address of people who make edits to the online encyclopedia. While Wikipedia allows anyone to make edits, it keeps detailed logs of the changes made. And although people can make changes without identifying themselves, the changes often create digital fingerprints that provide information about the user, such as the location of the computer used to make the edit. Many of the edits detected by the scanner correct spelling mistakes or obvious factual errors, but others have been used to polish entries by rewriting or removing critical material. The scanner has traced entries to people at several large companies who appear to have altered potentially damaging content. Someone on Wal-Mart Stores' network, for instance, altered a line about the wages it pays employees. The original entry stated that "Wages at Wal-Mart are about 20% less than at other retail stores," citing the author Greg Palast as the source. The revised entry reads: "The average wage at Wal-Mart is almost double the federal minimum wage," and changes the attribution to Wal-Mart. A person with access to an IP address at the election systems division of Diebold cut large sections out of an entry about concerns of security experts over the integrity of Diebold's voting machines, as well as information about its CEO's fund-raising for President George W. Bush. The deleted text was later restored. And a user of a computer at the British Broadcasting Corporation changed Bush's middle name from "Walker" to "Wanker." The scanner has also tracked digital fingerprints that have led to computers at the CIA and the Vatican. Griffith created the tool to "create minor public relations disasters for companies and organizations I dislike," he wrote on his website. He admitted that it's impossible to be sure if the edits were made by someone working at one of the organizations, although the IP address reveals that they were made by someone with access to their network. "If the edit occurred during working hours, then we can reasonably assume that the person is either an agent of that company or a guest that was allowed access to their network," he wrote. Griffith came up with the idea when he "heard about congressmen being caught for white-washing their wikipedia pages," he said. He said he believes that anonymous speech is important for open projects like Wikipedia. The online encyclopedia works fine today for "noncontroversial topics," he said, but tools like Wikipedia Scanner can help make the site more reliable for controversial topics. A spokesman for Wikipedia in Germany referred to the scanner tool as a "good development" and encouraged other researchers and people to download data from the online encyclopedia and snoop around. "There's surely plenty to discover," he said via e-mail.
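The article does not describe the scanner's internals, but conceptually it cross-references the IP addresses that Wikipedia records for anonymous edits against published address ranges for organizations. A minimal sketch of that idea, with made-up ranges, might look like this.

    # Conceptual sketch of the idea behind a tool like Wikipedia Scanner: match
    # the IP address recorded for an anonymous edit against known organization
    # address ranges. The ranges below are reserved documentation prefixes used
    # purely for illustration, not real organizations' allocations.
    import ipaddress

    ORG_RANGES = {
        "Example Corp": [ipaddress.ip_network("198.51.100.0/24")],
        "Example University": [ipaddress.ip_network("203.0.113.0/24")],
    }

    def attribute_edit(edit_ip):
        ip = ipaddress.ip_address(edit_ip)
        for org, networks in ORG_RANGES.items():
            if any(ip in net for net in networks):
                return org
        return None

    print(attribute_edit("198.51.100.42"))   # -> Example Corp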
<urn:uuid:ee67c68c-babf-4b77-b707-5d6e025641a8>
CC-MAIN-2017-04
http://www.cio.com/article/2438193/consumer-technology/new-tool-exposes-self-edits-in-wikipedia.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00512-ip-10-171-10-70.ec2.internal.warc.gz
en
0.975214
627
2.875
3
In recent years, it has been realized that ERP systems and data warehouses are insufficient to really tackle the problem of inconsistent, inaccurate and unreliable data. In particular, there is a growing awareness that the processes that create and update corporate data need to be addressed if the data dragon is ever to be slain. This involves understanding, documenting and controlling the business rules that surround the creation of new business classifications (such as a new customer code, a new product line or brand, an updated hierarchy of engineering assets or organizational structure). This is commonly termed Data Governance. What is Data Governance? Data Governance is the process of establishing and maintaining cooperation between lines of business and management to set standards for how common business data and metrics will be defined, propagated, owned and enforced throughout the organization. In brief it is: • The governing body responsible for data policy relating to how data is defined, owned, stored, reconciled, deployed, and enforced. • The process for agreeing on data ownership and rights and for reconciling conflicts. • The embodiment of the organizational culture relating to the management of the decision making process. • The function tasked with ensuring data security and integrity across the enterprise. • The scope of data governance needs to extend to the semantics used in systems, to ensure consistency in the way that data is handled. The following diagram illustrates the typical components and general structure of data governance required for master data management. Data governance is, of course, an umbrella term that is much broader than master data. It also encompasses such areas as archiving policy, compliance with data protection laws and security policies, and the quality and accessibility of data. Many organizations have attempted to address the governance of business processes and data, but few have genuinely succeeded. Documenting business models in some sort of data dictionary, or even in PowerPoint, is a useful start and can have many benefits, but it is, essentially, a passive activity and quickly gets outdated. What is needed is for such descriptions of processes to be intimately linked with the systems that they involve. Data Governance is closely related to master data management. Managing master data is a business issue and responsibility. No longer should this be delegated to IT, as is often now the case. Indeed, to be effective the governance process must be sponsored and led at the C-level in the business. The definition of the master data items and classifications is central to the business process and so must be managed by business experts who have the understanding of the business goals and strategy. The whole success of being able to understand and consolidate business data is dependent on the business taking an active role here. Choosing whether, for example, to classify frozen peas as a 'garden vegetable' or a 'prepared food' is a business decision, depending perhaps on the marketing strategy. Clearly not everyone in the business can decide this. There must be a process and business owners who are appointed by the business to take these decisions and resolve conflicts. This in turn needs to be supported by some form of workflow process. Putting in place an effective governance structure is essential to the success of a BI initiative.
The following diagram illustrates the various aspects and tasks of Data Governance with particular reference to master data management. It's important to understand from the outset that master data management initiatives and the associated data governance are not a project. The project is to set up an ongoing and efficient process for data governance and master data management. To be successful with governance and master data management initiatives, the business must be fully committed to a long-term program supporting both the technology and the process. Most companies will find, if they look closely, that they are already spending lots of money fixing problems arising from poor data, so it makes economic sense to establish this as a process. Establishing effective Data Governance is essentially about introducing changes in the business processes by which businesses manage their key business data. Although to be successful it requires that appropriate supporting technology be put in place, this is not the major challenge. Knowing how and where to start is usually the major worry, and The Information Difference has a wealth of experience in tackling just this sort of issue. If you're just about to launch a data governance initiative or already have an initiative underway, our experience could help you avoid some common pitfalls. Please contact us.
<urn:uuid:270d956a-6ed4-4371-b551-a56c27058f3a>
CC-MAIN-2017-04
http://www.informationdifference.com/tidres/focus-areas/data-governance/index.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00512-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951964
877
3
3
Gordon Moore’s 1965 article on the economics driving the increase of semiconductor functionality has turned out to be wildly prophetic in terms of the effect on transistor scaling. Nearly 40 years later, in 2004, Intel was building microprocessors on the 90 nanometer process with 150 million transistors in a single chip. Today, we’re putting as many as 1.35 billion transistors in a processor with features as small as 14 nm. That’s approaching 10x the number of transistors with 6x finer features in only eleven years of technology development. And that’s not even the highest density part Intel makes; that award, at the time of this writing, goes to the 18-core Intel Xeon processor E7-8890 v3 with 5.5 billion transistors built on the 22 nm process, all packaged into 52 mm x 45 mm. Even more impressive is the fact that this is only counting transistors—there are many other components that occupy that 52 mm x 45 mm die. Moore’s observation continues to stand the test of time after 50 years of technology advancement. Seeing Moore’s Law through an Engineer’s Eyes Many who quote Moore’s Law—often interpreted as ~2X more transistors every technology generation—may not fully appreciate what Gordon Moore’s observation means to those who actually design the transistors that live up to it. Outside technology development, the challenges that shrinking transistors present are not seen, yet, they are enormous. And it’s not just the reduction in size. To deliver on Moore’s Law, engineers drive increasing complexity in the transistor design itself, including changing the internal structures, materials, and even the overall device architecture. These changes are necessary to create devices that can continue to be high performing and power efficient at smaller and smaller dimensions. Until about ten years ago, transistor engineering was conceptually simpler: scale down essentially every aspect of the previous generation transistor, whose design features could all be represented in a simple 2D diagram (see below). Numerical simulation of the transistor, an important part of process development, could be readily achieved by breaking down the device into small silicon blocks and applying classical semiconductor physics. The computation could be run on a desk side workstation, which would crunch through the equations in minutes to hours. But as feature sizes have scaled down, we have had to use novel architectures, such as 3D transistors, and new materials with nanoscale dimensions to continue delivering device performance. These add a third dimension to the transistor representation and require more complicated physics, greatly increasing the complexity of the simulation. See the figures below comparing the complexity of 2D planar transistors ten years ago (left) to 3D architectures today (right). Challenges of Dimension, Architecture, and Materials Looking far into the future, we’re examining and simulating beyond just the next few generations of transistors. We’re looking at the technologies we need to continue to develop components at these dimensions with the performance and reliability that our customers have come to expect. At these smaller and smaller dimensions, physical effects we used to approximate or overlook can no longer be ignored. For example, the following figure shows a futuristic device, a transistor made out of a silicon “wire” surrounded by a metal gate, with the red and green spheres representing individual silicon and metal atoms. 
At this dimension, a single atom of an element other than silicon within the wire (represented by the yellow dot in the corner of the cross section) can impact transistor behavior. As visualized after numerical simulation, this single stray atom distorts the uniformity of the electronic current traveling through the cross-section, disrupting the desired electrical behavior. Years ago, because the cross section was large, we would have ignored this effect. For devices of the future, we can’t, because its impact can be significant. This simulation of a nanowire transistor shown below demonstrates how a single stray atom can distort electronic behavior. At these nanoscale dimensions, it becomes important to simulate every atom including each atom’s electronic orbitals. Classical physics is left far behind and we enter the realm of quantum physics—much more complex and computationally demanding to simulate. These kinds of problems are not possible to run on workstations—at least in a timeframe that allows Intel to introduce innovative electronic products every year or two. For example, calculating current to voltage relationships (I-V characteristics) with at least 10 points on a curve is a central part of simulation and transistor analysis. The table below shows the amount of memory and the wall clock time to calculate a single I-V point for a range of devices using a single processor core. At the dimensions of these simulations, the wave nature of electrons becomes important and it is necessary to solve Schrodinger’s equation. These simulations were conducted using NEMO5, a code developed at Purdue University by Professor Gerhard Klimeck’s group. It doesn’t take long for the problem to outgrow the compute capabilities at a typical engineer’s desk—even with today’s powerful workstations. For case 3, a typical 10-point curve would take nearly 15 years to complete. Wall times of these magnitudes are not realistic for maintaining the types of development schedules any company must follow to stay competitive—and in business. Moore’s Law and HPC So, how do we, as process designers, keep up with the changes that drive Moore’s Law? How do we deal with shrinking technologies, novel architectures, and new materials at the center of our simulations? We turn to some of the very computers for which Moore’s Law helps provide increasing performance—enormously large High Performance Computing (HPC) clusters. The figure below maps how computational demand has grown over the course of years of technology advancements and marks the major inflection points that have shaped that demand. Today’s problems are solved by very large systems, the kind of machines that make the Top500.org list of the fastest supercomputers in the world. To illustrate what these HPC systems mean to design times, refer back to case 3 above: using 20,000 cores, calculating 10 I-V points for case 3 can be done in about a day. So, Moore’s Law drives an increasingly larger demand for HPC, which allows us to continue to design devices that live up to Moore’s Law, which supports the creation of more powerful HPC, so that we can carry on the expressions of Moore’s Law in smaller and more complex devices. If it weren’t for these supercomputers, living up to Moore’s Law would become impossible. It’s a symbiotic relationship expressed in silicon with a never ending cycle—at least into the foreseeable future. 
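The back-of-the-envelope arithmetic behind those two numbers is simple, assuming ideal strong scaling; the roughly one-day figure actually reported implies that the achieved parallel efficiency is well below the ideal, which is expected for a calculation of this kind.

    # Back-of-the-envelope scaling arithmetic for "case 3" above, assuming ideal
    # strong scaling. The per-point cost is derived from the quoted ~15 years
    # for a 10-point curve on a single core.
    HOURS_PER_YEAR = 8760
    points, cores = 10, 20_000
    years_per_point_serial = 1.5            # ~15 years / 10 points

    serial_hours = points * years_per_point_serial * HOURS_PER_YEAR
    ideal_hours = serial_hours / cores
    print(f"serial: {serial_hours / HOURS_PER_YEAR:.0f} years")
    print(f"ideal on {cores} cores: {ideal_hours:.1f} hours")
    print(f"efficiency implied by ~24 h reported: {ideal_hours / 24:.0%}")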
Delivering HPC for Transistor Design Technical supercomputing in electronic design and simulation is an absolute necessity for Intel to stay competitive and retain its leadership position for the products it offers. From 2004 until today, Intel’s computing capacity for chip design has increased 4,600 percent (46X), for the reasons stated above. To serve Intel’s chip design computing needs, we have approximately 130,000 servers powered by Intel Xeon processors adding up to a million cores. When Intel’s process simulation team (aka TCAD) approached Intel’s IT department with the computational demand needed for future-generation device simulations, IT designed a solution comprising 1,296 nodes with 2,592 Intel Xeon E5-2680 v3 processors, totaling 31,104 cores and 324 TB of memory. The system, identified as SCD2P4 (named for its location, the Santa Clara D2-P4 Data Center), occupies 15 extra-tall racks of 60 rack units (60U) each, compared to the industry-standard 42U, and it consumes 0.6 MW of power. There are several aspects that are unique to this supercomputer: 1) it was designed from commodity hardware/COTS (common off-the-shelf) components instead of custom components, as is the case in most world-class supercomputers; 2) it utilizes blade servers rather than traditional rack servers or specialized servers, which offers 1.6X better density—15 racks vs 26 racks for the entire system; 3) components were selected based on real-world benchmarking, which showed a 31% performance difference between competing InfiniBand Architecture solutions; 4) we developed a unique multi-tier check-pointing architecture, which utilizes Intel SSDs in each server, improving the reliability of the check-pointing and restore process, and removing the need for a complicated parallel storage solution. In June of 2015 the SCD2P4 system ranked 81st on the Top500.org list with 833.92 TFLOPS. In November of this year it remains among the top 100 fastest machines in the world, according to the Top500. Cool HPC Machines This machine is a very large system, but not the only large cluster dedicated to transistor and circuit design. Intel has at least three other HPC systems with over 4,000 cores that have ranked among the Top500 in the last several years. The problems of design are growing ever larger because of the complexity of devices, shrinking processes, and additional capabilities added to the silicon—all driving the need for larger systems. With large systems like SCD2P4, one of the chief problems data centers face is managing power and using that power efficiently because electricity is expensive. The cluster runs in Intel’s free-air-cooled, extremely energy-efficient D2 data center in Santa Clara with a Power Usage Effectiveness (PUE) of 1.06. The average PUE for the industry is 1.80. That qualifies SCD2P4 as among the most power-efficient supercomputers in operation in the world. We are able to run at such high efficiency largely because we use free-air cooling rather than total refrigeration, and we maintain temperatures in the data center between 60 and 91 degrees Fahrenheit. In 2014, out of the 8,760 hours of operation in the year, the data center required only 39 hours of refrigerated cooling while the outside temperature was over 91 degrees. All told, the Santa Clara data center saves Intel $1.9 million each year in electricity and 44 million gallons of water.
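To put those PUE figures in perspective, PUE is simply total facility power divided by IT equipment power. Treating the quoted 0.6 MW as the IT load (an assumption for illustration), the overhead at a PUE of 1.06 versus the 1.80 industry average works out as follows.

    # PUE = total facility power / IT equipment power, so the non-IT overhead is
    # (PUE - 1) times the IT load. Treating the quoted 0.6 MW as the IT load is
    # an assumption made purely for illustration.
    it_load_mw = 0.6
    for pue in (1.06, 1.80):
        total_mw = it_load_mw * pue
        overhead_kw = (total_mw - it_load_mw) * 1000
        print(f"PUE {pue:.2f}: facility draw {total_mw:.2f} MW, "
              f"overhead {overhead_kw:.0f} kW")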
Thus, not only does Intel lead in transistor design, the data centers supporting these design efforts are built and managed for optimal utilization and power efficiency. An Exciting Time for Process Design With the capabilities of today’s HPC systems, device engineering is a lot more exciting than it was even ten years ago. We get to run incredibly interesting simulations—virtual experiments—at levels of detail we never dreamed of; we now explore new device architectures and novel materials and visualize electronic behavior and process physics with atomistic resolution. While some naysayers in the industry have sounded the death knell for Moore’s Law—as they have since time immemorial—it is Intel’s business to continue it. It’s an unwritten law in engineering that every generation thinks their challenges are the most difficult. Although new technical challenges continue to emerge, as they have every generation since the first VLSI chips were created, the outlook for Moore’s Law remains the same as it did twenty years ago; the path for the next few generations is visible, and after that, it gets hazy until we move forward. The HPC industry’s march to Exascale depends upon Moore’s Law. The new Intel Scalable Systems Framework (Intel SSF)—an advanced architectural approach for developing scalable, balanced and efficient HPC systems—was designed with this in mind. Plus, Intel SSF will take advantage of innovations like the Intel Omni-Path Architecture fabric and the 3D XPoint to power the supercomputers that will enable process designers to address the challenges involved with keeping Moore’s Law advancing. Innovation, by definition, is beset with barriers. Intel’s job is to overcome these barriers by exploration and discovery. In the case of transistor design, this means the creation of new materials, device architectures, manufacturing processes, etc. To bring these advances to the consumer takes a lot of simulation before we even begin chip fabrication (see below). Without supercomputers, we wouldn’t be able to understand what it takes to continue the march of Moore’s Law, and without this understanding, we wouldn’t be able to create more powerful supercomputers. This symbiosis is at the heart of the relationship between Moore’s Law and HPC. As shown here, using numerical simulation and HPC, process designers can visualize novel materials and process techniques and their effects on device behavior before running actual experiments. Shown above are simulations of chemical reactions that occur during the fabrication process. The individual particles are atoms. By Mark Stettler, Vice President, Technology and Manufacturing Group, Director of Process Technology Modeling, Intel Corporation and Shesha Krishnapura, Intel IT Chief Technology Officer and Senior Principal Engineer. For more information on SCD2, read Intel Data Center Design Reaches New Heights of Efficiency (http://datacenterfrontier.com/intel-data-center-new-heights-efficiency/) and Intel CIO Building Efficient Data Center to Rival Google, Facebook Efforts (http://blogs.wsj.com/cio/2015/11/09/intel-cio-building-efficient-data-center-to-rival-google-facebook-efforts/).
<urn:uuid:7f458cf2-a802-43e7-a95a-8faac68edfff>
CC-MAIN-2017-04
https://www.hpcwire.com/2016/01/11/moores-law-not-dead-and-intels-use-of-hpc-to-keep-it-that-way/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00512-ip-10-171-10-70.ec2.internal.warc.gz
en
0.925342
2,836
3.421875
3