The Cleveland Clinic study shows that men who used their cell phones for four hours a day or more had the greatest damage to their sperm.

Heavy cell phone use may have a significant impact on the fertility of men, according to a study released in late October by the prestigious Cleveland Clinic in Cleveland, Ohio. The study, conducted by Dr. Ashok Agarwal and available as a PDF abstract, reported on the results of 364 men who used cell phones for varying amounts of time each day. According to Agarwal, men who used their cell phones for four hours a day or more showed the greatest damage to their sperm. "Those differences are highly significant," Agarwal told eWEEK.

He said that he can only speculate on the reason for the damage, but he said it's likely to be the effects of the electromagnetic radiation emitted by the cell phones when they transmit, as they do in use. "Men that use cell phones had a decreased sperm quality compared with those who don't. Those who use it for long periods of time had a much more profound decrease," he said.

Agarwal said that his study was consistent with previous studies conducted in 2002 and 2005 that found a relationship between exposure to electromagnetic radiation and fertility, as well as with animal studies. Agarwal said that other studies showed that electromagnetic radiation may cause DNA damage in mice. The study used only GSM phones operating in frequencies ranging from 800 to 1900 MHz.

According to Agarwal, the Cleveland Clinic is planning a follow-up study that will look at some of the variables that weren't considered in this test. Those variables include the type of phone, the specific frequency and the type of transmission. "There are hundreds of questions that need to be addressed if these findings turn out to be true," Agarwal said. He mentioned that no one knows whether cell phones have the same effect on women, for example.

He added that there are doubts about his study: "We controlled many of the variables, but not all of them," he said. Agarwal said that there are a lot of things that researchers don't yet know. In addition to learning whether women are affected, Agarwal said that the current study is a simple snapshot. "We did not do a longitudinal study over several months," he said. In addition, he said that he doesn't know if the damage gets worse with time, or if it gets better if cell phone use is reduced.

Is the damage enough to cause problems for men who want to have children? Perhaps. "Some of these effects, especially in those who have over four hours of use, put them in a condition that's highly abnormal," Agarwal said. "I definitely think it will cause problems in men to achieve a pregnancy."

So with all that he has learned, is Agarwal ready to give up his own cell phone? "I'm not ready for that," he said. Adding that he uses his cell phone for an hour or two a day, Agarwal said that he's not changing that now. "I don't plan to stop," he said. The next study in this series should be out in a year.
Source: http://www.eweek.com/c/a/Mobile-and-Wireless/Cell-Phone-Use-Affects-Fertility-Study-Shows
As local and state governments in the Gulf Coast struggled in summer 2005 to provide up-to-date information in Hurricane Katrina's wake, two programmers in Austin, Texas, took matters into their own hands. Jonathan Mendez, a software engineer from New Orleans, and Greg Stoll created Scipionus.com. The Web site is a "visual wiki" -- a Google map of affected areas overlaid with dozens of site-specific comments -- in the same way that Wikipedia is a publicly produced and edited document. Individuals who were in New Orleans went to the Web site and posted statements such as, "Hynes Elementary School. 8/30. Ten feet of water inside." Scipionus, which drew tens of thousands of visitors, is just one example of a revolution under way involving digital mapping and Web applications.

As hobbyists and Web developers gain access to mapping tools and geospatial data, they are rediscovering the excitement and entrepreneurial spirit the Internet originally spawned. It's part of the emerging geospatial Web, which could take the form of anything from city maps overlaid with health data to maps of city subway systems developed for digital music player downloads. Government officials should take note, because this surge in mapping creativity could open new windows to important civic information for the public. However, local governments' role in information dissemination isn't clear yet, said Mike Liebhold, a senior researcher at the Institute for the Future in Palo Alto, Calif. "There's enormous value in detailed map information, and Web mapping using Google maps is only about a year old," Liebhold said. "Cities and states use digital mapping for planning, facilities management, police and fire services, houses and zoning." Still, state and local governments so far haven't done much to open that infrastructure to the public, he said. "Soon the public will have the ability to add notes and comments to those public maps," Liebhold said.

Web developers worldwide are busy creating "mash-ups" -- seamlessly combining data from other sources with Google maps. For instance, one site -- incidentlog.com -- maps police, fire and 911 alerts in more than 85 cities across the country. Another term being bandied about in digital mapping circles is the "geospatial Web," which Liebhold said could include a combination of digital map information and location-based hypermedia. "For instance, [if] you're walking down the street with a wireless device that knows where it is," he explained, "you can pull up information about that particular location that is of interest to you, perhaps safety data posted by a municipality or by another citizen, such as 'Watch out for traffic coming around a blind curve here.'"

One ambitious geospatial Web project involves creating real-time maps of cell phone use in urban areas. In a demonstration project in 2005, researchers from the SENSEable City Laboratory at the Massachusetts Institute of Technology in Cambridge, Mass., used anonymous cell phone data from A1/Mobilkom to create electronic maps of cell phone use in the metropolitan area of Graz, Austria -- the country's second largest city. MIT's researchers created computer-generated images of the cell-phone data overlaid with street maps of the city. The digital maps changed as people traveled around the city, offering a view of the urban area as a shifting entity rather than a fixed, physical environment.
When unveiling the maps, Director of SENSEable City Laboratory Carlo Ratti said, "[Visualizing a city in real time] opens up new possibilities for urban studies and planning." It also could play a role for public safety officials in case of emergencies.

Mapping Civic Data

The spirit of civic Web mapping is strong in Chicago. In 2005 Windy City resident Adrian Holovaty drew praise and media attention for combining the Police Department's crime statistics with Google maps to create an easy-to-use portal so residents could see, among other things, where robbery and homicide are highest. The Chicago-based nonprofit Center for Neighborhood Technology's Civic Footprint project uses mapping software to help voters understand who their local, state and federal representatives are. On the Web site civicfootprint.org, voters can plug in their address and a map displays their house with overlapping district boundaries for state representatives and senators, members of Congress, county boards and other offices.

"Some of this information is already available, either online or in paper form, but it's scattered and poorly utilized," said Ben Helphand, director of the project. "We decided to use our mapping capabilities to bring it all together." Illinois has more units of government than any other state, Helphand added, and when you register to vote in Chicago, you get a voter information card with eight units of government, but it's just a list of a number of districts. "With the map, you can see the different districts, how they overlap and if they're gerrymandered," he said. Although the project is still in its first year, Helphand said the effort has received praise from government agencies and board of election officials. He noted that a few years ago, after his organization put legislative bill-tracking information online, the state eventually unveiled its own bill-tracking service, so government agencies may mimic the Civic Footprint as well. The next phase of the project, he said, will add data sets such as politicians' voting information and campaign finance data personalized for the user. Another possibility is to capture expertise on civic engagement by creating a guide using wiki technology. "Users can offer advice on topics such as how best to interact with your alderman or how to make community policing work for you," Helphand said.

In the San Francisco Bay Area, entrepreneurs often use mapping, GPS and the Internet to work on traffic congestion, parking and public transportation in conjunction with local transportation agencies. A company called NextBus Inc. offers transit users updated schedules and real-time online maps. NextBus uses satellite technology to track vehicles on their routes. Each vehicle is fitted with a satellite tracking system, and modeling software takes into account the actual position of the buses, their intended stops and the typical traffic patterns. NextBus' constantly updated estimates are overlaid on route maps, and the predictions are posted on the Web and to wireless devices and PDAs. Combining database and mapping technology, Acme Innovation Inc.'s SmartParking technology allows wireless and Internet users to view a map of real-time availability of parking spaces on private lots in San Francisco from their cars, homes or offices. They can then reserve parking spaces through its ParkingCarma phone reservation system or via the Internet.
Although local governments are important partners in providing data, it makes sense that they are not taking the lead in developing advanced mapping applications. Professor Dennis Culhane, co-director of the Cartographic Modeling Laboratory at the University of Pennsylvania in Philadelphia, said that in many communities, the local GIS division in government is consumed with data standards and keeping the parcel map layers up to date. "That is a huge job, and they are struggling to catch up with demand," Culhane said. "They often don't have the time or resources for the fun, creative stuff." Culhane's lab has created a Neighborhood Information System (NIS), a Web-based property and social indicator information system that uses mapping software to support city agencies and community-based organizations throughout Philadelphia. The Neighborhood Gardens Association uses the NIS to assess gardens it is considering purchasing for its land trust. The garden group can collect information on a property's ownership, size, tax status and council jurisdiction. "There was a window over the last several years where nonprofits jumped in to offer community information systems, and the cities are just now starting to catch up," he said. "Philadelphia has been a big supporter in sharing data because they're one of our biggest users." Culhane said cities and counties also face liability issues about publishing erroneous information or wiki-style guides that may be inaccurate. Philadelphia spent five years updating its parcel layers, he noted, but city officials didn't build an application to show them because they knew there were errors in the data. Nonprofits like the NIS have more leeway, he said, to advise users that data may not be 100 percent accurate. Some local government GIS departments are innovating with geospatial data for their own use. In March 2006, the Technology Services Department of Johnston County, N.C., began field-testing a program that gives the county's planning and inspection teams GPS units attached to wireless data transmitters. That gave the inspection manager a real-time map of exactly where each building inspector is at all times, said Lori Key, GIS applications analyst for Johnston County. "If a call for an inspection comes in, she can see which one is closest, call them and say, 'Go to 305 Henry Street,'" Key said. With the GPS devices, inspectors also can record the exact location of a pothole or other problem the county needs to address, Key said. The county is considering adding GPS devices to emergency response vehicles so it could instantly generate helpful maps for the public in case of emergencies such as hurricanes. Key, who has been working in GIS since 1997, said the field has recently exploded. A few years ago, people in county government, including some of the commissioners, didn't even know what GIS and GPS were. "But now that they've seen what it can do, they're asking about its potential for use in other areas of county government," Key said. "It's an exciting time."
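The "mash-up" idea described earlier, overlaying third-party data points on a base map, can be illustrated in a few lines of code. This is a minimal sketch using the Python folium library as a modern stand-in for the Google Maps APIs of that era; the coordinates and notes are invented placeholders, not data from any of the services mentioned in the article.

import folium  # third-party mapping library (pip install folium)

# Hypothetical incident reports: (latitude, longitude, note)
incidents = [
    (29.9511, -90.0715, "Hynes Elementary School. 8/30. Ten feet of water inside."),
    (29.9647, -90.0532, "Road closed at this intersection."),
]

# Build a base map centered on the first report, then overlay each note as a marker.
m = folium.Map(location=[incidents[0][0], incidents[0][1]], zoom_start=12)
for lat, lon, note in incidents:
    folium.Marker(location=[lat, lon], popup=note).add_to(m)

m.save("incident_map.html")  # open in a browser to view the "visual wiki"

The design point is the same one the article makes: the base map and the overlaid civic data come from different sources, and the combination is what creates the value.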
Source: http://www.govtech.com/security/Mapping-the-Future.html?page=2
The NSA is making headlines once again thanks to new revelations from fugitive whistleblower Edward Snowden. Snowden claims that efforts to encrypt communications are incapable of preventing access by the NSA, but at least one security expert maintains that this claim is probably exaggerated, and that you may play a significant role in allowing the NSA to "break" your encryption.

According to a report from UPI.com, "The [NSA], at a cost of more than $250 million in the current year's budget, employs custom-built, superfast computers to break codes with 'brute force,' uses covert measures to ensure NSA control over setting international encryption standards and, in the most closely guarded secret, collaborates with technology companies and Internet service providers in the process, said the documents published by The New York Times, the non-profit news organization ProPublica and a British newspaper, The Guardian."

Is it possible? Yes. There is no such thing as absolutely impenetrable encryption. Given enough processing power, and time, the NSA can just try every possible combination in existence until it hits the right one—a brute force attack. An encryption algorithm based on a 256-bit key, however, has 1x10 to the 77th power possibilities. That's a 1 with 77 zeros after it. When you're brute forcing, you could get lucky and hit it on the first try, or it could take you 1x10 to the 77th power attempts. I have no idea what you even call a number with almost 80 zeros, but suffice it to say it's astronomically huge. I don't care how powerful your computers are, it will take a long time to try out that many possible key combinations to find the right one.

Anderson suggests that the NSA's ability to bypass encryption is almost certainly a function of flawed implementation and/or poor encryption key management. "So, is it possible that the NSA can decrypt financial and shopping accounts? Perhaps, but only if the cryptography that was used to protect the sensitive transactions was improperly implemented through faulty, incomplete or invalid key management processes or simple human error."

When properly implemented, encryption provides essentially unbreakable security. It's the sort of security that would take implausibly powerful supercomputers millions of years to crack. But if it's carelessly implemented, and the key management processes are not sound, this security can be reduced to the level where a hacker with a mid-market PC can crack it in a few hours at most.

Regardless, the issue underscores a massive problem with data security. Encryption is generally touted as the Holy Grail magic solution for all things data security, and many organizations and individuals just turn on whatever encryption is the easiest or most convenient and expect communications and data to be invulnerable. It's an unrealistic expectation. You can have the best, most formidable lock in the world securing the front door to your home, but if you hide the key under the welcome mat, it won't stop an intruder.

If the NSA is cracking all of the encryption on the Internet, there's a pretty good chance that a weakness in key management is making it possible—maybe even easy. It might be a weakness in how the keys are being generated, or how they're stored. The key management lifecycle typically relies in part on human intervention, which brings an element of human error into the equation as well.
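To put the keyspace figures above in concrete terms, here is a small Python sketch (my own illustration, not from the article) comparing a full 256-bit keyspace with the effective keyspace of a key derived from a weak 8-character lowercase password, one hypothetical example of the kind of key-management weakness the article describes:

# Size of the full keyspace for a 256-bit key: every bit can be 0 or 1.
full_keyspace = 2 ** 256
print(f"256-bit keyspace: {full_keyspace:.3e} possible keys")   # about 1.16e+77

# Hypothetical weak key: derived from an 8-character lowercase password.
# Only 26^8 distinct passwords exist, so only that many distinct keys can result.
weak_keyspace = 26 ** 8
print(f"8-char lowercase password space: {weak_keyspace:,} keys")

# The ratio shows why key management, not the cipher itself, is usually the weak link.
print(f"Reduction factor: {full_keyspace // weak_keyspace:.3e}")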
Anderson summed up with, “General Robert Barrow (USMC) once said that amateurs think about tactics while professionals think about logistics. An appropriate way to update this to the Internet age might be that amateurs talk about encryption while professionals talk about key management.”
Source: http://www.csoonline.com/article/2137121/privacy/it--146-s-probably-your-fault-the-nsa-can-crack-your-encryption.html
Microsoft maintains a running list of the top authors in computer science. Out of the top 100, only three are women. It's another telling data point, one more sign of gender inequality in science, and it raises the all-important question: what can be done to address this gender disparity?

This subject is receiving attention. Yesterday, science writer Scott Gibson published an article highlighting the importance of encouragement and practical instruction to bridging the gender gap in computer science. A study of current trends reflects the urgent need for action. As Gibson explains, computer and information technology (CIT) is growing at a rapid pace and will soon undergo a 22 percent workforce increase. If, as the Bureau of Labor Statistics anticipates, there are 758,800 new CIT jobs from 2010 to 2020, and the current gender gap persists as expected, these positions will predominantly be filled by male hires. The result would be that the overall number of potential jobs (and moreover typically higher-paying computer-related jobs) would be heavily biased toward males, and the CIT industry would be denied the talents of a representative portion of the female population. And lest anyone doubt the veracity of this predicted scenario, when it comes to the gender gap, the data are not promising. From 2010 to 2011, fewer than 12 percent of computer science degrees were awarded to women, according to a survey performed by the Computing Research Association called "Computing Degree and Enrollment Trends."

Gibson interviewed two computer science professionals for the article: Amy McGovern, associate professor in the School of Computer Science and adjunct professor in the School of Meteorology at the University of Oklahoma; and Brittany Dahl, graduate student in the School of Meteorology at the University of Oklahoma. Dahl is part of a tornado-prediction research team led by Professor Amy McGovern.

McGovern's position as an educator provides her the platform to encourage girls and women to pursue careers in science and technology. She bristles when recalling how she was told at a young age that "girls don't belong in math." McGovern believes it's important to reach students at the elementary-school level or even earlier because this is the critical age at which they start internalizing society's beliefs about gender roles. McGovern tells Gibson: "The really big drop off happens in middle school, where the girls start wanting to conform. They like the boys a little bit more, and the boys don't like brainy girls. And so a lot of those girls disappear out of math and science at that point. It's really too bad. Also, I think the media don't help, in that they portray computing as nerdy. What girl wants to be a nerd?"

This is why positive role models are so important. McGovern didn't have to look far to find her inspiration. Her mother, a school superintendent in charge of purchasing classroom computers, was her first computer instructor. McGovern's mother taught her to program in BASIC on a Commodore 64. Brittany Dahl also came from a family that valued education and science. For many years, she knew she wanted to be a meteorologist and she has combined that passion with computer science. Sciences that are considered less theoretical and more practical, such as meteorology, are seeing a narrowing of the gender gap, but this just isn't the case with most segments of computer science. However, schools such as CMU are leading the effort to bring about change.
Writes Gibson: "From 1995 to 2000, for example, the percentage of women entering the School of Computer Science (SCS) at CMU climbed from 7 percent to 42 percent. Today, the school's women@scs Web site is an example of how the school reaches out to females, with information, workshops, mentoring programs and conferences."

In her role as computer-science ambassador to women, McGovern's appeal has an emotional and practical component. "You'll never be broke and you'll never be bored in this field," she says. "It's lots of fun. Stick to it. Stay at it. You can apply computing to change the world in many different ways."
Source: https://www.hpcwire.com/2013/05/01/mind_the_gap_bridging_the_gender_divide/
A recent NYTimes article touches upon a number of topics in the ongoing conversation about data center energy efficiency. Some reading that article may react as if some secret revelation has been exposed, incriminating our beloved social media networks and data centers as spendthrifts or environmentally ignorant. The fact of the matter is that we live in an information driven world. Information systems are the foundation of our economies, governments, entertainment and many aspects of our daily lives. Maintaining this information and conducting the data processing around it is an industry. It is as much a part of our industrial fabric as steel and manufacturing were in the 20th century. The data processing that serves our 21st century lives takes place in facilities called "data centers." Data centers are essentially industrial factories. From an energy profile perspective, they look exactly like any other factory in that they consume large amounts of resources (electricity and water in their case). 1E has a pedigree of addressing data center energy efficiency and we'll share that with you presently, but first we'd like to give you a little more background.

The core of the problem

There are some out there that will claim the heart of the problem is our dependency on, or desire for, more and more data processing. That is, we are a data processing driven society, hurtling toward the planet's demise. We'll leave that to another discussion and instead assume that the increase of data processing demand in our society is a reflection of progress, commerce, and democracy. If you grant me that assertion, the core of our energy demand problem here is that silicon semiconductor-based data processing systems require energy to operate and produce a good bit of heat as a byproduct of their activity. This is compounded exponentially by a matter of scale. Semiconductor devices have become increasingly dense (in terms of number of transistor gates per unit of area), with higher and higher clock speeds. As these increase, so does energy demand. As individual devices become increasingly dense, we correspondingly demand more and more of them. The result is computer rooms with massive quantities of data processing servers, each of which has massively dense semiconductor chips.

We mentioned a moment ago that a byproduct of the power going to the server is heat. These very dense silicon chips operate at temperatures so high that one could not possibly touch them bare handed. Interestingly, this large amount of heat produced by the semiconductor chips is also a threat to their very health. Consequently, computer servers have lots of fans that pull cool air into the front of the server and blow hot exhaust air out of the back of the server. Yes, fans consume loads of energy too, but the bigger problem still is all this hot exhaust air from all the servers sharing the same space in the data center. For this reason, a large amount of mechanical equipment and resources are a part of data centers as well. These mechanical systems are in the form of air handlers, chillers, cooling towers, and plumbing that is in place simply to remove all this hot air from the data center for the purpose of maintaining a healthy ambient operating temperature for the servers. In an average run-of-the-mill data center today, approximately half of the electricity supplied by the utility to the data center makes it to the power cord of the IT (server) equipment. Why only half?
Well, the mechanical equipment that cools the data center requires a large amount of it, and there are other losses along the way due to common inefficiencies in power distribution and mechanical and electrical technology (one never gets 100% of what one puts in). To make matters worse still, of the electricity which actually makes it to the IT power cord, much less than that actually goes toward actual data processing, due to fan energy consumption, conversion losses, and other subsystems within the server itself. In summary, we need lots of data processing, and data processing technology consumes large amounts of energy.

All hands on deck

These issues have been thoroughly understood, and very publicly visible steps have been taken to address them, for many years already. In the United States, the US Department of Energy (DoE) created the "Save Energy Now" program. This program partners the DoE with industry to drive energy efficiency improvements year over year in data centers, with specific goals of saving over 20 billion kWh annually (as compared to historic trends). In the EU, the "EU Code of Conduct" was created to establish a framework of best practices covering energy efficiency, power consumption, and carbon emissions. Within the data center community, numerous industry groups, trade organizations, and ad hoc committees have been at work on these issues for years. The work of the Green Grid, in particular, has been instrumental in creating the common language used in the community addressing this problem, resulting in a number of energy efficiency management metrics and data center design conventions that we now consider de rigueur.

With governments and the industry itself working the problem, the equipment manufacturers have a role to play as well. Mechanical and Electrical plant (MEP) equipment manufacturers have responded with higher efficiency transformers and UPS, and innovations in pump, fan, and cooling technologies. When it comes to the IT equipment which is truly the engine of this factory we call a data center, the work of participating equipment manufacturers in the ASHRAE TC9.9 body of work is truly remarkable. This is remarkable in that major server manufacturers mutually revealed engineering details of their products to one another, to the extent of allowing specification of wider ranges of operating temperature and humidity envelopes. This is crucial to energy efficiency in that it is fundamental to allowing reduced energy consumption of MEP, and greatly expands the opportunities for use of free cooling. One can go on about this, but suffice to say the evidence is clear that energy consumption by data processing facilities is a widely recognized problem, and much is being done, in a coordinated and public way, to provide relief.

It's improper to draw conclusions about a specific data center facility based upon news of a high-profile business with completely different data centers. Some energy efficiency techniques are available to everyone everywhere, and many are not. This is a complex subject with significant nuance, and generalizations can come with risk. In the end, the Business has invested quite a lot of money in its data center, and to acquire the servers and software within it. Over the years, the Business spends quite a lot of money maintaining and supporting these systems, and is also spending quite a lot of money on energy for power and cooling. In part two, I'll look at how to identify server waste and what you can do to eliminate it.
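The "only half the electricity reaches the IT equipment" observation above is what the PUE (Power Usage Effectiveness) metric popularized by the Green Grid captures: total facility power divided by IT equipment power. Here is a minimal Python sketch with illustrative numbers only, not measurements from any real facility:

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical data center: 1000 kW drawn from the utility, 500 kW reaching IT gear.
total_kw = 1000.0
it_kw = 500.0

print(f"PUE = {pue(total_kw, it_kw):.2f}")        # 2.00: half the power does IT work
print(f"Overhead = {total_kw - it_kw:.0f} kW")    # cooling, distribution losses, etc.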
Source: https://www.1e.com/blogs/2012/09/26/power-productivity-and-the-internet-part-1-the-core-of-the-problem/
In this TedTalk, we see how electrical stimulation of neurons can trigger activity in a severed cockroach leg. And lemme tell ya, this cockroach can dance.

A little background may be in order. Tim Marzullo and Greg Gage wanted to show high school students what neural signals look like, without an enormous amount of expensive equipment. After finishing grad school, they founded an educational equipment company, Backyard Brains. Now that company manufactures open-source equipment and produces accompanying lesson plans to explain and demonstrate how our brain works.

So where does the cockroach come in? Since our brains work by transmitting electrical stimuli between neurons, the stimulation of cockroach neurons demonstrates the principles without the unpleasant mess of brain surgery. However, a bit of roach surgery is required. The device also allows for the leg to receive electrical stimulation from an external source. Essentially, the leg is "hearing" what we receive through earbuds connected to our phones or mp3 devices. (Musically inclined geeks can also beatbox to the leg.) Apparently the leg likes the beat.

Recently, Tim explained Backyard Brains' latest product, RoboRoach, to Slashdot TV. If you haven't seen a cockroach wearing a backpack -- or being remote controlled -- this is a must-see video. RoboRoach works by replacing the nerve in the cockroach's antenna with a silver electrode. Once the backpack is attached, you can control the insect's movements for a few minutes. Turns out, cockroaches adapt fairly quickly. When you return the cockroach to its cage for ~20 minutes, he "forgets" and the stimulation works again. ... After about 2-7 days, the stimulation stops working altogether, so you can clip the wires and retire the cockroach to your breeder colony to spend the rest of its days making more cockroaches for you and eating your lettuce.

Don't have any cockroaches on hand? No worries. For the SpikerBox experiment you can substitute crickets, although they may not work with RoboRoach. For that, you need an insect capable of carrying the 4.4 gram device. Luckily, Backyard Brains will send you a box of roaches for $24.
Source: http://www.computerworld.com/article/2473961/open-source-tools/controlling-cockroach-neurons----there-s-an-app-for-that.html
The US Department of Transportation said it will run a massive road test of cars, trucks and buses linked together via Wi-Fi equipment in what the agency says will be the largest test of automated crash avoidance technology to date. The test will be conducted by the University of Michigan's Transportation Research Institute (UMTRI), and feature mostly volunteer participants whose vehicles have been outfitted with vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication devices that will gather extensive data about system operability and its effectiveness at reducing crashes, the agency said.

The test will feature nearly 3,000 cars, trucks and buses equipped with Wi-Fi technology that will let the vehicles "talk" to each other in real time to help avoid crashes and improve traffic flow in the test area around Ann Arbor, Mich.

The DOT said that the test comes on the heels of a study it did earlier this year that found 82% of drivers "strongly agreed that they would like to have vehicle-to-vehicle safety features on their personal vehicle. In addition, more than 90% of the participants believed that a number of specific features of the connected vehicle technology would improve driving in the real world, including features alerting drivers about cars approaching an intersection, warning of possible forward collisions, and notifying drivers of cars changing lanes or moving into the driver's blind spot."

According to DOT's National Highway Traffic Safety Administration (NHTSA), V2V safety technology could help drivers avoid or reduce the severity of four out of five unimpaired vehicle crashes. To accomplish that goal, the model deployment vehicles will send electronic data messages, receive messages from other equipped vehicles, and translate the data into a warning to the driver during specific hazardous traffic scenarios. Such hazards include an impending collision at a blind intersection or a rear collision with a vehicle stopped ahead, to mention a couple.
Source: http://www.networkworld.com/article/2222994/mobile-apps/us-to-drive-3-000-wi-fi-linked-vehicles-in-massive-crash-avoidance-trial.html
Open source tools for protection against DDoS attacks (IPS systems such as Snort) are based on DPI; that is, they analyze the entire protocol stack. However, they cannot control the opening and closing of TCP connections, since they sit too high in the Linux network stack and represent neither the server nor the client side. This makes it possible to bypass such IPS tools. Proxy servers are also involved in establishing the connection, but they cannot protect against major DDoS attacks, because they are relatively slow, as they work on the same principle as the server. For them, it is desirable to use equipment which, while not as capable as the back-end hardware, can withstand heavy loads.

According to Kaspersky Lab, the number of malicious programs targeting Apple products is nearing 1,800. In the first eight months of 2014 alone, researchers found some 25 new families of malware for OS X.

Before turning to unconventional methods of usage, I will describe how "keep-alive" works. The process is utterly simple: within a single connection, multiple requests are sent instead of just one, and multiple responses come back from the server. The benefits are obvious: less time is spent establishing connections, and there is less load on CPU and memory. The number of requests in a single connection is usually limited by the server's settings (in most cases, at least several dozen are allowed). The procedure for establishing a connection is universal. (A minimal sketch of keep-alive reuse appears at the end of this section.)

Philip Kucheryavy, Software Engineer in the Operations Team: he is 24 and has a beard, is in love with Linux and Python, and has no diploma of higher education.

*nix systems are provided by default with remote management tools, and the way configuration files are stored and formatted allows you to rapidly distribute an updated version of the settings by simply copying them to a node. This scheme is good enough up to a certain number of systems. However, once there are several dozen servers, they cannot be handled without a special tool. This is when it becomes interesting to look at configuration management systems, which allow programmable rather than manual configuration of servers. As a result, systems can be configured quickly and with fewer errors, while the administrator gets a comprehensive report. A CM system also knows how to keep track of all changes on a server while maintaining the desired configuration.

Often, the manufacturers of routers do not particularly care about the quality of their code. As a result, vulnerabilities are not uncommon. Today, routers are a priority target of network attacks that make it possible to steal money and data while bypassing local protection systems. How can you personally check the quality of the firmware and the adequacy of the settings? You can do this by using free utilities, online test services and this article.

Task: set up a Cisco device as a server. Today we are going to cover the topic of hacking Cisco devices (routers, switches), carrying on from where we left off. Here I would like to amend the information which was presented in the previous issue. First, these devices have not two but three variants of user authentication: by password only, by login and password, or the "AAA" model (also by login and password). There seems to be no practical difference for a pen tester, but we'd still better rely on valid information.
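As promised above, here is a minimal keep-alive sketch in Python. It reuses one TCP connection for two HTTP/1.1 requests via the standard-library http.client module; the host example.com is a placeholder, and whether the connection actually stays open depends on the server's keep-alive settings.

import http.client

# Open a single TCP connection; HTTP/1.1 keeps it alive between requests by default.
conn = http.client.HTTPConnection("example.com", 80, timeout=10)

for path in ("/", "/index.html"):
    conn.request("GET", path, headers={"Connection": "keep-alive"})
    response = conn.getresponse()
    body = response.read()  # the response must be fully read before reusing the socket
    print(path, response.status, len(body), "bytes")

conn.close()  # both requests traveled over the same connection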
Source: https://hackmag.com/page/4/
A study published by the Pew Internet Project shows sharp disagreement, however, on whether the effects of this evolution will be curative or toxic. The study surveyed 742 experts identified by the Pew Internet Project and Elon University, including members of the Internet Society, the World Wide Web Consortium, the Working Group on Internet Governance, ICANN, Internet2 and the Association of Internet Researchers.

For instance, 56 percent of respondents agreed that a low-cost global network will be thriving in 2020 and will be available to most people around the world at a low cost. And they agreed that a tech-abetted "flattening" of the world will open up opportunities for success for many people who will compete globally. Still, 43 percent of respondents said they are unsure that policy will foster such a positive outcome for Internet expansion. They said that progress will be inhibited by businesses anxious to preserve their current advantages and by policy-makers for whom control over information and communication is a central value.

The study also showed that experts are split on whether technology will become autonomous by 2020 and escape human control. Forty-two percent of these thought leaders agreed that dangers and dependencies will grow beyond humans' ability to stay in charge of technology. "There's a very split verdict," noted Lee Rainie, director of the Pew Internet Project and publisher of the study. Rainie admitted that he was surprised by the lack of consensus on many of these key issues. "There is disagreement here that's real," he told internetnews.com.

The respondents also demonstrated concern about the balance between transparency and privacy. Forty-six percent agreed that the benefits of greater transparency of organizations and individuals would outweigh the cost in terms of lost privacy; 49 percent disagreed. Rainie noted that while the difference between the two camps is statistically insignificant, the study is less about numbers than identifying key issues. "The purpose of publishing this study is to stimulate conversation, not end conversation," he said.

Rainie noted that the responses ran counter to the bias he expected to find. "These were not people voting with their pocket books or having an ax to grind. These are serious people expressing some hope and some worry all at the same time," he said. "Some of the most pessimistic people are some of the most accomplished technologists," he added.
Source: http://www.cioupdate.com/research/article.php/3634571/Internet-Visionaries-Hope-Fear-And-Loathing.htm
2.2.2 What is a digital signature and what is authentication? Authentication is any process through which one proves and verifies certain information. Sometimes one may want to verify the origin of a document, the identity of the sender, the time and date a document was sent and/or signed, the identity of a computer or user, and so on. A digital signature is a cryptographic means through which many of these may be verified. The digital signature of a document is a piece of information based on both the document and the signer's private key. It is typically created through the use of a hash function (see Question 2.1.6) and a private signing function (encrypting with the signer's private key), but there are other methods. Every day, people sign their names to letters, credit card receipts, and other documents, demonstrating they are in agreement with the contents. That is, they authenticate that they are in fact the sender or originator of the item. This allows others to verify that a particular message did indeed originate from the signer. However, this is not foolproof, since people can 'lift' signatures off one document and place them on another, thereby creating fraudulent documents. Written signatures are also vulnerable to forgery because it is possible to reproduce a signature on other documents as well as to alter documents after they have been signed. Digital signatures and hand-written signatures both rely on the fact that it is very hard to find two people with the same signature. People use public-key cryptography to compute digital signatures by associating something unique with each person. When public-key cryptography is used to encrypt a message, the sender encrypts the message with the public key of the intended recipient. When public-key cryptography is used to calculate a digital signature, the sender encrypts the "digital fingerprint" of the document with his or her own private key. Anyone with access to the public key of the signer may verify the signature. Suppose Alice wants to send a signed document or message to Bob. The first step is generally to apply a hash function to the message, creating what is called a message digest. The message digest is usually considerably shorter than the original message. In fact, the job of the hash function is to take a message of arbitrary length and shrink it down to a fixed length. To create a digital signature, one usually signs (encrypts) the message digest as opposed to the message itself. This saves a considerable amount of time, though it does create a slight insecurity (addressed below). Alice sends Bob the encrypted message digest and the message, which she may or may not encrypt. In order for Bob to authenticate the signature he must apply the same hash function as Alice to the message she sent him, decrypt the encrypted message digest using Alice's public key and compare the two. If the two are the same he has successfully authenticated the signature. If the two do not match there are a few possible explanations. Either someone is trying to impersonate Alice, the message itself has been altered since Alice signed it or an error occurred during transmission. There is a potential problem with this type of digital signature. Alice not only signed the message she intended to but also signed all other messages that happen to hash to the same message digest. 
When two messages hash to the same message digest it is called a collision; the collision-free properties of hash functions (see Question 2.1.6) are a necessary security requirement for most digital signature schemes. A hash function is secure if it is very time consuming, if at all possible, to figure out the original message given its digest. However, there is an attack called the birthday attack that relies on the fact that it is easier to find two messages that hash to the same value than to find a message that hashes to a particular value. Its name arises from the fact that for a group of 23 or more people the probability that two or more people share the same birthday is better than 50%. How the birthday paradox can be applied to cryptanalysis is described in the answer to Question 2.4.6.

In addition, someone could pretend to be Alice and sign documents with a key pair he claims is Alice's. To avoid scenarios such as this, there are digital documents called certificates that associate a person with a specific public key. For more information, see the Question on certificates in this FAQ.

Digital timestamps may be used in connection with digital signatures to bind a document to a particular time of origin. It is not sufficient to just note the date in the message, since dates on computers can be easily manipulated. It is better that timestamping is done by someone everyone trusts, such as a certifying authority (see the Question on certifying authorities). There have been proposals suggesting the inclusion of some unpredictable information in the message, such as the exact closing share price of a number of stocks; this information should prove that the message was created after a certain point in time.
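The hash-then-sign flow described above can be sketched in a few lines of Python using the third-party "cryptography" package. This is an illustration of the general scheme rather than the exact method in the FAQ; real systems would also involve certificates and careful key management as discussed.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Alice's key pair: the private key signs, the public key verifies.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"An example document that Alice wants to sign."

# Sign: the library hashes the message (SHA-256) and signs the digest with the private key.
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verify: Bob hashes the received message and checks it against the signature.
# verify() raises InvalidSignature if the message was altered or the key does not match.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("Signature verified: message is authentic and unmodified.")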
Source: https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-is-a-digital-signature-authentication.htm
Uchiyama A., Japan Meteorological Agency | Yamazaki A., Japan Meteorological Agency | Kudo R., Japan Meteorological Agency | Kobayashi E., Aerological Observatory | And 2 more authors. Journal of the Meteorological Society of Japan | Year: 2014

To investigate aerosol optical properties, the Meteorological Research Institute has been continuously measuring scattering and absorption coefficients since January 2002 by using an integrating nephelometer and one- and three-wavelength absorption photometers in dry air conditions at Tsukuba, Japan. We used these optical data to investigate trends of aerosol properties and climatology from 2002 to 2013. The results showed that most aerosol characteristics had seasonal variation and decreasing or increasing trends significant at the 95 % confidence level. From 2002 to 2013, the extinction coefficient at 550 nm and absorption coefficient at 530 nm had statistically significant decreases of -1.5 × 10^-6 and -5.4 × 10^-7 m^-1 year^-1, respectively. In the same period, the scattering coefficient showed a non-significant decrease of -8.8 × 10^-7 m^-1 year^-1. The single scattering albedo (SSA) at 550 nm had a significant increasing trend of 7.4 × 10^-3 year^-1. Asymmetry factors did not show a significant trend. The increasing trend in the extinction Ångström exponent was significant, whereas the trend in the effective radius was not significant. The increasing trend of 2.1 × 10^-2 year^-1 in the absorption Ångström exponent from 2006 to 2013 was significant. This tendency suggests a compositional change of light-absorbing aerosol. Frequency distributions of aerosol properties were investigated during 2006-2012. In this period, absorption coefficients were measured by the three-wavelength absorption photometer. The most frequent values of the extinction coefficient at 550 nm, the absorption coefficient at 530 nm, and the SSA at 550 nm were 25 × 10^-6, 3.0 × 10^-6 m^-1, and 0.905, respectively. The analysis using the extinction Ångström exponent showed that aerosol characteristics were dependent on the extinction Ångström exponent. The aerosol characteristics estimated from optical data were consistent with those derived from radiometer data. Therefore, ground-based monitoring of aerosol optical properties is useful for monitoring aerosol characteristics and interpreting variations in the surface radiation budget. © 2014, Meteorological Society of Japan.

Kobayashi E., Aerological Observatory | Noto Y., Aerological Observatory | Wakino S., Aerological Observatory | Yoshii H., Aerological Observatory | And 3 more authors. Journal of the Meteorological Society of Japan | Year: 2012

Observation instruments are commonly upgraded because of technological advances and the convenience of the observation agency. However, great care is necessary when changing instruments to ensure data continuity for climatic data analysis. The Tateno upper-air observation station of the Japan Meteorological Agency replaced the Meisei RS2-91 type rawinsondes with Vaisala RS92-SGP type GPS sondes in December 2009. We carried out a total of 115 simultaneous dual launches for four seasons to investigate any differences in performance. The simultaneous sensor comparison results showed that Vaisala RS92-SGP temperature was 0.1-0.4 K higher than Meisei RS2-91 temperature above the 100 hPa layer in night-time observations, and that Meisei RS2-91 temperature was ~0.1 K higher above the 30 hPa layer in daytime observations. Vaisala RS92-SGP relative humidity was ~5 % lower, particularly under humid conditions and in autumn. Vaisala RS92-SGP pressure was ~0.5 hPa higher in the stratosphere. We also made pressure-level comparisons for temperature and relative humidity. Furthermore, comparison results are shown for precipitable water vapor measurements taken with a collocated GPS receiver, for a sensitivity analysis on the number of dual soundings, and for a reanalysis of upper-air temperature trends for 1956-2010, taking the three instrumental change events into consideration. © 2012, Meteorological Society of Japan.
Source: https://www.linknovate.com/affiliation/aerological-observatory-2292297/all/
Date: 29 September 2010

What is Return-Oriented Programming?

Return-Oriented Programming enables an attacker to use non-malicious code maliciously by combining short snippets of benign code already present in the system. I first heard about how Return-Oriented Programming works back on the 28th of August 2009 in relation to a new attack on electronic voting machines. After that I did a bit of reading because I found the idea of using an existing program's own code against itself an interesting one. In light of the recent Adobe Reader vulnerability where malware was found using Return-Oriented Programming, I thought it would be good to provide an overview about what Return-Oriented Programming actually is. If you are looking for something more in-depth, then there is a good presentation from BlackHat available and also two good PDF files about Return-Oriented Programming.

Getting Misquoted - A Small Example

The idea behind Return-Oriented Programming is to use an existing program's code, but to alter the flow of execution to perform a different task. Typically an attacker would attempt to execute malicious actions by using certain parts of a benign program's code. The best example of this that I can give is where a journalist deliberately reports information out of context and changes the meaning of a statement. Let's say that Tom has just found a cure for cancer and distributed the following media release to the press:

Cure for Cancer Media Release

After 10 years of research, my team has developed a cure for most forms of cancer. We have decided to donate this research to the world by making it freely available. Because of this research and the funding we were provided the whole world will benefit. We hope within the next few months many others will sign up for a trial, as I have. I would love to be able to say that I alone am responsible for the cure for cancer. The truth, however, is that the team of people I have working with me are the ones that have done all of the work. They are the people that took us in the right direction with almost no guidance from myself. They are the people who worked week after week in the lab. They are the ones who should receive the public's thanks.

The next day the following misquoted article appears in the newspaper (or should that be eReader?):

Cure for Cancer News Article

If you are wondering whether Tom had any help in finding the cure, this is what he had to say: "I alone am responsible for the cure for cancer. The whole world will benefit as I have done all the work myself." - Tom

If you look closely, all these words/statements are indeed in the media release provided. This is one of the ideas behind Return-Oriented Programming; using small parts of a program in a different order to achieve different results.

A Little Bit About Programming

Now we have to take a small step back to look at what makes up a simple computer program. At its most basic level a program is made up of lots of small simple instructions. Instructions can be things like adding two numbers, reading or writing a piece of memory, or moving some data from one location to another. These small instructions can then be built up into longer sets of instructions that accomplish a more complex task. These complex tasks could be things like sorting a set of numbers or finding the average of a set of numbers. Programmers often want to repeat these more complex tasks again and again.
To do this they could either write the same code again and again and again, or they could use a subroutine. A subroutine is a section of code that is packaged together to perform a specific complex task. A programmer can then call that subroutine, the subroutine will execute, and when finished it returns control back to where it was called from. The diagram below illustrates the difference between code without and with a subroutine.

Making a Mashup

Subroutines don't strictly have to start from the beginning. That may seem counterintuitive, but a subroutine is just a list of instructions with a return statement at the end; as such you could start executing the instructions from anywhere in that list. No matter where you start in the subroutine, the code will then continue until it reaches the return statement. What this means is that the last few instructions before the return statement of each subroutine (often called a "gadget") can be cobbled together to create a mashup program. This mashup program can often be created to perform absolutely anything that a normal program can do, including malicious actions. This form of programming can allow an attacker to give a set of existing code locations to execute rather than injecting new (malicious) code. This is important because lots of software is written to try and prevent any external data from executing as code (e.g. the No eXecute bit and Data Execution Prevention). Return-Oriented Programming gets around these protections because the data provided by an attacker is never actually executed. The attacker simply provides a list of entry points or addresses into subroutines. It is then the code already in those subroutines that gets executed.

The Return in Return-Oriented Programming

The reason the return is important is to retain control of the flow of this new mashup program. The attacker is only really able to supply a list of addresses to execute or jump to. This means that if there was no return statement at the end of each code snippet or gadget, the flow of the program would not move onto the next bit of code that the attacker wants to run. Instead it would continue running the normal program code. The diagram below shows how all these gadgets can get cobbled together. Note the return statements (arrows) at the end of each subroutine allow for the next gadget in the attacker's list to execute.

Jumping back for a minute to our journalism example, you will see that various parts of the statement are highlighted. Each part occurs at the end of a sentence where the full stop represents a return statement. Each sentence can be thought of as a subroutine. The misquoting journalist can jump into each sentence at any point, but once there must continue until the full stop. Upon reaching the full stop, another sentence may, once again, be chosen to jump into. Look at the quote again:

Cure for Cancer News Article

If you are wondering whether Tom had any help in finding the cure, this is what he had to say: "I alone am responsible for the cure for cancer. the whole world will benefit. as I have. done all of the work. myself." - Tom

Getting Into the Flow of Things

Now we come to one final tricky question; how does the attacker get control of what I have called the "program flow"? There is no simple answer to that question, but it normally starts with some sort of program error. One of the most common is called a buffer overflow. This gives the attacker the ability to overwrite parts of memory that control the flow of the program; normally the stack.
The stack is an area of memory used to store small pieces of information that can be used or referenced later. One of the main types of information stored on the stack is which instruction a program should return to after a subroutine finishes. As you can probably guess this makes the ability to control the contents of the stack very useful for Return-Oriented Programming. When a subroutine that has been called finishes with a return instruction, the computer automatically removes the next item from the top of the stack, and uses it as the next instruction to execute or the starting point for the next gadget. This essentially means that if you can write the location of the gadgets you want to execute, one after the other, onto the stack (using a buffer overflow for example) you can make existing code do more or less anything you want. Well, I hope someone found that useful. I know I learnt quite a bit reading through some of those papers.
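For readers who would like to see the mashup idea in something runnable, the following is a toy Python simulation written for this overview (it is not taken from the papers or presentation referenced above, and real Return-Oriented Programming chains machine-code addresses, not Python functions). It models the attacker-controlled stack as a list of gadget entries; each loop iteration plays the role of a return instruction fetching the next gadget:

# Purely illustrative: each "gadget" stands for the tail end of a benign
# subroutine, and fake_stack stands for the attacker-written stack contents.
def gadget_set(state, value):
    state["reg"] = value                 # tail of one benign subroutine

def gadget_add(state, value):
    state["reg"] += value                # tail of another benign subroutine

def gadget_output(state, _):
    state["out"].append(state["reg"])    # tail of a third benign subroutine

# The attacker injects no new code - only this list of "addresses" and
# arguments, for example by overflowing a buffer next to the real stack.
fake_stack = [(gadget_set, 40), (gadget_add, 2), (gadget_output, None)]

state = {"reg": 0, "out": []}
for gadget, argument in fake_stack:      # each pass models a return popping the stack
    gadget(state, argument)

print(state["out"])                      # [42] - new behaviour built entirely from existing code

The point of the sketch is simply that the sequence of addresses on the stack, not any injected instructions, determines what the program ends up doing.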
<urn:uuid:ac2ce2b7-810f-4807-a2f3-ff1c2eafb90c>
CC-MAIN-2017-04
https://www.auscert.org.au/render.html?it=13408
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00367-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949021
1,670
3.140625
3
Agency: Department of Defense | Branch: Army | Program: SBIR | Phase: Phase I | Award Amount: 69.97K | Year: 2011 Energy can be harvested or scavenged from many environmental sources such as solar, wind, vibrations, temperature gradients, etc. Two common issues related to environmental energy sources are limited/unpredictable availability and limited/unpredictable quantity. This proposal examines the requirements for the efficient harvesting of energy based on the temperature gradient that exists between the human skin and the surrounding environment. This source has the advantage that it is essentially available 24 hours a day, 7 days a week. The challenges of this work include quantifying and working with the low energy flux (mW/cm²) of the human body, the range of body-to-ambient temperature gradients (from a few degrees to tens of degrees centigrade), and the sensitivity of human tissue to extreme temperatures. This proposal will apply Camgian Microsystems' low power design technology, together with RTI's efficient thermo-electric technology, to the solutions of these challenges. Agency: Department of Defense | Branch: Air Force | Program: SBIR | Phase: Phase I | Award Amount: 99.61K | Year: 2010 This program aims to develop an ultra low power System-on-a-Chip (SoC) technology that will enable >10x improvements in size and endurance over current generation wireless micro-sensor networks. This will be achieved through the integration of advanced circuit and architectural design methods targeted to improve wireless micro-sensor node performance in four critical performance parameters: (1) digital circuit design methodologies that enable the energy consumption of the chip to be dynamically matched to the performance needs of the system; (2) digital circuit design styles that minimize crosstalk noise to radio-frequency (RF) and other analog circuits; (3) advanced power and voltage gating and scaling techniques that reduce idle (leakage) energy consumption; and (4) System on Chip (SoC) design architectures optimizing energy, cost, and size. These elements will be integrated with a unique RF circuit architecture which has improvements in RF switches for signal steering to antennas, power amplifiers for transmitter output, direct digital synthesis, and high-frequency mixers for carrier modulation and received signal down-conversion. While the technical approaches to intelligent, adaptive, ultra-low power, low-noise circuits will be generic and broadly applicable to DoD systems, a specific hardware architecture will be developed based on an intelligent wireless micro-sensor node. BENEFIT: The successful program will lead to improvements in the cost, size, weight, and power (CSWAP) metric of wireless sensors. The CSWAP reduction is achieved partially through dramatic reduction of the electronics power and energy consumption, which leads directly to smaller power source requirements and higher integration capability. A further driver of CSWAP reduction is the use of NCL clockless logic, which gives lower noise crosstalk from the digital processing circuitry to the critical RF and analog circuits. An additional benefit of the program which drives CSWAP improvement is the integration of NCL digital processing circuits with an innovative new RF circuit architecture which has improvements in critical performance areas.
The initial products targeted for deployment will be wireless micro-sensor systems such as are used for border security, military intelligence, military battlefield surveillance, and SmartGrid power system monitoring. Agency: Department of Defense | Branch: Defense Advanced Research Projects Agency | Program: SBIR | Phase: Phase I | Award Amount: 98.95K | Year: 2009 This program will develop a revolutionary, ultra low power System-on-a-Chip (SoC) technology that will enable >10x improvements in size and endurance over current generation microsystems such as unattended ground sensing (UGS) systems, micro-UAVs, micro satellites, body worn electronics, etc. This will be achieved through the integration of advanced energy efficient circuit designs that enable the power consumption of the chip to be dynamically matched to the performance needs of the systems. While the technical approaches to intelligent, adaptive, ultra-low power architectures will be generic and broadly applicable to DoD systems specific hardware implementations will be delivered as proof-of-principle prototypes. These will be based on the digital processing “brain” of a microbolometer IR camera system, which is used in all of the above microsystem applications. The focus of this program is on the digital processing of the architecture. The goal is to synergistically integrate into an SoC the low power enabling capabilities of: · Subthreshold transistor operation. · Clockless self timed logic circuits. · Dynamically controlled power supply voltages determined by the data rate. Codetronix will support Camgian in this effort with their Mobius design specification and implementation tool. Agency: Department of Defense | Branch: Defense Advanced Research Projects Agency | Program: SBIR | Phase: Phase I | Award Amount: 99.00K | Year: 2009 Today’s warfighter is reliant upon more and more technological support to achieve strategic and tactical superiority over their enemies. Technological advantages include such items as night-vision equipment, unattended ground sensors (UGS), unmanned aerial vehicles (UAVs), target tracking beacons, etc. To be effective, these need to be as small and as lightweight as possible. This leads to the drive for the integration of disparate technologies, such as high performance RF circuitry and large scale digital signal processing, into more and more dense circuits to achieve the maximum Size, Weight and Power (SWaP) reductions possible. The ability to increasingly integrate complex digital logic with analog/RF circuits on the same device, makes revolutionary new products feasible. As a case in point, Camgian Microsystems, is developing a revolutionary new UGS that combines radar, camera, digital signal processing and a communications system. The device must achieve lower power and area than are possible with discrete integration. This SBIR program will use this applications to push the limits of state-of-the-art, seeking to demonstrate the ability to provide revolutionary integration levels, combining high performance RF circuits with large quantities of complex digital logic, all on a low cost, small geometry silicon CMOS process. News Article | July 7, 2015 STARKVILLE, Miss.--(BUSINESS WIRE)--Camgian Microsystems has been named one of the 50 Fastest Growing Tech Companies by The Silicon Review. The recognition, Camgian’s third award of the year, comes from a distinguished panel of executives and analysts honoring leading companies who build cutting edge technology products and services. 
The Silicon Review highlighted Camgian as a leader in edge computing, advanced sensing, and real-time processing in the Internet of Things (IoT) market. "We are honored to be recognized as one of the fastest growing tech companies globally," said Gary Butler, chairman and CEO of Camgian Microsystems. "We are very excited about our products' abilities to deliver valuable real-time intelligence in both the government and commercial markets." The full list of companies can be seen here: http://www.thesiliconreview.com/magazines/Special-issue/50-fastest-growing-tech-companies-listing. Named by Inc. Magazine as one of America's fastest-growing private companies, Camgian Microsystems delivers award-winning IoT services for both commercial and government clients. Its latest innovation, Egburt, is a complete IoT application service comprising software, hardware and communications built on a powerful edge computing architecture. To learn more about Camgian Microsystems, please visit www.camgian.com or follow them on Twitter @CamgianMicro.
<urn:uuid:58cf0017-cda9-445d-a64d-4491fd0da270>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/camgian-microsystems-corporation-1504412/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00304-ip-10-171-10-70.ec2.internal.warc.gz
en
0.907125
1,596
2.65625
3
If you’re looking for a great video describing the essentials of Moore’s law in under 10 minutes, perhaps for your non-HPC friends and family, look no further. With a direct, easy-to-follow delivery, Professor Derek McAuley of the School of Computer Science at the University of Nottingham lays out the elements of chip design and manufacturing that have today’s chip designers butting up against the laws of physics. McAuley refers to Moore’s law – Gordon Moore’s observation that the number of transistors in a given area doubles every two years or so – as the "sweet spot" that occurs with each generation of processors, each time the feature size of a chip’s components (e.g., transistors and wire) gets reduced. The professor recalls early on in his career working at Acorn Computing when his colleagues Sophie Wilson and Steve Furber were designing the ARM processor. At this point, they were all very excited about 3 micron technology, the feature size of the transistor. Today the industry is down to 28 or 22 nanometers. Professor McAuley goes on to describe how transistors are made using semiconductor materials doped with ions (p or n materials) and why Moore’s law is slowing down. "Each generation has required better understanding and more complex optical systems," says McAuley. "As these feature sizes get smaller, the areas of the transistor can only fit so many ions or atoms of the doping material – and as it gets smaller and smaller, the number gets less and less. As we get to very small numbers of atoms, the quantum mechanics behavior of the transistor and the probability that it does the right thing start to reduce." McAuley continues by saying the prediction that Moore’s law will run out is essentially saying that the transistors will start to do the undesirable thing too often. Error correction can be used to abate some of this behavior, but it only goes so far when errors become too numerous. There are still many other areas open to development, however, and McAuley sees promise in architecture innovation and 3D chip design.
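As a rough back-of-the-envelope illustration of that doubling (the numbers below are assumptions chosen for the example, not figures quoted in the video), a few lines of Python show how quickly a two-year doubling period compounds:

# Illustrative only: assume density doubles every 2 years, per Moore's observation.
DOUBLING_PERIOD_YEARS = 2
base_density = 1.0                       # normalised transistor density at year 0 (assumed)

for year in range(0, 21, 4):
    density = base_density * 2 ** (year / DOUBLING_PERIOD_YEARS)
    print(f"year {year:2d}: about {density:,.0f}x the starting density")

# After 20 years the factor is 2**10 = 1024 - the kind of compounding that took
# feature sizes from the 3 micron parts McAuley recalls down to tens of nanometres.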
<urn:uuid:0e7001dd-6e50-41ec-8b7d-8fc53789edcb>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/04/18/moores-law-versus-laws-physics/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00148-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939083
453
3.53125
4
Cremer J.T.,Adelphi Technology, Inc. Advances in Imaging and Electron Physics | Year: 2013 This chapter covers the correlation, scatter, and intermediate functions of small-angle neutron scatter (SANS). Small-angle X-ray and neutron scatter from general sample materials are covered, followed by the Rayleigh-Gans equation, Babinets' principle, and the differential cross section of X-ray or neutron small-angle scattering from a solute-solvent sample. This provides a resolution of the scattering vector for a SANS instrument for X-rays or neutrons. The chapter also presents neutron scatter length density, particle structure factor, scatter amplitudes, and intensity. The following topics are also covered: random variables, correlation, and independence, followed by derivation of the macroscopic differential cross section for neutron scatter, which involves convolution and cross correlation. Also presented are the coherent and incoherent, elastic and inelastic components of the pair correlation function, intermediate function, and scatter function, the relationships among these functions, and the measured SANS intensities from the neutron scattering sample. The Guinier, intermediate, and Porod regimes of the sample-averaged intermediate function are covered, in addition to the method of contrast variation and Porod's law. Coherent neutron scatter measurements are shown to yield the solute particle size and shape in the Guinier regime, and incoherent neutron scatter measurements are shown to yield the incoherent scatter function, which gives particle diffusion information. Also derived is the principle of detailed balance. Other covered topics are the static approximation, the particle number density operator and pair correlation function, and the moments of the neutron scatter function. The neutron coherent differential cross section in crystals is shown to be expressed by particle density operators, and neutron elastic scatter is shown by the coherent intermediate and scatter functions to occur only in the forward direction for liquids and gases. © 2013 Elsevier Inc. All rights reserved. Source Cremer Jr. J.T.,Adelphi Technology, Inc. Advances in Imaging and Electron Physics | Year: 2013 This chapter derives the partial differential cross sections for neutron scatter from a nucleus, which accounts for the neutron spin and the nuclear spin. First covered are the preliminary background topics of angular momentum vectors, spin vectors, and vector operators, the Heisenberg uncertainty principle and commutation of operators, the neutron spin operator, and the neutron spin-lowering and -raising operators. First, the partial differential cross section for nuclear scatter of the neutron spin-up and spin-down states is dervied. Next derived for polarized neutron scatter is the partial differential cross section, which includes both the neutron spin state and nuclear spin state, via the combined neutron spin operator and nuclear spin operators. Covered next are the neutron nuclear scatter length, which accounts for the neutron spin states. Thermal averaging is then taken into account, and the total partial differential cross section for neutron spin state scatter is derived, as well as the neutron spin state scatter lengths for an ensemble of nuclear spins and isotopes. Finally, the partial, differential, and total cross section for neutron coherent and incoherent scatter are derived from an ensemble of atoms of varying nuclear spins and isotopes, which accounts for neutron spin states. 
© 2013 Elsevier Inc. All rights reserved. Source Cremer Jr. J.T., Adelphi Technology, Inc. Advances in Imaging and Electron Physics | Year: 2013 This chapter derives the partial differential cross sections for neutron scatter from a nucleus, which account for the neutron spin and the nuclear spin. First covered are the preliminary background topics of angular momentum vectors, spin vectors, and vector operators, the Heisenberg uncertainty principle and commutation of operators, the neutron spin operator, and the neutron spin-lowering and -raising operators. First, the partial differential cross section for nuclear scatter of the neutron spin-up and spin-down states is derived. Next derived for polarized neutron scatter is the partial differential cross section, which includes both the neutron spin state and nuclear spin state, via the combined neutron spin operator and nuclear spin operators. Covered next is the neutron nuclear scatter length, which accounts for the neutron spin states. Thermal averaging is then taken into account, and the total partial differential cross section for neutron spin state scatter is derived, as well as the neutron spin state scatter lengths for an ensemble of nuclear spins and isotopes. Finally, the partial, differential, and total cross sections for neutron coherent and incoherent scatter are derived from an ensemble of atoms of varying nuclear spins and isotopes, which accounts for neutron spin states. Agency: Department of Defense | Branch: Air Force | Program: SBIR | Phase: Phase I | Award Amount: 100.00K | Year: 2008 A short-pulsed neutron generator is proposed for the detection of concealed high explosives. A recently developed RF-excited plasma neutron generator will be pulsed to produce the activating neutrons, whose pulse length is 5-10 ns with a repetition rate of 100 kHz. Using the D-T nuclear reaction, we expect the proposed generator to produce an average yield of 10E9 n/s at this pulse length and rate. In Phase I an existing neutron generator will have a set of electrodes installed to chop the ion beam to produce the desired neutron-pulse time structure and a high peak yield. The present deuterium generator will be redesigned to support the safe use of tritium. The proposed system will be designed to be low cost, transportable, and mechanically and electronically robust, to ensure its wide usage. Unlike Penning diode sources, the generator is expected to have a long lifetime. The project has a high probability of success based on the recent development by Adelphi Technology Inc. and Lawrence Berkeley National Laboratory of new RF plasma neutron generators. Agency: Department of Energy | Branch: | Program: STTR | Phase: Phase II | Award Amount: 750.00K | Year: 2008 No long-lived gamma-ray calibration sources exist with energies above 3.5 MeV, which is an impediment to the calibration of high-purity-germanium and scintillation detectors used in homeland security, nuclear physics and astrophysics. Recent advances in Prompt Gamma-ray Activation Analysis with guided neutron beams have led to the precise calibration of neutron-capture gamma ray sources with energies up to 10.8 MeV. In this project, these neutron-capture gamma ray sources will be produced in a moderator/transducer surrounding a compact, low-yield neutron generator that uses the safe D-D fusion reaction. In Phase I, a portable gamma-ray generator was designed using an inexpensive ion source, a self-replenishing target for generating neutrons, and a compact moderator with a gamma-ray transducer. The parameters for selecting the three major components were based on the required count rate for calibrating the frequency and efficiency of the detector, while still ensuring operator safety and minimizing possible damage to the detector. The high-energy gamma ray spectrum was measured using the selected transducer material. In Phase II, the ion source will be fabricated and tested, and the fast neutron generator will be fabricated and integrated into the moderator and gamma ray emitter. Then, the source's gamma-ray yield will be measured, and the source's safety and benefits for detector calibration will be determined. Commercial Applications and Other Benefits as described by the awardee: The DOE and the International Atomic Energy Agency must provide for the application of standards for the safety of nuclear installations and radioactive sources. The new device should enable the easy calibration of the energy and efficiency of HPGe detectors at high gamma ray energies, at in-house installations or in the field, for the identification of nuclear and radioactive materials. It also should reduce security concerns about the storage of radioactive sources currently in use. Agency: Department of Energy | Branch: | Program: SBIR | Phase: Phase I | Award Amount: 100.00K | Year: 2008 Refractive lenses can dramatically improve neutron instrumentation in DOE facilities.
In previous experiments, compound refractive lenses (CRLs) were shown to be capable of imaging using thermal neutrons. However, a number of problems exist that prevent the full implementation of these lenses: the initial prototype lenses, which used compression molding of metals, have long focal lengths, small fields of view, poor surface quality, and material inhomogeneities. To achieve shorter focal lengths and shorter neutron wavelengths, the radii of curvature must be reduced. This project will use an injection-molding bubble injection process to design and fabricate refractive lenses that will be able to focus, collimate, and image thermal neutrons. Both simple concave and Fresnel lenses will be investigated. Commercial Applications and other Benefits as described by the awardee: The new CRLs should provide better resolution and higher quality images. They will be inexpensive, compact, and capable of imaging using thermal neutrons with wide bandwidth spectra. Since CRLs should have very modest cost, they would be much less expensive than the large mirrors and other optics currently used. Many scientific and technological applications should ensue, including microscopy, scattering, interferometry, crystallography, and reflectometry.
<urn:uuid:eae012f4-621c-4d0e-84b4-bf3215e14c90>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/adelphi-technology-inc-92186/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00268-ip-10-171-10-70.ec2.internal.warc.gz
en
0.895265
1,642
2.515625
3
Before you can use a new Flash memory card, you must format it. Different commands are required to format and erase Flash memory depending on the type of filesystem running on the router. Class A and Class C filesystems issue the format command. Class B filesystems issue the erase command. This is the Class A filesystem: |Cisco 12000 Series Internet Router| |Cisco 7000 Route Switch Processor (RSP)| |Catalyst 8500 Switch Route Processor (SRP)| |Cisco 7500 Series RSPs (RSP 2, RSP 4, RSP 8)| |Cisco 6400 Universal Access Concentrator (UAC)| |Catalyst 5000 and 5500 Route Switch Module (RSM)| |Multiservice Switch Route Processor (MSRP) for LightStream 1010| |ATM switch and processor for LightStream 1010 and Catalyst 5000 and 5500| This is the Class B filesystem: |Cisco 1000 series routers |Cisco 1600 series routers: The 1600 series router has a single PC card that contains Flash memory. The 1601 to 1604 run from Flash. If you remove the PC card when the router is running, the router halts. The 1601R to 1605R run from RAM. If you remove the PC card, the router does not load the Cisco IOS Software image during the next bootup. In the 1600 series, you cannot delete the running image file or any other file unless it is in a different partition. |Cisco 3600 series routers: The 3600 series routers traditionally uses a Class B filesystem. However, with the addition of crash information file support in Cisco IOS Software version 12.2(4)T, the 3600 needs the ability to delete individual files. Consequently, the 3600 series routers with Cisco IOS Software version 12.2T and later utilize commands from Class B filesystems as well as commands from Class C filesystems. To activate the Class C filesystem commands on the 3600 with Cisco IOS Software Release 12.2T, issue the erase command to completely remove all files from the Flash filesystem. When the Flash is empty, issue the squeeze command against it to create a squeeze log. At this point, the 3600 Flash system issues the delete and squeeze commands like a Class C filesystem.| This is the Class C filesystem: |AS5800 Dial Shelf Controller (DSC)| |Catalyst 5000 and 5500 Supervisor III Module| |Catalyst 6000 and 6500 Supervisor Engine I| |Catalyst 6000 and 6500 Supervisor Engine II| |Cisco 7000 Route Processor| |Cisco 7100 series routers| |Cisco uBR7100 series routers| |Cisco 7200 Series Network Processing Engine| |Cisco uBR7200 series routers| |Cisco 7200VXR Series Network Services Engine 1| |Cisco 7600 Series Internet Routers| |Cisco 10000 series routers (Edge Services Router (ESR))| |Cisco uBR10000 series routers| To remove the files from a compact Flash memory card previously formatted with a Class B Flash filesystem, perform one of these procedures: |For external compact Flash memory cards, issue the erase slot0: command.| |For internal compact Flash memory cards, issue the erase flash: command.| To format an external compact Flash memory card with a Class B Flash filesystem, refer to this sample output: Router# erase slot0: Erasing the slot0 filesystem will remove all files! Continue? [confirm] Current DOS File System flash card in slot0: will be formatted into Low End File System flash card! Continue? [confirm] Erasing device... 
eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee eeeeeeeeeeeeeeeeee ...erased Erase of slot0: complete To remove the files from a compact Flash memory card previously formatted with a Class A or Class C Flash filesystem, perform one of these procedures: |For external compact Flash memory cards, issue the format slot0: command.| |For internal compact Flash memory cards, issue the format flash: command.| To format an internal compact Flash memory card with a Class A or Class C Flash filesystem, refer to this sample output: Router# format flash: Format operation may take a while. Continue? [confirm] Format operation will destroy all data in "flash:". Continue? [confirm] Enter volume ID (up to 64 chars)[default flash]: Current Low End File System flash card in flash will be formatted into DOS File System flash card! Continue? [confirm] Format:Drive communication & 1st Sector Write OK... Writing Monlib sectors ............................ Monlib write complete .. Format:All system sectors written. OK... Format:Total sectors in formatted partition:250592 Format:Total bytes in formatted partition:128303104 Format:Operation completed successfully. Format of flash complete For more information you may wish to refer to: Cisco PCMCIA Filesystem Compatibility Matrix and Filesystem Information
<urn:uuid:538c99d3-be3f-40f1-aff3-18d96961776e>
CC-MAIN-2017-04
http://www.networkworld.com/article/2343977/cisco-subnet/how-to-format-and-erase-flash-in-a-cisco-router.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00570-ip-10-171-10-70.ec2.internal.warc.gz
en
0.801404
1,078
2.515625
3
|Inheritance Tutorial||Exception Handling Tutorial| This tutorial is about using the collection classes. Collections are very useful in object-oriented programming. They are used to store groups of objects. You usually store objects of the same type in any particular collection, although you can also store objects of different types in a collection. Object COBOL also enables you to store COBOL intrinsic data, which is not represented by objects, in collections. This is done by a mechanism which enables you to treat an item of intrinsic data, for example a PIC X(20), as though it were an object. You can't mix intrinsic data and objects inside a single collection, or mix different types of intrinsic data. Finally, this tutorial covers dictionaries, which are another type of collection. This tutorial consists of the following sessions: using the collection classes, storing intrinsic data in collections, dictionaries, and the collection iterator methods. Time to complete: 25 minutes. The different types of collection in the Class Library can all be categorized by the following properties: You can access any particular element in an indexed collection by giving its position. This is similar to using a conventional array or table. In non-indexed collections, elements are not stored in a defined order. Automatically growable collections get bigger when you exceed the capacity for which you created the collection. A manually growable collection only gets bigger when you send it the "grow" message. Some collections disallow duplicate elements; trying to add an element which has the same value as one already in the collection will cause an exception. The different collection classes available are listed below, with their properties. |Bag||Non-indexed, automatically growable, duplicates allowed| |Array||Indexed, manually growable, duplicates allowed| |CharacterArray||Indexed, manually growable, duplicates allowed| |OrderedCollection||Indexed by insertion order, automatically growable, duplicates allowed| |SortedCollection||Indexed by sort order, automatically growable, duplicates allowed| |ValueSet||Non-indexed, automatically growable, duplicate values disallowed| |IdentitySet||Non-indexed, automatically growable, duplicate object handles disallowed| |Dictionary||Indexed by key, automatically growable, duplicate key values disallowed| |IdentityDictionary||Indexed by key, automatically growable, duplicate key object handles disallowed| ValueSet and IdentitySet only differ in the way they determine duplicate elements. ValueSets compare the values of elements and disallow duplicate values; IdentitySets compare object references, and disallow storing the same object more than once. Dictionary and IdentityDictionary are special types of collection which are dealt with in a later section of this tutorial. They determine duplicate keys in the same way that ValueSet and IdentitySet determine duplicate elements. Next you will look at a simple COBOL program, coll0.cbl, which illustrates some of the differences and similarities between the main types of collection. The program is not an Object COBOL class program, but procedural COBOL code which uses the Class Library collection objects. To animate coll0.cbl cob -a coll0.cbl This is a set of strings the program stores in different types of collection. You need to specify an initial size for all types of collection. In the case of an Array, you can't exceed the initial capacity of the collection unless you send it the "grow" message. Other types of collection will grow in size if you add more elements than initially specified.
However, growing collections can be an expensive operation at run-time, so you should still always try to pick an initial size which will reduce the number of times a collection needs to grow. move 20 to i). i holds the length for the instances of CharacterArray which will be created as elements for the different types of invoke CharacterArray "withLengthValue"...). A CharacterArray is an object for storing strings. The "withLengthValue" message creates a new instance of CharacterArray and initializes it with some data. invoke aBag "add"...) until the With all the different collection types, except Array, you use the "add" message to add new elements. With the Array, which does not grow automatically, you use "atPut" which stores the element at the specified index position. Although OrderedCollection is indexed, you can't use "atPut" until you have added elements to the collection. For example, once you have used "add" to add the first five elements, you can use "atPut" and an index between one and five to replace any of those elements. You can't do an "atPut" with an index greater than five until you have added more elements. You can't ever use "atPut" to put an element in a SortedCollection. A SortedCollection uses the value of each element to determine its position in the collection. This completes execution of the perform loop without you needing to step through each statement in turn. display " "). This retrieves the object reference to the fourth element in the Array instance. Sending the "display" message to the CharacterArray displays it on the screen. Its contents should be the word "banana". Push the F2=View key to see the console contents, and any other key to return to the Animator display. invoke aBag "includes"...). Because a Bag is not indexed, you can't retrieve elements from it directly. Instead, you can query it to find out whether it has one or more occurrences of an object with a given value. The "includes" message returns 1 if a Bag contains an object with a matching value and 0 if it doesn't. You can also examine the contents of unindexed collections (Bags and ValueSets) by using the iterator methods. These are covered later in this tutorial. display " "). These test the result of the returned value and tell you whether or not the bag contained an object with a matching value. invoke aBag "add"...). This adds the string to the Bag again. A Bag stores objects with duplicate values by recording the number of occurrences of each object with a different value. invoke aBag "occurrencesOf"...). The "occurrencesOf" message returns the number of elements a collection has which match the specified object in value. display " "). This displays the information that this Bag has two occurrences of the specified string. invoke anArray "occurrencesOf"...) and A015 ( display " "). This code demonstrates how the Array also responds to the "occurrencesOf" message. You can use "occurrencesOf" and "includes" on indexed collections as well as instances of Bag and ValueSet. This tells you whether or not the element exists, but not its position. invoke aValueSet "add"...). Instances of ValueSet do not maintain duplicate elements. The ValueSet already contains an element with value "banana", so it will not be added a second time. invoke aValueSet "occurrencesOf"...). This message always returns 1 or 0 for ValueSet instances. display " "). This displays the result of the "occurrencesOf" message. 
The statements below tag A016 ( display "Collection contents") display every string in the OrderedCollection and SortedCollection in indexed order. The elements in the OrderedCollection appear in the order in which they were added. The elements in the SortedCollection appear sorted into ascending alphabetical order. Ascending order is the default for a SortedCollection. At the end of this section, you may have some questions to ask: Program coll0.cbl stores instances of CharacterArray in the collections it creates. An instance of a CharacterArray is a simple object, with an obvious single value (the string you store in it). But if you stored Account objects, like the ones used in the Inheritance Tutorial, in a collection, how would you determine the value? Would it be the name, the balance or the account number? The answer is that the collection objects provide a framework within which objects stored as elements must work. When a collection needs to know whether two objects are equal, it sends one object the "equal" message, passing it the other as a parameter. It is then up to the object to interrogate the other element and decide whether they are equal or not. The default sort method for the SortedCollection works in a similar way. The SortedCollection sends one element the "lessThanOrEqual" message and a second element as a parameter. The receiving element can then compare itself to the second element and return a result. If you are writing your own objects to store in collections, you may need to implement these methods yourself, unless you are subclassing from a class like CharacterArray, which implements them for you. There is also a default "equal" method in Base which compares the object handles of two objects. This implementation of "equal" will only find two elements equal if they are actually the same object. For full information on the methods you might need to implement to use the Collection classes, see the chapter Collection Frameworks. In the previous session, you looked at a program which stored objects in different types of collection. There may be occasions when you want to store intrinsic COBOL data, like numbers, in collections. You can do this by using the intrinsic classes of the Class Library. Object COBOL provides a mechanism which enables you to send a message to an intrinsic data item, as though it were an object. To send a message to an intrinsic data item The class library includes classes for three different types of intrinsic data (PIC X, PIC X COMP-X, PIC X COMP-5). These classes are templates which handle data of fixed length. When you clone the class, you specify the actual size of data you want to handle. The example below sends the "hash" message to a numeric data item: 00001 working-storage section. 00002 01 aValue pic x(2) comp-5. 00003 01 cloneX2Class object reference. 00004 01 aLength pic x(4) comp-5. ... 00005 procedure division. ... 00006 move 2 to aLength 00007 invoke COBOLCOMP5 "newClass" using aLength 00008 returning cloneX2Class ... 00009 invoke aValue as cloneX2Class "hash" 00010 returning aHashValue ... This is what the code above is doing: |Lines 1-4||Declaring data.| |Lines 6-8||Cloning the COBOLCOMP5 to create a new class for comp-5 data items two bytes long.| |Line 9||Sending a message to the data in data item aValue.| The invoke...as statement uses the data item as an instance of the cloned class. You can think of an intrinsic object as a static object which has memory allocated to it at compile time.
Static objects do not have object handles, unlike the dynamic objects created by the OO RTS at run-time. Intrinsic data objects are the only examples of static objects in Object COBOL. We will now look at a short sample program, coll1.cbl, which uses the intrinsic classes to store a set of integers in an array. To animate coll1.cbl cob -a coll1.cbl move 4 to i). The "newClass" message creates a clone of the CobolComp5 class, initialized in this case for data four bytes in length. The object returned is a new class object, and not an instance of CobolComp5. move 10 to i). This creates an instance of Array with space for ten elements. This time the message to create the Array instance is "ofValues" (in the example in the previous section it was "ofReferences"). When a collection is created with the "ofValues" message, it stores intrinsic data instead of object handles. The cloned class, PicX4Comp5, is used as a template so that the Array knows how much space to allocate for each element. You can't mix objects and intrinsics inside a single collection. Once you create a collection you can only store the type of data for which it is initialized. If you want to mix many different kinds of data in a collection, you should create the collection using "ofReferences", and use different types of objects to represent the different types of data. There is no restriction on mixing different kinds of object inside a collection of references. move 10 to element). perform varying from 1 ...). This executes the entire perform loop to initialize the array. This retrieves the fourth element from the array (which has a value of 7) and displays it. Because a data item is returned, we can show it using the display verb; when we got back objects in the previous example we had to send them the "display" message to show them. This completes this part of the tutorial on using intrinsic data. For more information, see the chapter Intrinsic Data. Dictionaries are a special sort of indexed collection, which store key-data pairs (known as associations). In a dictionary, the key is used as the index when you store or retrieve the data. Dictionaries do not allow you to store duplicate keys. Like the other collection types, you can store either objects or intrinsic data in a dictionary. In a dictionary though, either the key or the data part can be an intrinsic or an object. This gives you four possible combinations for intrinsic or object storage: When you create a dictionary you have to give it a template so that it knows how the key and data portions are to be stored. The template is either the Association class, or a clone of the Association class. The Association class is another clonable class, like the classes for intrinsic data classes, used for creating templates for data storage. An Association template actually consists of two templates; one for the key and one for the data. To create any type of dictionary you need to create an Association template. To create an Association template If both the key and data entries in the dictionary are to be objects you can use the Association class itself as a template; you don't need to create a clone. Having created a template for your dictionary object, there are two methods you can use to create a dictionary itself; "ofValues" or "ofAssociations". A dictionary "ofValues" stores each element as a key data pair. A dictionary "ofAssociations" stores each element as an instance of the Association template you used to create the dictionary. 
A dictionary "ofAssociations" in effect stores three items for every entry in the dictionary: an association object, which contains the key and data; the key, which could be an object or an intrinsic value; and the data, which could be an object or an intrinsic value. A dictionary "ofValues" stores the key and data directly without wrapping them inside an association object. When would you use a dictionary "ofAssociations" and when would you use a dictionary "ofValues"? A dictionary "ofValues" is more efficient at run-time in terms of speed and memory if you simply want to store data items against key items. However, if your application uses associations elsewhere to manage key/data pairs, a dictionary "ofAssociations" is a better choice as the actual dictionary only stores the object handles to the associations, and you don't have to extract the key and value from the association before putting it in the dictionary. The sample code below creates an association template and then uses it to create a dictionary "ofValues": 00001 working-storage section. 00002 01 aKeyTemplate object reference. 00003 01 aDataTemplate object reference. 00004 01 anAssocTemplate object reference. 00005 01 aDictionary object reference. 00006 01 aLength pic x(4) comp-5. ... 00007 procedure division. ... 00008 move 3 to aLength 00009 invoke CobolCompX "newClass" using aLength 00010 returning aKeyTemplate 00011 move 20 to aLength 00012 invoke CobolPicX "newClass" using aLength 00013 returning aDataTemplate 00014 invoke Association "newClass" using aKeyTemplate 00015 aDataTemplate 00016 returning anAssocTemplate 00017 invoke Dictionary "ofValues" using anAssocTemplate 00018 returning aDictionary ... |Lines 1-6||Declares storage for the templates.| |Lines 8-9||Creates a template for a PIC X(3) COMP-X numeric key.| |Lines 11-12||Creates a template for a PIC X(20) data portion.| |Line 14||Creates an association template.| |Line 17||Creates a dictionary of values.| In the next part of this section we will animate through some code which uses a dictionary to store account objects. The account objects are the ones which were introduced in the Inheritance Tutorial. Each account object is stored in the dictionary against the customer name. We are going to use a simple program, coll2.cbl, to demonstrate the use of the dictionary. To animate coll2.cbl cob -a coll2.cbl Animator starts with the statement below tag A040 highlighted ready for execution. move length of...). The dictionary we want to create uses the customer name as the key, and account objects as values. This code clones the intrinsic class CobolPicX to create a class for representing strings the same length as a customer name. set wsNull to null). This code clones the Association class to create a template for the dictionary. The key portion represents strings the same length as wsCustomer; the data portion is set to null and represents an object handle. invoke Dictionary "ofValues"...). This creates the new dictionary. It is initialized to store 10 elements, but grows automatically if more elements than this are added. move spaces to wsCustomer). This creates a check account. Using Perform Step saves animating through all the "openAccount" code. invoke wsDictionary "atPut"...). The "atPut" message stores the account object at the key in wsCustomer. move "Mike" to wsCustomer). This creates two more accounts, and stores them in the dictionary. move spaces to wsCustomer). The "at" message retrieves Bob's account from the dictionary. invoke wsAccount "printStatement").
The "printStatement" message displays the account details on the console. Press F2=view to switch the display from Animator to see what is on the console. Press any key to return to the Animator view. Don't shut down Animator; the next session carries on directly from this one. This completes this section; in the next section you will use the already running application to look at the iterator methods for collections. The collection classes all provide iterator methods which enable you to examine all the elements of a collection. There are four iterator methods, and they are supported by all types of collection: one passes every element as a parameter to a method which you specify; one passes every element as a parameter to a method which you specify, and creates a subcollection of the elements for which your method returns the value 1; one passes every element as a parameter to a method which you specify, and creates a subcollection of the elements for which your method returns the value 0; and one passes every element as a parameter to a method which you specify, and creates a new collection of the elements which your method returns. This section shows how coll2.cbl uses the iterator methods to carry out operations on all accounts. The instructions below assume that you are continuing directly from the end of the previous section, and have not stopped animating coll2.cbl. To see the use of iterators: invoke EntryCallback "new"...). Iterator methods pass elements in the collection to a piece of code. An EntryCallback is an object which contains an entry-point name; in effect it is a piece of code wrapped up inside an object. The class library also includes a Callback class, which contains an object handle and a message name; in effect wrapping a method up inside an object. invoke wsDictionary "do"...). The callback is passed to the "do" method inside the dictionary. The "do" method uses the callback to call the entry-point "printAll" for each object in the dictionary. Execution switches to the statement below tag A160. invoke lnkAccount "printStatement"). Execution jumps straight back to the top of the entry-point, below tag A160. The "do" method in the dictionary has passed its next account object to the entry-point. The entry-point is called once for every single account in the dictionary. All collections respond to the "do" message. The iterator prints statements for the remaining accounts in the dictionary. You can see how the combination of collection iterator methods and polymorphic methods is a powerful programming technique. This completes the tutorial. This tutorial covered the different types of collection class, storing intrinsic COBOL data in collections, dictionaries and associations, and the collection iterator methods. Copyright © 1999 MERANT International Limited. All rights reserved. This document and the proprietary marks and names used herein are protected by international law. |Inheritance Tutorial||Exception Handling Tutorial|
<urn:uuid:2329805f-a6ab-46c9-ac66-a155765399e2>
CC-MAIN-2017-04
https://supportline.microfocus.com/documentation/books/sx20books/opcolu.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00296-ip-10-171-10-70.ec2.internal.warc.gz
en
0.816361
4,510
3.390625
3
Gaining insights from big data is no small task. Having the right technology in place to collect, manage and analyze data for predictive purposes or real-time insight is critical. Different types of data may require different computing platforms to provide meaningful insights. Understanding the difference between data in motion vs. data at rest can help determine the type of technology and processing capabilities required to glean insights from the data. Data at rest This refers to data that has been collected from various sources and is then analyzed after the event occurs. The point where the data is analyzed and the point where action is taken on it occur at two separate times. For example, a retailer analyzes a previous month’s sales data and uses it to make strategic decisions about the present month’s business activities. The action takes place after the data-creating event has occurred. This data is meaningful to the retailer, and allows them to create marketing campaigns and send customized coupons based on customer purchasing behavior and other variables. While the data provides value, the business impact is dependent on the customer coming back in the store to take advantage of the offers. Data in motion The collection process for data in motion is similar to that of data at rest; however, the difference lies in the analytics. In this case, the analytics occur in real-time as the event happens. An example here would be a theme park that uses wristbands to collect data about their guests. These wristbands would constantly record data about the guest’s activities, and the park could use this information to personalize the guest visit with special surprises or suggested activities based on their behavior. This allows the business to customize the guest experience during the visit. Organizations have a tremendous opportunity to improve business results in these scenarios. Infrastructure for data processing You might be wondering what type of IT Infrastructure would be needed to support data processing for both of these types. The answer depends on which method you choose, and your business objectives for the data. For data at rest, a batch processing method would be most likely. In this case, you could spin up a bare-metal server during the time you need to analyze the data and shut it back down when you are done. With no need for “always on” infrastructure, this approach provides access to high-performance processing capabilities as needed. For data in motion, you’d want to utilize a real-time processing method. In this case, latency becomes a key consideration because a lag in processing could result in a missed opportunity to improve business results. By eliminating the resource constraints of multi-tenancy, bare-metal cloud offers reduced latency and high performance levels, making it a good choice for processing large volumes of high-velocity data in real time. Both types of data have their advantages, and can provide meaningful insights for your business. Determining the right processing method and infrastructure depends on the requirements for your specific use case and data strategy. Learn more about the benefits of bare-metal cloud for different types of big data workloads.
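To make the batch versus real-time distinction described above concrete, here is a small, hypothetical Python sketch (the names and numbers are invented for illustration; no specific product or API is implied). The first function analyses a completed month of sales after the fact, batch-style; the second reacts to each event as it arrives, stream-style:

# Data at rest: analyse a completed batch of events after they happened.
def batch_report(monthly_sales):
    total = sum(sale["amount"] for sale in monthly_sales)
    return {"total": total, "average": total / len(monthly_sales)}

# Data in motion: act on each event as it arrives, keeping only running state.
class StreamMonitor:
    def __init__(self, alert_threshold):
        self.count = 0
        self.total = 0.0
        self.alert_threshold = alert_threshold

    def on_event(self, sale):
        self.count += 1
        self.total += sale["amount"]
        if sale["amount"] > self.alert_threshold:
            # React immediately - e.g. trigger a personalised offer in real time.
            print(f"High-value event seen in real time: {sale}")

sales = [{"amount": 20.0}, {"amount": 250.0}, {"amount": 35.0}]
print(batch_report(sales))                   # insight after the event
monitor = StreamMonitor(alert_threshold=100.0)
for sale in sales:                           # insight while the event is happening
    monitor.on_event(sale)

The batch path tolerates latency and suits on-demand, spin-up-and-shut-down infrastructure; the streaming path holds state continuously and is where low-latency, always-on processing pays off.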
<urn:uuid:f6e82de6-29bf-49b6-8339-7779e8be25d5>
CC-MAIN-2017-04
http://www.internap.com/2013/06/20/data-in-motion-vs-data-at-rest/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00534-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936202
621
2.578125
3
Tech Careers That Make a Difference Over 50% of people who live in rural parts of the world cannot access basic healthcare. Imagine if you could change that with technology and the Internet of Things (IoT). Telehealth specialists do, every day. Telehealth uses networked medical devices, video conferencing, and other Internet-based technologies to let doctors speak to and evaluate patients as if they are face-to-face, no matter how many miles separate them. Telehealth specialists use their skills in VoIP, computer networking, video streaming, and more to increase healthcare access for patients and improve their quality of life. When disasters strike, communications systems are often the first to fail. Imagine being unable to use smartphones, email, or social media during a crisis. Without them, responders can’t save lives or deliver food and water where it’s needed. Survivors can’t get medical care or find their loved ones. But network engineers with special training and the right equipment can restore these vital connections. These professionals often deploy to the front lines of a disaster, helping to ensure public safety and provide humanitarian assistance. Technology is transforming our electrical grid. We use Internet-enabled thermostats to control and monitor energy consumption. Cities use motion-detecting sensors and software to turn street lights on and off. These technologies can reduce energy costs, light pollution, and greenhouse gas emissions. But the increased connections expose electric systems and our homes to cyber attacks. Network security professionals protect communities and families from those risks so they can enjoy the benefits of a modern electrical system. Do you want to be your own boss or start a business while benefitting others? As a tech entrepreneur you’ll work hard and take on big risks, but you’ll have the potential for big rewards, especially if making an impact on society is important to you. One study estimates that every $1 of profit earned by an innovator generates $50 worth of benefits for society at large. Technology is behind nearly every transformative innovation of the last two decades, with no signs of slowing down.
<urn:uuid:6ec2c98e-2b6a-4d03-be2e-29a5c96ee18d>
CC-MAIN-2017-04
https://www.netacad.com/ioe/tech-careers-that-make-a-difference
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00442-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929437
428
2.703125
3
Celebrated annually in more than 192 countries, Earth Day is an international holiday, the purpose of which is to educate the public about the importance of protecting the environment. In honor of Earth Day’s message, let’s take a look at the ways video conferencing technology can significantly reduce carbon emissions and safeguard against global climate change. We’re all familiar with the role that heavy industry and daily commuting play in greenhouse gas generation, but something that’s often glossed over is the impact air travel has on our environment. According to UK-based charity Friends of the Earth, air travel is the single fastest-growing source of greenhouse gases, with the world’s 16,000 commercial aircraft producing roughly 660 million tons of carbon dioxide a year. To put that into perspective, that’s equivalent to the amount of carbon dioxide created annually by the sum of human activity in Africa. Kind of crazy, isn’t it? Greenhouse gases like carbon dioxide are the single biggest contributor to global climate change, which in turn threatens the long-term sustainability of the planet. According to the Intergovernmental Panel on Climate Change (IPCC), if proper steps are not taken soon, by 2050 climate change will have reduced crop yields by 25 percent, which will have devastating effects on earth’s human population. But, What About Me? At this point, you might be thinking, “Do I really need to change my flying habits? How much am I personally impacting the environment? After all, I’m only one person.” The answer might surprise you. Let’s say you make three trips a year to your company’s London offices from your home base in Chicago. Those three trips alone produce 10.4 tons of carbon dioxide per passenger. To put that into perspective, the average US household’s annual electricity consumption accounts for a comparatively paltry 6.6 tons of carbon dioxide. In those three trips across the pond, you alone have created half again as much carbon dioxide as your entire family consumes in a year. And just because you’re not making regular transatlantic flights doesn’t mean you’re off the hook! Let’s say instead that you make three round-trip flights from New York to San Francisco. That’s “only” six tons of carbon dioxide a passenger – or nearly twice the carbon dioxide you create from driving your car 9,600 miles. Three round-trippers from Chicago to Houston create two tons per passenger, or the equivalent of an entire year’s worth of commutes to and from work. That adds up…fast! Change Your Habits with Video Conferencing So it’s pretty clear that excessive air travel is pretty bad for the planet, but where does that leave you and your business? You still need to meet with clients, collaborate with domestic and international partners and bring products to market, and that almost certainly requires air travel – and maybe even plenty of it. How can you help save the environment while also growing your business? Two words: video conferencing. High-definition video conferencing perfectly replicates the experience of a face-to-face conversation, eliminating the need for anything but the most important in-person meetings. Thanks to video conferencing, it’s easier than ever to make a positive impact on the environment – and what better way to celebrate Earth Day is there? Check out our infographic below for even more fun facts about our environment (click to enlarge):
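For those who prefer the arithmetic spelled out, here is a small, purely illustrative Python snippet. Only the per-trip and household figures quoted in this post are used; everything else is simple arithmetic, not additional data:

# Figures quoted above (tonnes of CO2), compared as simple ratios.
chicago_london_three_trips = 10.4    # per passenger, three Chicago-London round trips
ny_sf_three_trips = 6.0              # per passenger, three New York-San Francisco round trips
household_electricity_year = 6.6     # average US household electricity, one year

print(f"Chicago-London flying vs a year of household electricity: "
      f"{chicago_london_three_trips / household_electricity_year:.1f}x")
print(f"New York-San Francisco flying vs a year of household electricity: "
      f"{ny_sf_three_trips / household_electricity_year:.1f}x")
# Three transatlantic round trips alone emit roughly 1.6 times as much CO2 as a
# year of household electricity use - exactly the kind of travel that
# high-definition video conferencing can replace.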
<urn:uuid:a5a01992-8159-4890-bc45-6097e0db8b5e>
CC-MAIN-2017-04
http://www.lifesize.com/video-conferencing-blog/earth-day-2014/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00076-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931641
737
3.609375
4
PARIS, France, May 12 — The German computing center for climate research DKRZ (Deutsches Klimarechenzentrum) and Bull have signed a contract for the delivery of a petaflops-scale supercomputer, as well as cooperation on climate research simulation. The contract, worth 26 million euros, covers the delivery of all the key computing and storage components of the new system. “What does the climate have in store for our future and for the Earth?” It’s a question that arouses a great deal of interest and controversy nowadays. To answer it, climate simulations are an essential tool. This involves replicating the climate system and its complex developments on a computer, with the help of digital models. The new system will be used to process the huge quantity of data (Big Data) needed to carry out effective climate simulation. Despite its impressive technical performance, the global energy consumption of the system demonstrates exemplary energy efficiency, with a PUE as low as 1.2. The PUE value is the ratio between the global energy consumption of the Data Center and the actual energy consumption of the computer. This excellent figure is a direct result of the technology developed by Bull for High-Performance Computing (HPC): the system being purchased by the Hamburg climate researchers will be cooled using warm water, a technology that requires significantly less energy than standard cooling systems, as the heat generated by processors and memory modules is extracted as close to the source as possible. The system will also benefit from advances in energy consumption reductions, born out of a cooperative project between Bull and the Technical University of Dresden. “We are very proud that DKRZ has chosen Bull. Bull is a leading international provider of HPC solutions, and supports HPC research and education in Germany for customers including the Universities of Dresden, Cologne, Aachen, Düsseldorf and Münster, and the Jülich Research Center. The contract signed today with the German computing center for climate research is a new milestone in Bull’s HPC success story,” said Gerd-Lothar Leonhart, CEO of Bull for the DACH area (Germany/Austria/Switzerland). “As part of the agreement signed today, DKRZ and Bull will cooperate to improve the scalability of climate models and the corresponding software algorithms. In climate simulation, we generate such enormous quantities of data that we not only need efficient hardware, but also highly efficient software, to get to grips with that data,” commented Professor Thomas Ludwig, Director of DKRZ and research team leader. “We must be able to rely on supercomputers that incorporate the latest technological advances to be able to improve our climate forecasts. With the new system, for example, we hope to gain new insight into the forecasting of cloud formation,” explained Professor Dr. Jochem Marotzke, Director of the Max-Planck Institute of Meteorology, one of the main users of the DKRZ facilities. The expertise in the optimization of software codes developed by Bull’s Parallel Programming team in Grenoble was a key factor in DKRZ’s decision. “It is also this proven competence that finally convinced us that Bull was the right partner to have at our side,” Professor Ludwig added. If the new system were fully installed today, it would rank among the five fastest supercomputers in Germany according to the current Top500 list. And the project breaks another record: its 45 Petabyte storage system is one of the largest in the world.
DKRZ is setting new standards with the deployment of this outstanding infrastructure, specifically scaled to support its users' scientific research programs. For more information visit: www.dkrz.de
Bull is the trusted partner for enterprise data. The Group, which is firmly established in the Cloud and in Big Data, integrates and manages high-performance systems and end-to-end security solutions. Bull's offerings enable its customers to process all the data at their disposal, creating new types of demand. Bull converts data into value for organizations in a completely secure manner.
<urn:uuid:6f9f5dbe-e020-4fa6-b8c8-7b38083c9f67>
CC-MAIN-2017-04
https://www.hpcwire.com/off-the-wire/bull-provide-dkrz-supercomputer-climate-research/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00194-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935594
854
2.578125
3
Authors: Ellie Quigley, Scott Hawkins Publisher: Prentice Hall PTR
So, you want to learn Linux shell programming? This text and CD-ROM package covers the essential Linux shells (bash and tcsh) and three key Linux shell programming utilities (grep, sed, GNU awk). Ellie Quigley – Silicon Valley's top shell programming instructor – starts from scratch and gets you all the way to expert-level techniques. In 1991, Linus Torvalds, a Finnish college student, developed a UNIX-compatible operating system kernel at the University of Helsinki, Finland. What started as one man's hobby has become the fastest-growing server OS and a serious threat on the desktop (client-side) to Microsoft's Windows OS. When writing new utilities or adding enhancements to existing ones, Linux software is almost 100% compliant with POSIX standards in order to provide software portability across different platforms and a UNIX-like computing environment. Commitment to open software standards and protocols is one of Linux's major strengths.
Inside the book
The introductory text also discusses the definition and function of a shell, system startup and the login shell, processes and the shell, the environment and inheritance, and executing commands from scripts. This book is a big tome. The authors present 11 chapters, although the book could be logically divided into five major parts.
Regular Expressions – Before someone can fully appreciate the power of grep, sed, and gawk, a good foundation in the use of regular expressions and regular expression metacharacters is required. This section covers the definition of regular expressions, the concepts behind regular expression metacharacters, and how to combine them.
The grep Family – The UNIX grep family consists of the commands grep, egrep, and fgrep. Linux uses the GNU version of grep, which is functionally much the same as grep, but better. This section covers the grep commands, shows how grep works and how basic and extended regular expressions are used, and points out the differences between the Linux grep variants and their UNIX namesakes.
The Streamlined Editor – In chapter 4 the sed command is presented. This section starts with an introduction to sed and explains the basic difference between sed's streamlined, non-interactive way of working and the interactive approach of an editor such as vi. There are plenty of sed examples covering printing, deleting, substitution, ranges of selected lines, multiple edits, reading from files, writing to files, appending, inserting, next, transform, sed scripting…
The gawk Utility – The gawk section offers really in-depth coverage; the authors dedicate three chapters to the gawk utility. Chapter 5 covers awk, awk's format, formatting output, running awk commands from within a file, records and fields, patterns and actions, and regular expressions. Chapter 6 continues the discussion on awk with comparison expressions, and chapter 7 covers variables, redirection and pipes, closing files and pipes, conditional statements, loops, program control statements, arrays, and awk built-in and user-defined functions.
The Interactive Shell And Shell Programming – The rest of the book, in much the same way but respecting the differences between the two shells, covers the interactive bash and tc shells and programming with the bash and tc shell. Chapter 8 covers the interactive bash shell, and chapter 10 covers the interactive tc shell.
Among the topics covered in these chapters are command line shortcuts, variables, job control, manipulating the directory stack, command and file name completion, history, quoting, aliases, functions, standard I/O redirection, globbing, wildcards, etc. Chapter 9 covers bash shell programming and chapter 11 covers tc shell programming. Reading user input, arithmetic, positional parameters and command line arguments, conditional constructs and flow control, looping commands, functions, trapping signals, debugging, processing command line options with getopts, the eval command, bash options and shell built-in commands are some of the subjects discussed in these two chapters.
This book is written with clarity and conciseness, traits so often missing from computer books. Whether you're a system administrator, application developer, or power user, this book is for you. If you're a Linux newbie and want to sharpen your shell programming skills, this book is for you, although you'll probably also want to read a more general book about Linux, such as Running Linux by Matt Welsh et al. If you already have some UNIX experience you'll also want to read this book, because the Linux shells and utilities covered here are enhanced with a lot of new features not available in their UNIX counterparts. The CD-ROM contains all of the source code and data files from the book, and also a copy of the book in HTML format.
What I think of it
When you purchase this book, you're in effect purchasing a sliver of the combined knowledge of both authors in the Linux shell programming field. You'll find a lot of examples in this book. For almost any command, syntax or concept covered, there is a screen shot or a graphical explanation which stands as a proof of concept. On a scale of 1–5 (1 being the lowest possible grade) I give it a 5 and recommend this book to anyone who wants to master the Linux shell programming environment.
<urn:uuid:e7350856-873d-4980-b628-2baf4f080f0d>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2003/03/07/the-complete-linux-shell-programming-training-course/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00314-ip-10-171-10-70.ec2.internal.warc.gz
en
0.896252
1,127
2.859375
3
Cloud computing has changed the way the world can interact with and manage data, but is it as secure as its proponents say? Both Google and Amazon use cloud services to manage their massive networks, with Amazon's S3 service available to the public. Purchasing space on the S3 network is relatively cheap in terms of cloud storage and distribution, but having your information shared across multiple servers does cause some concerns. Businesses, non-profits, online PhD or education programs and individuals alike have begun to rely on cloud computing for their daily Internet needs, but is this new type of technology secure enough to handle sensitive information? The answer is a simple yes: it's just as secure as standard computing and hosting. However, the media sometimes gives the impression that it isn't. Often you hear about major networks like Google suffering huge outages across their cloud network, which can send users into a panic about their information. What many people don't realize is that while these outages do occur, they're not any more significant than a traditional outage. The data has not been "hacked," and the outage does not mean that the information was lost. Because of the risks of cloud computing, many major providers take their security much more seriously. Their policies and physical security on site are often much tighter than those of traditional hosting platforms, with employees dedicated to actively monitoring how the network is performing and taking action when an intrusion is detected. Just as cloud computing is a boon for society because of its redundant nature, the security of the network is that much tighter because of the same redundancy. Most providers offer some sort of encryption for your data so that as soon as it enters their servers, it is impossible for an outsider to read. The difference between cloud computing and traditional computing is that most traditional hosts offer nightly backups. Thus, if something happens, the latest backup is from the night before. This differs from cloud computing, which is designed to back up after each transaction, instantly. Therefore, should something fail within the service, the last backup dates from only moments before the last transaction. For problems that exist inside the cloud, a single fix can instantly resolve the problems experienced by consumers. If you are still worried about the security of your data, there are measures you can take to ensure that your information is safe. For example, you can encrypt your own information before you submit it. Some hosts offer their own form of data encryption to help protect it, but taking security measures into your own hands keeps you protected even if something should happen to the thousands of servers that have access to your data. Additionally, you can run your own backups of the information, so if the cloud network goes down you still have the data. The future of data storage online is most certainly cloud computing, as it provides instant access to data under a heavy load and redundant backups for when the inevitable failure should happen. The security measures that go into protecting this future will only become more stringent as time goes on, so there's really nothing to worry about.
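To make the "encrypt it yourself before you upload it" advice concrete, here is a minimal sketch using Python's cryptography package. The file names and the idea of keeping the key on your own machine are illustrative assumptions, not the procedure of any particular provider.

```python
from cryptography.fernet import Fernet

# Generate a symmetric key and keep it somewhere the cloud provider never sees
# (a local key file, a hardware token, a separate key-management service, ...).
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt the data locally *before* it is uploaded to the storage service.
with open("customer_records.csv", "rb") as f:          # hypothetical file
    ciphertext = fernet.encrypt(f.read())

with open("customer_records.csv.enc", "wb") as f:
    f.write(ciphertext)                                  # only this blob leaves your machine

# Later, after downloading the blob back, the same key decrypts it.
plaintext = fernet.decrypt(ciphertext)
```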
<urn:uuid:5e4a2f94-dd97-46d2-bcb7-b324f9645715>
CC-MAIN-2017-04
http://www.infosecisland.com/blogview/17690-Is-Cloud-Computing-Secure.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00342-ip-10-171-10-70.ec2.internal.warc.gz
en
0.968396
614
2.984375
3
In an ideal world I'd find out that the word wiki came from the old song called "Jam On It". Unfortunately, that isn't the case. Wikis, however, are just about the coolest things going right now, and if you don't know what they are then you need to get to the point where you do – immediately. A wiki is fundamentally a collection of information, but its distinguishing characteristic is that it can be edited by (usually) anyone. This means that the body of knowledge within the collection is able to grow very quickly, and can also be updated or added to on a regular basis. The largest out there, and for all intents and purposes the wiki that you should get up to speed on, is called Wikipedia. This living, breathing archive of information is everything that Encyclopedia Britannica wishes it was – and much more. In the old stodgy tomes, for example, it wasn't possible to look up Hip-Hop Slang, or Tool in quite the same way – if at all. If you value information in any way, shape, or form, make it a point to get familiar with both the concept of a wiki and Wikipedia.
<urn:uuid:463c1912-629e-4cd4-b188-525810a3029e>
CC-MAIN-2017-04
https://danielmiessler.com/blog/wiki-wiki-wiki-wiki/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00552-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945866
252
2.5625
3
Data science has increased in popularity over recent years as organizations realize that the challenges they face can be addressed in whole or in part by understanding the data available to them. The field of data science is still relatively new and is founded on several skill areas.
Core skill areas
Data science includes a number of key capabilities:
- Mathematics and statistics help provide a rigorous analytical framework that can be invaluable when making decisions to address inherently amorphous business challenges
- Computer science supports programming as it provides the theory to formalize approaches for real-life data challenges
- Domain knowledge supports the development of expertise by providing reference points for hypothesis generation, whether data driven or by expert judgment
Combining these three core skill areas with the right technology and processes enables data scientists to help organizations gain value from data (see figure 1).
Moving through skill boundaries
The interplay of these skill areas represents increased capability and learning trajectories for people. For example, an analyst with domain knowledge and coding skills can write programming scripts to work with more data and automate key tasks to become more self-sufficient and capable. As another example, the difference between a good model and an excellent one can be the insight from a domain expert (for example, a marketing professional, physician, or insurance specialist). Machine learning is attracting increasing interest due to applications ranging from identifying objects in images to translating human language. It sits squarely between mathematics and computer science and requires strong knowledge of both to generate the greatest results. Given the wide range of talent areas associated with data science, it is vital to build teams with complementary skills. We are passionate about analytics education at Genpact. Moreover, we have an interest in fostering a data-driven culture with our partners, as it enables improvements ranging from the merely functional to the industry-leading. As more teammates work with internal data repositories, data veracity increases through applied data testing, which can then support increasingly advanced projects, providing the kind of differentiation that has an impact on an industry and on society.
<urn:uuid:260ff100-a0a2-47b2-8c09-b9bb297e2ef2>
CC-MAIN-2017-04
http://www.genpact.com/home/blogs/bloginner?Title=The+interplay+of+data+science+skills+areas&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+GenpactBlogs+%28Genpact+Blogs%29
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00305-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935196
398
2.921875
3
It is assumed that Apache is a more secure web server than its Microsoft counterpart, Internet Information Services (IIS) (read more about IIS security here). Whether this statement is true or not depends on whose facts you believe; however, any web administrator who is under the impression that Apache on its own is completely secure is in for an unfortunate shock. Apache, like any software, is susceptible to vulnerabilities. Having the largest share of the web server market makes Apache, and those who use it to power their web sites, vulnerable to the different threats that exist. As cyber criminals have shifted the focus of their attacks from defacement and delinquency to the actual theft of dollars and data, web administrators have to be more vigilant than ever when it comes to securing their web applications, regardless of what server software they are running. Despite the belief that Apache is more secure than its competitors, the latest version of the software, Apache 2.2, has had 30 known vulnerabilities patched since May of 2002. These vulnerabilities range from giving attackers the ability to coordinate Denial of Service attacks that bring down a web site to opening the door to Cross-Site Scripting (XSS) attacks. Like any web server, if Apache is not configured properly there is a good chance that it will be open to attackers who use exploits like SQL Injections and PHP File Includes. Since Microsoft's IIS was marketed as being easier to install, configure, and manage, Apache's developers have made great strides in making their web server a more user-friendly product for those who may have shied away from the GNU/Linux shell in the past. However, as the default installation has become easier, it has also become less secure due to unnecessary services being installed. The more services that are running, the higher the risk of a vulnerability being exploited. If the administrator is unaware of a specific service, then they may not know what to watch for in order to prevent an attack. Even a web server that is updated with all the latest patches, monitored closely, and configured properly can be compromised by a zero-day attack. When this happens, it can be days before a fix is found, and the attacked site could suffer from any number of issues during that time. Unless the web administrator knows what patterns to look for in illicit web traffic and does nothing else but watch this traffic, he or she will not be able to spot this type of attack before it is too late. While efforts to secure Apache may be high on a web administrator's priority list, if the applications installed on the server are not treated with the same consideration, the site is vulnerable to a number of threats. Some of the more common methods of attack against the most popular web applications are SQL injection, Cross-Site Scripting, Denial of Service and remote file-include exploits. Like any server, certain steps need to be taken to harden the operating system against attacks. While malware prevention, Intrusion Detection/Prevention Systems, network firewalls, and all of the other tools and techniques help prevent some attacks, they don't adequately prevent attacks launched against any third-party applications that have been installed on the server. Apache's developers realize the need to protect their product with a web application firewall. In response to the security threats that exist, users can install a module called mod_security. mod_security is a plug-in that installs a web application firewall on Apache to help protect against certain threats.
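For a sense of what such rules look like, here is a small, purely illustrative ModSecurity snippet. The rule ID, message and pattern are invented for this example; real deployments would normally start from a maintained rule set such as the OWASP Core Rule Set rather than from hand-written rules, and the exact directive syntax depends on the ModSecurity version in use.

```apache
<IfModule security2_module>
    SecRuleEngine On
    # Illustrative only: deny requests whose arguments contain an obvious SQL injection probe.
    SecRule ARGS "@rx (?i)union\s+select" \
        "id:100001,phase:2,deny,status:403,msg:'Possible SQL injection attempt'"
</IfModule>
```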
While in the hands of a security expert mod_security can be a useful tool in the fight to protect a web server, it does require the user to understand how to write complex rules, accept the basic supplied rule set, or purchase rules for a small fee. dotDefender's unique security approach eliminates the need to learn the specific threats that exist on each web application. The software that runs dotDefender focuses on analyzing the request and the impact it has on the application. Effective web application security is based on three powerful web application security engines: Pattern Recognition, Session Protection and Signature Knowledgebase. The Pattern Recognition web application security engine employed by dotDefender effectively protects against malicious behavior such as the attacks mentioned above, and many others. The patterns are regular expression-based and designed to efficiently and accurately identify a wide array of application-level attack methods. As a result, dotDefender is characterized by an extremely low false positive rate. What sets dotDefender apart is that it offers comprehensive protection against threats to web applications while being one of the easiest solutions to use. In just 10 clicks, a web administrator with no security training can have dotDefender up and running. Its predefined rule set offers out-of-the-box protection that can be easily managed through a browser-based interface with virtually no impact on your server or web site's performance. Unlike mod_security, dotDefender runs as a Security-as-a-Service solution and is able to provide protection to web servers directly out of the box – whether the admin has an extensive background in security or just a minimal amount of knowledge on the subject. With the dotDefender web application firewall you can avoid many different threats to web applications, because it performs a deep inspection of your HTTP traffic and checks packets against rules – such as allowing or denying particular protocols, ports, or IP addresses – to stop web applications from being exploited. This deep packet inspection helps protect against zero-day attacks, as traffic that appears to be illicit can be stopped using the pattern recognition features. Architected as plug & play software, dotDefender provides optimal out-of-the-box protection against DoS threats, Cross-Site Scripting, SQL Injection attacks, path traversal and many other web attack techniques, without the need to perform expert-level configuration. The reasons dotDefender offers such a comprehensive solution to your web application security needs are:
<urn:uuid:0cfff34b-5bef-47dd-aaa6-e0d3f88d1195>
CC-MAIN-2017-04
http://www.applicure.com/solutions/apache-security
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00123-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936271
1,196
2.765625
3
Advances in technology can now answer many difficult questions in a fraction of a second. The multiplexer and demultiplexer (Mux/Demux) is one such sophisticated creation, helping to transfer data extremely fast. Many suppliers offer good-quality Mux/Demux solutions that suit all the main network hardware brands; these products come with solid warranties, are very reliable, and can be ordered by mail when required. A multiplexer is a device that selects among several analog or digital input signals and forwards them onto a single line. It allows many signals to share one resource or piece of equipment, reducing the need for one device and one line per input signal. A multiplexer with multiple inputs has select lines that determine which input pin is connected through to the output pins. The job of the multiplexer is to carry a larger amount of data across the network within the available time and bandwidth. The demultiplexer is the complementary device: it takes a single input signal and routes it to one of many output lines. Most multiplexers are paired with a demux at the far end to recover the data. The symbol representing a mux is an isosceles trapezoid, with the longer side carrying the input pins and the narrower side the output pins. An optical Mux/Demux works largely by combining and separating two optical signals carried on different bandwidths, which can potentially double the data capacity installed on the fiber plant. A single mux terminal passes both bandwidths; a two-fiber demux separates out each bandwidth signal and passes it onto a dedicated fiber strand, and because the cable is passive it allows flow in both directions. Both mux and demux are transparent to most networks and protocols. Signal reflections are suppressed by the angled polished connectors present in the interface, and the use of these cables helps achieve high isolation and low polarization dependence. Fiberstore is a Chinese company that specialises in selling quality network connection parts. All of our multiplexers and demultiplexers carry a strong warranty, are very competitively priced and are of the best quality. For instance, the 100G DWDM Mux Demux can be used to provide a 100G transport solution for a DWDM networking system. The most popular configurations are 4, 8, 16 and 32 channels, and we also provide 40 and 44 channels. These DWDM modules passively multiplex the optical signal outputs from 4 or more electronic devices, send them over a single optical fiber, and then de-multiplex the signals into separate, distinct signals for input into the electronics at the opposite end of the fiber optic link. Our standard 100G DWDM Mux/Demux package types are the ABS box package, the LGX package and the 19-inch 1U rackmount. We also supply custom packages to meet your requirements. Here you can buy DWDM Mux Demux modules with confidence from Fiberstore's worldwide online shop for fiber optic products.
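The select-line idea described above is easiest to see in its digital form. Below is a tiny, purely illustrative Python sketch of a 4-to-1 multiplexer and the matching 1-to-4 demultiplexer; the optical DWDM devices discussed here do the analogous job with wavelengths rather than select bits, so the code is an analogy, not a model of the actual hardware.

```python
def mux4(inputs, select):
    """4-to-1 multiplexer: forward the one input chosen by the 2-bit select value."""
    assert len(inputs) == 4 and 0 <= select < 4
    return inputs[select]

def demux4(signal, select):
    """1-to-4 demultiplexer: route the single input onto the selected output line."""
    outputs = [None, None, None, None]
    outputs[select] = signal
    return outputs

# Four data sources share one line, then are separated again at the far end.
line = mux4(["a", "b", "c", "d"], select=2)   # -> "c"
print(demux4(line, select=2))                 # -> [None, None, 'c', None]
```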
<urn:uuid:3aae164e-c61e-4186-b0d4-837bcb28c3e5>
CC-MAIN-2017-04
http://www.fs.com/blog/a-competent-input-and-output-selector-that-you-could-choose.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00545-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945762
725
3.046875
3
How to declare html tags in a COBOL program in order to generate an Excel file with the data. The data from the mainframe has to be displayed in Excel with its headers. Please let me know if anybody could help me on this
Joined: 29 Oct 2010 Posts: 110 Location: Puerto Rico
Good day to all! Like Bill has mentioned, you don't have to use HTML tags to format data in Excel. You could use any special character as a delimiter as long as you specify it during the process of importing the data into Excel. Normally you will use the comma as a delimiter (CSV). In the COBOL program you will format your output record with a comma between each field; that includes your first record, which will contain the heading for each cell. After running your program you will download your output file to a text file. Then you will open an Excel spreadsheet where you will import your external file using the comma as the delimiter. You've got your work already cut out.
Be aware that COBOL does not "do" HTML. You can code up the tags as variables in COBOL, but this is not something COBOL will do for you -- you will need to hand-code each tag as a COBOL variable. COBOL handles XML, but not HTML.
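For illustration, the kind of comma-delimited output record described above might look like this once it has been downloaded as a text file (the field names and values here are invented):

```text
EMPLOYEE-NAME,DEPARTMENT,SALARY
SMITH JOHN,A10,50000
DOE JANE,B20,62500
```

Importing this file into Excel and choosing the comma as the delimiter puts each field in its own column, with the first record supplying the column headers.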
<urn:uuid:69ae5426-654d-4d65-b108-b39bf451a836>
CC-MAIN-2017-04
http://ibmmainframes.com/about60357.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00205-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923833
267
2.71875
3
Definition: The spatial k-d tree is a spatial access method where successive levels are split along different dimensions. Objects are indexed by their centroid, and the minimum bounding box of objects in a node are stored in the node. See also extended k-d tree. Note: After [GG98]. If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 17 December 2004. HTML page formatted Mon Feb 2 13:10:40 2015. Cite this as: Paul E. Black, "skd-tree", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/skdtree.html
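A minimal sketch of the node structure this definition implies might look like the following (illustrative Python, not part of the cited dictionary entry): each level splits on a different dimension, objects are assigned to a subtree by their centroid, and each node keeps the minimum bounding box of the objects stored beneath it.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Point = Tuple[float, ...]          # centroid coordinates
Box = Tuple[Point, Point]          # (lower corner, upper corner)

@dataclass
class SpatialObject:
    centroid: Point                # decides which subtree the object goes into
    bbox: Box                      # the object's own spatial extent

@dataclass
class SkdNode:
    depth: int                     # split dimension = depth % k, cycling through dimensions
    split_value: Optional[float] = None
    mbb: Optional[Box] = None      # minimum bounding box of all objects under this node
    objects: List[SpatialObject] = field(default_factory=list)   # leaf contents
    left: Optional["SkdNode"] = None
    right: Optional["SkdNode"] = None

    def split_dim(self, k: int) -> int:
        return self.depth % k
```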
<urn:uuid:cb96e829-c039-493e-83cf-426397a112b6>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/skdtree.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00141-ip-10-171-10-70.ec2.internal.warc.gz
en
0.876213
181
2.796875
3
Multiple Cores and I/O Differences
The Cell makes up for its simpler Power core by including eight "synergistic processing units" that can work on different tasks in parallel. These multiple cores help the processor run at high clock rates and with floating point operations that can reach 256 gigaflops (billion floating-point operations per second). However, the Cell isn't IBM's only multicore processor. The Power5 architecture is multicore, as is the forthcoming PowerPC 970MP, which may be Apple's choice to receive its PowerPC G6 branding. The chip contains two Power cores that can operate independently and simultaneously. In some reports, the Cell is described as having nine cores, leaving the impression that the Cell is an advanced version over the two-core 970MP. The Cell processor's single Power core controls the eight synergistic processing units in a master/slave relationship. In the Cell, the Power-based core is the taskmaster, feeding subtasks to the synergistic processing units. The result of these differences is that the 970MP and Cell excel in different types of tasks. For instance, IBM said the Cell's multiple cores will let it run multiple operating systems simultaneously using software virtualization techniques. Each operating system will be run natively, not in emulation. The Cell also will offer outstanding performance for 2-D and 3-D graphics and video – 10 times the performance of traditional PC processors, IBM executives said in their Cell briefing. To read more about the Cell's virtualization scheme and what that could mean to developers, click here. The PowerPC architecture, currently expressed in the 970 family, on the other hand, is designed to run the varied tasks required in a personal computer. Krewell said the first generation of Cell processors appears ill-suited to running a desktop or a mobile PC. "As a general-purpose processor, the Cell might run 10 times slower than what IBM is claiming," he said. Meanwhile, the Cell's bus, which is designed by Rambus Inc., is also application-specific, IBM said. "The Cell's bus is actually a little more limited because it's not designed to be a general-purpose bus," Krewell said. "The Cell [was] optimized for Rambus I/O, which might not be appealing for a mainstream box." The Rambus specs are impressive. For memory, the Cell uses Rambus XDR (extreme data rate), which can provide a total bandwidth of 25.6GB per second. For I/O, the Cell uses FlexIO to pass information outside of the processor. According to Rambus, FlexIO has a maximum bandwidth of 76.8GBps, giving the Cell's bus a theoretical total bandwidth of more than 100GBps. No PC processor on the market comes close to that performance. This bandwidth, together with the potential for high gigaflops speed, could make the Cell a graphics powerhouse – which is why Sony is betting the future of the PlayStation platform on it. Assuming the Rambus architecture could be made useful as a general-purpose bus in a PC, the bandwidth probably would saturate today's Mac hardware. The change would require Apple to redesign its bridging controller to interface with the Rambus technology. While this hardware design would not be as difficult as porting Mac OS X to the new hardware, it would be another hurdle to overcome. Next Page: Future Cells, future Macs. On the other hand, the two cores in the 970MP are Power-based, each acting separately to handle tasks and operations.
<urn:uuid:86a10440-b85e-4b86-be4d-d84649e8120c>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Apple/Is-There-a-Cell-Processor-in-Apples-Future/1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00563-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946608
725
2.703125
3
It took only 148 days, not hundreds of thousands of years, for security researchers in Japan to crack the 923-bit key to a next-generation encryption protocol. Not that it was easy. The team from Fujitsu, Japan's National Institute of Information and Communications Technology (NICT) and Kyushu University ran advanced cryptanalysis techniques on 21 PCs with a total of 252 processing cores working in parallel to crack a document encrypted using pairing-based cryptography (PBC). Unlike public-key cryptography, on which most current cryptographic schemes are based, pairing-based cryptography doesn't rely on a single string of numbers or key-issuance authority for its encryption. Instead it uses two groups of numbers that generate a third set when run through any of a series of formulae. The encryption "key" comes from running values from each of the first two groups through a formula that delivers a result found in the third group, then removing one of the two original groups of numbers. The sender of an encrypted email might use his or her own list of numbers "A" and a list supplied by the recipient of the email "B," to generate a third set "C" using a pre-defined formula. The recipient can then decrypt the email using only number groups B and C, though with more difficulty than if he or she possessed groups A and B. "The known implementations of these pairings – the Weil and Tate pairings – involve fairly complex mathematics," according to notes from a 2004 presentation at MIT by lecturers Ran Canetti and Ron Rivest. "Fortunately, they can be dealt with abstractly, using only the group structure and mapping properties." Because the encryption/decryption process is so complex and the relationship among the cryptographic-number groups is so flexible, PBC can be used for a variety of different functions, including crypto that would require all three number groups to crack, encryption based on the identity of one participant or search encryption that would allow users to search a database for a specific answer without decrypting the whole database. PBC has been the subject of enthusiastic academic discussion for more than a decade, but has so far been too complex or too ill-defined to be used as an effective, practical encryption method. Fujitsu's experiment was designed partly to define just how secure a PBC encryption can be – how long it takes to crack, that is – and partly to jump-start practical development of PBC into commercial products. The cryptanalysis techniques Fujitsu used are designed not only to break the encryption, but to allow the crackers to emulate the authority of the admin who created it – a much harder test. "As a result, for the first time in the world we proved that the cryptography of the parameter was vulnerable and could be broken in a realistic amount of time," according to Fujitsu's announcement of the PBC project. Cracking a 923-bit encryption key is a feat in itself, but it's also a world record. The previous record was a successful 2009 attack on an encryption key 676 bits long. Increasing the key length by roughly a third made the attack vastly more difficult: cracking the new code took "several hundred times the computational power" that cracking the old one did. There is no indication when or if either PBC or 923-bit-long encryption keys will be used for encryption in commercial security systems. Read more of Kevin Fogarty's CoreIT blog and follow the latest IT news at ITworld. Follow Kevin on Twitter at @KevinFogarty.
For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
<urn:uuid:68c6f991-60b0-4b9c-9c4e-aa5d64623764>
CC-MAIN-2017-04
http://www.itworld.com/article/2722189/security/fujitsu-cracks-923-bit-painfully-complex-crypto.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00251-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935282
763
2.90625
3
Philosophy and Its Implications for Technology Back when I was in college, I had a few friends who majored in philosophy. I always joked that they were in fact majoring in unemployment. They usually greeted this gold nugget of humor with frowns, rolling eyeballs, the occasional raspberry and, more often than not, a rejoinder about my own decision to major in history. Still, I always understood that the ideas expressed in philosophy were more than just meaningless abstractions that could be taken for granted or even ignored. At this point, you might be wondering, “What does any of this have to do with me?” On its surface, IT seems to be a fairly straightforward (though hardly simple) enterprise, and not in need of theoretical assessments. In this respect, it’s not unlike mathematics. Yet even math—essentially a language of numbers—has had its share of philosophical figures. Technology might benefit from similar reflective examinations, so in the spirit of the famed Socratic axiom “Know thyself,” I’ll attempt to extrapolate philosophical models for IT. Socrates, Hegel and the Evolution of Knowledge Most discussions of philosophy begin with that great Greek thinker, Socrates, the teacher of Plato. Socrates would always question the ideological positions of his interlocutors, who were frequently among the most prominent citizens of the city-state of ancient Athens. It was through this technique—known as the dialectic—that he got them to think critically about their own ideas and values. The dialectical approach was further developed many centuries later by the German philosopher Georg Wilhelm Friedrich Hegel. He claimed that a new concept or vision (a thesis) would arise, which would then be countered by an opposing view (its antithesis). However, instead of one of these “winning” over peoples’ minds, the two would be fused together through various stages of intellectual, pragmatic and moral compromise to form a synthesis. This ideal seems to fit certain aspects of IT, such as with open-source collaborative development processes, but ultimately falls short in terms of scale and speed. There are simply far too many things happening far too quickly in technology for the dialectical framework to apply in any meaningful way. Aristotle’s Chaotic Web of Information Perhaps the most appropriate philosophy for IT is chaos theory, which is by nature unpredictable, dynamic and highly conditional. In spite of the name, it does not describe a state of chaos, but rather the emergence of new and unanticipated arrangements that could not have been foreseen because there are so many factors in play at all times. It is paradoxically orderly and uncontrollable at once. Although chaos theory was refined and redefined in the 20th century by French scientific philosopher Henri Poincare, its roots lie in the works of Aristotle. He argued that any thoughts about the way the world works must be shaped and organized according to the multiplicity of empirical data points that comprise them, emphasizing causal relationships. Because these are always in flux, the theories that explain them will constantly change as well. Civilization today is continually shifting, adding new ideas or methodologies while it drops others. Technology has certainly contributed to this state of affairs, but is subject to it too, having helped set in motion a situation it no longer can control—if it ever could have. 
While there are many examples of this, the case in point par excellence is the Internet, which owes much more to Aristotle than anyone seems to acknowledge. Exploding Plato’s Static World According to Plato, we live in a world defined by metaphysical ideas and beliefs, which are eternal and unchanging. He believed that the tangible elements of the physical world were subservient to these everlasting forms, as defined by the higher realm of logical thinking. Thus, ideally, pillars of civilization like government and economy would remain in a fixed state. Of course, in the world of IT, nothing stays the same—it could be described as being in a condition of perpetual upheaval. And the industry impacts all other institutions around it as well. As Larry Downes and Chunka Mui wrote in their book “Unleashing the Killer App: Digital Strategies for Market Dominance,” technology, which progresses exponentially, tends to accelerate the development of the rest of humanity, which otherwise would advance at a slower pace. Brian Summerfield is Web editor for Certification Magazine. Send him your favorite study tips and tech tricks at email@example.com.
<urn:uuid:d62d66a8-77a2-496d-96e0-dbec2f47233d>
CC-MAIN-2017-04
http://certmag.com/philosophy-and-its-implications-for-technology/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00067-ip-10-171-10-70.ec2.internal.warc.gz
en
0.967347
925
2.71875
3
The Internet of Things (IoT) was the most hyped technology of 2014 according to Gartner, and it's easy to see why. Cisco's recent IoT study suggests that the number of connected devices is expected to grow to 50 billion by 2020, leading to a global economic impact of $10 trillion. Connected devices are set to change the very fabric of the world we live and work in. However, the buzz around gadgets such as connected fridges and smart kettles being developed by consumer goods manufacturers has been a distraction from the IoT's true potential. Indeed, recent research from Embarcadero Technologies revealed that just 16% of those developing IoT solutions are targeting consumers. The real focus should be on connecting consumer gadgets, such as smartphones and wearables, with capital-intensive physical infrastructure or assets, such as plants, hospitals, electric grids, field vehicles and pipelines. Until a large part of these assets is connected to the handheld devices, real benefits of IoT such as improved uptime, efficiency and asset utilisation cannot be achieved. While many technologies depend on consumer needs and their rate of adoption, IoT will largely be governed by business use cases that will create virtuous adoption cycles across an ecosystem. For example, in the medical profession, connecting wearable devices with doctors and hospitals could be used to remotely monitor patients' vital signs, whilst sensors can help clinicians figure out whether elderly or vulnerable patients have taken their pills on time. This would enable doctors to monitor their patients remotely, reducing the need for hospital and GP surgery visits whilst improving care and decreasing the costs of delivering it significantly.
Time to realise the true benefits of IoT
Today we are only seeing the tip of the iceberg when thinking about the vast potential of the IoT. The real innovation is set to take place behind the scenes in the Industrial Internet of Things (IIoT). The real business imperatives arising from adoption of IoT will be operational efficiency and incremental revenue generation opportunities. Connected devices and network sensors will reinvent and optimise the efficiency of business processes and global supply chains in sectors such as manufacturing, healthcare, energy management, transportation, agriculture, and countless others. The IIoT will give these organisations a greater ability to control the machines, factories and infrastructure that form their physical operating environments. As more devices and sensors begin to come online, there will also be a significant acceleration in the volume, variety and velocity of data being created. Combining the insights gleaned from analysis of this 'big data' with the greater level of control the IIoT enables will allow businesses to automate processes and reduce equipment downtime with predictive maintenance. This will help them to improve product quality, increase throughput and realise potentially enormous cost-savings.
For example, an early pilot of IoT along with big data to help optimise the use of equipment at just one of Intel's manufacturing facilities generated millions of dollars in forecasted cost savings, highlighting the potential of the IIoT and big data analytics when rolled out on a larger scale. However, this is just one early example; the IIoT is still in its infancy and there are a number of key challenges that must be overcome before businesses can begin to realise its true and full potential.
Getting the IoT out of the house
First and foremost, there are the obvious concerns over security and privacy. There is barely a day that goes by without a high-profile data breach or large-scale cyberattack in the headlines. As more endpoints are connected to the internet, the organisational attack surface will be increased dramatically, exposing organisations to an even greater risk from these cybercriminals. As such, current cybersecurity measures will soon become inadequate. Organisations embracing the IIoT must look to develop new security frameworks that span the entire cyber physical stack, from device-level authentication to application-level security. The other major barrier to overcome is the interoperability between existing IT infrastructure and systems, which has the potential to ramp up the costs and complexity of IIoT deployments significantly. The industrial internet will rely on an interconnected digital ecosystem that enables machines and core physical infrastructure components to communicate and share data seamlessly. As such, it isn't enough to simply layer IoT technologies on top of the existing infrastructure; those looking to embrace the IIoT successfully must lay the groundwork by digitalising their operating environments.
This is no small task, and so it is vital to ensure that you are able to walk before you try to run. Legal and political structures will need to collaborate with global enterprises and respond rapidly and perhaps with measured force to such situations. Organisations big or small will need to drive the technology test-beds and have to keep room within their IoT budgets to allow for rapid prototyping, use-case testing and data evaluation so that they can quickly turn the ship in case the business case falters.
Drawing a roadmap for the IoT
Once these initial barriers and any early teething troubles of the IIoT have been overcome, we're likely to see a dramatic shift in the business world. In addition to the huge operational efficiencies set to be introduced, three key trends will begin to emerge:
Industry borders will be redrawn
The connection of ecosystems that currently operate in isolation will redefine the boundaries that exist between industries. Successful IoT implementation will require a new level of partnerships, mergers and acquisitions that we may not consider a "strategic fit" today. Technology leaders such as Microsoft, Apple and Google may find themselves partnering with companies that manufacture toothbrushes, hairdryers, meters and fire alarms, to name a few.
The acquisition of Nest by Google is a case in point, where an individual use-case product required the support of a larger ecosystem. The prevalence of data sharing will enable software platforms to draw insights and identify parallel lines between businesses that never realised they were connected. For example, agricultural supply chains will see significant efficiency gains when sensors deployed in a farmer's field to monitor the soil conditions can forecast crop yields more accurately. This insight will enable the rest of the supply chain to become more efficient, as manufacturers can automatically adjust the throughput of packaging to ensure shortfall or surplus is limited, whilst logistics firms can plan vehicle routes in advance to ensure the most effective utilisation of their fleets.
Emergence of the 'outcome economy'
As we become increasingly able to directly control and even automate the physical world through technology, we'll have a stronger grip on the results of business processes and activities than ever before. This will place an ever-growing emphasis on end products and the outcomes of business activities, meaning that traditional models of services and fees will no longer be as important as they once were. Some 80% of IoT revenue will be derived from services. It's the end result that will begin to command a fee, which will force businesses to reassess the ways in which they work. In many cases, this will mean building new business models and software platforms that will help create, distribute and monetise outcome-based services at an unprecedented speed and scale.
Humans and machines will become colleagues
It may sound like an idea from a sci-fi movie, but as the capability of machines continues to evolve and become more complex, they will increasingly act as collaborative partners for humans. As we grow increasingly used to the helping hand machines can lend, working practices will evolve to include them, leading to a huge upturn in productivity. This will enable people to enjoy more engaging working practices, as mundane and routine tasks are transitioned to machine counterparts. Of course, there is a long road ahead before these visions become a reality. The opportunity is huge and the canvas is vast. The key to winning lies in picking the right partners and driving change internally. The big winners will be the owners and partners of new platforms and business models, who can harness the network effect inherent in these models to create new kinds of value and revenue streams. The introduction of smarter ways of working through a seamlessly connected ecosystem is set to yield considerable economic benefits over the coming years. It's time to put the adolescent dreams of the IoT behind us and set it to work in the industrial internet.
<urn:uuid:304cad51-bb67-4cd2-82ae-0c3625d21d1c>
CC-MAIN-2017-04
https://www.hcltech.com/blogs/sbanerjee/opinion-forget-connected-fridges-it%E2%80%99s-time-get-serious-about-iot
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00369-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941177
1,790
2.5625
3
A team of researchers from the Washington University School of Medicine and the University of Illinois have developed dissolvable implants which could be used to monitor patients' brains without the need for invasive surgery. In intensive care units around the world, patients with severe brain trauma require close monitoring to keep track of the temperature and pressure inside the skull. Armed with this information, doctors are better able to prevent further injury. Although there are existing methods of retrieving this data, they require invasive surgery and the implantation of devices which need to be removed at a later date. The research team, led by Dr Wilson Ray and Dr Rory Murphy, have developed wireless sensors made of polylactic-co-glycolic acid (PLGA) and silicone, which can transmit data, including levels of temperature and pressure, accurately from within the skull. The devices have been shown to dissolve completely in a saline solution after a few days. In an article published in the science journal Nature, the researchers explain how these implants avoid triggering the immune response sometimes associated with biomedical solutions, while also not allowing possible infections to build up as a result of their presence. Most important, though, is the avoidance of invasive surgery, which, especially in patients with serious brain trauma, can often cause complications. Cumbersome wires are replaced by sensors the size of a grain of rice, and, because the sensors dissolve in a matter of days, there's no need to schedule potentially dangerous surgery to remove them. It's expected that this technology can be used in different procedures with different organs. Dr Murphy says that "The ultimate strategy is to have a device that you can place in the brain – or in other organs in the body – that is entirely implanted, intimately connected with the organ you want to monitor and can transmit signals wirelessly to provide information on the health of that organ, allowing doctors to intervene if necessary to prevent bigger problems." Biomedical engineering expert Professor Christopher James, a senior member of the Institute of Electrical and Electronics Engineers, told Internet of Business that the wider applications of this technology have the potential to "truly revolutionise the way in which we can personalise measurements from the human body." Dissolvable sensors mean that, long term, "there is no foreign body which can cause issues such as scarring and infection".
<urn:uuid:682a8644-5328-42fb-87ef-9d88267f151c>
CC-MAIN-2017-04
https://internetofbusiness.com/dissolvable-iot-brain-sensors-could-aid-patient-recovery/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00280-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947902
481
3.4375
3
This is the second in a series of blog posts that will look at how common objections to the use of Bayesian networks can be overcome by clear thinking and appropriate models. The first post showed that even concepts that seem vague or imprecise can be represented in a probabilistic model. This post addresses another common objection, that the knowledge engineering required to specify a Bayesian network is often a prohibitively expensive task. Specifying a complex Bayesian network does require specifying a large number of parameters, specifically the entries on all of the required conditional probability tables (CPTs). Where do these parameters come from? In some applications it is possible to learn the parameters from data. This can work, but it is only possible when the data sets required for learning are available. Another possibility is that the parameters are defined through knowledge elicitation from domain experts. This can also work, but it may require an expensive effort. Knowledge engineering may require identifying and obtaining access to one or more domain experts, as well as statistics experts who understand the requirements of the Bayesian network. Multiple knowledge engineering sessions may be required to elicit and then refine the values. It is also possible to learn parameter values by combining expert knowledge with available data. At the end of the day the model and the parameters do have to be defined, and some potential users are scared away from using Bayesian networks because this step is perceived to be a prohibitively expensive bottleneck. However, in many cases it is possible to dramatically reduce the knowledge engineering effort to develop a model and define the parameters for a Bayesian network. I will illustrate an approach for this by introducing a toy problem, and defining a small Bayesian network to solve it. The approach has three components: an appropriate model, a recognition that neither perfection nor precision is required and an iterative process that builds, tests and refines the model. The first component is to build an appropriate model. When the problem involves reasoning about things operating in some domain, it often pays to think first about the objects, or agents, in the domain and build a model that represents them, their attributes and the relationships between them. The attributes are typically represented as random variables; the relationships may be random variables or may be represented by the graphical links in the Bayesian network. We do this initially without any regard to what observations we may have or expect to have. Then, once there is a model of the objects/agents in the domain, we extend the model to include the observations that are available to us or that might become available. The second part of the approach is a willingness to accept – even to embrace – simplicity, and a lack of precision. It is not necessary, especially with the first version of a model, to include every possible random variable in the model, or to require precision in specification of the model parameters. It is important to capture significant relationships, but it’s much easier to get a simple model working and then extend it than it is to create a complex model from scratch. The next step in the approach is spiral development. We build a small simple model of an important part of the problem, test it by interacting with it to make sure the model responds in believable ways, then make refinements or extensions until a useful model is achieved. 
With that introduction to the process, here is the toy problem – which uses the classic ‘blind-men-and-an-elephant‘ example: Four blind men are walking on the savanna in Africa. They encounter an elephant. The first blind man has bumped into one of the elephant’s legs. He explores it with his hands and says: “I have found a tree.” The second blind man encounters one of the elephant’s ears: “No, it is a large palm leaf.” The third encounters the elephant’s trunk: “It is a python!” And the fourth blind man reaches out and finds the elephant’s tail: “No, you are all wrong – it is just a rope, hanging from a tree.” So, how can we combine these observations and reason that this is an elephant? To build a model of this problem, first identify the objects, or agents, in the problem domain. We do want to keep things simple at the start, so we can identify that there is some object that the blind men have encountered, and that there are the blind men themselves. Let’s start with the object we wish to reason about: the object that the blind men have encountered. Its key attribute is its type. So we can start with a random variable that represents the type of the object. The object type is a random variable with multiple states. From the problem description, the possible states include: ‘tree,’ ‘palm leaf,’ ‘rope,’ ‘python’ and, of course, ‘elephant.’ In Netica, a commercial Bayesian-network development package from Norsys Software Corp., it looks like this: Now we consider the blind men. The important attribute for them is their observation of the object. The blind men are the same, so we only need to specify the observation once. The observation is a random variable with four states: ‘tree,’ ‘palm leaf,’ ‘rope’ and ‘python.’ Because it is an observation of the object’s type, we model it in the Bayesian network as a child to the ‘Object Type’ node: The network above still has the default probability distributions assigned by Netica. To complete the model, we need to define the parameters of the local probability distributions. That is, we need a prior distribution across the states of ‘Object Type,’ and a conditional distribution for the ‘Blind Observation’ given the object type. These numbers are not specified in the problem description, so where do they come from? It would certainly be possible to devote considerable time and energy to defining the numbers by reviewing literature, conducting surveys, designing and implementing randomized experiments with blind men and African savannas or interviewing experts. In some problems that kind of effort may be appropriate. But for this model, and especially for the early versions of many models, it is not necessary to agonize over the process of defining the numbers needed for the required probability distributions. A lot of anecdotal evidence from constructing many Bayesian network models suggests that reasonable numbers will give reasonable results. Let’s start with the prior distribution for the object type. What follows is a stream of consciousness thought process that will consider the problem and end up with a prior probability distribution for Object Type: The model is developed from the ‘world’ defined by the problem description. In that world, we can reasonably assume that at least all of those states do exist, so there will be no prior probabilities of zero. We can envision an African landscape, with scattered trees, where some of them are palm trees. There is at least one elephant and elephants are usually together, in groups. 
And there must be at least an occasional rope hanging from a branch, plus the occasional python. Mentally examining this imagined landscape, we see lots of trees, a number of palm trees with large leaves and a parade of elephants. We probably can’t see any ropes or pythons, but we know that they are there. That suggests there are more trees than palm leaves, more of either of them than elephants and the occasional rope or python. We do not need to specify actual probabilities; just articulating likelihoods for the different types is sufficient. What is important is the relative size of the likelihoods we assign to the different states. Let’s say 40 trees, 20 palm leaves, five elephants, and two apiece for ropes and pythons. (Note that a wide range of different numbers will work for this problem.) In the order that we defined the states, that yields the likelihood vector [40, 20, 2, 2, 5]. We can enter these numbers into the distribution table in Netica, and then use Netica’s Table | Normalize function (which scales them so that they sum to 100%) to turn those likelihoods into a prior probability distribution. (The probability distributions in Netica are typically shown as percentages.) We next need to define the conditional probability distribution for a blind observation given the object type. That is, we must fill out a conditional probability table with one row for each object type and one column for each possible observation. For each row in the table, we must answer the question: What will a blind man observe if he encounters that object type? It would be possible to conceive of extensive experiments to collect data that would answer this question, or intense knowledge engineering sessions to try to elicit probabilities from knowledgeable experts. But often, especially in the early versions of a model, it is possible to employ common-sense reasoning to come up with reasonable values for the needed numbers. As we did above, it is only necessary to specify likelihoods for each row. We can later use Netica to convert the likelihoods into probabilities. Again, what follows is a stream-of-consciousness account of the kind of thinking that can generate the parameters required: First consider a blind man who encounters a tree. He is likely to recognize through touch that it is a tree, so that outcome should have a large likelihood. Yet all sensors are ‘noisy’ and subject to error – even blind men – so we don’t want to use zero for any of the outcomes. Is there anything that might be confused for a tree? OK, perhaps a python, if it were hanging from a branch and holding still… perhaps that could be confused for a tree, but it wouldn’t happen very often. Now pick some likelihood numbers consistent with that reasoning, say [80, 1, 1, 2]. Next, consider a blind man who encounters a palm leaf. He is likely to recognize that it is a palm leaf. And for this one, there is no other state that might be expected to be confused for a palm leaf. Again, we do recognize that all sensors are subject to error, so we do not wish to use any zeros. We must pick some numbers, so… [1, 80, 1, 1]. Now consider a blind man who encounters a rope, hanging from a branch. In this case it is conceivable that a rope could be confused with a small narrow tree trunk. It is also plausible that a rope could be confused with a python. Still, most of the time we expect that a rope will be recognized as a rope. And again we do not wish to use any zeros. So pick some numbers… [2, 1, 80, 10]. A blind man who encounters a python may be confused in ways similar to the rope case. 
A python could be confused with a tree, or even more likely with a rope, but most of the time it will be recognized as a python. We need to pick some numbers, so we might select [2, 1, 10, 80]. Now we get to the last row of the conditional probability table, where we model the blind man encountering an elephant. How do we predict what a blind man will report? One possibility is just to count up the opportunities for the different misclassifications that are described in the problem definition. An elephant has four legs, two ears, one tail, and one trunk. We can use those counts as likelihoods: [4, 2, 1, 1]. At this point every row of the table has been filled in with likelihoods. It is not necessary to use these exact numbers; a wide range of numbers will work for this problem. We use Netica’s Table | Normalize function to convert these likelihoods to probabilities, which sum to 100% across each row. At this point the Bayesian network is fully specified. We can do the first round of ‘testing’ on this model by successively setting each state in the Object Type, and then each state in the Blind Observation, to make sure that these two random variables interact with each other in ways that are expected and consistent with the problem domain. If necessary, make changes to the prior or to the conditional probabilities (or likelihoods) until the model ‘feels’ reasonable. Now we can make three additional copies of the Blind Observation node, to represent the four blind men in the original story. When we apply the evidence reported from the story, we can see that the Bayesian network has indeed identified the object as an elephant! [Note: The example Bayesian network discussed in this post, BlindMenAndElephant.neta, is available for download here. The example runs in Netica; a free demo version available from the Norsys website is more than sufficient to run it.] This Bayesian network was developed for a small yet interesting toy problem, but it has relevance to more complex problems. First, the model was developed logically, starting with a model of the important agents in the domain and their attributes – in this case the object and its type – followed by modeling the observations that are available in the domain – in this case the observations of the blind men. Most importantly, it has demonstrated that at least in some cases it is possible to define the parameters of a non-trivial model without an extensive or expensive knowledge engineering process. Reasonable numbers, defined using logical thinking, common sense and an understanding of the domain, are often sufficient to achieve reasonable results. 
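To make the arithmetic concrete, here is a minimal stand-alone sketch of the same calculation in plain Python with NumPy. The post's own model is built in Netica, so this is only an independent cross-check; the variable names and the ordering of states are assumptions made for the sketch, while the likelihood numbers are the ones elicited above. The inference itself is nothing more than P(type | observations) being proportional to P(type) multiplied by the product of P(observation_i | type) over the four reports.

```python
# A minimal sketch of the blind-men-and-elephant model in plain Python/NumPy.
# (Illustrative only; the post's actual model is a Netica network, and the
# names and state ordering here are assumptions made for this sketch.)
import numpy as np

object_types = ["tree", "palmLeaf", "rope", "python", "elephant"]
observations = ["tree", "palmLeaf", "rope", "python"]

# Prior likelihoods over Object Type, in the order above (numbers from the post).
prior = np.array([40.0, 20.0, 2.0, 2.0, 5.0])
prior /= prior.sum()  # the equivalent of Netica's Table | Normalize

# P(Blind Observation | Object Type): one row per object type, one column per
# possible observation, again using the likelihoods elicited in the post.
cpt = np.array([
    [80.0,  1.0,  1.0,  2.0],   # object = tree
    [ 1.0, 80.0,  1.0,  1.0],   # object = palmLeaf
    [ 2.0,  1.0, 80.0, 10.0],   # object = rope
    [ 2.0,  1.0, 10.0, 80.0],   # object = python
    [ 4.0,  2.0,  1.0,  1.0],   # object = elephant (legs, ears, tail, trunk)
])
cpt /= cpt.sum(axis=1, keepdims=True)  # normalize each row to probabilities

def posterior(evidence):
    """Posterior over Object Type given a list of observation names.

    Each blind man's report is conditionally independent given the object
    type, so the posterior is proportional to the prior times the product
    of the per-observation likelihood columns.
    """
    p = prior.copy()
    for obs in evidence:
        p *= cpt[:, observations.index(obs)]
    return p / p.sum()

# The four blind men report a tree, a palm leaf, a python and a rope.
post = posterior(["tree", "palmLeaf", "python", "rope"])
for name, prob in zip(object_types, post):
    print(f"{name:9s} {prob:.3f}")
```

With these numbers, 'elephant' ends up with roughly 97% of the posterior probability and 'tree' is a distant second, which matches the behavior described for the Netica model; nudging the likelihoods within reason changes the exact figure but not the conclusion.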
This problem, and this Bayesian network, can also be used to illustrate a common misstep that is sometimes made in Bayesian modeling. Suppose that in our original modeling we had decided to model the observation as the parent, and the Object Type as the child. This may even seem reasonable, because that is the way that we think. If we reason from data to inference, it can ‘make sense’ to build the model that way. And if we do, we get a network in which the four observation nodes are parents of the ‘Object Type’ node. That network does not yet have probabilities assigned; the numbers are just default values from Netica. At first blush, this network may even seem reasonable. But consider what happens when we try to define the probability distributions. Even defining a prior across the states of the blind observations feels awkward. And when we try to define the conditional distribution of the Object Type given four blind observations, we discover that we have to fill in a table with 4 x 4 x 4 x 4 = 256 rows. For each row, we have to answer questions like: “If one observation is ‘tree,’ the second observation is ‘rope,’ the third observation is again ‘tree,’ and the fourth observation is ‘palmLeaf,’ then what is the likelihood that the object is a ‘tree’… a ‘palmLeaf’… a ‘rope,’ etc.?” This does not sound like fun! There are many more parameters, and even understanding them well enough to try to specify them is hard. The lesson here is that if defining the parameters of the model is too painful, then that is evidence that your model is wrong. It is almost always better to model observations as children of the random variables that are being observed. There are some other lessons that can be extracted from this toy problem. First, an astute reader may have asked early on: “Where did the elephant in the model come from? That is, why is ‘elephant’ one of the states of the object?” That’s a valid question, since in a realistic problem we may not know that elephants exist until we encounter one. It’s still possible to use a Bayesian network to reason in such a domain, and it is done by explicitly including the state ‘other’ in the model. For example, in this very problem suppose we had the same four blind men and the same observations, but suppose that the possibility of ‘elephant’ had not already been encoded in the model. Instead, a model can be constructed with five object states: the four that are known – ‘tree,’ ‘palmLeaf,’ ‘rope,’ and ‘python’ – and then a fifth state of ‘other’. The prior probability of ‘other’ will likely be small, but it should not be minuscule. Then the last row of the conditional probability table for the blind observations will be the probability distribution across the possible observations, given that the object is ‘other’. Without any additional information, we can assign equal probabilities to each observation state. When we apply the evidence of the four blind men to this model, we see that the probability of ‘other’ is very high. If the automated system using this Bayesian network were coded to raise an alert when the probability of ‘other’ exceeded some threshold, a human analyst would at some point have a ‘Eureka!’ moment: “Oh! It’s an elephant!” Then the model could be extended to include the object state of ‘elephant.’ At that point, for completeness, the model should have six states for ‘Object,’ including both ‘elephant’ and ‘other’ – to account for future encounters with other unexpected objects – say, hippos, rhinoceroses or giraffes. Finally, note that this model is a very simple fusion system, which infers the presence of some (perhaps rare or unexpected) state of the world by fusing observations from multiple sensors. The sensors here are not even ‘aware’ of some important states of the world (i.e., the elephant). This fusion system could be extended to account for sensors with different accuracies (e.g., some blind men are more reliable than others) or for different types of sensors. This model has a prior distribution across the states of the object, but it could be extended with additional environment variables that are parents of the Object Type node, which would provide different distributions for different locations in Africa, or different times of year, and so on. 
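The ‘other’-state variant is just as easy to check numerically. The post does not give specific numbers for this version, so the small prior weight on 'other' and the uniform observation distribution used below are assumptions; the point is only to show the qualitative behavior described above.

```python
# A sketch of the 'other'-state variant of the model (hypothetical numbers:
# a small-but-not-minuscule prior on 'other' and a uniform observation
# distribution given 'other', as suggested in the post).
import numpy as np

object_types = ["tree", "palmLeaf", "rope", "python", "other"]
observations = ["tree", "palmLeaf", "rope", "python"]

prior = np.array([40.0, 20.0, 2.0, 2.0, 1.0])   # 'other' gets a small weight
prior /= prior.sum()

cpt = np.array([
    [80.0,  1.0,  1.0,  2.0],    # tree
    [ 1.0, 80.0,  1.0,  1.0],    # palmLeaf
    [ 2.0,  1.0, 80.0, 10.0],    # rope
    [ 2.0,  1.0, 10.0, 80.0],    # python
    [ 1.0,  1.0,  1.0,  1.0],    # other: no information, so uniform
])
cpt /= cpt.sum(axis=1, keepdims=True)

p = prior.copy()
for obs in ["tree", "palmLeaf", "python", "rope"]:   # the four reports
    p *= cpt[:, observations.index(obs)]
p /= p.sum()
print(dict(zip(object_types, np.round(p, 3))))
```

The four mutually inconsistent reports push most of the posterior mass (about 94% with these particular numbers) onto 'other', which is exactly the kind of signal an automated alert threshold could key on before an analyst concludes that the object is, in fact, an elephant.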
Any real-world problem will, of course, be considerably more complex than this example, with lots of variables and therefore a complex Bayesian network with lots of local probability distributions that require parameters. But we still have a reasonable prospect of defining a useful Bayesian network if we: - Start small, beginning with simple models of the objects or agents that we wish to reason about, and then add the observations that we may have about those objects; - Use engineering judgment to define reasonable parameters, without worrying about precision in early versions; and - Test and evaluate the model by interacting with it – or with data if available – and refine as necessary. Once the simple model gives reasonable results, we can then iterate to add new concepts and relationships until the model is complete enough to be useful. Ed Wright, Ph.D., is a Senior Scientist at Haystax Technology.
<urn:uuid:e22ba0b4-de86-4e54-b9ee-a580804f3489>
CC-MAIN-2017-04
https://haystax.com/blog/2017/01/04/overcoming-objections-bayesian-networks-part-2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00426-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935474
3,962
2.609375
3
The forensic breakdown of the attack came first from Fabio Assolini, a researcher for Kaspersky Labs, during a presentation at the Virus Bulletin conference. Graham Cluley at Sophos recounted the presentation in his blog. Assolini described how at some Brazilian ISPs, more than 50% of users were reported to have been affected by the attack. After the six manufacturers affected issued firmware updates to plug the security hole, the number of compromised modems decreased. However, some 300,000 modems are still thought to be controlled by attackers. “My suspicion is that the typical computer user doesn't give a second thought about whether their router could be harboring a security threat, imagining that the devices don't need to be treated with suspicion,” said Cluley. Users’ ADSL modems had been compromised, and the hackers had changed the router's configuration to point to a malicious DNS. This meant that when the user entered the web address of a legitimate website (like google.com.br or facebook.com) they could be taken to a malicious website instead, posing as the real thing. Thus, users would visit legitimate websites such as Google, Facebook and Orkut (a popular social network in Brazil) and would be prompted to install software. Visitors to Google.com.br, for instance, were invited to install a program called "Google Defence" in order to access the "new Google." “Now, normally if you access a router via the internet you will be asked for a username and password – and so long as the user has chosen hard-to-guess login credentials (and not gone with manufacturer's defaults) all should be well,” explained Cluley. “Unfortunately, in this case, the hackers were able to exploit a vulnerability in the Broadcom chip included in some routers.” The Broadcom flaw allows a Cross Site Request Forgery (CSRF) to be performed in the administration panel of the ADSL modem, capturing the password set on the device and allowing the attacker to make changes, usually in the DNS servers, Cluley said. So, the exploit allowed malicious hackers to break into millions of routers remotely, without having to know the passwords being used to protect them. The hackers were then able to change the ADSL modem's DNS settings – pointing them to one of 40 malicious DNS servers around the world. The end result is that many Brazilian users downloaded code, mistakenly believing it was from websites they trusted. “Ironically, if users contacted their anti-virus vendor's tech support line and asked them about the safety of files like facebook.com/ChromeSetup.exe, chances are that the support technician would not be able to locate the file themselves because their own computers were not running through malicious DNS servers,” said Cluley. "And, of course, affected users would often be adamant that they had done nothing wrong – certain that their computers were fully updated with patches and anti-virus. But, of course, that didn't stop the remote attack on their router.” The DNS redirects were first reported last fall, but the inner workings of the attacks and the ongoing nature of the problem were revealed just this week.
<urn:uuid:2589d4a9-508d-4f0a-82e7-2dec7e387630>
CC-MAIN-2017-04
https://www.infosecurity-magazine.com/news/45-million-routers-hacked-in-brazil/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00178-ip-10-171-10-70.ec2.internal.warc.gz
en
0.971415
667
2.515625
3
Internet Opens Up to All Names Ever since the very beginning of the Internet, new top level domains (TLDs) have been added incrementally. Currently the number of TLDs stands at 22, but thanks to a process that begins today that number could be over 1,000 by this time next year. The new TLDs are known as generic top level domains (gTLDs) and can potentially be any word, in nearly any type of human language script. The current pool of TLDs is entirely in Latin script and includes, among others: .com, .net, .org and country code TLDs (ccTLDs) like .de (Germany) and .cn (China). "This is the first time in the history of the Internet that generic top level domains can be created in non-Latin characters," Rod Beckstrom, CEO of ICANN, said during a press conference announcing the start of the gTLD program. Non-Latin characters had previously only been available as part of internationalized domain names (IDNs) for country code TLDs (ccTLDs). The IDN process officially approved the first non-Latin IDN ccTLDs in 2010. The path to today's historic milestone comes after over six years of debate and discussion. The gTLD program was officially approved by ICANN at a meeting in Singapore in June of 2011. "We think the world is ready for this innovation," Beckstrom said. "We believe that this program will do what it is designed to do, which is open up the Internet domain name system to further innovation." Beckstrom noted that there has been concern about the program and he stressed that the initiative has multiple protections in place to protect trademark and rights holders for intellectual property and names. In Beckstrom's view, the new gTLD system will help improve competition on the Internet as well. He noted that domain name prices have dropped by over 70 percent since ICANN was first formed 12 years ago. The process by which new gTLDs will be granted is a complex and costly one. Applicants are required to pay a fee of $185,000 to even be considered. The people behind a bid are all subject to background checks. New gTLD applications will be publicly posted for public comment prior to approval. The initial application period is open for the next three months, in which time ICANN could handle as many as 1,000 new gTLD applications. "With the new TLD program, you'll see entirely new business names that are related to the names that people are able to get in the top level domain system," Beckstrom said.
<urn:uuid:e7ec5b20-488e-484a-878f-3e000db9b638>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/datacenter/internet-opens-up-to-all-names.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00298-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960212
554
2.5625
3
The Internet must support the large number of languages in the world at all levels, including content, hardware, software, and internationalized domain names if it is to reach the next billion people, according to speakers at an Internet Governance Forum (IGF) in Hyderabad, India. "When we talk about Internet for all, we have to go beyond the people who speak English," said Manal Ismail, vice chair of the Governmental Advisory Committee (GAC) of the Internet Corporation for Assigned Names and Numbers (ICANN), on Wednesday. Read full story: Network World
<urn:uuid:003c617c-2a32-48c9-9010-fd7471a1bb95>
CC-MAIN-2017-04
http://www.circleid.com/posts/igf_next_billion_internet_multilingual_support/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00288-ip-10-171-10-70.ec2.internal.warc.gz
en
0.706578
186
2.53125
3
This week we've encountered a cross-platform worm that's capable (at least theoretically) of spreading from a PC to a mobile device and back. To be more specific, the "Mobler" worm moves between Symbian and Windows platforms. Although it's quite nasty on the Windows side, it doesn't cause much harm on the Symbian device. It just copies itself to the memory card and tries to trick the user into infecting his PC. Technically there isn't any automatic spreading mechanism for Mobler to copy itself from one platform to another. It just creates a Symbian installation package that inserts a Windows executable on the mobile device's memory card. This executable is visible as a system folder in Windows Explorer - so it's possible for the user to accidentally open it and infect their PC while browsing the memory card's files. Mobler poses no immediate risk to mobile device users in its present form. However, it's possible that virus writers might use it as a basis for more malicious malware. But then again, that could be said of previous cross-platform viruses and thus far a heavy hitter has failed to materialise.
<urn:uuid:44b5d859-97ad-4c92-9819-0805a28357cc>
CC-MAIN-2017-04
https://www.f-secure.com/weblog/archives/00000960.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00160-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957197
230
2.53125
3
RADIUS Server vs. VPN - Link Layer and Network Layer Security for Wireless Networks Wireless networking presents a significant security challenge. There is an ongoing debate about where to address this challenge: at the link layer with a RADIUS server or at the network layer with a VPN (OSI layers 2 or 3, respectively). This article looks at the basic risks inherent in wireless networking and explains both approaches. It concludes that RADIUS server and VPN deployments are complementary: link layer security using an 802.1X RADIUS server provides a comprehensive solution, and network layer security such as a VPN can enhance link layer security where additional WiFi protection is required. WiFi brings a whole new meaning to networking security risk analysis and mitigation. With readily available equipment, attacks on wireless networks are very easy. Some network administrators, uncomfortable with the state of WiFi security, have turned to more traditional wired network security solutions to secure their wireless networks as well. Often, they will use VPNs, which operate at the network layer, to provide the required security. Unfortunately, network layer security solutions such as VPNs do not address all of the security concerns that arise from the shared airwaves. In addition, the "per-tunnel" licensing makes VPN solutions costly and adds to the management headaches inherent in network layer solutions. Since VPNs don't provide 100% security coverage for Wi-Fi networks, the industry has standardized on 802.1X, a link layer security protocol for wireless networks using the RADIUS server. Link layer security protects a wireless network by denying access to the network itself before a user is successfully authenticated. This prevents attacks against the network infrastructure and protects the network from attacks that rely on having IP connectivity. Wi-Fi Protected Access (WPA), a link layer solution, was designed specifically for wireless networks using 802.1X and the RADIUS server and is particularly well suited for wireless security. This paper examines network layer security provided by IPSec VPNs and link layer security provided by WPA, 802.1X, and the RADIUS server, addressing the characteristics of each approach when applied to wireless networks. It discusses the shortcomings of IPSec when applied to wireless networking security concerns, and it demonstrates how 802.1X provides a more desirable wireless network security solution for most applications. What is Network Security? Three things must be in place to make any network environment secure: access control, privacy, and packet authentication/integrity. - Access control limits the users who can gain access to the network. Access control can occur through any number of user authentication methods designed to verify that a user is who they claim to be and that they have network privileges. Once it is determined that users belong on the network, authorization may occur to determine what services they can have. - Privacy hides information from those who shouldn't have it. Network transmissions are susceptible to casual browsing if the data packets aren't encrypted (encoded so that the data is unintelligible to eavesdroppers). Encryption can be carried out at Layer 2 through 802.1X using secure key exchange, or at Layer 3 through the use of Virtual Private Networks (VPNs). 
- Authentication/Integrity - Authentication, in this case, verifies that devices (rather than users) are legitimate and that data packets originate from the source they claim to and have not been "spoofed" by a rogue network device using stolen credentials. Integrity is ensuring that packets have not been tampered with en route, even though they may have originated from a legitimate network device. Link Layer Security with Wi-Fi Protected Access (WPA) Link layer security provides point-to-point security between directly connected network devices. It provides secure frame transmissions by automating critical security operations including user authentication, frame encryption, and data integrity verification. In a wireless network, link layer protection starts with an authentication service and includes link layer encryption and integrity services. Link layer protection secures wireless data only where it is most vulnerable, at the wireless link level, and allows higher-level protocols, such as IP, IPX, etc., to pass securely, providing security for all upper layer protocols. The industry-recommended approach to Wi-Fi security incorporates link layer security through the 802.1X security standard. The IEEE 802.11i wireless security standard calls for 802.1X link layer security and has been adopted by the Wi-Fi Alliance in their Wi-Fi Protected Access (WPA and WPA2) standard. 802.1X is the industry standard for providing strong link layer security to WiFi, and supports two authenticated key management protocols using the Extensible Authentication Protocol (EAP) running on a RADIUS server. 802.1X provides strong, robust security on wireless connections, and is used to eliminate the widely publicized security holes in older WiFi standards. Network Layer Security with IPSec Network layer security provides end-to-end security across a routed network and can provide authentication, data integrity, and encryption services. These services are only provided for specific network and transport layer services (e.g. for only IP traffic). Once the network endpoints are authenticated, IP traffic flowing between those endpoints is protected. However, all other non-IP traffic is left unprotected. IPSec is a standard network layer security protocol that provides an extensible method to secure the IP network layer and upper layer protocols based on IP such as TCP and UDP. It is used extensively in Virtual Private Networks (VPNs) to secure network connections that extend between networks and to connect remote clients over the Internet. And while IPSec is a well-understood method for providing security across wired network elements, it was not specifically designed for protecting non-IP traffic and data at lower layers in the network such as 802.11. Why Is Link Layer Security Important? Deciding at which layer of the network to apply security needs some examination. IPSec security protects data beginning with the network layer. It provides protection for only selected network connections, and leaves the network open to attacks that work outside of this limited security method. In addition, network layer protocols often use authentication mechanisms that require that the network be completely open to all wireless devices, ultimately leaving the network vulnerable. Link layer security such as 802.1X operates on the data link layer to provide protection specifically for the over-the-air portion of the connection between the mobile user and wireless access point. 
802.1X protects against upper layer attacks by denying access to the network before authentication is completed. VPNs and 802.1X with a RADIUS server complement each other in Wi-Fi security applications. 802.1X provides strong, standards-level security for networks that are under the Carrier's or IT department's control. Enterprises deploy 802.1X through a RADIUS server for user authentication to control access and encrypt data on their wireless networks. Some Carriers and service providers extend their dial-up and DSL connection services to include Wi-Fi access through 802.1X tied back to their centralized RADIUS server as well. The value of 802.1X is best realized when access to the network can be controlled through a RADIUS server. On the other hand, VPNs are best used in situations where Wi-Fi networks are not able to be secured. These are typically remote networks such as public Wi-Fi hotspots that can't be secured at the link layer. In these cases, VPNs secure the IP services across the network. The user needs to be careful to limit their network access to the VPN tunnel, and avoid accessing unsecured portions of the open network. In industries such as healthcare and financial services, and in certain government organizations, multiple layers of security may be deemed to offer the best solution. In this case, the best wireless security may be a combination of VPNs and 802.1X, combining both link and network layer security. Shortcomings of Network Layer Security for WiFi Although IPSec can be used to provide WiFi security, there are some drawbacks to using network layer security alone for securing WiFi. The following sections discuss the types of attacks that might be effective against a network layer IPSec solution. Denial of Service Attack Denial of service (DOS) attacks often attempt to monopolize network resources. This type of attack prevents authorized users from gaining access to the desired network resources. In a WiFi network that relies solely on IPSec for security, the access point must bridge all traffic to the wired network. This allows legitimate users to authenticate and establish an IPSec connection, but also allows malicious users to send frames to the access point. Thus, an attacker can flood the access point with data, interrupting a legitimate user's connection. Another DOS attack could result when an attacker captures a previous disconnect message and re-sends it, resulting in the legitimate user's loss of connection to the WiFi network. As discussed earlier, IPSec does not provide protection for protocols other than IP, leaving other protocols unprotected and vulnerable to attacks. One such attack uses the Address Resolution Protocol (ARP) to fool a client into sending data to a malicious peer. An attacker could launch a man-in-the-middle (MITM) attack by using forged ARP messages to insert a rogue entity into the data path. Often, IPSec is used to protect network layer connections between a user and a gateway. Without link layer security, however, the access point will bridge frames initiated from both authorized and unauthorized users. Thus, an unauthorized user could monitor the wireless traffic to capture information such as the IP address of a neighboring peer, and then use it to attack the wireless interface on neighboring peer hosts. IPSec protects the traffic only between the wireless user and the end-point. Any connection outside of the tunnel is not secure. 
A business user connecting to a personal email account, for example, may be surprised to learn that browsing to an Internet site is not secure. Corporate users with a network layer IPSec tunnel providing security at a public access hotspot have nothing protecting the traffic that is not destined for the corporate IPSec gateway. 802.1X Link Layer Security with a RADIUS Server 802.1X is designed specifically for wireless networks, and provides users with data protection while allowing only authorized users to have access to the network. 802.1X not only overcomes the security vulnerabilities of WEP (an earlier, and unreliable, wireless security solution), but also provides effective protection from both non-targeted attacks (e.g., Denial of Service attacks) and targeted attacks (e.g., Peer-to-Peer attacks). 802.1X with a RADIUS server is a standards-based solution, and an integral part of both the IEEE 802.11i and Wi-Fi Protected Access (WPA) standards. It works with most enterprise and Carrier level wireless network devices, delivering interoperability and reducing dependence on vendor-specific components. It provides effective link layer security, making wireless security sufficiently strong. Wireless security can be addressed at the link layer (layer 2), the network layer (layer 3), or a combination of both. By understanding both types of security, network administrators can make decisions that are appropriate for their own environments. VPNs provide protection for traffic only between the user and a private network, and do not protect against other security risks associated with wireless networks. Since VPNs were developed to protect users on a wired network, they leave wireless users open to security concerns that arise from wireless networks. The link layer security provided by 802.1X is an essential component for wireless LAN security. As the Wi-Fi Alliance and IEEE recommend, network administrators should secure access to the wireless link layer by using a RADIUS server with EAP for user authentication and encryption key generation. This provides a baseline of security that is necessary to protect wireless users and the wired network they are accessing. Network layer security remains important to the WiFi user in an untrusted (e.g., hot spot) network, but is most effective when used in combination with link layer security such as 802.1X and a RADIUS server. Link layer security used in conjunction with VPNs provides a double layer of security to meet the needs of the most security-conscious organizations.
<urn:uuid:4ccfa2f1-6729-46ad-8faa-8b0ff641f416>
CC-MAIN-2017-04
https://www.interlinknetworks.com/whitepapers/Link_Layer_Security.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00097-ip-10-171-10-70.ec2.internal.warc.gz
en
0.908222
2,534
2.53125
3
Additional details have come to light on the brain research initiative, announced last year by President Obama. A working group of the US National Institutes of Health (NIH) published a ten-year plan – Brain 2025: A Scientific Vision – for the agency’s portion of the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative. The plan’s recommended budget would quadruple the current allocation, providing a $4.5 billion investment over 10 years, beginning in fiscal year 2016. The project, which aims to map all activity in the human brain, was initially given a $1 billion budget to be spent over 12 years. The blueprint outlines specifics regarding NIH’s contribution to the major neuroscience initiative. Obama cited an initial investment of $100 million when he announced the project in April 2013. The NIH advisory working group report recommends $400 million in funding for the National Institutes of Health for each of the next five years, then $500 million annually for the five years after that. “Our budget estimates, while provisional, are informed by the costs of real neuroscience at this technological level,” the report said. “While we did not conduct a detailed cost analysis, we considered the scope of the questions to be addressed by the initiative, and the cost of programs that have developed in related areas over recent years.” As for the scope of research, the report’s authors summarize the nature of the work thusly: “The BRAIN Initiative will deliver transformative scientific tools and methods that should accelerate all of basic neuroscience, translational neuroscience, and direct disease studies, as well as disciplines beyond neuroscience. It will deliver a foundation of knowledge about the function of the brain, its cellular components, the wiring of its circuits, its patterns of electrical activity at local and global scales, the causes and effects of those activity patterns, and the expression of brain activity in behavior. Through the interaction of experiment and theory, the BRAIN Initiative should elucidate the computational logic as well as the specific mechanisms of brain function at different spatial and temporal scales, defining the connections between molecules, neurons, circuits, activity, and behavior.” If the recommendations are granted, funding for the US project will far surpass the amount currently allotted to Europe’s big neuroscience project, the Human Brain Project, which seeks to reproduce the brain in computer form. That project was awarded 1 billion Euros (US $1.3 billion) over 10 years. In March, program officials from both camps revealed that the US and European research programs would be joining forces, but to what extent they will collaborate is still not clear. Coordination will begin later this year when representatives meet to lay out a strategy for collaboration and data sharing. US officials have compared the BRAIN initiative with the Human Genome Project, which had a similarly high price tag. The 10-year project that resulted in the first human genome being sequenced in 2003 cost $3 billion. “How the brain works and gives rise to our mental and intellectual lives will be the most exciting and challenging area of science in the 21st century,” said Francis Collins, NIH director. 
“As a result of this concerted effort, new technologies will be invented, new industries spawned, and new treatments and even cures discovered for devastating disorders and diseases of the brain and nervous system.” The NIH is expected to award its first BRAIN grants in September. The BRAIN Initiative is jointly led by NIH, Defense Advanced Research Projects Agency of the US Department of Defense, National Science Foundation, and Food and Drug Administration.
<urn:uuid:3d44d0f6-769e-4e71-be47-0f9bc0249c4d>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/06/11/13144/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00033-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927986
736
2.875
3
Late last year, while visiting with leaders from Asia and the Middle East to talk about the role of "smart communities" -- communities aggressively deploying information technology to transform their region for a global, knowledge-based economy and society -- I sensed two things: First, an urgent and compelling call for world telecommunications reform. Second, a message that the world needs America's help preparing its cities and its people for a fundamental shift in the basic structure of the world's economy. America has a unique opportunity to work with other communities across the globe to develop a strategy to renew their cities and consequently create the sense of world community our planet so desperately needs. In Riyadh, Saudi Arabia, at the conclusion of an international meeting on future cities in November, these were some of the initial findings, recommendations and conclusions of the participants: - Islam does not contradict or conflict with globalization but is consistent with its ideals, and the Islamic civilization has been a model for that. Globalization should not affect negatively the principles and values. - Full participation of all members of society, including women and children, must be ensured in the process of planning for future development, aiming to develop common perceptions conforming to the community's aspirations and ambitions, its cultural tenets and its aspiration for economic prosperity and social welfare. - Arab cities should utilize digital technology in the various walks of life in future cities and provide the infrastructure necessary to incorporate and utilize new technologies. - Cities should prepare to adopt the concept of smart communities by expanding applications of e-government, distant learning, e-commerce, etc., in ways that do not contradict the humanitarian aspects of future cities. If this communiqu
<urn:uuid:ff83cbb0-bf6e-47c1-ba5d-3294cfdf6bb6>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/Building-a-Smart-Future.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00151-ip-10-171-10-70.ec2.internal.warc.gz
en
0.924121
334
2.609375
3
Del Valle ISD is a public school district in Texas that serves over 11,000 students on 15 campuses in Southeast Travis County near Austin. The school district is committed to providing innovative educational programs that cultivate critical thinking skills among its students, including mock trial leagues and debate teams. Unlike typical programs, these students do not just compete within their school district, within their state, or even within their own country. These students are able to compete internationally with students like themselves from all over the world through the power of HD video collaboration. From the outside, Del Valle ISD may look like a typical school system, but on the inside, students are traveling the world and gaining a cultural education that very few people ever have a chance to experience. By partnering with world-class schools in countries such as South Africa, Australia, Nigeria, Iran, Korea, India, Taiwan, and many others, students from Del Valle ISD are able to flex their debate skills and perform mock trial hearings via LifeSize video conferencing solutions. The idea was born in 2003. Del Valle ISD had distance learning video equipment that was given to them from Region 13, but it was severely underutilized. Programs such as mock trial and debate required students to travel for competitions. In fact, students from as far as Alaska would journey in for these tournaments, resulting in a week’s worth of travel. Since acquiring the video conferencing equipment from Region 13, the school district has continually built upon its distance learning programs and now participates in international educational programs with 225 schools across 75 countries. Students now have the opportunity to debate about current events and even historical events, as they take opposing sides on topics such as Armenian genocide, the evolution of the Catholic Church, or even Truman’s role in the bombing of Hiroshima. They collaborate with local colleges and even leaders in the judicial system, such as a judge in Australia for debates and mock trial events. Whether they are working on a topic from the YMCA, ICC or National Forensics League, or choosing a topic of their own, these young people compete against their peers across the globe in crystal-clear HD, all from the comfort of their classroom in their hometown. “I really see this program as a living book,” said Michael Cunningham, project director of World Class Schools and educator for Del Valle ISD. “These students are learning something incredible every day, and expanding their global knowledge in ways that we never thought were possible a decade ago. This is something that a standardized, multiple choice test could never teach. This is where true critical thinking skills are born.” The program has even extended beyond mock trials and debate competitions; the students also interact with their international peers to learn more about their culture, food and music. Del Valle ISD students have sung Norwegian Christmas carols, learned how to cook exotic foods like antelope and porcupine, and spoken to Dr. Scott Kofmehl, chief of staff to the United States ambassador to Islamabad to learn about having a career in foreign affairs. “When we first tried using video conferencing in 2003, the quality of the image just wasn’t good enough to use on a daily basis, but the technology has really evolved since then,” said Cunningham. 
“LifeSize ClearSea provides exceptional HD quality and it’s so easy to use that we share it with schools all over the world.” “I want schools to realize that you can do this for next to nothing: bring a world-class school to your school for only a little money,” said Cunningham. “With this program, we are enabling our students to open their mind and see what is really going on in their world. Other districts may prefer to spend the money to bring their students on field trips in their local community, but we have chosen to invest in video conferencing to bring the world to our students.”
<urn:uuid:182fb84d-d536-40fc-a97b-eea3bf27a8df>
CC-MAIN-2017-04
http://www.lifesize.com/video-conferencing-blog/innovative-educational-approach/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00087-ip-10-171-10-70.ec2.internal.warc.gz
en
0.977804
813
2.84375
3
Computer Automation Is Making Researchers Obsolete January 26, 2013 In archives and libraries around the world, piles of historic documents are sitting gathering dust. One of the problems librarians and archivists have with these documents is that they do not have a way to historically date them. A technique covered by the MIT Technology Review may solve that problem, according to the article "The Algorithms That Automatically Date Medieval Manuscripts." Gelila Tilahun and colleagues from the University of Toronto have created algorithms that use language and common phrases to date the documents. Certain words and expressions can date a document to a specific time period. It sounds easy, but according to the article it is a bit more complex: "However, the statistical approach is much more rigorous than simply looking for common phrases. Tilahun and co's computer search looks for patterns in the distribution of words occurring once, twice, three times and so on. "Our goal is to develop algorithms to help automate the process of estimating the dates of undated charters through purely computational means," they say. This approach reveals various patterns that they then test by attempting to date individual documents in this set. They say the best approach is one known as the maximum prevalence technique. This is a statistical technique that gives a most probable date by comparing the set of words in the document with the distribution in the training set." Tilahun and the team want their algorithms to be used for more than dating old documents; the approach can also be used to find forgeries and verify authorship. The dating tool opens many more opportunities to explore history, but the downside is that research is getting more automated. Librarians and scholars may be kicked out and sent to work at Wal-Mart. Whitney Grace, January 26, 2013 Sponsored by ArnoldIT.com, developer of Beyond Search
<urn:uuid:be09e15f-f629-42cd-b9dc-6422f4e0f104>
CC-MAIN-2017-04
http://arnoldit.com/wordpress/2013/01/26/computer-automation-is-making-researchers-obsolete/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00573-ip-10-171-10-70.ec2.internal.warc.gz
en
0.946877
386
3.21875
3
A new white paper by CSIRO has called for the introduction of robots which will help, rather than replace, workers in the Australian manufacturing sector. The paper, entitled "An initiative to enhance SME productivity through fit for purpose Information and Robotic technologies" (PDF), argues that virtual reality headsets, multi-tasking robots and robots which can be controlled via the Internet will enhance workers' skills and tasks. According to report co-author and CSIRO business development manager, Doctor Peter Kambouris, local manufacturing is shifting away from mass production to mass customisation. "Companies are telling us they need more flexible systems to deliver these more customised products," he said in a statement. "Industrial automation used in manufacturing today is limited, but developments in ICT and robotics present Australia with an opportunity to change the way we manufacture." For example, a virtual headset called Remote Mobile Tele-assistance (ReMoTe) is one system being trialled by the manufacturing industry. Using a head-mounted camera, the worker is able to broadcast what they see to a supervisor in a remote location. The supervisor can project their hand gestures onto whatever the worker is looking at and virtually show them how to fix an issue or conduct a repair. According to Kambouris, systems like ReMoTe have been designed with safety in mind and allow workers to operate in hazardous environments and safely complete tasks. The white paper will be launched on 8 May at National Manufacturing Week in Melbourne. It is based on interviews with small to medium enterprises in Queensland and Victoria. ReMoTe has also been trialled by the mining industry. In May, CSIRO principal research scientist Leila Alem told Computerworld Australia that mining operators in mine sites are required to maintain and repair equipment that is more and more sophisticated. "They don't have the skills to do that and they often have to fly in an expert and fix the machine," she said at the time. Using CSIRO's solution means that the expert can be located anywhere in the world, which saves time and money as he or she doesn't need to physically be at the mining site. ReMoTe, being a complete hands-free system, can be operated without any training. It has also been developed to operate in harsh environments, such as dirty and dusty areas and places where ventilation and lighting are poor. The CSIRO research team is in the process of integrating the ReMoTe technology with a panoramic display system for remote operation, which has been developed at the Virtual Mining Centre at CSIRO in Brisbane. Follow Hamish Barwick on Twitter: @HamishBarwick Follow Techworld Australia on Twitter: @Techworld_AU
<urn:uuid:ac8c7cb4-1251-4be6-900d-c81aad149080>
CC-MAIN-2017-04
http://www.computerworld.com.au/article/461108/factory_robots_may_help_manufacturing_sector_csiro/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00573-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950288
572
2.921875
3
In the world of technology, few things are as important as data and information. In a world fraught with opportunity and risk, data has become a central part of business. From Big Data to Data Breach, information is the essential commodity fueling the modern world. Yet with all the talk of the importance of data, the majority of it sits unused and, worst yet, not understood. Globally, the amount of data being generated is increasing faster than can be imagined, more than doubling every two years. Today, close to 5 billion people have access to or own some kind of a mobile device, with close to 2 billion using a smartphone. According to EMC/IDC, 1.7 megabytes of data is created every minute for every person on Earth. Adding to this deluge is an ever-increasing pool of Internet-connected things. Although first described more than 50 years ago, we are only now entering the dawn of the information age. It's a time marked by not only the creation of vast amounts information, but thanks to new forms of analytics and visualization, it also marks the beginning of an enlightened age driven by an immense amount of information being collected from all aspects of the world around us. The Oxford English Dictionary definition of "information" now runs 9,400 words, the length of a novella. It is in itself a sort of masterpiece—an adventure in cultural history. A century ago, "information" did not have much resonance. It was a nothing word. "An item of training; an instruction." Now it defines the very era in which we live, "the era in which the retrieval, management, and transmission of information, esp. by using computer technology, is a principal (commercial) activity." Today, the vast quantity of data is growing at an incredible 40% per year. By 2020, the amount created globally every year is expected to reach 44 zettabytes, or 44 trillion gigabytes. In the time it has taken me to write this post, I would have created 100 megabytes data, if not more. Yet most of this data is transient – unsaved Netflix or Hulu movie streams, or Xbox gamer interactions, temporary routing information in networks, sensor signals discarded when no alarms go off, etc. Just 0.5% of global data is ever actually used. Herein lies the paradox – there is a lot of valuable data all around us, but it will take determination and a skilled workforce to find and put it to use. It will need to be protected, analyzed, and acted upon. Data is just data; but information, insight, and understanding is power. The focus of this blog is to explore the limitless potential of data in its visual form. From emerging technologies to data scientists to designers shaping the way we understand information, Data Graphica will showcase the very best in the world of data science and information design.
<urn:uuid:3d10564a-7907-4253-9642-8c0f2920789d>
CC-MAIN-2017-04
http://www.networkworld.com/article/2911508/big-data-business-intelligence/introducing-data-graphica-the-art-and-science-of-information.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00417-ip-10-171-10-70.ec2.internal.warc.gz
en
0.938434
587
2.84375
3
Martin Karplus, Michael Levitt and Arieh Warshel have been awarded the 2013 Nobel Prize in Chemistry for their groundbreaking work on computer modeling that complements traditional test tube experiments in solving mysteries of physical science. Or to put the prize winners' work into more scientific terms, they are being recognized "for the development of multiscale models for complex chemical systems." According to the Royal Swedish Academy of Sciences, the chemists' work in the 1970s laid the foundation for powerful software programs used today that enable scientists to combine classical and quantum physics while exploring everything from drug interactions to environmental studies. Here's a layman's explanation of the researchers' work, via the Nobel organization.
<urn:uuid:5070c137-e84c-49a6-8594-b7fb0f6f709e>
CC-MAIN-2017-04
http://www.networkworld.com/article/2170706/data-center/chemistry-nobel-prize--about-taking-chemical-experiment-to-cyberspace-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00537-ip-10-171-10-70.ec2.internal.warc.gz
en
0.895244
320
2.875
3
Military researchers are embarking on a project designed to provide social media users on Facebook, or even Google for that matter, greater privacy. The Defense Advanced Research Projects Agency (DARPA) this week issued a call for information on how it can help develop technology to best protect the rich private details that are often available on social media sites. Better anonymization algorithms and other technology to hide data seem to be a key component of what DARPA is looking to develop, though it notes that anonymization techniques for social network data can also be more challenging than those for relational data. "Massive amounts of social network data are being collected for military, government and commercial purposes. In all three sectors, there is an ever growing need for the exchange or publication of this data for analysis and scientific research activities. However, this data is rich in private details about individuals whose privacy must be protected and great care must be taken to do so. A major technical challenge for social network data exchange and publication is the simultaneous preservation of data privacy and security on the one hand and information utility on the other," DARPA stated. In Congressional testimony this week, the FBI talked about how users of social networking sites such as Facebook and MySpace are ripe targets for cyber crime and how such crimes using those networks have been rapidly increasing. "The surge in the use of social networking sites over the past two years has given cyber thieves and child predators new, highly effective avenues to take advantage of unsuspecting users," said Gordon Snow, Assistant Director of the FBI's Cyber Division. DARPA notes that while there has been a lot of work on privacy preservation in the exchange and publication of relational data, much of this work cannot be directly applied to social networks. "Privacy preservation is a greater challenge in several ways. Modeling the attacks on privacy as well as the background knowledge used by perpetrators of these attacks is more complex. In the case of relational data, a set of attributes serves as a quasi-identifier used to associate data from multiple tables. Attacks are usually based on identifying individuals using these quasi-identifiers," DARPA stated. DARPA is requesting white papers that relate to the privacy-preserving publication of social network data and wants answers to the following questions: - How do we specify elements of information that must remain private? - What properties must an anonymized network have to ensure that those elements remain private and how do we demonstrate that? - How do we express knowledge that adversaries can use to defeat anonymization? - What assumptions can we make about the nature and extent of that knowledge? - How do adversaries use that knowledge? - How do we transform a network so that a given privacy model is satisfied? - How do we define and compute metric(s) that indicate the degree to which the transformed network satisfies the privacy model (particularly with consideration of an adversary's background knowledge)? - How do we define and compute metrics for measuring the utility of anonymized data when the purpose for which the data will be used is known in advance/the purpose for which the data will be used is not known in advance? 
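For readers unfamiliar with the relational-data background DARPA refers to, the toy sketch below illustrates the quasi-identifier idea with a classic k-anonymity-style generalization. It is purely hypothetical (the attribute names, data and generalization rules are invented, and nothing here is specified by DARPA); the agency's point is precisely that tricks like this do not carry over directly to social network graphs.

```python
# Toy illustration of generalizing quasi-identifiers (ZIP code and age) so
# that each released record matches at least k-1 others on those attributes,
# i.e. the classic k-anonymity idea from relational-data anonymization.
# Hypothetical data and rules; not a DARPA-specified technique.
from collections import Counter

records = [
    {"zip": "48104", "age": 29, "condition": "flu"},
    {"zip": "48103", "age": 27, "condition": "asthma"},
    {"zip": "48198", "age": 52, "condition": "diabetes"},
    {"zip": "48197", "age": 55, "condition": "flu"},
]

def generalize(rec):
    """Coarsen the quasi-identifiers: truncate the ZIP code, bucket age by decade."""
    decade = (rec["age"] // 10) * 10
    return {
        "zip": rec["zip"][:3] + "**",
        "age": f"{decade}-{decade + 9}",
        "condition": rec["condition"],   # the sensitive value is released as-is
    }

released = [generalize(r) for r in records]

# Check k-anonymity: how many records share each (zip, age) combination?
groups = Counter((r["zip"], r["age"]) for r in released)
print("k =", min(groups.values()))   # k = 2 here: no record is unique
for r in released:
    print(r)
```

In a social network, by contrast, what identifies an individual is often the structure of their connections rather than a few attribute columns, which is part of why DARPA is asking for new models of adversary knowledge and new anonymization transforms.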
DARPA stated that the Executive Branch of the United States Government has been proactive in developing policies and procedures for safeguarding personally identifiable information, defined as "information which can be used to distinguish or trace an individual's identity, such as their name, social security number, biometric records, etc. alone, or when combined with other personal or identifying information which is linked or linkable to a specific individual, such as date and place of birth, mother's maiden name, etc." The Department of Defense has also worked to preserve the confidentiality of the personally identifiable information of Service members and the civilian workforce. DARPA is planning a workshop on September 27-28, 2010, in Arlington, Va., to discuss the project.
<urn:uuid:053645ab-90b4-448f-ac1b-8cf5a20b4ab5>
CC-MAIN-2017-04
http://www.networkworld.com/article/2231465/security/us-military-wants-to-protect-social-media-privacy.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00537-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940007
804
2.796875
3
Several years ago, a diminutive marine crustacean called the Gribble landed on the biofuel industry’s radar for its unique ability to digest wood in salty conditions. Now, researchers in the US and the UK are putting the University of Tennessee’s Kraken supercomputer to work modeling an enzyme in the Gribble’s gut, which could unlock the key to developing better industrial enzymes in the future. Marine biologists in the UK made an important discovery about the Gribble in 2010. Apparently, the wood-boring critters had so-called “family-7” enzymes living in their gut. Family-7 enzymes are usually only found in fungi, which have traditionally been the main sources of the enzymes that biofuel researchers are interested in. Armed with this information, a group of researchers from the University of Portsmouth in the UK, the U.S. Department of Energy’s National Renewable Energy Laboratory (NREL), and the University of Kentucky set out to better understand the Gribble and its enzymes. The U.K. researchers isolated one of the family-7 enzymes in the Gribble, called Cel7B, and solved its structure with X-ray diffraction, providing a good static view of the entity. Meanwhile, the NREL enlisted the UT’s Kraken supercomputer to perform molecular dynamics (MD) simulations on Cel7B, which provided a detailed view of Cel7B’s activity. Kraken is a Cray XT5 supercomputer housed at the Oak Ridge National Laboratory and operated by the UT’s National Institute for Computational Sciences (NICS). In 2009, Kraken became the world’s first academic supercomputer to enter the petascale range, which means it performed more than one thousand trillion operations per second. At the time, Kraken was only the fourth supercomputer of any kind to break the petascale barrier. The 112,800-core Opteron-based system debuted on the Top 500 list of the world’s biggest supercomputers in June 2011 at number 11. It has not run the LINPACK test again, and slipped to number 30 on the June 2013 edition of the list. The 9,400-node cluster continues to help scientists in the fields of astronomy, chemistry, and meteorology. The MD simulations on Kraken have already led to several potentially valuable discoveries about the Gribble’s enzyme, according to NREL’s Gregg Beckham. For example, the researchers found “that the charge on the enzyme’s surface was immense,” Beckham tells the NICS. High negative surface charge is typically correlated with salt tolerance. Indeed, the researchers found that Cel7B remained active in water up to six times saltier than ocean water. This is potentially valuable because it means Cel7B may be hardy in high-solids, industrial environments. Enzymes with high-solids tolerance have the potential to save industrial biofuel operations money because they require a smaller reactor and less water, Beckham says. The work with Kraken, which is being funded by the National Science Foundation’s eXtreme Science and Engineering and Discovery Environment (XSEDE), is still on-going. Up next: comparing Cel7B with other family-7 enzymes, with the goal of better understanding this class of enzymes and potentially modifying them for industrial use.
<urn:uuid:629b2600-c10f-4ca8-b4e6-61ad286c8cf2>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/06/25/kraken_chews_on_gribble_data_for_industrial_enzyme_research/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00381-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937314
709
3.0625
3
Scientists at Sandia National Laboratories in Albuquerque, N.M., are taking the idea of alternative energy to new extremes using the Z Machine -- the world's largest X-ray generator. The Z Machine's purpose is to produce temperatures that exceed those of the hottest stars in the universe. To do this, the machine discharges a magnetic field and an electrical current across a steel wire mesh. This produces, albeit for only fractions of a second, plasma that has reached 6.6 billion degrees Fahrenheit, which in turn generates X-rays. Scientists hope this energy can be used to develop clean nuclear fusion technology.
<urn:uuid:6deb6060-c27d-46da-b526-22ef335fef0a>
CC-MAIN-2017-04
http://www.govtech.com/technology/Green-Initiatives-Z-Machine----Worlds.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00197-ip-10-171-10-70.ec2.internal.warc.gz
en
0.83653
122
3.4375
3
Volvo Trucks has developed technology that eliminates truck drivers' blind spots, in a bid to reduce accidents involving pedestrians. Pedestrians and cyclists in urban areas are at particular risk. Last year 19,000 cyclists were reported killed or injured in Britain. Further, in London an estimated 20 percent of cyclist fatalities involve a truck or heavy goods vehicle (HGV), according to figures from the Royal Society for the Prevention of Accidents. Volvo Trucks' research found that limited visibility is one of the main causes of heavy truck accidents with other road users in Europe, and the company is working on technologies to bring to market to eliminate dangers on the road. Volvo's technology will give truck drivers a 360-degree view via cameras installed on the vehicle. These cameras share information with sensors and radars, which can autonomously activate the braking or steering system if the driver does not respond to an alert that a pedestrian or cyclist is close by. Carl Johan Almqvist, Volvo Trucks' Traffic and Product Safety Director, said: "Today's Volvo trucks are designed to eliminate any vehicle blind spots. But in situations with heavy traffic it is easy for a driver to miss something important such as an approaching cyclist on the vehicle's passenger side. Now we can solve this issue and help the driver see and understand everything that is happening around the vehicle." The technology, which is the result of a four-year research project called Non-Hit Car and Truck, carried out in cooperation with Volvo Cars and Chalmers University of Technology, could be introduced in as little as five years, Almqvist said. "We have the main components in place but we need to do a lot more testing in order to make sure that the system is fault-free. If we manage to solve these challenges, a future without truck accidents is within reach." Volvo currently offers a set of safety technologies for trucks. These include an emergency braking system with an early collision warning to prevent accidents caused by inattention, and lane changing support, which detects vehicles in the blind spot on the passenger side. The manufacturer also offers lane keeping support - a monitor of the truck's position on the road that detects drifting and alerts the driver - as well as driver alert support, which warns tired drivers and advises them to take a break. Competitor Honda recently revealed its plans for a "collision free society" during a demonstration of its cutting-edge machine-to-machine products. In Detroit last month, the carmaker presented two connected Acura sedans that "talk" to each other to enable a "virtual tow," which allows one car to drive with no one behind the wheel, directed by the other. Honda said that "virtual tow" is designed to allow drivers on the roads to assist each other when breakdowns occur. This story, "Volvo Hopes to Prevent Truck Accidents with New Tech," was originally published by Computerworld UK.
<urn:uuid:a5d27297-0d81-4610-a53f-f02ae80f3dd9>
CC-MAIN-2017-04
http://www.cio.com/article/2824263/data-center/volvo-hopes-to-prevent-truck-accidents-with-new-tech.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00253-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964722
603
2.96875
3
Decision making can be routine or a herculean task, depending on the people or business owners faced with it. Wherever there is business, risk is inevitable; a business simply cannot operate without taking risks. Risk management software greatly reduces the likelihood and impact of such hazards. Its use is not limited to financial organizations such as investment and retail banks; it also addresses risks in government, the energy sector and the insurance field. For managing risk in financial organizations, risk management software is essential to improve profitability and control risk while achieving compliance with confidence. Software that manages financial risk can save considerable time and money on taxes and payroll by providing accurate figures at all times. This reduces the danger of being short of funds when a financial crisis is looming and there is no time to raise capital for your bank or institution. The Basel II accord, which came into force in January 2008, was introduced to avoid exactly these situations; it treats risk management as crucial to securing financial services firms and as part of business best practice. Risks in the energy sector also need to be calculated, so that no energy goes to waste and all resources are used to their full capacity. For example, any company supplying electricity to a whole town must have a backup transformer or emergency power supplies that can be used in the event of a blackout. Otherwise, among other townspeople, patients in critical condition in a hospital could lose their lives simply because the power company had no plan for such a hazard.
<urn:uuid:15cee4b1-30bf-4181-bb70-b8eb6ddd9f7c>
CC-MAIN-2017-04
http://www.best-practice.com/best-practice-software/risk-management-software/saving-your-business-from-unpredictable-hazards/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00463-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958279
349
2.53125
3
Google Dives Into Genomics Research: Google makes a commitment to help with genomics research by joining the Global Alliance for Genomics and Health. Google is expanding its involvement in medical science around the world by joining the Global Alliance for Genomics and Health as part of an effort to expand and advance genomics research that could keep humans healthier. Google's new membership in the group, which was formed in 2013, was announced by Jonathan Bingham, a Google product manager, in a Feb. 27 post on the Google Research Blog. "Generating research data is easier than ever before, but interpreting and analyzing it is still hard, and getting harder as the volume increases," wrote Bingham. "This is especially true of genomics. Sequencing the whole genome of a single person produces more than 100 gigabytes of raw data, and a million genomes will add up to more than 100 petabytes. In 2003, the Human Genome Project was completed after 15 years and $3 billion. Today, it takes closer to one day and $1,000 to sequence a human genome." All of this information "carries great potential for research and human health—and requires new standards, policies and technology," he wrote. "That's why Google has joined the Global Alliance for Genomics and Health. The Alliance is an international effort to develop harmonized approaches to enable responsible, secure, and effective sharing of genomic and clinical information in the cloud with the research and health care communities, meeting the highest standards of ethics and privacy." Some 146 organizations from 21 countries around the world are members of the group so far, including Boston Children's Hospital, Brigham and Women's Hospital, California Institute of Technology, Canada Health Infoway, Canadian Institutes of Health Research, Chinese Academy of Sciences, Dana-Farber Cancer Institute, Genome Institute of Singapore, Harvard University, Indian Society of Human Genetics, Johns Hopkins University School of Medicine, Massachusetts General Hospital, Melbourne Genomics Health Alliance, New York Genome Center, Osaka University Graduate School of Medicine, SIB-Swiss Institute of Bioinformatics, Spanish Institute of Bioinformatics, Spanish National Cancer Research Center, St. Jude Children's Research Hospital, Stanford University, University of California Health System and the University of Cape Town.
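As a quick sanity check of the storage figures Bingham quotes, the arithmetic below uses only the numbers from the post itself (the 100 GB per-genome estimate comes from the article; nothing else is assumed).

```python
# Back-of-the-envelope check of the quoted data volumes.
raw_per_genome_gb = 100          # "more than 100 gigabytes of raw data" per genome
genomes = 1_000_000              # "a million genomes"

total_gb = raw_per_genome_gb * genomes
total_pb = total_gb / 1_000_000  # 1 PB = 1,000,000 GB in decimal units
print(f"{total_pb:.0f} PB for {genomes:,} genomes")  # 100 PB, matching the article
```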
<urn:uuid:12ef30ca-c59a-4044-bfa8-2d777c12efdd>
CC-MAIN-2017-04
http://www.eweek.com/cloud/google-dives-into-genomics-research.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00282-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932534
459
2.515625
3
Cloud computing is a new computing paradigm that aims to provide reliable, customized, quality-of-service (QoS) guaranteed dynamic computing environments for end users. The cloud computing paradigm has several aspects, including distinct features and enabling technologies. Cloud computing involves researchers and engineers from various backgrounds, e.g., Grid computing, software engineering and databases, who work on the cloud platform from their different viewpoints. Computing clouds provide large-scale deployment and usage capacity, which helps explain the rapid ongoing adoption of cloud computing and hosting services. Conceptually, end users acquire computing platforms or Information Technology infrastructure from computing clouds and then run their applications inside the cloud. Computing clouds therefore give users services for accessing hardware, software and data resources, and thereby an integrated computing platform as a service, in a fully transparent way. On top of hardware as a service (HaaS), software as a service (SaaS) and data as a service (DaaS), cloud computing can also deliver Infrastructure as a Service (IaaS). Users can thus subscribe on demand to the computing infrastructure they need, specifying their hardware configuration, software installation and data access requirements. Cloud computing services can be accessed with simple and pervasive methods. In effect, cloud computing adopts the concept of utility computing: users obtain and use computing platforms in computing clouds as easily as they access a traditional public utility (such as electricity, water, natural gas or the telephone network). Cloud interfaces do not force users to change their working habits and environments, e.g., programming language, compiler and operating system. This feature differentiates cloud computing from Grid computing, where users have to learn new Grid commands and APIs to access Grid resources and services. The cloud client software that must be installed locally is lightweight, and cloud interfaces are location-independent and can be accessed through well-established channels such as Web services frameworks and the Internet browser.
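As a rough illustration of the point about well-established interfaces, an on-demand IaaS request can be nothing more than an ordinary HTTPS call. The endpoint, token and JSON fields below are invented for the sketch and do not correspond to any particular provider; the point is the mechanism, not the specific API.

```python
# Minimal sketch of subscribing to infrastructure through a plain web interface,
# using only the Python standard library. All names here are hypothetical.
import json
import urllib.request

def request_instance(api_base, token, cpus, ram_gb, image):
    """Ask a (hypothetical) IaaS provider for a virtual machine."""
    body = json.dumps({"cpus": cpus, "ram_gb": ram_gb, "image": image}).encode()
    req = urllib.request.Request(
        f"{api_base}/instances",
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # e.g. {"id": "...", "state": "provisioning"}

# Usage against an imaginary provider (not executed here):
# info = request_instance("https://cloud.example.com/v1", "MY_TOKEN", 2, 8, "ubuntu-20.04")
```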
<urn:uuid:f885ae3c-0732-470c-b5cb-9b88d266a56d>
CC-MAIN-2017-04
http://www.myrealdata.com/blog/189_cloud-computing-a-vibrant-technology
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00400-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926017
414
3.5
4
Moshfegh A.J. (U.S. Department of Agriculture), Holden J.M. (U.S. Department of Agriculture), Cogswell M.E. (Division for Heart Disease and Stroke Prevention), Kuklina E.V. (Division for Heart Disease and Stroke Prevention), and 6 more authors. Morbidity and Mortality Weekly Report, 2012. Background: Most of the U.S. population consumes sodium in excess of daily guidelines (<2,300 mg overall and 1,500 mg for specific populations). Excessive sodium consumption raises blood pressure, which is a major risk factor for heart disease and stroke, the nation's first and fourth leading causes of death. Identifying food categories contributing the most to daily sodium consumption can help reduction efforts. Methods: Population proportions of sodium consumption from specific food categories and sources were estimated among 7,227 participants aged ≥2 years in the What We Eat in America, National Health and Nutrition Examination Survey, 2007-2008. Results: Mean daily sodium consumption was 3,266 mg, excluding salt added at the table. Forty-four percent of sodium consumed came from 10 food categories: bread and rolls, cold cuts/cured meats, pizza, poultry, soups, sandwiches, cheese, pasta mixed dishes, meat mixed dishes, and savory snacks. For most of these categories, >70% of the sodium consumed came from foods obtained at a store. For pizza and poultry, respectively, 51% and 27% of sodium consumed came from foods obtained at fast food/pizza restaurants. Mean sodium consumption per calorie consumed was significantly greater for foods and beverages obtained from fast food/pizza or other restaurants versus stores. Implications for Public Health Practice: Average sodium consumption is too high, reinforcing the importance of implementing strategies to reduce U.S. sodium intake. Nationwide, food manufacturers and restaurants can strive to reduce excess sodium added to foods before purchase. States and localities can implement policies to reduce sodium in foods served in institutional settings (e.g., schools, child care settings, and government cafeterias). Clinicians can counsel most patients to check food labels and select foods lower in sodium.
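A little arithmetic puts the reported figures in perspective; the inputs below come from the abstract, and the calculation itself is straightforward.

```python
# Simple calculations on the survey values quoted in the abstract.
mean_intake_mg = 3266   # mean daily sodium consumption, excluding table salt
guideline_mg = 2300     # general daily guideline (<2,300 mg)
top10_share = 0.44      # 44% of sodium came from the 10 listed food categories

excess_pct = (mean_intake_mg - guideline_mg) / guideline_mg * 100
top10_mg = mean_intake_mg * top10_share
print(f"Mean intake exceeds the guideline by about {excess_pct:.0f}%")    # ~42%
print(f"The top 10 categories account for roughly {top10_mg:.0f} mg/day")  # ~1,437 mg
```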
<urn:uuid:d2a67f0d-37dc-41a2-8010-186aea505103>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/div-for-heart-disease-and-stroke-prevention-2142355/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00428-ip-10-171-10-70.ec2.internal.warc.gz
en
0.903914
447
3.21875
3
The National Oceanic and Atmospheric Administration (NOAA) is preparing to launch a geostationary satellite that can scan the earth from North Pole to South Pole in five minutes. The Geostationary Operational Environmental Satellite-R (GOES-R) will provide weather reports to meteorologists, enabling them to observe weather patterns in the Western Hemisphere develop in near-real time. GOES-R is the first in a series of four satellites that will provide weather forecasts; it will launch at 5:40 p.m. on Nov. 4. “GOES-R is a quantum leap above and beyond its NOAA predecessors,” said Stephen Volz, assistant administrator for NOAA’s Satellite and Information Service, at a press conference on Oct. 4. “U.S. forecasting supports the world.” The four satellites will sustain coverage through 2036, according to Greg Mandt, NOAA’s GOES-R program manager. Mandt said that these satellites are the most sophisticated ones NOAA has ever launched and contain technological abilities previous systems lacked. For example, GOES-R is equipped with an Advanced Baseline Imager (ABI), which can take pictures revealing area imagery and radiometric information about Earth’s oceans, weather, and environment. Mandt said the ABI functions five times faster than previous satellites at a resolution four times clearer. GOES-R is also furnished with a Geostationary Lightning Mapper (GLM), which will measure lightning activity within clouds, between clouds, and on the ground throughout the Americas. In addition to rain and winds, GOES-R will be able to measure other natural occurrences, such as volcanic ash. Mandt used the example of a volcano that erupted a few days ago in Mexico as one of the phenomena on which GOES-R will be able to collect data. The satellite will be able to update scientists with information every five minutes. Scientists can also use the satellite to conduct detailed studies on specific events. “We’re really excited to receive data,” said Louis Uccellini, Director of the National Weather Service (NWS). “We’re ready for this data as it flows.”
<urn:uuid:b277a698-e7f4-49f9-96d5-d555f7717aac>
CC-MAIN-2017-04
https://www.meritalk.com/articles/noaa-expects-big-data-from-most-advanced-satellite-to-date/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00336-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915298
463
2.671875
3
For people who've had a stroke, a treatment that involves applying an electric current to the brain may help boost recovery of their mobility, a small clinical trial found. Stroke is the most common cause of severe, long-term disability. Rehabilitation training, which helps patients re-learn how to use their bodies, can help some patients recover their ability to move. But it is often costly and time-consuming. The new study looked at 24 patients; each had experienced a stroke that affected his or her ability to move a hand and arm. Half of the participants were picked, at random, to receive nine days of rehab paired with a brain-stimulation technique known as transcranial direct current stimulation (tDCS). This method uses electrodes placed on the scalp to deliver constant, low electrical currents to specific areas of the brain. The other patients received a sham control treatment; they were fitted with electrodes but did not receive tDCS. Compared to the control group, patients who received brain stimulation and rehab were better able to use their hands and arms for movements such as lifting, reaching and grasping objects, the researchers found. "It was hard work for the patients. They had to come into the lab every day for two weeks," study co-lead researcher Heidi Johansen-Berg, a neuroscientist at the University of Oxford in England, told Live Science. But the findings showed that "we can speed up stroke rehab with brain stimulation," Johansen-Berg said. "If we could routinely add brain stimulation to rehabilitation, this could help ensure that each patient reaches their true potential for recovery." Magnetic resonance imaging (MRI) scans of the patients' brains revealed that these benefits, which lasted for at least three months, were associated with higher levels of activity in the brain's motor cortex (which controls voluntary movements) during movement, as well as an increased amount of brain matter in the motor cortex. Previous research showed that tDCS could boost motor learning in healthy individuals. This led scientists to explore whether tDCS might also help reinforce patients' rehab training, the researchers said. "The training was exhausting, like being in the gym every day, but it was huge fun," a study participant named Jan said in a statement. "Even after the first session, I felt as if I could do more, even though I was knackered. That made me go back every day, and I found it easier and easier." The stimulation felt like a mild tingle or a static electric shock, Jan said. "The worst part was that my head itched afterwards." "I have definitely improved and benefited," Jan added. "People who haven't seen me say, 'Wow — you can move better now.'" "For many patients after stroke, there is minimal opportunity to regain lost functions; tDCS has the potential to make the brain more plastic and so more responsive to treatment," said Marom Bikson, a biomedical engineer at City College of New York who was not involved in the study. "This is a well-controlled clinical trial toward that goal." In the future, the researchers would like to conduct a larger clinical trial "to understand who benefits most or least from this approach," Johansen-Berg said. How safe is this kind of brain stimulation? 
"This is an important question, as although this method is noninvasive — that is, we don't have to open up the skull — we are putting electrical current into people's brains, and this is not something that should be done lightly," Johansen-Berg said. "We need to be careful about how much current is being applied, and for how long. "As this type of stimulation can boost learning, it could potentially be used as a cognitive enhancer in healthy people," Johansen-Berg said. "However, there is much still to be understood about how it works and what its long-term effects are, so we should be cautious before progressing to widespread use of the approach." The new findings are published online today (March 16) in the journal Science Translational Medicine. Copyright 2016 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed. News Article | March 18, 2016 The need to non-invasively see and track cells in living persons is indisputable. Emerging treatments using stem cells and immune cells are poised to most benefit from cell tracking, which would visualize their behavior in the body after delivery. Clinicians require such data to speed these cell treatments to patients. Researchers now describe a new highly sensitive chemical probe that tags cells for detection by MRI. News Article | April 14, 2016 Cervical cancer patients without enlarged lymph nodes could benefit from SPECT-MRI imaging of their sentinel lymph nodes (SLNs) to assess whether metastases are present. The image looks like something out of a SciFi movie: metal flows out of one spot, reaching towards its companion piece opposite, growing little fingers to explore its surroundings. But this is not SciFi, it is real life. This is what it looks like when lithium electrodes are used in a lithium-ion battery. The lithium metal crystal structure experiences stress as it cycles through discharges and recharges. This stress forces parts of the metal to extrude away from the body of the electrode. It is a common phenomenon, especially in pure metals carrying electrical charges, known as dendrite formation. When the dendrites bridge the gap between the anode and the cathode, a short circuit occurs. In the best case, the battery life is shortened. In the worst, the battery starts heating itself up until the point of danger: a fire or explosion can occur. In theory a pure lithium electrode would be the ideal electrode for a lithium-ion battery because of the favorable properties of lithium metal and because it allows a cycle of lithium ions into the solid (metal) and back into solution (lithium ions) again during use and recharging. Dendrites give battery engineers big headaches which they try to resolve by using clever alternative materials for the electrodes or additives in the electrolyte. But slow, laborious and often destructive analytical methods delay the experimental feedback necessary to finding better materials and optimizing battery lifespan and safety. The news of a real-time, in situ, 3-D imaging method developed by scientists at the Department of Chemistry of New York University spurs hopes that we will witness break-throughs in battery technology as researchers can immediately see what is happening inside batteries. 
This technique may also be useful in the emerging field of battery safety testing, which becomes increasingly necessary as a spate of battery incidents has led to restrictions of devices on airplanes and consumer fears about the products containing these batteries, ranging from Samsung Galaxy Note 7 smartphones to Tesla electric cars. The chemists achieved the speed necessary for real-time imaging by looking not at the dendrites but at the electrolyte in the space around them. The distortions around the dendrites on the MRI images act like "shadows" that can be used to visualize the dendrites that cast the shadow. The other methods in use to examine dendrites usually involve opening the battery up, which disturbs the chemistry and the delicate dendrite structures. And, of course, these methods are useless for seeing what is happening when the batteries are in use. As our future seems dependent on a myriad of devices powered by batteries, especially as these batteries start to give us more independence from fossil fuels, any advance in the science of better batteries and battery safety will be welcome. Read more about Real-time 3D imaging of microstructure growth in battery cells using indirect MRI News Article | January 18, 2016 Andrew Petrulis is finally getting some rest. For years, he didn’t want to fall asleep. He was out of the war but sleep put him back in it. His dreams replayed scenes from 11 years of active-duty service as a member of a US Air Force explosive ordnance disposal unit. Master Sgt. Petrulis defused roadside bombs and other improvised explosives with a robot, or sometimes his own hands, throughout Iraq, Afghanistan, and Southwest Asia between 2002 and 2013. He received the Bronze Star twice. He shot at people and got blown up. Bombs went off within feet of him. The explosions rattled his brain. He relived these scenes, over and over, in nightmares. After an honorable discharge, returning home, and joining the reserves in 2013, an MRI showed scar tissue on his brain. The VA diagnosed Petrulis with traumatic brain injury, severe post-traumatic stress disorder, tinnitus, Achilles and kneecap tendonitis, and depression. The VA rated his disabilities at a combined 140 percent, with PTSD, which his life now revolves around, accounting for 70 percent of that rating. But he was still functional in the sense that he could eat and go to the bathroom on his own. The VA ultimately declared him a 90 percent disabled veteran. He was running on fumes, getting only two or three fitful hours of sleep each night. He had regular panic attacks. Weekly night terrors. Vivid nightmares every other day, or so. He locked himself in his house, alone. Sometimes he’d drink on the couch until he passed out. But mostly he was too afraid to close his eyes. “It got really, really bad,” Petrulis, now 31, tells me. “I couldn’t do anything. So I’d just stay up.” Things are different today. Three or four nights a week, after tucking himself in bed, Petrulis slides a prototype 17-pound weighted blanket over his sheets. The blanket is roughly 3 feet wide by 6 feet long and looks a bit like 60 or so 4 x 4 inch bean bags handstitched together. The pockets are each stuffed with polypropylene pellets and a sort of memory foam material. Petrulis is a big guy—6'2", 250 pounds—but the blanket’s weight spreads evenly over him. “I feel safer when it’s covering my entire body,” Petrulis explains. No one can bother him this way. “It sets my mind up for sleeping hard that night.” Which he does. 
What happens, exactly, while he’s under such pressure? It sounds almost too good to be true. Whatever it is, can heavy blankets help other veterans with combat-related sleep problems get some rest too? What about restless deployed troops? Can heavy blankets offer them relief? The underlying idea is dead simple: create a cocooning embrace, like being swaddled. Petrulis compares it to a firm, comforting hug. According to Gaby Badre, a leading sleep researcher who’s studied weighted blanket therapy for treating insomnia in adults, there is good reason to believe this is because the deep pressure touch of a weighted material spread over part or all of the body dials down the fight-or-flight arousals of the sympathetic nervous system. (It’s generally accepted that a weighted blanket should be at least 10 percent the person’s body weight.) There is also speculation that lying under heavy constant pressure such as a weighted blanket feels good because it somehow lights up the brain’s reward center, probably triggering the release of neurotransmitters like serotonin and dopamine. But that’s about the extent of our understanding of the science beneath weighted blankets. No one knows precisely what goes on in the brain and throughout the body under this kind of pressure; whether the mechanism is mere placebo, or if something else altogether makes lying under a weighted blanket feel so reassuring and safe that it could bring deep, restorative sleep to those who need it but can’t otherwise get it on their own. It’s this mystery that still largely colors weighted blankets as non-evidence-based folk remedies to sleep disorders. They have shown promise as anti-anxiety and stress-relief aids in the very young and the very old. There is data and evidence to support claims that heavy blankets can help calm children with attention deficit hyperactivity disorder, autism, and other sensory disorders, as well as elderly people with dementia, added Badre, who’s been studying sleep since the late 1980s and currently oversees sleep medicine clinics at The London Clinic, the Institute of Neuroscience and Physiology at the University of Gothenburg, and SDS Kliniken. The between years, from roughly age 14 through 60, are murkier. There just isn’t sufficient data from clinical experience, at least not yet. There is hardly any supporting research, just anecdotal evidence, that shows the potential of weighted blankets having the same arousal-reducing effects as well as sleep-inducing ones in adult populations, including combat veterans like Petrulis. No small number of Iraq and Afghanistan war vets have trouble sleeping. Among patients of the Veterans Health Administration, the healthcare arm of the Department of Veterans Affairs, in 2015, 1,262,393 veterans—over 20 percent—had a sleep disorder diagnosis in the past two years, according to a VA representative. Those million-plus diagnosed sleep disorder cases, to say nothing of undiagnosed cases, are all different; various external factors like back and other muscular, skeletal, and neurological issues, plus prescription drug histories, bring unique forces and circumstances to bear on combat-related sleep disturbances. Petrulis is one veteran battling sleep after war. And one veteran reporting positive results, with no apparent side effects, from a non-evidence-based sleep aid is notable. But it’s not enough to convince the government to fund or conduct clinical research into that aid. Neither the VA nor the Department of Defense are exploring weighted blanket therapy. 
Petrulis and Chelsea Benard, a licensed occupational therapist who introduced him to weighted blanket therapy in the fall of 2015, wonder why not. Petrulis and Benard, who handstitched the 17-pound blanket Petrulis currently uses, don’t think the blanket is a cure-all for his sleep problems, but rather a promising, albeit under-researched supplement to other evidence-based treatment options for sleep and anxiety issues. “What’s neat is it’s a non-pharmacological approach that can be used as a complement tool to any other kind of treatment,” says Benard, who had the idea to try out weighted blankets with adult patients after she saw success using them on kids. “It’s not going to have any side effects.” She and Petrulis genuinely believe the technique can help people like him who cope with combat-related PTSD or TBI, whose core symptoms include sleep disturbances. And he says he’s tried just about everything when it comes to sleep. The VA initially prescribed him Ambien, which he tried once with no luck. The VA then upped the dosage, but still nothing; he’d sleep a few hours, then be up the rest of the night. They also put him on Valium for panic attacks, but that didn’t help either, even after an upped dosage. The VA currently has him on Prazosin, a blood pressure medication developed in the 1980s that’s been shown to stanch night terrors, and also has him on Klonopin, an anti-anxiety drug, for panic attacks. He says the Klonopin isn’t working, and is unsure whether or not Prazosin is helping. When he tries to power down at night, his brain is often going a million miles an hour. Except while he’s under the weighted blanket. He says it’s the only thing that helps him sleep. Nothing else gets him in a place at the end of the day where he can calm down and drift off. To this day, he hasn’t had a nightmare with the blanket on. But bad dreams still haunt him. They come when he isn’t sleeping under the blanket, and they often begin at home in Higganum, Connecticut, with Petrulis surrounded by family and friends. Then he’s driving a Humvee around town. He turns a corner, and suddenly he’s in Baghdad or Kandahar or some other place where he’s fighting for his life. He steps out of his vehicle and there’s a guy pointing a gun at him. Petrulis raises his M4 rifle, pulls the trigger. But it won’t fire. He keeps pulling the trigger and the guy either shoots Petrulis, or Petrulis dreams he shoots the guy. That bad dream hasn’t come around in awhile. It’s a scene from January 2, 2006, the first time Petrulis was blown up. He was driving a Humvee through Kandahar when a vehicle-borne improvised explosive device—a car bomb—detonated 10 feet from the armored vehicle. Everything went black. His gunner’s face was covered in third-degree burns. It was the first time Petrulis realized, “Hey, I’m not invincible.” He has lived with that memory—that bad dream—for years, reliving it over and over again in his sleep. Petrulis (right) with fellow bomb squad member and bomb-defusing robot (back left), near Forward Operating Base Giro, Afghanistan, May 2011. Photo: Andrew Petrulis His dreams have expanded with time. Most recently they’ve taken on an Inception-like, dream-within-a-dream quality. Petrulis will be disarming IEDs when suddenly he “wakes up.” “Oh my god,” he thinks to himself. “It was just a dream. I’m glad I’m not at war.” He’s fine. He’s in his bedroom. He gets up, walks outside, and guess what? There’s the war again. There’s nowhere for him to take cover. Enemy rounds are popping off over his head. 
He’s dodging RPG fire. He starts freaking out. Is this reality? Then he wakes up again. This time he’s screaming. He really is awake. “These dreams are so real,” he tells me, almost exasperated. “I can’t express how real they are, even when I wake up that second time. It’s almost like I have feelings in the dream. I physically feel in the dreams.” These layered nightmares are so visceral he has panic attacks when he surfaces from them. He’ll be soaked in sweat, unable to get back to sleep. Why would he want to? The psychologist he saw while on active duty recommended Petrulis keep a dream journal. So he writes down a lot of these nightmares. He’s found it helps his brain comprehend them. Writing this raw material down is a key stage of image rehearsal therapy, an evidence-based treatment for nightmares, said Wendy Troxel, a clinical and health psychologist who does sleep research in both civilian and military concentrations. Image rehearsal therapy involves patients “rescripting” their dreams, and is one in a range of evidence-based treatment options for enduring psychic wounds of modern war like PTSD and insomnia. These options include medications like Prazosin, the anti-nightmare drug Petrulis currently takes; prolonged exposure therapy for PTSD; and cognitive behavioral therapy for insomnia. Troxel told me she’s never heard of weighted blankets. As the co-principal investigator of an exhaustive 2015 RAND report on military sleep, she would be cautious about discussing any potential remedy for sleep disorders or PTSD that isn’t evidence-based. Which a heavy blanket is not. “If you can find science behind it, that’s one thing,” she wrote in an email. “But I would be very skeptical.” A review of the literature brings up just one randomized controlled trial examining the efficaciousness of weighted blankets on any psychological health outcome, according to Dr. Daniel Evatt, chief of research production at the Department of Defense Deployment Health Clinical Center. The study, published in 2014 in the journal Pediatrics, found that autistic children and their parents preferred weighted blankets over regular ones (the blankets were “well tolerated”). But the findings also reported that the weighted blankets did not improve overall sleep time for the children any more than the traditional blankets. In a written statement, Evatt said in light of that evidence, clinicians “might incorporate initial evidence that weighted blankets may be preferred and well tolerated and suggest that weighted blankets could be considered like any other bedding accessory and advise patients to use those bedding accessories that work for them.” “On the other hand,” Evatt added, “clinicians should be cautious of alternative treatments such as weighted blankets that are advertised with unsupported claims and that could be sought out by some patients in lieu of treatments that have the support of a body of scientific evidence." Dr. Vincent Mysliwiec, the US Army Surgeon General’s sleep medicine consultant, is aware of heavy blankets used for sleep. “From my understanding it’s kind of like a Beanie Baby,” says Mysliwiec, who authored a 2013 American Academy of Sleep Medicine study on active duty military personnel prone to sleep disorders and short sleep duration. 
“You’ve got this blanket with these tactile-like senses that you can, like, sense while you’re sleeping.” Mysliwiec is not familiar, however, with any scientific or medical-based studies that have established weighted blankets as an efficacious sleep therapy for any patient population, not just military. Kind of like a Beanie Baby. That’s about as good of an explanation as any. Or, maybe, kind of like floating. Gaby Badre doesn’t have problems sleeping, though he’s tried sleeping under a weighted blanket anyway. He’s also spent time soaking in a sensory deprivation tank, and thinks that somehow the two experiences can share a core operating principle. “The floating situation is really interesting,” says Badre. “You’re floating. It’s the same thing if you’re under deep pressure that is evenly distributed, so that you don’t feel a change in stimulation. You don’t get more stimulated by moving in your bed.” Badre is at the forefront of clinical weighted blanket therapy research in adult populations. He led a 2015 study on the positive effects of weighted blankets in adults aged 20 to 66 with intrinsic insomnia, or insomnia not secondary to medical or psychiatric disorders. The weighted blanket used in that study was a Swedish-designed product with adjustable metal chains (providing adequate pressure, depending on body weight). The Swedish heavy blanket used in Badre's study. Photo: Gaby Badre The results were published in the Journal of Sleep Medicine & Disorders, and found that a weighted blanket might aid in decreasing insomnia and, as such, “may provide an innovative, non-pharmacological approach and complementary tool to improve sleep quality.” Badre says there are two issues at play. “We know that deep pressure with a consistent sensory input decreases the level of arousal,” he explains. “The other aspect is that tactile stimulation can decrease the activity of the sympathetic nervous system. We know that an increase in sympathetic activity will increase arousal.” That might be the limit of our understanding of the science beneath weighted blankets, but for Badre it seems to be enough to justify using one. “I think everything that can give you this cocooning and monotonous tactile stimulation can have a positive impact,” he tells me. A positive impact is one thing. A body of evidence supporting that impact is another. Badre admits we simply need more clinical data before considering weighted blankets as anything other than an alternative approach to treating sleep disorders in adult populations. That includes active-duty military personnel and veterans. Badre says he has worked with at least one former member of the US military—a Marine who’d act out his nightmares—and thinks weighted blankets can help those with sleep problems related to PTSD. There's even a chance Badre could've been studying weighted blankets to treat such disorders in these types of patients by now, if only it were easier to convince the US military community to provide funding to rigorously research the technique. He would know. It’s unclear which branch of the military he and an American colleague were targeting. Badre says last fall they'd drawn up a weighted blanket research grant proposal, but that according to his colleague the military showed “no enthusiasm” before the idea was even formally presented. The researchers decided to not submit the proposal. What might account for that lack of enthusiasm? Troxel speculates it could be a matter of military funding. 
But it could also be that scant preliminary data on weighted blankets is not enough to support deeper investments from the government. There does seem to be a lack of bandwidth, time, and money among the small handful of weighted blanket providers on the market to commission clinical research. "We have been far too busy making weighted blankets to commission studies, but we would love to do so (or be part of one)," Donna Chambers, founder and CEO of Sensacalm, wrote me in an email. Sensacalm makes weighted blankets for people with autism, ADHD, Asperger's, PTSD, sensory processing disorder, anxiety, dementia, and Alzheimer's. Chambers added that Sensacalm has previously donated blankets to researchers studying them, but has yet to hear back any results. The irony is that the VA, at least, does offer patients weighted blankets and vests. Just not for sleep disorders. They can be ordered through the VA's Rehabilitation & Prosthetic Services and are provided for orthopedic and neurologic balance disorders, such as multiple sclerosis, Parkinson's, ataxia, and stroke, according to a written statement from the VA. Patients must show documentation of medical necessity and how the blanket is an essential component of their treatment plans. This doesn't extend to treating sensory processing disorders, post-traumatic stress, and anxiety, the statement adds. That's one way of putting it. "We can't necessarily prescribe this because it's not a medical device," says Mysliwiec, the US Army Surgeon General's sleep medicine consultant. That's another way. That doesn't mean Mysliwiec thinks there's nothing to lying under a weighted blanket, however. He thinks it can play a role in people getting better sleep. It's OK to use one, Mysliwiec admits, so long as it doesn't cause a person any side effects. Blankets are probably not fit for people with disruptive breathing disorders like sleep apnea, or who have underlying heart or lung conditions. In those cases Mysliwiec would not consider weighted blankets appropriate or exactly safe. But for people like Petrulis, who does not report sleep breathing disorders or any underlying heart or lung conditions, and for whom sleeping under a weighted blanket helps, Mysliwiec doesn't think using one is a problem, and he sees no significant side effects either. For their part, Petrulis and Benard are trying to get the word out and scale up production of her proprietary blanket model. Benard tells me she's used weighted blankets with at least 200 adult patients over roughly the past five years; about 80 of them were veterans, and she says every one of them had symptoms of anxiety or disturbed sleep negatively impacting their quality of life. Others with PTSD, including rape victims, have also reached out to her, asking how they can get their hands on a blanket. Benard has 10 blankets as of this writing. She was waiting on a bulk order, paid entirely out of pocket, of 1,000 pounds of the polypropylene pellets and memory foam-like material (she did not elaborate on either) to hopefully make a bunch more blankets. At the same time, she's trying to figure out if bulk manufacturing would even make sense. "I'm not sure we'll be able to keep up with the demands," Benard tells me. She and a small group of volunteers still sew blankets by hand. She does hope to launch a non-profit, called Snug as a Bugz, centered on her model of "battle blanket." She has a natural spokesperson in Petrulis, who is currently helping her raise funds to get more material to make more blankets. 
Since launching a GoFundMe campaign on December 5, 2015, “over 400” people have contacted Benard in support of weighted blankets or requesting more information; they have all either been combat veterans or families or friends of veterans. All the money raised through the campaign would go toward making more blankets—Benard and Petrulis would take none of the cut. The pair hopes to get blankets in the hands of as many vets as possible, including at retreat centers like Virginia-based Boulder Crest, a privately-funded rural wellness retreat for combat vets and their families. I asked Josh Goldberg, director of strategy at Boulder Crest, if the retreat would ever consider keeping a few weighted blankets onsite for guests with sleep disorders. “I would absolutely not rule it out,” said Goldberg. He was careful not to endorse weighted blankets outright, but did say Boulder Crest is “very open minded to the fact that a lot of things that are non-clinical in nature can be very, very effective at giving people the peace that they need to live the life they deserve to live in.” Lying under a heavy blanket has given Petrulis a little bit of that peace. He sleeps and feels a lot better than he did just a few years ago because of it. If the potential is there for something as dead-simple as weighted blankets to help other vets with sleep issues and perhaps even deployed troops with similar problems, Petrulis wants military brass to understand something. “I want the military to really understand that this is something—and they really don’t know about it, or talk about it, and there’s no information on this—that will drastically help people even if you just have ‘em in your mental health units,” he says. “Or bring them into every EOD shop.” If someone’s having a bad flashback or is unable to sleep, just wrap them up. “It’s such an easy thing to do.” He’s talking about people who are deployed, who are in war zones, not just vets at home. A reality of modern war is that a generation of tired troops are being raised up through the ranks, and that has a big impact on sleep and life during and after war. The 2015 RAND military sleep study Troxel co-authored included 1,957 participants from across all four branches of the armed forces, and found a “high prevalence” of sleep issues like poor sleep quality, nightmares, insufficient sleep duration, and daytime sleepiness among those subjects. The participants were “older and all married,” Troxels points out. Their battle rhythms are in stark contrast—they’re just not as zipped up—compared to the twentysomething deployed men who Troxels says are the highest-risk demographic today for, say, slugging energy drinks. And then crashing. “It’s concerning that we’re raising this population of servicemembers who are using a variety of techniques to stay awake, which then further compromises sleep,” Troxel says. “It’s a vicious, perpetuating cycle of trying to stay awake and then not being able to fall asleep at night, which perpetuates not being able to sleep the subsequent night.” “There’s something endemic to military culture that’s contributing to sleep problems,” she adds. It can be hard to sleep while at war. But adjusting to sleep long after battle can be a war unto itself. Petrulis misses the person he used to be, before the PTSD and sleepless nights. But he knows he will never be that person again. “I feel like a person who pops 30 pills a day from 10 different doctors all trying to figure out what's wrong with me and how to help me,” he says. 
“I used to care but now I don't. I feel like a test subject that is fed pills until my brain is numb and I don't have to think anymore.” He has good and bad days. Today, he is in the process of being medically retired from the reserves. He heads up a CrossFit gym, which recently celebrated its one-year anniversary. He brings the blanket to work sometimes, wrapping himself up there if he can’t concentrate. Or if he had a bad night sleeping. He seems somewhat relieved that sleeping with the blanket some nights is helping him sleep during nights when he doesn’t use it. The night terrors are down to once every two weeks, but he’s still having issues allowing his body to rest. He does report sleeping soundly the three or four nights per week he currently sleeps under the blanket, accepting its weight. He pulls it down toward his feet as these nights progress. Sometimes bad dreams happen after that, but never when the blanket is physically on top of him. When he wakes up it’s not on him at all. Eventually Petrulis hopes to wean himself off the blanket entirely. “I want to try and be a normal human being again,” he says. There’s a hesitance in his voice, as if staring down a long struggle ahead. “I don’t want to go to sleep still.” You’ll Sleep When You’re Dead is Motherboard’s exploration of the future of sleep. Read more stories.
<urn:uuid:754068c7-51e0-4206-8e09-1f8ae88f0930>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/mri-528938/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00080-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960978
7,111
3.328125
3
Charity uses hospital ships to bring medical services such as tumor removal to countries in the Caribbean and West Africa. Mercy Ships brings help and healing to poor nations around the world, according to Kelvin Burton, Mercy Ships chief technology officer. Founded in 1978, the organization depends on the volunteer efforts of doctors, dentists, nurses, teachers, cooks, seamen, engineers, community developers and others. Over the years, Burton said, Mercy Ships has: performed approximately 2 million services worth more than $250 million and has affected more than 2.5 million people; treated more than 300,000 people in village medical clinics; performed approximately 18,000 surgeries; performed approximately 110,000 dental treatments; and completed nearly 350 construction and agricultural projects. In addition, the Mercy Ships fleet has visited more than 500 ports in more than 50 developing nations and 17 developed nations. Click here to read the related story on how Mercy Ships is using Borlands JBuilder. "What we do is take our ... hospital ships to Third World situationsprimarily West Africa and the Caribbean in the last few years," said Burton. "The Caribbean being places like the Dominican Republic, Honduras, Belize, Nicaragua and Guatemala. And in West Africa, its been Benin, Togo and Sierra Leoneand most recently in Liberia." Burton said the United Nations pushed very hard for Mercy Ships to go to Liberia, and "we just sailed out of Monrovia to South Africa to do our annual refit of the ship, and then well be going back to Monrovia." Indeed, the most visible part of Mercy Ships efforts is the medical work the organization performs. "The surgeries we do are typically life-changing surgeries, like cataract removal, tumor removal, and various sorts of cleft lip and cleft palate surgeriesthings that are totally debilitating but dont take dramatic amounts of surgery to produce radical changes," Burton said. "So thats the focus of the surgery, and primarily because of the fact that it requires relatively short ward time and ward space is critical in these situations," Burton said. "We can help a whole lot more people if they dont have to spend three months recovering, and most of our patients can recover in a week." Meanwhile, although Burtons IT staff does not produce systems for the medical operations of the ships, he said, Mercy Ships is seeking a grant to enable his staff to develop applications that will better enable doctors on the ships to consult with specialists remotely on cases that might require outside consultation. Check out eWEEK.coms for the latest news, reviews and analysis in programming environments and developer tools.
<urn:uuid:0237cebb-8994-470d-89ac-a24cf8351151>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Application-Development/Mercy-Ships-Has-Helped-Millions
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00346-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95097
552
2.53125
3
According to NASA, WISE has two coolant tanks that keep the spacecraft's normal operating temperature at 12 Kelvin (minus 438 degrees Fahrenheit). The outer, secondary tank is now depleted, causing the temperature to increase. One of WISE's infrared detectors, the longest-wavelength band most sensitive to heat, stopped producing useful data once the telescope warmed to 31 Kelvin (minus 404 degrees Fahrenheit). The primary tank still has a healthy supply of coolant, and data quality from the remaining infrared detectors remains high, NASA stated. WISE's infrared telescope and detectors are kept chilled inside a Thermos-like tank of solid hydrogen, called a cryostat. This prevents WISE from picking up the heat, or infrared, signature of its own instrument. The solid hydrogen, called a cryogen, was expected to last about 10 months -- the mission launched in December 2009. WISE observes infrared light, letting it show the darkest components of the near-Earth object population -- those that don't reflect much visible light. Visible-light estimates of an asteroid's size can be deceiving, because a small, light-colored space rock can look the same as a big, dark one. In infrared, however, a big dark rock will give off more of a thermal or infrared glow and reveal its true size, NASA stated. NASA said WISE completed its primary mission, a full scan of the entire sky in infrared light, on July 17, 2010. The mission has taken more than 1.5 million snapshots so far, uncovering hundreds of millions of objects, including asteroids, stars and galaxies. It has discovered more than 29,000 new asteroids to date, more than 100 near-Earth objects and 15 comets, NASA said. For now WISE is performing a second survey of about one-half the sky. It's possible the remaining coolant will run out before that scan is finished. Scientists say the second scan will help identify new and nearby objects, as well as those that have changed in brightness. It could also help to confirm oddball objects picked up in the first scan, NASA stated. Data from the mission will be released to the astronomical community in two stages: a preliminary release will take place six months after the end of the survey, or about 16 months after launch, and a final release is scheduled for 17 months after the end of the survey, or about 27 months after launch, NASA stated. Almost as soon as it came online in December, WISE spotted a new, half-mile-wide asteroid some 98 million miles from Earth. The near-Earth object, designated 2010 AB78, circles the Sun in an elliptical orbit tilted to the plane of our solar system. The object comes as close to the Sun as Earth, but because of its tilted orbit, it will not pass very close to Earth for many centuries. This asteroid does not pose any foreseeable impact threat to Earth, but scientists will continue to monitor it. Follow Michael Cooney on Twitter: nwwlayer8
<urn:uuid:3c0c51d3-a3e7-4214-b39e-c7e9915185e8>
CC-MAIN-2017-04
http://www.networkworld.com/article/2231604/security/nasa-universe-watching-satellite-losing-its-cool.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00283-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947265
625
3.3125
3
IBM researchers say they have developed a prototype analog-to-digital converter (ADC) that will quadruple the transfer speeds -- 200 to 400 Gigabits per second (Gb/s) -- of huge data dumps between clouds or data centers. IBM says the ADC could download 160 Gigabytes, the equivalent of a two-hour, 4K ultra-high definition movie or 40,000 songs, in only a few seconds. IBM said the device is a lab prototype, but noted that a previous version of the design has been licensed to Semtech Corp, which will be incorporating the technology into communications platforms expected to be announced later this year. Big Blue says the 64 GS/s (giga-samples per second) chips for Semtech will be manufactured at IBM's 300mm fab in East Fishkill, New York, in a 32 nanometer silicon-on-insulator CMOS process, and each has an area of 5 mm2. The core includes a wide-tuning millimeter wave synthesizer enabling the core to tune from 42 to 68 GS/s per channel with a nominal jitter value of 45 femtoseconds root mean square. The full dual-channel 2x64 GS/s ADC core generates 128 billion analog-to-digital conversions per second, with a total power consumption of 2.1 Watts, IBM stated. An ADC converts analog signals to digital, approximating the right combination of zeros and ones to digitally represent the data so it can be stored on computers and analyzed for patterns and predictive outcomes, IBM says. For example, IBM said scientists will use hundreds of thousands of ADCs to convert to digital the analog radio signals that originated from the Big Bang 13 billion years ago. It's part of a collaboration called Dome between ASTRON, the Netherlands Institute for Radio Astronomy, DOME-South Africa and IBM to develop a fundamental IT roadmap for the Square Kilometer Array (SKA), an international project to build the world's largest and most sensitive radio telescope. (More on Network World: 10 amazing facts about the world's largest radio telescope.) The radio data that the SKA collects from deep space is expected to produce 10 times the current global internet traffic, and the prototype ADC would be an ideal candidate to transport the signals fast and at very low power, a critical requirement considering the thousands of antennas that will be spread over 3,000 kilometers (1,900 miles). As another way of looking at what SKA will generate, it is expected to turn out enough raw data to fill 15 million 64 GB MP3 players every day. The device was presented at the International Solid-State Circuits Conference in San Francisco this week.
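A quick back-of-the-envelope check makes the "few seconds" claim concrete. The sketch below assumes decimal units and the upper end of the quoted range; it is an illustration, not a figure from IBM.

```ruby
# Rough sanity check of the quoted transfer time (assumed decimal units).
file_gigabytes = 160
link_gbps      = 400                       # upper end of the 200-400 Gb/s range

seconds = (file_gigabytes * 8.0) / link_gbps
puts "#{seconds} seconds"                  # => 3.2 (about 6.4 s at 200 Gb/s)
```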
<urn:uuid:7da1e216-06eb-4dfe-b844-01c6b67c2f41>
CC-MAIN-2017-04
http://www.networkworld.com/article/2226351/applications/ibm--prototype-device-supports-400-gb-s-data-transfer-speeds.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00549-ip-10-171-10-70.ec2.internal.warc.gz
en
0.893125
557
2.765625
3
A traditional security system is composed of a variety of subsystems such as video surveillance, access control and intrusion detection alerts. By themselves, each of these systems is a critical component of a sophisticated security infrastructure. But for the most part, these platforms operate as separate components without the ability to communicate with each other. This inability to share critical security information between devices and platforms leads to missing data, which can result in security hazards and lapses. Over the past decade, the focus of security has shifted from analog based systems to IP based solutions. IP security devices enable quicker access to critical data than analog systems because information can be accessed instantly over the network. Yet until today, IP systems fell short in that they had no inherent way of connecting the disparate systems. To combat this, integrators and installers were called upon to bring these siloed platforms together. In combining these separate systems – video surveillance, access control, analytics, intrusion alerts and more – into a single interface, users can take advantage of a higher level of situational awareness to more effectively detect and deter potential security threats. However the process of merging these systems was expensive and complex because the products were not designed for intelligent integration.
<urn:uuid:54b89ede-0fae-469c-924d-bfc7a2f20d8f>
CC-MAIN-2017-04
http://www.bsminfo.com/doc/integrated-and-networked-physical-security-0001
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00273-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962523
239
3.03125
3
A data storage device is any medium onto which a computer can reliably place data and, after some period of time without any power or maintenance, retrieve that data. Examples are tapes, hard disks, floppy disks, CD-ROMS, flash ROMS, optical disks, and, soon, holographic data crystals. All of these devices are typically thought of as contiguous arrays of data storage locations, although that's not always accurate. These might be addressed by the byte, or in larger chunks called blocks. Meanwhile, data to be stored is usually thought of in terms of files. A file is just a collection of arbitrarily associated data. Exactly what goes into a file and how it is formatted depends on the application and user which create it. Determining how a file is placed on a storage device, is the job of a file system. A file system is a mechanism for keeping track of data on a storage media. Generally, a file system controls storage over a fixed array of data, called a partition. In the simplest case, each physical device has one partition. Most storage media can be divided into multiple partitions. You are probably familiar with at least one filesystem, such as Mac OS's HFS+, Windows' FAT32, BSD Unix's UFS, or linux's FFS. These are all used primarily for storing data on hard-drives, and that will be the focus of this article. Hard Drive Geometry Hard disks are not only the most common data storage device, the are also the most complicated. The basic element of a hard disk is, well, a hard disk. Called a platter, the surface is covered with a magnetic coating. The magnetism of the coating can be either sensed or changed by passing a read/write head over the surface. The entire surface is accessed by spinning the platter around a central spindle and then moving the head toward or away from the center. Data is thus written in concentric circles called tracks. When you hear your hard drive making clicking noises as it's being accessed, that's the sound of the heads moving from one track to another. Solid State Drives (SSDs) have none of the geometry limitations of Hard Drives and so are not bound by any of these limitations. Things get complicated when you factor in that each platter has two surfaces and most hard drives have at least two platters. Each surface has its own read/write head. In order to save space and cost, all the heads move in and out together, but only one can be active at a time. As the disk spins, the heads move back and forth reading and writing data. All the tracks that are at the same distance from the center are called a cylinder. A collection of blocks that are at approximately the same angle around the disk are called a sector. So while most data storage devices are thought of as being a linear collection of bytes, a hard drive is really three dimensional. Data must be located by its sector, cylinder, and head (rotation, radius, and height). This will be important later when we talk about optimization. Since hard drives can hold a lot of data, they are almost always divided up into multiple "virtual" storage units called partitions. Typically, a hard drive will have a few partitions that contain drivers and other low level operating system data, and then one partition that has a visible filesystem. All files are stored in this one filesystem. This is traditionally the preferred arrangement for most personal computers since it allows the user the simplest access to all of their storage capacity. 
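Before continuing with partitioning, a small aside may help make the cylinder/head/sector addressing above concrete. The sketch below is the classic textbook LBA-to-CHS conversion; the geometry figures are made-up examples, and real drives hide far messier layouts behind their firmware.

```ruby
# Illustrative only: mapping a linear block address (LBA) onto
# classic cylinder/head/sector geometry.
HEADS_PER_CYLINDER = 16    # example geometry, not a real drive's
SECTORS_PER_TRACK  = 63

def lba_to_chs(lba)
  cylinder = lba / (HEADS_PER_CYLINDER * SECTORS_PER_TRACK)
  head     = (lba / SECTORS_PER_TRACK) % HEADS_PER_CYLINDER
  sector   = (lba % SECTORS_PER_TRACK) + 1    # sectors are numbered from 1
  [cylinder, head, sector]
end

p lba_to_chs(1_048_576)    # => [1040, 4, 5] with the geometry above
```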
The traditional Unix installation instead uses multiple filesystem partitions to divide up its storage roughly as follows: - The root file system, containing essential files required for absolute minimal functionality. Typically 100 MB. - The swap partition, used for virtual memory and doesn't have a file system. Usually between 2 and 3 times the amount of RAM. - The "user" file system, containing all other system files. Anywhere from 100 MB to a few gigabytes, depending on the OS. - The "home" file system, containing user home directories and files. Typically, all the space thats left. Note that the "root" partition is very small and contains everything needed to boot the system in "single user" mode. This allows for crash recovery should the other partitions become corrupted and minimizes the chance of root itself being corrupted. The /usr partition could be mounted over a network, allowing multiple machines to share a central system disk. The model above was created at a time when disk space was much more limited and less reliable than today. Storage capacity has grown much more quickly than the size of most operating systems (except Windows), and hard drives are inexpensive. Since RAM is also cheap, swap space is used only as a rare fallback, and so using swap files on the main partition has become a common practice. There are really only a few reasons left to bother with partitioning a hard drive, and these only apply if you have only one hard drive: - Multiple Operating Systems - If you need to boot multiple OSs (such as Windows and Linux), you will probably need a separate filesystem for each. - Crash Recovery - While most systems allow you to boot off of a CD-ROM in an emergency, the options for repair and recovery are often limited. In these instances it can be very handy to have an emergency boot partition with a basic OS install plus all your favorite disk repair utilities. If you have multiple operating systems and they are capable of repairing each-other, then you are already set. - User or Application Segmentation - In a multi-user system or network server, it may be useful to keep user files on a separate partition from the operating system and other critical data. That way if, for example, someone fills up the user disk, the OS doesn't run out of room for the log files and mail spooler. - Data Integrity - Having your operating system on a different partition from your documents makes you slightly less vulnerable to filesystem corruption. If your OS crashes or a virus strikes, the damage is most likely to be confined to your boot partition, leaving your precious documents recoverable. Likewise, if your computer crashes in the middle of writing documents and corrupts that partition, you would still be able to boot and run recovery tools. The downside of partitioning a hard drive is that you lose performance and flexibility. Since you are dividing up your free space, some planing is needed to be sure you don't run out in one partition while another is relatively empty. I used to be a firm believer in having at least two partitions on every computer. But these days there are a lot of options for recovering a crashed system, so unless you have a specific need, its simplest to just stick with one partition per drive. Most unix systems store 256 to 1024 characters for each filename and require that programs looking for a file match its name exactly. This is particularly true for UFS and FFS. 
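A quick experiment shows which behavior a given filesystem has before the differences are spelled out below. The sketch assumes nothing beyond Ruby's standard library, and the file names are arbitrary.

```ruby
# Probe whether the current filesystem treats names differing only by case
# as the same file.
require 'tmpdir'

Dir.mktmpdir do |dir|
  File.write(File.join(dir, "config"), "lower case\n")
  File.write(File.join(dir, "Config"), "upper case\n")   # may clobber the first file

  if Dir.children(dir).length == 2
    puts "Case-sensitive: 'config' and 'Config' are two separate files."
  else
    puts "Case-insensitive: the second write overwrote the first."
    puts File.read(File.join(dir, "config"))              # => "upper case"
  end
end
```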
Windows and Mac OS (with HFS+) behave differently, and this can lead to some unfortunate side effects. These systems are "case insensitive, case-preserving". This means that when you name a file, upper and lower case are saved and displayed. But when the system tries to match a file for read or writing, upper and lower case are ignored. Thus while UFS sees "foo" and "Foo" as different files, Windows and Mac OS will treat them as the same file. This can be very bad if you are installing a unix software package with files like "config" and "Config", since one will overwrite the other. In the case of Mac OS X, the BSD unix commands aren't aware of this behavior, so you need to be careful when using "tar" or such. Fortunately, OS X also supports UFS, both as a separate partition and as a disk image. When I need to work with software packages that have lots of files differing only by case, I simply unpack them onto a UFS disk image and work with them there. Windows does not have such a workaround, but then nobody really expects Windows to be compatible with unix anyway. Fragmentation and Optimization You have probably heard these two terms used a lot, particularly when someone is trying to sell you something. Unfortunately, these terms have been badly mangled over the years and it is often difficult to discern what is meant. First of all, there are two different notions of "fragmentation". The most common usage of this term refers to the data "in" a file being scattered around on a drive such that the heads have to move back and forth in order to read all the data. Head movement takes a long time compared to just spinning the disk, so fragmented files are a bad thing. In the unix world, there is also another definition of "fragmentation" which means something quite different. If you look at the "ufs", "fs_ufs", or "fsck" man pages, the "fragments" that they are talking about have to do with the allocation of individual blocks. In UFS, a single disk block can be broken up into sub-blocks or "fragments". This "block fragmentation" is completely different from "file fragmentation". A lot of people get confused by this because they see "fsck" report a some level of "fragmentation" on their disk. fsck is talking about block fragmentation, which is not something anyone really needs to worry about. File fragmentation, on the other hand, can cause some major performance problems for your hard drive. As noted above, moving the hard drive head around takes a lot of time. Thus for optimal system performance, you want all the data that's going to be accessed at one time to be grouped together on the disk. But different operating systems have different ideas about what "grouped together" means, and it doesn't always match with reality. Solid State Drives (SSDs) have none of the geometry limitations of Hard Drives and so are not affected by file fragmentation. Ironically, they are somewhat affected by block fragmentation. PC operating systems generally treat the hard drive as one big linear array of data. That is, they ignore its three dimensional nature and pay no attention to where in the drive files are placed. About the only effort they make at optimizing storage is to try and write each file contiguously... that is all in one long, unbroken sequence of blocks. But over time files get moved around or deleted, free space gets chopped up into many disparate chunks, and pieces of files end up scattered all over. 
Thus many vendors sell "disk optimizers" which try to rearrange all the files so that each one is contiguous (in one chunk). They may even try to group files that seem related "near" each other in the big long line of bytes. The problem with this linear approach is that it is both inflexible and somtimes wrong. For example, a file may be stored in a sequential collection of blocks but still cross a track boundary. Thus even though the optimizer software says the file is fine, accessing it still requires head movement. More imporantly, when the OS goes to write a file, it has too look for a contiguous line of free space so that it can write the blocks in linear sequence. But in reality, only the sectors need to be in sequence and the file should all be on the same (or nearby) cylinders. As a result of this operating system ignorance, PC's generally must have their disks optimized every so often in order to prevent performance from degrading. Traditional unix file systems would take disk geometry into account. This gave them a lot of flexiblity in where to place files and allowed them to keep a drive performing very well without having to run external optimizers. The downside was that when the hard drive was formatted, its exact geometry had to be known and written into the file system. That is, the system had to know how many heads, cylinders, and sectors there were in order to correctly place the files. If this information was not available, or was not accurate, then the drive might perform very poorly. But modern drives can have more complex geometries, such as variable sectors per cylinder, that are difficult to categorize and are rarely documented. As a result, most modern operating systems take the linear approach. Mac OS X (HFS+ under 10.4 and later), for example, prefers to write new files at the start of long runs of free blocks and moves frequently used files ("hotfiles") to a reserved area nearest the "front" of the partition. The drive's firmware presumeably knows everything about its geometry, so it is then responsible for mapping linear blocks onto the geometry in a (hopefully) efficient manner. File Linking and Deletion You probably know that when you delete a file on a computer, its contents are not really erased from the disk. On traditional PC operating systems, the directory entry for the file is removed and the blocks of the disk holding the file are marked as available to be overwritten. Thus deleting a file immediately causes the amount of free space to go up, but the contents of those blocks may remain undisturbed for some time. Utilities exist which can find a these contents and create a new directory entry for them, effectively "undeleting" the file, provided that it has not yet been overwritten. In unix, however, the deletion process is more complicated. This can cause some confusion if you go to delete a file intending to free up some disk space, only to discover that the amount available has not changed. For starters, files in unix can have more than one directory entry (called hard links). That is, a listing for a single file may appear in multiple directories. This is not the same thing as an alias (or soft link), which is just a redirection to a file's true location. A hard link is a bona-fide directory entry: every file has one and may have many. Most versions of the "ls" command will show you a file's link count when you do a long listing via "ls -l": it is usually the number that appears between the permissions and the owner. 
When you "delete" a file in unix, what you are really doing is removing one of its links. This removes that one directory entry for that file, but does not necessarily deallocate or free up the files storage blocks. Thus deleteing a file in unix does not necessarily result in free space being created on the disk. (That's why the system call to do this is called "unlink" and not "delete". See "man unlink" for more information.) To actually free up disk space, you have to delete all of a file's hard links. Hunting down all of a file's links generally requires that you figure out its inode number and then use "find" to locate all of its directory entries. Even once you've deleted all of a file's links, its space still might not be freed up. On a traditional PC system, if you delete a file while it is open, the program using it will be disrupted by the immediate deallocation of the file. But most unix systems will keep a private copy of the file's link in memory so long as at least one program holds it open. Thus if a file is opened at the time it is unlinked, its disk space won't be freed until all the processes using it have closed it. A common example of this are the swap files that some systems create for virtual memory. If you delete one ("rm" as root), the disk space won't be freed up until you reboot or otherwise reconfigure the swap system. One last thing to think about when deleting files in unix: there's usually no going back. Unlike a traditional PC system, where it will likely be a while before the contents are overwritten, unix file systems tend to overwrite the most recently freed blocks. Moreover, unix systems almost always have something writing data to disk. Thus while unix "undelete" utilities do exist, they are much less likely to be successful.
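The behavior described above is easy to observe from Ruby's standard file API. The file names here are arbitrary, and the example must be run on a filesystem that supports hard links.

```ruby
# Hard links and unlink semantics in a nutshell.
File.write("original.txt", "some data\n")
File.link("original.txt", "second_name.txt")   # a second directory entry, not a copy

puts File.stat("original.txt").nlink           # => 2 (the count "ls -l" shows)

File.unlink("original.txt")                    # what "rm original.txt" really does
puts File.exist?("second_name.txt")            # => true; the data is still reachable
puts File.stat("second_name.txt").nlink        # => 1

File.unlink("second_name.txt")                 # last link gone; the blocks can now be freed
```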
<urn:uuid:4ee20194-c276-43e8-8035-341d4c8aa5db>
CC-MAIN-2017-04
http://tips.dataexpedition.com/filesystems.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00393-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951482
3,391
3.96875
4
There are challenges to creating such technology, the scientists said, from creating a robust and reliable way of integrating what's needed onto a single chip to having the CMOS technology co-exist with noisy digital circuits. Kahn said some technology, such as smart antennas, which could include multiple directional antennas on the device, could be seen in products within the next couple of years. But it could take 10 years or more for complete wireless connectivity to become a reality, he said. Earlier this year, Intel proposed allowing the module to be split into two components while still complying with the intent of the earlier regulation, which would enable the scientists to integrate the radio into the platform. Regulators are reviewing the request, and should come to a decision by the end of the year, Schiffer said. In the meantime, Intel officials have been working to change the way the U.S. government doles out access to spectrum, arguing that true wireless ubiquity will need an open spectrum for it to work. "If you look at the way radios are regulated, they are regulated on 1920s radio technology," when the spectrum was simply cut up into chunks, Kahn said. "If you look at what's happening at the [Federal Communications Commission], they're all very interested in taking the regulations from the 1920s to the 21st century." For example, Intel wants the government to open portions of unused TV spectrum for unlicensed devices. Also, Intel officials have testified at government hearings and have sat on regulatory task forces in hopes of reforming spectrum policy. Intel also is pushing for regulations allowing users access to wireless networks while on airplanes. Kahn said Intel will continue lobbying the government in hopes of creating a regulatory environment where a complete wireless environment can be created. For more wireless news, check out Ziff Davis Media's Wireless Supersite. Another hurdle is in the regulatory arena. Intel officials said they must find a way of creating wireless technology that can be accepted by countries worldwide. For example, in 2000, Intel and Motorola Inc. helped draw up regulation that was adopted by the Federal Communications Commission in which the government accepted a wireless module that could be tested once, certified by regulators, and then used in any OEM's platform. That proposal has been recognized in several European countries, and Japan and China are investigating it, said Jeffrey Schiffer, co-director of wireless technology development for the Communications and Interconnect Lab.
<urn:uuid:044902c5-3c2a-45cf-b05f-789b4f1755de>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Mobile-and-Wireless/Intel-Gives-Glimpse-of-Wireless-Nirvana/1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00117-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949445
490
2.6875
3
During the annual televised “State of the Union” address at the beginning of 2011, Barak Obama sought to renew the national focus on science and technology, in part by using supercomputing capabilities to drive progress. To highlight the role of HPC in the new generation of scientific endeavors, the President told millions of Americans about how supercomputing capabilities at Oak Ridge National Laboratory (ORNL) will lend the muscle for a Department of Energy initiative “to get a lot more power out of our nuclear facilities” via the Consortium for Advanced Simulation of Light Water Reactors (CASL). This speech came well before the word “nuclear” was (yet again) thrown into the public perception tarpit by the Fukushima reactor disaster, otherwise it might be reasonable to assume that there would be more attention focused on the safety angle that complements the CASL’s nuclear efficiency and waste reduction goals. Outside of the safety side of the story, another, perhaps more specific element to his national address was missing — that the power of modeling and simulation — not just high performance computing — might lie at the heart of a new era for American innovation. To arrive at an ambitious five-year plan to enact a number of design and operational improvements at nuclear facilities, CASL researchers are developing models that will simulate potential upgrades at a range of existing nuclear power plants across the United States that will seek to address a number of direct nuclear facility challenges as well as some pressing software challenges that lie at the heart of ultra-complex modeling at extreme scale. Despite some of the simulation challenges that are ahead for CASL, the payoff for the DOE’s five-year, $122 million grant last May to support this and two other innovation hubs could be significant. According to the team behind the effort, “these upgrades could improve the energy output of America’s existing reactor fleet by as much as seven reactors’ worth at a fraction of the cost of building new reactors, while providing continued improvements in reliability and safety.” Director of Oak Ridge National Laboratory, Thom Mason, pointed to the power of new and sophisticated modeling capabilities that “will provide improved insight into the operations of reactors, helping the industry reduce capital and operating costs, minimize nuclear waste volume, safely extend the lifetime of the current nuclear fleet and develop new materials for next-generation reactors.” The CASL has been designed with the goal of creating a user environment to allow for advanced predictive simulation via the creation of a Virtual Reactor (VR). This virtual reactor will examine key possibilities and existing realities at power plants at both the design and operational level. 
CASL leaders hope to “produce a multiphysics computational environment that can be used for calculations of both normal and off-normal conditions via the development of superior physical and analytics models and multiphysics integrators.” The CASL team further claims that once the system has matured, the VR will be able to combine “advanced neutronics, T-H, structural and fuel performance modules, linked with existing systems and safety analysis simulation tools, to model nuclear power plant performance in a high performance computational environment that enables engineers to simulate physical reactors.” Many of the codes will employ a number of pre-validated neutronics and thermal-hydraulics (T-H) codes that have been developed by a number of partners on the project, including a number of universities (University of Michigan, MIT, North Carolina State and other) as well as national laboratories (Sandia, Los Alamos, and Idaho). During the first year CASL will be able to achieve a number of initial core simulations using coupled tools and models — a goal that they have reached for the most part already. This involves application of 3D transport with T-H feedback and CFD with neutronics to isolate core elements of the core design and configuration. In the second year the team hopes to be able to apply a full-core CFD model to calculate 3D localized flow distributions to indentify transverse flow that could result in problems with the rods. According to a spokesperson for ORNL, making use of the Jaguar supercomputer, CASL will allow for large-scale integrated modeling that has only been possible in the last few years.” The challenge is not simply how to use these new capabilities, but how to make sure current programming and computational paradigms can maximize its use. A document that covers the goals of CASL in more depth sheds light on some of the computational aspects of these massive-scale simulations. The authors note that “a cross-cutting issue that will impact the entire range of computational efforts over the lifetime of CASL is the dramatic shift occurring in computer architectures, with rapid increases in the number of cores in CPUs and increasing use of specialized processing units (such as GPUs) as computational accelerators. As a result, applications must be designed for multiple levels of memory hierarchy and massive thread parallelism.” The authors of the report go on to note that while they can expect peak performance at the desktop to be in the 10 teraflop range and the performance at the leadership platform to be in the several hundred petaflop range, during the next five years, “it will be challenging for applications to achieve a significant fraction of these peak performance numbers, particularly existing applications that have not been designed to perform well on such machines.” Another one of CASL’s stated goals has to do with the future of modeling and simulation-focused research. The team states that they hope to “promote an enhanced scientific basis and understanding by replacing empirically based design and analysis tools with predictive capabilities.” In other words, by harnessing high performance computing to demonstrate actual circumstances versus reflect the educated hopes of even the most skilled reactor engineers, we might be one step closer to fail-proof design in an area that will allow for nothing less than perfection. CASL could have a chance to see its models and simulations leap to life over the course of the first five years of the project. 
Currently the Tennessee Valley Authority operates a total of six reactors that generate close to 7,000 megawatts. The agency is currently embarking on a $2.5 billion journey to create a second pressurized water reactor at one of its existing facilities. This provides a perfect opportunity for the CASL team to put their facility modeling research to work; thus they’ve started creating simulations focused on the reactor core, internals and the reactor vessel. CASL claims that “much of the virtual reactor to be developed will be applicable to other reactor types, including boiling water reactors.” They hope that during the subsequent set of five-year objectives they will be able to expand to include structures, systems and components that are outside of the vessel as well as consider small modular reactors.
<urn:uuid:6487c0b9-ca85-459d-b076-a86e0dfd63b2>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/05/09/oak_ridge_supercomputers_modeling_nuclear_future/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00263-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942502
1,406
2.703125
3
The public health infrastructure in the US is so antiquated that the Centre for Disease Control has likened it to a "pony express". The system still relies on paper-based reports and phone calls. Despite years of warning about the possibility of bio-terrorism, when the first incident was reported on 4 October, only half the state, local and territorial health departments in the US had full-time Internet connectivity, according to a report on the public health infrastructure. The Centre for Disease Control (CDC) used Atlanta county for a pilot project to enhance the capabilities of the nationwide Health Alert Network, a secure system intended to connect all public health departments with the CDC. "We need to get into the modern age of communications," said Dr Paul Weisner, a board of health director in Georgia. "I can't track my emergency room patients in real time. Instead, I have an icon on my desk here that only gives me an update every 24 hours." Dr Rex Archer, a health department director, said real-time information is needed to track not only emergency room visits, but also other indicators that could signal the spread of a natural or deliberate outbreak of disease. The public health infrastructure lacks the funds to handle a major crisis, said Archer. The CDC has started to deploy a secure information system called the Epidemic Information Exchange (Epi-X), which uses digital certificates to ensure data privacy. However, deployment of the full-scale Epi-X system has been restricted to state level, leaving Archer with a read-only terminal. Archer said he needs the full-scale capability of Epi-X today. Patricia Quinlisk, a state epidemiologist, has urged the US Congress to provide $50m (£35m) for a new CDC project called the National Electronic Surveillance System, designed to integrate as many as 100 separate data systems used by public health agencies. It would ensure the rapid analysis of data across data sets and regions, so that mandated reporters such as doctors would find reporting diseases significantly simplified. However, CDC spokeswoman Barbara Govert described the $90m (£63m) of government funding for the network as "minimal". "Maybe the events of the past two weeks have made it easier to understand the importance of funding health care infrastructure," she said.
<urn:uuid:fdca3fd0-961d-4f57-b259-b0c0a8c167ce>
CC-MAIN-2017-04
http://www.computerweekly.com/news/2240042822/Anthrax-threat-exposes-US-infrastructure-failings
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00043-ip-10-171-10-70.ec2.internal.warc.gz
en
0.958798
496
2.546875
3
Don’t worry, this is not a crash course in computer programming — we are simply highlighting the use of software language types being used to construct the IoT and thinking about what that means in a wider sense. Microsoft wants to embrace open source, this we already know. The firm has widely moved many of its main bastions of previously proprietary software to a new open status where the ‘community contribution model’ can bring in positive DNA from all those who wish to propose augmentations. The firm’s latest movements in open source are focused on the Internet of Things and, specifically, the P programming language for embedded systems which has recently gained open source status. What is domain-specific? As a language for building the code that creates the software in our IoT things, P is a domain-specific language (as opposed to a general purpose one) meaning that it is particularly suited to its job. HTML is domain-specific for the web, so you get the idea. Microsoft’s Ethan Jackson, Sriram Rajamani and Shaz Qadeer explain that P allows the programmer to specify the system as a collection of interacting state machines, which communicate with each other using events. “The P language is carefully designed so that we can check if the systems being designed is responsive, i.e., it is able to handle every event in a timely manner. By default, a machine needs to handle every event that arrives in every state. P was used to implement and verify the core of the USB device driver stack that ships with Microsoft Windows 8. Your business ‘takeaway’ Microsoft describes P as offering ‘safe event-driven programming’, an approach to software where the actual execution of the code instructions are determined by ‘events’. At your PC, these events come in the form of key presses, mouse clicks or instructions from other programs. At your typical IoT device, these events may be sensor outputs or other forms of electronic Input/Output. On your PC, the program depends on user input, on your IoT device the program depends on what is happening in the world. So in other words we are building our IoT software code in a very similar way to the methods we use on your desktop… and we’re doing it with Microsoft in the open source arena, so that’s okay too. P can be used to create code for a microwave, toaster, a car or an elevator… you could already be a P consumer.
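To make the event-driven, state-machine model concrete without pretending to reproduce P's actual syntax, here is a small Ruby sketch. The elevator and its states are invented for illustration; the point is the final else branch, the kind of "unhandled event in some state" hole that P's responsiveness checking is designed to flag.

```ruby
# Not P syntax -- just a sketch of the model described above: a state machine
# that receives events and is expected to handle every event in every state.
class Elevator
  def initialize
    @state = :doors_closed
  end

  def handle(event)
    case [@state, event]
    when [:doors_closed, :open]  then @state = :doors_open
    when [:doors_open,   :close] then @state = :doors_closed
    when [:doors_closed, :move]  then puts "moving to the requested floor"
    else
      puts "unhandled event #{event} in state #{@state}"   # a responsiveness bug, in P terms
    end
  end
end

lift = Elevator.new
[:open, :move, :close, :move].each { |e| lift.handle(e) }
```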
<urn:uuid:9282484c-a448-49af-a83d-4e5bc5e6acfe>
CC-MAIN-2017-04
https://internetofbusiness.com/microsoft-wants-iot-built-p/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00284-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935092
519
3.09375
3
The venerable Mercury thermometer has been on its way out for a number of years and the National Institute of Standards and Technology next month may give it a final push. On March 1, NIST said it will no longer provide calibration services for mercury thermometers. The cessation of the mercury thermometer calibration program marks the end of an era at NIST, which has provided the service since the doors opened in 1901. In fact NIST itself at one point had a stockpile of more than 8,000 industrial-use mercury thermometers hidden away in drawers. The mercury from these has been sent to specialized recycling centers, which repurpose the mercury to produce compact fluorescent light bulbs. Mercury thermometers contain about 500 milligrams of mercury-an amount equal to the mercury in over 125 compact fluorescent bulbs, NIST stated. The NIST announcement is only part of a world-wide effort to eliminate the use of Mercury. According to NIST, Mercury is a potent neurotoxin. Elemental mercury is found in thermometers and used in a number of industrial processes such as gold mining. Once released into the environment, mercury makes its way into streams, rivers, and finally the ocean. The mercury is absorbed by sea life and accumulates in the larger fish that humans like to eat. This is the main source of mercury poisoning in humans today, NIST stated. According to the EPA, industrial and manufacturing measurement and control devices, including glass non-fever thermometers, still use mercury-containing products, but in many cases effective non-mercury alternative products exist. Presently about 300 of the approximately 700 standards have been amended to allow for the use of both mercury-free liquid-in-glass and digital thermometers. According to NIST researcher Dawn Cross, each of these ASTM standards is reviewed on a rolling basis. She estimates that all the standards will have been amended to include detailed procedures for making the switch to mercury thermometer alternatives within three years. Follow Michael Cooney on Twitter: nwwlayer8 Layer 8 Extra Check out these other hot stories:
<urn:uuid:dd43769e-3d89-4e33-b032-d9289d4d4a3c>
CC-MAIN-2017-04
http://www.networkworld.com/article/2228414/compliance/nist-puts-one-more-nail-in-the-mercury-thermometer-coffin.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00310-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929232
426
2.765625
3
Configuration Management Database (CMDB)

A Configuration Management Database (CMDB) is a centralized repository that stores information on all the significant entities of your IT environment. The entities, termed Configuration Items (CIs), consist of the hardware, installed software applications, documents, business services and people that are part of your IT system. Unlike an asset database, which is simply a collection of CIs, the CMDB is designed to support a vast IT structure in which the interrelations between CIs are maintained and supported successfully. Each CI within the CMDB is represented with Attributes and Relationships. Attributes are data elements that describe the characteristics of CIs under a CI Type. For instance, the attributes for the CI Type Server would be Model, Service Tag, Processor Name and so on. Relationships denote the link between two CIs and identify the dependency or connection between them. The CMDB functions as an effective decision-making tool by playing a critical role in impact analysis and root cause determination. In the following documents, you will take a look at configuring the CMDB, defining CI relationships and viewing the Relationship Map.
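As a mental model of CIs, attributes and relationships, the toy sketch below may help. It is purely illustrative and is not the product's actual data schema; the CI types, attribute names and relationship type are examples.

```ruby
# Toy illustration of Configuration Items, attributes and relationships.
CI = Struct.new(:ci_type, :name, :attributes, :relationships)

server = CI.new("Server", "web-01",
                { model: "R740", service_tag: "ABC1234", processor: "Xeon" }, [])
app    = CI.new("Software", "Payroll App", { version: "5.2" }, [])

# Relationship: the application "runs on" the server.
app.relationships << { type: "Runs On", target: server }

app.relationships.each do |rel|
  puts "#{app.name} #{rel[:type].downcase} #{rel[:target].name}"
end
# => "Payroll App runs on web-01"
```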
<urn:uuid:0c4e68ef-a143-45ee-b72d-2f5ca4d7014f>
CC-MAIN-2017-04
https://www.manageengine.com/products/asset-explorer/help/cmdb/cmdb.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00154-ip-10-171-10-70.ec2.internal.warc.gz
en
0.904688
228
3.03125
3
Welcome back everyone! In the past two articles, we taught the basics of Ruby in a sort of crash course. The reason behind this crash course was to prepare you for building our own port scanner. That's what we'll be doing here today. We have a few things we need to discuss first, so let's knock those out. We need to discuss what sockets are and how to use them in Ruby, so let's do that now.

What are sockets?

Simply put, sockets allow us to make and manage connections over a network interface. This is what allows us to evaluate whether a port is open or closed. If we can successfully connect a socket to the target port, the port is open. Keep in mind that repeated, sequential connections such as these can and will be logged by both the victim and any IDS/IPS devices that may be listening. Now that we know a little more about sockets, we can actually start making the port scanner. We're going to be breaking the code down into sections and analyzing each section individually, so let's get started!

Step 1: Setting Interpreter Path and Requiring Modules

When we make a Ruby script, it can be a real pain to have to type "ruby [SCRIPT NAME]" in order to execute it every time. So instead we can set the interpreter path. This will allow us to treat the file as a regular executable. We can mark the file as a Ruby script from within the file by setting the interpreter path. This must be on the first line of the file, and is preceded by the shebang (#!). We also need to import the necessary modules for our port scanner. Modules are chunks of code that are logically grouped into files so we can pick and choose what we need. Now that we've discussed what this section of the script will do, let's take a look at the code; it will seem fairly simple compared to what we just discussed. The last bit of this snippet involves us calling multiple elements out of the "ARGV" array. This is an automatically generated array that contains command line arguments in the order they're given. This means that when we run this script, we need to give the target, the starting port, and the ending port as command line arguments. Now that we have the first snippet out of the way, we can work with some things we've already covered.

Step 2: Generate an Array of Ports to Scan

Now that we have a start and end port provided by the user, we need to generate an array of numbers to serve as port numbers. We already covered how to convert between different data types (such as strings to integers), and we briefly covered how to generate a range of numbers. We're going to be chaining conversions together here, but don't worry, it'll all come out nice and clean. Let's take a look at this snippet before we dissect any further. Alright, we can see here that we've placed the whole array generation inside a begin/rescue statement. We start our conversion with an if statement, which tests to see if the start port number is less than or equal to the stop port number. This is to avoid generating an invalid range of ports. We've stored our range of ports in a new variable called $to_scan. The dollar sign in front of the variable name means this variable can be accessed from anywhere in the script. Now that we've got our range of ports, we can build the method to scan a given port.
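The snippets referenced in Steps 1 and 2 are not included in this text, so here is a hedged reconstruction of what they describe. Aside from $to_scan, which the article names, the variable names are assumptions rather than the article's exact code.

```ruby
#!/usr/bin/env ruby
# Reconstruction sketch -- not the article's original snippet.
require 'socket'
require 'timeout'

# Command line arguments: target host, first port, last port
$target    = ARGV[0]              # variable name is an assumption
start_port = ARGV[1].to_i
end_port   = ARGV[2].to_i

begin
  if start_port <= end_port
    $to_scan = (start_port..end_port).to_a   # the array of ports to scan
  else
    puts "The start port must be less than or equal to the end port."
    exit 1
  end
rescue StandardError
  puts "Could not build the port range -- check the arguments."
  exit 1
end
```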
Step 3: Build the Port Scanning Method

When we finally scan the ports of the victim, we're going to need to perform the same action again and again, for every port in our array. In order to use this same piece of code over and over again, we're going to make a method and call that method for every port. Our method will take one argument, a port number, and it will then proceed to connect to that port and return true or false based on the result. Let's take a look at our method and then examine it more closely. We start by creating a socket. We do this by calling .new on the Socket module we required earlier. Then we follow it with some attributes we want our socket to have. Next, we make a new variable named sockaddr; in this variable we store the result of calling pack_sockaddr_in out of the Socket module. In order to connect our socket to a remote host, we need to properly pack the addressing information into it. This is the proper way of doing so, and we've placed it within a begin/rescue just in case the socket fails to resolve the target. If you remember back to the very beginning, we imported a second module, timeout. We can make a timeout do loop to attempt a certain action for a certain amount of time. Once the timer runs out it will move on to the next section of code. This timeout is to prevent the script from hanging if a port is unresponsive. It will then assign the resulting value to the result variable. If the connection was successful, it will return 0. We can make a simple if statement to test for a 0 result, and return true if it is. We will return false otherwise. Now that we've made a method to scan a given port, we can loop through our array and use it.

Step 4: Iterate Over the Array and Call the Method

Now that we have our method, we can use it on our array. We're going to use a .each loop and give our temporary variable the name "port". This loop is rather simple, so let's take a look before we dissect any deeper. First, we put that we're beginning the scan, followed by some blank lines for neatness. Then we make a .each loop over our to_scan array. We then make an if statement using our method. Remember how our method returned true or false? Well, this is where that comes in handy. Instead of manually evaluating it, we can just place it in an if statement and let it take care of everything. If the result from our method is true, we print that the scanned port is open; anything else we just ignore. Once our scan is complete, we put that the scan is complete. Now that we have our port scanner (available here), we can test it out!

Step 5: Test it Out

Since we have a new tool, we need to test it out. First we're going to perform a basic nmap scan against our target and see what results to expect. Alright, if we scan ports 1 through 100, we can expect that ports 80 and 53 will be open. Let's go ahead and fire up our port scanner! First, we need to make the file executable using the chmod command. Once we've made it executable, we can fire it up and use it. There we go! We were able to successfully build our own, functional port scanner! I hope now that we've built our own, we have a bit better understanding about how they work. In the next article, we'll be covering the concept of discovery, and a few different tactics used by attackers. I'll see you there!
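Since the snippets for Steps 3 and 4 are likewise not included in this text, here is a reconstructed sketch for reference. The method name, the two-second timeout, and the global $target are assumptions; only the overall shape follows the article's description.

```ruby
# Reconstruction sketch of Steps 3 and 4 -- not the article's original code.
def scan_port(port)
  socket = Socket.new(Socket::AF_INET, Socket::SOCK_STREAM, 0)

  begin
    sockaddr = Socket.pack_sockaddr_in(port, $target)
  rescue SocketError
    return false                          # the target would not resolve
  end

  begin
    Timeout.timeout(2) do                 # don't hang on unresponsive ports
      result = socket.connect(sockaddr)
      return result == 0                  # connect returns 0 on success
    end
  rescue Timeout::Error, StandardError
    false                                 # refused, filtered, or timed out
  ensure
    socket.close
  end
end

puts "Beginning scan..."
puts

$to_scan.each do |port|
  puts "Port #{port} is open!" if scan_port(port)
end

puts "Scan complete!"
```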
<urn:uuid:5c4361a1-9ec7-43df-9903-f43eb5390072>
CC-MAIN-2017-04
https://www.hackingloops.com/how-to-build-a-basic-port-scanner-in-ruby/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00576-ip-10-171-10-70.ec2.internal.warc.gz
en
0.909405
1,558
3.234375
3
Volodchenko A.N.,Belgorod State Technological University | Lukutsova N.P.,Bryansk State Technological Academy of Engineering Russia | Olegovna E.,Belgorod State Technological University | Prasolova,Belgorod State Technological University | And 2 more authors. Advances in Environmental Biology | Year: 2014 We have found that sand-clay rocks unfinished stage of clay formation can be used as a raw material for autoclave silicate materials. These rocks are widespread, and in large quantities fall within the mining operations in mining. Due to contained in rock minerals and metastable fine quartz destruction of siliceous raw material mixture components accelerated, and as a result, accelerated synthesis of neoplasms. Using sand and clay material can improve the strength of silicate materials. Growth of strength due to the formation of the microstructure of cementitious stronger material by increasing the packing density of the material, as well as hydrogarnets synthesis those are in microfiller submicrocrystalline gel low-alkali phase of calcium hydrosilicates. The possibility of reducing energy consumption in the production of silicate materials by reducing the time of autoclaving. It is shown that increases in strength crude in 3-4 times. This lets you receive high hollow products that will improve their thermal properties. Using the studied species will significantly expand the raw material base of production of silicate materials, and also help to improve the environmental situation. © 2014 AENSI Publisher All rights reserved. Source
<urn:uuid:8bad6fc7-3ac2-4e72-9714-e61948631e42>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/bryansk-state-technological-academy-of-engineering-868625/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00210-ip-10-171-10-70.ec2.internal.warc.gz
en
0.893388
322
2.6875
3
Developing High-Demand Skills: Virtualization “Virtualization” is one of those trendy terms these days, much like “green IT” or “cloud computing.” What exactly does it mean, though? To put it simply, virtualization is a description of how a service can be logically separated from the physical hardware that is traditionally used to provide it. For instance, a local area network (LAN) traditionally was provided by one or more switches. Segmenting the network into multiple networks meant buying separate switches for each subnet. Today, multiple networks can be logically segmented across one or more switches by use of virtual LANs (VLANs). The logical separation of service from hardware is not limited to networking. Virtualization tends to fall into one of four major categories: - Virtual LANs (VLANs): As previously discussed, this refers to one or more switches that act as multiple networks. - Platform virtualization: This uses a hypervisor to abstract operating system(s) from physical hardware, allowing multiple virtual systems to run on a single piece of hardware. Platform virtualization is a major trend in IT that often is discussed in the context of environmental friendliness since it reduces electrical usage and the amount of servers needed. It’s also a major money saver: An average server costs as much as the amount of electricity it uses during a three-year period, and platform virtualization can reduce the number of servers needed by as much as 40-to-1. That is a huge payback. - Application virtualization: A virtual application is an encapsulated portable application that does not truly get installed. - Storage virtualization: This provides access to storage while making the location of the physical disk irrelevant. Examples are deduplication, which provides access to more storage than physically exists, and appliances that aggregate multiple storage sources into a single service. Now let’s delve specifically into the platform virtualization world. There are a healthy number of vendors that provide platform virtualization solutions. The leaders of the pack are Citrix, Microsoft and VMware. Each of these vendors has a certification track, as well. XenServer is Citrix’s open source virtualization platform. Certification for XenServer is available in each of its four versions: CCA for Citrix XenServer 4 Platinum Edition; CCA for Citrix XenServer 5 Platinum Edition; CCA for Citrix XenServer Enterprise Edition 4; and CCA for Citrix XenServer Enterprise Edition 5. Although each XenServer certification requires passing only one test, published resources are sparse. Syngress is one of the few publishers that covers it. The Definitive Guide to the Xen Hypervisor is another resource. Microsoft offers a Microsoft Certified Technology Specialist (MCTS) certification in virtualization “for IT professionals who want to demonstrate their in-depth technical skills in these areas of Microsoft Virtualization.” Microsoft focuses on server virtualization, application virtualization, presentation virtualization and virtualization management. The certification that aligns most closely with platform virtualization is Exam 70-652, Configuring Windows Server Virtualization. Microsoft Press doesn’t have any publications that specifically cover Hyper-V. Fortunately, many other publishers have stepped in to fill the void. Two good choices are Windows Server Virtualization Configuration Study Guide and Windows Server 2008 Hyper-V: Insiders Guide to Microsoft’s Hypervisor. 
VMware is the most mature product in the platform virtualization space, having effectively created the x86 virtualization niche. VMware has two certifications that directly relate to platform virtualization: - VMware Certified Professional (VCP) on VMware Infrastructure 3: Prerequisites for the VCP certification are attendance at a VMware sanctioned class and subsequently passing the VCP test. - VMware Certified Design Expert (VCDX) on VMware Infrastructure 3: This is a more advanced certification that requires defense of a design position. It’s a light version of Cisco’s lab-based approach to certifying CCIEs. VCDX candidates must have a VCP certification. They also must submit and successfully defend a design and implementation plan. Study aides for the VCDX are sorely lacking because the certification is so new. However, there is an abundance of resources to help prepare for a VCP, including a VCP Exam Cram, a VCP test prep book, flash cards and a video. Shawn Conaway, VCP, MCSE, CCA, is a director of NaSPA and editor of Virtualize! and Tech Toys magazines. He can be reached at editor (at) certmag (dot) com.
<urn:uuid:8e851cda-a6d2-4691-938c-f9d45af38876>
CC-MAIN-2017-04
http://certmag.com/developing-high-demand-skills-virtualization/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00512-ip-10-171-10-70.ec2.internal.warc.gz
en
0.905757
962
3.15625
3
SNMP - Anything But Simple

The recent vulnerabilities discovered in the Simple Network Management Protocol (SNMP) have had those involved in network management asking two questions. Why has the problem not been detected in the past 12 years, and why are we using a product that is 12 years old in any case? The answer to both questions, if you'll excuse the pun, is anything but simple. SNMPv1 was introduced in 1989 to provide a mechanism that allowed devices on the network to communicate information about their state to a central system. The central system is referred to as an SNMP manager or more commonly as a Network Management System (NMS). The devices that can communicate with the manager are referred to as SNMP agents. It's a common misconception that SNMP is a network management system, which it is not. SNMP is a protocol, part of the TCP/IP protocol suite, that enables the communication of network management information between devices. SNMP operates on a fairly simple structure. A small number of commands can be issued by the manager to the agent, which responds with the information requested. In certain cases, the SNMP manager is able to reconfigure the device it is communicating with by issuing a special command, called a 'set'. The information that can be retrieved from the agent or set by the manager is defined by a Management Information Base, or MIB. The MIB defines a set of values that can be read or changed by the SNMP manager. To make sure that SNMP remains protocol dependent rather than platform dependent, the International Standards Organization (ISO) controls the creation of MIBs. The ISO issues MIB identifiers (which look something like '1.3.6.1.4.1.311') to organizations that want to create their own MIBs. As long as they stay under the MIB ID they are assigned, they can do anything they like with it. As well as the process of the manager interrogating or configuring the devices that are running an SNMP agent, the devices themselves are also able to communicate with the manager through the use of 'trap' messages. Traps are generated when either a threshold is exceeded on the device, or when a certain condition is met. Examples of events that might generate a trap message include an interface going down on a router or the threshold that dictates the amount of free disk space on a server being surpassed. It should be noted that SNMP agents are very simple pieces of software, which makes it possible to install SNMP agent functionality on just about anything from a server to a router to an air conditioning system to a vending machine. Now that's a practical application for technology if ever I have heard of one. As adept as SNMPv1 is at allowing the management of devices on the network, it does so at the expense of one major factor -- security. Although there are additional mechanisms that can be used to increase the security of SNMP, the basic measures boil down to something called community strings. When configuring an SNMP agent, the community string (which is a name or combination of characters) is input as part of the configuration information. When a management system wants to communicate with the device, it authenticates using the community string. There are typically two community strings accommodated by a device, one for reading values and one for writing (setting) values. It's a sound strategy, except for one fact.
The community strings are transmitted between manager and agent in plain text, which means that anyone with a packet sniffer and the inclination to do so can discover the community strings. Amusingly, this facet of SNMP causes some in the industry to rename it 'Security is Not My Problem.' Hey, who said this industry wasn't fun! To move SNMP forward, a version was needed that offered all of the good points of v1, but that took care of the bad - in other words the security concerns. The next version of SNMP, called, not surprisingly, SNMPv2, set out to accomplish this goal in 1995. Although security was the major driver behind SNMPv2, it was not the only enhancement. New SNMP commands, such as 'GetBulk', were added along with an enhanced MIB language which added a degree of flexibility missing from SNMPv1. The only problem was that it quickly became apparent that opinions differed as to how to make SNMP more secure. As the wrangling continued, two separate versions, SNMPv2* and SNMPv2u, emerged, each touting its advantages over the other. In an attempt to move forward with SNMP as a whole, another version, SNMPv2c, was introduced that kept the management advantages over SNMPv1 but reverted back to the old community string authentication methods of the original version. The result of all these shenanigans is that SNMPv2 of any variety never managed to get a foothold. Which brings us up to version 3, which is where we are today. SNMPv3 was introduced in 1999, and gets around the security concerns by making it possible to encrypt all SNMP-related traffic. It also accommodates authentication via a digital signature for remote systems. In other words, the router in Helsinki is able to verify, in a secure manner, that the request to reset Interface 0 originated from the SNMP management system in Orlando. It is also possible to operate SNMPv3 without the authentication or encryption if so desired, though the number of environments that would consciously disable security in this day and age is few. It should be noted, however, that SNMPv3 does not just offer security enhancements. Other features of the new version include auditing, an enhanced time synchronization protocol and an increased set of management tools. It also incorporates the non-security related enhancements that were included in SNMPv2. To put it simply, SNMPv3 takes the best of version 2, perfects these features, adds a few of its own and then makes it secure. Another major plus for SNMPv3 is that it has been designed in a modular manner that, some say, will make it unnecessary for a new version (v4, perchance) to be introduced in the near future. When the need for new functionality is realized, it can be incorporated into SNMPv3 without the need for wholesale changes. With all the advantages SNMPv3 offers, you might think that everyone who's anyone would be using it, but that's not the case, and it's certainly not for lack of vendor support. All major network hardware vendors, including Cisco, Nortel and Intel, provide SNMPv3 support. SNMPv3 support in network management software applications is also widespread and has been for some time. To go back to the original question posed at the beginning of this article, you may now be asking yourself why everyone is not using SNMPv3. For many, it is a case of "If it's not broken, don't fix it." The problem with this strategy is that now, with the security problems identified with SNMPv1, even if your SNMP structure is not broken, someone else is likely to try and break it for you.
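To make the community-string weakness concrete, here is a minimal sketch of an SNMP GET written with the third-party pysnmp library, a tooling choice of mine since the article names none. The agent address, community string and v3 credentials are placeholders. The point to notice is that the SNMPv1 request carries its community string in clear text on the wire, while the SNMPv3 request authenticates the sender and can encrypt the traffic.

```python
# Minimal SNMP GET sketch using pysnmp's high-level API (pysnmp 4.x hlapi).
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UsmUserData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, getCmd,
)

TARGET = ("192.0.2.10", 161)  # placeholder agent address and UDP port

# SNMPv1: the 'public' community string travels in plain text -- exactly the
# weakness described in the article.
v1_credentials = CommunityData("public", mpModel=0)

# SNMPv3: user-based security with authentication and privacy keys
# (user name and keys are placeholders).
v3_credentials = UsmUserData("monitor", authKey="auth-secret", privKey="priv-secret")

for credentials in (v1_credentials, v3_credentials):
    error_indication, error_status, error_index, var_binds = next(
        getCmd(
            SnmpEngine(),
            credentials,
            UdpTransportTarget(TARGET),
            ContextData(),
            ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
        )
    )
    if error_indication or error_status:
        print("Request failed:", error_indication or error_status.prettyPrint())
    else:
        for name, value in var_binds:
            print(name.prettyPrint(), "=", value.prettyPrint())
```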
Drew Bird is a freelance writer, trainer, consultant and technical author with over 13 years of industry experience. He is the author of a number of technical books including the Server+ Study Guide for Coriolis and the Linux+ Study Guide from Osborne McGraw Hill. Related Article: What To Do About SNMP Vulnerabilities
<urn:uuid:93509ed8-c648-4223-91fd-34504aa92f40>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/netsp/article.php/979991/SNMP--Anything-But-Simple.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00476-ip-10-171-10-70.ec2.internal.warc.gz
en
0.954291
1,536
3.21875
3
Sixty-eight years ago this month, construction began quietly on ENIAC, the first electronic computer that was built for the U.S. Army to speed up the calculation of ordnance trajectories for soldiers in wartime. Almost three years later, in February 1946, it was finally completed and was announced to the world in a three-page press release from the U.S. War Department, titled "FUTURE." From the press release: "A new machine that is expected to revolutionize the mathematics of engineering and change many of our industrial design methods was announced today." Called ENIAC (Electronic Numerical Integrator and Computer), the machine was touted as "the first all-electronic general purpose computer ever developed." With that news, the birth of modern electronic computing had begun with a huge impetus, eventually evolving into the powerful computers and technologies used in the enterprise today.
Slideshow: Computer History: The First 2,000 Years
"It really mattered, not because it gave us the architecture that we use today, but because it showed the way and allowed the idea of programming to be discovered," says Mitch Marcus, a professor in the Computer and Information Science Department at the University of Pennsylvania, where ENIAC was built.
Photo of ENIAC Courtesy of University of Pennsylvania ENIAC Museum
What made ENIAC so important is that its co-inventors, John W. Mauchly and J. Presper Eckert, knew they were on to something much bigger than simply building a machine that could quickly figure ordnance trajectories, Marcus says. "Eckert and Mauchly rapidly understood that computers had a commercial application," he says. "They saw the potential business use for such machines." So the two men, who worked in the Moore School of Electrical Engineering at the college, quickly left Penn when ENIAC was completed and went into business together as The Eckert Mauchly Computer Corp. to market and build similar machines for corporate use. Unfortunately, they quickly ran into a road block with their game-changing invention. "They were very good engineers but not very good businessmen, so that did not work out financially," Marcus says. They sold the company to Remington Rand, which later became Sperry Rand, and it was brought into the company's UNIVAC division. But the two men didn't give up on their work. Eckert stayed with UNIVAC and Mauchly worked there before heading out on his own later as a consultant. "They understood the ubiquitous nature of computers in ways that no one else did at the time," Marcus says. "That was Mauchly's vision." ENIAC was quite an accomplishment back then and continues to inspire new technologies today, Marcus says. "It really was a proof of concept project to show that a general-purpose, high speed electronic machine could successfully be built," he says. "The fact that it could do 2,000 addition processes per second was a very big deal. Before that there had been computers built out of relays that could do only five adds per second." ENIAC may have started the electronic computer revolution, but it was a very different machine physically from what we use today to run our IT infrastructures. ENIAC itself was very large -- it filled a 30-by-50 foot room, weighed some 30 tons and incorporated about 18,000 vacuum tubes in its design and construction. It was built from 40 panels that were arranged in a U-shape. "No one had ever built anything with this many tubes," Marcus says. "That level was beyond anything that people thought was possible."
Other key differences from computers of today were that ENIAC didn't include or run any stored programs and it also wasn't a binary machine using just zeroes and ones. Instead, it was run by entering normal arithmetic, Marcus says. Yet it inspired so many improvements that we find in computers and other technologies today. "The folks who designed it went on to design and do major work on the instructions sets of modern computers," Marcus says. "And the female programmers who ran ENIAC developed the use of subroutines that we still have today." ENIAC even inspired the ways in which future computers would be programmed, according to Marcus. "The people who designed ENIAC were electrical engineers and designed it from an electrical perspective. But the women who programmed it dragged it away from that way of thinking and they invented the modern view of programming." The programmers were all women because the men were away fighting in the war, according to Marcus. Those women were actually called "computors," which was the term given to anyone who worked with an adding machine in the early 1940s. "The smartest of those women were recruited to be programmers for ENIAC," Marcus says. "The original idea was that scientists would program the machine and it turned out that the scientists found programming to be hard and that the women mastered it. The thing that ENIAC replaced was a room full of folks doing ballistics calculations, all women with undergraduate degrees in mathematics." The importance of all of this can't be overstated, Marcus says. The roots of ENIAC are in today's servers, mobile devices, enterprise applications, PCs and laptops, the Internet and just about every IT process used in business and personal computing. "ENIAC was absolutely seminal," he says. "When it was announced it was like 'giant brains hit the world.' The fact that this machine could calculate the trajectory of an artillery round faster than the round itself would actually travel to its target was astounding." The importance of ENIAC was that it was built for a specific purpose for the war, but was seen as a springboard for so much more by visionaries who saw its potential. "That was the beginning of what grabbed people's attention," Marcus says. "Certainly the people who built it then became drivers for the commercialization of computers. Everybody else thought computers were only useful for figuring out scientific tables like logarithms tables and ordnance calculations." And that's when Eckert and Mauchly realized what it all meant. "It turned out that people thought this work was what computers were for, but after you printed such tables out once, you didn't ever need another one," Marcus says. "These guys knew better from the beginning and they really understood that." Todd R. Weiss covers Enterprise Applications, SaaS, CRM, and Cloud Computing for CIO.com. Follow Todd on Twitter @TechManTalking. Follow everything from CIO.com on Twitter @CIOonline and on Facebook. Email Todd at firstname.lastname@example.org You can also join Todd in the "CIO Forum" group on LinkedIn.com to talk with CIOs and IT managers about the things that keep them up at night. This story, "ENIAC still influencing enterprise IT 68 years later" was originally published by CIO.
<urn:uuid:4670fe9a-e3e4-4669-8414-b53645c114cd>
CC-MAIN-2017-04
http://www.itworld.com/article/2738846/data-center/eniac-still-influencing-enterprise-it-68-years-later.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00412-ip-10-171-10-70.ec2.internal.warc.gz
en
0.986038
1,452
3.21875
3
Powerline Ethernet adapters run Ethernet signals over electrical wiring. This can be particularly useful in a home that has impediments to wireless signals getting through. You plug one adapter into an electrical outlet and connect it to your router, and you plug another adapter into another electrical outlet and connect it to something else – a PC, TV, game system or other Ethernet device. The adapters communicate with each other over the power lines in the home. The technology has been available for several years from companies such as Belkin and Cisco, based on standards set forth by the HomePlug Powerline Alliance. Now the IEEE is expected to standardize the technology as well, via its IEEE 1901 specification. Rates can be expected to be as high as 200Mbps, though of course the actual network speed will vary quite a bit depending on the exact condition of your electrical wiring. That speed is higher than you could expect from current wireless networks. Last year, Belkin introduced a gigabit version, five times as fast as other powerline products. Now, the HomePlug Powerline Alliance is working on a specification along those same lines, something it’s calling the HomePlug AV2 specification, which is intended to be interoperable with the slower version and future IEEE 1901 products. The alliance says these are the key enhancements (quoting):
* MIMO (Multiple-Inputs Multiple-Outputs) offers significant increases in link throughput and range without requiring additional spectrum or transmit power. MIMO allows the data signal to propagate from multiple outputs to multiple inputs implementing advanced transmission coding schemes which will increase capacity and enable more reliable and expanded home coverage. This is similar to the 802.11n and 802.16e which use MIMO solutions with wireless products to extend performance.
* Increased MAC (Medium Access Control) efficiencies lower overhead and expand throughput.
* Increased operating spectrum: the specification will expand operations into an additional spectrum, up to an order of magnitude beyond current powerline technology. This increased bandwidth will further improve performance.
Look for more powerline Ethernet adapters to hit the market once the IEEE 1901 standard is ratified this year, with more promise in the future as the second generation of the standard gets under way.
<urn:uuid:a7556327-155a-4fc8-b169-30a1109dcc92>
CC-MAIN-2017-04
http://www.networkworld.com/article/2219218/lan-wan/powerline-ethernet-adapters-advance.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00374-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918007
452
2.765625
3
Web Services are receiving a lot of press attention. Some are heralding Web Services as the biggest technology breakthrough since the web itself; others are more skeptical, seeing them as nothing more than evolved web applications. A Web Service is a collection of functions that are packaged as a single entity and published to the network for use by other programs. Web services are building blocks for creating open distributed systems, and allow companies and individuals to quickly and cheaply make their digital assets available worldwide. One early example is Microsoft Passport, but many others such as Project Liberty are emerging. One Web Service may use another Web Service to build a richer set of features for the end user. Web services for car rental or air travel are examples. In the future, applications may be built from Web services that are dynamically selected at runtime based on their cost, quality, and availability. The power of Web Services comes from their ability to register themselves as being available for use using WSDL (Web Services Description Language) and UDDI (Universal Description, Discovery and Integration). Web services are based on XML (Extensible Markup Language) and SOAP (Simple Object Access Protocol). Whether or not you see a difference between sophisticated web applications and Web Services, it is clear that these emerging systems will face the same security issues as traditional web applications.
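Because the passage leans on SOAP without showing what a call looks like, here is a minimal sketch of invoking a hypothetical Web Service using only the Python standard library. The endpoint URL, XML namespace and CheckAvailability operation are invented for illustration; a real service would define them in its WSDL.

```python
# Minimal SOAP 1.1 request to a hypothetical car-rental Web Service.
import urllib.request

ENDPOINT = "https://example.com/rental/service"        # hypothetical endpoint
SOAP_ACTION = "urn:example:rental#CheckAvailability"   # hypothetical operation

envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <CheckAvailability xmlns="urn:example:rental">
      <City>Boston</City>
      <PickupDate>2003-06-01</PickupDate>
    </CheckAvailability>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",  # SOAP 1.1 convention
        "SOAPAction": SOAP_ACTION,
    },
)

# The response body is an XML envelope from the service; print it raw here.
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))
```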
<urn:uuid:b0eec4f5-f600-4b42-a2a4-090f9d2a17a5>
CC-MAIN-2017-04
http://www.cgisecurity.com/owasp/html/ch02s02.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00009-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945033
264
3.140625
3
Statistics as an imperative
Statistics makes our world go round. Statistics determines how much we pay for insurance, if it will rain tomorrow, how many sales a company will have, and which emails should be flagged. It determines products of likely interest to internet surfers, where businesses should focus their message, what messages should be given, through which channels, and the likely consumer response from these messages. Statistics helps distinguish the probable from the improbable, correlation from causation, and the unexpected from the outlier. It enables one to find patterns that are leverageable, white space opportunity that is actionable, and hypotheses that are reasonable. Statistics is key to competitive advantage, especially in markets in which companies fight aggressively for the next dollar of consumer spend. Statistics is not only an art, not only a science, and not only key to driving insights. Statistics enables analysts to comb through petabytes of data to find information that is significant, as opposed to computer science combing through the same data to find information that is interesting. For those who call themselves analysts, in any field, statistics is not a nice-to-have but a need-to-have.
- Forensic analysts use statistics to find causal evidence rather than circumstantial evidence.
- Financial analysts use statistics to balance risk probabilities associated with reward probabilities.
- Medical research analysts use statistics to determine incremental benefits of new treatments over existing therapies.
- Taxation analysts use statistics to determine the likelihood of tax filing errors and the probability of recovering outstanding liabilities.
- Marketing analysts use statistics to determine the perceptions of customers as compared to the realities of the market.
- Weather analysts, economic analysts, epidemiological analysts, sociological analysts, supply chain analysts, information technology analysts, and the countless number of other types of analysts rely on statistics to do their jobs.
Without firsthand knowledge of statistics, these analysts can do the mechanics of their respective jobs but then require that statistics be built into their tools of trade, or each can instead be exceptional at their analytics jobs with a mastery of statistics. As analysts, we transform data and interpret results. Without a thorough understanding of statistical inference and significance, we risk drawing conclusions on observations more akin to correlation than causation. Case Study: A Genpact analyst created a Marketing Mix Model for a client. The analyst believed the model was significant, relevant, and predictive based on the model statistics and goodness of fit graphs. The client asked the analyst to determine how to cut the marketing budget by 25 percent with the least negative impact. The analyst used the model to run simulations and found that reducing TV by 50 percent while increasing internet search activity by 75 percent would lead to this result. However, TV drives search activity; decreasing TV decreases search. It was unrealistic to increase search without increasing TV, the driver of search. Despite the model results, the analyst failed to embrace statistics to generate a statistically significant model with statistically significant causal paths between the drivers. The analyst went through the motions of statistical modeling without applying the rigor of statistics, thus leading to unrealistic models and erroneous results.
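The Marketing Mix Model case study above can be illustrated with a toy structural simulation. The sketch below is not the client's model (the coefficients, functional form and spend levels are invented), but it shows why a scenario that cuts TV while independently raising search overstates the outcome once the TV-to-search dependency is respected.

```python
# Toy illustration of the TV -> search -> sales dependency (invented numbers).

def search_activity(tv_spend: float) -> float:
    """Search activity is partly driven by TV (a baseline plus TV spillover)."""
    return 40.0 + 0.6 * tv_spend

def sales(tv_spend: float, search: float) -> float:
    """A simple additive response model; coefficients are illustrative only."""
    return 500.0 + 1.2 * tv_spend + 2.0 * search

tv_base = 100.0
search_base = search_activity(tv_base)
baseline = sales(tv_base, search_base)

# Naive scenario: cut TV 50%, raise search 75%, treating them as independent.
naive = sales(tv_base * 0.5, search_base * 1.75)

# Structural scenario: cutting TV also pulls search down via the spillover.
structural = sales(tv_base * 0.5, search_activity(tv_base * 0.5) * 1.75)

print(f"baseline sales:      {baseline:.0f}")
print(f"naive scenario:      {naive:.0f}  (ignores that TV drives search)")
print(f"structural scenario: {structural:.0f}  (respects the TV -> search path)")
```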
Machine learning is of great interest in all industries
Machine Learning is a continuously growing field of interest for companies in all verticals. It enables detection of complex patterns within data, across large datasets with varying degrees of data quality, quantity, and structure. As datasets grow in size, finding patterns and determining how variables interact represent goldmines of opportunity. However, care must be taken to distinguish causation from correlation. The appeal of Machine Learning — to those not versed in statistics — is vast given the promise of computer algorithms to find actionable insights and opportunities. Machine Learning is referred to as Computational Statistics in other circles. One cannot fully leverage Machine Learning without having a solid handle on statistics — attempting to do so actually risks generating outputs that are not significant, relevant, or reasonable. For instance, core to Machine Learning are decision trees. Yet there are so many variations of trees, tests for splitting branches, and rules and heuristics for pruning. To know if the tree created with Machine Learning is the most appropriate for a dataset requires a strong understanding of the underlying statistics behind the trees. Business analysts are in roles in which it is particularly crucial to have a solid understanding of statistics. The analytics created, reports generated, and KPIs calculated are used to aid million-dollar decisions about billion-dollar brands in trillion-dollar economies. Given that business decision-makers rely on analysts for their depth of skills, if analysts lack a solid foundation in statistics, then the reports generated and analytics created may lack verifiable tests of validity, reliable separation of signal from noise, or confidence in distinguishing a reliable "actual" from a spurious "possible." Analytic outputs are too often more reliant on an algorithm in a computer than on a statistical understanding of the appropriateness of that algorithm. Anyone can generate a decision tree once they learn the basic syntax in R. Anyone can generate a non-linear regression model, once they learn Proc NLIN in SAS. Anyone can generate a multi-dimensional pivot of data, once they learn pivot tables in Excel. Yet there are many types of decision trees with many kinds of tests of splits, many types of non-linear regression functional forms and many types of objective functions, and many types of pivots and transformations. So, just because one can generate the outcome does not mean that specific outcome is optimal or even appropriate. Furthermore, being an analyst means that non-analysts will trust the results more, rather than question and dissect them. Hence it is all the more important that analysts gain strength in statistical capability so as to confidently create defensible analytics. Analysts across industries often create analytics based on convenience rather than statistical preference. They often model the data that is provided without regard to how many observations are actually required given the number of variables and cross sections. Analysts often use model structures they were taught in school, or use model structures built into macros and tools they use, without regard to the theoretical, statistical, and practical appropriateness, or weaknesses, of the model. Case Study: A Genpact analyst once created an Econometric Model for a client using the standard Log-Log model.
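To illustrate the decision-tree point above, that anyone can generate a tree but the splitting rule and depth materially change what comes out, here is a small sketch using Python and scikit-learn rather than the R and SAS tools the article mentions; the library choice and the synthetic data are mine. The scores mean nothing in themselves; the point is that the variants disagree and therefore need statistical validation rather than blind acceptance.

```python
# Same data, different split criteria and depths -> different trees and scores.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic data stands in for a real business dataset.
X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           random_state=0)

for criterion in ("gini", "entropy"):
    for max_depth in (3, None):   # None lets the tree grow until the leaves are pure
        tree = DecisionTreeClassifier(criterion=criterion, max_depth=max_depth,
                                      random_state=0)
        scores = cross_val_score(tree, X, y, cv=5)   # 5-fold cross-validation
        print(f"criterion={criterion:<8} max_depth={str(max_depth):<5} "
              f"accuracy={scores.mean():.3f} +/- {scores.std():.3f}")
```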
When questioned about this model, the analyst reported how well-established that functional form was for many applications of econometrics. Upon completion of the model, the client asked about the optimal marketing spend that would increase the ROI. The analyst used the model and found that drastically cutting the spend on all marketing channels would increase the ROI and thus concluded they overspent on marketing. The analyst did not appreciate the underlying statistics of the Log-Log model, which imposes the assumption that the highest return is always at the lowest levels of spend. It is a diminishing-returns-only function, which states that the very first dollar of spending is more effective than the second dollar, which in turn is more effective than the third. Without a foundation in statistics, this analyst, who was well-trained in Log-Log modeling, created a model of limited value by not knowing how to address the imposed limitation. Hypothesis-testing is an important tool within statistics wherein a sample of data is used to determine if a hypothesis is invalid. Some people misunderstand statistics and use data as a means to prove their hypotheses are valid. Statistics does not enable one to prove something right, only wrong. Failing to find evidence for something is not the same as proving that something is right. Disproving a claim that all X are Y requires simply finding one X that is not Y. Failing to find any Xs that are not Ys does not prove that all Xs are indeed Ys. These nuances are important but too often lost on those not versed in statistics. An important theorem in statistics is Bayes' Theorem, which describes the probability of an event given prior information about related conditions. This theorem, without a firm understanding of statistics, leads many people, including analysts, astray. Example: Suppose 1 in 1,000 people are carriers of a genetic disease that can only be detected in a medical test. Suppose there is a test that is accurate 95 percent of the time. If this test is given to someone, and the test result says they are a carrier, what is the probability they are indeed a carrier? Using logic, one would conclude that since the test is 95 percent accurate, then a positive test result means they have a 95 percent probability of being a carrier. With such a high probability of being a carrier, various treatments, protocols, or actions may be implemented. With a solid understanding of statistics and conditional probabilities in particular, however, one would find the actual probability of being a carrier is 1.9 percent (the calculation is sketched in the worked example below). Given such a disparity of results, an understanding of statistics cannot be overemphasized, as the implications of 95 percent versus 1.9 percent could be enormous.
Driving value to the end user
For analysts to drive the greatest value to users of the results, the analysts need to be fluent in all things statistical. They need to know theory and applications, they need to know how to interpret and bridge the interpretation to the end user, and they need to know how to apply statistics to the data available. They need to understand the statistics well enough to explain it to the non-quantitative. Analytics is a driver of value for all clients. Analytics is built on data and statistical analytics of the data.
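Returning to the carrier-screening example above, the 1.9 percent figure follows directly from Bayes' Theorem. The short sketch below reproduces the calculation; it assumes, as the example implies, that the 95 percent accuracy applies symmetrically, i.e., a 5 percent false-positive rate among non-carriers.

```python
# Bayes' Theorem applied to the carrier-screening example in the text.

prevalence = 1 / 1000          # P(carrier)
sensitivity = 0.95             # P(positive | carrier)
false_positive_rate = 0.05     # P(positive | not a carrier)

p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

p_carrier_given_positive = sensitivity * prevalence / p_positive

print(f"P(positive test)           = {p_positive:.4f}")
print(f"P(carrier | positive test) = {p_carrier_given_positive:.3%}")
# Roughly 1.9%, not 95%: nearly all positive results come from the large
# non-carrier population, despite the test's headline accuracy.
```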
However, most people spend most of their energies on data quality, quantity, transformations, dashboards, and reporting, and spend little on the depth of the analytics, variations of methodologies, and theoretical and practical underpinnings of the analytics, the latter of which is built firmly on statistics. Using mathematical analytics, numeric manipulation, and transformation is indeed important. But full-fledged statistical analytics is part of holistic analytics, and anything less means we will not be maximizing the value of the insights embedded within the data. Using every tool except statistics means we are not doing a full analysis. By having a solid understanding of statistical methods, analysts can provide greater value to clients than without such means.
<urn:uuid:8a3794dc-2436-483e-9f59-bffc23cb7a7b>
CC-MAIN-2017-04
http://www.genpact.com/home/blogs/bloginner?Title=Why+all+analysts+must+know+Statistics+%28and+know+it+well%29&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+GenpactBlogs+%28Genpact+Blogs%29
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00495-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942886
2,055
2.546875
3
Over the years, we've seen researchers develop some rather unorthodox energy harvesting systems, including photovoltaics (solar panels), piezoelectric materials that react to motion, and thermoelectrics that turn heat into electricity. Now, MIT Professor of Electrical Engineering Anantha Chandrakasan and MIT doctoral student Saurav Bandyopadhyay are working on a chip that can harvest energy from all three of the same sources at the same time. According to the researchers, the chip can generate up to 0.15 volts from thermal differences, 0.7 volts from natural light, and five volts from vibrations. While each power source only produces a small amount of electricity, the researchers have found a way to effectively combine the energy from all three methods by rapidly switching between them. A major advantage of the system is that it can pull energy from multiple sources that otherwise produce electricity at unreliable rates. To further increase the system's efficiency, the scientists also bypassed the need for a battery or capacitor to store the energy for later use. This way, all the energy the system generates goes directly into powering the device it's connected to. The MIT researchers imagine that their technology could be incorporated into biomedical monitoring devices or remote environmental sensors. Hopefully the system can also be adapted into a portable device that we can use to recharge our phones and tablets wherever we are. This story, "MIT develops an energy-harvesting chip that you can shake and bake" was originally published by PCWorld.
<urn:uuid:7c7faffa-ebeb-4373-ade2-6a40284447c7>
CC-MAIN-2017-04
http://www.itworld.com/article/2723837/it-management/mit-develops-an-energy-harvesting-chip-that-you-can-shake-and-bake.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00311-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929083
373
3.8125
4
An international team of researchers led by Dr. Mark Thompson from the University of Bristol have for the first time successfully generated and manipulated single photons on a silicon chip – putting them substantially closer to realizing their goal of building a quantum computer. The technique involved shrinking the key components so they could be integrated onto a silicon microchip, according to the announcement. Featured on the cover of Nature Photonics, the breakthrough solves the on-chip integration problem that had blocked the further development of large-scale quantum technologies. Previous efforts used external light sources to generate the photons, while the new chip integrates components that can generate photons inside the chip. “Our device removes the need for external photon sources, provides a path to increasing the complexity of quantum photonic circuits and is a first step toward fully integrated quantum technologies,” the researchers write. The chip was fabricated by Toshiba using conventional manufacturing techniques, which bodes well for future production. Quantum computing has long been considered the holy grail of technology. Computers built on quantum principles are expected to be orders of magnitude faster than the best-in-class conventional number crunchers. Although much of the work is still theoretical, the area has experienced rapid progress over the last decade with organizations like D-Wave, the University of Bristol and a few other sites claiming to have developed quantum processing abilities. “We were surprised by how well the integrated sources performed together,” notes Joshua Silverstone, lead author of the paper. “They produced high-quality identical photons in a reproducible way, confirming that we could one day manufacture a silicon chip with hundreds of similar sources on it, all working together. This could eventually lead to an optical quantum computer capable of performing enormously complex calculations.” Group leader Mark Thompson explains the process in more detail. “Single-photon detectors, sources and circuits have all been developed separately in silicon but putting them all together and integrating them on a chip is a huge challenge,” he says. “Our device is the most functionally complex photonic quantum circuit to date, and was fabricated by Toshiba using exactly the same manufacturing techniques used to make conventional electronic devices. We can generate and manipulate quantum entanglement all within a single mm-sized micro-chip.” The international collaboration includes researchers from Toshiba Corporation, Stanford University, University of Glasgow and TU Delft. The next step is getting the other necessary components onto the chip, and then demonstrating the feasibility of large-scale photon-based quantum devices. “Our group has been making steady progress towards a functioning quantum computer over the last five years,” Thompson remarks. “We hope to have within the next couple of years, photon-based devices complex enough to rival modern computing hardware for highly-specialized tasks.” Interestingly, the research group maintains that an engineering-oriented approach is what enabled it “to make leaps and bounds in a field previously dominated by scientists.” To this end, the University of Bristol has proposed a new engineering specialty to turn out quantum engineers who are intimate with the fundamentals of quantum mechanics and can apply this knowledge to real world problems. 
Bristol has established a Centre for Doctoral Training in Quantum Engineering for this purpose. The center will train a new generation of engineers, scientists and entrepreneurs to lead the quantum technology revolution. Using a multidisciplinary approach, the program aims to bridge the gaps between physics, engineering, mathematics and computer science, while working in tandem with biologists and chemists and maintaining strong industry relationships.
<urn:uuid:6fb9b774-ec2c-4bcd-88db-1704001c3ff8>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/02/05/breakthrough-paves-way-integrated-quantum-computer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00247-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939493
728
3.734375
4
28th of March, 2003 (last update 5:30 PM EET)
The war in Iraq has several indirect effects on public data networks. These effects are not caused by the possible network warfare campaigns launched by the US or Iraqi armies, but by independent hackers who want to get their own message across. These hackers can be divided into three groups:
- US-based patriotic hackers, who want to join the war against Iraq but have no other means to do it except by attacking the virtual enemy through networks. This might mean launching a distributed denial-of-service attack against the e-mail server of the Iraqi embassy or web sites of Iraqi companies.
- Islamic extremist groups from around the world who are trying to fight back against the perceived enemy by launching attacks against US sites and especially .mil websites.
- Peace activists who are not for the USA or for Iraq but just against the war. For example, we've seen several computer viruses released which carry an anti-war message or are trying to use the situation otherwise for their own advantage.
Lioten, found December 17th, 2002
Lioten, also known as Iraq_Oil, is a Windows network worm spreading through shared folders. The worm spreads using a file called iraq_oil.exe. For more information see the virus description.
Prune, found March 12th, 2003
The Prune virus uses a war-related subject and attachment name to trick users into executing a file. This may be a very effective strategy, according to reports from the US. Relatives of soldiers serving in the war are very keen to get any kind of information about the crisis. For more information see the virus description.
Ganda, found March 17th, 2003
Ganda is an e-mail worm that uses a strategy similar to Prune. It replicates using mail messages with varying subjects and contents. Several of the alternative messages are directly related to the war. Ganda seems to be a protest against the Swedish school system rather than an anti-war protest. It just uses the public interest in the war to boost replication. For more information see the virus description.
Vote.D, original Vote found September 24th, 2001
The first version of the Vote virus was released after the WTC terrorist strike September 11th 2001. It used the media hype to trick users into executing an e-mail attachment. A new version, Vote.D, was released during the Iraq war. The message used by the new version still refers to WTC and to the war. But the subjects are somewhat related and the new version may have been made as a war-related protest. For more information see the virus description.
Melhacker is a Malaysian virus writer who has released several viruses, including Nedal (Laden backwards) and Blebla. Melhacker gave an interview to the US-based Computerworld Magazine in November 2002. In the interview he described a new virus he's written, known as Scezda: "I will attack or launch this worm if America attacks Iraq. The worm has been ready and fully tested since August."
Wednesday, March 26th
The Swedish police are questioning a person suspected of writing and spreading the Ganda virus. The suspect lives in Härnösand and he has confessed to the crime, according to the police.
Denial of service attacks
Monday, March 24th
The British Prime Minister Tony Blair’s site at www.number-10.gov.uk was apparently attacked using DDoS (Distributed Denial of Service) on Sunday. The site was inaccessible for a short time, according to reports. There are also rumors about defacements of this site. These rumors are most likely not reliable.
Tuesday, March 25th
Qatar-based TV station Al-Jazeera (www.aljazeera.net) released pictures of war prisoners and received a very high number of hits. One of Al-Jazeera's spokesmen suspected that a distributed denial of service attack (DDoS) was conducted against their site. The server was inaccessible from Monday to early Tuesday. The attack cannot be confirmed and the service disruption may simply be caused by a high number of ordinary users.
Friday, March 28th
Hackers have hit the Qatar-based news network Al-Jazeera hard. Their sites have been unavailable for long periods and have also been the target of defacement attacks; see the defacement gallery for a screenshot. Many reports blame the disruptions on denial of service attacks. Al-Jazeera is a natural target for US patriotic hackers after releasing pictures of American prisoners of war in Iraq. Al-Jazeera is probably at this moment the organisation that has had the most trouble because of war-related hacking.
Defacements related to the situation in Iraq
Thursday, March 20th
The number of web defacements is clearly rising because of the Iraq war. Hackers use defacing as a protest against the USA, Iraq or the war in general. Several hundred clearly war-related defacements have been reported during the 48 hours preceding the attack on Iraq. War-related protests account for the majority of all reported defacements.
Friday, March 21st
The number of hacked sites during Friday, March 21st, has been constantly increasing. The reporting systems have problems dealing with the load and the number of hacked sites can only be estimated. It's clear that over 1000 sites have been defaced between midnight and 3:00 PM EET. The actual number is probably much higher and keeps increasing.
Saturday, March 22nd
The rate of web defacements was still high on Saturday, March 22nd. Further, it was still impossible to give reliable numbers as the reporting systems are heavily overloaded and not all reports can be verified. Sources that watch the hacker community closely are talking about around 2500 reports per day. Sites related to the American military have, as expected, been subject to attack. But the increased hacking activity is not limited to the nations directly involved in the war. Sites in any country can be subject to attack as the hackers seek maximum publicity for their protest.
Sunday, March 23rd
The rate of reported defacements is still high and the reporting systems are finally starting to catch up. However, it is clear that many defacements remain unreported because of the overloaded system. One hacker group claims that they have defaced 3000 sites in addition to the verified statistics. A majority of the defacers seem to oppose the USA or the war in general. A smaller number of groups spread pro-US or anti-Iraq material. US authorities, especially military organizations, are naturally a common target in this situation. The number of verified defacements of such organizations is however rather low. These organizations could easily predict a high rate of attacks and pay attention to security issues before the war. Administrators of these sites have also blocked access from organizations that are known to confirm defacement attacks. This means that many successful defacements of US sites may remain unconfirmed. One hacker group claims that they have defaced www.whitehouse.gov successfully. The site was apparently restored very quickly and independent observers were not able to confirm this defacement.
Monday, March 24th
The rate of new defacement reports remains high.
However, it is clear that the actual number of defacements is much higher than the reported figures. The slow reporting system and the fact that many sites are restored before the defacement can be verified cause this. The number of reports and confirmed defacements do, however, clearly show that the hacking activity has increased significantly. Almost 10,000 defacements have been reported or confirmed during the past week and it is clear that the actual number is much higher.
Tuesday, March 25th
Zone-h, currently the best tracking system for defacement activities, has been down for more than 12 hours during March 25th. This makes it impossible to get reliable data for this date. However, the system has been up some time during the day and there are no signs of decreasing activity. A clear trend is that the hacking groups select their targets using systematic methods. Whole domains are scanned and several vulnerable hosts in the domain tend to be hacked at the same time. Other domains remain unattacked, at least for the moment. But there is naturally no guarantee that they will remain untouched forever. The hackers may attack any site to spread their message, regardless of nationality or religion.
Friday, March 28th
Defacement archive Zone-h is again accessible after long outages during Tuesday – Thursday. The rate of defacement reports is still high. Zone-h currently receives several reports every minute. The number of confirmed defacements during Friday exceeds 1500 already at 4 PM. The total number of defacements since the beginning of the war is however hard to estimate due to the long service disruption.
Graph 1. Defacements during weeks 10 – 12
Examples of Iraq-related web defacements
(Note: Some of the screenshots are from Zone-h’s defacement archive.)
<urn:uuid:514c99ce-0abc-47ab-9a9a-30276908fbf4>
CC-MAIN-2017-04
https://www.f-secure.com/virus-info/iraq.shtml
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00155-ip-10-171-10-70.ec2.internal.warc.gz
en
0.96642
1,866
2.703125
3
Da Vinci’s Code of Conduct Leonardo da Vinci. His mere name conjures up a kind of genius that today’s minds are hard-pressed to match. It’s fair to say, although some will object, that even names as luminous as Albert Einstein and Richard Feynman would be dim stars in da Vinci’s universe. He was, after all, the man who first sketched the helicopter and the parachute about five centuries before Kitty Hawk. (And his design for the parachute, by the way, contains precise measurements that are still in use today. They also appear to be the only measurements that work.) If that’s not enough, consider that da Vinci conceived, if nowhere else but in his notebooks, the tank, the double hull, the use of solar power, the calculator and even a basic theory of plate tectonics. And let’s not forget “Mona Lisa” and “The Last Supper.” Da Vinci was also one heck of a consultant — he was the ultimate freelance talent, working, as clients called on him, for the Medicis, the Sforzos, popes, French kings and many others. In fact, much of his best work was done on retainer. So what can we learn from da Vinci? Volumes, according to Michael J. Gelb, author of “How To Think Like Leonardo da Vinci: Seven Steps to Genius Every Day.” In the book, Gelb, who has written about the mental techniques of some of the world’s greatest minds, lists seven of da Vinci’s traits that can serve the modern consultant well. For now, let’s delve into one trait – “curiosita” or curiosity – that can help you do more and earn more, whether you build intranets, tweak routers or lay fiber for a living. New Questions (And Old Questions in New Ways) In “How to Think like Leonardo da Vinci,” Gelb mentions the work of psychologist Mark Brown, who notes that nomadic cultures began to stabilize (and thus thrive) when they stopped asking how to find water and started asking how to get water to come to them. The point? Don’t simply ask questions — ask new questions or old questions in new ways. For instance, if you wonder how to find new clients (as the nomads wondered how to find water), why not ask yourself how they find you? Do you know what they go through to get to your doorstep? Is your ad in the phone book large enough? Does your Web site pop up at the top of Google’s lists? Stop thinking about how you get to your clients and start asking what they do when they have a problem that calls for your expertise but don’t know your name or even know that you exist. Do you know where they turn? Do you know whom they call? And most crucially, do those paths lead to you, to your competition, or a dead end because no one in your field has bothered to ask these questions at all? And while you’re putting new spins on old questions, don’t forget to ask new questions, the type of questions you’ve put off because the answers are hard. Or perhaps the answers are more than hard — perhaps they’re disturbing because they indicate problems and even failures you’d rather not contemplate: - What’s the one mistake I keep making that costs me business? - Just as important, why do I keep making it? - If 80 percent of my income comes from 20 percent of my clients (Pareto’s Law), what one thing could I do for those 20 percent to knock their socks off? - What have I failed to do or put off because the other 80 percent takes up too much of my time? - Should I get rid of that 80 percent? - How can I get paid to do the work I truly love? (a question that Gelb asks in his book) - What’s holding my business back? 
It’s easy to blame a down market or bad clients, but as da Vinci himself noted, nothing holds us back so much as our own opinions. Bear in mind that those who learn more earn more. And those who know more are worth more, not merely as freelance talent but as a member of a rapidly changing world that relies on rapidly changing technologies. So let me ask a few questions that da Vinci himself might have liked: How much are you worth? How much do you want to be worth? And how much do you have to give? David Garrett is a Web designer and former IT director, as well as the author of “Herding Chickens: Innovative Techniques in Project Management.” He can be reached at email@example.com.
<urn:uuid:52c711bc-1500-49f0-897f-f04740a2275e>
CC-MAIN-2017-04
http://certmag.com/da-vincis-code-of-conduct-lessons-from-the-master-of-all-consultants/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00459-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961454
1,036
2.609375
3
We are now moving into the deployment of the Internet of Things (IoT). IoT is an attempt to attach uniquely identifiable devices to the existing Internet infrastructure. The connected devices will allow you to receive better information, control items and, simply, just do fun stuff. How many will be connected by 2020? Gartner estimates 26 billion devices and ABI Research estimates 30 billion. But there are a number of security concerns. Recently, a simple thing like a light bulb was designed and deployed insecurely. The light bulb could be controlled by a mobile app. The house could have many light bulbs connected, but only one needs to be connected to the network. The one light bulb would interconnect with the other light bulbs and pass along any security information. The problem was that this information was exchanged insecurely. I’m sure the light bulb manufacturer was trying to deploy the light bulbs easily, but beware. If it is too easy to set up, then is it secure? I know people will complain that security is slowing things down, but let’s get it straight, security is an enabler. For instance, if you want to go fast in your car, what do you need? For starters, how about brakes, seat belts and airbags? This allows you to go fast, but also mitigates the risk of high speeds. The same goes for the Internet. If you want to enable transactions on the Internet, then you need to trust the identity, authorize the identity and secure access to the information. If you address security, then you can allow those transactions to happen without giving up privacy or convenience. Experts suggest using existing open security standards. Internet standards for SSL/TLS and OAuth (an open authorization standard) provide proven protocols. IoT is still in its infancy, but it does look like some groups are forming and, hopefully, they will develop standards that address security:
- Allseen Alliance hosts AllJoyn, which is the open-source project that lets the compatible smart things around us recognize each other and share resources and information across brands, networks and operating systems.
- Open Interconnect Consortium, which states, “We want to connect the next 25 billion devices for the Internet of Things” and “We want to provide secure and reliable device discovery and connectivity across multiple OSs and platforms.”
- Thread Group says, “We wanted to build a technology that uses and combines the best of what’s out there and create a networking protocol that can help the Internet of Things realize its potential for years to come.” They also use the words “always secure.”
Hopefully the groups will push each other to make interconnectivity easy and secure and we don’t end up with Betamax versus VHS.
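To ground the recommendation above to reuse open standards, here is a minimal sketch of a device reporting telemetry over TLS with an OAuth 2.0 bearer token, using only the Python standard library. The endpoint, token and payload fields are placeholders of mine, not taken from any of the groups named above.

```python
# A device posting a sensor reading over HTTPS with an OAuth 2.0 bearer token.
import json
import urllib.request

API_URL = "https://iot.example.com/v1/telemetry"    # placeholder endpoint
ACCESS_TOKEN = "placeholder-token-from-oauth-flow"  # obtained via an OAuth 2.0 grant

payload = json.dumps({"device_id": "bulb-0042", "lumens": 800}).encode("utf-8")

request = urllib.request.Request(
    API_URL,
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {ACCESS_TOKEN}",  # OAuth 2.0 bearer credential
    },
)

# urllib verifies the server certificate and hostname by default, so both the
# reading and the token are protected by TLS in transit.
with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode("utf-8"))
```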
<urn:uuid:8be03f81-49a0-41bb-a928-338050488875>
CC-MAIN-2017-04
https://www.entrust.com/internet-things-beware/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00459-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939439
580
2.8125
3
IBM's experimental chip 'thinks' like a human brain
- By Kevin McCaney
- Aug 22, 2011
Computers, even supercomputers, are still basically processing machines, programmed to perform tasks. They perform those tasks more quickly than humans do and, unlike humans, they don't get distracted and they don't forget what they're supposed to do. But they can't think like humans. They don't draw on sights, sounds, smells and other stimuli all at once, mix them with remembered facts and events, apply them to goals, and interpret, or project, events. Not yet, anyway. IBM says it has taken computers one step closer to brain-like processing with its Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project, which seeks to ape, as it were, the functions of the brain on a new type of highly efficient processing chip.
IBM's Watson vs. the human brain: Tale of the tape
Why IBM's Watson is good news for government
The project, which has received funding from the Defense Advanced Research Projects Agency, combines elements of neuroscience, supercomputing and nanotechnology into what it calls cognitive computing, Dharmendra Modha, manager of cognitive computing at IBM's Almaden Research Center, writes on a company blog. Modha writes that the SyNAPSE project uses advanced algorithms and silicon circuitry to create computers that could function without set programming but could "learn through experiences, find correlations, create hypotheses, and remember — and learn from — the outcomes." Such a system could, for example, monitor the world's waters via a network of sensors monitoring temperature, water pressure, wave heights and other factors, and use that information to predict tsunamis, Modha writes. In another example, "imagine traffic lights that can integrate sights, sounds and smells and flag unsafe intersections before disaster happens," Modha said in a company release. IBM unveiled two prototypes of the new chips Aug. 18. Called neurosynaptic computing chips, the processors are designed to work in a way similar to neurons and synapses in a biological brain, the company said. The cores of each 45-nanometer chip seek to emulate synapses with integrated memory, neurons with computational components and axons, which conduct electrical impulses, the company said. Each chip contains 256 neurons. One chip has 262,144 programmable synapses; the other has 65,536 learning synapses. The goal is to not only emulate the brain but to take up about the same amount of space and use as little power as possible, the company said. IBM said the project's long-term goal is a chip system with 10 billion neurons and 100 trillion synapses that takes up less than two liters of space (about the same as the human brain) and uses one kilowatt of power (more than the brain's 20 watts, but still efficient for that much processing). The company, which drew attention to pioneering work in natural language processing earlier this year when the company's Watson computer won big on "Jeopardy!", says it has completed phases 0 and 1 of the project, and DARPA has put up $21 million in funding for phase 2. The SyNAPSE team includes IBM researchers and scientists from Columbia University, Cornell University, the University of California, Merced, and the University of Wisconsin-Madison. Kevin McCaney is a former editor of Defense Systems and GCN.
<urn:uuid:c55fc561-4126-4303-80dd-32e40389bc9d>
CC-MAIN-2017-04
https://gcn.com/articles/2011/08/22/ibm-cognitive-chip-emulates-brain.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00367-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9286
749
3.09375
3
« Obama Administration proposes $4B to accelerate development and adoption of autonomous vehicles; policy update | Main | Mammoet switches Dutch operations to Shell GTL fuel » Researchers at the University of Delaware, with a colleague at the Beijing University of Chemical Technology, have developed a composite catalyst—nickel nanoparticles supported on nitrogen-doped carbon nanotubes—that exhibits hydrogen oxidation activity in alkaline electrolyte similar to platinum-group metals. An open access paper on their work is published in the journal Nature Communications. Although nitrogen-doped carbon nanotubes are a very poor hydrogen oxidation catalyst, as a support, they increase the catalytic performance of nickel nanoparticles by a factor of 33 (mass activity) or 21 (exchange current density) relative to unsupported nickel nanoparticles, the researchers reported. Owing to its high activity and low cost, the catalyst shows significant potential for use in low-cost, high-performance fuel cells, the team suggested. Polymer electrolyte membrane (PEM) fuel cells are based on two half-cell reactions: hydrogen oxidation reaction (HOR) at the anode and oxygen reduction reaction (ORR) at the cathode. Pt is the most active catalyst for both HOR and ORR; the high price of the metal (~$50 g−1) has hindered fuel cell commercialization. This, in turn, has compelled engineers to (1) work to reduce the platinum loading in the membrane assemblies and (2) find alternate, lower-cost catalysts that offer comparable performance to platinum. Although the various efforts have managed to reduce the total content of platinum-group metals (PGMs) in the state-of-the-art proton exchange membrane fuel cell (PEMFC) stacks, more than 0.137 g Pt kW−1 is still needed, the University of Delaware team said. One promising approach to reduce the cost of fuel cells is to switch the operating environment from an acidic to a basic one (that is, a hydroxide exchange membrane fuel cell, HEMFC), thus opening up the possibility of using PGM-free catalysts and other cheaper components. For the cathode of the HEMFC, some PGM-free and metal-free ORR catalysts have been developed that show comparable activity to Pt in alkaline media. However, for the anode side, only a few PGMs (for example, Pt, Ir and Pd) show adequate activity. The HOR catalyzed by Pt is very fast in acidic conditions so that a very low loading of the Pt catalyst could be used relative to the cathode side in PEMFCs. However, the HOR activities of PGMs are ~100 times slower in alkaline solutions. As a result, a much higher loading of the HOR catalyst is required (0.4 mg Pt cm−2 in a HEMFC compared with 0.03 mg Pt cm−2 in a PEMFC) to achieve similar performance. Thus, it is highly desirable to develop PGM-free anode catalysts for the HOR in alkaline electrolyte. Unlike its reverse reaction (hydrogen evolution reaction, HER), only a few PGM-free HOR catalysts have been reported. One possibility is to use Raney Ni as the HOR catalyst in liquid alkaline fuel cells. However, it is functional only under very high alkalinity (6 M KOH) while the activity remains low. It is not catalytically active for a HEMFC, which can be mimicked as 0.1–1 M KOH. Efforts have been made to improve the HOR activity of the Ni-based catalyst in the last decade. Ni alloys, such as NiMo and NiTi, have been shown to enhance the HOR activity. Our recent work has also shown that electrochemically deposited NiCoMo on an Au substrate has a high HOR activity. 
Zhuang and co-workers decorated Ni particles with CrOx to weaken the Ni–O bond and stabilize the Ni catalysts. A HEMFC incorporating this PGM-free catalyst has been fabricated, and it exhibits a peak power density of 50 mW cm−2. Although the power density is still low (compared with the peak power density of more than 1,000 mW cm−2 for PEMFCs), it demonstrates the possibility to fabricate low-cost PGM-free fuel cells. However, their activities are still incomparable with PGM-based catalysts. In the Nature Communications study, the team synthesized Ni nanoparticles supported on N-doped carbon nanotubes (Ni/N-CNT) using a wet chemical method. The nanotubes are not only the support for the Ni nanoparticles, but also a promoter for the catalytic activity. Using density functional theory (DFT) calculations to understand the interaction between the Ni nanoparticle and the N-CNT support, the team found that, when nitrogen dopants are present at the edge of the nanoparticle, the Ni nanoparticle is stabilized on the support and locally activated for the HOR because of modulation of the Ni d-orbitals. The experimental work was supported by the ARPA-E program of the US Department of Energy under Award Number DE-AR0000009. The computational work was financially supported by the Catalysis Center for Energy Innovation, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001004. Stephen Giles was supported by a fellowship from the University of Delaware Energy Institute. The research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. Abstract: "Planes, Trains and Automobiles" is a popular comedy from the 1980s, but there's nothing funny about the amount of energy consumed by our nation's transportation sector. This sector -- which includes passenger cars, trucks, buses, and rail, marine, and air transport -- accounts for more than 20 percent of America's energy use, mostly in the form of fossil fuels, so the search is on for environmentally friendly alternatives. The two most promising current candidates for cars are fuel cells, which convert the chemical energy of hydrogen to electricity, and rechargeable batteries. The University of Delaware's Yushan Yan believes that fuel cells will eventually win out. "Both fuel cells and batteries are clean technologies that have their own sets of challenges for commercialization," says Yan, Distinguished Engineering Professor in the Department of Chemical and Biomolecular Engineering. "The key difference, however, is that the problems facing battery cars, such as short driving range and long battery charging time, are left with the customers. By contrast, fuel cell cars demand almost no change in customer experience because they can be charged in less than 5 minutes and be driven for more than 300 miles in one charge. And these challenges, such as hydrogen production and transportation, lie with the engineers." Yan is prepared to address the biggest challenge fuel cells do face -- cost. He and colleagues recently reported a breakthrough that promises to bring down the cost of hydrogen fuel cells by replacing expensive platinum catalysts with cheaper ones made from metals like nickel. The work is documented in a paper published Jan. 14 in Nature Communications. 
The researchers achieved the breakthrough by switching the operating environment from acidic to basic, and they found that nickel matched the activity of platinum. "This new hydroxide exchange membrane fuel cell can offer high performance at an unprecedented low cost," Yan says. "Our real hope is that we can put hydroxide exchange membrane fuel cells into cars and make them truly affordable -- maybe $23,000 for a Toyota Mirai. Once the cars themselves are more affordable, that will drive development of the infrastructure to support the hydrogen economy." ### About the research The experimental work was supported by the ARPA-E program of the U.S. Department of Energy under Award Number DE-AR0000009. The computational work was financially supported by the Catalysis Center for Energy Innovation, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001004. Stephen Giles was supported by a fellowship from the University of Delaware Energy Institute. The research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. Vlachos D.G.,University of Delaware | Vlachos D.G.,Catalysis Center for Energy Innovation | Chen J.G.,University of Delaware | Chen J.G.,Catalysis Center for Energy Innovation | And 6 more authors. Catalysis Letters | Year: 2010 Production of energy and chemicals from biomass is of critical importance in meeting some of the challenges associated with decreasing availability of fossil fuels and addressing global climate change. In the current article, we outline a perspective on key challenges of biomass processing. We also introduce the Catalysis Center for Energy Innovation (CCEI), one of the 46 Energy Frontier Research Centers established by the Department of Energy in the spring of 2009, and CCEI's overall research strategies and goals along with its cross-cutting research thrusts that can enable potential technological breakthroughs in the utilization of biomass and its derivatives. The center focuses on developing innovative heterogeneous catalysts and processing schemes that can lead to viable biorefineries for the conversion of biomass to chemicals, fuels, and electricity. In order to achieve this goal, a group of over twenty faculty members from nine institutions has been assembled to bring together complementary expertise covering novel materials synthesis, advanced characterization, multiscale modeling, surface science, catalytic kinetics, and microreactors. © 2010 Springer Science+Business Media, LLC. Source Do P.T.M.,Catalysis Center for Energy Innovation | Do P.T.M.,Honeywell | McAtee J.R.,University of Delaware | Watson D.A.,University of Delaware | Lobo R.F.,Catalysis Center for Energy Innovation ACS Catalysis | Year: 2013 The reaction of 2,5-dimethylfuran and ethylene to produce p-xylene represents a potentially important route for the conversion of biomass to high-value organic chemicals. Current preparation methods suffer from low selectivity and produce a number of byproducts.
Using modern separation and analytical techniques, the structures of many of the byproducts produced in this reaction when HY zeolite is employed as a catalyst have been identified. From these data, a detailed reaction network is proposed, demonstrating that hydrolysis and electrophilic alkylation reactions compete with the desired Diels-Alder/dehydration sequence. This information will allow the rational identification of more selective catalysts and more selective reaction conditions. © 2012 American Chemical Society. Source Lee W.-S.,University of Minnesota | Lee W.-S.,Catalysis Center for Energy Innovation | Wang Z.,University of Minnesota | Wang Z.,Catalysis Center for Energy Innovation | And 6 more authors. Catalysis Science and Technology | Year: 2014 Vapor phase hydrodeoxygenation (HDO) of furfural over Mo2C catalysts at low temperatures (423 K) and ambient pressure showed high/low selectivity to CO bond/C-C bond cleavage, resulting in selectivity to 2-methylfuran (2MF) and furan of ~50-60% and <1%, respectively. Efficient usage of H2 for deoxygenation, instead of unwanted sequential hydrogenation, was evidenced by the low selectivity to 2-methyltetrahydrofuran. The apparent activation energy and H2 order for 2MF production rates were both found to be invariant with furfural conversion caused by catalyst deactivation, suggesting that (1) the measured reaction kinetics are not influenced by the products of furfural HDO and (2) the loss of active sites, presumably by formation of carbonaceous species observed by TEM analysis, is the reason for the observed catalyst deactivation. The observed half order dependence of 2MF production rates on H2 pressure at different furfural pressures (~0.12-0.96 kPa) and the 0-0.3 order dependence in furfural pressure support the idea of two distinct sites required for vapor phase furfural HDO reactions on Mo2C catalysts. The invariance of 2MF production rates normalized by the number of catalytic centers assessed via ex situ CO chemisorption suggests that metal-like sites on Mo2C catalysts are involved in selective HDO reactions. © 2014 the Partner Organisations. Source
<urn:uuid:b7af3436-70c7-43c3-8bb8-a1d0645fae06>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/catalysis-center-for-energy-innovation-195245/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00183-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927937
2,694
2.609375
3
Heather Darcy: IT disaster recovery planning and earthquake emergency response: Lessons learned from Haiti The 2010 Haiti earthquake killed more than 210,000 people, and approximately 1 million people were evacuated from their homes. That disaster was followed about a month later by the 2010 Chilean earthquake, which scientists said shifted the earth's axis, and generated a blackout that affected 93% of the country and lasted for several days in some areas. And more recently, the death toll from China's recent earthquake is nearing 2,400 according to reports. In the aftermath of Katrina and other hurricanes a few years back, IT staffs in certain geographic areas made hurricane preparation a top priority in IT disaster recovery (DR) planning. These earthquakes in Haiti, Chile and China should prompt IT organizations to look at how prepared they are to survive earthquakes, and to give earthquake readiness the same priority in their DR planning.
<urn:uuid:0b419e60-0c46-4e8b-89ec-05564e70384d>
CC-MAIN-2017-04
http://www.lifelinedatacenters.com/disaster-recovery-center/heather-darcy-it-disaster-recovery-planning-and-earthquake-emergency-response-lessons-learned-from-haiti/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00091-ip-10-171-10-70.ec2.internal.warc.gz
en
0.967825
190
2.90625
3
Graphic: Aerial photo overlaid with the county highway map and home locations from a local high school project. The program has already located and assessed approximately 100,000 residential structures. As of early 2006, an estimated 400,000 point features had been collected. On July 4, 1999, Minnesota experienced a "blowdown" -- a windstorm that flattened 600 square miles of forest in the Boundary Waters Canoe Area. After a massive rescue effort of holiday visitors, the state was left with a very large fire hazard. "We had a fuel load that was in excess of anything we had seen before," said William Glesener, Firewise specialist for the Minnesota Department of Natural Resources. "Since then, we've had several fires up there, and we recognized that we needed to manage the Wildland/Urban Interface. So the state of Minnesota got on board with the national Firewise program, and started looking at things we could do, and things we might need in case of a catastrophic event." At the time, incident management teams were doing some mapping of homes. "Tom Eiber was the person in charge of the DNR's Firewise program at the time," said Glesener, and it soon became apparent that there was a lot at stake and a bigger view was needed. "We needed to know where the homes were," said Glesener, "where the hydrants were, where the fire stations, nursing homes and hospitals were." In addition, said Glesener, there was a need to track hazards to low-flying aircraft. "There are three different entities that are tracking stuff above the treeline," said Glesener. "The FCC tracks things that broadcast, the FAA tracks things that are over 110 feet tall, but you have a potential for a local building that is above the treeline but is not 110 feet tall such as a lookout or fire tower that isn't registered ... In helicopter operations during wildfires, that can be a great concern, so we started looking around to get all that data together." The result was a comprehensive aerial obstruction dataset, and momentum that carried forward into discussions with the DNR's Management Information Systems staff -- as well as other federal and state agencies -- to find a better way to help emergency managers. "The premise of the program," said Glesener, "is basically 250 features that Tom originally identified. We were collecting the point attributes, but as the thing built, and we got over 120,000 homes into the system, we realized we wanted to add attributes. There are 10 static attributes we collect, such as the feature type, the location, who actually got the information, and when it was collected, verified and approved. Those are static." "We also have flex fields," said Glesener, "15 other attributes in the database. It's just a column, but it gets tied back to the actual feature type. The flex field one for a fire department is the station number. For a home, it is the E-911 address. But if you open up the database, you can see all that stuff in the same column. That's where the Java-based Web application comes into play." "We're allowing that one big dataset to be displayed and have those flex fields displayed with other values based on the scripting behind the scenes. One of the nice things about that is that we're able to attach files to these point locations. So conceivably, we could attach 15 files to a single point -- they can be PDFs, Word documents, spreadsheets, etc. It could be another database. So you can exponentially increase the amount of data we are holding based on that one feature point.
It's GIS on the Web. So far, said Glesener, the system has been amassing data, and generating planning maps. Last year, during the Ham Lake fire, the system provided structure concentrations and data on residences in the area. "We have students doing GIS -- analyzing aerial photography to determine the risk of homes to wildfire," said Glesener. Most are high-school students, but students from junior high school to college also participate. "It's part of a geography class. We give them a project area, they do the project and give us the data back. They are learning real-life skills in GIS and we're gleaning data out of this. We are capturing all their ratings on individual homes and we can then take it to local emergency planners and say: 'we have a high-risk area, what can we do project wise to help reduce the risk in this area?' So we've got students across the state actually doing functional work in the system to help emergency management planners." Density Surface Modeling "We will soon be able to do some density surface modeling on the system online," said Glesener. "So rather than having to have a client application installed on a machine ... we get some information from the person wanting to run the model, and then the model gets run on the server. It will kick out a raster image that will show the areas of high risk from those home evaluations of the aerial photography. It's a more dynamic system than going to the national map to look at data. You can add data to it, you can manipulate the data relatively easy, and then you can actually get a representation back. The caveat with it is it is critical infrastructure, and is on a secure Web site and secure server and not available to the public." Glesener said that plans are in the works to allow the data to be displayed on a standardized client such as ArcGIS, Web Mapping Service (WMS), or Web Feature Service (WFS). "One returns a raster image," said Glesener," the other displays point coordinates and the attributes. So if somebody does have a client application, and they have a password, they'll be able to use their existing client and pull the information. Once we get that up and running we can go to a sheriff's office or a dispatcher and say: 'Use your existing dispatching software that has a GIS component, and you'll be able to see everything we've got.' We want that available so dispatchers can use it." The Minnesota Firewise program secured a modest grant from the National Fire Plan for the system to purchase software and hardware. "The cost-benefit ratio to this thing is extraordinary," said Glesener. Collaboration was easy, said Glesener. "They hear about it and say 'You have what?' Then they get a gleam in their eye, and they want in on it." Glesener says other agencies are interested including the state's Homeland Security and Emergency Management agency. "I just had a request from the Minnesota Pollution Control Agency for the fire departments and training facilities," said Glesener. "The request was filled by just turning on the feature types and clicking 'export.' This probably won't be necessary once we initiate the password-protected Web feature service on the system. Government agencies are invited to contribute data. "Anybody can collect a point, and if they provide us with enough information we will throw it into the system. We aren't going to discriminate against people putting data in, the only thing is it's got to be good data." The general process for contributing data is:
<urn:uuid:e171a235-bb73-4f73-8bd3-37ca749cc578>
CC-MAIN-2017-04
http://www.govtech.com/policy-management/Ill-Wind-Sparks-Web-Based-GIS-Fire.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00119-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964381
1,513
2.75
3
With the coming of the digital age, corporate work forces are becoming accustomed to different and new types of learning. ELearning Industry reports 41% of global Fortune 500 companies use some sort of educational technology to instruct employees. Because the average age of the online learner, as reported by Aurion Learning, is 34 years old, traditional learning methodologies aren't successful at knowledge retention. For learning to be effective online it needs to be highly interactive and participant-centered to keep everyone involved and engaged with the content. Information should not be pushed in a continuous stream of PowerPoint slides, but in a collaborative, interactive way that will allow the learners to pull the information they need from the training session and provide their own thoughts simultaneously. Why the Move to Virtual Classrooms - The virtual classroom is always available - Trainers and experts are more accessible online - It saves travel time and money - You can pilot a course on short notice - Classes can have a wide range of sizes - Learning modules are reusable and recyclable - Live virtual learning lends itself to blended methodologies - It may be recorded for on-demand playback These factors, along with the adoption of the internet as part of our daily lives, make online education in a virtual classroom an easy and seemingly necessary transition. Virtual Classroom Benefits Virtual learning is about building knowledge as well as training and empowering learners to share their 'uh-oh' and 'ah-ha' moments with their peers, as one person's 'uh-oh' moment becomes another's 'ah-ha'! Compared to traditional learning, virtual learning requires the attention and participation of the audience. Such an approach to learning delivery, made available through the use of chat, has increased attention spans and given the learner a voice, and retention of the data and information shared becomes more relatable, based upon a personal experience that learners can then refer back to on the job. Global organizations that have been using online tools for other digital business needs such as Trade Shows, Career Fairs and Company Meetings have realized that the ability to aggregate content into a single destination is a cost-effective and reusable platform to educate others. In particular, Virtual Classroom learning has been shown to save businesses at least 50% when it replaces traditional instructor-based learning, according to ELearning Industry. INXPO's Learning Environments & Virtual Classrooms are globally accessible, can be cloned and reused for different programs; they foster collaboration, and utilize video to deliver training and educational programs at a fraction of the cost of face-to-face instruction. By leveraging Virtual Classrooms to deliver educational programming paired with their Learning Management System, companies are able to track continuing education credits for accreditation and leverage the electronic signature capabilities to record compliance. With this new ability to gain insight from data, programs can quickly be customized and tweaked to focus on knowledge gaps and understand what learners are truly interested in to make the biggest impact. In a study conducted by Bersin & Associates, companies with virtual learning and classrooms are 46% more likely to be a leader in their industry. With virtual classrooms, companies have the opportunity to catapult their business and become their market share leader.
Virtual Classroom Interactivity Using a virtual learning platform allows learners to get away from lack-luster power point presentation, which often induces information overload and lacks interactivity. Video learning platforms allow learners to combat short attention spans, make content on-demand for hectic schedules and enables learners to truly take part in education through the available tools like chat, Q&A, and testing. These tools bring the immersive and interactive nature of the physical classroom to virtual learners while delivering instant metrics to the desks of trainers for useful and meaningful insights. INXPO’s Virtual Classroom INXPO’s Virtual Classroom is an interactive way to educate corporate and enterprise learners through video and open collaboration on a single platform. Virtual Classrooms have enabled organizations to better connect with learners through video and interactive tools, track learning progress, provide certification, and analyze knowledge gaps for better education programs. With the combination of video, collaborative tools and making learning truly a conversation, INXPO’s Learning Environments and Virtual Classrooms deliver the ultimate blended learning experience in a cost effective way. We live and work in a fast-paced world and learning can take place in multiple ways without the need for eight-hour classes. Enterprise learning in the digital age is moving towards the technologies that we use most often. Using powerful tools like Virtual Classrooms, can transform how corporations, train and continue to teach dispersed work forces, creating a more informed company with a competitive edge. For more information on the Virtual Classroom please visit www.inxpo.com.
<urn:uuid:9e05bc1a-d193-47ec-ae8a-8535ccab274c>
CC-MAIN-2017-04
https://blog.inxpo.com/virtual-classrooms-in-the-digital-age-of-learning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00513-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930609
968
2.640625
3
A one-time password (OTP for short) is a password that is only valid for a single use. The idea is to make the password more secure by limiting the amount of time that an attacker could try to guess it or intercept it as it is used by its legitimate owner. OTPs are most commonly generated by a device, in the physical size and shape of a credit card or key fob, which displays a new, pseudo-random number every 60 seconds. A user signs into a system using such a device by keying in the current displayed number plus a PIN. The system authenticates the user by calculating what number the device should currently be displaying, based on the current time and date and a random seed known to belong to that device. Combining a one-time-password device and a PIN in this way is a form of multi-factor authentication. One time passwords may be generated in other ways. For example, users might be given a sheet of paper with a series of randomly generated strings and instructed to use them, one at a time, in sequence. One time passwords may be generated through a calculated sequence, rather than being time based. One time passwords may be generated by a device given to users, or by software installed on their mobile phones, or by software installed on their PC. The latter types are sometimes referred to as soft tokens (i.e., software-based tokens) in contrast to the hard tokens -- physical devices which they replaced. Vendors of one-time-password devices include RSA Security, Vasco and Dell/Quest. The security of OTP devices generally depends on the secrecy of the initial secret used to generate the OTP sequence. This was made evident by a major security compromise at RSA Security in 2011, where it is purported that a successful penetration into RSA's network led to the compromise of all seeds for all then-issued RSA tokens, thereby calling into question the trustworthiness of all RSA tokens at all RSA customers. Hitachi ID Password Manager includes features to assist users who have an OTP token and experience login issues, such as a forgotten PIN or misplaced token. It supports self-service PIN reset, emergency passcode issuance, clock synchronization and more.
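The description above of a device that derives a new number every interval from a shared seed and the clock is captured by the openly specified TOTP scheme (RFC 6238, built on HOTP from RFC 4226). The Python sketch below illustrates that general "shared seed plus clock" idea; note that commercial tokens such as RSA SecurID use their own proprietary algorithms, so this is not the vendor implementations named above, and the seed value is hypothetical.

# Minimal time-based OTP sketch in the spirit of RFC 6238 (TOTP).
import hmac, hashlib, struct, time

def totp(secret: bytes, interval: int = 60, digits: int = 6, now: float = None) -> str:
    """Derive a one-time password from a shared secret and the current time."""
    if now is None:
        now = time.time()
    counter = int(now // interval)                       # time-step number
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Both the token and the authentication server compute the same code for the same minute.
seed = b"per-device-random-seed"                          # hypothetical shared seed
print(totp(seed))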
<urn:uuid:ccff76ef-2d4e-48aa-afa1-0bc04d72c1a8>
CC-MAIN-2017-04
http://hitachi-id.com/resource/concepts/one-time-password.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00239-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94627
455
2.953125
3
Tutorial: Maps, pins, and bubbles This tutorial shows you how to create an app that lets users drop pins and bubbles on a map to mark locations. You will learn to: - Use a MapView and handle the signals that are emitted when the user interacts with the map - Change the map using sensors and MapView functions - Customize your MapView with pins and bubbles Before you begin You should have the following things ready: - The BlackBerry 10 Native SDK - A device or simulator running BlackBerry 10 If this is your first app, you can download the tools that you need and learn how to create, build, and run your first Cascades project. Downloading the full source code This tutorial uses a step-by-step approach to build a maps app. If you want to look at the complete source code for the MapView sample app, you can download the complete project and import it into the Momentics IDE for BlackBerry. To learn how, see Importing and exporting projects. Set up your project Before we start creating our application, create an empty Cascades project in the Momentics IDE using the standard empty project template. To make it easier to follow along, this tutorial assumes that you name your project mapview. We need to add the following line to our mapview.pro file to allow our app to access location services: LIBS += -lbb -lQtLocationSubset -lbbcascadesmaps -lGLESv1_CM To use some of the classes in these libraries, your project must use an API level of 10.2 or later. For more information, see API levels. We also need to add the Location permission to our bar-descriptor.xml file so that we can get the current location of the device: There are several graphical assets that our app uses, such as background images and custom bubbles and pins. We need to import the following images: bubble.png - A text bubble for a location on the map clearpin.png - An icon for the action to clear all pins on the map pin.png - An icon for the action to drop a pin on the map url.png - An icon for the action to center the URL of a pin on the map compass.png - An image for the compass on the main UI me.png - A different icon for the location of your device on_map_pin.png - A pin on the map To import the images into your project: - Download the assets.zip file. - Extract the images folder into the assets folder of your project. - Refresh your project in the Project Explorer view. Last modified: 2015-03-31
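A note on the Location permission mentioned in the tutorial above: the actual bar-descriptor.xml snippet did not survive extraction. Assuming the standard BlackBerry 10 permission identifier, the entry would likely look like the following; exact element placement depends on your generated descriptor.

<!-- Likely form of the Location permission entry in bar-descriptor.xml (assumed identifier) -->
<permission>access_location_services</permission>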
<urn:uuid:70a7b9c9-8677-4f6c-87f0-a26f128c5611>
CC-MAIN-2017-04
http://developer.blackberry.com/native/documentation/device_platform/location/tutorial_mapview.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00147-ip-10-171-10-70.ec2.internal.warc.gz
en
0.789257
575
2.671875
3
Definition: Defining the behavior of an abstract data type with axioms. Aggregate parent (I am a part of or used in ...) stack, bag, dictionary, priority queue, queue, set, cactus stack. For example, the abstract data type stack has the operations new(), push(v, S) and popOff(S), among others. These may be defined with the following axioms. The second axiom says that if we push a value onto a stack then pop it off, the result is the same stack. The "=" can be seen as a rewrite operation. The axiom "X = Y" means any time we see X, we can rewrite it to be Y. X may contain variables representing subexpressions. What is the meaning of "popOff(push(1776, new()))"? The second axiom says it means the same as new(). The third axiom assigns meaning to expressions like top(push(1, push(2, new()))): it is 1. This is reasonable, since the top element is the latest one pushed. A series of push and popOff operations and a top operation may be reduced with these axioms. What stack does new() return, then? We still haven't said; top(new()) is just not defined. But that is how a stack works: the top of an empty stack is not defined. So our formalism corresponds to our mental notion of a stack. If we want to, we can add more axioms for richer semantics, as is done in the stack entry. If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 24 August 2005. HTML page formatted Mon Feb 2 13:10:39 2015. Cite this as: Paul E. Black, "axiomatic semantics", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 24 August 2005. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/axiomaticSemantics.html
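The axioms themselves did not survive extraction of this entry. A typical axiomatization consistent with the explanations in the entry above (a hedged reconstruction, not necessarily the exact original wording) is:
1. new() returns a stack.
2. popOff(push(v, S)) = S
3. top(push(v, S)) = v
Axiom 2 is the "second axiom" the entry discusses (pushing a value and then popping it off yields the same stack), and axiom 3 is the "third axiom" (the top of a stack is the value most recently pushed); top(new()) is left undefined, as the entry notes.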
<urn:uuid:40e8fc3b-bf6e-4f1e-a08b-6040a5b08530>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/axiomaticSemantics.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00055-ip-10-171-10-70.ec2.internal.warc.gz
en
0.894983
451
2.875
3
Oracle said on its security blog on Sunday that its update fixed two vulnerabilities in the version of Java 7 for Web browsers. It said that it also switched Java's security settings to "high" by default, making it more difficult for suspicious programs to run on a personal computer without the knowledge of the user. Java is a computer language that enables programmers to write software utilizing just one set of code that will run on virtually any type of computer, including ones that use Microsoft Corp's Windows, Apple Inc's OS X and Linux, an operating system widely employed by corporations. One version is installed in Internet browsers to access web content. Separate versions are installed directly on PCs, server computers and other devices including phones, webcams, and Blu-ray players. The Department of Homeland Security and computer security experts said on Thursday that hackers figured out how to exploit the bug in a version of Java used with Internet browsers to install malicious software on PCs. That has enabled them to commit crimes from identity theft to making infected computers part of ad-hoc networks used to attack websites. Oracle said that the flaws only affect Java 7, the program's most-recent version, and versions of Java software designed to run on browsers. Java is so widely used that the software has become a prime target for hackers. Last year, Java surpassed Adobe Systems Inc's Reader software as the most frequently attacked piece of software, according to security software maker Kaspersky Lab. Java was responsible for 50 percent of all cyberattacks last year in which hackers broke into computers by exploiting software bugs, according to Kaspersky. That was followed by Adobe Reader, which was involved in 28 percent of all incidents. Microsoft Windows and Internet Explorer were involved in about 3 percent of incidents, according to the survey. The Department of Homeland Security said attackers could trick targets into visiting malicious websites that would infect their PCs with software capable of exploiting the bug in Java. It said an attacker could also infect a legitimate website by uploading malicious software that would infect machines of computer users who trust that site because they have previously visited it without experiencing any problems. Security experts have been scrutinizing the safety of Java since a similar security scare in August, which prompted some of them to advise using the software only on an as-needed basis. Meanwhile, Microsoft said on Sunday that it would release an update on Monday to fix a previously disclosed flaw in Internet Explorer versions 6, 7 and 8 that made PCs vulnerable to attacks in which hackers can gain remote control of the machines. Microsoft previously released a temporary fix to prevent such attacks. Copyright 2012 by Reuters. All rights reserved.
<urn:uuid:26588ef8-bff3-433b-81df-202b88119184>
CC-MAIN-2017-04
http://www.banktech.com/oracle-updates-java-security-experts-say-bugs-remain/d/d-id/1296076?page_number=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00265-ip-10-171-10-70.ec2.internal.warc.gz
en
0.95339
536
2.53125
3
Originally published June 9, 2005 The focus of this article is to present and discuss the fundamentals of DNA computing power and to bridge the gap into a futuristic look at Information Integration in the Nanotech world. Future articles will venture ever further into speculation about a Nanohousing device: what the models look like, what the technology looks like to build the models and how information (both data and programmatic functions) are necessary to manage the ultimate analytical engine. These are based on bio-mechanical DNA structures. This article is focused on the abstraction of DNA computing power used to produce the world's first Nanohouse—an integrated data store combined with functionality and recognition, inclusive of abilities to attach, detach (self-assemble, break-down, score) nodes inside of a massively parallel computing tree. Re-Introduction to DNA Computing. In case you missed some of the first articles I've written, or if this is your first introduction to Nanohousing, let's recap why DNA computing is important by explaining what it brings to the table. By the way, DNA computing is very real today. It has been proven to be effective based on multiple lab reports, DARPA experiments and even professional business journals. DNA computing is the ability to perform mathematical operations within DNA strands or across DNA strands in massively parallel functionality. "The excitement DNA computing incited was mainly caused by its capability of massively parallel searches. This in turn showed its potential to yield tremendous advantages from the point of view of speed, energy consumption and density of stored information. For example, in Adleman's model the number of operations per second was up to 1.2 x 10^18. This is approximately 1,200,000 times faster than the fastest supercomputer. While existing super computers execute 10^9 operations per Joule the energy efficiency of a DNA computer could be 2 x 10^19 operations per Joule. That means that a DNA computer could be about 10^10 times more energy efficient. Finally, storing information in molecules of DNA could allow for an information density of approximately 1 bit per cubic Nanometer, while existing storage media store information at a density of approximately 1 bit per 10^12 nm^3. A single DNA memory could hold more words than all the computer memories ever made." (Process of Bio-Computing and Emergent Technology, 1997, Lila Kari.) In order to construct our first Nanohouse we should establish some basic requirements under which it must perform. Below is the short list of such requirements. There are many more rules by which the Nanohouse must abide; however, the list is too extensive to provide it here. The basic gist is that the Nanohouse must abide by bio-mechanical rules along with some twists to handle data, form and function all in the same space. Separation of content from form from function would spell disaster. Unfortunately in today's world of warehousing and integration, this is exactly what we've done, which is why (I speculate) we cannot produce gradients of content and nature of importance, and also why (I speculate again) we have to use artificial means such as data mining algorithms to put the three back together again. Bio-mechanical objects like DNA strands know and understand what their purpose is. They also house the data to accomplish that purpose—and furthermore, they house the programming or algorithms necessary to complete tasks.
I hypothesize that if we are to make a truly knowledgeable or thinking machine, we must start with the building blocks, and begin with convergence of data, form and function on the molecular level. Hence, the nature of Nanohousing takes on new twists. Other features of the Nanohouse When we think about building the Nanohouse, we ask the questions: what other features will the Nanohouse have? What can it do? Where will it apply? And, why do we need one? The Nanohouse will operate in all-parallel mode all the time. It will contain the ability to build up (self-assemble) multiple Nanohouses for a larger context solution. It will also contain the ability to separate (disassemble) into its relevant parts so that new operations can be performed. The Nanohouse will also be responsible for understanding the request, knowing the information it contains, and answering whether or not it can service the request based on the information it contains. All Nanohouse elements will do this in parallel. The following functions are available on a bio-molecular level; therefore they must be coded into the software/programming of the Nanohouse: separating, extracting, cutting (writing/updating), ligating (writing), substituting (writing/updating), and marking, destroying, detecting and reading. All of this must be coded under security rules that answer these functions: Most security programming will be inherent (implicitly specified) by the biological nature of the DNA strand itself. Certain types of enzymes and encoding schemes won’t or shouldn’t be allowed. Thus it makes it virtually impossible to construct an encoded virus that affects the system. What About Errors? Errors occur even in natural systems. However, most natural systems have the ability to spot the errors and reduce or eliminate them. Think of our immune system—a set of DNA structures with a specific function to generate antibodies that eliminate viruses and other bacteria within the body when the body gets sick. This means the Nanohouse should assemble different kinds of clusters with different programming. Certain types of programming in the Nanohouse will roam the solution to find rogue nanites (new term) and destroy them. Other types of programming will serve as a nervous system. Still others will serve as an adaptation or experimentation lab (in which modifications to the DNA and its coding can be tested for viability). Finally, other types of programming will serve as construction of context and hypothesis—a thought lab if you will. So what about errors? The biology of nature solves this problem several ways, one of which is massive redundancy. Where the Nanohouse is concerned (because of the small aspect of the DNA strands, and because of the nature of reduction of heat) it is possible to have redundancy in the DNA computer, or in the Nanohouse. In fact, it’s not only possible—it’s required. Another mechanism is error correction instructions built right into the DNA strand itself, thus allowing checks and balances for all the operations that take place within a single Nanohouse. Errors will be handled in a multitude of ways. In the Nanohouse lab area (mentioned above), errors will be tested for survivability and applicability. Errors become the hypothesis of evolution of the DNA strands throughout the Nanohousing global community. Conclusions and Summary As with any good futuristic theory, I propose some methods which may or may not work. What is certain is that DNA computing is already well on its way. 
I urge you to read more about it, particularly in the area of biomechanics and bioinformatics. Other forms of Nanotechnology are also coming to the forefront such as carbon nanotubes and carbon nanowires, along with man-made atomic molecules. However, today the DNA computing device shows the most promise for the way we want to apply it. You can read about a joint effort here. In the next article we will dive into the risks of creating a DNA computing device (Nanohouse), and discuss some of the benefits that have already been seen through experimentation. Other articles will continue to explore Nanohousing until we have explored the hypothetical perfect-world “Data Warehouse in DNA” concept. Recent articles by Dan Linstedt
<urn:uuid:5ea0ee0d-f9bd-406e-b59b-e938a5cdf27f>
CC-MAIN-2017-04
http://www.b-eye-network.com/view/974
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00385-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940087
1,588
3.34375
3
If a computer on local LAN network is turned off and administrator needs to do some regular maintenance on it, he will need to use Wake-On-LAN (WoL) to power the system up remotely. Of course, network devices need to be configured to enable that kind of “magic” packet forwarding. NIC cards on machines need to support WoL for this to work, but we don’t bother with this here.. WoL is sending “magic packets” to computer NIC card in order to start the system up. NIC which supports WoL is still receiving power when PC is turned off. NIC then keeps listening on the network for the magic packet and if received it will initialise the system boot process and power up the PC. Magic packet is specially crafted network directed broadcast packet typically sent with connectionless UDP, port 7. You would usually have a WoL server somewhere on you network which will be used to source magic packets. If you send magic packets across network segments (between VLANs or from some remote subnet), last router in the path, one having client subnet locally connected, needs to be configured with directed broadcast. The first router on the path, router with server subnet locally connected, should have ip helper configured pointing to directed broadcast IP address (in our case 172.19.1.255). In our example below, both ip helper and directed broadcast are configured on the same L3 device since this is the only router connecting two subnets. Directed broadcast on Cisco devices is off by default since IOS 12.0 and needs to be configured on specific subnets where WoL will be needed. You need directed broadcast because PC which needs to be woken up is asleep and while asleep it will not have an IP nor it will respond to ARP. Only way to get some packets to that PC without an ARP resolution is by using local subnet L2 broadcast. Furthermore, we can surely assume that your PCs are connected to L2 Access Switch. That switch will not know to which port is the PC connected while that PC is asleep. Only a Layer 2 broadcast (and unknown unicast) will be sent out all ports on a switch. Make Directed Broadcast Secure IP directed broadcasts, if enabled on the network equipment, can make your network vulnerable to DOS attacks. IP directed broadcast is a packet sent to the broadcast address of a subnet but from a sender which is not directly connected. This kind of packet will get forwarded through the network like a normal unicast packet until the target subnet (for example 172.19.1.255 for 172.19.1.0/24). When it arrives at its local subnet it will be transformed into link-layer broadcast (L2 destination MAC address is FFFF.FFFF.FFFF). DOS attack can happen if the attacker starts to send ICMP echo requests with a rogue source address to a directed broadcast address (again, for example 172.19.1.255 for 172.19.1.0/24). All hosts on 172.19.1.0/24 subnet will then reply to rogue source IP address. Making a large ICMP stream to this one directed broadcast address will create huge number of replies directed to one IP address. If you simulated this IP to be an IP of some important server or other type of host, this huge stream of ICMP responses can deplete server resources or available bandwidth and prohibit normal network communication. For Cisco and other vendors equipment to, this was good enough reason to have “no ip directed-broadcast” command as default for all interfaces. 
If you still need ip directed-broadcast on some specific network segments, you will enable it only where needed and restrict the sourcing of directed broadcast traffic to specific source IP addresses. In our case, we will be able to use WoL, which needs ip directed-broadcast, but only on particular segments and with only the WoL server in the access list. The access list ensures that only this particular server can source directed broadcasts. The WoL server sends the magic packet towards the directed broadcast destination 172.19.1.255. In order to get the L3 switch to forward this packet from the source VLAN10 towards the destination VLAN11, the switch needs to be configured, because by default it will automatically discard this kind of packet. Configuration needs to be done on two sides: on VLAN10, where the server is connected, so that magic packets can be sent, and on the client-side VLAN11, so that magic packets can be delivered to clients. In this scenario, in order to forward the magic packet through the L3 switch, we need an ip helper-address pointing towards the broadcast address of the target LAN, in our case interface VLAN11, whose broadcast IP address is 172.19.1.255. On the target LAN (interface VLAN11), we need ip directed-broadcast with an access-list limiting who is permitted to send directed broadcasts (the WoL server). Configuring ip helper-address handles the server side, and configuring ip directed-broadcast handles the client side.
L3_SW(config)#access-list 111 permit udp host 10.10.10.10 any eq 7
Enables UDP port 7 (magic packet) to be forwarded as IP directed broadcast
L3_SW(config)#ip forward-protocol udp 7
Server VLAN interface
L3_SW(config-if)#interface vlan 10
L3_SW(config-if)#ip address 10.10.10.1 255.255.255.0
L3_SW(config-if)#ip helper-address 172.19.1.255
Client VLAN interface
L3_SW(config-if)#interface vlan 11
L3_SW(config-if)#ip address 172.19.1.1 255.255.255.0
L3_SW(config-if)#ip directed-broadcast 111
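The article configures the network path but does not show how the WoL server builds the magic packet itself. The sketch below is a minimal Python illustration of the standard magic packet format (six 0xFF bytes followed by the target MAC address repeated 16 times), sent as a UDP datagram to the directed broadcast address and port 7 used in this example; the MAC address shown is hypothetical.

# Minimal WoL server-side sketch: build and send a magic packet.
import socket

def send_magic_packet(mac: str, broadcast_ip: str = "172.19.1.255", port: int = 7) -> None:
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    payload = b"\xff" * 6 + mac_bytes * 16                # 102-byte magic packet
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast_ip, port))           # directed broadcast, UDP port 7

# Example with a hypothetical target NIC:
send_magic_packet("00:11:22:33:44:55")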
<urn:uuid:20648bd7-e14e-4d15-acdc-b04c15e290a4>
CC-MAIN-2017-04
https://howdoesinternetwork.com/2016/wol
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00385-ip-10-171-10-70.ec2.internal.warc.gz
en
0.919033
1,251
2.96875
3
To understand the significance of December 21, 2012 to the Mayans (and today's mass media) it's necessary to recognize and understand the Mayan numbering system, theology and astronomical prowess. First, the Mayan had two numbering systems which more-or-less are akin to our distinct decimal system for counting things, and our Gregorian system for counting dates. However, their numerical system is a base-20 vigesimal, not base-10 decimal system. This owes to the fact that they felt perfectly comfortable using their toes for counting, and relished the ability to represent petabyte-scale numbers like faraway dates efficiently. The downside of this and some unfortunate anomalies they introduced was that they never were able to master multiplication or division. Unlike the ancient Romans though, Mayan data modelers did invent a symbol for the number zero, which turns out to be an important part of the story. However, unlike most of our cultures the Mayans also had two distinct calendar systems: the "Short Count" and "Long Count". The Short Count derives from a sacred count of 260 days known as the tzolkin munged with Venus's relatively-protracted year. Although based in part upon astronomical observations, this calendar was purely for ritualistic purposes, still used by Guatemalan highlanders today, and bears no relevance to our imminent ominous occasion. The Long Count calendar is also based on astronomical observations and cycles, and multiples thereof. The longest of the five nested Long Count cycles is the Baktun which is 144,000 days or about 400 years – interestingly the same as our present-day quadricentennial leap year cycle. The 13-Baktun "Great Cycle" spans 5125.36 years, completing (and iterating, I hope) on December 21, or 13.0.0.0.0 in Mayan nomenclature. But why December 21st? What happened 5125 years ago on 0.0.0.0.0? The answer that has perplexed scholars until recently is: nothing. Nothing happened on that date—which happens to predate the Mayan civilization by some 3000 years. Unlike most modern-day cultures whose ethnocentric calendars begin on an important date in their own history, the Mayans saw themselves as part of a much bigger and longer picture…one of astronomical scale. It wasn't until scholars determined that the date 13.0.0.0.0 coincides with a confluence of Mayan theology and rare astronomical events (due to the astrological precession caused by the slow wobbling of the Earth's axis) that they realized the Mayan calendar is reverse-engineered. After decades and centuries of data collection (i.e. ancient Big Data curating methods), the Mayan's best data scientists projected that on December 21, 2012 the Sun's ecliptic will pass through the center ("dark region" or "dark road") of the Milky Way, not just on any old day, but on the Winter solstice. It is on this day that the Mayan's depict their sun god Pacal (no relation to Blaise) traveling into the underworld to do battle with the lords of Xibalba. So if you want to really impress someone this holiday season, wish them a Happy 14th Baktun or "May you have a renewed Great Cycle!" Follow Doug on Twitter: @Doug_Laney Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner.
This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.
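To make the Long Count arithmetic above concrete, here is a small Python sketch of the place values (kin, uinal, tun, katun, baktun) and the length of the 13-Baktun Great Cycle; the year length used is the Gregorian average, so the result is an approximation that matches the 5125.36-year figure quoted above.

# Long Count place values in days: baktun, katun, tun, uinal, kin.
PLACE_VALUES = [144_000, 7_200, 360, 20, 1]

def long_count_to_days(baktun, katun=0, tun=0, uinal=0, kin=0) -> int:
    digits = [baktun, katun, tun, uinal, kin]
    return sum(d * v for d, v in zip(digits, PLACE_VALUES))

great_cycle_days = long_count_to_days(13)        # 13.0.0.0.0
print(great_cycle_days)                          # 1872000 days
print(round(great_cycle_days / 365.2425, 2))     # ~5125.36 years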
<urn:uuid:624cb78a-6263-44a1-ada1-0ec1dc512788>
CC-MAIN-2017-04
http://blogs.gartner.com/doug-laney/mayan-big-data-and-predictive-analytics/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00109-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933159
854
3.28125
3
Most smart meters that are installed, or are soon to be installed, in hundreds of millions of homes around the world are woefully insecure and can be easily hacked by a remote attacker to alter energy consumption levels, hack other smart devices in the user's home, or even cause the meter to explode. These are the findings of Netanel Rubin, a researcher with Vaultra, a security firm specialized in smart device security. Rubin presented his findings at the 33rd Chaos Communication Congress held last week in Germany. In his presentation, available at the end of this article in video format, Rubin paints a grim picture where governments around the world, in an effort to reduce energy consumption, have adopted legislation that pushes smart meters into the homes of million of people. Because of the push to make energy grids "smarter," there's now a need for smart meters, with more than 60 different smart meter manufacturers more than happy to provide products to energy companies across the world. Unfortunately, as is the case in any competitive market, these smart meter vendors are cutting corners in order to provide the cheapest and feature-full products, often sacrificing device security measures to do so. Rubin says that most smart meters available on the market today are woefully insecure, mainly by the vendor's design choice. Smart meters use GSM to talk to the energy provider, and ZigBee to connect to the user's home network and allow the user to inspect his energy consumption levels. The problem is that both protocols have been known to be vulnerable for years. Attackers could very easily spoof GSM communications and control smart meters across a city. This is possible because GSM does not support encryption, allowing a determined attacker an avenue to hack smart cities. In the cases where GSM is replaced with the combination of GPRS and A5 protocols, Rubin says that this is still not enough, as both protocols could be brute-forced, and the attacker can get hold of the encryption key with ease. Even worse, Rubin says, is that in cases he analyzed, most power grid companies use the same encryption key for all smart meters across a city. An attacker that manages to hack one smart meter could very easily escalate his access to all smart meters belonging to that energy provider. This is also possible, as Rubin explained to the audience, because energy vendors also fail to segment their networks, managing their customers in one giant LAN. And if that wasn't enough, energy companies also don't monitor their smart meter network for attacks, meaning an intruder could go undetected for days, weeks, or months. At the customer level, Rubin also says that smart meters are a gateway for cyber-attacks. The main reason, he says, is the ZigBee protocol. Because there are no official government-issued standards in most countries, smart meter vendors are left on their own to decide how to secure their devices. Since the ZigBee standard is loosely regulated and because there are about 15 different versions of this protocol, smart meter vendors pick and choose what features to implement. In most cases, they choose the ones that take up the fewer resources on their devices or remove security features from the protocol, in order to cut down functions they need to embed in the smart meter's firmware. This inadequate hack job has left smart meters open to trivial attacks. 
For example, a remote attacker could query the smart meter and ask to join the meter's network (which is the customer's home network). In the cases Rubin describes, smart meters would dish out the network's secret key to anyone asking, allowing an attacker to connect to the user's network without any form of authentication.

Once an attacker has joined the user's home network, it's game over. The attacker could impersonate any device on that network, or send commands to those devices, such as opening doors protected by smart locks, altering heating system settings, controlling smart ovens, and more.

But a loudmouth smart meter wasn't the only problem Rubin discovered. The security expert says that these devices also ship with very faulty firmware. This is because developers often minimize the smart meter's firmware and in many cases skip security-related checks in their code, leaving many open holes that can be exploited. While memory buffer overflows would allow attackers to take over the smart meter, Rubin says there's no need for someone to mount such a complex and time-consuming attack.

"A simple segmentation fault will crash the meter, causing an electricity shutdown at the premise," Rubin said. "On top of that, some crashes will actually cause this [shows a picture of a burned down house, seen below]. So, all you have to do in order to burn someone's house down is send a very long header string."

These are only a few of the many issues Rubin presented, and together they add up to a worrying smart meter attack surface. As with other IoT devices, the problem lies with uninterested and unscrupulous vendors, but also with governments around the world, which have failed to put regulation in place even after security researchers have warned about the insecure Internet of Things for more than a decade. And this likely won't change until some NSA, Chinese, or Russian hacker crashes the energy grid in a city of ten million or more people, and everyone understands the dangers they are exposing themselves to.
<urn:uuid:6b356c79-cc9c-447c-b9ad-0159d1aa89c9>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/news/security/smart-meters-are-laughably-insecure-are-a-real-danger-to-smart-homes/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00109-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959896
1,096
2.609375
3
In the current age of the Internet and global economy, more and more applications are required to handle data that presents itself in different national languages. To a developer, this means that national language requirements must be taken into consideration during every phase of application development: database design, application design, and application programming. DB2 9 supports a variety of languages with a wide range of attributes, such as accent marks (French), bidirectional scripts (Arabic), and large character sets (Chinese). These languages pose different challenges in storing, processing, accessing, and presenting the data. The data that is affected by national languages is not limited to string data; it also includes numeric, date, and monetary data.

Character vs. byte semantics of string data

Prior to DB2 9, DB2 had some string functions that worked on character and graphic data from a mixed perspective of bytes and double-byte units. As explained earlier, users increasingly think of their data in terms of characters of various national languages. The subject of what constitutes a character, and how you can count characters, is addressed by the new DB2 9 functionality discussed in this article.

In the case of a single-byte character encoding scheme, a single byte constitutes a character, and the length of a single-byte string is the same as the byte length of the string. In the case of graphic strings, two bytes constitute a character, and you use the number of double bytes to represent the length of the string. However, in the case of a multi-byte encoding, the length of a character in bytes varies according to the encoding used, and each character can be one or more bytes in length. Counting string length in bytes is referred to as byte semantics in this article, and counting string length in characters is referred to as character semantics.

Consider the following string in the Chinese language:

Figure 1. String in Chinese

The length of the string is two if character semantics is used to calculate the length of the string. But if byte semantics is used, and the characters are encoded using UTF-8, then the length of the string is 6 bytes.

Need for character-based functions

Character-based data in SQL is associated with numeric values in many contexts, as mentioned below:
- Length of a string variable: the input argument of the SUBSTR function that determines the desired length of the resulting string, or the output of the LENGTH function.
- Offset within a string: the second argument of the LOCATE function, which specifies the starting position within a string to begin the search.

These numeric values represent the number of bytes in single-byte data and the number of double bytes in graphic or double-byte data. However, these numeric values do not adhere to character semantics in the case of multi-byte character encodings, like UTF-8. The following sections illustrate why character-based functions are needed.
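Before looking at the DB2 functions themselves, the mismatch between byte counts and character counts is easy to reproduce in any Unicode-aware language. The short Python sketch below is an illustration added here (it is not from the original article, and the sample string is made up): it measures the same two-character string in characters, UTF-8 bytes, and UTF-16/UTF-32 code units.

```python
# A two-character string: one ASCII letter followed by one Chinese character.
s = "a\u4e2d"  # "a" + U+4E2D

print(len(s))                           # 2 characters (character semantics)
print(len(s.encode("utf-8")))           # 4 bytes in UTF-8 (byte semantics)
print(len(s.encode("utf-16-le")) // 2)  # 2 UTF-16 code units
print(len(s.encode("utf-32-le")) // 4)  # 2 UTF-32 code units
```

A byte-oriented length function reports 4 for this string, while a character-oriented one reports 2, which is exactly the gap the unit-aware DB2 functions described below are meant to close.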
What constitutes a character?

Recognizing a character as a single unit, as opposed to a sequence of bytes, is a requirement for string manipulations involving multi-byte characters. An application programmer who is allocating a buffer needs to know how much memory to allocate for each character. Therefore, it is important to understand what constitutes a character when writing applications that involve multi-byte character data.

A character can be defined as a unit of information that corresponds to an atomic unit in a written language. Each character is represented as a sequence of bits using a character encoding, and an individual character is usually encoded using one or more bytes, depending upon the encoding used. Consider the characters "A" and "Latin capital letter A with ring above." The hexadecimal representation of the character "A" is x'41', and that of "Latin capital letter A with ring above" is x'C385'. You can get this representation using the SQL function hex().

Figure 2. Hexadecimal representation of characters

You can see from the representation above that each displays as a single character. However, the length of "A" is one byte, while that of "Latin capital letter A with ring above" is two bytes.

Length of a string in terms of code units

The length of a character string depends on the character encoding (ASCII, EBCDIC, or Unicode) used to encode its characters. A character can be represented using one or more code units of the respective encoding. Therefore, if you have the same set of characters in a string, its length may differ according to the encoding used. Consider the character named "musical symbol G clef." Its different encodings are shown in Table 1: both the hexadecimal representation and the length in code units differ from one encoding to the next.

Table 1. Hexadecimal representation of the same character in different encodings
| |UTF-8|UTF-16|UTF-32|
|Hexadecimal representation|X'F09D849E'|X'D834DD1E'|X'0001D11E'|
|Length in respective code units|4|2|1|

You can see from Figure 3 how to get the length in bytes for the "musical symbol G clef" character in UTF-8.

Figure 3. Length of "musical symbol G clef" in bytes

Search for characters

When you search a string for the occurrence of a specified substring, the search is performed and the result (the position within the string) is returned as a byte position, not the correct character or code unit position. Figure 4 shows a search for "a": the actual character position of "a" is 2, but the output is 3, because there is a multi-byte character in the string.

Figure 4. Result of search inside a string

Splitting of characters

Treating multi-byte character data as a sequence of bytes can lead to the accidental splitting of characters by string functions. In Figure 5, a substring of length 1 starting at the first byte of the string has been requested. Since the first character is multi-byte, the function splits the character and produces garbled output.

Figure 5. Character being split by SUBSTR function

Specify the start

You may need to provide input to functions like LOCATE to specify the starting position of a search. In the case of multi-byte data you may run into problems, and the results may not be what you expected. Figure 6 shows a search for the character after the third byte, which would have been the second occurrence of the character "a" if all of the characters were single byte. But since the first character is multi-byte, you get the result 3, which is the first occurrence of the search string.

Figure 6. Use of LOCATE to specify start position
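The three problems above (wrong positions, split characters, and misleading start offsets) can be reproduced with any byte-oriented string API. The following Python sketch is an added illustration, not part of the original article, and its sample text is made up; it shows the same effects on a UTF-8 byte string.

```python
text = "\u00c5ab"           # "Å" (2 bytes in UTF-8), then "a", then "b"
raw = text.encode("utf-8")  # b'\xc3\x85ab'

# 1. Byte positions disagree with character positions.
print(text.find("a") + 1)   # 2: "a" is the 2nd character
print(raw.find(b"a") + 1)   # 3: but it starts at the 3rd byte

# 2. Cutting on a byte boundary splits the multi-byte character.
print(raw[:1])                                     # b'\xc3', half of "Å"
print(raw[:1].decode("utf-8", errors="replace"))   # becomes the replacement character

# 3. A byte offset used as a "start position" skips a different amount of
#    text than the caller intended, which is the LOCATE problem above.
```

The 2-versus-3 discrepancy in the first two print statements is the same one described for Figure 4.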
Character-based functions

In addition to the string functions available in earlier versions of DB2, which handle character data using byte semantics, DB2 9 introduces a set of character-based string functions that understand character semantics. If a character in a particular encoding spans multiple bytes, the character-based string functions can process each character as a single unit as opposed to a sequence of bytes.

Introducing string length units

The character-based string functions of DB2 introduce the concept of string length units, which tell a function how the input string should be counted for a string operation. The string units available with DB2 9 for Linux, UNIX, and Windows are OCTETS, CODEUNITS16, and CODEUNITS32.

String length units apply to the numeric values that a string function takes or returns. For some functions, a numeric value is an input, for example the start, length, and offset parameters of string functions. For other functions, a numeric value is the result; for example, when you search a string for the occurrence of a specified substring, the search is performed and the result is returned as a number expressed in the string length unit that is implicitly or explicitly specified. The same string operation can therefore produce different outputs depending on the string length unit used for counting.

When OCTETS is used as the string length unit, the length of a string is determined by simply counting the bytes of the string. CODEUNITS16 specifies that Unicode UTF-16 code units are used for character semantics, and CODEUNITS32 specifies that Unicode UTF-32 code units are used to understand the character boundaries of multi-byte characters. Counting code units using either CODEUNITS16 or CODEUNITS32 gives the same answer unless supplementary characters (surrogate pairs) are involved. When supplementary characters are involved, a supplementary character is counted as two UTF-16 code units using CODEUNITS16, or as one UTF-32 code unit using CODEUNITS32. If you take the length of such a character, the output therefore differs according to the string length unit passed to the function.

Listing 1. Length of a string in different CODEUNITS
VALUES CHARACTER_LENGTH(X'F09D849E', OCTETS)
1
-----------
          4
  1 record(s) selected.

VALUES CHARACTER_LENGTH(X'F09D849E', CODEUNITS16)
1
-----------
          2
  1 record(s) selected.

VALUES CHARACTER_LENGTH(X'F09D849E', CODEUNITS32)
1
-----------
          1
  1 record(s) selected.

Character-based string functions in DB2 9

CHARACTER_LENGTH

This function, as defined in the SQL standard, finds the length of a character string using character semantics. It is similar to the LENGTH function in DB2 and takes an optional string length unit in which the result is expressed. Unlike the LENGTH function, CHARACTER_LENGTH does not accept input data that is not string based. The function takes two arguments, the first being the string and the second the string length unit. In many situations you need the string length in terms of code units, and the character-based functions can be used to obtain it. Consider the example of the character "musical symbol G clef" discussed before.

Listing 2. Use of CHARACTER_LENGTH to get the length of string in CODEUNITS
VALUES CHAR_LENGTH(X'F09D849E',CODEUNITS16)
1
-----------
          2
  1 record(s) selected.

VALUES CHAR_LENGTH(X'F09D849E',CODEUNITS32)
1
-----------
          1
  1 record(s) selected.

The character-based string functions thus solve the problem of getting the length of a string in terms of code units.
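The CODEUNITS16 versus CODEUNITS32 distinction for supplementary characters can be checked outside DB2 as well. The Python sketch below is an illustration added here, not DB2 itself; it counts the same "musical symbol G clef" character from the listings in bytes, UTF-16 code units, and UTF-32 code units.

```python
clef = "\U0001D11E"   # musical symbol G clef, a supplementary character

utf8  = clef.encode("utf-8")      # b'\xf0\x9d\x84\x9e'
utf16 = clef.encode("utf-16-be")  # surrogate pair D834 DD1E
utf32 = clef.encode("utf-32-be")  # 0001D11E

print(len(utf8))        # 4  (OCTETS)
print(len(utf16) // 2)  # 2  (CODEUNITS16: two UTF-16 code units)
print(len(utf32) // 4)  # 1  (CODEUNITS32: one UTF-32 code unit)
```

The counts match Listings 1 and 2: four octets, two UTF-16 code units, and one UTF-32 code unit for the same character.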
OCTET_LENGTH

This function, as defined in the SQL standard, returns the length of the input string in octets (bytes). It is similar to the LENGTH function when used against a single-byte data type, and it returns double the LENGTH value when a double-byte data type is used as input. The same result can be obtained by using CHARACTER_LENGTH with OCTETS as the string length unit.

Listing 3. Use of OCTET_LENGTH to get the length of string in bytes
VALUES OCTET_LENGTH(X'F09D849E')
1
-----------
          4
  1 record(s) selected.

LOCATE

The LOCATE function returns the starting position of the first occurrence of one string within another string. If the search string is not found, and neither argument is null, the result is zero. If the search string is found, the result is a number from 1 to the actual length of the source string. If the optional start is specified, it indicates the position in the source string at which the search is to begin. An optional string length unit can be specified to indicate in what units the start and the result of the function are expressed. The problem with specifying the start in the LOCATE function can be solved using the character-based form, as shown in Figure 7.

Figure 7. Use of LOCATE with CODEUNITS

POSITION

The POSITION function returns the starting position of the first occurrence of one string within another string. If the string to be searched for is not found and neither argument is null, the result is 0. If the string is found, the result is a number from 1 to the actual length of the input string, expressed in the code units that are explicitly specified. The POSITION function is defined in the SQL standard. It is similar to, but not the same as, the POSSTR function that is implemented across the DB2 family. The problem of byte positions being returned instead of character positions can be solved using the character-based functions; Figure 8 shows how to do so with POSITION.

Figure 8. Use of POSITION with CODEUNITS

SUBSTRING

The SUBSTRING function returns a substring of a string. A substring is zero or more contiguous string length units of the input string. Along with the input string, the SUBSTRING function takes three other arguments: the start position, the length, and the string length unit. The start position specifies the position within the input string that is to become the first string length unit of the result, and the length argument specifies the length of the desired substring. Because the function counts in the specified string length unit, the code units that make up a character are not split apart. Figure 9 shows how to prevent splitting of multi-byte characters.

Figure 9. Use of SUBSTRING with CODEUNITS

Handling of incorrect or incomplete data

String operations involving multi-byte characters may encounter characters that are incorrect (a combination of bytes not defined in the encoding) or incomplete (only part of the bytes of a multi-byte character). Consider some common conditions that can cause such situations when you do string manipulation using the new character-based string functions. The examples are based on the character "musical symbol G clef" (UTF-8 hex format X'F09D849E'), which has a length of two in CODEUNITS16.

Problems with input string

Incomplete string data

A string that contains only part of a multi-byte character is called incomplete string data. Consider a multi-byte UTF-8 character where the string holds only some of its bytes, for example the trailing bytes of the four-byte "musical symbol G clef" character. If you take the length of such a fragment in CODEUNITS16, the function returns a result along with a warning.
Listing 4. Use of incomplete input string data
VALUES CHARACTER_LENGTH(X'849E',CODEUNITS16)
1
-----------
          2
SQL1289W  During conversion of an argument to "SYSIBM.CHARACTER_LENGTH" from code page "1208" to code page "1200", one or more invalid characters were replaced with a substitute character, or a trailing partial multi-byte character was omitted from the result.  SQLSTATE=01517
  1 record(s) selected with 1 warning messages printed.

Incorrect string data

Each character encoding has its own set of permitted bytes or byte combinations for a character, and the input string supplied to a string function may contain wrong or invalid characters. If DB2 comes across an invalid character while doing a CODEUNITS16 or CODEUNITS32 calculation, it replaces any such byte sequence with the substitution character when the byte sequence forms part of the result of applying the function. The byte X'80' is invalid in UTF-8, and a warning is raised when it is encountered.

Listing 5. Use of incorrect character data
VALUES CHARACTER_LENGTH(X'80',CODEUNITS16)
1
-----------
          1
SQL1289W  During conversion of an argument to "SYSIBM.CHARACTER_LENGTH" from code page "1208" to code page "1200", one or more invalid characters were replaced with a substitute character, or a trailing partial multi-byte character was omitted from the result.  SQLSTATE=01517
  1 record(s) selected with 1 warning messages printed.

OCTETS and graphic string input

In the SUBSTRING function, when OCTETS is specified and the input to the function is graphic data, a <start> parameter that is not odd or a <length> parameter that is not even results in an error, because the operation would split a graphic character between its two bytes.

Listing 6. Splitting of characters
VALUES SUBSTRING(GRAPHIC('K'),2,1,OCTETS)
1
--
SQL20289N  Invalid string length unit "OCTETS" in effect for function "SYSIBM.SUBSTRING".  SQLSTATE=428GC

Problems with output string

Stand-alone surrogates or incomplete string data

When a character is represented by a sequence of two 16-bit code units, that sequence is called a surrogate pair, consisting of a high surrogate and a low surrogate. When CODEUNITS16 is used in the string functions, DB2 counts each 16-bit code unit individually, including stand-alone or isolated surrogates. That is, a character encoded as a surrogate pair has length two in CODEUNITS16 and length one in CODEUNITS32, so functions like SUBSTRING can split the surrogate pair depending on the arguments you give.

Buffer overflow on substitution character insertion

When a substitution character is inserted, the byte length of the string may increase. If the length grows beyond the buffer space available for the output, the tail of the string is truncated and you receive a warning that the value of a string was truncated when it was assigned to another string data type with a shorter length.
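The substitution-character behavior shown in Listings 4 and 5 mirrors what most Unicode libraries do when they meet truncated or invalid byte sequences. The following Python fragment is only an analogy added here, not DB2 behavior itself; it decodes the same kinds of byte values used in the listings.

```python
# Trailing bytes of the 4-byte G clef character: an incomplete sequence.
partial = b"\x84\x9e"
print(partial.decode("utf-8", errors="replace"))   # two replacement characters

# A byte that is never valid on its own in UTF-8.
invalid = b"\x80"
print(invalid.decode("utf-8", errors="replace"))   # one replacement character

# Strict decoding raises instead of substituting, roughly analogous to
# treating the condition as an error rather than a warning.
try:
    invalid.decode("utf-8")
except UnicodeDecodeError as exc:
    print(exc)
```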
Note that the new string functions live under the SYSIBM function path, whereas the older functions live under SYSFUN. You are expected to use the new SYSIBM functions, even when you are not using the string unit argument. By default, SYSIBM precedes SYSFUN in the default CURRENT PATH, and all of the older functions continue to be supported.

Performance considerations

The character-based functions may need to convert the input string to an intermediate Unicode code page, such as UTF-16 or UTF-32, before processing can be done. In cases where the result is a string, the intermediate result is also converted back to the input code page. OCTETS as a string length unit does not require any conversion and is the most efficient choice. CODEUNITS16 and CODEUNITS32 may cause code page conversions; DB2 applies its own optimizations, so a conversion may or may not actually be necessary. The conversion cost matters most for LOB input because of the potentially large size of the input string.

Conclusion

This article provided an overview of the new character-based string functions in DB2 Data Server. It first explained key concepts, such as character and byte semantics with respect to string data. Next, it discussed why you need these functions and walked through some generic problem scenarios. You also learned about the concept of code units and the character-based functions built on them. The article then explained how these functions solve the problems discussed earlier, with an example for each scenario, and finally covered common pitfalls and performance considerations. Ideally, you should use these functions to do string manipulation better and to push more application logic into the SQL layer, rather than implementing the same logic in your application.

Resources
- "Globalize your On Demand Business": Get a basic understanding of coded character sets or code pages, essential for dealing with multiple languages in information processing systems.
- "Access your database from everywhere" (developerWorks, Jan 2006): Read about a practical approach to DB2 UDB for Linux, UNIX, and Windows Unicode support.
- "DB2 UDB National Language Support for the Command Line Processor and Utilities" (developerWorks, Oct 2002): Use the CLP in the national language of your choice on both Windows and AIX environments.
- "Setting Up a Mixed-Byte Character Set (MBCS) Database on an English OS in DB2 UDB Version 8" (developerWorks, Sept 2002): Follow step-by-step instructions to set up an environment and create a mixed-byte character set (MBCS) database on DB2 Universal Database Version 8 in an English operating system environment.
- "A brief introduction to code pages and Unicode" (developerWorks, March 2000): Understand how the Unicode standard works and why you need it.
- Visit the developerWorks resource page for DB2 for Linux, UNIX, and Windows for articles, tutorials, and other resources to expand your DB2 skills.
- Learn about DB2 Express-C, the no-charge version of DB2 Express Edition for the community.
<urn:uuid:495b7dc5-f1a3-44e9-9d14-0d4e471c0a02>
CC-MAIN-2017-04
http://www.ibm.com/developerworks/data/library/techarticle/dm-0705nair/index.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00321-ip-10-171-10-70.ec2.internal.warc.gz
en
0.856398
4,481
3.515625
4
The federal government has granted Colorado authorities fighting deadly wildfires access to sensitive data that pinpoints high-value, critical infrastructure, such as bridges containing utility lines, according to a contractor mapping the data. The information release is intended to help the state predict which assets may be in danger and in need of immediate protection.

The Homeland Security Infrastructure Program is a compilation of about 500 layers of geographic features, including power plants and water pumps, managed by the Homeland Security Department, the Pentagon's National Geospatial-Intelligence Agency and the U.S. Geological Survey. The full HSIP data set, which amasses data from government, industry and academia, is available only to state responders when federal disasters are declared, such as the fires ravaging the hillsides west of Colorado Springs. The Waldo Canyon fire there has burned through more than 18,500 acres of land, threatening more than 20,000 structures, including many homes of military families, Pentagon officials said.

By combining HSIP, live weather conditions and other updated information, state responders "are predicting where the fire is likely to go [and] where they need to allocate resources to protect the highest natural resource values and developed values," said Russ Johnson, global director for public safety at Esri, the software firm helping decision-makers analyze the data sets. "The state is using HSIP data to fuse it with the [fire] perimeter data to understand what the impact and the potential impact on infrastructure could be."

Esri has posted a dynamic map based on public information similar to the graphics that authorities are using to gauge the effects of the fires. The in-depth HSIP geospatial information also helps response teams know where to go and what areas to avoid when they are thrown into unfamiliar, and now unmarked, areas.

With mobile ubiquity, "we have intelligence coming back in, right from the field," Johnson said. "The HSIP data is very valuable when combined with this real-time information." That convenience, which translates into faster aid, did not exist when Johnson was a U.S. Forest Service operations chief responding to 1988's record-breaking Yellowstone National Park fire. Incident management personnel at the time would scout the area from the air and the ground every morning and then travel back to a central coordination center. With HSIP and automated data feeds, "it's not somebody having to go out and fly the fire."
<urn:uuid:0746236b-e669-4e40-81ae-232ae89f9775>
CC-MAIN-2017-04
http://www.nextgov.com/big-data/2012/06/feds-divulge-sensitive-mapping-data-head-colorado-wildfires/56531/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00467-ip-10-171-10-70.ec2.internal.warc.gz
en
0.930719
503
2.765625
3
Virtual machines are created and managed by a piece of software called a "hypervisor". There are two kinds of hypervisors: Type I and Type II. A Type II hypervisor is a program or application that runs inside your operating system, just like your web browser. Type I hypervisors run at a level right underneath the operating system, which gives them exclusive access to nearly every aspect of their host's hardware. Type I hypervisors like VMWare ESXi are designed with enterprise users in mind. If your business or organization needs data recovered from an ESXi virtual environment, our ESXi data recovery experts can help.

How Does VMWare ESXi Work?

One of the jobs of a hypervisor is to manage your host machine's resources on your guest machines' behalf. Type I and Type II hypervisors manage this in different ways, and even among hypervisors of the same type, different products take different routes to reach their goals.

Like Microsoft's Hyper-V hypervisor, VMWare ESXi is a Type I "bare metal" hypervisor, although the two take different approaches. ESXi is a lighter, more compact version of VMWare's ESX hypervisor. ESXi installs what VMWare refers to as the "VMkernel" onto the bare server. VMkernel is a microkernel, meaning it has only the bare minimum of features needed to make up an operating system. Unlike ESX, which uses 2 GB of disk space, ESXi has a disk "footprint" of only 32 megabytes, and it can be installed to and booted from a USB drive or SD card instead of the server itself.

VMkernel has direct access to the server's CPU and memory, as well as other hardware devices. It formats the storage space in the server with the proprietary VMFS filesystem. VMFS is a cluster file system that allows multiple hosts to access the same logical unit number simultaneously. In this space, the user can create as many virtual machines as they want. On the outside, a VMWare ESXi virtual machine's disk is a single large VMDK file. On the inside, that file appears to be an entire computer, with its own file system and operating system. To the end user, its behavior is identical to a normal computer.

In the event of a critical error and failure of the VMkernel, ESXi can display what users have nicknamed the "Purple Screen of Death". These crashes can occur due to any sort of hardware failure or kernel panic. Some can be fixed by a simple reboot; others by replacing a faulty memory stick, the CPU, or the motherboard. Sometimes the problem has to do with the disks your virtual machines are stored on, and many times the origin of the problem is upstream of ESXi, with a SAN failing to properly present an iSCSI target.

VMWare ESXi Data Recovery

VMWare ESX and ESXi are enterprise-class hypervisors, which means they typically see use on enterprise-class servers and SANs like the Dell PowerEdge or Synology RackStation. The servers we see for ESXi data recovery typically contain anywhere from four to a couple dozen hard drives. These drives are usually arranged in a RAID-5 or RAID-6 array, or a nested RAID-10. Nested RAID arrays with extreme fault tolerance are less common, but we do see them on occasion.

Error messages such as "WARNING: FS3: 1575: Lock corruption detected at offset 0xc10000" in your vmkernel.log file are evidence of hard drive failure.

There are many ways a server or SAN can fail. When these failures happen because of the hard drives inside them, you could lose valuable data from your ESXi virtual machines.
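To give a feel for why a parity-based array can survive one missing drive but not two, here is a tiny Python illustration of XOR parity. It is a simplified sketch added for this page, not Gillware's actual recovery tooling, and the block contents are made up; real arrays add striping, parity rotation, and metadata on top of this idea.

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-sized blocks together byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data blocks that would live on three member drives.
d0, d1, d2 = b"VMDK-pie", b"ce-one..", b"ce-two.."
parity = xor_blocks(d0, d1, d2)        # stored on a fourth drive

# If one drive (say d1) is lost, XOR of the survivors rebuilds it.
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1

# If two drives are lost at once, the surviving XOR no longer pins down
# either missing block, which is why a second failure is fatal to RAID-5.
```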
It may seem unlikely that your RAID-6 or RAID-10 server would see enough drives fail to make it crash. But at Gillware, we've learned over thousands of server recovery cases that it isn't so unlikely. Servers fail every day. Even if your server has two or even three drives' worth of redundancy, it can fail. We've seen cases in which four drives in a RAID-10 server failed at once because of a power surge.

Physical disk failure isn't the only risk, either. A whole host of logical problems can lead to data loss as well: VMDK files can be deleted, or reset and reformatted, and data corruption can occur to or within ESXi VMDK files.

Two Stages of Data Recovery

When a server or SAN comes to us for ESXi data recovery, the work breaks into two stages. The first stage involves recovering the VMDK files themselves. The second involves recovering the data from the ESXi virtual environments inside them.

To recover the VMDK files, our RAID data recovery experts have to repair the failed hard drives and rebuild the array. Our engineers strive to create write-blocked forensic images of 100% of the binary bits on each drive. But data recovery doesn't always work out so smoothly. In many cases, all but a handful of drives are completely healthy, but the ones that aren't require extensive work in our cleanroom area. They may have varying degrees of damage to their data-storage platters, and this damage can make a 100% recovery impossible.

Our RAID engineers reconstruct the array using our forensic images. Our technicians analyze the RAID metadata and write custom software to recreate the array. There can be gaps in the data, depending on whether any portions of the array were unrecoverable. Even if 99.9% of the array was recovered, the missing 0.1% could be anywhere. So we aren't done yet.

The next step in the ESXi data recovery process is to turn all of your critical virtual machines into physical machines. We mount the VMDK files onto our own hard drives and analyze them using our proprietary forensic software, using the status mapping from the recovered RAID to get as accurate a result as possible. The final step is to comb through the formerly virtual machines and test the recovered data. Our ESXi data recovery engineers can see which files have been recovered, which haven't, and which have been partially recovered. We can even determine the level of file corruption.

Why Choose Gillware to Recover Data from ESXi Virtual Machines?

At Gillware, we have ESXi data recovery experts who understand exactly how ESXi works on a fundamental level. Our experts have handled thousands of data recovery cases and have racked up tens of thousands of hours of experience over the years. We make their skills available with no upfront charges. In fact, our entire ESXi data recovery process is financially risk-free. We charge no fees, upfront or otherwise, for evaluation, and we even cover inbound shipping.

The evaluation process typically takes less than two business days. Afterward, we present you with a price quote and a probability of success. We only move on with the recovery if you approve the quote, and we don't send you a bill until we've recovered your critical data. There are no fees if you back out after the evaluation or if we don't recover your important data. When the ESXi data recovery process is complete, we extract your data to a healthy, password-protected hard drive and ship it to you; to make sure your data is secure, only you get the password.
We also offer expedited emergency ESXi data recovery services. Evaluations for expedited ESXi data recovery cases are finished in a matter of hours. Emergency ESXi data recovery cases can be turned around in less than two business days. There is an additional charge for expedited service added to the bill. But we still stand by our financially risk-free, "no data, no charge" policy.

Ready to Have Gillware Assist You with Your ESXi Data Recovery Needs?
<urn:uuid:3584f45d-a614-4916-a901-94fccac7a1f4>
CC-MAIN-2017-04
https://www.gillware.com/esxi-data-recovery/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00467-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92283
2,248
3.09375
3
What is Service Virtualization?

In software engineering, service virtualization is a method for emulating the behaviour of specific components in heterogeneous, component-based applications such as API-driven applications, cloud-based applications and service-oriented architectures. With the emergence of the Internet of Things (IoT), service virtualization can now be applied to IoT and cloud-based applications too.

The main objective of service virtualization is to give development and testing teams access to dependent system components that are not currently available but are necessary to test an application.

Let us look at this through a case study.

The problem statement: A bank is IoT-izing its ATMs with face recognition and fingerprint sensors to authenticate ATM users. This sensor data and recognition service needs to interact with other services of the bank that connect to multiple databases. Dedicated availability of such a complex and interdependent environment is practically impossible. How should the team test this solution?

The answer: a service virtualization tool is run against the dependent components to capture their behavior and performance with respect to the system being built. A simulated test environment that replicates the components is then created and provided to the development and test teams to carry out their transactions. Service virtualization gives the teams dedicated access, with no impact on, or dependency upon, the original test environment.

How it works:

Service virtualization creates a "virtual asset" that simulates the behavior of an actual component required by the application being developed. A virtual asset is a stand-in for a dependent component: it listens for requests and returns a corresponding response with the appropriate performance, so that it behaves just like the actual component. (A simple sketch of such a stub appears at the end of this post.) There are 3 primary ways to create a virtual asset:
- Recording live communications among components during their transactions with the application.
- Providing logs representing historical communication among components.
- Manually defining the behavior with various interface controls and data source values.
A virtual asset is further configured to represent specific data, functionality, and response times.

Where to use it:

Service virtualization is useful when the components the application depends on are:
- Still being developed
- Controlled by external sources and available only on a limited basis
- Used by multiple stakeholders and not conveniently available
- Difficult to provision or configure in a test environment
- Costly to use, e.g. accessing data from the Cloud

Commercial service virtualization tools include:
- IBM Rational Test Virtualization Server
- CA Service Virtualization
- Parasoft Virtualize
- HP Service Virtualization (Trial Version Available)

The development of IoT applications has many dependencies on other components and services, and it is difficult for all of them to be available at just the right time when a component is being tested. It is here that service virtualization comes in handy and becomes an effective tool to speed up development and testing work, without having to wait for a dependent component to be completed or become available.
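To make the "virtual asset" idea concrete, here is a minimal, hypothetical sketch in Python of a hand-defined stub service. It is not any of the commercial tools listed above; the endpoint name, port, and response fields are invented for illustration. The stub simply returns a canned response with an artificial delay, the way a virtualized face-recognition service might stand in for the real one during testing.

```python
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned behavior for a hypothetical /authenticate endpoint.
CANNED_RESPONSE = {"userId": "demo-001", "authenticated": True, "confidence": 0.97}
SIMULATED_LATENCY_SECONDS = 0.3   # mimic the real service's typical response time


class VirtualAssetHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == "/authenticate":
            time.sleep(SIMULATED_LATENCY_SECONDS)   # emulate performance
            body = json.dumps(CANNED_RESPONSE).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    HTTPServer(("localhost", 8080), VirtualAssetHandler).serve_forever()
```

A test suite can then point at http://localhost:8080/authenticate instead of the real recognition service, so development and testing can proceed before the dependent component exists or is available.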
<urn:uuid:2463a30a-02fa-47f1-9739-8aa5d35a48e3>
CC-MAIN-2017-04
https://www.hcltech.com/blogs/engineering-and-rd-services/service-virtualization-testing-iot-components
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00404-ip-10-171-10-70.ec2.internal.warc.gz
en
0.905949
589
2.796875
3
Statistics? Yes. Computer science? Obviously. But for undergraduates exploring data science, the most important concepts to learn are problem-solving, synthesis—correlating one dataset with another—and storytelling. Those are the skills that will best help the next generation of data scientists take seemingly unrelated data from multiple sources and discover correlations to better understand how people, businesses and machines behave so they can solve important problems. That was the principle behind two courses taught for the first time to undergraduates in Columbia’s “Introduction to Data Science” and San Jose State’s “Introduction to Big Data” in the fall of 2012. At Columbia, “Introduction to Data Science” was taught by Rachel Schutt, a senior statistician at Google’s research division in New York. Schutt has a Ph.D. in statistics from Columbia, and first proposed the course as a seminar, featuring several guest speakers talking about their jobs in data science. That idea grew into a course in the statistics department that Schutt would teach and left her groping for textbooks and curriculum guidance. After buying 10 to 15 books on varied topics like machine learning and experimental design, she couldn’t find one that contained all the topics crucial for a class on data science. “It took me a while to realize that the fact that there wasn’t a textbook meant that this was an innovative, new thing,” Schutt said. “It actually caused me a great deal of confusion and frustration before the class started.” She ultimately scrapped the single textbook idea, instead relying on her own experience and conversations with colleagues at Google, professors at Columbia and the New York City data science community to guide the curriculum. The topics she settled on, which she described as “threads to run through the entire course,” were: machine learning algorithms and modeling, data visualizations, computer coding and ethics, and data science “habits of mind.” Teaching the Data Scientist Mindset at Columbia Schutt said the habits of mind section was hard to define and difficult to teach; she viewed them as not only tips and tricks from professionals but also promoting a higher level of understanding of what the mindset required for a successful data scientist. “These are things like creativity, knowing what to do when you don’t know what to do, and how to ask good questions,” Schutt said. “It’s not really a skill, and it’s hard to teach, but that’s actually something I found most interesting about trying to create a class like this, which is how do you teach those things that people say you can only learn on the job. It’s really about being a thinker, and being a curious person.” These topics were reinforced in the coursework and by several guest speakers including from Google and other technology companies as well as other faculty members. As part of the course Schutt wrote a blog, both as a resource for students but also to document and reflect on the course as she was teaching. Like the class’s guest speakers, the blog has guest writers, including other data science professionals, professors and students from the class. The class involved several technical and practical topics. Schutt said she taught students how to run statistical algorithms on massive datasets across multiple machines and the problems that arise in that process, which are still being figured out in the market. 
She covered how to create compelling data visualizations, something she struggles with herself and said there aren’t nearly enough classes on. She spent a lot of time on ethics, discussing the ethical pitfalls of building consumer-facing products where decisions are made by machines using metrics that might not understand the whole picture. Products in health care, mortgage banking or credit scores can affect a person’s whole life, so a lot of thought needs to enter into automated processes. Many of the topics covered in the course aren’t new, Schutt said, pointing out that statisticians rightly get uptight by the notion that using historical data to better understand business or human behavior is passed off as something entirely new. It’s not—it’s been happening for a long time. But what is new, Schutt said, is the convergence of several new technologies that create massive, unstructured datasets. What data scientists have to learn are the skills needed to analyze those unstructured datasets. “The type of data we have now, because of the Internet, it is different than it was even 10 years ago,” Schutt said. “It’s location-based data and time-stamped data, and all the data that humans leave behind as traces of themselves on the Internet and the Web. “We’ve let technology into our lives a lot more over the last 10 or 15 years. That means that data disseminated from that technology is a bigger part of our lives and that means we’re more accepting of it. People always want to learn about people, and so now there is this relationship between machines and people that is more pronounced than it used to be.” The Quest for Correlations in Unstructured Data at San Jose State The San Jose State course, “Introduction to Big Data,” was taught in that university’s computer science department by Professor Peter Zadronzy, who is also a performance consultant for Splunk. Zadronzy consulted with Rob Reed, Splunk’s worldwide education evangelist, as well as several other Splunk employees to discuss what big data skills were overlooked in the university’s computer science courses. The conversations were “wonderful and sometimes exhausting,” Reed said, but the final result was distilled down to this: “How do we handle huge volumes of information whose structure, size, velocity, [and other characteristics] we do not know beforehand. We know there is value in it, but how do we extract business value out of it? That was at the heart of the San Jose State course.” Zadronzy had technology partnerships with Splunk, Cloudera and GoGrid to illustrate the techniques for teasing out insights from large datasets that seem to be completely unrelated. Students used Splunk in teams as part of a final project, and presented the results to the company’s engineers and executives at Splunk’s San Francisco offices. Reed said the crucial concept in today’s big data education is the key value pair, where a single key identifier can track a person’s behavior, whether through clickstream data on the web or geolocation data via a mobile device. But according to Reed, the differentiator right now in business is not the technological skills to store key values. It’s teaching students how to think about different and interesting relationships in the data that can be identified through pairing key values from different data sets. “Let’s turn out students who are not bound by [a single] way of thinking, and who … will think of correlations across datasets that nobody has thought of before,” Reed said. 
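As a concrete, and entirely hypothetical, illustration of the kind of key-value correlation Reed describes, consider two small datasets keyed by the same user identifier, one from web clickstream logs and one from mobile geolocation, joined in a few lines of Python. The user ids and events below are made up for the example.

```python
from collections import defaultdict

# Hypothetical records from two unrelated sources, keyed by user id.
clickstream = [
    ("u42", "searched:running shoes"),
    ("u42", "viewed:store locator"),
    ("u7",  "viewed:homepage"),
]
geolocation = [
    ("u42", "entered:downtown sporting goods store"),
    ("u7",  "entered:airport"),
]

# Group each source by its key, then correlate on the shared key.
by_user = defaultdict(lambda: {"clicks": [], "locations": []})
for user, event in clickstream:
    by_user[user]["clicks"].append(event)
for user, place in geolocation:
    by_user[user]["locations"].append(place)

for user, events in by_user.items():
    print(user, events)   # u42's online intent lines up with an offline visit
```

Neither dataset says much on its own; the join on the shared key is what surfaces the online-to-offline story, which is the synthesis step the course was trying to teach.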
“When students get that ability to correlate, it’s amazing to see the light bulb go on.” To Reed, the ability to correlate and synthesize is the difference between someone who works with data and a data scientist. “If you can’t synthesize and tell a good story, you’re not going to a good data scientist,” Reed said. “You might be a good data functionary, or a data analyst, but nobody is going to look to you to solve a problem.” Email Staff Writer Ian B. Murphy at email@example.com.
<urn:uuid:fc0ec755-bb12-4ed9-a605-721aef4881bc>
CC-MAIN-2017-04
http://data-informed.com/data-science-101-training-undergrads-to-be-curious-problem-solvers-first-programmers-later/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00248-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957922
1,629
2.71875
3
Setting Up Files and Web Sites for Offline Access So far in this series, I've explained how the various types of offline file caching work. In this article, I'll explain how to actually set up files for offline access. I'll then go on to explain how to make Web sites available offline as well. Before you can make any folder available offline, you must first share it. To do so, open My Computer and navigate to the folder you want to make available offline. Next, right-click on the folder and select the Sharing command from the resulting context menu. When you do, you'll see the folder's properties sheet with the Sharing tab selected. To share the folder, select the Share This Folder radio button and enter a share name. You can now make the folder available for offline use. To do so, click Caching on the folder's Sharing tab. You'll see the dialog box shown in Figure 1. As you can see, the Caching Settings dialog box contains a drop-down list containing the various types of caching. You can select Automatic Caching, Manual Caching, or Automatic Caching for Programs. The dialog box also contains a brief description of each type of caching and what it's good for. If you set a folder to use Automatic Caching or Automatic Caching for Programs, then the caching process is... well, automatic. However, if you decide to manually cache a folder, the caching process requires some user intervention. Before a user can use a manually cached folder offline, he must go through a process called pinning. Pinning is the process of selecting which files should be available offline. Once you manually cache a folder, any user who normally has access to the folder also has rights to pin the folder. However, you can modify the group policy so that only a select few individuals have pinning privileges. To pin a folder, the user must be online. Once the user is logged in, he must navigate through the directory structure to the folder to which he needs offline access. After selecting the folder, the user must select File|Make Available Offline. Windows 2000 will then launch the Offline Files Wizard. The wizard's initial screen simply gives an explanation of the wizard's purpose, and the user can click Next to move on. The next screen asks if the user wants to automatically synchronize the offline files when he logs on and off the computer. The user makes the selection using the check box provided and then clicks Next. The wizard's final screen gives the user a chance to see a periodic reminder that he isn't online. After the wizard completes, a dialog box asks whether the user wants to make the selected folder the only thing that's available offline, or if he would also like to include the contents of the folder's subfolders. After the user selects the appropriate radio button and clicks OK, the folder will be available offline. Caching Web Pages As I mentioned earlier, you can also make Web sites available for offline use. For example, you might like to take a copy of your company's Web site with you when you go on business trips. To make a page available offline, you must first add it to your favorites. To do so, go to the desired page and select Add To Favorites from Internet Explorer's Favorites menu. Next, return to the Favorites menu and right-click on the Web site you've just added. Select Make Available Offline from the resulting context menu. At this point, you'll see the Offline Favorite Wizard. Begin by clicking Next to get through the wizard's introduction screen. 
Next, you'll see a screen similar to the one shown in Figure 2. By default, the wizard makes only a single page of the Web site available for offline use. However, you can make the entire site available offline, if you wish. To do so, click Yes to make the page's links valid. You can then use the dialog box's counter to tell Internet Explorer how many layers deep you want to make available offline. If you make enough layers available offline, you can download an entire Web site. Be careful about doing that, though--some Web sites are huge, and trying to download the entire thing can cause you to run low on hard disk space. Click Next to continue. The next screen informs you that you can update the page any time you're online by selecting Tools|Synchronize. However, you can also use this screen to establish an automatic synchronization schedule. After deciding on your synchronization schedule, click Next. The wizard's final screen asks if the Web site requires a password, and gives you the opportunity to supply one. When you complete the process, Internet Explorer will begin downloading the page for offline use. In this series, I've explained that mobile users sometimes need access to network resources and Web sites when no network or dial up connection is available. In answer to this problem, Windows 2000 offers several ways to make files, folders, programs, and Web sites available for offline use through caching. In my discussion, I've explained the pros and cons to each type of offline caching, as well as the setup procedures for each. // Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the director of information systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it's impossible for him to respond to every message, although he does read them all.
<urn:uuid:875cda43-7ab4-45d9-83f2-b4f298cf1c4a>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/print/netos/article.php/625491/Setting-Up-Files-and-Web-Sites-for-Offline-Access.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00156-ip-10-171-10-70.ec2.internal.warc.gz
en
0.905834
1,134
2.515625
3