There’s a lot of talk now about heterogeneous computing, but here I want to focus on heterogeneous programming. There are several, perhaps many, approaches being developed to program heterogeneous systems, but I would argue that none of them have proven that they successfully address the real goal. This article will discuss a range of potentially interesting heterogeneous systems for HPC, why programming them is hard, and why developing a high level programming model is even harder.

I’ve always said that parallel programming is intrinsically hard and will remain so. Parallel programming is all about performance. If you’re not interested in performance, save yourself the headache and just use a single thread. To get performance, your program has to create parallel activities, insert synchronization between them, and manage data locality.

The heterogeneous systems of interest to HPC use an attached coprocessor or accelerator that is optimized for certain types of computation. These devices typically exhibit internal parallelism, and execute asynchronously and concurrently with the host processor. Programming a heterogeneous system is then even more complex than “traditional” parallel programming (if any parallel programming can be called traditional), because in addition to the complexity of parallel programming on the attached device, the program must manage the concurrent activities between the host and device, and manage data locality between the host and device.

A Heterogeneous System Zoo

Before we embark on a discussion of the jungle of programming options, it’s instructive to explore the range of heterogeneous systems that are currently in use, being designed, or being considered, from GPUs to Intel MIC, DSPs and beyond.

- Intel/AMD x86 host + NVIDIA GPUs (x86+GPU). This is the most common heterogeneous HPC system we see today, including 35 of the Top 500 supercomputers in the November 2011 list.
The GPU has its own device memory, which is connected to the host via PCIe. Data must be allocated on and moved to the GPU memory, and parallel kernels are launched asynchronously on the GPU by the host.
- AMD Opteron + AMD GPUs. This is essentially another x86+GPU option, using AMD FireStream instead of NVIDIA. The general structure is the same as above.
- AMD Opteron + AMD APU. The AMD Heterogeneous System Architecture (formerly AMD Fusion) could easily become a player in this market, given that the APU (Accelerated Processing Unit) is integrated on the chip with the Opteron cores. Current offerings in this line are programmed much like x86+GPU. The host and APU share physical memory, but the memory is partitioned. It’s very like the x86+GPU case, except that a data copy between the two memory spaces can run at memory speeds, instead of at PCIe speed. Again, parallel kernels are launched asynchronously on the APU by the host. AMD has announced aggressive plans for this product line in the future.
- Intel Core + Intel Ivy Bridge integrated GPU. The on-chip GPU on the next generation Ivy Bridge processor is reported to be OpenCL programmable, allowing heterogeneous programming models as well. Intel’s Sandy Bridge has an integrated GPU, but it seems not to support OpenCL.
- Intel Core + Intel MIC. Reportedly, the MIC is the highly parallel technical computing accelerator architecture for Intel. Currently, the MIC is on a PCIe card like a GPU, so it has the same data model as x86+GPU. The MIC could support parallel kernels like a GPU, but it can also support OpenMP parallelism or dynamic task parallelism among the cores on the chip. An early Intel Larrabee article (also published in PLDI 2009) described a programming model that supported virtual shared memory between the host and Larrabee, but I haven’t heard whether the MIC product includes support for that model.
- NVIDIA Denver: ARM + NVIDIA GPU.
As far as I know, this is not yet a product, and could look like the AMD APU, except for the host CPU and accelerator instruction sets.
- Texas Instruments: ARM + TI DSPs. A recent HPCwire article described TI’s potential move (return, actually) to HPC using multiple 10GHz DSP chips. It’s pretty early to talk about system architecture or programming strategy for these.
- Convey: Intel x86 + FPGA-implemented reconfigurable vector unit. I’ve always thought of the Convey machine as a coprocessor or accelerator system with a really interesting implementation. One of the key advantages of the system is that the coprocessor memory controllers live in the same virtual address space as the host memory. It’s separate physical memory, but the host can access the coprocessor memory, and vice versa, though with a performance cost. Moreover, the programming model looks more like vector programming than like coprocessor offloading.
- GP core + some other multicore or stream accelerators. Possibilities include a Tilera multicore or the Chinese FeiTeng FT64. Again, it’s a little early to talk about system architecture or programming strategy.
- GP core + FPGA fabric. Convey uses FPGAs to implement the reconfigurable vector unit, but you could consider a more customizable unit as well. There were plenty of FPGA products being displayed at SC11 in Seattle, but none were as well integrated as Convey’s, nor did they have as strong a programming environment. The potential for fully custom functional units with application-specific, variable-length datatypes is attractive in the abstract, but except for a few success stories in financial and bioinformatics, this approach has a long way to go to gain much traction in HPC.
- IBM Power + Cell. This was more common a couple years ago. While there are still a few Cell-equipped supercomputers on the Top 500 list, IBM has pulled the plug on the PowerXCell 8i product, so expect these to vanish.
In the HPC space, there seem to be only two lonely vendors still dedicated to CPU-only solutions: IBM with the Blue Gene family, and future Fujitsu SPARC-based systems, follow-ons to the Fujitsu K computer.

Despite the wide variety of heterogeneous systems, there is surprising similarity among most or all of the various designs. All the systems allow the attached device to execute asynchronously with the host. You would expect this, especially since most of the devices are programmable devices themselves, but even the tightly-connected Convey coprocessor unit executes asynchronously. All the systems exhibit several levels of parallelism within the coprocessor.

- Typically, the coprocessor has several, and sometimes many, execution units (I’ll avoid over-using the word processor). NVIDIA Fermi GPUs have 16 streaming multiprocessors (SMs); AMD GPUs have 20 or more SIMD units; Intel MIC will have 50+ x86 cores. If your program doesn’t take advantage of at least that much parallelism, you won’t be getting anywhere near the benefit of the coprocessor.
- Each execution unit typically has SIMD or vector execution. NVIDIA GPUs execute threads in SIMD-like groups of 32 (what NVIDIA calls warps); AMD GPUs execute in wavefronts that are 64 threads wide; the Intel MIC, like its predecessor Larrabee, has 512-bit wide SIMD instructions (16 floats or 8 doubles). Again, if your application doesn’t take advantage of that much SIMD parallelism, your performance is going to suffer by the same factor.

Since these devices are used for processing large blocks of data, memory latency is a problem. Cache memory doesn’t help when the dataset is larger than the cache. One method to address the problem is to use a high-bandwidth memory, such as Convey’s Scatter-Gather memory. The other is to add multithreading, where the execution unit saves the state of two or more threads, and can swap execution between threads in a single cycle.
This strategy can range from swapping between threads at a cache miss to alternating between threads on every cycle. While one thread is waiting for memory, the execution unit keeps busy by switching to a different thread. To be effective, the program needs to expose even more parallelism, which will be exploited via multithreading. Intel Larrabee / Knights Ferry has four-way multithreading per core, whereas GPUs have a multithreading factor of 20 or 30. Your program will need to provide a factor of somewhere between four and twenty times more parallel activities for the best performance. Moreover, the programming model may want to expose the difference between multithreading and multiprocessing. When tuning for a multithreaded multiprocessor, the programmer may be able to share data among threads that share the same execution unit, since they will share cache resources and can synchronize efficiently, and distribute data across threads that use different execution units.

But most importantly, the attached device has its own path to memory, usually to a separate memory unit. These designs fall into three categories:

- Separate physical memory: Current discrete GPUs and the upcoming Intel MIC have a separate physical memory connected to the attached device, not directly connected to the host processor, and often not even accessible from the host. The device is implemented as a separate card with its own memory. There may be support for the device accessing host memory directly, or the host accessing device memory, but at a significant performance cost.
- Partitioned physical memory: Today’s AMD Fusion processor chips fall into this category, as do low-end systems (such as laptops) using a motherboard-integrated GPU. There is one physical memory, but some fraction of that memory is dedicated to the APU or GPU.
Logically, it looks like a separate physical memory, except copying data between the two spaces is faster, because both spaces are on the same memory bus, and moving data to the APU or GPU is slower, because the CPU main memory doesn’t have the bandwidth of, say, a good graphics memory system typical for a GPU.
- Separate physical memory, one virtual memory: The Convey system fits this category. The coprocessor has its own memory controllers to its own memory subsystem, but both the CPU and coprocessor physical memories are mapped to a single virtual address space. It becomes part of application optimization to make sure to allocate data in the coprocessor physical memory if, for instance, it will be mostly accessed by coprocessor instructions.

Programming Model Goals

Given the similarities among system designs, one might think it should be obvious how to come up with a programming strategy that would preserve portability and performance across all these devices. What we want is a method that allows the application writer to write a program once, and let the compiler or runtime optimize for each target. Is that too much to ask?

Let me reflect momentarily on the two gold standards in this arena. The first is high level programming languages in general. After 50 years of programming using Algol, Pascal, Fortran, C, C++, Java, and many, many other languages, we tend to forget how wonderful and important it is that we can write a single program, compile it, run it, and get the same results on any number of different processors and operating systems.

Second, let’s look back at vector computing. The HPC space 30 years ago was dominated by vector computers: Cray, NEC, Fujitsu, IBM, Convex, and more. Building on the compiler work from very early supercomputers (such as TI’s ASC) and some very good academic research (at Illinois, Rice, and others), these vendors produced vectorizing compilers that could generate pretty good vector code from loops in your program.
It was not the only way to get vector performance from your code. You could always use a vector library or add intrinsics to your program. But vectorizing compilers were the dominant vector programming method. Vectorizing compilers were successful for three very important reasons.

The compilers not only attempted to vectorize your loops, they gave some very specific user feedback when they failed. How often does your optimizing compiler tell you when it failed to optimize your code? Never, most likely. In fact, you would find it annoying if it printed a message every time it couldn’t float a computation out of a loop, say. However, the difference between vector performance and non-vector performance was a factor of 5 or 10, or more in some cases. That performance should not be left on the table. If a programmer is depending on the compiler to generate vector code, he or she really wants to know if the compiler was successful. And the feedback could be quite specific: not just “failed to vectorize the loop at line 25,” but “an unknown variable N in the second subscript of the array reference to A on line 27 prevents vectorization.” So the first reason vectorizing compilers were successful is that the programmer used this feedback to rewrite that portion of the code and eventually reach vector performance.

The second reason is that this feedback slowly trained the programmer how to write his or her next program so it would vectorize automatically.

The third reason, and ultimately the most important, is that the style of programming that vectorizing compilers promoted gave good performance across a wide range of machines. Programmers learned to make the inner loops be stride-1, avoid conditionals, inline functions (or sink loops into the functions), pull I/O out of the inner loops, and identify data dependences that prevent vectorization. Programs written in this style would then vectorize with the Cray compiler, as well as the IBM, NEC, Fujitsu, Convex, and others.
Without ever collaborating on a specification, the vendors trained their users on how to write performance-portable vector programs.

I claim that what we want is a programming strategy, model or language that will promote programming in a style that will give good performance across a wide range of heterogeneous systems. It doesn’t necessarily have to be a new language. As with the vectorizing compilers lesson, if we can create a set of coding rules that will allow compilers and tools to exploit the parallelism effectively, that’s probably good enough. But there are several factors that make this hard.

Why It’s Hard

Parallel programming is hard enough to begin with. Now we have to deal not only with the parallelism we’re accustomed to on our multicore CPUs, we have to deal with an attached asynchronous device, as well as with the parallelism on that device. To get high performance parallel code, you have to optimize locality and synchronization as well. For these devices, locality optimization mostly boils down to managing the distinct host and device memory spaces. Someone has to decide what data gets allocated in which memory, and whether or when to move that data to the other memory.

Many systems will have multiple coprocessors at each node, each with its own memory. Users are going to want to exploit all the resources available, and that may mean managing not just one coprocessor, but two or more. Suddenly you have not just a data movement problem, but a data distribution problem, and perhaps load balancing issues, too. These are issues that were addressed partly by High Performance Fortran in the 1990s. Some data has to be distributed among the memories, some has to be replicated, and some has to be shared or partially shared. And remember that these coprocessors are typically connected to some pretty high performance CPUs.
Let’s not just leave the CPU idle while the coprocessor is busy; let’s distribute the work (and data) across the CPU cores as well as the coprocessor(s).

Most of the complexity comes from the heterogeneity itself. The coprocessor has a different instruction set, different performance characteristics, is optimized for different algorithms and applications than the host, and is optimized to work from its own memory. The goal of HPC is the HP part, the high performance part, and we need to be able to take advantage of the features of the coprocessor to get this performance.

The challenge of designing a higher level programming model or strategy is deciding what to virtualize and what to expose. Successful virtualization preserves productivity, performance, and portability. Vectorizing compilers were successful at virtualizing the vector instruction set; although they exposed the presence of vector instructions, they virtualized the details of the instruction set, such as the vector register length, instruction spellings, and so on. Vectorizing compilers are still in use today, generating code for the x86 SIMD (SSE and AVX) and Power AltiVec instructions.

There are other ways to generate these instructions, such as the Intel SSE intrinsics, but these can hardly be said to preserve productivity, and certainly do not promote portability. You might also use a set of vector library routines, such as the BLAS library, or C++ vector operations in the STL, but these don’t compose well, and can easily become memory-bandwidth bound. Another alternative is vector or array extensions to the language, such as Fortran array assignments or Intel’s Array Notation for C. However, while these allow the compiler to more easily generate vector code, it doesn’t mean it’s better code than an explicit loop that gets vectorized.
For example, compiling and vectorizing the following loop for SSE:

    do i = 1, n
      x = a(i) + b(i)
      c(i) = exp(x) + 1/x
    enddo

can be done by loading four elements of a and b into SSE registers, adding them, dividing into one, calling a vector exp routine, adding that result to the divide result, and storing those four elements into c. The intermediate result x never gets stored. The equivalent array code would be:

    x(:) = a(:) + b(:)
    c(:) = exp(x(:)) + 1/x(:)

The array assignments simplify the analysis to determine whether vector code can be generated, but the compiler has to do effectively the same amount of work to generate efficient code. In Fortran and Intel C, these array assignments are defined as computing the whole right hand side, then doing all the stores. The effect is as if the code were written as:

    forall(i=1:n) temp(i) = a(i) + b(i)
    forall(i=1:n) x(i) = temp(i)
    forall(i=1:n) temp(i) = exp(x(i)) + 1/x(i)
    forall(i=1:n) c(i) = temp(i)

where the temp array is allocated and managed by the compiler. The first analysis is to determine whether the temp array can be discarded. In some cases it cannot (such as a(2:n-1) = a(1:n-2) + a(3:n)), and the analysis to determine this is effectively the same dependence analysis as to vectorize the loop. If successful, the compiler is left with:

    forall(i=1:n) x(i) = a(i) + b(i)
    forall(i=1:n) c(i) = exp(x(i)) + 1/x(i)

Then the compiler needs to determine whether it can fuse these two loops. The advantage of fusing is avoiding the reload of the SSE register holding the intermediate value x. This analysis is essentially the same as for discarding the temp array above. Assuming that’s successful, we get:

    forall(i=1:n)
      x(i) = a(i) + b(i)
      c(i) = exp(x(i)) + 1/x(i)
    endforall

Now we want the compiler to determine whether the array x is needed at all. In the original loop, x was a scalar, and compiler lifetime analysis for scalars is precise. For arrays, it’s much more difficult, and sometimes intractable.
At best, the code generated from the array assignments is as good as that from the vectorized loop. More likely, it will generate more memory accesses, and for large datasets, more cache misses.

At the minimum, the programming model should virtualize those aspects that are different among target systems. For instance, high level languages virtualize instruction sets and registers. The compilers virtualize instruction-level parallelism by scheduling instructions automatically. Operating systems virtualize the fixed physical memory size of the system with virtual memory, and virtualize the effect of multiple users by time slicing. The model has to strike a balance between virtualizing a feature and perhaps losing performance or losing the ability to tune for that feature, versus exposing that feature and improving potential performance at the cost of productivity.

For instance, one way to manage separate physical memories is to essentially emulate shared memory by utilizing virtual memory hardware, using demand paging to move data as needed from one physical memory to another. Where it works, it completely hides the separate memories, but it also makes it hard to optimize your program for separate memories, in part because the memory sharing is done at a hardware-defined granularity (virtual memory page) instead of an application-defined granularity.

Grab your Machete and Pith Helmet

If parallel programming is hard, heterogeneous programming is that hard, squared. Defining and building a productive, performance-portable heterogeneous programming system is hard. There are several current programming strategies that attempt to solve this problem, including OpenCL, Microsoft C++ AMP, Google Renderscript, Intel’s proposed offload directives (see slide 24), and the recent OpenACC specification. We might also learn something from embedded system programming, which has had to deal with heterogeneous systems for many years.
My next article will whack through the underbrush to expose each of these programming strategies in turn, presenting advantages and disadvantages relative to the goal.

About the Author

Michael Wolfe has developed compilers for over 30 years in both academia and industry, and is now a senior compiler engineer at The Portland Group, Inc. (www.pgroup.com), a wholly-owned subsidiary of STMicroelectronics, Inc. The opinions stated here are those of the author, and do not represent opinions of The Portland Group, Inc. or STMicroelectronics, Inc.
Today is World Information Society Day, which aims to raise global awareness of societal changes brought about by the Internet and new technologies. In relation to this, Kaspersky Lab warns of the dangers posed by cybercriminals and offers tips for a secure and pollution-free digital life.

Using social networks, banking and shopping online have become part of our everyday lives. A generation of digital natives is living online – and offline – often without being aware of the dangers of the Internet. More than 400 million people worldwide are now on Facebook (1), and more than half the population of Europe is part of the world’s biggest social network. (2) Children and teenagers are in particular danger of exposing personal data, such as private pictures, to the general public, and of revealing private information in social networks. On top of this, the security experts at Kaspersky Lab process an average of 30,000 new malicious and potentially undesirable programs every day – and the number is growing.

On the occasion of World Information Society Day, Kaspersky Lab provides some simple tips for a secure digital life:

- Keep Windows and third-party applications up-to-date.
- Back up your data regularly to a CD, DVD, or external USB drive.
- Don’t respond to email or social media messages if you don’t know the sender.
- Don’t click on email attachments or objects sent via social networks if you don’t know the sender.
- Don’t click on links in email or IM (instant messaging) messages. Type addresses directly into your web browser.
- Don’t give out personal information in response to an email, even if the email looks official.
- Only shop or bank on secure sites. These URLs start with ‘https://’ and you’ll find a gold padlock in the lower right-hand corner of your browser.
- Use a different password for each web site or service you use and make sure it consists of more than 5 characters and contains numerals, special characters and upper-case and lower-case letters.
Don’t recycle passwords (e.g. ‘jackie1’, ‘jackie2’) or make them easy to guess (e.g. mum’s name, pet’s name). Don’t tell anyone your passwords.
- Make sure you share your child’s online experience and install parental control software to block inappropriate content.
- Install Internet security software and keep it updated.

While up-to-date protective software is essential for every Internet user, it is particularly important for those who spend a lot of time interacting with others via the Internet. Failing to use this type of software enables malware to take up residence on your computer, where it can intercept your login information for social networks and other services.

Kaspersky Lab protects Internet users against all kinds of cyberthreats through its different security solutions, such as Kaspersky® Anti-Virus and Kaspersky® Internet Security. The new Kaspersky PURE solution offers additional features, like a password manager and data encryption, and frees life from digital pollution. More information on Kaspersky Lab products is available at www.kaspersky.co.uk

(1) http://www.facebook.com/press/info.php?statistics, May 11, 2010
(2) Forrester Research, Consumer Technographics, April 2010
Potential system changes are various and can range from modifications as a result of testing to new feature requirements included in change requests. When a change is made to part of an application, it is usually tested to ensure the fix works. Though this is a good practice, it is not always sufficient, as it cannot guarantee the quality of the system. Every change to an existing system has a high probability of adversely affecting other functions in that system, causing a ripple effect of defects. A primary goal of testing must be to ensure existing business objectives continue to be met after system updates.

Regression Testing Defined

Regression testing aims to selectively test parts of a system to ensure that additions, modifications, and deletions made to the application have not unintentionally affected previously working functionality. To that end, regression tests include test cases from original unit, functional, and system testing phases that confirmed system functionality.
A new study from the London School of Economics and Political Science shows that cloud computing has a clear role in stimulating the economy and creating jobs. The Microsoft-sponsored study looks at the impact of cloud computing on the aerospace and mobile services sectors in the United States, the United Kingdom, Germany and Italy from 2010-2014.

The LSE study finds that cloud computing contributes to job creation in all markets, to varying degrees. Primary jobs are created in the process of building and staffing datacenters to host the cloud platforms and services, while a secondary positive economic impact is experienced by the companies that adopt cloud services. According to the report, cloud adoption carries little risk of unemployment; worker productivity gains are usually redirected into more profitable business activities.

The authors state: “This study shows how the microeconomic characteristics of cloud computing create a dynamic effect that will bring about changes that, when effectively implemented, will improve firm productivity, enhance new business development, and, while initially creating employment primarily in cloud services businesses and data centres, shift the character of work in many firms in the coming years.”

Out of the four countries that were analyzed, the US was the strongest with regard to job creation, with cloud-related smartphone services jobs expected to grow from 19,500 in 2010 to 54,500 in 2014. In the UK, equivalent jobs are set to grow from 900 to 4,040 during the same time period. The study’s authors cited lower electricity costs and less restrictive labor regulation in the US as possible factors for the discrepancy. In fact, the report found energy cost was second only to geopolitical stability when it comes to selecting where to build a datacenter.

The full 64-page report, “Modelling the Cloud,” was authored by Jonathan Liebenau, Patrik Karrberg, Alexander Grous and Daniel Castro.
Minimizing the Effects of Malware on a Network

Tips on surviving sophisticated malware infections.

In spite of the fact that malware can’t be completely blocked or eliminated, IDG reports that you can manage your PCs, mobile devices, and networks to function even if infected. A malware infection doesn’t necessarily mean lost data, unavailable systems, or other problems, and companies can and do function despite these intrusions. This article offers some approaches that can help minimize the effect of malware on a network.

“The current batch of malware we’re seeing is very sophisticated and well written, and it hides itself well and avoids detection well,” says Fred Rica, principal in the information security advisory practice at the PricewaterhouseCoopers consulting firm.
Tiny Spectrometer Tells You Exactly What You're Eating

For all those who have wondered about the ingredients in their lunchtime burrito, Consumer Physics is gearing up to release a handheld device designed to provide the answer. Dubbed "SCiO," the device is a tiny spectrometer that can scan food, medicines and plants to determine their molecular composition.

When used to scan foods such as cheeses, fruits, vegetables, sauces, salad dressings and cooking oils, SCiO delivers data describing nutrient values -- calories, fats, carbohydrates and proteins -- as well as produce quality, ripeness level and spoilage analysis. The device also can identify and authenticate medication in real time by cross-checking a pill's molecular makeup against a pharmaceutical database. SCiO also can analyze moisture levels in plants and tell users when to water them. Real-time results are delivered to an accompanying mobile application via Bluetooth LE.

"Smartphones give us instant answers to questions like where to have dinner, what movie to see, and how to get from point A to point B, but when it comes to learning about what we interact with on a daily basis, we're left in the dark," said Dror Sharon, Consumer Physics' CEO. "We designed SCiO to empower explorers everywhere with new knowledge and to encourage them to join our mission of mapping the physical world."

Powering SCiO is a low-cost, mass-produced version of near-infrared spectroscopy. Enabling its capabilities is the fact that light shining on any sample excites the sample's molecules and makes them vibrate in a unique way. That wavelength-dependent light absorption creates optical signatures based on an object's chemical composition. When SCiO collects the light reflected from a particular sample, it breaks it down into spectral components for analysis.
Those components are then sent to the cloud via Bluetooth, and SCiO translates the results within a matter of seconds, delivering relevant information about the sample's molecular makeup to the user's smartphone. "SCiO began as a conversation over three years ago," Sharon told TechNewsWorld. "My cofounder Damian and I had been discussing the possibility of starting a new venture together and creating a handheld device that could tell you more about the physical world around you." The two focused initially on demonstrating that they could build a small, low-cost optical spectrometer and that it could handle meaningful applications, Sharon said. "Since then, we've gone through several generations of SCiO prototypes and have started to ramp up our production line." Cosmetics, clothes, flora, soil, jewels and precious stones, leather, rubber, oils, plastics, and even human tissue and bodily fluids are all among the materials SCiO can analyze.

Huge With Consumers

"This is a great topic and some really cool technology," Jim McGregor, founder and principal analyst with Tirias Research, told TechNewsWorld. "I think that this technology will eventually be huge with consumers, and something used in grocery stores and restaurants as an added service or differentiator." While the initial implementation is targeted at dietary information, "I think later versions will likely provide more information, such as the presence of nitrates, chemical food colorings, genetically modified foods and other things that are known carcinogens and the source of other medical problems, as well as common allergens such as wheat, nuts and corn," McGregor added. "The use of chemicals and genetically modified ingredients has led to the increase of medical problems and food allergies over the past few generations, especially in the U.S.," he explained. 
"I believe this technology will help those that have food limitations, push for healthier ingredients in foods, and hopefully improve the diets and health of consumers overall," McGregor said. "It will take at least a generation of consumers, but technology like this will help lead the way." Developers, Developers, Developers SCiO's $250 price is "near the high end of the consumer electronics price range for a peripheral device," Roger Kay, principal analyst at Endpoint Technologies Associates, told TechNewsWorld. "Users will have to have a compelling reason to shell out for it." It could be useful in military applications, he suggested. In fact, "I wouldn't be surprised if this is a consumer adaptation of existing military technology," Kay said. It will be up to developers, however, to embrace the platform to popularize it, he added. A small, handheld scanner like SCiO eventually could be used in any area that requires quick chemical analysis, Enderle Group analyst Rob Enderle noted. "Say if your kids or pet eat or drink some unknown substance and you wanted to know whether you needed to rush them to get medical help," he said. "I expect it will change how people look at what they eat, because folks will suddenly discover that some of the things they consume are more unhealthy than they thought." One limitation of the technology, however, is that "it measures surface material -- not what is out of range of the laser -- so liquids that are mixed will be accurate, but measuring the outside of something with a soft center won't give you what is in the center unless you break the object open," Enderle pointed out. In any case, the technology could become pervasive if enough discoveries are made, Enderle predicted. "Let's say someone discovers carcinogens in bottled water, for instance, and some people get cancer as a result," he said. "The fear could drive the technology into broad use."
http://www.linuxinsider.com/story/med-tech/80688.html
In the maritime business, Automated Identification Systems (AIS) are a big deal. They supplement information received by the marine radar system, are used for a wide variety of things – including ship-to-ship communication – and are relied upon each and every day. Unfortunately, the AIS can also be easily hacked in order to do some real damage, claims a group of researchers presenting at the Hack In The Box Conference currently taking place in Kuala Lumpur. Dr. Marco Balduzzi during the presentation. Automated Identification Systems (AIS) transceivers can currently be found on over 400,000 ships sailing the high seas, and it is estimated that by 2014, that number will reach a million. The installation is mandatory for all passenger ships and commercial (non-fishing) ships over 300 metric tonnes, and it tracks them automatically by electronically exchanging data with other ships, AIS base stations, and satellites. AIS hasn’t replaced the marine radar system – it has been added to it to enhance marine traffic safety. The system was first mandated for some 100,000 vessels in 2002. In 2006, the AIS standards committee published the Class B type AIS transceiver specification, which enabled the creation of a lower-cost AIS device and triggered widespread use. The data exchanged includes everything that has to do with the position of the ship, the cargo it carries, information on nearby ships, etc. The system is used by ships to communicate with other ships, plot and follow their course, avoid collisions with other vessels, reefs and floating debris that could damage them, and to aid in accident investigation and in search and rescue operations. The information is also sent to upstream providers such as Maritimetraffic.com, Vesselfinder.com or Aishub.net, where anyone can check a specific vessel’s position and additional information about it. 
The upstream data sending can be effected via email, TCP/UDP, commercial software, smartphone apps, and radio-frequency gateways, and is sent via different types of messages (27 types in all). For example, message 18 delivers the position report (longitude, latitude, navigation status, and so on) and is sent every 30 seconds to 3 minutes depending on the speed of the ship, and message 24 provides the static report (type of ship, name, dimension, cargo type, etc.) and is sent every 6 minutes. Message type 8 is a binary broadcast message that can include any type of data; type 22 is for channel management (and only port authorities are allowed to use it). Type 14 is a safety-related broadcast message (it alerts of emergencies such as crew or passengers falling overboard). But, as Dr. Marco Balduzzi and Kyle Wilhoit of Trend Micro and independent security researcher Alessandro Pasta showed, AIS is vulnerable both at the implementation and at the protocol level. The researchers detailed a couple of different attack vectors and divided the exploitation of threats into software and radio frequency (RF) attacks. The root of all problems is the same: there is no authentication and no integrity checks, so the apparent validation of spoofed and specially crafted packets is a huge problem. The software attacks demonstrated to the fully packed conference hall included: There are a number of online AIS services that track vessel positions and locations around the world – the aforementioned Marine Traffic, Vessel Finder and AIS Hub are just some of them. These services are receiving AIS data and use maps to provide visual plotting that showcases global maritime traffic. AIS services track vessels, but don’t do any checks on who is sending AIS data. This data usually includes vessel identification, location details, course plotting and other data specific to the vessel in question. 
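The reporting cadence above lends itself to a simple lookup. A minimal sketch, assuming a 2-knot cutoff between the slow and fast reporting rates (the article only gives the overall 30-second-to-3-minute range, so the cutoff here is illustrative):

```python
def position_report_interval(speed_knots):
    """Illustrative interval (seconds) between Class B position reports
    (message 18): slow or anchored ships report every 3 minutes, moving
    ships every 30 seconds. The 2-knot cutoff is an assumption."""
    return 180 if speed_knots < 2 else 30

# A ship at anchor reports far less often than one underway.
print(position_report_interval(0), position_report_interval(12))  # -> 180 30
```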
With this in mind, the attackers can send specially crafted messages that could mimic the location of an existing vessel, or even create a fake vessel and place it on its own virtual course. This can cause a bit of panic, especially because you can fake a whole fleet of, let’s say, warships sailing on course toward an enemy country or showing up off its coast. This variation of the spoofing attack on AIS could be used to download the data of an existing ship, changing some of the parameters and submitting it to the AIS service. The result is the virtual placement of a vessel at a completely different position, or the plotting of a bizarre route that could include some “land sailing”. All of the packets above can be saved and stored locally and then replayed at any time. By using the script and a scheduling function on a local system, the attacker can carefully replay spoofed messages in specific timeframes. The mentioned scenarios were just an introduction to what you can do once you have reverse engineered AIS and know how to modify the data and reuse it. The most interesting part of the research involves attacking vessels over RF. The researchers coded an AIS frame builder, a C module which encodes payloads, computes the CRC and does bit operations. The output of the program is an AIS frame which is transferred from the digital into the radio frequency domain. Alessandro Pasta demonstrating their setup. The hacks were crafted and tested in a lab that they built, which consists of GNURadio, a transceiver service, bi- and omnidirectional antennas, SDR (software defined radio), a power amplifier, a GPS antenna and a power LED (to mimic a real-life alert). The attacks include: Professional alpinists use avalanche safety beacons to alert rescuers after being buried by an avalanche. In the world of maritime safety, there are similar types of devices that send AIS packets as soon as someone drops into the water. 
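The researchers’ frame builder is a C module, but the checksum arithmetic at the NMEA 0183 transport layer that carries AIS data (AIVDM sentences) is easy to sketch: it is simply the XOR of every character between the leading '!' and the '*'. A minimal Python sketch; the payload below is a placeholder, not a valid encoded AIS message:

```python
def nmea_checksum(body):
    """NMEA 0183 checksum: XOR of every character between '!' (or '$')
    and '*', rendered as two uppercase hex digits."""
    cs = 0
    for ch in body:
        cs ^= ord(ch)
    return f"{cs:02X}"

body = "AIVDM,1,1,,A,PLACEHOLDER,0"  # placeholder, not a real AIS payload
print(f"!{body}*{nmea_checksum(body)}")
```

Note that this checksum only detects transmission errors; as the researchers point out, nothing in it authenticates the sender, which is the root of the spoofing problem.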
This type of request can also be spoofed, which was shown through the Python script called AiS_TX.py, which is actually an AIS transmitter. Because of maritime laws and best practices, everyone needs to address this type of alert, so it is obvious how an attacker can wreak havoc in this way. This is a damaging attack that can cause some serious issues for the safety of the targeted vessel. Every vessel is tuned in on a range of frequencies where it can interact with port authorities, as well as other vessels. There is a specific set of instructions that only port authorities can issue which makes the vessel’s AIS transponder work on a specific frequency. The researchers showed that a malicious attacker can spoof this type of “command” and practically switch the target’s frequency to another one which will be blank. This will cause the vessel to stop transmitting and receiving messages on the right frequency, effectively making it “disappear” and unable to communicate (essentially a denial of service attack). If performed by, let’s say, Somali pirates, it can make the ship “vanish” for the maritime authorities as soon as it enters Somali waters, but leave it visible to the pirates who carried out the attack. In our discussion with Balduzzi and Pasta after their talk, they said that this is a big problem, especially because this frequency cannot be manually changed by the captain of the vessel.

Fake CPA alerting

As the attackers can spoof any part of the transmission, they are able to create a fake CPA (closest point of approach) alert. In real life this means that they would place another vessel near an actual one and plot it on the same course. This will trigger a collision warning alert on the target vessel. In some cases this can even cause the vessel’s software to recalculate a course to avoid collision, physically allowing an attacker to nudge a boat in a certain direction. 
Arbitrary weather forecast

By using a type 8 binary broadcast message of the AIS application layer, the attackers can impersonate actual issuers of weather forecasts, such as the port authority, and arbitrarily change the weather forecast delivered to ships. Help Net Security’s Mirko Zorz during a discussion with Dr. Marco Balduzzi and Alessandro Pasta. The researchers have been working on this for the last six months, and have banded together because of their respective expertise (Wilhoit on the software side, Pasta on electronics and telecommunication). They have performed other types of successful attacks, but haven’t had the chance to demonstrate them because there was no time. “The attack surface is big. We can generate any kind of message. All the attacks we have shown here except the weather forecast attack have been successful,” they pointed out. Countermeasures suggested by the researchers include the addition of authentication in order to ensure that the transmitter is the owner of the vessel, creating a way to check AIS messages for tampering, making it impossible to enact replay attacks by adding time checking, and adding a validity check for the data contained in the messages (e.g. geographical information). The researchers have made sure that their experiments didn’t interfere with the existing systems. Most of them were performed in a lab environment, especially messages with safety implications. Also, they have contacted the online providers and authorities and explained the issue. The former responded and have said they would try to do something about it, and among the latter, only the ITU Radiocommunication Sector (ITU-R) – the developers of the AIS standard and the protocol specification – has responded by acknowledging the problem. “Are they doing something about it, or did they just say thanks for letting us know?” we asked them. “It’s a complex matter. 
This organisation is huge, and they often work within workgroups, so there are a lot of partners involved in the decision making. They cannot do it by themselves. They were grateful to us for pointing out the problem, for how can you do something about a problem if you don’t know there is one to begin with?” Balduzzi told us. “They did help our investigation by giving us links to more information about the protocols to do more research, and they encouraged us to continue in that direction.” The International Association of Lighthouse Authorities (IALA), IMO (International Maritime Organization) and the US Coast Guard are yet to comment on the findings. The researchers said that they don’t have much hope that their research will result in prompt changes. “Perhaps the media attention will help,” said Balduzzi. “But judging by the response received by Hugo Teso, who last year presented his research on airplane hijacking by interfering with its communication systems, the issue will not be addressed or fixed soon, and we don’t expect to get a lot of feedback from the governing bodies.” On the other hand, they point out that their attacks are much more feasible than Teso’s. “The difference between the airplane attacks and these ones is that the former are more difficult to perform, and therefore less likely to be performed by attackers in the wild.” Also, they managed to test some of these attacks outside of a lab, so they are sure to work with systems already online. The good news is that similar attacks haven’t yet been spotted being performed by malicious individuals. But, according to Balduzzi, the danger is big and real. “It’s actually possible to do it by investing very little. For our experiment, we bought an SDR radio, which costs some 500 euros, but it’s possible to do it by using a VHF radio that costs around 100 euros – a price that makes the technology accessible to almost anyone (including pirates). 
The threat is very real, and that’s why we talked upfront with the ITU,” they concluded. Authors: Zeljka Zorz, Mirko Zorz, Berislav Kucan.
https://www.helpnetsecurity.com/2013/10/16/digital-ship-pirates-researchers-crack-vessel-tracking-system/
Supervisory control and data acquisition (SCADA) networks contain computers and applications that perform key functions in providing essential services and commodities (e.g., electricity, natural gas, gasoline, water, waste treatment, transportation) to all Americans. As such, they are part of the nation’s critical infrastructure and require protection from a variety of threats that exist in cyber space today. By allowing the collection and analysis of data and control of equipment such as pumps and valves from remote locations, SCADA networks provide great efficiency and are widely used. However, they also present a security risk. SCADA networks were initially designed to maximize functionality, with little attention paid to security. As a result, performance, reliability, flexibility and safety of distributed control/SCADA systems are robust, while the security of these systems is often weak. This makes some SCADA networks potentially vulnerable to disruption of service, process redirection, or manipulation of operational data that could result in public safety concerns and/or serious disruptions to the nation’s critical infrastructure. Action is required by all organizations, government or commercial, to secure their SCADA networks as part of the effort to adequately protect the nation’s critical infrastructure. The President’s Critical Infrastructure Protection Board, and the Department of Energy, have developed the steps outlined here to help any organization improve the security of its SCADA networks. These steps are not meant to be prescriptive or all-inclusive. However, they do address essential actions to be taken to improve the protection of SCADA networks. The steps are divided into two categories: specific actions to improve implementation, and actions to establish essential underlying management processes and policies. Download the DOE's Twenty-One Steps to Improve SCADA Security here:
http://www.infosecisland.com/documentview/21535-DOE-Twenty-One-Steps-to-Improve-SCADA-Security.html
PDAs (personal digital assistants), pocket-sized diaries that are becoming increasingly more powerful, can represent a serious threat to corporate security. As PDAs become smaller and their capabilities increase, these devices are becoming more popular in corporate environments, especially among managerial staff. This, combined with the ease with which data can be transmitted from a computer to these devices, means that the amount of sensitive information stored on them has also increased significantly. And as PDAs are now smaller than ever, the risk of them (and therefore critical data) being lost or stolen is now greater than ever. Sometimes the information they store may not be very important, as they could just contain a few games or the like, but in most cases, the information stored is highly sensitive. PDAs are often used to store credit card numbers, computer passwords, mail account data and even confidential financial or commercial information. For this reason, a PDA in the hands of a malicious user could become a key to the corporate network. Another important factor is the use of PDAs by malicious users as attack tools. As the PDA can be converted into just another computer with network access, an attacker could add the software needed to carry out attacks and at the same time, would also have the space needed to save the information obtained. An attacker could then access a whole network in a matter of minutes and obtain vast amounts of data without anyone realizing.
https://www.helpnetsecurity.com/2002/11/04/the-danger-of-pdas/
Google Street View Shows Off the Spectacular Galapagos Islands

The Google Street View cameras captured the images in May 2013, and now they are on display for online viewers.

Google's Street View cameras captured amazing 360-degree color images of the lush Galapagos Islands earlier in 2013, and now the photographs are being shared with online viewers as part of the Street View project. "Now you can visit the islands from anywhere you may be, and see many of the animals that Darwin experienced on his historic and groundbreaking journey in 1835," wrote Raleigh Seamster, project lead for Google Earth Outreach, in a Sept. 12 post on the Google Lat Long Blog. "The extensive Street View imagery of the Galapagos Islands will not only allow armchair travelers to experience the islands from their desktop computer, but it will also play an instrumental role in the ongoing research of the environment, conservation, animal migration patterns, and the impact of tourism on the islands." The photographs, which were collected in May using Google's specialized Trekker cameras, were captured in partnership with the Directorate of the Galapagos National Park and Charles Darwin Foundation, according to Seamster. "One way in which the Charles Darwin Foundation plans to use the Street View imagery for science is by allowing the public to help identify plants and animals observed when navigating through the imagery," wrote Seamster. "Together, Charles Darwin Foundation and iNaturalist—a web site and community for citizen scientists—have developed a new project they are excited to launch today: Darwin for a Day." Darwin's discovery of the Galapagos Islands came in 1835 during a 10-day expedition, which is being marked this week with its 178th anniversary, Seamster wrote in a related Sept. 12 post of the Google Official Blog. "This volcanic archipelago is one of the most biodiverse and unique places on the planet, with species that have remarkably adapted to their environment. 
Through observing the animals, Darwin made key insights that informed his theory of evolution." The Street View crew used its specialized photographic equipment to capture images of the area that was first explored by Darwin and his crew. "Darwin may have first sighted San Cristobal Island from the water, perhaps near where we sailed with the Trekker strapped to a boat in order to observe the craggy shoreline and the magnificent Frigatebirds that the rocky landscape shelters," Seamster wrote. "After landing on San Cristobal, we made our way to Galapaguera Cerro Colorado, a breeding center that helps to restore the population of the island tortoises, seriously threatened by invasive species. Wearing the Trekker, we walked by giant tortoises munching on leafy stalks and recently hatched baby tortoises." In August, Google's Street View program bolstered its recently created Views Gallery with new facts, notes, details and behind-the-scenes stories about some of the spectacular locations featured in the Street View collection. The Views gallery was launched in July 2013 as a place where online visitors could add their own gorgeous photos to the amazing maps that are constantly being created with Google Maps. Google is always busy expanding its 6-year-old Street View collection of images from the world's most amazing places. Also in July, Google Street View cameras captured fun images inside the Harry Potter Studio in London to give viewers an inside tour of the world of the popular book and movie character. The images cover a portion of the inner sanctum of the Warner Bros. Studio Tour, where the sets and scenery from the beloved Harry Potter films are on display for visitors in real life, from the inside of The Great Hall to the oft-seen cobblestones of Diagon Alley, where Harry and his friends began their adventures. 
Now, instead of jetting off to London, Harry Potter fans can explore part of that Studio Tour—the infamous Diagon Alley marketplace—using the 360-degree views and full-color imagery provided by Street View for their virtual tour.
http://www.eweek.com/cloud/google-street-view-shows-off-the-spectacular-galapagos-islands.html
(Translated from the original Italian) In the last decade, we have observed a rise in cyber attacks against military and private business, be they for cyber warfare or cyber espionage, which have demonstrated how dangerous cyber offensives can be. The U.S.'s leading cyber warrior has estimated that private businesses are losing hundreds of billions of dollars to cyber espionage, and the expense to prevent these attacks is increasing at a rate that makes companies less competitive. The main problem is how to address these cyber threats with an appropriate strategy and recruit capable experts to the cause. Gen. Keith Alexander, the director of the secretive National Security Agency and head of the Pentagon's Cyber Command, recently declared that illicit cyberspace activities essentially amounted to "the greatest transfer of wealth in history." The general alerted the U.S. Government to this imminent threat to national security in a recent public address in which he said that U.S. companies lose $250 billion to intellectual property theft every year. Alexander referred to data from Symantec and McAfee reports that show an alarming scenario: $114 billion was lost due to cybercrime activities alone, and the number could be as high as $388 billion if the cost in time and lost business opportunities is included in the figure. In particular, McAfee proposed that $1 trillion is spent globally in remediation efforts. What are the main cyber threats alarming governments? Malware and botnets represent the greatest challenge to security, and according to McAfee, 75 million unique pieces of malware have been detected in their database, an amazing figure if we consider the potential damage they can bring. With regard to botnets, we are witnessing an evolution of the technology applied, such as the emergence of Peer-to-Peer based botnets and the mechanisms used in their diffusion. 
Many concerns are also related to the business model known as malware-as-a-service or C2C, adopted by cybercriminals, which makes possible the use of botnets by those not technically inclined. In addition to cybercrime, we must take into consideration the increasing adoption of cyber offensives by foreign governments as well as the hacktivist phenomenon. Both are cyber threats, both could compromise national security, and both could expose sensitive information. Of concern is the protection of critical infrastructure. According to a recent ICS-CERT report, the number of serious attacks increased from 9 in 2009 to more than 160 in 2011, and the trend demonstrates consistent growth. How should governments prepare for cyberwar? One of the main phenomena we have witnessed is the recruitment of groups of hackers by governments to carry out offensive actions and to train personnel in the use of a deadly new weapon... the keyboard. Not with bullets but with bits we must now battle, and who better than a hacker to transfer their knowledge on the subject matter? Take, for example, the approach used by the U.S., which is trying to find a way to identify and employ the most promising young hackers. U.S. Naval Postgraduate School professor John Arquilla recommended in an interview with The Guardian that "most of these sorts of guys can't be vetted in the traditional way. We need a new institutional culture that allows us to reach out to them." Arquilla referred to about 100 "master hackers" around the world, mainly in Asia and Russia, that could potentially break into any network, no matter how secure. These forces represent the future of cyber armies all over the world. But the initiative proposed by Arquilla is not new; consider that China has already implemented initiatives to recruit hackers for cyber operations. The PLA programs provide an example of how the recruitment of young hackers into its cyber army is crucial. 
The Chinese military wrote in its official press: "The U.S. military is hastening to seize the commanding military heights on the Internet, and another Internet war is being pushed to a stormy peak... Their actions remind us that to protect the nation's Internet security, we must accelerate Internet defense development and accelerate steps to make a strong Internet army." The recruiting of hackers to train military experts is also being conducted by other countries. India, for example, announced it has engaged two cyber security experts who claimed to have cracked CERN's computer systems. The experts are now conducting training sessions for Indian government officials. Ethical hacker Chris Russo reported that on three occasions he found vulnerabilities in the IT systems of the European Organization for Nuclear Research (CERN), which has been involved in the discovery of the Higgs boson or 'God particle'. "The projections show there is going to be lot of manufacturing in the India. Lot of software will be involved in it. We are here to create awareness among people on probable vulnerabilities in the cyber system..." The training course was attended by officials from the Indian Cabinet, Air Force, C-DAC, National Technical Research Organization, the Income Tax Department, and Assam's AMTRON along with representatives from private sector entities like Aircel and Cisco. The experts are associates of the E2 Labs security firm, and E2 Labs Managing Director Zaki Qureshey stated: "[The] Next era of wars is not going to be of bomb, gun and shells. It will be led by cyber warfare where most attacks will be on nation's secret data. The idea to conduct such programs evolved after seeing increase in cyber attacks on India." I'm completely in agreement with this statement; more awareness and training are necessary as components of an effective cyber strategy. 
Assocham Senior Director Ajay Sharma said that the talent required to build a team of cyber security experts is mostly found in people with an average age below 30. I would like to conclude this post with a few simple reflections:
- We are observing an increase in the number of cyber attacks, and governments are less concerned with the consequences of a single big attack than with the damage from the small, continuous attacks that represent the real cyber threat today
- The nature of warfare has totally changed, and all governments agree on the necessity of developing cyber strategies that address the emerging threats and their consequences. For this reason governments are searching for and recruiting young hackers
- The new cyber threats affect not only national security but also represent serious threats to private business, which is why increased collaboration between military structures and private companies is absolutely necessary

We are headed in the right direction, but the path is going to be very long and fraught with difficulties. Cross-posted from Security Affairs
http://infosecisland.com/blogview/21928-On-Government-Strategies-to-Mitigate-Growing-Cyber-Threats.html
If every picture tells a story, what about every computer program? In the case of a certain 30-year-old one-line BASIC program, it sure does. In fact, it’s the subject of a new book, whose title is the entire program: "10 PRINT CHR$(205.5+RND(1)); : GOTO 10". The program, written for the Commodore VIC-20 and 64 computers, generates an endless scroll of one of two randomly chosen PETSCII characters (which look like forward slashes and backslashes), which ends up looking like a maze on the screen. Among other things, the book argues that computer code is “embedded with stories of a program’s making, its purpose, its assumptions” and that code itself “should be valued as text with machine and human meanings, something produced and operating within culture.” In short, that code is a type of cultural artifact worth studying and understanding as something that can tell us about the times in which it was written, the history that preceded it and the technology on which it’s based. I recently attended a talk in Boston by three of the book’s ten - yes, 10 - authors, Nick Montfort, Patsy Baudoin and Noah Vawter. In between doing live coding on a Commodore 64 (just the sound of the keyboard brought me back to 1982), they talked about how, by examining the code and the output, we can learn something about the history and assumptions that led to it. For example, during the talk, Montfort pointed out the following:
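As an aside, the one-liner translates readily into modern languages. Here is a minimal Python sketch of the same idea; since ASCII lacks the PETSCII diagonal-line characters 205 and 206, backslash and forward slash stand in for them:

```python
import random

def ten_print(length=320, seed=None):
    """Python analogue of: 10 PRINT CHR$(205.5+RND(1)); : GOTO 10
    RND(1) returns a value in (0, 1), so 205.5 + RND(1) rounds to
    PETSCII 205 or 206, the two diagonal-line characters. Here the
    ASCII characters '\\' and '/' stand in for them."""
    rng = random.Random(seed)
    return "".join(rng.choice("\\/") for _ in range(length))

# Print a few "rows" of the maze (the C64 screen was 40 columns wide).
for _ in range(5):
    print(ten_print(40))
```

Run it a few times and the maze-like pattern the book describes emerges immediately, which is part of the program's charm: one line of code, endless variation.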
Source: http://www.itworld.com/article/2716461/it-management/code-as-a-cultural-artifact.html
Too little water pressure from the shower or a burst water pipe can be a hassle. Sonoma County, Calif., is counting on analytics to prevent those inconveniences.

The Valley of the Moon Water District in Sonoma County began the early stages of deploying water pressure analytics technology last November through a partnership with IBM and the Sonoma County Water Agency (SCWA) developed more than a year ago. The water district — a government entity — purchases water from the water agency and is the water supplier to 23,000 customers in the Sonoma Valley.

Krishna Kumar, the water district's general manager, said the analytics technology was deployed to better manage pressure in the water distribution system. California requires that minimum water pressure for customers be 20 pounds per square inch (psi). In the area the district serves, water pressure ranges from 20 to 70 psi. Factors such as elevation and time of year affect water pressure. If a customer lives on a hill, more pressure is needed to pump water to that customer's home. During the summer months, customers need less water pressure than they do during the winter, Kumar said. "So from a customer perspective, you need to have an ideal pressure," he said. "… If you have high pressure within the system, there is a tendency to have leaks and bursts a little more than usual."

The district is trying to prevent leaks and bursts by using the analytics technology, Kumar said. Leaks and pipe breaks can be costly. IBM cited a World Bank estimate that worldwide costs from leaks total $14 billion annually.

The Valley of the Moon has 10 entry points where it receives water from the water agency. Each entry point has a different water pressure. The pressures are being adjusted with pressure-reducing valves at each of the 10 entry points. IBM's analytics recommend the valve settings.

IBM Smarter Water Program Director Michael Sullivan said in a statement that the company's technology will help the SCWA and the Valley of the Moon Water District more efficiently analyze data and predict problems. "The ability to track water at such a granular level helps SCWA and Valley of the Moon make informed decisions about how to manage — and conserve — water along its entire life cycle," Sullivan said.

Since beginning the project in November, Kumar said, water leaks in the water district have decreased by 30 percent. Kumar said it's still too early to determine conclusively if IBM's system is responsible for helping reduce the number of leaks.

In the past, the water district typically would monitor water pressure by using expensive hardware and sensors placed throughout the system. Workers would adjust the valves manually as needed. Now IBM monitors and tracks the water pressure, and uses optimization techniques to determine the correct valve settings. Eventually the water district will take over the monitoring.

The Sonoma County Water Agency contributed $100,000 for the project. The Valley of the Moon Water District didn't pay up-front costs to deploy the technology. The new system could help the water district save $100,000 annually, Kumar said.

Paul Gradolph, the water district's operations and maintenance supervisor, said that even though the water system is intricate, the deployment hasn't encountered major hurdles. But because demand for water changes depending on the time of year, the district will monitor the change carefully. "Our biggest challenge really is the transition from winter months to summer months," Gradolph said.
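The constraint the district is balancing can be stated in a few lines. The sketch below is purely illustrative — the names and logic are hypothetical stand-ins, not IBM's actual optimization model: pick a valve setting that meets demand plus elevation head while staying inside California's 20 psi legal minimum and the district's 70 psi upper end.

```python
MIN_PSI = 20.0  # California's required minimum service pressure
MAX_PSI = 70.0  # upper end of the district's observed range

def recommend_setting(demand_psi, elevation_head_psi=0.0):
    """Hypothetical valve-setting rule: meet demand plus any elevation
    head, clamped to the legal/operational pressure range (higher
    pressure means more leaks and bursts, per the article)."""
    target = demand_psi + elevation_head_psi
    return max(MIN_PSI, min(MAX_PSI, target))
```

A real system would also weigh leak risk, seasonal demand curves and the interactions among all 10 entry points, but the clamp above captures the basic trade-off the article describes.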
Source: http://www.govtech.com/technology/Water-District-Takes-Plunge-into-Pressure-Analytics.html
If you have been on a commercial airline, the phrase "The use of any portable electronic equipment while the aircraft is taxiing, during takeoff and climb, or during approach and landing" is as ubiquitous, though not quite as tedious, as "make sure your tray tables are in the secure locked upright position." But the electronic equipment restrictions may change.

The Federal Aviation Administration today said it was forming a government-industry group to study the current portable electronic device use policies that commercial aviation uses, to determine when these devices can be used safely during flight. The group, however, will not "consider the airborne use of cell phones for voice communications during flight."

The FAA says this group will look at a variety of issues, including the testing methods aircraft operators use to determine which new technologies passengers can safely use aboard aircraft and when they can use them. These devices include CD or cassette players, MP3 players, DVD players, digital audio players, electronic calculators, electronic cameras, Global Positioning Satellite monitors, hand-held electronic games, laptops, pagers, video camcorders and eReaders like the Kindle. The group will also look at the establishment of technological standards associated with the use of portable electronic equipment during any phase of flight. The group will then present its recommendations to the FAA.

Specifically, today's FAA regulations require an aircraft operator to determine that radio frequency interference from PEDs is not a flight safety risk before the operator authorizes them for use during certain phases of flight. The FAA wrote in a paper outlining the reexamination of current policies: "PEDs have changed considerably in the past few decades and output a wide variety of signals. Some devices do not transmit or receive any signals but generate low-power, radio frequency emissions. Other PEDs, such as e-readers, are only active in this manner during the short time that a page is being changed. Of greater concern are intentional transmissions from PEDs. Most portable electronic devices have internet connectivity that includes transmitting and receiving signals wirelessly using radio waves, such as Wi-Fi, Bluetooth and various other cellular technologies. These devices transmit high-powered emissions and can generate spurious signals at undesired frequencies, particularly if the device is damaged.

"Avionics equipment has also undergone significant changes. When the regulations were first established, communication and navigation systems were basic. In today's avionics, there are various systems (global positioning, traffic collision and avoidance, transponder, automatic flight guidance and control, and many other advanced avionics systems) that depend on signals transmitted from the ground, other aircraft, and satellites for proper operation. In addition, there are advanced flight management systems that use these avionics as a critical component for performing precision operational procedures. Many of these systems are also essential to realize the capabilities and operational improvements envisioned in the Next Generation airspace system. As such, harmful interference from PEDs cannot be tolerated."

"We're looking for information to help air carriers and operators decide if they can allow more widespread use of electronic devices in today's aircraft," said Acting FAA Administrator Michael Huerta. "We also want solid safety data to make sure tomorrow's aircraft designs are protected from interference."

The FAA said the first step it will take is gathering public input on current policies. A Request for Comments will appear in the Federal Register on August 28th. You can send your comments to PEDcomment@faa.gov.
The FAA said it is looking for comments in the following areas:
- Operational, safety and security challenges associated with expanding PED use.
- Data sharing between aircraft operators and manufacturers to facilitate authorization of PED use.
- Necessity of new certification regulations requiring new aircraft designs to tolerate PED emissions.
- Information-sharing for manufacturers who already have proven PED and aircraft system compatibility to provide information to operators for new and modified aircraft.
- Development of consumer electronics industry standards for aircraft-friendly PEDs, or aircraft-compatible modes of operation.
- Required publication of aircraft operators' PED policies.
- Restriction of PED use during takeoff, approach, landing and abnormal conditions to avoid distracting passengers during safety briefings and prevent possible injury to passengers.
- Development of standards for systems that actively detect potentially hazardous PED emissions.
- Technical challenges associated with further PED usage, and support from PED manufacturers to commercial aircraft operators.
Source: http://www.networkworld.com/article/2223028/wi-fi/faa-to-reevaluate-inflight-portable-electronic-device-use---no-cell-phones-though.html
Not so long ago, I would head to my friend's house after school to play Oregon Trail on her desktop computer. Back in those days, you couldn't just download apps and games from the Internet, because there was no Internet. There were no smartphones, and handheld tablet devices hadn't even crossed our minds. We loaded our games onto the computer from multiple floppy disks.

Fast-forward to today. I am surrounded by four computing devices at any given time, all of which are capable of accessing the world at lightning speeds. It's easy to get excited about these technologies, but what I find most fascinating is what goes on "behind the scenes." Recently, while studying network reference models, I learned about four different standards bodies that govern the way we experience the Internet, allowing what happens on our devices to be a seamless, magical experience.

A journey down the network highway

Our first stop is the International Telecom Union (ITU), headquartered in Geneva, Switzerland. This organization was established in 1865 and created what are known as letter standards. Some examples are ADSL (Asymmetric Digital Subscriber Line), which enables a faster connection over copper telephone lines, and MPEG4, which allows us to enjoy audio and video content.

The next stop is the Institute of Electrical and Electronics Engineers (IEEE). This group traces its roots to 1884, when a few electronics professionals founded its predecessor in New York. It created the numbering system that governs how we access the modern Internet. Some familiar protocols are 802.3 (Ethernet) and 802.11 (Wi-Fi). Simply put, my neighborhood coffee shop without 802.11 would be like enjoying my coffee without cream and sugar. Thanks, IEEE!

We continue our journey to our next destination, the Internet Engineering Task Force (IETF). This stop takes us to the West Coast, in California, where the RFC (Request for Comments) series was created. These standards govern how we reach content via the World Wide Web. Some familiar protocols developed there are RFC 2616, or HTTP (Hypertext Transfer Protocol), and RFCs 1034/1035, better known as DNS (Domain Name System).

Our last stop on this network field trip is the W3C, or World Wide Web Consortium. This organization was founded in 1994 (right about the time I stopped playing Oregon Trail) by Tim Berners-Lee at the Massachusetts Institute of Technology. W3C created familiar standards such as HTML5 (the fifth iteration of Hypertext Markup Language), which allows us to experience multimedia content like never before, and CSS (Cascading Style Sheets), which lets us manage and enjoy web pages in a more beautiful way.

Now that you're in acronym overload, I hope you have a better understanding of how our modern Internet became what it is today. I guess I'll use those floppy disks for drink coasters while I download the latest app to my tablet.
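As a concrete footnote to the IETF stop: the RFC 2616 wire format is plain text, simple enough to compose by hand. A minimal sketch (the helper name is mine):

```python
def build_get_request(host, path="/"):
    """Compose a minimal HTTP/1.1 GET request per RFC 2616: a request
    line, the mandatory Host header, and a blank line ending the headers.
    Lines are separated by CRLF, per the spec."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Connection: close\r\n"
        f"\r\n"
    )
```

Before these bytes can be sent anywhere, RFCs 1034/1035 come into play: DNS resolves the host name to an IP address, and only then does the request travel over IEEE 802.3 or 802.11 at the link layer — the whole field trip in one page load.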
Source: http://www.internap.com/2013/07/11/behind-the-scenes-on-the-internet-superhighway/
The purpose of a name server in the Domain Name System (DNS) is to translate the name of an Internet resource -- for example, a website, a mail server or a mobile device -- to an Internet Protocol (IP) address. Domain names, such as www.VerisignInc.com, provide a textual, hierarchical identifier for an Internet resource in a higher-layer protocol such as HTTP. The corresponding IP address -- either the traditional 32-bit form in IP version 4 or the new 128-bit form in IP version 6 -- gives a routable, numeric identifier for the resource in lower-layer network communications.

Thanks to the name-to-address translation provided by a DNS name server, when Verisign and other websites want to deliver content over IPv6, they don't need to use a different domain name (although they could do so). Rather, they can leave it to the Web browser, when running on an IPv6-enabled endpoint device, to look up an IPv6 address from the website's name server, and then to communicate with the website over IPv6. As a result, the grand upgrade currently underway from IPv4 to IPv6 largely impacts network communications, but not HTTP or other higher-layer protocols.

The simplicity of this higher-layer abstraction, of course, comes at a cost: the complexity of the implementations of services that translate between higher and lower layers of the stack. DNS offers a good case study, with its quadrupling of options due to IPv6. A name server hosts many "resource records," consisting of a domain name and associated information. Traditionally, requesters could only send DNS queries to look up resource records associated with a given domain name over the IPv4 protocol. Furthermore, if the associated information included an IP address, it could only be an IPv4 address (a so-called "A" record). Now, if a name server is IPv6-enabled, requesters can send queries over either IPv4 or IPv6.
In addition, the associated information can include either an IPv4 address or an IPv6 address (a so-called "AAAA" or "quad-A" record -- four times as many bits). The two choices are orthogonal, so overall there are four times as many options as before. This initial complexity is just the starting point, however, because of the recursive nature of DNS, which may result in transactions with additional name servers, some of which may be IPv6-enabled and some not, in the search for an ultimate IPv4 or IPv6 address.

Operators of the authoritative name servers for large top-level domains (TLDs) have a privileged "observation point" for the transition from IPv4 to IPv6, relative to the "zone" or set of domain names for which the name server is authoritative. Verisign has been studying trends in the zones it operates name servers for -- including the DNS root, .com and .net -- such as:

• The percentage of domain names in a given zone that are served by an IPv6-enabled name server (vs. IPv4-only).
• The percentage of DNS queries received via the IPv6 protocol (vs. IPv4).
• The percentage of DNS queries that request a quad-A record (vs. an A record).

Already, there has been a steady increase in the percentage of DNS queries over the IPv6 protocol at the two root name servers that Verisign operates. Labeled "A root" and "J root," these are two of the 13 name servers that requesters can contact to get the IP addresses of name servers for top-level domains. From May 2011 to May 2012, the percentage of queries to the A and J root name servers received over IPv6 has tripled, from just over 1% to between 3% and 4%. (This current rate is consistent with what we're aware of for other root name servers.) The percentage of queries over IPv6 to the name servers for .com and .net is still steady at just under 1%. Occasional fluctuations can be due to trial deployments of IPv6 at various parts of the Internet, or other variations in the mix of IPv4 vs. IPv6 traffic.
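On the wire, the A-versus-quad-A choice is just a 16-bit QTYPE field in the DNS question, while the transport choice (IPv4 or IPv6) is made independently by whichever socket carries the packet -- which is why the options multiply rather than add. A minimal sketch of the RFC 1035 query format (no EDNS or error handling):

```python
import struct

QTYPE_A, QTYPE_AAAA = 1, 28  # record type codes from RFC 1035 and RFC 3596

def dns_query(name, qtype, txid=0x1234):
    """Build a minimal DNS query packet: 12-byte header, one question."""
    # Header: ID, flags (RD set), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label length-prefixed, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QCLASS=1 (IN)
    return header + question
```

The same bytes can be sent over a UDP socket bound to either IPv4 or IPv6; between an A and a quad-A lookup, only two bytes of the question differ.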
The steady increase is encouraging, because it means that more and more requesters are starting the hierarchical process of "resolving" a domain name into an IP address with an IPv6 communication to a root server. In a typical DNS deployment, the requesters are recursive name servers, acting on behalf of end-consumers that ultimately interact with the named resources. The increase reflects IPv6 adoption by recursive name servers and the networks they reach the root servers through. It doesn't necessarily mean that the named resource, the end-consumer, or the networks they connect through support IPv6, though such transitions would be likely to follow, if not already in place.

It will be interesting to see what happens in the lead-up to World IPv6 Launch on June 6. Verisign will share insights at www.VerisignLabs.com, and we are very interested in gaining insights from the larger Internet community into questions such as: Will there be a significant acceleration in the various IPv6 adoption indicators? What are other observers seeing?

With its open architecture, there is no single observation point for all impacts of IPv6 adoption and activity on the Internet. Assembling the larger picture will depend on the information shared by the Internet's many service providers and stakeholders. As the current early adoption of this grand upgrade of network communications transitions into the mainstream, we can expect to see much larger percentages of domain names with IPv6-enabled name servers, of DNS queries received over IPv6, of requests for quad-A records, and ultimately broad adoption of IPv6 communications overall. Will 2012 be the year?

This story, "IPv6 transition: Observations from a name server perspective" was originally published by Network World.
Source: http://www.itworld.com/article/2727501/networking/ipv6-transition--observations-from-a-name-server-perspective.html
The experiment lets users write, run and debug software using quantum algorithms.

Google has launched its Quantum Computing Playground, a browser-based WebGL Chrome experiment that allows users to simulate quantum-scale computing right in the browser. The company said the web-based integrated development environment (IDE) will let users write, run and debug software using quantum algorithms. With Quantum Computing Playground you can also simulate quantum registers of up to 22 qubits and run Grover's and Shor's algorithms.

The platform features a GPU-accelerated quantum computer simulator with a simple IDE interface and its own scripting language, with debugging and 3D quantum state visualisation features. The interface presents results in 2D and 3D graphs, with each bar representing a superposition of qubits, while the colour and height of the bars show the amplitude and phase of a given superposition.
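A simulator like the Playground's ultimately just tracks one complex amplitude per basis state: n qubits means 2^n amplitudes, which is why 22 qubits (about four million amplitudes) is a practical browser-scale limit. A toy single-qubit version of the idea, written in plain Python rather than the Playground's own scripting language:

```python
import math

SQRT2 = math.sqrt(2)
H = [[1 / SQRT2, 1 / SQRT2],
     [1 / SQRT2, -1 / SQRT2]]  # Hadamard gate

def apply(gate, state):
    """Apply a 2x2 gate to a single-qubit state vector [amp0, amp1]."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

def probabilities(state):
    """Measurement probabilities are the squared magnitudes of the
    amplitudes -- the quantity the Playground draws as bar height."""
    return [abs(a) ** 2 for a in state]
```

Applying H to |0> yields an equal superposition (two bars of equal height in the Playground's visualisation); applying it a second time returns the state to |0>.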
Source: http://www.cbronline.com/news/cloud/aas/experiment-with-quantum-computing-with-googles-quantum-computing-playground-4278316
USB flash drives are one of the many modern conveniences we all take for granted today. They are small and portable (about the size of a human thumb, hence the nickname "thumb drive") and can be easily pocketed and brought from one computer to another. And because flash drives are solid-state devices, they can survive being dropped from far greater heights than a hard disk drive. With no moving parts, solid-state data storage devices have fewer points of failure than other storage devices.

But thumb drives are still very vulnerable due to their small size and portable nature. The circuit board can be shorted out. The USB plug can become bent or torn off if the drive is bumped or jostled roughly enough while plugged in. Ejecting a flash drive too suddenly can cause damage to the filesystem. When you have irreplaceable data stored on your flash drive, it is all too easy to lose access to it. If you've lost critical data on your thumb drive, our flash drive recovery technicians are here to assist you.

What's In a Thumb Drive?

[Image captions: Top: The 48-pin NAND flash memory chip mounted to the thumb drive's PCB. Bottom: The controller chip mounted to the underside of the PCB. A monolithic USB drive.]

There are three major components to any USB flash device: the NAND chip, the controller chip, and the USB plug. Most important is the NAND flash memory chip. This is where all the data on your thumb drive is stored. While the technology is radically different, it plays the same role as the platters in a hard disk drive. The NAND chip is attached to a printed circuit board along with a controller chip. The controller pieces together the data flowing to and from the NAND chip. Finally, there is the USB plug itself, which connects to the PCB and fits into the USB port on your computer. All USB flash devices have these components. But if you crack open your thumb drive's casing, you might not be able to see all of them.
You may open up your thumb drive and see, instead of a large black NAND chip and small controller chip on a green circuit board and a silver USB plug, a simple black rectangle with four gold "fingers". All of the major components are still there—they've just been soldered together into a monolithic USB thumb drive, or "monolith".

There are a few advantages to this monolithic construction. It's smaller, more water-resistant, more durable, and cheaper to manufacture. And it bears a striking resemblance to the iconic monolith from Stanley Kubrick's 2001: A Space Odyssey. (Although that's more of an aesthetic advantage than a practical one.) The main disadvantage is that it takes more effort for our flash drive recovery engineers to access the data on the drive.

USB Flash Drive Recovery Situations

There are two common "tiers" of flash drive recovery cases we see here at Gillware. The first tier involves cases in which the connection between the USB plug and PCB has been damaged. The second involves cases in which the PCB has been damaged and the NAND chip must be removed from it.

USB Flash Drive Recovery Tier 1: USB Plug Repair

For USB devices where all the components are discrete, there is one major weak point. The connection between the USB plug and circuit board is very fragile. If you have your thumb drive plugged into a USB port and accidentally hit or bump the device, you could bend or shear off the leads connecting the plug to the circuit board. This is one of the more common reasons people bring their thumb drives to us. It takes a highly skilled engineer some time with our soldering equipment to repair the flash drive. After the drive has been repaired, recovering data from it is usually a simple matter. There may be some file corruption if the USB flash drive was damaged in the middle of a write operation.

USB Flash Drive Recovery Tier 2: Raw NAND Chip Read

In cases where the circuit board itself has been damaged or shorted, flash drive data recovery involves removing the NAND chip from the board and reading its contents using a chip reader. The data on the NAND chip is nothing like what a data recovery engineer can find on a hard drive's platters. The NAND chip's contents are a jumbled mess of both user data and the system data the flash drive needs to operate. It is the job of the controller chip to make sense of all this. For us, reading the chip is the easy part. Our USB flash drive data recovery experts have to write custom software to emulate the controller and reassemble the raw data from the NAND chip.

For monolithic flash drive recovery cases, this task is a little more difficult. The monolith can't be disassembled, but we do still have a way to gain access to the NAND flash memory chip inside. By "spiderwebbing" tiny wires to specific contact points on the device, our USB flash drive recovery experts can access the buried NAND chip. Our R&D director Greg Andrzejewski explains this complex and delicate process in greater detail in a video case study. This is the more complex of the two common tiers USB flash drive data recovery cases fall into. Between the various brands and models of flash drives, each controller chip does its job a little differently. Concerning monoliths, there is no industry standard for the contact points allowing access to the data. Salvaging data directly from a NAND chip requires clever detective work and reverse-engineering.

Why Choose Gillware for My USB Flash Drive Data Recovery Needs?

Gillware Data Recovery employs full-time data recovery technicians and computer scientists dedicated to staying on the cutting edge of data storage and recovery technologies. At Gillware, we offer financially risk-free flash drive recovery services. We charge no evaluation fees, and even offer to cover the cost of inbound shipping.
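To make the "emulate the controller" step in the Tier 2 process concrete, here is a deliberately simplified toy. Suppose (hypothetically) a raw dump lists all of plane 0's pages followed by all of plane 1's, while the logical volume alternates between planes page by page. Real controllers also apply ECC, XOR scrambling and wear-leveling maps on top of this, so the sketch illustrates only the reassembly idea:

```python
def deinterleave(raw_pages, planes=2):
    """Restore logical page order from a plane-major raw dump.

    raw_pages: flat list of pages -- all of plane 0's pages first,
    then plane 1's, and so on (a hypothetical layout).
    Returns pages in logical order, alternating across planes.
    """
    per_plane = len(raw_pages) // planes
    chunks = [raw_pages[i * per_plane:(i + 1) * per_plane]
              for i in range(planes)]
    logical = []
    for group in zip(*chunks):  # take one page from each plane in turn
        logical.extend(group)
    return logical
```

The hard part in practice is discovering which layout a given controller actually uses, which is exactly the reverse-engineering work described above.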
We don’t show you a bill until we’ve finished our recovery efforts. If we don’t recover any data you need, you owe us nothing. Ready to Have Gillware Assist You with Your USB Flash Drive Recovery Needs? Best-in-class engineering and software development staff Gillware employs a full time staff of electrical engineers, mechanical engineers, computer scientists and software developers to handle the most complex data recovery situations and data solutions Strategic partnerships with leading technology companies Gillware is proud to be a recommended provider for Dell, Western Digital and other major hardware and software vendors. These partnerships allow us to gain unique insight into recovering from these devices. RAID Array / NAS / SAN data recovery Using advanced engineering techniques, we can recover data from large capacity, enterprise grade storage devices such as RAID arrays, network attached storage (NAS) devices and storage area network (SAN) devices. Virtual machine data recovery Thanks to special engineering and programming efforts, Gillware is able to recover data from virtualized environments with a high degree of success. SOC 2 Type II audited Gillware has been security audited to ensure data safety, meaning all our facilities, networks, policies and practices have been independently reviewed and determined as completely secure. Facility and staff Gillware’s facilities meet the SOC 2 Type II audit requirements for security to prevent entry by unauthorized personnel. All staff are pre-screened, background checked and fully instructed in the security protocol of the company. We are a GSA contract holder. We meet the criteria to be approved for use by government agencies GSA Contract No.: GS-35F-0547W Our entire data recovery process can be handled to meet HIPAA requirements for encryption, transfer and protection of e-PHI. No obligation, no up-front fees, free inbound shipping and no-cost evaluations. 
Gillware’s data recovery process is 100% financially risk free. We only charge if the data you want is successfully recovered. Our pricing is 40-50% less than our competition. By using cutting edge engineering techniques, we are able to control costs and keep data recovery prices low. Instant online estimates. By providing us with some basic information about your case, we can give you an idea of how much it will cost before you proceed with the recovery. We only charge for successful data recovery efforts. We work with you to define clear data recovery goals for our technicians, and only charge you upon successfully meeting these goals and recovering the data that is most important to you. Gillware is trusted, reviewed and certified Gillware has the seal of approval from a number of different independent review organizations, including SOC 2 Type II audit status, so our customers can be sure they’re getting the best data recovery service possible. Gillware is a proud member of IDEMA and the Apple Consultants Network.
Source: https://www.gillware.com/flash-drive-recovery-service/
- What are blogs? - What are wikis? - What is social software? - Why should I care about blogs and wikis? - How can blogs and wikis benefit my business? - What blog- or wiki-related challenges should I watch out for? - What types of blog technologies should I know about? - What types of wiki technologies should I know about? - What blog terminology should I know? - What wiki terminology should I know? What are blogs? "Blog" is a contraction of Web log, which is a website where users post journal-like entries that are displayed in reverse chronological order, with the most recent posting at the top of the page. Blogs can take the form of online diaries, personal chronicles, travel logs, newsy columns and reports from special events. They can include graphics, pictures, and even music and video clips. Blog postings often contain links to other blogs or websites. Blogs can be publicly viewable, or tucked safely behind the company firewall. Both public and internal blogs are often focused on a particular topic or issue. Virtually all blogs provide a vehicle for comments from readers, and the best ones-those that are most popular with readers, and therefore generate the most traffic-develop into a kind of conversation. And good blogs are frequently updated. What are wikis? A "wiki" is a website comprising text-based content that can be edited collectively by users at will. Unlike a blog, in which the authored posts remain unaltered, wiki documents can be modified by anyone with access to the website. It's a shared-authorship model; users can add new content and revise existing content without asking for permission to do so. Typical wikis are based on a Web server, which can be left open to public access via the Internet, or restricted on a company's local area network. One of the largest and best-known examples of a wiki is the Wikipedia free online encyclopedia. In business, wikis are increasingly employed as a new type of collaboration tool. 
The term "wiki" is derived from wiki wiki, which is Hawaiian for quick, which underscores one of the model's key benefits: Documents on a wiki can be edited very fast. Fans of the form claim that the whole of this kind of collaborative authorship is greater than the sum of its parts. What is social software? Both blogs and wikis are examples of social software, an emerging IT category currently being applied to a range of application and platform types or genres designed to facilitate personal interactions over computer networks. Blogs and wikis are types of social software, as are social networking websites, such as MySpace and Friendster. For the moment, social software is a flexible category under which some industry watchers would include virtual worlds, such as Second Life, instant messaging and even e-mail. However, at the heart of all social software worthy of the label is a dynamic group environment that allows individuals to interact in a way that essentially combines their intelligence and/or capabilities. As pioneering blogger and social software expert Tom Coates has defined it, social software supports, extends or derives added value from human social behavior. The groups of individuals gathered in this environment have been called "smart mobs." Author James Surowiecki has described this kind of collective intelligence as "the wisdom of crowds." The current flexibility of space is exemplified in the emergence of the wikiblog, a hybrid of the blog and the wiki. Also known as "wikiweblogs," "wikilogs," "blikis" and even "wogs," wikiblogs combine the features of the two models: The entries or articles are arranged in reverse chronological order on the main page like a blog, but the content can be edited like a wiki. Within this context, blogs and wikis have been compared to e-mail in terms of their potential impact on the enterprise. 
Instant messaging, once dismissed as irrelevant teeny-bopper tech only to evolve into an essential business tool, also comes to mind. Each form provides nontechnical users with uniquely accessible platforms for fast and easy information publication, interpersonal communication and team collaboration.
4.2.4 What is Mondex? Mondex is a payment system in which currency is stored in smart cards. These smart cards are similar in shape and size to credit cards, and generally permit the storage of sums of money up to several hundred dollars. Money may be transferred from card to card arbitrarily many times and in any chosen amounts. There is no concern about coin sizes, as with traditional currency. The Mondex system also provides a limited amount of anonymity. The system carries with it one of the disadvantages of physical currency: if a Mondex card is lost, the money it contains is also lost. Transfers of funds from card to card are effected with any one of a range of intermediate hardware devices. The Mondex system relies for its security on a combination of cryptography and tamper-resistant hardware. The protocol for transferring funds from one card to another, for instance, makes use of digital signatures (although Mondex has not yet divulged information about the algorithms employed). Additionally, the system assumes that users cannot tamper with cards, that is, access and alter the balances stored in their cards. The Mondex system is managed by a corporation known as Mondex International Ltd., with a number of associated national franchises. Pilots of the system have been initiated in numerous cities around the world. For more information on Mondex, visit their web site at http://www.mondex.com.
The Max Planck Society, a Hewlett Packard Enterprise customer, is one of Germany's top research organizations. More than 18 Nobel laureates have emerged from the ranks of its scientists, putting it on a par with the best and most prestigious research institutions worldwide. Last February, the Max Planck Institute made big news with a stunning discovery that confirmed Einstein's 100-year-old prediction of the existence of gravitational waves. BBC News summarized the findings in this article: Einstein's gravitational waves 'seen' from black holes. Max Planck is a customer that I have been personally in touch with through my role in promoting case studies for customers that use our networking equipment. A year ago, I interviewed the team there to understand how they use our data center equipment and the benefits they realized. With this news showing them as a Nobel Prize-contending institute, I wanted to share with you how they are using the HPE data center networking solution. To support its research effort, the Institute built a massive computer cluster to gather and analyze data from the most sensitive gravitational wave detectors in the United States and Europe. Just to give you some figures on the magnitude of the data they work with, the cluster is made up of 3,350 compute nodes (most with 4 CPU cores), 37 file servers, and 12 storage servers, for an overall storage capacity of 4.4 petabytes.

Choosing Einsteinian simplicity and power

Max Planck's network was aging and due for a refresh. The team managing the infrastructure wanted to walk away from its existing proprietary environment to a more open network that would allow for flexibility and growth. Simplicity was key, but the project's leaders were also seeking a network architecture that would allow for high performance, reliability, and lower TCO, all of which HPE was able to offer with its data center networking solutions.
The Institute worked with HPE partner microstaxx to design and build a 10Gbase-T network based on the HPE FlexFabric 12916 Switch AC Chassis for modular scalability and unprecedented levels of performance. "Einstein said things should be as simple as possible, but not simpler, and that's exactly what we've achieved with our new network," says Bruce Allen, Director of the Institute. "The reason we wanted the HP 12916 at our network core was the sheer power and simplicity of it."

Future science: opting for flexibility

Flexibility was another area that was crucial for Max Planck, to ensure that it can run its research in an efficient manner. "We have the largest capacity flagship core switch HPE sells, and with that comes all the flexibility we'll ever need," Allen says. "With a 16-slot chassis and 720 10Gb Ethernet ports, it's really a remarkable network core that will support whatever we want to do for the next 10 years." Emphasizing how openness was a key need, Allen commented that "our HPE network is built on open standards for high reliability—this is telecommunications-grade stuff—and our experience shows that it's actually more reliable than our compute and storage environments."

5 key business outcomes

With the HPE data center network, the Max Planck Institute is able to:

- Maximize IT staff resources by simplifying network topology and maintenance
- Enable seamless migration to new network fabric with zero unplanned downtime
- Enhance its supercomputing environment with ease of adding new nodes and storage
- Focus its innovation on research instead of IT
- Meet tight timelines, delivering technology ahead of schedule and meeting academic budgets

With the Max Planck Institute making such remarkable discoveries, we at HPE are proud to be the providers of their data center network. For the full Max Planck Institute case study, click here.
Fun with dates and times

This content is part of the series: DB2 Basics. Important: Read the disclaimer before reading this article.

This article is written for IBM® DB2® for Linux, UNIX®, and Windows®. This short article is intended for those who are new to DB2 and wish to understand how to manipulate dates and times. Most people who have worked with other databases are pleasantly surprised by how easy it is in DB2.

To get the current date, time, and timestamp using SQL, reference the appropriate DB2 registers:

SELECT current date FROM sysibm.sysdummy1
SELECT current time FROM sysibm.sysdummy1
SELECT current timestamp FROM sysibm.sysdummy1

The sysibm.sysdummy1 table is a special in-memory table that can be used to discover the value of DB2 registers as illustrated above. You can also use the VALUES keyword to evaluate the register or expression. For example, from the DB2 Command Line Processor (CLP), the following SQL statements reveal similar information:

VALUES current date
VALUES current time
VALUES current timestamp

For the remaining examples, I will simply provide the function or expression without repeating SELECT ... FROM sysibm.sysdummy1 or using the VALUES clause.
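For readers who want to poke at equivalent values outside DB2, Python's standard library offers rough analogues of these special registers. This mapping is my own illustration, not DB2 code:

```python
from datetime import date, datetime

# Rough stdlib analogues of the DB2 special registers (illustrative only):
today = date.today()        # ~ VALUES current date
now = datetime.now()        # ~ VALUES current timestamp
current_time = now.time()   # ~ VALUES current time

print(today.isoformat())
print(current_time)
print(now)
```

The remaining DB2 examples in this article all operate on these same three kinds of values: dates, times, and timestamps.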
To get the current time or current timestamp adjusted to GMT/CUT, subtract the current timezone register from the current time or timestamp:

current time - current timezone
current timestamp - current timezone

Given a date, time, or timestamp, you can extract (where applicable) the year, month, day, hour, minutes, seconds, and microseconds portions independently using the appropriate function:

YEAR (current timestamp)
MONTH (current timestamp)
DAY (current timestamp)
HOUR (current timestamp)
MINUTE (current timestamp)
SECOND (current timestamp)
MICROSECOND (current timestamp)

Extracting the date and time independently from a timestamp is also very easy:

DATE (current timestamp)
TIME (current timestamp)

You can also perform date and time calculations using, for lack of a better term, English:

current date + 1 YEAR
current date + 3 YEARS + 2 MONTHS + 15 DAYS
current time + 5 HOURS - 3 MINUTES + 10 SECONDS

To calculate how many days there are between two dates, you can subtract dates as in the following:

days (current date) - days (date('1999-10-22'))

And here is an example of how to get the current timestamp with the microseconds portion reset to zero:

CURRENT TIMESTAMP - MICROSECOND (current timestamp) MICROSECONDS

If you want to concatenate date or time values with other text, you need to convert the value into a character string first. To do this, you can simply use the CHAR() function:

char(current date)
char(current time)
char(current date + 12 hours)

To convert a character string to a date or time value, you can use:

TIMESTAMP ('2002-10-20-12.00.00.000000')
TIMESTAMP ('2002-10-20 12:00:00')
DATE ('2002-10-20')
DATE ('10/20/2002')
TIME ('12:00:00')
TIME ('12.00.00')

The TIMESTAMP(), DATE() and TIME() functions accept several more formats. The above formats are examples only, and I'll leave it as an exercise for the reader to discover them.

Warning: What happens if you accidentally leave out the quotes in the DATE function?
The function still works, but the result is not correct:

SELECT DATE(2001-09-22) FROM SYSIBM.SYSDUMMY1;

Why the 2,000 year difference in the above results? When the DATE function gets a character string as input, it assumes that it is a valid character representation of a DB2 date, and converts it accordingly. By contrast, when the input is numeric, the function assumes that it represents the number of days minus one from the start of the current era (that is, 0001-01-01). In the above query the input was 2001-09-22, which equals (2001-9)-22, which equals 1970 days.

Sometimes you need to know the difference between two timestamps. For this, DB2 provides a built-in function called TIMESTAMPDIFF(). The value returned is an approximation, however, because it does not account for leap years and assumes only 30 days per month. Here is an example of how to find the approximate difference in time between two dates:

timestampdiff (<n>, char( timestamp('2002-11-30-00.00.00') - timestamp('2002-11-08-00.00.00')))

In place of <n>, use one of the following values to indicate the unit of time for the result:

- 1 = Fractions of a second
- 2 = Seconds
- 4 = Minutes
- 8 = Hours
- 16 = Days
- 32 = Weeks
- 64 = Months
- 128 = Quarters
- 256 = Years

Using timestampdiff() is more accurate when the dates are close together than when they are far apart.
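Both behaviors above can be sanity-checked outside DB2 with Python's stdlib datetime, whose date subtraction is exact (no 30-day-month assumption) and whose proleptic Gregorian calendar mirrors the era arithmetic described for numeric DATE() input. This is my own cross-check, not DB2 code:

```python
from datetime import date, datetime, timedelta

# Exact day count for the timestampdiff() example above:
t1 = datetime(2002, 11, 30)
t2 = datetime(2002, 11, 8)
print((t1 - t2).days)  # 22

# The missing-quotes pitfall: DATE(2001-09-22) receives the integer
# 2001 - 9 - 22 = 1970, which is then treated as day 1970 of the era
# (0001-01-01 plus 1969 days) -- landing in year 6, not year 2001.
n = 2001 - 9 - 22
era_day = date(1, 1, 1) + timedelta(days=n - 1)
print(n, era_day.isoformat())  # 1970 0006-05-24
```

Whether DB2's own calendar rules produce the exact same day is for the reader to verify, but the magnitude of the "2,000 year difference" is clear from the arithmetic alone.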
If you need a more precise calculation, you can use the following to determine the difference in time (in seconds):

(DAYS(t1) - DAYS(t2)) * 86400 + (MIDNIGHT_SECONDS(t1) - MIDNIGHT_SECONDS(t2))

For convenience, you can also create an SQL user-defined function of the above:

CREATE FUNCTION secondsdiff(t1 TIMESTAMP, t2 TIMESTAMP)
RETURNS INT
RETURN (
  (DAYS(t1) - DAYS(t2)) * 86400 +
  (MIDNIGHT_SECONDS(t1) - MIDNIGHT_SECONDS(t2))
)
@

If you need to determine if a given year is a leap year, here is a useful SQL function you can create to determine the number of days in a given year:

CREATE FUNCTION daysinyear(yr INT)
RETURNS INT
RETURN (
  CASE (mod(yr, 400)) WHEN 0 THEN 366 ELSE
    CASE (mod(yr, 4)) WHEN 0 THEN
      CASE (mod(yr, 100)) WHEN 0 THEN 365 ELSE 366 END
    ELSE 365 END
  END
)@

Finally, here is a chart of built-in functions for date manipulation. The intent is to help you quickly identify a function that might fit your needs, not to provide a full reference. Consult the SQL Reference for more information on these functions.

SQL date and time functions are as follows:

- DAYNAME: Returns a mixed case character string containing the name of the day (e.g., Friday) for the day portion of the argument.
- DAYOFWEEK: Returns the day of the week in the argument as an integer value in the range 1-7, where 1 represents Sunday.
- DAYOFWEEK_ISO: Returns the day of the week in the argument as an integer value in the range 1-7, where 1 represents Monday.
- DAYOFYEAR: Returns the day of the year in the argument as an integer value in the range 1-366.
- DAYS: Returns an integer representation of a date.
- JULIAN_DAY: Returns an integer value representing the number of days from January 1, 4712 B.C. (the start of the Julian date calendar) to the date value specified in the argument.
- MIDNIGHT_SECONDS: Returns an integer value in the range 0 to 86,400 representing the number of seconds between midnight and the time value specified in the argument.
- MONTHNAME: Returns a mixed case character string containing the name of the month (e.g., January) for the month portion of the argument.
- TIMESTAMP_ISO: Returns a timestamp value based on a date, time or timestamp argument.
- TIMESTAMP_FORMAT: Returns a timestamp from a character string that has been interpreted using a character template.
- TIMESTAMPDIFF: Returns an estimated number of intervals of the type defined by the first argument, based on the difference between two timestamps.
- TO_CHAR: Returns a character representation of a timestamp that has been formatted using a character template. TO_CHAR is a synonym for VARCHAR_FORMAT.
- TO_DATE: Returns a timestamp from a character string that has been interpreted using a character template. TO_DATE is a synonym for TIMESTAMP_FORMAT.
- WEEK: Returns the week of the year of the argument as an integer value in the range 1-54. The week starts with Sunday.
- WEEK_ISO: Returns the week of the year of the argument as an integer value in the range 1-53.

Changing the date format

A common question I often get relates to the presentation of dates. The default format used for dates is determined by the territory code of the database (which can be specified at database creation time). For example, my database was created using territory=US. Therefore the date format looks like the following:

values current date

1
----------
05/30/2003

1 record(s) selected.

That is, the format is MM/DD/YYYY. If you want to change the format, you can bind the collection of db2 utility packages to use a different date format. The formats supported are:

- DEF: Use a date and time format associated with the territory code.
- EUR: Use the IBM standard for Europe date and time format.
- ISO: Use the date and time format of the International Standards Organization.
- JIS: Use the date and time format of the Japanese Industrial Standard.
- LOC: Use the date and time format in local form associated with the territory code of the database.
- USA: Use the IBM standard for U.S. date and time format.

To change the default format to ISO (YYYY-MM-DD) on Windows, do the following steps:

- On the command line, change your current directory to
- Connect to the database from the operating system shell as a user with

db2 connect to DBNAME
db2 bind @db2ubind.lst datetime ISO blocking all grant public

(In your case, substitute your database name and desired date format for DBNAME and ISO, respectively.) Now, you can see that the database uses the ISO date format:

values current date

1
----------
2003-05-30

1 record(s) selected.

Custom Date/Time Formatting

In the last example, we demonstrated how to change the way DB2 presents dates in some localized formats. But what if you wish to have a custom format such as 'yyyymmdd'? The best way to do this is by writing your own custom formatting function. Here is the UDF:

create function ts_fmt(TS timestamp, fmt varchar(20))
returns varchar(50)
return
with tmp (dd, mm, yyyy, hh, mi, ss, nnnnnn) as
(
  select
    substr(digits(day(TS)), 9),
    substr(digits(month(TS)), 9),
    rtrim(char(year(TS))),
    substr(digits(hour(TS)), 9),
    substr(digits(minute(TS)), 9),
    substr(digits(second(TS)), 9),
    rtrim(char(microsecond(TS)))
  from sysibm.sysdummy1
)
select
  case fmt
    when 'yyyymmdd' then yyyy || mm || dd
    when 'mm/dd/yyyy' then mm || '/' || dd || '/' || yyyy
    when 'yyyy/dd/mm hh:mi:ss' then yyyy || '/' || mm || '/' || dd || ' ' || hh || ':' || mi || ':' || ss
    when 'nnnnnn' then nnnnnn
    else 'date format ' || coalesce(fmt, ' <null> ') || ' not recognized.'
  end
from tmp

The function code may appear complex at first, but upon closer examination, you'll see that it is actually quite simple and elegant. First, we use a common table expression (CTE) to strip apart a timestamp (the first input parameter) into its individual components. From there, we check the format provided (the second input parameter) and reassemble the timestamp using the requested format and parts.
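For comparison only, the same pattern-dispatch idea can be sketched in Python, mapping the UDF's pattern strings onto strftime codes. This is my own rough analogue, not part of the original article; note that the UDF's 'yyyy/dd/mm hh:mi:ss' label actually assembles year/month/day, and the sketch mirrors that assembly order:

```python
from datetime import datetime

# Map the UDF's supported patterns onto strftime format strings.
# 'nnnnnn' (microseconds) has no strftime code here, so it is special-cased.
_PATTERNS = {
    "yyyymmdd": "%Y%m%d",
    "mm/dd/yyyy": "%m/%d/%Y",
    "yyyy/dd/mm hh:mi:ss": "%Y/%m/%d %H:%M:%S",  # assembled y/m/d, like the UDF body
}

def ts_fmt(ts: datetime, fmt: str) -> str:
    if fmt == "nnnnnn":
        return str(ts.microsecond)
    try:
        return ts.strftime(_PATTERNS[fmt])
    except KeyError:
        # Mirror the UDF's fallback branch for unrecognized patterns.
        return "date format " + fmt + " not recognized."

print(ts_fmt(datetime(2003, 8, 18, 12, 0, 0), "yyyymmdd"))  # 20030818
print(ts_fmt(datetime(2003, 8, 18, 12, 0, 0), "asa"))       # date format asa not recognized.
```

Within DB2 itself, of course, the SQL UDF above is what you would actually deploy.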
The function is also very flexible. To add another pattern, simply append another WHEN clause with the expected format. When an unexpected pattern is encountered, an error message is returned.

values ts_fmt(current timestamp,'yyyymmdd')
'20030818'

values ts_fmt(current timestamp,'asa')
'date format asa not recognized.'

These examples answer the most common questions I've encountered on dates and times. I'll update this article with more examples if feedback suggests that I should. (In fact, I've updated this three times already... thanks to readers.)

Bill Wilkins, DB2 Partner Enablement

This article contains sample code. IBM grants you ("Licensee") a non-exclusive, royalty-free license to use this sample code. However, the sample code is provided as-is and without any warranties, whether EXPRESS OR IMPLIED, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT. IBM AND ITS LICENSORS SHALL NOT BE LIABLE FOR ANY DAMAGES SUFFERED BY LICENSEE THAT RESULT FROM YOUR USE OF THE SOFTWARE. IN NO EVENT WILL IBM OR ITS LICENSORS BE LIABLE FOR ANY LOST REVENUE, PROFIT OR DATA, OR FOR DIRECT, INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL OR PUNITIVE DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF THE USE OF OR INABILITY TO USE SOFTWARE, EVEN IF IBM HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
The Mask Raises Network Security Worries in an Age of Cyberwarfare

The Careto malware, a.k.a. The Mask, could have been a Duqu-like cyberweapon – but more sophisticated. What if your network was compromised for the past five years and you didn't know? That seems to have been the situation for many victims of one of the greatest security threats to have been discovered recently. On February 11, Kaspersky Lab announced its discovery of a particularly insidious piece of malware dubbed "The Mask" – also known as "Careto" (Spanish for "mask" or "ugly face"), the name given by the attackers to one of the two primary backdoor implants used on target machines. Kaspersky has detected at least 380 unique victims of the attack across at least 31 countries, concentrated among energy companies, government offices, private equity firms, research institutions, and political activists. Kaspersky further concedes that many more victims could remain undetected. Kaspersky reports that The Mask was active for at least five years, until January of this year. This means that, for years, major public and private sector organizations have had their networks and data deeply compromised without knowing it. Some samples of The Mask were found to have been compiled even earlier, in 2007. Disturbingly, this is the same year as the origins of major cyberweapons like Stuxnet and Duqu. What's more, Kaspersky reports that The Mask is a more sophisticated piece of malware than Duqu because of the former's capacity for flexibility and customization. Working through a highly complex combination of modules and plug-ins, The Mask would secretly gather and steal data from all manner of systems and networks – including remote and virtualized ones – while monitoring all file operations.
It would then hide its tracks in highly sophisticated ways, including replacing real system libraries, entirely wiping log files (as opposed to simple deletion), and blocking the IP addresses of renowned computer research entities (including Kaspersky) from its command and control servers. For these reasons, and because of the unique and sophisticated way this malware would work from a network infrastructure management perspective, security experts hypothesize that The Mask was created or sponsored by a nation-state, similar to Kaspersky's conclusions about the Stuxnet worm. "The attack is designed to handle all possible cases and potential victim types," Kaspersky reports. Kaspersky has uncovered versions of The Mask that affect Windows, Mac OSX, and Linux. Kaspersky also reports that there are mobile versions of The Mask, including one known to attack Nokia devices. While Kaspersky has not been able to obtain a sample to 100% confirm, the computer security firm believes that versions of The Mask affect both iOS and Android devices. The Mask also works through a variety of browsers, including Internet Explorer, Firefox, Chrome, Safari, and even Opera. "Depending on the operating system, browser and installed plugins," Kaspersky notes, "the user is redirected to different subdirectories, which contain specific exploits for the user’s configuration that are most likely to work." Among these exploits are plugin modules that attack anti-malware products (including those by Kaspersky), intercept network traffic, obtain PGP keys, steal email messages, intercept and record Skype conversations, gather a list of available WiFi networks, and provide other network functions to facilitate other modules. One module even creates a framework for extending the reach of The Mask with new plugins. The Mask also has the ability to profile its targets. 
Its modules would automatically determine details of its victims' systems and software and then customize attacks using that information. It can even figure out if it is targeting a remote desktop portal or a virtualized environment. "The installer module can detect if it is being executed in a VMware or Microsoft Virtual PC virtual machine," reports Kaspersky. Network administrators and information security officers should find these revelations particularly disturbing. The fact that something so flexible, complex, and sophisticated could compromise so much information across so many platforms and go undetected for several years is bad enough. The fact that it is probable that this is the work of a nation-state or nation-state-sponsored group is yet more disconcerting. As cyber warfare ramps up, so must cyber defenses. The logical consequence may be greater government oversight (some may prefer the more uncharitable characterization "intrusion") over private sector systems, particularly in essential industries like energy, banking, and transportation. For now, basic security measures are still the best protection against these kinds of attacks. Use up-to-date antimalware and firewall software. Don't open suspicious attachments or click suspicious links. Use air gaps where practicable, and when transferring files across the air gap, use media with small storage space filled with random files to prevent malware from storing itself on your USB stick or CD and leaping the air gap. In the present case, The Mask appears to have been focused primarily on obtaining information. Still, such information – especially considering the sheer volume of system information accessible to those behind The Mask – could be used to develop and enable outright destructive Stuxnet-like attacks in the future. Therefore, those who were compromised should not consider themselves out of the woods yet. 
New security measures, perhaps right down to a complete network infrastructure overhaul, may be necessary to avoid serious system disruptions down the line. And then there is the even bigger question: If something as sophisticated as The Mask went undetected in the wild this long, then what else is still out there? Photo courtesy of Shutterstock. Joe Stanganelli is a writer, attorney, and communications consultant. He is also principal and founding attorney of Beacon Hill Law in Boston. Follow him on Twitter at @JoeStanganelli.
Malware and Bitcoin

As of late August, Kaspersky Lab's analysts detected 35 unique malicious programs that targeted the Bitcoin system in one way or another. Realizing that their potential earnings largely depend on the number of computers they have access to, the cybercriminals have moved from stealing Bitcoin wallets to using Twitter and P2P network-based botnets. Cybercriminals have resorted to this measure to counter the antivirus companies that may block the operation of a single botnet C&C server if no alternate servers exist in the malicious network. For example, a bot would send a request to a Twitter account, which provides commands that are left there by the botnet owner, i.e., where the Bitcoin-generating program is downloaded, along with instructions for which Bitcoin pools to work with. The use of Twitter as a botnet command center is not new, although this is the first time it has been used with the Bitcoin system. In August, Kaspersky Lab also discovered that one of the largest botnets conceals its actual Bitcoin pool accounts, since those accounts can be deleted by pool server owners who take a proactive stance against unlawful mining programs. To achieve this, the botnet owners had to create a special proxy server that interacts with infected computers, and their requests are then transferred to an unknown Bitcoin pool. It is not possible to identify the specific pools that the botnet works with and thus block the fraudulent accounts. In this situation, the only means of intercepting such criminal activity is to gain full access to one of the proxy servers.

Ice IX: the illegitimate child of ZeuS

Almost a year after the original code of the ZeuS Trojan (Trojan-Spy.Win32.Zbot), the most widespread threat targeting online banking users, was leaked, Russian-speaking cybercriminals created a clone which became quite popular among fraudsters this summer. The new variant, which emerged in the spring, was dubbed Ice IX by its creator and sells for US $600-1,800.
One of Ice IX's most remarkable innovations is the altered botnet control web module which allows cybercriminals to use legitimate hosting services instead of costly bulletproof servers maintained by the cybercriminal community. This difference is meant to keep hosting costs down for Ice IX owners. The appearance of Ice IX indicates that we should soon expect the emergence of new "illegitimate children" of ZeuS and an even greater number of attacks against the users of online banking services.

Remote-access worm

The new network worm Morto is interesting in that it does not exploit vulnerabilities in order to self-replicate. Furthermore, it spreads via the Windows RDP service that provides remote access to a Windows desktop – a method which has not been seen before. Essentially, the worm attempts to find the access password. Provisional estimates indicate that tens of thousands of computers throughout the globe may currently be infected with this worm.

Attacks against individual users: mobile threats

In early August 2010, the first-ever malicious program for the Android operating system was detected: the SMS Trojan FakePlayer. Today, threats designed for Android represent approximately 23% of the overall number of detected threats targeting mobile platforms.

[Figure: The distribution of malicious programs targeting mobile platforms, by operating system]

Excluding the J2ME platform, 85% of the total number of smartphone threats detected during August 2010 targeted the Android system. In August, the Nickspy Trojan stood out among the multitude of threats targeting mobile platforms. Its distinguishing characteristics include an ability to collect information about the phone's GPS coordinates and any calls that are made from the device. It can also record all the conversations that the infected device's owner has. The audio files are then uploaded to a remote server managed by the malicious owner.
Attacks against the networks of corporations and major organizations

August saw a number of really high-profile hack attacks. The victims of hacktivists included the Italian cyber police, a number of companies cooperating with law enforcement agencies in the US, and the military contractor Vanguard, who works under contract to the US Department of Defense. However, these hack attacks were hardly surprising against the backdrop of this year's events. Nevertheless, the IT community was shaken by a news item from McAfee about their detection of what was potentially the largest cyber-attack in history, lasting over five years and targeting numerous organizations around the world, from the US Department of Defense to the Sports Committee of Vietnam. The attack was dubbed Shady Rat. All would have been well and good, but the malicious user-run server that was allegedly "detected by researchers" had in fact already been known to the experts at many other antivirus companies for several months. Moreover, at the time of the article's publication the server was still up and running, and all of the information that McAfee used in its report had already been made public. What is more, the long sought-after spyware that had allegedly been used in the most complex and largest attack in history had already been detected by many antivirus programs using simple heuristics. In addition to these and other factors, the McAfee incident gives rise to many other questions, which were asked publicly, including by Kaspersky Lab's experts. "Our studies have confirmed that Shady Rat was not the longest-running or the largest, nor even the most sophisticated attack in history", comments Alexander Gostev, Chief Security Expert at Kaspersky Lab.
“Moreover, we believe that it is unacceptable to publish information about any attacks without a full description of all of the components and technologies used, since these incomplete reports do not allow experts to make all possible efforts to protect their own resources.”
Continuing my business trip through Asia, I have left Chengdu, China, and am now in Kuala Lumpur, Malaysia. On Sunday, a colleague and I went to the famous Petronas Twin Towers, which a few years ago were officially the tallest buildings in the world. If you get there early enough in the day, and wait in line for a few hours, you can get a ticket permitting you to go up to the "Skybridge" on the 41st floor that connects the two buildings. The views are stunning, and I am glad to have done this. (If you are afraid of heights, get cured by facing your fears with skydiving.) You would think that a question as simple as "Which is the tallest building in the world?" could easily be answered, given that buildings remain fixed in one place and do not drastically shrink or get taller over time or weather conditions, and the unit of height, the "meter", is an officially accepted standard in all countries, defined as the distance traveled by light in absolute vacuum in 1/299,792,458 of a second. The controversy centers on two key areas of dispute:

- What constitutes a building? A building is a structure intended for continuous human occupancy, as opposed to the dozens of radio and television broadcasting towers which measure over 600 meters in height. The Petronas Twin Towers are occupied by a variety of business tenants and would qualify as a building. Radio and television towers are not intended for occupation, and should not be considered.
- Where do you start measuring, and where do you stop? Since 1969, the height was generally based on a building's height from the sidewalk level of the main entrance to the architectural top of the building. The "architectural top" included towers, spires (but not antennas), masts or flagpoles. Should the measurements be only to the top of the highest inhabitable floor? What if the building has many more floors below ground level?
What if the building exists in a body of water? Should sidewalk level equate to water level, and at low tide or high tide? (Laugh now, but this might happen sooner than you think!) To bring some sanity to these comparisons, the Council on Tall Buildings and Urban Habitat has tried to standardize the terms and definitions to make comparisons between buildings fair. Why does it matter whose building is tallest? It matters in two ways:
- People and companies are willing to pay more to be a tenant in tall towers, affording a luxurious bird's-eye view to impress friends, partners and clients, and so the rankings can influence purchase or leasing prices of floor space in these buildings.
- Architects and engineers involved in building these structures want to list them on their resumes. These buildings are an impressive feat of engineering, and the teams involved collaborate in a global manner to accomplish them. If an architecture or engineering company can build the world's tallest building, you can trust them to build one for you. The rankings can help drive revenues by generating demand for services and offerings.
What does any of this have to do with storage? Two weeks ago, IBM and the Storage Performance Council answered the question "Which is the fastest disk system?" with a press release. Customers that care about performance of their most mission-critical applications are often willing to pay a premium to run their applications on the fastest disk system, and the IBM System Storage SAN Volume Controller, built through a global collaboration of architects and engineers across several countries, is (in my opinion at least) an impressive feat of storage engineering. EMC blogger Chuck Hollis was the first to question the relevance of these results, and I failed to "turn the other cheek" and responded accordingly.
The blogosphere erupted, with more opinions piled on by others, many from EMC and IBM, found in comments on these posts or other blogs; some have since been retracted or deleted, while others remain for historical purposes. At the heart of all this opinionated debate lie a few areas of exploration:
- What constitutes a "disk system"? What should or should not be considered for comparison?
- What metrics should be used to measure performance? What is a version of the "meter" everyone can use?
- How should the measurements occur? Who should perform them?
- Do the measurements provide sufficient value for the purpose of aiding the purchase decision-making process?
I will try to address some of these issues in a series of posts this week. technorati tags: IBM, KL, Kuala Lumpur, Malaysia, Petronas, Twin Towers, SkyBridge, tallest, building, structure, tower, fastest, disk, system, SVC, SAN Volume Controller, EMC, Chuck Hollis, SPC, Storage Performance Council
For those in the US, a comedian named Carlos Mencia has a great TV show, Mind of Mencia, and one of my favorite segments is "Why the @#$% is this news!" where he goes about showing blatantly obvious things that were reported in various channels. So, when I saw that IBM once again, for the third year in a row, has the fastest disk system, the IBM System Storage SAN Volume Controller (SVC), based on widely-accepted industry benchmarks representing typical business workloads, I thought, "Do I really want to blog about this, and sound like a broken record, repeating my various statements of the past of how great SVC is?" It's like reminding people that IBM has had more US patents than any other company, every year, for the past 14 years. (Last year, I received comments from Woody Hutsell, VP of Texas Memory Systems, because I pointed out that their "World's Fastest Storage"® cache-only system was not as fast as IBM's SVC. You can read my opinions, and the various comments that ensued, here and here.
) That all changed when EMC uber-blogger Chuck Hollis forgot his own Lessons in Marketing when he posted his rant, "Does Anyone Take The SPC Seriously?" That's like asking "Does anyone take book and movie reviews seriously?" Of course they do! In fact, if a movie doesn't make a big deal of its "Two thumbs up!" rating, you know it did not sit well with the reviewers. It's even more critical for books. I guess this latest news from SPC really got under EMC's skin. For medium and large size businesses, storage is expensive, and customers want to do as much research as possible ahead of time to make informed decisions. A lot of money is at stake, and often, once you choose a product, you are stuck with that vendor for many years to come, sometimes paying software renewals after only 90 days, and hardware maintenance renewals after only a year when the warranty runs out. Customers shopping for storage like the idea of a standardized test that is representative, so they can compare one vendor's claims with another. The Storage Performance Council (SPC), much like the Transaction Processing Performance Council (TPC) for servers, requires full disclosure of the test environment so people can see what was measured and make their own judgement on whether or not it reflects their workloads. Chuck pours scorn on SPC, but I think we should point to TPC-C as a great success story and ask why he thinks the same can't happen for storage. Server performance is also a complicated subject, but people compare TPC-C and TPC-H benchmarks all the time. Note: This blog post has been updated. I am retracting comments that were unfair generalizations. The next two paragraphs are different than originally posted. Chuck states that "Anyone is free, however, to download the SPC code, lash it up to their CLARiiON, and have at it." I encourage every customer to do this with whatever disk systems they already have installed.
Judge for yourself how each benchmark compares to your experience with your application workload, and consider publishing the results for the benefit of others, or at least send me the results, so that I can better understand all of these "use cases" that Chuck talks about so often. I agree that real-world performance measurements using real applications and real data are always going to be more accurate and more relevant to that particular customer. Unfortunately, few or no such results are made public. They are noticeably absent. With thousands of customers running with storage from all the major storage vendors, as well as storage from smaller start-up companies, I would expect more performance comparison data to be readily available. In my opinion, customers would benefit by seeing the performance results obtained by others. SPC benchmarks help to fill this void, providing guidance to customers who have not yet purchased the equipment and are deciding which vendors to work with, and which products to put into their consideration set. Truth is, benchmarks are just one of the many ways to evaluate storage vendors and their products. There are also customer references, industry awards, and corporate statements of a company's financial health, strategy and vision. Like anything, it is information to weigh against other factors when making expensive decisions. And I am sure the SPC would be glad to hear of any suggestions for a third SPC-3 benchmark, if the first two don't provide you enough guidance. So, if you are not delighted with the performance you are getting from your storage now, or would benefit by having even faster I/O, consider improving its performance by adding SAN Volume Controller. SVC is like salt or soy sauce: it makes everything taste better. IBM would be glad to help you with a try-and-buy or proof-of-concept approach, and even help you compare the performance, before and after, with whatever gear you have now.
You might just be surprised how much better life is with SVC. And if, for some reason, the performance boost you experience for your unique workload is only 10-30% better with SVC, you are free to tell the world about your disappointment. technorati tags: Carlos Mencia, Mind of Mencia, IBM, system, storage, SVC, SAN Volume Controller, Storage Performance Council, SPC, benchmarks, Texas Memory Systems, Woody Hutsell, EMC, Chuck Hollis, movie, book, reviews, awards, salt, soy sauce
Wrapping up my week's discussion on Business Continuity, I've had lots of interest in my opinion stated earlier this week that it is good to separate programs from data, that this simplifies the recovery process, and that the Windows operating system can fit in a partition as small as the 15.8GB solid state drive we just announced for BladeCenter. It worked for me, and I will use this post to show you how to get it done. Disclaimer: This is based entirely on what I know and have experienced with my IBM Thinkpad T60 running Windows XP, and is meant as a guide. If you are running with different hardware or different operating system software, some steps may vary. (Warning: Windows Vista apparently handles data, Dual Boot, and Partitions differently. These steps may not work for Vista.) For this project, I have a DVD/CD burner in my Ultra-Bay, a stack of blank CDs and DVDs, and a USB-attached 320GB external disk drive.
- Step 0 - Backup your system I find it amusing that this is ALWAYS the first step, but nobody provides any instruction. I will assume we start with a single C: drive with an operational Windows operating system, intermixed programs and data. If you have a Thinkpad, you should have the "IBM Rescue and Recovery" program already installed, but it is probably down-level. Mine was version 2.0 -- Yikes! Download IBM Rescue and Recovery Version 4.0 for Windows XP and Windows 2000, and reboot to complete the installation. Make TWO backups.
First, make a bootable rescue CD and backup to several DVDs. Second, backup to a large external 320GB USB-attached disk drive. IBM Rescue and Recovery does compression, so a 60GB drive that is mostly full might take about 8-10 DVDs; have plenty on hand. If you have to recover, boot from CD, and restore from the USB-attached drive. If that doesn't work, you have the DVDs just in case. If you are suitably happy with your backups, you are ready for step 1. For added protection, you can use a Linux LiveCD to backup your entire drive. I suggest SysRescCD, which is designed to be a rescue CD and can do backups and restores. First, figure out if your drive is "hda" or "sda". The "dmesg" command below shows that mine is "sda", with output like this:

tpearson@tpearson:~$ dmesg | grep [hs]d
[ 7.968000] SCSI device sda: 117210240 512-byte hdwr sectors (60012 MB)
[ 7.968000] sda: Write Protect is off
[ 7.968000] sda: Mode Sense: 00 3a 00 00
[ 7.968000] SCSI device sda: write cache: enabled, read cache: enabled

I like to backup the master boot record to one file, and then the rest of the C: drive to a series of 690MB compressed chunks. These can be directed to the USB-attached drive, and then later burned onto CDrom, or packed 6 files per DVD. Most USB-attached drives are formatted with the FAT32 file system, which doesn't support any file greater than 4GB, so splitting these up into 690MB chunks is well below that limit.

dd if=/dev/sda of=/media/USBdrive/master.MBR bs=512 count=1
dd if=/dev/sda1 conv=sync,noerror | gzip -c | split -b 690m - /media/USBdrive/master.gz.

To recover your system, just reverse the process:

cat /media/USBdrive/master.gz.* | gzip -dc | dd of=/dev/sda1
dd if=/media/USBdrive/master.MBR of=/dev/sda bs=512 count=1

You can learn more about these commands here and here.
- Step 1 - Defrag your C: drive From Windows, right-click on your Recycle Bin and select "Empty Recycle Bin". Click Start->Programs->Accessories->System Tools->Disk Defragmenter.
Select the C: drive and push the Analyze button. You will see a bunch of red, blue and white vertical bars. If there are any green bars, we need to fix that. The following worked for me:
- Right-click "My Computer" and select Properties. Select Advanced, then press the "Settings" button under Performance. Select the Advanced tab and press the "Change" button under Virtual Memory. Select "No Paging File" and press the "Set" button. Virtual memory lets you have many programs open, moving memory back and forth between your RAM and hard disk.
- Click Start->Control Panel->Performance and Maintenance->Power Options. On the Hibernate tab, make sure the "Enable Hibernation" box is un-checked. I don't use Hibernate, as it seems like it takes just as long to come back from Hibernation as it does to just boot Windows normally.
- Reboot your system to Windows. If all went well, Windows will have deleted both pagefile.sys and hiberfil.sys, the two most common unmovable files, and freed up 2GB of space. You can run just fine without either of these features, but if you want them back, we will put them back in Step 6 below. Go back to Disk Defragmenter, verify there are no green bars, and proceed by pressing the "Defragment" button. If there are still some green bars, you can proceed cautiously (you can always restore from your backup, right?), or seek professional help.
- Step 2 - Resize your C: drive When the defrag is done, we are ready to re-size your file system. This can be done with commercial software like Partition Magic. If you don't have this, you can use open source software. Burn yourself the Gparted LiveCD. This is another Linux LiveCD, and is similar to Partition Magic. Either way, re-size the C: drive smaller. In theory, you can shrink it down to 15GB if this is a fresh install of Windows, and there is no data on it. If you have lots of data, and the drive was nearly full, only resize the C: drive smaller by 2GB.
That is how much we freed up from the unmovable files, so that should be safe. You could do steps 2 and 3 while you are here, but I don't recommend it. Just re-size C:, press the "Apply" button, reboot into Windows, and verify everything starts correctly before going to the next step.
- Step 3 - Create Extended Partition and Logical D: drive You can only have FOUR partitions, either Primary for programs, or Extended for data. However, the Extended partition can act as a container of one or more logical partitions. Get back into the Partition Magic or Gparted program, and in the unused space freed up from re-sizing in the last step, create a new extended/logical partition. For now, just have one logical inside the extended, but I have co-workers who have two logical partitions, D: for data, and E: for their e-mail from Lotus Notes. You can always add more logical partitions later. I selected "NTFS" type for the D: drive. In years past, people chose the older FAT32 type, which has some limitations, but allowed read/write capability from DOS, OS/2, and Linux. Windows XP can only format up to 32GB partitions of FAT32, and each file cannot be bigger than 4GB. I have files bigger than that. Linux can now read/write NTFS file systems directly, using the new NTFS-3G driver, so that is no longer an issue.
- Step 4 - Format drive D: as NTFS Just because you have told your partitioning program that D: was NTFS type, you still have to put a file system on it. Click Start->Control Panel->Performance and Maintenance->Computer Management. Under Storage, select Disk Management. Right-click your D: drive and choose format. Make sure the "Perform Quick Format" box is un-checked, so that it performs a full (slower) format.
- Step 5 - Move data from C: to D: drive Create two directories, "D:\documents" and "D:\notes\data", either through Explorer, or in a command-line window with the "MKDIR documents notes\data" command.
Move files from c:\notes\data to d:\notes\data, and any folder in your "My Documents" over to d:\documents. (If you have more data than the size of the D: drive, copy over what you can, run another defrag, resize your C: drive even smaller with Partition Magic or Gparted, reboot, verify Windows is still working, resize your D: bigger, and repeat the process until you have all of your data moved over.) To inform Lotus Notes that all of your data is now on the D: drive, use NOTEPAD to edit notes.ini and change the Directory line to "Directory=D:\notes\data". If you have a special signature file, leave it in the C:\notes directory. Once all of your data is moved over to D:\documents, right-click on "My Documents" and select Properties. Change the target to "D:\documents" and press the "Move" button. Now, whenever you select "My Documents", you will be on your D: drive instead.
- Step 6 - Take A Fresh Backup If you use IBM Tivoli Storage Manager, now would be a good time to re-evaluate your "dsm.opt" file that lists what drives and sub-directories to backup. Take a backup, and verify your data is being backed up correctly. With the USB-attached drive, backup both C: and D: drives. I leave my USB drive back in Tucson. For a backup copy while traveling, go to IBM Rescue and Recovery and take a C:-only backup to DVD. Make sure the D: drive box is un-checked. Now, if I ever need to reinstall Windows, because of file system corruption or a virus, I can do this from my one bootable CD plus 2 DVDs, which I can easily carry with me in my laptop bag, leaving all my data on the D: drive intact. In the worst case, if I had to re-format the whole drive or get a replacement disk, I can restore C: and then restore the few individual data files I need from IBM Tivoli Storage Manager, or a small USB key/thumbdrive, delaying a full recovery until I return to Tucson. Lastly, if you want, reactivate the "Virtual Memory" and "Hibernation" features that we disabled in Step 1.
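Before trusting the dd/gzip/split backup from Step 0, it is worth a dry run to confirm the chunks reassemble and decompress cleanly. A minimal sketch, exercised on a small scratch file so nothing touches your real partitions (all paths here are illustrative):

```shell
# Exercise the same backup pipeline from Step 0 on a 4MB scratch file.
mkdir -p /tmp/backup-demo && cd /tmp/backup-demo
dd if=/dev/urandom of=scratch.img bs=1M count=4 2>/dev/null

# Back up: compress and split into chunks (690m in the real procedure;
# 1m here so the demo produces several chunks quickly).
dd if=scratch.img conv=sync,noerror 2>/dev/null | gzip -c | split -b 1m - scratch.gz.

# Verify: concatenate the chunks and let gzip test the whole stream.
cat scratch.gz.* | gzip -t && echo "archive OK"

# Round-trip restore and byte-compare against the original.
cat scratch.gz.* | gzip -dc > restored.img
cmp -s scratch.img restored.img && echo "restore matches"
```

The same cat-and-test check works against the real /media/USBdrive/master.gz.* chunks before you burn them to DVD, and it is much cheaper than discovering a bad chunk during a real recovery.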
As with Business Continuity in the data center, planning in this manner can help you get back "up and running" quickly in the event of a disaster. technorati tags: IBM, Business Continuity, Windows, XP, BladeCenter, solid, state, disk, backup, Linux, sysresccd, LiveCD, dd, gzip, split, Tivoli, Storage Manager, USB, Lotus Notes, NTFS, NTFS-3G, FAT32, primary, extended, logical, partition, magic, gparted
Continuing this week's theme on Business Continuity, I will use this post to discuss this week's IBM solid state disk announcement. This new offering provides a new way to separate programs from data, to help minimize downtime and outages normally associated with disk drive failures. Until now, the method most people used to minimize the amount of data on internal storage was to use disk-less servers with Boot-over-SAN; however, not all operating systems, and not all disk systems, supported this. In April, the BladeCenter HS21 XM blade server introduced the option to have one IBM 4GB Flash Memory Device that used the USB 2.0 protocol. The 4GB drive can be used to boot 32-bit and 64-bit versions of Linux, such as Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES), but not Windows. Linux is an incredibly small operating system. You can boot versions from a USB key/thumbdrive (64MB) or CD (700MB) image, so it makes sense that a 4GB flash drive based on the USB protocol was a good fit for Linux. Windows, however, is not supported, because of the small 4GB size and USB protocol limitations. For Windows, you would add a SAS drive, boot from this hard drive, and use the 4GB flash drive for data only. So what's new this time? Here's a quick recap of the July 17 announcement. For the IBM BladeCenter HS21 XM blade servers, new models of internal "disk" storage:
- Single drive model A single 15.8GB solid-state disk drive, based on the SATA protocol.
In addition to the Linux operating systems mentioned above, the capacity and SATA protocol allow you to boot 32-bit and 64-bit versions of Windows 2003 Server R2, with plans in place for other platforms in the future, such as VMware. I am able to run my laptop Windows with only a 15GB C: drive, separating my data to a separate D: partition, so this appears to be a reasonable size.
- Dual drive model The dual drive fits in the space of a single 2.5-inch HDD drive bay. You can combine these in either RAID 0 or RAID 1 mode.
- RAID 0 gives you a total of 31.6GB, but is riskier. If you lose either drive, you lose all your data. Michael Horowitz of Cnet covers the risks of RAID zero here and here. However, if you are just storing your operating system and application, easily re-loadable from CD or DVD in the case of loss, then perhaps that is a reasonable risk/benefit trade-off.
- RAID 1 keeps the capacity at 15.8GB, but provides added protection. If you lose either drive, the server keeps running on the surviving drive, allowing you to schedule repair actions when convenient and appropriate. This would be the configuration I would recommend for most applications.
Until recently, solid state storage was available at a price premium only. Flash prices have dropped 50% annually while capacities have doubled. This trend is expected to continue through 2009. According to recent studies from Google and Carnegie Mellon, hard drives fail more often than expected. By one account, conventional hard disk drives internal to the server account for as much as 20-50% of component replacements. IBM analysis indicates that the replacement rate of a solid state drive on a typical blade server configuration is only about 1% per year, vs. 3% or more mentioned in these studies for traditional disk drives.
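To make the RAID 0 versus RAID 1 trade-off concrete, here is a back-of-envelope sketch using the figures from this post: 15.8GB per drive and roughly a 1% annual drive failure rate. The 1% figure is the illustrative blade-server number quoted above, not a specification, and the RAID 1 estimate ignores the rebuild window, so it slightly understates the real risk:

```shell
# Usable capacity and rough annual data-loss probability for the two
# dual-drive modes (figures from the post; illustrative only).
DRIVE_GB=15.8
AFR=0.01   # ~1% annual failure rate per solid-state drive

raid0_gb=$(awk -v g="$DRIVE_GB" 'BEGIN { printf "%.1f", 2*g }')
raid1_gb=$DRIVE_GB

# RAID 0 loses all data if EITHER drive fails;
# RAID 1 loses data only if BOTH drives fail in the same year.
raid0_loss=$(awk -v p="$AFR" 'BEGIN { printf "%.4f", 1-(1-p)^2 }')
raid1_loss=$(awk -v p="$AFR" 'BEGIN { printf "%.4f", p*p }')

echo "RAID 0: ${raid0_gb}GB usable, ~${raid0_loss} annual chance of losing everything"
echo "RAID 1: ${raid1_gb}GB usable, ~${raid1_loss} annual chance of losing everything"
```

Even with these rough numbers, mirroring cuts the loss probability by two orders of magnitude at the cost of half the capacity, which is why RAID 1 is the recommendation above for most applications.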
Flash drives use non-volatile memory instead of moving parts, so they are less likely to break down during high external environmental stress conditions, like vibration and shock, or extreme temperature ranges (0°C to +70°C) that would make traditional hard disks prone to failure. This is especially important for our telecommunications clients, who are always looking for solutions that are NEBS Level 3 compliant. Last year, I mentioned that flash drives could provide only a limited number of write and erase cycles, but today's advances in wear-leveling algorithms have nearly eliminated this limitation. As with any SATA drive, performance depends on workload. Solid state drives perform best as OS boot devices, taking only a few seconds longer to boot an OS than a traditional 73GB SAS drive. Flash drives also excel in applications featuring random read workloads, such as web servers. For random and sequential write workloads, use SAS drives instead for higher levels of performance. Part of IBM's Project Big Green, these flash drives are very energy efficient. Thanks to sophisticated power management software, the power requirement of the solid state drive can be 95 percent lower than that of a traditional 73GB hard disk drive. These 15.8GB drives use only 2W per drive versus as much as 10W per 2.5" hard drive and 16W per 3.5" hard drive. The resulting power savings can be up to 1,512 watts per server rack, with 50% heat reduction. So, even though this is not part of the System Storage product line, I am very excited for IBM. To find out if this will work in your environment, go to the IBM ServerProven website that lists compatibility with hardware, applications and middleware, or review the latest Configuration and Options Guide (COG).
technorati tags: IBM, Business, Continuity, solid, state, flash, disk, drive, announcement, blade, server, BladeCenter, H21, XM, 4GB, Flash, Memory, Device, USB2.0, Linux, RedHat, RHEL, Novell, SUSE, SLES, Windows, Project, Big Green, SATA, SAS, energy, efficient, efficiency, performance, NEBS, telecommunications, boot-over-SAN, Google, Carnegie Mellon, study, VMware
Continuing this week's theme on Business Continuity, I thought I would explore more on the identification of scenarios to help drive appropriate planning. As I mentioned in my last post, this should be done first. A recent post in Anecdote talks about the long list of cognitive biases which affect business decision making. This list is a good explanation of why so many people have a difficult time identifying appropriate recovery scenarios as the basis for Business Continuity planning. Their "cognitive biases" get in the way. Again, using my IBM Thinkpad T60 laptop as an example, here are a variety of different scenarios:
- Corrupted File System Some file systems are more fragile than others. If your NTFS file system gets corrupted, you might be able to run CHKDSK C: /F, but this just puts damaged blocks into dummy files; it doesn't really repair your files back to their pre-damage level. All kinds of things can damage the file system, including viruses, software defects, and user error. I keep my programs and data in separate file systems. C: has my Windows operating system and applications, and D: holds my pure data. If one file system is corrupted, the other one might be intact, mitigating the risk.
- Hard Disk Crash Hopefully, you will have temporary read/write errors to provide warning prior to a complete failure. In theory, if I kept a spare hard disk in my laptop bag, I could swap out the bad drive with the good drive. I don't have that. The three times that I have had a disk failure all occurred while I was in Tucson.
Instead, I keep the few files I need for my trip on a separate USB key, and carry a bootable Live CD, which allows you to boot entirely from the CDrom drive, either to run applications or to perform rescue operations. The latest one that I am trying out is Ubuntu Linux, which has OpenOffice 2.2 that can read/write PowerPoint, Word, and Excel files; the Firefox web browser; Gimp graphics software; and a variety of other applications, all in a 700MB CDrom image. I have even been able to get wireless (Wi-Fi) working with it, and the process to create your own customized Live CD with your own application packages is fairly straightforward. Combined with a writeable USB key, you can actually get work done this way. Special thanks to IBM blogger Bob Sutor for pointing me to this. (If you have a DVD-RAM drive, there are bigger Live CDs from SUSE and RedHat Fedora that provide even more applications.)
- Laptop Shell Failure This might catch some people by surprise. I have had the keyboard, LCD screen, or some essential port/plug fail on my laptop. The disk drive and CDrom drive work fine, but unless you have another "laptop" to stick them into, they don't help you recover. This can also happen if the motherboard fails, or the battery is unable to hold a charge. IBM provides a 24-hour turn-around fix. Basically, IBM sends me a laptop shell, no drive, no CDrom, with instructions to move the disk drive and CDrom drive from your broken shell to the new shell, then send the bad shell back in the same shipping box. Here, again, I am thankful that I keep my key files on a USB key. Often I travel with other IBMers, and can borrow their laptop to make presentations, check my e-mail, or do other work, until I can get my replacement shell. If you are travelling outside the US, you might be able to move your disk drive into a colleague's laptop, access the data, and copy it to your USB key or burn a copy on CD or DVD.
In a data center, many outages are really "failures to access data", but the data is safe. For example, power outages, network outages, and so on, can prevent people from using their IT systems, but the data is safe when these are re-established.
- Temporary Separation At times, I have been temporarily separated from my laptop. Three examples:
- A higher level executive had technical difficulties with his laptop, and usurped mine instead.
- A colleague forgot the power supply for his laptop, and borrowed my laptop instead. (I wish there were a standard for laptop power plug connectors.)
- Customs agents confiscate your laptop, give you a receipt, and eventually you get it back.
In all cases, I was glad that no "recovery" was required, and that the few files I needed were on my USB key. A few times, I was able to get by on the machines available at the nearest Internet Cafe in the meantime. With some imagination, you can recognize that this scenario is similar to the previous one for laptop shell failure. Here is a good example of how you can identify different scenarios, and then later discover they have similar properties in terms of recovery, and can be treated as one.
- Permanent Separation Laptops are stolen every day. Luckily, I've only had this happen twice to me in my career at IBM, and I managed to get a replacement soon enough. The key lesson here is to keep your USB key and recovery media in separate luggage. I know it is more convenient to keep all computer-related stuff in one place, but a thief is going to take your whole laptop bag, to make sure that all cables and power supplies are included, and is not going to leave anything behind. That would just slow them down.
In each case, some brainstorming, or personal experience, can help identify scenarios, identify what makes them unique from a recovery perspective, and plan accordingly. If you are looking to create or upgrade your Business Continuity plan, give IBM a call, we can help!
technorati tags: IBM, Business, Continuity, plan, plans, planning, Thinkpad, T60, laptop, NTFS, CHKDSK, hard disk crash, USB, key, Live, CD, LiveCD, DVD, Ubuntu, Linux, SUSE, RedHat, Fedora, shell, failure
Computers have become an integral part of our lives. Every day more and more users and organizations use them to store data, which is a type of property. Although most people take great care of their physical property, this is often not the case where virtual property is concerned. The majority of users are still oblivious to the fact that someone somewhere may be interested in what they are doing. They still believe that there is nothing on their computers that is of value to cybercriminals and that they are invulnerable to malware. This article takes a look at the issue from the other side, i.e. from the cybercriminals' point of view. Cybercrime has evolved considerably over the past few years, with new technologies being created and applied. As a result, cybercrime is no longer committed by individual amateurs; it has become a lucrative business run by highly organized groups. It has been variously estimated that during 2005 cybercriminals made from tens to hundreds of billions of dollars, a sum that far exceeds the revenue of the entire antivirus industry. Of course, not all this money was "earned" by attacking users and organizations, but such attacks account for a significant proportion of cybercriminals' income. In this two-part report, the first part will examine attacks on users and the second part will discuss attacks on organizations. This first part includes an analysis of what kind of virtual property is attractive to cybercriminals and what methods are used to obtain user data. What is Stolen So what kind of virtual property is of interest to a cyber thief? A study of malicious programs conducted by Kaspersky Lab virus analysts shows that four types of virtual property are most often stolen. It should be stressed that cyber scammers do not limit themselves to stealing the information listed below.
Information most frequently stolen from users includes:
- data needed to access a range of financial services (online banking, card services, e-money), online auction sites such as eBay, etc.;
- instant messaging (IM) and website passwords;
- passwords to mailboxes linked to ICQ accounts, as well as all email addresses found on the computer;
- passwords to online games, the most popular of which are Legend of Mir, Gamania, Lineage and World of Warcraft.
If you store any of the information above on your machine, then your data is of interest to cybercriminals. We'll take a look at why such data is stolen and what happens to it once it has been stolen later in this article (Dealing in Stolen Goods). The following section provides an overview of how the information is stolen. How it's Stolen In most cases, cybercriminals use dedicated malicious programs or social engineering methods to steal data. A combination of the two methods may be used for increased effectiveness. Let's start by taking a look at malicious programs which are designed to spy on users' actions (e.g. to record all keys pressed by the user) or to search for certain data in user files or the system registry. The data collected by such malicious programs is eventually sent to the author or user of the malicious program, who can then, of course, do what s/he wants with the information. Kaspersky Lab classifies such programs as Trojan-Spy or Trojan-PSW. The graph below shows the increase in the number of modifications in this category: Figure 1. Growth in the number of malicious programs designed to steal data. Spy programs arrive on victim machines in a number of ways: when the user visits a malicious website, via email, via online chat, via message boards, via instant messaging programs, etc. In most cases, social engineering methods are used in addition to malicious programs so that users behave as cybercriminals want them to.
One example is one of the variants of Trojan-PSW.Win32.LdPinch, a common Trojan that steals passwords to instant messaging applications, mailboxes, FTP resources and other information. After making its way onto the computer, the malicious program sends messages such as “Take a look at this < link to malicious program > Great stuff 🙂”. Most recipients click on the link and launch the Trojan. This is due to the fact that most people trust messages sent by ICQ, and don’t doubt that the link was sent by a friend. And this is how the Trojan spreads – after infecting your friend’s computer, the Trojan will send itself on to all addresses in your friend’s contact list, and at the same time will be delivering stolen data to its author. One particular cause for concern is that nowadays even inexperienced virus writers can write such programs and use them in combination with social engineering methods. Below is an example: a program written by someone who is not very proficient in English – Trojan-Spy.Win32.Agent.ih. When launched, the Trojan causes the dialog window shown below to be displayed.

Figure 2. Dialog window displayed by Trojan-Spy.Win32.Agent.ih

The user is asked to pay just $1 for Internet services – a classic case of social engineering:

- the user is given no time to consider the matter; payment must be made the day the user sees the message;
- the user is asked to pay a very small sum (in this case $1). This significantly increases the number of people who will pay. Few people will make the effort to try and get additional information if they are only asked for one dollar;
- deception is used to motivate the user to pay: in this case, the user is told that Internet access will be cut off unless payment is made;
- in order to minimize suspicion, the message appears to come from the ISP’s administrators. The user is expected to think that it is the administrators who have written a program via which payment can be made in order to save users time and effort.
Additionally, it would be logical for the ISP to know the user’s email address. The first thing that the program does is leave the user with no choice but to enter his/her credit card data. As no other option is available, an obedient user will click on “Pay credit card”. The dialog box shown below in Figure 3 will then be displayed:

Figure 3. Credit card information dialog displayed by Trojan-Spy.Win32.Agent.ih

Of course, even when the user fills in all the fields and clicks on “Pay 1$”, no money will be deducted. Instead, the credit card information is sent via email to the cybercriminals. Social engineering methods are also often used independently of malicious programs, especially in phishing attacks (i.e. attacks targeting customers of banks that offer online banking services). Users receive emails supposedly sent by the bank. Such messages state that the customer’s account has been blocked (this is, of course, untrue) and that the customer should follow the link in the message and enter his/her account details in order to unblock the account. The link is specially designed to look exactly like the Internet address of the bank’s website. In reality, the link leads to a cyber criminal’s website. If account details are entered, the cyber criminal will then have access to the account. However, cyber criminals aren’t only interested in credit card information. They are also interested in the email addresses which victim machines contain. How are these addresses stolen? Here, a crucial role is played by malicious programs which Kaspersky Lab classifies as SpamTools. These programs scan victim machines for email addresses, and the addresses harvested can be instantly filtered according to predefined criteria, e.g. the program can be configured to ignore addresses which clearly belong to antivirus companies. The harvested addresses are then sent to the author/user of the malicious program.
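The extract-and-filter step described above is simple to picture. The sketch below is illustrative only – the actual SpamTools implementations are not public, and the domains and sample text here are made up. The same pattern (regex extraction plus a domain exclusion list) is what legitimate forensic and mail-hygiene tools use when scanning text for addresses:

```python
import re

# Illustrative only: a domain exclusion list standing in for the
# "predefined criteria" described in the article. Domains are hypothetical.
EXCLUDED_DOMAINS = {"example-av.com", "example-vendor.org"}

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def harvest(text):
    """Extract addresses from text, dropping excluded domains."""
    found = set(EMAIL_RE.findall(text))
    return {a for a in found if a.split("@")[1].lower() not in EXCLUDED_DOMAINS}

sample = "Contact alice@mail.test or support@example-av.com for details."
print(sorted(harvest(sample)))  # ['alice@mail.test']
```

The filtering matters to the attacker because addresses at security vendors are more likely to lead to detection than to a successful mailing.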
There are other ways of planting Trojans on user computers, some of which are extremely brazen. There are cases where cyber criminals offered to pay website owners for loading malicious programs onto the machines of users who visited their websites. One example of this is the iframeDOLLARS.biz website: it offered webmasters a “partner program” that involved putting exploits on their websites so that malicious programs would be downloaded to the machines of those who viewed the sites. (Of course, this was done without the users’ knowledge). These “partners” were offered $61 per 1,000 infections.

Dealing in Stolen Goods

Unquestionably, the main motivation for stealing data is the desire to make money. Ultimately, all the information stolen is either sold or directly used to access accounts and get funds in this way. But who needs credit card data and email addresses? The actual data theft is only the first step. Following this, cyber criminals either need to withdraw money from the account, or sell the information received. If an attack yields details which are used to access an online banking system or an e-payment system, the money can be obtained in a variety of ways: via a chain of electronic exchange offices that change one e-currency (i.e. money from one payment system) into another, using similar services offered by other cybercriminals, or buying goods in online stores. In many cases, legalizing or laundering the stolen money is the most dangerous stage of the whole affair for the cyber criminals, as they will be required to provide some sort of identifying information, e.g. a delivery address for goods, an account number etc. To address this problem, cyber criminals use individuals who are called “money mules”, or “drops” in Russian cyber criminal jargon. “Drops” are used for routine work in order to avoid exposure, e.g. for receiving money or goods. The “drops” themselves are often unaware of the purposes for which they are used.
They are often hired by supposedly international companies via job-search websites. A “drop” may even have a signed, stamped contract which appears perfectly legal. However, if a “drop” is detained and questioned by law-enforcement agencies, s/he is usually unable to provide any meaningful information about his/her employer. The contracts and bank details always turn out to be fake, as do the corporate websites with the postal addresses and telephone numbers used to contact the “drops”. Now that the cybercrime business has matured, cyber criminals no longer have to look for “drops” themselves. They are supplied by people known as “drop handlers” in Russian cyber criminal jargon. Of course, each link in the chain takes a certain percentage for services rendered. However, cyber criminals believe that the additional security is worth the cost, especially as they haven’t had to earn the money themselves. As for stolen email addresses, they can be sold for substantial amounts of money to spammers, who will then use them for future mass mailings.

A few words about online games. An average player may find what can happen to his/her gaming account very interesting. Players often buy virtual weapons, charms, protection and other things for e-money. There have been cases where virtual resources have been sold for thousands of very real dollars. Cybercriminals can get access to all these riches without having to pay for them and can then sell them on at significantly reduced prices. This explains the growing popularity of malicious programs that steal virtual property used in online games. For example, by the end of July 2006 the number of known modifications of malicious programs that steal passwords for the well-known game Legend of Mir exceeded 1,300. Moreover, lately our analysts have started seeing malicious programs that attack not just one game but several at once. Scams are designed to get users to part with their money willingly.
In most cases scams take advantage of people’s love of getting something for nothing. Business is continually expanding into new areas; more and more goods and services are being made available online, with new offers appearing every day. Criminals have been quick to follow legitimate business into the online world and are now implementing online versions of real-world scams. As a rule, such schemes attract buyers or customers by offering goods at prices which are much lower than those offered by legitimate vendors. Figure 4 below shows a fragment of the web page of one such Russian e-store:

Figure 4. Low-priced laptops on a scam website

As Figure 4 shows, the prices are impossibly low. Such low prices should arouse user suspicion, and make them think twice about buying things from such a website. In order to get around this problem, cyber criminals may give the following justifications:

- the sale of confiscated items;
- the sale of goods purchased with stolen credit cards;
- the sale of goods which were purchased on credit using fake names.

Such explanations are, of course, extremely questionable. However, many people choose to believe them: they think it’s all right to sell goods cheaply if the vendor didn’t have to pay for them. When ordering, customers are asked to make a down payment or sometimes even prepay the full price. Naturally, once payment has been made, there will be no response from the cyber criminals’ phone numbers or email addresses. And of course, the purchaser won’t get his/her money back. This scheme can be adapted for different locations. For example, in Russia goods purchased online are commonly delivered by courier. In this case cybercriminals may require an advance payment to cover delivery, explaining that couriers are often sent to addresses where no goods have been ordered, but the e-store owners still have to pay the courier. The cyber criminals then receive the delivery charge, while the customer receives nothing.
Bogus online stores are not the only trap for users. Nearly all criminal schemes which occur in the real world are reflected in equivalent scams in the cyber world. One more example of criminal online schemes is a “project” which offers users the opportunity to invest their money at a very attractive rate – so attractive that it is hard to resist. Figure 5 shows part of one such “investment” website:

Figure 5. A scam “investment” website

Of course, there’s no need to comment on the interest rate offered. In spite of the ludicrous nature of such schemes, there are people who trust such “projects”, invest, and lose their money. The list is endless: new bogus e-money exchange websites, new online financial pyramids (similar to real-world pyramid scams), spam which describes special secret electronic wallets that double or triple the amounts received and other similar schemes are surfacing all the time. As mentioned above, all these scams are designed to play on people’s desire to get something for nothing.

In 2006 a dangerous trend became clear: cyber extortion is evolving rapidly in Russia and other CIS countries. In January 2006 a new Trojan program, Trojan.Win32.Krotten, appeared; this Trojan modified the system registry of the victim computer in such a way as to make it impossible for the user to use the computer. After the computer was rebooted, Krotten displayed a message demanding that 25 hryvnia (about $5) be transferred to the author’s bank account, promising that the computer would then be restored to normal. Computer-literate users would be able to revert the modifications on their own, or re-install the operating system, thereby getting rid of the malicious program. However, most other families of malicious programs designed with extortion in mind are not so easy to get rid of, and the question “to pay or not to pay” was more often than not answered in the affirmative.
Krotten was distributed via online chat and on message boards in the guise of a sensational program that provides free VoIP, free Internet access, free access to cellular networks etc. On January 25, 2006 Trojan.Win32.Krotten was followed by the first modification of Virus.Win32.GpCode. This malicious program was mass-mailed and encrypted data files stored on the hard drive in such a way that the user could not decrypt them. Consequently, the user would have to pay for the data to be decrypted. Folders with encrypted data contained a readme.txt file with the following content:

Some files are coded by RSA method. To buy decoder mail: email@example.com with subject: RSA 5 68251593176899861

In spite of the claim that encryption was performed using an RSA algorithm, the author of the program had actually used standard symmetric encryption. This made restoring data easier. In the course of just six months, GpCode evolved considerably, using different, more complex encryption algorithms. Different variants of the program demanded different sums for decrypting data: the price varied from $30 to $70. These programs were only the beginning. The number of families of programs designed for extortion increased during the year (Daideneg, Schoolboys, Cryzip, MayArchive and others appeared), and the programs also increased their geographical reach. By the middle of the year, such malicious programs had been detected in Great Britain, Germany and other countries. However, other methods of extortion continued to be used as extensively as before. One example of this was the attack on Alex Tew, 21, a British student who created a website where he sold advertising space in the form of squares a few pixels across. Tew managed to make $1 million in four months with this unusual idea. Cybercriminals demanded that the successful student pay them a large amount of money, and threatened to organize a DDoS attack on his website if payment was not made.
Three days after receiving the threat the student’s website underwent a DDoS attack. To his credit, he refused to pay. But why are extortion and blackmail so popular among cybercriminals? The answer is simple: such crimes are facilitated by the victims themselves, who are ready to yield to any demands in order to have their lost or damaged data restored.

How to Avoid Falling Victim to Cybercriminals

The reader may get the impression that this article aims to scare users, and conclude that only an antivirus program can save their data. In actual fact, there is no antivirus solution that will help Internet users who don’t take elementary precautions. Below is a list of recommendations to help you avoid being easy prey for cyber criminals:

- Before making any payment or entering any personal data, find out what other users think of the relevant website. However, do not trust comments left on that site, as they may have been written by a cybercriminal. It’s best to get the opinions of people you know personally.
- Avoid giving any details of your bank cards over the Internet. If you need to make a payment over the Internet, get a separate card or e-money account and transfer the necessary amount to the card or account just before making a purchase.
- If an online store, investment fund or other organization has a website on a third-level domain, especially one provided by a free hosting service, this should arouse suspicion. A self-respecting organization will always find the small sum needed to register a second-level domain.
- Check where and when the domain used by the online shop was registered, where the shop itself is located, and whether the addresses and telephone numbers provided are genuine. A simple telephone call can solve several problems at once by confirming or dispelling your doubts. If the domain name was registered a month ago and you are told that the company has been in the market for several years, this warrants more detailed investigation.
- Do not pay money up front, even for courier delivery. Pay for all services only once you have received the goods. If you are told that people often order goods to be delivered to wrong addresses, meaning that couriers can’t deliver them, don’t believe this. It’s better to err on the side of caution and choose a different store than risk being swindled.
- Never reply to mailings made by banks, investment funds and other financial organizations: such organizations never make mass mailings. If in doubt, use the telephone to check whether mail really does come from the alleged sender. But don’t use the telephone number given in the message: if the message was sent by cybercriminals, the number given will also belong to them.

This first part of our article gives an overview of the most common ways to steal virtual property from Internet users. If you don’t secure your property, nobody else will. As the saying goes, forewarned is forearmed, and hopefully the information provided here has been useful. The second part of this article will cover similar attacks on organizations, provide statistical information and look at the evolution of trends in the information black market.
Could updated analog computer technology – popular from about 1940-1970 – be developed to build high-speed CPUs for certain specialized applications? Researchers at the Defense Advanced Research Projects Agency are looking to discover – through a program called Analog and Continuous-variable Co-processors for Efficient Scientific Simulation (ACCESS) – what advances analog computers might have over today’s supercomputers for a large variety of specialized applications such as fluid dynamics or plasma physics.

“[Analog computers and] their potential to excel at dynamical problems too challenging for today’s digital processors may today be bolstered by other recent breakthroughs, including advances in micro-electromechanical systems, optical engineering, microfluidics, metamaterials and even approaches to using DNA as a computational platform. It is conceivable that novel computational substrates could exceed the performance of modern CPUs for certain specialized problems, if they can be scaled and integrated into modern computer architectures,” said Vincent Tang, program manager in DARPA’s Defense Sciences Office, in a statement.

“Critical equations, known as partial differential equations, describe fundamental physical principles like motion, diffusion, and equilibrium. But because they involve continuous rates of change over a large range of physical parameters relating to the problems of interest – and in many cases also involve long-distance interactions – they do not lend themselves to being broken up and solved in discrete pieces by individual CPUs. A processor specially designed for such equations may enable revolutionary new simulation capabilities for design, prediction, and discovery. But what might that processor look like?”

DARPA recently issued a Request For Information soliciting the industry for details on how such analog or hybrid analog computer systems might work.
The RFI is requesting responses in four interrelated Technical Areas, as DARPA calls them. These include:

- Scalable, controllable, and measurable processes that can be physically instantiated in co-processors for acceleration of computational tasks frequently encountered in scientific simulation
- Algorithms that use analog, non-linear, non-serial, or continuous-variable computational primitives to reduce the time, space, and communicative complexity relative to von Neumann/CPU/GPU processing architectures
- System architectures, schedulers, hybrid and specialized integrated circuits, compute languages, programming models, controller designs, and other elements for efficient problem decomposition, memory access, and task allocation across multi-hybrid co-processors
- Methods for modeling and simulation via direct physical analogy

Analog computers solve equations by manipulating continuously changing values instead of discrete measurements. In their prime, most analog computers were designed for specific applications, like heavy-duty math or flight component simulation. “In the 1930s, for example, Vannevar Bush – who a decade later would help initiate and administer the Manhattan Project – created an analog ‘differential analyzer’ that computed complex integrations through the use of a novel wheel-and-disc mechanism. And in the 1940s, the Norden bombsight made its way into U.S. warplanes, where it used analog methods to calculate bomb trajectories,” DARPA noted.
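The contrast DARPA draws is easiest to see in how a digital processor actually handles one of these equations. The sketch below is a minimal, illustrative explicit finite-difference scheme for the 1-D diffusion equation (not anything from the RFI): space and time are chopped into discrete pieces, which is exactly the "broken up" approach the quote describes, whereas an analog machine would let a continuous physical quantity evolve according to the same law.

```python
# Illustrative only: a minimal explicit finite-difference solver for the
# 1-D diffusion equation du/dt = D * d2u/dx2. Digital CPUs solve the PDE
# in discrete pieces like this; an analog co-processor would instead let
# a continuous physical quantity (voltage, fluid, light) evolve directly.
D, dx, dt = 1.0, 0.1, 0.004        # dt <= dx^2 / (2*D) keeps the scheme stable
n_cells, n_steps = 21, 200

u = [0.0] * n_cells
u[n_cells // 2] = 1.0              # initial heat spike in the middle

for _ in range(n_steps):
    u = [u[i] if i in (0, n_cells - 1)   # boundaries held at zero
         else u[i] + D * dt / dx**2 * (u[i-1] - 2*u[i] + u[i+1])
         for i in range(n_cells)]

# After 200 steps the spike has diffused into a smooth bump.
peak = max(u)
```

Each time step touches every cell, and finer grids or higher dimensions multiply the work; that scaling is the cost the ACCESS program hopes analog substrates could avoid.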
Nov 95

Level of Govt: State. Function: Transportation Management. Problem/situation: Weeds that grow along highways are costing California $25 million annually. Solution: Weed-seeking technology helps destroy weeds more efficiently, saving money and chemicals. Jurisdiction: California. Vendors: Patchen California Inc. Contact: Cal Schiefferly, Caltrans, 916-227-9604, fax 916-227-0977; Larry Shields, Caltrans, 916-654-4329, fax 916-653-3291

SACRAMENTO, Calif. - Among the human endeavors that call for high technology, killing weeds seems to rank in the lower strata. Yet weeds are exactly what the California Department of Transportation (Caltrans) is targeting with its latest application of technology. Department officials have leased an innovative sprayer that uses computer technology and advanced optics to determine whether a weed is present. If so, the sprayer triggers the appropriate nozzle and the weed is sprayed. If not, the machine passes over the ground without firing. The result is that only weeds are sprayed, not bare ground. The savings in chemical usage can be tremendous. That's important not only from a budgetary standpoint, but also in helping the department meet its goal of cutting chemical use by 50 percent by the year 2000 and by 80 percent by 2012, explained Larry Shields, landscape program administrator for Caltrans. "We believe we can save 40 percent of our spot treatment chemical use," he said.

Tracking Chemical Use

The sprayer also ties into other aspects of the coming computer revolution in agriculture. Because the sprayer utilizes computer technology, on-board memory can be added. That means it can record chemical use data, which can be downloaded into an office computer. If Caltrans wants, it could also note the exact location of the chemical application by equipping the truck with field-mapping software and a Global Positioning System (GPS) monitor. The department leased the sprayer in June.
In the first trials, the sprayer has worked as advertised. But there have been logistical problems. The sprayer is used to control weeds along state highways, and the many physical obstacles present - guardrails, signposts, etc. - mean that operators must periodically adjust the sprayer's boom. In addition, the boom's eight-foot width makes it too long for the three-foot and five-foot strips found along some highways. Those are correctable problems, said Dale Wallander, sales representative for Los Gatos, Calif.-based Patchen California Inc., the company that manufactures the sprayer. The width is an easy fix, he said. The sprayer has individual sensors and nozzles, and the numbers of each can be chosen at the time of ordering. The physical obstacles along roadways are a bigger hurdle, but can be overcome by adding hydraulics to raise and lower the boom, adding a hinge that would allow the boom to snap backward until it clears the obstacle, or some other engineering feature, Wallander said. "There's always a way to sit down and redesign the spray bar," he explained. Six months will be used to experiment with the sprayer, and to get feedback from the operators who run the equipment. "This is still in the early stages," explained Cal Schiefferly, associate equipment engineer with Caltrans. "We need to see how the equipment actually works in the field. I don't have any concerns about its ability to spray weeds, but we need to hear back from the operators before we'll know how it's going to work and what kind of changes we'll need to make." The sprayer - called the WeedSeeker - emits thousands of bursts of light each second. Within that spectrum are a couple of wavelengths that announce the presence of chlorophyll. A sensor notes that, and triggers the appropriate nozzle. The sprayer was introduced four years ago into the agricultural market. Caltrans will use it much as California farmers do - as an alternative to spot spraying by hand.
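The detection principle is simple to sketch. Living plants reflect far more near-infrared light than red light, so comparing reflectance at those two wavelengths (the idea behind vegetation indices such as NDVI) separates a green weed from bare ground. The snippet below is illustrative only; Patchen has not published the WeedSeeker's actual algorithm, and the threshold and readings here are hypothetical.

```python
# Illustrative only: the WeedSeeker's real algorithm and thresholds are
# proprietary. This sketches the general reflectance-ratio idea: live
# vegetation reflects much more near-infrared (NIR) than red light.
NDVI_THRESHOLD = 0.3   # hypothetical cutoff between bare soil and live plants

def ndvi(red: float, nir: float) -> float:
    """Normalized Difference Vegetation Index from two reflectance readings."""
    return (nir - red) / (nir + red)

def should_spray(red: float, nir: float) -> bool:
    """Fire the nozzle only when the reading looks like chlorophyll."""
    return ndvi(red, nir) > NDVI_THRESHOLD

# Bare soil reflects red and NIR about equally; a weed reflects much more NIR.
assert not should_spray(red=0.30, nir=0.35)   # bare ground: nozzle stays off
assert should_spray(red=0.08, nir=0.50)       # green weed: nozzle fires
```

Because the decision is a per-sensor comparison, each nozzle can be triggered independently as its sensor passes over a weed, which is what limits spraying to plants rather than the whole boom width.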
Caltrans oversees 15,000 miles of roadsides and has an annual weed control budget of $25 million. The standard program is to spray a pre-emergent herbicide on the shoulders of the road, then come back and spot spray any weeds that escape the first application. Pre-emergent herbicides are those that are applied before the weed emerges. They reside in the top inch of soil and kill weeds as the weeds germinate in the spring. However, they lose their effectiveness over time, and some escapes inevitably occur. To see the results yourself, next time you're driving on the interstate, look at the shoulder. Those few green weeds that have popped up amid the bare strip of ground are escapes.

Standard Approaches

Weed control is important for fire safety, to maintain sightlines for drivers, and for aesthetics. The standard approach has been to control escapes in one of two ways:

1.) Send out a truck and a single driver, who triggers the spray boom whenever he or she sees a weed. The drawbacks are that the driver's attention is distracted, and more chemical is used than is needed because the entire boom is activated while the weed has a width of only inches; and

2.) Send out a truck with a driver and an employee on the front. The employee has a hand sprayer, which he or she uses to spray weeds as they occur. This is a more efficient use of chemical, but requires two salaries and is a hot and dusty job for the employee with the hand sprayer.

The WeedSeeker has an additional advantage over a human spot sprayer. "When you're hand-spraying, you'll almost always put on too much chemical. It's hot and dusty out there and when you see a weed, it's human nature to want to be sure you've killed it," said Jim Beck, president of Patchen California and inventor of the WeedSeeker. The difficulty will come in determining whether the WeedSeeker's potential translates into practical benefits, said Shields. "We're approached by companies all the time," he said.
"We listen to them, but the proof comes when we get the product out in the field." Shields said preliminary reports are promising. The sprayer will be moved around the state over the next six months so that different Caltrans operators can use it. The technology will work best, predicts Schiefferly, along rural stretches, where physical obstacles are rare and there can be several hundred feet between weeds. The results also will depend on another human factor familiar to anyone who is working to introduce new technology: how quickly line employees will adapt to a new idea. "Some of them will make it work; others won't," predicted Schiefferly.
What You'll Learn

- Basic architecture and concepts of Microsoft SQL Server 2016
- Similarities and differences between Transact-SQL and other computer languages
- Write SELECT queries
- Query multiple tables
- Sort and filter data
- Using data types in SQL Server
- Modify data using Transact-SQL
- Use built-in functions
- Group and aggregate data
- Use subqueries
- Use table expressions
- Use set operators
- Use window ranking, offset and aggregate functions
- Implement pivoting and grouping sets
- Execute stored procedures
- Program with T-SQL
- Implement error handling
- Implement transactions

Prerequisites

- Working knowledge of relational databases.
- Basic knowledge of the Microsoft Windows operating system and its core functionality.

Who Needs To Attend

Database administrators, database developers and business intelligence professionals, as well as SQL power users who aren’t necessarily database-focused, such as report writers, business analysts and client application developers.
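The core query skills in the outline – joining tables, filtering, grouping and aggregating – look like this in practice. The snippet below uses Python's built-in sqlite3 with a made-up two-table schema so it can run anywhere; the SQL shown is standard and carries over to Transact-SQL almost unchanged (T-SQL adds its own extensions on top).

```python
import sqlite3

# Hypothetical two-table schema, just to demonstrate the clauses covered
# in the outline: SELECT, JOIN, WHERE, GROUP BY, ORDER BY, aggregates.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER,
                         amount REAL);
    INSERT INTO customers VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO orders VALUES (1, 1, 50.0), (2, 1, 25.0), (3, 2, 10.0);
""")

rows = con.execute("""
    SELECT c.name, COUNT(o.id) AS n_orders, SUM(o.amount) AS total
    FROM customers AS c
    JOIN orders AS o ON o.customer_id = c.id
    WHERE o.amount > 5
    GROUP BY c.name
    ORDER BY total DESC
""").fetchall()

print(rows)  # [('Ann', 2, 75.0), ('Bob', 1, 10.0)]
```

The course then builds on this foundation with T-SQL-specific topics such as window functions, stored procedures, error handling and transactions.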
According to Xijing Hospital, the implantation of these three 3-D printed bones represented the world’s first clinical applications for collar and shoulder bones. All three patients had malignant tumors that required removal. Using a 3-D printer to replace the removed bones in each patient was identified as beneficial for several reasons. Scans of the patients allowed the hospital to reproduce titanium implants in the exact shape of the patients’ original bones. The surface textures of 3-D printed bones are also more similar to real bones than the smooth surfaces of traditional artificial replacements, allowing for enhanced muscle, bone and soft tissue growth around the implant, while also lowering the chance of fluid build-up and infection.
The Transport Systems Catapult (TSC) and Oxbotica claim to have completed the first-ever trial of a driverless car in the UK. The vehicle demonstration took place on pavements around Milton Keynes train station and business district. It marked the conclusion of the LUTZ Pathfinder Project, which has been developing the technology for the past 18 months in order to explore how vehicles interact with pedestrians and other road-users. The LUTZ project team has been running a number of exercises in preparation for the demonstration, including virtual mapping of Milton Keynes, assessing public acceptance, conducting the necessary safety planning and establishing the regulatory environment with the support of Milton Keynes Council. The TSC is one of ten ‘elite technology and innovation centers’ established and overseen by the UK’s innovation agency, Innovate UK.

Oxbotica: mapping out the future

The technology behind the vehicles, known as Selenium, was created by autonomous software company Oxbotica – a “spin-off” set up by Oxford University academics. The Selenium system was developed to be “vehicle-agnostic” – meaning it can be applied to cars, self-driving pods and warehouse truck fleets. Supposedly, the technology does not rely on GPS to operate, which allows it to transition between overground and underground environments easily. Instead, Oxbotica says Selenium uses data from cameras and LiDAR (Light Detection & Ranging) systems to navigate its way around the environment. LiDAR technology uses light sensors to measure the distance between the sensor and the target object. Supposedly, LiDAR produces very accurate, high-resolution 3D data that can be used to map out urban environments and detect objects. The Selenium system is also set to be deployed to eight shuttle vehicles in Greenwich, London, as part of the Greenwich Automated Transport Environment (GATEway) project.
The shuttles will be used by members of the public in Greenwich in a six-month demonstration starting in early 2017.

Vehicles that talk

The race to develop cars that ‘talk’ is well and truly on. Vodafone also recently announced that it has started working on technology to see vehicles ‘talking to each other by 2020’. In a press release, the company said it has started early testing of LTE-vehicle-to-everything (V2X) – a technology that allows cars to communicate with their surroundings – on a private test track in the UK, which it will trial on roads in Germany. This is part of the UK Cite project, a 30-month project made up of consortium members including the likes of Huawei, Jaguar Land Rover and Siemens. This group says it will equip over 40 miles of urban roads and motorways with the technology to allow connected vehicle trials by 2017.

Britain at ‘the forefront of innovation’

It seems that the UK is proving to be quite a hotbed for testing driverless technology, largely due to its relatively liberal laws on the testing of driverless vehicles – unlike say, the US, where public road testing of driverless cars is only legal in eight of 50 states. A large number of companies are already working on driverless car technology in the UK, including Google and Uber. The UK government has also shown its support by giving funding from its £100 million Intelligent Mobility Fund (albeit an announcement under previous Chancellor, George Osborne), and via funding from Innovate UK. In a statement regarding the trials in Milton Keynes, business and energy secretary Greg Clark suggested that “Today’s first public trials of driverless vehicles in our towns is a ground-breaking moment and further evidence that Britain is at the forefront of innovation.” “The global market for autonomous vehicles presents huge opportunities for our automotive and technology firms. And the research that underpins the technology and software will have applications way beyond autonomous vehicles.”
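The ranging principle behind LiDAR is simple time-of-flight arithmetic: time a light pulse's round trip and halve it. A minimal illustrative sketch (not Oxbotica's Selenium code; the function name and the example timing are invented for illustration):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to a target from a LiDAR time-of-flight measurement.
    The pulse travels out and back, so the round trip is halved."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after 200 nanoseconds indicates a target ~30 m away.
print(round(tof_distance(200e-9), 2))  # → 29.98
```

Real scanners repeat this measurement millions of times per second across many angles, which is how the dense 3D point clouds described above are built.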
https://internetofbusiness.com/oxbotica-trial-driverless-car-uk/
There’s an art and a science behind the Internet of Things (IoT) and in incorporating the tremendous data that this modern phenomenon generates. In its brief but remarkable history, IoT has been deployed in some fairly simplistic and classic architecture schemes. What lies beyond the immediate will prove that, in some cases, major IoT challenges have not yet taken these schemes to the point of “critical mass” or outright failure. The growth of data, the growth of complexity, and countless other elements combine to demand a better strategy for growth and change than the simplistic architectures being deployed today.

The IoT Challenge

The future is stacked with challenges and changes ahead in the story of IoT. Many of these changes will present technological advancements, and do so at scale. Industry reports universally tell the tale of exponential growth in data-focused metrics. In fact, current projections indicate that by the year 2020:
- 30-50 billion IoT connections will be in place
- $8.9 trillion will be earned across all currently identified IoT-related sectors
- Close to $6 trillion will be spent on IoT solutions
- The key verticals will be cities, industrials, homes, automobiles, and wearables

All the while, the number of applications that incorporate the Internet of Things continues to grow, and the technology will continue to find new industries. Among the emerging applications today that are helping to drive IoT, hints of the coming future can be seen throughout its wide reach:
- Smart Manufacturing. The Internet of Things is a natural home for manufacturing and automation. Bringing the logic and intelligence of IoT and Big Data into the process adds efficiencies and opportunities that were unheard of just a few short years ago. Today, sensors and measurement devices of all types can be found throughout the manufacturing process, which is a trend that continues to grow.
Flow optimization, real-time inventories, asset tracking, and other opportunities continue to prove the value of IoT in this application.
- Building and Home Automation. This industry is one of the most visible IoT applications, with such devices as smart thermostats, intelligent lighting, and other enhancements. Energy optimization, access control, and other automation elements enhanced by IoT are only beginning to tap into the potential benefits. Better data and better integrations equal better benefits, and IoT is now becoming a part of new home designs. This industry will continue to grow and see widespread adoption.
- Smart Cities. Municipalities provide an entire spread of services that many may not readily be aware of. Just as industry is driven to become more efficient, city-wide organizations strive to save time, resources, and man hours. Out of this drive, devices such as residential e-meters, pipe leak detection, traffic control systems, and more are becoming interconnected with the web and increasingly intelligent at the same time.
- Smart and Driverless Vehicles. Wouldn’t it be great if nobody actually had to drive anywhere? It would make for cleaner commutes, save time, save fuel, and it would possibly be safer than human driving. That is just one of the dreams being pursued by engineers, the development of which you see all the time in the news cycles. There are plenty of targets for this futuristic technology: cars, buses, and even agricultural combines can benefit from driverless technologies. These systems are dependent on IoT technologies which can sense, adjust, and coordinate based on an interconnected fabric of data.
- Wearables. Smartphones paved the way to big expectations for early wearable devices, which have been around for a couple of years now. Rest assured, the earliest models were trailblazing peeks into the future of what wearable devices could do. The ultimate metric in the human experience is, not surprisingly, human.
IoT devices that interact and report between the person, applications and the knowledge of the web are the ultimate in the wearables experience. Fitness, health, entertainment, and communications only begin to tap the potential of wearables, an industry that will continue to evolve while becoming the next presumptive modern companion device.
- Health Care. Extending that human-to-device experience, health care is an application that has only started on this IoT journey. It is somewhat related to wearables, but deserves its own category. Applications in data gathering, monitoring, drug tracking, and hospital asset tracking are pushing the boundaries of medical applications of IoT. Capabilities and efficiencies produced by these tools will help drive the next generation of medical IoT devices, bringing patients and care closer together than ever before.

It is clear throughout these brief examples that IoT is not only growing, it is also changing. This growth and change are a big part of the challenges that lie ahead. Data overload, multiple connectivity options, security, complexity, processing, and rendering are among the major elements painting this picture of change and growth. Many components of this future are still being worked out. Multiple vendors, multiple protocols, power requirements and more still have a long way to go. No matter which way it goes, there is little doubt that the number of devices and sensors is on an upward trend, and along with that will come large quantities of information that will have to be collected, analyzed, and stored. The enterprise is looking at an even deeper challenge because, in the widespread transition to this new paradigm, it must also incorporate the standards of the hundreds of thousands of applications that are incorporating IoT, in addition to the hundreds of requirements that exist in the realm of business (such as SLAs, PCI, speed to market, etc.).
Hybrid is the Answer

The answer to the challenge of building the most efficient, highly scalable IoT infrastructure (or any IT infrastructure, for that matter) calls for utilizing the most flexible and efficient platform: hybrid cloud technology. Hybrid cloud provides several key benefits that are naturally suited for IoT. Distributed hybrid technology specifically provides:
- Data isolation, security, and privacy
- Databases and big data, which require I/O performance and scalability
- Cloudbursting – the cloud-based ability to quickly scale up or down, as needed
- Lock-in avoidance – changing standards, integrations, and portability are not a hybrid cloud constraint
- Best venue – hybrid clouds provide the ability to select the optimal infrastructure for applications

These elements only begin to tell the tale of an environment conducive to IoT growth. Hybrid allows for reduced costs, increased flexibility, better choices, accelerated deployment, improved reusability, and accelerated innovation. An organization that embraces hybrid cloud technology in its IoT strategy can expect freedom from:
- Overpaying for solutions
- Data isolation
- Architecture-based performance issues
- Limitations on customization
- Excessive regulatory challenges

We’d love to hear your thoughts on the evolution of the IoT, the hybrid cloud, and what infrastructure you think is best to create the most efficient, highly available IoT environment possible. Share this post and your thoughts on social media and let’s start the discussion!
http://www.codero.com/blog/iot-growing-up-hybrid/
Bioengineers at Stanford University have developed a new type of circuit board modeled on the human brain. The Neurogrid, as it’s called, operates about 9,000 times faster while using significantly less power than a typical PC. While no match for the power and capability of the human brain that inspired it, the new advance has major implications for robotics and computing. The closer these bio-inspired chips get to the real thing (i.e., an actual human brain), the better they will be at reproducing biological actions. Potential applications include prosthetic limbs that could mimic the speed and complexity of biological entities. Leading the project is Kwabena Boahen, associate professor of bioengineering at Stanford. In a recent article for the Proceedings of the IEEE, Boahen highlights some of the main differences between the human-powered “computer” and the man-made one. A mouse brain, for example, is 9,000 times faster than a computer built to simulate its functions, and the computer takes 40,000 times more power to operate. “From a pure energy perspective, the brain is hard to match,” Boahen states in an interview with Stanford News. Boahen and his team represent one of the leading efforts in the field of neuromorphic research, which aims to reproduce brain functionality using a mix of silicon and software. Their iPad-sized system, called Neurogrid, is made up of a circuit board with 16 “Neurocore” chips. One Neurogrid board can simulate one million neurons and billions of synaptic connections in real time. Power efficiency was a key design metric for the chips. The project is said to be able to simulate orders of magnitude more neurons and synapses than other bio-inspired computing devices, using the footprint and power profile of a typical tablet computing device. “The human brain, with 80,000 times more neurons than Neurogrid, consumes only three times as much power,” Boahen writes in the IEEE piece.
“Achieving this level of energy efficiency while offering greater configurability and scale is the ultimate challenge neuromorphic engineers face.” The million-neuron prototype was made possible with a grant from the National Institute of Health’s five-year Pioneer Award. Next up for the project will be reducing design costs and developing more straightforward compiler software. Boahen suggests that modern mass-scale manufacturing processes would lower the price significantly, from roughly $40,000 for the prototype down to about $400 per unit. The end goal is to make the technology accessible from a cost and usability standpoint.
https://www.hpcwire.com/2014/05/05/brain-inspired-device-simulates-one-million-neurons/
This tutorial is a brief introduction to the Simple Network Management Protocol (SNMP), an application-layer protocol defined by the Internet Architecture Board (IAB) in RFC 1157 for exchanging management information between network devices. It is a part of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite. SNMP is one of the most widely accepted protocols for managing and monitoring network elements. Most professional-grade network elements come with a bundled SNMP agent. These agents have to be enabled and configured to communicate with the network management system (NMS).

A manager or management system is a separate entity that is responsible for communicating with the network devices on which SNMP agents are implemented. This is typically a computer that is used to run one or more network management systems.

A managed device or network element is a part of the network that requires some form of monitoring and management, e.g. routers, switches, servers, workstations, printers, UPSs, etc.

The agent is a program that is packaged within the network element. Enabling the agent allows it to collect the management information database from the device locally and makes it available to the SNMP manager when queried. These agents can be standard (e.g. Net-SNMP) or specific to a vendor (e.g. HP Insight agent).

Every SNMP agent maintains an information database describing the managed device's parameters. The SNMP manager uses this database to request specific information from the agent and further translates the information as needed for the Network Management System (NMS). This database, shared between the agent and the manager, is called the Management Information Base (MIB). Typically these MIBs contain a standard set of statistical and control values defined for hardware nodes on a network. SNMP also allows the extension of these standard values with values specific to a particular agent through the use of private MIBs.
In short, MIB files are the set of questions that an SNMP manager can ask the agent. The agent collects these data locally and stores them, as defined in the MIB. So, the SNMP manager should be aware of these standard and private questions for every type of agent.

A Management Information Base (MIB) is a collection of information for managing a network element. The MIB comprises managed objects identified by the name Object Identifier (Object ID or OID). Each identifier is unique and denotes specific characteristics of a managed device. When queried, the return value of each identifier could be of a different kind, e.g. text, number, counter, etc.

There are two types of managed object or Object ID: scalar and tabular. They are best understood with an example:

Scalar: the device vendor's name; there can be only one result. (As the definition says: "Scalar objects define a single object instance.")

Tabular: CPU utilization of a quad processor; this would give a result for each CPU separately, meaning there will be four results for that particular Object ID. (As the definition says: "Tabular objects define multiple related object instances that are grouped together in MIB tables.")

Every Object ID is organized hierarchically in the MIB. The MIB hierarchy can be represented as a tree structure with individual variable identifiers. A typical Object ID is a dotted list of integers. For example, the OID in RFC 1213 for "sysDescr" is .1.3.6.1.2.1.1.1

The simplicity of its information exchange has made SNMP a widely accepted protocol, the main reason being its concise set of commands: GET, GETNEXT, SET, TRAP, and RESPONSE, with GETBULK and INFORM added in later versions.

Being part of the TCP/IP protocol suite, SNMP messages are wrapped in User Datagram Protocol (UDP) datagrams, which are in turn wrapped and transmitted in the Internet Protocol, following the four-layer model developed by the Department of Defense (DoD). Since its inception, SNMP has gone through significant upgrades.
However, SNMP v1 and v2c are the most implemented versions of SNMP. Support for SNMP v3 has recently started catching up, as it is more secure compared to the older versions, but it has still not reached a considerable market share.

SNMPv1 is the first version of the protocol, defined in RFCs 1155 and 1157.

SNMPv2c is the revised protocol, which includes enhancements of SNMPv1 in the areas of protocol packet types, transport mappings, and MIB structure elements, but uses the existing SNMPv1 administration structure ("community based", hence SNMPv2c). It is defined in RFC 1901, RFC 1905, RFC 1906, and RFC 2578.

SNMPv3 defines the secure version of SNMP and also facilitates remote configuration of the SNMP entities. It is defined by RFC 1905, RFC 1906, RFC 3411, RFC 3412, RFC 3414, and RFC 3415.

Though each version matured towards richer functionality, additional emphasis was given to security with each upgrade. Here is a summary of each edition's security model:

|SNMP v1||Community-based security|
|SNMP v2c||Community-based security|
|SNMP v2u||User-based security|
|SNMP v2||Party-based security|
|SNMP v3||User-based security|
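The lexicographic OID ordering described above is what lets a manager walk an agent's MIB with successive GETNEXT requests. A rough sketch in Python (the toy MIB list and the `getnext` helper are illustrative, not part of any real agent):

```python
def parse_oid(oid: str) -> tuple:
    """Turn a dotted OID string like '.1.3.6.1.2.1.1.1' into a tuple of ints,
    so that Python's tuple comparison gives lexicographic OID order."""
    return tuple(int(part) for part in oid.strip(".").split("."))

def getnext(mib_oids, current: str):
    """Return the first OID strictly after `current` in lexicographic order —
    the ordering an agent uses to answer a GETNEXT request."""
    for oid in sorted(mib_oids, key=parse_oid):
        if parse_oid(oid) > parse_oid(current):
            return oid
    return None  # end of MIB view

# A toy MIB: sysDescr, sysUpTime, and two rows of a tabular interface object.
mib = [".1.3.6.1.2.1.1.1", ".1.3.6.1.2.1.1.3",
       ".1.3.6.1.2.1.2.2.1.1.1", ".1.3.6.1.2.1.2.2.1.1.2"]
print(getnext(mib, ".1.3.6.1.2.1.1.1"))  # → .1.3.6.1.2.1.1.3
```

Repeatedly feeding the returned OID back into `getnext` walks the entire tree, which is exactly how an "SNMP walk" enumerates both scalar and tabular objects without knowing the table size in advance.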
https://www.manageengine.com/network-monitoring/what-is-snmp.html
How big data is helping to prevent suicides

Big data isn't just about finding more effective ways to market and advertise -- it's also about making the world a better place. One of the industries in which big data is having its greatest impact is health care. So many improvements are being made, with better results in hospitals and healthcare facilities across the world. One area, however, that continues to prove extremely difficult for the healthcare industry, and other industries, is suicide prevention. The positive side is that services that offer Hadoop in the cloud, and other big data applications in the cloud, are readily available for any business of any size. Big data no longer requires big money. Because of the complicated and sensitive nature of suicide, it's been extremely difficult to successfully discover and prevent it without people specifically revealing that they're contemplating suicide. Sure, there are warning signs and symptoms that people can look for, but they are far from definitive. What if there were a way to constantly and accurately predict which patients have suicidal intentions? An untold amount of good could come from that. Data analysts, health care professionals and scientists from across the country are seeking to do this, and big data is making it all possible. Currently there is no real science to suicide prevention. It's hard to combat such a devastating problem without something definitive to identify, beyond mere outside observation, who is really at risk and who just happens to have some of the common symptoms. This type of information would be invaluable not only to health care professionals, but also to schools and families, among other entities.

What is being done?

One of the demographics most ravaged by suicide is military veterans. According to a recent study, 22 veterans commit suicide every day -- almost one an hour. What an incredible tragedy!
Of those who do commit suicide, 44 percent see their physician before committing the act. Because of that, military veterans are the focus demographic for the Durkheim Project, an effort looking to use information from both doctor visits and social media to find clues that indicate the potential for suicide before it's too late. The system isn't just looking for isolated keywords; it's much more complicated than that. Suicide generally isn't the result of just one thing. So, the project is hoping to find interlinked dialog, beyond just words and phrases, that can pinpoint the problem. Big data brings together all of this data from doctors and social media and then makes sense of it to produce definitive results. The importance of the project cannot be overstated. Not only is this about finding ways to prevent suicide, but the findings could fundamentally change the way we do things. It's about not only intervening before suicide is committed, but also about preventing the contributors to suicide in the first place -- whether that's in the military, in school or at home. The possibilities of these findings are endless with the help of big data. It's easy to see how this holds implications for not only reducing military veteran suicides, but also for stemming the tide of rising teen suicides. The benefits of the findings and potential outcomes are endless.

What other implications are there?

Big data is changing the world. Its impact on suicides is just one example. Along with the numerous benefits for healthcare, big data is also making a difference in many other sectors. It's improving our quality of life in so many different ways. It's important to know that big data can make a difference for your business too. Whether it's saving lives, reducing injuries, reducing expenses or creating new products, there's an application for every market.
Gil Allouche is the Vice President of Marketing at Qubole.

Published under license from ITProPortal.com, a Net Communities Ltd Publication. All rights reserved.
http://betanews.com/2014/05/29/how-big-data-is-helping-to-prevent-suicides/
A PDH multiplexer, or Plesiochronous Digital Hierarchy multiplexer, is a kind of point-to-point optical transmission equipment used to transport large quantities of data over digital transport media, such as fiber-optic and microwave radio systems. A PDH multiplexer has a highly integrated design and provides 16 standard E1 interfaces together with one channel of order wire, with self-contained alarm and NM functions, as well as self-testing and E1 loop-back testing functions. The device is popular with telecommunications operators and is suitable for operators, government, and other kinds of entities.

PDH was developed in the early 1960s. It derives its name from the Greek terms “plesio,” meaning near, and “chronos,” meaning time. The name refers to the fact that networks using PDH run in a state of almost, but not quite, perfect synchronization. PDH was the first standardized multiplexing hierarchy based on time-division multiplexing. It works by channeling numerous individual channels into higher-level channels.

Working Theory of the PDH Multiplexer

The PDH system is based on the observation that if you have two identical clocks, each the same brand, style and everything, there is no guarantee that they will run at the exact same speed. Chances are that one of them will be slightly out of synchronization with the other. The transmitting multiplexer combines the incoming data streams, compensates for any slower incoming information, reconstructs the original data and sends it back out at the correct rates. This system allows for that slight variation in speed and corrects it during transfer to keep the system constantly running, without pausing and waiting for certain slower data to arrive before sending it on. PDH simply fills in the missing bits to allow for a smooth transfer of data.
PDH made little provision for management of the network, and the need to fully de-multiplex a high-level carrier to extract a lower-level signal meant that increasing the capacity of PDH networks beyond a certain point was not economically viable. The main economic factor was the cost of the equipment required at each cross-connect point within the network, where either individual channels or low-level multiplexed data streams might need to be extracted or added. It also added additional latency and increased the possibility of errors occurring, thereby reducing network reliability.

Available Types of PDH Multiplexer

Traditionally, each channel in PDH was a digitized voice channel, but video information and data may also be sent over these channels. The basic channel is 64 kbits per second, which is the bandwidth required to transmit a voice call that has been converted from analog to digital.

N*E1 PDH fiber optic multiplexers use PDH fiber transmission technologies. The 2M (E1) interfaces can connect directly with a switch, optical loop device or another multiplexer to form a small-scale or special-purpose network. N*E1 PDH fiber optic multiplexers have complete alarm functions and are stable, easy to maintain and install, and small in size. They can support one digital service telephone.

A PDH multiplexer can multiplex 4/8/16 E1, Ethernet media converter (2*10/100 Mbps) and V.35 signals into one fiber channel for transmission. It is suitable for low-capacity, point-to-point remote transmission. The PDH multiplexer can be applied to construct economical and flexible multi-service transmission networks, used for relay between switching offices, data transmission for LANs, 2M access of leased service for key clients, voice cutover for residential areas and intelligent buildings, and connection of base stations and other various digital transmission networks.
The fiber optic multiplexer is reliable, stable, and easy to install and maintain; it can be monitored from the Fi-view-MST management software and is widely used in voice and data applications.
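The rates quoted above follow from standard E1 framing arithmetic, which can be sketched in a few lines (the constants are the standard E1 parameters: 8,000 frames per second, 8 bits per sample, 32 timeslots):

```python
# E1 arithmetic: each voice channel is sampled 8,000 times per second with
# 8 bits per sample, giving the basic 64 kbit/s channel. An E1 frame carries
# 32 timeslots (30 voice channels plus 2 for framing and signalling).
FRAMES_PER_SECOND = 8_000
BITS_PER_SAMPLE = 8
TIMESLOTS = 32

channel_rate = FRAMES_PER_SECOND * BITS_PER_SAMPLE   # 64,000 bit/s
e1_rate = channel_rate * TIMESLOTS                   # 2,048,000 bit/s (2 Mbit/s)

# A 16*E1 PDH multiplexer like the one described above aggregates:
aggregate_payload = 16 * e1_rate                     # 32,768,000 bit/s
print(channel_rate, e1_rate, aggregate_payload)
```

This is why E1 links are often called "2M" interfaces: 32 × 64 kbit/s = 2.048 Mbit/s, and a 16-E1 multiplexer carries roughly 32.8 Mbit/s of payload over the fiber (plus framing and management overhead).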
http://www.fs.com/blog/pdh-optical-multiplexer-wiki.html
Networking 101: Understanding OSPF Routing (Part 2)

The real nuts and bolts of everyone’s IGP of choice, OSPF, are a bit complex, but strangely satisfying. After understanding how it works, we’re left wondering, “what else do we need?” Make sure to review the first part of our look at OSPF before embarking on this potentially confusing journey. This article will cover LSA types, packet types, and area types. First, however, we’d like to dispel a common misunderstanding about dynamic routing: people have a tendency to tinker with traffic, even when they aren’t suffering from under-provisioned bandwidth. In OSPF, you cannot really influence the way traffic is routed, aside from adjusting a path’s metric. Some routers support making changes to weights, but this isn’t usually necessary. OSPF generally takes care of assigning weights, based on the speed of the interfaces on a router. You can also use ECMP (equal-cost multipath) with OSPF, if you have two links to the same place and wish to load balance in a round-robin fashion. Don’t try tinkering with OSPF parameters; more likely than not, if you think you have a problem it’s a network design issue, and fixing that will accomplish your goals.

The LSA and Packets

Pivotal to understanding the impact OSPF will have on your network is realizing there are multiple types of LSAs. Updates are sent every few seconds, which result in updates to the LSA database, and possibly the routing table. “New” LSAs will cause every single router to ditch its routing table and start over with the SPF (shortest path first) calculation. There are five distinct packet types that can be sent as LSAs. The hello and database description packets were covered in the first installment of this article, and they are used during the “bringing up adjacencies” stage. OSPF packet type 3 is a link-state request, and type 4 is a link-state update. Finally, type 5 is a link-state ACK. OSPF is implemented as a layer 4 protocol, so it sits directly on top of IP.
Neither TCP nor UDP are used, so to implement reliability OSPF has a checksum and its own built-in ACK. To troubleshoot by sniffing traffic, we need to know that the OSPF multicast address is 224.0.0.5 (AllSPFRouters), and DRs use 224.0.0.6 (AllDRouters) to talk amongst themselves. Finding the shortest path on a weighted, directed graph is computationally expensive, and takes considerable time, even on today’s routers. Thankfully Edsger W. Dijkstra made this better with his SPF algorithm, but it’s still tough. This is the main reason OSPF can’t be used on the Internet, and you don’t want to squirt your full BGP Internet routing table into OSPF. Every time a network is deleted or added, an SPF recalculation happens. Try not to be confused by another “type.” OSPF has many, so be sure to pay attention to the “type” you’re referring to. LSAs can be carried in either an update packet or a request packet. These are the different types of LSAs that can be sent, in either Type 3 or Type 4 OSPF packets:
- Type 1: router LSA. A router sends this to describe neighbors and its own interfaces.
- Type 2: network LSA. For broadcast networks only; this LSA is flooded by the DR and lists OSPF-speaking routers on the network.
- Type 3: network summary LSA. Sent by an ABR to advertise networks reachable through it. A stub area router will also use this for the default route.
- Type 4: ASBR-summary LSA. Sent by an ABR, but only internally. This describes to the others how to get to the ASBR itself, and uses only internal metrics.
- Type 5: AS-external LSA. Used to describe external routes to internal areas. Can be used to advertise “this is the way to the Internet” (or some subset of it).
- Type 6: group summary. Used in multicast (MOSPF). Ignore this.
- Type 7: NSSA area import. Described below.

Notice that we have both a router and a network LSA. The reason a router LSA exists is because in the absence of a DR, there is no network LSA sent. The router LSA would include a list of all links to the other routers on a network.
So OSPF can work in the absence of a DR or BDR, albeit with increased complexity due to the fact that the DR is no longer providing nice summaries. We’ve already mentioned a few different types of OSPF areas, and brushed upon the idea of a backbone area in the last article. In actuality, there are only two types of areas: normal areas, which touch area zero; and stub areas, that hang off another area without touching area zero. A stub area does not accept external LSAs. A stub does not provide transit, i.e. it doesn’t ship packets across itself. A stub area only has one way out, which is through the area it’s connected to, which means that any internal routers in the stub area don’t need to recalculate the SPF. Okay, NSSA was mentioned above, so this charade can’t be kept up for long. There’s actually another type of area: the Not So Stubby Area. The only difference is that a NSSA can send a type 7 LSA to export internal routes. Interestingly, this type of LSA is translated at the ABR into a type 5 (AS-external) LSA. So a NSSA gives up some specific routes to the entire OSPF domain. Think of this as being equivalent to an ASBR: it can export some AS-external routes into the backbone. Presumably it got them by running another routing protocol internally, such as RIP or BGP. The NSSA router that connects itself to the area (not area zero, this is a stub) cannot accept AS-external routes; it can only send them. Now it’s time to truly get confused. Let’s say we have a NSSA that can’t physically be connected to area zero because of physical locality issues. It’s possible that you want the stub described above to be on area zero. Alternatively, you might have two stubs hanging off a non-backbone area, and you want them to talk without having to touch area zero, because going through the backbone would be inefficient. A virtual link can save you from this poorly designed nightmare, by creating a tunnel from one router to another.
When the tunnel comes up, a virtual adjacency is formed between the two end-point routers, and they can adjust their routing tables accordingly.

OSPF is extremely versatile. I've even seen people use OSPF for highly available failover: a router speaking OSPF will automatically detect when a route goes away (because the host running ospfd stopped responding), and it will stop sending traffic to that host. Good for job security, but bad if it breaks at 2:00 a.m. and you're the only person who knows OSPF. Seriously though, OSPF is extremely powerful, mostly because it converges quickly and uses little bandwidth. Among IGPs, none rival its abilities.
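The SPF computation that Dijkstra's algorithm performs over the link-state database can be sketched in a few lines. This is a minimal illustration, not router code; the topology, router names, and link costs below are invented:

```python
import heapq

def spf(graph, root):
    """Dijkstra's shortest-path-first over a weighted, directed graph.

    graph: {router: [(neighbor, cost), ...]} -- a toy link-state database
    Returns {router: total_cost} for every router reachable from root.
    """
    dist = {root: 0}
    pq = [(0, root)]            # priority queue of (cost-so-far, router)
    visited = set()
    while pq:
        cost, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in graph.get(node, []):
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(pq, (new_cost, neighbor))
    return dist

# Invented topology: numbers are OSPF-style interface costs.
lsdb = {
    "R1": [("R2", 10), ("R3", 1)],
    "R3": [("R2", 2)],
    "R2": [],
}
print(spf(lsdb, "R1"))  # {'R1': 0, 'R2': 3, 'R3': 1}
```

Note that R1 reaches R2 through R3 at cost 3 rather than over the direct cost-10 link, which is exactly the kind of result an SPF run produces, and why every topology change forces a recalculation.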
According to a report on the condition of STEM learning in the US, written by Change The Equation, an organization working with schools, communities, and states to adopt and implement STEM policies and procedures, "between 2014 and 2024, the number of STEM jobs will grow 17 percent." The concern is that much of the workforce currently holding STEM jobs is approaching retirement age, making the demand for highly trained and well-versed individuals skyrocket, while the education system in the United States doesn't seem to be keeping up with that demand.

Wyoming business leaders discussed the lack of STEM education in the community during Governor Matt Mead's fourth annual Wyoming Broadband Summit, held on November 5, 2015. StemConnector and MyCollegeOptions conducted a national report profiling the high school student population interested in STEM careers and predicted that in 2018, Wyoming will have 16,000 STEM jobs to fill. They also found that the percentage of Wyoming high school students interested in STEM is higher than the national average. However, when it comes to education, Wyoming is ranked 41st in the nation, with only four silver-medal schools and no gold medals. Not one Wyoming school can be found on the list of the best STEM high schools in the US.

It's not easy for Wyoming kids to just pack up and attend a college or university outside their own state, either. The Chronicle of Higher Education reported in 2011 that 96% of Wyoming students attend a college within their own state, the highest rate in the country. This might stem from the fact that the median household income in 2014 was $57,055, little changed from $56,322 in 2011, making tuition costs a major factor. The University of Wyoming had in-state tuition of $3,390 for the 2014-2015 school year, while the closest out-of-state university, Colorado State, had out-of-state tuition of $24,048.
Wyoming students qualify for the WUE (Western Undergraduate Exchange), which allows them to pay 150% of in-state tuition at CSU, a total of $11,802. That amount is still significantly higher than the cost of staying in state. Looking at these numbers, the picture starts to become clear: we need to invest in our STEM education resources right here in Wyoming, because it's far too expensive to leave the state in order to get the education needed.

iD Tech Camps is a national organization that offers summer tech camps for kids ages 6-18, with courses concentrating on coding, app development, game design, engineering, and innovation. They operate on over 130 prestigious campus locations and maintain an 8:1 student-to-instructor ratio, making the program a perfect fit for our plan. We created a competition for Wyoming kids to answer a few questions about what they love about technology, why they want to go to code camp, and what they hope to learn while there. Once all entries were collected, they were handed over to Shawn and a few other Green House Data executives, who then chose our two winners.

One of the winners, Austin, said that he wanted to attend camp because he thought it would be a good skill for his future, and he really liked meeting the instructors and other campers. The second winner, Noah, said that his favorite part of camp was making a game for his final project: a medieval market game, where the gear you purchase changes the outcome of the upcoming battle, and every monster you beat earns you money for more gear.

Their enthusiasm shows the interest in STEM is there; we just need to bring the educators and programs to Wyoming. Our hope is that this may inspire other companies in Wyoming (and the other 49 states) to invest in our youth and help provide them with STEM opportunities.
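The tuition comparison above can be checked with quick arithmetic, using only the figures quoted in this article (the derived CSU resident rate is implied by the article's WUE number, not an independently verified figure):

```python
uw_in_state = 3390        # University of Wyoming, 2014-2015 (from the article)
csu_out_of_state = 24048  # Colorado State, non-resident rate (from the article)
wue_tuition = 11802       # WUE rate quoted in the article
wue_multiplier = 1.5      # WUE charges 150% of resident tuition

# The article's WUE figure implies a CSU resident tuition of:
print(wue_tuition / wue_multiplier)      # 7868.0

# WUE saves this much versus full non-resident tuition:
print(csu_out_of_state - wue_tuition)    # 12246

# ...but staying in state is still far cheaper:
print(wue_tuition - uw_in_state)         # 8412
```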
The education system, government agencies, and families may not have the resources to accomplish what is needed, so local businesses should step up and do what we can to help supplement our youth’s education.
Tagesson T., Fensholt R., Guiro I., Rasmussen M.O., et al. (Copenhagen University and Cheikh Anta Diop University, with 17 more authors). Global Change Biology, 2015.

The Dahra field site in Senegal, West Africa, was established in 2002 to monitor ecosystem properties of semiarid savanna grassland and their responses to climatic and environmental change. This article describes the environment and the ecosystem properties of the site using a unique set of in situ data. The studied variables include hydroclimatic variables, species composition, albedo, normalized difference vegetation index (NDVI), hyperspectral characteristics (350-1800 nm), surface reflectance anisotropy, brightness temperature, fraction of absorbed photosynthetically active radiation (FAPAR), biomass, vegetation water content, and land-atmosphere exchanges of carbon (NEE) and energy. The Dahra field site experiences a typical Sahelian climate and is covered by coexisting trees (~3% canopy cover) and grass species, characterizing large parts of the Sahel. This makes the site suitable for investigating relationships between ecosystem properties and hydroclimatic variables for semiarid savanna ecosystems of the region. There were strong interannual, seasonal, and diurnal dynamics in NEE, with high values of ~-7.5 g C m-2 day-1 during the peak of the growing season. We found neither browning nor greening NDVI trends from 2002 to 2012. Interannual variation in species composition was strongly related to rainfall distribution. NDVI and FAPAR were strongly related to species composition, especially for years dominated by the species Zornia glochidiata. This influence was not observed in interannual variation in biomass and vegetation productivity, thus challenging dryland productivity models based on remote sensing.
Surface reflectance anisotropy (350-1800 nm) at the peak of the growing season varied strongly depending on wavelength and viewing angle, thereby having implications for the design of remotely sensed spectral vegetation indices covering different wavelength regions. The presented time series of in situ data have great potential for dryland dynamics studies, global climate change related research, and evaluation and parameterization of remote sensing products and dynamic vegetation models. © 2014 John Wiley & Sons Ltd.

Trombe P.-J., Pinson P., Vincent C., Bovith T., et al. (Technical University of Denmark and Danish Meteorological Institute, with 10 more authors). Wind Energy, 2014.

Offshore wind fluctuations are such that dedicated prediction and control systems are needed for optimizing the management of wind farms in real time. In this paper, we present a pioneer experiment - Radar@Sea - in which weather radars are used for monitoring the weather at the Horns Rev offshore wind farm in the North Sea. First, they enable the collection of meteorological observations at high spatio-temporal resolutions, enhancing the understanding of the meteorological phenomena that drive wind fluctuations. Second, with the extended visibility they offer, they can provide relevant inputs to prediction systems for anticipating changes in wind fluctuation dynamics, generating improved wind power forecasts, and developing specific control strategies. However, integrating weather radar observations into automated decision support systems is not a plug-and-play task, and it is important to develop a multi-disciplinary approach linking meteorology and statistics.
Here, (i) we describe the settings of the Radar@Sea experiment, (ii) we report the experience gained with these new remote sensing tools, (iii) we illustrate their capabilities with some concrete meteorological events observed at Horns Rev, and (iv) we discuss future perspectives for weather radars in wind energy. Copyright © 2013 John Wiley & Sons, Ltd.
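The NDVI used throughout the Dahra record is a simple band ratio of near-infrared and red surface reflectance. A minimal sketch (the reflectance values below are invented for illustration, not data from the site):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared (nir)
    and red surface reflectance, each expressed as a fraction in 0..1."""
    return (nir - red) / (nir + red)

# Invented reflectances: dense green vegetation vs. sparse/dry cover.
print(round(ndvi(0.50, 0.08), 3))  # 0.724
print(round(ndvi(0.30, 0.25), 3))  # 0.091
```

Healthy vegetation reflects strongly in the near-infrared and absorbs red light, so NDVI approaches 1 over dense canopy and hovers near 0 over bare soil, which is why the index is sensitive to the species-composition shifts the abstract describes.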
What are controls?

In accounting and auditing, internal control is defined as a process, effected by an organization's structure, work and authority flows, people, and management information systems, designed to help the organization accomplish specific goals or objectives. It is a means by which an organization's resources are directed, monitored, and measured. It plays an important role in preventing and detecting fraud and in protecting the organization's resources, both physical (e.g., machinery and property) and intangible (e.g., reputation, or intellectual property such as trademarks).

At the organizational level, internal control objectives relate to the reliability of financial reporting, timely feedback on the achievement of operational or strategic goals, and compliance with laws and regulations. At the specific transaction level, internal control refers to the actions taken to achieve a specific objective (e.g., how to ensure the organization's payments to third parties are for valid services rendered). Internal control procedures reduce process variation, leading to more predictable outcomes. Internal control is a key element of the Foreign Corrupt Practices Act (FCPA) of 1977 and the Sarbanes-Oxley Act of 2002, which required improvements in internal control in United States public corporations. Internal controls within business entities are also called business controls.

Internal controls have existed since ancient times. In Hellenistic Egypt there was a dual administration, with one set of bureaucrats charged with collecting taxes and another with supervising them.
There are many definitions of internal control, as it affects the various constituencies (stakeholders) of an organization in various ways and at different levels of aggregation.

Under the COSO Internal Control-Integrated Framework, a widely used framework in the United States, internal control is broadly defined as a process, effected by an entity's board of directors, management, and other personnel, designed to provide reasonable assurance regarding the achievement of objectives in the following categories: a) effectiveness and efficiency of operations; b) reliability of financial reporting; and c) compliance with laws and regulations.

COSO defines internal control as having five components:
- Control Environment: sets the tone for the organization, influencing the control consciousness of its people. It is the foundation for all other components of internal control.
- Risk Assessment: the identification and analysis of relevant risks to the achievement of objectives, forming a basis for how the risks should be managed.
- Information and Communication: systems or processes that support the identification, capture, and exchange of information in a form and time frame that enable people to carry out their responsibilities.
- Control Activities: the policies and procedures that help ensure management directives are carried out.
- Monitoring: processes used to assess the quality of internal control performance over time.

The COSO definition relates to the aggregate control system of the organization, which is composed of many individual control procedures. Discrete control procedures, or controls, are defined by the SEC as: "...a specific set of policies, procedures, and activities designed to meet an objective. A control may exist within a designated function or activity in a process. A control's impact...may be entity-wide or specific to an account balance, class of transactions or application.
Controls have unique characteristics. For example, they can be automated or manual; reconciliations; segregation of duties; review and approval authorizations; or safeguarding and accountability of assets; and they can prevent or detect error or fraud. Controls within a process may consist of financial reporting controls and operational controls (that is, those designed to achieve operational objectives).

Why integrate controls?

To achieve a holistic solution by leveraging multiple frameworks and standards, management must first align IT strategy with business objectives. In many companies, the IT department is not considered a true partner, but is viewed as a service provider only. As a partner, IT will be challenged to increase business revenue and will be required to focus on the most critical internal controls. For example, integrating an IT governance framework like COBIT or ISO/IEC 27001 with a corporate governance framework like COSO allows IT to align itself with your organization's business goals and mission. If priorities are not aligned, IT may concentrate on disaster recovery for IT assets only, and not on the most critical business processes. IT priorities should be aligned with the business strategy to effectively mitigate the most relevant risks; this also increases return on investment to the business.

To derive the benefits of different standards and frameworks, a risk-based approach to information security management should be taken. Identify the risks that are most likely to occur and that affect critical assets and business processes, concentrate on the incidents most likely to result in damage, and prioritize the implementation of countermeasures to strengthen the security posture.
The obvious benefit of this integrated approach is that enterprises are able to demonstrate that they have good internal controls over financial processes and, even more important, that they will mitigate potential security risks. By implementing this holistic approach, internal controls will be comprehensive, and management will have ongoing measurements to maintain and monitor information security and to identify possible security breaches sooner. A holistic approach will also assist in meeting the industry, legal, contractual, and regulatory requirements imposed on an enterprise. As a result, a sustainable and effective information security management program will be adopted, managed, and monitored by combining the implementation of multiple standards and frameworks. Forward-thinking enterprises that take this integrated approach will also be able to meet and exceed Sarbanes-Oxley, SAS 70, PCI-DSS, HIPAA, GLBA, FISMA, and EU Directive requirements. In addition, there are efficiencies and cost savings that result from taking an integrated approach. Ultimately, enterprises will end up with a strong and robust information security management program based on international best practices. This approach will increase shareholder value, strengthen competitive advantage, and ensure customer and business partner information assurance.

How can eFortresses assist?

By leveraging eFortresses' unique Holistic Information Security Practitioner (HISP) training program and Implement-Once-Comply-Many GRC consulting methodology, we are able to assist our clients in integrating controls from multiple regulations and standards in the most practical and cost-effective manner, leveraging the existing controls mappings from HISP developed over several years, as well as deliverables from the many consulting engagements we have successfully delivered to our clients.
We can also work with your internal team to review and enhance existing controls mappings and also assist in the implementation of such mappings. For more information, please contact us by filling out this form
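The "implement once, comply many" mapping described above can be pictured as a simple cross-reference table: one implemented control satisfying clauses in several frameworks at once. The control names and framework clause IDs below are illustrative placeholders, not an authoritative mapping:

```python
# Illustrative cross-framework controls mapping. Control names and
# clause IDs are placeholders chosen for the sake of example only.
CONTROLS_MAP = {
    "user-access-reviews": {
        "ISO/IEC 27001": ["A.9.2.5"],
        "COBIT": ["DSS05.04"],
        "COSO": ["Control Activities"],
    },
    "segregation-of-duties": {
        "ISO/IEC 27001": ["A.6.1.2"],
        "COBIT": ["DSS06.03"],
        "COSO": ["Control Activities"],
    },
}

def frameworks_satisfied(control_name):
    """Return the frameworks a single implemented control maps onto."""
    return sorted(CONTROLS_MAP[control_name])

print(frameworks_satisfied("segregation-of-duties"))
# ['COBIT', 'COSO', 'ISO/IEC 27001']
```

The point of the structure is auditability in both directions: given a control, list the obligations it discharges; given a framework, list the controls that evidence compliance.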
I think a major component of inequality between the sexes lies in the discord between desired male-female relations in professional settings vs. accepted behavior in our everyday world. The problem is that so many of our accepted and common interactions between men and women in everyday life are inherently sexist (meaning different for men and women), yet we are blinded to them by familiarity.

Here are a few examples of accepted differences between men and women within society:

- Women are told to be careful when they walk alone at night. This is not said to men, even to small men.
- Chivalry is rewarded and encouraged in our society, and especially by women, even though it has at its base an assumption of inequality.
- Women get into clubs for free. Men don't.
- Women create humans inside their bodies and are often in a vulnerable and emotional state while doing so.
- Testosterone is the dominance hormone, and men have much more of it in their systems than women.
- Humans have sex by having the male penetrate the female, usually with her in a subordinate position.
- It's understood that it's rude to ask a woman about her age, because she may be self-conscious about her beauty.
- In the news we hear constantly about protecting the "women and children," with little regard for the men.
- Men are drafted into the military. Women are exempt.
- Women spend billions of dollars on cosmetics, fashion, and other beauty products, and being attractive is one of the primary ways many women judge themselves and other females.
- Most women spend their entire young lives thinking about and preparing for their weddings, where the groom will be dressed much like a powerful businessman, and the bride much like a princess.

The dissonance is created when both men and women are expected to forget these things are true once they arrive at work. Is this not a very strange thing? To be clear, it's not about the behaviors themselves.
There's nothing wrong with women wanting to look attractive, or putting energy into a wedding where they'll indulge childhood fantasies of princes and princesses. The problem is the disconnect between this model of male/female relations and the one that exists in professional environments. At work it's insulting to insinuate that a woman buying cosmetics on her break would need help lifting something. Or that she'd need to be walked to her car late at night. Or that she might be acting "feminine," as if that's a bad thing.

The fundamental issue seems to be that femininity itself is being simultaneously celebrated and chastised at the tangible border of regular life and work life. For professional women on evenings and weekends, men are expected to notice their efforts at being made up and well dressed, open their doors, and generally appreciate their femaleness. But during the work day, it's borderline sexual misconduct just to acknowledge the very femininity that women work their whole lives to master and exude. And that seems to be the problem: we're simply unsure of what to do with femininity.

Part of the problem comes in the sensitive task of defining it. Is it something for women to strive for? Or is it a seductive handcuff that we've convinced them to place on themselves every morning? Once you have that solved you'll be much closer to the shape of it, and to avoid keeping you in suspense I will tell you the answer. Both masculinity and femininity are primitive and backward.

- Femininity, despite what anyone may say, implies submissiveness and passivity at its very core. This is fundamentally non-equal, by definition.
- Masculinity, despite what anyone may say, implies dominance and control at its very core. This is fundamentally non-equal, by definition.

And yet both are beautiful. Both are natural. Both are human. So the answer is not to encourage or discourage one or the other.
The answer is to enjoy them the way you enjoy ice cream, or boxing matches, or painting your nails, or parachuting. Maybe you like this thing or that. Maybe it's not great for you. Maybe it's dangerous. Maybe it should only be done in small doses. But maybe they not only add to our lives but in fact make them worth living.

The solution to the femininity "problem" at work is to simply say that there isn't one. We just need to group femininity and masculinity together in the vestigial-and-enjoyable category, and check them at the office door. At work we are rational beings. At work we are logical and effective and collaborative. Some masculine and feminine traits are appropriate for work; some are not. And to whatever degree we can smuggle those in without their less desirable counterparts, we should allow that.

The one thing we cannot do is ignore the dual nature of most humans at work. They are both the logical robot using Excel and the pulsing animal full of masculine or feminine impulses and ideals. And that's ok. We simply have to get better, at least in the United States, at not being passive-aggressive with the truth about male/female relations. Europe seems to be far ahead of us in this regard, maintaining a more natural and healthy acknowledgement of gender differences while simultaneously encouraging greater equality. I think the key to progress is knowing how, and when, to pivot appropriately.
1. Rolling out a workaround to prevent incidents from occurring
2. Analyzing incident trends to detect problems
3. Identifying the cause of one or more incidents
4. Analyzing system logs to spot potential causes of failure

The correct answer is 3. Identifying the root cause is an example of reactive problem management.

ITIL Exam Prep Mobile App
How to be prepared for a health and safety incident

The success of an OH&SMS (Occupational Health and Safety Management System) is normally measured by its ability to prevent incidents and accidents; however, in the real world both unfortunately happen from time to time. What then becomes critical is how people react to an incident, and the strength of the process the organization has established to prevent its recurrence. So, what processes, procedures, and rules need to be put in place to ensure that employee reaction to an accident is sufficient and appropriate, and to provide a foundation for investigation and improvement? Which elements of OHSAS 18001 can help us achieve this? And, on top of this, what advice and training can employees be given on how to act when an incident does occur?

Incident reaction – What do you need from your employees?

If you have had to deal with the aftermath and investigation of an incident in the workplace, you will be aware that there is information you need to gather from the people involved and from witnesses, so that your outcome can include a meaningful and accurate corrective action that prevents recurrence. That all sounds straightforward, but what exactly does the organization need to achieve it, and what measures need to be taken to ensure that the outcome is satisfactory?

As part of the investigation into an incident, an incident report will normally be compiled by whoever is responsible for the OH&SMS. This incident report may differ from one organization to another, but keeping records to support action and improvement is good practice, and therefore wise to do. So, given that this information-gathering process is mandatory when an incident occurs, what preparation can we do to ensure that our staff have the necessary knowledge to participate fully?
- Participation – In a previous article, How to satisfy participation and consultation requirements in ISO 45001, we considered how the elements of participation and consultation improve the performance of an OH&SMS when applied correctly among the workforce. If your workforce truly participates in the organization's provision of health and safety, they will understand the importance of gathering information accurately in the wake of an incident, as a vital element of an investigation that will ultimately help ensure there is no recurrence.
- Communication – In the article Case study: Health and safety communication compliant with OHSAS 18001, we looked at the importance of communication. This is closely related to the participation and consultation elements of the standard, in the sense that good communication can empower employees and ensure that participation and consultation truly take place, by making the correct information constantly available through the correct channels. Again, if you are really seeking to build a culture of health and safety within your organization, good communication can continually reinforce its importance and prepare your employees to recognize the importance of information gathering in the wake of an incident.
- Training and awareness – In The importance of training and awareness in OHSAS 18001, we considered the impact that properly knowledgeable and trained staff can have on the OH&SMS. This is particularly relevant to recognizing supporting information in the wake of an incident. Trained staff are more likely not only to be able to record and recognize information that may be relevant to an incident, but also to recognize other circumstances that help with assessment of risk and further identification of hazards. This last element can prove to be the critical difference between staff who are properly trained and aware and those who are not, and can bring significant benefit to the OH&SMS.
- Exercise your emergency plan and rehearse – Making sure everyone is aware of what to do, how to behave, and which reporting methods to use in the event of an incident will ensure that your employees are prepared and that your OH&SMS benefits from having the correct information made available.

Preparing your staff for an incident

We can therefore see that there are several vital elements we can put in place to prepare employees for what is needed in the event of an incident. For example, it is highly desirable in most organizations that some of the employees will have undertaken formal first aid training and passed some of that experience and knowledge on to colleagues. As an organization that runs an OH&SMS, it will also be helpful if your training program provides advice on how to react, whom to contact, and what action to take in the event of an accident or incident. This will increase the preparedness of the people within your organization to deal with an incident calmly and sensibly, and to collect relevant information in the aftermath. Importantly, it will allow you to give your staff guidance on how to act and behave in the event of an incident, which should be in a calm, controlled, and organized manner.

With health and safety, as with most things, knowledge, preparation, and rehearsal are key, and can ensure that your team is prepared for all eventualities. No organization enjoys dealing with incidents or accidents, but the better you prepare for this likelihood, the more effective you will become at preventing them in the long run. Why not use our free Gap Analysis Tool to measure your OH&SMS readiness to handle incidents?
Open MongoDB database servers with default settings have been a source of stress for security teams for well over a year. These vulnerable databases can result in breaches affecting millions of people. Though administrators have been warned to secure these servers, tens of thousands of open MongoDB instances have remained exposed and ripe for abuse for months.

However, a new development appears to have shifted the landscape significantly. On approximately January 6, 2017, evidence appeared that bad actors were attempting to ransom the data on MongoDB servers, as the completely unsecured servers allow data to be written as well as read. Over the past several days, additional bad actors have jumped into the fray and started overwriting other ransom notes with their own. The result of all of this is a catastrophic volume of global data loss. According to open-source research on unsecured MongoDB databases, a minimum of 20,000 servers are affected, and likely many more. Servers that previously hosted gigabytes of data across many databases now contain nothing but a ransom note, and paying that ransom is unlikely to return the data.

This landmark event is something all administrators need to understand as a case study for why security vulnerabilities must be taken seriously. The vulnerabilities themselves may or may not be cause for concern. But when a vulnerability can be abused by a criminal, the issue very rapidly turns from an academic argument into a global incident.
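The "default settings" problem the article describes is that older MongoDB packages shipped listening on all interfaces with no authentication required. A minimal hardening sketch for `mongod.conf` (values illustrative; bind to an internal address rather than loopback if remote clients genuinely need access):

```yaml
# /etc/mongod.conf -- minimal hardening sketch
net:
  bindIp: 127.0.0.1        # listen on loopback only, not 0.0.0.0
  port: 27017
security:
  authorization: enabled   # require authenticated, authorized users
```

Restricting `bindIp` and enabling `authorization` closes both halves of the exposure: anonymous readers can no longer dump the data, and anonymous writers can no longer replace it with a ransom note.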
Nearly fourteen billion years ago, the big bang took the universe from a hot, dense state to a totally different new world, and every moment something new is happening. As a small part of this new world, humanity is now experiencing a big bang of its own. This time, though, it's about information, which is changing people's lives tremendously and is known as big data.

Big data is no longer a strange word to us. It is generally used to describe a massive volume of both structured and unstructured data, so large and complex that it can hardly be processed by traditional database and software techniques. It is also a high-volume, high-velocity, and high-variety information asset that demands cost-effective, innovative forms of information processing for enhanced insight and decision making. Data scientists usually break big data into four dimensions: volume, velocity, variety, and veracity.

Since the appearance of the Internet, the way people communicate has changed a lot, and the information produced has increased enormously, with the characteristics of immense volume, variety, and velocity. The right use of big data allows analysts to spot trends and gives niche insights that help create value and innovation much faster than conventional methods, improving people's daily life in countless ways.

Apparently big data has tremendous possibilities. However, as data accumulate at exponentially increasing rates, making use of that information won't be easy. Beyond difficulties like understanding the data, displaying meaningful results, and dealing with outliers, the use of big data also faces challenges in data transmission and data storage, which tie it to, and affect, the optical communication industry, since most of the information produced today is transmitted via the Internet.
Big data needs to storage and transmit those information which means the transport network for big data must be efficient and with high transmit capacity. In addition, there is another thing stays closely with big data – “Cloud” which refers to a type of computing that relies on sharing computing resources from applications to data centers over the Internet instead of your own computer’s hard drive. Cloud stores everyone’s cold, hard data like a big hard drive in the sky. And now, big data will store all the warm and fuzzy relationships between those data sets, a kind of social media for bits and bytes. According to research that the application of Cloud which also requires high speed transmit capacity is currently the biggest driver in continued growth in optical. All in all, big data needs big network, and optical network is now the best solution to satisfy these demands. Some companies today also manufacture optical communication products for big data. In this way, big data and optical communication industry promote each other. 57.6% of organizations surveyed say that big data is a challenge. We currently only see the beginnings of the big data applications which is just the tip of the iceberg. The great potential and possibilities are still there to be explored.
Big Data is the buzzword on everyone's lips these days - promising to change the world through deep insights into vast and complex sets of data. But amidst the optimism at the recent IEEE Computer Society's Rock Stars of Big Data symposium at the Computer History Museum in Mountain View, Calif., there were also stark warnings about the dark side of the new technology. The human/ethical aspects of big data In fact, Grady Booch, chief scientist at IBM Research and co-founder of Computing: The Human Experience, led off the event with a talk on the Human/Ethical Aspects of Big Data. In front of a couple hundred big-data professionals and interested parties, Booch acknowledged that big data can have "tremendous societal benefits," but made the case that the technology has gotten way out in front of our ability to understand where it's going, and we're likely in for some nasty surprises in the not-so-distant future. He expanded on those thoughts in a private conversation later that afternoon. Many people worry about governments and corporations misusing big data to spy on or control citizens and consumers, but Booch warned the problem goes much deeper than just deliberate malfeasance: "Even the most benign things can have implications when made public." He cited the case of an environmental group that shared the locations of endangered monk seals near his home in Hawaii -- a seemingly innocuous way to raise awareness. But because monk seals eat fish, Booch said, some local fishermen used the information to try and kill the seals. Data lasts forever The problem is that big data doesn't go away once it's fulfilled its original purpose. "Technologists don't give a lot of thought to the lifecycle of data," Booch said. But that lifecycle can extend indefinitely, so we can never be completely sure who will end up with access to that data. "This is the reality of what we do." "Our technology is outstripping what we know how to do with our laws," Booch said. 
"And even today's best legal and technological controls may not be enough." Social, political and other pressures can affect how big data is used, he said, despite laws designed to constrain those uses. Given how the unprecedented speed of technological change is affecting society, what is considered acceptable use of data is in constant flux and subject to contentious debate. For example, while airplanes have long used "black box" data recorders, those devices are now finding their way into cars. So far, that hasn't raised much debate, but imagine the outcry if we applied the same concept to, say, guns? Our responsibility: Fix the "stupid things" "The law is going to do some stupid things," Booch warned, which is why "technology professionals have a responsibility to be cognizant of the possible effects of the data we collect and analyze to raise the awareness of the public and the lawmakers." "The world is changing in unforeseen ways, and no one has the answer," Booch said. "It's a brave new world and we're all making this up as we go along." But just because something is possible does not necessarily mean we should do it. "We need to at least ask the question: 'Should it be done?'" Booch conceded that there is no economic incentive to raising these issues. But there are important considerations and consequences that can't be measured on a spreadsheet. "Ask yourself, 'What if the data related to you, or to your parent, or your child? Would that change your opinion and actions?'" If so, Booch said, you have a responsibility to speak out. "If you don't, who will?"
The planet Neptune has been hiding a secret that NASA's Hubble Space Telescope just discovered -- another moon. The newly discovered moon, dubbed S/2004 N 1, is the fourteenth known to be orbiting Neptune, which is anywhere from 2.7 billion miles to as much as 2.9 billion miles from Earth, depending on where both planets are in their orbits. The newly found moon is the smallest of those around Neptune and is no more than 12 miles across. NASA noted that the moon is so small and dim that it is approximately 100 million times fainter than the faintest star that can be seen with the naked eye. In fact, it's so small and dim that even when NASA's Voyager 2 spacecraft flew past Neptune in 1989, surveying its moons and rings, it never spotted S/2004 N 1. Mark Showalter of the SETI Institute in Mountain View, Calif., found the moon on July 1, while studying Hubble's images of the faint arcs, or segments of rings, around Neptune, according to the space agency. Showalter tracked the movement of what appeared to be a white dot that appeared over and over again in more than 150 images of Neptune that Hubble took between 2004 and 2009. "The moons and arcs orbit very quickly, so we had to devise a way to follow their motion in order to bring out the details of the system," said Showalter in a written statement. "It's the same reason a sports photographer tracks a running athlete -- the athlete stays in focus, but the background blurs." The newly discovered moon is about 65,400 miles from Neptune, orbiting between the Neptunian moons Larissa and Proteus. It completes one revolution around Neptune every 23 hours. The Hubble Space Telescope has had some big wins lately. Earlier this month, NASA announced that Hubble gave scientists information about a blue planet 63 light years away that looks a lot like Earth. However, that's where the similarities end. 
On this planet, named HD 189733b, the daytime temperature is nearly 2,000 degrees Fahrenheit, and it may rain glass there, although sideways, in what are believed to be "howling 4,500-mph winds." Last December, scientists announced that Hubble had given them a look at a previously unseen group of seven primitive galaxies that were created more than 13 billion years ago, when the universe was just 4% of its current age. This article, NASA's Hubble telescope spots Neptune's hidden moon, was originally published at Computerworld.com. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is firstname.lastname@example.org. This story, "NASA's Hubble telescope spots Neptune's hidden moon" was originally published by Computerworld.
How do you protect more than 80,000 people from an odorless, colorless threat that could kill them within minutes? That was the challenge facing the Oregon communities neighboring the U.S. Army's Umatilla Chemical Depot, one of eight national chemical weapons depots stockpiling mustard gas and other deadly munitions. Their response to this threat may be one of the most comprehensive and technically innovative evacuation control systems in the nation. By 2004, local officials had created a massive wireless network, a series of "overpressurized" shelters, a software modeling program that tracks airborne chemicals and a tone-alert radio system that sounds voice messages during a hazard. From perches in any of three command centers, officials can override the lights and signs on local highways, activate drop-arm barricades, and update message signs in Spanish and English. With the flip of a switch, emergency management officials can direct residents out of the local area if there's a chemical leak, and monitor roadways via remote-controlled cameras. The evacuation system also includes a video-conferencing setup that enables officials to converse in real time with officials from other parts of Oregon and with first responders working in the field. Photo: With a flip of a switch from an official in a command center, message signs are activated to alert residents to a drill or real disaster. Photo courtesy of Morrow County, Ore., Emergency Management Agency. "We didn't have the time like you would during a conventional evacuation, like a hurricane, to spend two days getting people ready and setting up roads they would take," said Casey Beard, director of the Morrow County Emergency Management Agency, which operates one of the region's three command centers. "We had to be able to instantly reconfigure our transportation network to move people away from the threat area." 
The International Association of Chiefs of Police gave the system an Innovations and Technology award in 2006, and it was a finalist in 2007 for the Innovation in American Government Award by Harvard University's Ash Institute. "What we have here that's unique is an elaborate evacuation control system that is activated by Wi-Fi," said Chris Brown, program manager of Oregon's Chemical Stockpile Emergency Preparedness Program. "We've established a series of portable message boards, we have fixed message boards, and swing-arm barricades that can be dropped -- all [deployed] through a Wi-Fi signal. It can activate messages to inform the public about either moving within the response zone or evacuating." Approximately 1,000 square miles of north-central Oregon, specifically Morrow and Umatilla counties, is protected by the Wi-Fi network. That coverage zone includes the Umatilla Chemical Depot, as well as nearby cities Umatilla and Hermiston. The depot is part of the Chemical Stockpile Emergency Preparedness Program, which is a partnership between the Army and the Federal Emergency Management Agency to safely store chemical weapons. The Umatilla Depot is slated for closure per the 2005 Base Realignment and Closure act, so all of the chemicals and chemical weapons stored there must be destroyed by 2012. A good portion of the munitions -- including sarin-filled bulk containers; 500-pound and 750-pound bombs; rockets; warheads; and land mines -- already have been destroyed. But a supply of mustard gas remains onsite and will take a few years to incinerate. Therefore, drills and tests of the evacuation system continue, some done twice a day. Local officials, including Morrow County's Beard and Hermiston, Ore., Police Chief Daniel Coulombe, enlisted a local innovator, Fred Ziari, for help. As founder and CEO of ezWireless, Ziari developed irrigation technology to save water and electricity for Columbia River basin farmers. 
"[Ziari] already had a very innovative group of people who were willing to take a look at new technologies and new ways to do things," Beard said. Ziari had access to facilities and had developed technologies that monitor a soil's moisture level and temperature, which helps farmers know when to fertilize. Consequently developing a wireless evacuation system and a chemical monitoring system wasn't a stretch for him. Ziari had already established a Wi-Fi cloud in Oregon before the network was built. It now extends 700 square miles and is considered one of the largest Wi-Fi hot spots in the world. Ziari spent $5 million of his own money to build the wireless network. He recovers his investment through contracts with more than 30 city and county agencies and the area's big farms -- including one that supplies more than two-thirds of the red onions used by the Subway sandwich chain. Photo: The chemicals stored at the Umatilla Chemical Depot are odorless, colorless, tasteless and deadly. Photo courtesy of Morrow County, Ore., Emergency Management Agency. Beard said he was warned of the security issues surrounding Wi-Fi deployments and their range problems. But he went forward with the project and made it work. "We had some real advantages. Nobody else was using that part of the spectrum, and the terrain is flat. Sometimes people in government are afraid to take chances," he said. But there were challenges. With two freeways adjacent to the chemical depot and two major state highways nearby, the motoring public's safety needed to be addressed. "At any given time there could be up to 2,000 vehicles passing through the danger area, plus there are people out there driving around. There are always going to be some who, even if you ask them to 'shelter in place,' are going to jump in their cars and take off." 
Beard said that given the urgency -- they might have no more than 10 minutes to shelter in place or evacuate the area -- officials needed the capability to instantly reconfigure the transportation network to move people away from the danger zone. "We elected to put variable message signs at strategic points in the transportation system -- intersections where you could turn people around. A couple hundred of those were scattered around," Beard said. "We also have fixed signs that designate evacuation routes. The idea is to funnel all the people into those designated evacuation routes." Fortunately the evacuation system hasn't been used for a chemical leak, but the scenario could fall along these lines if it were to occur: There's a chemical leak at the plant. A Morrow County employee stationed within the chemical depot is alerted by the software, which simulates a chemical plume to track the leak and predict where it will go based on wind direction and weather reports. The modeling software is called D2-Puff. It knows what kinds of chemicals are stored at the depot and what the significance is, depending on what chemical is leaked. It takes into account wind direction, topography and air temperature, and then presents on a computer screen a new visual image of the plume every 15 minutes, up to 24 hours. It rates the leak, choosing among three ranges of severity. "What this does is give us a very educated source of information about, 'If this happens, this is where it's going to go,'" Coulombe said. "We can use the information to determine if we can safely deploy in an area, if we have to evacuate, and where we would shelter in place. It's a critical component of the whole process." The command centers are alerted within three minutes of a leak. The commander on duty reacts based on the severity of the leak by choosing one of four scenarios on a chart. 
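D2-Puff itself is proprietary modeling software, but the core idea behind any such tool is textbook atmospheric dispersion. The sketch below is a simplified Gaussian plume calculation, not the D2-Puff algorithm; the emission rate, wind speed, and dispersion growth rates are made-up illustrative values, and a real model would derive the spread coefficients from stability class, topography and temperature as the article describes.

```python
import math

def plume_concentration(q, u, x, y, h=0.0, a_y=0.08, a_z=0.06):
    """Ground-level concentration (g/m^3) of a Gaussian plume.

    q   -- emission rate (g/s)                  [assumed value]
    u   -- mean wind speed (m/s)
    x   -- downwind distance (m), x > 0
    y   -- crosswind distance (m)
    h   -- effective release height (m)
    a_y, a_z -- crude linear dispersion growth rates, stand-ins
                for the stability-class curves a real model uses
    """
    sigma_y = a_y * x          # horizontal spread grows downwind
    sigma_z = a_z * x          # vertical spread grows downwind
    # Standard Gaussian plume with ground reflection, evaluated at z = 0.
    return (q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y ** 2 / (2 * sigma_y ** 2))
            * math.exp(-h ** 2 / (2 * sigma_z ** 2)))

# Concentration falls off away from the plume centerline and downwind,
# which is the shape a dispersion model draws every 15 minutes:
near = plume_concentration(q=500, u=3, x=1000, y=0)
off = plume_concentration(q=500, u=3, x=1000, y=200)
```

Evaluating a grid of such points, re-run as the wind forecast updates, is essentially what produces the plume images emergency managers act on.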
For instance, if the wind direction is from the north, he chooses scenario No. 3 and pushes the corresponding buttons, which activate the predetermined drop-arm barricades, highway message signs and the appropriate traffic lights. "We took all the traffic signals in the area and linked them automatically to scenarios. When the button is pushed, they are re-timed," Beard said. "You get some with longer red times, some with longer green times and some go to flashing yellow to expedite movement of traffic on the routes you want traffic to go on." All the while, officials in the three command centers monitor the network's 30 video cameras that are placed strategically along highways and roadways. The cameras let officials make sure traffic is flowing smoothly and in the correct direction. The information is shared via video conferencing with Gov. Ted Kulongoski in his Salem, Ore., office and with other officials, including those in Benton County, Wash., to the north. Photo: Wireless connectivity helps public safety officials track police, fire and emergency medical resources. Photo courtesy of Morrow County Emergency Management Agency. Another scenario activates sirens that instruct residents to "shelter in place." The signal is sounded by specially designed tone radios placed in homes. The sirens also notify police officers' laptops and those in charge of shelter-in-place facilities: Forty buildings (e.g., medical clinics, schools, hospitals, nursing homes and county buildings) in the two counties have been rendered virtually leakproof and can safely shelter residents for days. The buildings are nearly airtight and are equipped with alarms that go off if anyone tries to open a window or door. The buildings are equipped with giant filters that utilize activated charcoal, the same material used in gas masks. "We worked with the Honeywell Corp. to develop a specialized, circulating air filter that's designed specifically for this," Beard said. 
"We issued a commercial-grade circulating air filter to all citizens in the vicinity of the depot." Air is pumped into the buildings at higher ambient pressure than the outdoor air; if there is leakage it will travel outside the building. The buildings are tested weekly to make sure they maintain their overpressurization. "We conducted studies that determined with enhancements [including duct tape] a room in a person's house could keep them safe for a prolonged period of time, even if exposed to chemical weapons," Beard said. "We took it to a higher level with overpressurization of schools, hospitals and other key public facilities. All you have to do is throw a switch and you can keep people safe for an indefinite period of time." Of course, practice is a key component of the evacuation system. A large number of students are "experts" at moving into the facilities and quickly establishing the shelter, Beard said. "In our drills, we routinely have students in place and [they're] reporting that their facility is up and running within two minutes or less." And although the deadly chemicals stored at the depot will be going away, the evacuation system will still be useful after the threat is gone. For the Hermiston Police Department, the wireless laptops in squad cars mean cops can file crime reports from the field and save on overtime. They can track resources. "When first responders, firemen and EMTs show up at the fire hall or hospital," said Beard, "they're automatically logged in with our Wi-Fi system using a thing we call the operations console. You can tell how many people are there and how many teams are available." Meanwhile, some coastal Oregon communities are vulnerable to a tsunami, and the system would help in the event of an evacuation. The 30 cameras that run on the Wi-Fi system also would provide information on traffic counts and bottlenecks in everyday use. 
There is also a large railroad switching yard just south of the area, and the railcars often carry deadly chemicals. "There's always a potential for hazard there," Coulombe said. There are also the two highways and a natural gas pipeline that spans from Washington down through Oregon and into California. "We have a complex plan that governs two counties, a tribal nation and several state agencies, and is coordinated across the Columbia River with Washington state," Beard said. "It's a complex plan, but it's one plan. That's probably the biggest thing we accomplished in this."
7.4 What is a designated confirmer signature? A designated confirmer signature [Cha94] strikes a balance between self-authenticating digital signatures (see Question 7.2) and zero-knowledge proofs (see Question 2.1.8). While the former allows anybody to verify a signature, the latter can only convince one recipient at a time of the authenticity of a given document, and only through interaction with the signer. A designated confirmer signature allows certain designated parties to confirm the authenticity of a document without the need for the signer's input. At the same time, without the aid of either the signer or the designated parties, it is not possible to verify the authenticity of a given document. Chaum developed implementations of designated confirmer signatures with one or more confirmers using RSA digital signatures (see Question 3.1.1).
How did European researchers working on the Higgs boson recently make one of the most revolutionary physics discoveries in recent decades? From an IT perspective, they relied on a good old-fashioned grid computing infrastructure, though a new cloud-based one may be in the offing. The European Organization for Nuclear Research's (CERN) decade-old grid computing infrastructure has been used extensively during the past few years for research that culminated in the discovery of the Higgs boson, or so-called "God Particle." Unlike a public cloud, where data and compute resources are typically housed in one or more centrally managed data centers with users connecting to those resources, CERN's interconnected grid network relies on more than 150 computing sites across the world sharing information with one another. For the first couple of years after the grid computing infrastructure was created, it handled 15 petabytes to 20 petabytes of data annually. This year, CERN is on track to produce up to 30 PB of data. "There was no way CERN could provide all that on our own," says Ian Bird, CERN's computing grid project leader. Grid computing was once a buzz phrase, much as cloud computing is now. "In a certain sense, we've been here already," he says. CERN, where the Large Hadron Collider that is the focal point of the Higgs boson research lives, is considered Tier 0 within the grid. That's where scientific data is produced by smashing particles together in the 17-mile LHC tunnel. Data from those experiments is then sent out through the grid to 11 Tier 1 sites, which are major laboratories with large-scale data centers that process much of the scientific data. Those sites then produce datasets that are distributed to more than 120 academic institutions around the world, where further testing and research is conducted. 
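The tiered fan-out described above can be sketched in a few lines of code. This is an illustrative model of the Tier 0 → Tier 1 → Tier 2 data flow only, not CERN's actual distribution middleware; the dataset and site names are invented for the example.

```python
from itertools import cycle

def fan_out(datasets, sites):
    """Assign datasets to sites round-robin, mimicking how one tier
    spreads its output across the tier below it (illustrative only)."""
    assignment = {site: [] for site in sites}
    for dataset, site in zip(datasets, cycle(sites)):
        assignment[site].append(dataset)
    return assignment

# Tier 0 (CERN) splits processed runs across the 11 Tier 1 labs...
tier1 = fan_out([f"run-{i}" for i in range(22)], [f"T1-{j}" for j in range(11)])

# ...and each Tier 1 lab fans its share out to Tier 2 institutions.
tier2 = fan_out(tier1["T1-0"], ["uni-a", "uni-b"])
```

Even this toy version shows why capacity planning is hard: the workload at any Tier 2 site depends on which datasets happen to land there and which analyses researchers then choose to re-run.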
The entire grid has a capacity of 200 PB of disk and 300,000 cores, with most of the 150 computing centers connected via 10Gbps links. "The grid is a way of tying it all together to make it look like a single system." Each site is mostly standardized on Red Hat Linux distributions, as well as custom-built storage and compute interfaces, which also provide information services describing what data is at each site. Research that contributes to a ground-breaking discovery like the Higgs announcement, though, is not always centrally organized. Bird says it's in fact quite a chaotic process, one that makes it difficult to plan for the correct amount of compute resources that will be needed for testing at the various sites. For example, when there is a collision in the LHC, impacted particles leave traces throughout the detector. A first level of analysis is to reconstruct the collision and track the paths of the various particles, which is mostly done at the Tier 0 (CERN) and Tier 1 sites. Other levels of analysis are broken into smaller datasets and distributed to the partnering academic institutions for analysis. From there, a variety of statistical analysis, histograms and data mining is conducted. If a certain discovery is made, an analysis might be refined and another test may be run. "You really can't predict the workflows," he says. That's why Bird and CERN are excited about the potential for using some cloud-based services. "We're interested in exactly what it would take to use cloud storage," he says. "But at this point, we're just not sure of the costs and how it would impact our funding structure." CERN receives money from various academic institutions that have access to the data CERN creates to analyze it. Many of those partnering academic groups have compute resources in place and want the CERN data on their own sites to run experiments on and make that resource available to their academic communities. 
"From a technical point of view, it could probably work," he says. "I just don't know how you'd fund it." CERN has made some initial forays into the cloud. Internally, CERN is running a private cloud based on OpenStack open source code. Many of the partnering organizations have private clouds on their own premises as well. In March, CERN and two other major European research organizations took steps to create a public cloud resource called Helix Nebula - The Science Cloud. It's a partnership of research organizations, cloud vendors and IT support companies that are powering a community cloud for the scientific and research community. The two-year pilot program CERN has recently kicked off will begin by running simulations from the LHC in the Helix Nebula cloud. Bird is hopeful about the cloud, figuring that within another decade the cloud will be where grid computing is now. "It's just not obvious how we'll get to that point," he says. But even if the cloud has its challenges, Bird is confident that the scientists who made one of the most important scientific discoveries in decades should be able to figure out the cloud. Network World staff writer Brandon Butler covers cloud computing and social collaboration. He can be reached at BButler@nww.com and found on Twitter at @BButlerNWW. This story, "Higgs boson researchers consider move to cloud computing" was originally published by Network World.
What is system integration? What are the services offered? System integration is the process of integrating all the physical and virtual components of an organisation's system. The physical components consist of the various machine systems, computer hardware, inventory, etc. The virtual components consist of data stored in databases, software and applications. The process of integrating all these components so that they act as a single system is the main focus of system integration.
The conventional office can be a fixed and rather limiting place. Workers must all meet in the same location and start at the same time. A modern office, however, is far more flexible and promotes business on the move. It allows remote staff to work the hours that suit them. All this is achievable thanks to new technologies, such as cloud computing. Here is an overview of how this modern form of computing benefits businesses on the move. What is cloud computing? Cloud computing is a form of digital computing that is fast becoming the de facto platform for businesses both big and small, according to this article on Forbes. Unlike conventional computing, where data is stored on physical servers, cloud computing takes place on the internet. The technology is offered by a number of trusted companies, such as McLaren Software, and can benefit businesses in a number of ways. No fixed servers Conventional computing necessitates that you save your information on a fixed server, which is probably stored in a room down the hall from a main office. To access the data you need, you have to use a computer that is physically hooked up to this server, a factor that can severely impinge on your ability to work on the move. Today however, with cloud computing, digital information is saved on remote servers. These servers, which are maintained and run by a third party hosting company, can be accessed remotely. This allows you to get to the data you need wherever you want; all you need is an internet connection. In essence, this makes the office wherever you are. Increased security on the move In addition to making fixed servers a thing of the past, cloud computing also allows you to do away with other items of hardware that could limit the ability to carry out business on the move. Items such as hard drives, USB keys and cables are used in conventional computing to allow data to be carried around. 
However, while these items can make business on the move possible, they can also be lost, stolen or damaged, which could have huge implications for a business. Data stored on the cloud is far more secure; it can only be accessed by authorised personnel and is backed up on several remote servers. In the very rare event that cloud data is lost, the recovery process is very straightforward and fast. Cloud security is also carried out automatically, so a business is given the most up to date security as soon as it becomes available. Other safety benefits of cloud computing are discussed here. In a conventional office, workers need to be using the same equipment in order to collaborate with each other. Cloud computing is technology neutral, however, which means that remote workers can use the systems that best suit them without fear of issues with compatibility. This can be a great benefit to a flexible business, which may have employees using disparate systems on opposite sides of the world.
Actually I am debugging a COBOL program through Xpediter, and I am tracking a paragraph which is performed from about 100 different places. Is there any way that I can set breakpoints on all the statements that perform that paragraph with a single command, instead of going to 100 different statements and setting breakpoints?

Reply: Why not put a breakpoint at the first statement of the paragraph?

Reply: By doing so, he will not know from where that para was called. I think he wants the entire flow. If your requirement is to find from where this para is called, try this:
- Put a break point at the procedure division.
- Issue "MON" to monitor the flow.
- Put a break point at the first statement inside the paragraph.
- Press F12 twice so that the control comes to this para.
- Then issue "REV" for reverse, and hence you can find from where the para was called.
Changing user identification numbers (UIDs) and group identification numbers (GIDs) in the IBM® AIX® operating system (AIX) isn't one of the more exciting tasks a UNIX® administrator can face. But although it's often seen as a dreadful task, it can be an essential job that an administrator must perform to keep systems in sync within the environment. Because changing UIDs and GIDs can cause serious harm to your environment, you must be careful. The most important thing is understanding what your changes do. Then, you can learn how to make the changes correctly and even automate the process with UNIX scripts.

UID and GID: Some background

File ownership in AIX is determined by the UID, and group file permissions are determined by the GID. UIDs and GIDs are integers that range from 0 to about 65,535. (This number may differ depending on the UNIX version you're using.) Each username translates to one of these assigned integers. You can view the UID and primary GID of any account in UNIX by grepping the username from the /etc/passwd file:

$ grep bduda /etc/passwd
bduda:!:300:350:Ben Duda:/home/bduda:/bin/ksh

The third field (300) is the UID, and the fourth field (350) is the GID of the primary group you're a member of. You can gather more information about the GID by grepping it from the /etc/group file:

$ grep ":350:" /etc/group
security:!:350:bduda

As you can see, bduda has a primary group membership of security. Because security is bduda's primary group, this group will be assigned to any files bduda creates.

Choosing a UID and a GID

There are some basic rules for UID and GID number ranges. AIX system administrators select a range at which to begin allocating UIDs. UIDs and GIDs below 100 are typically reserved for system accounts and services. About 65,000 UIDs are available in AIX, so running out isn't an issue.

Why change a UID or GID?

Sometimes you need to change a UID or GID because you're migrating servers or applications from one server to another.
Other times, you need to make a change because an administrator has made an error. Environments that use AIX High-Availability Cluster Multiprocessing (HACMP) for clustering must always have the same UIDs and GIDs across all the clustered servers; otherwise, your failover process won't work correctly.

What happens when you change a UID or GID?

It's important to understand that whenever you change a UID or GID, you affect the permission levels of files in AIX. Changing a UID or GID causes the ownership of all the files previously owned by that user or group to change to the actual integer of the file's previous owner.

Change the UID

You can change a UID and/or a GID two ways. You can use smitty, but this example uses the command line. Here is the syntax:

Usage: usermod [ -u uid ] login

Let's change user bin's UID:

$ grep ^bin /etc/passwd
bin:!:2:2::/bin:
$ usermod -u 5089 bin
$ grep ^bin /etc/passwd
bin:!:5089:2::/bin:

By running the usermod command, you change the system account bin's UID from 2 to 5089. Keep in mind that every file owned by bin will still have an ownership of 2, because AIX doesn't automatically change the file ownership to the user's new UID. Here are the user's file permissions before the UID change:

-rw------- 1 bin bin 29 2008-01-19 12:30 tester

and after the UID change:

-rw------- 1 5089 bin 29 2008-01-19 12:30 tester

The user bin no longer has permissions to the file tester; you must change the file back to the owner bin. This is why changing UIDs can be a big task for an administrator.

Change the GID

You saw how easy it is to change an account's UID -- and you saw one of the biggest problems with doing this. This section looks at the syntax for changing a GID using the command line. Changing a GID can be more complex:

Usage: chgroup "attr=value" ...
group

Change the group's GID:

$ grep bduda /etc/passwd
bduda:!:300:350:Ben Duda:/home/bduda:/bin/ksh
$ grep security /etc/group
security:!:350:bduda
$ chgroup "id=7013" security
3004-719 Warning: /usr/bin/chgroup does not update /etc/passwd with the new gid.
$ grep security /etc/group
security:!:7013:bduda

You get a warning message because the GID number in the /etc/passwd file doesn't change even though you changed the group's GID. Check to make sure:

$ grep bduda /etc/passwd
bduda:!:300:350:Ben Duda:/home/bduda:/bin/ksh

The /etc/passwd file says that bduda has a primary group of 350. However, the security group has a new GID of 7013. To fix this issue, you need to run the following command:

$ chuser "pgrp=security" bduda
$ grep bduda /etc/passwd
bduda:!:300:7013:Ben Duda:/home/bduda:/bin/ksh

Note: In this example, you need to run the command for each user who has security as his primary group. Remember, when you change UIDs and GIDs in AIX, the permissions on the files and directories don't change: the files must be changed manually. If you have lots of files, or you don't know what all the files are, then this situation can be difficult to fix. The next section looks at some specific examples that show files before and after a change.

Fix file permissions

To see what happens when a user owns files and you change that user's UID and GID, first create two new files called File1 and File2. Here are the properties of the two example files:

$ ls -l File*
-rw-r----- 1 bduda security 21 May 19 00:23 File1
-rw-r----- 1 bduda security 23 May 19 00:24 File2

The file's owner is bduda, and the group membership is security.
Change the UID for bduda:

$ usermod -u 34578 bduda
$ grep bduda /etc/passwd
bduda:*:34578:7:tester:/tmp/bduda:/usr/bin/ksh

Now, look at the file permissions after the UID change:

$ ls -l File*
-rw-r----- 1 203 security 21 May 19 00:23 File1
-rw-r----- 1 203 security 23 May 19 00:24 File2

The owner of your two files is now the number 203 -- the previous UID for bduda. You have to change the ownership back to the account bduda to fix these permissions:

$ chown bduda File*
$ ls -l File*
-rw-r----- 1 bduda security 21 May 19 00:23 File1
-rw-r----- 1 bduda security 23 May 19 00:24 File2

Can you imagine a user who has 40,000 files whose file permissions need to be changed?

Again, here are the properties of the two example files:

$ ls -l File*
-rw-r----- 1 bduda security 21 May 19 00:23 File1
-rw-r----- 1 bduda security 23 May 19 00:24 File2

The group ownership is security. Change the GID for security:

$ chgroup "id=7013" security
$ grep security /etc/group
security:!:7013:root,bduda

Now, look at the file permissions after the GID change:

$ ls -l File*
-rw-r----- 1 bduda 7 21 May 19 00:23 File1
-rw-r----- 1 bduda 7 23 May 19 00:24 File2

Your two files now have the group permission 7 -- the previous GID for the group security. You must change the group back to security to fix these permissions:

$ chgrp security File*
$ ls -l File*
-rw-r----- 1 bduda security 21 May 19 00:23 File1
-rw-r----- 1 bduda security 23 May 19 00:24 File2

Assume for a moment that you don't fix the permissions on your files. If a new user or group is created with the old UID 203 or the old GID 7, then this new user or group will become the owner and group of every file on the system that the user previously owned. This is bad for the system; plus, you've created serious security issues. The next section discusses how to examine your AIX systems to find out if there are any unowned files.
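Fixing ownership one chown at a time doesn't scale when a user owns thousands of files. The sketch below is an illustrative helper, not an AIX utility; the function name and its parameters are assumptions. It finds files still owned by the old numeric UID and prints, rather than runs, the chown commands needed to repair them, so the list can be reviewed first:

```shell
# Illustrative sketch: after usermod changes a UID, files keep the old
# numeric owner. This emits the chown commands needed to repair them
# instead of executing them, so they can be reviewed before applying.
# repair_ownership and its parameter names are made up for this example.
repair_ownership() {
  old_uid="$1"; new_owner="$2"; search_root="$3"
  # find -user also accepts a raw numeric UID that no account maps to
  find "$search_root" -user "$old_uid" -print | while read -r f; do
    echo "chown $new_owner \"$f\""
  done
}
```

Redirecting the output to a file (for example, repair_ownership 203 bduda / > fix.sh) gives a reviewable script that can then be run with sh; a matching -group/chgrp variant covers GID changes the same way.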
Prepare to make the change: Scan your system for unowned files

It's a good idea to scan your systems for unowned files, and on AIX you can do so using some simple commands. To scan for files that have no user, run the following command from the command line:

find / \( -fstype jfs -o -fstype jfs2 \) -nouser -print

Depending on how your file systems are set up, this command checks the entire system for unowned files while skipping Network File System (NFS) mounts. You can do the same for groups:

find / \( -fstype jfs -o -fstype jfs2 \) -nogroup -print

Finding unowned files and groups is only half the battle: you also have to decide who should own them and what group permissions they should have. As a system administrator, this isn't the easiest thing to do. A good rule of thumb is to look at the directory owner or group. Then, set the permissions to match the directory. If you're still unsure what to do, then changing the values to the root user and root group isn't a bad idea, either.

Keep UIDs and GIDs consistent in your environment

Most companies that run UNIX have more than one server. If you perform the same processes on multiple servers, it's a best practice to make sure the UIDs and GIDs are the same across the enterprise. If you use some type of centralized user administration, then using IBM HACMP becomes much easier because your UIDs and GIDs are in sync.

Let's say you have an application that uses IBM DB2® Universal Database™. This is a mission-critical application running on two AIX HA pairs. These servers have file systems that are passed back and forth, depending on which server is primary. The account on your primary server that runs your DB2 database has a UID of 300. Suppose that during a standard production cycle, your primary server crashes, and your secondary server picks up the workload. The account on your secondary server that runs your DB2 database has a UID of 400. This is a serious problem.
The files on the primary file system were created with the DB2 account with UID 300. Because the file systems have failed over to the secondary server, the ownership is incorrect. The DB2 files aren't owned by the DB2 user with UID 400 -- they're owned by UID 300. The database doesn't own these files, so it won't function correctly, if at all.

Sometimes too many changes are required for you to make them manually. This is when scripting can aid you in your efforts. No one wants to change 40,000 file owners one at a time. It's a good thing you can write a script to find all the files for you -- and you can have it fix the permissions as well.

Searching your entire file system can be very time consuming. Perl is a great tool that allows you to search an entire file system quickly. Perl comes with a command called find2perl, which lets you turn the regular AIX find command into Perl code. This code searches the file system faster than the regular UNIX find command:

$ find2perl / \( -fstype jfs -o -fstype jfs2 \) -nouser -print > find_owner_script.pl
$ find2perl / \( -fstype jfs -o -fstype jfs2 \) -nogroup -print > find_group_script.pl

You can use the regular find command if you don't have Perl on your system:

$ find / \( -fstype jfs -o -fstype jfs2 \) -nouser -print > find_owner_script.txt
$ find / \( -fstype jfs -o -fstype jfs2 \) -nogroup -print > find_group_script.txt

You now have a script automatically written in Perl. This script quickly finds all your unowned files and prints the output to the screen. If you wish, you can modify the script to write the output to a file. Now, you need to determine what the file permissions should be and then change them, as shown in the following sections.
You can write the following on the command line or put the code into a script:

$ for file in $(cat output_from_find_owner_script.pl)
do
print "Old permissions: $(ls -l $file)" >> /tmp/UID_LOG
chown $new_owner $file
print "New permissions: $(ls -l $file)" >> /tmp/UID_LOG
done

Here's the output:

Old permissions: -rw------- 1 485 bin 29 2008-01-19 12:30 tester
New permissions: -rw------- 1 bin bin 29 2008-01-19 12:30 tester
Old permissions: -rw------- 1 987 bin 4098 2008-01-26 12:30 host
New permissions: -rw------- 1 bin bin 4089 2008-01-26 12:30 host

Now you can do something similar for the group:

$ for file in $(cat output_from_find_group_script.pl)
do
print "Old permissions: $(ls -l $file)" >> /tmp/GID_LOG
chgrp $new_group $file
print "New permissions: $(ls -l $file)" >> /tmp/GID_LOG
done

Here's the output:

Old permissions: -rw------- 1 765 bin 29 2008-01-19 12:30 passwd
New permissions: -rw------- 1 root bin 29 2008-01-19 12:30 passwd
Old permissions: -rw------- 1 983 bin 4098 2008-01-26 12:30 group
New permissions: -rw------- 1 root bin 4089 2008-01-26 12:30 group

These examples create log files that record the file permissions before and after the change. These logs also provide you with proof that your script worked properly.

Understanding how UIDs and GIDs work in UNIX can be confusing. If you ever need to change these settings, you should fully understand how they work so you don't cause serious harm to your system. And with a little scripting, you can solve your UID and GID problems more quickly.

- When it comes to UNIX security, Practical Unix & Internet Security is really the only book you need. You can almost become an expert from just reading this book.
- The book Security Warrior complements Practical Unix & Internet Security. It provides hardcore concepts that will turn you into a subject-matter expert overnight.
- Browse the technology bookstore for books on these and other technical topics.
Hacker's Guide to (Not) Having Your Passwords Stolen

Online credential theft has exploded in the past several years. This month alone, numerous breaches have affected millions of users of high-profile websites such as LinkedIn, MySpace, vk.com, and Tumblr. In these cases, criminals are not seeking corporate secrets or nuclear launch codes, but rather usernames and passwords for the online accounts of everyday computer users. Credential theft can come in many different flavors with varying levels of impact, from attacks targeting a single or small set of users, to attacks compromising credentials from within an enterprise, to attacks compromising the credentials of millions of users of an online service. While criminals certainly steal usernames and passwords for corporate accounts for extortion and corporate espionage, this article focuses on the compromise of personal accounts in both targeted attacks and mass data breaches. This includes why criminals steal usernames and passwords, and the most common tactics criminals use to steal them. It concludes with some basic steps you can take to reduce your risk of being targeted, as well as how to respond once you've been notified of a password breach.

Why do criminals steal usernames and passwords?

The short answer is: for profit, eventually. The long answer is: it depends. Hackers steal usernames and passwords from websites for a handful of reasons, but most of them lead to cash eventually. Sometimes criminals steal a database of hundreds of thousands of users from a website and sell it wholesale directly on black-market web forums. The larger the database, the more money they can charge for it. Sometimes criminals will use the usernames and passwords to log in to people's email accounts and send spam email for dubious scam products, making money from referrals and product link-clicks. In each of these cases, the methods of monetization are "quantifiably linear".
The amount of money the criminal makes is strictly tied to the amount of usernames and passwords they steal. The value of the individual accounts is not a consideration.

The next reason criminals steal credentials is as a means to gain access to another, more valuable asset. Usernames and passwords by themselves provide very little value, but the assets that those credentials protect are oftentimes far more valuable. For example, ten thousand valid Gmail usernames and passwords may be worth several hundred or even thousands of dollars on underground criminal forums, but the ability to reset social media and banking passwords, access cell phone provider accounts, read confidential employer information, and even reset other email accounts provides far more value to an attacker. Criminals steal credentials ultimately to make money or gain access to a more valuable piece of information. It is this monetization of credentials, and the subsequent growth of underground markets, that drives criminals to steal usernames and passwords.

How do hackers steal usernames and passwords?

There are two major categories of how attackers steal usernames and passwords: attacking the users directly and attacking the websites people use.

Attacking Users Directly

These techniques are effective in stealing usernames and passwords from relatively small numbers of people. If an attacker values the account information of a particular targeted person, these techniques also apply. Some of these methods are obvious to a knowledgeable user and thus easier to protect against. However, as determination and intrusiveness escalate, these methods can be more difficult to stop. While credentials for many victims of this type of attack can be packaged into large numbers for sale or use, this type of activity does not usually make the headlines.
Some criminals use a technique called "phishing." This process usually looks something like this:
- Hacker finds a large number of Bank of Somewhere customers
- Hacker sends a fake login page to legitimate Bank of Somewhere customers, hosted on a domain that looks similar to "bankofsomewhere.com"
- Some small percentage of the victims unwittingly enter their usernames and passwords into the website that the hacker controls
- Hacker logs in to the stolen accounts and transfers funds to an account they control

Some criminals use even broader phishing attacks to steal social media accounts:
- Hacker sends fake Facebook login pages to as many email accounts as possible, stating that there is a problem with their account that needs to be fixed
- Some victims enter their Facebook usernames and passwords
- Hacker uses access to their Facebook accounts to promote spam and adware-laden websites
- Hacker generates ad revenue from fake clicks and page visits

Sometimes criminals will want the credentials of a known high-value individual. More care goes into customization and believability in these cases. The attacker may go as far as attempting to impersonate the individual in tech support calls, hacking the actual computer used by the high-value target to collect credentials, or other invasive techniques. It can become difficult to defend against a determined attack, but fortunately, most of us aren't of this level of interest to attackers, and the basic online hygiene principles listed below will provide some protection.

Attacking a Website Directly

If a criminal wants to steal millions of usernames and passwords and doesn't care who gets scooped up, he targets a website directly. The more credentials they steal, the more money they can get selling them or monetizing them in some other way. This almost always comes in the form of a criminal exploiting a vulnerability in the website itself.
The criminal uses one of any number of tactics to gain access to the server supporting the website and steals the credentials directly from the database. The credentials are usually stored as a large set of username and "hashed" password pairs. A password "hash" simply refers to a more secure method of storing a password, where a mathematical representation of your password is stored in lieu of the plaintext password. Once the criminal steals the database, they often have to recover the passwords from the "hashed" form back to the actual plaintext password, allowing them to check it for likely reuse on other websites. This is accomplished by "brute forcing" the password hashes to recover anything that is computationally guessable (meaning, a password simple enough to be guessed by a wordlist or sequence of iterating characters, like AAAAA, AAAAB, AAAAC, and so on). This last factor is what highlights the importance of strong, complex passwords versus simple, easily guessable passwords. If your password is a simple dictionary word, for example "baseball", then it will almost certainly be very simple to recover from its hashed form. Conversely, if your password is long and complex, then you are better protected from a large website breach, as it would be computationally infeasible for an attacker to brute force a sufficiently strong password. An example of this is as follows:
- Hacker targets a popular social media website called MyBook
- Hacker finds a vulnerability or misconfiguration in the server hosting the website and uses it to gain access to the website.
- Hacker locates the database of all registered users and creates a backup - Hacker downloads the database backup he created of users and hashed passwords - Hacker runs the hashed passwords though a password cracker for a week and recovers 50% of the total passwords - Hacker sells the usernames and recovered passwords to someone on an underground hacking forum - The person that purchased the database uses an automated program that checks all of the usernames and passwords against other websites for password reuse and gains access to thousands of email, social media, and online banking accounts How do people protect themselves? There are a several easy steps you can take to minimize the damage personally inflicted upon you by a password breach. Use unique passwords on different websites Imagine having the same key for your house, car, office, and gym locker. While it would be very convenient, it would be a nightmare if you lost it (or worse, if somebody stole it). Criminals gain access to multiple accounts on the Internet because they know that remembering passwords is hard and nobody likes to do it. By having unique passwords on different websites you are reducing the risk of a criminal gaining access to additional accounts as a result of stealing your password. Use complex passwords Complex passwords are essential to make them difficult to guess and difficult to recover from a compromised password hash. I recommend using passwords that are at least 12 characters long that include a mix of letters, numbers, and symbols. You should avoid using words that would be present in a dictionary to make password guessing and brute-forcing more difficult. Use a password manager Password managers are programs that run on your computer, in your web browser, or directly on your smartphone. Instead of thinking of a password every time you register on a website, the password manager generates a long, complex, random password that you don’t have to remember. 
Then, whenever you want to log back into that website, you visit your password manager and copy and paste the saved password directly into the website. LastPass and 1Password are two examples of popular password managers. It is also important to note that a password manager inherently accomplishes the previous two recommendations. Use multi-factor authentication on all high value accounts Multi-factor authentication is a security control that adds an additional layer of security beyond username and password. Multi-factor authentication can come in many different forms, but the most common are a smart phone app, hardware token, or text message codes. Once you’ve enabled multi-factor authentication, you’ll enter your username and password on a website and it will ask you for a third item (a number from an app or a text message). This ensures that the person attempting to log into the account with your username and password also has your smart phone, and thus, is more likely actually you. Even if a criminal successfully steals your online banking username and password through a targeted email attack or from a third-party website breach, they will not be able to log into your account because they do not have access to your smart phone. The best part is that most major banking, social media, and email providers offer and encourage multi-factor authentication free of charge. Unfortunately, password breaches and credential theft aren’t going anywhere soon. They are an unwelcome and inconvenient fact of life in the modern Internet era. As long as credential theft remains relatively easy, and the market continues to offer large financial rewards, your usernames and passwords will continue to be highly sought. The good news is that it’s pretty straightforward to protect yourself from a large majority of the real threats to average computer users. All of the recommended protections are low cost and take no more than an hour to set up. 
By following these basic steps you can significantly reduce your risk exposure to any credential breach. Now go forth, secure yourself, and use the Internet with confidence.
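To make the hash-cracking step described earlier concrete, here is a minimal dictionary-attack sketch. It assumes the breached site stored unsalted SHA-256 hashes, which is purely illustrative (real storage schemes vary, and proper salting defeats precomputed wordlists); crack_hash is a hypothetical name:

```shell
# Minimal sketch of a dictionary attack on an unsalted SHA-256 password
# hash: hash each candidate word and compare it to the stolen hash.
# A password like "baseball" falls instantly; a long random password
# never appears in a wordlist, so it survives this kind of attack.
crack_hash() {
  target="$1"; wordlist="$2"
  while read -r guess; do
    h=$(printf '%s' "$guess" | sha256sum | cut -d' ' -f1)
    if [ "$h" = "$target" ]; then
      echo "$guess"   # recovered plaintext
      return 0
    fi
  done < "$wordlist"
  return 1            # password not in the wordlist
}
```

A real attacker runs exactly this loop, heavily optimized, over billions of candidates per second, which is why the length and randomness recommendations above matter so much.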
One major concern for parents and small business owners both is how to keep employees or kids safe and productive on the internet without having to spend a lot of money. While there is a wide variety of commercial software available, there is a free tool that does not require installation on every internet capable machine in the home or business. It is called OpenDNS. OpenDNS works by having the end user configure their router to use OpenDNS's DNS servers. Completing this configuration means that OpenDNS is being identified as the “domain name system” (DNS) server for the network instead of the servers that would typically be in place from an internet service provider (ISP). These servers then handle the traffic that originates from computers and devices connected to the network. When the user creates an account with OpenDNS, they can filter internet content based on categories. Among the default options are settings to block adult content, social networking sites, games sites, etc. It also allows the end user to customize which categories are allowed based on individual needs. For example, it is possible to allow specific sites while blocking a category as a whole, such as allowing www.facebook.com while blocking all other social media sites. While not perfect, OpenDNS allows the user to control DNS without additional software installation. And, since there is no software to install it cannot be removed from a computer by users who want to circumvent the controls. Further, since the filtering takes place against the router, it can become much more difficult for the end-user to find out where the filter is located. Free and paid version options allow businesses to select the protection level that is best for them without having to spend a lot of money on enterprise-grade web filtering software. This tool, along with more information, can be found at www.opendns.com. Posted By: Systems Engineer Chris Young
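After reconfiguring a router, it is worth verifying that a machine's resolver configuration actually points at OpenDNS. OpenDNS publishes its resolver addresses (208.67.222.222 and 208.67.220.220); the helper below is an illustrative sketch, not an OpenDNS tool, and it simply inspects a resolv.conf-style file passed as a parameter:

```shell
# Illustrative check: does a resolv.conf-style file name one of the
# published OpenDNS resolver addresses? Returns success (0) if so.
# uses_opendns is a made-up helper name for this sketch.
uses_opendns() {
  conf="$1"
  grep -E 'nameserver[[:space:]]+208\.67\.22(2\.222|0\.220)' "$conf" >/dev/null
}
```

For example, `uses_opendns /etc/resolv.conf && echo "resolving via OpenDNS"`. Note that on networks where the router hands out its own address via DHCP, the file may list the router's IP instead, in which case the check belongs on the router's upstream DNS settings.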
The importance of temperature monitoring in operating rooms

Thursday, Nov 21st 2013

Successful surgery depends on optimizing working conditions for medical professionals and patients. Temperatures in operating rooms are often colder than many indoor locations, but people may not know why. The temperature range can have a significant effect on patient health and safety. The American Society of Heating, Refrigeration, and Air-Conditioning Engineers recommends that operating room temperatures be kept between 66 and 68 degrees Fahrenheit, with a humidity of 70 percent. While this may seem a bit chilly for a room that usually requires patients to wear a thin paper gown, health and safety take precedence over comfort in such a case.

'Raining' in the operating room

One reason to maintain such a temperature range is to prevent the buildup of humidity in the operating room. When the space is kept too warm, condensation can collect on surfaces, including the room's ceiling and various operating equipment. In addition to making the room uncomfortable, this condensation buildup can pose serious risks to patient health. If this moisture is left to collect and not prevented through temperature monitoring, it can build up to the point that it falls from these surfaces. It is possible, and dangerous, for this condensation to fall onto sterilized surfaces, operating tools or possibly into an open wound. As moisture moves along these surfaces, it can pick up additional bacteria that can seriously infect a patient. In order to prevent the buildup of these droplets, the operating room should be equipped with a temperature sensor and monitoring system for maintaining the recommended range.

Hospital personnel comfort

Operating rooms are also kept cool to ensure that the doctors and nurses working on the patient do not sweat.
Although anesthesiologist resource Great Z's stated that many hospital personnel would not reveal this reason to patients, the source does point out that operating requires a lot of heat producing equipment. This includes large overhead lights that allow doctors to clearly see their activities. Because these systems can produce a lot of excess warmth in addition to the heat already being produced by the human body, they can cause a surgeon to sweat. This can also cause a serious health risk to the patient. Therefore, in order to prevent this and keep hospital personnel comfortable in the operating room, it is important to have a temperature monitoring system in place to ensure the space is cool. Preventing bacteria growth while ensuring AC functionality Similar to the case of cold temperatures for food safety, operating rooms are also kept cool to slow the rate of bacteria growth, stated Great Z's. Experts have proven that bacteria, viruses and other organisms reproduce and grow more slowly when subjected to lower temperature environments. Therefore, to fight off any infections, hospitals keep their operating rooms cooler. Additionally, these facilities also employ temperature monitoring as a means to oversee the functionality of their AC systems. Many AC units, especially within older buildings, were not designed to consistently maintain such low temperatures. For this reason, hospitals count on their temperature monitoring systems and high temperature alarms as a means of notification if their AC fails.
60GHz: A Frequency to Watch

It now looks likely that 60GHz will become the next big frequency in the wireless world, with both short-range and wider-area applications ahead for the tiny beams of this unlicensed millimeter radio technology.

The frequency, part of the V-Band frequencies in the US, is considered among the millimeter wave (mmWave) bands. Millimeter wave radios ride on frequencies from 30GHz to 300GHz. Until recently, 60GHz has typically been used for military communications. (See 60GHz Giddyup.)

Recent acquisitions by massive technology players indicate growing interest in the technology and the associated patents. Qualcomm Inc. (Nasdaq: QCOM) bought Wilocity recently to combine 60GHz WiGig technology with WiFi. Google (Nasdaq: GOOG) bought Alpental, a startup that, according to one of its founders, is using 60GHz to develop a "hyper scalable mmWave networking solution for dense urban nextGen 5G & WiFi." (See Qualcomm Advances WiGig With Wilocity Buy and Google Buys Alpental for Potential 5G Future.)

Why 60GHz, and why now? Here are a few pointers.

WiGig: A new short-range wireless specification, using the Institute of Electrical and Electronics Engineers (IEEE) 802.11ad specification, that can link devices at up to 7 Gbit/s over a distance of up to 12 meters. That's 10 times faster than the current 802.11n WiFi, though with less range, which makes the technology ideal for wirelessly delivering high-definition video in the home. The Wi-Fi Alliance is expecting WiGig-certified products to arrive in 2015. (See Wi-Fi Alliance, WiGig Align to Make WiFi Super Fast.)

Wireless backhaul: Particularly for small cells, operators can use 60GHz radios to connect small cells to a fiber hub. (See More Startups Target Small-Cell Backhaul.)

Wireless bridges: These are useful for providing extra capacity at events, ad-hoc networks, and private high-speed enterprise links. (See Pushing 60.)
Wireless video: Some startups have jumped the gun on the WiGig standard and plowed ahead with their own 60GHz video connectivity using the Sony-backed WirelessHD standard.

A global unlicensed band exists at 57-64GHz. It is largely uncongested compared to the 2.4GHz and 5GHz public bands currently used for WiFi. (See FCC to Enable Fast Streaming With New 60GHz Rules.)

There's also a lot of it. "The 60 GHz band boasts a wide spectrum of up to 9GHz that is typically divided into channels of roughly 2GHz each," Intel Corp. (Nasdaq: INTC)'s LL Yang wrote in an article on the prospects for wide-area and short-range use of the technology. Spectrum availability is "unmatched" by any of the lower-frequency bands.

The spectrum is now open and approved for use across much of the world, including the US, Europe, and much of Asia, including China. Here's a spectrum map from Agilent on the band's global availability. As we've already seen, 60GHz technology is expected to offer blazing wireless transmission speeds.

Issues with 60GHz

No technology is ever perfect, right? Transmissions at 60GHz have less range for a given transmit power than 5GHz WiFi because of path loss as the electromagnetic wave moves through the air, and 60GHz transmissions can struggle to penetrate walls. There is also a substantial RF oxygen absorption peak in the 60GHz band, which gets more pronounced at ranges beyond 100 meters, as Agilent notes in a paper on the technology.

Using a high-gain adaptive antenna array can help make up for some of these issues when using 60GHz for wider-area applications. Some vendors have also argued that the technology has potential advantages over omnidirectional systems. "The combined effects of O2 absorption and narrow beam spread result in high security, high frequency re-use, and low interference for 60GHz links," Sub10 Systems Ltd. notes.

Next time, we'll look at some of the key private and startup companies looking to ride the 60GHz wave.
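The range penalty from path loss mentioned above can be estimated with the standard free-space path loss formula, FSPL(dB) = 20·log10(d) + 20·log10(f) + 20·log10(4π/c). A quick sketch (illustrative only; this ignores the oxygen absorption that further penalizes real 60GHz links):

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# At any fixed distance, the extra loss at 60GHz vs. 5GHz depends only on
# the frequency ratio: 20*log10(60/5), or roughly 21.6 dB.
extra_loss = fspl_db(100, 60e9) - fspl_db(100, 5e9)
print(round(extra_loss, 1))  # 21.6
```

That 21.6 dB gap, before absorption effects, is why 60GHz links lean on high-gain directional antennas to recover usable range.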
— Dan Jones, Mobile Editor, Light Reading
There are legions of them. Mormon crickets crawl. They leap. They destroy everything in their path. Nothing can halt the attack of these armored masses.

No, this isn't a B-grade sci-fi flick. It's not a biblical plague, although early Mormon settlers in Utah thought as much when hordes of Anabrus simplex Haldeman -- the scientific name for this two-inch, shield-backed, short-winged katydid -- descended on them in 1848, devouring their crops. Desperate for salvation from the pestilence they believed God had sent them, the settlers prayed to rid themselves of what they called "Mormon crickets." According to church legend, their prayers were answered when a flock of seagulls swooped down to feast on the insects.

If burgeoning populations of Mormon crickets in recent years are any indicator, ravenous bands could be poised to march across the western United States and Canada. Idaho, Utah, Colorado and Wyoming are typically hardest hit, spending millions of dollars to control the cricket migrations and the damage they do. During a 1937 outbreak, crop damage amounted to $500,000 in Montana and $383,000 in Wyoming. In 2004, Congress made a special appropriation of $20 million for Mormon cricket control.

Three researchers are studying the crickets' migration by attaching tiny radio transmitters to them that chart their migration path. The goal is to determine whether better ways exist to stop the migration from hitting certain states, either by killing the crickets or by diverting their path with concentrated, targeted pesticide application.

Containing the Swarm

"Little is known about what causes increases in population size," said Patrick Lorch of the University of North Carolina's biology department. "We know extended drought, early spring snow thaw and overgrazing all seem to favor high cricket densities. They lay eggs in the soil, and the eggs can sit for several years, hatching when conditions are most favorable."
Mormon crickets' culinary tastes lean toward succulent forbs, or broad-leaved flowering plants, but they'll graze on desert grasses before moving to greener pastures. Insatiable, the insects engulf rangelands, laying waste to cultivated crops such as wheat, barley, alfalfa and clover. Experts say swarms of the crickets can cover a mile a day and eat everything in their path. Some packs stretch several miles wide and 10 miles long.

"A farmer might not see a single cricket one day but end up facing millions the next day because they move in such large groups," explained Gregory Sword, a USDA Northern Plains Agricultural Research Laboratory research ecologist in Sidney, Mont. "They can potentially eat everything in the field."

As unpredictable and destructive as a tornado, the ominous black band of crickets inexplicably shifts direction, decimating one field and sparing the next. In a moveable feast, the band can overrun communities, consuming ornamentals and stripping vegetable gardens bare. There have even been accounts of them chewing wood siding off homes.

In addition to the crop damage they do, the crickets also pose a threat to public safety. "When their bands cross roads, they tend to mass together and cannibalize the crushed dead bodies of other insects," Sword elaborated. "These in turn get crushed by more passing vehicles, leading to large, messy 'oil slicks' of crushed crickets."

Until recently, when cricket bands were on the run, no one could predict where or how far they would travel. A study of Mormon crickets conducted by Lorch, Sword and Darryl Gwynne, a zoology professor at the University of Toronto, sheds new light on accurately tracking the Mormon cricket's migration habits. Together, these scientists devised a way to bug the pests that have been bugging humans for more than 2,000 years. Radio transmitters about the size of a dime and weighing 0.5 grams were hot-glued onto the backs of adult female crickets.
Anyone who is concerned about their Linux servers' security, stability, and proper functioning needs to audit their systems. Auditing may include anything from logging simple Bash commands to following complex system processes. Linux auditing can be a complex process, but here are some basic tools and techniques you can use to make the task simpler.

First, let's talk about some simple ways you can get an idea of what users do that don't reach the level of full auditing. You can, for instance, check the shell command history using Bash's built-in history command, which shows the last commands executed by the current user. To see the date and time of executions, reconfigure history's settings with the command export HISTTIMEFORMAT='%F %T '. The history command acts only upon the current user. To see other users' activity, provided you have permission, read the hidden file called .bash_history in their home directories. One caution, however: the Bash history can be incomplete, making the record meaningless. If you see ./myscript.sh in it, all you know is that the user executed a script; you can't tell what was in the script or what it did, unless of course the script remains available and unchanged.

To see who is currently logged in and what each user is doing, use the command /usr/bin/w, which gives you this information in full.

Many services, including a couple of popular databases, provide logs for simple auditing. For MySQL, a hidden file called .mysql_history in each user's home directory logs all the user's actions in the MySQL console. A similar file for PostgreSQL is called .psql_history. History logs frequently contain passwords or other sensitive data. Consider, for example, the commands they might record when you create MySQL or PostgreSQL users. Therefore, make sure that history log files can be read (and of course written) only by each user; their permission flags should be set to 600 with chmod.
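The permission advice above is easy to automate. Here is a minimal sketch (the function name lock_down_history is my own, and the demonstration uses a scratch directory standing in for a real home directory):

```python
import os
import stat
import tempfile

def lock_down_history(home: str) -> list:
    """Restrict any shell/DB history files under `home` to mode 600 (owner-only)."""
    fixed = []
    for name in (".bash_history", ".mysql_history", ".psql_history"):
        path = os.path.join(home, name)
        if os.path.isfile(path):
            os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # equivalent to chmod 600
            fixed.append(name)
    return fixed

# Demonstrate in a temporary directory rather than a real user's home.
home = tempfile.mkdtemp()
open(os.path.join(home, ".bash_history"), "w").close()
print(lock_down_history(home))  # ['.bash_history']
```

Run against each home directory (as root, or via a cron job), this keeps the history files readable only by their owners.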
If you are more concerned about security than auditing, you can delete these history files entirely, then create a soft link to /dev/null in their place. For example, for MySQL, run the command ln -s /dev/null ~/.mysql_history.

These basic practices are just a start. They touch only the surface of what users do and what happens on the system, and it's trivial for an attacker to delete logs showing his traces. You therefore need an advanced auditing solution that can not only reveal malicious activity but also store this information in a secure, preferably remote, place.

Advanced Linux auditing

The Linux Auditing System is a Linux kernel implementation, available in CentOS and other distributions, that enables in-depth and advanced auditing. It works on the kernel level, where it can oversee every process and activity on the system. It uses the auditd daemon to log what it finds. In most Linux distributions auditd is preinstalled and starts and stops automatically with the system. It logs information according to its auditing rules, and also conveys SELinux messages, as described in our article about Linux server hardening.

Auditd's configuration is controlled by a few files in different directories. The daemon's configuration file is /etc/audit/auditd.conf, which contains all the settings except the auditing rules. Leave the default values in place while you're still exploring auditing. Important settings include max_log_file (default 6), the maximum size of the log file in megabytes. Once a log file reaches this limit, the action specified in the setting max_log_file_action (default rotate) takes place. The setting num_logs (default 5) specifies how many rotated log files are kept.

The file /etc/audit/audit.rules contains the auditing rules that control which events should be audited and logged. You can specify three types of options in this file: control, file system, and system call.
The control options manage the system rather than the auditing rules. For example, the audit.rules file should always start with a directive that deletes any existing auditd rules (-D). Another useful control option is -e 2, which makes the configuration immutable and requires a server restart for new changes to take effect.

File system rules pertain to files and directories recursively, and each rule looks like this:

-w path-to-file -p permission -k keyword

All file system rules begin with -w, which stands for watch. A permission is an action that reads (r), writes (w), executes (x), and/or changes the attributes (a) of a file. The keyword is an intuitive label you choose to connect to one or more auditd rules; the same keyword can be used for more than one rule.

An example should help illustrate how file system rules work. The rule below instructs auditd to watch the file /etc/shadow, the Linux password file, for being read, written to or having its attributes modified:

-w /etc/shadow -p rwa -k shadow_watch

When a rule is tripped, auditd writes a log entry to its log file /var/log/audit/audit.log. If you make any changes to the audit.rules file, you must restart (or reload) the auditd daemon with the command service auditd restart. If you enter the rule above, you can test it by restarting the daemon, then trying to read /etc/shadow. Next, search the current auditd log using the ausearch command with the keyword for the rule in question: ausearch -k shadow_watch -i.
The result should be similar to:

type=PATH msg=audit(11/18/2012 16:24:19.963:61) : item=0 name=/etc/shadow inode=163882 dev=fd:00 mode=file,000 ouid=root ogid=root rdev=00:00 obj=system_u:object_r:shadow_t:s0
type=CWD msg=audit(11/18/2012 16:24:19.963:61) : cwd=/root
type=SYSCALL msg=audit(11/18/2012 16:24:19.963:61) : arch=i386 syscall=open success=no exit=-13(Permission denied) a0=bfde58e9 a1=8000 a2=0 a3=1 items=1 ppid=2148 pid=2149 auid=root uid=anatoli gid=anatoli euid=anatoli suid=anatoli fsuid=anatoli egid=anatoli sgid=anatoli fsgid=anatoli tty=pts0 ses=1 comm=cat exe=/bin/cat subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=shadow_watch

The ausearch argument -i tells the command to interpret numbers; uid 503, for instance, is translated to "anatoli." The log excerpt above shows that the user anatoli tried to read the file /etc/shadow (name) using the executable /bin/cat (exe). The log shows success=no, meaning the attempt was unsuccessful, and the exit code was -13, which means permission denied.

The ausearch utility lets you filter your results. To see all unsuccessful attempts, use -sv no, where sv stands for success value. The full command with the shadow keyword would be ausearch -k shadow_watch -sv no. For more information about the returned values and ausearch, check its manual page by running the command man ausearch.

The third type of option, system call (syscall), provides the interface between an application and the Linux kernel. These auditd rules act upon the specified interfaces to detect and log events. System call auditd rules have the following structure:

-a when,filter -S system-call -F field=value -k keyword

-a stands for append, that is, append the rule at the end of the ruleset. You can also use -A to place it at the top of the list, or -d to delete the rule.
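Raw audit records, like the excerpt above, are mostly whitespace-separated key=value fields, so a few lines of scripting can pull out the fields that ausearch interprets. A minimal sketch (a simplification: real records can contain quoted values with embedded spaces, which this does not handle):

```python
def parse_audit_record(line: str) -> dict:
    """Split an audit record into its key=value fields.

    Tokens without an '=' (e.g. the tail of 'exit=-13(Permission denied)')
    are skipped, which is acceptable for a quick triage script.
    """
    fields = {}
    for token in line.split():
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

# A condensed version of the SYSCALL record shown above.
record = ('type=SYSCALL syscall=open success=no exit=-13(Permission denied) '
          'uid=anatoli exe=/bin/cat key=shadow_watch')
parsed = parse_audit_record(record)
print(parsed["success"], parsed["key"])  # no shadow_watch
```

This kind of parsing is handy for feeding audit events into your own alerting, though for interactive work ausearch and aureport remain the right tools.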
The possible when values are always and never, indicating whether an event log should be created. For the third argument, filter, two values are frequently used: the exit filter means to act upon a syscall exit, when an operation is completed, while the user filter is for userspace events.

The next argument is -S followed by a syscall name. There are hundreds of syscalls (to see all of them, go to the syscalls man page) and often more than one can be used to get a similar result. For example, if you want to see whether a file or directory has been deleted, you could watch the unlink, unlinkat, rename and renameat syscalls.

The last argument before -k (keyword) is -F, which stands for fine-tune filter field. If you go back to the ausearch -k shadow_watch result, you can see the numerous fields that can be used for fine-tuning the rules and hence the results. A sample fine-tuned rule looks like this:

-a always,exit -S unlink -S unlinkat -S rename -S renameat -F auid>=500 -F auid!=4294967295 -k delete

This rule covers syscalls that may lead to a file disappearing. The two fine-tuning filters (-F) state that in order for the rule to be tripped, the user's ID should be 500 or above (regular users) but different from 4294967295, the value the audit system uses for an unset ID. For further information about the fine-tuning fields, check the audit.rules man page.

To get started faster with auditd rules, CentOS 6 provides some example rules in the file /usr/share/doc/audit-2.2/stig.rules. Using them even without adjustment provides a solid foundation for auditing.

Auditd log interpretation

Once you've set the events you want to track, you can do the actual auditing by interpreting auditd's log file at /var/log/audit/audit.log. The file contains the same information given by ausearch, but in a far less friendly format.
One problem you may encounter if you decide to read the log file directly is that the time is given in Unix timestamp format, which means you have to convert the timestamps to readable dates and times in order to tell when an event occurred. Many tools make it easier to read and analyze information from auditd's log. The aureport utility, for instance, lets you generate reports from the auditd log file. Running just aureport provides an easy-to-understand summary report, including counters for all important auditing event groups, such as the number of changes to accounts, groups, or roles. Detailed reports are also available, and they can be filtered by type (file system or syscalls), fields, and time. Here are a few useful report options:

--auth shows authorization attempts. This report can be further extended with the --failed argument to show failed attempts only, and with --start to limit the time frame. Thus, to see failed logins for yesterday, use the command aureport --auth --failed --start yesterday.

--key lists events for keywords defined in the auditd rules.

--file shows events for specified files and directories.

--syscall reports on system call events that have been configured to be logged in the auditd rules.

For more details about these reports and more information about aureport, check its manual page (man aureport).

Auditd reporting has to provide genuine and reliable information. To make this possible, the auditing cycle has to be secured and hardened. Auditing hardening enforces best practices for ensuring the reliability, integrity, and security of the auditing process. The first step for hardening is to ensure auditd's configuration is immutable by using the control option -e 2. Next, ensure the logs are stored in a secure, centralized location. The best place is a server dedicated to accepting remote syslog events. A utility called audispd (auditd's dispatcher) can help with this task, along with one of its plugins, audisp-remote.
Audisp-remote allows events to be sent to a remote syslog server. Its configuration can be found in the /etc/audisp/audisp-remote.conf file. Here's a sample configuration specifying that a remote rsyslog server at 10.0.0.1 is listening on TCP port 514:

remote_server = 10.0.0.1
port = 514
transport = tcp

Having events logged and stored safely in a remote location helps provide peace of mind. Of course there's no guarantee that the remote server cannot be corrupted or compromised, but doing this adds one more step toward better system administration and security. That's why reliable remote logging is a requirement for financial and government environments.

Linux auditing can be as simple as reading history log files, or it can be a real challenge if you decide to be serious about it. For reliable auditing, use the powerful Linux kernel options and the supplementary services auditd and audispd.
Understand the differences between public key and symmetric key encryption

These days it seems that concern over network security is at an all-time high. Because of this, it's important to understand what's really happening when you encrypt your data. You might have assumed that when you enable encryption, a single type of encryption is at work. In fact, you're usually using two types of encryption. In this article, I'll introduce you to these two types and explain how they work together.

One key, two keys

Most people assume that the various public key infrastructures use strictly public key technology. This isn't the case. Many of the functions within the Windows 2000 implementation of public key encryption use both public key and symmetric key encryption algorithms. To understand why, it helps to know a little about how each encryption technology works.

Symmetric key encryption uses a single key to both encrypt and decrypt data. For example, suppose you placed a document in a file cabinet and locked the cabinet with a key. For you or anyone else to access the document, you'd need the key to the file cabinet. Generally speaking, symmetric key encryption is fast and secure. However, while symmetric key encryption works well locally, it doesn't work as well across networks. For the receiver of the encrypted packets to decrypt them, they must have the key, which means you must send them that key along with the message. The other problem is that the physical medium you're sending the packets across is insecure; if it were secure, there would be no reason to encrypt the message in the first place. Anyone monitoring the network could steal both the encrypted packets and the key necessary for decrypting them.
Public key encryption, on the other hand, uses a pair of keys: a public key that can be distributed freely and a private key that is always in the possession of the recipient. The private key is mathematically related to the public key, and only the matching private key can decrypt what the public key was used to encrypt. Because the private key is never sent across the network, it remains secure. The downside of public key encryption is that it tends to be very slow and resource intensive, which makes it difficult to send large amounts of data this way.

Mix and match

Because of the nature of the two types of keys, Windows 2000 uses a mixture of the two types of encryption for many operations. The idea is to encrypt the data itself using symmetric key encryption, so the data can be sent quickly and without hogging all of the available resources. The symmetric key is then sent in a packet encrypted using the public key algorithm. When the recipient receives the encrypted packets, they must wait for the key packet to arrive. When it does, they use their private key to decrypt it. Once that package has been decrypted, the recipient is free to use the symmetric key it contains to decrypt the main data.

The entire process is similar to activating a new credit card received in the mail. The credit card company mails you the card and the activation code separately. Before you can use the card, you must receive the activation code and then activate the card, either over the phone or at an ATM, to validate it.

As you can see, using a combination of two types of encryption combines the best of both worlds: the speed of symmetric key encryption with the security of public key encryption. This combination allows secure Windows 2000 transactions to take place with maximum efficiency and security.

Brien M. Posey is an MCSE who works as a freelance writer.
His past experience includes working as the Director of Information Systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it's impossible for him to respond to every message, although he does read them all.
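The hybrid "envelope" pattern the article describes can be illustrated with a deliberately toy example: a one-byte XOR cipher standing in for the symmetric algorithm, and the classic textbook RSA parameters (p=61, q=53) sealing the session key. This is for illustration only and is not remotely secure; real systems use AES and full-size RSA or elliptic-curve keys.

```python
import secrets

# Toy RSA key pair with tiny primes -- never use values like these in practice.
p, q = 61, 53
n = p * q        # 3233, the public modulus
e = 17           # public exponent
d = 2753         # private exponent: (e * d) % ((p - 1) * (q - 1)) == 1

def xor_cipher(data: bytes, key: int) -> bytes:
    """Toy symmetric cipher: XOR every byte with a one-byte key."""
    return bytes(b ^ key for b in data)

# Sender: encrypt the bulk data with a random symmetric session key...
message = b"secret report"
session_key = secrets.randbelow(256)
ciphertext = xor_cipher(message, session_key)
# ...then seal the session key with the recipient's RSA public key (e, n).
sealed_key = pow(session_key, e, n)

# Recipient: unseal the session key with the private key, then decrypt the data.
recovered_key = pow(sealed_key, d, n)
plaintext = xor_cipher(ciphertext, recovered_key)
print(plaintext)  # b'secret report'
```

The expensive public key operation touches only the small session key, while the cheap symmetric cipher handles the bulk data, which is exactly the trade-off the article describes.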
The theft and use of illegal personal computer software is well above the national average in some of the nation's largest and fastest-growing states, and the impacts are serious and wide-ranging. These are among the findings of the 2007 State Piracy Study, released last week by the Business Software Alliance (BSA), an international association representing the software industry and its hardware partners.

The national average for software piracy in 2007 was 20 percent, meaning that one in five pieces of PC software in use in the United States was unlicensed. States with piracy rates well above the national average include California, 25 percent; Illinois, 22 percent; Nevada, 25 percent; and Ohio, 27 percent. States closer to or below the national average include Arizona, 21 percent; Florida, 19 percent; New York, 18 percent; and Texas, 20 percent. The study was conducted by IDC.

Software piracy in the eight states studied cost software vendors an estimated $4.2 billion, BSA said in a release, a figure higher than the national total for every other country in the world except China. Lost revenues to software distributors and service providers were an additional $11.4 billion, for a total tech industry loss of more than $15 billion.

Software piracy also has ripple effects in local communities, BSA continued. The lost revenues to the wider group of software distributors and service providers ($11.4 billion) would have been enough to hire 54,000 high-tech industry workers, while the lost state and local tax revenues ($1.7 billion) would have been enough to build 100 middle schools or 10,800 affordable housing units, or to hire nearly 25,000 experienced police officers.

"The United States may have the lowest PC software piracy rate in the world, but still, one out of every five pieces of software put into service is unlicensed," said BSA Vice President of Anti-Piracy and General Counsel Neil MacBride.
"Not only is this a problem for the software industry, but piracy also creates major legal and security risks for the companies involved."

"The most tragic aspect is that the lost revenues to tech companies and local governments could be supporting thousands of good jobs and much-needed social services in our communities," he said.

State Piracy Highlights

Among the highlights of each state's piracy picture:

How to Avoid Piracy

"For companies, the first step in avoiding the risks of software piracy is awareness of the problem," MacBride added. "Not only can the use of unlicensed software potentially trigger an external audit or even a lawsuit, it can also create security vulnerabilities, productivity breakdowns and hidden internal support costs."

"To avoid such risks, businesses must create, communicate, and enforce an effective software asset management (SAM) program," MacBride said.

Organizations can download a variety of free software asset management (SAM) tools from BSA's Web site. Individuals have a simpler task, MacBride said. "Just know where your software is coming from," he said. "The old adage that 'if something seems too good to be true, it probably is' certainly applies when it comes to deeply discounted software from dubious sources. Not even an auction site such as eBay is able to keep its site free of illegitimate software."
Power to the PC Processor
By Baselinemag | Posted 2004-09-01

Like small and agile drones, servers with Intel processors are eating into a realm once dominated by the proprietary (and expensive) queen bees of traditional Unix and mainframe systems.

The Public Broadcasting Service used to have a mélange of massive Unix servers powering its data center in Alexandria, Va. About five years ago, PBS decided to transfer all of its applications to servers running Intel processors. What changed? André Mendes, PBS' chief technology integration officer, couldn't stomach paying for proprietary hardware anymore. "We felt the price/performance [ratio] of Intel platforms just could not be beat," he says. Now the broadcaster uses Intel-based servers from Hewlett-Packard and IBM to handle almost all of its data needs, from Web serving to distributing video feeds to 349 stations.

Running high-performance applications on Intel servers, hardware originally designed for personal computers, used to be seen as risky, if not out of the question. Servers using processors found in PCs were written off as light-duty pickups compared with Unix or mainframe 18-wheelers. It's still true that individual Unix systems can handily speed past systems that use Intel processors. But per transaction, the Unix bunch also costs three times as much or more, based on a system's price divided by its transaction rate, according to recent benchmark tests published by the independent Transaction Processing Performance Council. Today, Mendes is part of a growing group finding that fleets of multiple Intel-based servers working together are the less expensive and more flexible option.
Driving the sector's economics are the millions of Intel-compatible processors (also known as "x86" chips, after Intel's old nomenclature) flowing into personal computers. In 2003, IDC says, 44.6 million PCs were sold worldwide, 4.7 million of which were servers. By contrast, 523,000 Unix servers with reduced instruction set computing (RISC) chips shipped last year. The sheer size of the Intel processor market has caused prices to drop as performance of the chips has steadily increased. Intel claims that a server with four 3-gigahertz Xeon processors (its fastest multiprocessor-capable chip today) performs 3 to 4.6 times faster, depending on the application, than a top-of-the-line system four years ago with four 700-megahertz Pentium IIIs. As faster chips roll out, prices for previous generations typically fall; for example, Intel last month cut the price of its 3.6-GHz Pentium 4 by 35%.

As a result, worldwide sales of Intel-based servers have been booming, up 14% to $5.1 billion during the first three months of 2004 compared with a year earlier, according to IDC. Meanwhile, the Unix server market was down 3%, to $4.1 billion, in the same period. Consider this telling change: Sun, which once insisted on selling only servers with its own microprocessors, now offers low-end servers with Intel-compatible processors from Advanced Micro Devices (AMD) to compete with HP, IBM and Dell, the three powerhouse players in the segment.

Don't count on Sun winning business from someone like Aaron Branham, vice president of global operations and networking at job-search company Monster Worldwide. In 2000, Monster acquired JobTrak, which ran a career site for college students and alumni. JobTrak, Branham found out, had been paying about $500,000 per year in leasing and maintenance fees to Sun for a Sun Fire 6800 server. Most of Monster's other online properties were already running on standard Dell servers with Windows.
Branham's team promptly ousted the Sun box and rewrote the applications for the JobTrak site (now called MonsterTrak) to run on eight Dell machines, a project that cost a total of $150,000. "We knew it was going to be much cheaper to go with Intel servers," he says. Plus, Intel servers let an organization add computing power more easily because they don't require a colossal capital outlay, says Damien Bean, vice president of corporate systems at Hilton Hotels. Buying larger, more expensive Unix servers means "you have to take it up through the CFO to get it approved," he says. "You can't be as nimble." Another benefit: There's less chance of getting locked in to one supplier. Chips are available from Intel and AMD, and they can run a broad array of operating systems, from Linux to Windows. "It never really hurts to have two vendors in the server room," says PBS' Mendes. But one downside of PC servers is that they can become problematic to manage in large numbers. After all, it's potentially a bigger task to care for and feed 100 individual servers than one gigantic system. The key, say users of Intel-based servers, is to rigidly standardize on a set of server configurations so that machines behave the same way and use the same replacement parts. "We've developed all the tools and procedures to manage hundreds of servers," says Monster's Branham. Another catch is that Intel-based servers are not quite as reliable as proprietary systems. That's because Intel servers include components from multiple suppliers, each independently engineered and manufactured, says Jay Bretzmann, IBM's director of server product marketing. He says none of the x86 servers on the market, including IBM's, can achieve 99.999% uptime (a standard measure of high-reliability systems) without extra measures, such as having a standby server ready to kick in if the main one dies. "The hardware is not a 'five-nines' platform by itself," Bretzmann says.
One way the industry is addressing those management and reliability questions is with "blade" servers. These pool resources such as power, cooling, storage and network connectivity for multiple server "cards" that run processors and memory. The idea is to save space and provide a more manageable alternative to dozens of standalone PC boxes. So, it seems, everything old is new again, as blade servers start resembling mainframes. "From an operational standpoint, a blade system looks like a single box," says Robert Wiseman, chief technology officer of Cendant Travel Distribution Services. And because the blades are relatively cheap, it's feasible to plug in extra blades to boost the performance and availability of an application, he says. "The chance you'd lose two blades at the same time is very small," Wiseman says, "and the odds you'd lose three? Well, that's astronomical."

Group Dynamics: Built to Serve

Category: Servers with Intel (or compatible) processors.
What It Is: Computer systems designed for network-based applications, typically running either Microsoft Windows or Linux operating systems.
Key Players: Dell, Fujitsu Siemens Computers, Hewlett-Packard, IBM, NEC, Unisys
Market Size: $19.1 billion worldwide, 2003 (IDC)
What's Happening: New "blade" server systems can consolidate multiple physical servers into one unit by sharing power, networking and other resources. Also hot: server virtualization software, which lets multiple operating systems run on the same processor.
Expertise Online: The server section on IT Manager's Journal (www.itmanagersjournal.com/servers) offers user-posted articles, discussions and links to news stories.
Worldwide Server Share (x86-based servers in 2003, by revenue*): Fujitsu Siemens 2.9%
*Total does not add up to 100% because of rounding
The US Education Industry Report: 2015 Edition

Education refers to a process of facilitating learning through knowledge, skills, values, beliefs and certain habits. It is on its way to becoming a universal right and is likely to be available everywhere, to everyone, without any hurdles. The U.S. education system follows a specific pattern where early childhood education is followed by primary school (Elementary school), middle school, secondary school (High school), and post-secondary (Tertiary) education. Education in the U.S. is provided by both public and private schools. Public education is universally required at the K-12 level, and is available at state colleges and universities for all students. The education industry of the U.S. has undergone several changes over the past few years and continues to invite significant spending by the public. The overall growth of the industry will be driven by rising responsiveness of people towards the benefits of early education, rising awareness of the advantages of higher education and growing demand for online teaching methods. The major trends in the industry include growth of educational content and technology, rising demand for digital textbooks, a high penetration rate for the U.S. postsecondary education sector, students' shift towards online education and students' dependence on family for higher-education funding. The major growth drivers include increasing work participation of women in the U.S., rising postsecondary enrollment rates in the U.S. and growing merger and acquisition activities in the industry. However, growth of the market is hindered by several factors, including a declining population of children under five years of age and legal and regulatory issues. The report, "The U.S. Education Industry," analyzes the current condition of the industry along with its major segments, including Pre-K, K-12, Post-Secondary and Corporate Training. The U.S. market, along with its specific dependence on other countries for growth, including China, India, France and Germany, is discussed in the report. The major trends, growth drivers and issues being faced by the industry are presented in this report. The major players in the industry are profiled, along with their key financials and strategies for growth.
Definition: A reduction that maps an instance of one problem into an equivalent instance of another problem.

See also NP-complete, Turing reduction, Cook reduction, Karp reduction, l-reduction, polynomial-time reduction.

Note: From Algorithms and Theory of Computation Handbook, page 24-19, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC.

If you have suggestions, corrections, or comments, please get in touch with Paul Black.

Entry modified 17 December 2004. HTML page formatted Mon Feb 2 13:10:39 2015.

Cite this as: Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "many-one reduction", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/manyonerdctn.html
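A concrete example may help. The classic Karp reduction from Independent Set to Clique is a many-one reduction: a graph G has an independent set of size k exactly when its complement graph has a clique of size k. The sketch below (the function names and the dict-of-sets graph encoding are our own illustration, not part of the dictionary entry) maps one instance to the other and verifies the equivalence by brute force:

```python
from itertools import combinations

def independent_set_to_clique(graph, k):
    """Many-one reduction: map an Independent Set instance (G, k) to an
    equivalent Clique instance (complement of G, k).
    `graph` maps each vertex to its set of neighbours."""
    vertices = set(graph)
    complement = {
        v: {u for u in vertices if u != v and u not in graph[v]}
        for v in vertices
    }
    return complement, k

def has_clique(graph, k):
    """Brute-force clique check, used only to demonstrate equivalence."""
    return any(
        all(u in graph[v] for u, v in combinations(group, 2))
        for group in combinations(graph, k)
    )

# {1, 3} is an independent set of size 2 in this 4-cycle...
cycle = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
g2, k2 = independent_set_to_clique(cycle, 2)
print(has_clique(g2, k2))  # True: {1, 3} is a clique in the complement
```

Because the mapping preserves yes/no answers in both directions, any solver for Clique answers Independent Set via this translation, which is exactly what the definition above requires.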
Ensuring the trustworthiness of the Internet of Things (IoT) and Cyber Physical Systems (CPS) consists of a variety of factors, not all of them absolutes, according to panelists at a National Institute of Standards and Technology (NIST) workshop last week. “The sense of trust is not absolute. Trust does allow for failure,” said Greg Shannon, assistant director for cybersecurity strategy at The White House Office of Science and Technology Policy. “What bolsters people’s trustworthiness is that there’s a sense of accountability.” “Everything should not have to be ultra trustworthy, because making things highly trustworthy is going to come at a cost of processes,” agreed Cynthia Irvine, distinguished professor of computer science at the Naval Postgraduate School. The NIST CPS Framework says that trust in IoT and CPS systems relies on the elements of resilience and reliability, security, safety, and privacy, which work in tandem to form a trustworthy system. Panels at the workshop explored these elements and how they can be achieved.

Resilience and Reliability

Presidential Policy Directive 21 defines resilience as “the ability to prepare for and adapt to changing conditions and to withstand and recover rapidly from disruptions.” These disruptions include everything from cyberattacks to natural disasters and physical attacks. “While we focus very heavily on cyber adversaries and cyberattack, we need to be mindful of all forms of disruption,” said Deb Bodeau of MITRE. At the workshop, reliability was defined as “the ability of a system or component to function under stated conditions for a specified period of time.” “Greater reliability means less need for resilience,” said Pat Muoio, director of research and development at G2 Inc. She explained that reliability consists of things that people understand will disrupt the mission, while resilience consists of the unexpected things that happen to a system.
“You need a policy to explain, at least in terms of security, what the system is supposed to do,” said Irvine. She explained that, due to the difficulty of defining and testing how a system is secure or not, trust in the system comes from concrete security policies. Steve Lipner, former partner director of program management at Microsoft, added that organizations must first adhere to a set of cyber physical best practices, such as the SANS top 20 or DSD top 35. “In the world of IoT, I don’t know that there are equivalent, common best practices documented, but if I were going to start facing the problems there, that would be an early thing I would do,” Lipner said. In order to implement these practices, and therefore make cybersecurity a primary concern of an organization, Shannon said that companies have to “make cybersecurity less onerous while providing more effective defenses.” “Safety is often an explicit objective that’s laid out in terms of the goals of the organization. Cybersecurity, per se, might not be, however, cybersecurity risks can impact all of those objectives,” said Al Wavering, chief of the intelligent systems division of the Engineering Laboratory at NIST. For example, Wavering described a manufacturing plant in which employee safety on the floor is a priority. That employee safety can be compromised by a hacked or failing cyber physical system, making cyber considerations a key element of safety concerns. Consumer safety is also a major concern in IoT and CPS. “Trustworthiness is very similar to airworthiness,” said Ravi Jain, an aerospace systems engineer at the Federal Aviation Administration. He described the interconnected nature of flight systems, flight communications, and passenger devices as a major safety consideration for commercial flights. 
“A lot of the principles that those of us in the privacy field talk about for all sorts of things certainly apply in the cyber-physical systems space,” said Lorrie Cranor, chief technologist at the Federal Trade Commission, professor at Carnegie Mellon, and director of the Carnegie Mellon Usable Privacy and Security Laboratory. Primarily, Cranor addressed a person’s right to access their own data, as well as the need to know what kinds of data are being collected about them. “In cyber physical systems, there’s a lot of data collection going on that is probably not obvious to the humans,” said Cranor, pointing to navigation systems in cars as an example of where personal data can be collected about a person that would enable someone with the data to figure out where they live or work. “Privacy is generally not the first thing on the minds of the engineers building these things,” Cranor added.
September 11, 2001... that date will be engraved upon the memories of most Americans for many years to come. That is the date when terrorists brought their battle to U.S. soil. One week later, the Internet came under attack by the Nimda worm. Many claimed this was an act of Information Warfare. This was not the first “attack” on the Internet, and it certainly won’t be the last, but was this an act of Info War? I don’t believe it was. Let’s compare the tragic events from the 11th with the Nimda worm to see if we can draw some conclusions about Information Warfare. On September 11th, without warning, 4 commercial jets were hijacked. Contrary to the historic profile of such events, no negotiations took place. Instead the aircraft were flown into prominent U.S. landmarks. Both World Trade Center towers were completely destroyed, and the Pentagon suffered major damage as a result of this attack. On or about September 18th, the first signs of the Nimda worm began to surface. This worm used several methods to propagate around the Internet. It was again targeted at computers running various Microsoft products (Internet Information Server, and Outlook). It rapidly moved throughout the Internet, compromising thousands of computer systems around the world. So, was it Info War? In a word... No! This was just another Internet worm. It used well-known vulnerabilities just like previous worms, Trojans, and malicious software. It was not targeted against prominent U.S. targets. It did not specifically target any of the U.S. critical infrastructures. Instead, it indiscriminately scoured the Internet for vulnerable computers, infected them, and moved on. This is not what we can expect in the event of a true Information War. So what is Information Warfare? There have been many definitions of Information Warfare offered. My favorite definition comes from Dr. John Alger, at a seminar on Information Warfare (I found this reference here).
Information warfare is the offensive and defensive use of information and information systems to deny, exploit, corrupt, or destroy, an adversary’s information, information-based processes, information systems, and computer-based networks while protecting one’s own.

Now that we have a definition, we can think about the form these attacks might take. How will we know if and when we’ve been targeted by an Info War attack? Let’s see what lessons, if any, we can learn from the events of September 11th. The airline hijackings and subsequent attacks against the World Trade Center and the Pentagon buildings were almost a complete surprise. It turns out the Intelligence community was aware of a threat of “unprecedented attacks” against the U.S., but they didn’t have the specifics. It also quickly became clear that these attacks were very well planned out. Preparations had been ongoing for at least 12-18 months. Terrorists had established a presence in the community, and had even taken flying lessons. Even now we don’t know the extent of their plans, or how long they’ve been setting this up. I suggest that we will get hit with Info War attacks in a very similar manner. We already know the threat, in vague terms. There will be “offensive use of information and information systems to deny, exploit, corrupt, or destroy our information, information-based processes, information systems, and computer-based networks.” More simply put, we’ll be the target of crippling viruses and worms. Our infrastructure will be infiltrated with the goal of manipulating, corrupting or destroying our data and systems. We’ll also be denied access to our systems and infrastructure by some form of “denial of service” attacks. Hmmm... sound familiar? We’ve been experiencing all these forms of attacks for quite some time, but this is NOT Information Warfare in its true sense. I believe that when the real attacks arrive, we won’t even know we’ve been hit. Not at first, anyway.
I believe that targets of Info War and cyber-terrorism have been identified, and possibly infiltrated. This infiltration may be physical, such as people working under cover at power plants, telecommunications centers and the like, or it may be electronic. There may already be Trojans, viruses and malicious code in our most critical networks and systems, lying dormant for now and awaiting an electronic trigger to wreak havoc. The reality is that if we are going to experience an Info War attack it will probably not be noticed by conventional defensive measures. Our current security defenses are designed around various specific countermeasures:

- Block unused ports or services
- Filter traffic going to allowed ports and services
- Search the remaining traffic for known attack strings
- Use anti-virus programs to search for malicious software

This is not intended to be an all-inclusive list, but it gives a very high-level overview of common defensive measures. These standard measures may be ineffective against Information Warfare. Let’s look at each measure listed above and discuss its weakness.

- Blocking unused ports and services is the foundation of most hardening procedures. If you don’t need the service, disable it so you don’t have the additional overhead of maintaining it. Let’s face it... we all have enough work to do without adding more, unnecessary work. This is a sound concept, but the converse of this rule is to allow access to used ports and services. One of the most common services used on the Internet is http, or Web access. This is also the most attacked and exploited service. This fact should be clear in everyone’s memory after the recent Code Red and Nimda attacks.
- Since we have to allow some traffic over our network (we created the networks to allow some traffic), how do we protect ourselves from allowed traffic? One method is to use content filtering to try and stop attacks from entering our network.
This method is good for information traveling in the clear, or unencrypted. The shortcoming is that any form of encrypted traffic cannot be monitored for content. This includes such common protocols as https, ssh, and VPN traffic. Again, most attacks in recent history have been web based, and they will still work against a server running https. There have also been some recent attacks against ssh that demonstrate this problem as well.

- Another method of stopping attacks against our network is to use an Intrusion Detection System (IDS) to search for signatures of known attacks. There are many shortcomings to this method. First, this only defends us against known attacks. New attacks will not be detected by conventional IDS. Next, these systems generate a huge number of false positives. They search for a string or sequence of characters or data. If this string is contained in innocuous traffic, the IDS will still trigger an alarm. This requires someone to investigate the cause. Too many false alarms, and you have a worthless system that will be largely ignored. An attacker may take advantage of this weakness and flood the network with a huge volume of attacks in an attempt to overload the monitoring system. At this point, it would be much easier to sneak a true attack through the flood of false alerts.
- Anti-virus software has become more prominent as the quantity, maliciousness, and speed of propagation of malicious code has increased. Anti-virus software now detects most Trojans, viruses, worms, and many hacking tools that are available on the Internet. This is a powerful security tool that should be installed on every computer in existence. But this too has its weaknesses. Like an IDS, anti-virus software is only truly effective against known attacks. New attacks usually slip right by, unless it’s a close variant on an older virus.
The signature database must be regularly updated, and during high-profile events, such as the Anna Kournikova virus/worm, some anti-virus sites can be so overwhelmed it would be impossible to download the updates.

As you can see, each type of security measure has its weakness. The combination leaves an opening in our defenses that cannot be closed if we are to maintain any sort of functionality. That’s why most security experts recommend that security be applied in layers. A well-planned and orchestrated Info War attack would take advantage of this combination of vulnerabilities. Specific entities would be targeted. Reconnaissance would be complete, documenting the critical systems in the target infrastructure. Operating system versions, and hosted applications and services, would be identified. A deployment method would be developed. The actual attack could rely on a couple of different scenarios. The most trivial method would be to wait for new vulnerabilities in the targeted systems. With all plans in place, the new attack could be quickly utilized to gain access to the systems. If the attack were carried out quickly enough, the relevant patch might not yet be available. Signatures for the IDS or anti-virus software might not have been developed or distributed. Another, more discreet scenario is also possible. Once the target systems have been profiled, a new exploit could be developed to slip by all defenses. If it were exploited in a limited manner, the exploit might never become known. Where does this leave our defenses? There is a little-explored area of security defense known as anomaly detection that, once fully developed, could provide a much-needed extra layer of protection. Anomaly detection systems look for behavior that deviates from normal system use. It would generally involve an initial baseline of normal system traffic behavior.
Once the profile has been established, any traffic not matching this profile would be flagged for analysis. This would be especially useful in the previously mentioned scenario because an Info War attack is likely to result in some new stream of traffic. If a system is compromised, with the purpose of gaining access to the internal network, the resulting network profile would change. This compromise would have to make use of existing traffic patterns, such as establishing a tunnel via http. But the difference might be inbound traffic on port 80 to a system that has not historically provided this service. Developing an anomaly detection system, or ADS, is a very complex venture. It is likely to be more prone to false alerts than current intrusion detection methods. It would likely require more vigilance, more interaction, and a higher level of technical knowledge and experience to effectively manage. But it’s a method that will hopefully be explored in the near future. With all its potential shortcomings, it would nonetheless provide another layer of security monitoring, and one more defensive tool that might help us in the event of a true Information War.
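The port-80 example above can be sketched in a few lines. This toy detector (the class name and the (host, port) flow format are our own illustration, not a real product) learns which services appear in baseline traffic, then flags flows to services it has never seen:

```python
from collections import Counter

class PortBaselineDetector:
    """Toy anomaly detector: baseline the (host, port) services observed
    during normal operation, then flag traffic to any service that never
    appeared in the baseline. Real systems profile far more dimensions."""

    def __init__(self):
        self.baseline = Counter()

    def train(self, flows):
        # flows: iterable of (dest_host, dest_port) tuples from normal traffic
        self.baseline.update(flows)

    def flag(self, flows):
        # Return the flows that fall outside the learned profile.
        return [f for f in flows if f not in self.baseline]

normal = [("web01", 80), ("web01", 80), ("mail01", 25)]
det = PortBaselineDetector()
det.train(normal)

# Inbound port-80 traffic to a host that never served web before is flagged.
print(det.flag([("web01", 80), ("db01", 80)]))  # [('db01', 80)]
```

A real ADS would also baseline traffic volumes, timing and protocol behavior, which is exactly where the higher false-alert rate the author mentions comes from: the richer the profile, the more benign deviations it will flag.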
Researchers test simian virus to develop AIDS vaccine Tuesday, Sep 17th 2013 The HIV virus has produced one of the most pervasive medical conditions present in today's society: AIDS. After many attempts to stop the virus from spreading, researchers from the Oregon Health & Science University may have discovered the secret to creating a vaccine to prevent the ailment by studying a virus that causes the disease in non-human primates. The simian immunodeficiency virus is very similar to HIV, allowing the university researchers to conduct reasonable experiments with the virus to develop a vaccine. After researchers combined a modified version of the herpes virus cytomegalovirus with the SIV, the CMV initiated a response to create "effector memory" T-cells, which then destroyed SIV-infected cells, according to International Business Times. The vaccine made from this process functionally cured the simians without needing to remove the infection. While the unconventional vaccine still has some work to do, environmental control systems were able to help researchers in their efforts to create the product. The next step for this breakthrough will be to conduct studies on humans to achieve the same results and prepare their immune systems to combat the disease. "To date, HIV infection has only been cured in a very small number of highly publicized but unusual clinical cases in which HIV-infected individuals were treated with antiviral medicines very early after the onset of infection or received a stem cell transplant to combat cancer," said Louis Picker, associate director of the OHSU Vaccine and Gene Therapy Institute. "This latest research suggests that certain immune responses elicited by a new vaccine may also have the ability to completely remove HIV from the body." Utilizing a temperature monitoring system to keep an eye on vaccine conditions will ensure that the products are viable for use.
Most vaccines must typically be refrigerated immediately in an environment between 35 and 46 degrees Fahrenheit, according to the Centers for Disease Control and Prevention. Putting these products in extreme heat or cold could damage the product and decrease potency. Many frozen vaccines may not show indications of reduced effectiveness; however, these items are sensitive to major temperature differences from their optimal condition. Keeping a handle on storage and handling plans will help medical personnel ensure that the vaccines remain viable for use. Developing a detailed strategy, from management to inventory organization, will give staff an idea of what is expected and create a unified process for vaccine conduct. While a temperature sensor will keep products in an appropriate environment, protocols should also be developed for disposal, not only of used products but of items that have expired or were exposed to inappropriate conditions. "Think of your vaccine storage equipment as an insurance policy to protect patients' health and safeguards your facility against costly vaccine replacement, inadvertent administration of compromised vaccine and other potential consequences (e.g., the costs of revaccination and loss of patient confidence in your practice)," according to the CDC. "Reliable, properly maintained equipment is critical to the vaccine cold chain." The AIDS breakthrough is the first step toward developing a solid solution that will treat the condition. Appropriately handling vaccines will ensure that patients receive the appropriate care and give them the tools to beat the disease.
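The 35-46 °F range quoted above translates directly into the kind of excursion check a temperature monitoring system might run against its log. A minimal sketch (the function name and reading format are hypothetical, for illustration only):

```python
# CDC-recommended refrigerated-vaccine range, per the article: 35-46 °F.
SAFE_LOW_F, SAFE_HIGH_F = 35.0, 46.0

def find_excursions(readings):
    """Return the (timestamp, temp) pairs that fall outside the safe range.
    `readings` is a list of (timestamp, temp_fahrenheit) tuples."""
    return [(t, temp) for t, temp in readings
            if not SAFE_LOW_F <= temp <= SAFE_HIGH_F]

log = [("08:00", 38.2), ("09:00", 45.1), ("10:00", 47.3), ("11:00", 33.9)]
print(find_excursions(log))  # [('10:00', 47.3), ('11:00', 33.9)]
```

Flagged readings would then feed the disposal and revaccination protocols the article describes, since a vaccine exposed to an excursion may need to be quarantined even if it shows no visible change.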
A research team from the University of California Los Angeles created the first fully stretchable organic light-emitting diode (OLED) and shared findings in a paper published in Advanced Materials last month. In the past, stretchable electronics featured at least one brittle element. The overall pliability of the electronic device was therefore only as robust as its most rigid component. The researchers at UCLA solved the problem using organic compounds and stretchable polymers to create the stretchable OLED said Qibing Pei, professor of materials science and engineering at UCLA and principal investigator of the project. "To make a stretchable electronic device you must find materials for the different purposes that are stretchable. We did that here with carbon nanotubes and a polymer electrode infused into the carbon nanotube coatings to preserve conductance," Pei said.

[Image caption: Polymer light-emitting electrochemical cells sustain up to 45 percent linear strain.]

Pei says implications for stretchable OLEDs range from multimedia-enhanced clothing to flexible display technology. He added that flexible display technology would have the potential to change the way mobile hardware is designed. "Right now smart phone dimensions are determined by the screen size. If the display could roll out or stretch when needed, the form factor decisions around smart phones would change," said Pei. The working prototype built at UCLA emits blue light in a one centimeter square area and stretches up to 45 percent. With enough monetary investment in the research, Pei suspects a flexible display is 3 to 5 years in the future, or whenever the packaging problem is solved. "The biggest barrier is the packaging. The polymer we use is sensitive to air and moisture. If you take the device out in the air it won't last long," Pei said. "The current encapsulation technology is plastic and rigid.
So either we have to develop a stretchable encapsulation material or improve the polymer so you don't have to protect it from air." Pei said the research, funded by the National Science Foundation, began about a year ago and future goals include building a stretchable transistor, the semiconductor devices used to amplify and switch electronic signals.
Following Hurricane Sandy, let's say you've been asked to set up replication to a disaster recovery site. Your company has chosen to back up its core operations located in Boston with space in a collocation center in Chicago -- about a thousand miles away. You've done the math and determined that you'll need a 500Mbps circuit to handle the amount of data necessary to replicate and maintain recovery-point SLAs. As you get your Chicago site and connectivity lit up, you decide to test out your connection. First, a ping shows that you're getting a roundtrip time of 25ms -- not horrible for such a long link (at least 11ms of which is simple light-lag). Next, you decide to make sure you're getting the bandwidth you're paying for. You fire up your laptop and FTP a large file to a Windows 2003 management server on the other side of the link. As soon as the transfer finishes, you know something's wrong -- your massive 500Mbps link is pushing about 21Mbps. Do you know what's wrong with this picture? If not, keep reading because this problem has probably affected you before without your realizing it. If you decide to move to the cloud or implement this kind of replication, it's likely to strike again. This story, "All IT pros need to understand TCP windowing," was originally published by InfoWorld.
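The culprit is TCP windowing: a single TCP stream can have at most one receive window of unacknowledged data in flight per round trip, so window size divided by RTT caps throughput no matter how fat the pipe is. Assuming the classic 65,535-byte maximum window that applies when the TCP window-scaling option is not in use (a plausible configuration for an older Windows 2003 server, though the article does not state it), the numbers in the story line up almost exactly:

```python
# Single-stream TCP throughput ceiling = receive window / round-trip time.
window_bytes = 65535   # max window without the window-scaling option
rtt_seconds = 0.025    # the 25 ms ping time from Boston to Chicago

throughput_mbps = window_bytes * 8 / rtt_seconds / 1e6
print(f"{throughput_mbps:.1f} Mbps")  # ~21.0 Mbps, the observed FTP rate

# Window needed to fill the 500 Mbps circuit at this RTT
# (the bandwidth-delay product):
needed_kib = 500e6 / 8 * rtt_seconds / 1024
print(f"{needed_kib:.0f} KiB")  # ~1526 KiB, far beyond 64 KiB
```

The fix is correspondingly simple in principle: enable and tune TCP window scaling on both endpoints, or spread the replication across multiple parallel streams.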
Some Viruses or Worms Can Reinfect Your Computer. Why? This can happen, for example, if the malware (worm, trojan, backdoor, etc.) spreads through Windows network and drive shares protected by weak passwords. An example of this is the Randex worm. Change all passwords on your computer to strong ones and remove unneeded user accounts; in this way you can prevent reinfection. Another option is to install a firewall (included, for example, in F-Secure Internet Security and F-Secure Anti-Virus Client Security) and use it to block Windows network shares. If the virus warning keeps reappearing every time you start a browser, change the home page to a different page. This is especially likely if the full path to the infected files includes text such as "Temporary internet files" or "Cache"; the warning then probably comes from an HTML document that contains malicious code segments. An example of this warning is the Iframe exploit:
In Palm Beach County, Fla., innovative ideas for storm water management systems surfaced in the early '90s, when the Northern Palm Beach County Improvement District (Northern) faced growth and development within its jurisdiction. Managers of the special district had to come up with a way to make their storm water systems more efficient without increasing staff or budget.

Water Quality Control

The Florida Legislature created Northern as an independent special district in 1959 in order to provide water management and infrastructure services for properties within its current 128 square-mile jurisdiction. Northern's goal is to provide innovative design, management and oversight of water resource management systems. Northern provides a wide range of services to its constituents, including:
- Maintenance of canals, waterways and lakes

Growth and the Development of the Telemetry System

"We were planning for the future in terms of our needs for the area, and were looking at new ways of managing our systems," said O'Neal Bardin Jr., executive director of Northern. "At the time, there weren't effective storm water systems available, so we looked for alternatives." During the research process, Northern met with several different entities, including the South Florida Water Management District (SFWMD) and other municipal utility operations. There was one common denominator -- the use of telemetry systems for remotely monitoring activity. However, the technology wasn't being used for storm water management. It was being utilized for wastewater. "We were the first to ask the manufacturer to design a telemetry system to monitor storm water," Bardin said. The manufacturer was Data Flow Systems based in Melbourne, Fla. The system Northern began with initially cost approximately $50,000. The Tac II Telemetry System was placed in one specific unit of development and consisted of four remote telemetry units and one central telemetry unit.
Telemetry works by measuring and communicating data through wireless radio signals from remote sources to receiving stations. Northern's system runs through 59 wireless radio signals. It uses programmable logic controllers for monitoring telemetry stations throughout Northern's jurisdiction, which covers 128 square miles of Palm Beach County. Licensing is required through the Federal Communications Commission. The system can monitor a total of 180 different points within a single pump station. A point can be a proximity switch for a door, for example. With the advent of the telemetry system, Northern had expanded its capacity and, in doing so, greatly decreased its response time to any situation requiring attention, which could be anything from blockage in a drainage system to rising water levels due to a rainstorm. An operations staff of six people can handle all aspects of monitoring, even from their homes. "Especially during hurricane season, it is helpful to be able to monitor our systems from home," said Bobby Polk, operations manager. The Hyper Supervisory Control and Data Acquisition (SCADA) Server Telemetry System has improved the efficiency of Northern's storm water management systems in the following ways:
- Fifty-nine different sites are monitored at once from a remote central location.
- Reaction time to an event has improved by 50 percent.
- The system monitors itself and is able to dial on-call staff via computer modem for any emergency alert during evening or weekend hours.
- It allows for remote control of emergency operable gates and canal water levels.
- Security is also monitored at all sites, especially pump stations.

Prior to a storm, the operations team can begin monitoring water elevations to determine whether there is a need to lower or "draw down" the levels to prevent flooding.
The Operations Pilot Program

As Northern mastered the telemetry system, its reputation grew as a water control district that could manage its jurisdiction efficiently. Northern entered into a pilot program to work extensively with SFWMD over a three-year period, which ends in March 2004. The result will be the ability to open and close floodgates as needed. The pilot allows Northern's operations staff to respond faster to any emergency or storm event. Quick response is crucial in Florida because of the number of unpredictable storms that may cause flooding. The pilot project has helped everyone involved understand the impact of opening a floodgate on neighboring areas in other jurisdictions. Northern will complete the pilot program and then begin the permit process that may allow for a more customer service-oriented response to any flooding or emergency situation within a defined area of its jurisdiction. "There is no way to avoid flooding in South Florida," said Tommy Strowd, the former operations director who helped oversee the project for SFWMD. "This project significantly reduces the possibility that an operational problem will worsen a flooding situation." Susan Nefzger is the community information specialist for the Northern Palm Beach County Improvement District.
Following my blog last week about the transition to GPU computing in HPC, I ran into a couple of items that cast the subject in a somewhat different light. One was a paper written by a team of computer science researchers at Georgia Tech titled “On the Limits of GPU Acceleration” (hat tip to NERSC’s John Shalf for bringing it to my attention). The other item surfaced as a result of an Intel presentation on the relative merits of CPU and GPU architectures for throughput computing, titled “Debunking the 100X GPU vs. CPU Myth.” I think you can guess where this is going. Turning first to the Georgia Tech paper, authors Richard Vuduc and four colleagues set out to compare CPU and GPU performance on three typical computations in scientific computing: iterative sparse linear solvers, sparse Cholesky factorization, and the fast multipole method. If you don’t know what those are, you can look them up later. Suffice it to say that they are representative of HPC-type algorithms that are neither completely regular, like dense matrix multiplication, nor completely irregular, such as graph-intensive computations. For these codes, Vuduc and company found that a GPU was only equivalent to one or two quad-core Nehalem CPUs performance-wise. And since a single high-end GPU draws nearly as much power as two high-end x86 CPUs, from a performance-per-watt standpoint, the GPU advantage nearly disappears. They also bring up the fact that the additional cost of transferring data between the CPU and the GPU can further narrow the built-in FLOPS advantage enjoyed by the GPU. The authors sum it up thusly: In particular, we argue that, for a moderately complex class of “irregular” computations, even well-tuned GPGPU accelerated implementations on currently available systems will deliver performance that is, roughly speaking, only comparable to well-tuned code for general-purpose multicore CPU systems, within a roughly comparable power footprint.
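The performance-per-watt argument is simple arithmetic, and worth making concrete. A sketch with TDP-class figures I have assumed (roughly 95W per quad-core Nehalem and 188W for a Tesla C1060; illustrative numbers, not measurements from the paper):

```python
def perf_per_watt(throughput: float, watts: float) -> float:
    """Throughput (arbitrary units) divided by power draw."""
    return throughput / watts

# Assumed figures: two ~95 W quad-core CPUs vs. one ~188 W GPU
# delivering the *same* throughput on an irregular code.
cpus = perf_per_watt(2 * 1.0, 2 * 95.0)
gpu = perf_per_watt(2 * 1.0, 188.0)
print(round(gpu / cpus, 2))  # ~1.01 -- essentially a wash per watt
```

Only when the GPU's raw speedup is several-fold, not merely "one or two CPUs' worth," does its perf-per-watt advantage reappear.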
The GPU technology chosen was based on NVIDIA’s Tesla C1060/S1070 and GTX285 systems, so the authors do admit that the results may have been very different if they had run these codes on the latest ATI hardware or the new NVIDIA Fermi card. Also, while the researchers made an attempt to tune both the CPU and GPU codes for best performance, they may have missed some important opportunities. Presumably the Georgia Tech research was unencumbered by commercial agendas. Support for the work came from the National Science Foundation, the Semiconductor Research Corporation, and DARPA. It is worth noting, however, that Intel was also listed as a funder. Hmmm. Which provides an interesting segue to our second item. At the International Symposium on Computer Architecture in Saint-Malo, France, Intel presented a paper that cast a few more aspersions on the lowly graphics processor. Like the Georgia Tech researchers, the Intel folks did their own CPU vs. GPU performance benchmarking, in this case matching the Intel Core i7 960 against the NVIDIA GTX280. They used 14 different throughput computing kernels and found a mean speedup of 2.5X in favor of the GPU. The GPU did best on the GJK kernel (collision detection), with a 14-fold performance advantage, and worst on the Sort and Solv kernels, where the CPU actually outran the GPU. The GPU-loving folks at NVIDIA took this as good news, however, noting that a 14-fold performance advantage is quite nice, thank you. In a blog post this week, NVIDIAn Andy Keane writes: It’s a rare day in the world of technology when a company you compete with stands up at an important conference and declares that your technology is *only* up to 14 times faster than theirs. In fact in all the 26 years I’ve been in this industry, I can’t recall another time I’ve seen a company promote competitive benchmarks that are an order of magnitude slower. Of course the 14X value was the best kernel result for the GPU, not the average.
Intel’s real point was that they couldn’t produce 100-fold increases in performance on the GPU, like NVIDIA claims for some apps. NVIDIA actually freely admits that not all codes will get the two-orders-of-magnitude increase. Keane does, however, list ten examples of real codes where users did record a 100X or better performance boost compared to a CPU implementation. He also points out that for these throughput benchmarks, Intel relied on a previous-generation GPU, the GTX280, and doubted that the testers even optimized the GPU code properly — or at all. So what does it all mean? Well, when it comes to the CPU vs. GPU performance wars, it pays to know who’s running the benchmarks — not only in relation to vendor loyalties, but also programming skills, software tools they used, etc. It’s also worth comparing like-to-like as far as processor generations. In this regard, I think the NVIDIA Fermi GPU should be used as sort of a ground floor for all future benchmarks. To my mind, it represents the first GPU that can really be called “general-purpose” without rolling your eyes. It’s also important to keep in mind the effort required to port these parallel codes to their respective platforms. Skeptics are quick to point out that porting code to a GPU requires a significant up-front investment. But in his blog Keane reminds us that scaling codes on multicore CPUs is not a guaranteed path to delivering performance gains either. As a wise computer scientist once said: “All hardware sucks; all software sucks. Some just suck more than others.”
Productivity is the new buzzword, and HPC now stands for High Productivity Computing; even HPCwire has adopted this moniker. Can we usefully define productivity? Several metrics have been proposed, most being difficult or impossible to use in any scientific way. The performance metric is typically results per time unit, like flops per second, or runs per day. A productivity metric has a different denominator, usually convertible into dollars (or other currency), such as programmer hours, total system cost, or total power usage. For example, a simple (and useless) metric, let’s call it M1, is to measure the speedup gained for an application relative to the cost of attaining that speedup. Speedup is measured relative to some base time, and cost can be measured in dollars or hours (for programmer time). If we fix the target system, the hardware cost is constant; software development cost is sometimes normalized across different programmers by counting source lines of code (SLOC), which is coarse but defensible. Using SLOC favors higher level languages, which have shorter programs, though the performance may suffer. The metric M1 is defined as M1=Sp/SLOC where Sp is the speedup, and SLOC is the program length, estimating the programming effort. One study used this metric and indeed found that sequential MATLAB competes well with parallel C or Fortran; because the MATLAB program is shorter, the productivity metric is high, even though the absolute performance does not measure up to a parallel implementation. On the other hand, high-level parallel array languages like ZPL (http://www.cs.washington.edu/research/zpl) benefit both from low SLOC and high performance, and really shine using this metric. One problem with M1 as a metric is that it implicitly assumes that you will run your program only once. 
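As a concrete sketch of M1 (all numbers invented for illustration), a short high-level script with modest speedup can out-score a long, fast parallel implementation, because the denominator dominates:

```python
def m1(speedup: float, sloc: int) -> float:
    """M1 = Sp / SLOC: speedup attained per source line of effort."""
    return speedup / sloc

# Invented comparison: a 50-line MATLAB-style script at 2x speedup
# vs. a 2,000-line parallel C implementation at 20x speedup.
print(m1(2.0, 50))     # 0.04
print(m1(20.0, 2000))  # 0.01 -- the far faster program rates as less "productive"
```

Weighting by the expected number of production runs would flip that ranking, which is exactly the run-once objection.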
If you run your program many times, it may be worthwhile to invest a great deal of additional effort for a comparatively small speedup; metric M1 will not show this to be beneficial, but the total time savings may change your mind. Another problem with M1 is that it can show improved productivity even if the performance decreases. While it is true that most of our standard computing needs are not particularly sensitive to performance (think email), this is not the segment that HPC is intended to address. (If it is, someone let me know. I want out!) Even in the high performance world, we might be willing to accept small performance decreases if the development time and cost are significantly lower. However, rating a slow program as highly productive is counterproductive (pun intended). Yet another problem with M1 is that it ignores additional considerations, such as debugging, portability, performance tuning, and longevity. These all fit into the productivity spectrum somewhere. Let’s discuss each briefly. Debugging includes finding any programming errors as well as finding algorithmic problems. Interactive debuggers are common, but as we inexorably move into the world of parallel programming, these will have to scale to many simultaneously active threads. Right now the only commonly available scalable parallel debugger is Totalview, which sets the standard. Mature systems with available, supported debuggers are often preferable to a newer system where debugging is limited to print statements. Portability concerns limit innovation. If we need portability across systems, we are unlikely to adopt or even experiment with a new programming language or library — unless or until it is widely available. Standard Fortran and C address the portability problems quite nicely, and C++ is also relatively portable. A common base library, such as MPI, however difficult to use, is at least widely available, and if necessary, we could port it ourselves. 
Another aspect of portability is performance. When we restructure a program for high performance on one machine, we hope and expect the performance improves on other platforms. Programmers who worked on the vector machines in years past found that the effort to restructure their code for one vector machine did, in fact, deliver the corresponding high performance on other vector machines; the machine model was stable and easy to understand. MPI-based programs benefit from this; a parallel MPI program will run more or less as well in parallel on any reasonable MPI implementation. Longevity concerns also limit innovation. We might be willing to adopt a new programming language, such as Unified Parallel C, for a current research project, but we are unlikely to use it for a product that we expect to live for a decade or more. Regardless of one’s feelings about UPC as a language, we are typically concerned that we will write a program today for which there will be no working compilers or support in ten years. I had the same problems with Java in its early years; programs that I built and used for months would suddenly stop building or working when we upgraded our Java installation. We know what we really mean by high productivity, though it’s hard to quantify: we want to get high performance, but spend less to get it. Usually we mean spending less time in application development. If we go back 50 years, productivity is exactly what the original Fortran developers had in mind: delivering the same performance as machine language, with the lower program development cost of a higher level language. We would do well to be as successful as Fortran. There are no magic bullets here; someone has to do the work. There are four methods to improving productivity. The first, and the one we’ve depended upon until now for improved performance (and hence productivity), is better hardware; faster processors improve performance. 
Hardware extraction of parallelism has long been promised (as has software parallelism extraction) and has been quite successful at the microarchitectural level (e.g., pipelined superscalar processors). But the gravy train here has slowed to a crawl. Hardware benefits are going to come with increased on-chip parallelism, not improved speed, and large scale multiprocessor parallelism is still the domain of the programmer. The second (quite successful) method is faster algorithms. Sparse matrix solvers can be an order of magnitude more efficient than dense solvers when they apply, for instance. No hardware or software mechanism can correct an inappropriate or slow algorithm. Algorithm improvements are often portable across machine architectures and can be recoded in multiple languages, so the benefits are long-lived. So while new algorithm development is quite expensive, it can pay off handsomely. The third method, often proposed and reinvented, is to use a high performance library for kernel operations. One such early library was STACKLIB, used on the Control Data 6600 and 7600 (ten points if you remember the etymology of the name). This library morphed over time into the BLAS, and now we have LINPACK and LAPACK. The hope is the vendor (or other highly motivated programmer) will optimize the library for each of your target architectures. If there are enough library users, the library author may have enough motivation to eke out the last drop of performance, and your productivity (and performance) increases. In the parallel computing domain, we have had SCALAPACK, and now we have RapidMind and (until recently) PeakStream. In these last two, the product is more than a library, it’s a mechanism for dynamic (run-time) code generation and optimization, something that was just recently an active field of research. 
The upside of using a library is that when it works — when the library exists and is optimized on all your platforms — you preserve your programming investment and get high performance. One downside is that you now depend on the library vendor for your performance. At least with open source libraries you can tune the performance yourself if you have to, but then your productivity rating drops. More importantly, the library interface becomes the vocabulary of a small language embedded in the source language. Your program is written in C or Fortran, but the computation kernel is written in the language of whatever library you use. When you restrict your program to that language, you get the performance you want. If you want to express something that isn’t available in that language, you have to recast it in that language, or work through the performance problems on your own. With the latest incarnations of object-oriented languages, the library interface looks more integrated with the language, complete with error-checking; but you still miss the performance indicators that vector compilers used to give (see below). The fourth method is to use a better programming language; or, given a language, to use a better compiler. New languages are easy to propose, and we’ve all seen many of them over the decades; serious contenders are less common. Acceptance of a new language requires confidence in its performance, portability, and longevity. We often use High Performance Fortran as an example. It had limited applicability, but had some promise within its intended domain. It had portability, if only because major government contracts required an HPF compiler. However, when immature implementations did not deliver the expected performance, programmers quickly looked in other directions. Perhaps it could have been more successful with less initial hype, allowing more mature implementations and more general programming models to develop. 
We now see new parallel languages on the horizon, including the parallel CoArray extensions to Fortran (currently on the list for addition to Fortran 2008), Unified Parallel C, and the HPCS language proposals. Let’s see if they can avoid the pitfalls of HPF. Compilers (or programming environments) also affect productivity. Early C compilers required users to identify variables that should be allocated to registers and encouraged pointer arithmetic instead of array references. Modern compilers can deliver the same performance without requiring programmers to think about these low-level details. Compilers that identify incorrect or questionable programming practice certainly improve productivity, but in the high performance world we should demand more. Vectorizing compilers in the 1970s and 1980s would give feedback about which inner loops would run in vector mode and which would not. Moreover, they were quite specific about what prevented vectorization, even down to identifying which variable in which subscript of which array reference in which statement caused the problem. This specificity had two effects: it would encourage the programmer to rewrite the offending loop, if it was important; and it trained the programmer how to write high performance code. Moreover, code that vectorized on one machine would likely vectorize on another, so the performance improvements were portable as well. Learning from the vector compiler experience, we should demand that compilers and programming tools give useful, practical performance feedback. Unfortunately, while vectorization analysis is local to a loop and easy to explain, parallel communication analysis is global and can require interprocedural information. One HPF pitfall that the HPCS languages must avoid is the ease with which one can write a slow program. In HPF, a single array assignment might be very efficient or very slow, and there’s no indication in the statement which is the case. 
A programmer must use detailed analysis of the array distributions and a knowledge of the compiler optimizations to determine this. MPI programs, as hard to understand as they may be, at least make the communication explicit. The HPCS language proposals to date have some of the same characteristics as HPF, and implementations will need to give performance hints to ensure that users can get the promised performance/productivity. The key to a useful productivity metric is the ability to measure that we are improving the productivity of generating high performance programs. We may measure productivity as performance/cost, but we don’t get true high productivity by simply reducing the denominator faster than we reduce the numerator. We should want to reduce the denominator, the cost, while preserving or even increasing the performance. Michael Wolfe has developed compilers for over 30 years in both academia and industry, and is now a senior compiler engineer at The Portland Group, Inc. (www.pgroup.com), a wholly-owned subsidiary of STMicroelectronics, Inc. The opinions stated here are those of the author, and do not represent opinions of The Portland Group, Inc. or STMicroelectronics, Inc.
Are Those Web Applications Secure?

In 2001, 75 percent of cyber-attacks and Internet security violations were generated via Internet applications, according to analyst firm Gartner. Now, nearly three years later, WebCohort, Inc., a leader in Web application security, says those Web applications are still not secure. In January 2004, the Federal Trade Commission announced that Internet-related fraud had led to more than half a million consumer complaints in 2003, with estimated losses of $200 million in the United States alone. Many of these losses can be blamed on unsecured Web applications, which leave a door open for hackers and for Internet fraud. WebCohort's Application Defense Center studied four years of penetration testing on more than 250 Web applications, including e-commerce, online banking, enterprise collaboration and supply chain management sites, and concluded that at least 92 percent of Web applications are vulnerable to attack. The most common application-layer vulnerabilities include:
- Cross-site scripting (80 percent)
- SQL injection (62 percent)
- Parameter tampering (60 percent)
- Cookie poisoning (37 percent)
- Database server (33 percent)
- Web server (23 percent)
- Buffer overflow (19 percent)

For more detailed descriptions of these vulnerabilities, go to http://www.imperva.com/application_defense_center/glossary/. These types of attacks are common, yet many enterprises have not secured their Web sites, their applications or their servers against them. Firewalls and intrusion detection or prevention systems do not provide an adequate level of protection from hackers. According to Shlomo Kramer, CEO of WebCohort, increased network security has led hackers to see Web applications as easier targets. "We are only beginning to see the risks to businesses and consumers these vulnerabilities introduce," he said. For more information, see http://www.webcohort.com. Emily Hollis is managing editor for Certification Magazine.
She can be reached at firstname.lastname@example.org.
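To make the second item on that list concrete, here is a minimal sketch of SQL injection and its standard parameterized-query fix (my own example, not taken from WebCohort's study):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"

# Vulnerable: string interpolation lets the input rewrite the query logic,
# turning the WHERE clause into a tautology that matches every row.
unsafe_rows = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % attacker_input
).fetchall()

# Safe: a parameterized query treats the input as data, never as SQL.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(len(unsafe_rows), len(safe_rows))  # 1 0 -- injection matched every row
```

The same principle (keep untrusted input out of the code channel) underlies the cross-site scripting fix as well: escape output, don't concatenate it into markup.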
The Council of Europe has proposed international rules to govern the internet and insisted human rights must be at the fore. The draft rules, which could lead to a treaty to protect the international flows of information comparable to maritime rules protecting shipping lanes, would seek to clarify in law the way different countries depend on one another for the internet. Maud de Boer-Buquicchio, deputy secretary general of the Council of Europe (CoE), told a conference in Strasbourg on Monday that the internet should be governed by international rules that protect freedom of expression across borders as well as the security of critical infrastructure. "We cannot simply sit and wait for some hidden force governing this new ecology to achieve a self-balance that will miraculously satisfy all our needs and expectations," she said in reference to those who claimed the market should be left to regulate the net.

Technical detail and human rights

The CoE presented the conference with draft rules inspired by Article 10 of the European Convention on Human Rights - the right to freedom of expression - drafted over two years by a committee of academics and civil servants from institutions in Austria, France, Germany, Russia and Switzerland. Jan Malinowski, policy lead for the CoE initiative, told Computer Weekly: "In terms of fundamental rights, of access to information, of freedom of expression, of participation in democracy, you need to keep the internet running today. The rest can wait. "The technical aspect cannot be separated from human rights. We hear the argument that there is a clear distinction between [them]. I don't believe there is such a big gap.
Technical decisions impact human rights; there consequently has to be a policy that ensures the fundamentals are preserved while working out the technical solutions." The rules would make architects answerable to human rights law. But in a radical departure for treaties, engineers may be brought into the fold and given a say over its drafting. The CoE is seeking to emulate the multi-stakeholder model of internet governance, which involves state, private and civil stakeholders, in creating international rules for internet governance. Malinowski proposed a process so unconventionally fluid that its outcome could not be determined. All he could say for sure was that the CoE would encourage as wide a participation as possible towards an international system to protect the internet. He mooted private sector agreements such as those already used for corporate social responsibility.

Cross-border internet governance

The Internet Engineering Task Force, the body of engineers who work on the internet architecture, has recently been in heated debate over whether it would unjustifiably politicise its work by building better privacy protections into its protocols. The problem of cross-border internet governance has been addressed in recent months by UK foreign secretary William Hague and US secretary of state Hillary Clinton. Brazil has led the way by introducing internet governance rules, but the US has yet to back the European effort. The rules would seek to formalise the system of mutual support that has helped the internet bounce back from its problems. Boer-Buquicchio used the example of a woman accidentally shutting the internet off for five hours a fortnight ago after putting a spade through a cable in the ground. The infamous cyber attack on Estonia in 2007 was suppressed by an unorchestrated defence operation mounted by internet engineers around the world.
More difficult problems have been raised by plans in the UK to censor traffic and deny websites the right to operate, and the Egyptian government's internet blocking as a means of repression during the recent uprising.
Hot Stuff! Multi-Factor Authentication In security terms, authentication describes any of a number of mechanisms that may be used to demonstrate or prove user identity. These include techniques or technologies like account and password authentication, pass phrases, various challenge-response mechanisms, smart cards, security tokens, and biometric devices that may scan retinal patterns, fingerprints, or voiceprints to check and demonstrate human identity. Multi-factor authentication simply means that two or more authentication mechanisms are combined to provide a higher level of authentication than any single mechanism could provide on its own. The most common (and cheapest) form of multi-factor authentication is two-factor authentication, where two authentication mechanisms combine to raise the bar on entry to specific systems or services. Laptops or notebook computers can be configured to require two forms of authentication–typically, account and password plus a security token or a smart card and a PIN—so that thieves who steal such machines cannot access their contents despite physical possession of the machine (which permits tools like NT Locksmith to break through password/account protection on Windows XP or 2000 systems with ease). Likewise, some such configurations combine password/account information at the operating system with different password/account information to access drive-level encryption software. Without both sets of keys, as it were, nobody can access a machine’s contents, thereby making it safe enough to take on the road. Two-factor authentication is also often used when employees, partners, or contractors require remote access to networks and systems. 
In these situations, password/account information (something a user knows) is combined with a physical device like a security token or smart card (something the user has in his or her possession) or with biometric data (something a user is) to determine if remote access will be allowed or denied. Recent widespread adoption of easy-to-use device interfaces like USB and smart card readers has made this approach affordable; other two-factor systems avoid added hardware costs by using cell-phones and text messaging to obtain one-time-use entry passwords that must be used with normal password/account authentication to gain system access. Token and mobile phone based vendors of two-factor authentication systems appear in Table 2.

Table 2: Two-factor authentication systems
Name & type:
- Ikey (token based)
- ASAS (token based)
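The one-time-use passwords mentioned above are today most often generated with a time-based HMAC scheme (TOTP, in the style of RFC 6238): server and device share a secret, and each derives a short-lived code from it, so a matching code is evidence of possession of the device. A minimal Python sketch using only the standard library; the secret below is the published RFC test value, not something you would deploy:

```python
import hmac
import hashlib
import struct
import time

def totp(secret, t=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238 style, HMAC-SHA1)."""
    counter = int(time.time() if t is None else t) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and device both hold the secret; each computes the code for the
# current 30-second window, so agreement proves "something you have."
secret = b"12345678901234567890"           # RFC 4226/6238 test secret
print(totp(secret, t=59))                  # prints 287082 (published test vector)
```

A real deployment would also accept codes from adjacent time windows to tolerate clock drift, and rate-limit verification attempts.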
In the first article of this series, Technology: Can't Live with It, Can't Live without It, we discussed whether we control technology, or if technology already controls us. I introduced a number of dilemmas that modern technology presents us. But technology innovation also comes to the rescue. Innovation can be defined as the art of lifting constraints. The Sony walkman lifted a constraint called “size.” It made it possible to pick up a tape recorder and make it portable. MP3 players, a few steps later in the same line of innovation, eliminated the capacity constraints so you don’t have to choose what music to bring. MP3 players also eliminated a stability constraint, allowing people to run while listening to music in high quality. In business, service-oriented architectures and model-driven applications eliminate the need to choose between standard functionality, while servicing your unique requirements. Internet technology dramatically reduced transaction costs, which allows organizations to outsource activities, improving both quality and price. No choice between the two is needed anymore. These constraints are all material, physical constraints. It is only logical that technology conquers those first. But as we saw with the example of the MP3 player, eliminating or drastically pushing one constraint usually only reveals another. Once most practical physical constraints are lifted, presenting the question how to do things, there is a completely new level of dilemmas and constraints. If there are no barriers to using technology in terms of time or money, the question moves to whether or why we should use certain technologies, or functionalities and possibilities offered by technology. Both the consequentialist and universalist would agree that those are worthwhile and more fundamental questions. They would just disagree on when to answer them. 
Consequentialists would judge the situation based on whether the outcome is good; universalists would like to determine that up front, based on the intention. The new set of dilemmas, constraints or barriers for technology innovation is not technical but ethical in nature. What should we do with technology and what not? Or should we do everything technology allows us to do, as this is what evolution suggests? What knowledge should we have and what not? Should the freedom for research be restricted because of possible unethical consequences? I stated it before. You can't undo knowledge, an issue for the consequentialists, and you can't determine all intention up front, an issue for the universalists. The two instruments used most so far are regulation and transparency. Regulation is a top-down approach. Some types of research are (or have been) forbidden. Think, for instance, of stem cell research. Transparency is more of a peer-oriented mechanism. Research and research data should be public.1 This is pretty well established in the exact sciences, but is not on the level where it should be in many of the social sciences. But rules and procedures, as valuable and needed as they are, can only do so much. The forces of curiosity, innovation, progress and evolution seem to find ways around them. Nobel-prize winner Manfred Eigen suggests that the answer is not in trying to regulate and restrict knowledge, but in accumulating even more knowledge to harness all the knowledge we already have to get a grip on our future. He essentially proposes to use the forces of progress themselves to control and steer them. In other words, the best way forward is even more forward. But what additional knowledge do we need? Three areas come to mind: knowledge of the basics, usage feedback and contextual knowledge.

Understanding the Basics
In his 2008 article "Is Google Making Us Stupid?" in Atlantic Magazine, technology writer Nicholas Carr confesses his skill of deep reading is in danger.
Is the waning ability to sit down with an article, essay or book, really getting into the story or carefully following a train of thought as it is laid out, a symptom of middle-age mind rot? No, it's the Internet. As the mind continuously reprograms itself, a different style of reading leads to a different style of thinking. And the Web is structured around a much more fragmented style of reading – little nuggets of information, linked together in countless ways, and surrounded by advertisements. The Web seems to be built for distraction, hopping from one small bit to the next, instead of the focus that traditional book reading invites the reader to have. Our reading strategies have changed from being effective, absorbing new knowledge into our own frame of mind, to being efficient, quickly finding what we are looking for. Is this different reading strategy a bad thing? Socrates, in the writings of Plato, argued that reading leads to rhetoric deficiency, compared to debating skills. In fact, I heard someone argue that if books had been invented after video games, parents would have been worried because books don't allow you to interact and don't have the multi-sensory richness of games. Regardless of whether Web reading is good or bad for our intelligence, it is good to have a choice – to be able to do proper deep reading on a subject, combined with quickly synthesizing information from various sources, representing multiple perspectives. Being a child of my time, I would suggest learning deep reading first before allowing technology to help you jump between sources quickly. When we rely too heavily on technology and devices, we disengage from the world around us. You may argue that calculators allow us to focus on the logic of what we are trying to achieve, instead of losing ourselves in manual calculations. But where does the understanding of logic come from?
Probably from being able to process arithmetic in our brains and with pen and paper as well.2 Professional programmers benefit from having coded Java before starting with more advanced environments; Java teaches logic better. Accountants benefit from doing bookkeeping by hand before using financial packages. It allows them to predict the inner workings better. Car drivers should learn how to navigate without a system before relying on a TomTom or other GPS device. Having basic skills in areas where we depend on technology allows us to survive in case the technology fails. Granted, there are practical limitations. Unless you are a boy scout, you probably don't feel the urge to understand how to purify water or practice making fire without matches or a lighter. But in the Western world, confidence levels about the supply of water are higher than the confidence people have in IT. A second reason why having basic skills is good is that they help you understand whether systems and technologies deliver the right output – to see the result of a calculation on the calculator or a destination on the navigation system and think "that doesn't feel right." Where did the feeling come from? It is the result of having built a good frame of reference first. Even 25 years ago we had usability laboratories, where people could be observed using systems. This provided tremendously valuable input for the engineers designing the "user experience." Within ambient computing, the user experience is either completely transparent (it is simply there and gets its input from invisible sensors) or manifests itself in many different ways, for instance on a range of devices depending on where you are and what you are doing, such as your tablet, smartphone, car or glasses. But essentially it is still driven by engineering thinking, where an optimized design is deciding how the system will look for the user. One-directional. But what does the user look like to the system? There's no telling.
Systems should be more open to taking feedback regarding use and unanticipated use. Many large websites are not truly designed anymore. Rather, screens are generated based on specific user input, and on the templates and content in the web content management system. I think we’ve all had the experience of being stuck or running around in circles, trying to find a way out. Web servers can record every click, and analytics can help interpret where users give up, but the input is not very rich. It is not possible to see how the user is reacting through facial expressions, hitting the keyboard and shouting at the system. What if we could make usability laboratories more scalable? Think of using Microsoft Kinect style technology, where an application can watch us and adapt.3 It could suggest, for instance, what to do next based on the experience with other users. It could suggest the right help topics, or direct a user to a call center or web care team. System design should also solicit unexpected feedback. Systems routinely offer users recommendations based on their preferences, customer segment and what other comparable users have done in the past. Although technically this leads to user preference feedback and creates a learning loop, these recommendations only reinforce the picture the system already had and lead to even more rigid recommendations the next time around. Different ways of suggesting the unexpected are needed, based on principles of serendipity – finding something useful you weren’t specifically looking for. Systems should be more like a shop in which you can roam around and be inspired by everything that’s there. Experiment with proximity of options and recommendations by systems, creating a virtual form of market basket analysis,4 or systematically ask for feedback on random recommendations. These strategies are suboptimal in nature and counterintuitive to the engineering approach. 
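One simple way to realize the serendipity strategy sketched above is an epsilon-greedy mix: usually recommend the item the preference model scores highest, but with some probability show a random catalog item and record the reaction. The scores, catalog, and epsilon value below are illustrative assumptions, not taken from any particular system:

```python
import random

def recommend(scores, catalog, epsilon=0.2, rng=random):
    """Epsilon-greedy recommendation: exploit the learned preference
    scores most of the time, but with probability epsilon pick a random
    catalog item so unexpected interests can surface and be recorded."""
    if rng.random() < epsilon:
        return rng.choice(catalog)          # serendipitous exploration
    return max(scores, key=scores.get)      # best-known exploitation

# Hypothetical preference scores from an upstream model, plus catalog
# items the model knows nothing about yet.
scores = {"gardening": 0.9, "jazz": 0.4, "woodworking": 0.2}
catalog = list(scores) + ["astronomy", "poetry"]
print(recommend(scores, catalog, epsilon=0.0))   # prints gardening
```

Recording the user's reaction to the exploratory picks then feeds the model information that its own reinforcing recommendations would never have generated, which is exactly the feedback loop the passage argues for.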
Perhaps systems (and their designers) shouldn't try to be smarter all the time, trying to guess user preferences correctly. Outliers are the first sign of change. Systems that do not recognize differences in use over time, or a different context in which they are used, run the risk of disconnecting from reality. Not good for systems we depend on.

Understanding the Context
Zen and the Art of Motorcycle Maintenance is the world's most read book on the philosophy of technology. The core of the message is a description of two extreme views on the use of technology. There are some who see technology, their motorcycle, as something they use. They know how to operate it, but feel they don't need to understand how it works. A technology is simply the sum of its parts; and if one part is broken, it needs to be fixed or replaced. That's what highly skilled and trained mechanics are for. They have the experience, do nothing else all day and have all the right tools. The other group sees the beauty of the technology itself. They see a larger picture of how parts interact with each other and are influenced by the context in which they operate. One part may be broken, but it might have been caused by something else that is not working properly. And if you are driving on your motorcycle, weather conditions partly determine how smoothly the engine runs. There is no mechanic traveling with you to make tiny adjustments. And if a paperclip helps as a tool to fix something, by all means it should be used. Although the book focused more on the need to understand technology, and become one with it, it makes a small point that I think is worth emphasizing. It is not enough to understand the technology as a sum of all parts, and not even enough to understand the technology as a sum greater than all parts. As weather conditions affect the performance of the engine, you can induce a general rule.
For a technology to be successful, it is particularly important to understand the context in which the technology is used. This idea is supported by the definition of “wisdom,” which is the object of philosophy itself. Wisdom is not only understanding the matter at hand, but particularly the context in which it matters.5 For instance, let’s take a look at decision support systems that help judges in determining the right sentence. For the acceptance of a system, it is very important that the rules that drive a sentence recommendation be transparent and that every recommendation can be traced back. In many cases, the process can be automated, and perhaps technically it is not required to route the sentence through an actual judge. It would make the process better (more objective), more cost-effective, and much faster. Cost, quality and speed, the three pillars of an efficient operation, are all served at the same time. However, for such a system to be successful, it is equally important to understand how people will accept a sentence from a machine. Will it cause people to resist and overflow the system with appeals? This would certainly negatively affect the business case. What additional measures would be needed for people to buy into such a system? If you build a recommendation engine for YouTube to predict what other video clips we’d like to see, we need to understand how the human mind jumps from one association to the other. How else would such a system be able to provide recommendations you wouldn’t think of yourself, but you’d like anyway? If you build a business intelligence (BI) system to help analyze complex strategic issues, it is not enough to understand the data structure and the statistical techniques used to come to analytical conclusions. 
BI systems are already far more efficient than any human brains, but for such a system to be effective, we also need to understand human decision making – how people absorb and process information, weigh different factors, collaborate with others and eventually reach a conclusion. This sounds logical, but most business intelligence system designs do not take this into account at all and focus exclusively on the technical side of data structure and analysis. As a last example, let's consider the implementation of a business process management system. Most business cases focus on operational excellence. If this means taking repetitive work out of the hands of the users, there are no immediate ethical consequences of using the technology. However, if the business case involves administrative professionals having to follow rigid rules and procedures, enslaving them to the system, the business case may be financially sound, but fail on ethical grounds. Human beings are motivated by factors such as autonomy, mastery and purpose. Most humans want to be able to plan and perform their duties the way they see fit for themselves, making every day a learning experience and seeing their contribution to the organizational goals. If the goal of technology, that we increasingly depend on, is to augment human capability, we should have a clear understanding of human capabilities and how they vary per person. This is the context we should be looking for.

End Notes:
- Science has become too complex and interconnected to do alone anyway. James Bond type structures in deserted places where scientific teams work on devices to destruct the world, funded by a rich villain, are not possible.
- I hate spreadsheets; they invite messy structures and are very error prone. But if you need to build analytical skills, it doesn't harm to have to set up a decent spreadsheet or two before using more advanced statistical tools.
- Face recognition is being used for smartphones, replacing a password.
I don’t see any acceptance problems here. The question, of course, is if users feel comfortable being watched and analyzed while interacting with their phones, tablets and computers, and how to address these issues. - Market basket analysis tells you which items consumers typically buy together, like bread and butter, or trousers and socks. - Also see my series on wisdom. Recent articles by Frank Buytendijk
Around the world, efforts are ramping up to cross the next major computing threshold with machines that are 50-100x more performant than today's fastest number crunchers. Earlier this year, the United States announced its goal to stand up two capable exascale machines by 2023 as part of the Exascale Computing Project, and Distinguished Argonne Fellow Dr. Paul Messina is leading the charge. Since the project launched last February, ECP has awarded $122 million in funding, with $39.8 million going toward 22 application development projects, $34 million for 35 software development proposals and $48 million for four co-design centers. At SC16, we spoke with Dr. Messina about the mission of the project, the progress made so far — including a review of these three funding rounds — and the possibility of an accelerated timeline. Here are highlights from that discussion (the full interview is included at the end of the article).

Why exascale matters
"In the history of computing as one gets the ability to do more calculations or deal with more data, we are able to tackle problems we couldn't deal with otherwise. A lot of the problems that over the years we first could simulate and validate with an experiment in one-dimension, we're now able to do it in two or three-dimensions. With exascale, we expect to be able to do things in much greater scale and with more fidelity. In some cases we hope to be able to do predictive simulations, not just to verify that something works the way we thought it would. An example of that would be discovering new materials that are better for batteries, for energy storage. "Exascale is an arbitrary stepping stone along the way that will continue. Just as we had gigaflops and teraflops, peta- and so on, exascale is one along the way. But when you have an increase in compute power by a factor of one-hundred, chances are you will be able to tackle things that you cannot tackle now.
Even at this conference you will hear about certain problems that exascale isn't good enough for, so that indicates that it's a stepping stone along the way. But we have identified dozens of applications that are important, problems that can't be solved today and that we believe with exascale capability we will be able to solve. Precision medicine is one, additive manufacturing for very complex materials is another, climate science and carbon capture simulation, for example, are among the applications we are investing in."

On the significance of ECP being a project as opposed to a program
"There have been research efforts and investigations into exascale since 2007, nine years ago. At the point that it became a project, it indicates that we really want to get going on it. The reason it is a project is that there are so many things that have to be done simultaneously and in concert with each other. The general outline of the project is that we invest in applications, we invest in the software stack, we invest in hardware technology with the vendor community — the people who develop the technologies so that those technologies will eventually land in products that will be in exascale systems and that will be better suited to our applications — and we also invest in the facilities from their knowledge of what works when they install systems. Those four big pieces have to work together and this is a holistic approach. "The project will have milestones, some of which are shared between the applications and the software so if application A says 'I need a programming language feature to express this kind of calculation more easily,' then we want the compiler and programming models part of the software to try to address that but then they have to address it together — if it doesn't work, try again. That's why it's a project, because we have to orchestrate the various pieces. It can't be just invent a nice programming model, tackle a very exciting application.
We have to work together to be successful at exascale; same thing goes with the hardware architecture, the node technology and the system technology."

The mission of ECP
"The mission is to create an exascale ecosystem so that towards the end of the project there will be companies that will be able to bid exascale systems in response to an RFP by the facilities, not the project, but the typical DOE facilities at Livermore, Argonne, Berkeley, Oak Ridge and Los Alamos. There will be a software stack that we hope will not only meet the needs of the exascale applications, but will also be a good HPC software stack because one of our goals is also to help industry and the medium-sized HPC users more easily get into HPC. If the software stack is compatible at the middle end as well as the very highest end, it gives them an on-ramp. And a major goal is the applications we are funding to be ready on day one to use the systems when they are installed. These systems have a lifetime of four to five years. If it takes two years for the applications to get ready to use them productively, half the life of the system has gone by before they can start cranking out results, so part of the ecosystem is a large cadre of application teams that know how to use exascale, they've implemented exciting applications, and that will help spread the knowledge and expertise."

The global exascale race
"The fact that these countries and world regions like the EU have announced major investments in exascale development is an indication that exascale matters. Those countries would not be investing heavily in exascale development if they didn't think it was useful. The US currently has a goal to develop exascale capability with systems installed and accepted in a time range of seven to ten years. It is a range, and certainly the government is considering an acceleration of that — it might be six to seven years. Any acceleration comes at a price.
This project is investing very heavily in applications and software, not just on buying the system from vendors — so it’s a big investment but one that I think is necessary to be able to get the benefits of exascale, to have the applications ready to use and exploit the systems. “Could we be doing better? If this project had started two-three years ago we would be farther ahead, but that didn’t happen. We got going about a year ago — it isn’t clear that we would be the first country that has an exaflop system. But remember I haven’t used the word exaflop until now. I’ve talked about exascale. What we’re focusing on is having applications and a software stack that runs effectively in a ratio that would indicate that it’s exascale. It might take two exaflops, so who gets an exaflop first might not be as important as who gets the equivalent of exascale. We also have goals around energy usage, 20-30 MW, which is a lot but if we didn’t have a goal like that we might end up with 60 or 100 MW, which is very expensive. “If we are asked as a project to accelerate, we will do our best to accelerate — it will require more money and more risk, but within reason we will certainly do that.” “I often emphasize that for the technologies that we’re hoping the vendors will develop partly with our funding and the software stack that we’re developing in collaboration with universities and industry that that will create a sustainable ecosystem. It will not just be that we’ve gotten to exascale, systems can be anointed as exascale, we breath a sigh of relief and relax. It needs to be sustainable and that’s why we really want systems that are in the vendor’s product line — they’re not something they are building just for us one of a kind. 
It needs to be part of the business model that they want to follow, and software that is usable by many different applications, which will make it sustainable — open source almost exclusively, which again helps sustainability because many people can then contribute to it and help evolve it beyond exascale.”
Netiquette Rules: Writing Civil, Compliant, Conversational Email

Best practices call for adherence to the rules of netiquette, or electronic etiquette, when communicating via email, text messaging, social media, or any other electronic business communications tool. In other words, be polite, polished, and professional at all times. By adhering to the rules of netiquette, you can avoid potential compliance problems and enhance communication with all readers, internal and external. The author of Writing Effective E-Mail, Nancy Flynn, an internationally recognized expert on electronic writing, netiquette, and compliance, shares 20 essential netiquette rules for employees and seven special netiquette guidelines for executives and managers. You'll learn how to strike a business-appropriate tone, keep your conversational cool, use gender-neutral language, and avoid conversational pitfalls. You'll leave with the skills necessary to write email that is 100% business-appropriate, civil, and compliant.

Course Content: Tone, Civility, and Netiquette Rules
- Writing civil, compliant, conversational email.
- Striking an appropriate tone.
- Being sincere, not blunt.
- Adhering to conversational dos & don'ts.
- Keeping your conversation cool.
- Using gender-neutral language.
- Avoiding conversational pitfalls.
- Using business-appropriate language.
- Netiquette rules: 20 best practices.
- Special netiquette rules for executives and managers: seven guidelines.

Client Testimonials & Book Reviews
"Really hits the mark. We're going back for more." "She was great!!!!" -Training Manager, International Accounting & Consulting firm
"This course should be mandatory for anyone in a leadership role." -MD, Physician Leadership Academy
"Valuable and enlightening. Old grade school writing myths dispelled. Management is very impressed with the immediate impact your class had. Noticeable improvement in the effectiveness of communications skills of all who attended."
-Tech Company Education Committee Chairperson
"Just excellent. You brought solid information and new insights." -Superintendent, County Juvenile Detention Center
"Writing Effective E-Mail is a must-read for everyone who sends and receives e-mail in a business setting. Presented in a no-nonsense efficient style. Take this quick course to e-mail success." -Business Librarians at the Carnegie Library, Pittsburgh Post-Gazette. (Review of Nancy Flynn's Writing Effective E-Mail.)
"Employees' surreptitious use of IM can come back and bite your business in the bottom line, so it pays to learn about the technology and impose policies and standards." -Harvard Business School, "Working Knowledge" newsletter. (Review of Instant Messaging Rules by Nancy Flynn.)
"A well-organized guide to implementing e-risk management….Delivers a high-level training program." -American Bar Association, "Business Law Today." (Review of E-Mail Rules by Nancy Flynn.)
"Top 10 risk management books....37 rules for retaining and managing e-mail in ways to reduce corporate liability. Good stuff!" -Claims magazine. (Review of E-Mail Rules by Nancy Flynn.)

View our E-Mail Rules Brochure. Published in August 2014, Nancy Flynn's Writing Effective E-Mail, 3rd Edition, has been completely updated with new content & exercises. Bonus: Sample e-mail, mobile device, and writing style policies that you can implement immediately. Buy now. Contact Nancy Flynn to discuss your training needs, get a quote, and schedule your program.
Can anything you create digitally - software code, e-mail or documents - be traced back to you like so much DNA from a crime scene? Research scientists at the Defense Advanced Research Projects Agency (DARPA) seem to think so as they announced this week the $43 million Cyber Genome Program it hopes will develop technologies that will help law enforcement types collect, analyze and identify all manner of digital artifacts. The objective of the four-year program is to produce revolutionary cyber defense and investigatory technologies for the collection, identification, characterization, and presentation of properties and relationships from software, data, and/or users to support law enforcement, counter intelligence, and cyber defense teams, DARPA stated. Such digital artifacts may be collected from computers, personal digital assistants, and/or distributed information systems such as cloud computers, from wired or wireless networks, or collected storage media. The format may include electronic documents or software to include malware, DARPA stated. "A challenge in the cyber community is the ability to identify, analyze, and classify users, software, and digital artifacts. The traditional approach has been to develop custom solutions addressing individual threats for individual systems. However, it is not a viable approach to enumerate all possible combinations of solutions for each network threat for every sensor, weapon, and command-and-control platform," DARPA stated. "The result has been a continuous and rapid proliferation of cyber attacks, malicious software and 'spam' email. These challenges provide an asymmetric advantage to adversaries who can develop inexpensive, evolutionary cyber exploits that bypass or defeat intrusion detection and protection systems, host-based defenses, and forensic analysis." As with most DARPA projects, this one has a number of advanced requirements.
For example, according to DARPA the new system must:
- Identify and/or validate users from their host and/or network behavior. "Something you do" may augment existing identification and/or authentication technologies to discover "insiders" with malicious goals or objectives.
- Handle automated analysis and visualization of computer binary (machine language) features and behaviors (reverse engineering) to help analysts understand the software's function and intent.
- Create lineage trees for a class of digital artifacts to gain a better understanding of software evolution. In other words, trace what DARPA calls the ancestors or descendants of digital artifacts and determine the author and development environment of digital artifacts.
- Identify and categorize new variants of previously seen digital artifacts to reduce the threat of zero-day attacks that are variants of previously seen attacks.
- Determine or characterize digital artifact developers or development environments to aid in software and/or malware attribution.
This isn't the only cyber system DARPA is working on, as you might imagine. It also has in the pipeline an avant-garde artificial intelligence (AI) software system known as a Machine Reading Program (MRP) that can capture knowledge from naturally occurring text and transform it into the formal representations used by AI reasoning systems. The idea is that such an intelligent learning system would unleash a wide variety of new AI applications - military and civilian -- ranging from intelligent bots to personal tutors, DARPA said. For example, all of the text in the World Wide Web will become available for automating the monitoring and analysis of technological and political activities of nations; plans, rhetoric, and activities of transnational organizations; and scientific discovery within various disciplines, DARPA stated.
As digitized text from library books worldwide becomes available, new avenues of cultural awareness and historical research will be enabled. With truly general techniques for effectively handling the incompatibilities between natural language and the language of formal inference, a system could, in principle, be constructed that maps between natural and formal languages in any subject domain, DARPA said. DARPA also recently awarded almost $56 million to two contractors it expects will develop the second phase of technologies that it promises will be revolutionary and will bolster current cyber security technology by orders of magnitude. DARPA spent $30 million to develop Phase 1. The contracts are part of DARPA's ambitious National Cyber Range program, which the agency says will develop revolutionary cyber research and development technologies. DARPA says that the NCR will advance myriad security technologies and "conduct unbiased, quantitative and qualitative assessment of information assurance and survivability tools in a network environment."
So far, in our discussion of Routing Information Protocol (RIP), we've discussed the basics and also verified and reviewed RIP version 1. We stated that RIP version 1 is a classful routing protocol that uses FLSM and sends its routing updates without the subnet mask. In this post we will review the features of RIP version 2. Our baseline example will be between routers R1 and R2. R1 has a network of 192.168.1.0/24 and two subnets, 192.168.2.0/25 and 192.168.3.0/24. (There's actually a loopback on R1 that has the network 192.168.1.0 associated with the interface and the two subnets as secondary.) First, RIP version 2 is a classless routing protocol. Classless routing protocols always advertise the subnet mask in their routing updates. As shown in example 2, the debug ip rip command displays a routing update that's being sent from router R1 to router R2. Here the subnet masks are displayed in CIDR notation. Also, with RIP version 2 the update includes the next-hop IP address for each destination (which we see is 0.0.0.0, meaning use this router) and a tag value, whose function is deprecated. Second, RIP version 2 sends its updates to the multicast address 224.0.0.9 instead of using a broadcast message as RIP version 1 does. With updates being sent as multicast, routers that have joined this group will accept and process the message, while other hosts can simply ignore it. Also, we can see in example 3 that when we issue the show ip rip database command, the subnets are shown with their subnet masks. Here you can see that the receiving router R2 is learning routes from R1 via FastEthernet 0/0. You can also see that the router isn't assuming that the subnets use the same mask as the interface. Because of this we can use variable length subnet masks (VLSM) on its interfaces. Another verification command for RIP version 2 is show ip protocols.
As shown before, this command displays the timers; however, this time we also see that the interfaces are sending and receiving only version 2 updates. If a version 1 update is received on these interfaces, it will be ignored. Lastly, you can see that from R2's point of view, the routing table has the subnets listed with their subnet masks. The metric is still the same, as is the administrative distance of 120. Also, automatic summarization is enabled by default for RIP version 2, but typically, to achieve the benefits of being classless, automatic summarization needs to be disabled. In my next post we'll look at all of the configuration commands for RIP versions 1 and 2. Author: Jason Wyatte
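The classless behavior described above, where each update carries its own subnet mask, is what makes VLSM possible. A small illustrative sketch using Python's ipaddress module (this models the addressing math only, not actual router behavior):

```python
import ipaddress

# RIPv2 (classless) advertises the actual mask with each route;
# RIPv1 (classful) would assume the classful /24 mask for every
# 192.x.x.x network and could not distinguish these subnets.
subnets = ["192.168.1.0/24", "192.168.2.0/25", "192.168.3.0/24"]

for cidr in subnets:
    net = ipaddress.ip_network(cidr)
    print(net.with_netmask)  # e.g. 192.168.2.0/255.255.255.128

# With masks carried in the update, subnets of different sizes (VLSM)
# can coexist in the same major network.
half = ipaddress.ip_network("192.168.2.0/25")
full = ipaddress.ip_network("192.168.3.0/24")
assert half.num_addresses == 128 and full.num_addresses == 256
```

This is why disabling automatic summarization matters: summarizing back to the classful boundary would discard exactly the mask information that distinguishes the /25 from the /24.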
French-American mathematician Benoit Mandelbrot discovered fractal mathematics, the study of measuring and simulating irregular shapes found in nature. Mathematician Benoit Mandelbrot, the father of fractals, died of pancreatic cancer on Oct. 14, AFP reported. He was 85. His work on fractals has become a foundation of chaos theory and is critical to many applications and systems, ranging from digital compression on computers and modeling turbulence in aircraft wing designs to texturing medical images. Most people know fractals as the weird, colorful patterns drawn by computers. The word "fractals" was coined by Mandelbrot to refer to rough or fragmented geometric shapes or processes that have similar properties at all levels of magnification or across all times. These are mathematical shapes with uneven contours that mimic irregularities found in nature, such as clouds and trees, and can be measured and simulated, Mandelbrot said. Up until then, mathematicians believed that most of the patterns of nature were too complex and irregular to be described mathematically. "Fractals are easy to explain, it's like a romanesco cauliflower, which is to say that each small part of it is exactly the same as the entire cauliflower itself," Catherine Hill, a Gustave Roussy Institute statistician, told the AFP. "It's a curve that reproduces itself to infinity. Every time you zoom in further, you find the same curve." In his 1982 book, "The Fractal Geometry of Nature," Mandelbrot said complex outlines of clouds and coastlines, once considered unmeasurable, could "be approached in rigorous and vigorous quantitative fashion" with fractal geometry. With fractals, it is possible to create models of coastlines, cell growth and other processes that look like the real thing. He even applied the theory to the financial market, predicting and warning about the global financial meltdown in his 2005 book "The (Mis)Behaviour of Markets."
He cited the huge risks being taken by traders who tend to act as if the market is predictable, comparing them to "mariners who heed no weather warnings." Mandelbrot was analyzing electronic noise that was interfering with IBM electronic transmissions as an IBM research fellow in the 1960s. The scientists had noted the blips occurred in clusters, with a period of no errors followed by a period of many. Mandelbrot noticed a pattern to these error clusters. He found an hour where there were no errors, and the next hour had many errors. When he divided the error period into smaller intervals, he found the ratio of errors to no errors remained the same. In other words, carving up the hour into 20-minute sections resulted in 20 minutes with no errors followed by 20 minutes with many errors. Regardless of the interval size, Mandelbrot found the ratio of error-free periods to error-filled periods remained the same. He called the property "self-similarity." After retiring from IBM, Mandelbrot became a professor of mathematical sciences at Yale, and later held appointments as professor of the practice of mathematics at Harvard University, professor of engineering at Yale, professor of mathematics at the École Polytechnique in France, professor of economics at Harvard, and professor of physiology at the Einstein College of Medicine. He was awarded the Wolf Prize for Physics in 1993 and in 2003 the Japan Prize for Science and Technology. In 2006 he was knighted by the French. He even has an asteroid named after him: 27500 Mandelbrot. Born in Poland, he was educated in France before joining IBM in the 1950s.
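The "weird, colorful patterns drawn by computers" mentioned above come from the set that now bears Mandelbrot's name: the points c in the complex plane for which iterating z → z² + c from zero stays bounded. A minimal membership test, purely illustrative:

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Return True if c appears to belong to the Mandelbrot set.

    Iterates z -> z*z + c starting from z = 0; any orbit that
    exceeds |z| = 2 is guaranteed to escape to infinity.
    """
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# The origin never escapes; c = 1 escapes after a few steps.
print(in_mandelbrot(0j))    # True
print(in_mandelbrot(1 + 0j))  # False
```

Coloring each escaping point by how quickly its orbit exceeds the bound is what produces the familiar images.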
Optical fiber communications have changed our lives in many ways over the last 40 years. There is no doubt that low-loss optical transmission fibers have been critical to the enormous success of optical communications technology. It is less well known, however, that fiber-based components have also played a critical role in this success. Initially, fiber optic transmission systems were point-to-point systems, with lengths significantly less than 100 km. Then in the 1980s, rapid progress was made in the research and understanding of optical components, including fiber components. Many of these fiber components found commercial applications in optical sensor technology, such as fiber gyroscopes and other optical sensor devices. Simple components such as power splitters, polarization controllers, multiplexing components, interferometric devices, and other optical components proved to be very useful. A significant number of these components were fabricated from polarization maintaining fibers (PMFs). You can buy PM fiber patch cables from Fiberstore. Although not a large market, optical fiber sensor applications spurred research into the fabrication of new components such as polarization maintaining components; other components such as power splitters were fabricated from standard multimode (MM) or single-mode telecommunication fiber. In the telecommunication sector, the so-called passive optical network was proposed for the already envisioned fiber-to-the-home (FTTH) network. This network relied heavily on the use of passive optical splitters. These splitters were fabricated from standard single-mode fibers (SMFs). Click here to get the price of single-mode fiber optic cable. Although FTTH, at a large scale, did not occur until decades later, research into the use of components for telecommunications applications continued. The commercial introduction of the optical fiber amplifier in the early 1990s revolutionized optical fiber transmission.
With amplification, optical signals could travel hundreds of kilometers without regeneration. This had major technical as well as commercial implications. Rapidly, new fiber optic components were introduced to enable better amplifiers and to enhance these transmission systems. Special fibers were required for the amplifier, for example, erbium-doped fibers. The design of high-performance amplifier fibers required special consideration of mode field diameter, overlap of the optical field with the fiber's active core, core composition, and use of novel dopants. Designs radically different from those of conventional transmission fiber have evolved to optimize amplifier performance for specific applications. The introduction of wavelength division multiplexing (WDM) technology put even greater demands on fiber design and composition to achieve wider bandwidth and flat gain. Efforts to extend the bandwidth of erbium-doped fibers and develop amplifiers at other wavelengths such as 1300 nm have spurred development of other dopants. Codoping with ytterbium (Yb) allows pumping from 900 to 1090 nm using solid-state lasers or Nd and Yb fiber lasers. Of recent interest is the ability to pump Er/Yb fibers in a double-clad geometry with high-power sources at 920 or 975 nm. Double-clad fibers are also being used to produce fiber lasers using Yb and Nd. Besides the amplification fiber, the EDFA (Erbium-Doped Fiber Amplifier) requires a number of optical components for its operation. These include wavelength multiplexing and polarization multiplexing devices for the pump and signal wavelengths. Filters for gain flattening, power attenuators, and taps for power monitoring, among other optical components, are required for module performance. Also, because the amplifier enabled transmission over hundreds of kilometers without regeneration, other propagation properties became important.
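The reach that amplification buys can be illustrated with a back-of-the-envelope power budget. The numbers below are illustrative assumptions only (typical textbook values, not from this article), and a real design must also account for the dispersion and nonlinear effects discussed next:

```python
import math

# Assumed illustrative values: 0.2 dB/km fiber loss at 1550 nm,
# +3 dBm launch power, -28 dBm receiver sensitivity.
LOSS_DB_PER_KM = 0.2
LAUNCH_DBM = 3.0
SENSITIVITY_DBM = -28.0

def unamplified_reach_km() -> float:
    """Distance at which the signal falls to receiver sensitivity."""
    return (LAUNCH_DBM - SENSITIVITY_DBM) / LOSS_DB_PER_KM

def amplifiers_needed(total_km: float, span_km: float = 80.0) -> int:
    """In-line EDFAs needed if each amplifier restores one span's loss."""
    return max(0, math.ceil(total_km / span_km) - 1)

print(unamplified_reach_km())   # 155.0 km without amplification
print(amplifiers_needed(1000))  # 12 in-line EDFAs for a 1000 km link
```

The sketch shows why amplification was revolutionary: without it the budget caps a link at a little over 150 km, while a chain of EDFAs extends the same fiber to transcontinental distances.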
These properties include chromatic dispersion, polarization dispersion, and nonlinearities such as four-wave mixing (FWM), self- and cross-phase modulation, and Raman and Brillouin scattering. Dispersion compensating fibers were introduced in order to deal with wavelength dispersion. Broadband coupling loss between the transmission and the compensating fibers was an issue. Specially designed mode conversion or bridge fibers enable low-loss splicing among these three fibers, making low insertion loss dispersion compensators possible. Fiber components, as well as micro-optic or in some instances planar optical components, can be fabricated for these applications. Generally speaking, but not always, fiber components enable the lowest insertion loss per device. A number of these fiber devices can be fabricated using standard SMF, but often special fibers are required. Specialty fibers are designed by changing the fiber glass composition, refractive index profile, or coating to achieve certain unique properties and functionalities. In addition to applications in optical communications, specialty fibers find a wide range of applications in other fields, such as industrial sensors, biomedical power delivery and imaging systems, military fiber gyroscopes, and high-power lasers, to name just a few. There are many kinds of specialty fibers for different applications. Some of the common specialty fibers include the following: Fiberstore is the largest manufacturer and supplier of fiber cables in China. We can provide almost all the special fibers for optical communication systems, including those introduced in this article, and correspondingly we also have various kinds of fiber optic patch cords in our store.
The importance of risk related concepts
It is very important to possess some knowledge of the various risk-related concepts, from control types to risk avoidance and risk-handling techniques, since they play a very important role in our security policies. They help one avoid risks. Hence, one can ensure that quality standards are being met and that the instruments used will not do any harm in the future. These are concepts that should be integrated into security systems. The National Institute of Standards and Technology is a federal organization in the United States that is responsible for developing standards for use not only by the federal government but also nationally and worldwide. Under NIST Special Publication 800-53, there are several classes of control types that are clearly outlined. All three classes work together, and one cannot just look at a single class. There are also special families that each class is associated with.
Technical: The technical control types are the first class categorized in the NIST Special Publication. This class covers access control, authentication of the different resources on one's computer or network, how one controls one's communications, and many other technical aspects.
Management: The management class covers how one manages different aspects of risk in one's environment, such as security assessment and authorization, planning, risk assessment, and service acquisition, which are important aspects of security. Basically, security is not only based on proper firewall configurations but also on proper management.
Operational: This class is mainly concerned with the operations and activities one performs to maintain security in one's environment, such as what one does when an incident occurs, how one handles configuration changes in one's network so that one does not create security issues related to changes, and how one protects things physically, among other aspects.
Anyone who works in the field of security for long will come across the concept of false positives. This basically refers to something that is reported to one but is not really the case. For instance, one's security systems may report a virus attack on one's server, but when one takes a look, one may find that there is no such attack. This phenomenon is mostly witnessed in intrusion detection systems, which are signature-based detection systems; occasionally they see something that matches a signature but is not related to an attack. In case of such an incident, make sure one double-checks what the IDS or IPS is reporting so that one is able to link it back to a threat. In addition to being wrong warnings, false positives can also cause problems with one's operating system. In case of such a message, one might consider uploading the file to a multi-engine anti-virus scanning service to check for any viruses in it. This is a good way to get strong confirmation that the warning one is receiving really is a false positive.
Importance of policies in reducing risk
In every working setup, the availability of policies is very important, since in many cases the strength of one's security is only as good as that of one's policies. It is therefore important that people and employees are made aware of these policies at all times, since the policies bear on all of one's work. One's security roles begin and end at these policies: the better the policies, the better one's security.
Policies cannot be created all at once; policy writing is an ongoing activity, with new policies documented as they are needed. There should also be policies outlining the kind of privacy employees should expect.
Acceptable use: The acceptable use policy regulates how employees and people in an organization use its assets, such as computers, mobile phones, telephones, and even the internet. With this policy in place, an individual who mishandles a company's assets can be more easily prosecuted.
Security policy: Security policies tend to cover a very wide area. First is the physical aspect of security: there should be policies outlining what should be done about doors without locks, how visitors to a company should be handled, and how employees who arrive without badges should be handled. Another aspect is technical security: there should be policies outlining what should be done in case a computer gets a virus attack. These are policies that everyone must be aware of so as to be in a position to handle such cases when they arise.
Mandatory vacations: In a business, some policies must be enforced, such as mandatory vacations, under which employees are required to take their time off rather than waiting to be told. With such vacations, it is easier to identify misconduct going on in the organization.
Job rotation: A job rotation policy is also very important, so that activities in an organization run continuously even when one individual is absent. Through such rotations, people are less able to commit fraud, since a new person brought into the same position can identify it. Under this policy, people are rotated through many responsibilities, and thus no one maintains full control of a job for extended periods of time.
Separation of duties: Separation of duties is also very important in an organization. One aspect of this is split knowledge.
Split knowledge ensures that no one person knows every bit of information. There is also dual control, which requires the presence of two people for an operation to proceed. In this case, one's business can make it mandatory that finances can only be withdrawn from a bank if two specific people are present.
Least privilege: Least privilege is another business policy. It is a concept which means that one only has the rights to access the information that is necessary for one's task. For instance, if one is accessing information on a server, one's rights can be set to read-only rather than read-write.
Risk calculation is normally carried out to determine the cost incurred after the occurrence of a security breach. This looks not only at the money lost or damage incurred but also at the cost of resolving the security issue.
Likelihood: Likelihood is one input to risk calculation. It can also be referred to as the Annualized Rate of Occurrence. In this case, one looks at how often the incident occurs, typically over a time span of one year. This is, to some degree, a form of estimation.
ALE: The Annualized Loss Expectancy is used to calculate the amount of loss incurred in a year. In this case, one takes the Annualized Rate of Occurrence and multiplies it by the Single Loss Expectancy. With the value of the expected loss in a year, one can easily plan and start budgeting for the following year.
Impact: In the case of impact, the monetary value lost is not the only thing considered. Other impacts matter as well: if a stolen laptop contains important company information, for example, the loss of privacy can be a huge blow.
SLE: The Single Loss Expectancy helps one determine the monetary loss incurred in the event of a single incident. For instance, if a laptop is stolen, one can estimate the amount lost.
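The relationship between these terms (ALE = SLE × ARO) can be sketched with a few lines of code. The dollar figures and theft rate below are hypothetical, invented purely for illustration:

```python
def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = Single Loss Expectancy x Annualized Rate of Occurrence."""
    return sle * aro

# Hypothetical stolen-laptop scenario: each loss costs $2,500
# (hardware, rebuild time, incident handling), and history suggests
# about 4 thefts per year across the organization.
sle = 2500.0  # dollars per single incident
aro = 4.0     # estimated incidents per year

ale = annualized_loss_expectancy(sle, aro)
print(f"Budget roughly ${ale:,.0f} per year for this risk")  # $10,000
```

The resulting figure is what feeds next year's budget: a control that costs less per year than the ALE it eliminates is usually worth funding.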
ARO: The Annualized Rate of Occurrence involves determining the number of times an incident occurs in a year, for instance, the number of times a laptop is stolen. This relies on estimation rather than hard facts.
Quantitative vs. qualitative
The quantitative and qualitative calculation methods are quite different. The quantitative method seeks to establish the amount of money that will be lost, while the qualitative method also weighs the value of the information one may lose.
Vulnerabilities basically refer to the extent to which one is exposed to a risk. Depending on one's security policies, vulnerability levels may differ between organizations. Threat vectors are the paths or means by which a threat can reach a target and cause damage or loss.
Probability / threat likelihood
Probability, or threat likelihood, is a risk calculation technique normally based on the frequency of occurrence of a risk. With such information, one can determine the probability of a risk happening within a specific time period.
Risk-avoidance, transference, acceptance, mitigation, deterrence
In our daily setting, risk is something we cannot avoid, whether we are at work, at home, or along the streets. Dealing with risk is therefore a significant challenge. One way to deal with it is through risk avoidance. This involves making proper decisions and choosing not to engage in activities that expose one to a lot of risk. For instance, in institutions, students may be at high risk of accessing illegal material from the internet, and hence the institution can block some of the sites. Another way to deal with a risk is by transferring it to another party; this is known as transference. In this case, one can insure against a certain risk with an insurance company. Acceptance is another way of dealing with a risk: one decides to live with the risk and deal with it oneself.
Mitigation can also be a way of handling risks. This is where one comes up with strategies to decrease the likelihood or impact of a risk. For instance, in a data centre, one can invest in systems that provide good security. Finally, deterrence can also be a way of dealing with a risk. For instance, one can use security fences and guard dogs to deter unauthorized people from accessing particular premises or areas.
Risks associated with Cloud Computing and Virtualization
Cloud computing is an emerging technology in which we are able to store information and resources in remote locations, in the cloud. However, there are some risks associated with it. One risk is that data in the cloud may be available to other people. With cloud computing devices under the control of third parties, one's information might be accessed by such people. For security, one can encrypt one's data before putting it in the cloud. Another risk is that the security of this data is managed by other people and hence may not meet one's requirements. In addition, cloud computing involves storing data on servers we have no control over, and therefore, in case of an outage, one might not be able to access one's information.
Virtualization is another growing technology that involves having a large computer on which one can build many virtual systems. One risk associated with this is that if the virtualization layer is compromised by cyber-criminals, the whole system is at risk too. Another risk associated with virtualization is that there is limited control over what happens between virtual systems. There is also the challenge of securing all the virtual systems on the server, since each separate system requires a separate security profile, which can make the process quite time-consuming.
It is very important that we are conversant with all the risk-related concepts, since with that knowledge one can not only handle risks easily but also plan adequately for their occurrence and take all the necessary precautions to avoid damage or loss. Knowing about the risks helps ensure the stability and safety of the workplace, so that work can be done in a safe and pleasant environment.
Smart energy for smart cities technology revenue is expected to reach $20.9 billion in 2024, according to a new report from Navigant Research.
Increasing pressure from climate change
The report, Smart Energy for Smart Cities, focuses on the smart grid and advanced energy technology segments. Smart energy management is an important area for city leaders, who are under increasing pressure to take measures to deal with climate change, which means they need to develop policies around energy efficiency and carbon reduction. Navigant Research says this has given rise to ambitious energy policies involving a range of innovations such as smart grid technology, demand management, alternative and renewable energy generation, and distributed energy resources.
Synergies and complementary technologies
The report points out that there are many synergies in the technology framework that makes up the backbone of a smart grid. Over the next ten years, cities, utilities, and third-party vendors are expected to increasingly seek out complementary technologies. This will be vital if they are to optimise the use of resources, both those of the city and the citizen, and to ensure that investment is used to best effect. The report says the smart energy for smart cities vision faces both economic and technical barriers that inhibit current development. The varying business models of cities' utilities and private stakeholders mean that alignment between stakeholders can be difficult to achieve. Nevertheless, the report says that global smart energy for smart cities technology revenue is expected to grow from $7.3 billion in 2015 to $20.9 billion in 2024. "Energy is the lifeblood of a city," says Lauren Callaway, research analyst with Navigant Research.
“Developing an integrated and sustainable energy strategy within the smart city framework is one of the most effective ways cities can contribute to their larger goals of addressing climate change, supporting citizen well-being, and fostering economic development.”
The father of the battery technology that powers Tesla’s cars has some advice before electric car manufacturer Elon Musk builds a $5 billion battery factory with Panasonic. “I would think that by the time they build the factory, there will be a new battery technology,” said John B. Goodenough, a professor at University of Texas’ Cockrell School of Engineering. “I assume they are gambling that the technology can be adapted.” Goodenough, 91, is widely hailed for his pioneering work with the lithium-ion battery. Musk is planning a 10-million-square-foot factory that will begin production of lithium-ion batteries by 2017 and hit full production by 2020. In a recent interview, Goodenough stressed that he is not privy to Musk’s business plan, but said that building a rechargeable commuter car is “an incredible intermediate plan.” Tesla officials declined to comment. Musk is addressing the 250- to 300-mile range of his cars’ batteries by creating a network of “supercharging” stations along major highways. He also is trying to cut the battery’s cost by at least 30 percent. Goodenough said new battery technologies could increase the range and cut the costs by substituting sodium for lithium, for example, and by finding a replacement for the expensive cobalt in the battery. Goodenough’s colleague, Arumugam “Ram” Manthiram, agrees that battery technology is changing. “Down the road, they can’t use this battery strategy,” he said. “They have to be adaptable.” The Cockrell School of Engineering hopes to be a reason Tesla might locate to Central Texas. “I’d love to bring Tesla to Texas,” Manthiram said. He acknowledged that other states have the advantages of being closer to Tesla’s car assembly plant or to sources of lithium, but he argued that proximity to a research university is important. “You can transplant lithium easier than knowledge,” he said. 
Manthiram said a partnership with Tesla would provide jobs for UT students, help win federal grants and provide training for Tesla’s engineers. “All these innovations will occur at the university, not in the industry,” he said. “They can take courses and work in our labs.” UT has often been a draw in attracting business and industry, particularly in science and technology. Sam Jaffe, a battery expert with Navigant Research in Boulder, Colo., said he doubts a research university is high on Musk’s list. Taxes, land costs, wages and incentives are all economic factors in Tesla’s selection process, which includes Arizona, Nevada, New Mexico, California and Texas. Jaffe said he has the “utmost respect” for Goodenough’s work but added, “The kind of work done at universities is usually far removed from commercialization.” ©2014 Austin American-Statesman, Texas
<urn:uuid:88382160-3740-470e-a3d5-f9f29bfd6ffc>
CC-MAIN-2017-04
http://www.govtech.com/education/University-of-Texas-Wants-to-Work-With-Teslas-Gigafactory.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00379-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957428
651
2.796875
3
On June 13, the Nova Scotia College of Art and Design (NSCAD) announced that its robot, hitchBOT, would travel across Canada, from Halifax to Victoria. Armed with speech recognition, 3G and Wi-Fi connectivity -- and a Wikipedia-powered cache of trivia -- the robot will rely on the help of strangers to travel its course. “Usually we are concerned whether we can trust robots, but this project takes it the other way around and asks: Can robots trust human beings?” Dr. Frauke Zeller of Ryerson University stated in a press release. “We expect hitchBOT to be charming and trustworthy enough in its conversation to secure rides through Canada.” The robot has a moving arm to show people that it wants a ride, but is otherwise incapable of moving on its own. The robot is designed to have a cobbled-together appearance, assembled from household items like a bucket, pool noodles, garden gloves and rubber boots.
<urn:uuid:d4419c76-d992-442c-8e73-03c0adbfab01>
CC-MAIN-2017-04
http://www.govtech.com/question-of-the-day/Question-of-the-Day-for-062514.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00379-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94763
201
2.859375
3
Gigabit Ethernet: If your network infrastructure supports Gigabit Ethernet, make sure your server also comes with Gigabit Ethernet capability to prevent your server's network connection from being a bottleneck.
Multiple CPUs: It's cheaper to put an additional CPU into an existing server than to buy another entire server, so multiple-CPU options for Unix/Linux, Windows, and Macintosh servers are becoming common.
RAID: If your server will be using internal storage, as opposed to accessing a NAS (see glossary), make sure it supports RAID (see glossary). One form of RAID known as RAID 5 is often used to boost both performance and fault-tolerance. Hot-swappable drives in your RAID array can allow you to replace failed drives without shutting down the server.
RAM: One of the most critical aspects of sizing and configuring a server is making sure you've got enough RAM. One gigabyte is a minimum for any heavily used server, and 2GB or 4GB is better. The good news is that RAM is still relatively inexpensive.
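The fault tolerance RAID 5 refers to comes from XOR parity: each stripe stores a parity block equal to the XOR of its data blocks, so any one failed block can be rebuilt from the survivors. A minimal sketch of the idea (two data blocks for brevity):

```python
# RAID 5 stores parity = XOR of the data blocks in a stripe; losing any
# single block lets you rebuild it by XOR-ing parity with the survivors.
block_a = b"\x01\x02\x03"
block_b = b"\xf0\x0f\xff"
parity = bytes(x ^ y for x, y in zip(block_a, block_b))

# Simulate losing block_a: rebuild it from parity and block_b.
rebuilt_a = bytes(p ^ y for p, y in zip(parity, block_b))
assert rebuilt_a == block_a
```

The same property is why a RAID 5 array survives exactly one drive failure: a second lost block in the same stripe leaves nothing to XOR against.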
<urn:uuid:63ce8dc3-3e13-42e3-a304-4a505098b640>
CC-MAIN-2017-04
http://www.networkcomputing.com/networking/select-server/437751640
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00379-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935716
213
2.546875
3
Introducing the Digital Attack Map
What our ATLAS data highlights is just how commonplace DDoS attacks have become – both in terms of frequency and in terms of how many Internet users are impacted by DDoS. It’s not just a problem for large, global organizations and service providers; anyone with an Internet connection can be caught in the crossfire of an attack. The ‘collateral damage’ of an attack against a large organization or service provider is the people who rely on those networks every single day. The Digital Attack Map utilizes anonymous traffic data from our ATLAS® threat monitoring system to create a data visualization that allows users to explore historical trends in DDoS attacks, and to make the connection to related news events on any given day. The data is updated daily, and historical data can be viewed for all geographies. This collaboration brings to life the ATLAS data we leverage every day to uncover new attack trends and techniques, sharing it in a visual way that connects the dots between current events and cyberattacks taking place all over the world. We invite you to explore the Digital Attack Map to see for yourself how DDoS has become a global threat to the availability of networks, applications and services that billions of people rely on every day.
<urn:uuid:f21ff3b4-ec6e-45ee-8ebb-f7fbd79a5c6a>
CC-MAIN-2017-04
https://www.arbornetworks.com/blog/asert/introducing-the-digital-attack-map/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00407-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933266
256
2.515625
3
Identifying areas of security within a network requires a 24/7 strategy to keep botnets and cyber criminals out of your network. Ultimately, cyber attacks can bring businesses to a grinding halt, creating untold damage to network architecture, operational efficiencies, and the all-important bottom line. The best way to deal with these threats is to maintain a trained, knowledgeable staff that understands security risks and vulnerabilities. This trained staff should be looking at several areas of the infrastructure.
Externally — From the outside of a network, the security team should ask, “What can the attacker see?” Ethical hacking and penetration testing is one approach.
Internally — From the inside of a network, there also needs to be a layered defense. There are huge risks coming from malicious websites, tainted e-mails, and viruses.
Operationally — From the operation side of a network, one should never forget the importance of training staff on good security practices. Many highly technical attacks make use of social engineering.
DMZ — The DMZ is the no man’s land between the internal and external network. This environment needs more than basic firewalls. It needs web application filtering and deployment of current security appliances to protect web and application servers.
Having a trained security team that understands how hackers think can help counter their attacks at each of these infrastructure areas.
<urn:uuid:bd30fb42-d50b-4817-9821-392f4e472570>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2012/12/10/stopping-hackers-requires-training/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00433-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936199
277
2.59375
3
A team of researchers at Xerox has discovered a way to print plastic transistors using a semiconductive ink, paving the way for flexible displays and low-cost RFID (radio frequency identification) chips. Other companies are working on ways to print chips using inkjet printing technology or other methods of depositing liquid on a surface. Most of those techniques have required manufacturing environments at high temperatures or high pressures, but Xerox has developed a way to print transistors at room temperature, said Xerox fellow Beng Ong. The technique builds on a polythiophene semiconductor developed by Ong's team last autumn. Polythiophene is an organic compound that resists degradation in open air better than other semiconductor liquids and also exhibits self-assembling properties. Ong's team has now found a way to take the polythiophene semiconductor and process it into a liquid which can form ordered nanoparticles. When the particles are put into liquid form, they make an ink that can be used to print the three key components of a circuit: a semiconductor, a conductor and a dielectric, Xerox said. The CMOS (complementary metal-oxide semiconductor) technology used to build most chips today is expensive, and requires a solid base such as silicon to manufacture circuits. Xerox hopes this technology can be used to build displays that can be rolled up, bent around a corner, or otherwise stretched in ways not previously possible. Backers of RFID technology are also looking for a way to build low-cost chips that can be used to track inventory in warehouses and grocery stores. Companies such as Wal-Mart Stores are looking for ways to improve their inventory management techniques with these chips, but the cost of putting an RFID chip in every product sold through a company as large as Wal-Mart is prohibitive.
Tom Krazit writes for IDG News Service
<urn:uuid:c2519c3d-5d49-40f8-9c74-d58280870259>
CC-MAIN-2017-04
http://www.computerweekly.com/news/2240055638/Xerox-cooks-up-printed-chips
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00003-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935031
412
2.9375
3
Organizations can cut PC energy consumption to save significant costs and reduce environmental impact. However, many stakeholders are skeptical that powering down idle computers will significantly reduce costs. Therefore, a business case may be needed to demonstrate the benefit. This calculator can be employed to:
- Consider factors such as energy costs, number of PCs, typical PC and monitor wattage, and the organization's usage patterns.
- Estimate potential savings.
- Estimate power savings from laptops as well.
Use this tool to determine what cost savings are possible and to build a business case for a PC power saving plan. This downloadable tool is associated with the research note, "PC Power Saving Plans Reduce Costs and Environmental Impact."
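As an illustration of the arithmetic such a calculator performs (the fleet size, wattage, idle hours, and tariff below are made-up example inputs, not figures from the research note):

```python
def annual_savings(num_pcs: int, watts: float, idle_hours_per_day: float,
                   cost_per_kwh: float) -> float:
    """Dollars saved per year by powering down machines during idle hours."""
    kwh_saved = num_pcs * watts / 1000 * idle_hours_per_day * 365
    return kwh_saved * cost_per_kwh

# Hypothetical fleet: 500 PCs drawing 150 W, idle 12 h/day, at $0.12/kWh
print(annual_savings(500, 150, 12, 0.12))  # 39420.0
```

Even with conservative inputs the total adds up quickly, which is the point the business case needs to make.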
<urn:uuid:831b300f-d596-4b5f-9261-23636c6c8891>
CC-MAIN-2017-04
https://www.infotech.com/research/pc-power-saving-plan-calculator
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00150-ip-10-171-10-70.ec2.internal.warc.gz
en
0.913542
145
2.59375
3
Two-factor authentication is a security method that helps keep data secure when it is accessed remotely. By verifying that both sides of a connection are who they “say” they are, it confirms the authenticity of a user before providing access to sensitive information or remote networks. Here’s what you need to know about two-factor authentication, which is rapidly becoming the gold standard for securing remote desktop connections. Two-factor authentication adds an extra step to the standard login procedure that most computer users are accustomed to: the typical username and password combination. It requires the user to verify two out of three types of credentials, or factors, before access is granted. Access solutions supporting a second authentication factor add confidence that the user is, in fact, an authorized user of a secure remote network. While it’s pretty simple for hackers to crack typical username and password combinations, it’s a much more difficult feat to verify two authentication factors and gain unauthorized access to a remote network or computing resources. Two-factor authentication isn’t a new idea. In fact, it’s been around for years. Most people have gone through a two-factor authentication process, although they likely didn’t realize they were doing so at the time. For example, credit and debit cards make use of two-factor authentication anytime the card is used in a retail location or even online. Having the card in possession is one factor, while knowing the PIN is the second. In the case of an online purchase, users are generally required to enter the three-digit code found on the back of the card, thereby verifying that the individual attempting to use the card is, in fact, in possession of the physical card. Credit card companies aren’t the only entities benefiting from this added security.
Companies like Google, wireless service providers, and banks are implementing two-factor authentication to cut down on fraud and identity theft. The value in using access solutions that support two-factor authentication is that it adds an additional layer of protection should one factor become compromised. In the event that a user’s password is obtained by a hacker, for instance, the remote network is still secure if protected by another factor, such as a security question or SMS code. The two-factor authentication process isn’t infallible, but the additional layer of protection is highly desirable. This is especially true in the modern landscape, where security breaches and identity theft are all too common. It’s a relatively simple added step that has minimal impact on the user experience in most cases. For remote desktop connections, it’s too easy for hackers to intercept the data users are sending across the network. The few seconds it takes to complete an extra authentication step is well worth it for the added security provided by two-factor authentication. Ericom access solutions support integration with RSA® SecurID® and SecurEnvoy® SecureAccess and SecurICE two-factor authentication products.
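The app- or SMS-delivered codes mentioned above are commonly generated with an HMAC-based one-time password (HOTP, RFC 4226). This is a generic sketch using only the Python standard library, not a description of Ericom's or any vendor's implementation:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# First value from the RFC 4226 test vectors
print(hotp(b"12345678901234567890", 0))  # 755224
```

Because the server and the user's token share the secret and the counter, both sides can compute the same short-lived code independently, which is what makes it usable as a second factor.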
<urn:uuid:f5cb2265-6a44-462e-a3b4-f163f05ec603>
CC-MAIN-2017-04
https://www.ericom.com/communities/blog/two-factor-authentication
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00362-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931925
613
3.1875
3
Definition: Two different edges cross in a graph drawing if their geometric representations intersect. The number of crossings in a graph drawing is the number of pairs of edges which cross. Note: From Algorithms and Theory of Computation Handbook, page 9-23, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC. Entry modified 17 December 2004. Cite this as: Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "edge crossing", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. Available from: http://www.nist.gov/dads/HTML/edgecrossing.html
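For a straight-line drawing, the crossing count follows directly from the definition: test every pair of edges for intersection. A sketch using the standard orientation test (collinear degeneracies are ignored for brevity):

```python
from itertools import combinations

def orient(p, q, r):
    """Sign of the cross product: >0 for a left turn, <0 for a right turn."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def edges_cross(e1, e2):
    """True if the two segments properly cross (endpoints strictly separated)."""
    (a, b), (c, d) = e1, e2
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)

def crossing_number(edges):
    """Number of pairs of edges that cross in the drawing."""
    return sum(edges_cross(e1, e2) for e1, e2 in combinations(edges, 2))

# The two diagonals of a square cross exactly once
print(crossing_number([((0, 0), (2, 2)), ((0, 2), (2, 0))]))  # 1
```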
<urn:uuid:61ab8ffa-49af-494a-823d-10d9c022eb81>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/edgecrossing.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00178-ip-10-171-10-70.ec2.internal.warc.gz
en
0.813944
207
3.40625
3
Definition: An abstract data type to efficiently support finding the item with the highest priority across a series of operations. The basic operations are: insert, find-minimum (or maximum), and delete-minimum (or maximum). Some implementations also efficiently support joining two priority queues (meld), deleting an arbitrary item, and increasing the priority of an item (decrease-key). Formal Definition: The operations new(), insert(v, PQ), find-minimum or min(PQ), and delete-minimum or dm(PQ) may be defined with axiomatic semantics as follows. Generalization (I am a kind of ...) abstract data type. Specialization (... is a kind of me.) pagoda, leftist tree, van Emde-Boas priority queue. Aggregate parent (I am a part of or used in ...) best-first search, Dijkstra's algorithm. Aggregate child (... is a part of or used in me.) heap, Fibonacci heap. See also discrete interval encoding tree, hash heap, calendar queue, queue. Note: It can be implemented efficiently with a heap. After LK. Entry modified 20 December 2004. Cite this as: Paul E. Black, "priority queue", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 20 December 2004. Available from: http://www.nist.gov/dads/HTML/priorityque.html
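As the note says, a heap gives an efficient implementation; Python's heapq module, for example, maps directly onto the operations above:

```python
import heapq

pq = []                                # new(): an empty priority queue
heapq.heappush(pq, (3, "low"))         # insert(v, PQ)
heapq.heappush(pq, (1, "urgent"))
heapq.heappush(pq, (2, "normal"))

print(pq[0])              # find-minimum: (1, 'urgent')
print(heapq.heappop(pq))  # delete-minimum: (1, 'urgent')
```

heapq maintains the min-heap invariant, so pq[0] is always the minimum; insert and delete-minimum each run in O(log n).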
<urn:uuid:76325627-88c9-4c1a-b6e4-902102c51ae6>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/priorityque.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00178-ip-10-171-10-70.ec2.internal.warc.gz
en
0.839134
365
2.734375
3
R is the most popular statistical programming language in the world. It is used by 70 percent of data scientists, according to a Rexer Analytics study, including those at big data gravity centers such as Facebook, Google, and Twitter. In addition, thousands of university students around the world use R, and thousands more take R courses on Coursera. Joseph Kambourakis, lead data science instructor at EMC Corporation, fielded questions from Data Informed about R’s popularity, strengths, weaknesses, and what he sees ahead for the open-source programming language.
Data Informed: Why is R so popular?
Joseph Kambourakis: It’s popular primarily because it works well and is free. R is also slightly better adapted to the way data scientists think when compared to tools such as Java or Python, which are more adapted to the way computer scientists think. The vast number of libraries and packages (available for R) really make anything possible.
What advantages does being open source give R over proprietary software?
Kambourakis: Being open source makes it much more agile and fast growing. Users can write new packages and functions at any time and much quicker than a company. There is also no limit to the number of people who can do so, whereas a proprietary software company is limited by its current number of employees. There is no hassle or time spent negotiating, updating, or maintaining software licenses. The download size is also a fraction of the size.
How quickly are user-created tools and libraries being created? Is this creation accelerating over time?
Kambourakis: In R, the user-created tools and libraries are in the form of packages. There are multiple new R packages created every day. This graphic shows how dramatic this growth has been lately.
What are some business use cases R is best suited for?
Kambourakis: R is best suited for developing data models or building graphics. At this point, every business has data they should be modeling, and everyone needs graphics to help explain the models.
What are some of R’s shortcomings?
Kambourakis: Some companies don’t allow the use of open-source tools for security reasons. There is a lack of certification and product-specific trainings that a company like SAS or Oracle would typically administer. There isn’t customer support, so if something goes wrong there is no one to call or help aside from message boards. (Revolution Analytics recently announced AdviseR, the first commercial support program for R – ed.)
What are some of the ways users are misusing R?
Kambourakis: I think the two biggest misuses are in how and where they use it. Using R without an integrated development environment (IDE) or graphical user interface (GUI) makes everything much harder than it needs to be. The other misuse is putting it right into production. It really should be used for developing and testing a model. Then, for production, something faster should be implemented.
Are there potential business consequences of using this tool for the wrong business problems?
Kambourakis: Thankfully, I haven’t seen this problem before. I’m sure it exists, but I haven’t heard any real horror stories comparable to things I’ve heard about other tools such as Excel and the London Whale scandal at JPMC.
Is there a simple way to evaluate if R is the proper tool for a particular business challenge?
Kambourakis: I think the data size and speed dictate most of the decisions. If it’s small data, then R is a great tool. If you have multiple gigabytes of data, then you’re likely to run out of memory. R is much slower than compiled languages such as C or Python. If you need the algorithm to run in microseconds, R likely will be too slow.
What do you see as the future of R?
Kambourakis: In the future, I really see R being the most popular and most commonly used language for any kind of mathematics or statistics. The fact that it is being taught in universities means that in the future there will be more and more students and future employees using it. The availability of functions will only continue to expand. I also think many of the problems that R currently has will be worked out. The community behind R is very dedicated and passionate.
<urn:uuid:b76a3cea-0498-427c-9df3-7b2071f90864>
CC-MAIN-2017-04
http://data-informed.com/backed-passionate-users-r-enjoys-widespread-adoption/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00572-ip-10-171-10-70.ec2.internal.warc.gz
en
0.943932
945
2.96875
3
Oak Ridge National Laboratory (ORNL) has officially launched its much-anticipated Titan supercomputer, a Cray XK7 machine that will challenge IBM’s Sequoia for petaflop supremacy. With Titan, ORNL gets a system that is 10 times as powerful as Jaguar, the lab’s previous top system upon which the new machine is based. With a reported 27 peak petaflops, Titan now represents the most powerful number-cruncher in the world. The 10-fold performance leap from Jaguar to Titan is courtesy of NVIDIA’s brand new K20 processors – the Kepler GPU that will be formally released sometime before the end of the year. Although the Titan upgrade also includes AMD’s latest 16-core Opteron CPUs, the lion’s share of the FLOPS will be derived from the NVIDIA chips. In the conversion from Jaguar, a Cray XT5, ORNL essentially gutted the existing 200 cabinets and retrofitted them with nearly ten thousand XK7 blades. Each blade houses two nodes and each one of them holds a 16-core Opteron 6274 CPU and a Tesla K20 GPU module. The x86 Opteron chips run at a respectable 2.2 GHz, while the K20 hums along at a more leisurely 732 MHz. But because of the highly parallel nature of the GPU architecture, the K20 delivers around 10 times the FLOPS of its CPU companion. (Using the 27 peak PF value for Titan, a back-of-the-envelope calculation puts the new K20 at about 1.2-1.3 double precision teraflops.) Thanks to the energy efficiency of the K20, which NVIDIA claims is three times as efficient as its previous-generation Fermi GPU, Titan draws a mere 12.7 MW to power the whole system. That’s especially impressive when you consider that the x86-only Jaguar required 7 megawatts for a mere tenth of the FLOPS. It would appear, though, that IBM’s Blue Gene/Q may retain the crown for energy-efficient supercomputing. The Sequoia system at Lawrence Livermore Lab draws just 7.9 MW to power its 20 peak petaflops. However, it’s a little bit of apples and oranges here.
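The back-of-the-envelope K20 estimate can be reproduced from the article's own figures; the exact node count used below is an assumption inferred from the "nearly ten thousand" two-node blades:

```python
# Figures from the article; the node count of 18,688 is an assumption
# based on roughly two nodes per blade across nearly 10,000 XK7 blades.
nodes = 18688
peak_tf = 27.0 * 1000      # 27 peak petaflops, expressed in teraflops

# If the K20 delivers ~10x the FLOPS of its Opteron companion, the
# GPUs account for roughly 10/11 of the machine's total.
k20_tf = peak_tf * (10 / 11) / nodes
print(round(k20_tf, 2))    # ~1.31 DP teraflops per K20
```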
That 7.9 MW is actually the power draw for Sequoia’s Linpack run, which topped out at 16 petaflops. Since we don’t have the Linpack results for Titan just yet, it’s hard to tell if the GPU super will be able to come out ahead of the Blue Gene/Q platform. For a multi-petaflopper, Titan is a little shy on memory capacity, claiming just 710 terabytes – 598 TB on the CPU side and 112 TB for the GPUs. The FLOPS-similar Sequoia has more than twice that – nearly 1.6 petabytes. Back in the day, the goal for balanced supercomputing was at least one byte of memory for every FLOP, but that era is long gone. Titan provides around 1/40 of a byte per FLOP and, from the GPU’s point of view, most of the memory is on the wrong side of the PCIe bus – that is, next to the CPU. Welcome to the new normal. Titan is more generous with disk space though, 13.6 PB in all, although again, a good deal less than that of its Sequoia cousin at 55 PB. Apparently disk storage is being managed by 192 Dell I/O servers, which, in aggregate, provide 240 GB/second of bandwidth to the storage arrays. Titan’s big claim to fame is that it’s the first GPU-accelerated supercomputer in the world that has been scaled into the multi-petaflop realm. IBM’s Blue Gene/Q and Fujitsu’s K computer — both powered by custom CPU SoCs — are the only other platforms that have broken the 10-petaflop mark. Titan is also the first GPU-equipped machine of this size in the US. As such, it will provide a test platform for a lot of big science codes that have yet to take advantage of accelerators at scale. Acceptance testing is already underway at Oak Ridge and users are in the process of porting and testing a variety of DOE-type science applications to the CPU-GPU supercomputer. These include codes in climate modeling (CAM-SE), biofuels (LAMMPS), astrophysics (NRDF), combustion (S3D), material science (WL-LSMS), and nuclear energy (Denovo). 
According to Markus Eisenbach, his team has already been able to run the WL-LSMS code above the 10-petaflop mark on Titan. He says that level of performance will allow them to study the behavior of materials at temperatures above the point where they lose their magnetic properties. At the National Center for Atmospheric Research (NCAR), they are already using the new system to speed atmospheric modeling codes. With Titan, Warren Washington’s NCAR team has been able to execute high-resolution models representing one to five years of simulations in just one computing day. On Jaguar, a computing day yielded only three months’ worth of simulations. ORNL’s Tom Evans is using Titan cycles to model nuclear energy production. The simulations are for the purpose of improving the safety and performance of the reactors, while reducing the amount of waste. According to Evans, they’ve been able to run 3D simulations of a nuclear reactor core in hours, rather than weeks. The machine will figure prominently in the upcoming INCITE awards. INCITE, which stands for Innovative and Novel Computational Impact on Theory and Experiment, is the DOE’s way of sharing the FLOPS with scientists and industrial users on the agency’s fastest machines. The program only accepts proposals for end users with “grand challenge”-type problems worthy of top tier supercomputing. With its 20-plus-petaflop credentials, Titan will be far and away the most powerful system available for open science. (Sequoia belongs to the NNSA and spends most of its cycles on classified nuclear weapons codes.) The DOE has received a record number of proposals for the machine, representing three times what Titan will be able to donate to the INCITE program. Undoubtedly some of that pent-up demand is a result of the delayed entry of the US into GPU-accelerated supers. Over the past three years, American scientists and engineers have watched heterogeneous petascale systems being built overseas. 
China (with Tianhe-1A, Nebulae, and Mole 8.5), Japan (with TSUBAME 2.0), and even Russia (with Lomonosov) all managed to deploy ahead of the US. Some of that is due to the slow uptake of GPU computing by IBM and Cray, the US government’s two largest providers of top tier HPC machinery. IBM offers GPU-accelerated gear on its x86 cluster offerings, but its flagship supercomputers are based on its in-house Blue Gene and Power franchises. Cray waited until May 2011 to deliver its first GPU-CPU platform, the XK6 (with Fermi Tesla GPUs), preferring to skip the earlier renditions of NVIDIA technology. While Titan could be viewed as just another big supercomputer, there is a lot on the line here, especially for NVIDIA. If the system can be a productive petascale machine, it will go a long way toward establishing the company’s GPU computing architecture as the go-to accelerator technology for the path to exascale. The development that makes this less than assured is the imminent emergence of Intel’s Xeon Phi manycore coprocessor, and to a lesser extent, AMD’s future GPU and APU platforms. Intel will get its initial chance to prove Xeon Phi’s worth as an HPC accelerator with Stampede, a 10 petaflop supercomputer that will be installed at the Texas Advanced Computing Center (TACC) before the end of the year. That Dell cluster will have 8 of those 10 petaflops delivered by Xeon Phi silicon and, as such, the system will represent the first big test case for Intel’s version of accelerated supercomputing. It also represents the first credible challenge to NVIDIA on this front since the GPU-maker got into the HPC business in 2006. Whichever company is more successful at delivering HPC on a chip, the big winners will be the users themselves, who will soon have two vendors offering accelerator cards with over a teraflop of double precision performance. At a few thousand dollars per teraflop, supercomputing has never been so accessible.
<urn:uuid:e7d7e514-55c6-44e6-85b3-66c80adb00f2>
CC-MAIN-2017-04
https://www.hpcwire.com/2012/10/29/titan_sets_high-water_mark_for_gpu_supercomputing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00298-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927657
1,853
2.515625
3
Internet's Roots Stretch Nearly 600 Years to Gutenberg Printing Press
NEWS ANALYSIS: The single most important invention of the past millennium that revolutionized society, science and technology started out as a way to make wine.
Mainz, Germany—The lever was cool to the touch as I grasped it, reaching head high with both hands. "Pull hard," the man next to me advised, so I did, and kept pulling as the lever creaked. "That's enough," he said, "you can let go now." I dropped my hands to my side and then helped the museum staffer from the Gutenberg Museum here slide the printing plate and the paper it held from beneath the press. I held out my hands, and in them he placed a paper in Germanic script and the Latin text. It was a newly printed page from Gutenberg's Bible. But it was more than that. It was from the device that had changed civilization forever. While the printing press with moveable type that Johannes Gutenberg developed nearly 600 years ago seems modest by today's standards, there is a direct technological link between this machine—with parts originally designed for a wine press—and the Internet. The printing press led almost immediately to the printing revolution, widespread literacy and the development of mass communications. Today, the ultimate means of mass communications is the Internet and the HTML language. HTML itself is derived from a markup language to specify document formatting for printing. It is, in effect, a means for printing on a screen instead of paper. But being able to produce printed pages isn't what changed civilization. In addition, Gutenberg, as is the case with many who created a transformative technology, developed his inventions based on previous inventions. Printing presses existed before Gutenberg, as did moveable type. But it was the combination of several technologies put together that made the difference.
<urn:uuid:c8fefc4f-16b1-4dfc-ab9c-36e78f057c30>
CC-MAIN-2017-04
http://www.eweek.com/cloud/internets-roots-stretch-nearly-600-years-to-gutenberg-printing-press.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00022-ip-10-171-10-70.ec2.internal.warc.gz
en
0.970986
370
3.78125
4
The way in which passwords work, though they are the most common security control in use, is little understood. Raising awareness is key to improving the acceptance by staff of the enterprise's password policy. Understanding why password construction matters, how passwords are stored and used by information systems and just how password crackers operate will go a long way to making sure the enterprise's passwords are a point of strength, not a point of weakness. The two significant factors in the construction of a strong password are length and complexity. Length is simply the number of individual characters used in the creation of the password, while complexity refers to the number of characters that could potentially be used in the creation of the password. Of the two, complexity is far more important to password strength than is length. A little mathematics bears this out.
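That arithmetic can be sketched directly: the keyspace a cracker must search is the alphabet size raised to the power of the length, so widening the character set at a fixed length can dwarf the effect of adding a single character (the values below are illustrative, not from the article):

```python
def keyspace(alphabet_size: int, length: int) -> int:
    """Number of distinct passwords of a given length over an alphabet."""
    return alphabet_size ** length

print(f"{keyspace(26, 8):.2e}")  # 8 lowercase letters:        2.09e+11
print(f"{keyspace(26, 9):.2e}")  # one character longer:       5.43e+12
print(f"{keyspace(94, 8):.2e}")  # 94 printable characters:    6.10e+15
```

Going from 26 to 94 possible characters at length 8 multiplies the keyspace by roughly 29,000, while adding one lowercase character multiplies it by only 26.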
<urn:uuid:c18ad23e-4c40-4d02-ad95-80a822367524>
CC-MAIN-2017-04
https://www.infotech.com/research/understanding-password-cracking-the-key-to-better-passwords
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00012-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942723
162
2.9375
3