Zenyatta Ventures Ltd says that it has made progress on its project to determine possible uses for the graphite powder produced from its mine in Albany, Canada. Experts believe that graphene could be used for a range of innovative cleantech applications, including low-cost solar cells, supercomputers and rapid-charge batteries. However, one obstacle to its widespread use is the high manufacturing cost of high-quality graphene. A lower-cost approach is to use high-purity natural graphite.

The goals of the Natural Science and Engineering Research Council of Canada Collaborative Research and Development (NSERC CRD) project, run by Dr Aicheng Chen, Professor of Chemistry at Lakehead University, are: to characterize the physical and chemical properties of Zenyatta’s Albany graphite, to understand its electrochemical behaviour, to modify the graphite for practical applications, and to develop advanced carbon nanomaterials such as graphene from the Albany graphite.

Since the beginning of the project, Dr. Chen and his research group report significant advances in the characterization of the graphite and in the development of new materials from it for practical applications. Initial results indicate that high-quality graphene oxides can be produced from Albany graphite at laboratory scale and then converted to graphene via a simple reduction process, with preliminary graphene yields of approximately 98%.

‘From an analytical perspective, the Albany graphite meets all the stringent requirements for a high-quality product, encompassing high purity, crystallinity, thermal stability, and high surface area,’ said Dr. Chen. ‘Interestingly, the crystallinity found in Zenyatta’s Albany graphite was greater than that of commercially available graphite samples which were also tested for comparative purposes. These initial studies indicate that there are great potential opportunities for the utilization of this product in multiple practical applications. For example, these graphite derivatives will be explored for their medical, energy and environmental technology applications.’

This story uses material from Zenyatta, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier.

In a boost to technology transfer, Sandia National Laboratories has launched a program that lets researchers consult for companies that license their Sandia work. There is a need for this: the business community often says it would help a lot if Sandia's people could consult on their inventions, and licensees, especially small businesses, need the technical guidance to take the next steps.

A researcher who wants to consult for a company that licenses his or her technology must first get a green light from the labs, and the work would be done on the researcher’s own time. Atherton said the availability of consulting should lead to more licensing as businesses learn they can get follow-up technical help from Sandia.

The program is one of a number of ways Sandia supports technology transfer and the business community. The New Mexico Small Business Assistance (NMSBA) program lets for-profit companies team with Sandia researchers free of charge to solve technical challenges.
In 2014, Sandia provided $2.31 million in assistance to 197 New Mexico small businesses in 27 counties. Scientists can also leave Sandia to launch technology companies or expand existing ones through the Entrepreneurial Separation to Transfer Technology (ESTT) program, which guarantees reinstatement if the researcher chooses to return to Sandia within three years. The Sandia Science and Technology Park is a 340-acre technology community adjacent to Sandia and Kirtland Air Force Base where startups and mature companies can collaborate with the Labs on a variety of technologies, products and services. The park’s Center for Collaboration and Commercialization, or C3, will offer programs and services to strengthen partnerships, technology transfer and ties to the community. Companies can also contract to work with Sandia through Cooperative Research and Development Agreements (CRADA) and Strategic Partnership Projects, Non-Federal Entity (SPP/NFE) agreements.

McInnis J., Singh S., and Huq I. (Collaborative Research and Development), Mitigation and Adaptation Strategies for Global Change, 2015:

Coal is the most abundant hydrocarbon energy source in the world. It also produces a very high volume of greenhouse gases with current production technology, and it is more difficult to handle and transport than crude oil and natural gas. We face a challenge: how can we access this abundant resource and at the same time mitigate global environmental challenges, in particular the production of carbon dioxide (CO2)? The editors of this special edition journal consider the opportunity to increase the utilization of this globally abundant resource and recover it in an environmentally sustainable manner.

Underground coal gasification (UCG) is the recovery of energy from coal by gasifying the coal underground. This process produces a high calorific synthesis gas, which can be applied for electricity generation and/or the production of fuels and chemicals. The carbon dioxide emissions are relatively pure and the surface facilities have a limited environmental footprint. Unused carbon is readily separated and can be geo-sequestered in the resulting cavity. The cavity is also being considered as a potential option for mitigating the climate change impacts of other sources of CO2 emissions. These outcomes mean there is an opportunity to provide developing and developed countries with a source of low-cost clean energy. Further, burning the coal in situ means that the traditional dangers of underground mining and extraction are reduced, a higher percentage of the coal is actually recovered, and the resulting cavern creates the potential for long-term storage of the gasification wastes.

The process is not without challenges. Ground subsidence and groundwater pollution are two potential environmental impacts that need to be averted for this process to be acceptable. It is essential to advance the understanding of this practice, and this special edition journal seeks to share the progress that scientists are making in this dynamic field. The technical challenges are being addressed by researchers around the world who work to understand and resolve how burning coal underground affects the geology, the surface land, and the groundwater, both in the short and the long term.
This special issue reviews the process of UCG and considers the opportunities, challenges, risks, competitive analysis and synergies, commercial initiatives, and a roadmap to solutions via the modelling and simulation of UCG. Building and then disseminating the fundamental knowledge of UCG will enhance policy development, best practices and processes that reflect the global desire for energy production with reduced environmental impact. © 2015 Springer Science+Business Media Dordrecht
Source: https://www.linknovate.com/affiliation/collaborative-research-and-development-1561103/all/
Scientists at the Weizmann Institute say they have developed the world’s first photonic router, an important advance towards building a full-on quantum computer. The router is a quantum device based on a single atom that can switch between two states. The single atom is coupled to a fiber-coupled, chip-based microresonator. The state is flipped by sending a single particle of light from the right or the left via an optical fiber. In response, the atom reflects or transmits the next incoming photon.

The new development combines laser cooling and the trapping of atoms with chip-based, ultra-high-quality miniature optical resonators that couple directly to the optical fibers. These are very advanced technologies, and the laboratory responsible for this breakthrough is one of the few with the necessary expertise to carry it out.

“In a sense, the device acts as the photonic equivalent to electronic transistors, which switch electric currents in response to other electric currents,” says Dr. Barak Dayan, head of the Weizmann Institute’s Quantum Optics group. What’s especially interesting is that the switch is operated solely by single photons: the photons comprise the information and control the device. It’s the sameness of the control and target photons that makes this scheme compatible with scalable architectures for quantum information processing, as explained further in a writeup in Science Magazine (subscription req’d).

Quantum computing hinges on the phenomenon of superposition, where particles can exist in multiple states simultaneously. Superposition is highly unstable, however, and is disturbed by the least bit of interference. Photons are considered to be the most promising candidates for communication between quantum systems because they do not interact with each other at all, and interact very weakly with other particles.

The project represents an important step on the road to more complex quantum-based systems. Says Dayan: “The road to building quantum computers is still very long, but the device we constructed demonstrates a simple and robust system, which should be applicable to any future architecture of such computers. In the current demonstration a single atom functions as a transistor – or a two-way switch – for photons, but in our future experiments, we hope to expand the kinds of devices that work solely on photons, for example new kinds of quantum memory or logic gates.”
Source: https://www.hpcwire.com/2014/07/16/scientists-devise-photon-based-router/
Bell Labs, the research arm of Alcatel-Lucent, is claiming a broadband speed record of 10 gigabits per second (Gbps) on traditional copper telephone lines. The same technology, called XG-FAST, can be used to deliver symmetrical 1 Gbps on copper access networks, Bell Labs said. If the technology can be commercialized in the next few years, it could have significant ramifications for phone companies, some of whom insist they must rip out copper and replace it with fiber in order to remain competitive with cable competitors. The cable industry plans to get to 1 Gbps with DOCSIS 3.1 (aka Gigasphere) on extant coax wires, and is confident it can get to at least 10 Gbps riding that technology.

XG-FAST is an extension of G.fast, a new broadband standard currently being finalized by the ITU. On the upside, the technology achieves ultrafast transmission rates; on the downside, it can do so only on extremely short loop lengths. When G.fast becomes commercially available in 2015, it will use a frequency range for data transmission of 106 MHz, giving broadband speeds up to 500 Mbps over a distance of 100 meters. XG-FAST, meanwhile, uses a frequency range up to 500 MHz to achieve higher speeds, though over shorter distances.

Bell Labs achieved 1 Gbps symmetrical over 70 meters on a single copper pair; signals at higher frequencies were completely attenuated after 70 meters, the company reported. 10 Gbps was achieved over a distance of 30 meters by bonding two lines, Bell Labs said. Both tests used standard copper cable provided by a European operator.

Marcus Weldon, President of Bell Labs, said: “By pushing broadband technology to its limits, operators can determine how they could deliver gigabit services over their existing networks, ensuring the availability of ultra-broadband access as widely and as economically as possible.”

In practical situations, other significant factors that can influence actual speeds (not taken into account during these tests but which have been studied extensively elsewhere) include the quality and thickness of the copper cable and cross-talk between adjacent cables (which can be removed by vectoring), Bell Labs said.

Commenting on the achievement, Federico Guillén, President of Alcatel-Lucent’s Fixed Networks business, said: “The Bell Labs speed record is an amazing achievement, but crucially in addition they have identified a new benchmark for ‘real-world’ applications for ultra-broadband fixed access. XG-FAST can help operators accelerate FTTH deployments, taking fiber very close to customers without the major expense and delays associated with entering every home. By making 1 gigabit symmetrical services over copper a real possibility, Bell Labs is offering the telecommunications industry a new way to ensure no customer is left behind when it comes to ultra-broadband access.”

Maximum aggregate speed:
- G.fast phase 1*
- G.fast phase 2*
- Bell Labs XG-FAST**: 2 Gbps (1 Gbps symmetrical)
- Bell Labs XG-FAST with bonding***: 10 Gbps (two pairs)

* Industry standard specifications. G.fast allows for upload and download speeds to be configured by the operator.
** In a laboratory, reproducing real-world conditions of distance and copper quality.
*** Laboratory conditions.
Source: https://www.cedmagazine.com/print/news/2014/07/bell-labs-achieves-10gbps-speeds-on-copper-lines
Agencies collaborate on learning research project
Goal is to analyze trends from data trails left by researchers, students and teachers
By Alice Lipowicz - Apr 11, 2011

Agencies are collaborating on the online Learning Registry project to make federal primary source materials easier to find on the Web and to integrate into educational curricula, Steve Midgley, deputy director for the U.S. Office of Education Technology, said at a conference today. “Digital resources are available on the Web, but they are difficult to integrate,” Midgley said at the Ignite Smithsonian media innovation conference in Washington. “We want to capture more information to achieve knowledge amplification.”

The project collects information about the user data trails made when researchers, students and teachers search for federal data and resources available on third-party websites. By aggregating the information from the data trails, the registry expects to identify the information that would be most useful to educators.

The user data trails are also known as “data exhaust,” Midgley said, because they consist of information that is typically spewed out in large quantities and not collected, such as data on how users use search engines, which search terms they use, how many searches they perform and in what order, and which websites they click through as they perform their research. If that usage data is properly collected and analyzed, it can yield benefits in streamlining research, integrating data into a curriculum and improving performance, he said.

For example, NASA regularly posts videos on third-party aggregation sites, such as the National Science Digital Library. A review of data trails on the digital library website showed that one of the NASA videos had been used by high school physics teachers for a lesson on velocity. That tip was shared with national educational agencies to help them prepare source materials for lessons on that topic. “We want to interconnect users in a way that we are not doing today,” Midgley said.

In addition to Midgley’s office, the other key member of the collaboration is the Advanced Distributed Learning Initiative. Other agencies involved include Data.gov, the National Science Foundation, the White House Office of Science & Technology Policy, the Federal Communications Commission, the National Institute of Standards and Technology and the National Archives and Records Administration. Midgley said additional partners are welcome. “We are a do-ocracy with open participation,” he said. Anyone with interest in providing data, consuming data or sharing data is welcome to participate in the research.

Alice Lipowicz is a staff writer covering government 2.0, homeland security and other IT policies for Federal Computer Week.
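The aggregation step Midgley describes boils down to counting which resources surface in users' trails. The toy Python sketch below shows the idea on a few invented log records; the field names and values are illustrative only, not the Learning Registry's actual schema.

```python
from collections import Counter

# Invented "data exhaust" records: which resource a user opened
# after a given search. Field names are illustrative only.
log = [
    {"query": "velocity", "clicked": "nasa-video-17"},
    {"query": "velocity", "clicked": "nasa-video-17"},
    {"query": "momentum", "clicked": "nsdl-lesson-3"},
]

# Aggregate the trails to surface the most-used resources.
popularity = Counter(record["clicked"] for record in log)
print(popularity.most_common(1))  # [('nasa-video-17', 2)]
```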
Source: https://fcw.com/articles/2011/04/11/education-dod-depts-collaborate-on-learning-registry-research-project.aspx?s=fcwdaily_130411
When it comes to employing physics in medicine, there are two major fields in terms of their relevance in clinical practice: medical imaging and radiation therapy. A recent paper from an Argentinian research duo addresses how these domains can benefit from high-performance computing techniques.

Medical imaging and radiation therapy both rely heavily on computational resources. Ideally, computational work can be performed in real time or near-real time to benefit patient outcomes as much as possible, the researchers note. While execution times have dropped significantly with the advent of faster CPUs, wait times are still problematic. In tomographic image reconstruction, internal dosimetry calculation and radiotherapy planning, accelerating these processes is enormously important, “not only for the patient – whose quality-of-life improvement is the ultimate goal – but also for optimizing professional work in a busy hospital environment.”

Over the last several years, the rise of multicore and GPU-based computing has boosted many technical computing domains, including the field of medical physics. The research paper explores the ways that medical physics has benefited from advances in HPC and specifically GPU computing. The authors describe two typical lines of research in medical image processing, image segmentation and registration, that are good candidates for parallel computing on GPU cores.

Image segmentation, which falls under general image processing, involves the identification and further classification of different constituents or textures depicted in a given dataset. In the case of biomedical images, this discovery process is crucial to both diagnosis and therapy. The authors found that implementing an image segmentation algorithm on a GPU delivered impressive results: a 15x speedup in comparison to the optimized code running on a CPU-only setup.

The second medical imaging process, known as registration, involves bringing two or more datasets into spatiotemporal alignment. There are many reasons this is done, including diagnostic power enhancement after comparing different modalities, disease follow-up, and assistance in radiotherapy planning. It is a complex process, and the algorithm designed by the researchers requires 30-40 minutes of CPU time to register two 512x512x50-voxel datasets. Because the algorithm uses a hierarchical subdivision scheme, the authors are confident that it will benefit from acceleration using parallel computing.

Radiotherapy is the second main area examined in the paper. “In Radiation Therapy, the calculation of the dose delivered by ionizing radiation and the use of optimization algorithms on advanced methods of treatment, are the main areas where GPU programming has its greatest impact,” write the authors. There are different ways of computing this dose: a 2D solution known as the pencil beam algorithm, and a 3D algorithm known as convolution/superposition. The authors note that other research groups have developed reformulated pencil beam and convolution/superposition algorithms for GPU-based processing, with speedups of 200-400x. At the authors’ home institution, Fundación Escuela de Medicina Nuclear de Mendoza, they are working to refine these techniques using the accelerative power of the GPU when it is feasible to do so. It is worth noting that even when an algorithm, e.g. Monte Carlo, is ideal for parallel computation, the complexity of the method can limit the acceleration potential.
The clinical value of this work is the development of a treatment plan that strikes the best compromise between the dose of radiation delivered to the tumor and the dose received by the healthy organs located around it.
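To give a flavor of why per-pixel imaging work parallelizes so well, here is a minimal NumPy sketch of intensity-threshold segmentation. It is a toy stand-in, not the paper's algorithm: every pixel is tested independently of its neighbors, which is exactly the access pattern that maps onto thousands of GPU cores.

```python
import numpy as np

def threshold_segment(image, low, high):
    """Mark pixels whose intensity falls in [low, high].

    Each pixel is tested independently, so the whole operation
    is embarrassingly parallel.
    """
    return (image >= low) & (image <= high)

# Synthetic 512x512 "slice" with intensities in [0, 1)
rng = np.random.default_rng(seed=0)
slice_ = rng.random((512, 512))
mask = threshold_segment(slice_, 0.4, 0.6)
print(f"{mask.sum()} of {mask.size} pixels fall in the target band")
```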
Source: https://www.hpcwire.com/2013/09/05/hpc_boosts_medical_physics/
In addition to the eight pathways of technological advancement listed above, we have three digital accelerators making change possible. In fact, a closer look at this technological tsunami we're about to experience shows that it is actually a braid of three powerful, interlocking hard trends. Each on its own is capable of driving a huge amount of change, but the impact of all three acting in concert is enormous. If you think of the forward advance of technology as a car, we are stepping down hard on the accelerator or, more accurately, on three accelerators at once.

Accelerator No.1: Processing power - Scientists have found, for example, that DNA nanostructures (about one-thousandth the diameter of a human hair) can serve as scaffolds for the assembly of computer chips. The process involves placing a long, single strand of viral DNA in a solution with short, synthetic strands. The large molecule self-assembles into various configurations, folding itself into a square, triangle, or other two-dimensional shape, with the short segments acting as staples. The structures are positioned precisely on a silicon wafer using electron-beam lithography and oxygen plasma etching. Carbon nanotubes, nanowires, and other microscopic components can then be assembled on the scaffold to create complex circuits that are much smaller than any conventional semiconductors. Dubbed DNA origami, this breakthrough is one of many that will maintain the increase in processing power well into the future. Another radically different approach will be to access the processing power of a remote supercomputer from our smartphone or tablet; the processing power of our own computing device will matter less with processing-power-as-a-service (PPaaS).

Accelerator No.2: Bandwidth - Today bandwidth is lightning fast compared to a few years ago, but it's accelerating even faster than the doubling of processing power. We think websites are sophisticated today because we have high-quality graphics that load quickly, and even streaming video. But tomorrow it will be common to have 3D websites that allow you to walk a virtual tour of your store, new house, or vacation site, in real time. The acceleration in bandwidth that made possible outsourcing to India and ushered in a multimedia Web was generated mainly by advances in fiber optic technology, which translated into a huge increase in speed and doubling potential compared to glacially slower copper wire. Increasing these fiber optic strands' capacities doesn't require laying down new fibers; it only requires innovation in the switching units at the ends of each cable. In other words, we can easily multiply the capacity of our existing network by orders of magnitude without any substantial new investment in the infrastructure.

Accelerator No.3: Storage - Even as processing power and bandwidth climb at ever-increasing rates, the increase in our tools' capacity to store all the information from that increased processing and bandwidth is going through an even steeper, more dramatic change. My first computer didn't even have a hard disk drive. Today, data storage capacity is so huge it's almost unlimited and so cheap it's practically free. That's the continuing impact of the third digital accelerator: the capacity to store digital data is doubling every 12 months -- faster than the increase in both bandwidth and processing power -- and remote storage using cloud services provides us with almost limitless capacity.
While current laser technologies are continually increasing the amount of data stored by using shorter and shorter wavelengths of light, they are limited by the nature of their two-dimensional design. But scientists at GE are looking at new ways of increasing storage capacity using holographic principles. They have developed specialized polycarbonate materials that write data to a disk by chemically altering the composition of the material when exposed to specific types of laser light. This method allows them to use the entire volume of the recording medium instead of just the visible surface, permitting two hundred times more data to be recorded on the same size disk. Because surface area is no longer a factor, the size and shape of the media can be more flexible. And data retrieval is considerably faster with the use of parallel reading schemes.

You could someday store your entire movie collection on one DVD. But you probably won't because, as you may have already begun to notice, even DVDs will soon be obsolete. After all, who buys CDs anymore? We download our music direct from iTunes. My current laptop doesn't even have a hard drive; it uses solid-state memory chips with no moving parts, and my data is sitting on a server I'm linked to on the cloud. And that's today. What about tomorrow?

As dramatic as the technological changes produced by these three accelerators have been up until now, they are only a hint of what lies ahead. With bandwidth accelerating even faster than processing power, and storage capacity accelerating faster still, all three digital accelerators are coming together like a perfect storm to create an enormous force of transformation that is shooting up and off the charts of conventional expectations. We are about to put the pedal to the metal.
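The compounding the author describes is easy to quantify. The Python sketch below compounds a fixed doubling period over several years; the 12-month storage figure comes from the text, while the 18-month processing-power period is an assumed Moore's-law-style rate used only for contrast.

```python
def growth(doubling_period_months, years):
    """Multiplicative capacity growth for a fixed doubling period."""
    return 2 ** (12 * years / doubling_period_months)

# Storage doubles every 12 months (from the text); the 18-month
# processing-power period is an assumption, used for comparison.
for label, period in [("storage", 12), ("processing power", 18)]:
    print(f"{label}: {growth(period, 6):.0f}x after 6 years")
# storage: 64x after 6 years
# processing power: 16x after 6 years
```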
Source: http://www.cioupdate.com/reports/article.php/3932451/Special-Report---Seeing-the-Tech-Tsunami-iBeforei-the-Impact-Part-II.htm
Near-Threshold Voltage, or NTV, has the potential to significantly cut energy requirements for high-performance computing. This is becoming especially important for the largest supercomputers, which are already well into the multi-megawatt realm and are expected to hit tens of megawatts in the exascale era. Intel recently demonstrated its NTV capabilities at ISSCC 2012, operating an x86 microprocessor on only 2 milliwatts of power. The company published three papers on its results, which were analyzed and discussed in an article at Real World Technologies by David Kanter.

The threshold voltage is the minimum voltage required to drive current across a transistor, that is, to turn the transistor on. Intel has found that the most energy-efficient point at which to operate a circuit is near this threshold voltage.

There are a couple of intrinsically tricky things about operating at such a low voltage. The first is limiting dI/dt, the mathematical expression for change in current over time. Rapid spikes or drops in current, especially those that would occur as a result of a particular transistor accidentally dropping beneath the threshold, can create computational errors. Ideally, all transistors would be created equal; statistically, however, since there are sometimes billions of transistors on a given chip, some will perform worse than others.

Another challenge to overcome is the resulting power loss. Power is proportional to the square of the voltage, such that a 10 percent reduction in voltage leads to a 19 percent reduction in power. While this reduced voltage would be a great way to increase efficiency, it would also be a great way to ensure your CPU does not have the juice required to run what it needs to. Further, NTV significantly decreases frequency. “The 32nm Pentium core,” Kanter said of a core that ran using NTV, “increased efficiency by about 5×, by running at slightly under 100MHz. The maximum frequency was 915MHz, so the absolute performance decreased by about an order of magnitude.” As he notes, NTV would be impractical for general-purpose CPUs, as they are generally used for applications that expect reasonable single-threaded performance and thus require the higher voltages needed to drive faster clocks.

On the other hand, HPC and its massively parallel computing environment could benefit greatly from NTV. “Based on our analysis of these papers,” Kanter wrote, “Near-Threshold Voltage computing techniques are most applicable to highly parallel workloads. Generally, NTV is an ideal fit for HPC workloads and works very well for graphics, but not general purpose CPUs.” Since HPC is highly parallelized and requires backups and fail-safe mechanisms throughout a computation, it can withstand the consequences of a single transistor giving out. HPC computations are also not expected to happen anywhere near real time, making the frequency decrease less of a problem. This is especially true of “throughput” accelerators like GPGPUs and Intel’s Xeon Phi, which are naturally frequency-constrained because of their high core counts.

There is a sense that this technology is being developed specifically to benefit HPC rather than doing so accidentally. This is hinted at not only by the Intel papers themselves, but also indirectly by who funded them, specifically the US government. “Perhaps most telling,” Kanter wrote, “US government grants typically focus on areas of national interest.
Graphics simply is not vital to the country, whereas HPC is a critical tool for the Departments of Defense, Energy, and any number of intelligence agencies.”
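The voltage-power arithmetic above is worth checking. In the standard first-order model, dynamic CMOS power scales as frequency times voltage squared; the minimal Python sketch below reproduces the article's 10-percent-voltage, 19-percent-power figure with frequency held constant (a simplification, since real NTV designs also drop frequency).

```python
def dynamic_power_ratio(v_scale, f_scale=1.0):
    """Relative dynamic CMOS power under the first-order model P ~ f * V^2."""
    return f_scale * v_scale ** 2

# Frequency held constant: 0.9**2 = 0.81, i.e. ~19% less power.
print(f"10% lower voltage: {1 - dynamic_power_ratio(0.9):.0%} less power")
```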
Source: https://www.hpcwire.com/2012/09/19/for_energy-efficient_hpc_less_is_more/
Virus:Boot/Stoned is a simple virus that seems to have been designed to be harmless. Due to a mistake, however, it did not quite work out that way. Stoned is able to infect the boot sectors of floppy disks, and it has spawned a large number of variants. Stoned was one of the most widespread viruses in existence.

Depending on the settings of your F-Secure security product, it will either automatically delete, quarantine or rename the suspect file, or ask you for a desired action. More information on the scanning and removal options available in your F-Secure product can be found in the Help Center. You may also refer to the Knowledge Base on the F-Secure Community site for more information.

On an infected diskette, the original boot sector is stored on track 0, head 1, sector 3. This is the last sector of the root directory on a 360K diskette, so this will work unless the root directory contains more than 96 files, which is rather unlikely. Overwriting this sector on a 1.2M diskette is, however, much more likely to cause damage. A computer infected with this virus will sometimes display the following message when it starts:

Your computer is now stoned.

There are a large number of Stoned variants, many with no significant differences. The most notable are:

- Bloody: This variant is one of several politically motivated viruses and contains the message "Bloody! Jun. 4, 1989".

- Swedish Disaster: This virus contains the string "The Swedish Disaster", which may indicate it was written in Sweden.

- Manitoba: Closely related to the original Stoned, Manitoba's main difference is that on floppies it does not store the original boot sector anywhere; it just overwrites it. Manitoba allocates two kilobytes of memory while resident and corrupts 2.88MB EHD floppies while infecting them. Manitoba has no activation routine. It was probably written at the University of Manitoba.

- NoInt: Also known as Stoned III, NoInt infects boot sectors on diskettes and Master Boot Records (MBRs) on hard disks. It infects a hard disk only if you try to boot from an infected diskette. The virus will be loaded into memory if the hard disk is infected and the machine is booted from it. Once the virus is in memory, it will infect all diskettes that are used in the machine, unless the diskettes are write-protected; it is sufficient to enter a command like DIR A: to get a diskette infected. NoInt tries to prevent other programs from detecting it by causing read errors when an attempt is made to access the partition table. It does nothing else visible and contains no texts, though it may damage directories indirectly. The amount of base memory decreases by 2 kB.

- Flame: This virus is a standard boot sector infector that will infect the MBR or the boot sector of a floppy. If the computer is booted from an infected floppy, the virus immediately attempts to infect the MBR of the hard disk. Once Flame is active in memory, any operation on a non-infected floppy will result in infection. The virus reserves 1 kB of DOS memory and stores the original boot sector or MBR at cylinder 25, sector 1, head 1, regardless of what media is infected. Flame saves the current month when it infects a system; when the month changes, it activates by displaying coloured flames on screen and overwriting the MBR.

- Angelina: This Stoned variant has stealth mechanisms.
It was probably made in Poland and contains the text "Greetings for ANGELINA!!!/by Garfield/Zielona Gora"; Zielona Gora is a town in Poland. In October 1995, Angelina was found on new Seagate 5850 (850MB) IDE drives which were still factory sealed.
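The sector coordinates in this description are cylinder/head/sector (CHS) addresses. A quick Python sketch of the standard CHS-to-logical-sector conversion shows why track 0, head 1, sector 3 is the last root-directory sector of a 360K diskette; the geometry defaults (2 heads, 9 sectors per track) are the usual 360K values, stated here as an assumption rather than taken from the F-Secure text.

```python
def chs_to_lba(cylinder, head, sector, heads=2, sectors_per_track=9):
    """Standard CHS -> logical sector number; defaults match a 360K floppy."""
    return (cylinder * heads + head) * sectors_per_track + (sector - 1)

# Where Stoned stashes the original boot sector on a 360K diskette:
lba = chs_to_lba(0, 1, 3)
print(lba)  # 11 -- the last sector of the 7-sector root directory
```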
Source: https://www.f-secure.com/v-descs/stoned.shtml
Emergency responders are often faced with uncertainty when responding to incidents, but soon they could have some help in assessing a location or situation's safety: a rubber-framed camera the size and shape of a tennis ball that can instantly transmit a 360-degree image to a smartphone from its six cameras. The tool, created by Bounce Imaging, could be useful when a responder is unsure if an area is safe or wants more information about an area that isn't visible, such as through a crack or beneath debris, RedOrbit.com reported.

With an expected price tag of less than $500, the device would be sold for about one-tenth of the cost of the cheapest competing devices designed for the same use, said Francisco Aguilar, the company's founder. “But we hope that with our technology, it could be expanded to volunteers with low-cost units that could be tossed into air pockets and collapsed spaces in search of victims,” Aguilar told the BBC.

The device also has slots for additional sensors, such as smoke or temperature sensors for a firefighting model, or methane and coal dust sensors for a mine inspection model. The ball can also use infrared imaging to take pictures in low-light conditions. Bounce Imaging stated that its camera ball, which was named one of Time Magazine’s Best Inventions of 2012, could see prototype deployment with several police units in Massachusetts in the coming months.
Source: http://www.govtech.com/Massachusetts-State-Police-May-Test-Rubber-Ball-Camera.html
OpUtils' diagnostic tools are used to check the availability, route and health of a system in a network using ICMP and SNMP. By drawing a graph with the available details, the tools show the status of a node. The tools can also scan a range of IP addresses and report their status.

Ping tool: This tool is the graphical representation of the ICMP PING utility. It helps in discovering the status of a network device, that is, whether the device is alive or not. Before you ping a device you can configure the ping settings, such as number of packets, time to live, size, and timeout.

SNMP ping: This tool checks whether a node is SNMP-enabled or not. It helps network engineers determine the availability of a device and provides basic information such as DNS name, system name, location, system type, and system description. Following the SNMP discovery, if required, more details of the node can be retrieved using SNMP tools such as SNMP Walker, MIB Browser and SNMP Graph.

Network scan: Using this tool you can scan an entire range of IP addresses to check their availability through ICMP/SNMP, check forward and reverse lookup, and determine their MAC addresses.

Proxy ping: The proxy ping tool enables you to ping a target device using a Cisco router. The router acts as the proxy for the target device and responds to the ping request.

Trace route: This tool records the route followed in the network between the sender's computer and a specific destination computer. The user can configure settings such as number of hops and timeout.

For more details on each of the tools, refer to the Diagnostic Tools section of the online help.
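For readers who want the behavior of the basic ping check without a GUI, here is a minimal Python sketch that shells out to the operating system's ping utility. It is a generic illustration, not OpUtils code; note that the packet-count flag differs between Windows (-n) and Unix-like systems (-c).

```python
import platform
import subprocess

def ping(host, count=4):
    """Return True if `host` answers ICMP echo requests.

    Invokes the system ping command; the packet-count flag is
    -n on Windows and -c elsewhere.
    """
    flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(["ping", flag, str(count), host],
                            capture_output=True)
    return result.returncode == 0

print(ping("127.0.0.1"))  # True if the local host replies
```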
Source: https://www.manageengine.com/products/oputils/diagnostic-tools.html
Solar Eclipse from Space / March 12, 2013

The latest transmissions from NASA's Solar Dynamics Observatory (SDO), which took off in 2010 for its five-year mission to observe solar activity, show the sun partially blocked from view by the Earth -- and the moon. Such solar eclipses will be regular occurrences for the next three weeks, cnet.com reported, when the Earth blocks the SDO's view of the sun for a period of time each day. The photo above was taken on Monday, March 11, and shows the moon crossing in front of the sun.

Photo courtesy of NASA/SDO
Source: http://www.govtech.com/photos/Photo-of-the-Week-Solar-Eclipse-from-Space.html
The Department of Transportation National Highway Traffic Safety Administration (NHTSA) issued proposed guidelines on how to prevent drivers from becoming distracted by mobile devices. The guidelines include encouraging cell phone manufacturers to build products that incorporate ways to decrease driver distraction. The DOT said that companies should build features such as pairing, where a mobile device can be connected to a vehicle’s infotainment system, and Driver Mode, a setting that users could enable that would block them from certain functions on their phones while behind the wheel. Pairing and Driver Mode would reduce the time a driver’s eyes are off the road, while preserving the operability of these devices outside of the vehicle.

“As millions of Americans take to the roads for Thanksgiving gatherings, far too many are put at risk by drivers who are distracted by their cellphones,” said Anthony Foxx, secretary of the Department of Transportation. “These commonsense guidelines, grounded in the best research available, will help designers of mobile devices build products that cut down on distraction on the road.”

The NHTSA is seeking public comment on the proposed guidelines. The agency reminded drivers to put away cell phones while driving and to enter an address into any electronic GPS system before starting to drive. The guidelines are the second phase of voluntary guidance to address driver distraction; the first phase focused on devices or systems built into the vehicle, rather than capabilities that could be included in mobile devices.

“NHTSA has long encouraged drivers to put down their phones and other devices, and just drive,” said Mark Rosekind, administrator for the NHTSA. “With driver distraction one of the factors behind the rise of traffic fatalities, we are committed to working with the industry to ensure that mobile devices are designed to keep drivers’ eyes where they belong — on the road.”
Source: https://www.meritalk.com/articles/dot-proposes-guidelines-against-texting-and-driving/
The government has made important strides in lowering the carbon footprint of its information technology enterprise in recent years, but responsible agencies haven't set up reliable metrics to gauge how much good those initiatives have done, a watchdog said Friday.

Friday's report from the Government Accountability Office also urged agencies to adopt best practices from the private sector to curb energy consumption, such as establishing policies on when to print documents and transitioning to thin client computing, where the most intensive computer processing is done on a central server instead of employees' desktops.

The White House Council on Environmental Quality advises agencies on establishing green IT initiatives in accordance with two executive orders from 2007 and 2009. The Office of Management and Budget is charged with reviewing and approving those green IT plans and rating agencies' success in achieving them on standardized score cards.

Most agency plans don't include baseline information, so it's often not clear what, say, a 10 percent reduction in server space will mean for an agency's overall server count, GAO said. Moreover, agencies often don't identify the environmental benefit of their green IT targets, so it's not clear what effect the planned action will actually have on an agency's carbon footprint. "For example, [the Agriculture Department] had a goal to reduce the number of its data centers by 5 percent during fiscal 2010," GAO said. "However, it is unclear whether or by how much meeting this 5 percent reduction goal was expected to result in energy or dollar savings or other benefits."

Establishing energy use baselines is difficult, agencies told GAO, because many government properties don't have meters capable of differentiating electricity use between IT and non-IT enterprises. A separate GAO report on agencies' attempts to meet an OMB initiative to cut roughly one-third of the government's 2,100 data centers by 2015 found that only a handful of small agencies had reliable metrics on their data centers' energy use. GAO acknowledged that collecting baseline information would be difficult, but said it could be extremely important to measuring agencies' progress.

Carbon emissions from information technology account for about 2 percent of all emissions worldwide. The government spends about $80 billion on information technology and purchases or leases about 1 million computers annually, according to the GAO report. The Federal Electronic Challenge, a partnership of federal agencies, estimates the government saved 500,000 megawatts of power and $48 million through green IT initiatives in 2009, the latest figure for which estimates are available.

Typical green IT initiatives include shutting down or consolidating inefficient data centers, transferring more of an agency's IT enterprise to more efficient cloud computing, cutting down on printing, and donating old government computers to be retooled for others' use rather than throwing them away.
Source: http://www.nextgov.com/technology-news/2011/08/green-it-initiatives-need-stricter-standards-watchdog-says/49545/
In this second installment of the Internet of Things, we are going to focus on the IoT architecture. The IoT architecture consists of four main layers – devices, networks and gateways, management, and application. Let's take a look at each of these layers in more detail.

Devices layer: This layer is comprised of sensors, actuators, RFID tags and so on. Sensors are used to detect and transmit data regarding location, movement, temperature, humidity and more. Sensors are typically powered by batteries that can last for months or even years. Ideally, sensors will have unique IPv6 addresses for identification and communication. IPv6 uses a 128-bit address schema and can thus accommodate 2^128 (roughly 340 trillion trillion trillion) devices! RFID (Radio Frequency IDentification) tags are sensors that are commonly used to track objects in logistics and transportation. RFID chips are typically passive – they don't have batteries but draw power from the RFID reader using built-in antennas. NFC is a variant of RFID where the devices have to be very close to each other, on the order of a few centimeters. If you have ever used Apple Pay or Google Wallet, you have used NFC.

Networks and gateways: IoT often requires unique networks and protocols in order to deal with devices that have limited power and need to wirelessly transmit data over a very short range. In a typical IoT environment, there is a WPAN (Wireless Personal Area Network) that connects the IoT devices. Apart from Wi-Fi or cellular networks, this layer can also use protocols such as Bluetooth Low Energy (BLE), Zigbee or 6LoWPAN; the latter two build on the IEEE 802.15.4 specification. BLE, also called Bluetooth Smart, uses less than half the energy of the standard Bluetooth protocol. Zigbee is better than Bluetooth in situations where hundreds or thousands of devices have to be managed in a WPAN, and it can drastically extend battery life by using sleep mode in devices. 6LoWPAN, which stands for IPv6 over Low-power Wireless Personal Area Networks, is a newer technology that allows individual devices to have IPv6 addresses and thus enables the devices to communicate directly with the internet. IoT vendors use many other protocols, including proprietary ones, which is a huge challenge in operation and management. As the IoT industry matures, these problems will go away because of standardization and better interoperability. Gateways sit between the internet and the WPAN. They aggregate and filter the data from various devices and send it to the cloud, and they protect the IoT devices from intruders. Star, P2P, mesh and cluster tree are some of the network topologies used to connect the devices; the appropriate topology is chosen based on the communication protocol and the overall architecture.

Management layer: This layer includes the cloud storage platform to store data and software tools to manage, monitor and secure the IoT devices and network. The cloud platform may need to be flexible enough to handle not only HTTP but also other protocols such as MQTT and CoAP – lightweight, open-standard protocols for small devices. Other functions of this layer can include billing, data mining, in-memory analytics, predictive analytics, access control, encryption, business rules management (BRM), and business process management (BPM).
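To make the lightweight-protocol point concrete, here is a minimal sketch of a device publishing a reading over MQTT using the Python paho-mqtt client (1.x-style API). The broker address, topic and payload fields are placeholders, not anything prescribed by this post.

```python
# Minimal MQTT publish from a hypothetical sensor, using the
# paho-mqtt client library (1.x-style constructor; paho-mqtt 2.x
# additionally requires a CallbackAPIVersion argument).
# Broker, topic and payload fields are placeholders.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.com", 1883)   # hypothetical broker
reading = {"sensor_id": "temp-42", "celsius": 21.5}
client.publish("home/livingroom/temperature", json.dumps(reading), qos=1)
client.disconnect()
```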
Applications layer: The applications provide the interface and tools to the end user, whose needs may vary widely. The application may be a website where a doctor logs in to check the status of elderly patients who are monitored by wearable devices; it may be a smartphone app that someone uses to turn on the thermostat at home; or it may be a SaaS application that an insurance company uses to run queries on the driving habits of customers with connected cars. There are many companies, such as Xively and ThingWorx, that offer IoT platforms to enable quick development of applications.

IoT has already started to make an impact in almost every aspect of the economy – consumer electronics, home appliances, healthcare, retail, logistics, transportation, surveillance, industrial control, agriculture, and the environment are just a few examples where smart devices are bringing about a consequential paradigm shift. All these extraordinary changes also mean incredible opportunities for technology companies, whether they make low-cost devices, provide massively scalable cloud platforms, or create IoT applications and platforms. Let EMC help position your company to successfully ride the wave of IoT by visiting www.emc.com/getecs.
Source: http://emergingtechblog.emc.com/breakfast-with-ecs-the-internet-of-things-iot-part-2-disrupt-or-be-disrupted/
But those digital images could be tampered with, claimed critics, unless they contained a digital watermark proving their authenticity. And until now, embedding a digital watermark has meant a loss of image quality when enlarging an image. This week, however, researchers at the University of Rochester and Xerox Corp announced that they have discovered a way to embed information in digital images without distorting the original document.

Current digital watermarking techniques irreversibly change the image, resulting in distortions or information loss. "While these distortions are often imperceptible or tolerable in normal applications," says Gaurav Sharma, one of the Xerox scientists who developed the new approach, "if the image is enlarged, enhanced, or processed using a computer, the information loss can be unacceptable."

"The greatest benefit of this technology is in determining if anyone has clandestinely altered an image," explained Murat Tekalp, a University of Rochester electrical engineering professor who also worked on the research. "These days many commercial software systems can be used to manipulate digital images. By encoding data in this way we can be sure the image has not been tampered with, and then remove the data within it without harming the quality of the picture."

While it will probably take some time before the technology is translated into commercial products, it will likely be useful not just for law enforcement, but for a broad range of commercial applications, according to Matt Jackson, a professor of communications at Pennsylvania State University. "Digital watermarking is one of the key components to making content available online," Jackson says. "By providing a way to track down and prove copyright infringements, it gives people confidence that they can control the distribution of their product."

In addition, recent legislation, such as the 1998 Digital Millennium Copyright Act (DMCA), as well as international copyright treaties, are beginning to require that anyone distributing a copyrighted work keep the copyright owner's information with the work. "Digital watermarking is one of the best ways to keep that information with the work," says Jackson. "By facilitating that, this technology could help make more content available online."
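To see why conventional watermarking is irreversible (the problem the Rochester/Xerox work addresses), consider the classic least-significant-bit scheme sketched below in Python/NumPy. This is a generic textbook technique, not the researchers' method: the payload overwrites each pixel's low bit, and the original bits are gone for good.

```python
import numpy as np

def embed_lsb(image, payload_bits):
    """Classic (irreversible) watermark: overwrite pixel LSBs.

    The original least-significant bits are destroyed, which is
    exactly the information loss a reversible scheme avoids.
    """
    flat = image.flatten()                       # copy; input untouched
    n = len(payload_bits)
    flat[:n] = (flat[:n] & 0xFE) | payload_bits  # clear LSB, then set it
    return flat.reshape(image.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
mark = np.array([1, 0, 1, 1], dtype=np.uint8)
print(embed_lsb(img, mark)[0])  # first row now carries the payload bits
```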
Source: http://www.cioupdate.com/news/article.php/1469691/New-Technique-Promises-Better-Digital-Watermarks.htm
Why are some remote administration programs called "backdoors"?

As you know, one of the most frequently occurring kinds of malicious program is the "Trojan horse." Just like their ancient namesake, "Trojan horses" intrude into PCs under the disguise of a harmless program, attracting users with unique functionality. Until recently, few people could resist opening a file promising to considerably improve processor capacity without any additional expense for equipment modernisation. There is no need to point out that more often than not, "Trojan horses" were hidden under the guise of such programs, and rather than providing "too good to be true" benefits, acted maliciously. However, the situation has now completely changed, since even a novice user is unlikely to be deceived: everyone has heard about the phenomenon of "Trojan horses."

The spectrum of effects from "Trojan horses" is extremely wide, and their classification may seem to some people as complicated as the periodic table of the elements. The most frequently occurring and most dangerous of these malicious programs belong to a group of utilities that enable unauthorised remote administration, so-called backdoors. As described above, a backdoor intrudes into a PC and imperceptibly opens it to remote administration. This creates the opportunity for a third party to fully control an infected computer: to create, copy, read or delete any files or directories; to track a user's work; to act illegally on the user's behalf; to control bank accounts and so on. The spectrum of what can be achieved is limited only by the imagination of the "Trojan horse" writer.

It is possible to detect an installed backdoor by using an anti-virus scanner or by installing a firewall that controls the use of the computer's ports. The main problem is how to determine whether a remote administration program is legitimate or whether it is a backdoor. What is the difference between the infamous "Trojan horse" "Back Orifice" (BO) and the well-known utility "pcAnywhere"? At first glance, both programs appear to use the same principles to provide remote administration. Why then do anti-virus utilities define only one of them as a malicious program? The answer is simple: it is not the functionality that is the determining factor, but rather the installation order and how visible and obvious the program's presence is in the system.

Let's consider the problem from this point of view. The installation of a full-function utility for remote administration is accompanied by several interactive windows, a license agreement, and a graphic accompaniment to the process. A backdoor, however, installs itself quietly and invisibly. After the installation file starts, no message appears on the screen that would directly inform the user of the installation. On the contrary, signs designed to confuse are often displayed to distract the user's attention. While working on an infected PC, a backdoor gives no sign of its presence. It is invisible in the taskbar, in the system tray and, in many cases, even in the active process list. This means remote access can be gained and actions performed on the computer that remain absolutely imperceptible to the user. Legitimate remote administration tools always provide some signals that inform the user of their activity, either in the system tray or in the taskbar, and they are practically always visible in the active process list or among services.
Lastly, any full product has an Uninstall option, located in the program tree, which may be used at any time. Backdoors, on the other hand, may be deleted only by an anti-virus utility or by "surgical intervention", that is, manually searching for and deleting them.

For the reasons described above, some utilities claiming to be full commercial products are classified as backdoors. The position of Kaspersky Lab Int. is clear: although these programs may be used for authorised remote administration, the user has to be informed about the presence of such utilities on his or her computer. If the user were aware of the program's presence, the message highlighting its detection would hardly confuse him or her. However, if a backdoor has been installed illegally, neglect on the part of the anti-virus program can only be considered as disregarding user security.
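One coarse, do-it-yourself version of the port-monitoring idea mentioned above is simply to enumerate which processes are listening for inbound connections. The Python sketch below uses the cross-platform psutil library; it is a generic illustration of the principle, not Kaspersky tooling, and a backdoor with real stealth mechanisms could of course hide from it.

```python
# Enumerate listening TCP ports and their owning processes -- a crude
# first pass at spotting an unexpected remote-administration service.
# Generic illustration only; may need elevated privileges on some OSes.
import psutil

for conn in psutil.net_connections(kind="tcp"):
    if conn.status == psutil.CONN_LISTEN:
        name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        print(f"port {conn.laddr.port:5d}  pid {conn.pid}  {name}")
```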
<urn:uuid:3c014989-dfb7-4fda-a660-4d584b6da469>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2000/A_Backdoor_or_not_a_Backdoor_
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00409-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964616
880
2.90625
3
5.1.5 What is SSH?

SSH, or Secure Shell, is a protocol which permits secure remote access over a network from one computer to another. SSH negotiates and establishes an encrypted connection between an SSH client and an SSH server, authenticating the client and server in any of a variety of ways (some of the possibilities for authentication are RSA, SecurID, and passwords). That connection can then be used for a variety of purposes, such as creating a secure remote login on the server (effectively replacing commands such as telnet, rlogin, and rsh) or setting up a VPN (Virtual Private Network).

When used for creating secure logins, SSH can be configured to forward X11 connections automatically over the encrypted "tunnel" so as to give the remote user secure access to the SSH server within a full-featured windowing environment. SSH connections and their X11 forwarding can be cascaded to give an authenticated user convenient secure windowed access to a complete network of hosts. Other TCP/IP connections can also be tunneled through SSH to the server so that the remote user can have secure access to mail, the web, file sharing, FTP, and other services.

The SSH protocol is currently being standardized in the IETF's SECSH working group: http://www.ietf.org/html.charters/secsh-charter.html. More information about SSH, including how to obtain commercial implementations, is available from
- SSH Communications Security (http://www.ssh.fi).
- Data Fellows (http://www.datafellows.com).
- Van Dyke Technologies (http://www.vandyke.com).
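To make the secure-remote-login use concrete, here is a minimal sketch using the third-party Python library paramiko (an SSH-2 implementation); the host name, user name and key path are placeholder assumptions for illustration:

import os
import paramiko

# Establish an encrypted, authenticated session and run one command -
# the programmatic equivalent of replacing rsh/telnet with ssh.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only; verify host keys in practice
client.connect(
    "server.example.com",
    username="alice",
    key_filename=os.path.expanduser("~/.ssh/id_rsa"),
)
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())
client.close()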
<urn:uuid:6f419f81-05ed-429c-96dc-95cdc1ef07a9>
CC-MAIN-2017-04
https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/ssh.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00317-ip-10-171-10-70.ec2.internal.warc.gz
en
0.887071
349
3.796875
4
Definition: A spatial access method which divides space into a hierarchy of nested boxes. Objects are indexed in the lowest cell which completely contains them.

See also BANG file.

Note: After [GG98].

Entry modified 17 December 2004.

Cite this as: Paul E. Black, "R-file", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/rfile.html
<urn:uuid:dbdd25e0-84fc-4919-9b34-0681275583b0>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/rfile.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00253-ip-10-171-10-70.ec2.internal.warc.gz
en
0.844439
159
2.640625
3
Intel claims that moving to a 0.13-micron chip core means it can get nearly twice the number of chips from a 200-mm wafer than it could with the 0.18-micron core. The 0.13-micron manufacturing process allows Intel to pack components more tightly on a chip, which can boost speed and lower cost, as well as reduce heat and power consumption, according to Intel. Some of the cost savings produced by the new manufacturing processes will be passed on to PC makers, said Intel.

The new P4 processor will be Intel's first based on the company's Northwood core, which features 512KBytes of Level 2 cache, compared to current 0.18-micron Pentium 4 processors that offer only 256KBytes of cache. "Larger cache does provide an important performance benefit," said Nathan Brookwood, principal analyst at Insight 64. "It's more than the increase you would get going from 2GHz to 2.2GHz [with the smaller cache]." Chips using the Northwood core will also run at 1.5 volts, compared to 1.75 volts for current P4 processors. Lowering the voltage lets the chips run cooler.

In the first half of 2002 Intel plans to improve the efficiency of its Pentium 4 chip-making process with a transition to 300-mm wafers. According to Intel, a 300-mm wafer will yield almost three times the number of Pentium 4 chips as the current 200-mm wafer.
<urn:uuid:dbe749aa-5a75-4723-8aca-cd9a1b312d33>
CC-MAIN-2017-04
http://www.computerweekly.com/news/2240043709/New-process-heralds-Intel-3GHz-P4-chips
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00161-ip-10-171-10-70.ec2.internal.warc.gz
en
0.922698
335
2.765625
3
New research suggests that 27 percent of teenagers are cyberbullies and 6 percent harass or bully others online frequently. The authors conclude that innovative approaches are needed to reduce the occurrence of Internet harassment and cyberbullying.

The research, conducted by the University of New Hampshire's Crimes against Children Research Center, was published in the Journal of Adolescent Health. The conclusions were based on interviews with 1,500 young people ages 10 to 17 across the nation. The study found that many youth involved in cyberbullying have poor relationships with their parents. As such, efforts aimed directly at teenagers are needed. Dr. Michele Ybarra, the principal author of the study and a top researcher in the field, explains, "Youth who harass others online are twice as likely to have conflict with their parents. It's important to involve parents in Internet safety efforts, but it's important also to engage teenagers."

Ybarra has been involved in research about Internet harassment since its emergence in 2000. As a researcher, she examines the data clinically and dispassionately. Nonetheless, she says, she was unable to ignore the pain, visible through the statistics, that teens involved in Internet harassment are experiencing. "For some young people," Ybarra explains, "Internet harassment and cyberbullying can be a very disturbing experience."

What has resulted is Cyberbully411.org, a Web site developed by Ybarra for teens involved in Internet harassment. This site provides a research-informed roadmap to thwarting teen cyberbullying. "While there are many sites out there that talk about cyberbullying, none of them speak to teens. To truly make an impact on this teen-perpetrated victimization, we decided to design a Web site that is fresh and exciting for teens, but also accurate and research based," Ybarra describes.

Explosive growth of Internet use among young people has been mirrored by increasing awareness of its potential positive and negative impact on them. Recently, public concern has focused on accounts of children and teenagers being sexually solicited and harassed on social networking sites. Some politicians and lawmakers are advocating measures to restrict children and teenagers' access to these sites as a means of preventing sexual exploitation of young Internet users. "What is more important than restricting sites," Ybarra urges, "is for parents to be involved in their children's lives. Do you know where your child goes and who your child is with when they're online?"
<urn:uuid:6c0f4268-2ab6-4d31-a07c-05151b082f55>
CC-MAIN-2017-04
http://www.govtech.com/security/One-in-Four-Teens-Admit-to.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00125-ip-10-171-10-70.ec2.internal.warc.gz
en
0.976034
496
3.53125
4
In our last post we learnt about the Discrete Logarithm problem, why it is a difficult problem and how we can attempt to solve it if the numbers are manageable. Of course, in a real setting we wouldn't use 16 bit numbers as in my example, but at least 1024 bit numbers nowadays (and most likely even bigger numbers). Now, we are going to see how to make use of that problem to create a public key cryptosystem. We will look at how ElGamal uses the DL problem to provide public key encryption and digital signatures. Keep on reading if you are interested!

So we have our friends Alice and Bob wanting to communicate securely. To that end, they first agree on the public settings of the ElGamal cryptosystem. They need a finite cyclic group G to work on (such as Z_p^*, the multiplicative group of integers modulo a prime p) and a generator for that group, g. Of course, the group G must be a group where computing discrete logarithms is infeasible. Otherwise the system will not work.

With these numbers, Alice and Bob first generate their respective key pairs. First, they each generate a random element in G, which will serve as a private key: xA and xB respectively. Now, they compute the corresponding public keys as follows: hA = g^xA and hB = g^xB. And now they can publish their public keys, hA and hB, without any fear. Thanks to the difficulty of solving the discrete logarithm in G, their respective private keys remain safe even though everyone knows how they were generated.

So, now our friend Alice wants to send some message m to Bob. This message is represented as an element in the group G. First, she grabs Bob's public key. Then, she generates a random number r in the same group G. With that number and Bob's public key, she computes the following cryptogram: (R, S) = (g^r, m * hB^r). It is needless to say that these operations always take place in the group G. Now, when Bob receives this message he can compute m like this: m = S/(R^xB).

This is good news: at least Bob can recover the message knowing xB. But that doesn't mean that the message will be safe from everyone else. However, since the DL problem is difficult, it turns out that recovering r from R is difficult. Therefore, it is not easy to compute hB^r from the cryptogram and then recover m. It is also difficult to compute Bob's private key from his public key, which would be another way to recover the message. And also, since r was random, R is randomized as well as S. Thus, an attacker has no information on the structure of the message and the system seems secure under the assumption that the DL problem is hard.

Example: ElGamal encryption

Let's continue with our previous example. We take again the same group, Z_17627, and its generator, 6. Alice and Bob compute their respective private and public keys:

sage: G = IntegerModRing(17627)
sage: g = G(6)
sage: xA = G.random_element()
sage: xB = G.random_element()
sage: hA = g^xA
sage: hB = g^xB

So now everyone knows the public keys of Alice (11094) and Bob (1593). Now let's imagine that Alice wants to send the message m (1337) to Bob. She has to create a new random number and compute the cryptogram:

sage: m = G(1337)
sage: r = G.random_element()
sage: R = g^r
sage: S = m * hB^r

Alright, now Alice sends this pair of numbers to Bob and he receives it and tries to decrypt them:

sage: mp = S/(R^xB)

Great, it works! However, note that this is not secure against chosen ciphertext attacks and the cryptogram is easily modifiable. For instance, one could modify the decrypted message by modifying only the S part of the cryptogram:

sage: Sp = 3*S
sage: mp = Sp/(R^xB)

Here an attacker has intercepted the message and modified S to be 3S. This results in the decrypted message being 3m instead of m.
However, this kind of property becomes very useful in multiparty computations such as electronic voting schemes.

ElGamal signature scheme

Now we know how to encrypt and decrypt messages using ElGamal. The next step is to see how ElGamal approaches digital signatures. The steps for generating the key pair are the same, i.e. each participant generates a random number x as their private key and then computes h = g^x as their public key.

Now, given a message m, Alice will first generate a cryptographic hash H(m). Then, she picks again a random number r (invertible modulo the order of the group) and computes the following things: R = g^r and S = (H(m) - xA*R)/r, where the second computation is performed modulo the order of the group. Note that now we used the private key for the generation of the signature. Otherwise, we would not be able to prove that the message is linked to Alice, since everyone knows the public key. If S turns out to be 0, Alice has to pick a new random number and compute the signature again.

The verification of the signature is performed by Bob as follows. Bob first computes the hash H(m) and then performs the following two calculations: g^H(m) and hA^R * R^S. Due to the way in which the values R and S have been computed, the two results should be the same if the signature and the message have not been modified: hA^R * R^S = g^(xA*R) * g^(r*S) = g^(xA*R + H(m) - xA*R) = g^H(m).

So this tells us that the system is correct, and again in order to forge a signature one would need to either find collisions in the function H (see my post on hash functions) or solve a discrete logarithm. Both problems are believed to be hard. Note that the hash collision must occur over the group G, so that H(m) = H(m') modulo the order of the group.

Once again, I refer the interested readers to the Handbook of Applied Cryptography for more extensive and accurate information on these topics. In this case, the ElGamal public key system is described in chapter 8, section 8.4.
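To complement the sage snippets above, here is a minimal Python sketch of the signature scheme just described, reusing the post's toy parameters p = 17627 and g = 6. The toy size, the use of SHA-256 as H, and the helper names are assumptions for illustration only; real deployments use much larger primes:

import hashlib
import random

p, g = 17627, 6                         # the post's toy group Z_p^*

x = random.randrange(2, p - 1)          # private key
h = pow(g, x, p)                        # public key h = g^x mod p

def H(message):
    # Hash reduced into the exponent group (mod p - 1).
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % (p - 1)

def sign(message):
    while True:
        r = random.randrange(2, p - 1)
        try:
            r_inv = pow(r, -1, p - 1)   # r must be invertible mod p - 1
        except ValueError:
            continue
        R = pow(g, r, p)
        S = (H(message) - x * R) * r_inv % (p - 1)
        if S != 0:                      # re-pick r if S is 0, as in the post
            return R, S

def verify(message, R, S):
    # g^H(m) should equal h^R * R^S (mod p).
    return pow(g, H(message), p) == (pow(h, R, p) * pow(R, S, p)) % p

R, S = sign(b"attack at dawn")
print(verify(b"attack at dawn", R, S))  # True
print(verify(b"attack at dusk", R, S))  # False (with overwhelming probability)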
<urn:uuid:cd2007e3-a38e-402d-8f6b-c8ea52d083ab>
CC-MAIN-2017-04
https://www.limited-entropy.com/elgamal/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00125-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932311
1,213
3.6875
4
Any files that start with a period on a Mac are considered hidden files in the Mac OS and are not visible from within the Finder. You can see these hidden files from within the Terminal utility by using the ls -a command, but that is not convenient when you wish to see all files on your computer through the Finder. This tutorial will describe how to make it so that all files on your Mac are visible from within the Finder.

The first thing you need to do is click on an empty portion of your desktop so that the Finder is selected. Once it is selected, click on the Go button and select Utilities as shown in the image below. The Utilities folder should now appear as shown in the image below. Scroll through the list of apps till you find the Terminal icon, as shown by the arrow in the image above, and double-click on it. The Terminal utility should now open and you will be shown a prompt and a rectangular cursor as shown in the screen below.

The Terminal utility allows you to enter commands by typing on your keyboard. In order to show hidden files within the Mac OS Finder, we need to first issue a command to enable the viewing of hidden files. To enter this command, simply type the following bold text on your keyboard

defaults write com.apple.Finder AppleShowAllFiles TRUE

and then press the Enter key on your keyboard. Once you have done this, your Terminal screen should look similar to the image below.

It is now necessary to restart the Finder so that these settings go into effect. To do this please type the following bold text on your keyboard

killall Finder

and then press the Enter key on your keyboard. Please note that you must capitalize the F in Finder or the command will not work. Once you have done this, your Terminal screen should now look similar to the image below. After typing the killall Finder command, you should have seen the Mac desktop go away and then start again. This was the Finder restarting and enabling the new settings. You can now close the Terminal utility. You will now be able to see all files, including hidden ones, when using the Finder on your Mac.

In the future if you want to turn this setting off so that hidden files are not visible in the Finder, you can follow the same steps, but for the first command you should instead type the following bold text on your keyboard

defaults write com.apple.Finder AppleShowAllFiles FALSE

and then press the Enter key on your keyboard. You will then need to issue the killall Finder command to restart the Finder. If you have any questions about this process please feel free to post them in our Mac OS Forum.
<urn:uuid:02155dde-dbc3-4898-af93-870bb38151c7>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/tutorials/how-to-see-hidden-files-in-mac-os/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00243-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918204
925
2.703125
3
Many consumers have heard about Bitcoin, but they don't necessarily know anything about it: not the full spectrum of benefits, and definitely not the risks they can expose themselves to by using it. And when it comes to lesser-known digital currencies, they often don't even know their names, let alone anything else.

The US Consumer Financial Protection Bureau (CFPB) has finally decided to step in and publish an advisory explaining some of the things mentioned above. Naturally, they concentrated on spelling out the risks: hacking, fewer legal protections, the cost, and scams that abound in the still nascent digital currency market. "While virtual currencies offer the potential for innovation, a lot of big issues have yet to be resolved - some of which are critical," the organization noted.

One of the things they warned about is the fact that virtual currencies are not issued or backed by any government or central bank, and that no one is required to accept them as payment or to exchange them for traditional currencies. "If something goes wrong with your purchase of virtual currencies, do you know how to contact the seller? Some virtual currency exchanges do not identify their owners, their phone numbers and addresses, or even the countries where they are located," the bureau warns. "Ask yourself: In any other business transaction, would you trust these people with your money?"

They also pointed out that consumers should be aware of the costs tied to the use of digital currencies (exchange rates, transaction fees, etc.), as well as of the fact that their price is subject to dramatic fluctuations.

For those believing that Bitcoin transactions are and always will be anonymous, the bureau has bad news: "Information about each and every Bitcoin transaction is publicly shared and stored forever. Persistent, motivated people will likely be able to link your transactions to, among other things, your other transactions and public keys, as well as to your computer's IP address."

Digital currencies are kept in digital wallets, and these wallets are secured with a private key that users should keep secret from everyone. Unfortunately, if this key gets compromised (via hacking) or lost, users can lose all of their funds - and have no legal way of getting them back. "Read your agreement with your wallet provider carefully," they advise. "If you have linked your bank account or payment card to your digital wallet, they may also be at risk."

For additional information and insightful questions that users should know the answers to before embarking on the digital currency train, check out the helpful advisory.
<urn:uuid:df190cf5-2225-4142-87a2-facddf68be04>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2014/08/12/what-are-the-risks-of-virtual-currency-use/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00547-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960172
536
2.5625
3
A researcher is warning of a newly discovered attack vector affecting TLS which could let hackers uncover the length of supposedly secret data such as passwords, making them easier to crack.

The HTTPS Bicycle attack is explained by Guido Vranken in a research paper here. Although it is completely undetectable by the user, the real-world impact may be minimal, as there are several prerequisite conditions that may be hard to meet. Specifically, it requires a packet capture of HTTPS traffic from a victim's browser to a specific site - via Man-in-the-Middle - and the TLS traffic must use a "stream-oriented cipher," a particular type of encryption. What's more, it can only reveal the length of unknown data if the rest of the data is known.

Websense security researcher Nicholas Griffin explains here how an attack targeting a victim's password would work: "All a user needs to do is have a packet capture of requests to a known site, including an authentication (login) request containing an already known username and an unknown plain-text password. If an attacker can determine the user's browser and how that browser would send requests to the site, they can subtract the length of all the known data the browser would send except for the piece of information they are interested in, which will result in them knowing the length of the unknown data."

Once the length of a target's password has been ascertained, in theory it should be easier to crack. If a password is eight characters long and an attacker is able to send 10 log-in requests to the website in question every second, it could be cracked in 5.5 hours, Griffin estimated. Although the plausibility of carrying out such an attack in the real world has yet to be tested, it should be another reason to ensure any passwords contain numbers and letters and at least eight characters, he added.

Websense principal security analyst Carl Leonard added that webmasters must do their bit too - for example, by offering two-factor authentication. "End users must ensure their passwords are sufficiently strong, while website operators and web platform developers must ensure they are fully up to date to guarantee all steps are taken to prevent this attack from occurring in the future," he argued.

HTTPS Bicycle doesn't just work against passwords, of course; it could theoretically be used to uncover the length of other secrets, such as GPS co-ordinates or IP addresses.
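The underlying arithmetic is plain subtraction. Here is an illustrative sketch in Python; every byte count below is an invented assumption for the example, not a measurement from Vranken's paper:

# With a stream-oriented cipher, ciphertext length tracks plaintext
# length, so the secret's length falls out by subtraction.
observed_record = 493   # captured TLS record carrying the login request (assumed)
tls_overhead = 25       # fixed per-record MAC/header overhead (assumed)
known_bytes = 460       # headers, URL and username the attacker can reconstruct (assumed)

password_length = observed_record - tls_overhead - known_bytes
print(password_length)  # -> 8, matching the article's eight-character example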
<urn:uuid:a495d9f1-2f00-4aac-a7ac-1c05c62d5e99>
CC-MAIN-2017-04
https://www.infosecurity-magazine.com/news/https-bicycle-attack-reason-shun/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00455-ip-10-171-10-70.ec2.internal.warc.gz
en
0.945576
507
2.8125
3
A global epidemic of the network worm "Nimda" has been reported.

Kaspersky Lab, an international data-security software developer, reports that on September 18th an outbreak of the network worm "Nimda" was detected. We have received more than 500 reports from around the world regarding incidents of infection in connection with this malicious program. "Nimda" ("Admin" backwards) poses a serious threat to companies and individual users alike. The worm opens all disks installed on an infected computer for full access. In this way, anyone who wishes may delete, change, copy, or view any document on the infected computer. This could cause the disclosure, loss, and unauthorized modification of confidential information.

"Nimda" penetrates a computer in several different ways.

First, via e-mail: an infected e-mail in HTML format, containing several embedded objects, enters a target computer. Upon viewing the e-mail, one of the objects automatically starts up unbeknownst to the user. In order to accomplish this, the worm exploits a breach in Internet Explorer security that was first detected in March of this year.

Second, while surfing infected Web sites: in place of the original Web site, a user is shown a modified version containing a malicious Java program, which downloads and starts the "Nimda" copy on a remote computer, using the aforementioned breach.

Third, via the local network: the worm scans all accessible network resources, dropping thousands of its copies there, with the idea that upon finding the file on a disk or server, a user will start it and infect his/her own computer.

In addition to penetrating workstations, "Nimda" also carries out an attack on Web servers running under Microsoft Internet Information Server (IIS). The method for infecting IIS servers is identical to "BlueCode." The malicious program gains access to the hard disk of a remote server, downloads its file there from a previously infected computer, and then starts it. In order to accomplish this, it exploits a breach in IIS called "Web Server Folder Traversal," as described in the corresponding Microsoft announcement.

"The reason for the heavy 'Nimda' outbreak is the non-standard means of penetrating a computer. Instead of the 'traditional' attached file, the worm takes advantage of a system-security breach. It is generally known that most users neglect the advice of installing the 'patch'; therefore, the level of infection resulting from 'Nimda' could surpass that of the recent infamous 'SirCam' worm," commented Eugene Kaspersky, Head of Anti-virus Research at Kaspersky Lab.

In order to thwart "Nimda," it is necessary to download and install the latest Kaspersky Anti-Virus update. The corresponding update was released on September 18 at 4:30 p.m. GMT (11:30 a.m. New York time). Also, we urge the immediate installation of the Internet Explorer and IIS patches that block the aforementioned breaches. These patches not only repel "Nimda" attacks, but also those of similar worms that could appear in the future.

"Without taking these protective measures, coupled with the level of the epidemic, we would recommend users either be extremely cautious or temporarily hold off using e-mail or the Internet altogether," summed up Eugene Kaspersky.

More detailed information about the "Nimda" worm can be found in the Kaspersky Virus Encyclopedia.
<urn:uuid:998f6a0e-8fef-46e4-a4fc-1c98824290a6>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2001/Kaspersky_Lab_Warns_Not_to_Use_the_Internet_or_E_Mail_without_the_Patch
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00391-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918221
725
2.546875
3
In the first quarter of 2012 alone, six million new malware samples were created, following the trend of increasingly prevalent malware seen in previous years, according to PandaLabs.

Trojans set a new record as cybercriminals' preferred category for carrying out information theft, representing 80 percent of all new malware. In 2011, Trojans 'only' accounted for 73 percent of all malware; worms took second place, comprising 9.30 percent of samples, followed by viruses at 6.43 percent. Interestingly, in 2012 worms and viruses swapped positions from the 2011 Annual Report, where viruses stood at 14.25 percent and worms at 8 percent of all circulating malware.

When it comes to the number of infections caused by each malware category, the ranking matches the hierarchy of new samples in circulation, with Trojans, worms and viruses occupying the top three spots. Interestingly, worms caused only 8 percent of all infections despite accounting for more than 9 percent of all new malware. This is quite noteworthy, as worms typically caused many more infections due to their ability to propagate in an automated fashion. The figures corroborate what is well known: massive worm epidemics have become a thing of the past and have been replaced by an increasing avalanche of silent Trojans, cyber-criminals' weapon of choice for their attacks.

The average proportion of infected PCs across the globe stands at 35.51 percent, down more than three percentage points compared to 2011, according to Panda Security's Collective Intelligence data. China once again led this ranking (54.25 percent of infected PCs), followed by Taiwan and Turkey. The list of least infected countries is dominated by European countries, with nine of the first ten places being occupied by them, the top three being Sweden, Switzerland and Norway. Japan is the only non-European country among the top ten nations with fewer than 30 percent of computers infected.
<urn:uuid:982d752d-1afa-4ea2-888e-c14447dd6597>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2012/05/07/ransomware-increases-in-prevalence-as-cyber-criminal-tactic/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00023-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942773
386
2.5625
3
MD Anderson Cancer Center researchers are relying on a powerful supercomputer to develop a dosing protocol for an MRI-guided radiation therapy for cancer care, called MRI-linac. The Lonestar system at the Texas Advanced Computing Center (TACC) is helping researchers fine-tune the radiation dosing mechanism so that just the right amount of radiation is delivered to the tumor, maximizing the sparing of healthy surrounding tissues.

The Elekta and Philips Research Consortium on MRI-Guided Radiation Therapy is advancing the MRI-guided linear accelerator (linac) to address the limitations of traditional imaging methods based on computed tomography (CT) scans. In cases where the tumor is constantly moving, for example in concert with the patient's breathing as would be likely to occur with lung cancer, CT scans don't provide the necessary real-time component. By combining radiation therapy with magnetic resonance imaging (MRI), MRI-linac enables physicians to view the tumor in real time and in high detail during the radiation treatment. The ability to deliver radiotherapy in such a precise way constitutes a major breakthrough in cancer care.

A group at the MD Anderson Cancer Center, a member of the research consortium, is tracking how much radiation is being delivered through the MRI-linac, part of a discipline known as dosimetry. By carrying out simulations on TACC's Lonestar, the researchers are able to model radiation in a magnetic field, helping to establish this safer, more effective treatment.

"Precise knowledge of dose is critical to effective radiation treatment," said Michelle Mathis, a medical physics researcher for the MD Anderson dosimetry team, in an article on the TACC website. "Different tumors need different doses to be killed," Mathis added. "Our work focuses on gaining a better understanding of how to precisely calibrate the new MRI-linac system so that the appropriate amount of radiation is delivered to the cancer tumor while healthy tissue is spared."

The MD Anderson team has so far run simulations comprising 250,000 computing hours on Lonestar. They are working to develop correction factors for 16 different ionization chambers. These chambers detect radiation, providing feedback so that dose can be calibrated. The addition of MRI and the resultant magnetic field affects the way the chambers operate. The team's solution is to develop correction factors that enable the ionization chambers to be appropriately calibrated. "Using Lonestar, we are able to simulate the effect of many variables on the ionization chamber readings, which will allow us to precisely calculate radiation dose," Mathis said.

The MRI-linac system is still in development and is not yet available for sale. However, the research partners have completed work on some of the core components and installation of the first-generation test system is underway. The supercomputing simulations being performed at TACC are critical to ensuring the system works as intended. Feature coverage of this important innovation comes to us via TACC Science and Technology Writer Makeda Easter.
<urn:uuid:c35aef83-dadc-4c63-bb28-b323acf6a95c>
CC-MAIN-2017-04
https://www.hpcwire.com/2014/08/25/supercomputing-facilitates-breakthrough-cancer-treatment/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00023-ip-10-171-10-70.ec2.internal.warc.gz
en
0.922508
626
2.6875
3
Definition: An unordered collection of values where each value occurs at most once. A group of elements with three properties: (1) all elements belong to a universe, (2) either each element is a member of the set or it is not, and (3) the elements are unordered.

Formal Definition: As an abstract data type, a set has a single query function, isIn(v, S), which tells whether an element is a member of the set or not, and two modifier functions, add(v, S) and remove(v, S). These may be defined with axiomatic semantics, for example:
- isIn(v, add(v, S)) = true
- isIn(v, add(u, S)) = isIn(v, S) if v ≠ u
- isIn(v, remove(v, S)) = false
- isIn(v, remove(u, S)) = isIn(v, S) if v ≠ u
The predicate isEmpty(S) may be defined with additional axioms, such as isEmpty(∅) = true and isEmpty(add(v, S)) = false.

Generalization (I am a kind of ...): bag, abstract data type.

See also intersection, union, complement, difference, list, set cover.

Entry modified 2 September 2014.

Cite this as: Patrick Rodgers and Paul E. Black, "set", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 2 September 2014. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/set.html
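A minimal sketch of this abstract data type in Python (a hash-based illustration chosen for brevity; the dictionary entry itself prescribes no particular implementation):

class ADTSet:
    """Unordered collection where each value occurs at most once."""

    def __init__(self):
        self._items = {}          # dict keys give O(1) expected membership tests

    def add(self, v):
        self._items[v] = True     # re-adding an existing value changes nothing

    def remove(self, v):
        self._items.pop(v, None)  # removing an absent value changes nothing

    def is_in(self, v):
        return v in self._items

    def is_empty(self):
        return not self._items

s = ADTSet()
s.add(3); s.add(3); s.add(5)
print(s.is_in(3), s.is_empty())   # True False
s.remove(3)
print(s.is_in(3))                 # False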
<urn:uuid:5aa307a1-4a58-4890-8882-4053ef174bb4>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/set.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00327-ip-10-171-10-70.ec2.internal.warc.gz
en
0.869181
299
3.34375
3
Electronic communication services and networks provide the backbone of the European economy. 93% of EU companies and 51% of Europeans actively used the internet in 2007. However, natural disasters, terrorist attacks, malicious human action and hardware failure can pose serious risks to Europe's critical information infrastructures. Recent large-scale attacks on Estonia, Lithuania and Georgia proved that essential electronic communication services and networks are under constant threat. Preparing Europe to act in case of major disruptions or attacks is the goal of a new strategy proposed today by the European Commission.

In 2007, after large-scale cyber attacks, the Estonian Parliament had to shut down its email system for 12 hours and two major Estonian banks had to stop their online services. There is a 10% to 20% probability that telecom networks will be hit by a major breakdown in the next 10 years, with a potential global economic cost of around €193 billion ($250 billion). This could be caused by natural disasters, hardware failures, or rupture of submarine cables (there were 50 incidents recorded in the Atlantic Ocean in 2007 alone), as well as by human actions such as terrorism or cyber attacks, which are becoming more and more sophisticated.

Smooth functioning of communications infrastructures is vital for the European economy and society. Communications networks also underpin most of our activities in daily life. Purchases and sales over electronic networks amounted to 11% of the total turnover of EU companies in 2007. 77% of businesses accessed banking services via the internet and 65% of companies used online public services. In 2008, the number of mobile phone lines was equivalent to 119% of the EU population. Communications infrastructure also underpins the functioning of key areas from energy distribution and water supply to transport, finance and other critical services.

The Commission today called for action to protect these critical information infrastructures by making the EU more prepared for and resistant to cyber attacks and disruptions. At the moment Member States' approaches and capacities differ widely. A low level of preparedness in one country can make others more vulnerable, while a lack of coordination reduces the effectiveness of countermeasures.

Viviane Reding, Commissioner for Information Society and Media, said: The Information Society brings us countless new opportunities and it is our duty to ensure that it develops on a solid and sustainable base. Europe must be at the forefront in engaging citizens, businesses and public administrations to tackle the challenges of improving the security and resilience of Europe's critical information infrastructures. There must be no weak links in Europe's cyber security.

The European Commission wants all stakeholders, in particular businesses, public administrations and citizens, to focus on the following issues:

- Preparedness and prevention: fostering cooperation, exchange of information and transfer of good policy practices between Member States via a European Forum, and establishing a European Public-Private Partnership for Resilience, which will help businesses to share experience and information with public authorities. Both public and private actors should work together to ensure that adequate and consistent levels of preventive, detection, emergency and recovery measures are in place in all Member States.
- Detection and response: supporting the development of a European information sharing and alert system.
- Mitigation and recovery: stimulating stronger cooperation between Member States via national and multinational contingency plans and regular exercises for large-scale network security incident response and disaster recovery.
- International cooperation: driving a Europe-wide debate to set EU priorities for the long-term resilience and stability of the Internet, with a view to proposing principles and guidelines to be promoted internationally.
- Establishing criteria for European critical infrastructure in the ICT sector: the criteria and approaches currently vary across Member States.

The Commission today invited the European Network and Information Security Agency (ENISA) to support this initiative by fostering a dialogue between all actors and the cooperation necessary at the European level. Mr. Andrea Pirotti, Executive Director of ENISA, today confirmed that the Agency can support the Commission's initiative by strengthening its resources. Commenting on the communication, Mr Pirotti clarified: ENISA is ready to pick up the gavel and support the European Commission in its efforts to address these crucial matters. The Agency is willing to do everything within its mandate to support all necessary actions of the EU and its Member States to combat these threats and to protect the economy of Europe, which, ultimately, may be at stake.
<urn:uuid:ade27e75-fce6-4bc0-8296-6585a6276fcc>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2009/03/31/commission-acts-to-protect-europe-from-cyber-attacks-and-disruptions/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00353-ip-10-171-10-70.ec2.internal.warc.gz
en
0.939151
892
2.640625
3
Cybersecurity 101: Protect your home or personal network

Intrusion detection systems. Network firewalls. Behavioral analysis. Encryption. The toolkit of the modern information security professional is full of complex, advanced technical controls designed to protect enterprise networks against increasingly sophisticated attacks. How should home users protect themselves - without investing thousands of dollars in specialized security equipment - against cybercriminals who want to steal sensitive personal information?

Fortunately, there are simple and inexpensive steps that every home user can take to build a robust, layered defense that will protect them against most of the malicious threats that jeopardize the security of their systems and personal information. Let's take a look at five simple ways that you can keep your network secure without breaking the bank. Think of these recommendations as a Cybersecurity 101 course for the average home computer user.

Use a Firewall

Businesses spend thousands of dollars on sophisticated firewalls designed to keep malicious threats out of their protected networks. Firewalls sit at the border between a private network and the Internet, enforcing rules that regulate the traffic allowed to cross that border. Enterprise-grade firewalls are expensive and require extensive configuration to precisely define the types of traffic that should be allowed to enter the network unsolicited. For example, a business firewall would typically allow connections from the Internet to the company's web server.

Fortunately, home users don't need a sophisticated firewall because they don't have sophisticated networking needs. Unless you're running public web servers in your home, your firewall policy should be very simple: don't allow any unsolicited connections to your network. You probably already have a firewall built in to the Internet router provided by your service provider. Even better, it's probably already configured to enforce this simple "deny everything" firewall policy. Take the time to understand what type of router is sitting at the border of your home network. Find the instruction manual for that model router and use it to verify that the firewall function is enabled and blocking all unsolicited connection requests. This will go a long way toward keeping the bad guys out of your network.

Install and Update Antivirus Software

Antivirus software is still one of the tried-and-true ways to protect your network against malicious threats. Signature-based software runs on your systems, scanning them constantly for any signs of malicious software. When antivirus software detects a threat, it acts immediately to neutralize it by removing the software entirely or, if that's not possible, quarantining it in a safe location until you can take further action to clean your system.

You can't simply install antivirus software and walk away, however. The manufacturers of antivirus software release new updates on a daily basis to combat recently discovered strains of malicious software. If you haven't updated your software in a few years, it's next to useless as a defense against modern threats. Take a few minutes to verify that all of the systems on your network have current antivirus software and that they're configured to receive daily signature updates from the vendor.

Keep Computers Patched

Whether you're running Windows or Macintosh systems, you need to apply security updates on a regular basis to keep your systems secure.
Microsoft and Apple release patches whenever they become aware of a security vulnerability in their operating systems. If you don't apply those patches, attackers will likely discover your vulnerability and exploit it to gain access to your network and data. Fortunately, it's easy to keep your computers patched. Both Mac OS X and Windows provide automatic updating mechanisms that check every day for new security patches and automatically apply them to your systems. You just need to ensure that this functionality is turned on and your computer will take care of all of the work.

Encrypt Wireless Networks

Your wireless network is the easiest path for an attacker to gain access to the systems in your home. You should use strong WPA2 encryption to protect your network and configure it with a strong password known only to authorized network users. If you have no encryption, or use the outdated WEP encryption standard, it's equivalent to leaving your front door unlocked and open, waiting for intruders to wander by and steal your belongings.

Configuring wireless encryption is usually very easy. Check the manual for your wireless access point. You'll probably just need to select WPA2 encryption from a drop-down menu and then enter a strong passphrase used to access the network. Once it's up and running, reconfigure all of your devices to use the new encrypted network, and the contents of your communications will be safe from prying eyes.

Encrypt Sensitive Files

One oft-forgotten risk is the physical theft of computing devices. If an intruder steals a computer out of your home or a thief grabs your bag on the subway, you may lose physical possession of the computer. It's one thing to lose a couple of thousand dollars because of the device theft, but it's far worse to lose your tax returns, credit card statements and other sensitive information that might be stored on the device.

You can protect yourself against the loss of sensitive information by encrypting the contents of your computer. Even if the computer falls into the wrong hands, the thief won't be able to access your encrypted personal information without knowing your password. Both Windows and Mac systems offer free built-in encryption technology that you can easily enable. FileVault on Macs and BitLocker on Windows provide an easy way to protect the contents of your hard drive from prying eyes. Just make sure that you know your own password so that you don't lock yourself out from access to your personal files!

Securing a home network is far simpler than securing the complex corporate networks that offer public services, but it still requires effort. Take the time to assess your network by verifying that your firewall is active, installing antivirus software, applying security patches, using WPA2 on your wireless network and encrypting your sensitive files. The few hours you might spend securing your network today may prove worth the effort when they successfully protect you from hackers down the road!
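The same principle behind FileVault and BitLocker can be applied to individual files. Here is a minimal sketch using the third-party Python cryptography library; the file names are placeholders, and this illustrates the idea rather than replacing the built-in OS tools:

from cryptography.fernet import Fernet

# Generate a key once and store it somewhere safe - losing it means
# losing the data, just like a forgotten FileVault password.
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt a sensitive file, writing the ciphertext alongside it.
with open("tax_return.pdf", "rb") as infile:
    token = f.encrypt(infile.read())
with open("tax_return.pdf.enc", "wb") as outfile:
    outfile.write(token)

# Later, only someone holding the key can recover the original bytes.
plaintext = f.decrypt(token)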
<urn:uuid:311dc776-2f9c-4222-be47-c785e498efc9>
CC-MAIN-2017-04
http://certmag.com/cybersecurity-101-protect-home-personal-network/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279923.28/warc/CC-MAIN-20170116095119-00171-ip-10-171-10-70.ec2.internal.warc.gz
en
0.911659
1,276
2.671875
3
It is extremely important that you as the user understand why in the heck you should be concerned about the security of your device. Sure, you may have heard about the tons of malware out there or the ransomware stealing millions from large corporations, but it is easy to disregard such headlines as a user. "What would anyone want with my computer?" proves to be the usual user mindset. It really does pay to be conscious, however, and proactive no less.

Malware, and ransomware, a type of malware, is designed by cyber criminals with boatloads of knowledge as to how to steal your information, passwords, bank account numbers, log-ins, sensitive data and, of course, your money. The general tactic appears in the form of downloaded malware or ransomware, unsuspected by the user, waiting idly by until the person on the other side decides to take a dig into your life. Like the monster under your bed, but worse.

Malware is something to worry about because, for one, it is used to steal your data and, these days, your money. Not to mention the fact that if you happen to lose to cyber theft, not much can be done to help your case. Most cyber criminals operate in foreign countries outside U.S. legal jurisdiction, and to be honest, even if they didn't, you still wouldn't get your money back. It's just not the way it works. Don't be a victim.

Ask anyone and they will tell you the quickest way to get hacked is by lack of updates for commonly hacked programs, basically leaving your doors unlocked and asking to be robbed, and by being tricked into installing a Trojan, the equivalent of the robber ringing the doorbell and you inviting them to stay for dinner before they rob you dry. Neither is good!

"Sure, there are hundreds of other methods: SQL injection attacks, password guessing, and so on. But nearly everything besides unpatched software and downloaded Trojans is statistical noise. In fact, if you fix the main two issues, you almost don't need to do anything else." - Roger A. Grimes, computer security columnist for InfoWorld

Malware can be broken down into worms, viruses, Trojans, and hybrids. Viruses spread by infecting other host files; when those files are run, the malware commences. Worms are self-replicating; once started, they need no further assistance. Trojans need victims to get to business. They do not spread themselves; rather, the originating hacker must spread each copy to each victim separately, usually via email. The benefit to this is that unless you experience ransomware that locks the device, Trojans can be removed once identified.

You'd be surprised at the number of users that still give away their logins to hackers every day. It's insane. Typically the user is sent a phishing email asking for credentials that claims to be from a legitimate website. Many times the email makes a small call to action, such as threatening the termination of service. Trust the website in this case, not the email, and go directly to the website to confirm.

Signature-based anti-malware simply cannot keep up with the thousands of malicious programs that hit each month. That is just the truth of the matter. Some of the responsibility must be in the hands of the user, or a good IT management team. A single antivirus program can only get so far; it would behoove you to periodically run a boatload of free antivirus programs at once. Together, the programs can identify what a single one could not.
If you would like to educate yourself in more detail about the information presented in this blog post, please visit www.infoworld.com
<urn:uuid:3bb46d8c-013a-48b6-b130-ec8946960035>
CC-MAIN-2017-04
http://www.bvainc.com/basic-security-facts-every-user-know/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00381-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949937
767
2.546875
3
Intel researchers show how using simple processor cores can present a radically different approach to building processors. Can this stab at the future go mainstream?

Intel is about to deliver the opening salvo in a wave of multicore processors that could ultimately lead to chips with scores of cores aboard. The chip maker will begin the rollout of its Core Microarchitecture (new chip circuitry that emphasizes power efficiency) on June 26 with the arrival of the dual-core "Woodcrest" Xeon 5100 series server chip. But Intel researchers, speaking at the VLSI Symposium June 15, said that they have already seen results with projects associated with its Tera-scale Computing effort to explore processors containing tens or even hundreds of cores.

Intel has already implied that it is aiming for processors with more than 10 processor cores by the end of the decade. However, Tera-scale chips would look and act differently. They would be built from relatively simple general-purpose IA (Intel Architecture) x86 processor cores (with the potential to include specialized cores for some jobs) to boost performance by dividing up jobs and running them in parallel. Tera-scale chips would use semiconductor design laws (which state that smaller, slower cores tend to use less power) to meet businesses' needs for performance, while acknowledging concerns about matters like server power consumption.

"There's this advantage to simplifying the individual [processor] core, accepting the reduction in single-thread performance, while positioning yourself, because of the power reduction, to put more cores on the die," said Intel CTO Justin Rattner, in Hillsboro, Ore. "That's the energy-efficiency proposition of Tera-scale. Less is more, actually, in the case of a Tera-scale machine, because the underlying core efficiency is better than the cores we've been introducing this year."

Tera-scale chips would be particularly good for jobs requiring the processing of large amounts of data, such as computer visualization or using gestures to control a computer, or more business-oriented applications like data mining. But extracting the true performance potential of such a new approach won't be possible without improving chip technologies, including boosting onboard memory caches, creating high-speed interconnects for distributing data, and building more efficient clock timing systems. Nor will it be successful without getting software developers, many of whom are just now starting to tackle the move from single-threaded applications to multi-threaded applications, on board, Intel executives said. "Every time you increase the number of threads, you're putting greater burden on the programmers to write the applications to actually harness all that available parallelism," Rattner said.
<urn:uuid:35769d46-dddf-4192-b398-3390fe98c937>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Desktops-and-Notebooks/Terascale-Computing-Intels-Attack-of-the-Cores
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00381-ip-10-171-10-70.ec2.internal.warc.gz
en
0.927328
558
3
3
Can anyone give a brief explanation of MF COBOL and the other COBOL types? Actually, I know VS COBOL II (COBOL-74, COBOL-85), but I'm not aware of the other types of COBOL. Can anyone help me with a brief introduction to all the types of COBOL?

COBOL - COmmon Business Oriented Language. A common business person can also understand a program written in COBOL, but a COBOL programmer knows the syntax for it. COBOL has many variants, like MS-COBOL, VS COBOL II, etc.; nowadays we normally use VS COBOL II.
<urn:uuid:8af890c8-81e9-4f23-b81f-bd2ed610d46a>
CC-MAIN-2017-04
http://ibmmainframes.com/about14965.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00197-ip-10-171-10-70.ec2.internal.warc.gz
en
0.88141
147
2.75
3
We cannot overstate the importance of the goal of bringing the power of the Information Age into all our schools and public libraries. As President Clinton stated in his 1997 State of the Union address, connecting every classroom to the information superhighway will help to ensure that all Americans have the best education in the world. Connecting public libraries will extend the benefits of new information technologies to all members of society. Increasingly, computers and the Internet are becoming the language of the future, and that technology can be a powerful tool to help teachers teach and students learn. Indeed, technological literacy
<urn:uuid:1fed530a-513c-4b14-b5c3-d62e328460da>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/Student-Technology-Access-is-Essential.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00318-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944579
117
2.640625
3
Pew! Pew! Pew! went the Mars Curiosity Rover. At least in my head, it did. That's right, folks, the laser is a-firing on the Red Planet. Here's the weekly video update from NASA's Jet Propulsion Laboratory, giving us details on what the rover has been up to (remember, it's got work to do - it's not just there to admire the scenic landscape). I find it amusing that the laser can shoot the same location 600 times and only drill about 1 mm deep. NASA's gonna need a bigger laser if we're ever going to find gold up there.

Keith Shaw rounds up the best in geek video in his ITworld.tv blog.
Question 5) Cert-XK0-002 - CompCert: Linux+
Objective: Security
SubObjective: Given security requirements, implement basic IP tables/chains
Single Answer, Multiple Choice

Which target option in the 'iptables' command will drop a data packet and send back an error message after receiving a matching data packet from the network?

The REJECT target type for the 'iptables' command will drop the data packet and send back an error message after receiving a matching data packet from the network. The message is not sent if error messages have already been sent to the system in the past. The REJECT target type has the '--reject-with type' parameter, where the type variable is used to specify the type of message that should be sent back to the user. The DROP target type drops the data packet; the DROP target, however, cannot send back an error message to the user. There is no target named DENY in the 'iptables' command. The RETURN target type cannot drop a data packet: while traversing a rules chain, if a RETURN target is encountered, control is restored to the chain that invoked the rules chain. (An example pair of rules illustrating the difference between REJECT and DROP appears after the references below.)

The 'iptables' command is used to create and manage the system tables that contain rules for filtering IP packets. There are three independent tables supported by the Linux kernel: filter, nat, and mangle. Each table contains a set of chains that includes a sequence of rules for the packets traveling on the network. The filter table is the default table and contains the INPUT, OUTPUT, and FORWARD chains. The nat table contains the PREROUTING, OUTPUT, and POSTROUTING chains. The mangle table contains the PREROUTING, OUTPUT, INPUT, FORWARD, and POSTROUTING chains.

Linux Command Directory, iptables, http://www.linuxdevcenter.com/linux/cmd/cmd.csp?path=i/iptables
These questions are derived from the Transcender Practice Test for the CompTIA Linux+ certification exam.
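As a concrete illustration of the difference between the two targets, consider the following hypothetical rules for refusing Telnet connections; the flags shown are standard iptables options, but the port choice (TCP 23) is just an example:

```
# REJECT: the packet is dropped and an ICMP port-unreachable error
# is sent back to the source host.
iptables -A INPUT -p tcp --dport 23 -j REJECT --reject-with icmp-port-unreachable

# DROP: the packet is silently discarded; the source gets no reply at all.
iptables -A INPUT -p tcp --dport 23 -j DROP
```

From the sender's point of view, the REJECT rule produces an immediate "connection refused" style error, while the DROP rule leaves the connection attempt hanging until it times out.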
Using the Windows 2000 Distributed File System

A simple solution to time-consuming data management: the Windows 2000 DFS tree appears as one contiguous directory structure, regardless of the logical or physical location of the data.

In Windows NT 4.0, Microsoft provided an add-on product called Distributed File System (DFS) that allowed physically separate network file resources to be grouped together and accessed as if they were a single logical structure. The product, which was a free download, failed to make a great impact with network administrators and went largely unnoticed. With Windows 2000, DFS is included with the OS and provides a number of new functions. The tool for managing the DFS structure has been improved, and wizards serve to make setup an easy task. DFS is a service that gives administrators a way to provide users with simple access to increasingly distributed amounts of data. In this article, I will look at some of the features of DFS and how to create a DFS tree in Windows 2000.

DFS in a Heterogeneous Environment: DFS file structures can be accessed from any workstation that is running the DFS client software. This software is included with Windows 98, Windows NT 4.0, and Windows 2000. A downloadable client is available for systems running Windows 95. To take full advantage of the fault tolerance capabilities of DFS, the updated Active Directory Client Extensions must be installed for the respective client platforms.

What Is DFS? DFS provides the ability to create a single logical directory tree from different areas of data. The data included in a DFS tree can be in any location accessible from the computer acting as the DFS root. In other words, the data can be on the same partition, disk, or server, or on a completely different server. As far as DFS is concerned, it makes no difference. A DFS tree appears as one contiguous directory structure, regardless of the logical or physical location of the data. After the DFS root is created, links to directories can be added or removed to construct the single logical directory structure. The DFS tree can be navigated using standard file utilities such as Windows Explorer. Unless users are made aware of the fact that the data is being accessed from different locations, they will not realize that they are using a DFS system at all. DFS trees can be used with both FAT and NTFS partitions. If you do use NTFS, the inclusion of a file or directory in a DFS structure has no effect on security permissions.

There are two types of DFS:

- Stand-alone DFS--Refers to a DFS tree that is hosted on a single physical server and accessed by connecting to a DFS share point on that server. DFS configuration information is stored in the server's Registry. Stand-alone DFS provides no fault tolerance: if the server hosting the DFS root should go down, users will no longer be able to access their data unless they explicitly know where the data is stored.
- Domain DFS--Provides more functionality, including features such as replication and load-balancing capabilities. Domain DFS information is stored in Active Directory, and a domain member server must act as the host for the DFS tree. By storing the domain DFS configuration in Active Directory, the server-centric nature of stand-alone DFS is removed, enabling the administrator to create DFS root replicas. If a server were to go down, users would be redirected to a DFS root replica and could continue to access the DFS tree.
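To picture what a unified DFS namespace looks like in practice, here is a hypothetical layout (all server and share names below are invented for illustration): a single root hosted on one server, with links that silently redirect to shares on other machines:

```
\\CORP\Public                -> DFS root, hosted on SERVER1
\\CORP\Public\Sales          -> link to \\SERVER2\SalesDocs
\\CORP\Public\Engineering    -> link to \\SERVER3\EngData
\\CORP\Public\Archive        -> link to \\SERVER1\OldFiles
```

A user browsing \\CORP\Public in Windows Explorer sees one directory tree; the redirection to SERVER2 or SERVER3 happens behind the scenes.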
DFS Disk Space Reports: When a DFS share is accessed, the amount of free disk space reported is that of the drive hosting the DFS root. This amount will often differ from the amount of disk space actually available in different areas of the DFS structure. As an administrator, this discrepancy is easy to account for, but it can be confusing for users.
What the average guy might call a con is known in the security world as social engineering. Social engineering is the criminal art of scamming a person into doing something or divulging sensitive information. These days, there are thousands of ways for con artists to pull off their tricks (see: Social Engineering: Eight Common Tactics). Here we look at some of the most common lines these people are using to fool their victims.

Social networking scams

"I'm traveling in London and I've lost my wallet. Can you wire some money?"

Social networking sites have opened a whole new door for social engineering scams, according to Graham Cluley, senior technology consultant with U.K.-based security firm Sophos. One of the latest involves the criminal posing as a Facebook "friend." They send a message or IM on Facebook claiming to be stuck in a foreign city and they say they need money. "The claim is often that they were robbed while traveling and the person asks the Facebook friend to wire money so everything can be fixed," said Cluley. One can never be certain the person they are talking to on Facebook is actually the real person, he noted. Criminals are stealing passwords, hacking accounts and posing as friends for financial gain. "If a person has chosen a bad password, or had it stolen through malware, it is easy for a con to wear that cloak of trustability," said Cluley. "Once you have access to a person's account, you can see who their spouse is, where they went on holiday the last time. It is easy to pretend to be someone you are not."

"Someone has a secret crush on you! Download this application to find who it is!"

Facebook has thousands of applications users can download. Superpoke is one example of a popular application many users download to enhance their Facebook experience. But many are not trustworthy, according to Cluley. "It is impossible for Facebook to vet all of the applications people write," he said. Sophos, which tracks cybercrime trends, is seeing Facebook applications that install adware, which cause pop-up ads to appear on a user's screen. The other danger, according to Cluley, is that installing many of these applications means you give a third party access to the personal information on your profile. "Even if they are legitimate, can you trust them to look after your data properly?" said Cluley. "A lot of these applications are really jokey. You don't really need those. People should consider carefully which ones they choose to accept."

"Did you see this video of you? Check out this link!"

Sophos is also seeing an increase in spam on Twitter, the popular social network where users "Tweet" quick one-line messages to others in their network (read: 3 Ways a Twitter Hack Can Hurt You). A spam campaign on Twitter in recent weeks involved a Tweet that said "Did you see this video of you?" "If you think the link is from a friend, you are much more likely to click on it," said Cluley. Unfortunately, users who clicked on the link ended up at a bogus site that only looked like the Twitter web site. Once there, unsuspecting Twitterers entered passwords, which then ended up in the hands of hackers.

"This is Chris from tech services. I've been notified of an infection on your computer."

Before there were computers, email, web browsers and social network sites for communication, there was the phone. And although it may seem archaic now, it is still a handy way to pull off a social engineering scam, according to Chris Nickerson, founder of Lares, a Colorado-based security consultancy. Nickerson said scammers often take advantage of a timely event to strike. The Downadup worm that is currently infecting many PCs is a good example (read: Downadup Worm Now Infects 1 in Every 16 PCs). Nickerson's firm conducts what he calls "Red Team Testing" for clients, using techniques that involve social engineering to see where a company is vulnerable. "I will call someone and say 'I've been informed that you've been infected with this worm.' And then I walk them through a bunch of screens. They will see things like registry lines and start to get nervous with the technicality of it. Eventually, I say 'Look, why don't I fix this for you? Give me your password and I will deal with it and call you back when I am done.'" The strategy plays on a person's fear and lack of comfort with tech, said Nickerson. "If you can put someone in a position where they think they are in trouble, and then be the one to fix it, you automatically gain their trust."

"Hi, I'm the rep from Cisco and I'm here to see Nancy."

Nickerson recently pulled off a successful social engineering exercise for a client by wearing a $4 Cisco shirt that he got at a thrift store (read: Anatomy of a Hack). Criminals will often take weeks and months getting to know a place before even coming in the door. Posing as a client or service technician is one of many possibilities. Knowing the right thing to say, who to ask for, and having confidence are often all it takes for an unauthorized person to gain access to a facility, according to Nickerson. Well, cookies can't hurt either. Nickerson said he always brings cookies when he is trying to gain the trust of an office staff. In fact, a 2007 diamond heist at the ABN Amro Bank in Antwerp, Belgium involved an elderly man who offered the female staff chocolates and eventually gained their trust with regular visits while he pretended to be a successful businessman. "It was just plain old chocolate," said Nickerson. "Sweets loosen everybody up." Ultimately the bank lost 120,000 carats of diamonds because the man was able to gain enough trust to be given off-hours access to the bank's vault.

"Can you hold the door for me? I don't have my key/access card on me."

In the same exercise where Nickerson used his shirt to get into a building, he had a team member wait outside near the smoking area where employees often went for breaks. Assuming his team member was simply a fellow office-smoking mate, employees let him in the back door without question. This kind of thing goes on all the time, according to Nickerson. The tactic is also known as tailgating. Many people just don't ask others to prove they have permission to be there. But even in places where badges or other proof is required to roam the halls, fakery is easy, he said. "I usually use some high-end photography to print up badges to really look like I am supposed to be in that environment. But they often don't even get checked. I've even worn a badge that said right on it 'Kick me out' and I still was not questioned."

"You have not paid for the item you recently won on eBay. Please click here to pay."

"We see emails impersonating complaints from eBay for non-payment of winning bids," said Shira Rubinoff, founder of Green Armor Solutions, a security software firm in Hackensack, New Jersey. "Many people use eBay, and users often bid days before a purchase is complete. So, it's not unreasonable for a person to think that he or she has forgotten about a bid they made a week prior." Rubinoff, who was once targeted and almost fell prey to a phishing attack herself, was inspired to found Green Armor after the incident. She said this kind of ploy plays on a person's concerns about a negative impact on their eBay score. "Since people spend years building an eBay feedback score or 'reputation,' people react quickly to this type of email. But, of course, it leads to a phishing site." Rubinoff recommends not clicking on any emails of this kind. Instead, if you are concerned about something like your eBay score, go to eBay directly by typing the URL into the browser bar on your own.

"You've been let go. Click here to register for severance pay."

With the economy in the state it is in now, people are afraid for their jobs and criminals are taking advantage of that fear, said Rubinoff. A common tactic includes sending an email to employees that looks like it is from the employer. The message appears to relay news that requires a quick response. "It can be an email that appears to be from HR that says: 'You have been let go due to a layoff. If you wish to register for severance please register here,' and includes a malicious link." No one wants to be the person that causes problems in this economy, so any email that appears to be from an employer will likely elicit a response, noted Rubinoff. Lares' Nickerson has also seen cons that use fake employer emails. "It might say, 'In an effort to cut costs, we are sending W-2 forms electronically this year,'" said Nickerson.
System Restore is a system recovery feature of Windows that creates snapshots, or restore points, of the state of your computer at various intervals or before you perform a certain task. These restore points can then be used to restore your computer back to the state it was in when they were taken. When these restore points are created, and restored, the only files affected are the Windows Registry, programs, and system files. Your data such as spreadsheets, documents, images, and music remain untouched between restores.

You may be wondering why you would want to restore your computer to a previous point. The reason is that there comes a time in every computer user's life when they install a new program, install a new driver, or just turn on the computer and find it no longer works as it did the day before. You have no idea why, can't resolve the problems, and are left with two options. You can either back up your data, reinstall the operating system, and then reinstall all your applications, or you can restore your computer to a previous snapshot in the hopes that the problems will then be gone. By using System Restore to restore your computer to a previously known working state, you can save considerable time or money compared to reinstalling the entire computer.

System Restore points are automatically created when certain actions occur, such as installing a new application, driver, or Windows update, and once a day as a scheduled checkpoint. In order for System Restore to work you must have 300 MB of free space for each hard disk that System Restore is monitoring. System Restore will also use up to 15% of the disk space on each disk that it monitors. As hard drive space runs out, older restore points will be deleted as newer ones are created. It is also important to point out that you must be logged in as an Administrator in order to use System Restore. Now that you understand the basics of System Restore, you should continue to the next section to learn how to use it.

In the case of a problem on your computer that cannot be solved by normal means, you can restore your computer back to a previous working state. To do this you need to start System Restore so that you can choose the restore point to restore. If you are currently having problems starting Windows Vista, you can use System Restore from the Windows Recovery Environment. Instructions on how to do this can be found in this tutorial: Using System Restore from the Vista Windows Recovery Environment. If you can log into Windows Vista, launch System Restore (found under Start > All Programs > Accessories > System Tools) and you will be brought to the System Restore screen shown below in Figure 1. From this screen you can specify the restore point that you would like to restore.

Figure 1. System Restore Screen

By default, Vista will already have selected the Recommended restore option. This restore point is one that was made after a new program, driver, or update was installed. If you would like to use this restore point, you can click on the Next button to start the restore process. On the other hand, if there is a more recent restore point that you would like to restore, you should select Choose a different restore point and press the Next button. This will bring you to a screen, as shown in Figure 2, that contains a listing of all the available restore points that you can restore to.

Figure 2. List of available restore points

You should select the restore point that you would like to restore and press the Next button to start the restore process.
Vista will display a window showing your selected restore point and asking you to confirm that this is the one you would like to restore.

Figure 3. Confirm the selected restore point

If you would like to select a different restore point, press the Back button. Otherwise, you can press the Cancel button to exit System Restore or the Finish button to begin the restore process. If you selected Finish, Vista will display a second prompt asking you to confirm that you would like to continue the restore.

Figure 4. Second Confirmation

If you are sure you want to do the restore, then press the Yes button. Vista will now log you off of the computer and start the System Restore process as shown in Figure 5 below.

Figure 5. Restoring a restore point

When the restore has been completed, your computer will be restarted, and when Vista boots back up it will be restored to its previous state. When you log in to Vista for the first time after the restore, you will see a message showing that the restore was successful.

Figure 6. System restore was successful

If there are any problems with your computer due to the last restore, you can revert to your previous settings by going back into the System Restore utility, selecting the Undo System Restore option, and pressing the Next button.

Figure 7. Undo the last System Restore

Your computer should now be working properly again. As mentioned previously, it is also possible to create manual restore points as needed. A popular reason to create a manual restore point is when you have your computer set up perfectly and would like to save its state in case of problems in the future. To create a manual restore point, open the System Protection tab in the System control panel. This tab allows you to enable and disable System Restore as well as make new manual restore points.

Figure 8. System Protection tab

To create the manual restore point you should click on the Create button. When you press this button, a prompt will appear asking you to provide a title for this manual restore point.

Figure 9. Enter title for manual restore point

Type in a title for the manual restore point and press the Create button. Vista will now create a manual restore point, and when completed, display a notice saying that it was created successfully.

Figure 10. Manual restore point was created

Now that you have completed making the manual restore point you can close the System window.

It is advised that you do not turn off System Restore unless you have a specific need to do so. WARNING: By disabling System Restore you will delete all stored restore points and shadow copies of documents on your computer. To disable System Restore, uncheck the disks you no longer wish to monitor on the System Protection tab; System Restore is then disabled on your computer. By default System Restore is enabled on Windows Vista computers, so you will only need to enable it if you have previously disabled it. To enable System Restore, check the disks you wish to monitor on the same tab; System Restore is then enabled on your computer.

There are two safe ways to delete restore points stored on your computer. These ways are described below:

Turn off System Restore - When you turn off System Restore, all previously created restore points will be deleted.

System Restore runs out of storage space - If System Restore runs out of allocated space, it will delete the oldest restore points in order to create free space in which to create the new restore point.
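If you are curious which snapshots currently exist, or how much space they are using, Vista's built-in vssadmin utility can show you. This is a standard command rather than part of this tutorial's screenshots, and it must be run from an elevated Command Prompt:

```
vssadmin list shadows
vssadmin list shadowstorage
```

The first command lists the existing shadow copies (which back both restore points and previous versions of files); the second shows how much disk space has been allocated to them.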
If you are an owner of Windows Vista Business, Ultimate, or Enterprise, then you also have a feature called Shadow Copy available to you. Shadow Copy is a feature integrated into System Restore that makes copies of your documents when a restore point is created. You can then use these shadow copies to restore files at a later date if they have been accidentally deleted or altered in some way. Since Shadow Copy is integrated into System Restore, if System Restore is disabled you will not be able to use Shadow Copy.

To use Shadow Copy to restore a particular file to a previous state, you would right-click on the file and select the Restore previous versions option as shown in Figure 12 below.

Figure 12. Restore Previous Versions

This will bring up a new screen showing the previous versions of the file that are available to restore, as shown in Figure 13 below.

Figure 13. List of previous versions

When you select a version from the list you have three options: Open, Copy, and Restore. The interesting thing about this feature is that it does not work only on files. You can also use it to restore folders, and all of their contents, that were deleted. When restoring folders, if you only wish to restore a particular file from the folder's shadow copy, you should select the Open option to open the shadow copy folder and then copy the particular files out of it that you need. As you can see, Shadow Copy is a powerful way to keep your data safe and to have backups available in the case of accidental deletions or irreversible changes.

The System Restore tool is a powerful feature that can be used to keep your computer operating properly. Now when you run into a problem that cannot be resolved normally, you can use System Restore to restore your computer to a previous known working state. Even more powerful is the ability to use System Restore from the Windows Recovery Environment when you can't properly boot into Windows. This allows you a second chance to get your computer operating as it should without having to do a time-consuming and possibly expensive reinstall. More information about Vista System Restore can be found here: http://bertk.mvps.org/html/vista.html

As always, if you have any comments, questions or suggestions about this tutorial please do not hesitate to tell us in the Windows Vista Help Forums.
The advantages of IoT in business are becoming better understood with each passing day. Farming is emerging as one of the early, surprising adopters of the technology, even in some of the most remote parts of the world. In a proof-of-concept study published on its website last week, open-source IoT and M2M platform provider Libelium reveals how some cocoa farmers in Indonesia are now using 50 of its wireless Waspmote sensors and a Libelium Meshlium gateway, in conjunction with cloud analytics, to improve cocoa production and profit margins in the face of climate change, which is a growing threat to their business.

The Spanish firm, named a Gartner 'Cool Vendor' in embedded software and systems last year, said that Singapore-based IoT solution provider BioMachines designed a wireless sensor network system, integrating its Waspmote Smart Agriculture sensors to measure environmental parameters in the cocoa fields of tropical Indonesia. These sensors collect environmental data from laboratory and field-based experiments, such as temperature, humidity, photosynthetically active radiation (PAR) and soil water potential. There are also NFC tags on the trees. This data is subsequently passed on to the cocoa farmers, and it is hoped that the information could contribute to the development of pest-resistant cocoa clones, the learning and sharing of new techniques to revive old and damaged trees, and the prevention of deforestation - all of which are hampering cocoa production. All of this is optimising production, enhancing the commercial viability of the cocoa supply chain and - with this monitoring all done remotely - making logistics a little easier too.

"The Internet of Things (IoT) solves one of the major challenges of access, via remote monitoring systems. The Indonesian cocoa farms and research stations are located in far-flung areas that previously required experts to travel for days in arduous conditions to access the field and the data," reads the blog post.

This project, part of Indonesia's Sustainable Cocoa Production Program (SCPP), saw BioMachines work with a client organization to transform a remote site into a Smart Cocoa research station that monitors environmental parameters. Indonesia is said to be the third largest cocoa producer in the world, with the vast majority of this production coming from small family-run farms. However, despite the International Cocoa Organization (ICCO) predicting that cocoa demand will exceed supply by 2020, Indonesia is struggling to keep up; the Indonesian Cocoa Association (ASKINDO) says that the country produced 450,000 tons of cocoa beans last year - significantly down from 620,000 tons in 2006. Worse still, the association predicts a further 11 percent decline in production this year. A key contributor to this decline, and a reason why IoT is so important, is climate change. Farms are plagued by aging trees that are prone to pests and diseases, as well as a lack of scientific knowledge and analysis of the crop at farm level.

Rob Bamforth, principal analyst at Quocirca, told Internet of Business that it was great to see "practical and pragmatic applications" for IoT, and believes that deployments don't have to be costly. "IoT does not have to mean big bucks as long as the idea is well thought out and has direct impact. Often good IoT ideas are really about feedback and closing the loop in open-ended processes." Next year, he expects deployments to grow, but only in certain sectors.
"I think we will see a mixture, but I don't think home automation will be as big as much of the hype suggests, as it's difficult to see the value in many cases," he said. "I think we'll see more enterprise deployments and a spread of sensors, but limited a little bit, not by the cost of the sensors, but by the cost of deploying them." He expects to see more going on with wearables and in-vehicle telematics, where deployment can be designed in from the outset. "Retrofitting [IoT] will work, but only in apps where there is clear value in process improvement and resource savings."

This is by no means the first case study of IoT being used on the farm, and much talk recently has been - as amusing as it sounds - on how to connect sheep and cows to the Internet for continuous health monitoring. Beecham Research published a research paper on the topic earlier this year. Meanwhile, in a recent blog post for the government's IoTUK hub, Kisanhub co-founder Giles Barker detailed how his start-up recently partnered with Nwave to run a pilot IoT project at NIAB's innovation farm in Cambridge. Here, the potato crop uses third-party moisture sensors, which send moisture readings via the Nwave network.

"IoT is a bit of a buzzword, but we are trying not to get bogged down in the hype and instead focus on creating use cases for farmers," said Barker in his post. "I am hugely optimistic about what remote sensing will do for agriculture, in terms of reducing water consumption and improving yield. Sensors and IoT will be impactful, not just for farming and agriculture, but also for all of society. 70 percent of all water used in the world is used by agriculture, and of that 50 percent is wasted; think of the global impact if you could improve that."
Listen attentively: listen for the words used by the other person, but watch for other clues also. Listen to the tone of voice; watch the body language and posture. People (men especially) do not like to admit that they are wrong or that they don't understand. If I don't understand something, I may not say it for fear that I will be perceived as incompetent or weak. When you ask me "Do you understand?" and I reply "Yes," I may be doing so even if I don't really understand. How can you pick up on it? Did I sound hesitant when I answered? Does my body language look like that of a confident person? Or did I drop my shoulders and lower my head? These, and other similar clues, scream that I don't understand but am unwilling to admit it. If it is important to you that I understand, then you need to validate that I am telling the truth.

Ask questions. If your client or your listener doesn't understand, you need to ask questions. The questions should help you figure out where there is a gap in understanding so you can close it. Questions that use non-threatening language are more effective. What is non-threatening language? It is a way of speaking that ensures your listener does not put herself in a defensive position. When someone is defending herself, it inhibits clear communication. How can you make use of such language and questions? If the person has admitted that she does not understand, ask her to be more specific about what she doesn't understand. Better yet, ask what she does understand and fill in the gap. If you are unconvinced that the person has understood, put the onus on yourself: "Just to make sure I explained myself properly, can you tell me what you understood from our discussion?"

Use metaphors and other images to explain difficult concepts. It is harder to grasp abstract concepts like directories and inodes; it is easier to understand folders and documents. Metaphors, images, and comparisons are good ways to illustrate your words for the other person. Human beings tend to process information as images, not words. The more visual your explanations, the better. If you cannot explain something with a metaphor, use drawings, diagrams, or concrete examples. Compare your reality to the other person's reality if you can. For example, a colleague once told me how she explained to a group of schoolchildren that she was a mediator: "I help you get along if you're fighting in the school yard."

Information is what you say. Communication is what your audience understands. The best IT specialist is not the one who has the most ideas. He or she is not the one that programs the best. The best one is the person who is able to take her knowledge and skills and share them effectively with the people that surround her. By paying attention to these five points, you can dramatically improve the effectiveness of your everyday communications.

Laurent Duperval is the president of Duperval Consulting, which helps individuals and companies improve people-focused communication processes. He may be reached at email@example.com or 514-902-0186.
Researchers at Georgia Tech and MIT have developed a proof of concept to demonstrate that it is possible to record a computer user's keystrokes using an iPhone 4's accelerometer. The researchers developed a method to accurately translate the vibrations from typing on a keyboard, picked up by the device's accelerometer when the phone is placed on a desk near a PC. Though they warn that hackers could potentially use their method to eavesdrop on a user's keystrokes, they believe the actual threat is quite low.

The method, detailed in a paper titled "(sp)iPhone: Decoding Vibrations From Nearby Keyboards Using Mobile Phone Accelerometers," works by interpreting pairs of keystrokes in successive order. According to principal researcher Patrick Traynor, assistant professor at Georgia Tech's School of Computer Science, the method can't reliably pinpoint single keystrokes. But by characterizing the successive strokes as left-right, right-left, left-left, or right-right, and then whether the pair is nearer to or further away from the device, the pairs can be statistically analyzed to represent probable letter pairs. Those pairs can then be compared against a dictionary. According to Traynor, the method is 80 percent accurate with a 58,000-word dictionary.

Even that accuracy, though, requires thoroughly modern equipment. "We first tried our experiments with an iPhone 3GS, and the results were difficult to read," Traynor said in a statement. "But then we tried an iPhone 4, which has an added gyroscope to clean up the accelerometer noise, and the results were much better. We believe that most smartphones made in the past two years are sophisticated enough to launch this attack."

Similar keylogging methods have been developed which use a smartphone's microphone. But according to the researchers, malware masquerading as a legitimate app can usually access a smartphone's accelerometer without tripping built-in security features, which tend to prevent access to a device's other sensors without a specific OK from the user.

Traynor characterized the likelihood of a smartphone user succumbing to such keyboard eavesdropping as "pretty low." With only 80 percent accuracy, the attack would likely have trouble accurately interpreting usernames or passwords that aren't common dictionary terms. And with an effective range of just three inches, users can easily mitigate any potential threat by keeping their iPhone further away from their keyboard, or off the desk entirely.

The paper will be presented Thursday at the 18th ACM Conference on Computer and Communications Security, currently in progress in Chicago.
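To make the statistical idea concrete, here is a toy sketch of the dictionary-matching step; this is my own illustration, not the researchers' code, and the left/right keyboard split and sample words are invented for the example:

```python
# Toy sketch of dictionary matching on coarse keystroke-pair features:
# each successive pair of keystrokes is reduced to left/right labels,
# and candidate words are scored by how well they fit the observation.
LEFT = set("qwertasdfgzxcvb")  # rough left half of a QWERTY keyboard

def word_profile(word):
    """Label each successive letter pair of a word as left ('L') or right ('R')."""
    w = word.lower()
    return [("L" if a in LEFT else "R", "L" if b in LEFT else "R")
            for a, b in zip(w, w[1:])]

def score(word, observed):
    """Count how many observed pair labels match the word's profile."""
    profile = word_profile(word)
    if len(profile) != len(observed):
        return -1  # word length doesn't fit the observation
    return sum(p == o for p, o in zip(profile, observed))

def best_guess(observed, dictionary):
    """Return the dictionary word whose profile best matches the observation."""
    return max(dictionary, key=lambda w: score(w, observed))

# Suppose the vibration classifier labeled three keystroke pairs like this:
observed = [("L", "R"), ("R", "R"), ("R", "R")]
print(best_guess(observed, ["dark", "echo", "news", "silk"]))  # -> "silk"
```

The real system also uses the near/far distance feature and far noisier classifications, but the sketch shows why random strings such as passwords resist the attack: with no dictionary to score against, the coarse labels alone are not enough to recover the text.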
Writing and implementing high performance computing applications is all about efficiency, parallelism, scalability, cache optimizations and making best use of whatever resources are available, be they multicore processors or application accelerators such as FPGAs or GPUs. HPC applications have been developed for, and successfully run on, grids for many years now.

HPC on Grid

A good example of the different components of HPC applications can be seen in the processing of data from CERN's Large Hadron Collider (LHC). The LHC is a gigantic scientific instrument (with a circumference of over 26 kilometres), buried underground near Geneva, where beams of subatomic particles (hadrons, either protons or lead ions) are accelerated in opposite directions and smashed into each other at 0.999997828 the speed of light. Its goal is to develop an understanding of what happened in the first 10^-12 of a second at the start of the universe after the Big Bang, which will in turn confirm the existence of the Higgs boson and help to explain dark matter, dark energy, anti-matter, and perhaps the fundamental nature of matter itself.

Data is collected by a number of "experiments," each of which is a large and very delicate collection of sensors able to capture the side effects caused by exotic, short-lived particles that result from the particle collisions. When accelerated to full speed, the bunches of particles pass each other 40 million times a second; each bunch contains 10^11 particles, resulting in one billion collision events being detected every second. This data is first filtered by a system built from custom ASIC and FPGA devices. It is then processed by a 1,000-processor compute farm, and the filtering is completed by a 3,400-processor farm. After the data has been reduced by a factor of 180,000, it still amounts to 3,200 terabytes of data a year. And the HPC processing undertaken to reduce the data volume has hardly scratched the surface of what happens next. Ten major compute sites around the world comprising many tens of thousands of processors (and many smaller facilities) are then put to work to interpret what happened during each "event." The processing is handled, and the data distribution managed, by the LHC Grid, which is based on grid middleware called gLite that was developed by the major European project, Enabling Grids for E-sciencE (EGEE). High performance is achieved at every stage because the programs have been developed with a detailed knowledge and understanding of the grid, cluster or FPGA that they target.

From Grid to Cloud

Grid computing isn't dead, but long live cloud computing. As far as early-adopter end users in our 451 ICE program are concerned, cloud computing is now seen very much as the logical endpoint for combined grid, utility, virtualization and automation strategies. Indeed, enterprise grid users see grid, utility and cloud computing as a continuum: cloud computing is grid computing done right; clouds are a flexible pool, whereas grids have a fixed resource pool; clouds provision services, whereas grids provision servers; clouds are business, and grids are science. And so the comparisons go on, but through cloud computing, grids now appear to be at the point of meeting some of their promise. One obvious way to regard cloud computing is as the new marketing-friendly name for utility computing, sprinkled with a little Internet pixie dust.

In many respects, its aspirations match the original aspirations of utility computing: the ability to turn on computing power like a tap and pay on a per-drink basis. "Utility" is a useful metaphor, but it's ambiguous because IT is simply not as fungible as electrical power, for example. The term never really took off. Grid computing, in the meantime, has been hung up on the pursuit of interoperability and the complexity of standardization. Taking the science out of grids has proved to be fairly intractable for all but high performance computing and specialist application tasks. Clouds usefully abstract away the complexity of grids and the ambiguity of utility computing, and they have been adopted rapidly and widely. Since then everyone has been desperately trying to work out what cloud computing means and how it differs from utility computing. It doesn't, really. Cloud computing is utility computing 2.0 with some refinements, principally that it is delivered in ways we think are very likely to catch on. But as cloud abstracts away the complexity, it also abstracts away visibility of the underlying execution platform. And without a deep understanding of how to optimize for the target platform, high performance computing becomes, well, just computing.

Human-readable programs are translated into ones that can be executed on a computer by a program called a compiler. A compiler's first step is lexical analysis, which converts a program into its logical components (i.e., language keywords, operators, numbers and variables). Next, the syntax analysis phase checks that the program complies with the grammar rules of the language. The final two phases of optimization and code generation are often tightly linked so as to be one and the same thing (although some generic optimizations, such as common sub-expression elimination, are independent of code generation). The more the compiler knows about the target system, the more sophisticated the optimizations it can perform, and the higher the performance of the resulting program. But if a program is running in the cloud, the compiler doesn't know any detail of the target architecture, and so must make lowest-common-denominator assumptions, such as an x86 system with up to 8 cores. Much higher performance may be achieved by compiling for many more cores, or an MPI-based cluster, or a GPU or FPGA.

Such technology has become a hot commodity. Google bought PeakStream, Microsoft bought the assets of Interactive Supercomputing and Intel bought RapidMind and Cilk Arts. So the major IT companies are buying up this parallel processing expertise. Multicore causes mainstream IT a problem in that most applications will struggle to scale as fast as new multicore systems do, and most programmers are not parallel processing specialists. And this problem is magnified many times over when running HPC applications in the cloud, since even if the programmer and the compilers being used could do a perfect job of optimizing and parallelizing an application, the detailed target architecture is unknown. Is there a solution? In the long term new programming paradigms or languages are required, perhaps with a two-stage compilation process that compiles to an intermediate language but postpones the final optimization and code generation until the target system is known. And no, I don't think Java is the answer.
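To ground the compiler discussion, here is what one of the generic optimizations mentioned above, common sub-expression elimination, looks like in miniature. This is an illustrative sketch written out in Python source; a real compiler applies the transformation to an internal representation rather than to source code:

```python
# Before: the sub-expression (a * b) is evaluated twice.
def f(a, b, c):
    return (a * b) + (a * b) * c

# After common sub-expression elimination: the shared value is computed
# once, stored in a temporary, and reused.
def f_cse(a, b, c):
    t = a * b
    return t + t * c
```

Because this optimization needs no knowledge of the target machine, it can be applied even when, as in the cloud scenario described above, the final execution platform is unknown; it is the target-specific optimizations (core counts, vector units, cache sizes) that a two-stage compilation scheme would have to postpone.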
The effect of telehealth on patients' health-related quality of life is "weak or non-existent," according to the latest findings in the UK's influential Whole System Demonstrator (WSD) trial. Researchers led by City University's school of health sciences found that the psychological difference for patients, whether they were remotely monitored or underwent conventional care, was "little or non-significant." The patients in the 12-month study had one of three chronic diseases: chronic obstructive pulmonary disease (COPD), diabetes or heart failure. The findings were first published by the British Medical Journal.

The WSD trials have been widely watched internationally as well as in the UK because of their scale, which gives their findings greater credibility. For instance, the current trial had 1,650 patients spread across three separate locations in England. This is the latest survey in the wider WSD trial. Previous findings in 2012 found telehealth to be expensive (London School of Economics) or to deliver health benefits, although with only modest cost savings (Nuffield Trust). However, the UK government, which supports the widespread introduction of telehealth, drew highly positive results from the WSD trial. The Department of Health published its findings in December 2011. The City University-led study only covered the 12 months up to December 2010, and the researchers acknowledged that telehealth technology has advanced in the last two years.
Pereira J.R. (Embrapa Cotton), Duarte A.E. (Regional University of Cariri), Pitombeira J.B. (Federal University of Ceará), da Silva M.A.P. (Regional University of Cariri) and 2 more authors. Phyton, 2013. An experiment was conducted in dryland conditions of the Brazilian Northeast to determine the number of viable weed seeds (seedbank) in an upland cotton crop, and its distribution in the soil profile, before and after using various herbicide treatments. A randomized block design in a split-plot scheme with 6 replications was used, where the main plots were constituted by a factorial (13 treatments and 2 soil sampling depths), and the subplots by 2 sampling dates. The seedbank was determined by germination of the recovered weed seeds obtained from different soil depths. The highest number of viable weed seeds in the area was found before the application of the herbicide treatments at 0-10 cm soil depth. The treatments metolachlor + diuron, diuron + pendimethalin, and the control (no herbicide treatment, weeded weekly during the entire cotton crop cycle) were the most effective in reducing the weed seedbank in the area.

Lucena W.A. (Embrapa Cotton; Federal University of Rio Grande do Sul), Pelegrini P.B. (Embrapa Genetic Resources and Biotechnology), Martins-de-Sa D. (Embrapa Genetic Resources and Biotechnology) and 12 more authors. Toxins, 2014. Bacillus thuringiensis (Bt) is a gram-positive, spore-forming soil bacterium that is distributed worldwide. Originally recognized as a pathogen of the silkworm, several strains were found in epizootic events in insect pests. In the 1960s, Bt began to be successfully used to control insect pests in agriculture, particularly because of its specificity, which reflects directly on its lack of cytotoxicity to human health, non-target organisms and the environment. Since the introduction of transgenic plants expressing Bt genes in the mid-1980s, numerous methodologies have been used to search for and improve toxins derived from native Bt strains. These improvements directly influence the increase in productivity and the decreased use of chemical insecticides on Bt crops. Recently, DNA shuffling and in silico evaluations have emerged as promising tools for the development and exploration of mutant Bt toxins with enhanced activity against target insect pests. In this report, we describe the natural and in vitro evolution of Cry toxins, as well as their relevance in the mechanism of action for insect control. Moreover, the use of DNA shuffling to improve two Bt toxins is discussed, together with in silico analyses of the generated mutations to evaluate their potential effect on protein structure and cytotoxicity.

Bezerra C.A. (Catholic University of Brasília), Macedo L.L.P. (Catholic University of Brasília), Amorim T.M.L. (Federal University of Rio Grande do Norte), Santos V.O. (Federal University of Rio Grande do Norte) and 8 more authors. Gene, 2014. α-Amylases are common enzymes responsible for hydrolyzing starch. Insect pests whose larvae develop in seeds rely obligatorily on α-amylase activity to digest starch as their major food source. Considering the relevance of insect α-amylases and of the natural α-amylase inhibitors present in seeds to protect against insect damage, we report here the molecular cloning and nucleotide sequence of the full-length AmyHha cDNA of the coffee berry borer, Hypothenemus hampei, a major insect pest of coffee crops. The AmyHha sequence has 1879 bp, containing a 1458 bp open reading frame, which encodes a predicted protein with 485 amino acid residues and a predicted molecular mass of 51.2 kDa. The deduced protein showed 55-79% identity to other insect α-amylases, including the Anthonomus grandis, Ips typographus and Sitophilus oryzae α-amylases. In-depth analysis revealed that the three highly conserved amino acid residues (Asp184, Glu220, and Asp285) which compose the catalytic site are also present in AmyHha amylase. The AmyHha gene seems to be a single copy in the haploid genome, and AmyHha transcription levels were found to be higher in L2 larvae and adult insects, both corresponding to major feeding phases. Modeling of the predicted AmyHha protein uncovered striking structural similarities to the Tenebrio molitor α-amylase, also displaying the same amino acid residues involved in enzyme catalysis (Asp184, Glu220 and Asp285). Since the AmyHha gene was mostly transcribed in the intestinal tract of H. hampei larvae, the cognate α-amylase could be considered a highly valuable target for coffee bean insect control by biotechnological strategies.

Nascimento D.M.D. (Federal University of Ceará), Almeida J.S. (State University of Ceará), Vale M.D.S. (Embrapa Tropical Agroindustry), Leitão R.C. (Embrapa Tropical Agroindustry) and 4 more authors. Industrial Crops and Products, 2015. The high lignin content in unripe coconut fiber limits the use of this biomass as a cellulose nanocrystal source compared to other cellulose-rich materials. The aim of this study was to obtain lignin and biomethane, and to evaluate different approaches for extracting cellulose nanocrystals from unripe coconut coir fiber. The environmental evaluation of these approaches is presented in the second part of this paper. Lignin was extracted by acetosolv pulping, and cellulose by alkaline hydrogen peroxide bleaching. The biochemical methane potential of the effluents resulting from acetosolv pulping was evaluated, as well as the lignin concentration. Cellulose nanocrystals were prepared from cellulose pulp via four methods: acidic hydrolysis with high acid concentration, acidic hydrolysis with low acid concentration, ammonium persulfate oxidation, and high-power ultrasound. The cellulose nanocrystals were analyzed by FTIR spectroscopy, X-ray diffraction, transmission electron microscopy, and TG analysis. Using these methods, the whole coconut fiber could be used to produce cellulose nanocrystals and lignin. Among the proposed methods, high-power ultrasound showed the highest efficiency in cellulose nanocrystal extraction.

Alves T.J.S. (Federal University of Pernambuco), Wanderley-Teixeira V. (Federal University of Pernambuco), Teixeira A.A.C. (Federal University of Pernambuco), Silva-Torres C.S.A. (Federal University of Pernambuco) and 4 more authors. Animal Biology, 2014. Parasitoids have evolved mechanisms to evade their hosts' defenses. Bracon vulgaris (Ashmead) is a larval ectoparasitoid responsible for natural reduction of Anthonomus grandis (Boheman) and Pectinophora gossypiella (Saunders), which are considered the main cotton pests in the cotton agroecosystem in northeastern Brazil. This study aimed to analyze the sensory structures (antennae and ovipositor) involved in the parasitism behavior of B. vulgaris, and to describe and evaluate associations between the composition, morphology, and functions of these structures in the parasitoid-host interaction. Results showed that the B. vulgaris ovipositor is a multifunctional structure, 2.7 ± 0.3 mm in length, composed of 3 valves. Valves 1 and 2 are elongated and rigid, and act jointly to pierce the host's cuticle, to inject the poison gland secretion, and to deposit eggs. Valve 3 covers the other valves, giving them protection. Valve 3 also presents annulations along its entire extension, which give flexibility to the ovipositor, and trichoid sensilla that possibly capture vibrations from the host's feeding and locomotion, thereby aiding the parasitoid in host selection. The presence of cuticular microtrichia is possibly responsible for the cleaning of the ovipositor, keeping it functional between the various insertions that occur during parasitism behavior. The parasitoid's antennae are filiform, measure about 2 mm, and are composed of four types of sensilla (trichoid, basiconic, coeloconic, and placoid) that act as olfactory and gustatory receptors and/or perform tactile, thermo- and hygroreception functions. The integrated action of these sensory components underlies the successful parasitism behavior of the parasitoid B. vulgaris.
Today, faster connections well over 100Mbps are becoming more common, and end users expect to verify that they are receiving the speed they are paying for. When testing a connection faster than 100Mbps, you may be surprised that accurate internet speed testing can be limited or affected by several factors.

Several limiting factors when testing bandwidth:

1. Limitations with NIC cards
2. Wi-Fi will almost always test slower than the actual speed. This is due to wireless speed limitations, signal strength and packet loss
3. PC/firewall/router duplex settings
4. Browser or device slowness (cache)
5. Old cabling and patch panels limited to 10Mbps or 100Mbps
6. Slower 100Mb switches in between you and the core network switch, firewall or router
7. Other bandwidth traversing in/out from your network to the internet needs to be factored into the math. For example, on a 100Mbps/100Mbps circuit, if your current usage is about 20/50, your test should reflect about 80/50 (a worked example appears at the end of this article)
8. Testing server speed limitations - Because testing servers are free, some companies do not like to pay for upgraded networks, servers, maintenance or additional bandwidth, and so they limit the bandwidth, for example to 50Mbps or 100Mbps. This bandwidth pipe will be shared across all network tests hitting that server at the same time, and that can be hundreds or thousands of simultaneous tests at one time
9. Testing server test limitations - In most cases the testing servers are simply not capable of testing speeds over 100Mbps per connection or test, especially when multiple tests are being run; the testing server or network cannot sustain the many concurrent testing requests

If you try internet speed testing using different speed testing systems, you will notice inconsistent results. This can be due to the current testing server's load or software, its geographical location, or network configurations that differ from the other servers. Also, the ideal way to test is to isolate your network by connecting directly into the core switch or router and testing from there. No other network devices should be connected during your test. This will rule out any local usage, viruses or an incorrectly configured device. Read more on our website or test our network speed.
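To make the arithmetic in point 7 explicit, here is a small worked example; this is a generic illustration with made-up numbers, not a tool from any particular speed-test provider:

```python
def expected_test_result(capacity_mbps, usage_mbps):
    """Rough ceiling a speed test can report: capacity minus current usage."""
    return max(0.0, capacity_mbps - usage_mbps)

# A 100/100 Mbps circuit with about 20 Mbps down and 50 Mbps up already
# in use should test at roughly 80 Mbps down and 50 Mbps up.
down = expected_test_result(100, 20)  # -> 80.0
up = expected_test_result(100, 50)    # -> 50.0
print(f"Expected test result: {down:.0f}/{up:.0f} Mbps")
```

In practice the reported figure will sit at or below this ceiling, since the other factors in the list (Wi-Fi loss, duplex mismatches, testing-server load) each take their own cut.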
<urn:uuid:1e1fd052-aee9-4a96-bef3-e25cb616930d>
CC-MAIN-2017-04
http://info.globalit.com/2014/11/14/internet-speed-test/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00043-ip-10-171-10-70.ec2.internal.warc.gz
en
0.89799
470
2.78125
3
When using the Internet, most people connect to web sites, FTP servers or other Internet servers via a domain name, as in www.bleepingcomputer.com. Internet applications, though, do not communicate via domain names, but rather using IP addresses, such as 192.168.1.1. Therefore, when you type a domain name into a program, the application must first convert it to the IP address it will use to connect. The way these hostnames are resolved to their mapped IP addresses is called Domain Name Resolution. On almost all operating systems, whether they be Apple, Linux, Unix, Netware, or Windows, the majority of resolutions from domain names to IP addresses are done through a procedure called DNS.

What is DNS

DNS stands for Domain Name System and is the standard domain name resolution service used on the Internet. Whenever a device connects to another device on the Internet, it needs to connect via the IP address of the remote device. To get that IP address, DNS is used to resolve the domain name to its mapped IP address. The device queries its configured DNS servers, asking what the IP address is for that particular domain name. The DNS server then queries other servers on the Internet that know the correct information for that domain name, and returns the IP address to the device. The device then opens a connection directly to that IP address and performs the desired operation. If you would like a more detailed explanation of the Domain Name System you can find it here: The Domain Name System

Enter the Hosts File

There is another way to resolve domain names without using the Domain Name System, and that is by using your HOSTS file. Almost every operating system that communicates via TCP/IP, the standard of communication on the Internet, has a file called the HOSTS file. This file allows you to create mappings between domain names and IP addresses. The HOSTS file is a text file that contains an IP address, separated by at least one space from a domain name, with each entry on its own line. For example, imagine that we wanted to make it so that if you typed in www.google.com, instead of going to Google you would go to www.yahoo.com. To do this you would need to find one of the IP addresses of Yahoo and map www.google.com to that IP address. One of the IP addresses for Yahoo is 220.127.116.11. If we wanted to map Google to that IP address, we would add an entry into our HOSTS file as follows:

220.127.116.11 www.google.com

NOTE: When inputting entries in the hosts file there must be at least one space between the IP address and the domain name. You should not use any web notations such as \, /, or http://. You can disable a specific entry by putting a # sign in front of it. You may be wondering why this works, since, as we said previously, a device that needs to resolve a domain name to an IP address normally uses its configured DNS servers. That is true, but on most operating systems the default configuration is that any mappings contained in the HOSTS file override information that would be retrieved from a DNS server. In fact, if there is a mapping for a domain name in the HOSTS file, your computer will not even bother querying the DNS servers that are authoritative for that domain, but will instead read the IP address directly from the HOSTS file. It is also important to note that entries added to your HOSTS file automatically start working.
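To tie the pieces above together, a minimal HOSTS file might look like the following sketch; the mappings are illustrative only, and test.mydomain.com is a hypothetical name:

    # Anything after a # is ignored, so this line is a comment.
    127.0.0.1       localhost
    220.127.116.11  www.google.com      # the Google-to-Yahoo example from above
    # 10.0.0.5      test.mydomain.com   # disabled entry: the leading # turns it off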
There is no need to reboot or enter another command to start using the entries in the HOSTS file. An example HOSTS file can be found here: HOSTS

Please note that there are ways to change the order in which your computer performs Domain Name Resolution. If you have problems with the HOSTS file not working, you may want to read this article, which goes into much greater detail on Domain Name Resolution on various operating systems:

For reference, the HOSTS file is located in the following places on the listed operating systems:

Windows NT/2000/XP Pro: c:\winnt\system32\drivers\etc\hosts or c:\windows\system32\drivers\etc\hosts
Windows XP Home: c:\windows\system32\drivers\etc\hosts
Apple: System Folder:Preferences, and in the System Folder itself

On Windows machines you may not already have a hosts file. If this is the case, there will most likely be a sample hosts file called hosts.sam that you can rename to hosts and use as you wish. You can edit this file from the cmd prompt using Edit, with Notepad on Windows, or with VI on Unix/Linux; really, any text editor can open and modify the HOSTS file. It is also recommended that you make periodic backups of the file by copying it to another name. Some people recommend making the file read-only so that it is harder for a malicious program to modify, and there are hijackers known to do this; but hijackers such as CoolWebSearch add entries to the file regardless of whether or not it is read-only. Therefore you should not assume that a read-only HOSTS file is safe from modification.

Why would I want to use a HOSTS file

There are a variety of reasons why you would want to use a HOSTS file, and we will discuss a few examples so you can see the versatility of this little file.

Network Testing - I manage a large Internet data center, and many times we need to set up test machines or development servers for our customers' applications. When connecting to these development or test machines, you can use the HOSTS file to test them as if they were the real thing. As an example, let's say that you had a domain name for a development computer called development.mydomain.com. When testing this server you want to make sure it operates correctly when people reference it by the true web server domain name, www.mydomain.com. If you simply changed www.mydomain.com on the DNS server to point to the development server, everyone on the Internet would connect to it instead of the real production server. This is where the HOSTS file comes in. You just add an entry that maps www.mydomain.com to the IP address of the development server on the computers you will be testing with, so the change is local to the testing machines and not the entire Internet. Now when you connect to www.mydomain.com from a computer with the modified HOSTS file, you are really connecting to the development machine, but it appears to the applications you are using that you are connecting to www.mydomain.com.

Potentially Increase Browsing Speed - By adding IP address mappings for sites you use a lot into your HOSTS file, you can potentially increase the speed of your browsing. This is because your computer no longer has to ask a DNS server for the IP address and wait for its response, but can instead quickly query a local file.
Keep in mind that this method is not advised, as there is no guarantee that the IP address you have for a domain name will always stay the same. If the web site owner decides to change their IP address, you will no longer be able to connect.

Block Spyware/Ad Networks - This is becoming a very popular reason to use the HOSTS file. By adding large lists of known ad network and spyware sites into your hosts file and mapping the domain names to 127.0.0.1, an IP address that always points back to your own machine, you block these sites from being reached. This has two benefits: it can speed up your browsing, since you no longer wait while ads download from ad network sites, and your browsing will be more secure, since you cannot reach known malicious sites.

NOTE: There have been complaints of system slowdowns when using a large hosts file. This is usually fixed by turning off and disabling the DNS Client in your Services control panel under Administrative Tools. The DNS Client caches previous DNS requests in memory to supposedly speed this process up, but it also reads the entire HOSTS file into that cache, which can cause a slowdown. This service is unnecessary and can be disabled.

There are ready-made HOSTS files that you can download which contain large lists of known ad servers, banner sites, sites that give tracking cookies, contain web bugs, or infect you with hijackers. Listed below are web sites that produce these types of hosts files:

hpguru's HOSTS File can be found here: http://www.hosts-file.net/
The MVPS Host File can be found at: http://www.mvps.org
The Hosts File Project can be found here: http://remember.mine.nu/

If you choose to download these files, please back up your original by renaming it to hosts.orig and saving the downloaded HOSTS file in its place. Using a HOSTS file such as these is highly recommended to protect your computer.

Utilities for your HOSTS file

If you do not plan on modifying your HOSTS file much and will use it only occasionally for testing purposes, then basic text editors like VI, Notepad, and Edit are more than adequate for managing it. If, on the other hand, you plan on using the HOSTS file extensively to block ads/spyware or for other reasons, then there are two tools that may be of use to you.

eDexter - When you block ads on web sites using a HOSTS file, there tend to be empty boxes where the ads would normally have appeared. If this bothers you, you can use the program eDexter to fill in the space with an image from your local machine, such as a clear image or any other one for that matter. This removes the empty boxes and is quick because the replacement image is loaded off your hard drive.

Hostess - Hostess is an application used to maintain and organize your HOSTS file. It reads your HOSTS file and organizes the entries into a database, which you can then use to scan for duplicates and manage the entries. It is definitely worth checking out if you plan on using the HOSTS file extensively.

As you can see, the HOSTS file is a powerful tool if you understand how to use it. You should now know how to use the HOSTS file to manipulate Domain Name Resolution to suit your needs.
It is also important that you use its ability to block malicious programs, as discussed above, to make your computing environment more secure. As always, if you have any comments, questions or suggestions about this tutorial, please do not hesitate to tell us in the computer help forums.

Bleeping Computer Basic Internet Concepts Series
BleepingComputer.com: Computer Support & Tutorials for the beginning computer user.

04/09/04: Added information about hpguru's host file and http://remember.mine.nu/. Warned about potential slowdowns caused by large hosts files and how to fix them. Updated information that changing the hosts file to read-only may not stop hijackers from changing information. Added info about the Hostess host file manager and - Thanks to CalamityKen
<urn:uuid:54618efa-1e56-4d69-8ee2-dd29cc2ad093>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/tutorials/hosts-files-explained/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00531-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937973
2,831
3.984375
4
Teens Learn to Teach Seniors About the Internet

The project aims first to build trust between the seniors and the teens by collecting and recording video stories of the seniors and placing them on a special Website, where the seniors will be able to access them later, he said. Through that site, project planners hope the seniors will have an incentive to learn to go online so they can share their stories. The teens will receive a $250 stipend for taking the course and another $250 after they participate in the senior Internet training project, said Villasenor. But while the teens will earn some cash for participating, the money is not the prime reason for their involvement, he said. "I was surprised by their motivations," he said. "I went in assuming that the biggest motive would be the parents forcing the kids to do something productive or for the money, but the majority of the teens say they want to get the skills [from the project] for future work opportunities," said Villasenor. "A good portion of the kids want to do it for the community. I have high expectations for the class."

In February 2014, Google unveiled plans to potentially bring its services to another 34 communities across nine metro areas of the nation, according to an eWEEK report. The 34 additional communities—which are clustered around the Atlanta; Charlotte, N.C.; Nashville, Tenn.; Phoenix; Portland, Ore.; Raleigh-Durham, N.C.; San Antonio; Salt Lake City; and San Jose, Calif., metro areas—will be invited to work with Google Fiber to see if they are interested in having the Gigabit-speed cable TV and Internet services brought to their communities for new subscribers. The communities and their potential participation will be reviewed over the next year. Not all of the 34 communities that will now be in discussions with Google for Fiber service will ultimately get it in this round.
<urn:uuid:6b78d9d1-5848-4b9b-bc6b-e90efeb8cdf2>
CC-MAIN-2017-04
http://www.eweek.com/cloud/teens-learn-to-teach-seniors-about-the-internet-2.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00439-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962321
398
2.5625
3
There's another prototype meant to enhance security, but will it too eventually turn into an assault on privacy? Researchers in London have devised a stealthy system that gives off no radio waves so it can't be detected, but by sniffing Wi-Fi signals it can pinpoint a person's movement inside a building. University College London scientists Karl Woodbridge and Kevin Chetty developed this suitcase-sized prototype, which has been successfully tested through a one-foot-thick brick wall to determine "a person's location, speed and direction." PhysOrg added, "See Through The Wall (STTW) technologies are of great interest to law enforcement and military agencies; this particular device has the UK Ministry of Defence exploring whether it might be used in 'urban warfare,' for scanning buildings. Other more benign applications might range from monitoring children to monitoring the elderly."

"Fundamentally, this is a radar system - you're just using radio waves that have been emitted by an external WiFi router, rather than creating your own," explained ExtremeTech. "Compare this with MIT's through-the-wall (TTW) radar, which is 8 feet (2.4m) across and requires a large power source to generate lots and lots of microwaves."

1. MOVING SUBJECT: When Wi-Fi radio waves bounce off a moving object, their frequency changes. If, for example, a person is moving toward the Wi-Fi source, the reflected waves' frequency increases. If a person is moving away from the source, the frequency decreases.
2. REGULAR OL' ROUTER: A Wi-Fi Internet router already in the room fills the area with radio waves of a specific frequency, usually 2.4 or 5 gigahertz.
3. BASELINE SIGNAL: One antenna of the radar system tracks the baseline radio signal in the room.
4. SHIFTED SIGNAL: A second antenna detects radio waves that have reflected off moving objects, which changes their frequency.
5. PERP, SPOTTED: By comparing the two antennas' signals, the computer calculates the object's location to within a few feet, as well as its speed and direction.

If you think the answer would be to hold perfectly still in order to avoid detection, to trick it into thinking you are nothing more than a piece of furniture, think again. As Engadget previously pointed out, engineers at the University of Utah developed a wireless network capable of seeing through walls to detect and monitor breathing patterns. In that case, it's not meant to be a surveillance system, but an inexpensive way to monitor patients' breathing.

As more covert surveillance tech that was previously science fiction becomes real-life technology, it will continue to clash with civil liberties. In the military, where soldiers' lives are on the line, seeing through walls could be a good thing. In the case of a bank robbery or another crime where regular surveillance cameras have been disabled, this tech could again be used for good. However, as we've seen historically, technology that starts off for military or law enforcement use often bleeds out onto the public for covert surveillance. One example is Z Backscatter: full-body scanners in the form of mobile X-ray scanning units, covertly driving around on streets, that can scan you without you ever knowing it happened. Another example is Homeland Security's portable molecular-level scanning devices that can see through clothing at 164 feet away. It could scan everyone at airports without anyone knowing it happened.
DHS said "this scanning technology will be ready within one to two years." Here's the abstract for the research paper about seeing through walls by using Wi-Fi signals:

In this paper, we investigate the feasibility of uncooperatively and covertly detecting people moving behind walls using passive bistatic WiFi radar at standoff distances. A series of experiments was conducted which involved personnel targets moving inside a building within the coverage area of a WiFi access point. These targets were monitored from outside the building using a 2.4-GHz passive multistatic receiver, and the data were processed offline to yield range and Doppler information. The results presented show the first through-the-wall (TTW) detections of moving personnel using passive WiFi radar. The measured Doppler shifts agree with those predicted by bistatic theory. Further analysis of the data revealed that the system is limited by the signal-to-interference ratio (SIR), and not the signal-to-noise ratio. We have also shown that a new interference suppression technique based on the CLEAN algorithm can improve the SIR by approximately 19 dB. These encouraging initial findings demonstrate the potential for using passive WiFi radar as a low-cost TTW detection sensor with widespread applicability.

The paper, "Through-the-Wall Sensing of Personnel Using Passive Bistatic WiFi Radar at Standoff Distances," is behind a paywall for $31, so I didn't read it.

Like this? Here are more posts:
- EFF: Americans may not realize it, but many are in a face recognition database now
- HOPE 9: Whistleblower Binney says the NSA has dossiers on nearly every US citizen
- NSA Whistleblower Drake: You're automatically suspicious until proven otherwise
- Mobile Phone Surveillance Out of Control: Cops Collected 1.3 Million Customer Records
- Perfect, persistent, undetectable hardware backdoor
- DEFCON Kids: Hacking roller coasters and the power grid with cell phones
- Gov't surveillance 'unreasonable' & violated the 4th amendment 'at least once'
- Kingpin aka Joe Grand of Prototype This: The Birth of Hardware Badge Hacking
- NSA claims it would violate Americans' privacy to say how many of us it spied on
- Hacking Humanity: Human Augmentation on the Horizon
- Black Hat: Microsoft incorporates BlueHat Prize finalist defensive tech & releases EMET 3.5 Preview
- Going Dark in the Golden Age of Cyber-Surveillance?
- Microsoft BlueHat Prize Winners

Follow me on Twitter @PrivacyFanatic
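As a numerical aside on the abstract above: the Doppler shifts involved are tiny, which is why interference suppression matters so much. Here is a back-of-the-envelope Python sketch using the simpler monostatic approximation (the paper's bistatic geometry adds an angle-dependent factor):

    # Approximate Doppler shift for a person walking through a 2.4 GHz Wi-Fi field.
    # Monostatic approximation: f_d = 2 * v * f / c.
    C = 3.0e8        # speed of light, m/s
    F_WIFI = 2.4e9   # Wi-Fi carrier frequency, Hz

    def doppler_shift_hz(speed_mps):
        return 2.0 * speed_mps * F_WIFI / C

    print(doppler_shift_hz(1.0))  # ~16 Hz for a 1 m/s walking pace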
<urn:uuid:30f238e5-385b-47bd-873b-57df68f554b0>
CC-MAIN-2017-04
http://www.networkworld.com/article/2222896/microsoft-subnet/stealthy-wi-fi-spy-sees-you-through-walls-thanks-to-your-wireless-router.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00439-ip-10-171-10-70.ec2.internal.warc.gz
en
0.937473
1,263
2.53125
3
Hi Nathan -

On Thu, 31 Jul 2014 21:40:35 -0600, Nathan Andelin wrote:
Nathan called it a sub-domain (and you agreed)... It's not a sub-domain; it's a host. Only hosts can have IP addresses.

Charles, you appear to be talking about setting up "A" records (aka DNS host records), which I would call mapping sub-domain names to IP addresses. For a definition of sub-domains see:

I began learning about sub-domain names after having to pay what appeared to me to be an outrageous premium for a wildcard certificate for *.my-domain.com, where the asterisk refers to "all" sub-domains of a domain.

Charles is correct, and in fact that Wikipedia article agrees. To quote:

|A resource record, such as A (host), CNAME (alias) or MX (mail), should
|not be confused with a subdomain node. A subdomain does not point to
|any specific server location, while most resource records do
|(resource records that do not point to specific hosts contain

Opinions expressed are my own and do not necessarily represent the views of my employer or anyone in their right mind.
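For readers following the thread, a zone-file sketch makes the host vs. sub-domain distinction concrete; the names and addresses below are illustrative only:

    ; Fragment of a hypothetical zone for my-domain.com
    ; "www" is a host: an A record maps the name to an IP address.
    www   IN  A   192.0.2.10
    ; "dev" is a true sub-domain: an NS record delegates it to another
    ; name server, without pointing at any specific host itself.
    dev   IN  NS  ns1.example.net.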
<urn:uuid:cd8867e3-369f-4221-ac01-5daded0939f7>
CC-MAIN-2017-04
http://archive.midrange.com/midrange-l/201408/msg00015.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00255-ip-10-171-10-70.ec2.internal.warc.gz
en
0.861534
260
2.515625
3
No anonymity is the future of the web, in the opinion of Google CEO Eric Schmidt. He said many creepy things about privacy at the Techonomy Conference, the focus of which was how technology is changing and can change society. Schmidt's message was that anonymity is a dangerous thing and governments will demand an end to it. In a video interview with Julia Boorstin, CNBC correspondent, Schmidt stated (starting at 5:13): "Privacy is incredibly important. Privacy is not the same thing as anonymity. It's very important that Google and everyone else respects people's privacy. People have a right to privacy; it's natural; it's normal. It's the right way to do things. But if you are trying to commit a terrible, evil crime, it's not obvious that you should be able to do so with complete anonymity. There are no systems in our society which allow you to do that. Judges insist on unmasking who the perpetrator was. So absolute anonymity could lead to some very difficult decisions for our governments and our society as a whole."

Whether it was a Freudian slip or a simple misstatement, Schmidt is correct; it is not obvious that if you are anonymous, you are therefore likely to commit a "terrible, evil crime." Yet anonymity equaling a future heinous act seems to be the direction some online security experts are headed. The National Strategy for Trusted Identities in Cyberspace proposes to do away with anonymous multiple identities in favor of one real identity. Part of the reasoning behind one trusted identity is to do away with crime. But isn't this the same logic of anonymity breeding anti-social behavior and criminals? According to ReadWriteWeb, Schmidt said of anti-social behavior, "The only way to manage this is true transparency and no anonymity. In a world of asynchronous threats, it is too dangerous for there not to be some way to identify you. We need a [verified] name service for people. Governments will demand it."

Since Google's CEO has proclaimed the future of the web is no anonymity, does that make it a fact? If we keep hearing that privacy is dead and long buried, how long before we accept that anonymity is an anti-social behavior and a crime? Security expert Bruce Schneier suggests that we protect our privacy when we are thinking about it, but give it up when we are not. Schneier wrote, "Here's the problem: The very companies whose CEOs eulogize privacy make their money by controlling vast amounts of their users' information. Whether through targeted advertising, cross-selling or simply convincing their users to spend more time on their site and sign up their friends, more information shared in more ways, more publicly means more profits. This means these companies are motivated to continually ratchet down the privacy of their services, while at the same time pronouncing privacy erosions as inevitable and giving users the illusion of control."

The loss of anonymity will endanger privacy. It's unsettling to think "governments will demand" an end to anonymous identities. Even if Schmidt is Google's CEO, his message of anonymity as a dangerous thing is highly controversial. Google is in the business of mining and monetizing data, so isn't that a conflict of interest? Look how much Google knows about you now. Bruce Schneier put it eloquently: "If we believe privacy is a social good, something necessary for democracy, liberty and human dignity, then we can't rely on market forces to maintain it." Like this?
Check out these other posts:
- All of today's Microsoft news and blogs
- Will Future Virtual Intelligence & Precrime Predictions Kill Privacy?
- Marketing Gone Wild: One Product Helps You Stalk, One Stalks You
- Rogue Security Researchers vs Microsoft: Karma Is Brutal!
- Verizon's 2010 DBIR: Rise in Misuse, Malware and Social Engineering
- The Next Big Privacy Concern: RFID “Spychips”
- Certified Lies: Big Brother In Your Browser
- Privacy Wars: How to Hide While Google is Watching You
- Report: Microsoft Cut Privacy Features to Sell Ads in IE8
- EFF Fights To Allow People To Comment Anonymously Online

Follow me on Twitter @PrivacyFanatic
<urn:uuid:4f05943d-5196-4554-bf80-a388e1359e1d>
CC-MAIN-2017-04
http://www.networkworld.com/article/2231573/microsoft-subnet/google-ceo-schmidt--no-anonymity-is-the-future-of-web.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00465-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942803
885
2.546875
3
We are very confused about the meaning of the word "information." And that’s for two good reasons. First, it’s a really important word, and important words are almost always stretched to the ripping point as they struggle to cover topic after topic after topic. (Perhaps this indicates that topics are more different, and less susceptible to uniform explanations, than we think.) Second, the word "information" became important because a particular genius—Claude Shannon—took it out of everyday parlance and used it in a very different way in his theory. It’s as if Einstein had used the word "deliciousness" instead of "relativity," so now when we talk about Swiss Chocolate Almond ice cream we’re not sure if ... OK, skip the analogy. The point is that Shannon gave "information" a mathematical, probabilistic sense that had little to do with what we’d meant by the term before that. His theory was so powerful that it got applied to everything from DNA to black holes, and thus the term "information" got spread around and intermingled with the ordinary sense, along with the communication theory sense and the computer science sense, until it became a hodge-podge word. It has some precise meanings within particular limited fields, but if you try to define the term as it’s used in the phrase "The Information Age," I bet you can’t in a way that covers everything we mean by it. Throughout the Information Age, however, the term has also retained its original meaning. That meaning is hard to pin down, too, but only in the usual way that words escape their definitions. Its normal sense hasn’t changed much in the past 150 years. In fact, the memoirs of Charles Babbage are a good place to use as a source. Babbage is the English inventor and mathematician who is credited with designing gear-based computers, starting in the 1820s, that anticipated the modern computer with eerie precision. In fact, that’s a very bad misreading of Babbage, in my opinion, but that’s a different hobby horse to ride. In his memoirs, written in the 1860s, Babbage uses the word "information" 28 times. In most of those instances, he means something quite ordinary, such as when he says he asked some young classmates how to invoke the devil, and they gave him that information. We can get all twisted up in trying to figure out what are the defining characteristics of the class of statement called "information," but it’s really much simpler than that. In most of those instances, Babbage means information to be simply something about the world that he did not know before. And that remains one of our usual senses of the term as well. But a second meaning shows up in Babbage’s memoirs. For example, to help the British railway system decide what the distance between the rails should be, Babbage set up a metering system to measure the sway of railway coaches. The data that his instruments produced he casually refers to as "information." And that does indeed refer to a special class of knowledge: what fits into a table. The importance of table-based information cannot be over-emphasized. Before computers, tables of numbers were crucial to applying mathematics. If you wanted to know the angle at which to aim your artillery, you had to look it up in a table. Galileo himself created and sold artillery tables. In fact, tables could themselves be an instrument of computation. For example, in 1684, Edmond Halley noticed a pattern of recurrence in the appearance of a comet. Yet it didn’t come back as regularly as it should. 
Halley thought that perhaps this was because of the subtle gravitational forces of the planets. But he couldn’t figure out how to calculate that, so he went to Newton himself. The "three body problem" was too hard, Newton said, demurring. It took three French aristocrats spending an entire summer filling in a table, manually calculating the gravitational forces’ effect on the comet at step-by-step intervals, to confirm that Halley’s comet was indeed a single heavenly object. But there was a problem with tables. Because they were created by humans, they were error prone. They’d issue errata sheets, and then errata sheets for the errata sheets. In fact, the French tables that predicted Halley’s comet’s return only worked because—it was discovered later—the copious errors canceled one another out. Jonathan Swift declared mathematics to be a dim science precisely because we would never get the tables right. But Swift didn’t count on Adam Smith. When the French government, after the Revolution, ordered new logarithmic and trigonometric tables created to reflect the new metric system, Gaspard de Prony used Smith’s description of the division of labor to structure the process. De Prony broke down the task of computing tables into a few simple steps, most of which could be performed by workers who only had to know how to do basic math. In fact, many of the people he hired were former hairdressers to aristocrats who had lost their hair because they had lost their heads. De Prony manufactured tables the way factories manufactured pins, he said. Babbage explicitly characterizes his own mechanical computers the same way: They were intended to be factories for the complete and error-free manufacturing of tables. (David Alan Grier’s When Computers Were Human is a terrific history of tables.) Table-based information is familiar to us. Tables pare a topic down to a simple set of repeating parameters. They are designed for fast retrieval. They are unambiguous. They are intensely useful. They are basically what we find in computerized databases. But if information stayed that simple, we wouldn’t be calling this the Information Age. For that, we had to take information as standing for something far more important. In the Age of Information, information becomes the very stuff of consciousness and even (according to some) of physics. Tables are too humble to carry such a burden. So, we know what information was. We know what it still is, within circumscribed areas. But this is the Information Age, not the Age of Electronic Tables. Information means something much more than what it meant to Babbage. The problem is that we have an entire age named after it, and we still can’t say what it is. We have reconstructed our understanding of our world and ourselves using a term we don’t understand and don’t agree about. What a species.
<urn:uuid:07f1ff98-0c12-4eed-bc4e-0b13b2ac1de2>
CC-MAIN-2017-04
http://www.kmworld.com/Articles/Column/David-Weinberger/The-ambiguity-of-information-50882.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00375-ip-10-171-10-70.ec2.internal.warc.gz
en
0.972451
1,392
2.6875
3
Automatic derivation of meta-information broadens search technique capabilities and can improve results by up to 400%.

When ZyLAB was founded in 1983, full-text retrieval was a new technology whose application and relevance in the marketplace were untested. Although its basic algorithms originated in the 1960s, full-text search was still not broadly viewed as a trustworthy enhancement to traditional key-field searching on meta-information (i.e. information about information in a file). Acceptance of these new tools was slow to materialize and only came about after heavy market evangelism by some early adopters who envisioned the future potential of these tools. But as we know, perceptions change fast in technology. By the late 1990s, the increasing capacity of computers and further sophistication of search algorithms enabled Internet search engines to realize the powerful potential of full-text search. Full-text retrieval had become the de facto standard for search, and, perhaps as a result, a lot of people no longer felt there was a need to add and search additional metadata.

Beyond the Google Standard

Now, an entire generation of tech-savvy computer users exists whose expectations and perceptions of full-text search functionality and performance are almost completely influenced by the "Google effect." In most instances, this type of approach works fine if users only need to find the most appropriate website for answering general questions. Users type in full-text keywords and expect to see the most relevant document or website appear at the top of a result list. Page-link and similar popularity-based algorithms work very well in this context. But problems arise when users view this searching model as the default approach to finding any kind of information. People who have become conditioned to viewing search through the prism of Google-type approaches often are not interested in, or even aware of, other search techniques. However, a lot of information that may be vital for them to know may not come to light using only these basic search techniques. If, for example, a user's search is related to fraud and security investigations, (business) intelligence, or legal or patent issues, other searching techniques are needed that support different sets of issues and requirements, such as the following:

Focusing on optimized relevance. The first requirement of broader search applications is that not only does the best document need to be found, but all potentially relevant documents need to be located and sorted in a logical order, based on the investigator's strategic needs. "Popularity-based" results generated by Internet search engines cannot support these criteria. Consider all the criminal elements that have vested interests in keeping themselves and their activities anonymous. Many of these people understand how basic search engines work and how to minimize their exposure to these search mechanisms so that they don't appear at the top of results lists.

Handling massive data collections. Another issue impacting effective strategic searching is how to conduct extensive searches among extremely large data collections. For example, if email collections need to be investigated, these repositories are no longer gigabytes in size; rather, they can be a terabyte or more. When handling this volume of data, plain full-text search simply cannot effectively support finding, analyzing, reviewing and organizing all potentially relevant documents.

Finding information based on words not located in the document.
In this context, consider investigators who may have some piece of information concerning an investigation but don't necessarily know what other details they may be looking for. Who is associated with a suspect? What organizations are involved? What aliases are associated with bank accounts, addresses, phone records or financial transactions? Traditional precision-focused, full-text approaches are not going to help users find hidden or obscure information in these contexts. The searching framework must take into account additional information, which can be obtained by using text analytics to extract meta-information from the original document to provide other insights.

Defining relevancy. When defining a search's relevance, all factors that could be in play during a specific search instance must be accounted for (in the context of overall goals). Using the investigative example again, consider the possible involved parties and what "relevance" would mean to their actual search:

- Investigators want to comb documents to find key facts or associations (the "smoking gun");
- Lawyers need to find privileged or responsive documents;
- Patent lawyers need to search for related patents or prior art;
- Business intelligence professionals want to find trends and analyses; and
- Historians need to find and analyze precedents and peer-reviewed data.

All of these instances require not only sophisticated search capabilities but also different context-specific functionalities for sorting, organizing, categorizing, classifying, grouping and otherwise structuring data based on additional meta-information, including document key fields, document properties and other context-specific meta-information. Utilizing this additional information will require a whole spectrum of additional search techniques, such as clustering, visualization, advanced (semantic) relevance ranking, and automatic document grouping and categorization.

New Expectations for Search Performance

The insights mentioned above have been confirmed by various scientific research. For instance, studies presented at the TREC 2007 legal track (http://trec-legal.umiacs.umd.edu/) concluded that traditional keyword and Boolean searches (such as those found in Internet search engines) found only 20% of the relevant information present. Again, for many common uses, finding the best 20% of documents is usually enough, but if need dictates that all potentially relevant documents must be found, 20% isn't going to get the job done. (This result is in line with the findings of the seminal Blair and Maron study in the '80s. There, highly qualified lawyers and paralegals thought they had found 75% of the relevant documents in a specific case, but in reality they had found only 20%. One could conclude that the performance of Boolean keyword searching has not improved in 30 years.)
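To make the 20% figure concrete: it is a statement about recall, the fraction of truly relevant documents that a search actually retrieves. A quick Python sketch with invented counts:

    # Recall = relevant documents found / all relevant documents in the collection.
    def recall(found_relevant, total_relevant):
        return found_relevant / total_relevant

    # Hypothetical case: 1,000 truly relevant documents exist, and a
    # Boolean keyword search surfaces 200 of them.
    print(recall(200, 1000))  # 0.2 -> the 20% figure cited above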
<urn:uuid:474bd565-e93a-4b17-a881-fd9bfdc83878>
CC-MAIN-2017-04
http://www.kmworld.com/Articles/ReadArticle.aspx?ArticleID=48837
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00375-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936178
1,216
2.53125
3
A new IBM chip-making technology has the potential to dramatically increase the memory capacity and processing speeds of chips used in routers and switches that support fiber-optic and wireless networks. The idea is to offer manufacturers and service providers the ability to handle the data deluge driven by smart phones and other Web-connected devices, IBM stated. IBM says the new chip process, known as Cu-32, is some 15% faster than its current process and can enable the following:
- Cellular infrastructure that can move one year's worth of text messages (six trillion, worldwide in 2010) in less than ten seconds
- A consumer downloading a feature-length film on a smart phone in less than ten seconds, or an HD version in under a minute
- Routers that can stream every motion picture ever produced in less than one minute

IBM says a suite of new high-speed serial (HSS) cores gives Cu-32 chips the ability to network with more than a dozen different interface standards. These cores were developed to provide jitter performance and equalization support for enhanced system performance with the lowest possible bit error ratio. Cu-32 offers the industry's first set of HSS cores in 32nm SOI technology, including a 15G backplane core supporting the 16G Fibre Channel standard and a 15G chip-to-chip core supporting low-power optical and chip-to-chip applications.

The need for such technology is obvious, IBM says: "The number of people using the internet has doubled in the past five years, with two billion logging on in 2010. Smart phones, game consoles, digital TVs, GPS devices and MP3 players are among the consumer gadgets that now ride the internet. As the world's infrastructure gets further digitized, connected and monitored, vast arrays of machine-to-machine sensors are also beginning to use the internet to transmit data on commuter traffic, buildings' energy usage or the health of newborn infants, for example. Manufacturers of communications infrastructure will increasingly need breakthrough semiconductor technologies such as Cu-32 to keep up with the demand to secure, store and move an ever-growing amount of web traffic."

Follow Michael Cooney on Twitter: nwwlayer8
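As a rough sanity check on the scale of the text-message claim above, here is a sketch of the implied throughput; the 140-byte SMS payload is an assumed figure for illustration:

    # Implied sustained throughput for six trillion text messages in ten seconds.
    SMS_PAYLOAD_BYTES = 140   # assumed maximum SMS payload size
    messages = 6e12           # six trillion messages (worldwide, 2010)
    seconds = 10

    bits = messages * SMS_PAYLOAD_BYTES * 8
    print(f"{bits / seconds / 1e12:.0f} Tbps")  # roughly 672 Tbps sustained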
<urn:uuid:4b02e056-15d5-4add-9a4c-8b82ab4a4d75>
CC-MAIN-2017-04
http://www.networkworld.com/article/2227696/security/ibm--new-chip-making-technology-will-boost-internet.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00401-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932998
458
2.734375
3
Women aged 25 to 34 are most likely to fall victim to online scams, according to research published today. The research was commissioned by an online advice site, knowthenet.org.uk, to build up a picture of the likeliest online scam victims. It measured the ability of more than 2,000 consumers to spot and respond appropriately to seven online scam scenarios. The tests ranged from identifying fake Facebook pages to testing how consumers respond to competition scams or the sale of counterfeit goods online. In six out of the seven tests, women proved the most likely to fail, and most of those who failed were in the 25-34 age group. However, the most likely victim depended on the type of scam. For example, among those who fell for confidence tricks, 53% were men. With internet scams on the rise, this means anyone, whether they use the internet regularly or not, could be at risk, the research concluded. "Scammers are becoming more devious in how they target victims and are constantly changing their attacks to reflect what people expect to see online or are interested in," said Peter Wood, security expert at knowthenet.org.uk. New tricks, such as pharming, work by redirecting the user's web browser, he said, so that when they type in a legitimate web address, they are redirected without knowing to a bogus site that appears genuine. "People then happily type in their personal details and don't know they are being scammed before it's too late," he said. The popularity of social networks such as Facebook also means many people give away far too much personal data on the web, said Wood, which can be a goldmine for scammers. Launched by Nominet, the knowthenet.org.uk site was developed to provide independent advice and support on getting started online, staying safe online, and doing business online. Online fraud affects 1.8 million Britons every year, costing the economy £2.7bn, according to National Fraud Authority research published in January 2010.
<urn:uuid:2ed69c84-1535-4c7d-8138-3013ca0b6ab4>
CC-MAIN-2017-04
http://www.computerweekly.com/news/1280094314/Young-women-are-the-most-likely-victims-of-online-scams-research-shows
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00025-ip-10-171-10-70.ec2.internal.warc.gz
en
0.973464
448
2.5625
3
The US Navy makes more efficient use of open systems technology in complex unmanned aircraft than its counterparts in the Army and Air Force. That was but one of the conclusions of a recent Government Accountability Office report that looked at the use of open systems in developing advanced military drones. From the GAO report: The services vary in the extent to which they have adopted open systems for DOD's 10 largest unmanned aircraft, with the Navy leading the other services. Three of the Navy's four current and planned unmanned system programs incorporated, or are planning to incorporate, an open systems approach from the start of development in key components of their Unmanned Aircraft System (UAS): the air vehicle, the ground control station, and payloads such as cameras and radar sensors. Conversely, none of the Army or Air Force drone programs incorporated the approach from the start of development because, according to Army and Air Force officials, legacy unmanned programs tried to take advantage of commercial off-the-shelf technology or began as technology demonstration programs, the GAO stated. That decision, however, has led to cost overruns and upgrade difficulties. The GAO noted that several of the programs are starting to incorporate open systems, primarily in the aircraft's ground control station during planned upgrades. "For example, the Army did not initially include an open systems approach for its three UAS programs, but has since developed a universal ground control station with open interfaces that each of its programs will use. None of the Air Force's three UAS programs were initially developed as an open system, and only one is being upgraded to include an open systems approach. Each of the programs that have adopted an open systems approach expects to achieve cost and schedule benefits, such as reduced upgrade costs and quicker upgrade times." Some other interesting items from the GAO open systems drone report include:
- Three of the Navy's four current and planned UAS programs - the Small Tactical UAS (STUAS), Triton, and Unmanned Carrier-Launched Airborne Surveillance and Strike (UCLASS) - which are less than 5 years old, included or are planning to include an open systems approach from the start of development for the key components of their systems. The Navy expects significant benefits in return, such as reduced development and integration time and costs, as well as increased future competition for new system payloads.
- In addition, Navy program and contractor officials noted that by having the rights and specifications to the payload interfaces, the program will be able to integrate and test third-party-designed payloads within a matter of days or months, as opposed to the years typically required to test new system payloads. Program officials also anticipate that they will be able to independently integrate at least 32 different payloads developed by 24 different manufacturers.
- The Army's three UAS programs - Hunter, Shadow, and Gray Eagle - were all initially developed as proprietary systems and did not include an open systems approach for the three key components: the air vehicle, ground control station, and payloads. Moreover, the Army's UAS ground control stations limited interoperability and resulted in the Army paying for ground control stations that provided similar capabilities.
- The Army eventually developed a common ground control station for the three UAS; however, the new station was still proprietary.
All three of the Army's UAS programs are now upgrading to a universal ground control station that incorporates an open architecture to address obsolescence issues and increase interoperability. According to Army program officials, ground control stations require continuous hardware and software upgrades as the technology becomes obsolete. Even though an open systems approach is being incorporated later in the programs' life cycles, officials believe the benefits - reduced obsolescence issues, reduced upgrade costs, and increased interoperability - outweigh the costs. For example, the Army's new universal ground control station will give Army operators in the field the ability to fly the Hunter, Shadow, and Gray Eagle from one ground control station. This was not possible with the Army's legacy ground control stations, which did not use open architectures.
- The Air Force has had limited success in modernizing its UAS to include open systems. For example, the Reaper plans to upgrade to an open ground control station, but the remainder of the system remains proprietary. The other two programs - the Predator and Global Hawk - included language in their planning documents stating their intention to introduce open system elements later in their respective life cycles. However, Predator's age and Global Hawk's fiscal constraints prevented them from adopting an open systems approach. As a result, the two systems remain largely proprietary and are now facing challenges sustaining and upgrading their systems.
- The Predator program began in 1994 as an advanced concept technology demonstration program and is one of the oldest systems in DOD's UAS portfolio. Program officials stated that the Predator's software is not modular and the program has no intention of modifying the software because the Air Force is planning to divest itself of Predator aircraft once more Reapers are fielded. Predator officials also noted that sustainment and obsolescence challenges remain a risk area for the program.
- Officials from the Global Hawk program, which started development in 2001, also stated that obsolescence is a major problem for that program, particularly for the ground control station. The program recently had planned to develop a new ground control station that utilized an open systems architecture. However, the Air Force cancelled the upgrade effort in 2013 due to what program officials described as fiscal constraints, even though it plans to use the aircraft through at least 2032. The Air Force is now planning to continue to maintain the legacy Global Hawk ground control station and communications system, but it will require upgrades and costly support.

The GAO said that while the Department of Defense has policies that direct programs to use an open systems approach, the Navy is the only service that largely followed the policy when developing its UAS. "In addition, while new open systems guidance, tools, and training are being developed, DOD is not tracking the extent to which programs are implementing this approach or if programs have the requisite expertise to implement the approach. Navy UAS program officials told us they relied on technical experts within Naval Air Systems Command to help develop an open systems approach for their programs.
Until DOD ensures that the services are incorporating an open systems approach from the start of development and programs have the requisite open systems expertise, it will continue to miss opportunities to increase the affordability of its acquisition programs," the GAO stated. In the end, the GAO recommended that the Air Force and Army implement open systems policies, that DOD develop metrics to track open systems implementation, and that the services report on these metrics and address any gaps in expertise. For its part, the DOD partially concurred with the GAO but stated that its current policies and processes are sufficient.
<urn:uuid:ced78bdb-7194-4936-a2d6-f22a1bd7e30d>
CC-MAIN-2017-04
http://www.networkworld.com/article/2225138/applications/when-open-source-and-drones-mix--us-navy-better-than-army-and-air-force.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00053-ip-10-171-10-70.ec2.internal.warc.gz
en
0.966617
1,377
2.5625
3
The definitive definition of Bitcoin from the experts for the kids (and adults, too). Mark Ranta, CTP, senior solutions consultant, ACI Worldwide "Once upon a time there was no such thing as computers, so everyone relied on paper for everything from talking with a family member overseas to paying for sweeties or getting a school bus. And then the computer was born and as computers became smarter and smarter, the need for paper slowly vanished. The new computers made almost everything we do easier, connecting us in ways we never thought imaginable. Bitcoin is part of this change from paper money to computer money." Max Excell, global business development manager at GBGroup "Bitcoins are a form of electronic currency. They cannot be seen or touched but can be swapped by trading them online – a bit like trading football cards. Bitcoins exist only on the web and are not controlled by any bank or government. They are created by a set of complex maths equations (‘mined’) online rather than being physically printed or minted. As more and more people begin to own them, their value increases. The amount of Bitcoins that will exist is strictly limited to 21 million. So, like trading cards, the more people start to collect and create a demand for it, the greater value each Bitcoin will have. "Bitcoin is a bit like the new ‘gold’ of the Internet – offering a currency that can be used to purchase goods or services (e.g. holidays through Expedia) anywhere in the world without the need to manage different exchange rates. "Leading Bitcoin expert Radoslav Albrecht predicts that by 2020 Bitcoin will handle as many transactions as PayPal does today. So, as people start to trust that the value of the currency will increase, they start to invest in it, further increasing demand. The Bitcoin market has clearly grown and its future potential is huge. But it is an independent currency, not yet subject to the regulations that cover other more traditional currencies. So it needs to shake off the image that it is mainly used by ‘baddies’ to fund criminal and terrorist activity. Some people don’t think regulation is a good thing, but we think it is, showing that the currency has started to ‘grow up’ which will further encourage ordinary people to invest in it. "Maybe it won’t be long before you get your pocket money in Bitcoin, paid over the internet into your digital piggy bank? Souheil Badran, SVP and GM of Digital River World Payments. "First introduced in late 2008, Bitcoin is a crypto-currency that represents a new digital form of money, and whose value has risen to $16 billion at some points over the last year. Rather than being regulated by a central bank or government, bitcoin is radically different to traditional currencies and has the potential to significantly change the way we pay online for goods and services by presenting users with a simple ecommerce experience. "Although it is still an emerging currency, bitcoin can play a major role in helping retailers to expand their online businesses globally, without the initial limitations of cross-border hurdles. However, since bitcoin has no central bank, there have been concerns about its stability. The market is addressing this through the evolution of exchange services, and retailers should consider these to help manage the possible risks."
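For readers curious what "mining" by solving maths equations looks like in practice, here is a toy proof-of-work sketch in Python. It is enormously simplified compared with real Bitcoin mining, which searches for SHA-256 hashes below a network-adjusted target, but the brute-force idea is the same:

    import hashlib

    def mine(block_data, difficulty=4):
        """Find a nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
        nonce = 0
        prefix = "0" * difficulty
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith(prefix):
                return nonce
            nonce += 1

    print(mine("toy block"))  # the brute-force search is the 'work' in proof-of-work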
Predictive analytics is the stream of advanced analytics which utilizes diverse techniques such as data mining, predictive modelling, statistics, machine learning and artificial intelligence to analyse current data and predict future outcomes. Predictive analytics is popular in fields ranging from retail, finance and healthcare to education. We at ALTEN Calsoft Labs have used predictive analytics to predict patient readmission within 30 days, diabetic retinopathy and length of stay in healthcare; student retention and performance in education; and revenue forecasting and product recommendations in the retail segment.

A typical predictive modelling workflow runs as follows. Once the dataset is available, it is sent for processing, cleaning up and so on. The refined dataset is split into train and test sets in the ratio of 70% and 30% respectively. The larger set forms the training dataset and is used to train the model, whereas the test dataset is used to evaluate the performance of the final model at the very end. There are many different learning algorithms, viz. Random Forest, Support Vector Machine (SVM), Naive Bayes, Artificial Neural Networks (ANN) and decision tree classifiers, which can be used for training the model. Techniques such as cross-validation are used in the model creation and refinement steps to evaluate the classification performance. The most popular tools used are Python, R, scikit-learn, SAS, Mathematica and Matlab. Once the model is ready, its performance is evaluated on the test data at the very end; a sketch of this workflow appears after this section. There are many techniques for evaluating the performance of a model, and they vary according to the type of model (regression, classification) and the problem domain.

As a complete solution to predictive modelling, the ALTEN Calsoft Labs Predictive Analytics Platform provides multiple micro-services for various data processes, analysis processes and, finally, data visualization processes. The Apache Hadoop software library is a framework that allows distributed processing of large datasets across clusters of computers using…
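As a concrete illustration of the 70/30 split, training and cross-validation steps described above, here is a minimal sketch in Python using scikit-learn. The synthetic dataset and all parameter values are assumptions for illustration only, not details from the original workflow.

# A minimal sketch of the train/test workflow described above, using
# scikit-learn and a synthetic stand-in for a refined, cleaned-up dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Split into 70% training and 30% test data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42)

# Train one of the algorithms mentioned (Random Forest) and evaluate it
# with cross-validation on the training set during model refinement.
model = RandomForestClassifier(n_estimators=100, random_state=42)
cv_scores = cross_val_score(model, X_train, y_train, cv=5)
print("Cross-validation accuracy: %.3f" % cv_scores.mean())

# Only at the very end is the held-out test set used.
model.fit(X_train, y_train)
print("Test-set accuracy: %.3f" % model.score(X_test, y_test))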
Do You Hear What I Hear? Part VII: Differentiated Services

The Differentiated Services architecture is a built-in feature of the Internet Protocol that allows the coding of traffic streams for special treatment. Continuing with our series on VoIP Quality of Service (QoS), our previous installments have looked at some of the key factors surrounding the quality of the voice connection:

Part I: Defining QoS
Part II: Key Transmission Impairments
Part III: Dealing with Latency
Part IV: Measuring "Toll Quality"
Part V: Integrated Services
Part VI: Resource Reservation Protocol

In our two most recent tutorials, we examined the Integrated Services (intserv) project of the Internet Engineering Task Force (IETF), and the Resource Reservation Protocol (RSVP) defined in RFC 2205, which is used by the intserv architecture to reserve network resources along the path from the sender to receiver. Another IETF development called Differentiated Services (diffserv) is also designed to support QoS requirements, but in a much different way.

As we discovered in our discussion on intserv, all nodes from the sender to the receiver must support that architecture, along with RSVP, in order for network resources to be successfully reserved. In other words, if you wish to send traffic from New York to Los Angeles, all of the intervening routers must understand RSVP in order to reserve the bandwidth that you need for this connection. If some router in the Midwest does not support RSVP, you immediately have a challenge on your hands. Thus, we could say that intserv does not scale well, and as the internetwork grows, the end-to-end communication becomes more complex.

Differentiated Services takes a different approach to the QoS challenge. Instead of relying upon some type of bandwidth reservation protocol, it leverages a little-used field within the Internet Protocol (IP) header to accomplish its goals. Three key RFC documents detail the operation of diffserv:

- RFC 2474: Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers
- RFC 2475: An Architecture for Differentiated Services
- RFC 3260: New Terminology and Clarifications for Diffserv

To quote from RFC 2474: "Differentiated services enhancements to the Internet protocol are intended to enable scalable service discrimination in the Internet without the need for per-flow state and signaling at every hop." In other words, diffserv could be described as a "built-in" solution, whereas intserv could be described as a "bolted-on" solution (with RSVP being the bolt-on component requiring flow states and per-hop signaling).

In a nutshell, the idea is to differentiate between multiple types of Internet service. Those services could be distinguished in a number of ways, including: quantitative network characteristics such as packet delay, jitter or loss; some type of a priority scheme based upon application requirements; or financial requirements and/or Internet service pricing levels.

The diffserv architecture defines network service provisioning policies that govern how the traffic streams are allocated a particular amount of available bandwidth, how that traffic is marked, and how it is conditioned upon entry into the network. The algorithms then classify packets for a specific traffic type, and mark the packets accordingly. Packets then receive a per-hop forwarding operation along the path from source to destination based upon that packet marking.
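Before examining the DS field in detail below, here is a minimal sketch of how an application on Linux might mark its own outbound packets with a DSCP codepoint. The codepoint value and the destination address are illustrative assumptions, and the IP_TOS socket option is platform-dependent.

# Marking outbound UDP datagrams with DSCP 46 (Expedited Forwarding,
# a codepoint commonly associated with voice traffic).
import socket

EF = 46                 # Expedited Forwarding codepoint (assumed for this example)
tos_byte = EF << 2      # the DSCP occupies the upper six bits of the old TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)

# Every datagram sent on this socket now carries DSCP 46 in its IP header,
# so diffserv-aware routers can apply the matching per-hop treatment.
sock.sendto(b"voice payload", ("198.51.100.10", 5004))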
The key to diffserv operation is a field within the IPv4 header called the Differentiated Services (or DS) field, and a corresponding field within the IPv6 header called the Traffic Class field. For IPv4, this field was previously known as the Type of Service, or TOS, field, which was relevant at the time that IP was designed in support of government and military networks, but became less useful (and often ignored by routers) as IPv4 moved into mainstream business applications. Similarly, the architects of IPv6 recognized the need for this type of QoS function, and built this functionality into the protocol from the ground up.

The DS field is eight bits in length, with six of these bits currently defined, and two bits that are not currently used by diffserv. The six defined bits are called the Differentiated Services Code Point, or DSCP. With six bits available, a total of 64 distinct codepoints are available, thus allowing 64 Internet service distinctions. These codepoints are divided into three pools: Pool 1, with 32 codepoints to be standardized by the IETF; Pool 2, with 16 codepoints that are intended for local or experimental use; and Pool 3, with 16 codepoints that are presently designated for experimental or local use, but may be used for standardized assignments in the future.

For specific details on the DSCP assignments, check out the Internet Assigned Numbers Authority (IANA) site at http://www.iana.org/assignments/dscp-registry. In addition, many other RFC documents have been written that describe specific diffserv applications and implementations, and can be found at the RFC Editor site, www.rfc-editor.org, with a search for "differentiated services."

In our next tutorial, we will look at another QoS-enhancing protocol, Multiprotocol Label Switching, or MPLS.

Copyright Acknowledgement: © 2005 DigiNet ® Corporation, All Rights Reserved

Mark A. Miller, P.E. is President of DigiNet ® Corporation, a Denver-based consulting engineering firm. He is the author of many books on networking technologies, including Voice over IP Technologies and Internet Technologies Handbook, both published by John Wiley & Sons.
In the Objects and Messages Tutorial you learned how to create instances of a class and send messages. In this chapter you will learn how to write classes for objects of your own, using the example Stopwatch class introduced in the Objects and Messages Tutorial. The first part of the discussion looks at the way a class program is built up, using relatively little syntax which is different from ANSI 85 COBOL. It is followed by sections in which you animate through the Stopwatch code. Time to complete: 45 minutes.

This tutorial starts with a look at the overall structure of a class program, using the Editor to examine the structure of Stopwatch. The class program consists of a set of nested programs. Nesting programs is a concept which was introduced to COBOL in the ANSI 85 standard. The sections below examine the elements of the class program in turn. Load stopwtch.cbl into the Editor, and as you read the explanations in the following sections, use the Editor to locate and examine the code.

Each class program starts with a class-id identifier and finishes with an end class clause. These bracket the outermost level of the nesting. The Stopwatch class looks like this:

 class-id. Stopwatch
     data is protected
     inherits from Base.
 ...
 end class Stopwatch.

The inherits from phrase identifies Stopwatch's superclass, and the data is protected phrase enables any subclasses of Stopwatch to inherit Stopwatch data. If this clause is omitted, or replaced by data is private, subclasses of Stopwatch cannot access inherited data directly. Inheritance of data is explained in more detail in the Inheritance Tutorial.

The class-control paragraph identifies the executable code files which implement classes used by the program. The superclass, the class itself, and every class which will be invoked from the class must be identified. To see the class-control paragraph for Stopwatch, scroll down the text edit pane until you get to the paragraph, located directly below tag S005. The class-control paragraph looks like this:

 class-control.
     Base is class "base"
     StopWatch is class "stopwtch"
     .

The is class clause associates each class name used in the program with the name of the executable file that implements it. On Object COBOL for UNIX, a class is guaranteed to be loaded before the class object receives its first message. Usually this occurs when you send the first message to the class object, but before the class object receives it. This is the same behavior as Object COBOL on NetExpress.

The class object program defines the data and methods for the class object. It is nested within the class program, immediately following the class program data division (if there is one). It looks like this:

 class-object.
 object-storage section.
 * class data
 ...
 * class methods
 end class-object.

The Object-Storage Section defines the class object data. The class object data can only be accessed from the class methods. It can also be inherited for direct access by subclasses (this depends on the contents of the Class-Id paragraph).

Each class method is a nested program. The code below shows an outline for a "new" method for Stopwatch.

 method-id. "new".
 ...
 linkage section.
 01 lnkWatch object reference.
 procedure division returning lnkWatch.
 * code to create and initialize a Stopwatch object.
     exit method.
 end method "new".

As with the class program itself, you can declare different types of data in the Data Division of the method. The DATA DIVISION header itself is optional.
Data declared here is only accessible to the code in this method. The data division can contain any of the following sections:

Working-Storage Section - variables used by the method for processing. Data in Working-Storage is never reinitialized between different invocations of the method. This Working-Storage data is also shared between all instances of the object - you can't rely on it not being overwritten by a different instance between invocations.

Local-Storage Section - variables needed to support recursive working by the method. When a method is called recursively, new Local-Storage data is created for each level of recursion. You have to initialize the data items within the method code; although VALUE clauses in Local-Storage are accepted by the Compiler, they have no effect at run-time.

Linkage Section - variables passed as parameters to and from the program.

The Procedure Division contains the code for the method. You terminate processing of the method with an EXIT METHOD statement. This returns processing to the program which invoked the method. Look at the "new" method in the Editor: this method uses a Linkage Section to return data from the method.

The object program defines the data and methods for instances of the class. It is nested within the class program. It looks like this:

 object.
 object-storage section.
 * instance data for the object.
 ...
 * Instance methods
 end object.

The only Data Division section that has any meaning in an object program is the Object-Storage Section. You can create other data sections, but the run-time behavior if you try to access the data in these sections is undefined. Any data you declare in the Object-Storage Section is accessible to all the instance methods, and may be inherited by instances of subclasses of the class. There is no Procedure Division in an object program, only methods. To write an initialization method for instances, write a method called "initialize", and then invoke it from the "new" method for the class after you have created an instance. To see the object program and data declarations for Stopwatch, look below tags B009 and B010, where the OBJECT header and Object-Storage Section are located.

Instance methods are nested inside the object program. Writing an instance method is exactly like writing a class method, with the only difference being the scope of data which the instance method can access: an instance method can access data declared in its own Data Division and in the Object-Storage Section of the object program. Look at the "start" method for Stopwatch: this method does not declare any data of its own, but makes changes to the object's state by altering data declared in the Object-Storage Section.

The code below summarizes the structure of an Object COBOL class, and recaps the material covered so far in this tutorial.

 class-id. Stopwatch
     inherits from Base.            *> Identification and inheritance

 class-control.                     *> Class-Control paragraph names the files
     Stopwatch is class "stopwtch"  *> containing the executables for
     Base is class "base"           *> each class.
     .                              *> Period terminates the paragraph.

 data division.                     *> Data Division header is optional.
 ...
 working-storage section.
 ...
 procedure division.                *> Procedure Division is optional. You can
                                    *> use it for class initialization.
     exit program.                  *> Terminates the Procedure Division.

 class-object.                      *> Defines the start of the class object.
 object-storage section.            *> Defines class object data.
 ...
 method-id. "new".                  *> Start of class method "new".
 ...
 end method "new".                  *> End of class method "new".
 end class-object.                  *> End of the class object.

 object.                            *> Start of the code defining the behavior
                                    *> of instances of the class.
 object-storage section.            *> Defines instance data.
 ...
 method-id. "start".                *> Start of instance method "start".
 ...
 end method "start".                *> End of instance method.
 end object.                        *> End of code for instances.

 end class Stopwatch.

This completes the summary of class structure. In the next section you will animate some of the Stopwatch code.

In this session, you will animate some of the code in the Stopwatch class, to see how classes and objects work. You are going to use the same programs as in the Objects and Messages Tutorial, but this time Stopwatch is compiled for animation so that you can see the code execute. To animate the Stopwatch class, compile timer.cbl and stopwtch.cbl for animation. Animator starts with the statement below tag T001 highlighted ready for execution. Then step through the code as follows:

1. Step the statement invoke StopWatch "new" .... This sends the "new" message to the Stopwatch class, and execution switches to the "new" method of the Stopwatch class.

2. Step invoke super "new" .... The mechanism for actually creating a new object (allocating the memory and returning an object handle) is inherited from the supplied class library, and this statement executes the inherited method. Some classes do not implement the "new" method at all, but rely on the inherited method. Those that re-implement it usually do so to send an initialization message to the new object. In this case we have overridden it to keep track of the number of instances created.

3. Step add 1 to osCount. Data item osCount is part of the class data, which is declared in the class Object-Storage Section.

4. Step the exit method statement to return from the method back to timer.cbl.

5. Step invoke wsStopWatch1 "start". Control switches to the "start" method of Stopwatch. Scroll up through the code to the Object header (between tags S030 and S035). Methods which appear after the Object header are instance methods, and can access data declared in the Object-Storage Section below the Object header. They can't access data declared in the class object (between the Class-Object and End Class-Object headers). The "start" method tests to see whether the stopwatch is currently running, and if it isn't, stores the current time in Object-Storage, in the startTime variable.

6. Step through the method, starting at if watchStopped, up to and including the exit method statement. Control returns to timer.cbl.

7. When execution reaches the second invoke StopWatch "new" ..., push the Perform Step keys. This creates a second stopwatch; using Perform Step saves you from having to step through all the "new" code a second time.

8. Step invoke Stopwatch "howMany". Execution switches to the "howMany" method of Stopwatch. This is a class method (between the Class-Object and End Class-Object headers), and returns the value in class data variable osCount.

9. Step through the method (move osCount to lnkCount) up to and including invoke wsStopwatch2 "start". Execution switches to the "start" method of Stopwatch. When you executed this method previously, you set watchRunning to true, but now it reads false. The reason is that each different instance of Stopwatch has its own unique data. The last time you executed this method, you sent the "start" message to the instance of Stopwatch represented by the handle in wsStopwatch1; this time you have sent it to a different instance, which has its own data. Control returns to timer.cbl.

At this point you have seen class object and instance object code executing, and how different instances have different data. You can animate the rest of the code if you are interested to see how the Stopwatch works. This concludes this tutorial on writing a class program.

In this tutorial you learned how an Object COBOL class program is structured, and how class objects and instance objects behave when the code executes. The next tutorial explains inheritance in more detail.

Copyright © 1999 MERANT International Limited. All rights reserved. This document and the proprietary marks and names used herein are protected by international law.
According to Stuart Paton, senior solutions architect with Cloudmark, the internet is moving swiftly to adopt a new technology for the underlying networking used to track nodes on the network and communicate. Because of this, he argues that the introduction of the IPv6 scheme could have a far-reaching impact on spam security.

"As an example, the primary method for stopping the majority of spam used by email providers is to track bad IP addresses sending email and block them - a process known as IP Blacklisting", he said. "With IPv6 this technique will no longer be possible and could mean that email systems would quickly become overloaded if new approaches are not developed to address this. This is one example, but there are other examples across the web", he added.

Paton went on to say that IPv6 has been designed to have a significantly larger number of available IP addresses than IPv4. "Fundamentally, this presents serious difficulties in tracking all of the IPs for any purpose - email sender reputation, denial of service, sources used for malicious sign-ups to websites, sources of click fraud attacks, influencing of search engine results, and many other scenarios", he explained.

Paton cites the example of the IPv6 address space being so large that it would be easy for spammers to use a single IP address just once to send a single email. Based on these new risks with IPv6, he says that Cloudmark advocates that ISPs do not initially need to be able to receive mail from IPv6 addresses except from their own customers. This would, he adds, ensure business continuity for ISPs and allow provisioning of ADSL/cable modems to continue. "This measure will also protect the IPv4 reputation system that is currently in use and working well", he noted.
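To make the scale problem concrete, here is a quick back-of-the-envelope calculation (ours, not Cloudmark's) showing why per-address tracking cannot keep up with IPv6:

# Address-space arithmetic behind the "one address per email" scenario.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

print(f"IPv4 addresses: {ipv4_total:,}")     # about 4.3 billion
print(f"IPv6 addresses: {ipv6_total:.3e}")   # about 3.4e38

# Even one standard /64 subnet, a size routinely delegated to a single
# customer, holds exactly as many addresses as the entire IPv4 Internet
# squared, so a one-use-per-address spammer never repeats an entry that
# a blacklist could catch.
print(2 ** 64 == ipv4_total ** 2)            # True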
Marques J.T. (University of Lisbon; Institute Desenvolvimento Sustentavel Mamiraua), Ramos Pereira M.J. (Institute Desenvolvimento Sustentavel Mamiraua; University of Aveiro) and 7 more authors. PLoS ONE, 2013:

Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance or net shyness can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day it was no longer advantageous to move the nets frequently. In bird surveys that could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey species present then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas. © 2013 Marques et al.

Dias S. (University of Lisbon), Moreira F. (University of Lisbon), Beja P. (University of Porto), Carvalho M. (University of Lisbon) and 5 more authors. European Journal of Wildlife Research, 2013:

The European turtle dove is both a highly valued game species and a species of conservation concern, which is declining due probably to a combination of habitat degradation and unsustainable hunting. Although declines seem to be less severe in the Mediterranean region, it remains uncertain the extent to which ongoing land use changes will negatively affect this species. This study examined this issue, by estimating the effects of landscape composition on the broad scale abundance pattern of breeding turtle doves in continental Portugal. Turtle doves were surveyed in the breeding seasons of 2002 and 2003, from 3160 point counts spaced at about 1-km intervals along 158 transects of about 20 km, evenly covering the country. The frequency of occurrence of turtle doves at each transect was used as a proxy of species abundance, and related using GAM modelling to 21 variables describing land cover and woody linear features (e.g., hedgerows and riparian galleries). Turtle doves were most abundant in north- and central-eastern Portugal, with high abundances also recorded in the regions around Lisbon and along the Guadiana valley. Abundances were positively related to forest cover, particularly by broadleaved forests and by pine stands without woody understory, to cover by permanent crops, and to the density of woody linear habitats. Results suggest that conservation of Mediterranean turtle doves requires policies and management strategies reversing the pervasive trends of forest management neglect and agricultural abandonment, while preserving hedgerows and riparian galleries in more intensive agricultural landscapes. © 2013 Springer-Verlag Berlin Heidelberg.
Businesses are slowly but surely integrating wireless technology and associated components into their wired infrastructure. This article steps the reader through wireless LAN applications, Bluetooth and the IEEE 802.11 standard, as well as security risks and the need for the security implementation team to address wireless threats.

Demand for wireless access to LANs is fueled by the growth of mobile computing devices, such as laptops and personal digital assistants, and a desire by users for continual connections to the network without having to "plug in." There will be more than a billion mobile devices by 2003, and the wireless LAN market grew to more than $2 billion in 2002, according to The Yankee Group.

Wireless applications are diverse. For example, the Microsoft campus is probably the largest 802.11b network in the world, with 15,000 people using wireless technology in 65 buildings. Offices aren't the only places where wireless networks are proving useful: many airports, conference centers and other public spaces are installing networks that allow laptop users to get connected without cables.

Another example of the application of wireless technology is the installation at Children's Hospital of Wisconsin. In the hospital's ICU, nurses, doctors and therapists go from patient to patient and need to be able to place orders for medications and treatment. In the past, this was accomplished by grabbing the patient's chart, writing the order and putting the chart where it could be retrieved at a later time. Today, the unit has more than a dozen wireless-enabled PCs.

Home networks provide yet another opportunity for the application of wireless technology. The Internet is fast becoming a mandatory utility that most professionals are provisioning for their homes and offices. It's as important as electricity, water, gas and telephone service to the operation of families and home businesses.

Wireless Security Risks

Along with the convenience of connectivity offered by wireless and portable devices, however, come increased security risks. Wireless transmissions are susceptible to interception and tampering. Portable devices with no fixed connection offer tempting wireless access points to hackers. Portable devices also contain valuable information and credentials. This information must be protected in case of theft or loss of a device.

The wireless world presents a far greater security risk than the wired world. There are two key aspects of security that are of particular concern: access control and privacy. Access control ensures that only authorized users can access sensitive data. Privacy ensures that transmitted data can be received and understood only by the intended audience.

Access to a wired LAN is governed by access to an Ethernet port for that LAN. Therefore, access control for a wired LAN often is viewed in terms of physical access to LAN ports. Similarly, because data transmitted on a wired LAN is directed to a particular destination, privacy cannot be compromised unless someone uses specialized equipment to intercept transmissions on their way to their destination. In short, a security breach on a wired LAN is possible only if the LAN is physically compromised.

Wireless threats include viruses. The majority of PDAs do not have anti-virus software installed on them. Further, the antivirus software on most desktop systems does not scan for viruses during the HotSync process. A survey by Information Security Magazine in January 2002 showed that 98 percent of all PDAs do not have antivirus protection. PDA virus threats have included viruses such as Phage.963, Vapor.741 and LibertyCrack.

With physicians and other business professionals increasingly having access to protected and sensitive information on wireless devices and networks, the area of wireless security cannot be overlooked. What is essential for wireless security is a scheme that:

- Bases wireless LAN authentication on device-independent items such as user names and passwords, which users possess and use regardless of the clients on which they operate.
- Uses WEP keys that are generated dynamically upon user authentication, not static keys that are physically associated with a client.

In 1999, the Institute of Electrical and Electronics Engineers (IEEE) ratified an extension to a previous standard. Called IEEE 802.11b, it defines the standard for wireless LAN products that operate at an Ethernet-like data rate of 11 Mbps, a speed that makes wireless LAN technology viable in enterprises and other large organizations. Interoperability of wireless LAN products from different vendors is ensured by an independent organization called the Wireless Ethernet Compatibility Alliance (WECA), www.wi-fi.com, which brands compliant products as "Wi-Fi." Dozens of vendors market Wi-Fi products, and organizations of every size and type are deploying wireless LANs.

Bluetooth is vying to be the de facto standard for the exploding wireless revolution. Bluetooth networks are created whenever two devices come within its 30-foot range. Bluetooth uses the radio waves located in the frequency band of 2.4 GHz (2400 to 2483.5 MHz), an increasingly popular (and crowded) slice of the spectrum. In this band, Bluetooth transmits voice and data at flows lower than 1 megabit per second. Although it has been around for just a couple of years, Bluetooth is steeped in history. According to the Gartner Group, Bluetooth will play a vital role in uniting the 70 percent of new cell phones and 40 percent of new PDAs accessing the Web by 2004. Bluetooth has backing from wireless giants Ericsson, Motorola and Nokia, along with Intel, Microsoft, 3Com, Lucent, IBM, Toshiba and another 2,000 companies. With Bluetooth, devices need not be in line-of-sight. Up to eight devices are supported by one Personal Area Network (PAN). By overlapping networks, up to 80 items can be linked.

Wireless LAN Security

The IEEE 802.11b standard includes components for ensuring access control and privacy, but these components must be deployed on every device in a wireless LAN. An organization with hundreds or thousands of wireless LAN users needs a solid security solution that can be managed effectively from a central point of control. Some cite the lack of centralized security as the primary reason why wireless LAN deployments have been limited to relatively small workgroups and specialized applications.

A client cannot participate in a wireless LAN until that client is authenticated. The IEEE 802.11b standard defines two types of authentication methods: open and shared key. The authentication method must be set on each client, and the setting should match that of the access point with which the client wants to associate.

The IEEE 802.11b standard defines two mechanisms for providing access control and privacy on wireless LANs: service set identifiers (SSIDs) and wired equivalent privacy (WEP). Another mechanism to ensure privacy through encryption is to use a virtual private network (VPN) that runs transparently over a wireless LAN.

You need to consider a number of factors before deploying wireless LAN technology. For example:

- Consider your data security needs before you deploy. The default settings might not be adequate for companies that handle confidential information.
- Don't install access points without investigating whether they are properly placed. Plan on extensive testing.
- Don't assume your IT staff is knowledgeable about wireless networking. Make sure that the people who install and manage your networks are aware of wireless networking's unique configuration issues. Training to cover wireless networks and security may be required for system and network administrators.
For many years, businesses and consumers have used anti-virus software to protect their computers from malware. As we see in the news every week, security breaches are now increasing in severity and regularity, and with IT being integral to how businesses work, security is a top priority for all business stakeholders. While some businesses continue to use anti-virus software, whitelisting software is becoming more popular and provides a more effective method of security.

Whitelisting and blacklisting prevent malware, but they do this in opposite ways. Blacklisting software (or anti-virus software) works by comparing files against a list of known threats. If a file is on the list, then it won't be allowed to execute. Whitelisting software, in comparison, works by having a list of allowed files and applications. If a file tries to execute that is not on this list, then it will not be allowed.

The difficulty with anti-virus software (or blacklisting) is that it can only protect against known threats. In 2014 alone, there were 317 million new pieces of malware according to a study by Symantec, which shows how fast threats are being created. Anti-virus software has to keep up and add these new threats to its lists, so if your business is attacked by a new threat that hasn't been seen before then it is powerless to stop it. Additionally, this places a heavy load on your network (downloading new virus definition updates to every PC), and a heavy load on your PCs (as they scan every file executed against a long list of virus definitions to see if it matches).

The key to whitelisting is that you only have to manage a very short list, and anything that is not on this list will not execute. This has a number of benefits from a security and management perspective:

Only trusted files and applications will execute, which means that any new threat will be automatically blocked - even if it has never been detected before.

By being able to define what programs can run across the business, it becomes much simpler to uphold and maintain your business's IT policy and have better control over your IT. Users cannot download unauthorised programs, personal programs or even unlicensed software without gaining IT administration permission.

Whitelisting also reduces human error and accidental security issues: any malicious files that are mistakenly downloaded or clicked will not run. As well as giving IT administrators peace of mind, it also helps give users peace of mind. Most statistics cite human error as a large factor in security breaches - due to accident or lack of IT and security knowledge - and whitelisting reduces the worry and risk for general users.

With any piece of software there are items to consider. A few considerations for whitelisting software include:

Management and maintenance - Whitelisting requires some maintenance as the 'whitelist' must stay up to date. If your business starts using a new application, this must be added to the list so it can be allowed.

Blocking desired software - If the whitelist has not been maintained, then new applications, or those that have not been identified as safe, will not be allowed to run. While this could cause some annoyance for users that have to request access, it could be considered an advantage by ensuring users only run business-approved apps and do not try to download unauthorised software. As internal issues and user errors make up a large proportion of security issues, having this control over what staff can and cannot access or download will give that additional level of security and help enforce an effective IT policy.

Testing - With any solution, the best fit will vary from business to business due to different requirements and internal procedures. One final consideration is that whitelisting may not be an appropriate approach for those that carry out a lot of testing and need to access and test lots of new applications.

Whitelisting is an effective element of security, but just having whitelisting software (or anti-virus) does not mean your business is 'secure'. Whitelisting is only one aspect of security and should be used as part of a larger defence-in-depth approach in combination with a mixture of other elements, such as firewalls, intrusion protection or behavioural analytics.

There is no 'one solution fits all' for business IT, and this includes security. If you want to discuss your business's security, please contact us and we would be happy to work with you to find the best solutions available. We also regularly write about IT solutions and trends - you can stay updated by following us on Twitter.
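To make the allow-by-default versus deny-by-default contrast concrete, here is a toy sketch in Python of the whitelisting check described above. The hash value and file paths are placeholders, not references to any real product or application.

# Deny-by-default execution check: a file may run only if its SHA-256
# hash appears on the approved list.
import hashlib

APPROVED_HASHES = {
    # Placeholder hash of an approved application binary.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_allowed(path):
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in APPROVED_HASHES

# Anything not on the list is blocked, including brand-new malware that
# no blacklist has ever seen; the trade-off is that the list itself must
# be maintained as approved applications are added or updated.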
Lonnroth K. (World Health Organization), Migliori G.B. (Collaborating Center for Tuberculosis and Lung Diseases), Abubakar I. (Public Health England), D'Ambrosio L. (Collaborating Center for Tuberculosis and Lung Diseases) and 70 more authors. European Respiratory Journal, 2015:

This paper describes an action framework for countries with low tuberculosis (TB) incidence (<100 TB cases per million population) that are striving for TB elimination. The framework sets out priority interventions required for these countries to progress first towards "pre-elimination" (<10 cases per million) and eventually the elimination of TB as a public health problem (less than one case per million). TB epidemiology in most low-incidence countries is characterised by a low rate of transmission in the general population, occasional outbreaks, a majority of TB cases generated from progression of latent TB infection (LTBI) rather than local transmission, concentration to certain vulnerable and hard-to-reach risk groups, and challenges posed by cross-border migration. Common health system challenges are that political commitment, funding, clinical expertise and general awareness of TB diminishes as TB incidence falls. The framework presents a tailored response to these challenges, grouped into eight priority action areas: 1) ensure political commitment, funding and stewardship for planning and essential services; 2) address the most vulnerable and hard-to-reach groups; 3) address special needs of migrants and cross-border issues; 4) undertake screening for active TB and LTBI in TB contacts and selected high-risk groups, and provide appropriate treatment; 5) optimise the prevention and care of drug-resistant TB; 6) ensure continued surveillance, programme monitoring and evaluation and case-based data management; 7) invest in research and new tools; and 8) support global TB prevention, care and control. The overall approach needs to be multisectorial, focusing on equitable access to high-quality diagnosis and care, and on addressing the social determinants of TB. Because of increasing globalisation and population mobility, the response needs to have both national and global dimensions. ©ERS 2015.
Analyze Grid-Based Data Using Simple Analysis Codes

Step No. 3: Analyze grid-based data using simple analysis codes and the MapReduce programming pattern

Once a collection of objects (such as a Website's shopping carts or a financial company's pool of stock histories) has been hosted in a distributed data grid, it's important to be able to scan all of this data for important patterns and trends. Over the last 25 years, researchers have developed a powerful two-step method now popularly called "MapReduce" for analyzing large volumes of data in parallel. In the first step, each object in the collection is analyzed for an important pattern of interest by writing and running a simple algorithm that just looks at one object at a time. This algorithm is run in parallel on all objects to quickly analyze all of the data. Next, the results that were generated by running this algorithm are combined to determine an overall result, which hopefully identifies an important trend.

For example, an e-commerce developer could write a simple code which analyzes each shopping cart to rate which product categories are generating the most interest. This code could be run on all shopping carts several times during the day (or perhaps after a marketing blitz on the Website has been launched) to identify important shopping trends.

Distributed data grids offer an ideal platform for analyzing data using this MapReduce programming pattern. Because they store data as memory-based objects, the analysis code is very easy to write and debug as a simple "in-memory" code. Programmers do not need to learn parallel programming techniques or understand how the grid works. Also, distributed data grids provide the infrastructure needed to automatically run this analysis code on all grid servers in parallel and then combine the results. The net result is that, by using a distributed data grid, the application developer can easily and quickly harness the full scalability of the grid to rapidly discover data patterns and trends that are vital to a company's success.

As companies become ever more pressed to manage increasing data volumes and quickly respond to changing market conditions, they are turning to distributed data grids to obtain the "scalability" boost they need. As clouds become an integral part of enterprise infrastructures, distributed data grids should further prove their value in harnessing the power of scalable computing to provide an essential competitive edge.

William L. Bain is founder and CEO of ScaleOut Software. He founded the company in 2003. He has worked at Bell Labs research, Intel and Microsoft. Bill founded and ran three startup companies prior to joining Microsoft. In the most recent company (Valence Research), he developed a distributed Web load balancing software solution that was acquired by Microsoft and is now called Network Load Balancing within the Windows Server operating system. William holds several patents in computer architecture and distributed computing. As a member of the screening committee for the Seattle-based Alliance of Angels, William is actively involved in entrepreneurship and the angel community. He has a PhD in Electrical Engineering/Parallel Computing from Rice University. He can be reached at email@example.com.
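To ground the shopping-cart example above, here is a minimal single-process sketch of the two-step pattern in Python. In an actual distributed data grid the map step would run in parallel on every grid server; the cart data below is invented for illustration.

# Step 1 (map): analyze one object at a time.
# Step 2 (reduce): combine per-object results into an overall trend.
from collections import Counter
from functools import reduce

carts = [
    {"items": [("books", 2), ("toys", 1)]},
    {"items": [("books", 1), ("garden", 3)]},
]

def map_cart(cart):
    # Rate interest per product category for a single shopping cart.
    return Counter(dict(cart["items"]))

def combine(total, partial):
    # Merge two partial results; Counter addition sums per-category counts.
    return total + partial

trend = reduce(combine, map(map_cart, carts), Counter())
print(trend.most_common())   # e.g. [('books', 3), ('garden', 3), ('toys', 1)]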
A new Pew Research report is confirming the trend that more people than ever are using cell phones to access the Internet instead of desktop PCs and other devices, and shows how the current digital divide is being bridged... and a new one could open.

The report's findings show that 55 percent of cell phone users use their phones to go online, and of that group 31 percent use their phone the most to go online. "That works out to 17% of all adult cell owners who are 'cell-mostly internet users'--that is, who use their phone for most of their online browsing," the report stated.

The use of cell phones for online use seems to be set along income and racial lines, as well. "Half (51%) of African-American cell internet users do most of their online browsing on their phone, double the proportion for whites (24%). Two in five Latino cell internet users (42%) also fall into the 'cell-mostly' category," the report read. "Additionally, those with an annual household income of less than $50,000 per year and those who have not graduated college are more likely than those with higher levels of income and education to use their phones for most of their online browsing."

If these results are indeed representative of the population, it would mean that the so-called digital divide, where those with poorer incomes are not able to access the Internet or use local software because they can't afford a computing device, may be getting bridged through alternate means.

This news should affect everyone from web designers, who should really start thinking more about mobile Internet design for their sites, to PC manufacturers, who could cut back or drop cheaper PC products as they see potential customers relying on cell phones. This could also be a good target market for low-cost tablets, which are cheaper than PCs and laptops and would offer mobile connectivity with a form factor that's easier to use than smaller cell phones.

The big unknown result from this trend will be how productivity will be affected. Internet use is often a big reason people will purchase a PC or laptop. If more people are using their cell phones for Internet connectivity, then there may be less call to use computers, which would see a corresponding decrease in content production from PCs, as well as a decline in skills for using computers. Such a shift might create a new digital divide: one not based on ownership of Internet-capable devices, but on who can create on the Internet and who will be relegated to consumption of content.

As for the desktop paradigm, this is yet another signpost that marks the way towards an overall decline in desktop use, one to which the FLOSS community should pay heed. The desktop may never die completely, but the mobile market is something which cannot be ignored.

Read more of Brian Proffitt's Open for Discussion blog and follow the latest IT news at ITworld. Drop Brian a line or follow Brian on Twitter at @TheTechScribe. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
Eleven Myths About 802.11 Wi-Fi Networks

Wi-Fi networks have been misunderstood by much of the IT community since their inception. Even the reasons for this misunderstanding are kind of hard to understand. The result has been that myths about 802.11 (better known as Wi-Fi) networks have grown almost as fast as the technology itself. In this web seminar, we'll examine 11 common Wi-Fi myths and explore ways to use correct information to make your networks scalable, secure and satisfying for your users.

- Myth #1: If you leave your Wi-Fi adapter turned on, someone could easily hijack your notebook and take control of your computer.
- Myth #2: Even with 802.11i, you still need a VPN to provide enterprise-class security for a wireless network.
- Myth #3: Captive Portals are an effective way to prevent unauthorized users from accessing a network via Wi-Fi.
- Myth #4: Disabling the SSID broadcast will hide your wireless network from wardrivers and hackers.
- Myth #5: You need a wireless IDS to prevent rogue access points.
- Myth #6: A wireless IDS is unnecessary if other rogue AP prevention measures are in place.
- Myth #7: Assigning low Wi-Fi data rates is a good way to make sure that every station gets equal bandwidth.
- Myth #8: If channels 1, 6, and 11 are already being used, it's best to choose another channel somewhere in the middle.
- Myth #9: When an 802.11b station connects to an 802.11g network, the entire network is reduced to 802.11b speeds.
- Myth #10: If you need more Wi-Fi coverage, replace the antenna on your access point with one that has a higher gain.
- Myth #11: You can point two antennas in different directions to get more area covered with one access point.
- Question & Answer Session

Who Should Watch and Why?

- Administrators: network, systems, infrastructure, security, and LAN/WLANs
- Support professionals: technical assistance and field support
- Designers: network, systems, and infrastructure
- Developers: wireless software and hardware products
- Consultants and integrators: IT and security
- Decision makers: infrastructure managers, IT managers, security directors, chief security officers, and chief technology officers
View the Exhibit and examine the structures of the EMPLOYEES and DEPARTMENTS tables. Examine the PL/SQL block that you execute to find the average salary for employees in the 'Sales' department:

 DECLARE
   TYPE emp_sal IS TABLE OF employees.salary%TYPE INDEX BY VARCHAR2(20);
   v_emp_sal […]

View the Exhibit and examine the PL/SQL code. Identify the sections of the code that need to be modified for better performance gains. (Choose all that apply.)

You executed this command to gather information about the memory allocation for storing query results:

 SQL> execute dbms_result_cache.memory_report

View the Exhibit and examine the output for the execution of the DBMS_RESULT_CACHE.MEMORY_REPORT procedure. Which two statements are true about the […]

Examine the following settings for a session:

 PLSQL_CODE_TYPE = NATIVE

View the Exhibit and examine the PL/SQL code. You compile the program with the following attributes:

 SQL> ALTER PROCEDURE proc1 COMPILE PLSQL_OPTIMIZE_LEVEL = 1;

Which statement is true about the execution of the PROC1 […]

Examine the following structure:

 SQL> DESCRIBE user_identifiers
  Name                                      Null?    Type
  ----------------------------------------- -------- -----------------------
  NAME                                               VARCHAR2(30)
  SIGNATURE                                          […]

Which two statements are true about the usage of the DBMS_DESCRIBE.DESCRIBE_PROCEDURE procedure? (Choose two.)

View the Exhibit and examine the settings for the PLSQL_CODE_TYPE parameter. After some time, the user recompiles the procedure DISPLAY_SAL_INFO by issuing the following command:

 SQL> ALTER PROCEDURE display_sal_info COMPILE;

Which statement would be true in this scenario?

You have an external C procedure stored in a dynamic-link library (DLL). The C procedure takes an integer as argument and returns an integer. You want to invoke the C procedure through a PL/SQL program. View the Exhibit. Which statement is true about the C_OUTPUT PL/SQL program?

Examine the structure of the TEST_DETAILS table:

 Name                Null?    Type
 ------------------- -------- -------------
 TEST_ID                      NUMBER
 DESCRIPTION                  CLOB

DESCRIPTION data was entered earlier and saved for TEST_ID 12. You execute this PL/SQL block […]

Examine the structure of the EMPLOYEES table that exists in your schema.

 Name                       Null?           Type
 -------------------------- --------------- ---------------------
 EMPLOYEE_ID                NOT NULL        NUMBER(6)
 FIRST_NAME                                 […]
Certain IT decisions (such as technical standards) should be made by IT, while others, such as technical vendor selection, should involve IT to a major degree. Other decisions, such as budget and project prioritization, should be driven by business management, rather than assumed by IT.

This storyboard will help IT leaders:

- Determine what role IT should take in typical decision-making situations.
- Diagnose challenges to decision-making in the organization that are caused by an inappropriate decision-making role for IT.
- Take steps to increase IT influence when IT is either left out of decisions that it should be making or should at least influence.
- Engage those stakeholders who need to take responsibility for certain decisions or who at least need to be involved in the decision.

Inappropriately assigned decision rights involving Information Technology can create major grief for IT Management; fix the problem.
When people talk about Big Data, they often talk in terms of Messianic solutions for economy-size systems. Roads and highways. Diagnoses and treatments. Buyers and sellers. But some of the most interesting work being done with data addresses a different kind of complex system. The information for this system is neither private nor proprietary: in fact, its ownership is a little more evergreen. It's the weather.

And this week, one of the more interesting recent online weather data products opened to the public and explained itself. It's called Quicksilver.

Quicksilver aims to provide the highest-resolution, most up-to-date map of global temperatures ever created. Click around its maps or zoom in, and it paints hot reds, frigid blues, and temperate greens at a more detailed, more local level than any previous planetary temperature map ever has. It does all this without adding any new sensors to the world: humanity's raw observational power wasn't increased to make the Quicksilver map work. Rather, the Quicksilver team merged and correlated existing data, from different public sources, for the map. A blog post, just posted on the Quicksilver website, explains how they did it.

The map relies on temperature data from a number of sources. One of those is the Global Forecast System (GFS), a service of the U.S. National Oceanic and Atmospheric Administration. GFS measurements are "live," or updated so frequently as to be essentially live, but they're at one-tenth the resolution of the Quicksilver map.
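As a toy illustration of that resolution gap (not Quicksilver's actual method, which its blog post details), the following sketch upsamples a coarse live-style temperature grid by the factor of ten mentioned above; the grid values are random stand-ins, not real GFS data:

# Upsampling a hypothetical coarse grid to 10x resolution with
# bilinear interpolation.
import numpy as np
from scipy.ndimage import zoom

coarse = 15 + 10 * np.random.rand(18, 36)   # invented coarse grid (deg C)
fine = zoom(coarse, 10, order=1)            # order=1 selects bilinear interpolation

print(coarse.shape, "->", fine.shape)       # (18, 36) -> (180, 360)

# A real merge would blend such live data with finer static sources,
# which is where the interesting correlation work happens.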
Denial of Service (DoS) attacks continue to be on the rise, which is no surprise given our ever-growing dependency on Web-based services, coupled with the fact that these attacks are relatively cheap and easy to carry out. In this article, we'll discuss what DoS attacks are, various types of DoS attacks, tips to keep them at bay, and security tools to help you mitigate vulnerabilities.
DoS attacks and their impact
A DoS attack is an explicit attempt to prevent legitimate users from accessing information or services on a host system. It does this by overloading the targeted machine or service with requests, thus making the resource unreachable or unresponsive to its intended users. DoS attacks exploit known weaknesses and vulnerabilities in systems and applications, and they aim to consume valuable resources in order to disrupt a service. Resources targeted include:
- Network connectivity
- Data structures
- CPU usage
- Disk space
- Application exception handling
- Database connections
Unfortunately, DoS attacks are becoming more sophisticated and getting better at evading detection. They can wreak havoc on organizations by bringing down business-critical services and blocking Web access for users, which can result in thousands to hundreds of thousands of dollars per day in lost revenue!
Hackers use several methods to deploy DoS attacks, and these attacks come in all shapes and sizes. Let's take a quick look at some of them:
1. SYN attacks
In a SYN (synchronize) attack, the networking capability of the targeted system is knocked out by overloading its network protocol stack with information requests or connection attempts. A SYN attack exploits known weaknesses in the TCP protocol and can impact any system providing TCP-based services, including Web, email, FTP, and print servers. In a normal TCP connection, the client and server exchange a series of messages to establish the connection, known as the three-way handshake. First, the client sends a SYN message to the server. The server acknowledges receipt of this message with a SYN-ACK (synchronize-acknowledgement) back to the client. Lastly, the client responds with an ACK (acknowledge) and the connection is established. Taking advantage of this process, an attacker continuously sends SYN requests but never completes the handshake. The targeted host sits and waits for acknowledgement of each request, which ties up its pool of available connections. In turn, connection attempts from legitimate users get ignored.
Tips to stay secure: Make sure you have a firewall/security device in place that is capable of detecting the characteristics of this type of attack. Also, be certain that you have the appropriate filters configured, including one that restricts input to your external interface by denying packets that have a source address from your internal network. You should also filter outgoing packets that have a source address different than your internal address scheme. Additionally, ensure you have the latest security patches in place, including operating system and application updates, as well as firmware updates for your network and security devices.
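As a toy illustration of the detection side (a sketch, not from the article; the event feed and threshold are hypothetical), a monitor can count handshakes that were started but never completed, per source address, and flag sources that pile up half-open connections:

```python
from collections import defaultdict

HALF_OPEN_LIMIT = 100  # hypothetical per-source threshold

def flag_syn_flood(events):
    """events: iterable of (src_ip, kind) where kind is 'SYN' or 'ACK'.
    Tracks handshakes that were started but never completed."""
    half_open = defaultdict(int)
    for src_ip, kind in events:
        if kind == "SYN":
            half_open[src_ip] += 1
        elif kind == "ACK" and half_open[src_ip] > 0:
            half_open[src_ip] -= 1  # handshake completed
    return {ip: n for ip, n in half_open.items() if n > HALF_OPEN_LIMIT}

# Example: one source sends many SYNs and never ACKs
events = [("203.0.113.9", "SYN")] * 150
events += [("198.51.100.7", "SYN"), ("198.51.100.7", "ACK")]
print(flag_syn_flood(events))  # {'203.0.113.9': 150}
```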
2. Poisoning of DNS cache
DNS cache poisoning exploits vulnerabilities in the domain name system (DNS). In this case, the attacker attempts to insert a fake address entry into the DNS server's cache database in order to divert Internet traffic from legitimate sites to "rogue" sites. The goal is to lure unsuspecting users into downloading malicious programs, which can then be exploited by the attacker.
Tips to stay secure: First, ensure you're running the latest release of your DNS software. You should also configure your firewall to drop packets having an internal source address on the external interface, as these are in most cases "cooked-up" addresses. Another important step is to collect and analyze log files from your DNS servers to identify anomalies and suspicious patterns, such as multiple queries from the same IP within a short amount of time.
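A minimal sketch of that last DNS tip (an illustration only; the log format and threshold are hypothetical): bucket queries by source IP and time window, then flag sources that exceed a limit.

```python
from collections import defaultdict

WINDOW_SECONDS = 10
QUERY_LIMIT = 50  # hypothetical threshold

def suspicious_sources(log):
    """log: iterable of (unix_timestamp, src_ip) DNS query records."""
    buckets = defaultdict(int)
    for ts, src_ip in log:
        buckets[(src_ip, int(ts) // WINDOW_SECONDS)] += 1
    return sorted({ip for (ip, _), n in buckets.items() if n > QUERY_LIMIT})

# Example: 80 queries from one IP in about 8 seconds, plus normal traffic
log = [(1000.0 + i * 0.1, "198.51.100.23") for i in range(80)]
log += [(1000.0, "203.0.113.5"), (1009.0, "203.0.113.5")]
print(suspicious_sources(log))  # ['198.51.100.23']
```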
3. ICMP/Ping flood
In this case, the attacker sends a continuous stream of ICMP echo requests to the victim as fast as possible without waiting for a reply – in other words, "floods" it with ping packets. This barrage of data packets consumes the victim's outgoing and incoming bandwidth, preventing legitimate packets from reaching their destination.
Tips to stay secure: Filter ICMP traffic appropriately. Block inbound ICMP traffic unless you specifically need it, such as for tools used in normal administration and troubleshooting. For ICMP traffic you do allow, do so only for those specific hosts that require it. Also, configure appropriate parameters and rate limits on firewalls and routers, such as setting a threshold for the maximum allowed number of packets per second for each source IP address. Additionally, make sure you're monitoring those device logs in real time to immediately detect patterns of high ICMP volume.
4. E-mail bombs
This type of attack involves sending huge volumes of bogus emails simultaneously, in most cases containing very large attachments. E-mail bombs consume large amounts of bandwidth, as well as valuable server resources and storage space. An attack of this kind can quickly bring your mail service to a crawl or crash the system altogether.
Tips to stay secure: In addition to firewalls, you can put other perimeter protection in place, such as content filtering devices. It's also wise to limit the size of emails and attachments, as well as limiting the number of inbound connections to the mail server.
5. Application-level floods
Application denial-of-service attacks target Web servers and take advantage of software code flaws and exception handling. These types of attacks are common and difficult to defend against, since most firewalls leave port 80 open and allow traffic to hit the backend Web applications.
Tips to stay secure: Make sure servers and applications stay up-to-date with security patches. Also, educate developers on the risks of sloppy code and leverage a Web Application Firewall (WAF) to protect against bad code and software vulnerabilities. In addition, you should be logging relevant data from all your business-critical applications.
Security tools to mitigate vulnerabilities
As long as there are vulnerable systems on the Web, there are going to be denial-of-service attacks. And though some DoS attacks can be difficult to defend against, there are ways to mitigate your risks. First and foremost, ensure your systems are up-to-date with the latest patches. Patch management is one of the most critical processes in vulnerability management. You need to apply the latest security patches and updates to operating systems and applications, as well as firmware updates for your network devices, including routers and firewalls.
Next, continuously monitor your systems and devices. Start by creating a baseline and then monitor how the network is behaving to identify anomalies. Doing this successfully requires a solution that is capable of monitoring and correlating log event data throughout your environment and, very importantly, reacting in real time. This is where Security Information and Event Management (SIEM) solutions come into play. Log management solutions centrally collect and correlate logs from network and security devices, application servers, databases, etc., to provide actionable intelligence and a holistic view of your IT infrastructure's security.
Another important step is to ensure your firewalls and network devices are configured properly and that you have the appropriate rules and filters in place. Configuration and change management plays a vital role in protecting your network from unauthorized and erroneous changes that could leave your critical devices vulnerable.
Following these guidelines can go a long way in protecting your IT infrastructure and services. It's much better to implement precautionary measures up front to prevent an attack than to try and recover after one has occurred.
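The baseline-and-monitor advice above can be made concrete with a small sketch (made-up traffic numbers, not a product feature): track an exponentially weighted moving baseline of per-minute packet counts and flag large deviations.

```python
def anomalies(samples, alpha=0.2, factor=3.0):
    """Flag per-minute packet counts that exceed `factor` times
    the exponentially weighted moving baseline."""
    baseline, flagged = None, []
    for minute, count in enumerate(samples):
        if baseline is not None and count > factor * baseline:
            flagged.append((minute, count))
        baseline = count if baseline is None else alpha * count + (1 - alpha) * baseline
    return flagged

traffic = [100, 110, 95, 105, 900, 950, 100]  # hypothetical packets/minute
print(anomalies(traffic))  # [(4, 900), (5, 950)]
```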
<urn:uuid:83c04696-a9e3-447a-aa4f-270b6ad09568>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2013/09/06/understanding-and-defending-against-denial-of-service-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00467-ip-10-171-10-70.ec2.internal.warc.gz
en
0.936688
1,598
3.15625
3
A fiber optic connector, also called an optical fiber connector or fiber connector, is used to terminate the end of an optical fiber where a connect or disconnect capability is required. Connectors mechanically couple and align the cores of fibers so light can pass. Every fiber connection has two values – attenuation and reflection. Better connectors lose very little light due to reflection or misalignment of the fibers. Because of this diversity, fiber optic connectors make up the largest category of passive optical devices, and about 100 connector types with different standards and applications have been introduced to the market. The essential elements found in most connector types are the ferrule, the connector body, and the coupling mechanism. By transmission medium, fiber optic connectors can be divided into single-mode and multimode connectors for common silica-based fiber, plus connectors for other media such as plastic optical fiber; by structure, they can be divided into FC, SC, ST, LC, D4, DIN, MU, MT, and so on.
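Attenuation and reflection are normally expressed in decibels. The formulas below are the standard definitions of insertion loss and return loss, not something from the original article, and the example power levels are illustrative:

```python
import math

def insertion_loss_db(p_in_mw: float, p_out_mw: float) -> float:
    """Attenuation across a connection: lower is better."""
    return 10 * math.log10(p_in_mw / p_out_mw)

def return_loss_db(p_in_mw: float, p_reflected_mw: float) -> float:
    """Reflection back toward the source: higher is better."""
    return 10 * math.log10(p_in_mw / p_reflected_mw)

# Example: 1.00 mW in, 0.93 mW out, 0.001 mW reflected
print(round(insertion_loss_db(1.0, 0.93), 2), "dB IL")  # ~0.32 dB
print(round(return_loss_db(1.0, 0.001), 1), "dB RL")    # 30.0 dB
```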
<urn:uuid:21ddba68-fa09-4433-af32-e5340838133b>
CC-MAIN-2017-04
http://www.fs.com/blog/fiber-optic-connector.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00377-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931407
197
3.375
3
With the considerable cost and performance advantages of fiber, in theory every network would have a completely fiber horizontal structure running from a fiber port on a data center switch to a fiber port on a PC. But in practice, copper is often installed simply because it is more familiar. Plus, many network devices have copper ports, and organizations cannot afford to replace the most expensive components (the electronics) of their networks to install fiber. One simple and inexpensive solution is offered by media converters. A media converter connects an Ethernet RJ-45 port to a fiber optic port (such as SC, ST, or LC). This connection retains the collision domain between the two Ethernet participants and provides status transparency between the two Ethernet interfaces. Multi-mode glass fibers allow distances of up to 5,000 m to be bridged without intermediate repeaters; single-mode fibers can be used over distances of up to 40 km.
Network backbone and long-distance applications have long taken advantage of fiber optic cable. However, horizontal fiber cabling has been widely regarded as impractical and too expensive for an application that doesn't need to cover long distances or move vast amounts of data. Fiber is regarded as expensive and difficult to install, while copper is still the dominant cable type in local networks. But times have changed and fiber is gaining an edge over copper, especially for new installations, and is now often the first choice even for horizontal cabling, which has traditionally been copper. Media converters are the key to integrating fiber into a copper infrastructure, making it possible to migrate a local network to fiber while extending the productive life of existing infrastructure.
On the most basic level, media converters are simple networking devices that make it possible to connect two dissimilar media types. Although the most common type of media converter connects UTP to fiber optic cable, media converters may also connect other cable types such as coax. Media converters are often used to connect legacy Ethernet equipment with copper ports to new fiber cabling. The converters may also be used in pairs to insert a fiber segment into a copper network to increase cabling distance.
Media converters may be simple devices, but they come in a dizzying array of types. Newer media converters are often really a switch, which confuses the issue even more. Ethernet media converters are available in many configurations, with the most common being UTP to multi-mode or single-mode fiber, although UTP to thin coax (thinnet), UTP to thick coax (standard Ethernet), thin coax to fiber, and UTP to SFP are also available. On the copper side, most media converters have an RJ-45 connector for 10BASE-T, 100BASE-T, and 1000BASE-T connectivity. The fiber side usually has a pair of SC or ST connectors, although newer compact connectors such as LC and MT-RJ are becoming increasingly common. Here is a 10/100/1000Base-TX to 1000Base-FX Media Converter in the picture below.
Media converters may support network speeds from 10 Mbps to 10 Gbps; thus there are Fast Ethernet media converters, Gigabit Ethernet media converters, and 10-Gigabit Ethernet media converters. Traditional media converters are purely Layer 1 devices that only convert electrical signals and physical media and do not do anything to the data coming through the link, so they are totally transparent to data.
These converters have only two ports—one port for each media type—and support one speed. Some media converters are more advanced Layer 2 Ethernet devices that, like traditional media converters, provide Layer 1 electrical and physical conversion. But unlike traditional media converters, these converters also provide Layer 2 services—in other words, they are really switches. This kind of media converter often has more than two ports, enabling the user to extend two or more copper links across a single-fiber link. These media converters usually feature auto-sensing ports on the copper side, making them useful for linking segments operating at different speeds. The introduction of Layer 2 converters has blurred the line between media converters and switches. The same device may be called a media converter or a switch by different vendors. Media converters are available in standalone models that convert between two different media types, in chassis-based models that connect many different media types in a single housing, and in hybrid systems that feature standalone modules that also work in a chassis. Standard media converters come with an AC power supply that plugs into a standard wall outlet. It may be 120V AC for domestic U.S. power only or may be an auto-sensing 120 to 240V AC power supply that can be used domestically or easily converted to European power with a simple plug adapter. When media converters are used in areas that do not have convenient power outlets, they may be powered by Power over Ethernet (PoE), which provides power to network devices over the same Category 5 or higher UTP cable used for data. PoE Media Converters may also provide power through PoE to a PoE-powered device such as a security camera or wireless access point. (A Fast Ethernet PoE Media Converter is shown in the picture below.) Small standalone media converters intended mainly for Fiber-To-The-Desk (FTTD) applications may be USB powered, enabling them to draw their power from a PC’s USB port. In chassis-based media converter systems, media converters or media converter modules draw their power from the chassis, which avoids the clutter of individual media converters that must be individually powered. Industrial media converters have demanding power requirements. Because the power supplied to industrial sites varies greatly, industrial media converters are either sold entirely separately from their power supply or are available with a choice of power supplies. Unlike standard networking devices, industrial media converters often require you to select the correct power supply for both device and application. Industrial, or hardened, media converters are intended for use outdoors or in areas that may be exposed to temperature fluctuations, moisture, dirt, and EMI. Industrial Media Converters are rated for a specific temperature range. Temperature tolerances from –13 ° F to 140 ° F (–25 ° C to 60 ° C) are common, and some media converters are rated for extreme temperatures from –40 ° F to 167 ° F (–40 ° C to 75 ° C). These media converters are usually housed in hardened metal cases that are sealed against contaminants including particulates such as airborne dust, moisture, and sometimes chemicals. Conformal coating is a special film or coating applied to electronic circuitry to provide additional protection from contaminants. Industrial media converters are often designed to be DIN rail mounted or have separate brackets for DIN rail mounting. 
The media converters for industrial applications are usually built to withstand higher EMI than those intended for office or data center use.
When a network device such as a switch detects that a link is broken, the Link indicator on its front panel goes out, alerting the network administrator that the connection is lost. The situation becomes a bit more complicated, however, when the switch has a media converter between it and its primary link. In this case, the switch can detect that the link to the media converter is broken but can't detect a broken link on the other side of the media converter. If the fiber link goes down, the switch does not notice because it still "sees" the media converter. To counteract this problem, media converters commonly have a feature called link loss pass through, which simply means that the media converter passes the news of a broken link onward. In other words, when either a twisted-pair or a fiber link is broken, the information about this link loss is transferred to the other media link.
Fiber is already established for LAN backbone applications, and now fiber is making inroads in horizontal cabling. Fiber carries more data than copper, making it more suitable for high-throughput applications such as streaming media and VoIP. Additionally, as the price of copper rises, the price of installing fiber continues to fall, making it an economical choice as well. Copper-to-fiber media converters help to ease the financial shock of migrating network equipment to fiber. These media converters are a simple, inexpensive solution for matching copper ports to fiber infrastructure. From the data center to the desktop, from the central office to the home, media converters are bringing fiber connectivity to areas where copper has long been the medium of choice.
In the data center, media converters extend the productive life of existing copper-based switches, providing a gradual migration path from copper to fiber. Chassis-based media converters mount in racks alongside network switches, enabling the conversion of copper ports on legacy switches to fiber. Media converters can also be used with new copper switches that have fixed RJ-45 ports, which are significantly less expensive than the equivalent fiber switches. Here, network managers can convert only selected copper ports for multi-mode or single-mode fiber as needed, bringing versatility to the data center while bringing overall costs down.
But several factors stand in the way of migrating copper infrastructure to fiber. First of all, there is the familiarity factor; if an IT staff is familiar with copper, they are likely to continue to install copper even if copper is not the best choice. Another major factor preventing the migration to fiber is the cost of changing network devices out for fiber versions. An enterprise switch is a major investment, and there is also the cost of adding fiber Network Interface Cards (NICs) to desktop PCs as well as other networked devices such as printers and wireless access points.
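The distances quoted earlier (up to 5,000 m for multimode, up to 40 km for single-mode) fall out of simple link-budget arithmetic. The sketch below uses purely illustrative numbers; consult the actual converter's datasheet for real values:

```python
def max_distance_km(tx_dbm, rx_sensitivity_dbm, margin_db, loss_db_per_km):
    """Optical power budget divided by fiber attenuation."""
    budget_db = tx_dbm - rx_sensitivity_dbm - margin_db
    return budget_db / loss_db_per_km

# Illustrative values only, not from any datasheet.
print(max_distance_km(tx_dbm=-5, rx_sensitivity_dbm=-20,
                      margin_db=3, loss_db_per_km=3.0))   # multimode-ish: 4.0 km
print(max_distance_km(tx_dbm=0, rx_sensitivity_dbm=-24,
                      margin_db=3, loss_db_per_km=0.5))   # single-mode-ish: 42.0 km
```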
<urn:uuid:2f48e241-1af3-4b39-b8dd-559ebd12a8a0>
CC-MAIN-2017-04
http://www.fs.com/blog/media-converters-bring-fiber-connectivity-to-copper-networks.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00377-ip-10-171-10-70.ec2.internal.warc.gz
en
0.92311
1,982
2.828125
3
Cities wanting to remove the guesswork from solar power installations are turning to friendlier technology for citizens. Since typing a street address into a Web site is all it takes to determine how much solar power can be obtained from a rooftop installation, who wouldn't take a look? An exemplar is the San Francisco Solar Map that lets residents view buildings that are equipped with solar power. Users also can type their address into the Solar Map site to get an analysis of how much solar power their roof could harness. "We wanted something that would help people, that would break down some myths about installing solar in San Francisco, and then offer a tool to people who were interested in solar but didn't really know how to take the first step," said Johanna Partin, renewable energy program manager of the San Francisco Department of the Environment. San Francisco has set a lofty goal: 10,000 roofs equipped with solar power by 2012. As of press time, 871 of the city's roofs had solar panels installed. The Solar Map's goal is to offer residents a simple tool to learn about solar installation. The department envisioned a platform similar to Google Earth, so residents could zoom into the view of their building, according to Partin. In spring 2007, San Francisco presented the idea to CH2M HILL - an engineering, consulting and construction company. "Mayor [Gavin] Newsom was very interested in promoting solar, and they said, 'Can we quickly come up with a solution that allowed business owners and residents of San Francisco to make an assessment of the solar potential of their building?'" said Dave Hermann, client solutions director of CH2M HILL. "And that was really the genesis of the idea of the Solar Map." The Solar Map combines aerial photography, GIS software and data supplied by the client. "It's a combination of parcel data ... and the tax assessor's database, which then gets the size of each building and the number of floors which we used to come up with an algorithm that estimates the photovoltaic potential for a rooftop," said Ryan Miller, lead technologist for solar mapping initiatives for CH2M HILL. When someone types a street address into the Web site, the technology identifies objects on the roof that cast shadows, such as HVAC units, skylights, perimeter walls or an adjacent building that's taller. "It takes out the unusable space," Partin said. "It takes out the north-facing side of the roof if it's a pitch roof. It takes out the shaded areas; it takes out roof obstructions and those kinds of things." CH2M HILL uses "stereo-pair aerial imagery" - taking side-by-side photographs to view three-dimensional features - to build models of the buildings, which are run through a computer rendering that determines where the sun is throughout the year in relationship to the building. This determines the ideal location for solar panels. San Francisco chose Google Maps as the platform, but Hermann said Microsoft Virtual Earth can be used, and the company is working on implementations using ESRI solutions. According to Miller, there are two features to the Solar Map. First is the mapping of existing solar installations, which required city-provided data for geocoding - the process of determining geographic coordinates. 
These data points are mapped through the Solar Map Web portal and incorporate characteristics of each installation, such as system size, the amount of electricity it generates, the installer, a link to the installer's Web site, and photos and comments posted by the home or business owner. This information is displayed after users click on a dot on the map that represents each location where photovoltaic (PV) systems are installed. Information that's publicly available is posted for each solar installation, but Partin said about 1 in 500 owners chose not to participate in the Solar Map. The second feature of the Solar Map is for people who are interested in installing solar panels. "They enter their address, and that address hits a database that contains their building-specific characteristics. That information is then rendered on the page and includes the PV potential and computations based on that PV potential, such as the amount of CO2 [carbon dioxide] that would be reduced, the [power] output, the estimates of cost, etc.," Miller said. If users want a more detailed price, they may click on "get cost estimates" and the information is tied into Clean Power Research's Clean Power Estimator. According to Jeff Ressler, product line manager of Clean Power Research, the company was approached by CH2M HILL and San Francisco, who asked that solar economics analysis be added to the map. The information provided by the Solar Map is shared with the estimator. "The Clean Power Estimator Web services can produce a total energy output, and that energy output can dictate what incentives are paid out and thus the overall cost of the system," Ressler said. Hermann said the Solar Map's accuracy is dependent on the data because cities can choose between a low-resolution or a high-resolution map. The map's assessments have been compared with those made by a solar installer who does the measurements in-person. The installer tells CH2M HILL the information meets their needs and the technology can be used in place of physical assessments, though the installer didn't reveal the exact results. According to Hermann, the high-resolution version of the Solar Map costs about $4,000 per square mile, which is a one-time fee and covers approximately 2,500 buildings in an urban area. The low-resolution version costs about $25,000 for a 50-square-mile city or county. "The low-res version is really a great marketing tool and is a great one-stop shop for all your solar information as a city or county," he said. Miller added that there's a cost-accuracy tradeoff depending on how detailed the client wants the data to be. The less detailed, the less expensive the map will be. San Francisco first launched the site in low resolution for it to display quickly and then updated the portal with high-resolution information. The site was expected to be updated to high resolution by November 2008. It takes about 45 days to launch a low-resolution solar map, as long as there isn't a backlog, Miller said. Clients can add unique details to their solar map. CH2M HILL has cities that are interested in allowing users to draw solar panels on their rooftop in the Solar Map; for example, if users add three solar panels, the map will determine the details and then they can see what the difference would be compared to two solar panels. According to Hermann, Los Angeles County is customizing its solar map with high-resolution imagery for county buildings, and each city within the county can decide whether to fund the high-resolution imagery. 
The Solar Map site can also match the colors and structure of a government's overall Web page style. "The goal is to have the Solar Map feel like a part of their existing city presentation," Miller said. San Francisco hasn't yet received feedback from citizens that the Solar Map led directly to solar installation, Partin said, but the Department of Environment has heard that potential customers use information garnered from the site when speaking with solar installers. She suspected city grants for residents who install solar power is the main driver of solar installations. The Web site receives about 200 page hits per month, according to Partin. However, the number increases after a big announcement is made about solar. In the future, San Francisco will add measurements of solar water heating potential to the map and eventually make it a wind power resource. Hermann said estimating wind power depends on if cities can measure the wind microclimate - the climate of a specific place in contrast to the climate of an entire area - but San Francisco has some capability to do that. Video: Sacramento, Calif., nuclear facility used as solar farm.
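Stepping back to the rooftop analysis described earlier: at its core is an estimate along these lines. This is a back-of-envelope sketch, not CH2M HILL's algorithm, and every default value is illustrative rather than San Francisco-specific:

```python
def annual_kwh(usable_roof_m2, sun_hours_per_day=5.0,
               panel_watts_per_m2=150, performance_ratio=0.75):
    """Rough annual PV output from usable roof area.
    All defaults are illustrative assumptions."""
    kw_capacity = usable_roof_m2 * panel_watts_per_m2 / 1000
    return kw_capacity * sun_hours_per_day * 365 * performance_ratio

print(round(annual_kwh(30)))  # ~6,159 kWh/year for 30 m2 of usable roof
```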
<urn:uuid:50611fb1-0c96-4d13-95d4-8919c398af33>
CC-MAIN-2017-04
http://www.govtech.com/technology/Solar-Map-Helps-San-Francisco-Residents.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00495-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950476
1,640
3.078125
3
Creating multiple eDirectory partitions does not, by itself, increase fault tolerance or improve performance of the directory. However, strategically using multiple replicas does. The placement of replicas is extremely important for accessibility and fault tolerance. eDirectory data needs to be available as quickly as possible and needs to be copied in several places to ensure fault tolerance. For information on creating replicas, refer to Section 6.0, Managing Partitions and Replicas. The following guidelines will help determine your replica placement strategy. Place replicas of each partition on servers that are physically close to the workgroup that uses the information in that partition. If users on one side of a WAN link often access a replica stored on a server on the other side, place a replica on servers on both sides of the WAN link. Place replicas in the location of highest access by users, groups, and services. If groups of users in two separate containers need access to the same object within another partition boundary, place the replica on a server that exists in the container one level above the two containers holding the group. If a disk crashes or a server goes down, replicas on servers in other locations can still authenticate users to the network and provide information on objects in partitions stored on the disabled server. With the same information distributed on several servers, you are not dependent on any single server to authenticate you to the network or to provide services (such as login). To create fault tolerance, plan for three replicas for each partition if the directory tree has enough servers to support that number. There should be at least two local replicas of the local partition. There is no need to have more than three replicas unless you need to provide for accessibility of the data at other locations, or you participate in e-business or other applications that need to have multiple instances of the data for load balancing and fault tolerance. You can have only one master replica. Additional replicas must be read/write, read-only, or filtered. Most replicas should be read/write. They can handle object viewing, object management, and user login, just as the master replica can. They send out information for synchronization when a change is made. Read-only replicas cannot be written to. They allow object searching and viewing, and they are updated when the replicas of the partition synchronize. Do not depend on a subordinate reference or filtered replicas for fault tolerance. A subordinate reference is a pointer and does not contain objects other than the partition root object. Filtered replicas do not contain all objects within the partition. eDirectory SP4 allows for an unlimited number of replicas per partition, but the amount of network traffic increases as the number of replicas increase. Balance fault tolerance needs with network performance needs. You can store only one replica per partition on a server. A single server can store replicas of multiple partitions. Depending on your organization's disaster recovery plan, the major work of rebuilding the network after a loss of a server or location can be done using partition replicas. If the location has only one server, back up eDirectory regularly. Consider purchasing another server for fault tolerance replication. Some backup software does not back up eDirectory automatically. We recommend you exclude the DIB directory on your eDirectory server from any antivirus or backup software processes. 
Use the eDirectory Backup Tool to back up your DIB directory. For more information about backing up eDirectory, see Backing Up and Restoring NetIQ eDirectory. The limiting factor in creating multiple replicas is the amount of processing time and traffic required to synchronize them. When a change is made to an object, that change is communicated to all replicas in the replica ring. The more replicas in a replica ring, the more communication is required to synchronize changes. If replicas must synchronize across a WAN link, the time cost of synchronization is greater. If you plan partitions for many geographical sites, some servers will receive numerous subordinate reference replicas. eDirectory can distribute these subordinate references among more servers if you create regional partitions. The Tree partition is the most important partition of the eDirectory tree. If the only replica of this partition becomes corrupted, users will experience impaired functionality on the network until the partition is repaired or the eDirectory tree is completely rebuilt. You will also not be able to make any design changes involving the Tree. When creating replicas of the Tree partition, balance the cost of synchronizing subordinate references with the number of replicas of the Tree partition. Because partition changes originate only at the master replica, place master replicas on servers near the network administrator in a central location. It might seem logical to keep masters at remote sites. However, master replicas should be where the partition operations will occur. We recommend that major eDirectory operations, such as partitioning, be handled by one person or group in a central location. This methodology limits errors that could have adverse effects to eDirectory operations and provides for a central backup of the master replicas. The network administrator should perform high-cost activities, such as creating a replica, at times when network traffic is low. If users currently use a WAN link to access particular directory information, you can decrease access time and WAN traffic by placing a replica containing the needed information on a server that users can access locally. If you are replicating the master replicas to a remote site or are forced to place replicas over the WAN for accessibility or fault tolerance, keep in mind the bandwidth that will be used for replication. Replicas should only be placed in nonlocal sites to ensure fault tolerance if you are not able to get the recommended three replicas, increase accessibility, and provide centralized management and storage of master replicas. To control the replication of eDirectory traffic over WAN links, use WAN Manager. For more information, see Section 14.0, WAN Traffic Manager.
<urn:uuid:0793ad0f-1020-4302-a1d8-1e8688b69bec>
CC-MAIN-2017-04
https://www.netiq.com/documentation/edir88/edir88/data/a2iiie1.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00221-ip-10-171-10-70.ec2.internal.warc.gz
en
0.896212
1,218
2.828125
3
Kaspersky Lab, a leading developer of secure content management solutions, announces the successful patenting of advanced technology capable of identifying spam in raster images. Spammers often send out messages containing graphics with the aim of avoiding detection by spam filters. In order to identify this type of message the text has first to be distinguished from the picture. In order to make detection even more problematic, spammers often add complex diversionary graphics to the background image, interfere with the geometry of the letters and break up messages using bogus frames and lines. Optical Character Recognition (OCR) is the conventional method used to identify text in images. However, this is resource-intensive and does not offer the necessary level of accuracy. Unlike OCR, the newly patented technology guarantees rapid and precise detection of any spam contained within the images. It readily identifies any additional graphics placed in the image for the purposes of obfuscating the text and is not deterred by distorted text, which significantly increases the program's detection levels. At the core of the new technology is an algorithm that uses statistical analysis based on probabilities to determine whether an image contains text or not. The program examines the characteristics of the image and uses the algorithm to decide if the image contains recognizable text. The new technology, authored by the head of Kaspersky Lab's Anti-Spam Technology team Evgeny Smirnov, was issued with two patents – Nos. 7706613 and 7706614 – on 27 April, 2010 by the U.S. Patent and Trademark Office. On 4 May, an enhanced variant of the technology was issued with patent No. 7711192. This variant incorporates optimized object isolation that makes objects more readily distinguishable, and includes improved filtering of spam. "Mechanical methods of recognition require symbols to be of the same size and placed at regular intervals. Our new technology can work with a variety of warped or distorted letters or words, greatly enhancing the accuracy of detection. The patented method is also much faster at processing images," said Nadezhda Kashchenko, Chief Intellectual Property Counsel at Kaspersky Lab. Kaspersky Lab currently has more than 50 patent applications pending in the U.S., Russia, China and Europe that relate to a range of unique IT security technologies developed by the Company's personnel.
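The patented method itself is not public code. As a crude contrast only (this toy heuristic bears no relation to Kaspersky's statistical algorithm), one naive way to guess whether an image region contains text is to measure how often pixel intensity jumps along each row, since glyph strokes produce frequent sharp transitions:

```python
import numpy as np

def looks_like_text(gray, edge_thresh=40, density=0.08):
    """gray: 2-D uint8 array. Toy heuristic: text-heavy regions show
    frequent sharp horizontal transitions between ink and background.
    Thresholds are arbitrary illustrative values."""
    jumps = np.abs(np.diff(gray.astype(int), axis=1)) > edge_thresh
    return jumps.mean() > density

# Synthetic example: alternating dark/light columns mimic glyph strokes
stripes = np.tile(np.array([0, 255] * 16, dtype=np.uint8), (32, 1))
flat = np.full((32, 32), 128, dtype=np.uint8)
print(looks_like_text(stripes), looks_like_text(flat))  # True False
```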
<urn:uuid:7fc0c9ab-ce45-4db9-8efd-1b2d12c60fc0>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/product/2010/Kaspersky_Lab_successfully_patents_technology_for_combating_graphics_based_spam
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00523-ip-10-171-10-70.ec2.internal.warc.gz
en
0.917062
469
2.65625
3
A femtocell is a small base station, like a 3G or wi-fi access point, specifically designed for cell phones. The device boosts cellular reception indoors and can lower the monthly bill for both carriers and cell users. When a call is made from your cell phone, it is picked up by the femtocell, routed through your high-speed internet connection, and from there handed back to the cellular network. Above all, a femtocell lets people set up coverage in areas that were previously dead zones for service, giving you greater clarity on indoor calls. A femtocell is a cost-effective way to reduce static and gain the cellular service that your provider can't always offer. It allows unlimited minutes and will traditionally lower your monthly bill, since it enables cellular providers to offload the traffic that causes poor reception, eliminating the cost of more ineffective cell towers. Another advantage of using a femtocell within your home is better data bandwidth, which results in a superior experience with music, photos, and live video on your cell phone. Submitted by Nikki
<urn:uuid:86e8b590-93fd-451d-901a-91cd256bff05>
CC-MAIN-2017-04
https://www.myvoipprovider.com/en/news/femtocell
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00339-ip-10-171-10-70.ec2.internal.warc.gz
en
0.944725
238
2.84375
3
The Dark Side of Moore's Law Useless machines that nobody knows how to fix anymore are a staple of science fiction -- and a threat to businesses today. "Call it the dark side of Moore's Law: poor planning for parts obsolescence causes companies and militaries to spend progressively more to deal with the effects of aging systems--which leaves even less money for new investment, in effect creating a downward spiral of maintenance costs and delayed upgrades." That's a quote from " Trapped on Technology's Trailing Edge," an article in IEEE Spectrum by Peter Sandborn, associate professor of mechanical engineering at the University of Maryland and a member of the university's Center for Advanced Life Cycle Engineering. The absence of crucial parts now fuels a multibillion-dollar industry of obsolescence forecasting, reverse-engineering outfits, foundries, and unfortunately, a thriving market of counterfeits. Without advance planning, only the most expensive or risky options for dealing with obsolescence tend to remain open. A lot of the problems center on hardware, but aging software is an issue, too -- not to mention the software that tracks the hardware: "[T]he Federal Logistics Information System databases encode parts made by 3M in 64 different formats. That includes small differences in the way the company's name appears, like 3M versus 3 M or MMM, and so on. And that number doesn't even count the Air Force, Army, and Navy databases, nor all the ways that Lockheed Martin, Boeing, or other contractors might keep track of 3M's parts." One solution is stockpiling parts like Elaine Benes with her sponges: Consider one major telecommunications company (which wishes to remain unnamed for competitive reasons) that typically buys enough parts to fulfill its anticipated lifetime needs every time a component becomes obsolete. Currently, the company holds an inventory of more than $100 million in obsolete electronics, some of which will not be used for a decade, if ever. In the meantime, parts can be lost, degrade with age, or get pilfered by another product group--all scenarios that routinely undermine even the best intentions of project managers. Previously: "And the hot new tech job is...COBOL programmer?" Another version of the Vint Cerf quote: "[T]he probability of maintaining continuity of the software to interpret the old stuff is probably close to zero. Where would you find a projector for an 8mm film these days? If the new software can't understand, we've lost the information. I call this bit rot. It's a serious problem." Earlier at CIOI: Product Lifecycle Management.
<urn:uuid:4e926377-ba4c-4ed8-9109-874c0f2674b8>
CC-MAIN-2017-04
http://blogs.cioinsight.com/it-strategy/the-dark-side-of-moores-law.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00247-ip-10-171-10-70.ec2.internal.warc.gz
en
0.928007
537
2.59375
3
Ransomware is any malware that makes a computer system, or any of the files on it, inaccessible to the user and asks him or her to pay up to regain access. Unfortunately for us all, two separate ransomware variants have recently been detected by Kaspersky Lab experts.
The first is similar to the GpCode Trojan, which was first detected in 2004 and whose variants made regular appearances until 2008. It usually encrypts the data, and the chances of getting it back are low. This new, improved version overwrites the encrypted data instead of deleting it, making it impossible for users to recover it with data-recovery tools. The malware uses various encryption algorithms and encrypts only part of the data. Kaspersky's researcher advises victims not to make changes to their system if they notice they have been hit, until a solution to the problem of recovering the data is found. He also points out that if you see the following text pop up in a text file on your computer or on your background image, you should immediately shut down the computer to prevent at least some of the data from being encrypted:
The second piece of ransomware found today overwrites the master boot record (MBR), reboots the victim's computer, and shows this message:
Luckily for the victims, the claim that the drives and files are encrypted is false – the original MBR has only been overwritten with the malicious one, and the original is saved on a sector of the disk. The researcher offers two possible ways to restore the original MBR. First, try entering the password aaaaaaciip or aaaaadabia (depending on the variant of the malware), and if that doesn't work, use Kaspersky's tool for disk rescue. In any case, don't visit the website mentioned in the message or pay the $100 to get your data back.
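Since this variant only overwrites the MBR, which occupies the first 512 bytes of the disk, keeping your own backup copy of the MBR makes recovery far easier. A minimal read-only sketch follows; the Linux device path is an assumption to adjust for your system, and reading it requires root:

```python
# Read-only backup of the master boot record (first 512 bytes).
# '/dev/sda' is an assumed device path; run as root.
with open("/dev/sda", "rb") as disk:
    mbr = disk.read(512)

with open("mbr-backup.bin", "wb") as out:
    out.write(mbr)

# A valid MBR ends with the 0x55AA boot signature.
print(len(mbr), "bytes saved; boot signature:", mbr[510:512].hex())
```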
<urn:uuid:5370ab33-5cd3-4540-b322-0b63fb70281e>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2010/11/30/ransomware-abounds/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00367-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932201
410
2.6875
3
Gamification is everywhere, and it's expected to become even more of a force in the coming years – we see it in our daily lives in loyalty programs, wearable devices and training. In fact, the global market for gamification is expected to reach $5.5 billion in 2018, surpassing $10 billion by 2020. And a 2015 survey showed that 87% of retailers plan to introduce gamification to engage customers within the next five years. But what exactly is gamification?
Gamification is split into two categories: structural and content gamification. Structural gamification is the version that most people are familiar with – the use of game-like rewards in a non-game setting to create engagement. Examples include points, badges and leaderboards that encourage competition with other "players." Content gamification is very prevalent in the learning and development space – it is the use of game-like content, such as storytelling, characters or an element of mystery, to create engagement. Structural and content gamification can be used either separately or combined. Both are used to engage people and motivate them toward a certain end.
Gamification is now so common that we scarcely notice that the experiences around us have been "gamified." Consider, for example, the use of "likes" in social media. They are seen as an objective measure of the success of any post or announcement, and by extension, of the popularity of whoever placed the post. The power of the "like" has moved it beyond Facebook to other social media platforms, including LinkedIn, Instagram and Twitter. Airline points programs encourage loyalty and drive behaviours. We see gamification driving behaviour change with smart wearable devices, for example: the Apple Watch, Samsung Gear and FitBit feature fitness monitors that motivate the wearer to reach certain activity goals during the day. Marketing and advertising campaigns have long used story, characters or mystery to create an emotional response that "hooks" audiences and creates powerful, lasting impressions of products.
Reward and recognition programs are making their way into business training. Businesses recognize the power of gamification as a way to elicit an emotional response that motivates and engages employees, which can drive performance or change behaviour. Human beings crave connection, attention, recognition and gratification. Gamification can deliver all these things, encouraging user engagement in virtually every digital context. That's why Global Knowledge incorporates gamification elements in so many of its communications, courses, and custom digital learning programs. To learn more about how Global Knowledge gamification can work for your organization, please contact us at: LearningServices.CA@globalknowledge.com
<urn:uuid:5cfa26f1-df0c-42fa-9904-4a88741a46d4>
CC-MAIN-2017-04
https://www.globalknowledge.com/ca-en/content/articles/the-growth-of-gamification/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00275-ip-10-171-10-70.ec2.internal.warc.gz
en
0.935351
559
2.671875
3
The talent pipeline of female workers in science, engineering and technology fields is on the rise, yet many women – faced with hostile work environments, extreme work pressures and isolation – are fleeing these in-demand fields in droves. That's according to "Athena 2.0," a new report by the Center for Talent Innovation, which surveyed women in science, engineering and technology (SET) fields in the U.S., Brazil, China and India, and found that while women make up nearly 50 percent of SET college graduates in every nation, roughly one-third of them say they feel stalled and are likely not only to quit their jobs within one year but to leave their respective SET field entirely. "There's unique challenges that women in different industries face," Tara Gonsalves, a senior research associate at the Center for Talent Innovation, told Wired Workplace. "Women in science are struggling against the lab coat culture, women in engineering are facing the hard hat culture and women in technology are facing the geeky, late-night hacking culture." In the United States specifically, the majority (80 percent) of women love their work, yet many feel excluded from male-dominated "buddy networks" and lack female role models. Most SET women (86 percent) in the U.S. also lack sponsors or mentors, and nearly half (46 percent) believe senior managers more readily see men as leadership material. In addition, many SET women in the U.S. (54 percent) say they are eager to get to the top of their organizations, yet nearly one-quarter (23 percent) feel a woman could never get a top position at their company. U.S. respondents also felt their leadership does not endorse (62 percent) or implement (75 percent) ideas from SET women. As women have been touted as a key solution to the talent gap in SET fields, it is critical that organizations not only capitalize on the increasing number of women graduating with SET degrees, but also make greater efforts to retain these women and their unique contributions to SET fields. Above all, the report recommended that organizations provide more sponsorship opportunities to women to improve their chances of being perceived as leadership material. Organizations also should embrace a "speak-up culture" where women are fully engaged and free to have their ideas heard, the report states. "In some ways, this should be a huge opportunity for women in SET because there are these huge demands because of the developments in the industry and a shrinking immigrant labor pool," Gonsalves said. "In reality, there should be huge opportunities for women in the U.S. and around the world. It's something that organizations need to take seriously right now, as they're losing critical talent."
<urn:uuid:9102181b-b85c-4771-9ef4-2141033eab2f>
CC-MAIN-2017-04
http://www.nextgov.com/cio-briefing/wired-workplace/2014/02/women-fleeing-science-tech-fields/78935/?oref=ng-skybox
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00513-ip-10-171-10-70.ec2.internal.warc.gz
en
0.969383
603
2.515625
3
Definition: (1) A Turing machine with a small number of states that halts when started with a blank tape, but writes a huge number of non-blanks or takes a huge number of steps. (2) The problem of finding the maximum number of non-blanks written or steps taken for any Turing machine with a given number of states and symbols.
Note: The problem is well-defined but rapidly becomes impractical to determine for even a small number of states and symbols. This problem is related to the halting problem, since one must determine whether a machine is looping or eventually halts.
History, explanation, and links about the Busy Beaver Turing Machine. Heiner Marxen's currently known busy beaver results page.
Entry modified 2 November 2007.
Cite this as: Paul E. Black, "busy beaver", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 2 November 2007. Available from: http://www.nist.gov/dads/HTML/busyBeaver.html
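A minimal simulator makes the definition concrete. The transition table below is the well-known two-state, two-symbol champion, which writes 4 ones and halts after 6 steps; as the note says, larger cases rapidly become impractical:

```python
from collections import defaultdict

# Transition table: (state, symbol) -> (write, move, next_state).
# This is the 2-state busy beaver champion; H means halt.
RULES = {("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
         ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "H")}

tape, pos, state, steps = defaultdict(int), 0, "A", 0
while state != "H":
    write, move, state = RULES[(state, tape[pos])]
    tape[pos] = write
    pos += move
    steps += 1

print(sum(tape.values()), "ones in", steps, "steps")  # 4 ones in 6 steps
```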
<urn:uuid:a1c03e73-1c48-446a-a4e0-1e9846226802>
CC-MAIN-2017-04
http://www.darkridge.com/~jpr5/mirror/dads/HTML/busyBeaver.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00331-ip-10-171-10-70.ec2.internal.warc.gz
en
0.885759
268
3.109375
3
The Internet of Things, with its potential for interconnectivity and constant communication, is growing in popularity, but Prescient Solutions owner and CIO Jerry Irvine, who is also a member of the National Cybersecurity Partnership, warns that the infrastructure is uncertain and not secure. "Truthfully, it's scary as hell," Irvine is quoted in IT World as saying. "The Internet in and of itself is an insecure and highly risky environment. It's like walking down an alley at night without the appropriate security measures." Early remotely controlled devices were not very intelligent and existed simply to gather and share basic information, without any security measures. Modern "Internetable" devices are following the same trend, with little more than an individual user ID and password for security, making them what Irvine called "the weakest link in your network." "Most likely (hackers) are going to steal your information the same way they're stealing everything else, with a virus or malicious application that you download from the Internet," Irvine said. "Your PC is going to be breached, it's going to gather all your information, send it out in a script to somebody, and now they're going to have all your information. Antivirus solutions only protect you against 30 percent of known viruses and malware." Irvine advises consumers to put their IoT devices on a VLAN and communicate with them only via VPN. Although this is a bit extreme for the average consumer, he said they can have a professional install security measures for them; otherwise, they're better off avoiding IoT devices.
<urn:uuid:3d5f7bf9-1689-4185-a8e8-0346fc32e3e6>
CC-MAIN-2017-04
http://www.channelpartnersonline.com/news/2014/03/internet-of-things-has-limited-security.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00055-ip-10-171-10-70.ec2.internal.warc.gz
en
0.972419
318
2.6875
3
In supercomputing these days, it’s usually the big science applications (astrophysics, climate simulations, earthquake predictions and so on) that seem to garner the most attention. But a new area is quickly emerging onto the HPC scene under the general category of informatics or data-intensive computing. To be sure, informatics is not new at all, but its significance to the HPC realm is growing, mainly due to emerging application areas like cybersecurity, bioinformatics, and social networking. The rise of social media, in particular, is injecting enormous amounts of data into the global information stream. Making sense of it with conventional computers and software is nearly impossible. With that in mind, a story in MIT Technology Review about using a supercomputer to analyze Twitter data caught my attention. In this case, the supercomputer was a Cray XMT machine operated by the DOE at Pacific Northwest National Lab (PNNL) as part of their CASS-MT infrastructure. The application software used to drive this analysis was GraphCT, developed by researchers at Georgia Tech in collaboration with the PNNL folks. GraphCT is short for Graph Characterization Toolkit, and is designed to analyze really massive graph structures, like for example, the type of data that makes up social networks such as Twitter. For those of you who have been hiding under a rock for the last few years, Twitter is a social media site for exchanging 140-character microblogs, aka tweets. As of April 2010, there were over 105 million registered users, generating an average of 55 million tweets a day. The purpose of Twitter is, of course… well, nobody knows for sure. But it does represent an amazing snapshot of what is capturing the attention of Web-connected humans on any given day. If only one could make sense of it. Counting tweets or even searching them is a pretty simple task for a computer, but sifting out the Twitter leaders from the followers and figuring out the access patterns is a lot trickier. That’s where GraphCT and Cray supercomputing comes in. GraphCT is able to map the Twitter network data to a graph, and make use of certain metrics to assign importance to the user interactions. It measures something called “betweenness centrality,” to rank the significance of tweeters. Because of the size of the Twitter data and the highly multithreaded nature of the GraphCT software, the researchers couldn’t rely on the vanilla Web servers that make up the Internet itself, or even conventional HPC computing gear. Fine-grained parallelism plus sparse memory access patterns necessitated a large-scale, global address space machine, built to tolerate high memory latency. The Cray XMT, a proprietary SMP-type supercomputer is such a machine, and is in fact specifically designed for this application profile. I suspect the reason you don’t hear more about the XMT is because most of them are probably deployed at those top secret three-letter government agencies, where data mining and analysis are job one. The XMT at PNNL is a 128-processor system with 1 terabyte of memory. The distinguishing characteristic of this architecture is that each custom “Threadstorm” processor is capable of managing up to 128 threads simultaneously. Tolerance for high memory latencies is supported by efficient management of thread context at the hardware level. The system’s 1 TB of global RAM is enough to hold more than 4 billion vertices and 34 billion edges of a graph. 
To put that in perspective, one of the Twitter datasets from September 2009 was encapsulated in 735 thousand vertices and 1 million edges, requiring only about 30 MB of memory. Applying the GraphCT analysis, the data required less than 10 seconds to process. The researchers estimated that a much larger Twitter dataset of 61.6 million vertices and 1.47 billion edges would require only 105 minutes.

When the Georgia Tech and PNNL researchers ran the numbers, they found that relatively few Twitter accounts were responsible for a disproportionate amount of the traffic, at least in the particular datasets they analyzed. The largest dataset was made up of all public tweets from September 20th to 25th in 2009 containing the hashtag #atlflood (to capture tweets about the Atlanta flood event). In this case, at least, the most influential tweets originated with a few major media and government outlets.

We're likely to be hearing more about graph applications in HPC in the near future. Data sets and data streams are outpacing the capabilities of conventional computers, and demand for digesting all these random bytes is building rapidly. Since the optimal architectures for this scale of data-intensive processing are apt to be quite different from those of conventional HPC platforms (which tend to be optimized for compute-intensive science codes), this could spur a lot more diversity in supercomputer designs.

To that end, a new group called the Graph 500 has developed a benchmark aimed at this category of applications, and intends to maintain a list of the top 500 most performant graph-capable systems. The first Graph 500 list is scheduled to be released at the upcoming Supercomputing Conference (SC10) in New Orleans next month. In the meantime, if you're interested in giving GraphCT a whirl, a pre-1.0 release of the software can be downloaded for free from the Georgia Tech website. You'll just need a spare Cray XMT or POSIX-compliant machine to run it on.
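If you'd rather get a feel for the metric itself before tracking down XMT hardware, betweenness centrality is easy to play with on a laptop. Here's a minimal sketch, assuming Python with the networkx package installed; it illustrates the ranking idea on a toy retweet graph, not GraphCT's multithreaded implementation:

```python
# Toy betweenness-centrality ranking on a tiny "retweet" graph.
# This mimics the metric GraphCT computes, not its XMT
# implementation. Requires: pip install networkx
import networkx as nx

# Directed edges: (retweeter, original_author)
edges = [
    ("alice", "cnn"), ("bob", "cnn"), ("carol", "cnn"),
    ("cnn", "noaa"), ("bob", "noaa"),
    ("dave", "bob"), ("erin", "bob"), ("frank", "alice"),
]
G = nx.DiGraph(edges)

# Betweenness centrality: the fraction of shortest paths between
# other node pairs that pass through a given node -- a proxy for
# "influence" in the network.
scores = nx.betweenness_centrality(G)

for user, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{user:>6}  {score:.3f}")
```

On a graph this small the scores compute instantly; the XMT's contribution is doing the same calculation when the vertex count runs into the billions.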
18 kilowatts (kW) is a lot of power. According to eia.gov, the average US residential utility customer consumes just 1.3 – 1.8 kW of electricity. So 18 kW is about the same amount of power being used in 10 to 14 average US homes.

18 kW is also the equivalent of about 24 horsepower. That's enough power to run a very nice riding lawn mower. Or keep two average American cars with air conditioners running, cruising along at 60 mph.

An 18 kW tankless water heater can deliver 105-degree water from input water at 62 degrees at a rate of 2.5 gallons per minute. So a never-ending supply of comfortable hot water in your shower is just 18 kW away. This is possible because 18 kW can be used to generate about 61,400 BTUs per hour of heat. A burner on your gas stove might offer up to 12,000 BTUs per hour. A wood stove that might be used to heat a small home is probably around 55,000 BTUs per hour.

And in case none of that hit home: since 1 BTU is roughly a quarter of a food Calorie (a food Calorie is a kilocalorie), 18 kW works out to the calorie content of about 28 Big Macs per hour. Got it now? 18 kW is a lot of power, right?

Aside from being able to share some "interesting" trivia, I thought 18 kW was interesting because Internap now has customers using 18 kW per cabinet in our data centers. And not just one cabinet here or there, but a cage of 30 cabinets using up to 18 kW per cabinet. And it's not some monstrous football field-sized cage. It's a tidy 576 square feet of space. That's right, better than half a megawatt in less than 600 square feet. It's not quite 1.21 gigawatts, but some say that at 10:04pm on Saturday nights, this space can actually travel through time.

Of course, Internap customers know all about time travel. They've been to one of our facilities, where instead of confining your business to 2005, we have solutions for today's users. Like the fastest, most consistent Performance IP™ service. Like a flexible AgileCLOUD and hosting solution. And now also including ultra-high power density colocation services.

Does your organization want to get back to the future? Learn more about high data center power density in our Next-Generation Colocation white paper. Or come see what 18 kW of power looks like in person by touring our New York metro data center.
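For the skeptics, the conversions above are a few lines of Python away. The only assumption beyond standard conversion factors is pegging a Big Mac at roughly 550 food Calories:

```python
# Sanity-check the 18 kW comparisons using standard conversion factors.
KW = 18.0
HP_PER_KW = 1.341          # mechanical horsepower per kW
BTU_HR_PER_KW = 3412.14    # BTU/hour per kW
KCAL_PER_BTU = 0.252       # food Calories (kcal) per BTU
BIG_MAC_KCAL = 550.0       # assumed Calories in one Big Mac

print(f"{KW:.0f} kW ~= {KW * HP_PER_KW:.0f} hp")            # ~24 hp
print(f"{KW:.0f} kW ~= {KW * BTU_HR_PER_KW:,.0f} BTU/hr")   # ~61,400
big_macs_per_hr = KW * BTU_HR_PER_KW * KCAL_PER_BTU / BIG_MAC_KCAL
print(f"~{big_macs_per_hr:.0f} Big Macs per hour")          # ~28

# And the cage: 30 cabinets at 18 kW each
print(f"{30 * KW / 1000:.2f} MW in 576 sq ft")              # 0.54 MW
```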
German security provider ABUS Security-Center explains video surveillance needs in project planning, installation and component selection.

Video surveillance is primarily used for monitoring buildings and company premises. Surveillance cameras are also used for personal identification and access control. Aside from secured areas, surveillance can be found in medicine to monitor patients, in science for microscopy and measurement engineering, and in industry to monitor assembly lines. Video software defines the entire system's capabilities. From surveillance cameras to danger detection systems, video surveillance technology is multifaceted. This overview of options, with expert advice on technical issues, offers planning tips for video surveillance systems.

1. Are lighting conditions suitable?
The light intensity from available light sources must correspond to camera sensitivity, which should be measured. It can also be calculated from wattage, area reflection characteristics and the geometric distance from the cameras. Short camera distances have a positive effect.

2. How should lighting be positioned?
The quality of the lighting should increase depending on the desired amount of detail in the recording. The light spectrum of the light source at night and in the day should be similar. Cameras should be mounted outside the light source. To avoid flickering camera images, the camera should not be mounted directly on the light source, and it should not be directed into the beam of vehicle headlights. The light source should be directed at the monitored area, not the camera. If the surrounding area is too dark for the camera, an infrared illuminator may help.

3. What should the camera watch for?
Before cameras are installed, the operator must know what to watch for onscreen. Users must determine whether an overview of a large parking lot suffices or whether identification of individuals and license plates is required. It is impossible to achieve both with a single camera. The operator must specify the video system for perception, detection, recognition or identification (see Table 1).

4. Which lens to use?
Clearly displaying objects requires measuring the optimum distance between the camera and the object, along with the object's width. The focal length can be calculated after both measurements are taken: in millimeters, focal length equals the camera-object distance in meters, multiplied by the sensor size in millimeters, divided by the object width in meters. A 1/3-inch sensor would be entered as 4.8 millimeters, while a 1/4-inch sensor would be 3.6 millimeters. (A worked example in code follows at the end of this article.)

5. Installation location and power supply?
A lower IP protection class is suitable for indoor cameras, but outdoor cameras must be waterproof, dustproof or fitted with additional protective housing. Power is usually supplied over a DC 12 V or AC 230 V connection. As a mains power supply is rarely available at the installation location, a video-combi cable is recommended for analog cameras: apart from the BNC connector for the video signal, two wires for the power supply are integrated into the cable. Network cameras are usually powered by PoE.

6. How is data transmitted?
There are several ways to transfer camera video signals to the monitor, including cable (coaxial and dual-wire transmission), wireless transmission and network transmission.
Cable transmission: The most inexpensive method is coaxial cable (RG59). One advantage is that BNC connections are available on virtually all video surveillance devices.
Ranges of up to 150 meters are possible, with dual-wire transmission extending the range up to 2 kilometers.
Wireless transmission: Although wireless video offers wider transmission ranges, it is more expensive and more prone to malfunction, and it limits the number of cameras that can be connected.
Network transmission: Network cameras (LAN or WLAN) are increasingly used in video surveillance. Provided a local network is available, installation is simple and global access is possible.

7. How are recordings saved?
Recording technology has changed, with different media types available for recording, saving and managing data. Computers combined with special PCI monitoring cards and digital recorders are used. As network cameras are a growing trend, hybrid digital recorders with intelligent functions and user-friendly video management are in demand. Files can be stored to hard disks, CDs, DVDs, USB devices or SD cards.
Image rate is crucial for recorder storage. The human eye sees 24 images per second as fluid motion, with 25 fps classified as "real-time." Video recorded at 12.5 or 6.25 fps is less fluid but may be sufficient, depending on the requirements. In general, the more images recorded per second, the more data is written, taking up more storage. Compression also matters: without it, a real-time recording from a single day would take up several DVDs. Common compression types are MPEG-4, M-JPEG and H.264.

8. How can digital recorders and video cameras be expanded into a complete security solution?
Modern cameras and recording units feature alarm inputs and outputs, which can be programmed through video management software. For example, sensors or intrusion detectors can be integrated into the video surveillance system via the alarm inputs. The recorders can be programmed to begin recording when a detector is triggered, such as for video verification. A high-performance digital recorder is thereby transformed into a professional danger detection system.

9. Which software issues should I watch for?
Video software influences the range of applications for a video system. The software is the central point where video signals come together for processing, display and management. High demands are made on these programs: they should handle many channels with high image rates and resolutions, offer a wide range of functions, and be operated intuitively and swiftly. The software must display and manage several analog and digital cameras, as well as record. As a result, the maximum number of channels is a decisive aspect of video management software.
Aside from performance, simple operation ensures proper use in critical situations. The ideal solution can adjust the user interface according to user requirements or abilities. Uncomplicated access to important functions should be the priority: live image display, recording, alarm management and video analysis should be quickly identified. The archive and search functions should also be clearly structured. Following an incident, the corresponding images must be found quickly according to date, time, camera, video analysis information, PoS data or triggered alarms. Another central aspect of professional video management software is the presence of intelligent security functions (see table above).

10. Analog or IP?
Despite the paradigm shift of network camera revenues overtaking analog sales, older analog cameras are still sometimes preferred to network cameras.
Network cameras have integrated video servers and transmit the recorded video images to the local network or the Internet. The live images from the camera can be displayed, saved and managed on computers worldwide. Each camera is assigned an individual IP address for this purpose. This technology makes network cameras larger and more expensive than analog models, so covert video surveillance on an extra-small scale is not practical with them. The range of available analog cameras also remains larger, particularly when carrying out video surveillance under extreme conditions such as underwater recording or intense backlighting. However, digital technology has key advantages such as higher resolution, global access and remote installation. The cameras are integrated into existing networks, and the recorder does not need to be located on-site. Analog cameras can be integrated into a network by means of video servers.
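As promised in point 4, the focal-length rule of thumb translates directly into code. A minimal sketch in Python, assuming the sensor widths quoted above (4.8 mm for a 1/3-inch sensor, 3.6 mm for a 1/4-inch sensor):

```python
# Focal length (mm) = camera-object distance (m) * sensor width (mm)
#                     / object width (m)
SENSOR_WIDTH_MM = {"1/3": 4.8, "1/4": 3.6}

def focal_length_mm(distance_m: float, object_width_m: float,
                    sensor: str = "1/3") -> float:
    """Approximate lens focal length for a given scene geometry."""
    return distance_m * SENSOR_WIDTH_MM[sensor] / object_width_m

# Example: a 2 m wide gate viewed from 15 m with a 1/3-inch sensor
print(f"{focal_length_mm(15, 2):.0f} mm")         # -> 36 mm
# The same gate with a 1/4-inch sensor needs a shorter focal length
print(f"{focal_length_mm(15, 2, '1/4'):.0f} mm")  # -> 27 mm
```

In practice, one would pick the nearest available lens at or above the computed value so the object still fills the frame.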
The firm is looking to protect traffic privacy in cloud systems.

Cisco says it is experimenting with ciphers it claims can better protect traffic privacy in cloud systems and result in bandwidth and storage savings. The networking firm has designed what it calls the Flexible Naor and Reingold (FNR) encryption scheme, released under the open source licence LGPLv2.

Cisco software engineer Sashank Dara explained that since traditional block ciphers such as AES work on a fixed block length (128 bits in AES's case), small blocks of data get bloated when they're encrypted.

"FNR is an experimental small domain block cipher for encrypting objects (< 128 bits) like IPv4 addresses, MAC addresses, arbitrary strings, etc. while preserving their input lengths," he explained in a blog post.

"Such length preserving encryption would be useful when encrypting sensitive fields of rigid packet formats, database columns of legacy systems, etc. in order to avoid any re-engineering efforts for privacy preservation."

He added that the "length preserving nature" of FNR could result in bandwidth and storage savings for cloud providers.

"Like all deterministic encryption methods, this does not provide semantic security, but determinism is needed in situations where anonymizing telemetry and log data (especially in cloud based network monitoring scenarios) is necessary," he said. "This also lends itself nicely to achieving searchable encryption operations such as provided [by] the cryptdb project. Due to the length preserving nature of FNR, it is a better fit in some scenarios than cryptdb, as the cryptdb method expands the data size, [whereas FNR results in] bandwidth and storage savings."
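To make "length preserving" concrete, here is a toy Python sketch: a four-round Feistel-style permutation that maps a 32-bit value, such as an IPv4 address, to another 32-bit value and back with no size expansion. It is illustrative only; it is not the FNR construction and offers no real security.

```python
# Toy length-preserving permutation over 32-bit values such as IPv4
# addresses. Shows the *idea* behind small-domain ciphers like FNR;
# this is NOT the FNR algorithm and is NOT cryptographically secure.
import hashlib

def _round_fn(half: int, key: bytes, rnd: int) -> int:
    # Derive a 16-bit round value from the key, round number and half.
    h = hashlib.sha256(key + bytes([rnd]) + half.to_bytes(2, "big"))
    return int.from_bytes(h.digest()[:2], "big")

def encrypt32(value: int, key: bytes, rounds: int = 4) -> int:
    left, right = value >> 16, value & 0xFFFF
    for rnd in range(rounds):
        left, right = right, left ^ _round_fn(right, key, rnd)
    return (left << 16) | right

def decrypt32(value: int, key: bytes, rounds: int = 4) -> int:
    left, right = value >> 16, value & 0xFFFF
    for rnd in reversed(range(rounds)):
        left, right = right ^ _round_fn(left, key, rnd), left
    return (left << 16) | right

ip = 0xC0A80001                      # 192.168.0.1
ct = encrypt32(ip, b"demo-key")
assert decrypt32(ct, b"demo-key") == ip
print(f"{ip:08x} -> {ct:08x}")       # output is still exactly 32 bits
```

Because input and output occupy the same 32 bits, an encrypted address can slip into a rigid packet format or a legacy database column unchanged, which is the property Dara is describing.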
I'm an amateur woodworker and, as such, am often to be heard reciting the woodworker's creed: "measure twice, cut once." The idea goes that, rather than making annoying and time-consuming mistakes, it is best to measure a task more than once, and only when one is sure of the measurement to make a single cut, to the correct length.

The same metaphor exists in the world of applications, especially for those who are wrestling with the annoying task of having to bring critical old code into more modern formats (the example GitHub gives is the mind-bending task of bringing Fortran application code into Ruby). Scientist 1.0 is the first public iteration of an open source tool that GitHub uses internally and is now releasing to the rest of the world. It allows the creation of new code in parallel to the old code, and the running of clear and concise tests on the new code within a real environment, all without having to actually run it live in production.

Jesse Toth, a principal engineer at GitHub, was the brains behind the tool and has written extensively about it on the GitHub engineering blog. Toth details the common architectural pattern that is often used for making large-scale changes, Branch by Abstraction. In this pattern, an abstraction layer is inserted around the code that is intended to be changed; the abstraction layer delegates to the existing code to begin with, and to the substitute code once the cut-over occurs.

Abstractions are a good way to deal with the routing of data from old code to new, but they don't resolve the question of whether the behavior of the new code will match the old system. GitHub (and organizations more generally) not only need to ensure that the code will be used in the correct place, but that it will actually work.

Toth went on to explain why standardized testing isn't enough to ensure new code replicates the behaviors of the old code. Tests, in the case of complex systems, are unlikely to cover all the possible cases of actual usage. Furthermore, Toth raises the issue of bugs in cut-over code, writing that:

"Since software has bugs, given enough time and volume, your data will have bugs, too. Data quality is the measure of how buggy your data is. Data quality problems may cause your system to behave in unexpected ways that are not tested or explicitly part of the specifications. Your users will encounter this bad data, and whatever behavior they see will be what they come to rely on and consider correct. If you don't know how your system works when it encounters this sort of bad data, it's unlikely that you will design and test the new system to behave in the way that matches the legacy behavior. So, while test coverage of a rewritten system is hugely important, how the system behaves with production data as the input is the only true test of its correctness compared to the legacy system's behavior."

Which is where Scientist comes in. Scientist works by creating a lightweight abstraction called an experiment around the code that is to be replaced. The original code, the control, is delegated to by the experiment abstraction, and its result is returned by the experiment. The rewritten code is added as a candidate to be tried by the experiment at execution time. When the experiment is called at runtime, both code paths are run. The results of the control and the candidate are compared and, if there are any differences, those are recorded. The duration of execution for both code blocks is also recorded.
Then the result of the control code is returned from the experiment. By comparing the behavior of old code and new, a continual feedback loop is created to ensure that, before the code is cut over, there are no differences between the two systems.

Increasingly, organizations will need to think about bringing legacy applications kicking and screaming into the modern world. Often that will entail replacing legacy code. Scientist looks like an extremely useful tool to help with that task.
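Scientist itself is a Ruby library, but the experiment pattern it implements is language-agnostic. Here is a minimal sketch of the idea in Python; the helper name and structure are hypothetical, not Scientist's actual API:

```python
# Sketch of the experiment pattern: run control and candidate side
# by side, record timings, log mismatches, and always return the
# control's result. Hypothetical helper, not Scientist's (Ruby) API.
import time

def run_experiment(name, control, candidate, publish=print):
    t0 = time.perf_counter()
    control_value = control()            # legacy path stays authoritative
    control_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    try:
        candidate_value = candidate()
        mismatch = candidate_value != control_value
    except Exception as exc:             # a buggy candidate must not break prod
        candidate_value, mismatch = repr(exc), True
    candidate_s = time.perf_counter() - t0

    if mismatch:
        publish(f"[{name}] mismatch: control={control_value!r} "
                f"candidate={candidate_value!r}")
    publish(f"[{name}] control={control_s * 1000:.1f} ms, "
            f"candidate={candidate_s * 1000:.1f} ms")
    return control_value                 # callers only ever see the control

# Usage:
# allowed = run_experiment("permissions",
#                          lambda: legacy_check(user),
#                          lambda: rewritten_check(user))
```

Because the control's result is always the one returned, the candidate can be wrong, slow, or crash outright without users ever noticing; the mismatch log is what drives the feedback loop.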
By Michael Chance
University of New Haven
M.S. National Security

Since the events of September 11, 2001, terrorism has been an issue at the forefront of national security. This paper will explore the more specific threat of cyberterrorism that exists and why we are in danger, examine incidents of cyberterrorism and our response, and provide a look into the role it will play in the future. This review of cyberterrorism was conducted using open source information such as unclassified government documents and newspaper articles concerning the subject matter.

To understand cyberterrorism, one must first be familiar with terrorism. According to the Code of Federal Regulations, terrorism is "the unlawful use of force and violence against persons or property to intimidate or coerce a government, the civilian population, or any segment thereof, in furtherance of political or social objectives." (Code of Federal Regulations, Title 28, Section 0.85) This concept is fairly easy to grasp, and most Americans have an understanding of what terrorism is. But when talking about cyberterrorism there seems to be some confusion as to its components.

In February of 2002, Executive Assistant Director of the FBI Dale Watson gave testimony before Congress stating that "cyberterrorism — meaning the use of cyber tools to shut down critical national infrastructures (such as energy, transportation, or government operations) for the purpose of coercing or intimidating a government or civilian population — is clearly an emerging threat." (http://www.fbi.gov/congress/congress02/watson020602.htm) While still a form of terrorism, it is a different approach than conventional terrorism.

Dorothy Denning, a well-known information security researcher, provides a more comprehensive definition: "Cyberterrorism is the convergence of terrorism and cyberspace. It is generally understood to mean unlawful attacks and threats of attack against computers, networks, and the information stored therein when done to intimidate or coerce a government or its people in furtherance of political or social objectives. Further, to qualify as cyberterrorism, an attack should result in violence against persons or property, or at least cause enough harm to generate fear. Attacks that lead to death or bodily injury, explosions, plane crashes, water contamination, or severe economic loss would be examples. Serious attacks against critical infrastructures could be acts of cyberterrorism, depending on their impact. Attacks that disrupt nonessential services or that are mainly a costly nuisance would not." (http://www.cs.georgetown.edu/~denning/infosec/cyberterror.html)

Richard Clarke, a counterterrorism expert and special advisor to President Bush on cyberspace security, described our vulnerability to a cyberterrorist attack as a "digital Pearl Harbor": one we would never see coming and that would have devastating effects. We can no longer turn a blind eye to these possibilities. In moving forward, "it is imperative to imagine the ways terrorists could disrupt the nation's information infrastructure and the computer networks that control telecommunications, the electric grid, water supplies and air traffic."

This research was conducted using open source documents available to the public. All documents are unclassified and open for viewing. References used for the analysis of the topic were found via the Internet.
Examples of works cited are unclassified government documents found on government websites using search terms related to the topic. Internationally distributed newspapers were also used to support the construction of the paper. Other valid and reliable sources used in collecting data were government websites for agencies such as the Federal Bureau of Investigation. Additional research was pursued utilizing college and university websites that posted studies of similar matters. Furthermore, books written by experts were examined and relevant information was extracted to reinforce the views within this text.

In reviewing the literature, it was important to distinguish that which was reputable and worthy of noting. Information that was not corroborated, or that came from a source that was not credible, was examined and excluded from use based on its merit. Data from respectable scholars and universities were studied and surveyed. Ideas were compared and contrasted and then used to support my thesis. Inquiries into this particular field produced numerous results. A logical analysis of the material was conducted and is presented in this paper.

REVIEW OF THE LITERATURE

Critical infrastructure is defined by the USA PATRIOT Act as "systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters." (United States, 2001) It can be said that this infrastructure represents the backbone of the United States. Minimizing our vulnerabilities to terrorist threats is a shared responsibility that falls on federal, state, and local government as well as private industry. According to the National Strategy for the Physical Protection of Critical Infrastructure and Key Assets, we must commit to "secur[ing] the infrastructure and assets vital to our national security, governance, public health and safety, economy, and public confidence." (United States, 2003. Pg vii)

This network is made up of the institutions that our country relies on to function as a society. It comprises "agriculture, food, water, public health, emergency services, government, defense industrial base, information and telecommunications, energy, transportation, banking and finance, chemical industry and hazardous material, and postal and shipping." (United States, 2003. Pg 6) These represent the staples of our nation and its economy. Even though they are separate, self-governing entities, they are interdependent upon one another. The relationship is complex, and the disruption of one could adversely affect the others. Each sector plays a key role in our daily lives, providing services that are invaluable.

This infrastructure is so essential that in 1996 President Clinton devised Executive Order 13010, Critical Infrastructure Protection, which addresses "threats of electronic, radio-frequency, or computer-based attacks on the information or communications components that control critical infrastructures ('cyber threats')." (http://www.fas.org/irp/offdocs/eo13010.htm)

The components of agriculture, food, and water represent the most basic needs of the people of the United States. All citizens require a reliable food supply and clean drinking water. Without these necessities people would go hungry or even starve. Even something as simple as washing your hands or brushing your teeth would be impossible.
Any threat to these sectors could spread panic or fear among the people. Any disruption in public health and emergency services would jeopardize the safety of everyone. Hospitals maintain human life and provide assistance to those in need. Public safety departments such as fire, police, and ambulance services provide emergency assistance that is invaluable. You cannot put a price on the services that preserve human life and property.

Those that provide telecommunications, energy, and transportation are also taken for granted. In our daily lives we make phone calls and use the Internet for communications. We travel on highways and fly from airports to our destinations. Electricity is so vital to our everyday lives, yet we fail to appreciate its value. Oil fuels our cars and heats our homes. Without these services our society would break down.

The United States relies heavily on the banking and finance industry to fuel the economy. Also important are postal and shipping services. Business depends heavily on the mail system and the shipment of goods. Both are key elements in keeping the economy thriving. Other key players in the economy are the chemical industry and hazardous materials sector and the defense industrial base. There are safety issues as well as economic repercussions involved with chemicals and hazardous substances. Additionally, the Department of Defense is tasked with securing our nation. It harbors military secrets and the plans to carry out that task. Cleared defense contractors are also responsible for supporting this goal.

These groups that make up the critical infrastructure are the pillars that support the United States. If they are compromised or exploited, the United States would cease to function properly and crumble.

Many of these sectors are vulnerable to cyberterrorism due to their centralized control systems, known as Supervisory Control and Data Acquisition (SCADA). The American National Standards Institute (ANSI) defines SCADA "as a system operating with coded signals over communication channels so as to provide control of Remote Terminal Unit (RTU) equipment." (http://www.inl.gov/technicalpublications/Documents/3310858.pdf) This simply means that they "contain computers and applications that perform key functions in providing essential services and commodities [and] are part of the nation's critical infrastructure and require protection from a variety of threats that exist in cyber space today." (United States, 2002. Pg 2)

These centralized networks may provide a single point of failure for an organization that plays a key role in critical infrastructure. They make it easier to carry out a cyber attack and present vulnerabilities to anyone with hacking abilities. Most of these SCADA systems are secure, but some utilize public telephone lines in their transmissions.

An example of an exploited vulnerability in a SCADA network is the staged cyber attack performed in March 2007, known as "Aurora." The experiment was conducted by the Department of Energy in Idaho, where "researchers who launched an experimental cyber attack caused a generator to self-destruct, alarming the federal government and electrical industry about what might happen if such an attack were carried out on a larger scale." (http://www.cnn.com/2007/US/09/26/power.at.risk/index.html) It proved that critical infrastructure SCADA networks can be hacked. More importantly, it showed that in the event of such an attack, control can be gained and damage inflicted, as opposed to the system merely being shut down.
This experiment was an eye-opener for those tasked with securing critical infrastructure and raised concerns about similar attacks. In identifying potential threats, there is now proof that it can be done and it has been done. Evan Kohlmann, a renowned counterterrorism expert, published an article in 2006 in which he paraphrased Clarke's "fears of a 'digital Pearl Harbor' — a cyberattack against critical infrastructure." (http://www.foreignaffairs.org/20060901faessay85510/evan-f-kohlmann/the-real-online-terrorist-threat.html) His views are in alignment with Clarke's in taking preventative measures to "keep terrorists from breaching sensitive government networks." (http://www.foreignaffairs.org/20060901faessay85510/evan-f-kohlmann/the-real-online-terrorist-threat.html) He stresses the importance of this security and how it has become a growing threat. It has become apparent that these are real threats that need to be addressed.

Examples of Cyberterrorism

The Execution of Daniel Pearl

Probably the best example of using the Internet as a tool for cyberterrorism is the case of Daniel Pearl, a Wall Street Journal reporter who was kidnapped and murdered in February 2002. Pearl, whose family is Jewish, was kidnapped by a group known as The National Movement for Pakistani Sovereignty while in Karachi, Pakistan. Pearl was investigating the infamous shoe bomber, Richard Reid, and thought he was meeting with a source for an interview. Instead, he was abducted and subsequently beheaded. The execution and decapitation were videotaped and later posted on the Internet.

The video served as a message to spread religious, political, and ideological views. As with most terrorist events of this kind, the intention was also to spread fear and to coerce and intimidate foreign governments, specifically the United States. The video was graphic in nature, showing Pearl beheaded with a sword and then the executioner holding his head. The explicit video promotes terrorism and makes use of the Internet to recruit new members and motivate those already on board.

Pearl's captors sent demands via a Hotmail e-mail address. Eventually, law enforcement traced the IP address, which led to three arrests. The person charged with the murder was Khalid Sheikh Mohammed, who is affiliated with al-Qaeda. He is being held at Guantanamo Bay, Cuba, where he awaits his fate.

Imam Samudra

Abdul Aziz, aka Imam Samudra, has been linked to several bombings, including the October 2002 Bali nightclub bombing in which 202 people were killed. He is part of a group called Jamaah Islamiah, which is linked to al-Qaeda. It is believed that he was the mastermind behind the bombing, responsible for organizing and financing the attack. Aziz used the Internet to obtain fraudulent credit card information in order to finance the bombing. Investigators claimed that he "left a trail of evidence on his personal computer of how he tried to commit credit card fraud to help finance terror attacks." (Fayler, 2007. Pg 24) Aziz was sentenced to death for his role in the killings and is being held in an Indonesian prison. While incarcerated he has been busy on the Internet and is still active in spreading his message.
During his time behind bars he wrote a book "in which he described how to perpetrate credit card fraud as a means of funding terrorist attacks." (http://www.timesonline.co.uk/tol/news/world/asia/article617892.ece) The book contains a chapter titled "Hacking - Why not?" In this portion of his book, Aziz "urges fellow Muslim radicals to take the holy war into cyber-space by attacking US computers specifically for the purpose of credit card fraud." (http://epress.anu.edu.au/sdsc/cyber_warfare/mobile_devices/ch04s06.html) Aziz goes on to guide aspiring terrorists by telling them how to make contact with others with similar interests in chat rooms and how to communicate using e-mails and instant messaging. He also instructs these individuals how to browse the Internet to collect intelligence and download tools to carry out credit card fraud. Overall, his efforts are helping those who wish to organize, recruit, and raise funds for the purpose of carrying out terrorist attacks.

Younes Tsouli

A resident of the United Kingdom, Younes Tsouli has been referred to as the world's most wanted cyber-jihadist. Tsouli was responsible for many websites and web forums posted on the Internet that promote terrorism. His support for al-Qaeda and Islamic terrorism is clearly stated on these sites. His web forums, Islamic Terrorists and Islamic Supporters Forum, contained images of terrorism and helped others plan attacks. By posting this content, "he became the main distributor of video material from al-Qaeda in Iraq." (http://news.bbc.co.uk/2/hi/americas/7191248.stm) He was responsible for "covertly and securely disseminat[ing] manuals of weaponry, videos of insurgent feats such as beheadings and other inflammatory material." (http://www.washingtonpost.com/wp-dyn/content/article/2006/03/25/AR2006032500020.html)

Due to his technical abilities and support for Islamic terrorism, Tsouli quickly became involved in its workings. He was recruited by high-ranking al-Qaeda members to aid and assist in the movement. His websites provided information on how to acquire explosives and make bombs. They also gave instructions and often had hidden links to more extremist information. There was also hacked software offered for download from these sites. Tsouli once "posted a 20-page message titled 'Seminar on Hacking Websites,' to the Ekhlas forum. It provided detailed information on the art of hacking, listing dozens of vulnerable Web sites to which one could upload shared media." (http://www.washingtonpost.com/wp-dyn/content/article/2006/03/25/AR2006032500020.html) Al-Qaeda provided the funding for his operations.

Authorities were able to locate Tsouli through Internet and phone records. Eventually, "investigators raided Tsouli's house, where they found stolen credit card information [and] looking further, they found that the cards were used to pay American Internet providers on whose servers he had posted jihadi propaganda." (http://www.washingtonpost.com/wp-dyn/content/article/2006/03/25/AR2006032500020.html) He was sentenced to 16 years in prison for his involvement with terrorist groups. At the time, "his conviction was the first for incitement to commit an act of terrorism through the Internet."

Georgian Web Site Defaced

In mid-2008, conflict arose between Russia and the small country of Georgia, which lies on Russia's southern border. The conflict was fought over control of South Ossetia, which borders both Russia and Georgia.
Both countries attempted to assume control of South Ossetia, and military action was taken by both sides. Eventually Georgia withdrew its troops, conceding to Russia. As part of this conflict, cyber attacks were launched against "the main website of the Georgian Ministry of Foreign Affairs (mfa.gov.ge)."

According to McAfee, web site defacement is "changing the home page or other key pages of a Web site by an unauthorized individual or process." (http://www.mcafee.com/us/threat_center/glossary.html) Such unauthorized access can damage or threaten the credibility and reputation of the victim. In this case, images of Georgian President Mikheil Saakashvili were likened to those of Adolf Hitler. The images were meant to send a politically motivated message. Vandalizing the website of the Ministry of Foreign Affairs of Georgia was an attempt to damage its reputation and discredit it amidst a politically driven war. Defacement is a new tactic for fighting a war that does not employ violence but instead spreads propaganda across the Internet.

Response to Cyberterrorism

In 1996 President Bill Clinton issued Executive Order 13010, which dealt with the protection of critical infrastructure. It mentioned "threats of electronic, radio-frequency, or computer-based attacks on the information or communications components that control critical infrastructures ('cyber threats')." (http://www.fas.org/irp/offdocs/eo13010.htm) It was a basic plan to deal with threats to critical infrastructure and outlined the agencies that were part of this plan. Mainly, the objective was to protect institutions and to have plans for their continued operations.

Once again, in May 1998, the issue of cybersecurity was addressed in Presidential Decision Directive 63. This directive was aimed at protecting the critical infrastructure discussed earlier in this paper. It summarized the need to address vulnerabilities, put the burden on the federal government and its agencies to get involved, and stressed public/private partnerships. President Clinton stated his intention to "take all necessary measures to swiftly eliminate any significant vulnerability to both physical and cyber attacks on our critical infrastructures, including especially our cyber systems." (http://www.fas.org/irp/offdocs/pdd/pdd-63.htm)

PDD 63 would later be superseded by Homeland Security Presidential Directive 7. Issued by President George W. Bush in 2003, it was meant to "update policies intended to protect the country from terrorist attacks." (http://searchsecurity.techtarget.com/sDefinition/0,,sid14_gci1144956,00.html) Since the events of 9/11 there were new concerns and the need for new guidelines. The directive continued the path of prevention and security but also identified the potential for serious cyber attacks. It set out to "establish a national policy for Federal departments and agencies to identify and prioritize United States critical infrastructure and key resources and to protect them from terrorist attacks." (http://www.whitehouse.gov/news/releases/2003/12/20031217-5.html)

In February of 2003 the White House released The National Strategy to Secure Cyberspace. This report is "a 76-page document outlining a sustained, multi-faceted approach to safeguarding the nation's vital communications technologies." (http://usinfo.state.gov/journals/itgic/1103/ijge/gj11.htm) It acknowledged the importance of computer networks and their security in maintaining national security.
The plan outlines the need for a planned response to cyber attacks as well as preparedness and prevention methods. In President Bush's letter addressing Americans in the document, he describes it as "a framework for protecting this infrastructure that is essential to our economy, security, and way of life." (United States, 2003)

The strategy itself is made up of five key points: "(1) a national cyberspace security response system; (2) a national cyberspace security threat and vulnerability reduction program; (3) a national cyberspace security awareness and training program; (4) securing governments' cyberspace; and (5) national security and international cyberspace security cooperation." (United States, 2003. Pg 54) The plan addresses particular safeguards and the role of federal, state, and local government agencies.

Overall, the United States has made it clear that there are concerns about protecting America's critical infrastructure and securing cyberspace. Efforts have been made to address these concerns and to clearly define whose responsibility it is to do so. In moving forward, it is important for our country to continue to identify new threats and respond to them with solutions.

The Future of Cyberterrorism

In moving forward in the age of technology, it would be foolish to discount the risks of cyberterrorism. It is important to keep in mind that "the next generation of terrorists are now growing up in a digital world, one in which hacking tools are sure to become more powerful, simpler to use, and easier to access." (Weimann, 2006. Pg 170) If you consider how easy it is to attain the tools and skills necessary to carry out an attack, you must then consider the true threat that cyberterrorism poses to our national security. Knowing the intent of terrorists opens up many possibilities for using technology to achieve their goals. Consequently, "in the future, the logic bomb rather than the conventional bomb may prove to be the terrorist weapon of choice." (Hodge, 1999. Pg 105)

It is expected that "in the future, the threat of cyberterrorism appears more ominous… cyberterrorists have the advantage of attacking from almost anywhere, by themselves, at a minimal expense, without risk of harm, and with limited risk of detection." (Purpura, 2007. Pg 61) Many experts believe that this is a real threat and must be dealt with. As suggested by Barry Collin, Senior Research Fellow at the Institute for Security and Intelligence, "cyber-terrorism… is a misnomer in that the consequences are not limited to the world of cyberspace but occur in the physical world." (Hodge, 1999. Pg 105) He goes on to say that "if we fail to be ready when and where the virtual and physical worlds converge, then all that will be left is terror." (http://afgen.com/terrorism1.html)

Historically, terrorism has been characterized by acts of violence carried out with the intent to cause panic and fear. But with cyberterrorism, "the face of terrorism is changing. While the motivations remain the same, we are now facing new and unfamiliar weapons." (http://afgen.com/terrorism1.html) Frank Cilluffo of the Office of Homeland Security stated that "while bin Laden may have his finger on the trigger, his grandchildren may have their fingers on the computer mouse." (Weimann, 2006. Pg 170) The emerging threat of cyberterrorism is quickly growing and becoming a reality. We can no longer sit idly by and disregard the possibility of a cyber attack.
It is "likely that the threat will increase in the future for a coordinated cyberattack… cyberterrorism [will] become increasingly more mainstream in the future." (Wilson, 2005. Pg 22) If we fail to prepare for this future, we allow terrorists an avenue to accomplish their goals. We must consider the likelihood that "tomorrow's terrorist may be able to do more with a keyboard than with a bomb." (Arquilla & Ronfeldt, 2001. Pg 282)

The United States needs a plan for dealing with cyberterrorism. Efforts need to be undertaken and precautionary measures put in place. A strategy for cyberterrorism should be twofold: first, a proactive approach that anticipates future events and attempts to avoid them. The best way to deal with an attack is to be prepared and prevent it from happening in the first place. Second, a reactive approach that deals with the response to a cyberterrorism event. This involves identifying and reacting to an attack.

In an effort to thwart cyberterrorism, "a proactive approach to securing the global information infrastructure may help to prevent future disasters in the making." (Colarik, 2006. Pg xvii) We must locate our vulnerabilities and harden them before they are exploited. Critical infrastructure needs to be secured, as do the computer networks that control it. The need for secure computer networks applies not only to government agencies but also to private sector companies that hold databases of crucial information. Unlawful access to these networks could be catastrophic. A proactive strategy needs to be updated regularly and stay one step ahead of those it is designed to protect against. Continuing safeguard measures need to be explored in order to seriously address the invisible threat against the United States.

In the near future, "cyber-terrorism will increase and likely target U.S. government facilities, as well as infrastructure centers and nongovernmental organizations such as relief agencies." (http://www.israel21c.org/bin/en.jsp?enScript=PrintVersion.jsp&enDispWho=Articles^l40) And when these attacks are carried out, we need to be prepared. A reactive approach is one that has a response to the attack. There should be counter-cyberterrorism standards in place for such activity. Our preparedness and response to an attack should be planned. We need to be able to detect, and then recover from, any attempt at illegally accessing a computer network. A successful reactive strategy "will detect and respond to Internet events… and coordinate cybersecurity and incident response with federal, state, local, private sector and international partners." (http://www.pcworld.com/article/111066/homeland_security_to_oversee_cybersecurity.html)

Overall, the future of cyberterrorism and the role it will play is somewhat unknown. But what is known is that the threat exists and it is real. The United States must take measures to safeguard against cyberterrorism. There are documented events of cyberterrorism showing how terrorists use cyberspace to conduct their business. Additionally, the threat to our critical infrastructure is far too serious to be taken lightly. The threat of cyberterrorism has been addressed by several presidents and acknowledged by many reputable professionals. The government has also played a role by drafting numerous Executive Orders and Presidential Directives. But it seems these efforts to assess and manage the threat fall short. More steps need to be taken for awareness and incident response, and they need to be taken now.
If the United States continues to struggle to allocate resources and fails to take this threat seriously, we are in jeopardy of a digital Pearl Harbor and open ourselves up to a repeat of the events of 9/11. If we continue to question whether this threat is viable and do nothing about it, we are vulnerable to an attack. Ultimately, "the threat of cyberterrorism may be exaggerated and manipulated, but we can neither deny it nor dare to ignore it." (Weimann, 2004)

References

Arquilla, J., & Ronfeldt, D. F. (2001). Networks and netwars: The future of terror, crime, and militancy. Santa Monica, CA: Rand.

Code of Federal Regulations Title 28 Section 0.85. (2007). Government Inst.

Colarik, A. M. (2006). Cyber terrorism: Political and economic implications. Hershey, PA: Idea Group Pub.

Fayler, G. (2007). The globalization of terror funding. Ramat-Gan: The Begin-Sadat Center for Strategic Studies, Bar-Ilan Univ.

Hodge, C. C. (1999). Redefining European security. Garland reference library of social science, v. 1154. New York: Garland.

Purpura, P. P. (2007). Terrorism and homeland security: An introduction with applications. The Butterworth-Heinemann homeland security series. Amsterdam: Butterworth-Heinemann.

United States. (2001). Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism (USA PATRIOT ACT) Act of 2001. Washington, D.C.: U.S. G.P.O.

United States. (2002). 21 steps to improve cyber security of SCADA networks. Washington, D.C.: President's Critical Infrastructure Protection Board.

United States. (2003). The national strategy for the physical protection of critical infrastructures and key assets. Washington, D.C.: [Dept. of Homeland Security?].

United States. (2003). The national strategy to secure cyberspace. Washington, D.C.: [Dept. of Homeland Security?].

Weimann, G. (2006). Terror on the Internet: The new arena, the new challenges. Washington, D.C.: United States Institute of Peace Press.

Weimann, G. (2004). Cyberterrorism: How real is the threat? Washington, DC: U.S. Institute of Peace.

Wilson, C. (2005). Computer attack and cyber terrorism: Vulnerabilities and policy issues for Congress. Washington, D.C.: Congressional Research Service, Library of Congress.
14 Amazing DARPA Technologies On Tap

Go inside the labs of the Defense Advanced Research Projects Agency for a look at some of the most intriguing technologies they're developing in computing, electronics, communications, and more.

Early pilots in DARPA's so-called Mind's Eye program have demonstrated an ability to recognize and describe certain events. The program uses artificial intelligence to analyze video feeds from a camera, then describe what is going on within the camera's field of vision.

Image credit: DARPA
What Does Flame Mean?

Once a system is infected, Flame begins a complex set of operations: sniffing network traffic, taking screenshots, recording audio conversations, intercepting the keyboard, and so on. All this data is available to the operators through the link to Flame's command-and-control servers.

Lots of people are asking, "What does Flame do?" The more important question, however, as the era of cyber war continues to evolve, is "What does Flame mean?"

Flame, in fact, shows just how far and fast we've come in cyber war. In the "old days" we saw the simple use of DDoS when Russia attacked Estonia in April of 2007. Just five years later, Flame shows the world that cyber war has evolved into something stealthier, more effective and a serious part of military strategy. To borrow Andy Grove's phrase, we've hit an inflection point. Consider:

- Cyber attack is now preferable to a military attack. The consequences of NOT using cyber warfare now outweigh cyber pacifism. It's a bloodless form of war which can still inflict great damage. (What amazing irony that the same day Flame is revealed, the New York Times highlights the US approach to terrorism that involves a targeted "kill list.") In fact, in the case of Iran, it seems cyber attack may have proven more effective than economic sanctions, which seem to have done little to stop the development of nuclear weapons. For the attacker, anonymity is a major benefit, as the victim can only speculate but can't point a finger. Graphic images of source code just aren't the same as pictures of dead or injured civilians when it comes to altering public opinion. If there were a physical attack on Iran, Iranian public opinion would very likely be mobilized behind a normally unpopular government.

- Cyber attack is a new form of deterrence. During the Cold War, if the US had 1,000 warheads, the Soviets would try to get 1,001, which in turn led to the Strategic Defense Initiative, a.k.a. Star Wars. Cyber attack gives deterrence a totally new spin: for the first time, a nation can prevent someone from garnering weapons. And this approach, conveniently, appears morally superior and so far has proven much less costly.

- Cyber attack will force adversaries to minimize their electronic productivity. It took nearly a decade to find Osama bin Laden because he went completely off the grid: no Internet or phone, just couriers. Consequently, he became more of a titular than an operational leader. Does this mean that scientists developing weapons will resort to crayons and paper only? Probably not, but today life very likely got a lot harder for scientists working on military projects worldwide.
This course is designed to teach students to develop multichannel applications with IBM Web Experience Factory. IBM Web Experience Factory supercharges application development with tools and technology for creating, customizing, deploying, and maintaining portlets, widgets, and web and rich clients. The course helps students develop the skills needed to create and assemble multichannel applications easily and rapidly by using Web Experience Factory, formerly known as WebSphere Portlet Factory.

On Day 1 of the course, students are introduced to Web Experience Factory through an overview of developing multichannel applications for desktop browsers, smartphones, and tablets, and then begin by using Web Experience Factory to create a simple application. More in-depth, hands-on experience is offered on Day 2, during which students create a simple, data-driven application and a desktop application. On Day 3, mobile and multichannel enhancements are explored through discussion and hands-on activities.
Google Street View Documenting Japan's Nuke Evacuation Area

The area near the Fukushima nuclear plant was contaminated by radiation after the March 2011 tsunami and earthquake.

A Google Street View team is in Japan for the two-year anniversary of the March 11, 2011, earthquake and tsunami, taking stark photographs of a city about 12 miles from the heavily damaged Fukushima nuclear plant, which spread its radiation across a wide area.

The photography project is being conducted in Namie, Japan, a city of about 21,000 people that has remained a ghost town since it was evacuated after the earthquake and tsunami, said Deanna Yick, a Google spokesperson.

"Our team has been helping to map the damage as well as the recovery," including previous Street View excursions that mapped the carnage from the magnitude 9.0 earthquake and the deadly tsunami that followed, which killed more than 16,000 people and left another 2,900 missing, Yick said. "We have taken the Street View cars in on many occasions in conjunction with local officials."

An early Street View project came in December 2011, when Google created a special "Build the Memory" website to document photographs of the devastation and of the areas as they were before the carnage. Those photos were collected beginning in July 2011 along more than 27,000 miles of roads across affected regions of Japan.

The latest Street View project will provide photographs in areas that have been generally off limits due to the dangerous radiation released from the damaged nuclear plant, Yick said.
Big Data is Rocket Fuel

Larry Page, CEO of Google, believes in "moonshots": not just incremental thinking, but breakthrough progress that makes an order-of-magnitude difference in a field. At his company's developer conference in San Francisco, Page urged others to do the same: "I'd encourage more companies to do things that are a little bit outside of their comfort zone, because I think it gives them more scalability in what they can get done. Almost every time we have tried to do something crazy we have made progress, not all the time, but much of the time."[i]

Aside from Page's belief, what has been Google's secret? Quite simply: data. Google's order-of-magnitude advantage in the search stakes was maintained by algorithms that gave increasing returns to scale. The more people used the search engine, the better it got, thanks to quality signals derived from user behavior: if you used the first result from a search, it was a good result; if you had to use the seventh, the ranking could be improved.

So far, so obvious. Yet the connection can also be drawn to Google's more explicitly adventurous innovations, the autonomous car and the wearable computer Glass. The car could not drive without the masses of data accumulated from the company's maps product, and Glass would be useless without the social data collected by Google Plus. In each instance, Google's bold moves have been fueled by the vast potential of data.

For this reason, as a community we should welcome and endorse moves to open up public data, not just from a perspective of democratic accountability, but for the potential of growth in research and commerce that this data provides. We saw a commendable step forward in the United States this May, as President Obama signed an executive order focusing on using government open data for entrepreneurship, innovation, and scientific discovery.[ii]

The advent of big data technologies has caused wild excitement about its potential, and also the inevitable backlash due to media overstatement. Big data stands on two important pillars: open source software and commodity computing hardware. These have lowered the bar for accessing supercomputer-level power and vastly increased data processing throughput. To focus on these two legs alone, though, misses an important component. Critics of big data observe that most companies don't have big data,[iii] and the IT industry's marketing of big data tools does little to serve them. This is unsurprisingly true. Without data, tools have little to do: in our journey to the moon, it helps if we put fuel in our ship.

Hence, the third and equally important leg of the big data platform is the data itself. It is the most valuable asset, being the image of the world in which we are attempting our endeavors. Prizing data is something Google understood from the start. As the field of big data progresses, this growing maturity will be shown in the focus of enthusiasm moving away from the initial excitement that we can process large data, and toward understanding the importance of acquiring, stewarding, and sharing our data. If, like Larry Page, you're making a moonshot, having big data as rocket fuel sure helps.

[i] Copeland MV and Wohlsen M. May 16, 2013. With I/O speech, Larry Page reminds us why Google rules tech. Wired.com.
[iii] Mims C. May 6, 2013. Most data isn't "big," and businesses are wasting money pretending it is. Quartz.
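To make the behavioural quality signal described above concrete, here is a minimal Python sketch that scores a ranking by the positions users click. The reciprocal-rank weighting and the function names are illustrative assumptions, not Google's actual algorithm.

```python
# Illustrative click-position quality signal: clicks on lower-ranked
# results suggest the ranking for that query can be improved.
from collections import defaultdict

def ranking_quality(click_logs):
    """Average reciprocal rank of clicked results, per query.

    click_logs: iterable of (query, clicked_position) pairs,
    positions starting at 1. Higher score = better ranking.
    """
    scores = defaultdict(list)
    for query, pos in click_logs:
        scores[query].append(1.0 / pos)  # first result scores 1.0, seventh ~0.14
    return {q: sum(s) / len(s) for q, s in scores.items()}

logs = [("solar cells", 1), ("solar cells", 1), ("graphene", 7)]
print(ranking_quality(logs))  # {'solar cells': 1.0, 'graphene': 0.1428...}
```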
Viruses are the common cold of computers: easily spread and often misdiagnosed. The word "virus" is frequently misused to describe other forms of malware. Actual viruses are small bits of executable code that spread when users open infected files or applications. A few viruses that affect Mac OS X have been found, and Mac users can also inadvertently spread Windows viruses by passing along infected files.

A Trojan horse is like the college roommate who seems cool until they stop paying rent, eat your food and leave dirty clothes everywhere. It enters under the pretense of usefulness but actually contains malicious code. Several Trojan horses affect Mac OS X, most notably the Flashback Trojan.

Worms are like the elusive varmint scurrying through the insulation in your walls. Because they don't need to attach themselves to an existing file or program, they can be very difficult to find. Often classed as a type of virus, worms spread over networks and can carry out malicious actions once they find new hosts.

Spyware is like the creepy neighbor who stares in your window and shuffles through your mail. It enters your Mac as a Trojan horse and then secretly monitors your computing behavior, collecting personal information such as surfing habits and the websites you've visited.

A botnet is like an army of zombie computers bent on destruction. Your Mac could be forced to join the zombie army as a consequence of a malware attack. The network of compromised computers is then used to send spam or to attack other computers.

Spam is the mosquito of the computer world: annoying and ubiquitous. A single spam message can be dealt with easily enough, but en masse it can crowd your inbox and cause a significant loss of productivity.

An exploit is like a schoolyard bully that zeroes in on your weaknesses. It is a piece of software, a chunk of data, or a sequence of commands that takes advantage of a bug, glitch or vulnerability in order to break through your Mac's security defenses.
du Rau P.D. (CNERA avifaune migratrice), Bourgeois K. (University of Auckland; Aix-Marseille University), Thevenet M. (Conservatoire du littoral), and 19 more authors. Journal of Ornithology | Year: 2015

Scopoli's Shearwater (Calonectris diomedea) is a Procellariiform endemic to the Mediterranean Basin which is considered vulnerable in Europe due to recent local declines and its susceptibility to both marine and terrestrial threats. In the 1970s-1980s, its population size was estimated at 57,000-76,000 breeding pairs throughout the Mediterranean Basin, with the largest colony, estimated at 15,000-25,000 pairs, found on Zembra Island, Tunisia. The objectives of our study were to re-estimate the size of the breeding population on Zembra Island, to reassess the global population size of the species, and to analyse the implications of these findings for the status and conservation of this species in the Mediterranean. Using distance sampling, we estimated the Zembra breeding population to be 141,780 pairs (95% confidence interval 113,720-176,750 pairs). A review of the most recent data on populations of this species throughout the Mediterranean Basin led us to estimate its new global population size at 141,000-223,000 breeding pairs. Using the demographic invariant and potential biological removal approaches, we estimated the maximum number of adults that could be killed annually by all non-natural causes without causing a population decline to be 8800 (range 7700-9700) individuals, of which up to 3700 could be breeders. Although these results are less alarming in the context of species conservation than previously thought, uncertainties associated with global population size, trends and major threats still raise questions about the future of this species. More generally, we show how a monitoring strategy for a bird supposed to be relatively well known can be misleading due to biases in survey design. The reduction of such biases would therefore appear to be an unavoidable prerequisite in cryptic-species monitoring before any reliable inference on the conservation status of the species can be drawn. © Dt. Ornithologen-Gesellschaft e.V. 2015
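The potential biological removal (PBR) approach used in the abstract reduces to a short, standard formula (Wade 1998): PBR = N_min x (R_max / 2) x F_r. The Python sketch below illustrates it with placeholder parameter values; these are not the values estimated for Scopoli's Shearwater in the paper.

```python
# Illustration of the potential biological removal (PBR) formula:
#   PBR = N_min * (R_max / 2) * F_r
# Parameter values below are made up for illustration only.

def pbr(n_min, r_max, recovery_factor):
    """Maximum human-caused removals a population can sustain per year.

    n_min: conservative (minimum) population estimate, in individuals.
    r_max: maximum per-capita population growth rate.
    recovery_factor: F_r in (0, 1]; lower values are more precautionary.
    """
    return n_min * (r_max / 2.0) * recovery_factor

# Example with placeholder inputs:
print(round(pbr(n_min=280_000, r_max=0.06, recovery_factor=1.0)))  # 8400
```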
The Ohio Supercomputer Center (OSC) is being used to test the new generation of an electric car reported to be capable of reaching speeds upwards of 400 miles per hour. The team, composed mostly of engineering students at the Ohio State University's Center for Automotive Research, has been refining the design, build, and alternative-fuel scenarios since 2001, producing a number of experimental vehicles that reach top speed using battery power. This newest version of the "Buckeye Bullet", currently undergoing aerodynamic testing at OSC, could dramatically outpace its competitors, the fastest of which was clocked at just over 300 miles per hour in 2004 and well over 300 mph a few years later.

The current incarnation of the Buckeye Bullet is being redesigned "from the ground up," according to Ohio State University's project lead, Giorgio Rizzoni. As Rizzoni noted, "Driven by two custom-made electric motors designed and developed by Venturi, and powered by prismatic A123 batteries, the goal of the new vehicle will be to surpass all previous electric vehicle records."

According to the project's chief engineer, OSU mechanical engineering student Cary Bork, "What sets the new design apart from the previous Buckeye Bullet vehicles is that at these higher speeds it is possible to produce shock waves under the vehicle. Such shock waves under the vehicle negatively affect the vehicle drag and can produce lift. Lift is undesirable in this application. Minimizing or eliminating these shock waves is critical to ensuring the safety and stability of the vehicle."

In addition, the student team hopes to reduce drag on the new vehicle by almost 15%, running several fluid dynamics scenarios at OSC.
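To see why a 15% drag reduction matters at these speeds, the standard drag equation is enough. Below is a back-of-the-envelope Python sketch; the drag coefficient and frontal area are placeholder assumptions, not the Buckeye Bullet's actual figures.

```python
# Why drag reduction matters at 400 mph: drag grows with v^2,
# so small coefficient changes translate into large forces.
def drag_force(v_ms, c_d, area_m2, rho=1.225):
    """Aerodynamic drag in newtons: F = 0.5 * rho * v^2 * C_d * A."""
    return 0.5 * rho * v_ms**2 * c_d * area_m2

v = 400 * 0.44704  # 400 mph in m/s (~178.8 m/s)
base = drag_force(v, c_d=0.20, area_m2=1.0)          # placeholder C_d and A
reduced = drag_force(v, c_d=0.20 * 0.85, area_m2=1.0)  # 15% lower drag
print(f"{base:.0f} N -> {reduced:.0f} N")  # roughly 3917 N -> 3329 N
```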
Briody C. (Dublin Institute of Technology; Center for Elastomer Research), Duignan B. (Dublin Institute of Technology; Center for Elastomer Research), and 3 more authors. Polymer Testing | Year: 2012

Compressive creep gradually degrades the structural performance of flexible polymeric foam over extended time periods. When designing components, it is often difficult to account for long-term creep, as accurate creep data over long time periods or at high temperatures is often unavailable, mainly due to the lengthy testing times and/or inadequate high-temperature testing facilities. This issue can be resolved by conducting a range of short-term creep tests and applying accurate prediction methods to the results. Short-term creep testing was conducted on viscoelastic polyurethane foam, a material commonly used in seating and bedding systems. Tests were conducted over a range of temperatures, providing the results needed to generate predictions of long-term creep behaviour using time-temperature superposition. Additional predictions were generated, using the Williams-Landel-Ferry (WLF) time-temperature empirical relations, for material performance at temperatures above and below the reference temperature range. Further tests validated the results generated from these theoretical predictions. © 2012 Elsevier Ltd. All rights reserved.
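The WLF relation mentioned in the abstract is compact enough to show directly. Here is a minimal Python sketch using the textbook "universal" constants rather than values fitted to this particular foam; the function name and example temperatures are illustrative.

```python
# Williams-Landel-Ferry (WLF) shift factor for time-temperature
# superposition. C1 = 17.44 and C2 = 51.6 K are the textbook
# "universal" defaults, not constants fitted for this material.

def wlf_shift_factor(temp_c, t_ref_c, c1=17.44, c2=51.6):
    """log10(a_T) = -C1 (T - Tref) / (C2 + T - Tref).

    a_T < 1 above the reference temperature: the material's response
    speeds up, so short hot tests stand in for long cool ones.
    """
    dt = temp_c - t_ref_c
    log_a_t = -c1 * dt / (c2 + dt)
    return 10 ** log_a_t

# A test run 20 C above the reference temperature probes behaviour that
# would take roughly 1/a_T times longer at the reference temperature.
a_t = wlf_shift_factor(temp_c=60, t_ref_c=40)
print(f"a_T = {a_t:.3e}")  # about 1.3e-05
```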
One of the most interesting software releases in programming terms is (or was) The Last One (TLO), a Pascal-driven application that was released to critical acclaim in 1981 and coded by David James, a programmer famous for his expert coding exploits in the late 1970s and early 1980s.

TLO was notable in that it allowed its users to enter a programme design by selecting from a number of flowcharting options. As each flowchart element is selected, the programme asks a series of simple questions in order to better understand what the user wants from that particular element of programme code. Once the entire flowchart has been entered, TLO checks its logic for errors, requesting further information from the user as and when required. Once this process is completed, TLO goes off and writes a programme that does what the flowchart specifies. The resultant programme - coded in BASIC, a simple programming language - could then be compiled into machine code.

Although simplistic by today's 32- and 64-bit programming standards, TLO was - and still is - largely unique in being an application that generates relatively unique programme code as a direct function of its operation. As its creator David James said at the time, TLO does what it claims: it writes programmes automatically.

Now here's the big question: could the self-programming capabilities of TLO be applied to modern-day coding? The answer is that, within certain parameters, a modern-day TLO could be created, but its programme code would be highly complex, running to tens, if not hundreds, of millions of lines of source code. A specialised version of a modern-day TLO - one designed, for example, for the creation of specialised spreadsheet applications - would be a realistic option, but most observers would ask 'why bother?'

The re-inventing worm

Let's advance this question further: could the self-programming capabilities of TLO be applied to a piece of malware that infects users' PCs and propagates using a number of self-modifying techniques? The answer is a resounding yes. And, to a certain extent, the Conficker worm that has been hitting the headlines in the last nine months or so applies these principles, since each iteration of the worm seems to perform different attack functions.

Detailed analysis of the Conficker worm reveals that the secret to its success is its modular nature, which allows third-party hackers - as well as the original programmer or programming team - to 'develop' extra features for the malware as and when required.

According to Alan Bentley, vice president of EMEA for vulnerability analysis security specialist Lumension, the Conficker malware is one of a new generation of worms that can automatically update itself and even adapt the way it infects users based on the different system conditions it encounters. "Its success, if that is the right word", says Bentley, "is based on its intelligence in this regard". In its first iteration in October 2008, Bentley tells Infosecurity, the Conficker worm was a piece of shellware that exploited a number of issues with the Windows Server environment.
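TLO's core mechanism, the generative trick that self-modifying worms inherit, is easy to demonstrate in miniature: a design entered as data is checked for obvious logic gaps, then emitted as source code. The Python toy below is entirely hypothetical and nothing like TLO's actual implementation, which generated BASIC through interactive questioning, but it shows the shape of the idea.

```python
# A toy "program generator" in the spirit of TLO: a flowchart described
# as data is checked for missing branches, then emitted as source code.

def generate_program(flowchart):
    """Emit Python source implementing a simple decision flowchart."""
    for step in flowchart:  # rudimentary logic check, as TLO performed
        if step["kind"] == "decision" and ("yes" not in step or "no" not in step):
            raise ValueError(f"decision {step['id']} is missing a branch")
    lines = ["def run(x):"]
    for step in flowchart:
        if step["kind"] == "decision":
            lines.append(f"    if {step['test']}:")
            lines.append(f"        return {step['yes']!r}")
            lines.append(f"    return {step['no']!r}")
    return "\n".join(lines)

src = generate_program([{"id": 1, "kind": "decision",
                         "test": "x > 0", "yes": "positive", "no": "not positive"}])
exec(src)        # load the generated programme, as TLO compiled its BASIC
print(run(5))    # -> 'positive'
```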
The Conficker worm's uniqueness, Bentley says, is that it updates itself by - quite literally - phoning home across the internet, downloading fresh instructions, then modifying its malware programme code and capabilities accordingly. "It's a very powerful worm in this context. Much more powerful than, say, the Blaster worm seen in 2003/2004, which was written by Jeffrey Lee Parson, and was considered quite revolutionary in its day", he says. The Conficker worm, he explains, will update itself and then look for network shares and other resources through which it can propagate and further infect a given set of IT resources.

Automating the manual hacker process

Over at penetration and networking specialist First Base Technologies, Peter Wood, the firm's chief of operations and ISACA conference committee member, says that the automation of malware and attack processes - especially those involving man-in-the-middle attacks - is something now being carried out by hackers.

At the Infosecurity Europe show in April, for example, Wood and his team revealed a serious structural error in the way secure cookies in regular use on the internet are handled. Many sites, says Wood, "do not set the secure text flag on their site's session cookie". He explains: when web sessions flip between the https and http protocols - as many major e-commerce sites frequently do - this flaw can be exploited.

According to Wood, because http sessions carry far lower data and IT resource overheads than https sessions, major sites often use the secure protocol only on pages where users enter personal data such as payment card details. If a hacker uses the cookie to take over an internet session, they can then intercept this personal data. "Under certain circumstances", says Wood, "it is even possible for a hacker to seize control of a supposedly secure - and authenticated - IP session just as the user has entered their payment card data and other personal information". (A minimal server-side fix for the cookie flaw is sketched below.)

The sequence of assuming control of a user's IP session is highly manual, but Wood tells Infosecurity that it is perfectly possible to automate the process to the point where a piece of malware could conduct the hacking while the hacker watches. "I'm pretty sure this exploit has been used by hackers in the past. It explains a lot about how some sites have been hacked", he says.

Paul Wood, chief information security analyst with Messagelabs, meanwhile, is also a believer in the automated approach to hacking and malware. Citing his firm's April 2009 Intelligence Report, he explains to Infosecurity that if you look closely at the incidence of various pieces of high-profile malware on a month-by-month basis - as they start to infect, continue their infection of large numbers of internet users, and then start to fade away - you get some interesting results. Some pieces of malware start to fade away and then seemingly come back from the dead and start to infect a second wave of users.

If you look at the programme code of malware that exhibits this type of behaviour, he says, you start to realise that there is more at work than hackers simply modifying existing malware. "Some malware types are emerging - like Conficker - that are capable of modifying themselves in the face of changing situations on the internet", he says.
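As promised above, the server-side fix for the session-cookie flaw Wood describes is, at its simplest, a one-attribute change: mark the cookie Secure so the browser never sends it over plain http. A minimal sketch using Python's standard library follows; the cookie name and value are placeholders.

```python
# Marking a session cookie Secure (and HttpOnly) so it is never sent
# over plain http, closing the https/http flip attack described above.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "placeholder-token"
cookie["session_id"]["secure"] = True    # never transmitted over http
cookie["session_id"]["httponly"] = True  # not readable from JavaScript

print(cookie.output())
# e.g. Set-Cookie: session_id=placeholder-token; HttpOnly; Secure
```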
|"Some malware types are emerging - like Conficker - that are capable of modifying themselves in teh face of changing situations on the internet."| It's not yet clear, Paul Wood adds, whether this process is entirely automated, or whether it is being assisted by the actions of the original malware programmer(s) and/or third party hackers. What is certain, however, he says, is that self-modifying malware is a reality in 2009 and is why some pieces of malware resurface in the Messagelabs charts on a regular basis. According to Lumension's Bentley, one of the ways in which organisations can better protect themselves against self-modifying malware, such as Conficker and its descendants, is to reduce the IT resource's attack surface. This is achieved, he says, by a five-stage process of discovery, assessment, prioritisation, remediation and reporting. In the discovery stage, IT managers should attempt to discover all the network resources on the company systems. Once this process is carried out, he says, managers can then identify the vulnerabilities that exist on the network. The third step is to prioritise the remedial steps needed, namely understand the details of the IT resource's vulnerabilities, their potential severity and their impact of the business. “The fourth stage”, says Bentley, “is to remediate, or eliminate the network vulnerabilities, using a process of installing security patches at all points, and mitigate any other risks by creating custom remediation”. The fifth and final stage is to report on the risks and the solutions applied to the problem of reducing an organisation's attack surface. This is normally achieved, says Bentley, “by in-depth reporting at all stages in the process, and then consolidating the reports into a master analysis, which can be updated as new risks arrive and new resources are added to the IT system(s) concerned”. Messagelabs' Paul Wood, meanwhile, says the process of blocking malware – especially self-modifying hacker code – can also be highly automated, with IT security technology analysing and stepping through the analysis process at high speed. The Messagelabs' modus operandi in this regard, he says, is to adopt a five-stage real-time analysis process that steps through a number of stages as various IT threats are encountered when monitoring an organisation's emails as they stream in – and out. The first stage is to bandwidth throttle any suspicious IP traffic to give the organisation's IT security software a chance to analyse the suspect messages and/or attachments in real time. If the email is found to be suspect, but does not conform to known infection signatures, then the message's header can be analysed and, if an infection etc., is found, the email can be quarantined. The third stage in the analysis process is to perform user management and address validation, with Messagelabs' security applying a number of automated checks to verify whether the message comes from a source previously known to be dangerous. “The fourth stage”, Paul Wood says, “is to apply the Skeptic anti-malware and anti-hacking analysis programme for anything suspicious that has passed the first three analysis stages but does not pass muster”. The fifth and final stage, he explains, is to apply Skeptic's anti-spam technology to the messages, allowing the security software to weed out anything that still looks suspicious for later, manual, analysis by the IT staff concerned. 
One additional security step that can be carried out to detect hybrid and self-modifying malware and email infections - ones that cannot be spotted using conventional signature pattern analysis or heuristic analysis - says Paul Wood, is to perform a DNA analysis on the message and its attachment(s). This process, he tells Infosecurity, is arguably the most interesting of all, since it allows an evolutionary approach to be taken to the analysis, with the security software modifying its approach as it encounters new potential infection or hacker attack vectors. Currently, he says, this process normally requires the intervention of programmers to check for false positives, but - in time - it could well become automatic. Almost as automatic, Infosecurity notes, as the self-modifying malware that the software is designed to detect and deal with.