Nowadays, Ethernet is the most common networking standard. But what exactly is Ethernet? Ethernet is a data link and physical layer protocol defined by the IEEE 802.3 specification. It comes in many flavors, defined by maximum bit rate, mode of transmission and physical transmission medium.

What Is The Background Of Ethernet?

In the early 1980s, Digital Equipment Corporation, Intel, and Xerox developed the Ethernet Local Area Networking format. This technology was soon accepted by the IEEE Committee, creating the 802.3 standard. This standard dictates the use of CSMA/CD (Carrier Sense Multiple Access with Collision Detection) as its accessing scheme. Networks use NICs (network interface cards), hubs, transceivers, converters, repeaters and switches, as well as different types of transmission media for carrying signals. A variety of Ethernet types have come and gone over the years. In the mid-1990s, 100BASE-T (unshielded twisted-pair [UTP]) and 100BASE-FX (using fiber) were ubiquitous in the enterprise network, and they still are. Since the start of the millennium, enterprise networks have actively implemented Gigabit Ethernet, 1000BASE-T. The push today is for 10 Gbps in the core of the enterprise network.

What Is The Basic Ethernet Theory?

Ethernet theory is a concept of how computers that are not physically connected should communicate with each other for the transmission of data.

1. Ethernet operational theory is quite easy to understand, and a simple analogy helps visualize the basics. Imagine a long hallway lined with offices. The hallway represents the physical network; the offices represent the attached stations. When an occupant wishes to speak to another occupant, they lean into the hallway, listen to make sure no one else is engaged in a conversation, then speak out, addressing the desired recipient. All other occupants hear the conversation but ignore it, knowing it is not directed to them.

2. Returning to our analogy, what if two or more occupants decide to speak at the same time? Naturally the overlapping voices would become garbled and indistinguishable. With Ethernet this is known as a collision. In the CSMA/CD method, CD stands for Collision Detection. If a transmitting station detects a collision, the rule states: stop transmitting immediately, transmit a jamming signal to inform all other stations to stop, then wait a random period (binary exponential backoff) and retransmit (a small sketch of this backoff procedure appears at the end of this piece). Unfortunately, as the number of stations increases, so does the number of collisions, and the average access time increases proportionally. This is referred to in the industry as network congestion.

3. Fortunately, there are several ways to alleviate network congestion. One way is to upgrade the entire network to Fast Ethernet (100 Mbps), a tenfold increase in transmission speed. This, however, requires upgrading all components and can be rather expensive. Another approach is to add an Ethernet switch.

Ethernet is an asynchronous Carrier Sense Multiple Access with Collision Detect (CSMA/CD) protocol/interface, with a payload size of 46-1500 octets. With data rates of tens to hundreds of megabits per second, it is generally not well suited for low-power applications. However, with ubiquitous deployment, internet connectivity, high data rates and practically limitless range extensibility, Ethernet can accommodate nearly all wired communications requirements. Common applications include: 1. Remote sensing and monitoring; 2. Remote command, control and firmware updating; 3. Bulk data transfer; 4. Live streaming audio, video and media; 5. Public data acquisition (date/time, stock quotes, news releases, etc.).
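To make the collision-recovery rule concrete, here is a minimal sketch of binary exponential backoff in Python. The slot time and 16-attempt limit follow the classic 10 Mbps 802.3 parameters; the `channel` object (with its `send` and `sleep_us` methods) is a hypothetical stand-in for the MAC/PHY layer, since the point of the sketch is the random-delay logic rather than a faithful MAC implementation.

```python
import random

SLOT_TIME_US = 51.2   # classic 10 Mbps Ethernet slot time, in microseconds
MAX_ATTEMPTS = 16     # 802.3 stations give up after 16 transmission attempts

def backoff_delay(collisions: int) -> float:
    """Random wait (in microseconds) after the given number of collisions.

    After n collisions, a station waits a whole number of slot times
    chosen uniformly from 0 .. 2**min(n, 10) - 1; the exponent is capped
    at 10, so repeated collisions spread stations further apart in time.
    """
    k = min(collisions, 10)
    return random.randint(0, 2**k - 1) * SLOT_TIME_US

def transmit(frame, channel) -> bool:
    """Try to send a frame, backing off after each detected collision."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if channel.send(frame):                    # carrier sense + send; True if no collision
            return True
        channel.sleep_us(backoff_delay(attempt))   # collision: jam, then wait
    return False                                   # excessive collisions: give up
```

Note how the first collisions cost almost nothing (zero or one slot), while a station that keeps colliding waits, on average, exponentially longer; that widening spread is what lets a congested segment sort itself out.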
Tezanos-Pinto G., University of Auckland | Tezanos-Pinto G., Massey University | Constantine R., University of Auckland | Mourao F., University of Auckland | And 3 more authors. Marine Mammal Science | Year: 2015

Bottlenose dolphins (Tursiops truncatus) in the Bay of Islands, New Zealand, have been studied for almost two decades. Since 2003, fewer than 150 dolphins visited the bay during each season, and the local unit declined 7.5% annually from 1997 to 2006. The causes of decline are unclear but probably include mortality and emigration. Here, we used a long-term database to estimate reproductive parameters of female bottlenose dolphins, including recruitment rates. A total of 704 surveys were conducted, in which 5,577 sightings of 408 individually identified dolphins were collected; of these, 53 individuals were identified as reproductive females. The calving rate increased between periods (1997-1999 = 0.13, CL = 0.07-0.21; 2003-2005 = 0.25, CL = 0.16-0.35 calves/reproductive female/year). A 0.25 calving rate suggests that, on average, a female gives birth only once every four years, which is consistent with the estimated calving interval (4.3 yr, SD = 1.45) but still lower than values reported for other populations. Conversely, apparent mortality rates to age 1+ (range: 0.34-0.52) and 2+ (range: 0.15-0.59) were higher than values reported elsewhere. The high apparent calf mortality, in conjunction with the decline in local abundance, highlights the vulnerability of bottlenose dolphins in the Bay of Islands. Long-term studies are required to understand the causes of high calf mortality and the decline in local abundance. Meanwhile, management should focus on minimizing sources of anthropogenic disturbance and enforcing compliance with current legislation. © 2014 Society for Marine Mammalogy.

Zaeschmar J.R., Massey University | Visser I.N., Orca Research Trust | Fertl D., Ziphius EcoServices | Dwyer S.L., Massey University | And 5 more authors. Marine Mammal Science | Year: 2014

On a global scale, false killer whales (Pseudorca crassidens) remain one of the lesser-known delphinids. The occurrence, site fidelity, association patterns, and presence/absence of foraging in waters off northeastern New Zealand are examined from records collected between 1995 and 2012. The species was rarely encountered; however, of the 61 distinctive, photo-identified individuals, 88.5% were resighted, with resightings up to 7 yr after initial identification and movements as far as 650 km documented. Group sizes ranged from 20 to ca. 150. Results indicate that all individuals are linked in a single social network. Most observations were recorded in shallow (<100 m) nearshore waters. Occurrence in these continental shelf waters is likely seasonal, coinciding with the shoreward flooding of a warm current. During 91.5% of encounters, close interspecific associations with common bottlenose dolphins (Tursiops truncatus) were observed. Photo-identification reveals repeat inter- and intraspecific associations among individuals, with 34.2% of common bottlenose dolphins resighted together with false killer whales over 1,832 d. While foraging was observed during 39.5% of mixed-species encounters, results suggest that social and antipredatory factors may also play a role in the formation of these mixed-species groups. © 2013 Society for Marine Mammalogy.
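As a quick check on the first abstract's arithmetic, the quoted calving rate converts to the quoted interval by taking its reciprocal:

$$\text{mean calving interval} \approx \frac{1}{\text{calving rate}} = \frac{1}{0.25\ \text{calves/female/yr}} = 4\ \text{yr},$$

which lines up with the directly estimated interval of 4.3 yr (SD = 1.45).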
A Cross-Site Request Forgery (CSRF, sometimes pronounced "Sea-Surf") attack abuses the trust between the application and a given client (the victim) to perform an application-level transaction on behalf of the attacker using the identity of the client. The attack is based on embedding URLs that represent specific transactions of the target application in an attacker-controlled page and having this page accessed by the victim from a browser that has already established a trust relationship with the target application (e.g. through authentication). Examples of such requests include the transfer of monetary funds, provisioning activities, application administration and even purchase transactions.

A user establishes trust with a target web application through different authentication and identification procedures (Basic authentication, Form authentication, X509 certificates, etc.). Once trust is established, it is usually kept for the lifetime of the browser process (or until a properly implemented logout transaction is invoked by the client). Thus, repeated requests sent from the same browser to the target application do not require further human intervention with respect to trust (e.g. by using session cookies or automatically resending authentication header data).

A CSRF attack assumes that the same browser process that has an established trust with a given application is used for browsing an attacker-controlled page. This assumption has become a very reliable one since browser makers introduced multiple-tab functionality (Internet Explorer 7.0, for example). An attacker-controlled page could be as complex and robust as an entire site created and maintained by the attacker as part of a fraud scheme, or as simple as a forum page containing messages introduced by the attacker, a web-mail application that displays a spam message distributed by the attacker, or even an ad banner.

The attack is mounted by embedding a URL that invokes a given transaction of the target application in an HTML attribute that automatically generates a request for the URL. The most obvious example would be the src attribute of an img element (e.g. <img src="http://www.targetapp.com/doTransaction?param1=x&param2=y">), but the number of possible attributes is actually very large. When the victim's browser renders the page, it will send a request to the target application with the attacker-crafted URL and with any trust attributes related to the target application. Thus the transaction injected by the attacker will be accepted and performed under the identity of the victim. A more sophisticated CSRF attack can be mounted using known security glitches of the XMLHttpRequest object or Flash scripts when embedded in an attacker-controlled page.

Cross-Site Request Forgery Prevention: There are several ways to prevent CSRF attacks. One mitigation technique involves adding a nonce (a large, randomly chosen number) to each transaction. The value of the nonce attached to the request is validated against the value issued for that specific user session; an attacker therefore cannot embed a URL representing a valid transaction in the attacker-controlled page. Alternatively, additional human interaction could be required for sensitive transactions in the form of repeated authentication or answering a CAPTCHA. However, such measures demand considerable modification of already-existing web applications, so these techniques are unlikely to gain popularity among web application developers.
Another mitigation technique relies on checking the Referer header of HTTP requests and validating that it contains a URL from within the server's domain, rather than from an external source.
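As a minimal sketch of the nonce technique in Python (framework-agnostic; the `session` dictionary and the way the token is embedded in forms are assumptions, since any real framework has its own session and template plumbing):

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    """Create a per-session nonce and remember it server-side.

    The returned token is embedded as a hidden field in every form that
    triggers a sensitive transaction; a page on the attacker's site has
    no way to read it, so it cannot construct a valid request.
    """
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def validate_csrf_token(session: dict, submitted: str) -> bool:
    """Accept the transaction only if the submitted nonce matches.

    hmac.compare_digest is used so the comparison does not leak
    information through timing differences.
    """
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)
```

The server calls validate_csrf_token before executing any state-changing request; a forged <img> fetch or auto-submitted form from another origin fails the check because the browser sends the victim's cookies but not the hidden token.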
However well-meaning, the efforts of individual nations to curb climate change will always fall short. Given that climate does not respect national borders, global cooperation will be the key to any solution. While international political cooperation on the issue has been frustratingly slow, at least one aspect of the problem is now getting some global focus: climate modeling. The first international effort to bring climate simulation software onto next-generation exascale platforms got underway earlier this spring. The project, named Enabling Climate Simulation (ECS) at Extreme Scale, is funded by the G8 Research Councils Initiative on Multilateral Research and brings together some of the heavyweight organizations in climate research and computer science, not to mention some of the top supercomputers on the planet.

The project grew out of the ongoing collaboration between the University of Illinois at Urbana-Champaign (UIUC) and the French National Institute for Research in Computer Science and Control (INRIA) through their Joint Laboratory for Petascale Computing, and takes advantage of the support of NCSA, which will provide access to the upcoming multi-petaflop Blue Waters system. In a nutshell, the objective of the G8 ECS project is to investigate how to efficiently run climate simulations on future exascale systems and get correct results. It will focus on three main topics: (1) how to complete simulations with correct results despite frequent system failures; (2) how to exploit hierarchical computers with hardware accelerators close to their peak performance; and (3) how to run efficient simulations with 1 billion threads. The project also aims to educate new generations of climate and computer scientists in techniques for high performance computing at extreme scale.

The team is led by UIUC's Marc Snir (project director) and INRIA's Franck Cappello (associate director). It gathers researchers from five of the G8 nations: the US (University of Illinois at Urbana-Champaign, University of Tennessee and the National Center for Atmospheric Research), France (INRIA), Germany (German Research School for Simulation Sciences), Japan (Tokyo Tech and the University of Tsukuba) and Canada (University of Victoria), plus Spain (Barcelona Supercomputing Center).

HPCwire got the opportunity to ask project director Marc Snir and atmospheric scientist Don Wuebbles at UIUC, along with INRIA's Franck Cappello, about the particulars of the G8 ECS effort and for some perspective on what it means to the climate research and computer science communities.

HPCwire: How do the current climate models that are being run on terascale and petascale systems fall short?

Don Wuebbles: There is a strong need to run global climate models with detailed treatments of atmospheric, land, ocean and biospheric processes at very high resolution; the newest generation of climate models that can be run on petascale computers is able to reach a horizontal resolution as fine as about 13 kilometers. Such a capability allows many relevant processes to be treated without the severe approximations and parameterizations found in the models used in previous climate assessments. As an example, it is now known that ocean models need to be run at roughly a tenth of a degree, or about 10 kilometers horizontal resolution, in order to adequately represent ocean eddy processes. Even on a petascale machine, only a limited number of runs can be done with the new high-resolution models.
An exascale machine will allow for even higher resolution as new dynamical cores are developed. Even more important, though, is that ensembles of climate analyses extending over many hundreds of years can be run, thus allowing better representation of natural variability in the climate system. In addition, exascale computing will allow for well-characterized studies of the uncertainties in modeling of the climate system that are impossible on current computer systems because of the extensive resources required.

HPCwire: Will the ECS effort be able to leverage any of the work done by the International Exascale Software Project (IESP)?

Marc Snir: Many partners of the project are active participants in IESP, either as leaders, members of the executive committee or experts. The research program has been defined taking the IESP results into account. IESP work was instrumental in clarifying the challenges and defining the research scope of the three main topics of our ECS project. Our project also carefully followed the discussions within the European Exascale Software Initiative (EESI) and in Japan, where several G8 ECS partners are playing leading roles. IESP was instrumental in motivating the RFP that was issued jointly by seven of the G8 countries. However, one should remember that IESP established a roadmap; new collaborations are needed to implement it. The program that funds us and five other projects is a (very modest) first step in this direction.

HPCwire: What kinds of assumptions will have to be made about future exascale systems to redesign the software?

Franck Cappello: We tried to make reasonable assumptions based on the current state of the art, the projections made in the exascale preparation reports, and discussions with hardware developers. These assumptions essentially follow the ones considered in IESP. Exascale systems are likely to have hybrid (SIMD plus sequential) cores, hundreds of cores per chip, many chips per node and deep memory hierarchies. Another important element is the uncertainty about system MTBF predictions; this will essentially depend on the level of masking provided by the hardware. A key choice in our project was to test our research ideas on a significant variety of available HPC systems: Blue Waters, Blue Gene P and Q, Tsubame2, the K machine in Kobe and Marenostrum2. We believe that what we learn by testing our improvements on these machines will help us better prepare climate codes for exascale.

HPCwire: What kinds of changes to today's climate simulations do you anticipate to bring this software into the exascale realm?

Cappello: Our project focuses on three key issues: system-level scalability, node-level performance and resilience. No existing climate model scales to the order of a million cores, so studying system-level scalability is critical. The main research driver is to preserve locality, since strong locality will be crucial for performance. We shall explore three key areas: topology- and computation-intensity-aware mappings of simulation processes to the system, communication-computation overlap, and the use of asynchronous collective communications. Concerning node-level performance, we shall explore modeling and auto-tuning/scheduling of intra-node heterogeneity with massive numbers of cores (for example, GPUs); exploiting locality and latency hiding extensively to mitigate the performance impact of intra-node traffic; and studying task parallelism for the physics modules in the atmosphere model.
ECS will address resilience through multiple complementary approaches, including resilient climate simulation algorithms, new programming extensions for resilience, and new fault-tolerant protocols for uncoordinated checkpointing and partial restart. These three approaches can be considered three levels of failure management, each level being triggered when the previous one is not enough to recover the execution. Our work is by no means a full solution to the problem of exascale climate simulations. New algorithms will be needed; there is another G8 project that looks at algorithm changes to enhance scalability. New programming models may be needed to better support fine-grain communication and load balancing, and some of us are involved in other projects that focus on this problem. However, our work is, to a large extent, agnostic on these issues.

HPCwire: By the time the first exascale systems appear in 2018 to 2020, climate change will almost certainly be much further along than it is now. Assuming we're able to move the software onto these exascale platforms and obtain a much more accurate representation of the climate system, what will policy makers be able to do with these results?

Snir: I suspect that all participants in our project believe that the time to act on global warming is now, not ten years from now. The unfortunate situation is that we seem incapable of radical action, for a variety of reasons. It is hard to have international action when any individual country will be better served by shirking its duties (the prisoner's dilemma), and it is hard to act when the cost of action is immediate and the reward is far in the future. As unfortunate as this is, we might have to think of mitigation, rather than remediation. More accurate simulations will decrease the existing uncertainty about the rate of global warming and its effects, and will be needed to assess the effect of unmitigated climate change and the effect of various mitigation actions. Current simulations use 100 km grids. At that scale, California is represented by a few points, with no discrimination between the Coast Range and the Central Valley, or the Coast Range and the Sacramento-San Joaquin Delta. Clearly, global warming will have very different effects on these different geographies. With better simulations, each House member will know how his or her district will be impacted.

HPCwire: How much funding is available for this work and over what time period? Is each country contributing?

Cappello: This three-year project receives G8-coordinated funding from the Natural Sciences and Engineering Research Council of Canada (NSERC), the French National Research Agency (ANR), the German Research Foundation (DFG), the Japan Society for the Promotion of Science (JSPS) and the National Science Foundation (NSF). This project, together with five other projects, was funded as part of the G8 Research Councils Initiative on Multilateral Research, Interdisciplinary Program on Application Software towards Exascale Computing for Global Scale Issues. This is the first initiative of its kind to foster broad international collaboration on the research needed to enable effective use of future exascale platforms. The total funding for this initiative is modest, about 10 million euros over 3 years, spread over 6 projects.

HPCwire: Is that enough money to meet the goals of the project? Do you anticipate follow-on funding?

Snir: The project has received enough money to fund the research phase and develop separate prototypes on the three main topics.
Our focus is on understanding the limitations of current codes and developing a methodology for making future codes more performant and more resilient. The development of these future codes will require significantly more funding. We expect to collaborate with other teams that are continuing to improve climate codes, and to seek future funding to continue our work as new codes are developed.
Something to note: Rails typically runs in three modes:
- Test - Mode typically used for unit tests.
- Development - Development environment; includes verbose errors and stack traces.
- Production - Settings are as if you were running this application in a production environment.

Now obviously, if you've done something custom like `export RAILS_ENV=production`, this would be different. Additionally, explicitly specifying the mode in which something like the Rails console runs (example: rails console production) will change the default mode. What does all this mean? Well, really it means that you want to develop in development mode and run a production application in production mode. Pretty simple, huh?

Time to configure Unicorn in place of the default WEBrick web server. If you are asking yourself "why", the answer is fairly straightforward: Unicorn is meant for production, handles a large number of requests better and is, overall, more configurable. For the purposes of this tutorial, we will use Unicorn for both development and production. I want to demonstrate two ways of doing this. The first is by using a startup shell script. The other, for the purposes of an introduction to Rake tasks, will be to actually create a Rake task to start the application in lieu of a shell script.

Startup shell file: Modify your Gemfile by uncommenting the line with the Unicorn gem. Also, while we are at it, let's uncomment the Bcrypt gem as well. Run `bundle install`, then make the startup script executable and fire it up. The line `rvmsudo bundle exec unicorn $*` means...
- rvmsudo - Allows you to run sudo commands while maintaining your RVM environment.
- bundle exec - Directs Bundler to execute the program, automatically requiring all the gems in your Gemfile.
- unicorn - The Unicorn service.
- $* - Any arguments passed to the script will be executed as part of the command inside the script. Example: ./start.sh -p 4444 translates to `rvmsudo bundle exec unicorn -p 4444` and would start the server on port 4444.

Alternatively, we can just as easily package this up as a Rake task. A Rake task is a repeatable task that can be executed using the `rake` command. Nothing magical; it just harnesses Ruby goodness to convert your task definitions into an executable command. There is an excellent tutorial on Rake available via the Railscasts site. For our purposes, let's create a Unicorn rake file. Do this under /lib/tasks and use the `.rake` extension. Presumably, you may wish to have multiple tasks available in the unicorn namespace. For instance, if you'd like to both start and stop the Unicorn service, it would be beneficial to create a namespace titled "unicorn" with multiple tasks inside it. For the purposes of this tutorial, I will only cover building a start task, as you can easily expand upon this. Also, since we are running the Unicorn service in an interactive mode, you can hit ctrl+c to stop it. I would like to note that having a start and stop task is very beneficial if you are running Unicorn detached (non-interactive), where the service runs in the background. Moving along, here is the task...
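Here is a sketch of what that nine-line task plausibly looks like, matching the line-by-line walkthrough that follows (the wording of the description string and the file name are assumptions):

```ruby
# lib/tasks/unicorn.rake
namespace :unicorn do                                          # line 1

  desc "Start the Unicorn server, optionally on a given port"  # line 3
  task :start, [:port] do |task, args|                         # line 4
    port_command = args[:port].nil? ? "" : "-p #{args[:port]}" # line 5
    sh "rvmsudo bundle exec unicorn #{port_command}"           # line 6
  end

end                                                            # line 9
```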
Lines 1 & 9 - Begin and end the unicorn namespace definition. Line 3 - Describe the task (useful at the console). Line 4 - Define the task; the first block argument is the task itself, and any additional (comma-separated) definitions are arguments. In this example, we accept a port argument. Line 5 - We code some logic that says port_command will equal either an empty string or "-p <port number>"; if a port number is not provided (nil), it will equal an empty string. Line 6 - This is a shell command that appends the result of port_command to `rvmsudo bundle exec unicorn`.

Let's list our tasks (`rake -T`) and see if ours is available. Success! Notice how the description and command format are auto-magically taken care of for you. You can run this in one of two ways: `rake unicorn:start[4444]` (starts the Unicorn service on port 4444) or `rake unicorn:start` (starts it on the default port, 8080).

To recap, we've shifted off of WEBrick and over to Unicorn. Also, we've introduced the concept of a Rake task. Stay tuned for more parts in this series...
Ordering the evacuation of residents during a disaster can be one of the most formidable decisions a leader faces during an emergency of any size, as displacing citizens can have a profound impact on their lives. Large-scale evacuations can potentially affect hundreds of thousands of individuals. In addition to the impact on the general public, an emergency requiring widespread evacuation generally involves numerous jurisdictions moving individuals from an affected area to a host area, and thus requires meticulous coordination.

The metropolitan Atlanta region, vulnerable to both natural and man-made disasters, has a history of successful, multijurisdictional collaboration on a wide range of emergency preparedness issues, and took that collaboration a step further with its Regional Evacuation Coordination Plan (RECP). Atlanta Mayor Shirley Franklin, who was instrumental in developing the plan, said, "Atlanta is the economic engine of the region, home to Hartsfield-Jackson Atlanta International Airport, the world's busiest airport, several Fortune 500 companies, and a population of nearly 500,000, which makes it highly vulnerable. Having a coordinated regional emergency evacuation plan is critical as the city and the region work to enhance their ability to effectively respond to human-caused and natural disasters."

In April 2008, the Atlanta Regional Commission, Georgia Emergency Management Agency (GEMA) and Atlanta-Fulton County Emergency Management Agency collectively sought to develop a regional evacuation plan that would guide elected officials, emergency managers and other supporting organizations from the 10 contiguous counties in the metropolitan Atlanta region in coordinating a safe and effective regional evacuation. The RECP, which would be built on existing state and local emergency operations and coordination processes, would not supersede any existing emergency operations plan. Rather, it would supplement the all-hazards concept of operations described in the GEMA Emergency Operations Plan and each of the 10 counties' emergency operations plans.

At the project's inception, a Planning Advisory Committee was formed, comprising emergency managers from all 10 counties in the metropolitan Atlanta region — Cherokee, Clayton, Cobb, DeKalb, Douglas, Fayette, Fulton, Gwinnett, Henry and Rockdale — as well as the Atlanta Regional Commission, the Urban Area Security Initiative, the city of Atlanta and GEMA. The Planning Advisory Committee concluded that a sound RECP would provide sufficient answers to three pressing questions. To answer these questions, development of the RECP followed a three-phased approach: a workshop, a series of planning analyses, and plan development. The first two phases were critical to creating an informed, executable plan.

Phase one featured an evacuation workshop to bring together the stakeholders most critical to the creation and implementation of the RECP. The workshop was intended to provide an opportunity for these stakeholders to discuss roles and responsibilities in preparation and response during an evacuation; brainstorm about critical success factors for an effective evacuation; discover existing capabilities and resources; determine shortfalls in capabilities and resources; and build consensus for developing an executable plan.
The workshop involved more than 80 participants from more than 50 organizations, including officials from the emergency management, transportation, special needs, utilities, public safety and communication sectors. During the workshop, these stakeholders and local experts exchanged ideas, advice and experiences, which served as the foundation for the RECP.

Phase two included the planning analysis, which was the most complex phase of the project. Analyses were conducted in seven areas of concern, and the results served as the planning assumptions on which the overall RECP was built. The third phase, the plan development phase, combined the results of the phase two planning analyses to create an executable Atlanta regional evacuation coordination plan. Throughout the entire RECP development process, stakeholders from numerous areas, including county, city, state, federal, private and nonprofit organizations, were interviewed and consulted on their roles, responsibilities and capabilities to assist with an evacuation.

The simple answer to this question is "yes." However, because the incident most likely to necessitate an evacuation is one with little or no forewarning and an indeterminate impact location, it was vital that the RECP consider four distinctly different evacuation scenarios. Each of these evacuation scenarios focuses on distinct at-risk populations and is heavily influenced by the results of the behavioral analysis from phase two, which assessed how residents of the region would respond to an evacuation order.

One of the key objectives of the Atlanta RECP was to gain a deeper understanding of how residents will react to an evacuation and what their needs might be. Using a phone survey system, more than 16,000 area households were called and asked to participate in a 13-question survey on emergency evacuations. The households surveyed were distributed throughout the region in proportion to the size of each county's population. Each household was asked to place itself in a hypothetical evacuation scenario and then answer the survey questions. The results of the RECP survey will help elected officials and emergency managers make better response and recovery decisions, and will give emergency management planners in the Atlanta region a more realistic understanding of the needs of an evacuating population and the resources required to support them.

The decision to evacuate an area is not one to be taken lightly. There are significant impacts on public safety, public perception and the economy. Moreover, it's widely acknowledged that chief elected officials and emergency management directors don't have time to memorize or even thumb through a 200-page plan in the midst of a serious crisis. With that in mind, the entire Atlanta RECP was summarized into the first four pages of the document. This important four-page section contains a first-hour checklist and an evacuation process flow chart, and can be pulled out as a stand-alone document. The first-hour checklist summarizes the key activities and tasks that chief elected officials, emergency managers and other decision-makers must be aware of to effectively respond to an emergency incident. The evacuation process flow chart outlines the general coordination process during an emergency as well as the plans and procedures that must be followed during each step. The concept of coordination described in the chart works hand-in-hand with the concept of operations described in the GEMA Emergency Operations Plan and county emergency plans.
The RECP was organized into three sections: base plan, evacuation planning analysis and county-specific information. As the regional planning and intergovernmental coordination agency for the 10-county area, the Atlanta Regional Commission is focused on unifying the region's collective resources to prepare it for a safe and prosperous future. In keeping with its mission to catalyze regional progress, the commission skillfully coordinated the planning effort between emergency managers from the 10 counties, GEMA and key community, faith-based, private and transportation-related organizations during this project. The resulting Atlanta RECP — which reflects the region's commitment to ensuring the safety and well-being of its citizens — serves as a national best-practice model for regional collaboration and planning. Jeff Hescock is a senior consultant with Beck Disaster Recovery, Inc., www.beckdr.com, and was the project manager on the Atlanta Regional Evacuation Planning Project. He can be reached at email@example.com.
EMC and CNT recently created the first storage-over-IP system that mirrors data across the Atlantic Ocean. The implementation allowed P&O Nedlloyd Container Line to mirror data between sites in Rutherford, N.J., and London on EMC Symmetrix storage systems over the company's private IP network. EMC's SRDF (Symmetrix Remote Data Facility) and new encapsulation/decapsulation technology from CNT made the project possible. SRDF-over-IP enables automatic mirroring of files, databases and applications among EMC Symmetrix storage systems via private IP networks. By leveraging IP-based networking infrastructures, IT managers can extend the reach of SANs. However, this sort of infrastructure could create a single point of failure in the network.
From A to V: Refuting Criticism of Our Antivirus Report

While our report acknowledged the limitations of our methodology, we believe that, fundamentally, it is the model for antivirus — and not our methodology — that is flawed. Antivirus was built years ago, in an age when mass infection was the name of the game. Today, malware is deployed to target SPECIFIC individuals — CEOs, researchers, politicians, executives — and not everyone's mom.

One reaction to our study asserted that a virus can be blocked based on source IP: "email with the malware attached, or the included URL… could have been blocked based on its source IP." This approach, however, addresses an old threat model in which the attacker would try to infect as many targets as possible with a single campaign, reusing the URLs that host the malware and the IP addresses that send the email. Reusing IPs allowed security companies to maintain blacklists of both IPs and URLs. In today's threat landscape, however, attackers pursuing a specific victim create a dedicated URL to host the malware and use a dedicated IP address to send the malicious mail, easily overcoming blacklists.

Our study concluded that antivirus solutions are very effective in fighting widespread malware, and slightly less effective against older malware (2-3 months old). But for new malware, there is a good chance it will evade the antivirus. In fact, our results are consistent with other studies. For example, let's look at the AV-TEST Institute's results. The AV-TEST Institute, according to their site, is a "leading international and independent service provider in the fields of IT security and anti-virus research." According to AV-TEST's website, in order to test the protective effect of a security solution, AV-TEST researchers simulate a variety of realistic attack scenarios, such as the threat of e-mail attachments, infected websites or malicious files that have been transferred from external storage devices. When carrying out these tests, AV-TEST takes the entire functionality of the protection program into account. But even with all of the antivirus functionality enabled, the results reveal a worrisome security gap: while antivirus solutions are very effective in fighting widespread malware, and slightly less effective against older malware, for new malware there is a good chance it will evade them. That's exactly what we found.

Finally, one should ask the question CEOs are asking CISOs worldwide: if antivirus software is so good, how come we see so many successful attacks based on infected computers (Coca-Cola and the South Carolina DoR, to name a few)? The obvious answer is that antivirus is not perfect and needs to be augmented with data security solutions, as veteran antivirus researcher Mikko Hypponen honestly acknowledged: "Antivirus systems need to strike a balance between detecting all possible attacks without causing any false alarms. And while we try to improve on this all the time, there will never be a solution that is 100 percent perfect. The best available protection against serious targeted attacks requires a layered defense."
The Object Oriented Data Technology (OODT) architecture, which was first developed at the NASA Jet Propulsion Laboratory as a way to use metadata to seek out distributed computing and data resources, has been selected as one of a small handful of projects that will receive management and resource support from the Apache Software Foundation. As NASA notes, the OODT architecture was first intended "to build a national framework for data sharing, but soon other applications in physical science, medical research and ground data systems became apparent." It has already been used for a number of Earth-based scientific missions as well as a number of ongoing projects at the Jet Propulsion Laboratory (JPL) in the areas of astrophysics and climate change research. There are a number of benefits that several at JPL see in offering OODT as an open source package, including the fact that it will be opened to a wide base of developers who can build on the existing code, speed development and provide the "peer review process that pushes a certain development standard for the package." According to Chris Mattmann, one of the lead developers at JPL and Vice President of OODT at the Apache Software Foundation, "we regularly used open software in our daily JPL tasks and were impressed with the quality of code and vibrant nature of free and open source software communities… it was then decided that OODT should be released as an open source software package."
There are good and bad hackers. Here is a window into what they do and why:

White Hat Hackers: These are the good guys, computer security experts who specialize in penetration testing and other methodologies to ensure that a company's information systems are secure. These IT security professionals rely on a constantly evolving arsenal of technology to battle hackers.

Black Hat Hackers: These are the bad guys, typically referred to as just plain hackers. The term is often used specifically for hackers who break into networks or computers, or create computer viruses. Black hat hackers continue to technologically outpace white hats. They often manage to find the path of least resistance, whether due to human error or laziness, or with a new type of attack. Hacking purists often use the term "crackers" to refer to black hat hackers. Black hats' motivation is generally to get paid.

Script Kiddies: This is a derogatory term for black hat hackers who use borrowed programs to attack networks and deface websites in an attempt to make names for themselves.

Hacktivists: Some hacker activists are motivated by politics or religion, while others may wish to expose wrongdoing, exact revenge, or simply harass their target for their own entertainment.

State-Sponsored Hackers: Governments around the globe realize that it serves their military objectives to be well positioned online. The saying used to be, "He who controls the seas controls the world," and then it was, "He who controls the air controls the world." Now it's all about controlling cyberspace. State-sponsored hackers have limitless time and funding to target civilians, corporations and governments.

Spy Hackers: Corporations hire hackers to infiltrate the competition and steal trade secrets. They may hack in from the outside or gain employment in order to act as a mole. Spy hackers may use tactics similar to hacktivists', but their only agenda is to serve their client's goals and get paid.

Cyber Terrorists: These hackers, generally motivated by religious or political beliefs, attempt to create fear and chaos by disrupting critical infrastructures. Cyber terrorists are by far the most dangerous, with a wide range of skills and goals. Their ultimate motivation is to spread fear and terror, and even to commit murder.

McAfee Identity Protection includes proactive identity surveillance to monitor subscribers' credit and personal information, and access to live fraud resolution agents who can help subscribers work through the process of resolving identity theft issues. For additional tips, please visit http://www.counteridentitytheft.com
Myth as myth

One of those mythical statements is: "the public cloud is the most inexpensive way to procure IT services." Companies who promulgate this style of myth often do so for their own ends. They begin by freely conceding that, yes indeed, a characteristic of the public cloud is a relatively inexpensive 'pay-as-you-use' model, before proceeding to 'bust the myth' on their own terms.

Would-be myth busters

Initially, setting the scene, these 'axe grinders' will indicate that the starting price for basic, on-demand instances within Amazon's EC2, for example, is less than 10¢ per hour, based on metrics like system size, operating system, etc. Such companies usually conclude their introduction by suggesting that it's easy to see why people think all delivery from the public cloud is cheaper than that delivered by internal IT. It's no surprise that the punch line comes next, when a 'myth buster' will typically state, with a leading 'However', that if you probe further, the picture will change. The changed picture is painted thus: for resources that are needed constantly, the private cloud is more cost-efficient than the pay-as-you-use public cloud model.

A common analogy

What usually follows is a pithy example by way of analogy, such as a comparison between renting and buying a car. The argument goes that for short-term use, a car rental is cost-effective, because you only pay for what you consume. The clincher seems to be the statement that if you drive frequently and/or for longer, owning a vehicle makes better financial sense. Does it, indeed? Let's explore that idea.

Truth rather than myth

The myth is stated as: "the public cloud is the most inexpensive way to procure IT services." In reality, what you've just read is a misrepresentation of the situation. The claims made for cloud computing are not presented in that fashion. It is a myth that such claims are made. The real statement should be presented as: "the public cloud is the most cost-effective way to procure IT services." Therein lies truth and no myth. Unsurprisingly, the myth busters chose not to present it that way, because it's more difficult to argue against it being cost-effective. They prefer self-fulfilling prophecies in their desire to influence behaviour.

The analogy explored

To buy the analogous car, you first need a deposit, typically thirty percent of the purchase price, which is a non-trivial investment, assuming you don't have to borrow the cash in the first place, in which case the proposition becomes even less attractive. To retain ownership of the car, you are then committed to paying the balance of the price of the vehicle, in monthly instalments, over an extended period of time, not atypically several years. During that hire purchase contract term, you are also likely to take up a maintenance or service package, which may or may not include provision for consumables, e.g., tyres, and you may even extend the warranty for an extra cost.

It isn't that simple

At the end of the term, you own a car. Bully for you! However, the car is worth far less than you paid for it and (don't forget this is merely an analogy) probably no longer ideal for the task for which it was first procured. So you buy a new car and, if you're lucky, the residual value will suffice for the deposit on the new one, but don't count on it. Now stop and think. You're on the merry-go-round. You're locked into buying car after car, forever and a day. Let's look at the hypothetical outlay involved, substituting a PBX system for the car.
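A small sketch reproduces the comparison using the figures quoted in the text around it (a $1,500.00 monthly instalment on the owned PBX, 250 seats at $10 per seat per month, and 3¢ per minute of platform time; all of them illustrative assumptions rather than real quotes):

```python
# Figures quoted in the surrounding text (illustrative assumptions).
MONTHLY_BUDGET = 1_500.00         # hire-purchase instalment on the owned PBX, $/month
SEATS, PER_SEAT_FEE = 250, 10.00  # flat-fee model, $/seat/month
PER_MINUTE_RATE = 0.03            # pay-as-you-go platform time, $/minute

# Flat fee per seat: 250 * $10 = $2,500, i.e. $1,000 over the budget.
flat_fee = SEATS * PER_SEAT_FEE
print(f"Flat fee: ${flat_fee:,.2f}/month (${flat_fee - MONTHLY_BUDGET:,.2f} over budget)")

# Pay-as-you-go at a typical 15,000 minutes/month: $450, under $500.
typical_minutes = 15_000
print(f"Usage-based: ${PER_MINUTE_RATE * typical_minutes:,.2f}/month")

# Break-even: the $1,500 budget buys 50,000 minutes each month.
print(f"Budget covers {MONTHLY_BUDGET / PER_MINUTE_RATE:,.0f} minutes/month")
```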
However the outlay is tabulated (deposit, total payment, total cost), the key cost for comparison is the monthly payment of $1,500.00, as that's what we have to spend on an equivalent, cloud-based IT service, in this case an IP-PBX. And don't forget, that monthly cost is payable in perpetuity as long as you keep replacing and funding each new PBX in the same way. We can assume market price pressure rather neatly counteracts inflation, for the sake of simplicity.

The false trail

The myth busters would have you believe that you will pay a flat fee per seat per month for an IP-PBX, just as you do with certain software-as-a-service offerings. Thus, if you've got 250 employees at $10 per seat per month, your monthly budget is exceeded to the tune of $1,000.00, and the idea that the public cloud is the most inexpensive way to procure IT services is, therefore, held up as a myth, because, in pure monetary terms, it's plainly not the "most inexpensive" option. But they're barking up the wrong tree. You've already read that they've made a myth of a myth. Consider now, again, the real claim, which is, "the public cloud is the most cost-effective way to procure IT services."

The whole truth

In fact, you can argue that the 'per seat' method is cost-effective; however, there is another model. If you are not paying a flat-rate monthly fee and, instead, you're paying on an 'as-you-go' usage basis, the picture is quite different. If you're paying 3¢ per minute for platform time and your usage is reasonably typical (e.g., you are not an outbound collections agency), your enterprise's monthly call volume would be in the region of 12-15,000 minutes, which is less than $500 per month. To put it another way, you can consume up to 50,000 minutes per month before you exceed the above notional budget. You may call that inexpensive if you wish, but you surely can't argue against its being cost-effective.

Little more needs to be written. You can sum it up yourself and substitute your own figures. Whichever way you approach the issue (monthly outlay per the above example, capital purchase over the term, or a flat fee per seat), it's reasonably clear that, in many typical scenarios, the public cloud can be a very cost-effective way to procure IT services. That is what is claimed; nothing more, nothing less, and there's no myth about it. Of course, there are other concerns beyond mere price, such as performance, security, compliance, service level agreements, availability, etc. However, we shall consider those another day. See you next time.
One question that frequently pops up in evaluating security systems is whether to use hardware-based or software-based encryption. Typically, hardware-based encryption is considered more secure because the encryption keys are embedded in the hardware, and acquiring them would require a very sophisticated attack at the hardware layer. In addition to better security, hardware-based encryption systems do not consume host system resources, which results in much faster performance for cryptographic operations. Software-based encryption, on the other hand, is more vulnerable to hacking attacks, particularly from virtual rootkits which penetrate the operating system. These rootkits are a primary threat to corporate systems, with rootkit malware being delivered to desktops using the attack vectors of social engineering and phishing emails.

This conventional wisdom of hardware being better than software for cryptography was recently called into question when secret documents procured by Edward Snowden were leaked, revealing that the National Security Agency (NSA) had been working with chipmakers to insert backdoors and cryptographic weaknesses into their products, presumably for the NSA to access as the need arose. The processors of some manufacturers were thought to no longer be trustworthy because the random number generators (RNGs) needed to generate cryptographic keys had been weakened to the point of failing to provide strong, near-unbreakable encryption. In addition to deliberate sabotage, there is always a possibility, however slight, that some chips were faulty to the point of introducing vulnerabilities into the encryption process.

So do these doubts mean that hardware encryption is no longer the preferred method of encryption? No. What it does mean, however, is that extra precautions must be taken before blindly accepting a hardware-based encryption system. There should be a due diligence evaluation process to ensure that the hardware-based encryption system will work at the needed levels of security. This is especially important for applications handling security-sensitive data or high-value transactions.

A first step should be to ensure that the product has been tested by an independent third-party laboratory and has been certified against a known standard such as the Federal Information Processing Standard (FIPS) 140-2 or the Common Criteria for Information Technology Evaluation. FIPS 140-2 sets security requirements for cryptographic modules and designates four levels of security certification, with each level detailing the standards that must be met for increasing levels of security assurance. The standard reviews the basic design and documentation, physical security measures, cryptographic algorithms and module interfaces. Certification can only be achieved through rigorous testing handled by third-party laboratories that are accredited as Cryptographic Module Testing laboratories. The other accreditation process mentioned above, the Common Criteria for Information Technology Security Evaluation (or Common Criteria for short), has a much wider scope of review than FIPS, covering the product from its inception to final product and overall use. Common Criteria reviews the software, hardware and firmware of a device, as well as the overall development process of the product from planning to commercial release. Almost every aspect and process which goes into the design, development, release and support of the product is reviewed.
Like FIPS, Common Criteria has several levels of achievement based on the level of complexity, security and functionality necessary; it defines seven levels of increasing security assurance. So would obtaining these certifications address the allegations of backdoors raised in the documents leaked by Snowden? Maybe, but to be sure you could also introduce additional sources of randomness to the RNGs to ensure the encryption keys generated are sufficiently strong. This would also help guard against the possibility that faulty chips were generating less-than-random encryption keys. Thus, in general, relying on multiple sources of randomness is a good practice.

The amount of money and effort invested in the due diligence should be commensurate with the level of security needed for the application. In layman's terms, don't go spending more on the bicycle lock than the bicycle is worth. That's why hardware security encryption systems with advanced levels of FIPS or EAL certification are usually reserved for national security or financial applications.

In summary, using hardware-based encryption is still your best choice for secure transactions at higher processing speeds. However, one must perform the proper due diligence by ensuring the hardware has been certified by an independent testing laboratory and, if needed, using additional random number generators. These measures alone cannot guarantee absolute security, but they can make security breaches far less likely to occur. They also demonstrate the due diligence and high levels of assurance needed for high-end security applications.
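As a sketch of the "multiple sources of randomness" advice, key material can be derived by hashing independent entropy sources together, so that an attacker must predict every input rather than any single one. The second source below is a stand-in; in practice it would be a hardware TRNG, sensor noise, or another generator you trust independently of the OS:

```python
import hashlib
import os
import secrets

def secondary_entropy(n: int = 32) -> bytes:
    """Stand-in for an independent entropy source (hardware TRNG,
    sensor noise, a second DRBG, ...); replace with one you trust."""
    return secrets.token_bytes(n)

def mixed_key_material(n: int = 32) -> bytes:
    """Hash independent sources together with SHA-256 so the output
    is unpredictable as long as at least one input is unpredictable."""
    pool = hashlib.sha256()
    pool.update(os.urandom(n))         # the OS CSPRNG
    pool.update(secondary_entropy(n))  # the independent second source
    return pool.digest()[:n]

key = mixed_key_material()  # 32 bytes of mixed key material
```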
An Alternate Approach to Inertial Fusion Energy / July 3, 2012 A new accelerator -- the NDCX-II, shown above -- has been designed to study an alternate approach to inertial fusion energy. (In an Inertial Fusion Energy Power Plant, 10 to 20 pulses of fusion energy per second heat a low-activation coolant, such as lithium-bearing liquid metals or molten salts, surrounding the fusion targets, according to Lawrence Livermore National Laboratory (LLNL). The coolant transfers the fusion heat to a turbine and generator to produce electricity.) The Department of Energy's Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL) -- whose member institutions include LLNL, Lawrence Berkeley National Laboratory and the Princeton Plasma Physics Laboratory -- recently completed the NDCX-II, which is a compact machine designed "to produce a high-quality, dense beam that can rapidly deliver a powerful punch to a solid target," LLNL reported. Research with NDCX-II will introduce advances in the acceleration, compression and focusing of intense ion beams that can inform and guide the design of major components for heavy-ion fusion energy production. Photo by Roy Kaltschmidt/Lawrence Berkeley National Laboratory
A software repository is a storage location where you can store software packages. You can access these software packages when required and install them on computers in your network. In Desktop Central, there are two types of software repositories: network-share repositories and HTTP repositories.

A network-share repository is used when you want to deploy a software application to multiple computers in a network. It is recommended that you store the software package that you want to deploy in a network share that is accessible from all the computers in the network. The software application will be installed directly on the computers that you specify. Most software applications have a single installation file, such as <setup>.exe or <softwarename>.exe. Other applications have more than one installable file, but these files are located in the same directory. Some complex applications, like Microsoft Office, have multiple installable files, each located in a different directory. It is recommended that you deploy such applications from a network share that is accessible from all the computers in your network. Using a network-share repository enables you to deploy packages without copying them to every computer.

The network-share repository should have the Read and Execute permission for all the users and computers in the network. You should set these permissions for the group Everyone. This ensures that the network-share repository is accessible from all the computers in the network. However, there are scenarios where you should not grant the Read and Execute permission to all users and computers in the network.

Creating a Network-share Repository
If you do not enter a path for the network share, it will automatically be created on the computer where the Desktop Central server is installed.
a. If you are creating the network share on a domain computer, prefix the domain name to the username, for example, ZohoCorp\Administrator.
b. If you are creating the network share on a workgroup computer, prefix the computer name to the username, for example, <ComputerName>\Administrator.
After completing these steps, you have created a network-share repository.

An HTTP repository is used to store executable files before you install them on computers in your network. You can use this repository when you want to deploy software packages to computers using the HTTP path, including computers that cannot access the network share. The HTTP repository is created automatically when you install Desktop Central. It is located in the same folder as the Desktop Central server, for example, <Desktop Central server>\webapps\DesktopCentral\swrepository. You can change the location of the repository if required. If you are unable to change the location of the HTTP repository, see Cannot Change the Location of the HTTP Repository.

While it is recommended that you have a common software repository, it is not mandatory. You also have the option to upload the executable files to the Desktop Central server, from where they are copied to the computers before being deployed. Using this approach will increase your bandwidth overhead because the executable files are copied to each of the computers.
Therefore, it is recommended that you use this approach when you are deploying software applications to computers in a remote location. This is because, in most cases, when you deploy software applications to computers in remote locations you do not have access to the respective network-share repository. When you want to deploy software packages to computers in both a LAN and a WAN, create two packages for the same software application. Store one set of packages in the network-share repository; these will be deployed and installed on the computers in the LAN. Store the other set of packages in the HTTP repository; these will be uploaded and deployed to the computers in the WAN. When you want to install multiple packages, you can zip them and upload the archive. For more information, see How to use the HTTP Path option to deploy software packages that have multiple executable files in different directory structures? There are also a few exceptional scenarios where executable files are copied to computers in your network even when you use a network-share repository.
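As a concrete illustration, the share and the Everyone Read/Execute permissions described above could be created from an elevated Windows command prompt roughly as follows. The share name and path are placeholders, not values required by Desktop Central:

    net share swrepository=D:\swrepository /GRANT:Everyone,READ
    icacls D:\swrepository /grant Everyone:(OI)(CI)RX

The first command publishes the folder as a share with read access; the second sets the matching NTFS read-and-execute permissions, with (OI)(CI) making them inherit to files and subfolders.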
Are your passwords safe? Bad news. Almost certainly not. According to some troubling new data released Tuesday by Deloitte, password security will be a primary concern for all connected users in 2013. The global consulting firm predicts that over 90%—yes, you read that right, 90%—of passwords generated by users will be vulnerable to hacking this year.

You may be thinking: Not mine! I’ve got the recommended 8-character mix of letters, numbers and symbols. Think again. In this new era of crowd-hacking and sharing passwords across multiple accounts (big no-no), even passwords that in the past were considered strong are now highly hackable. The truth is, one symbol and a capital letter at the beginning of a word is just not enough. According to a study cited by Deloitte, the vast majority of a sample of 6 million accounts were accessible with only the 10,000 most common passwords.

Users tend to rely on the same character combinations and reuse passwords across multiple accounts for one simple reason: it’s easier to remember. Online bank accounts, PayPal accounts, social media, work email, personal email… The passwords pile up, and most people choose to put themselves at risk for the sake of ease and convenience.

This article about the Deloitte study, however, suggests a solution: password managers. Not only does a password vault like Keeper keep track of your passwords for you, it encrypts them heavily to protect against hacking. And as an added level of protection, Keeper generates random passwords with a roll of the dice for seriously strong combinations of symbols. It’s a simple and highly effective solution, eliminating the deficiencies of human memory and predictability.
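For readers who want the flavor of dice-style generation without any particular product, here is a minimal sketch using Python's cryptographic secrets module. The 16-character length and the alphabet are arbitrary choices, and this shows the general technique only, not Keeper's actual implementation:

    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def random_password(length: int = 16) -> str:
        # Each character is an independent draw from the OS crypto RNG,
        # so the result is immune to dictionary-style guessing.
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(random_password())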
On Wednesday, D-Wave Systems made history by announcing the sale of the world’s first commercial quantum computer. The buyer was Lockheed Martin Corporation, who will use the machine to help solve some of their “most challenging computation problems.” Lockheed purchased the system, known as D-Wave One, as well as maintenance and associated professional services. Terms of the deal were not disclosed.

D-Wave One uses a superconducting 128-qubit (quantum bit) chip, called Rainier, representing the first commercial implementation of a quantum processor. An early prototype, a 16-qubit system called Orion, was demonstrated in February 2007. At the time, D-Wave was talking about future systems based on 512-qubit and 1024-qubit technology, but the 128-qubit Rainier turned out to be the company’s first foray into the commercial market.

According to D-Wave co-founder and CTO Geordie Rose, the D-Wave One uses a method called “quantum annealing” to solve discrete optimization problems. While that may sound obscure, it applies to all sorts of artificial intelligence-type applications such as natural language processing, computer vision, bioinformatics, financial risk analysis, and other types of highly complex pattern matching. We asked Rose to describe the D-Wave system and the underlying technology in more detail.

HPCwire: In a nutshell, can you describe the machine and its construction?

Rose: The D-Wave One is built around a superconducting processor. The processor is shielded from noise using specialized filtering and shielding systems that ensure that the processor’s environment is extremely quiet, and is cooled to almost absolute zero during operation. The entire system’s footprint is approximately 100 square feet. While there is a substantial amount of exotic technology inside the D-Wave One, the system has been built to require very little specialized knowledge to operate. Users interact with the system via an API that allows the D-Wave One to be accessed remotely from a variety of programming environments, including Python, Java, C++, SQL and MATLAB.

HPCwire: What is “quantum annealing?”

Rose: Quantum annealing is a prescription for solving certain types of hard computing problems. In order to run quantum annealing algorithms, hardware that behaves quantum mechanically — such as the Rainier processor in the D-Wave One — is required. Quantum annealing is conceptually similar to simulated annealing and genetic algorithms, but is much more powerful.

HPCwire: Can you prove that quantum computing is actually taking place?

Rose: This was the question we set out to prove with the research published in the recent edition of Nature. The answer was a conclusive “yes.”

HPCwire: How much power is required to run the machine?

Rose: The total wall-plug power consumed by a D-Wave One system is 15 kilowatts. This power requirement will not change as the processors become more powerful over time.

HPCwire: How much does D-Wave One cost?

Rose: Pricing for D-Wave One is consistent with large-scale, high-performance computing systems.

HPCwire: What kinds of problems is it capable of solving? Have you demonstrated any specific algorithms?

Rose: We have used the D-Wave One to run numerous applications. For example, we used the system to solve optimization problems arising from building software that could detect cars in images. This process outputs software that can be deployed anywhere — mobile phones, for example.
The software the D-Wave One system wrote, with collaborators from Google and D-Wave, was among the best detectors of cars in images ever built. It is discussed at http://googleresearch.blogspot.com/2009/12/machine-learning-with-quantum.html.

HPCwire: What’s next?

Rose: This is a very significant time in the history of D-Wave. We’ve sold the world’s first commercial quantum computer to a large global security company, Lockheed Martin. That’s a real milestone for us. We are excited to work with Lockheed and future customers to tackle complex problems traditional methods cannot resolve. Last week we were validated on the science side by Nature and this week, on the business side, by the sale of our quantum computer to this Fortune 500 company.
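Since Rose compares quantum annealing to simulated annealing, a toy classical version may help readers place the technique. The sketch below anneals a bit string toward the minimum of an arbitrary energy function; it illustrates the algorithm family only and is not D-Wave code, and the cooling schedule and toy problem are invented for the example:

    import math
    import random

    def anneal(energy, state, steps=10000, t0=2.0):
        best = state[:]
        for k in range(steps):
            t = t0 * (1 - k / steps) + 1e-9         # linear cooling schedule
            cand = state[:]
            cand[random.randrange(len(cand))] ^= 1  # flip one random bit
            d_e = energy(cand) - energy(state)
            # Always accept improvements; accept some uphill moves early on.
            if d_e < 0 or random.random() < math.exp(-d_e / t):
                state = cand
            if energy(state) < energy(best):
                best = state[:]
        return best

    # Toy problem: find a hidden 8-bit pattern by minimizing disagreements.
    target = [1, 0, 1, 1, 0, 0, 1, 0]
    print(anneal(lambda s: sum(a != b for a, b in zip(s, target)), [0] * 8))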
"Pharming" is a variety attack type in which the attacker hijacks the network address (either IP address or domain name) of a target application for the purpose of intercepting all end-user interaction with the target application. The attacker can then make use of this interception to compromise sensitive information or distribute malware, including back doors and Trojans. In a “pharming” attack, the attacker sets up a web server to intercept all communication between a set of end-users and a target application. The attacker then hijacks the network address of the target application causing all end-user communications with the target application to go through the attacker controlled server. Victims of this attack access the target application, not knowing that their requests are being intercepted, compromising sensitive information, such as credentials. Attacker can relay requests to the target application and intercept sensitive information (balance sheets, personal details, etc.) going back to victims or even create bogus replies injecting malware into unsuspecting victim’s machine. Attackers use a variety of techniques to hijack the network address of the target application. One set of techniques is targeted at end-point computers with loose security controls (many home computers fit this profile). A sample technique would be to tamper the computer’s hosts file in a way that the domain name of the target application points to an attacker controlled server. Another method to redirect the traffic is by “DNS-poisining” where the attacker exploits a DNS (Domain-Name Service) vulnerability so that the DNS’ returned address references the IP address of an attacker controlled server rather than the IP address of the actual application server. While this type of attack is more effective in terms of the number of user affected by it, it is harder to execute since DNS servers are usually more secure than workstations. Since the popular method of attack in a “pharming” scheme is to change the hosts file on the victim’s computer, implementing personal computer safety practices is a wise choice. Avoid the usage of default usernames and passwords and make sure the computer has a network firewall configured and running. A user should also check the security certificate of the website that prompts for user credentials. If the security certificate is outdated or invalid, the user should suspect a “pharming” scheme.
SQL Injection Vulnerability

SQL Injection is one of the most widely exploited web application vulnerabilities of the Web 2.0 era. SQL Injection is used by hackers to steal data from online businesses' and organizations' websites. This web application vulnerability is typically found in web applications that do not validate the user's input. As a result, a malicious user can inject SQL statements through the website and into the database and have them executed.

NOTE: This article gives an overview of the SQL Injection vulnerability. If you are looking for more technical information about all the different variants of SQL Injection, refer to the SQL Injection cheat sheet.

SQL Injection Vulnerability Target - Database-Driven Web Applications

Nowadays all online businesses use database-driven web applications to sell products and services to their customers and share real-time information with their business partners. Be it a news website, an online shopping website, a blog, a social network or an enterprise resource planning system, all of these web applications have access to and interact with an online backend database. It is typical for people browsing the internet to read data from an online backend database, most of the time without even realizing it. When you search for a pair of running shoes on an online shopping website or check the balance of your bank accounts through an e-banking web application, you are retrieving data from the backend database through the web application. On the other hand, if you register on a news website, blog or forum, submit credit card details to an online shopping website or post an update on a social network, you are writing data to an online backend database through the web application.

One of the main problems with database-driven web applications is that if the user input is not properly sanitized, a hacker can take advantage of the situation and use an SQL Injection hacking technique to pass SQL statements through the web application so they are executed by the backend database.

Impacts of an SQL Injection

If your web application is vulnerable to SQL Injection, a hacker is able to execute any malicious SQL query or command through the web application. This means he or she can retrieve all the data stored in the database, such as customer information, credit card details, social security numbers and credentials to access private areas of the portal, such as the administrator portal. By exploiting an SQL Injection it is also possible to drop (delete) tables from the database. Therefore, with an SQL Injection the malicious user has full access to the database.

Depending on your setup and the type of server software being used, by exploiting an SQL Injection vulnerability some malicious users might also be able to write to a file or execute operating system commands. With such escalated privileges this might result in a total server compromise. Unfortunately it is very difficult to determine the impact of an exploited SQL Injection. Most of the time, if the hackers are well trained, you won't be able to detect the attack until your data is available to the public and your business reputation is going down the drain.

Example of an SQL Injection

For this SQL Injection example we will use a typical login page where users enter their credentials to log in to a website or private portal. When a user submits a username and password, the web application uses these credentials in an SQL query.
This SQL query is sent to the backend database to be executed, and depending on the result of the query the website determines whether the credentials are correct, thus allowing the user to access the portal or denying access. E.g. if the username is "admin" and the password is "12345678", the web application sends an SQL query similar to the one below to the database to verify the credentials:

SELECT * FROM Users WHERE name = 'admin' AND password = '12345678'

Suppose a malicious user enters something like "test' OR 1 = 1--" instead of the username and anything else as the password. In this case the SQL query will look like the one below:

SELECT * FROM Users WHERE name = 'test' OR 1 = 1 --' AND password = 'xxxxx'

The above SQL statement will always return true because:
1. name = 'test' OR 1 = 1 will always evaluate to true, since 1 = 1 is always a true statement.
2. The rest of the SQL statement after the -- characters is commented out, i.e. that part of the query is not executed.

Since the database returned a true value, the malicious user was able to trick the web application and gain access to a logged-in session without needing to guess the credentials. This type of SQL Injection vulnerability can also be used to retrieve further data from the database, such as table names and their content. Even though this might look like a simple old-school trick, many web applications are still being hacked today by exploiting a similar SQL Injection. There are many other, more complex variants of SQL Injection, and it is almost impossible to manually check whether all the inputs in your web application are vulnerable to all of them.

Automatically Detecting SQL Injection Vulnerabilities in your Web Applications

Web applications need direct access to the backend database to be able to retrieve information or save information to the database. The same applies to your customers: they need full access to your website. Therefore firewalls and other types of intrusion detection / prevention systems will not block someone trying to exploit an SQL Injection vulnerability. The only way to check if your websites and web applications are vulnerable to SQL Injection is by scanning them with an automated web application security scanner such as Netsparker. Netsparker is a dead-accurate and fully automated web application security scanner that can be used to identify web application vulnerabilities such as SQL Injection and cross-site scripting in your web applications and websites. Download the trial version of Netsparker to find out if your websites are vulnerable.
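The login flaw above can be reproduced in a few lines. The sketch below uses Python's built-in sqlite3 module purely for illustration (the article's scenario applies to any SQL database); it shows the concatenated query returning the admin row and a parameterized version refusing the same input:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE Users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO Users VALUES ('admin', '12345678')")

    name, password = "test' OR 1 = 1 --", "xxxxx"   # the malicious input

    # VULNERABLE: user input is concatenated straight into the statement.
    q = f"SELECT * FROM Users WHERE name = '{name}' AND password = '{password}'"
    print(conn.execute(q).fetchall())    # [('admin', '12345678')] -- broken in

    # SAFE: placeholders keep the input as data, never as SQL syntax.
    q = "SELECT * FROM Users WHERE name = ? AND password = ?"
    print(conn.execute(q, (name, password)).fetchall())   # [] -- rejected

Parameterized queries (prepared statements) are the standard defense precisely because they make the first failure mode impossible.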
Silver surfing lessons can help fight dementia

Encouraging the elderly to use the internet can not only help them keep in touch with friends and family and take advantage of the best deals, it can also reduce the likelihood of dementia. The results of an eight-year study of 6,500 50-to-90-year-olds reveal that those who regularly go online experience less mental decline than those who don't use the internet. The study shows a significant improvement in delayed recall over time for frequent online users, highlighting the role played by the internet in preventing the degeneration of mental abilities in the elderly.

Dr Tom Stevens, Consultant Psychiatrist at London Bridge Hospital, says, "People over the age of 65 must remember the phrase 'use it or lose it', and the internet is a good way to ensure that older people are still able to use their mental faculties".

However, there is a need to educate older people about online dangers. Ben Williams, head of operations at open source ad control project AdBlock Plus, warns, "...we mustn't forget that with more older people using the internet, they must be informed about the choices they have online. With no experience of online advertising, constant blinking banners and pop-up adverts could spoil the internet for them, making them think it is a tasteless and unmanageable jungle, and put them off the whole experience".

Older users are at more risk of being drawn into online scams and are likely to suffer more from the intrusiveness of ads such as pop-ups and banners that obscure their view and make it harder for them to use the internet effectively. Education is therefore an important factor in helping the elderly make the most of the internet and stay safe online. Williams adds, "Plus, there are online risks that specifically target older users, such as phishing scams, or promotions of miraculous and discount medication, and low-cost insurance, and it is our responsibility to ensure that older people aren't ignorant about these. Basic lessons in how to stay safe and not put yourself in danger of online scams and viruses is essential".

You can read more about the study on the Journal of Gerontology website.
Will Masdar City Be a Global Model for Sustainability? / October 11, 2011 What would a sustainable city look like? The answer lies in Masdar City, a 2.3 square mile carefully planned community in the United Arab Emirates, which relies on solar energy and other renewable energy sources. Construction on the city began in 2006 and is projected to be complete by 2025 at a $20 billion price tag. Everything in Masdar City has been meticulously designed, constructed and tested to maximize the region’s resources and advocate eco-friendly practices, which is reflected in the use of battery-powered driverless vehicles, photovoltaic panels and an LED tower that changes color to alert people when too much energy is being consumed. Even businesses are carefully selected and must comply with the city’s low carbon mandate. Photos courtesy of MasdarCity.ae
What You'll Learn
- Apply data definition language (DDL) to describe tables and views
- Use UPDATE, INSERT, ALTER, and DELETE to modify SQL tables and maintain a database
- Make use of the SELECT statement to extract data from tables and views
- Code SQL queries that include column and scalar functions
- Code SQL queries that include inner and outer joins
- Utilize SQL Query Manager

Who Needs To Attend
This is an intermediate course for experienced IBM i (including i5/OS and OS/400) programmers. However, very skilled IBM i users who want to learn how to use the SQL programming language as a means to access the DB2 Database for IBM i may also consider attending this class. This course is not recommended for users who need to perform simple queries. DB2 WebQuery Getting Started is an end-user-oriented tool designed for this purpose. You can learn about IBM Query for IBM i by attending such courses as DB2 WebQuery for IBM i Workshop (OD040).
“Firesheep,” a new add-on for Firefox that makes it easier to hijack the e-mail and social networking accounts of others on the same wired or wireless network, has been getting some rather breathless coverage from the news media, some of whom have characterized it as a new threat. In reality, this tool is more of a welcome reminder of some basic but effective steps that Internet users should take to protect their personal information while using public networks.

Most online services use secure sockets layer (SSL) encryption to scramble the initial login — as indicated by the presence of “https://” instead of “http://” in the address field when the user submits his or her user name and password. But with many sites like Twitter and Facebook, subsequent data exchanges between the user and the site are sent unencrypted and in plain text, potentially exposing that information to anyone else on the network who is running a simple Web traffic snooping program.

Why should we care if post-login data is sent in unencrypted plain text? Most Web-based services use “cookies,” usually small, text-based files placed on the user’s computer, to signify that the user has logged in successfully and will not be asked to log in again for a specified period, usually a few days to a few weeks (although some cookies can be valid indefinitely). The trouble is that the contents of these cookies frequently are sent unencrypted to and from the user’s computer after the user has logged in. That means an attacker sniffing Web traffic on the local network can intercept those cookies and re-use them in his own Web browser to post unauthorized Tweets or Facebook entries in that user’s name, for example. This attack could also be used to gain access to someone’s e-mail inbox.

Enter Firesheep, a Firefox add-on released this past weekend at the Toorcon hacker conference in San Diego. Eric Butler, the security researcher who co-authored the tool, explains some of the backstory and why he and a fellow researcher decided to release it: “This is a widely known problem that has been talked about to death, yet very popular websites continue to fail at protecting their users. The only effective fix for this problem is full end-to-end encryption, known on the web as HTTPS or SSL. Facebook is constantly rolling out new ‘privacy’ features in an endless attempt to quell the screams of unhappy users, but what’s the point when someone can just take over an account entirely?”

In his blog post about Firesheep, I believe Butler somewhat overstates the threat posed by this add-on when he says: “After installing the extension you’ll see a new sidebar. Connect to any busy open wifi network and click the big ‘Start Capturing’ button. Then wait.”
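On the site-operator side, the fix Butler describes amounts to serving every request over HTTPS and flagging session cookies so browsers never transmit them in the clear. A minimal sketch with Python's standard library shows the relevant cookie attributes; the token value is a placeholder:

    from http import cookies

    c = cookies.SimpleCookie()
    c["session"] = "opaque-token-value"     # hypothetical session token
    c["session"]["secure"] = True           # only ever sent over HTTPS
    c["session"]["httponly"] = True         # invisible to page JavaScript
    print(c.output())  # Set-Cookie: session=opaque-token-value; HttpOnly; Secure

With the Secure flag set, a Firesheep-style sniffer on an open network never sees the cookie, because the browser refuses to send it over plain HTTP.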
NASA site asks public to choose telescope's mission

People can visit a new site to vote for an astronomical object to be viewed by the Hubble Space Telescope - By Doug Beizer - Jan 29, 2009

NASA wants the public to decide which astronomical object the Hubble Space Telescope will next observe. Votes can be cast until March 1 at the new Web site YouDecide.Hubblesite.org, the space agency said Jan. 28. The public will vote for one of six astronomical objects for Hubble to observe in honor of the International Year of Astronomy. The choices, which Hubble has not previously photographed, range from distant galaxies to dying stars. The telescope's camera will try to make a high-resolution image revealing new details about the object that receives the most votes. The image will be released in April, according to NASA. Everyone who votes also will be entered into a random drawing to receive one of 100 copies of the Hubble photograph made of the winning celestial body, the space agency said.

Doug Beizer is a staff writer for Federal Computer Week.
Astronauts have been drinking recycled urine for some time, and according to Wired magazine, on the long trip to Mars they’ll be shielded from radiation by astronaut poop. That, in microcosm, is what’s happening back here on Earth. According to the “One Water Vision,” all water is just the same H2O recycled over and over. Some of the water in your coffee, for example, might have been excreted by a Neanderthal, or been part of the iceberg that sank the Titanic — or both, although the chances of that seem fairly slim.

Following the Industrial Revolution, the human population spiked — from about 1 billion to an estimated 7 billion in 2013, hammering freshwater supplies. The results? More waste entering the water supply, more incentive to recover potable water from that waste, and the discovery that pharmaceuticals and street drugs are entering the water supply. And if you think marijuana in the drinking water is “far out,” you could be part of the problem.

Some of this waste has bad effects, like death. Canadian researchers suspect that birth control pill estrogen in drinking water is causing a spike in prostate cancer deaths. Traditional water-treatment technology does little to remove illegal or pharmaceutical drugs that have been excreted or flushed, and thus “what goes around comes around.”

Global Water Senior Vice President Graham Symmonds suggests a “three-pipe” system, comprising a potable water pipe, a nonpotable water pipe for irrigation or industrial use, and a sewer pipe. The system reduces demand for potable water by 40 percent, he said. And while removing pharmaceuticals from water is expensive, only the potable system would need such treatment. “Your grass doesn’t care if there is aspirin in the water,” he said.

Advanced metering infrastructure (AMI), said Symmonds, can help struggling municipalities and utilities by letting customers monitor their consumption and reduce waste. AMI can also help recover “nonrevenue” water, he said, from missing or ineffective meters, leaks and errors. “You can find a lot of revenue by cleaning up your system.”

What about ocean water? Desalination is expensive compared to traditional water treatment. Texas, for example — which already desalinates brackish groundwater — estimates that desalinating sea water would cost $800-$1,400 per acre foot, with each acre foot being equivalent to 326,000 gallons, or roughly the amount used by an average household in a year.

As for recovering water from waste? “The technologies exist to produce high-quality water from sewage,” Symmonds said, “and the regulatory framework is under construction, but some places do it already.”

Luckily new ideas are flooding in for better desalination techniques, protecting fish from medication, extracting useful chemicals from sewage, and stopping the formation of hydrogen sulfide, a poisonous, explosive and smelly material that corrodes sewer pipes, costing the U.S. $14 billion annually. Sewage can even be used to generate electricity.
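For scale, the Texas desalination estimates quoted above work out to only a few dollars per thousand gallons; a quick back-of-the-envelope check:

    # Unit-cost check on the quoted desalination estimate.
    GALLONS_PER_ACRE_FOOT = 326_000

    for dollars_per_acre_foot in (800, 1_400):
        per_kgal = dollars_per_acre_foot / GALLONS_PER_ACRE_FOOT * 1_000
        print(f"${dollars_per_acre_foot}/acre-ft -> ${per_kgal:.2f} per 1,000 gallons")

That comes to roughly $2.45 to $4.29 per 1,000 gallons, which makes the comparison with conventional treatment easier to picture.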
For all the repeated advice to use a different, complex password for each online account, users still opt for easy-to-guess, short ones and use them repeatedly across many websites and online services. Unfortunately, it seems that security professionals must make peace with the situation, or find another way to make users listen and do as they are counselled. But is the experts’ advice sound?

A trio of researchers from Microsoft Research and Carleton University in Ottawa, Canada are of a different mind, and are challenging the long-held belief that every account needs a strong password. As the number of online accounts opened by each user grows, and many users are not open to using password managers, users find managing a large password portfolio increasingly burdensome. “Both password re-use, and choosing weak passwords, remain popular coping strategies,” the researchers noted, and they believe that they will continue to be popular. “Both are valuable tools in balancing the allocation of effort between higher and lower value accounts.”

So, they decided to see whether and how these strategies can be used correctly, given users’ fixed and limited “effort budgets.” And the answer is that they can – and they should – but the trick is to re-use simple, memorable passwords for a larger group of low-value accounts, i.e. accounts that, if compromised, will not yield much useful information to attackers. Complex and unique passwords should be reserved for high-value accounts, such as those containing a lot of personal information, financial information or confidential documentation, or email accounts that are used for registering with online services and to which those services’ password reset emails are sent.

For more details about their research, check out their whitepaper.
You learn something new every day, and today I learned that the U.S. Government has an actual standard for what counts as a high-speed Internet connection. But as the Washington Post reported on Friday, the Federal Communications Commission is looking into whether it needs to raise the baseline of what constitutes broadband Internet.

Right now, your connection needs to achieve a download speed of at least 4 Mbps for the FCC to consider it a broadband connection. But the agency may change the definition so that only connections with download speeds over 10 Mbps would count as broadband. It may even consider setting the proverbial bar as high as 25 Mbps. Under this proposal, the FCC would also require broadband Internet connections to support upload speeds of at least 2.9 Mbps, a significant boost from the current definition's 1 Mbps minimum.

The reason for the proposed change is our insatiable appetite for more bandwidth, thanks to things like video streaming, cloud storage, and cat photos, among other things. As the Post points out, "An HD-quality Netflix stream, for instance, requires at least a 5Mbps connection." The FCC hasn't settled on a plan, but it is considering asking for public comment on the proposal in the not-too-distant future.
DARPA plots supercomputing revolution

DARPA program aims to kickstart revolutionary high-performance technologies - By Henry Kenyon - Jun 29, 2010

A new research program is attempting to develop a new generation of power-efficient, space-saving supercomputers. The Defense Advanced Research Projects Agency has launched its Omnipresent High Performance Computing program, which seeks to develop breakthrough technologies in the areas of hardware, software, scalable input/output systems, programming models and low-power circuits.

The goal of this and other related computing research efforts is to create new, compact supercomputers to support the Defense Department’s growing need for applications and processing capability. Such systems could rapidly manage and interpret the massive streams of sensor data generated by next-generation unmanned and manned platforms. These new computers could potentially be installed in individual vehicles or command centers to provide sensor fusion and analysis, and vastly improve the reaction and decision time of U.S. forces.

In its broad agency announcement issued June 21, DARPA stated that "current evolutionary approaches to progress in computer designs are inadequate." The agency stated that it wants to develop technologies to reduce the power requirements of high-performance computers, including memory storage hierarchies; to develop highly programmable systems that reduce operational complexity; and to improve system dependability, manage component failure rates, and address security issues, including methods for sharing information and responsibility among the operating system, runtime system and applications.

The program also will research self-aware system software. This includes operating systems, runtime systems, I/O systems, system management and administration, resource management and external environments. DARPA also wants to study programming models that allow developers to more easily design in security, dependability, power efficiency and high performance.

Advances developed by the HPC program will complement DARPA’s Ubiquitous High Performance Computing program. According to the announcement, “The purpose of this effort is to accelerate the performance capabilities of UHPC program systems through selected, critical research and development activities that have high impact on ExtremeScale computing and specifically UHPC program systems, up to but not necessarily including whole-system prototype development.”

DARPA describes an ExtremeScale system as a computer that is a thousand times more powerful than a current comparable system with the same power and physical footprint. The goals for the UHPC program include developing a petaflop supercomputer that fits into a single cabinet and runs a self-aware operating system. The effort also seeks to develop a prototype compiler to ease programming for an ExtremeScale system, and a dynamic system that adapts to achieve optimal application execution goals without the direct involvement of the application developer. DARPA plans to have a prototype UHPC computer by 2018.
“A logical partition, commonly called an LPAR, is a subset of a computer’s hardware resources, virtualized as a separate computer. In effect, a physical machine can be partitioned into multiple logical partitions, each hosting a separate operating system.” — from Wikipedia

The value of LPARs is the ability to dynamically scale compute resources, increase availability, and dynamically load-balance resources to maintain quality of service. LPARs can be thought of as hardware virtualization, as opposed to software or hypervisor virtualization. IBM developed the logical partitioning function and mode of operation back in 1972 for the S/370 mainframe. Today LPARs are available in mainframes and UNIX processors. Hitachi Data Systems employs the LPAR concept in its storage virtualization systems, the Hitachi VSP and HUS VM, to ensure quality of service and privacy for applications that share the same physical storage resources behind the storage virtualization layer.

Hitachi Data Systems is also one of the few vendors to provide LPAR functionality in their x86 blade servers. This LPAR functionality was developed for Hitachi mainframes back in the mid-1980s and has been available on Hitachi x86 blade servers since the mid-2000s. For more information on Hitachi’s x86 LPAR technology see: http://www.hds.com/assets/pdf/hitachi-datasheet-compute-blade-logical-partitioning-lpar.pdf

At the Sapphire NOW+ASUG Conference last week in Orlando, Florida, Hitachi Data Systems announced that the LPAR technology on Hitachi x86 servers has been verified to run SAP Business Suite software to enable secure, scalable production environments for on-premise and cloud deployments of SAP solutions. In EMEA, customers have already implemented this solution. One such customer is Duisburger Versorgungs- und Verkehrsgesellschaft mbH (DVV), known as DU-IT. DU-IT is the leading provider of IT services in and for the city of Duisburg, its subsidiaries and other municipal customers. Working with Green Data Systems, their SAP infrastructure partner, they chose Hitachi Data Systems x86 blade servers to consolidate their virtual SAP environment, optimize availability across two locations 34 km apart, reduce power and licensing costs, and simplify the management and administration of their SAP environment. They achieved all these goals with the help of x86 with LPAR technology.

The LPAR technology enabled them to share resources, CPU and memory, with logical partitions, virtual instances, or applications in a dynamic or dedicated way. This not only helped to increase SAP application and database performance, availability, and security, but also increased their cost benefits through higher server utilization; reductions in rack space, power, and cooling; and reduced software licensing. Visit the Green Data Systems website for more details on this user’s experience.

An internal survey shows that customers using Hitachi x86 blade servers with LPAR technology are realizing improved efficiencies and simplified deployment and management of SAP solutions. Some of the benefits being realized are:
o Total costs reduced by as much as 70%
o Improved application density by as much as 33:1
o Reduction in the number of servers required by up to 80%
o Space, power and maintenance reduction up to 40%

We have enjoyed a 20-year relationship with SAP, both as a customer and partner.
We are pleased to be the only partner to provide SAP customers with the benefits of LPAR technology on an x86 platform that offers the robustness and scalability of mainframes at a fraction of the cost.
Radar images taken from planes or satellites could someday be used to predict where sinkholes might form — a potential boon for Florida, the nation’s sinkhole capital. The possibility of an early-warning system stems from new NASA research into a monstrous sinkhole that opened in Louisiana in 2012, forcing the evacuation of hundreds of residents. Two NASA researchers examined radar images of the sinkhole area near Bayou Corne, La. Cathleen Jones and Ron Blom discovered that the ground near Bayou Corne began shifting at least a month before the sinkhole formed — as much as 10 inches toward where the sinkhole started. Since its formation, the sinkhole has expanded to 25 acres and is still growing. The NASA findings raise the possibility that engineers eventually could develop a way to predict the location of sinkholes. It would require the constant collection and monitoring of the Earth’s surface with radar data collected from planes or satellites. “It’s not a magic bullet,” Blom said. But it could be “one more tool in a tool kit.” The radar images studied by the two NASA scientists were part of the agency’s ongoing effort to monitor the Louisiana coast, which is rapidly sinking into the Gulf of Mexico. Although the Louisiana images were taken from a research jet, the scientists said a satellite with similar technology could do the same job. And though such a system wouldn’t be cheap — the price of building and launching a satellite usually is in the hundreds of millions of dollars — the gains could be significant. In Florida alone, sinkholes cause about that much property damage each year. Although there are no recent state data on sinkhole damage, a 2010 report by the Florida Office of Insurance Regulation estimated that sinkholes each year cost the state $200 million to $400 million. Thousands of claims related to sinkholes were made and closed in recent years. In 2009 alone, there were about 4,700 closed claims and 2,600 open claims, according to the report. The majority of claims tallied by state officials were from three counties — Hernando, Pasco and Hillsborough along Florida’s west coast — though Orange and Polk were in the top 10 statewide from 2006 to 2010. In one high-profile case last year, a sinkhole wrecked villas at Summer Bay Resort near Walt Disney World, forcing residents to evacuate. There’s a human cost, too. Even though sinkhole deaths are rare, a Hillsborough County man was killed last year when a sinkhole formed beneath his house. The prevalence of sinkholes in Florida can be attributed to the state’s geology. Sinkholes are most commonly found in areas where the underlying rock can easily be dissolved by groundwater. Once eroded, the surface then can collapse into underground caves and other spaces. Florida, with its wet climate and porous limestone beneath the surface, is particularly susceptible to this type of natural disaster. Aware of the dangers, Florida officials also are taking steps to detect sinkholes. Last year, state geologists began a three-year, $1 million project to identify which areas in Florida are most conducive to sinkhole formation. They’ve begun by surveying three northern Florida counties — Columbia, Hamilton and Suwannee — with the goal of creating a statewide “rating of vulnerability for sinkhole formation,” said Clint Kromhout, a geologist with the Florida Geological Survey. The hope, he added, is to give emergency officials more information to help “mitigate against potential loss of property and life during sinkhole formation,” he said. 
Although property owners have limited options when faced with a sinkhole — other than to evacuate — state officials said knowing more about vulnerable areas could help homebuilders and local governments avoid sinkholes when planning developments. “One of the most important things we can do, and one of the more effective things we can do, is educate the public about the risks,” said Aaron Gallaher, a spokesman for the Florida Division of Emergency Management. ©2014 The Orlando Sentinel (Orlando, Fla.)
Cisco CCENT TCP/IP Part II – Buffers
All computers have buffers; the problem occurs when a computer runs out of room in its buffers.

Cisco CCENT Stop and Go
When the receiving host's buffers are full, it will send a "not ready" packet to the transmitting host.

Cisco CCENT Windowing & Acknowledgements
The quantity of data segments (measured in bytes) the transmitting machine is allowed to send without receiving an acknowledgment for them is called a window. Reliable data delivery ensures the integrity of a stream of data sent from one machine to the other through a fully functional data link. It guarantees that the data won't be duplicated or lost. This is achieved through something called positive acknowledgment with retransmission—a technique that requires a receiving machine to communicate with the transmitting source by sending an acknowledgment message back to the sender when it receives data. Windowing is used to control the amount of outstanding, unacknowledged data segments.

Cisco CCENT User Datagram Protocol (UDP)
If you were to compare User Datagram Protocol (UDP) with TCP, the former is basically the scaled-down economy model that's sometimes referred to as a thin protocol. UDP doesn't offer all the bells and whistles of TCP, but it does do a fabulous job of transporting information that doesn't require reliable delivery—and it does so using far fewer network resources. Like TCP, UDP resides at layer 4 of the OSI model and utilizes IP as the transport. It is a connectionless protocol: it has no windowing, sequencing or acknowledgements, which are the mechanisms that make TCP reliable and whose absence makes UDP unreliable.

Cisco CCENT UDP Header
UDP Source port – Optional; when specified, identifies the UDP source port. If not specified, it should be zero.
UDP Destination port – Identifies the UDP destination port.
UDP Message Length – The number of octets that comprise the user data and the UDP header.
UDP Checksum – Optional; a value of zero means the checksum was not used. Provides a way to ensure the data arrived intact.
Data – User data.

Cisco CCENT Internet Protocol (IP)
Internet Protocol (IP) essentially is the Internet layer. The other protocols found here merely exist to support it. IP holds the big picture and could be said to "see all," in that it's aware of all the interconnected networks. It can do this because all the machines on the network have a software, or logical, address called an IP address, which we'll cover more thoroughly later in this chapter. IP looks at each packet's address. Then, using a routing table, it decides where a packet is to be sent next, choosing the best path.

Cisco CCENT IPv4 Header
Version – Indicates the version of IP currently used, currently 4.
IP header length – Indicates the datagram header length in 32-bit words.
Type of service (TOS) – Specifies how a particular upper-layer protocol would like the datagram to be handled.
Total length – Length of the entire IP packet in bytes, including data and header.
Identification – Used to help piece together data fragments. Contains an integer identifying the current datagram.
Flags – Three-bit field used for fragmentation.
Fragment offset – Offset in the original datagram of the data being carried, measured in units of 8 octets.
Time to live (TTL) – Counter that is decremented as a packet traverses the network. Used to keep packets from looping endlessly.
Protocol – Indicates which upper-layer protocol receives the incoming packet after IP processing is complete.
Header checksum – Helps ensure IP header integrity.
Source address – IP address of the sending node.
Destination address – IP address of the receiving node.
Options – Allows support of various options.
Data – Contains upper-layer information.

Cisco CCENT Protocol Data Unit (PDU)
This slide is a review of the protocol data units (PDUs) at each layer and how data is encapsulated for transmission on the network.

Cisco CCENT Transport and Network Layer
This matching exercise pairs each term with its OSI layer:
Bits = 1
IP addresses = 3
Windowing = 4
Segments = 4
Switching = 2
Routing = 3
UDP = 4
MAC Addresses = 2
Packets = 3
Frames = 2
TCP = 4

Cisco CCENT Internet Control Message Protocol
Internet Control Message Protocol (ICMP) works at the Network layer and is used by IP for many different services. ICMP is a management protocol and messaging service provider for IP. Its messages are carried as IP datagrams. ICMP packets can provide hosts with information about network problems. ICMP packets are encapsulated within IP datagrams.

Cisco CCENT ICMP Example
ICMP is used in this example to report to Host B that packets could not reach their destination.

Cisco CCENT Network Layer ICMP Message Types
This slide depicts all the ICMP message types. The most common are used by ping: type 0 (Echo Reply) and type 8 (Echo Request).

Cisco CCENT Address Resolution Protocol
Address Resolution Protocol (ARP) finds the hardware address of a host from a known IP address.

Cisco CCENT Reverse ARP
When an IP machine happens to be a diskless machine, it has no way of initially knowing its IP address. But it does know its MAC address. Reverse Address Resolution Protocol (RARP) discovers the IP address for diskless machines by sending out a packet that includes the machine's MAC address and a request for the IP address assigned to that MAC address.

Cisco CCENT Proxy ARP
Proxy ARP is the technique in which one host, usually a router, answers ARP requests intended for another machine. By "faking" its identity, the router accepts responsibility for routing packets to the "real" destination. Proxy ARP can help machines on a subnet reach remote subnets without configuring routing or a default gateway. Proxy ARP should be used on networks where IP hosts are not configured with a default gateway or do not have any routing intelligence.

Cisco CCENT ARP Table on a PC
The ARP table will show IP-address-to-MAC-address mappings for hosts on the local subnet, along with multicast addresses. This slide shows how hardware addresses are used by hosts to communicate on a local LAN, and how logical addresses are used to communicate with hosts on remote networks. Routers use logical addresses to forward packets to remote networks. If a remote network is not listed in a router's route table, the router will drop any packet destined for that remote network.
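To tie the UDP header fields above to something concrete, the sketch below packs and unpacks the four 16-bit header fields with Python's struct module; the port numbers and payload size are made-up example values:

    import struct

    # Source port, destination port, length, checksum: four 16-bit fields,
    # network (big-endian) byte order, 8 header bytes in total.
    header = struct.pack("!HHHH", 53, 33000, 8 + 4, 0)  # length covers 4 payload bytes

    src, dst, length, checksum = struct.unpack("!HHHH", header[:8])
    print(f"src={src} dst={dst} length={length} checksum={checksum:#06x}")
    # -> src=53 dst=33000 length=12 checksum=0x0000

Note how the Message Length field (12) counts the header plus the data, and the zero checksum means "checksum not used," exactly as described in the field list.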
<urn:uuid:dac452e8-d4bc-4320-af60-91b18719d489>
CC-MAIN-2017-04
https://www.certificationkits.com/cisco-certification/ccent-640-822-icnd1-exam-study-guide/cisco-ccent-icnd1-640-822-exam-certification-guide/cisco-ccent-icnd1-tcpip-part-ii/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00326-ip-10-171-10-70.ec2.internal.warc.gz
en
0.890077
1,425
3.109375
3
I-Worm.Zircon: New Virus is rapidly spreading on the Internet

14 Mar 2002: Kaspersky Lab reports the details

Kaspersky Lab, a leading international data-security software developer, reports the detection of the Internet worm known as I-Worm.Zircon.c. At this time it is known that infections from this dangerous virus have occurred in several countries.

Zircon.c spreads via e-mail in the form of an e-mail message with the attachment "patch.exe". The message subject field may contain the English word "Important" or one of seventeen variations in Japanese. Which subject one receives depends on the recipient's e-mail address: if an address ends with ".jp", the worm uses a subject written in Japanese, while all others receive the line "Important". The body of the message is blank but contains an attachment, the executable file "patch.exe", which stores the damaging code. The worm is activated only if a user launches this program file.

Zircon.c is a worm that activates only once: it does not install itself into the system and does not repeatedly launch itself (except in cases where a user repeatedly opens the infected attachment). If the worm is launched, it sends itself to all the users in the Outlook address book by using the SMTP server, to which it automatically connects and which it manages.

The defense procedure against "Zircon.c" has already been added to the Kaspersky(TM) Anti-Virus database. Further details regarding this Internet worm are available in the Kaspersky Lab Virus Encyclopedia.
<urn:uuid:3c6779f4-395c-43b0-a78c-33f0753d07ec>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2002/I_Worm_Zircon_New_Virus_is_rapidly_spreading_on_the_Internet
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00444-ip-10-171-10-70.ec2.internal.warc.gz
en
0.916885
334
2.578125
3
Spectra - Speech-to-speech translation New technological advancements in speech, language, and media are changing the way people interact with devices and with one another. These same technologies have tremendous potential in building assistive technologies that enable users with disabilities to more easily use computers, communicate with others, browse the web, log in to secure websites, and navigate city streets. Speech technologies are especially adaptable. Speech programs that convert between text and audio make it possible to convert emails and other text sources to audio for users with low vision, and for audio to be converted to text so users with impaired hearing can read voice mails and captions from broadcasts. The same speech interfaces that make mobile phones easier to use can be incorporated into almost any device -- from laptops to TV remotes to game platforms, further expanding the range of tasks that users with disabilities (including those with limited motion) can perform for themselves, giving users independence, autonomy, and privacy. AT&T Research is focusing on how best to adapt its speech and language projects for the assistive technology market, estimated to be 41 million in the US alone (US Census Bureau, 2006 American Community Survey). We are building assistive technology prototypes for our core markets (mobility, internet and entertainment services) -- see some examples below. We are also pursuing basic research in collaboration with universities, companies and non-profits that work with people with disabilities. If you are interested in working with us, get in touch with any of the researchers listed on this page! Making technology work for those with disabilities improves technology for everyone, including the growing population of senior citizens. And by freeing users from staring at a screen or using a keyboard or other input device, assistive technologies enable hands-free control of devices and computers, allowing everyone to more safely multi-task and interact more freely with devices. Get more info on: Our assistive technology research and prototypes are built on our basic speech, language and multimedia processing technologies, including: AT&T researchers, along with the ASA Text-to-Speech (TTS) Technology working group (S3-WG91), are currently collecting data comparing the intelligibility of seven different synthetic speech systems at various speaking rates. The aim is to make TTS more usable by all, including users of mobile devices, children with learning disabilities, people with visual disabilities (see paper) or hearing impairments. The iWalk prototype gives speech-mainly access to local business listings and walking directions. iWalk was designed primarily as an assistive technology. Features of iWalk include: Spectra is an iPhone application for interactive speech-to-speech translation. Features of Spectra include: iMIRACLE is a prototype iPad application that lets users search for video content by station, genre, title, or content keywords. Users can browse retrieved content, and can watch it on the iPad or on connected televisions. iMIRACLE could be useful for users with hearing loss or physical dexterity disabilities, who cannot "fast forward" through a TV show to the segment they want to watch. 
The iRemote prototype is an electronic program guide designed to reduce the television guide search problem: it permits users to search for TV and movie listings by title, actor, genre or keyword, and integrates with Windows Media Center and the Microsoft set-top box. AT&T Labs researchers have made an accessible version of iRemote with the following features: You can read a short paper about EPGAAC here. The eReader prototype is another example of an existing technology that is being repurposed to make it more accessible - in this case, specifically for people with visual disabilities. It is built over the Calibre open-source eReader software. Features include: See a demo of the eReader prototype here. The iPad-based StorEbook prototype is a different kind of eReader. It is designed to engage children with learning disabilities. It features: See a demo of the StorEbook prototype here. SAFE is a prototype for multi-factor authentication. Following a simple enrollment procedure, users download the SAFE application onto their mobile device. Then, instead of using a password, users use SAFE to log into any participating website or application. SAFE can help users with visual disabilities, who may find it hard to use web forms for authentication. SAFE features: See a demo of SAFE here. We are actively seeking partners in this research. If you are interested in working with us, or in participating in a user study or evaluation of our basic technologies, please get in touch! AT&T values diversity in its workforce and customer base. In 2012, AT&T ranked No. 1 in CAREERS & the disABLED magazine's 2012 list of "Top 50 Employers" for people with disabilities.

Multimedia (videos, demos, interviews)
- MIRACLE video content analysis: a demonstration of the MIRACLE video content analysis engine.
- AT&T SAFE: a demonstration of authentication using AT&T SAFE.
- Spectra: a demonstration of the Spectra speech-to-speech translation application.
- StorEBook: Taniya Mishra demonstrates the StorEBook expressive e-reader.
- eReader: researcher Ben Stern introduces eReader, a speech-enabled e-reader for people with visual disabilities, built over the open-source tool Calibre.
- iMIRACLE: Bernie Renger demonstrates the iMIRACLE content-based multimedia retrieval system on the iPad.
- iWalk: ALFP fellow Shiri Azenkot and researcher Amanda Stent demo the initial prototype of iWalk, a local business search and navigation service for people with visual disabilities.
<urn:uuid:efbc481a-85da-4ab0-b7ad-8362d5bb5822>
CC-MAIN-2017-04
http://www.research.att.com/projects/AssistiveTechnology/index.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00078-ip-10-171-10-70.ec2.internal.warc.gz
en
0.894715
1,287
3.40625
3
PKI: The Myth, the Magic and the Reality - Page 3

Part 3: X.509 vs. PGP Certs

Digital certificates are used by a variety of applications to provide user or device identity for authentication. Certs might also contain security policy or rules for authorization. A digital certificate is a collection of multiple attributes (information) which have been cryptographically bound by the digital signature of a CA that is recognized and "trusted" by a community of certificate users. Each certificate is unique and has three primary parts:

1. The subject's public key value or principal element (a subject can be a person or a device).
2. One or more subject attributes (name, account number, validity period).
3. The CA's signature, which binds the attributes to the subject's public key.

Since there is no need to keep the subject's public key confidential, certificates can be distributed unprotected (however, there may be privacy concerns about data mining of the cert's attributes). The two most common certificate types are PGP and X.509. Most secure business applications use an X.509 certificate-based scheme.

PGP and X.509 are dramatically different in almost every aspect. The X.509 certificate format, as defined by the ISO/IEC/ITU, has evolved since 1988 to its current version 3 (1996), with many other standards dependent upon its specification. PGP, on the other hand, was originally defined by a grassroots effort, and then by PGP and Network Associates. It's now in the IETF arena, with change control owned by the IETF Open-PGP working group. X.509 has a rigid structure, ASN.1 encoding and a single issuer (CA). PGP is a flexible "wallet" of signatures over specific attributes using RADIX-64 encoding. X.509 was tied to the X.500 directory service, but is now used with LDAP as the standard protocol for accessing the cert in a directory (the same as PGP's use of LDAP).

Digital certificates are just one small component of the bigger PKI picture, but they're the fundamental building block that can limit or extend the overall capabilities of a secure infrastructure.

Charles Breed is vice president of Kroll-O'Gara's Information Security Group, a vendor-neutral security services and risk mitigation firm. Active in the IETF and a frequent lecturer on topics such as PGP, S/MIME, VPNs and PKIs, Charles is also the author/creator of the industry's de facto "Cryptographic & Security Threats" reference chart, a poster-sized guide distributed to more than 100,000 individuals and organizations worldwide.

Source: "Cryptographic & Security Threats," © Charles Breed.
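As an illustrative aside (not part of the original article): the three primary parts listed above are easy to see in practice. This minimal Python sketch uses the pyca/cryptography package, an arbitrary library choice, and a hypothetical file name "example.pem", to print the subject attributes, validity period, public key, and issuing CA that the signature binds together.

```python
from cryptography import x509

def describe_certificate(pem_bytes: bytes) -> None:
    """Print the three primary parts of an X.509 certificate."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    # Part 1: the subject's public key value
    print("Public key size:", cert.public_key().key_size, "bits")  # RSA/DSA/EC keys
    # Part 2: subject attributes, including the validity period
    print("Subject:", cert.subject.rfc4514_string())
    print("Valid:", cert.not_valid_before, "through", cert.not_valid_after)
    # Part 3: the CA whose signature binds the above together
    print("Issuer (CA):", cert.issuer.rfc4514_string())
    print("Signature hash:", cert.signature_hash_algorithm.name)

with open("example.pem", "rb") as f:  # hypothetical file name
    describe_certificate(f.read())
```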
<urn:uuid:081bb014-ddf7-40d2-b3ec-3b218a211859>
CC-MAIN-2017-04
http://www.enterprisenetworkingplanet.com/netsecur/article.php/10952_615851_3/PKI-The-Myth-the-Magic-and-the-Reality.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00564-ip-10-171-10-70.ec2.internal.warc.gz
en
0.914109
562
3.21875
3
A website that uses directory browsing is a convenient way to display the files and folders in a directory using a web browser. An example of this is demonstrated here. To configure directory browsing in IIS6, you simply enable the Directory Browsing checkbox on Home Directory tab of the virtual directory. If you want to configure it so that users are required to authenticate to access the virtual directory, you disable anonymous access, enable Basic Authentication and configure the appropriate NTFS permissions on the target folder. It's slightly different in IIS7 since IIS7 introduces the concept of delegated administration. This means that you can have the IIS configuration in web.config files which reside in the virtual directory. IIS has to read these config files very early in the connection attempt, i.e. when there is no authenticated user available yet. For this reason IIS has to use the process identity (usually Network Service) to read the web.config file. To configure a virtual directory for directory browsing in IIS7: - Create or select the virtual directory in Internet Information Services (IIS) Manager - Double-click Authentication and select the appropriate authentication methods for the Vdir (default is Anonymous) - Select the Vdir again and double-click Directory Browsing. Click the Enable action - Right-click the Vdir and select Edit Permissions. Configure the NTFS permissions for the target folder and ensure that Network Service has read access to the folder If you don't grant the Network Service account read rights on the Vdir, you'll get the following error when accessing it: 500 - Internal server error. There is a problem with the resource you are looking for, and it cannot be displayed.
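For reference, because IIS7 stores delegated configuration in web.config, the Directory Browsing step above ends up as a fragment like the following. This is a sketch only; the exact attributes available depend on which IIS features are installed, and the optional showFlags list shown here is illustrative.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <!-- Equivalent of clicking the Enable action under Directory Browsing -->
    <directoryBrowse enabled="true" showFlags="Date, Time, Size, Extension" />
  </system.webServer>
</configuration>
```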
<urn:uuid:0031588c-917e-4aa5-aaf6-5cdca888e9fd>
CC-MAIN-2017-04
http://www.expta.com/2008/03/configuring-virtual-directories-with.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00564-ip-10-171-10-70.ec2.internal.warc.gz
en
0.841454
357
2.625
3
What is Output Feedback Mode?

OFB mode (see Figure 2.5) is similar to CFB mode except that the quantity XORed with each plaintext block is generated independently of both the plaintext and ciphertext. An initialization vector s0 is used as a "seed" for a sequence of data blocks si, and each data block si is derived from the encryption of the previous data block si-1. The encryption of a plaintext block is derived by taking the XOR of the plaintext block with the relevant data block:

ci = mi ⊕ si
mi = ci ⊕ si
si = Ek(si-1)

Figure 2.5: Output Feedback mode

Feedback widths less than a full block are not recommended for security [DP83] [Jue83]. OFB mode has an advantage over CFB mode in that any bit errors that might occur during transmission are not propagated to affect the decryption of subsequent blocks. The security considerations for the initialization vector are the same as in CFB mode.

A problem with OFB mode is that the plaintext is easily manipulated. Namely, an attacker who knows a plaintext block mi may replace it with a false plaintext block x by XORing mi ⊕ x into the corresponding ciphertext block ci. There are similar attacks on CBC and CFB modes, but in those attacks some plaintext block will be modified in a manner unpredictable by the attacker. Yet, the very first ciphertext block (that is, the initialization vector) in CBC mode and the very last ciphertext block in CFB mode are just as vulnerable to the attack as the blocks in OFB mode. Attacks of this kind can be prevented using, for example, a digital signature scheme (see Question 2.2.2) or a MAC scheme (see Question 2.1.7).

The speed of encryption is identical to that of the block cipher. Even though the process cannot easily be parallelized, time can be saved by generating the keystream before the data is available for encryption.

Due to shortcomings in OFB mode, Diffie has proposed [Bra88] an additional mode of operation, termed the counter mode. It differs from OFB mode in the way the successive data blocks are generated for subsequent encryptions. Instead of deriving one data block as the encryption of the previous data block, Diffie proposed encrypting the quantity (i + IV) mod 2^64 for the i-th data block, where IV is some initialization vector.
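The recurrence above is easy to express in code. The following Python sketch (an illustration added here, not from the original FAQ) builds OFB mode by hand, with AES from the pyca/cryptography package standing in for the generic block cipher Ek. Because encryption and decryption both XOR the data with the same keystream si, one function performs both directions.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def ofb_process(key: bytes, iv: bytes, data: bytes) -> bytes:
    """OFB by hand: s_i = E_k(s_{i-1}); output = data XOR keystream."""
    encrypt_block = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    s = iv                                           # s_0 is the IV
    out = bytearray()
    for start in range(0, len(data), 16):
        s = encrypt_block.update(s)                  # s_i = E_k(s_{i-1})
        block = data[start:start + 16]
        out.extend(b ^ k for b, k in zip(block, s))  # c_i = m_i XOR s_i
    return bytes(out)

key = bytes(16)          # demo values only; never reuse a key/IV pair
iv = bytes(range(16))
ct = ofb_process(key, iv, b"attack at dawn")
assert ofb_process(key, iv, ct) == b"attack at dawn"  # decryption is identical
```

In real applications you would use a library's built-in OFB (or an authenticated mode) rather than rolling your own; the hand-built version is shown only to mirror the equations.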
<urn:uuid:ee5004d6-4818-4d7b-a17f-4c36d244038b>
CC-MAIN-2017-04
https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-is-output-feedback-mode.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281353.56/warc/CC-MAIN-20170116095121-00472-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923272
526
3.390625
3
A renaissance in Braille? That is what the developers of Canute are hoping for.

Before Braille was invented, anyone who was blind or vision impaired was unable to read or write and was hence illiterate. A privileged few might have had people to read to them and to take dictation. Braille changed that by enabling blind people to read, and later to write, independently. The creation of books in Braille was a slow, cumbersome and expensive process, and the books were big, so storage was also an issue. So the range of books was limited.

New technology came along to alleviate the situation. First came recorded books, which were easier to produce, simpler to reproduce, smaller, and in addition had a market beyond the visually impaired. This greatly increased the range available and reduced the cost. The one downside was that people listened but did not see the words on the page, so they could not improve their spelling, which made writing more difficult.

The second technology to come along was the computer. Electronic text-to-speech meant that any electronic text was available to a blind person with the right technology. This once again greatly extended the range of information available, especially with the advent of e-books. Braille printers attached to a computer mean that Braille can be printed on demand. Braille displays convert electronic text into a line of Braille on a refreshable display, giving the user an interactive experience compared to printing. The availability of voice and Braille output made input via a keyboard, or voice recognition, useful. The user could input text and later retrieve it. The computer and subsequent digital technologies gave the blind person access to a vast range of information and also enabled them to communicate interactively with vision impaired and non-vision impaired people alike.

Text-to-voice and voice recognition became mainstream technologies used to some extent by the whole population, and this led to innovations and reductions in cost. This was not true of Braille technology. The technology is still expensive and limited in capability. In particular, Braille displays show only one line of text, so reading is a bit like reading a book with only one line visible at a time. The cost, the limited technology and the apparent attractiveness of voice technologies have meant that the use of Braille is in decline.

However, watching an experienced Braille user interacting with a page of printed Braille shows that the user experience is profoundly different from the linear experience of a single-line Braille display or text-to-voice. The ability to feel the overall shape of the page (much as sighted users see the overall shape without reading anything), the ability to reread a few lines back immediately, the chance to skip to the end of a paragraph or to a heading: all of these and more make the experience akin to a sighted user interacting with a page of text. What is needed is a Braille page display.

Given the high cost of Braille displays this seems like a pipe dream, but if it could be done at an affordable price it should renew interest in Braille. Bristol Braille Technology CIC was set up specifically to find a solution to this challenge: to produce a robust, reliable Braille page display and sell it for about a third of the price of existing single-line displays. Doing so required a complete rethink of the design so that it used fewer components, and all the components were either cheap off-the-shelf products or easily manufactured parts.
This has been accomplished, under the code name Canute, and a 32-character-per-line by 16-line prototype is being tested. This will provide a true e-book experience to the Braille user. When this goes into production, it should herald a renaissance in the use of Braille across the world. For more details go to http://bristolbraille.co.uk/ .
<urn:uuid:6abbe732-a3b8-49bf-8c15-6f81195aad15>
CC-MAIN-2017-04
http://www.bloorresearch.com/analysis/canute-a-renaissance-for-braille/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00288-ip-10-171-10-70.ec2.internal.warc.gz
en
0.957772
788
3.34375
3
Tracing the Origin of an Email Message — and Hiding it

We are often asked by our users to help them determine where an email message originated: "Where did this spam come from?" In general, it is fairly easy to do this if you have access to the "headers" of the message. In this post, we will show you how to determine a message's original location yourself, and also how you can protect yourself from others determining your location when you send email messages to them. Why would you need to protect yourself? If you are traveling and do not want people to know where you are, or if your messages are not going through because your ISP is blacklisted or has a poor reputation.

Determining the physical location of the sender of an email message

In order to determine the physical location of the sender of a message, you will first need the full headers of the message that you received. To get these, see: Viewing the Message Source / Full Headers of an Email.

Here are the headers of a Spam message that LuxSci Support received. We'll look at these and see where the message came from (we have removed some data from these headers so that they are suitable for publication):

Received: via dmail-2009.19 for +mail/BACKUP; Mon, 4 Jan 2015 07:56:25 -0600 (CST)
Received: from s4.luxsci.com ([10.225.3.213]) by s5.luxsci.com with ESMTP id o04DuOxL014677 for <email@example.com>; Mon, 4 Jan 2015 07:56:25 -0600
Received: from s4.luxsci.com (localhost [127.0.0.1]) by s4.luxsci.com with ESMTP id o04DuPUn030873 for <firstname.lastname@example.org>; Mon, 4 Jan 2015 07:56:25 -0600
Received: (from mail@localhost) by s4.luxsci.com id o04DuPSE030854 for email@example.com; Mon, 4 Jan 2015 07:56:25 -0600
Return-Path: <firstname.lastname@example.org>
Received: from p01c11m093.mxlogic.net (mxl144v247.mxlogic.net [126.96.36.199]) by s4.luxsci.com with ESMTP id o04DuOYb030811 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT) for <email@example.com>; Mon, 4 Jan 2015 07:56:25 -0600
Date: Mon, 4 Jan 2015 07:56:25 -0600
Message-Id: <201001041356.o04DuOYb030811@s4.luxsci.com>
Received: from unknown [188.8.131.52] (EHLO [184.108.40.206]) by p01c11m093.mxlogic.net(mxl_mta-6.4.0-2) over TLS secured channel with ESMTP id 583f14b4.0.3055948.00-006.5365891.p01c11m093.mxlogic.net (envelope-from <firstname.lastname@example.org>); Mon, 04 Jan 2015 06:56:23 -0700 (MST)
From: VIAGRA (c) Best Supplier <email@example.com>
To: firstname.lastname@example.org
Subject: Visitor abuse's personal 80% OFF
MIME-Version: 1.0
Content-Type: text/html; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

First, we see that this is a Spam message where the sender has forged the message so that the apparent "from" address matches the "to" address, in an attempt to get around our spam filters. For more on this technique, see Save Yourself From "Yourself": Stop Spam From Your Own Address and How can Spammers Send Forged Email?

Next, we need to get the Internet (IP) address of the sender of the message. To do this, we note a few facts:

- Each server that accepts the email message adds a "Received" header to the message. In this header, the server records the IP address of the server from which it received the message.
- The "Received" headers are added to the top of the message each time; i.e., the "oldest" "Received" headers are at the bottom of the list of all "Received" headers.
- It is possible, though not common, for the sender to add forged "Received" headers to the end of the list of headers.
So, in the best-case scenario where there are no forged "Received" headers (as in the above message), we look at the last "Received" header in the list:

Received: from unknown [188.8.131.52] (EHLO [184.108.40.206]) by p01c11m093.mxlogic.net(mxl_mta-6.4.0-2) over TLS secured channel with ESMTP id 583f14b4.0.3055948.00-006.5365891.p01c11m093.mxlogic.net (envelope-from <firstname.lastname@example.org>); Mon, 04 Jan 2015 06:56:23 -0700 (MST)

In this header, we see that the message was:

- Received by server "p01c11m093.mxlogic.net" (one of the servers that perform Premium Email Filtering for us).
- Received from IP address 188.8.131.52.

Next, we take this IP address to a web site like "IPLocation.net" or IP WHOIS Lookup and enter it to see where it is located. In this case, we see that the Spam came from Tulcea, Romania! We also see that the IP address is allocated by "RIPE.net" of Amsterdam, and we can send abuse complaints to "firstname.lastname@example.org". It is possible, with more detailed IP address databases (paid ones, for example), to narrow down the location of the IP to the region, city, or even approximate physical address of the user. I.e., if you send an email and say you are in Paris now, people can check and see if that is true.

What if there are forged Received lines?

If you suspect that there are forged "Received" lines (or if the first Received lines do not have useful public IP addresses listed), then you have to work a little harder. You need to go into the list of "Received" lines and find the oldest one that corresponds to a server that you trust is real. I.e., the message has to leave the Spammer at some point and hit a real server which will record a real "Received" line (e.g., your own email server). We do this by starting at the top, first reviewing the Received lines added by your own organization's mail servers, and working your way down through servers that you recognize (you will need to know what servers are used in your network). The "Received" line added by the last one that you recognize may be the last trustable one.

Hiding your location from message recipients

OK, so now that you know how easy it is to find out the approximate location of the sender of an email message, the natural question is "how can I hide my own location?"

The simplest thing to do is to use a web-based (WebMail) email interface. Messages sent from these interfaces are sent from the provider's mail servers and not from your local machine. While the email provider may record your actual IP address for auditing purposes, this information will not (generally, and at LuxSci specifically) be in the "Received" headers of the message. As a result, your recipients will only be able to track the message back to your email provider's mail servers, and not to you. If you are not sure about your WebMail provider, send yourself an email message and see what is in your Received lines. Compare this to your own current physical IP address (see www.whatismyip.com).

If you need to send messages using an email program, like Outlook or Thunderbird, then you need an SMTP service that is able to "anonymize" your outgoing messages. I.e., the service needs to be able to "scrub" each message of all information identifying your location and resend it in a way that permits the recipients to only track it back to the service's mail servers (as in the WebMail case).
LuxSci's anonymous SMTP email service offers this option at no additional cost; it is included as a feature with all email marketing and email hosting accounts.
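As a practical footnote to the tracing steps above: the manual header walk is easy to script. The Python sketch below (an illustration added here, using only the standard library and a hypothetical file name "message.eml") lists each Received hop from oldest to newest and pulls out any bracketed IPv4 address, so you can feed the oldest trustworthy one to a geolocation or WHOIS service. Deciding which hop to trust remains a human judgment, as described above.

```python
import email
import re

IP_PATTERN = re.compile(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]")

def received_hops(raw_message: str):
    """Return (ip-or-None, header) pairs, oldest hop first."""
    msg = email.message_from_string(raw_message)
    hops = []
    for header in reversed(msg.get_all("Received") or []):
        match = IP_PATTERN.search(header)
        hops.append((match.group(1) if match else None, header.strip()))
    return hops

# hops[0] is the oldest hop: the best starting candidate for the
# originating IP, unless you suspect forged Received lines.
for ip, header in received_hops(open("message.eml").read()):
    print(ip or "no-IP", "<-", header[:60])
```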
<urn:uuid:a9725255-7cf5-44f5-bb00-7f2f7f28a22c>
CC-MAIN-2017-04
https://luxsci.com/blog/tracing-the-origin-of-an-email-message-and-hiding-it.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00288-ip-10-171-10-70.ec2.internal.warc.gz
en
0.905329
2,057
2.734375
3
Across the Sacramento River from the California state capital lies West Sacramento, a burgeoning industrial city home to the Port of West Sacramento. The port is a large, if underused, inland seaway that serves as a hub for the international export of rice and of materials such as cement and fertilizer.

During a Sept. 2, 2009, meeting of the West Sacramento City Council, it was revealed that Spanish solar power development firm Otras Producciones de Energia Fotovoltaico (OPDE) had opened talks with the city about leasing 160 acres along the port's deep-water channel on which to build a 24-megawatt photovoltaic solar power plant. OPDE is one of the world's largest builders of solar power plants.

In an Aug. 20, 2009, letter to the city, Greg Brehm, director of distributed energy resources for OPDE's U.S. arm, proposed construction of a "single axis tracking solar power generation facility." Should the facility be built, Brehm wrote, in addition to powering 5,000 homes it would have the environmental impact of taking more than 6,000 cars off the road and would sequester as much carbon annually as 8,000 acres of pine forest. Brehm also noted that the 18-month project would create 50 full-time jobs during construction and 10 permanent positions once the facility becomes operational. If built, the facility would be the largest photovoltaic plant in the nation.

Photovoltaic solar power, as opposed to solar thermal, is what most people imagine when thinking of solar power. Photovoltaic systems track the sun as it moves across the sky to collect solar radiation via solar cells, which convert sunlight into electricity. Solar thermal, meanwhile, relies on parabolic mirrors that reflect the sun's rays onto a boiler, which in turn generates steam to turn a turbine. Some solar thermal facilities direct the reflected rays onto oil-filled pipes instead of a boiler; the heated oil is pumped to heat engines, which convert the energy into electricity.
<urn:uuid:93f45b3c-a040-4bdd-a095-4b708783096e>
CC-MAIN-2017-04
http://www.govtech.com/technology/West-Sacramento-Calif-in-Talks-with.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00132-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942336
423
3.359375
3
Yin L., CAS Wuhan Botanical Garden | Yin L., Center for the Environmental Implications of Nanotechnology | Yin L., Duke University | Cheng Y., Center for the Environmental Implications of Nanotechnology | and 12 more authors.

Environmental Science and Technology | Year: 2011

Silver nanoparticles (AgNPs) are increasingly used as antimicrobial additives in consumer products and may have adverse impacts on organisms when they inadvertently enter ecosystems. This study investigated the uptake and toxicity of AgNPs to the common grass Lolium multiflorum. We found that root and shoot Ag content increased with increasing AgNP exposure. AgNPs inhibited seedling growth. When exposed to 40 mg L⁻¹ GA-coated AgNPs, seedlings failed to develop root hairs and had highly vacuolated and collapsed cortical cells and a broken epidermis and root cap. In contrast, seedlings exposed to identical concentrations of AgNO3 or to supernatants of ultracentrifuged AgNP solutions showed no such abnormalities. AgNP toxicity was influenced by total NP surface area, with smaller AgNPs (6 nm) more strongly affecting growth than similar mass concentrations of larger (25 nm) NPs. Cysteine (which binds Ag+) mitigated the effects of AgNO3 but did not reduce the toxicity of AgNP treatments. X-ray spectro-microscopy documented silver speciation within exposed roots and suggested that silver is oxidized within plant tissues. Collectively, this study suggests that growth inhibition and cell damage can be directly attributed either to the nanoparticles themselves or to the ability of AgNPs to deliver dissolved Ag to critical biotic receptors. © 2011 American Chemical Society.
<urn:uuid:0b5f2bee-b217-4b1b-97e0-77e58ba6e335>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/center-for-the-environmental-implications-of-nanotechnology-1874664/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00161-ip-10-171-10-70.ec2.internal.warc.gz
en
0.911557
360
2.578125
3
Article by Dan Auerbach

Researchers Use EFF's SSL Observatory To Discover Widespread Cryptographic Vulnerabilities

Lenstra's team has discovered tens of thousands of keys that offer effectively no security due to weak random number generation algorithms. The consequences of these vulnerabilities are extremely serious. First, in all cases a weak key would allow an eavesdropper on the network to learn confidential information, such as passwords or the content of messages, exchanged with a vulnerable server. Second, unless servers were configured to use perfect forward secrecy, sophisticated attackers could extract passwords and data from stored copies of previous encrypted sessions. Third, attackers could use man-in-the-middle or server impersonation attacks to inject malicious data into encrypted sessions. Given the seriousness of these problems, EFF will be working around the clock with the EPFL group to warn the operators of servers that are affected by this vulnerability and encourage them to switch to new keys as soon as possible.

While we have observed and warned about vulnerabilities due to insufficient randomness in the past, Lenstra's group was able to discover more subtle RNG bugs by searching not only for keys that were unexpectedly shared by multiple certificates, but also for prime factors that were unexpectedly shared by multiple publicly visible public keys. This application of the 2,400-year-old Euclidean algorithm turned out to produce spectacular results.

In addition to TLS, the transport layer security mechanism underlying HTTPS, other types of public keys were investigated that did not use EFF's Observatory data set, most notably PGP. The cryptosystems behind the full set of public keys in the study included RSA (the most common class of cryptosystem behind TLS), ElGamal (the most common class of cryptosystem behind PGP), and several others in smaller quantities. Within each cryptosystem, various key strengths were also observed and investigated, for instance RSA 2048-bit as well as RSA 1024-bit keys.

Beyond shared prime factors, other problems were discovered with the keys, all of which appear to stem from insufficient randomness in generating the keys. The most prominently affected keys were RSA 1024-bit moduli. This class of keys was deemed by the researchers to be only 99.8% secure, meaning that 2 out of every 1,000 of these RSA public keys are insecure.

Our first priority is handling this large set of tens of thousands of keys, though the problem is not limited to this set, or even to just HTTPS implementations. We are very alarmed by this development. In addition to notifying website operators, Certificate Authorities, and browser vendors, we also hope that the full set of RNG bugs that are causing these problems can be quickly found and patched. Ensuring a secure and robust public key infrastructure is vital to the security and privacy of individuals and organizations everywhere.

Cross-posted from Electronic Frontier Foundation
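As a footnote to the article above: the core of the shared-prime check is startlingly simple. If two RSA moduli share a prime factor, a single greatest-common-divisor computation factors both. The toy Python illustration below (added here, with numbers far too small to be realistic) shows the idea.

```python
from math import gcd

def shared_factor(n1: int, n2: int):
    """Factor two RSA moduli that accidentally share a prime."""
    g = gcd(n1, n2)
    if 1 < g < min(n1, n2):
        return g, n1 // g, n2 // g  # shared prime, then each cofactor
    return None                     # moduli are coprime; the attack fails

# Two "keys" built from a shared prime 61, as a weak RNG might produce:
n1 = 61 * 53   # 3233
n2 = 61 * 59   # 3599
print(shared_factor(n1, n2))        # -> (61, 53, 59)
```

At the scale of millions of harvested keys, naive pairwise GCDs are too slow; batch-GCD techniques such as product trees are used instead, but the underlying mathematics is exactly this.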
<urn:uuid:5aab503f-e610-4668-bc7c-692db5136385>
CC-MAIN-2017-04
http://www.infosecisland.com/blogview/20260-Researchers-Discover-Widespread-Cryptographic-Vulnerabilities.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00555-ip-10-171-10-70.ec2.internal.warc.gz
en
0.961823
582
2.78125
3
Intel introduced a new approach to automatic optimization on computing accelerators when it released the Intel Xeon Phi coprocessor in November 2012, representing what is now known as the Many Integrated Core (MIC) architecture. The MIC architecture can be programmed with the standard Fortran, C and C++ languages, and it understands common HPC parallel frameworks such as OpenMP and MPI. But most importantly, the Intel compiler suite knows how to compile Fortran, C or C++ code written by a "mere mortal" to run on the coprocessor as if it had been optimized by a "ninja". This automatic optimization capability was highlighted in this paper published by Colfax Research.

The authors demonstrate step by step how to construct a library of special functions and make it offloadable to an Intel Xeon Phi coprocessor. Using a C++ language extension, they inform the compiler that certain functions are candidates for automatic vectorization in user applications. Finally, they brush up the high-level-language code of the function to allow the compiler to do its best with optimization. As a result, their implementation of the Gauss error function performs on par with the highly optimized vendor implementation.

The demonstrated automatic optimization capabilities open doors to scientists and engineers wishing to boost the performance of their general-purpose functions using the MIC architecture. Be it a special mathematical function, an empirical functional relationship, or a solution of a differential equation, it is possible to express it in a high-level language and trust the compiler to do the optimization. Additionally, an implementation of a library function in a high-level language will scale forward to future computing architectures in the blink of an eye. That is, in a swing of the compiler's "ninjato".
<urn:uuid:d8bb5695-7219-4af0-8c76-c3a7c4228b8c>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/05/08/speaking_many_languages_into_the_mic/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00373-ip-10-171-10-70.ec2.internal.warc.gz
en
0.902938
367
2.734375
3
The mainframe became more powerful and was employed to solve more business problems. But also during this period, the introduction of the IBM(R) DB2(R) relational database opened up entirely new vistas for business by enriching the data and information the organization could acquire, store and analyze. This massive influx of data created difficult new challenges for IT as they mastered how to copy, load/unload, transform and recover what was becoming one of the organization's most valuable and critical resources. It was during this period that the legacy BMC Software company began delivering solutions to manage the databases and optimize the performance of the IT systems, anticipating ways in which they could power IT success and inventing solutions that delivered.

BMC Innovations that Changed Mainframe Management (1980 – 1990)

(Watch this blog in coming weeks for reminiscences about these innovations.)

1989 – COPY PLUS for DB2 created a faster way to copy a DB2 database
1989 – Reorg Plus for DB2 provided a faster method for reorganizing DB2 tablespaces and indexes to help meet outage windows and SLAs
1988 – IMAGE COPY PLUS engineered a faster way to back up an IMS(R) database
1988 – Catalog Manager for DB2 simplified the tasks required for DB2 catalog management
1987 – ALTER for DB2 automated DB2 change processes to reduce the time required and eliminate manual errors
1985 – Control-M for MVS automated the scheduling and management of mainframe batch job execution
1984 – DELTA IMS eliminated outages when changing IMS system definitions
1984 – DATA PACKER/IMS used compression to reduce the storage required for databases
1984 – AutoOPERATOR/IMS automated responses to IMS system events
1980 – 3270 Optimizer/CICS and /IMS optimized network traffic to/from 3270 terminals

You can read about BMC's mainframe 50th celebration at http://www.bmc.com/mainframeanniversary

(R) Trademarks or registered trademarks of International Business Machines Corporation in the United States, in other countries, or both.
<urn:uuid:225f55a8-052b-4d27-b130-0b2fe24d4b29>
CC-MAIN-2017-04
http://www.bmc.com/blogs/50-bmc-mainframe-innovations-308x3090period-1980-1990/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00491-ip-10-171-10-70.ec2.internal.warc.gz
en
0.895065
436
3.0625
3
Asynchronous Transfer Mode (ATM) Service is a fast-packet, cell-based technology that can support data and video applications requiring high-bandwidth, high-performance transport and switching. ATM Service will allow Customers who have requirements for high-speed connectivity to interconnect their multiple locations. ATM offers low latency, high throughput and flexible bandwidth interconnections capable of carrying a wide range of Services.

- Bandwidth Efficiency
- Flexibility and Scalability
- Multiservice Networking
- Works with Frame Relay
- DSL Backbone
- Increased productivity

- User Network Interface (UNI) Port and Access
- User Network Interface (UNI) Port Only
- Inverse Multiplexing over ATM (IMA) UNI Port and Access
- Inverse Multiplexing over ATM (IMA) UNI Port Only
- Broadband ISDN Inter-Carrier Interface (B-ICI) Port and Access
- Broadband ISDN Inter-Carrier Interface (B-ICI) Port Only
- Circuit Emulation Service (CES) Port Only

PVCs are logical connections between ports that allow data to be sent from one customer location to another. PVCs do not engage capacity when idle, allowing the available capacity to be allocated to other active PVCs that need additional bandwidth. With the exception of multicasting VCCs, PVCs are duplex (two-way). When placing an order for service, the Customer must specify the following for each PVC:

- PVC Connection Type
- Traffic Parameter
- VCC/VPC Type, and
- Quality of Service

ATM, when purchased for DSL transport, is available only with UBR quality of service.

PVC Connection Types
ATM to ATM
Frame Relay to ATM Service (FRATM)

The Customer must choose the traffic parameters available for each PVC selected. Traffic parameters represent priorities given to cell transmissions and the sensitivity of cells to delay variation and loss within the network. Traffic Shaping is a flow-control capability that must be enabled on the Customer Equipment to ensure the Customer's data traffic transmission rate does not violate the Customer's chosen traffic parameters.

Peak Information Rate (PIR)
The PIR designates an upper limit that the traffic information rate may not exceed. PIR is expressed in Kbps or Mbps. Traffic that exceeds the PIR value will be discarded from the network for all Quality of Service types.

Sustainable Information Rate (SIR)
The Sustainable Information Rate (SIR) specifies the "average" traffic rate that is transmitted and received. SIR is expressed in Kbps or Mbps.

Maximum Burst Size (MBS)
The MBS specifies the maximum number of cells that can be transmitted back-to-back at the PIR. The MBS default is 32 cells.

Virtual Channel Connection (VCC)
A logical connection between one ATM switch port and another switch port. The VCC allows exchange of information in the form of fixed cells at variable rates. Company configures and maintains the individual VCCs within the ATM connection.

Virtual Path Connection (VPC)
A group of logical connections between one ATM switch port and another ATM switch port. A VPC connection is typically used to route multiple Customer-defined VCCs as a group. It is the responsibility of the Customer to configure and maintain the individual VCCs within a VPC connection. There are several VCC/VPC types available, which may vary by region. Standard VCCs/VPCs are utilized in typical ATM networks to provide logical connections between two ports.

Circuit Emulation Service (CES) VCC
CES VCCs provide a logical connection between a CES port and another ATM port. The CES VCC is to be used in conjunction with CES Port Only.
CES VCCs are always provisioned with CBR Quality of Service and a PIR traffic parameter of 1.755 Mbps. A CES DS1 VCC cannot be provisioned to an ATM DS1 UNI Port. CES VCCs are not available in all regions.

A FRATM VCC is established to connect two Customer locations, one having a Frame Relay port and the other an ATM port, to provide transparent interworking between Frame Relay and ATM networks. The FRATM VCC is provisioned with VBR-nrt Quality of Service on the ATM portion and Standard Quality of Service on the Frame Relay portion. The FRATM VCC is priced based upon the ATM SIR value selected.

Disaster Recovery VCCs allow for the implementation of logical connections between branch locations and a secondary processor/server center (disaster recovery location) should a non-recoverable disaster occur at the primary host location. The disaster recovery location must also be served by an active, Company-provided ATM/Frame Relay Port. Disaster Recovery VCCs are provisioned based upon an initial order from the Customer and pre-configured in the ATM switch, but set to a disabled mode. The Customer must initiate VCC activation with Company and any necessary third-party vendors.

Alternate Routing VCCs provide a logical connection to an alternate host location processor/server in the event of an outage at the primary location. Alternate Routing VCCs are to be utilized only in the event of an outage at the primary location, not for day-to-day use. Alternate Routing VCCs are provisioned based upon an initial order from the Customer and are available at all times. The remote Customer location is provisioned with two active VCCs, one to the primary Customer location and one to the backup Customer location.

Multicasting VCCs are used to communicate uni-directionally from one location to many locations. A Multicast VCC allows Customer Equipment to send cells into the Company ATM network over a specially designated VCC; the cells are replicated and sent across various VCCs defined on the same port as the Multicast VCC. Multicast VCCs are used in conjunction with the VBR-nrt Quality of Service and the SIR traffic parameter. Multicasting VCCs are not available in all regions. See the table in Section 4.7.8.D for Multicasting VCC availability.

ATM Host-Link gives the Customer the option to purchase multiple VPCs from the Company's ATM network to provide ATM connectivity for Digital Subscriber Line (DSL) Transport Services, including Wholesale DSL Transport Service and Remote LAN DSL Transport Service. The Customer must obtain access to the Company's ATM network by purchasing UNI Port and Access/Port Only or B-ICI Port and Access/Port Only. ATM Host-Link is offered only for DSL Transport connectivity and is applicable for all interfaces. ATM Host-Link will contain up to 10 VPCs for DS1, 25 VPCs for IMA, 100 VPCs for DS3 and 200 VPCs for OC-3. If required, additional ATM Host-Link VPCs (exceeding the quantities designated above) may be purchased individually as indicated in the ASI FCC1 tariff.
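The PIR/SIR/MBS parameters above behave much like a classic token-bucket policer. The Python sketch below (added as an illustration; it is not the exact GCRA algorithm ATM switches implement, and the parameter names are simply chosen to echo the tariff terms) shows the basic idea: admit cells at up to the SIR on average while tolerating bursts of up to MBS cells.

```python
class TrafficPolicer:
    """Toy single-bucket policer: average rate SIR, burst depth MBS."""

    def __init__(self, sir_cells_per_sec: float, mbs_cells: int):
        self.rate = sir_cells_per_sec
        self.depth = float(mbs_cells)
        self.tokens = float(mbs_cells)
        self.last_time = 0.0

    def admit(self, now: float) -> bool:
        """True if a cell arriving at time `now` conforms to the contract."""
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last_time) * self.rate)
        self.last_time = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True   # conforming: within SIR plus the allowed burst
        return False      # non-conforming: discard (or tag) the cell
```

A shaper on the customer side does the mirror-image job: instead of discarding non-conforming cells, it delays them until they conform, so the network's policer never sees a violation.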
<urn:uuid:355404c6-57da-4161-bf31-8d8b2ba0aa83>
CC-MAIN-2017-04
https://www.att.com/gen/isp?pid=2524
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00399-ip-10-171-10-70.ec2.internal.warc.gz
en
0.891337
1,468
2.5625
3
Cloud isn't a technology; it's a business model.

Cloud computing is transforming IT and business alike. Because of this, many vendors now claim to be "as a service" or "cloud". This series of posts explains exactly what cloud is, how you get it, and what it does.

Cloud computing is a new business model powered by new technologies. It's an on-demand, self-service, "pay as you go" model for access to hosting infrastructure (networks, servers, storage, operating systems, applications, support, administration). Cloud providers deliver infrastructure, platforms and applications as a service. With cloud computing the business pays only for what it uses. Compared to traditional models, cloud can deliver five- to ten-fold improvements in costs and time to market (although 20% is more realistic).

Pay-as-you-go eliminates over- and under-provisioning of capacity. Over-provisioning wastes money; it also reduces funds available for other investments. Under-provisioning increases time to value and can result in lost revenue as customer experience degrades. Automated capacity management is built into the cloud. Adding or removing infrastructure quickly in response to demand offers agility and cost effectiveness traditional IT cannot match.

A cloud has five essential characteristics, three service models, and four deployment models. Each has its pros and cons. Cloud can reduce the time, money, and the number of people it takes to build and deploy applications and related hosting infrastructure. Yet cloud is not always the right solution.

What You Need to Know

Five key characteristics define the cloud.

- On-demand self-service access to infrastructure, platforms and applications, delivered by a "pay-as-you-go" model based on usage.
- Broad access through mobile phones, tablets, laptops, and workstations.
- Resource pooling and automation to combine resources into managed services.
- Rapid elasticity that scales automatically and quickly with demand.
- Measured service, with usage monitored, controlled, and reported.

Three cloud service models define decreasing levels of control.

- Infrastructure as a Service (IaaS) provides network, server, storage, and middleware that IT uses to deploy and run its own operating systems and applications. IT has control only over operating systems and applications. IT can configure storage and some network settings. IaaS is used to create platforms for service and application development, test, and deployment.
- Platform as a Service (PaaS) provides application hosting and development tools. Developers create and deploy their applications into cloud infrastructures. Developers control only their applications and some operating system configurations. PaaS is used to create and deploy applications and services for users.
- Software as a Service (SaaS) provides pre-built applications, typically available via web browser. Consumers control only application configuration settings. SaaS is used to complete business tasks.

Four cloud deployment models describe cloud ownership and usage.

- Private Cloud: infrastructure dedicated to a single organization. The organization or a provider owns and operates it. It may be on- or off-premises.
- Community Cloud: infrastructure shared by a group of organizations with similar needs. One or more of the organizations or a provider owns and operates it. It may be on- or off-premises.
- Public Cloud: infrastructure for shared public use. It is owned, operated, and hosted by a service provider.
- Hybrid Cloud: combines services from two or more different cloud models. In the next post I’ll talk about what you need to do in order to decide whether or not cloud is the right option for you and how to integrate it into your business model.
<urn:uuid:63566b25-bf5d-4f6c-aa5d-9c95658265c3>
CC-MAIN-2017-04
http://blog.globalknowledge.com/2012/05/14/cloud-computing-what-you-need-to-know/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00033-ip-10-171-10-70.ec2.internal.warc.gz
en
0.918812
774
3.0625
3
A History of Improvements

Managed virtual environments improve security by managing memory for applications, protecting against memory corruption errors, for example. The price of this is mostly system performance. The problem is that the environments themselves can have vulnerabilities, and quite a few of these have surfaced over the years. Plus, there are so many other classes of errors in addition to memory errors that applications aren't secure purely by being written in a managed environment. Still, memory corruption errors are important, and the trend toward managed code is a net plus for security.

This is one reason a lot of corporate development has moved to such environments, from Java to ASP.NET. Writing conventional code that is carefully scrutinized for security vulnerabilities is hard and requires expertise you may not have. Writing managed code takes care of at least the straightforward errors. And, once again, it shouldn't make anything harder unless you are relying on techniques you shouldn't be.

With its Chromium environment forming the basis for the Chrome browser and operating system, Google has taken the sandbox to the next level by protecting native code running in the browser. It hasn't prevented vulnerabilities and exploits in the Chrome browser, but it has limited the impact of those exploits by preventing them from reaching beyond the limited capabilities of the browser environment. In fact, the entire Chromium sandbox runs in user mode, so nothing an attacker does will exceed the capabilities of the user running the program.

Something similar can be said for Protected Mode in Microsoft's Internet Explorer 7 and 8 under Vista and Windows 7. Protected Mode runs the browser in a specially crippled user context that has no write access anywhere outside of the temp folders.

Look for all these techniques to become more widely available as generalized facilities for applications. However, both Chromium under Windows and Protected Mode rely on Windows-specific features, such as integrity levels, job objects and restricted tokens, which are not necessarily available on other platforms. Thus, the development of sandboxes could be the latest chapter in an old story: the trade-off between maximum functionality and platform portability. But it all depends on how you write your programs. If you write programs to run in the Chromium sandbox and follow its rules, you should get some portability along with whatever sandbox features Chromium provides on Windows, as well as Mac and Linux.

There are other systemic improvements that OS developers can and will implement. One of them, sandboxing, has a long history in managed environments such as Java. In fact, not too long ago, many felt that Java and such managed environments were the future of operating systems. There's still something to that, but the security records of Java and .NET haven't been especially impressive, even though they were supposedly designed with that objective.
<urn:uuid:fdd0b0aa-cb38-47f2-a5b7-5e5391356f3a>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Security/OS-of-the-Future-Built-for-Security-302210/1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00271-ip-10-171-10-70.ec2.internal.warc.gz
en
0.962043
558
2.984375
3
The US Federal Trade Commission (FTC) adopted amendments to the Children’s Online Privacy Protection Act, designed to give parents greater control over personal information their children share with online services and apps. Earlier this month the FTC issued a report into the privacy practices of apps targeting children, noting that “our study shows that kids’ apps siphon an alarming amount of information from mobile devices without disclosing this fact to parents”. The changes include closing a loophole that allows apps aimed at children to permit third parties to collect personal information through plug-ins without parental notice or consent. COPPA compliance should also extend to third parties in these cases. The list of personal information that cannot be collected without parental notice now includes geolocation information, photos and videos. The move comes after an FTC review to ensure the COPPA Rule keeps up with evolving technology and the changing way children use and access the internet – including the use of mobile devices. COPPA applies to children under the age of 13. Other areas now covered by the COPPA Rule include persistent identifiers that can recognise users over time, such as IP addresses and mobile device IDs; and strengthened data security protection to ensure online service operators adopt reasonable procedures for data retention and deletion, and take steps to ensure children’s personal information only goes to companies capable of keeping it secure and confidential. The COPPA Rule was mandated when the US Congress passed the Children’s Online Privacy Protection Act of 1998. The final amended COPPA Rule will take effect on 1 July 2013.
<urn:uuid:fb829feb-8f49-4940-b648-40d2f41da176>
CC-MAIN-2017-04
https://www.mobileworldlive.com/ftc-strengthens-app-privacy-for-kids
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00575-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915742
314
2.625
3
(Editor's note: This is the second in a multi-part series on big data.)

The big data movement is revolutionary. It seeks to overturn cherished tenets in the world of data processing. And it's succeeding.

Most people define "big data" by three attributes: volume, velocity, and variety. This is a good start, but misses what's really new. (See "The Vagaries of the Three V's: What's Really New About Big Data".) At its heart, big data is about liberation and balance. It's about liberating data from the iron grip of software vendors and IT departments. And it's about establishing balance within corporate analytical architectures long dominated by top-down data warehousing approaches. Let me explain.

Liberating Data from Vendors

Most people focus on the technology of big data and miss the larger picture. They think big data software is uniquely designed to process, report, and analyze large data volumes. This is simply not true. You can do just about anything in a data warehouse that you can do in Hadoop; the major difference is cost and complexity for certain use cases. For example, contrary to popular opinion on the big data circuit, many data warehouses can store and process unstructured and semi-structured content, execute analytical functions, and run processes in parallel across commodity servers.

For instance, back in the 1990s (and maybe even still today), 3M drove its Web site via its Teradata data warehouse, which dynamically delivered Web content stored as blobs or external files. Today, Intuit and other customers of text mining tools parse customer comments and other textual data into semantic objects that they store and query within a SQL database. Kelley Blue Book uses parallelized fuzzy matching algorithms baked into its Netezza data warehouse to parse, standardize, and match automobile transactions derived from auction houses and other data sources.

In fact, you can argue that Google and Yahoo could have used SQL databases to build their search indexes instead of creating Hadoop and MapReduce. However, they recognized that a SQL approach would have been wildly expensive because of database licensing costs due to the data volumes. They also realized that SQL isn't the most efficient way to parse URLs from millions of Web pages, and such an approach would have jacked up engineering costs.

Open Source. The real weapon in the big data movement is not a technology or data processing framework; it's open source software. Rather than paying millions of dollars to Oracle or Teradata to store big data, companies can download Apache Hadoop and MapReduce for free, buy a bunch of commodity servers, and store and process all the data they want without having to pay expensive software licenses and maintenance fees, or fork over millions to upgrade these systems when they need additional capacity.

This doesn't mean that big data is free or doesn't carry substantial costs. You still have to buy and maintain hardware and hire hard-to-find data scientists and Hadoop administrators. But the bottom line is that it's no longer cost prohibitive to store and process hundreds of terabytes or even petabytes of data. With big data, companies can begin to tackle data projects they never thought possible.

Counterpoint. Of course, this threatens most traditional database vendors, who rely on a steady revenue stream of large data projects. They are now working feverishly to surround and co-opt the big data movement.
Most have established interfaces to move data from Hadoop to their systems, preferring to keep Hadoop as a staging area for raw data, but nothing else. Others are rolling out Hadoop appliances that keep Hadoop in a subservient role to the relational database. Still others are adopting open source tactics and offering scaled-down or limited-use versions of their databases for free, hoping to lure new buyers and retain existing ones.

Of course, this is all good news for consumers. We now have a wider range of offerings to choose from and will benefit from the downward price pressure exerted by the availability of open source data management and analytics software. Money ultimately drives all major revolutions, and the big data movement is no different.

Liberating Data From IT

The big data revolution not only seeks to liberate data from software vendors, it wants to free data from the control of the IT department. Too many business users, analysts, and developers have felt stymied by the long arm of the IT department or data warehousing team. They now want to overthrow these alleged "high priests" of data, who have put corporate data under lock and key for architectural or security reasons, or who take forever to respond to requests for custom reports and extracts. The big data movement gives business users the keys to the data kingdom, especially highly skilled analysts and developers.

Load and Go. The secret weapon in the big data arsenal is something that Amr Awadallah, founder and CTO at Cloudera, calls "schema at read." Essentially, this means that with Hadoop you don't have to model or transform data before you query it. This cultivates a "load and go" environment where IT no longer stands between savvy analysts and the data. As long as analysts understand the structure and condition of the raw data and know how to write Java MapReduce code or use higher-level languages like Pig or Hive, they can access and query the data without IT intervention (although they may need permission).

For John Rauser, principal engineer at Amazon.com, Hadoop is a godsend. He and his team are using Hadoop to rewrite many data-intensive applications that require multiple compute nodes. During his presentation at the Strata Conference in New York City this fall, Rauser touted Hadoop's ability to handle myriad applications, both transactional and analytical, with both small and large data volumes. His message was that Hadoop promotes agility. Basically, if you can write MapReduce programs, you can build anything you want quickly without having to wait for IT. With Hadoop, you can move as fast as or faster than the business.

This is a powerful message. And many data warehousing chieftains have tuned in. Tim Leonard, CTO of US Xpress, loves Hadoop's versatility. He has already put it into production to augment his real-time data warehousing environment, which captures truck engine sensor data and transforms it into various key performance indicators displayed on near real-time dashboards. Also, a BI director for a well-known internet retailer uses Hadoop as a staging area for the data warehouse. He encourages his analysts to query Hadoop when they can't wait for data to be loaded into the warehouse or need access to detailed data in its raw, granular format.

Buyer Beware. To be fair, Hadoop today is a "buyer beware" environment. It is beyond the reach of ordinary business users, and even many power users.
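To make "schema at read" concrete, here is a minimal sketch, written in Python rather than Java MapReduce purely for brevity; the file name and field layout are invented for illustration. Raw, never-modeled log lines are loaded as-is, and structure is imposed only when a question is asked:

```python
# Schema-on-read in miniature: no table design, no upfront ETL.
# "raw_clickstream.log" and its tab-separated layout are hypothetical.
from collections import Counter

def parse(line: str) -> dict:
    # The analyst's knowledge of the raw layout lives in code,
    # not in a predefined database schema.
    ts, user, action = line.rstrip("\n").split("\t")
    return {"ts": ts, "user": user, "action": action}

with open("raw_clickstream.log") as f:  # "load and go": query the raw file directly
    actions = Counter(parse(line)["action"] for line in f)

print(actions.most_common(5))
```

Note what even this tiny sketch assumes: someone who can program, and who already knows the raw data's structure and condition. That is exactly the catch.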
Today, Hadoop is agile only if you have lots of talented Java developers who understand data processing and an operations team with the expertise and time to manage Hadoop clusters. This steep expertise curve will diminish over time as the community refines higher-level languages like Hive and Pig, but even then, it's still pretty technical.

In contrast, a data warehouse is designed to meet the data needs of ordinary business users, although they may have to wait until the IT team finishes "baking the data" for general consumption. Unlike Hadoop, a data warehouse requires a data model that enforces data typing and referential integrity and ensures semantic consistency. It preprocesses data to detect and fix errors, standardize file formats, and aggregate and dimensionalize data to simplify access and optimize performance. Finally, data warehousing schemas present users with simple, business views of data culled from dozens of systems that are optimized for reporting and analysis. Hadoop simply doesn't do these things, nor should it.

Clearly, big data and data warehousing environments are very different animals, each designed to meet different needs. They are the yin and yang of data processing. One delivers agility, the other stability. One unleashes creativity, the other preserves consistency. Thus, it's disconcerting to hear some in the big data movement dismiss data warehousing as an inferior and antiquated form of data processing, best preserved in a computer museum rather than used for genuine business operations.

Every organization needs both Hadoop and data warehousing. These two environments need to work together synergistically. And it's not just that Hadoop should serve as a staging area for the data warehouse. That's today's architecture. Hadoop will grow beyond this to become a full-fledged reporting and analytical environment as well as a data processing hub. It will become a rich sandbox for savvy business analysts (whom we now call data scientists) to mine mountains of data for million-dollar insights and answer unanticipated or urgent questions that the data warehouse is not designed to handle.

Thomas Jefferson once said, "The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants." He was referring to the natural process by which political and social structures stagnate and stratify. This principle holds true for many aspects of life, including data processing. For too long, we've tried to shoehorn all analytical pursuits into a single top-down data delivery environment that we call a data warehouse. This framework is now creaking and groaning from the strain of carrying too much baggage.

It's time we liberate the data warehouse to do what it does best, which is deliver consistent, non-volatile data to business users to answer predefined questions and populate key performance indicators within standard corporate and departmental dashboards and reports. It's gratifying to see Hadoop come along and shake up data warehousing orthodoxy. The big data movement helps clarify the strengths and limitations of data warehousing and underscores its role within an analytics architecture. And this leaves Hadoop and NoSQL technologies to do what they do best, which is provide a cost-effective, agile development environment for processing and querying large volumes of unstructured data. The organizations that figure out how to harmonize these environments will be the data champions of tomorrow.
Improved microprocessors make other technologies more valuable. Mass storage, for example, is worth more if less expensive, faster processors can execute more demanding compression algorithms. Memory is worth more if high-bandwidth processors can sharpen and color-correct a video stream, to mention just one application, as easily as they used to optimize a single image frame.

But the real explosion of processor demand doesn't come from high-end applications for chips that cost hundreds of dollars. It comes from the hordes of simple applications, formerly less expensive to do with hard-wired logic, migrating to software on newly affordable low-end general-purpose chips. If only all those general-purpose processors could find jobs to do in their spare time.

Sun's Project Jxta, a hot topic at this year's JavaOne conference, proposes a framework for spontaneous collaboration among processing nodes. A peer-to-peer framework like Jxta doesn't merely partition existing large tasks; it also enables exploitation of previously invisible opportunities: for example, as suggested during Bill Joy's JavaOne demo, to have all the cars near a given gas station negotiate as a group for a discounted price.
Building a high-speed brain-to-computer interface that would offer "unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world" is the goal of a new program announced recently by the Defense Advanced Research Projects Agency. The agency's Neural Engineering System Design (NESD) program aims to develop an implantable device that would "serve as a translator, converting between the electrochemical language used by neurons in the brain and the ones and zeros that constitute the language of information technology." You may recall that in the sci-fi film The Matrix, protagonists were plugged into a violent virtual future world through a brain interface.

"The goal with NESD is to achieve a communications link in a biocompatible device no larger than one cubic centimeter in size, roughly the volume of two nickels stacked back to back," DARPA stated.

"Today's best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem," said Phillip Alvelda, the NESD program manager. "Among the program's potential applications are devices that could compensate for deficits in sight or hearing by feeding digital auditory or visual information into the brain at a resolution and experiential quality far higher than is possible with current technology."

Neural interfaces currently approved for human use squeeze a tremendous amount of information through just 100 channels, with each channel aggregating signals from tens of thousands of neurons at a time. The result is noisy and imprecise. In contrast, the NESD program aims to develop systems that can communicate clearly and individually with any of up to one million neurons in a given region of the brain, Alvelda stated in a release.

The program sounds like a complex undertaking. For example, DARPA states:

- In parallel with hardware developments and innovations in neural transduction techniques, the NESD program seeks to advance the state of the art in algorithms to identify neurons, neural circuits, and patterns of population-coded activity that represent and encode specific sensory stimuli, and to transform this neural-coded information to and from the digital electronic domain. New mathematical transformation algorithms will need to accommodate the increased scale of neural input/output and leverage the developed NESD hardware systems to validate simultaneous high-bandwidth and high-precision, bi-directional information transfer between the system and animal/human subjects.

- NESD hardware components and algorithms must be modular in design, with clear, well-defined hardware interconnects and software Application Programming Interfaces (APIs) that can easily accommodate upgrades to componentry, new neural signal transduction modalities, and/or algorithms, to enable their use as foundational engineering platforms for future research and development.

- Successful NESD proposals must culminate in the delivery of complete, functional, implantable neural interface systems and the functional demonstration thereof. The final system must read at least one million independent channels of single-neuron information and stimulate at least one hundred thousand channels of independent neural action potentials in real time.
The system must also perform continuous, simultaneous full-duplex interaction with at least one thousand neurons. DARPA said it expects to spend $60 million on the NESD program over four years.

The DARPA project is the second major brain-related venture announced recently. The Intelligence Advanced Research Projects Activity, the radical research arm of the Office of the Director of National Intelligence, said this month that it was looking to develop human brain-like functions in a new wave of computers. IARPA said it was looking to two groups to help develop this new generation of computers: computer scientists with experience in designing or building computing systems that rely on the same or similar principles as those employed by the brain, and neuroscientists who have credible ideas for how neural computing can offer practical benefits for next-generation computers.

From the IARPA request for information: "...the principles of computing underlying today's state of the art digital systems deviate substantially from the principles that govern computing in the brain. In particular, whereas mainstream computers rely on synchronous operations, high precision, and clear physical and conceptual separations between storage, data, and logic; the brain relies on asynchronous messaging, low precision storage that is co-localized with processing, and dynamic memory structures that change on both short and long time scales."
DDoS Attack Doesn't Spell Internet Doom: 7 Facts

Despite a record-setting DDoS attack against anti-spam group Spamhaus, the Internet remains alive and well. Let's break down the key facts.

5. Why DDoS Size Doesn't Always Matter. Still, the DDoS attacks launched against Spamhaus suggest that with a bit of effort, attack volumes -- which on average have remained stagnant in recent years, or even decreased -- can be increased in size. "Arbor has been monitoring DDoS for more than a dozen years and we've seen attack size peaking at around 100 Gbps in recent years," said Dan Holden, director of Arbor Networks' security engineering and response team, in an email.

But DDoS attack size need not matter, because DDoS attackers -- supported by free attack toolkits -- have found effective ways to disrupt websites that don't require launching massive quantities of packets. Instead, they can simply target choke points, for example by launching application-layer attacks. Such attacks can be just as effective as high-volume attacks. For example, the largest DDoS attack in 2012 peaked at just 60 Gbps, in a year that was filled with DDoS disruptions.

6. At Whatever Volume, DDoS Attacks Are Hard To Stop. The end result, of course, is still website disruptions. "The attack on Spamhaus, and their upstream security and Internet providers, is yet another example of how DDoS has become the de facto weapon of choice for cyber-activists, cyber-criminals, business competitors and others," said Marty Meyer, president of Corero Network Security, in an email. "Unfortunately, the shared infrastructure that is the Internet can be vulnerable to this type of attack on the DNS system. It illustrates the collateral damage that can be felt by individuals trying to access sites and businesses like Netflix" -- which reportedly saw its service slow down as a result of the Spamhaus DDoS attacks -- "for whom the Web is the cornerstone of their business," he said.

The DDoS attack against Spamhaus also brought predictable dystopian hand-wringing from security vendors envisioning the potential evolution in online threats. "It also raises a worrying red flag that if an organization like CyberBunker could allegedly unleash this much damage, could a cyber-terrorist or state-sponsored attacker use similar tactics to disrupt the communication and business channels of its enemies that rely on the Internet?" said Meyer.

7. Easy DDoS Attacks Support Online Grudges. Case in point: the group calling itself the al-Qassam Cyber Fighters, which has been waging a six-month-long DDoS attack campaign against U.S. banking websites under the banner of "Operation Ababil." Although the group claims to be a cross-border band of Muslim hacktivists incensed over the July 2012 posting to YouTube of a film that mocks the founder of Islam, multiple U.S. government officials have accused it of being an Iranian government front. Regardless, the group continues to prove itself adept at preventing customers from reaching U.S. banking websites, either by disrupting targeted websites or by leading targeted websites to employ defenses that block some legitimate traffic from reaching their sites. No 300-Gbps attack volume required.
Many educational institutions are employing tablet PCs to elevate learning, class participation, and organization. Recent technological improvements have revitalized portable, touch-screen tablets and convertible laptops, and the concurrent increase in adoption and drop in price have made tablets more attractive to teachers and students at all levels of study.

Tablet PCs and Education Today

In the recent past, popular touch-screen technologies have evolved to produce durable tablet displays and improved pen and handwriting recognition. The evolution in usability has influenced a growing investment in applications and system support – including significant software price drops (for example, Microsoft Education Pack for Windows XP Tablet PC is now free). Consequently, tablet PCs are emerging in many more classrooms.
Most network people understand the terrible effect delay can have on TCP sessions. TCP must pause data transfers while waiting for ACKs. The more network delay there is, the longer this pause will be. However, many people, including most other IT professionals, do not understand this problem. They assume everything is connected at 1Gbps. Plus, TCP window sizes - how much data a host will send/receive before waiting for/sending an ACK - can also significantly affect the transfer rate. Sadly, most TCP sessions never even get close to 1Gbps, particularly over a WAN.

I had a unique situation to prove this to a user a while back. We have an OC-3 (155Mbps) between our two main offices on opposite US coasts. The round-trip delay is 75ms. This user opened a ticket complaining of slow file transfers between the two offices. He was only getting a 196 KBps transfer rate (1.568 Mbps). He was sure there was a problem with the network, since he was sure he should be able to transfer faster. So, I asked for source and destination IP addresses and did a test from my own PC while doing a packet capture.

First, I downloaded a file from the remote server to my PC with Ethereal running. My PC (Windows XP) started the TCP session by advertising a large TCP window size (64,512), but the server returned a small window of 9,280:

4176 > http [SYN] Seq=0 Ack=0 Win=64512 Len=0 MSS=1160
http > 4176 [SYN, ACK] Seq=0 Ack=1 Win=9280 Len=0 MSS=1460
4176 > http [ACK] Seq=1 Ack=1 Win=64512 Len=0

Thus, the server will only send 9,280 bytes before it waits for an ACK from my PC. Now, a few packets later, when the HTTP 200 reply comes from the server, the TCP window grows to 16,384, but it never gets any larger:

Protocol Info
HTTP HTTP/1.1 200 OK (application/zip)
Source port: http (80)
Destination port: 4176 (4176)
Sequence number: 1 (relative sequence number)
Next sequence number: 1161 (relative sequence number)
Acknowledgement number: 428 (relative ack number)
Header length: 20 bytes
Flags: 0x0010 (ACK)
Window size: 16384
Checksum: 0x7183 [correct]
Hypertext Transfer Protocol
Media Type: application/zip (762 bytes)

After a few seconds, when TCP stabilizes, it creates a situation where the server sends 14 packets and then waits for the ACK from my PC. Since the round-trip time is about 75ms between the two offices, the data transfer pauses while this ACK is in flight. Once the ACK is received, the data transfer starts again. I could see this again and again in the trace.

So, let's do some math. The server is sending packets of 1,214 bytes each. It takes about 85 ms total to receive the 14 packets and send an ACK. So:

14 packets * 1,214 bytes = 16,996 bytes (there's a full TCP window)

So, in 85ms, the server sends 16,996 bytes. Now set up a proportion to find out how much is sent in 1 second (1,000 ms):

16,996 / 85 = X / 1,000
85X = 16,996,000
X = 199,952.94 bytes per second

Now convert to kilobytes: 199,952.94 / 1,024 = 195.26 KBps

Look familiar? That's the exact value my user was reporting as a problem. The network, and TCP, is working perfectly.
If you are able to get both ends to use the maximum TCP window size (65,536) with an 85 ms RTT, your theoretical maximum transfer rate is:

65,536 / 85 = X / 1,000
85X = 65,536,000
X = 771,011.76 bytes per second

Now convert to kilobytes: 771,011.76 / 1,024 = 752.94 KBps

As a network guy I convert all things to bits, so the theoretical maximum rate over the WAN is:

752.94 KBps * 8 = 6,023.52 Kbps, or about 6 Mbps

When you work through the packet trace, this turns out to be a very simple math problem. It can be a powerful way to show users that the network is fine. Unfortunately, you won't be able to increase the speed of light for your users.
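The window-per-round-trip arithmetic generalizes into a one-line rule of thumb: throughput is at most window size divided by RTT. Here is a minimal sketch of the calculation in Python; the numbers are the ones from the trace above, not universal constants.

```python
# Back-of-the-envelope TCP throughput: at most one full window per round trip.
def tcp_throughput_kBps(window_bytes: int, rtt_ms: float) -> float:
    """Approximate best-case transfer rate in kilobytes per second."""
    bytes_per_second = window_bytes / (rtt_ms / 1000.0)  # one window per RTT
    return bytes_per_second / 1024.0

print(round(tcp_throughput_kBps(16_996, 85), 1))  # ~195.3 -> the ~196 KBps the user saw
print(round(tcp_throughput_kBps(65_536, 85), 2))  # 752.94 -> ceiling for a 64 KB window
```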
High-tech companies use the Internet to provide customers with a clean, insulated environment where they can make purchases, do research and chat with friends and strangers. But behind every antiseptic Web page lies its real-world technological counterpart; and all too often the servers that run Web sites and online services aren't nearly as spic-and-span as the pages they bring to life.

That's especially true in large cities where Internet companies either use servers owned by other firms or hire such firms to house their servers remotely. These server "farms," also known as data centers, use enormous amounts of energy relative to their size. Exodus Communications' operations in the San Francisco Bay area, for instance, consume as much electricity as 12,000 houses. What's more, server farms can't risk power outages, or the consequent lack of access to the Internet. Therefore, they rely on diesel backup generators, which typically generate more pollution than power plants.

These concerns have led a number of city officials to rethink how their municipalities can accommodate the needs of high-tech firms while ensuring that the resulting energy use and pollution doesn't spiral out of hand. San Francisco, for example, recently passed temporary legislation that requires new server farms to apply for conditional use permits. To receive the permit, a server farm needs to demonstrate that it has minimized air pollution from its backup generators and designed its building to be energy efficient. The Board of Supervisors is now considering permanent guidelines for future server farms.

"The regulation for the conditional use process is pretty balanced and is typical for a lot of development, so we're not adding something that developers aren't familiar with," said Greg Asay, a legislative aide for San Francisco Supervisor Sophie Maxwell, who sponsored the legislation. San Francisco has 16 server farms in existence or in development that total two million square feet, and many of them lie in Maxwell's district.

The legislation was driven by health and energy concerns, said Asay, "and the energy concerns have a health impact as well. We have, in California, a mad rush to build power plants, including here in San Francisco, and if the demand [for electricity] keeps increasing, we're going to be stuck with a lot of power plants." Residents of Maxwell's district, which also contains San Francisco's two power plants, already suffer disproportionately high asthma rates. The fear was that without this legislation, their problems would only get worse, whether from the construction of new power plants or from the exhaust of diesel generators that carry server farms through California's increasingly common blackouts.

The health concerns also became a pressing issue due to residential growth around the farms themselves. "It's tough to find a place where there's not someone [already living nearby]. Even within industrial areas, there's been massive growth in the last few years of these live/work lofts [converted warehouse offices exempt from most housing laws]," said Asay. "Even when an area is not zoned for housing, you still have housing across the street. It's tough to find a whole part of town that we can section off as industrial."

Server Farm Repellent

If San Francisco can't do so, though, server farms might go elsewhere.
"You need to be in the proximity of those who own and use the servers, preferably within about 20 minutes to a half-hour, so that if theres a problem, the user will be able to reach the data center very quickly," said John Mogannam, senior vice president of engineering for U.S. DataPort in San Jose. "If somebody wants to build a data center [in San Francisco] and knows that he has to jump through hoops, it wont be hard to find a place in Oakland, South San Francisco, Millbrae or wherever else space is available." "We actually like the [San Francisco] regulations because it will encourage companies to come locate on our campus," said Mogannam, referring to his companys 174-acre, 10-warehouse complex thats currently under construction in San Jose. "As far as the city of San Francisco is concerned, it will discourage companies from locating data centers there." Asay isnt so sure. After all, server farms can actually be accessed from anywhere in the world; its mere habit that keeps them close to the companies they serve. Whats more, he says, "Even if San Francisco is first, it wont be long before the rest of the country enacts these types of regulations." U.S. DataPort, for instance, will be employing a natural gas cogeneration plant in its new San Jose facility at the insistence of both the city and the state. "It eliminates the diesels completely," said Mogannam. The fallout from San Franciscos new regulations probably wont be apparent in the near future. "The economic downturn is giving us time to figure out whether there will be a ripple effect from the legislation," said Asay. "We have a lot of permits already in, but they might not build out for a couple of years. Right now, theres not much of a demand."
The history of Unified Communications lies in the birth of voice over IP (VoIP) back in the late 1980s and early 1990s. We were all concentrating on toll bypass, letting enterprises save money by placing voice over their data circuits. Then Cisco bought a company called Selsius, which was exploring carrying VoIP all the way to the actual phones, not just moving voice packets as data packets across the gateway links. This was a gamble on Cisco's part, since everyone still relied on siloed, hardware-based PBX switches, and even the switch vendors stated many times that this would not work. This was the first attempt to make voice an application on the data network, which Cisco marketed hard around the mid to late 1990s under the term AVVID (Architecture for Voice, Video and Integrated Data).

Then, with the arrival of the Linux operating system kernel, startups began mirroring Cisco's example by building cheap, feature-rich PBXes for medium and small companies. Probably the first project to do this was Asterisk. Two things had to come together for Asterisk to be possible: first, a cheap OS (Linux is free) on which to build the concept of a PBX; and second, a delivery protocol to control voice calls, developed by the IETF, called SIP (Session Initiation Protocol). These two developments have also allowed other manufacturers to get into the VoIP game, including Microsoft, IBM, 3Com, Skype, Vonage, and now even Google.

The second aspect of Unified Communications was the convergence of voice mail, e-mail and fax mail in the same message store or mail server. Many organizations are converging on this technology or using it today.

Today, voice is treated as just another application running on your network, controlled by a solid delivery protocol, SIP. What is unique about this is that voice can now be blended into your corporate applications as well. For instance, a group of users can be working on a research project, all using the same network-based application, and from the application they can set up phone calls between researchers or even escalate to a conference call. This provides the true real-time collaboration that companies have been searching for, and it is finally being delivered. This is the real growth that will be the focus point in this decade.

What will Unified Communications look like in the future? All I have to say is that the telephone itself may become extinct, and only time will tell what the replacement will look like.

Author: Joe Parlas
2.1.3 What are the advantages and disadvantages of public-key cryptography compared with secret-key cryptography?

The primary advantage of public-key cryptography is increased security and convenience: private keys never need to be transmitted or revealed to anyone. In a secret-key system, by contrast, the secret keys must be transmitted (either manually or through a communication channel), since the same key is used for encryption and decryption. A serious concern is that there may be a chance that an enemy can discover the secret key during transmission.

Another major advantage of public-key systems is that they can provide digital signatures that cannot be repudiated. Authentication via secret-key systems requires the sharing of some secret and sometimes requires trust of a third party as well. As a result, a sender can repudiate a previously authenticated message by claiming the shared secret was somehow compromised by one of the parties sharing the secret. For example, the Kerberos secret-key authentication system (see Question 5.1.6) involves a central database that keeps copies of the secret keys of all users; an attack on the database would allow widespread forgery. Public-key authentication, on the other hand, prevents this type of repudiation; each user has sole responsibility for protecting his or her private key. This property of public-key authentication is often called non-repudiation.

A disadvantage of using public-key cryptography for encryption is speed. There are many secret-key encryption methods that are significantly faster than any currently available public-key encryption method. Nevertheless, public-key cryptography can be used with secret-key cryptography to get the best of both worlds. For encryption, the best solution is to combine public- and secret-key systems in order to get both the security advantages of public-key systems and the speed advantages of secret-key systems. Such a protocol is called a digital envelope, which is explained in more detail in Question 2.2.4.

Public-key cryptography may be vulnerable to impersonation, even if users' private keys are not available. A successful attack on a certification authority will allow an adversary to impersonate whomever he or she chooses by using a public-key certificate from the compromised authority to bind a key of the adversary's choice to the name of another user.

In some situations, public-key cryptography is not necessary and secret-key cryptography alone is sufficient. These include environments where secure secret key distribution can take place, for example, by users meeting in private. It also includes environments where a single authority knows and manages all the keys, for example, a closed banking system. Since the authority knows everyone's keys already, there is not much advantage for some to be "public" and others to be "private." Note, however, that such a system may become impractical if the number of users becomes large; there are not necessarily any such limitations in a public-key system.

Public-key cryptography is usually not necessary in a single-user environment. For example, if you want to keep your personal files encrypted, you can do so with any secret-key encryption algorithm using, say, your personal password as the secret key. In general, public-key cryptography is best suited for an open multi-user environment.
Public-key cryptography is not meant to replace secret-key cryptography, but rather to supplement it, to make it more secure. The first use of public-key techniques was for secure key establishment in a secret-key system [DH76]; this is still one of its primary functions. Secret-key cryptography remains extremely important and is the subject of much ongoing study and research. Some secret-key cryptosystems are discussed in the sections on block ciphers and stream ciphers.
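To make the digital-envelope idea concrete, here is a minimal sketch in Python using the third-party `cryptography` package: a fast secret-key cipher (AES-GCM) protects the bulk data, while the slower public-key algorithm (RSA-OAEP) protects only the small session key. The key sizes and message are illustrative choices, not recommendations from this FAQ.

```python
# Digital envelope sketch: public-key crypto seals a random secret key;
# secret-key crypto encrypts the actual message. Requires `pip install cryptography`.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's long-term key pair; the private key never travels.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

message = b"meet at noon"

# 1. Encrypt the bulk data with a fresh random secret key (fast).
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, message, None)

# 2. Seal only the 32-byte session key with the public key (slow, but tiny input).
sealed_key = public_key.encrypt(session_key, oaep)

# Recipient opens the envelope with the private key, then decrypts the bulk data.
recovered_key = private_key.decrypt(sealed_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == message
```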
For me, beginning a piece like this, it is customary to set a framework, and with open source software the best place to start is with answering the question, "What is it?" For the answer I always go to my favored source, Wikipedia, which defines it as:

"Open source software (OSS) is computer software that is available in source code form: the source code and certain other rights normally reserved for copyright holders are provided under a free software license that permits users to study, change, improve and at times also to distribute the software."

It is interesting how Wikipedia has emerged as the main reference point for definitions and other such information, given that by and large Wikipedia is a great example of what open source is all about. Someone puts out an idea, and other people add, change, modify, improve, develop, and nurture that idea – which is just how open source software actually gets developed.

By providing customers with the source code, open source companies actually provide them with two key advantages that proprietary vendors cannot. The first is quite simple and very often overlooked. By having access to the source code, you are able to verify all of the claims, be they around security or any other feature, that the supplier may care to make. With proprietary software, you must take it on faith that what you are being told is the reality.

It was exactly this reason that helped the National Security Agency of the United States Department of Defense decide on Linux as the platform for their security environment. They saw the ability to effectively define their own architecture, and then work with other market forces to develop this component, as critical. For them, the ability to modify an architecture to suit their business requirement made much more sense than changing their modus operandi to suit some architecture that was being forced on them. More to the point, and in true open source style, they decided to provide this security infrastructure to the world as part of the operating system, an element we now know as Security Enhanced (or SE) Linux.

It is this ability to adapt technology to suit the business that is fueling the open source software phenomenon in corporations and governments both large and small. So much so that many proprietary vendors are now claiming to be "open source friendly," which is significantly different from being genuinely open source. Being "open source friendly" mostly means that a proprietary vendor will certify its closed source offerings to work with particular open source technologies at various layers in the stack. It does not mean that the vendor will provide you with a copy of the source code (as an open source vendor does), or that it has optimized its offerings to leverage the advantages of the open source code base.

There are exceptions to this, though. Companies like SAP and Intel, for example, work very closely with the open source community and encourage their developers to contribute to various open source projects, to the benefit of their customers. In SAP's case, they also work very closely with open source companies like Red Hat to ensure that on both sides the customers are provided with optimized offerings. For example, there is a customized version of Red Hat Enterprise Linux (RHEL) for SAP environments which delivers the best possible performance and stability, and it is certified and supported by both organizations.
Open source software has driven a major development from which the business community is benefiting: the democratization of software. Anybody can contribute to an open source project, and I often refer to open source developers as being amongst the bravest people in the world. My rationale is simple: how many of you reading this piece would develop something and then submit it to a community of more than 3,000,000 people for review and comment? It takes a pretty thick skin to do that, so my (Red) hat's off to them. Anybody can submit a feature, and then, by sheer weight of numbers, it gets "voted" on by the development community in terms of its relevance and importance.

Open source is here to stay - the question is, are you "open" enough to embrace it?

By George DeBono, General Manager of Red Hat Middle East and Africa
Spam can be a number of things. The original was the canned spiced ham developed by Hormel in the 1930s. Due to food rationing in Britain during the Second World War, Spam became a popular menu item, so much so that it seemed to be everywhere, in every dish, whether you liked it or not. When the Internet was created and people started using email, we started getting emails that we didn't want; these came to be known as spam. There are about a million different kinds of spam messages. Here are nine of the most popular (in no particular order) and how to identify them as spam:

- Emergency messages – These often come from family, or people on your contact list, usually asking you for money because they are stranded. While you may have relatives traveling, it's a good idea to reach out to them using other means of communication when you get an email like this. Be wary, especially if they don't want to give a phone number or exact location.

- Requests to update your account – These usually come in after a website has had a security breach. They almost always ask you to update contact information, and usually provide a link. Clicking this link will take you to a site that looks almost exactly the same as the real one, only this one usually has viruses or other malicious intent. If you ever get an email like this, read the email and the sender's email address carefully – they usually have spelling mistakes – and don't click any links. Instead, close and log out of your email, go to the website and log in.

- Requests for your password – Sometimes spammers don't even bother to set up elaborate websites; they'll just grab the company logo, make a fancy letterhead and send you an email, or message, asking you for your password. This type of spam usually comes from scammers posing as representatives of a bank or credit card company. Never, ever reply with your password. Organizations do not ask for passwords over email.

- Obvious misspellings – Unless you work with people or companies with employees who aren't native English speakers, obvious misspellings in messages, e.g., 'Here iS som3 FREE Stuffz', usually indicate the message is spam. If you're not sure, and know the sender, contact them. If you don't know the sender, or the sender has an email address like pradaoutletonlinestore4u.comGliemATgmail.com, it's spam.

- Pleas for help – This is a tough one; we all want to help people, but when we receive pleas to help the poor starving hipsters of Manhattan, you have to be skeptical. Charities don't email you unless you put your name on a mailing list, or gave them your email when you last donated.

- Contest winner – The main rule here is: if you didn't enter the contest, you're not a winner, no matter how sweet the prize. The same goes for those spam pop-ups on some of the more adult-oriented websites. You're not the 1,000,000th viewer, and clicking on the link, or shooting the three ducks, won't get you a free iPad. You will get more spam, however, or a virus if you're a really good shot.

- Chain emails – These have been circling the globe more or less since the beginning of the Internet and have now made their way onto Facebook and other social networks. The vast majority of them are harmless, but they are annoying. Think about it: you get one telling you to forward it to 10 people or a cute, fluffy kitten will be shaved. If you forward it to 10 people, you're now the spammer. If you get emails like these, they are spam; just delete them.

- Messages in attachments – Be extra cautious with this one.
If you get an email from any contact that says something along the lines of, "Please see my message in this attachment," or has nothing at all in the body, it's pretty much guaranteed to be spam. That attachment is likely some malicious software. No organizations or companies will send you messages in an attachment, so when you get one, just delete it.

- Awesome deals – Contacted out of the blue by someone offering you an all-inclusive ski trip to Steamboat Springs, Colorado for just a dollar? Or how about an LV handbag for just USD$10? These deals seem too good to be true, and what's the rule with things that seem too good to be true? They are. Just because it's in an email or chat message doesn't mean it's real. If you get these, don't click on any links or even reply to the sender; just delete or ignore them.

There's one thing in common with nearly all forms of spam: messages usually contain links. If you're ever unsure about a link, hover your mouse over it for a few seconds, and your browser should tell you where the link will take you; Chrome, for example, will display the address at the bottom of the window. If the link looks unfamiliar, or seems wrong, don't click it.

An important thing to be aware of is that spam is unwanted, or unasked for. If you sign up for a daily newsletter, that's not spam; you agreed to allow the company to send you messages. Luckily, most of these have links you can press at the bottom of the message to unsubscribe.

To learn more about spam, and how we can help you stop it, please contact us.
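The hover-before-you-click advice can be roughly automated. Below is a minimal Python sketch that flags links whose visible text names one domain while the underlying href points somewhere else, a common phishing trick; real mail filters do far more than this, and the example URLs are invented.

```python
# Flag links whose display text claims a different domain than the real target.
from urllib.parse import urlparse

def looks_suspicious(display_text: str, href: str) -> bool:
    """True when the visible text names a domain the href does not end with."""
    target = urlparse(href).hostname or ""
    shown_url = display_text if "://" in display_text else "http://" + display_text
    shown = urlparse(shown_url).hostname or ""
    return bool(shown) and not target.endswith(shown)

# The text says "paypal.com" but the link leads elsewhere (hypothetical URLs).
print(looks_suspicious("paypal.com", "http://paypal.com.account-verify.example"))  # True
print(looks_suspicious("paypal.com", "https://www.paypal.com/signin"))             # False
```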
August 1, 2016 | 8:30 am - 4:30 pm

This is a full-day tutorial on writing iOS apps in Apple's new language Swift, using the Xcode IDE on Macintosh. An iPhone or iPad app is made of intercommunicating objects, and we will learn to create and destroy them and call their methods in Swift.

- Learn to draw text and graphics
- Perform simple animations
- Display controls such as buttons and sliders
- Respond to a touch or keystroke
- Recognize a swipe or a pinch
- Use a webview object to render a page containing HTML5, and call functions in that page written in JavaScript

Prerequisite: some experience in any language with classes, objects, and methods. The instructor suggests that you acquire the current version of the application Xcode if you don't already have it. It is a free app in the App Store.

» Features and Quirks of Swift
» What Happens when an iOS App is Launched?
» Let's Draw a Still Life
» Creating a Touch Sensitive App
» Gesture Recognition
» Controls and their Target Objects
» Class UIWebView for Platform-Independent HTML5

Tutorial Presented by: New York University (NYU) Adjunct Associate Professor of Information Technologies
The very large mudslide in Washington State is a reminder of a number of things about the hazard, response operations and what we can expect in the future. I have been expecting mudslides in Western Washington since the significant increase in rainfall began after the first of this year. Snohomish County has had 15.6" of rain (about half of what we'd get on average for a full year) in the first three months of 2014. What makes these slides predictable is the combination of heavy, persistent rainfall with hillside slope, gravity and, in most cases, a layer of clay soils that can rapidly give way, leading to the type of slide we observed in Oso, Washington. The picture above says it all about what the survival rate might be for people in their homes hit directly by a slide this massive.

I have to say that I had expected slides to happen more in the City of Seattle, where they have an extensive history and written record of slide areas in the city. As noted in this linked story, people have to decide how much risk they are willing to accept. Unfortunately, not many people take the time to investigate the natural or technological hazards when they are looking to purchase a home. For instance, being near water of any type (lake, stream, ocean) and on a hillside with a view always brings increased risks.

Today the Governor was notified that the State of Washington was being given a limited Emergency Declaration, not to be confused with a Presidential Disaster Declaration. This one allows Federal resources to be brought to bear to assist in the emergency response, at the Federal Government's expense. Assistance of this type includes an incident support team, program specialists, and an Incident Management Assistance Team (IMAT). If there is to be a Presidential Declaration, it will take a more formal process to make that happen.

As to what to expect in the future from climate change in the Pacific Northwest:

• More frequent and severe storms
• Increased levels of rainfall due to warmer air temperatures
• An increased risk of flooding, failure of flood control infrastructure and, yes, an increased chance of mudslides

One of the big tasks for the local emergency management officials now is accounting for the missing. Being listed as missing is not the same as "being missing." People evacuate and go to live with friends or relatives, etc. Finding the living is an ideal task for social media; it is how students accounted for who was killed in the Virginia Tech shooting a few years ago.

Claire Rubin shared the climate change link story above.
F-Secure data indicates that what makes computers most vulnerable to exploits is unpatched or out-of-date versions of the most widely used software for PCs. Between 80-90% of users have security holes in their systems, with on average 5 different vulnerabilities in the software on their computers. Users don't remove the old versions of their programs, and the vulnerabilities in them can leave the computer wide open to malware or malicious software.

Trojans are a typical example of the kinds of programs that take advantage of vulnerabilities. They are malicious applications which appear to do one thing but actually do another, giving the criminal access to your computer. In addition to the other programs on your computer, Web browsers can also be vulnerable to exploits. Sometimes these vulnerabilities are used by criminals before an update is available from the manufacturer. F-Secure Exploit Shield is a free beta tool that recognizes attempts to exploit a known web-based vulnerability and shields the user against them. It also works against new, unknown vulnerability exploits by using generic detection techniques based on the behavior of exploits.

When giving your computer a spring cleaning, make sure:

- Your software is updated with the latest patches
- You only have programs you use installed on your computer
- You remove old versions or unused software (a quick version-audit sketch follows this list)
- Your security solution is up-to-date.
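Here is a minimal sketch of that version audit, assuming you already have an inventory of installed programs. The program names and version strings are illustrative placeholders, and the naive string comparison stands in for a real version parser; this is not output from any F-Secure tool.

```python
# Flag installed programs that lag behind the latest known release.
# Inventory and "latest" data are hypothetical; a real checker would query
# each vendor and parse version numbers properly instead of comparing strings.
installed = {"Flash Player": "9.0.28", "Java Runtime": "6u7", "PDF Reader": "8.1.2"}
latest = {"Flash Player": "10.0.12", "Java Runtime": "6u11", "PDF Reader": "8.1.2"}

for program, version in installed.items():
    newest = latest.get(program)
    if newest and version != newest:
        print(f"{program} {version} is out of date; latest is {newest}")
```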
Top 3 ways to prevent data center fires
Friday, Jul 26th 2013

Although data center fires are rare occurrences, they can cause enormous damage and shut down a network in seemingly no time at all. Macomb County, Mich., learned this lesson the hard way in April, when a fire destroyed its core IT infrastructure. However, in comparison to other external threats to data center operations, preventing fires can be somewhat difficult, since traditional fire prevention methods such as sprinklers and fire extinguishers can damage critical hardware. As such, data center operators need to take special care in crafting their fire prevention strategies, using some or all of these environmental monitoring options.

"Like data centers (once called computer rooms or electronic data processing centers), the approach to containing a fire in what is now the nerve center of your organization has significantly changed," TechRepublic contributor Tom Olzak wrote. "It isn't enough to simply install a smoke alarm and a few sprinkler heads. This only works if you don't mind having your business down for days or weeks after a fire."

1) Fuel cells

While many facilities have started using temperature monitoring equipment to lower their annual energy bills, others have begun utilizing more disruptive technologies to lower power usage effectiveness. In particular, large technology companies like Apple, Google and Facebook have turned to everything from solar power to outside air to become more eco-friendly, but only one alternative energy source purports to help stop data center fires: fuel cells.

According to GigaOM, one major data center has begun employing fuel cells at its facilities. They work by combining a fuel such as natural gas with other elements to generate electricity and heat on site. Fuel cells are more efficient than traditional power sources because they utilize cleaner-burning fuels and their proximity ensures that less power is lost in the transmission process. In terms of fire prevention, GigaOM noted that the key benefit fuel cells offer is nitrogen-filled air, which is a byproduct of the technology. So, instead of using water to put out a data center fire, facilities managers can instead pump in this nitrogenated air to suppress any flames.

2) Moisture monitoring equipment

Although humidity monitoring may not be the first fire prevention-related measure a facilities manager considers, it is crucial in this regard. Typically, most data centers worry about humidity levels being too high, as moisture then collects on hardware and potentially causes it to malfunction. However, too little humidity can lead to equally disastrous results.

The Data Center Journal reported that when the air inside a data center or server room is too dry, static electricity can begin to build up in dangerous amounts. The main concern presented by this scenario is a discharge of electricity that causes equipment to short circuit, but that static electricity can just as easily lead to a data center fire as well. To prevent these scenarios from coming to fruition, data center managers can leverage humidity monitoring equipment such as water sensors. The American Society of Heating, Refrigerating and Air-Conditioning Engineers recommends that server rooms have a relative humidity between 45 percent and 55 percent, although that range can fluctuate depending on the temperature of the facility. By using humidity monitoring equipment, managers can make sure that the data center is always within this ideal range.
3) Temperature monitoring

When it comes to fire prevention, data center managers need to have mechanisms in place to alert them when a potential incident is in progress. In this regard, temperature monitoring equipment is vital. By installing in a server room a temperature monitor that sends out real-time alerts, facilities managers can instantly know if a fire is causing internal temperatures to spike and take immediate action to prevent as much permanent damage as possible. Granted, this equipment won't necessarily help data center owners prevent fires, but it will go a long way toward limiting unplanned downtime and ensuring business continuity.
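The humidity and temperature guidance above translates directly into a simple alerting rule. The Python sketch below ties the two together; the temperature ceiling and the sample readings are hypothetical stand-ins, since the article cites only the ASHRAE humidity range.

```python
# Minimal environmental-alert check. The 45-55% RH band comes from the
# ASHRAE guidance cited above; TEMP_ALERT_C is an illustrative ceiling,
# not a published standard, and readings would come from real sensors.
RH_RANGE = (45.0, 55.0)   # percent relative humidity
TEMP_ALERT_C = 35.0       # hypothetical server-room alarm threshold

def check_environment(temp_c: float, rh: float) -> list:
    """Return human-readable alerts for out-of-range readings."""
    alerts = []
    if temp_c > TEMP_ALERT_C:
        alerts.append(f"Temp {temp_c:.1f} C over {TEMP_ALERT_C} C: possible fire or cooling failure")
    if rh < RH_RANGE[0]:
        alerts.append(f"Humidity {rh:.0f}% below {RH_RANGE[0]:.0f}%: static discharge risk")
    elif rh > RH_RANGE[1]:
        alerts.append(f"Humidity {rh:.0f}% above {RH_RANGE[1]:.0f}%: condensation risk")
    return alerts

print(check_environment(41.0, 38.0))  # two alerts on these sample readings
```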
Does keeping cyberattacks secret endanger US? - By Kevin Coleman - Sep 15, 2011

Hostile activities in cyberspace have grown, and by many accounts the growth rate has been dramatic. But few people have a real appreciation of just how big this issue actually is, and for good reason. When we look at cyberattacks, we break the collective environment into three distinct areas:

- What happens in the classified environment
- What happens and is disclosed in the open environment
- What happens and is undisclosed in the open environment

In the classified environment, it is necessary to have controls in place to protect information about cyberattacks from being disclosed, because disclosure could hinder ongoing investigations or compromise covert cyber missions. For these reasons, information about cyberattacks in this environment is typically restricted to those with a need to know.

In the open environment, businesses, government entities with nonclassified-but-sensitive data, educational institutions and other organizations can, and most of the time do, disclose when they fall victim to cyberattacks. In some cases there are regulations that actually require the disclosure of these events. Organizations have learned that proper and timely disclosure of successful cyberattacks can actually help mitigate the total damage to the organization.

In the undisclosed environment, government entities with nonclassified-but-sensitive data, educational institutions and other organizations either do not have, or choose to ignore, a requirement to disclose successful cyberattacks. When an entity is compromised, it is often concerned about how the organization will be viewed because of the incident. In other incidents, those who are responsible for securing the systems act in their own self-interest and do not inform management of the incident.

The largest area is the undisclosed environment. That is why we call the economic damage from cyberattacks in the undisclosed environment "the big unknown." In one case, a privately held company experienced a cyberattack that was successful by anyone's standard. The information on more than 200 pieces of intellectual property was copied and exfiltrated from its corporate systems. Shortly after the incident, the company noticed that a few patents had been filed in a foreign country. After examination of the foreign patent documents, it was determined that they were clearly based on pieces of the intellectual property that had been stolen.

The United States is the most innovative and creative country in the world. The national security implications associated with the theft of classified intellectual property and data are well recognized. However, the theft of our unclassified intellectual property, and the economic impact on the company and the U.S. economy, are underappreciated. The economic and national security implications of the recent, publicly disclosed "Shady RAT" cyber espionage incident, which operated for at least five years, are unknown. Researchers into this incident are quick to warn that only one of the multiple control servers was analyzed; therefore, the number of entities compromised is likely to grow, as is the amount of data and intellectual property compromised in the attack.
In a rare public statement, the Government Communications Headquarters, a British intelligence agency (much like the National Security Agency in the United States), expressed its concern and pushed for increased defenses. The United States has significant intelligence collection capabilities; many claim they are the best in the world. It is important to recognize that our intelligence agencies do not work alone. Our allies and their intelligence organizations share the intelligence they collect with us, and we respond in kind. There are those who warn that, at some point, international intelligence providers might choose to mitigate the risk to their intelligence assets and stop providing the United States with intelligence about these breaches. In fact, that could be one of the motives behind the constant attacks.

Kevin Coleman is a senior fellow with the Technolytics Institute, former chief strategist at Netscape, and an adviser on cyber warfare and security. He is also the author of "Cyber Commander's Handbook." He can be reached by e-mail at: email@example.com.
Next Up on the System i: Python
Published: April 17, 2006 by Timothy Prickett Morgan

Strictly speaking, the popular open source software stack that is abbreviated as LAMP should really be shortened to LAMPPP, since the stack is comprised of the Linux operating system, the Apache Web server, the MySQL database, and the three programming languages made popular for Web programming: Perl, Python, and PHP. While the OS/400 platform has informally supported Perl for years, and is just now getting official support for PHP, what you might not know is that two Python variants also run natively on the box.

Of course, running and having support--which means IBM's official blessing and tight integration into the unique features of the OS/400 platform--are two different things. With some notable exceptions like relational database technology, single-level store main memory (which no other computer has, as yet), and logical partitioning (which most computers now have), IBM has always taken its time supporting new software ideas on its midrange platforms. That's because the companies that buy midrange boxes want stable, easy-to-use platforms, and they, like IBM, do not like change for change's sake. This conservative stance makes for a very stable platform. But it also means that innovations occurring outside the OS/400 platform that are allowing companies to deploy new kinds of applications, often with less effort, do not come to the OS/400 platform quickly enough. Stability can lead to stagnation.

IBM's support of Perl was never more than the bare necessity. If you want to run Perl on the AS/400, iSeries, or System i, you grab the Perl source code and compile it for AIX, or grab the binaries for AIX, and then run Perl in the PASE AIX runtime environment that has been embedded in OS/400 for many years. (The third iteration of the TCP/IP stack that IBM created for OS/400 is actually the TCP/IP stack from AIX running inside PASE, and Tivoli Storage Manager and a bunch of other programs, like the OpenSSH shell, run inside PASE, too.) If you wanted to run a hybrid AIX-OS/400 or Linux-OS/400 environment, you could also run Perl in an AIX or Linux partition and then have it talk back to the database and applications running on the OS/400 and i5/OS partitions on your IBM midrange box. But this is not exactly native support. You can get more details about supporting Perl on the System i in this document and you can get the latest ports for OS/400 right here.

Larry Wall created Perl in 1987, and the interpreted programming language was designed to scan text and print out reports, but it was also used as a system management scripting language; eventually, it became a sort of glue between various pieces of a Web application (technically known as a Common Gateway Interface, or CGI, script). But Perl predates the commercialized Web by a decade.

Like many of you, I have been banging the drum to get the PHP interpreted language supported natively on the iSeries platform, and as I explained a few weeks ago, IBM and Zend Technology are working right now to get an officially supported version of the commercial Zend software stack integrated on the System i platform and distributed for free to customers buying new machines. (See "PHP Will Soon Be Native on the System i5" from two weeks ago for more on the official PHP support.)
Getting PHP support for the System i is made easier in many respects because Zend already exists as a company dedicated to providing commercialized versions of the PHP platform with enterprise-class technical support. Zend also has tight partnerships with operating system platform providers, which allows Zend to ensure that features specific to a given platform are woven into the commercialized versions of PHP. Technically speaking, getting official PHP support should not have taken very long, but it has been years in the making.

I know more about PHP than I do about Python, and what little I do know comes from hunting around for a content management system (CMS) for the IT Jungle site. There are scads of open source CMS programs built in PHP, and quite a number that are built in Python, too. Zope and Plone are the two biggies, and while they are interesting, they are not exactly what I am looking for to display our newsletters. Another popular program written in Python is Mailman, the listserver and email sending program.

In any event, my main reason for arguing for support of Python is the same reason I have argued for support of Perl and PHP on the iSeries: Any language that has millions of newbie and professional developers using it has to be on the iSeries if the iSeries, and now the System i, is not only to adapt to the modern IT world, but thrive in it. I don't care one little bit about instigating a war between RPG, Java, PHP, and Python programmers. That is like arguing over what is the better tool--a hammer, a screwdriver, a saw, or a staple gun. A toolbox has all of these tools, and more, and you use the right tool for the right job.

Python: A Quick History

Like Perl, Python predates the commercialized Internet, but it has been widely adopted because of the desire to make Web pages and applications that are native to the Web browser interface, more flexible and more easily programmed. Over the Christmas holiday in December 1989, hacker Guido van Rossum of the Netherlands was bored, so he created a descendant of the ABC scripting language for the Unix platform, dubbing it Python, after the British comedy troupe Monty Python's Flying Circus (who also brought us the wonderful term "spam" in its Web usage, not meat usage). Python has been controlled by various organizations throughout its history, but van Rossum, who is known as Benevolent Dictator For Life, or BDFL, remained the spiritual and technical leader of the project, and he created the Python Software Foundation in 2001. At that time, van Rossum and his cohort at PythonLabs were finishing up Python 2.0 and were also getting jobs in the commercial software field. (Van Rossum eventually took a job for three years at Zope, the CMS company, and then left to join Elemental Security, a security and risk assessment software company based in San Francisco.) Since Python 2.1, all intellectual property relating to Python has been owned and controlled by this non-profit foundation, and the Python license was tweaked to be compatible with the GNU General Public License (GPL). In December 2005, search engine giant Google hired van Rossum, and paid him to dedicate half of his time to Python development. The latest Python release is Python 2.4.3, which is brand-spanking-new, having hit the Web on March 29.

Python is an interesting language in a number of ways, mainly because it was designed to use as much English as possible and have a very simple, easy-to-read syntax.
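To give a flavor of that English-like, indentation-driven style, here is a tiny, invented example (not code from any of the ports discussed in this article):

    # Indentation alone delimits the blocks; there are no braces or END markers.
    def describe(numbers):
        for n in numbers:
            if n % 2 == 0:
                print(n, "is even")
            else:
                print(n, "is odd")

    describe([1, 2, 3])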
And, for you die-hard coders in card-walloper languages like RPG and COBOL, white space and columns mean something in Python, and the Python crowd doesn't go in for the whole free-form, new-fangled approach, because Python coders believe that indenting means something and makes code more readable. (I could not agree more, and I don't have to look at anything but HTML code all day.)

It is hard to reckon how many Python programmers there are worldwide, if that is any measure of the importance of any language. In September 2003, the Python Software Foundation estimated that there were roughly 175,000 to 200,000 Python programmers worldwide, with about half of them being in Europe, based on sales of Python books. It is hard to say how the Python installed base has grown over the past three years, but it could have easily doubled or tripled.

There isn't just one Python, by the way. There are a few different Pythons. The core Python created by the foundation and written in C runs on Linux, Windows, and Mac OS X (which has a variant of the open source BSD Unix platform underneath its pretty Mac windows). Python is often distributed with commercial Linux distributions. There are special ports of Python for AIX and Linux on IBM's Power processors, as well as a port to the AS/400-iSeries-System i platform, which was created from the Python 2.3.1 source code and moved to the OS/400 platform by Per Gummedal. (This project has its own site, called iSeriesPython.) The latest Python release supported on the iSeries is 2.3.3, and it is available on OS/400 V4R5 and V5R2. Presumably it will also work on other V5 releases. The C source code is also available at the site, which means you can compile your own Python interpreter if you have the ILE C compiler on your box. Gummedal has created specific modules to interface Python with OS/400's file system and the DB2/400 database. And, if I may be so bold, it looks to me like Gummedal could use a little help with the OS/400 ports.

Anyway, there are also ports for z/OS and S/390 on IBM mainframes, OS/2 (if you can believe it), as well as for obscure operating systems such as Aros for Amiga PCs and BeOS, the failed but technically elegant platform made for Apple Macs before Apple moved to BSD. There are even versions for PDAs and cell phones.

The other Pythons are also very interesting. Jython is a version of the Python programming language that is written in Java rather than C, which means it can run on any machine that supports a Java Virtual Machine--such as the OS/400 platform. Interestingly, Jython can be semi-compiled down into Java bytecodes to boost performance, just like Java itself can be. Jython is at the Python 2.1 release level right now, and has been in alpha for the Python 2.2 level since last July. IronPython is an implementation of Python created by Jim Hugunin that is written in Microsoft's C# language and can run in the .NET Common Language Runtime or in the open source Project Mono clone of the .NET runtime. (Mono was created by a company called Ximian, which was founded by Miguel de Icaza, the creator of the Gnome graphical user interface for Linux, who is now the vice president of developer platforms at Novell.) There is even a project called PyPy, an implementation of Python that is written in Python rather than C, which is being done by Armin Rigo. He created a just-in-time compiler for Python called Psyco, which can significantly boost the performance of Python.

Why i for Python?
While PHP arguably has an order of magnitude more programmers than Python, the size of the programmer installed base is not everything. More importantly, the System i platform is in no position to limit itself. It needs all the help it can get to extend its reach in the data center, and I think that Python support can help, particularly when it comes to CMS software for automating Web sites. The one thing that Python needs is a commercial organization to provide installation and technical support for Python, like PHP has with Zend. While many Python-based programs have this type of support--Zope and Plone certainly do--this is a serious shortcoming for Python. This could turn out to be an opportunity for some intrepid Four Hundred gurus. So if you are bored and you feel like building a new business, this might be just such an opportunity.

I know that many of you die-hard RPGers and COBOLers have very little or no use for Perl, PHP, or Python. But this isn't about you. It's about the millions of newbies in the world who do have uses for Perl, PHP, and Python and can make the System i do things. The System i needs them as much as it needs you. I have a few other ideas for what should be included in the System i, and I will share those with you in the coming weeks.
Technology from little-known Panasas shatters an old I/O bottleneck.

Supercomputing's newest baby on the way is a $35 million system named after a cartoon character and a muscle car: Roadrunner. Currently being built by IBM for the Department of Energy's Los Alamos National Laboratory, in Los Alamos, N.M., Roadrunner could develop into a machine capable of achieving a never-before-sustained speed of 1,000 trillion calculations, or one petaflop, per second. If that sounds like a lot of computational intelligence, trust us, it is. Roadrunner could become the next-generation supercomputer rock star for the DOE's stockpile stewardship program, which helps ensure that the U.S. nuclear weapons stockpile is safe and reliable so nobody has to reinstitute underground nuclear testing.

Roadrunner is being built entirely from commercially available hardware and is based on Red Hat's RHEL (Red Hat Enterprise Linux) 4.3 operating system. IBM System x3755 systems based on Advanced Micro Devices' Opteron processors are being deployed in conjunction with IBM BladeCenter H systems with powerful new Cell chips, the latter originally intended for high-end video games. Roadrunner is expected to help usher in a new computing paradigm, in which hybrid architectures are used for extreme-scale computation. Suffice it to say Roadrunner is intended to become the fastest computer in the world.

While the processors get faster, the architecture and I/O more efficient, and the software better tuned, there remains a major issue: With that firehose stream of calculation data going through the system, how can a storage system be built big enough to get its digital arms around the entire load? Storage I/O has historically been the biggest, nastiest bottleneck for supercomputing. No more.

Eight-year-old, 125-employee Panasas, of Fremont, Calif., was retained by the Roadrunner team to deploy the Panasas ActiveScale 3.0 Storage Cluster as the storage package for the new petascale supercomputer, and the problem is being solved. Roadrunner will run extremely complex scientific calculations using the Linux operating system and the Panasas Storage Cluster with DirectFlow. The DirectFlow capability offers a fully parallel data path called PNFS (Parallel Network File System) to allow high-speed, direct communications between the Roadrunner team's Linux cluster and Panasas storage cluster nodes. Conventional storage systems use one two-way head controller to direct data traffic. Panasas alone features PNFS, which Panasas founder Garth Gibson, an internationally known inventor of RAID storage, has championed from the beginning. PNFS features two two-way head controllers; imagine adding a second two-way roadway over an existing two-way highway.

"PNFS separates the metadata access from the data path, allowing clients to get direct and parallel access to NAS [network-attached storage]," Henry Baltazar, an analyst with The 451 Group, told eWEEK. "With a standard SAN [storage area network] or NAS storage system, a single controller head can wind up being a bottleneck, especially in performance-critical environments such as HPC [high-performance computing]. The main advantage of clustered storage systems is that they spread the load across multiple systems to ensure high-speed data access."

The parallel file system is an "absolutely crucial part of the new Roadrunner ecosystem," Mike Karp, an analyst at Enterprise Management Associates, told eWEEK.
"The type of calculations that Los Alamos runs are at a level of complexity that demands parallelized computing processes," Karp said, "which in turn means that the data must be delivered to the various CPUs simultaneously, with very low latency, and at a very high I/O rate, to ensure that calculations can be executed at the same timethat is, in parallel." Panasas Gibson told eWEEK that "reliability and integrity" are the two main hallmarks of the Panasas storage system. "When something really bad happensdisk read errors during disk failure rebuilds and maybe a network error thrown in for sportPanasas does not toss away terabytes of data just because a tiny amount of data is unreachable," Gibson said. "Instead, Panasas automatically fences off the file containing problematic data and makes the rest of the terabytes of data available to applications and users without interruption." Will this parallel file system structure eventually work its way into enterprise computing? "Panasas is continuing to drive for faster parallel I/O handling for the very-high-end supercomputing environment," Tom Trainer, an analyst with Evaluator Group, told eWEEK. "This is certainly a niche that most other storage vendors do not see as a large and profitable endeavor. But this is where the companies such as EMC and IBM are missing an opportunity." Panasas and BlueArc know that supercomputing is starting to have a trickle-down effect into the business computing environment, Trainer said. "More and more data is being created at alarming rates," Trainer said. "Credit card companies, for example, must move client account information at lightning speed and analyze for fraud detection at increasingly faster rates. Supercomputers are starting their walk into the data center, and as they step in, there will only be a small number of vendors positioned to provide the requisite storage products required by these data- munching monsters." Baltazar of The 451 Group had a different take. "The cluster technology that is around today is [all] proprietary," he said. "The forthcoming PNFS standard [which Gibson has been promoting in standards bodies for years] will help this technology move closer to the enterprise, but at this point, this technology will be confined to niche markets, such as HPC." Check out eWEEK.coms for the latest news, reviews and analysis on enterprise and small business storage hardware and software.
ML and MR codes format numbers and justify the result to the left or right respectively. The codes provide the following capabilities:

ML provides left justification of the result. MR provides right justification of the result.

n is a number from 0 to 9 that defines the decimal precision. It specifies the number of digits to be output following the decimal point. The processor inserts trailing zeros if necessary. If n is omitted or is 0, a decimal point will not be output.

m is a number that defines the scaling factor. The source value is descaled (divided) by that power of 10. For example, if m=1, the value is divided by 10; if m=2, the value is divided by 100, and so on. If m is omitted, it is assumed to be equal to n (the decimal precision).

Z suppresses leading zeros. Note that fractional values which have no integer part will have a zero before the decimal point. If the value is zero, a null will be output.

, is the thousands separator symbol. It specifies insertion of thousands separators every three digits to the left of the decimal point. You can change the display separator symbol by invoking the SET-THOU command. Use the SET-DEC command to specify the decimal separator.

If a value is negative and you have not specified one of the credit indicators, the value will be displayed with a leading minus sign. If you specify a credit indicator, the data will be output with either the credit characters or an equivalent number of spaces, depending on its value.

$ specifies that a currency symbol is to be included. A floating currency symbol is placed in front of the value. The currency symbol is specified through the SET-MONEY command.

fm specifies a format mask. A format mask can include literal characters as well as format codes.

The justification specified by the ML or MR code is applied at a different stage from that specified in field 9 of the data definition record. The sequence of events starts with the data being formatted with the symbols, filler characters and justification (left or right) specified by the ML or MR code. The formatted data is then justified according to field 9 of the definition record and overlaid on the output field, which initially comprises the number of spaces specified in field 10 of the data definition record.

Input conversion works with a number that has only thousands separators and a decimal point; leading and trailing parentheses are ignored on input.
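As a rough illustration of how these options combine, the following Python sketch loosely emulates an MR-style conversion. It is a simplified emulation written for this description, not jBASE code, and it covers only decimal precision, descaling, thousands separators and a floating currency symbol (justification and field overlay are omitted):

    def mr_format(value, n=2, m=None, thousands=True, currency=""):
        """Loosely emulate MRn,m: descale, fix the precision, insert separators."""
        if m is None:
            m = n                      # the scaling factor defaults to the precision
        descaled = value / (10 ** m)   # descale the stored value by a power of 10
        text = f"{descaled:,.{n}f}" if thousands else f"{descaled:.{n}f}"
        return currency + text

    print(mr_format(123456789, n=2, currency="$"))   # -> $1,234,567.89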
BALTIMORE, MD--(Marketwired - Feb 24, 2014) - Kiddie Academy® says that Dr. Seuss's birthday, March 2, 2014, is a great day for families to think about ways to give their children the gift of a lifelong love of reading.

"While learning to read is a gradual process, involving letter sounds, vocabulary, grammar and comprehension, learning to enjoy reading is something that children do best at home," said Richard Peterson, vice president of education for Kiddie Academy® Educational Child Care. "When children read only in school, they may see reading only as school 'work'; but when children see their parents and siblings reading at home for entertainment or sit down with a family member to share a favorite book, they get the message that reading can be something you do for fun. And that's an essential step in raising a lifelong reader."

Each year, on Dr. Seuss's birthday, the National Education Association (NEA) sponsors its "Read Across America Day" in honor of the well-known children's book author. The message of this celebration is clear: encourage children to read -- for enjoyment, for knowledge, for relaxation, for life. Kiddie Academy® shares three tips that families can use to bring the Read Across America celebration into their own homes:

- Be a reading role model. Make sure your child sees you reading to gain knowledge and for entertainment. Talk about your favorite childhood books and introduce them to your child.
- Make reading together interactive. Don't rush to "finish" the book. Ask your child questions about illustrations and characters; encourage your child to make predictions and observations about the story; invite your child to "retell" the story; and take turns reading out loud.
- Have your child read you a book. Even pre-readers will enjoy "reading" to you if you pick a book that they know well. Turn the pages as they tell you the story, prompted by their memory and the book's illustrations.

"The benefits of making reading a part of your child's everyday world go far beyond 'learning to read,'" said Peterson. "Activities like reading together, sharing favorite books, and discovering new interests and ideas together not only help boost your child's literacy skills; they help build treasured family memories."

Kiddie Academy® is a leader in education-based child care, offering full- and part-time care, before- and after-school care and summer camp programs to families and their children. For more information, visit www.kiddieacademy.com.

About Kiddie Academy®
Since 1981, Kiddie Academy® has been a leader in education-based child care. The company serves families and their children ages 6 weeks to 12 years old, offering full-time care, before- and after-school care and summer camp programs. Kiddie Academy's proprietary Life Essentials® curriculum, supporting programs, methods, activities and techniques help prepare children for life. Kiddie Academy uses the globally recognized AdvancED accreditation system, signifying its commitment to quality education and the highest standards in child care. For more information, visit www.kiddieacademy.com.

About Kiddie Academy® Franchising
Kiddie Academy International, Inc. is based in Maryland and has nearly 120 academies located in 23 states, including two company-owned locations. Approximately 70 additional academies are in development, with 15 to 20 new locations slated to open each year. For more information, visit www.kafranchise.com.
Reducing power usage and cutting carbon emissions is probably the right thing to do for the future of the planet. But keep this in mind: Green is a powerful marketing term right now, and cost-savings promises are part of the marketing pitch. Like all marketing promises, results vary. One example: The amount of money a typical consumer can save by using or powering down energy-efficient computers, printers and the like is often small—in the case of an up-to-date laptop, the energy savings add up to perhaps just $10 a year.

I'm no denier of climate change, but technology users should always be skeptical. Just because a cause seems worthy doesn't mean accepting conventional wisdom at face value is smart. Energy conservation is no exception. The purely economic benefits of power-saving lighting, heating and air conditioning systems dwarf the savings to be had by buying an "Energy Star PC," or simply turning off your electronic gear when not in use. Unless electricity gets much more expensive than it is—on average, most customers pay about 10 cents a kilowatt hour—those economics won't change.

Even more disillusioning was the recent news that the vaunted Energy Star certification program run jointly by the Department of Energy and the Environmental Protection Agency is deeply flawed. Unlike many government programs, Energy Star resonates in the minds of consumers, and there's no end of advertising and commentary that tells us to look for the familiar blue logo.

[ For more on Green IT, see CIO.com's Green IT Hype vs. The Real Deal and our case study, How Raytheon's IT Department Helps Meet Green Goals. ]

So when you learn that government auditors were able to win Energy Star certification by filing bogus applications for non-existent products made by non-existent companies, who wouldn't feel cynical?

Sleeping Computers and Saving Money

When a laptop or desktop computer is asleep, your work is in active memory, but the hard drives have stopped spinning, the display is dark and the microprocessor is idle. As a result, power use drops sharply. A fully awake desktop system made in the last year or two uses some 60 watts of power, but consumes just three watts when asleep. Laptops use less power to begin with, perhaps 20 watts, and that drops to about 2 watts when the laptop is asleep, according to Bruce Nordman, a researcher at the Lawrence Berkeley National Laboratory.

Well, that sounds like it should save plenty of cash. But let's do the math. To calculate energy use, multiply the watts by the hours used; divide the result by 1000 to calculate kilowatt hours, and multiply that by 10 cents for the average cost of electricity. Do the same calculation for the sleep mode, but remember, your machine won't be asleep 24 hours a day. Instead, let's say that you'll let it sleep 16 hours a day. The result: annual savings of about $10. That's right, annual. The savings on a power-hungry desktop are greater, but still just about $33 a year.
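Plugging the article's own figures into that formula confirms the point; here is the arithmetic as a short Python check (the wattages, hours and 10-cents-per-kilowatt-hour rate are the numbers already quoted above):

    RATE = 0.10  # dollars per kilowatt hour, the average cited above

    def annual_savings(awake_watts, asleep_watts, sleep_hours_per_day):
        """Dollars saved per year by sleeping a machine instead of leaving it awake."""
        saved_kwh_per_day = (awake_watts - asleep_watts) * sleep_hours_per_day / 1000
        return saved_kwh_per_day * 365 * RATE

    print(round(annual_savings(20, 2, 16), 2))   # laptop: about 10.51 dollars a year
    print(round(annual_savings(60, 3, 16), 2))   # desktop: about 33.29 dollars a year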
Meanwhile, screensavers not only don't save energy, they waste it. That's because those pretty designs and animations take a good deal of processing power, which in turn requires electricity. I'm not saying don't put your PCs or Macs to sleep. You should, because there's no reason to waste energy. But understand that you'll hardly notice the difference on your monthly power bill.

True Story of the Gas-Powered Alarm Clock

I've never been comfortable with the Energy Star system. It reminds me of a pre-school class in which everybody gets an A to be sure all of the kids have plenty of self-esteem. Have you noticed that it seems almost impossible to find a more or less mainstream PC that doesn't have Energy Star certification? So I wasn't altogether shocked when the Government Accountability Office issued a scathing and funny indictment of the program.

Donning the mantle of investigative reporters, GAO staffers submitted applications for 20 or so fake products made by non-existent companies. Fifteen of those products passed muster with the Energy Star bureaucracy, including two that are so hilariously improbable it seems like a practical joke. One was a heck of an invention, a gasoline-powered alarm clock, said to be the size of a small generator. "Product was approved by Energy Star without a review of the company Web site or questions of the claimed efficiencies," the GAO wrote. My other favorite: the room air cleaner. The product is depicted as "a space heater with a feather duster and fly strips attached."

This would be even funnier if taxpayers weren't paying for a program that steers well-meaning consumers to manufacturers who promise, but don't deliver, energy-saving products. Or as Senator Susan Collins (R-Maine), who requested the audit, put it in an interview with The New York Times: People "are ripped off twice," as consumers and as taxpayers.

The moral? Your skepticism: don't leave home without it.

San Francisco journalist Bill Snyder writes frequently about business and technology. He welcomes your comments and suggestions. Reach him at email@example.com. Follow everything from CIO.com on Twitter @CIOonline.
Work on a new version of the Internet Protocol, known as IPv6, has been under way for several years in the IETF. There is still some debate about when and how IPv6 will be deployed. Proponents of IPv6 argue that the demand for new IP addresses will continue to rise to a point where we will simply run out of available IPv4 addresses and that we should, therefore, start deploying IPv6 today. Opponents argue that such a protocol transition will be too costly and painful for most organizations. They also argue that careful address management and the use of Network Address Translation (NAT) will allow continued use of the IPv4 address space for a very long time. Regardless of the timeframe, a major factor in the deployment of IPv6 is an appropriate transition strategy that allows existing IPv4 systems to communicate with new IPv6 systems. A transition mechanism, known as "6to4," is described in our first article by Brian Carpenter, Keith Moore, and Bob Fink.

In previous editions of this journal, we have looked at various security technologies for use in the Internet. Security mechanisms have been added at every layer of the protocol stack, and IP itself is no exception. IP Security, commonly known as "IPSec," is being deployed in many public and private networks. In our second article, William Stallings describes the main features of IPSec and looks at how IPSec can be used to build Virtual Private Networks.

Our final article is a critical look at Quality of Service (QoS) in the Internet. The need to provide different priorities to different kinds of traffic in a network is well understood, and the technical community has been hard at work developing numerous systems to address this need. Geoff Huston looks at the prospects of deploying QoS solutions that will operate across the Internet as a whole.

The Y2K transition has been described as a "nonevent" by many. However, the lessons learned and the collaborative coordination efforts that were put in place for this transition can hopefully be used in the future. A colleague of mine had to call a plumber to his house on New Year's Eve. When he tried to pay for the repair with a credit card which had "00" as the expiration year, the plumber insisted that this meant the card was invalid. So while most systems were "Y2K compliant," this particular plumber was clearly not. Do you have a Y2K story to share? Drop us a line at firstname.lastname@example.org

-Ole J. Jacobsen, Editor and Publisher
email@example.com
A crimping tool is designed to crimp or connect a connector to the end of a cable. Crimping tools and cable stripping tools help properly crimp and strip different types of wire, such as solid wire, stranded wire, Teflon, PVC, neoprene, rubber, nylon, etc. Apart from their place among computer networking tools, network cable crimping tools have applications in the electronics, data, voice, video and signal industries.

Network crimping tools are used to create telephone cables and network patch cables. They allow a user to trim the network or telephone cabling to size by means of an attached cable cutter. You can also strip the insulation from the individual conductor wires within the cabling with the integrated wire stripper. Interface ends, such as RJ-12 and RJ-45 connectors, may be attached to the cable by means of the network tool's crimper head. A network crimping tool is actually three tools in one package: a wire cutter, an insulation stripper, and a crimper.

While preparing networking cables for creating computer networks such as LANs (Local Area Networks), one has to be very careful and precise. One has to use a variety of wire connector assembly and installation tools, including cable tie guns, crimpers, cutters, pliers, punch down tools, screwdrivers, splicers, strippers, and cable pulling grips.

Now you know that network crimping tools are helpful in telephone networks and computer networks. But do you know how to use one? Here are the instructions:

1. Insert the free end of the Category 5 cabling between the network crimping tool's wire cutter blades, pull approximately one foot of cabling through the gap between the wire cutter blades, and cut the cabling.

2. Insert approximately 1/2 inch of one end of the cut length of cable into the wire stripper socket, and squeeze the network crimping tool handles together. Remove the stripped end from the wire stripper socket, and insert the unstripped end 1/2 inch into the wire stripper socket. Squeeze the network crimping tool handles together, and remove the stripped wire from the wire stripper socket.

3. Untwist the wire pairs at each end of the length of cable, and straighten each wire. Clip the wires so that each wire is the same length. Arrange the wires from left to right in this order: white/orange, orange, white/green, blue, white/blue, green, white/brown, brown.

4. Slip the wires into the RJ-45 connector with the prong facing downward (away from you). Push the wires all the way to the end of the plug. Slip the tool's crimper socket over the RJ-45 connector, and squeeze hard to crimp the plug.

Fiberstore offers a wide range of cable crimping tools, which are necessary tools for network professionals. The wire used to make network cables is inexpensive when purchased from Fiberstore.com, and the connectors are also reasonably priced. Once you have the crimp tool, the cost of making your own cable is a bargain compared to buying ready-made network cables.
Image Spam—an e-mail solicitation that uses graphical images of text to avoid filters—is not new. Recently, though, it reached an unprecedented level of sophistication and took off. A year ago, fewer than five out of 100 e-mails were image spam, according to Doug Bowers of Symantec. Today, up to 40 percent are. Meanwhile, image spam is the reason spam traffic overall doubled in 2006, according to antispam company Borderware. It is expected to keep rising. Here's a graphical look at some of the techniques image spammers have used to try to beat your filters. First we'll zoom in on some of the details in this sample email.

[image spam email]

1. GIF Layering

Just as word splitting divides words into multiple images to elude spam filters (see number three), an image spam can be divided into multiple images. Like the transparent plastic overlays in Gray's Anatomy, pieces of a message are layered to create a complete, legible message. In this rudimentary example, the spam is divided into three pieces (cut in the middle of letters for added obfuscation). But one message could comprise as many as a dozen layered GIFs.

2. Optical Character Recognition Duping (Through Color Alteration)

Optical character recognition (OCR) is the closest to sight that computers get. OCR works by measuring the geometry in images, searching for shapes that match the shapes of letters, then translating a matched geometric shape into real text. To defeat OCR, spammers upset the geometry of letters enough—by altering colors, for example—so that OCR can't "see" a letter even as the human eye easily recognizes it. The effect is something like blurred characters in an eye test.

3. Word Splitting and Ransom Notes

If OCR catches up to the color tricks in image spam, a spammer's next defense is word splitting. By dividing the image and leaving space in between the pieces, any image the OCR engine is examining is only a piece of a letter with its own distinct geometry. Instead of word splitting, some spammers have employed a ransom note technique in which each letter in the spam message is its own image, and each letter image includes background noise and other baffling techniques. A program cobbles together randomized letter images to make words. The effect looks like a classic ransom note with a mishmash of letters cut out from magazines.

4. Geometric Variance

Many filters can intercept mass mailings based on their sameness. Images, though, can be altered easily without disturbing the message inside them. Thus one spam message will arrive as dozens of differently shaped images, and each time the colors of the text images will have changed, as will the randomly generated speckling and pixel and word salads. No two images are alike despite the fact that they carry similar messages. Shown are two radically different images containing the same stock tip. The technique is popular as a scheme to boost prices of low-value stocks. In March, the SEC suspended trading on 35 such stocks that were the subject of these image spam messages, including some whose prices rose.

5. Speckling and Pixel Salad

Confetti-like speckles don't affect the legibility of the necessary information but make every message unique to confuse a filter looking for patterns or high volumes of identical images. Similarly, a bar of randomly generated color pixels can contain the vast majority of the image data. To a filter it's full of patternless noise. We can see the words in the message while the image at the bottom doesn't bother us.
6. Hyperlink Elimination/Word Salad/Animated GIF

Filters have improved their ability to find and trace spammy URLs and then block a message based on the inclusion of a bad link. To get around this, spammers will ask recipients to type the URL into their browsers. Other methods include word salads: passages of text, often taken from classic novels, inserted to confuse Bayesian filters and weighted dictionaries that rely on complex math or word scoring to determine the probability that some combination of words is spam. The filter sees predominantly natural text it can't flag as illegitimate. Another technique used to bypass filters consists of programming a GIF to slowly overlay its layers, similar to GIF layering, to create an animated GIF. Here, with www.dvarx.com, each letter is a GIF layer. As the layers are stacked, it looks to the eye like someone typing the letters into the address bar.
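To see why a word salad works, consider a toy version of a Bayesian score (an invented illustration with made-up word probabilities, not a real filter): padding a message with neutral prose drags the average "spamminess" of its tokens down toward harmless.

    # Made-up per-word spam probabilities for a toy filter.
    spamminess = {"stock": 0.95, "alert": 0.90, "whale": 0.20, "harpoon": 0.15, "ship": 0.25}

    def message_score(words):
        """Average per-token spam probability; real filters combine odds, but the dilution effect is similar."""
        return sum(spamminess.get(w, 0.5) for w in words) / len(words)

    print(message_score(["stock", "alert"]))                              # 0.925: flagged as spam
    print(message_score(["stock", "alert", "whale", "harpoon", "ship"]))  # 0.49: slips through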
Session Initiation Protocol (SIP) is used for controlling multimedia communication sessions over an IP network. Common applications include voice over IP (VoIP), videoconferencing, streaming multimedia, on-line gaming, and instant messaging. SIP is the protocol of choice for VoIP, and is used to create, modify, and terminate VoIP sessions, including functions such as call transfer, conference calls, and call hold. This very high-level protocol operates primarily in the Application Layer (Layer 7) of the OSI model. Because SIP runs independently of the Transport Layer (Layer 4), it works with most transport protocols, including TCP and UDP.

Much like HTTP, SIP is a text-based protocol. SIP messages contain only as much information as is needed for each session, so it's very efficient and can expand and contract to meet each application's specific requirements. This extensibility makes SIP incredibly versatile, enabling it to cover functions ranging from simple VoIP calls to complex multi-user videoconferencing. SIP uses proxy servers to route requests, authenticate users, and provide features such as voice mail.

SIP performs five basic functions:

- User Location finds another user by way of an address, not unlike an e-mail address.
- User Availability determines whether a user answers a request to communicate. A user may be registered under several addresses, in which case SIP may transfer an unanswered call to another address, which may be another device or an application such as voicemail.
- User Capabilities checks for compatibility between clients.
- Session Setup establishes session parameters for both called and calling party.
- Session Management handles changes to the call status, including transfer and termination of sessions, modifying session parameters, and invoking new services.
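Because SIP is text-based and transport-independent, a minimal request can be composed and sent with nothing more than a UDP socket. The Python fragment below is only an illustration with placeholder names and addresses; a real user agent must also listen for responses, handle retransmissions and authentication, and carry an SDP body to negotiate media.

    import socket

    # Placeholder names and addresses, for illustration only.
    invite = (
        "INVITE sip:bob@example.com SIP/2.0\r\n"
        "Via: SIP/2.0/UDP client.example.org:5060;branch=z9hG4bK776asdhds\r\n"
        "Max-Forwards: 70\r\n"
        "From: <sip:alice@example.org>;tag=1928301774\r\n"
        "To: <sip:bob@example.com>\r\n"
        "Call-ID: a84b4c76e66710\r\n"
        "CSeq: 1 INVITE\r\n"
        "Contact: <sip:alice@client.example.org>\r\n"
        "Content-Length: 0\r\n"
        "\r\n"
    )

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP here; TCP works equally well
    sock.sendto(invite.encode("ascii"), ("proxy.example.com", 5060))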
President Obama last week signed an Executive Order allowing the U.S. to impose sanctions on people and organizations that threaten the U.S. in cyberspace, ratcheting up pressure on allies and adversaries alike to police their cyber citizens.

"Cyberthreats pose one of the most serious economic and national security challenges to the United States," Obama said in a statement posted on the website Medium. "That's why … I'm for the first time authorizing targeted sanctions against individuals or entities whose actions in cyberspace result in significant threats to the national security, foreign policy, or economic health or financial stability of the United States."

The order empowers the Secretary of the Treasury – in tandem with the U.S. Attorney General and Secretary of State – to freeze financial assets, bar cyber attackers from trading goods and technology, and cancel or bar individuals from gaining travel visas.

Why It Will Work

1. The Executive Order provides more tools to deter cyber-terrorism in the future.

To meet the threshold for sanctions, attacks would need to meet four "harms," writes Ellen Nakashima in the Washington Post: "attacking critical infrastructure such as a power grid; disrupting major computer networks; stealing intellectual property or trade secrets; or benefiting from the stolen secrets and property." Michael Daniel, Obama's cybersecurity adviser, said the law acts as a deterrent and punishment, filling a gap in U.S. cybersecurity efforts where diplomatic or law enforcement means are insufficient, according to a report from the Reuters newswire. Elias Groll, an assistant editor at Foreign Policy, supports the sanctions, writing "when such transactions are 'dollarized,' U.S. officials have a prime opportunity to strike back at foreign hackers and their backers by seizing their funds as it transits through a U.S. bank." While the executive order gives the administration broader authority to act, Daniel said it will be narrowly targeted at specific malicious activities. But even if sparsely used, the threat of sanctions could discourage cyberattackers from going after the U.S.

2. It will serve as a catalyst to bolster cybersecurity measures around the globe.

Fear of sanctions could make other countries more vigilant about policing cybercrime within their borders. The order applies to anyone who steals American trade secrets or defrauds citizens of their personal information, and can apply not only to rogue entities but also to state sponsors. By setting a baseline for all nations to safeguard their cyber activity, it aims to force countries to pay attention to cybercrime within their borders. The North Korea attack "highlighted the need for us to have this capability," Daniel said, according to Foreign Policy. Shannon Tiezzi at The Diplomat wrote that the order gives the president new ammunition. "The U.S. government has already proven willing to publicly charge Chinese citizens with hacking," Tiezzi wrote. But "the new executive order could mean crippling sanctions for Chinese firms," something that wasn't possible before.

Why the Order Won't Work

It will be too hard to enforce. It's not that the executive order exceeds the president's authority, as some suggested. The International Emergency Economic Powers Act empowers the president to impose sanctions beyond U.S. borders. But whether the administration can enforce the rules is another matter.
John Reed Stark, a former head of Internet Enforcement for the Securities and Exchange Commission, cited the high number of state-sponsored cyberattacks and the difficulty of identifying hackers, according to Reuters. Peter Baker reported in the New York Times that unilateral sanctions are useless against non-state actors that operate globally. "In contrast to states like North Korea or Russia that are sanctioned for traditional violations of international norms, hackers dwell in a murky digital world cloaked in ways that make them difficult to catch," he wrote. The power of the sanctions, therefore, relies on the United States' ability to raise the bar for foreign governments and press them to police their own citizens. That will work well enough with American allies. It may be harder with those operating apart from U.S. influence.

Join the conversation. Post a comment below or email me at firstname.lastname@example.org.
Back on February 3, NASA released a memo saying that it is determined to move forward with a "human-tended waypoint" around the far side of the moon, according to MSNBC. The base's location would be in a pocket of space where the combined gravitational pull of two large bodies, each tugging in a different direction, creates in essence a "parking spot," called a libration or Lagrangian point. This point, just beyond the Moon, is technically known as the "Earth-Moon Libration Point 2," or EML-2, and it is the point of interest for NASA's current study. It would provide scientists and astronauts a jumping-off point toward further exploration of areas such as the Moon, Mars, and the moons surrounding the red planet.

Such a base could also be pivotal for telerobotic science on the moon's far side, where some of the oldest-known impact craters exist, in addition to serving as a place to service and build satellites and telescopes and to test advancements in radiation shielding for long-term space flight. Also high on the list of awesome things to experiment with is the possibility of jumping from the base to the surface of the moon to test building large, permanent structures using hardened regolith, which literally means "a blanket of rocks," the loose material that litters the surface of the Moon. Radiation-shielded habitats as well as solar arrays and the like are the tip of the iceberg if science can take the rock and debris on the surface of another planet and form it into habitable expanses.

NASA is really pushing for international and academic partnerships for this, which has the big advantage of drawing on private sector and institutional research and development technology, as well as the potential for capital where it's needed in the face of government cuts to critical programs and services. President Obama just released the 2013 budget for NASA, and with almost 20% in cuts slated to go into effect, NASA can use all of the help it can get. While a program to explore or build a base station on the far side of the moon isn't explicitly mentioned in the cuts (like the joint Mars missions have been), we'll be keeping our fingers crossed that this program makes it and that we'll see far-moon bases in our lifetimes.

NASA's most recent astronaut class application deadline was January 27th, but if you didn't manage to apply in time, don't worry. It looks like deep space is in our future, and the future is deep space exploration.

Jason's love of space exploration extends to writing, reading and watching whatever he can about it. The idea of space rations and the cold void of uncaring deep space will more than likely keep him firmly planted on this world. You can follow him on Twitter or Google Plus.

This story, "NASA plans an outpost on the far side of the moon" was originally published by PCWorld.
According to researchers with Wake Forest University in North Carolina – who are working with the Pacific Northwest National Laboratory – when an ant comes across an intruder, other members of the colony will assist and help deal with the unwelcome visitor. This type of "swarming intelligence", say the researchers, is at the heart of the software under development and, claims Errin Fulp, the university's professor of computer science, gives it the ability to monitor an electrical power grid, looking for all types of malware.

If the approach proves successful in safeguarding the power grid, Fulp's team says it could have wide-ranging applications in protecting anything connected to SCADA (Supervisory Control and Data Acquisition) networks, the computer systems that control everything from water and sewer management systems to mass transit systems to manufacturing systems.

Fulp and his team are working with scientists at Pacific Northwest National Laboratory in Richland, Washington, on the next steps in the digital ants technology, which has taken several years to develop. The university claims that the approach is so promising that it was named by Scientific American magazine last year as one of ten technologies that have the power to change our lives.

According to Fulp, when a network connects to a power source, which connects to the smart grid, you have a jumping-off point for computer viruses. A cyberattack, he says, can have the real physical result of shutting off power to a city or a nuclear power plant. The digital ants technology could transform cybersecurity because it adapts rapidly to changing threats, he adds.

"The idea is to deploy thousands of different types of digital ants, each looking for evidence of a threat", Fulp went on to explain, noting that as they move about the network, they leave digital trails modelled after the scent trails ants in nature use to guide other ants. Then, each time a digital ant identifies some evidence, it is programmed to leave behind a stronger scent. Stronger scent trails attract more ants, producing the swarm that marks a potential computer infection.
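The scent-trail mechanic Fulp describes maps naturally onto a very simple data structure. The Python sketch below is an invented illustration of the digital-pheromone idea, not the research team's actual code: an ant that finds evidence strengthens the marking on a host, wandering ants are biased toward stronger markings, and all markings slowly evaporate.

    import random

    pheromone = {f"host{i}": 0.0 for i in range(5)}   # assumed toy network of five hosts

    def ant_step(infected_host):
        """One digital ant wanders, biased toward strong scent trails."""
        weights = [1.0 + pheromone[h] for h in pheromone]   # stronger scent attracts more ants
        visit = random.choices(list(pheromone), weights=weights)[0]
        if visit == infected_host:                          # evidence found: leave a stronger trail
            pheromone[visit] += 1.0
        for h in pheromone:                                 # trails evaporate over time
            pheromone[h] *= 0.95

    for _ in range(200):
        ant_step("host3")
    print(max(pheromone, key=pheromone.get))                # the swarm converges on host3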
How Does WAN Optimization Work?

WAN optimization controllers (WOCs) can breathe new life into slow wide area network links, relieving congestion, speeding up file transfers and making applications more responsive. But how exactly do vendors like Riverbed Technology, Juniper Networks, Blue Coat Systems and Expand Networks get their devices to work their magic?

To answer this question, consider the two fundamental problems that WAN links present:
- They have limited capacity, so they can become congested
- They suffer from high latency because they are long (relative to LAN links)

The best strategy for overcoming these two problems is simply to avoid using the WAN links whenever possible, and to minimize their use when it's not possible to avoid them. It's this strategy that underpins all the techniques that WOCs employ to optimize WAN traffic. The most commonly used techniques include:
- Caching
- Compression
- Data reduction
- Latency reduction
- Quality of Service (QoS) tagging
- Packet coalescing

Caching

This is one of the most obvious ways of improving WAN performance. When a file is transferred over a WAN, say from a head office to a branch office, a copy of it is cached by the branch office's WOC. When other users in the branch office request the same file, the request is intercepted by the WOC before it goes over the WAN link, and the file is served locally from the device's cache. Changes made to files in any location are communicated across the network to ensure that files are always kept in sync. With caching, the first access of any file is still slow, because the file has to pass over the WAN before it can be cached; it's only subsequent accesses that are much faster. To speed up the first access, the cache can be pre-populated overnight with commonly used files so that they are immediately available in the cache the following day.

Compression

This is another obvious step that can be taken to boost WAN performance. It tackles the problem of limited bandwidth by reducing the amount of data that has to be sent over the WAN using a variety of data compression techniques. Some WOCs also include header compression, which can reduce the size of packet headers dramatically. This is particularly effective when the size of the header is large compared to the size of the rest of the packet.

Data reduction

Data reduction works like a combination of compression and caching. Driven by the principle that the best way of overcoming the problems presented by a WAN is not to use it if possible, a WOC using data reduction examines data as it travels over the WAN, and stores the data it receives. If it detects a piece of data that it has already transmitted in a file that it is sending, that byte sequence is removed and replaced with a reference. When the WOC at the remote office receives the reference, it retrieves that piece of data from its own cache and substitutes it back in. This avoids transmitting over the WAN any data that has already been sent, even as part of a completely different file. In some circumstances the amount of data traveling over a WAN can be reduced by 75 percent or more using data reduction techniques.

Latency reduction

Latency, as mentioned earlier, can be a problem with WANs. This is particularly true when dealing with "chatty" protocols like the Common Internet File System (CIFS). CIFS and other implementations of it (like Samba on Linux) are frequently used when remote disks are browsed and files are transferred across a WAN, but the protocol was never really intended for use over high-latency links.
The term "chatty" refers to the fact that in order to send data (in chunks of no more than 61 KB), a large number of background communications have to travel back and forth over the WAN link. For example, the next chunk of data will only start to be sent over the network once a response has been received for the previous one. Hundreds or thousands of communications have to be sent across the WAN during the process of sending a single file, and due to the high latency of the WAN this means that accessing a file which would be more or less instantaneous on a LAN can take several minutes on a WAN.

The way that WOCs deal with this problem is by recognizing that a file transfer is taking place, and pre-sending some or all of the file to the remote WOC as quickly as possible. Protocol communications at the remote end destined for the server at head office are then intercepted by the remote WOC, which generates the appropriate response, so that much of the protocol "chat" never actually crosses the WAN; it is dealt with by the WOC, which already has the file that the protocol is trying to transfer. As long as the WOC "understands" a particular protocol, it can be used to accelerate transmissions, whether down at the TCP level or up at the application level.

Quality of Service (QoS)

QoS is complex, although the underlying idea is simple. Traffic is identified, usually by its application, source, or destination, and given a priority for transmission over the WAN. This may include how long it has to wait before being sent over the WAN, or the amount of available bandwidth reserved for a given application. The result is to ensure that time-sensitive packets, such as VoIP packets, are sent as quickly as possible, at the expense of less time-sensitive packets during busy periods.

Packet coalescing

Packet coalescing is useful in circumstances where the packet header is large relative to the size of the data it is transporting. In essence it saves bandwidth by consolidating multiple packets into a single (coalesced) packet, with a single header. This can save considerable amounts of bandwidth, especially in applications like VoIP.

All of the WOCs sold into this multi-billion dollar market offer some combination of the techniques mentioned above. The results speak for themselves: applications running up to 50 times faster, file transfers reduced from minutes to seconds, and WAN bandwidth requirements as much as halved. It's no surprise that over the last few years the market for WAN optimization controllers has been very strong indeed.
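To make the data reduction idea described above concrete, here is a minimal sketch in Python of reference-based deduplication. It uses fixed-size chunks and truncated SHA-256 digests as references purely for illustration; commercial WOCs use their own (typically variable-length) chunking and signature schemes, so everything below is an assumption about the general technique, not any vendor's implementation.

```python
import hashlib

CHUNK = 64  # bytes per chunk; fixed-size chunking keeps the sketch simple

def reduce(stream: bytes, synced_cache: dict) -> list:
    """Replace chunks both WOCs have already seen with short references."""
    out = []
    for i in range(0, len(stream), CHUNK):
        chunk = stream[i:i + CHUNK]
        ref = hashlib.sha256(chunk).digest()[:8]  # 8-byte reference token
        if ref in synced_cache:
            out.append(("ref", ref))        # peer can rebuild this locally
        else:
            synced_cache[ref] = chunk       # first sighting: send in full
            out.append(("data", chunk))
    return out

def expand(tokens: list, synced_cache: dict) -> bytes:
    """What the remote WOC does: substitute cached chunks for references."""
    return b"".join(t if kind == "data" else synced_cache[t]
                    for kind, t in tokens)

cache = {}  # stands in for the pair of synchronized WOC caches
first = reduce(b"hello WAN " * 100, cache)    # mostly ("data", ...) tokens
second = reduce(b"hello WAN " * 100, cache)   # collapses to references
assert expand(second, cache) == b"hello WAN " * 100
print(sum(kind == "ref" for kind, _ in second), "of", len(second),
      "chunks sent as 8-byte references the second time")
```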
4 best practices to prevent foodborne illness
Friday, Sep 27th 2013

Foodborne illness is one of the most pervasive ailments, due both to the number of different strains that can affect people and to the significant number of potential sources. While most food carries some types of bacteria, the majority are harmless, and for the bacteria that do cause illness, thoroughly cooking food typically mitigates the danger. Even so, many people continue to suffer cases of food poisoning, and many investigations fail to identify the source. Here are a few best practices to prevent foodborne illness:

1. Question and inspect

A major cause of food poisoning is raw food accidentally mixing with other products through faulty packaging. Many people may simply pick up something and toss it into their cart, but this could potentially end up making them sick. Raw meat should be put into a plastic bag to prevent any juice from contaminating other food, and packaging should be inspected for any torn or crushed edges, according to the U.S. Department of Health and Human Services. Any open packages should be reported to store management to keep other individuals from getting sick. Bulging caps and cracks on jars or cans can also be signs of potential tampering and should be avoided. Thorough examination of products will help ensure that they are fit for consumption.

2. Keep track of recalls

While packaging can be a major indicator of a contaminated product, items can still be tainted even in the absence of external signs. Some food items can be corrupted during processing or distributor mishandling, meaning that buyers may not know that their food is bad until it's too late. By monitoring recalls, consumers can better avoid potential foodborne illnesses. Many health departments routinely inspect public food service establishments and investigate food poisoning cases in order to better inform customers about the dangers, according to the MLive Media Group. This effort helps ensure that people are able to enjoy their products and have a better awareness of storage and preparation best practices. Cases of food poisoning should also be reported right away so that health officials can begin investigating and determine whether there is a larger problem as the cause.

3. Monitor conditions

Many foodborne illnesses can also occur from improper storage. All foods have different specifications relating to freezing temperatures and refrigeration conditions; with temperature monitoring, the food will be better regulated. Perishables should be stored right away due to the possibility of spoiling. Fridges are typically kept at 40 degrees Fahrenheit while freezers are kept at 0 degrees, according to the Minnesota Department of Health. Ensuring that the spaces aren't crowded and that raw meat products are kept away from other items will give consumers the assurance that they won't get sick. For every step of the product's shelf life, the environmental conditions should be appropriately monitored.

4. Cook thoroughly

Properly cooking food is possibly the most important step in foodborne illness prevention. All meat, poultry, eggs and seafood should be appropriately prepared in order to kill off harmful bacteria. ABC News recommends using a temperature sensor to ensure that the meat is at the appropriate internal temperature and is thoroughly cooked.
Improper handling during this process is one of the main causes of food poisoning, which makes it important to have only healthy individuals preparing the food. In some cases, people have gotten sick because food handlers were ill while preparing the products. This is crucial to avoid for manufacturers, for food service establishments and within the consumer's own home. Food poisoning cases differ depending on the virus or bacteria involved, but by observing proper storage and handling best practices, consumers can better prevent foodborne illnesses.
Networking 101: Understanding Routing

To understand something in the networking world, you have to understand the problem it's trying to solve. Memorizing the configuration options for a certain routing protocol won't help you until you understand what it's really doing. This installment of Networking 101 is designed to be a gentle introduction to the world of routing issues and concepts, arguably the most interesting and important part of networking, explaining the problems routing protocols address so you can understand why they do what they do.

Before we get into the details, a clarification. When you hear people refer to "non-routable addresses," they are talking about RFC 1918 IP addresses, i.e. private addresses. Despite the misleading label, they certainly are routable. You can and should have some 10.x.x.x networks for local access and management. They can even be co-mingled with your real routers. They are called "non-routable" because the Internet routers will drop them. You should drop these packets at your border, as was pointed out in this Border Security article last year. This is a point of confusion for a lot of people. On to the topic at hand.

Routing, in essence, is the act of finding a path from one place to another on which a packet can travel. To find this path, we need algorithms. They will generally be distributed among many routers, allowing them to jointly share information. Routing is said to contain three elements:
- Routing protocols, the things that allow information to be gathered and distributed
- Routing algorithms, to determine paths
- Routing databases to store information that the algorithm has discovered. The routing database sometimes corresponds directly to routing table entries, sometimes not.

Our installment on layers actually introduces a bit of routing by talking about the paths an IP packet takes through operating systems and routers. What may not have been clear, though, is how the routing table lookup step works. Remember subnetting? Most routers simply find the longest matching prefix in the routing table when they look for a path for your packet. If there's a "host route," or /32 entry, that is always preferred. Any more specific routes, like the one that says what subnet you're on, will also be preferred before the default route is chosen.

We also need to understand some really basic problems with routing. Just like in Layer 2, routers need to be redundant. Redundancy always introduces the possibility of a loop, and every routing protocol has to deal with this. As we'll see in future Networking 101 articles about specific protocols, this is pretty much a solved problem.

The idea of a network topology is pretty absurd in the context most people picture it; VLANs turned the world upside down in that regard. But in routing, topology is actually important, if you zoom out a bit. The whole idea behind routers is that they will "pass it on," either in the correct direction, or on to their smarter peers. If your network core has a bunch of stubs connected, many of the stub routers will know nothing about each other. But they know "the way to everything" is through the core, and they simply forward packets that way. Hesitantly, we'll call this a star topology. Of course, I'm insulting your intelligence, because this is the concept of a default route. But pay attention here: this is how many dynamic routing protocols work.
It isn't always the case that you'll pass a packet on to the all-knowing default router; sometimes you'll be passing the packet to the router that you know handles a certain subnet. The point is that you know nothing about the other routers behind the one that tells you "I am network X."

The previous paragraph really embodies what routing is: you get packets closer to the destination. Of course, you have to know what's at each destination, and that's what routing protocols tell you. It's really easy to jump back and forth when talking about routing, so take note that all of the above was with the picture of a single network in mind. This is also known as a routing domain. A routing domain is a set of routers that are all under the same administrative control, presumably all running the same routing protocols.

When routing packets, we have a few paradigms to choose from. The telco world sets up a circuit for your telephone call as soon as you dial. The path is always the same, and it's very reliable. The IP world does not, and it can handle much more traffic. The tradeoff is that you can get congestion, and sometimes fail to reach certain websites, whereas your telephone call will never drop because of congestion. The IP world can almost do this, through a mechanism called loose source routing. This is how it started: each end node knew what hops it needed to take to reach its destination. Source-based routing doesn't scale, and introduces security problems, so we use dynamic routing protocols to figure out the paths for us. Take note that each direction can take a different path!

Routing protocols are broken up into categories in two senses. First, we have IGPs, or Interior Gateway Protocols. RIP, OSPF, and IS-IS are a few IGPs you may have heard about. These are routing protocols that deal with intra-domain routing. EGPs, Exterior Gateway Protocols, deal with inter-domain routing, between autonomous systems. Now defunct, EGP was actually a protocol itself, but BGP is now the standard inter-domain protocol.

Second, routing protocols are said to be of two categories in another sense: link-state or vector-distance. The vector-distance approach is: "tell your neighbors about the world." This means that you will broadcast your entire routing table to all your neighbors. The "vector" is the destination, and the "distance" is really a metric, or hop count. Link-state routing protocols "tell the world about your neighbors." The idea is to figure out who is "up" and broadcast information about their links' state to all other routers. Link-state is very computationally intensive, but it provides an entire view of the network to all routers. Most people prefer link-state protocols because they converge faster, which means that all of the routers have the same information sooner. Link-state calculations take a long time, though, and happen every time we get an update, so they can't be used Internet-wide. We'll see why link-state eats CPU when we cover OSPF in the near future.

Come back next week for our first routing protocol: RIP.

In a Nutshell
- Routers send packets toward their destination, normally by shipping them toward a router that knows a bit more about the destination topology.
- Routing is two one-way problems; it is very common for your packets to take asymmetric routes.
- Link-state: fast convergence, eats CPU. Vector-distance: slow convergence, easier on the silicon.
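The longest-prefix-match lookup described above is easy to demonstrate. Below is a minimal sketch in Python using the standard ipaddress module; the routing table entries and next-hop names are invented for illustration.

```python
import ipaddress

# Hypothetical routing table: prefix -> next hop / outbound interface
routes = {
    "0.0.0.0/0":   "default-router",  # the default route
    "10.1.0.0/16": "if-2",
    "10.1.2.0/24": "if-3",
    "10.1.2.7/32": "if-4",            # a host route
}

def lookup(dst: str) -> str:
    """Return the next hop for the most specific (longest) matching prefix."""
    addr = ipaddress.ip_address(dst)
    best = max((ipaddress.ip_network(p) for p in routes
                if addr in ipaddress.ip_network(p)),
               key=lambda net: net.prefixlen)
    return routes[str(best)]

print(lookup("10.1.2.7"))    # if-4: the /32 host route always wins
print(lookup("10.1.2.99"))   # if-3: the subnet route beats the /16
print(lookup("192.0.2.1"))   # default-router: only the default matches
```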
NASA laser comm will rocket data to and from space
- By Kathleen Hickey
- Jul 16, 2013

NASA is testing new technology to help bring its data transfer rates to and from space out of the dark ages. Since 1958, NASA has relied exclusively on radio frequency (RF)-based communications via the Deep Space Network for exchanging data between a mission and a spacecraft. The need for more and faster data (for example, high-definition video streams), however, is outpacing RF's capabilities. It can take up to 15 minutes to send commands as far as Mars, and just as long to get a response back.

Laser optical communication technology offers the promise of much higher data rates than what is achievable with RF transmissions. NASA's Optical Payload for Lasercomm Science (OPALS) technology could deliver data rates 10 to 100 times faster than RF transmission, the agency reported.

OPALS will be mounted on the outside of the International Space Station and will communicate with a JPL ground station in Wrightwood, Calif. As the ISS travels across the sky, NASA will send a laser beacon from a ground telescope to the ISS payload. While maintaining lock on the uplink beacon, NASA explained, the OPALS flight system will downlink a modulated laser beam with a formatted video.

"It's like aiming a laser pointer continuously for two minutes at a dot the diameter of a human hair from 30 feet away while you're walking," explained OPALS systems engineer Bogdan Oaida of JPL.

OPALS will be launched to the station in December via a SpaceX Dragon commercial resupply capsule on the company's Falcon 9 rocket. The mission is expected to run 90 days after installation on the station.

"OPALS represents a tangible stepping stone for laser communications, and the International Space Station is a great platform for an experiment like this," said Michael Kokorowski, OPALS project manager at JPL. "Future operational laser communication systems will have the ability to transmit more data from spacecraft down to the ground than they currently do, mitigating a significant bottleneck for scientific investigations and commercial ventures."

OPALS is not NASA's first foray into laser optical communications. Earlier this year NASA announced it had beamed a picture of the Mona Lisa to a satellite circling the moon, the first one-way laser communication at planetary distances, GCN reported in January.

Kathleen Hickey is a freelance writer for GCN.
When we left off, we were examining the feasibility of using a Layer-2 topology within a WAN provider's cloud, as shown in Figure 1.

Having built the physical topology using Layer-2 switches and interconnecting links, let's see how we might establish customer connectivity, using Frame Relay as an example. With Frame Relay, the Layer-2 address contained within the frame header is a ten-bit value called a DLCI (Data-Link Connection Identifier). The WAN provider builds a table within each switch that forwards frames based on a combination of the inbound interface and DLCI. Here's an example entry from a Frame Relay switching table:

| Inbound DLCI | Inbound Interface | Outbound DLCI | Outbound Interface |
| 246 | 1 | 801 | 3 |

With this entry, when a frame enters the switch with DLCI = 246 on Interface 1, the DLCI is changed (swapped) to 801, and the frame is then forwarded out Interface 3. Note that with Frame Relay, unlike Ethernet, the Layer-2 addresses (DLCIs) can (and usually do) change at each switch hop as a data frame traverses the provider's cloud. Once the provider has programmed the switches along the path with the correct DLCIs, the customer can then use the circuit. As an example, let's look at Figure 2. Note the heavy red line representing the physical path of data flow between site A1 and site A3, and see how the DLCI values change (101-456-153-46-207) as a data frame proceeds from A1 towards A3.

With Frame Relay, the physical path between A1 and A3 must be the same in both directions, but it's not required that the intermediate DLCIs used in the two directions be the same. What about data flowing from A3 towards A1? Refer to Figure 3. As you can see, the progression of DLCIs in the core when traveling from A3 to A1 (207-841-552-982-101) is not the same as when traveling from A1 to A3.

Let's discuss one more thing. Instead of a cloud of switches, let's imagine that we have an actual physical cable running point-to-point between two sites. With this topology, we know that the following things would be true:
- As frames cross the media, their order will be preserved.
- At most one copy of each frame will reach the far side.

It's possible that one or more frames could be lost (the cable could break), but we'll never get more than one of each frame making it to the far side, and whichever frames do make it across will arrive in the right order. If we build a cloud of switches, establish a path through it, and enforce the two rules listed above, we would have a "VC" (Virtual Circuit), where "virtual" means "it acts like." In other words, VCs emulate physical circuits with regard to the sequencing and numbers of frames.

VCs come in two varieties, "PVC" (Permanent VC) and "SVC" (Switched VC). With a PVC, the path is determined in advance and programmed into the switches, reserving bandwidth on that path for that customer. With SVCs, the paths are determined "on the fly," with the possibility that bandwidth between two sites might not be available (similar to "all circuits are busy" with a voice call over a telco). Frame Relay uses PVCs, while X.25 and ATM can use PVCs or SVCs.

Next time, we'll discuss the advantages of the system we've designed.

Author: Al Friebe
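The label-swapping behavior in the switching table above is simple to model. Here is a minimal sketch in Python of a Frame Relay switch's forwarding step, using the (interface, DLCI) pair from the example; the second table entry and the reverse-direction DLCIs are illustrative assumptions.

```python
# Frame Relay forwarding: (inbound interface, inbound DLCI) is the lookup key;
# the result is the outbound interface plus the DLCI to swap in.
switching_table = {
    (1, 246): (3, 801),   # the example entry from the table above
    (3, 801): (1, 246),   # hypothetical reverse direction of the same PVC
}

def forward(in_interface: int, in_dlci: int):
    """Swap the DLCI and pick the outbound interface, or drop the frame."""
    entry = switching_table.get((in_interface, in_dlci))
    if entry is None:
        return None  # no PVC programmed for this interface/DLCI: drop
    out_interface, out_dlci = entry
    return out_interface, out_dlci

print(forward(1, 246))  # (3, 801): frame leaves interface 3 with DLCI 801
print(forward(2, 999))  # None: nothing programmed, so the frame is dropped
```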
Level 3 tried to wrangle an invitation, but no dice. The recent battle between Level 3 Communications and Comcast has produced an entertaining volley of press releases from each company, complete with multi-point jabs and rebuttals. Level 3 has asked Comcast for more free transmission capacity. Comcast has provided some but has requested that Level 3 pay for the remainder. Level 3 is claiming the issue with its peering agreement with Comcast is not peering, but network neutrality. Comcast does not agree. Level 3's network neutrality argument was novel, but it failed to convince the final arbiter, the FCC, which in February sided with Comcast. It’s a basic truism that the Internet is a network of networks. Each network does one or more of three basic things: 1) provide content, 2) transport content or 3) consume content. Local distribution networks can be thought of as content consumers; they have the networks that primarily serve end users. Each type of network has its own set of costs. Content producers must first create or procure the content, store it, and then get it on the Internet. Let’s use Netflix as an example of a content producer. There is a cost to licensing television programs and movies, there’s significant hardware and storage required to hold the content, and then there are large amounts of “onramp” capacity to the Internet purchased to ensure the content can get to the consumers. In this example, Netflix could build its own network or outsource it and buy the Internet bandwidth it needs from a content delivery network (CDN), typically operated by a transit provider. The value of a transit provider is measured in how much of the Internet it can reach. Building a transit network that spans the globe is not cheap, either. In the example of Level 3 Communications, it dug up the ground and laid fiber in both the U.S. and Europe. A transit provider also needs to buy the most advanced routing and switching hardware available. Finally, to stay ahead of the demand curve, the transit provider has to constantly upgrade its backbone capacity. Local distribution networks, meanwhile, must build robust networks to provide bandwidth to individuals. They must have connectivity to the transit providers to allow subscribers to access content. This usually means a content consumer will also buy Internet bandwidth from a transit provider. Note that the transit provider charges for the same traffic twice: once coming and once going. Content producers must pay the transit provider to get their content on the Internet, and, likewise, content consumers must pay the transit provider to have access to the entire Internet. A successful transit provider connects with as many content providers and consumers as it can. Not all consumers and producers are connected to the same transit provider, of course, because there are a handful of large transit providers that provide the same basic service. Network service providers (NSPs) – the largest being the Tier 1 providers – understood that no NSP could connect to every network, but if they each connected, then at least they would have full connectivity to the routes that comprise the Internet. Tier 1 NSPs also expected that when interconnecting with one another, they would be sending and receiving roughly the same amount of traffic with each other’s network. If you couldn’t send and receive approximately the same amount of traffic with another Tier 1 network, then you probably didn’t belong in the Tier 1 club. 
The accounting scheme they created to simply share reciprocal traffic is peering. Since each network is sending and receiving the same amount of traffic, there’s no need to bill each other for something that’s basically a wash. Simple peering arrangements subsequently evolved into “settlement-free peering.” With settlement-free peering, each provider agrees to split the cost of the circuit between them, they each agree to fund the cost of the router and port on their network, and they agree not to charge each other for the traffic exchanged. When a network is first built, it has no traffic and no leverage to argue for settlement-free peering. This was the case when Level 3 built its backbone in 1998. Even before it had finished laying its own fiber, it leased network capacity from others. In October 1998, Level 3 acquired a small ISP, Geonet. Geonet didn’t have a huge customer base, but what it did have was a number of settlement-free peering agreements. Geonet had established these peering agreements in prior years, when the Internet was a much smaller place. Assuming Geonet’s peering agreements instantly allowed Level 3 to reach large portions of the Internet without having to pay transit fees. It took several years before Level 3 grew its customer base large enough to truly justify settlement-free peering with all of the major providers at the time. In the late 1990s, UUnet (now owned by Verizon, but previously owned by WorldCom) was the most dominant backbone (it continues to be one of the highest-trafficked parts of the Internet). At the time, Level 3 did not have a peering agreement with UUnet, the last major part of the Internet to which Level 3 lacked free access. What made it difficult for Level 3 to get peering with UUnet was the traffic imbalance. Without Level 3 having a significant customer base, either consumer or producer, UUnet simply refused to peer with Level 3. With the dotcom boom in full swing, it was easy to find content-producing start-ups with funding and Web servers eager to provide content and (hopefully) make millions of dollars doing so. These customers were attracted by Level 3’s colocation centers, large data centers where customers can place their equipment directly on the backbone. This threatened to cause imbalance at the peering points, because Level 3’s content producer customers originated far more traffic than they terminated. One way to bring the balance of traffic in line was to target customers who would consume traffic. Level 3’s acquisition of Xcom technologies in April 1998 served this purpose. While the main goal was to acquire a key voice over IP technology, Level 3 also acquired a dial-up modem business that eventually became the largest revenue generator for the company for many quarters. Level 3’s dial-up modem business did two things: It brought eyeballs to the network, and it generated revenue by means of an accounting scheme called “reciprocal compensation.” But that revenue was, in fact, not reciprocal at all. Reciprocal compensation was a construct that the telephone companies had long before created to compensate themselves for carrying and terminating telephone traffic originated by others. The expectation was that companies would originate and terminate the same amount of traffic, so it would all be a wash. But in case there was an imbalance of traffic, someone would pay. This scheme was in a sense a predecessor of peering. There was a funny thing about Level 3’s modems used for receiving dial-up Internet access calls. 
They terminated a bunch of calls but never seemed to make any calls themselves. Recall that if you owned a bunch of modems that did nothing but terminate calls, then you had the right to be compensated for terminating those calls. Level 3 became a major modem provider for one of the largest dial-up ISPs of its time, America Online, and for several years, the dial-up modem business was very good for Level 3. It not only generated the majority of the revenue for the company, it also served to keep Internet traffic balanced on the backbone. Maintaining balanced traffic helped Level 3 continue to grow its settlement-free peering base.

AOL was paying Level 3 for that service but sought to reduce its payments to Level 3. So it built its own network (the AOL Transit Data Network, or ATDN) and then turned around and asked Level 3 for settlement-free peering. This, combined with the shift away from dial-up modems toward broadband, effectively killed the revenue of the dial-up modem business for Level 3. And remember, it was the dial-up modem business that helped keep Level 3's traffic balanced. (By the way, the director of engineering at AOL at the time who orchestrated the shift from paid transit to settlement-free peering was John Shanz, now the executive vice president of national engineering and technology operations for Comcast.)

So as we turn back to the dispute between Level 3 and Comcast, it's clear that all companies involved (including Netflix) have a number of factors that compose their cost of service. As any good business would, each strives to reduce its cost. A network provider always strives to use settlement-free peering where it can, where it makes sense and where agreements allow. Level 3 is trying to shift toward the CDN model. In so doing, it faces the same challenge it did as a colocation provider. CDNs work by offering to outsource the storage and distribution of content from the content provider to the network. The CDN takes on the burden of storing and delivering the traffic. A CDN sends far more traffic than it receives, which is not consistent with settlement-free peering. In fact, the numbers being quoted by both Level 3 and Comcast for the Netflix deal would cause Level 3 to send nearly five times as much traffic to Comcast as it would receive.

A major factor affecting Level 3's profitability is that the cost to create and deliver bandwidth might not be covered by the revenue generated by the service. In the late '90s, the price of bandwidth was on the order of hundreds of dollars per Mbps. Today, the price of bandwidth can be lower than $10 per Mbps (Cogent Communications is currently marketing itself as "Home of the $4 Megabit"). Part of this is the result of a competitive market; part of it is the result of irrational pricing. But to continue to win business with prices like that, transit providers must keep their network costs as low as possible. Using settlement-free peering instead of paying transit is one way to do that.

Settlement-free peering agreements have been shaped over the years, and ISPs that have balanced traffic exchange with one another view peering as a mutually beneficial relationship. But for everyone else, the Internet community has coined the phrase "sender pays" to point the finger at who should bear the burden of cost. But this dispute between Level 3 and Comcast could lead to a tighter definition of what constitutes "roughly equivalent" traffic exchange between backbone networks.
Peering agreements could become more complicated and more formal. Today's agreements are essentially memoranda of understanding, but this dispute could create the need for detailed contracts. New networks seeking to build their business and achieve peering could have a much harder time, and the field of those entitled to settlement-free peering could narrow. Had Level 3 won this dispute, the precedent would have led to an unsustainable situation. In a climate of government regulation of network interconnection agreements, network service providers might no longer offer settlement-free peering at all, and networks would have to accept the substantial cost of keeping records of originating and terminating traffic loads, just like the old telephone companies did. It could cause the Internet to take one giant leap backwards.

While peer-to-peer traffic dominated the Internet a few years ago, now video dominates. More video being sent over the Internet requires more network infrastructure and more bandwidth. How will that additional capacity be paid for? In the end, it's the subscriber who will wind up having to pay more money to keep the model working. Whether that money is in the form of increased broadband fees paid from one network to another or in the form of increased subscription fees paid directly to the content providers (which indirectly flows to the transit provider and the consumer network), it may be the only way to continue to enjoy the connectivity that keeps up with the pace of our consumption. This may lead to the end of flat-rate pricing for broadband end users, so that those who consume video will pay for their fair share of network capacity.

Operating the Internet is a business like all others, but it has a lot of moving parts, each with differing cost and revenue models. Level 3 continues to try to find a business model that, for the first time, would allow it to earn a profit. Part of its model seems to be settlement-free peering with Comcast, without the traditional burden of originating and terminating roughly equal traffic loads. In the end, it's the Internet community's own rule of thumb that should apply: sender pays.
3.3.1 What is the AES? (Revised January 2003)

The AES is the Advanced Encryption Standard. The AES was issued as FIPS PUB 197 by NIST (see Question 6.2.1), and the standard is the successor to DES (see Question 3.2.1). In January 1997 the AES initiative was announced, and in September 1997 the public was invited to propose suitable block ciphers as candidates for the AES. The AES algorithm was selected in October 2001 and the standard was published in November 2002. NIST's intent was to have a cipher that will remain secure well into the next century.

AES supports key sizes of 128 bits, 192 bits, and 256 bits, in contrast to the 56-bit keys offered by DES. The AES algorithm resulted from a multi-year evaluation process led by NIST with submissions and review by an international community of cryptography experts. The Rijndael algorithm, invented by Joan Daemen and Vincent Rijmen, was selected as the standard. Over time, many implementations are expected to upgrade to AES, both because it offers longer key sizes and because it is a federal standard.
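As a quick illustration of using AES with one of the standard key sizes, here is a sketch in Python using the third-party cryptography package; the library choice and the use of GCM mode are assumptions for the example (the FAQ itself does not prescribe any implementation).

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # AES supports 128, 192, or 256
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # GCM needs a unique 96-bit nonce per message

ciphertext = aesgcm.encrypt(nonce, b"a secret message", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"a secret message"
```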
The ANSI/ISA Standard and Hazardous Locations

Fires and explosions are a major safety concern in industrial plants. Electrical equipment that must be installed in these locations should be specifically designed and tested to operate under extreme conditions. The hazardous location classification system was designed to promote the safe use of electrical equipment in those areas "where fire or explosion hazards may exist due to flammable gases or vapors, flammable liquids, combustible dust, or ignitable fibers or flyings."

The NEC and CSA define hazardous locations by three classes:
Class I: Gas or vapor hazards
Class II: Dust hazards
Class III: Fiber and flying hazards

Each class is divided into two divisions:
Division 1: An environment where ignitable gases, liquids, vapors or dusts can exist
Division 2: Locations where ignitable concentrations are not likely to exist

Hazardous classes are further defined by groups A through G:
A. Acetylene
B. Hydrogen
C. Ethylene, carbon monoxide
D. Hydrocarbons, fuels, solvents
E. Combustible metal dusts such as aluminum and magnesium
F. Carbonaceous dusts including coal, carbon black, coke
G. Flour, starch, grain, combustible plastic or chemical dust

Our line of Industrial Ethernet Switches (LEH1208A, LEH1208A-2GMMSC, LEH1216A and LEH1216A-2GMMSC) is fully compliant with ANSI/ISA 12.12.01, a construction standard for Nonincendive Electrical Equipment for Use in Class I and II, Division 2 and Class III, Divisions 1 and 2 Hazardous (Classified) Locations. ANSI/ISA 12.12.01-2000 is similar to UL1604 but more stringent (for a full list of changes, see Compliance Today). UL1604 was withdrawn in 2012 and replaced with ISA 12.12.01.

The standard provides the requirements for the design, construction, and marking of electrical equipment or parts of such equipment used in Class I and Class II, Division 2 and Class III, Divisions 1 and 2 hazardous (classified) locations. This type of equipment, in normal operation, is not capable of causing ignition. The standard establishes uniformity in test methods for determining the suitability of equipment as related to its potential to ignite a specific flammable gas or vapor-in-air mixture, combustible dust, easily ignitable fibers, or flyings under the following ambient conditions:
a) an ambient temperature of -25°C to 40°C
b) an oxygen concentration of not greater than 21 percent by volume
c) a pressure of 80 kPa (0.8 bar) to 110 kPa (1.1 bar)

The standard is available for purchase at www.webstore.ansi.org. To learn more about ANSI/ISA 12.12.01 and hazardous location types, visit https://www.osha.gov/doc/outreachtraining/htmlfiles/hazloc.html.
Research from Norton estimates that the global price tag of consumer cybercrime now tops some US$113 billion annually, which is enough to host the 2012 London Olympics nearly 10 times over. The cost per cybercrime victim has shot up to US$298: a 50% increase over 2012. In terms of the number of victims of such attacks, that's 378 million per year, averaging more than 1 million per day.

"Domain Validated" (DV) SSL certificates pose a direct threat to consumers on the Internet. Cybercriminals frequently use DV SSL certificates to impersonate real ecommerce websites for the purpose of defrauding consumers. This paper explains SSL, the different types of certificates, how cybercriminals use DV certificates to steal personal and financial data, and what can be done to thwart this tactic.
Attribute-based access control (ABAC) is a strategy for making runtime decisions about what features or data a user can access in an application, based on a combination of policies and data about both the user and the transaction context.

Data about the user typically comes in the form of identity attributes: things like the user's name, login ID, department, location, job role, etc. This data normally comes from an LDAP directory.

Data about transaction context includes what operation the user is attempting to perform, what data the user would access through this operation, the current time and date, the location of the user (e.g., IP address or similar), the type of device from which the user connected (e.g., web user agent or similar) and how the user authenticated.

Policy data links operations and data to identity and transaction data, to make runtime go/no-go decisions. There is an XML standard, XACML (eXtensible Access Control Markup Language), for expressing such policy decisions. XACML is described at https://en.wikipedia.org/wiki/XACML.
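To make the go/no-go decision concrete, here is a minimal, hypothetical ABAC policy check in Python. The attribute names, the rule itself, and the business-hours window are all invented for illustration; a production system would evaluate XACML policies in a policy decision point rather than hard-code a rule like this.

```python
from datetime import time

def is_permitted(user: dict, request: dict) -> bool:
    """One hypothetical rule: Finance staff may read payroll data
    during business hours, from a managed device."""
    return (request["resource"] == "payroll"
            and request["action"] == "read"
            and user["department"] == "Finance"
            and request["device"] == "managed"
            and time(8, 0) <= request["time"] <= time(18, 0))

user = {"login_id": "adavis", "department": "Finance", "role": "analyst"}
request = {"resource": "payroll", "action": "read",
           "device": "managed", "time": time(10, 30)}

print(is_permitted(user, request))  # True: every attribute test passes
request["device"] = "unmanaged"
print(is_permitted(user, request))  # False: a context attribute fails
```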
Educational technology has revolutionized teaching and learning. With so much rapidly changing technology in today's classroom, it's important to track the results, in order to build on the successful implementations and phase out what's not working. That's why analytics and student assessment are so important in education. With products like Extreme Networks' Purview, it can be a simple matter to capture the data you need and graphically display it or format it into a report. Unlocking that data can bring enormous benefits for improving educational outcomes.

Network analytics tell you which technologies, devices, and software are being used most and which are generating the best results. These analytics help you understand what the students are doing throughout the day. What are the most successful students up to that others could learn from? At Educause, Fontys Hogescholen described how its IT staff members were transformed into folk heroes through the creative and engaging use of Wi-Fi analytics. The university's group helped students use network analytics for projects like tracking student activities across campus to correlate demographic data with behavior and even effect change.

The concept of analytics is helping usher in the era of competency-based education (CBE), enabling students to master skills at their own pace. Data and network analytics are essential, not just for network managers, but also for teachers, curriculum directors, superintendents, principals, CFOs and students.

Teachers can quickly see which applications are actually being used during class time and make sure the applications are running fast and responsively. Teachers can explore how activities and application usage vary among their students during the day and compare the results. District superintendents, principals, and curriculum directors use the analytics to prepare for new technology-related initiatives, like video, digital textbooks, and online testing. During online testing it can be absolutely critical to have a real-time view into network dynamics. Should an issue arise, Purview can determine whether the problem is at the student device, within the network, at the local servers, due to an Internet connection, or caused by the remote servers administering the test.

District or university finance managers use Purview to analyze the cost effectiveness of expensive software licenses. How often is the software used? How many simultaneous users are there? Which users and departments are using it? Finance managers can also use the analytics to project capacity needs for investment planning.

The network analytics available today are especially valuable for IT leadership and the help desk. The IT staff can spot bottlenecks even before users are affected. The staff can keep shadow IT at bay by ensuring that only approved network and user devices are active on the network. Rogue IT devices can be easily located and disabled. Purview provides a single dashboard to show what's happening across your school district or university network. It records what applications are being run, by whom, with full data on locations and times. This is provided without taking away any performance from the network. The detailed view into the network provided by analytics solutions like Purview gives IT the ability to provide students, teachers, administration, and all users with the network experience they demand.
For more on Purview read How to Turn Your Network into a Strategic Business Asset with Purview.
VLANs and Trunks

When properly utilized, VLANs and trunks provide flexibility, stability and ease of troubleshooting. This paper provides technical details about VLANs and trunks, along with design options at a basic to intermediate level. Recommendations and commands are included throughout.

Virtual Local Area Networks (VLANs) provide several benefits to enterprise networks. VLANs provide a measure of flexibility, improve user mobility, ease the application of security measures, and increase the overall efficiency of the network. Trunks also provide benefits, specifically the ability to reduce the number of physical connections needed between switches in order to support multiple VLANs. This paper describes VLANs and trunks. It includes an overview of Ethernet-based computer networks, which includes definitions of several terms.

The Open Systems Interconnection (OSI) communications model is a seven-layer reference model that describes the functions necessary for two endpoints to communicate. The second layer of the OSI model is called the Data Link Layer. Its functions are to format data for transmission on the physical media and to define how devices access the physical media (twisted-pair copper, fiber optic, or wireless). Ethernet is the Layer 2 protocol used for devices that connect to a Local Area Network (LAN). Ethernet defines how data is formatted for transmission by creating frames. Figure 1 shows a sample Ethernet frame.

When data is transmitted along the media, the data is just a series of bits. Ethernet defines how the sending and receiving device will interpret those bits. The key fields of an Ethernet frame are the destination and source address fields, the type field, and the Frame Check Sequence (FCS) field. The destination and source address fields contain the Media Access Control (MAC) addresses of the receiving and sending devices respectively, as defined by standards from the Institute of Electrical and Electronics Engineers (IEEE). The type field contains a value identifying the next layer protocol. For example, a value of 0x0800 (0x indicates the numbers that follow are hexadecimal) means IPv4 is the next layer protocol; 0x86DD means IPv6; and 0x0806 means Address Resolution Protocol (ARP). The FCS field is used for error detection. Its value is calculated by the sending device and attached to the end of the frame. As the frame is received, the receiving device performs the same calculation. If the values match, the receiving device knows the frame is error free. If the values do not match, the receiving device knows an error occurred during transmission and the frame is discarded.

Another term for LAN is broadcast domain. A broadcast domain is the most basic of computer networks. It is defined as a collection of devices that receive broadcast frames from each other at Layer 2. A broadcast frame is one that is destined (addressed) to every device on the LAN by using the value 0xffff.ffff.ffff in the destination address field. This type of frame is flooded out of every interface of an Ethernet switch, except the interface on which the frame was received. This is very inefficient, as the switch has to forward (or replicate) a copy of the frame for every active interface. For example, a fully populated 48-port switch has to forward 47 copies of every broadcast frame it receives.
Additionally, the end hosts that receive the broadcast frame are required to perform some processing of the frame, even if the broadcast frame does not contain data that is pertinent to the host. Originally, broadcast domains (LANs) were implemented based on physical location. The first implementation of Ethernet was designed for use with coaxial cable. In this environment, all devices connected to a single piece of coaxial cable; in other words, the piece of coaxial cable was the LAN media, and the LAN was limited to a single room or maybe a group of two or three rooms. Eventually, Ethernet was updated to transmit data over twisted-pair copper wiring, at which point hubs replaced coaxial cable as the LAN. However, whether the LAN was a piece of coaxial cable or a hub, the LAN was limited to devices that shared physical media.
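The frame fields described above (destination MAC, source MAC, and EtherType) are straightforward to pull apart programmatically. Here is a minimal sketch in Python that parses an Ethernet header and recognizes a broadcast destination; the sample frame bytes are invented for illustration.

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Unpack destination MAC, source MAC, and EtherType from a frame."""
    if len(frame) < 14:
        raise ValueError("too short to hold a 14-byte Ethernet header")
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    as_mac = lambda raw: ":".join(f"{b:02x}" for b in raw)
    return as_mac(dst), as_mac(src), ethertype

# A hypothetical broadcast ARP frame: all-ones destination, EtherType 0x0806
header = bytes.fromhex("ffffffffffff" "001122334455" "0806")
dst, src, ethertype = parse_ethernet_header(header + b"\x00" * 46)

print(dst)                          # ff:ff:ff:ff:ff:ff
print(dst == "ff:ff:ff:ff:ff:ff")   # True: this is a broadcast frame
print(hex(ethertype))               # 0x806 -> ARP
```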
If you've been following the health care debate in the US, it's become fairly clear that the current trajectory of medical costs will soon be unsustainable for the economy. The latest government figures put the average US health care spend per person at over $8,000, projected to top $13,000 by 2018. Whether the latest health care legislation will do much to curb these costs is debatable. If that $13,000 per capita figure holds up, about 20 percent of the nation's GDP will be spent on medical bills. Other developed nations are currently about twice as efficient as the US, but even there health care costs are outrunning incomes.

Fortunately, economic forces that strong have a way of disrupting the status quo. Probably the lowest-hanging fruit for optimizing the health care sector is in information technology. Even though we think of medicine as a high-tech endeavor, it's mostly based on 30-year-old IT infrastructure overlaid with a manual-labor approach to data collection and analysis. Essentially we have a system using 20th century computing technology, but with 21st century wages. Just going to a doctor's office and filling out a medical history form (on paper!) for the 100th time should give you some idea of how antiquated the health care industry has become. It's as if the Internet was never invented.

But it's not just about your medical records ending up in isolated silos. The amount of data that can be applied to your health is growing by leaps and bounds. The results of medical research, genomic studies, and clinical drug trials are accumulating at an exponential rate. Like most sectors nowadays, health care revolves around data. In general, though, your health care provider doesn't do anything with all this information, since the analysis has to be done by a time-constrained, high-paid specialist, i.e., your doctor. But that could soon change. The latest advanced analytics technologies are looking to mine these rich medical data repositories and transform the nature of health care forever.

Not surprisingly, IT companies are lining up to get a piece of the action. IBM, in particular, has been pushing its analytics story for all sorts of medical applications. Last week, the company announced it was expanding its Dallas-based Health Analytics Solution Center with additional people and technology. Part of this is about sliding the IBM Watson supercomputing technology into a medical setting. With its impressive Jeopardy performance under its belt, IBM is now applying HPC-type analytics to understand medical text. Specifically, they want to combine Watson's smarts with voice recognition technology from Nuance Communications to connect doctors to their patients' medical data via a handheld device like a tablet or smart phone. From the press release:

By using analytics to determine hidden meaning buried in medical records, pathology reports, images and comparative data, computers can extract relevant patient data and present it to physicians, ultimately leading to improved patient care.

Analytics vendor SAS is also in the game. In May, it unveiled a new organization, the Center for Health Analytics and Insights, designed to apply advanced analytics across health care and life sciences. Although the specifics were a little thin, the group will focus on "evidence-based medicine, adaptive clinical research, cost mitigation and many aspects of customer intelligence."

It's not all about clinical care, though.
One of the most expensive undertakings of the health care industry is ensuring drug safety. Both the FDA and pharma have had some spectacular failures in this area, the most recent being Vioxx, a pain-relief drug that was pulled from the market in 2004 after it was discovered to be causing strokes and heart attacks in some patients. A recent study by the RAND Corporation suggests data mining can be used to find some of these dangerous drugs before they enter widespread usage. RAND CTO Siddhartha Dalal and researcher Kanaka Shetty developed an algorithm to search the PubMed database to uncover these bad players. The software employed machine learning algorithms in order to provide the sophistication necessary to differentiate truly dangerous compounds from ones that only looked suspicious (false positives). According to the authors, the algorithm uncovered 54 percent of all detected FDA warnings using just the literature published before warnings were issued.

A more ambitious medical technology is envisioned by the X PRIZE Foundation, a non-profit devoted to encouraging revolutionary technologies. Recently it teamed with Qualcomm to come up with the Tricorder X PRIZE, offering a $10 million award to develop "a mobile solution that can diagnose patients better than or equal to a panel of board certified physicians." In other words, make the Star Trek tricorder a reality. The device is intended to bring together wireless sensors, cloud computing, and other technologies to perform an initial diagnosis and direct patients to a "real" doctor if the situation warrants. Presumably the cloud computing component will support the necessary data mining and expert system intelligence, while the tricorder itself will mostly act as the data collection interface and perhaps do some medical imaging. The X PRIZE Foundation will publish the specific design requirements later this year, with the competition expected to launch in 2012.
Many of these advances will enable medical conditions like heart disease, cancer and diabetes to be prevented, which is a far less expensive proposition than treatment. It’s reasonable to be optimistic here. Nature abhors a vacuum — in fact, any sort of stark discontinuity. Our problematic health care model will eventually be transformed by technologies that make economic sense. Advanced analytics is poised to be a big part of this.
While Oracle and MySQL remain top picks for database systems, there are many others available, from big guns like Microsoft SQL Server to the increasingly popular MongoDB. Each has its own strengths and weaknesses, so your latest IT project may find you scratching your head as you try to decide on database software.

If you're looking for a database platform, you probably already know the basics, but to recap: a database is a collection of data, information of almost any type, organized in a manner that can be accessed, managed, and updated either by other programs or by users directly. Databases are required to recall specific data on demand, like when a social media user looks back on their profile from one year ago. Databases can be installed on individual workstations or on central servers or mainframes. Applications are as varied as an industry might require; they are used to store and sort transactions, inventory, customer behavior, pictures, video, and more. Most business IT applications will require some form of database.

The first decision you'll need to make is between a desktop and a server database. Desktop database management systems are licensed for single users, while server database management systems often include failsafe designs to guarantee they will always be accessible by multiple users and applications. Some desktop database options include Microsoft Access (included with Office or Office 365 licenses), Lotus Approach, or Paradox. They are pretty inexpensive and use GUIs that make interacting with SQL simple for non-power users.

If you're reading this blog, chances are you need a server database management solution. Server databases offer greater flexibility, performance, and scalability than a desktop database. Oracle, IBM DB2, Microsoft SQL Server, MySQL, PostgreSQL, and MongoDB are all popular options. MySQL, MongoDB, and PostgreSQL are all open source, while the others are closed. Another open source database gaining popularity is Cassandra, released by Facebook. The large vendors like Oracle and IBM have the advantage of longstanding popularity, meaning they now work with a variety of programming languages and operating systems. Microsoft SQL Server is conveniently integrated into the Windows Server stack and is relatively inexpensive. Before choosing a vendor, you'll need to ask a number of questions about your requirements.

One quick way to narrow down your options is to decide whether you need an SQL (Structured Query Language) based database or NoSQL. SQL databases are relational, which means the data is sorted into tables and organized by each entry (the row) and its qualities (the columns). It is important to note that you have to predefine these qualities.

NoSQL databases can have varying storage types, including document, graph, key-value, and columnar. Document databases store each record in a document, and documents are grouped in collections; the structure of each document does not have to be the same. Graph databases are best suited for data that maps naturally to a graph: the structure holds entries, with information about the relationships between entries stored on the connecting lines (edges). Key-value databases use key-value pairs to associate data. The key is an attribute that is linked to a value; the resulting associative array is also called a dictionary, made up of many record entries, each of which contains fields. The key is used to retrieve the entry from the database. Columnar databases have column families, each of which contains rows; the columns do not have to be predefined, and the rows do not need to have the same number of columns.
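To make the relational/NoSQL contrast concrete, here is a minimal Python sketch, using the standard library's sqlite3 module for the relational side and a plain dictionary to stand in for a key-value store. The table layout, field names, and "user:1"-style keys are illustrative assumptions of the example, not tied to any particular product.

```python
import sqlite3

# Relational (SQL): the columns are predefined, and every row has the same shape.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE contacts (first TEXT, last TEXT, phone TEXT)")
con.execute("INSERT INTO contacts VALUES ('Ada', 'Lovelace', '555-0100')")
print(con.execute("SELECT phone FROM contacts WHERE last = 'Lovelace'").fetchone())

# Key-value (one NoSQL style): a key retrieves a record, and records may vary in shape.
kv_store = {
    "user:1": {"first": "Ada", "last": "Lovelace", "phone": "555-0100"},
    "user:2": {"first": "Alan", "last": "Turing"},  # no phone field, which is allowed
}
print(kv_store["user:2"])
```

The same trade-off drives the schema question above: the relational table rejects rows that don't match its predefined columns, while the key-value side happily stores whatever shape each record happens to have.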
Another important distinction between SQL and NoSQL is ACID compatibility. All SQL databases retain ACID functionality, while many NoSQL options do not. ACID stands for Atomicity, Consistency, Isolation, and Durability. Atomicity means that if a transaction has two or more pieces of information, either they all make it into the database or none do. Consistency means a transaction can only bring the database from one valid state to another; if a new entry breaks the rules, the data is returned to its previous state before the entry was transacted. Isolation means a new transaction remains separate from other in-flight transactions. Durability means data remains in its committed state even after a system restart or failure. Note: a transaction refers to any retrieval or update of information.

SQL databases are generally not scalable across multiple servers, while NoSQL databases are often used in cloud environments because they can scale out across servers, with many platforms including automation for doing so. If your data needs will not change in structure (meaning you know the categories of each entry are stable, like a contact database of First Name, Last Name, Phone, Address, E-mail, etc.) and you don't expect massive growth, SQL might fit the bill.
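As a hedged illustration of the "A" in ACID, the sketch below uses Python's built-in sqlite3 (an ACID-compliant SQL engine) with a simulated mid-transfer failure; the account names and amounts are made up for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
con.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 0)])
con.commit()

try:
    with con:  # the with-block is one transaction: commit on success, roll back on error
        con.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
        raise RuntimeError("simulated crash between the two halves of the transfer")
        con.execute("UPDATE accounts SET balance = balance + 50 WHERE name = 'bob'")
except RuntimeError:
    pass

# Atomicity: the half-finished debit was rolled back, so alice still has 100.
print(con.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
```

Run as-is, it prints [('alice', 100), ('bob', 0)]: either both halves of the transfer land or neither does, which is exactly the guarantee many NoSQL stores relax.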
Have you ever been asked a question about a particular type of proxy, e.g. HTTP connect vs. tunnel, or SOCKS 4 vs. 5, and not known the answer? Yeah, me too; that's why I made this page.

- A proxy is a piece of software that makes requests on behalf of a client. It serves as a go-between for the requester and the resource. The most common type of proxy is a web proxy that enables business users to access the Internet. In such environments it's not possible for users to surf the web unless their traffic traverses said proxy. Explicit proxies are configured by explicitly defining proxy settings within a web browser. (Figure 1, a screenshot of such a browser configuration, is not reproduced here.)
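Outside the browser, the same explicit-proxy idea can be expressed in code. Below is a minimal Python sketch using the standard library's urllib; the proxy host proxy.example.com:8080 is a placeholder assumption, not a real gateway.

```python
import urllib.request

# Route HTTP and HTTPS requests through an explicitly configured proxy,
# just as a browser would when proxy settings are defined by hand.
proxy = urllib.request.ProxyHandler({
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
})
opener = urllib.request.build_opener(proxy)

resp = opener.open("http://example.com/")
print(resp.status, len(resp.read()))
```

The client never talks to example.com directly; the proxy makes the request on its behalf, which is exactly the go-between role described above.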
Computers are so much more than the interface most Web-connected humans interact with on a daily basis. While many of us take for granted the thousands of instructions that must be communicated across a vast array of hardware and software, this is not the case for the computer scientists and engineers working to shave nanoseconds from computing times, people like University of Wisconsin researcher Mark Hill.

As Amdahl Professor of Computer Science at the University of Wisconsin, it's Professor Hill's job to identify hidden efficiencies in computer architecture. He studies the way that computers take zeros and ones and transform this binary language into something with a more human bent, like social network interaction and online purchases. To do this, Hill traces the chain reaction from the computational device to the processor to the network hub and to the cloud and then back again. Professor Hill's interesting and important research was the subject of a recent feature piece by prominent science writer Aaron Dubrow.

The opaqueness of computers is primarily a feature, not a bug. "Our computers are very complicated and it's our job to hide most of this complexity most of the time because if you had to face it all of the time, then you couldn't get done what you want to get done, whether it was solving a problem or providing entertainment," explains Hill.

Over the last few decades, it made sense to keep this complexity hidden as pretty much the entire computing industry rode the coattails of Moore's law. With computing power doubling approximately every 24 months, faster and cheaper systems were a matter of course. As this "law" reaches the limits of practicality from an atomic and financial perspective, computer engineers are essentially forced to start examining all the other computational elements that come into play to identify untapped efficiencies. Waiting for faster processors is no longer a viable growth strategy.

One area that Hill has focused on is the performance of computer tasks. He times how long it takes a typical processor to complete a common task, like a query from Facebook or a web search, looking at both overall speed and how long each individual step takes. One of his successes had to do with a rather inefficient process called paging that was implemented when memory was much smaller. Hill's fix was to use paging selectively by employing a simpler address translation method for certain parts of important applications. The result was that cache misses were reduced to less than 1 percent. A solution like this would allow a user to do more with the same setup, reducing the number of servers they'd need and saving big bucks in the process. "A small change to the operating system and hardware can bring big benefits," notes Hill.

Hill espouses a more unified computational approach, and he's confident that hidden inefficiencies exist in sufficient quantities to offset the Moore's law slowdown. "In the last decade, hardware improvements have slowed tremendously and it remains to be seen what's going to happen," Hill says. "I think we're going to wring out a lot of inefficiencies and still get gains. They're not going to be like the large ones that you've seen before, but I hope that they're sufficient that we can still enable new creations, which is really what this is about."

The forward-thinking researcher is a proponent of using virtual memory protocols and hardware accelerators like GPUs to boost computational performance.
The "generic computer" is last century, according to Hill. "That's not appropriate anymore," he says. "You definitely have to consider where that computer sits. Is it in a piece of smart dust? Is it in your cellphone, or in your laptop or in the cloud? There are different constraints."

Hill, along with dozens of top US computer scientists, has penned a community white paper outlining many of the challenges and paradigm shifts facing computing in the 21st century. These include a transition from the single computer to the network or datacenter, the importance of communication as it relates to big data, and the new energy-first reality, where power and energy are becoming dominant constraints. The paper also describes potential disruptive technologies that are coming down the pike. However, with no miracle technologies in hand, computer scientists must do what they can to optimize existing hardware and software. Read the paper here:
A Healthy Approach to the Internet of Things
By Samuel Greengard | Posted 2016-02-12

The IoT is providing a stream of products aimed at making our lives easier, better and healthier. But some of these items are creating an "Internet of Garbage."

As the Internet of things evolves from concept to reality, we're witnessing a growing stream of products and solutions aimed at making our lives easier, better and healthier. In many cases, as demonstrated by the recent Consumer Electronics Show, the net outcome is an "Internet of Garbage." Even the most optimistic or motivated person probably wouldn't see much value in auto-tightening shoes or vibrating yoga pants that correct your form.

To be sure, technology advances often occur in fits and starts. Today's fitness trackers are good, but not great. They provide valuable insights, but they're not entirely accurate or reliable. I've placed Bluetooth LE trackers on my keychain and other items in order to find them if they're misplaced. The trackers work reasonably well, though they're far from perfect. For instance, I've had a tracker's battery die without warning, which renders it useless when it's needed.

Advances From Connected Devices

On the other hand, connected devices could lead to enormous advances. For example, Swaive recently announced the world's first smartphone-enabled ear thermometer. The company partnered with Sickweather, an online community that provides maps showing where people have caught the flu or common colds. Users can share their anonymous data, and machine learning takes it from there. Already, Sickweather processes more than 6 million illness reports each month by using social media and other data. The ability to plug in connected devices increases the value immeasurably.

It's not difficult to extrapolate on this concept and think about public health experts and others using similar technology to map all sorts of other illnesses, diseases and afflictions. Ultimately, researchers and epidemiologists might better understand how to deal with various outbreaks. At the very least, hospitals and clinics would have a far better idea of where to direct supplies and resources, and the rest of us would know when and where it's riskier to head outside.

It's also possible to envision similar systems, most likely grabbing data from fitness devices and other wearables, to better understand and formulate policy for everything from eating to exercise. And health care insurance providers, which today treat everyone roughly equally, could design programs to fit demographics and reward individuals for meeting minimum daily step goals and eating healthier foods.

To be sure, the journey has just begun. Over the next few years, we will likely walk, run and skip past a lot of crazy and brilliant connected devices on the road to progress.
If you're elderly and want to abuse a (legal) drug, the latest health research says to go with caffeine. A new study by the Mayo Clinic shows that people age 70 and up with high-carbohydrate diets, and those with high-sugar diets, are more likely to develop dementia than people with healthier eating habits.

The study involved more than 1,200 people ages 70 to 89 and spanned four years. Mayo writes:

Those who reported the highest carbohydrate intake at the beginning of the study were 1.9 times likelier to develop mild cognitive impairment than those with the lowest intake of carbohydrates. Participants with the highest sugar intake were 1.5 times likelier to experience mild cognitive impairment than those with the lowest levels.

There's a much better and safer way for older folks to get that addictive metabolism boost, thanks to the good people at 5-Hour ENERGY -- er, make that the planet's most popular drug delivery vehicle, coffee. The Journal of Alzheimer's Disease recently published the results of a study involving 124 people ages 65 to 88, which concluded that subjects with elevated blood-caffeine levels showed no signs of Alzheimer's disease in follow-up examinations two to four years later. According to the Journal, "coffee appeared to be the major or only source of caffeine for these individuals."

Wait a minute, you might be saying. What about tea? Surely drinking caffeinated tea will protect older people just as much as coffee, right? We don't really know. All we know is that the authors of this study -- Drs. Chuanhai Cao and Gary Arendash of the University of South Florida -- recently "reported that caffeine interacts with a yet unidentified component of coffee to boost blood levels of a critical growth factor that seems to fight off the Alzheimer's disease process," writes the Journal.

Alzheimer's is a devastating disease that slowly robs people of their memories and mental capacity through the loss of nerve cells and neural connections. "Moderate daily consumption of caffeinated coffee [about three cups a day] appears to be the best dietary option for long-term protection against Alzheimer's memory loss," Dr. Arendash said in a statement. "Coffee is inexpensive, readily available, easily gets into the brain, and has few side-effects for most of us. Moreover, our studies show that caffeine and coffee appear to directly attack the Alzheimer's disease process."
Rice University is talking up its inexact computer chip, which makes allowances for the occasional error while still delivering enhanced resource efficiency and performance. The university showcased the prototypes at the ACM International Conference on Computing Frontiers and has shown that the chip can function at 15 times the efficiency of current technology.

For data center owners, this makes for extremely interesting news indeed. The Rice University team has made silicon that allows for deviation around an average of 2.5% and reduces energy demand by around 3.5 times. In terms of speed and size, the chips also delivered a 7.5-fold gain. For green data center operations, this can mean immense savings on overall energy management.

The first waves of application for these chips are likely to be in areas like portable electronic devices and other application-specific processors, where their built-in tolerance for error will find increasing use.
Pervasive computing and distributed computing. Chances are, your customers have been using both of those terms a lot lately. But it's up to you to set the record straight and identify how they can make the most of both computing architectures.

IBM coined the term pervasive computing shortly after Lou Gerstner took over in 1993. It means that computing is being done everywhere in a corporation, from wireless devices all the way up to the mainframes in the glass house. That's why IBM is rushing around the globe to secure deals to run front-end slices of its MQ Series middleware and DB2 on cell phones and other vendors' handheld devices. IBM believes it can be everywhere, either with its own products or via partnerships with other vendors, ISVs or integrators that develop their own vertical applications. (Just as a side note, IBM has been pretty good at coining terms of late. It made the term e-business a household word, which is a vast improvement from the old days of arcane names like Systems Network Architecture and Customer Information Control System.)

As important as pervasive computing these days is the term distributed computing, which is making quite a resurgence. The idea of sharing data among all processors on a network has been floating around for at least a couple of decades, and it's impossible to figure out who first coined it. Sun has made it a centerpiece of its research over the past decade, and before that, Novell was on the case. No matter who gets the credit, the most recent iteration of distributed computing is different from the old term. Rather than referring to sharing the processing, the term now has come to mean sharing data as well.

Neither term does justice to what's really happening. Pervasive computing is a concept for selling a soup-to-nuts line of computer hardware and software. Distributed computing, in its current iteration, is a way of selling a variety of networking products and services. What we're really going to do with all this stuff, however, depends upon whatever checks and balances are put into the back shop of most organizations. The shape of things to come isn't about raw potential; it's about what makes sense and how it gets deployed.

What we're really talking about is integrated computing. If a company's back-end systems aren't equipped to handle a variety of data, as well as provide authentication and security, it doesn't matter what you call it. It's a disaster waiting to happen. There's more to terminology than just hype. Your customers need to know that, or they may not be willing to foot the bill for all the services you believe they need.
Mac Virus Terms

Antivirus: A program that protects your computer from viruses and malware by scanning, disinfecting and repairing infected files. It looks for bits of code that make up the virus's "signature" in certain places in files and applications.

Archive: A file that contains several files, and is usually compressed, to save space. (Used in Intego VirusBarrier.)

Backdoor: A backdoor is a bit of malware that provides remote access to an infected computer. It is essentially a program that opens a port - a door - on an infected computer, allowing malicious users to access that computer, either to steal data, or to control it and use it as a part of a botnet.

Backup: A backup is a copy of files and folders made from one location, usually your active Mac, to another for safekeeping. Backups can be made to other computers, to other disks or partitions, or to removable media, such as CD-ROMs, DVDs or memory sticks. (Used in Intego Personal Backup.)

Bayesian analysis: A statistical method used to determine whether incoming e-mail messages are spam. This analysis uses a database of good and bad words, and weights the resulting analysis of each message according to the frequency of each type of word. (A sketch of this scoring idea appears after the glossary.)

Botnet: A botnet, while not a form of malware, can be the consequence of a malware attack. Often created by a Trojan horse or a worm, a botnet - a network of compromised computers - can be used to send spam or to attack computers. Intego found a Trojan horse, called iServices, responsible for a Mac botnet in January 2009, hidden inside pirated copies of popular Mac software.

Clone: See "bootable backup." (Used in Intego Personal Backup.)

Cookie: A file on your hard disk that contains information sent by a web server to a web browser and then sent back by the browser each time it accesses that server. Typically, this is used to authenticate or identify a registered user of a web site without requiring them to sign in again every time they access that site. Other uses are, e.g., maintaining a "shopping basket" of goods you have selected to purchase during a session at a site, site personalization (presenting different pages to different users), and tracking a particular user's access to a site. (Used in Intego VirusBarrier and Washing Machine.)

Disk image: A disk image is a volume that is created as a file. You can copy disk images from one volume to another, and, when you double-click them, they open as if they were separate disks. On Mac OS X, disk images generally have the .dmg extension. (Used in Intego Personal Backup.)

DNS: Domain Name System. Used by routers on the Internet to translate addresses from their named forms, such as www.intego.com, to their IP numbers. (Used in Intego VirusBarrier.)

Exploit: While not malware as such, exploits allow hackers to take advantage of software vulnerabilities, the weak spots in the armor of a computer's security. Serious vulnerabilities are regularly found affecting Mac OS X, and exploits, which are often just booby-trapped web pages or doctored files, can be used to break through a Mac's defenses.

FTP: File Transfer Protocol. A protocol used for transferring files from one server to another. Files are transferred using a special program designed for this protocol, or a web browser.

HTTP: HyperText Transfer Protocol, the protocol used to send and receive information across the World Wide Web.

Incremental backup: A strategy whereby you perform a complete backup once, and then on each subsequent backup copy only files that have changed. (Used in Intego Personal Backup.)

IP: The network layer for the TCP/IP protocol suite widely used on Ethernet networks and on the Internet.

IP address: An address for a computer using the Internet Protocol. (Used in Intego VirusBarrier.)

Linux and Unix viruses: Linux is not immune to malware. While many users think that Linux systems are safe, they are no safer than any other operating system. Linux has a full range of malware: viruses, worms, Trojan horses and more. Intego VirusBarrier X6 detects this type of malware, ensuring that Mac users don't pass infected files on to Linux users. The same is the case for the Unix operating system: while there is not a great deal of malware affecting Unix, there is some, and businesses running Unix servers need to be protected from these. Again, Intego VirusBarrier X6 stops Unix viruses from spreading from Macs to other computers.

Local area network (LAN): A network of computers linked together in a local area. This may be a single building, site or campus.

Mac viruses: A virus is a small bit of executable code that spreads when users open infected files or applications, by copying its code to other executables on a user's hard disk, or in memory. There are two viruses that affect Mac OS X: OSX.MachArena.A, a standard virus, and OSX/Oomp-A or Leap.A, which combines the techniques of Trojan horses, viruses and worms. While both of these have been found in the wild, neither is widespread. The word "virus" is used - usually incorrectly - by the general public, and even the press, when talking about malware.

Macro virus: Some programs let users create "macro" commands, using both menu commands and a programming language to create routines to save time and perform complex tasks. Microsoft Word and Excel are the two most common such programs, and macro viruses, which use the Visual Basic language that these programs work with, are very common. They can damage Word or Excel files, and render these applications unusable. In addition, they are cross-platform: many macro viruses affect Windows and Mac OS X alike. Microsoft removed Visual Basic from Office 2008, but Office 2011 saw the return of this feature, and the return of the risk of macro viruses affecting these applications and files created with them.

Malware: Viruses, Trojan horses, spyware and other dangerous types of computer code or programs are all grouped under the term "malware."

Newsgroup: A type of discussion group that uses a special protocol (NNTP) and special software. There are several tens of thousands of newsgroups, each dealing with very specific subjects. To access this kind of content, you need special software, or you can access them via a web browser, notably via Google Groups.

Packet: The basic unit of data sent by one computer to another across most networks. A packet contains the sender's address, the receiver's address, the data being sent, and other information. (Used in Intego VirusBarrier.)

Partition: A partition, or volume, is a logical part of a hard disk. It is possible to create many partitions on a hard disk, each of which functions as if it were a smaller hard drive. The operating system sees partitions as separate volumes. (Used in Intego Personal Backup.)

Ping: A program used to test reachability of computers on a network by sending them an echo request and waiting for a reply. (Used in Intego VirusBarrier.)

Ping flood: A ping attack on a computer, where the sending system sends a massive flood of pings at a receiving system, more than it can handle, disabling the receiving computer. (Used in Intego NetBarrier.)

Port scan: A procedure where an intruder scans the ports of a remote computer to find which services are available for access. (Used in Intego VirusBarrier.)

Protocol: The set of rules that govern exchanges between computers over a network. There are many protocols, such as IP, HTTP, FTP, NNTP, etc. (Used in Intego VirusBarrier.)

Removable media: Any data storage media that is inserted into a drive, such as a CD-ROM or DVD. (Used in Intego Personal Backup.)

Restoration: Restoration is the process of copying files from your backup to your active Mac, after files on the computer have been lost, erased or damaged. (Used in Intego Personal Backup.)

Server: A computer connected to a network that is serving, or providing data or files to, other computers called clients. (Used in Intego VirusBarrier.)

Service: A network function available on a server, e.g. http, ftp, e-mail etc. (Used in Intego VirusBarrier.)

Spam: Unwanted e-mail messages, usually sent to thousands, even millions of people at a time, with a goal of selling products or services. Also called unsolicited commercial e-mail, or junk mail.

Spyware: This is software that is installed maliciously, often by Trojan horses, and that collects information from an infected computer, then sends it to a remote server. Spyware is often used with a keylogger - a tool that records keystrokes typed on an infected computer - in order to try and capture such information as user names and passwords, credit card numbers, or other valuable information. When the creators of such tools collect the information, they can then exploit it by accessing user accounts, or using stolen credit card numbers.

Synchronization: Synchronization is the process of comparing two folders, volumes or disks, and ensuring that both contain exactly the same files; any files changed on one side are copied to the other. This is especially useful for ensuring that you have the same files on two computers you work on, such as a desktop Mac and a laptop. (Used in Intego Personal Backup.)

Traceroute: A utility used to determine the route packets are taking to a particular host. (Used in Intego VirusBarrier.)

Trojan horse: A Trojan horse is a file or application that claims to perform some useful task but contains malicious code. Several Trojan horses affect Mac OS X; one recent example is the Mac Defender Trojan horse, which Intego discovered in 2011, and which spawned new variants frequently in the early months of its existence. This was a fake antivirus - a type of malware used to scam users by trying to trick them into thinking they are infected by malware - which was widely circulated. Another recent Trojan horse is Flashback, a fake Adobe Flash installer, which installed a backdoor, allowing malicious users to access infected Macs. Trojan horses use "social engineering" - in short, trickery - to get users to install them. They are currently the most common type of malware affecting Macs.

Virus: A computer program or a bit of computer code capable of reproducing and propagating. Most viruses are malicious, and infect files by attaching to them. They then use these host files to spread when the files are opened or run.

Volume: A volume is, in essence, a hard drive, or other removable media unit. It can be an entire hard disk, a partition on a hard disk, a remote computer on a network, or a floppy disk. What is special about a volume is that it contains its own directory files indicating where, on the volume, files are stored.

Whitelist: This is a list of good addresses, usually those of your contacts, from which all messages are considered to not be spam.

Whois: An Internet directory service for looking up information on domain names and IP addresses. (Used in Intego VirusBarrier.)

Windows viruses: There are so many Windows viruses and other forms of malware that it is hard to get a good estimate of their number. Windows viruses and malware don't affect Macs, but using Intego VirusBarrier X6 allows Mac users to make sure they don't inadvertently pass infected files on to friends and colleagues using Windows.

Worm: Worms are one of the oldest forms of viral programs. They spread by methods other than attaching themselves to files and applications, and can be very difficult to find. They spread over networks, and, once they find new hosts, can carry out malicious actions. The OSX/Leap.A virus acts as a worm when it sends a copy of itself by iChat to a user's contacts.
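To make the Bayesian analysis entry above concrete, here is a minimal Python sketch of word-frequency spam scoring. It is a toy illustration of the general idea, not Intego's implementation; the training messages and the add-one smoothing are assumptions of the example.

```python
import math
from collections import Counter

def train(messages):
    """messages: iterable of (text, is_spam) pairs -> per-class word counts."""
    counts = {True: Counter(), False: Counter()}
    for text, is_spam in messages:
        counts[is_spam].update(text.lower().split())
    return counts

def spam_score(counts, text):
    """Sum of per-word log-odds with add-one smoothing; > 0 leans spam."""
    spam_total = sum(counts[True].values()) + 1
    ham_total = sum(counts[False].values()) + 1
    score = 0.0
    for word in text.lower().split():
        p_spam = (counts[True][word] + 1) / spam_total
        p_ham = (counts[False][word] + 1) / ham_total
        score += math.log(p_spam / p_ham)
    return score

counts = train([("cheap pills buy now", True),
                ("meeting notes attached", False)])
print(spam_score(counts, "buy cheap pills"))    # positive: spammy words dominate
print(spam_score(counts, "notes from meeting")) # negative: ham words dominate
```

The "database of good and bad words" from the glossary entry is the pair of counters here, and the weighting by word frequency is the per-word log-odds sum.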
Accurately modeling the flow of shockwaves across a fluid body can be a difficult thing to do. Attempts to address the problem by dialing up the computational accuracy of the models can actually make it worse. Now, researchers at the A*STAR Institute of High Performance Computing (IHPC) have come up with an innovative method that models shockwaves with a higher level of overall accuracy.

Like scientists in many fields, researchers working in computational fluid dynamics are accustomed to varying their models to suit the particular needs of their experiment. A scientist may test a fresh hypothesis with a low-order approximation that delivers a similarly low level of accuracy. He may follow that up by using a higher-order model that is tuned to deliver more accuracy and similarity to real-world conditions. Simultaneously, he may tighten up the three-dimensional computational mesh to get more data into the equation.

While one may assume that the higher-order model would deliver a better result, that is not the case when it comes to modeling shockwaves. As Vinh-Tan Nguyen of the IHPC explains, shockwaves are a special case. "Simulating flows using high-order approximations triggers oscillations, which cause miscalculations at the front of shock waves where the flow is discontinuous," Nguyen tells Phys.org. "It therefore becomes counterproductive to have high-order approximations in place right across shock regions."

Nguyen and his team addressed this problem by essentially de-tuning the model: they use lower-order approximations in the specific regions where shockwave fronts are active, which they detect using a sensor. The researchers simultaneously increased the resolution of the 3D computational mesh to compensate for the lower-order approximations. Nguyen explains the outcome: "With precise detection through the shockwave sensor we can apply the right capturing scheme to treat each shockwave, regardless of its strength," he tells Phys.org. "Our mesh adaptation procedure then simultaneously refines the mesh in shockwave regions and coarsens it in areas of least change, reducing computational costs significantly."

The new technique is applicable to modeling any high-speed shockwave, and is an improvement over previous approaches, which were specific to particular flow problems. The approach is expected to have real-world application in the fields of aerodynamics and blast analysis. The researchers say the computational scheme may also be useful for simulating the interface between air and water, which would be valuable in the marine industry.

The IHPC was established in April 1998 under the Agency for Science, Technology and Research (A*STAR). The organization promotes and spearheads scientific advances and technological innovations through computational modeling, simulation and visualization methodologies and tools.
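To make the sensor-plus-hybrid-scheme idea concrete, here is a toy one-dimensional Python sketch. It is not IHPC's actual scheme; the jump-based sensor, the threshold, and the upwind/central stencils are all simplifying assumptions of the example.

```python
import numpy as np

def shock_sensor(u, tol=0.2):
    """Flag cells where the jump to the right-hand neighbour is large."""
    jump = np.abs(np.roll(u, -1) - u)
    return jump / (np.abs(u).max() + 1e-12) > tol

def hybrid_derivative(u, dx):
    """Low-order near flagged jumps, higher-order in smooth regions."""
    low = (u - np.roll(u, 1)) / dx                       # first-order upwind: robust, diffusive
    high = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)   # central: accurate, oscillatory at jumps
    return np.where(shock_sensor(u), low, high)

# A smooth wave with one embedded discontinuity (periodic domain assumed)
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.sin(2 * np.pi * x) + np.where(x > 0.5, 1.0, 0.0)
print(shock_sensor(u).sum(), "cells flagged at discontinuities")
dudx = hybrid_derivative(u, x[1] - x[0])
```

The pattern mirrors the article: a cheap per-cell sensor decides where robustness matters more than formal order of accuracy, and the more accurate stencil is kept everywhere else.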
Cell Phone Use Increases Risk of Brain Tumors, New Study Finds

The International EMF Collaborative released a report to draw attention to studies linking brain tumors and cell phone use and to debunk the findings of the Interphone studies. The report emphasizes the dangers of cell phone use by children and teens and recommends awareness campaigns.

A group calling itself the International Electromagnetic Field Collaborative released an impassioned, 44-page report on Aug. 25 with the intent of drawing attention to studies showing a significant risk of brain tumors from cell phone use and exposing what it calls "design flaws" in the Interphone study protocol. The 13-country Interphone study is said to be the largest case-control study to investigate the relationship between brain tumors and cell phone use. The EMF Collaborative, which comprises the EM Radiation Research Trust, the EMR Policy Institute, ElectromagneticHealth.org and The Peoples Initiative Foundation, describes the Interphone study as funded by the telecom industry and biased in its methods and findings.

Wanting to "raise red flags" to alert government officials and journalists to findings beyond those of Interphone, the group cites data from international sources, including a Swedish study that found a 280 percent increased risk of brain cancer after 10 or more years of digital cell phone use. The Swedish study reportedly also cites a 420 percent increased risk of brain cancer for users who began using a cell phone as teens or younger, and, among adults, it found the risk of brain cancer to increase by 8 percent for every year of cell phone use.

The EMF Collaborative report, "Cellphones and Brain Tumors: 15 Reasons for Concern," includes concerns that research funded by the telecom industry has also found cell phone use to elevate the risk of brain tumors; that there have been warnings from governments, including those of the United Kingdom, Israel, Finland and Germany, about children's cell phone use; that cell phone radiation is shown to damage DNA, an established cause of cancer; and the little-discussed fact that many cell phone manuals warn users to keep the phone away from their bodies when it's not in use.

The Collaborative additionally offers recommendations for public safety, in light of its concerns. "We wholeheartedly echo the European Parliament's recent call for actions," the group writes. These actions include reviewing the scientific adequacy of existing cell phone use limits, creating wireless-free areas, such as schools and day care centers, and creating awareness campaigns geared toward children and young people.

John Walls, vice president of public affairs for CTIA, The Wireless Association, a nonprofit representing all aspects of wireless communication, issued a statement saying: "... The peer-reviewed scientific evidence has overwhelmingly indicated that wireless devices do not pose a public health risk. In addition, there is no known mechanism for microwave energy within the limits established by the FCC to cause any adverse health effects. That is why the leading global health organizations such as the American Cancer Society, National Cancer Institute, World Health Organization and the U.S. Food and Drug Administration all have concurred that wireless devices are not a public health risk."

A copy of the Collaborative's report is available at RadiationResearch.org.
Its author, Lloyd Morgan, told PC World: "Cell phones can be used appropriately and have a certain usefulness, but I fear we will see a tsunami of brain tumors, although it is too early to see that now since the tumors have a 30-year latency. I pray I'm wrong, but brace yourself."
Here are the slides for the talk I gave at RECON. The talk was on "Creating Code Obfuscating Virtual Machines". The videos of all the talks will be made available on the RECON website as well.

To get started writing your own virtual machine or programming for the MiniVM, you'll need to download the MiniVM suite (see below). This has the core CPU (aka VM) under core/minivm.inc. This file was intended to be compiled by MASM. You could of course compile this to an object file and link it into your C code. There is also a directory for compilers. There is currently just one, and it's Ruby based. This compiler is easily extensible, so you can use it for any VM you decide to create yourself. This should speed things up and give you a lot more flexibility when writing your own VMs. Both the compiler and the VM core *should* be able to compile on other platforms, but I haven't tested compiling the core with NASM yet.

These operations are currently supported by MiniVM:

- MOV r32, r32
- MOV [r1], r32
- MOV r1, [r1]
- MOV r32, value
- CMP r32, value
- INC/DEC r32
- ADD/SUB r32, value
- AND/OR r1
- XOR r32,r32
- PUSH/POP r32
- JMP (Relative address / Direct Address)
- JE, JL, JG value
- CALL r1/value

r32 in most cases means any of the registers. If you are using the supplied compiler and you enter an unsupported use of an operand, it will not only give an error but also show you all the possible valid ways to use that operand. You basically have 4 general purpose registers: r1, r2, r3, and r4, with r1 being the primary register. Every operation works with r1, but not necessarily with the others. You also have the registers IP and SP for instruction pointer and stack pointer manipulation, as well as a few others. See the slides for more information, or simply look at the core source.

I will be maintaining both MiniVM and the compiler. Please send me any patches or updates to either of these. Also, if you write anything really cool for MiniVM, I would like to see that as well. I'm sure the solutions for the Crackme will fill up quickly, but if you write up a good tutorial, send it to me and I'll post it as well.

Send emails to: agent.craig (at) gmail.com
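If you want a feel for what the dispatch loop inside a VM like this does before digging into core/minivm.inc, here is a tiny register-machine sketch in Python. The opcode names, the fused compare-and-jump, and the r1-r4 register file are assumptions of the toy, not MiniVM's actual encoding.

```python
def run(program):
    """Interpret a list of (opcode, *args) tuples on a 4-register machine."""
    regs = {"r1": 0, "r2": 0, "r3": 0, "r4": 0}
    ip = 0  # instruction pointer: an index into `program`
    while ip < len(program):
        op, *args = program[ip]
        ip += 1
        if op == "mov":          # mov dst, src (src: register name or immediate)
            dst, src = args
            regs[dst] = regs.get(src, src)
        elif op == "add":        # add dst, value (register name or immediate)
            dst, val = args
            regs[dst] += regs.get(val, val)
        elif op == "jl":         # fused cmp+jl: jump to target if reg < value
            reg, val, target = args
            if regs[reg] < val:
                ip = target
        elif op == "halt":
            break
    return regs

# Sum 0..9 into r1, with r2 as the loop counter.
prog = [
    ("mov", "r1", 0),
    ("mov", "r2", 0),
    ("add", "r1", "r2"),   # index 2: top of the loop
    ("add", "r2", 1),
    ("jl", "r2", 10, 2),
    ("halt",),
]
print(run(prog)["r1"])  # 45
```

An obfuscating VM builds on exactly this shape: the protected logic becomes custom bytecode that only your interpreter understands, which is what makes the resulting binary painful to reverse engineer.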
The National Nuclear Security Administration awarded Cray Inc. a $174 million contract to develop a new supercomputer capable of crunching data at speeds measured in multiple quadrillions of operations per second (petaflops) to conduct nuclear weapons simulations.

Neither Cray nor NNSA disclosed the speed of the new machine. However, if in operation today, the new machine would rank in the top 10 class of supercomputers, said Nick Davis, a Cray spokesman. Tianhe-2, a supercomputer developed by China's National University of Defense Technology, ranks as the world's fastest system, with a performance of 33.86 petaflops as of last month.

The new supercomputer, named Trinity after the code name for the first atomic bomb tested in August 1945, will be installed at Los Alamos National Laboratory and will also be used by Sandia National Laboratories, both located in New Mexico. NNSA did disclose that Trinity will run applications eight times faster than the Cray "Cielo" supercomputer installed at Los Alamos, which runs at a speed of 1.37 petaflops. Cray will develop Trinity with advanced chips from Intel, while Cielo uses chips from Advanced Micro Devices. Cray said Trinity will include 82 petabytes of storage.

Bill Archer, Los Alamos Advanced Simulation and Computing program director, said: "The needs of the mission drive the need for increased memory rather than computing speed alone. Trinity will be a very fast machine, but the real key is having enough memory to solve extremely complex calculations" for weapons simulation. Beefed-up memory will allow the labs to perform more detailed weapons simulations, NNSA said.

The United States adopted nuclear weapons simulation following a ban on all live nuclear tests in 1996. Cray will start deployment of Trinity in phases in mid-2015.
Throughout this blog I appear to use (or misuse) the terms SSL, TLS and HTTPS interchangeably. From time to time I catch myself and ask, "Which one should I be using?" Frankly, my default is to use SSL. When I reference an article or site, I do tend to side with the term it prefers. So what's the difference?

Secure Sockets Layer (SSL) is a cryptographic protocol that enables secure communications over the Internet. SSL was originally developed by Netscape and released as SSL 2.0 in 1995. A much improved SSL 3.0 was released in 1996. Current browsers do not support SSL 2.0.

Transport Layer Security (TLS) is the successor to SSL. TLS 1.0 was defined in RFC 2246 in January 1999. The differences between TLS 1.0 and SSL 3.0 were significant enough that they did not interoperate, though TLS 1.0 did allow a connection to be downgraded to SSL 3.0. TLS 1.1 (RFC 4346, April 2006) and TLS 1.2 (RFC 5246, August 2008) are the later editions in the TLS family. Current browsers support TLS 1.0 by default and may optionally support TLS 1.1 and 1.2.

Hypertext Transfer Protocol Secure (HTTPS), or "HTTP Secure," is an application-specific implementation that combines the Hypertext Transfer Protocol (HTTP) with SSL/TLS. HTTPS is used to provide encrypted communication with, and secure identification of, a Web server.

What terminology should we use? Since TLS has succeeded SSL, logic dictates that we should be using the term TLS instead of SSL. However, SSL is by far the more common term on the Internet, so SSL will continue to be my default acronym of choice when making non-application-specific references. From time to time, I will use SSL/TLS. When talking about HTTPS, I may use SSL, SSL/TLS or HTTPS, who knows?
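One way to see which of these names actually applies to a given connection is to ask for the negotiated protocol directly. Here is a minimal Python sketch using the standard library's ssl module (whose very name reflects the older term); example.com is a placeholder host.

```python
import socket
import ssl

ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        # Prints the negotiated protocol version, e.g. 'TLSv1.2':
        # in practice it is TLS doing the work, whatever we call it.
        print(tls.version())
```

Tellingly, the module is named ssl while the value it reports is a TLS version, which is the whole naming muddle of this post captured in one line of output.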