What is a Distributed Denial of Service (DDoS) event? It is different from a DDoS "attack". Some, such as Arbor Networks, have dubbed it "The Tiger Effect": June 2008's U.S. Open Golf Championship 19-hole playoff resulted in massive traffic spikes from those seeking real-time scores and streaming video feeds. A DDoS event is a massive focal point of interest that sometimes takes place on the Internet. Demand greatly exceeds normal levels, and the result is a Denial of Service effect. Web servers simply can't meet demand when these focus points occur, and their timing is not easily predicted. And even though DDoS events lack malicious intent, the results can often be just as painful as an attack. Here's a recent example from two weeks ago: North Carolina's unemployment rate is at its highest level in 25 years, and a deluge of out-of-work people has strained the state's jobless systems to the breaking point. State [websites] have crashed twice in the past month as people apply for or renew their unemployment benefits.
Source: https://www.f-secure.com/weblog/archives/00001587.html
Carnegie Mellon University researchers have come up with algorithms to help spot bugs in "cyber-physical systems" (CPS), the computerized mechanisms used to automate everything from aircraft collision avoidance to robotic surgery. Their breakthrough, which involves analyzing the logic behind system design, has already been used to find a flaw in an aircraft collision avoidance maneuver that has since been corrected. In some ways, the technique is similar to Model Checking, a widely used method of spotting errors in complex hardware and software systems. CMU professor of computer science Edmund Clarke pioneered Model Checking and is behind this latest research as well, along with Andre Platzer, an assistant professor of computer science at CMU. "Engineers increasingly are relying on computers to improve the safety and precision of physical systems that must interact with the real world, whether they be adaptive cruise controls in automobiles or machines that monitor critically ill patients," Clarke said in a statement. "With systems becoming more and more complex, mere trial-and-error testing is unlikely to detect subtle problems in system design that can cause disastrous malfunctions. Our method is the first that can prove these complex cyber-physical systems operate as intended, or else generate counterexamples of how they can fail using computer simulation." Detecting and fixing problems in CPS ahead of time could save transportation companies and others a lot of money, Platzer says, given that testing systems is currently so expensive. The research is funded in part by the NSF and the German Research Council.
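The article does not detail the researchers' algorithms, but the flavor of Model Checking, which it compares them to, can be sketched with a toy example: exhaustively exploring a discretized two-aircraft model and returning a counterexample trace if a "no collision" safety property can be violated. The model, names, and parameters here are all illustrative, not the CMU method:

```python
from collections import deque

def check_no_collision(start_a, start_b, cells=10, max_depth=6):
    """Breadth-first exploration of the joint state space of two aircraft
    on a 1-D corridor of `cells` positions. The safety property is "they
    never occupy the same cell". Returns None if no violation is reachable
    within max_depth states, or a counterexample trace otherwise."""
    start = (start_a, start_b)
    if start_a == start_b:
        return [start]                       # already violating the property
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (a, b), trace = queue.popleft()
        if len(trace) >= max_depth:          # depth bound reached
            continue
        for da in (-1, 0, 1):                # aircraft A holds or moves one cell
            for db in (-1, 0, 1):            # aircraft B does the same
                na, nb = a + da, b + db
                if not (0 <= na < cells and 0 <= nb < cells):
                    continue                 # outside the modeled corridor
                state = (na, nb)
                if state in seen:
                    continue
                if na == nb:
                    return trace + [state]   # counterexample: collision reached
                seen.add(state)
                queue.append((state, trace + [state]))
    return None                              # property holds within the bound
```

Real tools handle continuous dynamics and vastly larger state spaces, but the core idea is the same: either prove the property over all reachable states, or produce a concrete trace showing how the system can fail.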
Source: http://www.networkworld.com/article/2235371/data-center/cmu-researchers-address-worst-tech-nightmares--colliding-planes--trains-and-automobiles.html
When it is known that a data set is in sorted order, it is possible to drastically increase the speed of a search operation in most cases. The following section deals with an array-based implementation of the binary search. In a later section we will look more closely at the binary search tree data structure and its associated algorithm. Both of these methods take advantage of the fact that the search space is in some order to limit the area in which the target item is known to reside. An array-based binary search selects the median (middle) element in the array and compares its value to that of the target value. Because the array is known to be sorted, if the target value is less than the middle value then the target must be in the first half of the array. Likewise, if the value of the target item is greater than that of the middle value in the array, it is known that the target lies in the second half of the array. In either case we can, in effect, "throw out" one half of the search space with only one comparison. Now, knowing that the target must be in one half of the array or the other, the binary search examines the median value of the half in which the target must reside. The algorithm thus narrows the search area by half at each step until it has either found the target data or the search fails. The algorithm is easy to remember if you think about a child's guessing game. Imagine I told you that I am thinking of a number between 1 and 1000 and asked you to guess the number. Each time you guessed, I would tell you "higher" or "lower." Of course you would begin by guessing 500, then either 250 or 750 depending on my response. You would continue to refine your choice until you got the correct number.
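The halving procedure described above is short to implement. A sketch in Python (the page itself presents no code, so this is illustrative):

```python
def binary_search(arr, target):
    """Array-based binary search over a sorted list.
    Returns the index of target in arr, or -1 if it is absent."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2      # median element of the current region
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1            # target must lie in the upper half
        else:
            high = mid - 1           # target must lie in the lower half
    return -1                        # search space exhausted: not found
```

In the guessing-game terms used above, `mid` is your guess, and moving `low` or `high` is the "higher"/"lower" response narrowing the range.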
Source: http://www.darkridge.com/~jpr5/mirror/alg/node10.html
From the Who Ever Would Have Guessed It Department, the Consumer Electronics Association (CEA) has come out with a report detailing the “materials footprint” of televisions and computer monitors from 2004 through 2011. In comparing televisions in the 13″ to 36″ size range, the group determined that flat panel LCD TVs weigh 82% less than CRT (picture tube) models, and take up 75% less space. That’s hardly surprising. The fact is that few of those 13″ CRT models have been replaced with 13″ flat panels, however. Maybe the larger flat panels balance out the gains. This is where the report has some interesting results; today’s 40″ to 70″ flat panels still weigh about 34% less than those smaller picture tube sets. The bottom line is that in spite of the much bigger screens, our televisions consume far less in materials. Not only does this reduce the impact of manufacturing the products, but it also reduces the energy costs required to move them around the world. We’ll still have to deal with the CRTs moving through the waste stream — hopefully being recycled — but as they reach their end of life, we can expect the overall amount of electronic waste to decline, according to the report. So there is a case to be made that the larger flat panel that replaces your old picture tube set will be easier on the environment overall.
Source: https://hdtvprofessor.com/HDTVAlmanac/?p=1534
North Carolina is building a system that could potentially detect early indicators of health threats and diseases across the state before they become public outbreaks — even as early as the same day a few students call in sick to school. The North Carolina Bio-Preparedness Collaborative, a partnership between the University of North Carolina, Chapel Hill; North Carolina State University (NCSU) and software analytics company SAS, will analyze data from various organizations in an attempt to better understand past medical emergency patterns, which will in turn help predict and control future outbreaks. For example, the Emergency Medical Services Corp. receives 1.5 million ambulance calls every year and holds records of those calls from many years past, said David Potenziani, the collaborative’s executive director. “But they don’t necessarily understand patterns,” he said, “and they certainly don’t understand how they can detect anomalies, which we believe are the pathway to being able to detect emergencies and potential threats to human health.” Potenziani and his “dream team,” as he calls them, of public health and health-care experts have begun to gather that data, along with other files from emergency room visits, hospitalizations, tainted food reports and veterinary records. This amalgam of data will help the team create thresholds that differentiate normal health patterns and environmental changes from natural or man-made health threats. “What we’re doing is trying to understand phenomena for data that’s collected for other purposes,” he said, and then use that data for detecting health hazards. For example, his team is currently trying to get access to school attendance records throughout the state in order to recognize emerging diseases hours before they appear in the reported epidemiological systems — when parents take their children to the doctor.
Before it’s officially reported, “that illness is invisible to the health-care and public system,” Potenziani said, which leaves adequate time for the illness to spread. The team will also have access to poison control data very soon, he added. The idea for the system sprouted in 2007 when a group of University of North Carolina faculty was talking about coughing, Potenziani said. Questions started to form about how to detect threats like avian flu that originate in the natural world and in nonhuman species. These disease vectors are hard to detect and therefore can spread to thousands of humans without warning. When discussing how a system could track patterns in individual cases — efforts have been tried and have failed at the federal and national levels, said Potenziani — the team focused on ways to avoid the pitfalls of previous attempts. One way is gathering data from the original file location, where it’s the most accurate. That’s preferable to transferring the data to a center where the team would then hold and analyze it. “The goal is to conduct analytics in real time, but the challenge is getting access to the data in a timely fashion,” he said. Current technology allows the team to scale computational resources to various sizes, and the team is planning on speeding up that process as local data collection advances. Another unique aspect, Potenziani said, is taking technology that was created for another function and repurposing it for this project. For example, the preparedness system runs on a cloud computing technology created by NCSU and IBM, called the Virtual Computing Lab, which was initially developed to support education by providing a configurable platform for instruction. Adapting it to serve biosurveillance brings the cost of the project down to a fraction of what has been spent on previous similar federal projects, Potenziani said. To date, the project has received $5 million from the U.S. Department of Homeland Security.
Currently the system is operational on a limited basis; new data sources are being added. Potenziani said he hopes to complete the system by this summer. Eventually he’d like to expand the program nationally.
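The collaborative’s actual models aren’t described in detail, but one common baseline technique for the kind of thresholding discussed above is a rolling-window anomaly detector: flag a day whose count sits far above the recent baseline. A minimal sketch, with invented data and parameters:

```python
import statistics

def flag_anomalies(daily_counts, window=7, k=3.0):
    """Flag day i if its count exceeds mean + k * stdev of the
    preceding `window` days (the rolling baseline). This is a generic
    illustration, not the collaborative's model."""
    flags = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu = statistics.mean(baseline)
        sigma = statistics.pstdev(baseline) or 1.0  # guard against a flat baseline
        if daily_counts[i] > mu + k * sigma:
            flags.append(i)
    return flags

# A week of ordinary daily call counts, then a sudden spike on day 8.
calls = [10, 11, 9, 10, 12, 10, 11, 10, 40]
print(flag_anomalies(calls))  # only the spike day crosses the threshold
```

Real biosurveillance systems combine many such signals (ambulance calls, ER visits, school absences) and account for seasonality, but the core idea of separating normal variation from an anomaly by a learned threshold is the same.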
Source: http://www.govtech.com/technology/North-Carolina-Predict-Health-Hazards.html
AutoMate can easily interact with most standard database systems and their associated databases through the "SQL Query" and "Stored Procedure" actions. This involves setting up an initial ODBC connection to the desired database, which is a necessary procedure in Microsoft Windows to execute SQL statements on database engines provided by various vendors. This is ideal for automated retrieval, update, manipulation or transfer of data stored in a database. All standard SQL statements or commands are supported, such as "Select", "Insert", "Update", and "Delete". The "SQL Query" action passes a SQL statement (including, but not limited to, queries) to the datasource specified via OLEDB. If a query is specified, an AutoMate dataset with the name specified is created and populated with the query results. A "Loop Dataset" action can then be used to loop through the data populated by the dataset. The "Stored Procedure" action executes the selected stored procedure via OLEDB on the datasource specified. Stored procedures are SQL statements with assigned names that are configured and stored in the database server in compiled form so that they can be shared by a number of programs. Stored procedures are often faster than repeated SQL calls. The "Open SQL Connection" action opens a database connection using the specified custom or predefined connection string. The connection can be identified by a unique session name, which can be referenced by subsequent SQL steps. This allows multiple SQL connections to run simultaneously.
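The query-then-loop workflow of the "SQL Query" and "Loop Dataset" actions has a direct analogue in most programming environments. A minimal sketch using Python's built-in sqlite3 module (AutoMate itself connects via ODBC/OLEDB; the table and data here are made up for illustration):

```python
import sqlite3

# Stand-in database: an in-memory SQLite table with a couple of rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "Acme", 250.0), (2, "Globex", 99.5)])

# Equivalent of the "SQL Query" action: run a statement, capture a dataset.
dataset = conn.execute(
    "SELECT customer, total FROM orders WHERE total > 50 ORDER BY id"
).fetchall()

# Equivalent of the "Loop Dataset" action: iterate over the populated rows.
for customer, total in dataset:
    print(f"{customer}: {total}")
conn.close()
```

The same pattern (connect, execute, iterate) applies whatever the driver; only the connection string and SQL dialect change.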
Source: http://www.networkautomation.com/automate/features/database-connectivity/
You may not be able to move mountains, per se, but in Texas, higher education breaks down the geographic barriers separating public and private institutions with a statewide high-speed fiber-optic network. The network enables collaborative research efforts with throughput speeds greater than anything Texas education has ever experienced. The Lonestar Education and Research Network (LEARN) -- operated by a nonprofit 501(c)3 organization consisting of members from 33 higher-education institutions -- allows participating Texas educational institutions access to the Internet, Abilene (a multistate backbone network for research and educational institutions to access Internet2), and the nation's largest educational research network -- the National LambdaRail (NLR). The NLR, the inspiration for LEARN, lets scientists and researchers merge brain and computing power to conduct large-scale research efforts through optical data exchange -- creating a national infrastructure for networks across the nation to tie into. Texas higher-education institutions wanted to access the NLR, but the state needed its own high-speed network to provide the backbone for statewide connection. LEARN will provide that backbone, though it's in the early phases of implementation. Texas officials said LEARN is expected to be complete this year, and as the network grows, more institutions will connect and gain access to more extensive international research networks.

For the Common Good

The idea for LEARN was sparked at a 2002 meeting after a presentation by Tom West, president of the Corporation for Education Network Initiatives in California (CENIC), and co-founder and CEO of the NLR. West talked about the benefits of having access to a national education and research network, and offered an active node in Texas and a seat on the NLR's board if Texas could contribute $5 million over the next five years.
If Texas had an active node to the LEARN network, institutions with access to that node would also gain access to the NLR. Texas opted in, and 23 schools participated in funding the NLR. A year later, 22 institutions agreed to contribute an annual fee of $20,000 each to support the development and recurring costs of what would become LEARN. Additionally, Texas granted LEARN $7.3 million from the Texas Enterprise Fund, created by Gov. Rick Perry to boost employment. Within this $390 million fund, $55 million was allotted to technology and biotechnology, with one goal being to support university research. The purpose of LEARN is to create and operate a unified, statewide, cost-effective advanced-performance data network for research and education in Texas that is the equal of any in the United States, said Dan Updegrove, vice president of Information Technology at the University of Texas at Austin, and LEARN's chairman of the board. LEARN's member institutions continue to pay the annual fee to maintain the network and to provide future funding for a dedicated technical staff to replace the current volunteers from member institutions. LEARN's members collaborate in a way they've never experienced before, said Dave Edmondson, associate provost of Information Services for Texas Christian University and vice chairman of LEARN. "We sit down around a table together, and we talk about issues and how we might collaborate between them -- not only in networking but in other issues as well," he said. "This is a vehicle to facilitate the possibility of collaboration amongst all of our institutions in the state of Texas." Although higher-education institutions compete to recruit students, faculty and staff, Edmondson said, those institutions still share knowledge in a way that corporate America doesn't. Higher-education institutions also contribute to state coffers. 
With technology driving the education process these days, technology itself is an alluring feature for prospective students, and in turn, a major benefit to state economies. Students want access to the wealth of information technology and may select a school based on the technology in place, according to Updegrove. Perhaps this is partly why the state was compelled to fund LEARN's creation. "Part of what we looked at when we were asked by leadership -- the governor, lieutenant governor and the speaker -- to make our recommendations on LEARN, was as they build this incredible network, are there opportunities for the state to also leverage the state investment?" said Larry Olson, CTO of Texas and director of the Department of Information Resources (DIR). One way the state can benefit from LEARN is by having a redundant network. According to Texas law, the DIR may tap into LEARN in emergency situations. In case of a single node or systemwide failure of the state's telecommunications system, the DIR can divert telecommunications services traffic to LEARN to avoid service interruption. The state also plans to use LEARN as a cost-effective pipe for information exchange between state and local governments. Two services in particular -- managed e-mail and the state's 211 voice over Internet protocol network -- could benefit from LEARN's available resources, Olson explained. "Managed e-mail is a statewide messaging collaboration procurement in the final throes that looks at e-mail as a managed service," he said. The e-mail service would not only include the 65,000 mailboxes in the voluntary state government program, but also allow e-mail as a contracted managed service to city and county governments. The 211 network -- once used solely for the Texas Health and Human Services Commission -- is now a statewide network that could use a portion of LEARN's extra network capacity to tie in counties and cities. 
"We're just finding very cost-effective ways to leverage our existing infrastructure, including LEARN, to provide those services in a very value-added way to our customers, whether they're state agencies, cities or counties, K-12 or universities," said Olson. "The tie to LEARN gives us a broader solution or resource base that enables us to do so much more with what we already have."

A Bright Idea

Aggregate computing technology allows computers on LEARN to share processing power and increases overall available computing capacity. "In essence, you can build a super computer anywhere you have a network," said Mickey Slimp, executive director of the Northeast Texas Consortium of Colleges and Universities, comprising 15 public colleges and universities, and chairman of the public relations task force for LEARN. Slimp said the University of Texas Health Center at Tyler performs molecular diagnostics in which a single calculation may take several weeks to accomplish because of limited computing capacity. With LEARN, similar computations could take only hours or even minutes. LEARN can also aid distance learning. Slimp explained that several small networks scattered throughout Texas host interactive video classes. Currently institutions lease T1 lines that cost anywhere from $250 to $3,000 per month. "By creating the network throughout the state [with LEARN], we can eliminate a lot of those piecemeal charges and consolidate our cost so we can start running classes between institutions throughout the state and do it on an ad hoc basis -- whenever we need to set it up, we can do it with a lot less effort than we do now." Additionally, sharing applications would be a financial gain for institutions. Course management systems and software applications currently cost institutions from $25,000 to $5 million to purchase and implement, according to Slimp, but pooling resources would cut costs significantly.
"If the University of Houston has a project that involves Southern Methodist University and the University of Texas at Arlington, they'll have real-time communication and can actually set up a virtual network between each other," said Slimp, noting that application sharing isn't currently happening but is planned for the future. "They'll have the capacity to share software, to share programs, to look at each other by video at the same speeds across the state that they can do across their campus, in essence eliminating the technological distance between them."

Sparking New Light

The nonprofit organization operating LEARN purchased multiple 20-year leases of dark fiber to create the physical layer of the LEARN network. Optical nodes at each member institution enable data transfer over fiber optics at speeds of 1 Gbps to 10 Gbps. LEARN's implementation is currently under way. The initial network infrastructure was formed through combining five existing Texas links to Internet2 -- Texas Gigapop in Houston; North Texas Gigapop in Dallas; University of Texas at Austin; Texas Tech University; and the University of Texas Southwestern Medical Center. These links form the foundation of LEARN's network, and LEARN includes three phases of network implementation. The first phase connected the following: Denton to Dallas; Dallas and Waller to College Station; College Station to Houston; Houston to Austin; Austin to San Antonio; and San Antonio to El Paso. In addition, the NLR now runs through Texas because the new network infrastructure in place provides access points to it. More interconnections are slated for January and March, and eventually remote sites will receive service as well. The first cities to provide a point of presence (POP) -- an access point from one place to the rest of the Internet -- include Austin, Beaumont, College Station, Corpus Christi, Dallas/Fort Worth, Denton, El Paso, Houston, Longview, Lubbock, San Antonio, Waco and Waller.
Institutions in and around the POPs must provide their own means to connect to the POP to then connect to the Internet. LEARN is like a highway with multiple access ramps, explained LEARN's Executive Director Jim Williams. "Each connecting institution will provide the roads that lead to those access ramps," Williams said. There's plenty of room to grow because LEARN's fiber has spare capacity, said Edmondson. "Hopefully fiber strands not being utilized today will allow us to grow and support functionality that we don't even know how to dream about right now," Edmondson said. Point-to-point links, two-strand dedicated fiber circuits that transmit data back and forth between two locations, are possible between any two cities' fiber pairs, said Williams, and such links will allow connections to external resources such as the Internet, or enable people with special needs to access remote data centers at very high bandwidths. Multiple 10 Gbps connections can occur between most LEARN cities, he said. Several states have built higher-education networks like LEARN, but Texas' size creates a unique challenge. According to the Texas Almanac, Texas spans 268,581 square miles and could fit New England, New York, Pennsylvania, Ohio and North Carolina within its borders. With a state this size, creating a statewide network is no small feat. Some institutions in remote areas don't have regional networks to connect to LEARN. "This is sparking some activity beyond just the basic LEARN backbone," said Slimp, who said some remote colleges are now creating their own networks to tie into LEARN. As LEARN ramps up, the details of connecting users other than higher education are being worked out. In the meantime, LEARN members are focused on bringing nationwide research networks to higher education and research institutions.
"Our focus is primarily serving higher education, research and health science users in Texas, but if we can find a way to beneficially serve others, we'd like to find a way to do that," said Williams. With the future expansion of LEARN, Williams said K-12 education institutions could benefit also. "I'd love to serve K-12, and we will, at least indirectly later on," he said. "For example, a number of K-12 institutions connect to either the commodity Internet or Internet2 via networks that will use the LEARN fabric as part of their backbone." But connecting to LEARN may be difficult for some schools, which might not have the technology, equipment and expertise to support the large bandwidths necessary to connect to LEARN. "Keep in mind, that at least at present, we aren't really able to provide any direct network service at units smaller than one gig, and that may be a bit much for some institutions to deploy," said Williams. Originally LEARN's inception only took into account higher education and research institutions, so they are the first to reap its benefits. "LEARN was conceived by and created for institutions of higher education, so we anticipate our primary focus will remain there," said Updegrove. "That said, many of our members have long-established partnerships with the K-12 community, as well as public libraries, independent research institutions, museums, and such, so it is not too much of a stretch to envision that the LEARN network could be used by these partners." With essentially 33 bosses in both public and private higher education, LEARN members are challenged with reaching consensus each step of the way. "The nature of having multiple statewide university systems -- A&M and UT systems -- and all of the independent schools, makes it particularly exciting and challenging to come up with one thing," Slimp said. 
"Texas, in terms of higher-education institutions, is fiercely independent, so if you can create a model that works in Texas, it'll work anywhere."
Source: http://www.govtech.com/education/Out-of-the-Dark.html
A quick guide to computer worms - what they are, how they spread and the potential effects of having a worm infection.

What is a Worm?

A worm is a type of malicious program that spreads copies of itself to other devices over a network. Though worms are most often seen on computer systems, they can also infect mobile devices.

Spreading over networks

Worms can theoretically be transmitted over any kind of network. The Internet, e-mail systems and instant messaging (IM) channels are the most common networks used to spread worms to computer users, but SMS or MMS messages and Bluetooth transmissions have also been used to distribute worms to mobile users. Worms can also affect the accounts of users connected to social media networks, such as Facebook or Twitter. Even devices unconnected to the Internet or mobile networks are not entirely immune to a worm's reach, if it is designed to spread on removable media such as USB sticks. To convince users to run or install the malware themselves, worm authors will often disguise the program as a tantalizing video, image or piece of software. These are then distributed with an accompanying message ("LOL! This video is so cool!" or "See attached file for payment info") designed to pique the user's curiosity and lure them into running the attached file. If the user does so, then the worm is silently installed on the device. Some worms may also spread by exploiting vulnerabilities in an installed program or network. This allows them to automatically spread and infect new machines without any user action needed at all.

Flooding the network

Once installed, the worm will replicate itself. These copies may be identical to the original sample, though more sophisticated worms will vary the details of the copies to make them harder to detect. To spread their copies, worms often exploit a vulnerability, either in the operating system or in an installed program.
Usually, worms will focus on spreading themselves over one network – for example, just over the Internet – but more advanced worms will try to spread over multiple networks for maximum impact. A worm typically creates its biggest impact by the way it spreads, since this can generate a significant amount of network traffic. If multiple infected machines on a single network are sending out worm copies, the stability of the network may suffer. In extreme cases, worms can overwhelm the network's capacity, preventing normal access until the worm has completed its distribution routine. Many worms are only designed to replicate and spread their copies, but some also perform more malicious actions. These can range from something as simple as changing a wallpaper, to more damaging actions like stealing information or installing other malware.

The cost of network disruptions

The effects of a worm outbreak can be financially significant, especially if business or government networks are affected. Worm outbreaks can be significant enough to disrupt major national or even international networks, as was seen in the 2009 Conficker outbreak, which affected an estimated 2.1 million IP addresses around the world. The cost of cleaning up even a single affected network, in time, labor and lost productivity, can run into the millions. Worm outbreaks still occasionally occur, though nowadays they are more often seen on social media networks and involve exploits of vulnerabilities. As these networks take prompt action to correct the issues, these outbreaks have been smaller in scale than the massive disruptions of the past. Still, user vigilance is always recommended to avoid being personally troubled by this type of malware.
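The way replication traffic scales with the infected population can be illustrated with a toy simulation (a simple epidemic model for illustration only, not actual worm behavior or code):

```python
import random

def simulate_worm(hosts=100, probes_per_round=5, rounds=10, seed=1):
    """Toy epidemic model of worm replication: each round, every infected
    host probes a few random hosts, and any vulnerable target it reaches
    becomes infected too. Returns the infected count after each round and
    the total number of probes sent -- the scanning traffic that can
    saturate a network as the infected population grows."""
    rng = random.Random(seed)
    infected = {0}                         # patient zero
    history, probes = [], 0
    for _round in range(rounds):
        newly_infected = set()
        for _host in infected:
            for _ in range(probes_per_round):
                target = rng.randrange(hosts)
                probes += 1                # every probe is network traffic
                if target not in infected:
                    newly_infected.add(target)
        infected |= newly_infected
        history.append(len(infected))
    return history, probes
```

Because each newly infected host starts probing too, traffic grows with the infected population; that compounding is why outbreaks like Conficker could disrupt networks far beyond the machines actually compromised.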
Source: https://www.f-secure.com/en/web/labs_global/worms
This week, a PopSci feature takes an in-depth look at the NCSA Advanced Visualization Lab (AVL) and the team of experts responsible for transforming the mysteries of the cosmos into eye-popping cinema. The lab specializes in creating grand portrayals of astronomical events, the kind of films usually screened in domed museum theaters. The 2010 IMAX film, Hubble 3-D, which opened to wide acclaim, contains sequences produced by the AVL. Using the processing power of the University of Illinois’ National Center for Supercomputing Applications, the lab creates high-quality, high-resolution, data-driven 3-D visualizations that are not just visually stunning, but scientifically accurate. AVL Director Donna Cox comments on the nature of the work performed by the lab: “Visualization is a supercomputing problem when you have terabytes of data. A lot of places do not have the supercomputing power that we have, so we have focused on leveraging state of the art computer graphics tools and embedding them in a supercomputing environment where we can devote all these processors to the problems of visualization.” Tapping into their passion for both art and science, the AVL team brings to life the extraordinary events of the cosmos. With 25 years of experience, they have learned a trick or two, often writing custom software from scratch in order to achieve a particular outcome. To be sure their reproductions are as authentic as possible, the team checks and re-checks their data, accounting for both physics and physiology. Without the work of the AVL, the data would just be sitting in storage somewhere. As mentioned in the article, data “is meaningless if it can’t be represented in a way that makes sense, not just to scientists but to the public.” The AVL remains committed to making visual sense of the complex world of astrophysics, but wants to venture into other disciplines as well.
There are plans to move into the geosciences and life sciences and eventually to create visualizations that draw from both the humanities and sciences to shed light on global trends like migration. Says Cox: “[It’s the] visuals that help us understand the complexity of nature.”
<urn:uuid:7a115629-93ea-4343-a56b-ba5a9a78f5f3>
CC-MAIN-2017-04
https://www.hpcwire.com/2011/01/25/advanced_visualization_lab_turns_science_into_cinema/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00543-ip-10-171-10-70.ec2.internal.warc.gz
en
0.933994
470
2.71875
3
Microsoft Researchers have been working on a technology that would let mobile phones and other 3G devices automatically switch to public WiFi even while the device is traveling in a vehicle. The technology is dubbed Wiffler and earlier this year, researchers took it for some test drives in Amherst, Mass., Seattle and San Francisco. Mind you, WiFi was available only about 11 percent of the time for a mobile device in transit, the team discovered, compared to 87% of the time 3G was available. So it would stand to reason that, at best, the mobile device would only be able to use WiFi a tiny bit of the time. However, the Wiffler protocol allowed the device to offload nearly half of its data from 3G to WiFi. How so? Wiffler is smart about when to send the packets. It doesn't replace 3G, it augments it and transmits over WiFi simultaneously, allowing users to set WiFi as the delivery method of choice when it is available -- and when an application can tolerate it. Not every application can handle even a few seconds delay in the stream (VoIP) -- and WiFi tends to drop more packets than 3G does. But many apps can handle even a minute's worth of delay perfectly well (messaging). Wiffler uses what researchers call "prediction-based offloading" in which it determines how likely it is to travel within the area of an acceptable WiFi hotspot within a certain time frame. If the car is moving in an urban area, discovering frequent hotspots, it predicts it will find another one quickly. If it is traveling on a highway and hasn't run across a hotspot in a while, it figures it won't find another one soon. If the device doesn't find a hotspot within a predicted maximum delay time it goes ahead and fires up the 3G. "We try to ensure that application performance requirements are met. So, if some data needs to be transferred right away (e.g., VoIP) we do not wait for WiFi connectivity to appear. 
But if some data can wait for a few seconds, waiting for WiFi instead of transmitting right away on 3G, that can reduce 3G usage," Ratul Mahajan told me in an e-mail interview. Mahajan is a researcher with the Networking Research Group at Microsoft Research Redmond. Mahajan worked on the project with two teammates, Aruna Balasubramanian and Arun Venkataramani, both of whom are researchers at the University of Massachusetts Amherst. "The second feature is that we may actually use both connections in parallel instead of using only one. So, if we deem that some data cannot be transferred using WiFi alone within its latency requirement, we will use both 3G and WiFi simultaneously. This parallel use is different from a handoff from one technology to the other, and it better balances the sometimes conflicting goals of reducing 3G usage and meeting application constraints," Mahajan explained. The results of the test were presented in a paper, Augmenting Mobile 3G Using WiFi (PDF), delivered in June 2010 at the eighth annual International Conference on Mobile Systems, Applications and Services. The test consisted of running Wiffler units on 20 buses in Amherst, MA, as well as in one car in Seattle and one in San Francisco at SFO. The Wiffler unit itself was a proxy device that included a small-form-factor computer, similar to a car computer (no keyboard), an 802.11b radio, a 3G data modem, and a GPS unit. The 3G modem was using HSDPA-based service via AT&T. Interestingly, the researchers didn't actually use Wiffler with mobile phones during their tests. They ran phone-like applications on the embedded computer. But the project team does envision Wiffler as a technology for smartphones, perhaps embedded directly into the smartphone. It could also be adapted to run in an in-vehicle infotainment system. While this research focused on using free WiFi in a moving vehicle, Mahajan says Wiffler could be used in other ways. 
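The offloading decision Mahajan describes, wait for WiFi when the data can tolerate delay and fall back to 3G when a deadline is at risk, can be caricatured in a few lines. This is a toy sketch of the idea only, not Wiffler's actual logic; the function name, inputs, and thresholds are invented for illustration:

```python
def choose_link(deadline, now, wifi_up, predicted_wifi_wait):
    """Pick a link for one queued transfer, Wiffler-style.

    deadline: latest acceptable delivery time for this data
    wifi_up: is a usable hotspot available right now?
    predicted_wifi_wait: estimated time until the next hotspot appears
    """
    if wifi_up:
        return "wifi"   # offload for free while we can
    if now + predicted_wifi_wait <= deadline:
        return "wait"   # WiFi is predicted to appear before the deadline
    return "3g"         # deadline at risk: fall back to cellular
```

Delay-tolerant traffic such as messaging spends most of its time in the "wait" branch, which is where the reported offload savings come from; VoIP-like traffic, whose deadline is effectively "now," goes straight to 3G.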
Carriers may want it for private WiFi services that augment their 3G/4G data networks. It could be used by pedestrians who might get even higher WiFi offload rates if they were wandering in a city with their Wiffler-equipped smartphones. It could be valuable in a stationary setting, too, like hanging out at Starbucks. "Today, the WiFi/3G combo management is highly suboptimal. Today, smart phones tend to use WiFi connectivity only when they are stationary and not use WiFi connectivity when they are on the move. At the same time, they experience poor application performance when the WiFi connectivity is poor because they happen to be far from the AP (access point) or because the WiFi network is congested. This experience occurs because the devices insist on using WiFi whenever they are connected, largely independent of the performance of WiFi. Our technology provides an automatic combo management that is aware of application performance," Mahajan says. Next up, the crew plans to test the Wiffler protocol in other uses, including the 3G savings "in a setting when users have Wiffler running all the time rather than just driving. Another is to understand current smartphone traffic workloads to get a sense of how much traffic individual applications generate; this is important because data for some of the applications can be delayed and for some it cannot be delayed," Mahajan explains. There is no association of Wiffler to Windows Phone 7 at this time. I hope it stays that way. Wiffler could be of best use if it were something that any handset could have. Plus, by the time a commercial product came to market based on Wiffler, Windows Phone 7 would either have found its niche or died. At this time it's unclear how or when Wiffler will formally come to market. Neither the researchers nor Microsoft PR would comment on that, and in truth, I got the sense that it hadn't yet progressed to that point anyway. 
Despite the long road to commercial use, now that the world knows that WiFi offloading from 3G is worthwhile, even from a car, it's only a matter of time before someone creates the first product.
<urn:uuid:36c4a1fe-8a7c-47e4-83e3-1169fe8c7a4f>
CC-MAIN-2017-04
http://www.networkworld.com/article/2227438/microsoft-subnet/microsoft-wiffler-lets-smartphones-use-free-wifi-from-moving-vehicles.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00083-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942798
1,485
2.5625
3
The 11 fastest supercomputers on the planet

The latest edition of the TOP500 supercomputers was released on 14 November. China and the United States are now level in the battle for global dominance, with 171 systems apiece in the new rankings; the US accounts for 33.9 percent of the total to China's 33.3 percent. The two nations are also steaming ahead in aggregate Linpack performance (a measure of a computer's floating-point rate of execution). The two countries can ill afford to rest on their laurels while a series of hungry rivals from around the world breathe down their necks. Also on the latest list are a number of new entries, while some perennial contenders have enjoyed rises and endured falls. We count down the 11 fastest supercomputers in the world.

1. Sunway TaihuLight
Extending its reign at number one is the computer installed at the National Supercomputing Center in Wuxi. It’s China’s first system to reach number one that is built entirely out of local components and is capable of performing around 93,000 trillion calculations per second.
Top speed: 93 petaflops
Total cores: 10,649,600

2. Tianhe-2
Bolstering the Chinese claim to supercomputing superpower is another monolith that has retained its previous high rank. Also known as Milky-Way-2, it was knocked off the top spot in June by the Sunway TaihuLight, which boasts three times the speed of its predecessor.
Top speed: 33.9 petaflops
Total cores: 3,120,000

3. Titan
The first American entry on the list has more than earned its imposing name. It’s contributed to research breakthroughs at the Oak Ridge Leadership Computing Facility (OLCF) that have improved nuclear power plant safety and performance, boosted drug development, and improved the understanding of climate change.
Top speed: 17.6 petaflops
Total cores: 560,640

4. Sequoia
The IBM construction was named the world’s fastest supercomputer in June 2012, but has slowly slipped down the list. It’s primarily used for nuclear weapons simulations. 
Top speed: 17.2 petaflops
Total cores: 560,640

5. Cori
The third consecutive American entry in the top five, Cori was named after American biochemist Gerty Cori, the first woman to win a Nobel Prize in physiology or medicine. It’s installed at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC).
Top speed: 14 petaflops
Total cores: 622,336

6. Oakforest-PACS
Japan’s highest entry in the TOP500 is powered by the same Intel “Knights Landing” Xeon Phi 7250 processor as the Cori computer that pipped it into the top five. It’s run jointly by the University of Tokyo and the University of Tsukuba.
Top speed: 13.6 petaflops
Total cores: 556,104

7. K computer
The second consecutive Fujitsu-manufactured Japanese entrant on our list reached number one in its 2011 prime. It’s used in a range of fields including meteorology, disaster prevention and medicine. Like 99.6 percent of the TOP500 list, it uses Linux as its operating system.
Top speed: 10.5 petaflops
Total cores: 705,024

8. Piz Daint
Europe’s fastest supercomputer was named after an Alpine mountain less than 80 miles from its Swiss National Computing Center home. It held onto its number eight ranking thanks to a newly installed NVIDIA P100 Tesla GPU that gave it a 3.5 petaflop upgrade. It’s also the second most energy-efficient supercomputer in the TOP500, with a rating of 7.45 gigaflops/watt. (The top-ranked DGX SATURNV came in at number 28.)
Top speed: 9.8 petaflops
Total cores: 206,720

9. Mira
According to manufacturer IBM, if every person in the United States performed one calculation every second, it would take them almost a year to do as many calculations as Mira can in just one second. The machine was initially deployed to work on sixteen research projects selected by the Department of Energy.
Top speed: 8.6 petaflops
Total cores: 786,432

10. Trinity
The first of four Cray Inc. offerings in the TOP500 list shares a name with the first detonation of a nuclear weapon in 1945, and is run by the same laboratory that developed that bomb. 
No prizes for guessing what it was built to support.
Top speed: 8.1 petaflops
Total cores: 301,056

11. Cray XC40
Britain’s fastest supercomputer will power the country’s foremost forecasting agency, the Met Office. The £97 million behemoth is said to be worth around £2 billion in socio-economic benefits due to better forecasts.
Top speed: 6.8 petaflops
Total cores: 241,920
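IBM's everyone-in-America comparison for Mira is easy to sanity-check with a back-of-the-envelope calculation; the population figure below is a rough assumption, not a number from the article:

```python
mira_flops = 8.6e15        # Mira's Linpack speed: 8.6 petaflops
us_population = 3.2e8      # rough 2012-era US population (assumption)

# One calculation per person per second: how long to match
# a single second of Mira's output?
seconds_needed = mira_flops / us_population
print(seconds_needed / 86400)  # roughly 311 days -- "almost a year"
```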
<urn:uuid:42fefb89-f83a-4f04-b9d4-cdf3ad970675>
CC-MAIN-2017-04
http://www.computerworlduk.com/galleries/infrastructure/11-fastest-supercomputers-on-planet-3588573/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00387-ip-10-171-10-70.ec2.internal.warc.gz
en
0.926371
1,112
2.765625
3
Calculate a Fractional Number of Years Difference Between Two Dates in SQL
October 29, 2008 Timothy Prickett Morgan

Is there anything new in SQL that will calculate the difference between two dates and return a fractional number of years? For example: “2007-09-10”–“1997-01-01” should return 10.69 years.

The normal DB2 SQL function to return a difference between two timestamps (TIMESTAMP_DIFF) will not return fractional values as your calculation requires. However, if super accuracy isn’t a great need, there is an easy way to do this. Simply calculate the number of days between the two dates to get the number of days difference. Then divide the result by 365.2425, which is the average number of days in the Gregorian calendar, and you have a fractional number of years difference. For more info on using 365.2425 as the number of days in a year, click here. Other websites have the average number of days as 365.2422, although the value is gradually declining.

Here’s an SQL example using host variables that rounds the result to two decimals:

Select Round( (Days(:EndDate) - Days(:StartDate)) / 365.2425, 2) As No_Years
  Into :NoYears
  From SysIBM/SysDummy1

And there you have it, an easy way to calculate a fractional number of years between two dates in SQL. On a related note, this same technique can be used to get a fractional number of months between two dates, but this is a little sloppy when using an average of 30.4369 days per month. However, starting in V6R1 the new MONTHS_BETWEEN function can be used to better estimate the fractional number of months between two dates without the SQL mess shown above. Here is an example from the IBM manual:

SELECT MONTHS_BETWEEN('2005-02-20', '2005-01-17')
FROM SYSIBM.SYSDUMMY1

This example will return the value 1.096774193548387.

Michael Sansoterra is a programmer/analyst for i3 Business Solutions, an IT services firm based in Grand Rapids, Michigan. Send your questions or comments for Michael to Ted Holt via the IT Jungle Contact page.
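The arithmetic behind the reader's example is easy to verify outside DB2; here is a quick cross-check in Python (not part of the original tip):

```python
from datetime import date

# Same dates as the reader's example
days = (date(2007, 9, 10) - date(1997, 1, 1)).days
years = round(days / 365.2425, 2)
print(days, years)  # 3904 10.69 -- matching the expected answer
```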
<urn:uuid:57a90421-1196-49ee-aa3d-f138b671513f>
CC-MAIN-2017-04
https://www.itjungle.com/2008/10/29/fhg102908-story01/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00203-ip-10-171-10-70.ec2.internal.warc.gz
en
0.851534
507
2.8125
3
Those smartphones that so many of us carry around all day allow us (and others) to track unprecedented amounts of information about ourselves. They can generate seemingly endless streams of data about a wide range of our daily activities such as the people we contact, how many steps we take each day and where we’ve been. It turns out that our phones are also collecting another type of very personal information about us: namely our DNA. A new study by researchers from the University of Oregon titled “Mobile phones carry the personal microbiome of their owners” and published by PeerJ, looked at the biological connection between people and their phones. They hypothesized that the collection of bacteria on the touch screen of a smartphone would reflect the bacteria found on the fingers of its owner. To test this hypothesis, the researchers swabbed the fingers of 17 volunteers, as well as the touch screens of their phones. Contents of the swabs from the fingers and phones were then analyzed using statistical methods. They had three main findings:

The bacteria on phones reflect frequent human contact

As suspected, the researchers found a high correlation between the bacteria on the phones and those on the participants’ fingers. Specifically, 22% of the bacterial OTUs (operational taxonomic units, the metric used by the researchers) on the subjects’ fingers were also on their phones. However, 82% of the most common bacteria found in participants, those representing more than 0.1% of a single person’s dataset, were also found on their phones. Contrary to what you may think, washing hands had no effect on the correlation between bacteria on the phone and that on subjects’ fingers.

The bacteria on a person’s phone reflect that specific person more than others

The researchers tested whether the bacteria on a phone resembled the microbes from its owner more than those from other people. 
The answer was, not surprisingly, yes; each participant’s index finger shared, on average, 5% more bacteria with that person’s phone than with others’ phones.

Women are more biologically connected to their phones than men

Interestingly, while all participants shared bacteria with their phones, there was a difference between the amount men shared and the amount women shared. The researchers did not find a statistically significant difference between the bacterial community composition on a woman’s index finger and that on her smartphone, while they did find a difference for the men. That is, women seem to have more bacteria in common with their phones than men do. The authors did not offer a theory for the cause of this difference. Based on these results, the authors argue that our smartphones “hold untapped potential as personal microbiome sensors.” They also suggest that swabbing our phones could enable larger scale microbial studies which more invasive sampling methods might restrict. Combined with cheaper DNA sequencing technology, smartphones could also be used for things like easily screening health care workers for pathogens and to generally help us to better understand what kind of microbes we’re exchanging with our environment on a daily basis. Interesting! It seems like there are a number of potential positive implications of this research. On the other hand, I imagine that the thought of somebody being able to learn a lot about you by swabbing your smartphone screen may give people worried about privacy one more thing to keep them up at night.
<urn:uuid:94399eb7-0ea2-498d-b113-5b30bf83b9b3>
CC-MAIN-2017-04
http://www.itworld.com/article/2696494/big-data/your-smartphone-contains-more-data-about-you-than-you-realize.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00505-ip-10-171-10-70.ec2.internal.warc.gz
en
0.959721
738
3.1875
3
(CNN) -- A major online security vulnerability dubbed "Heartbleed" could put your personal information at risk, including passwords, credit card information and e-mails. Heartbleed is a flaw in OpenSSL, an open-source encryption technology that is used by an estimated two-thirds of Web servers. It is behind many HTTPS sites that collect personal or financial information. These sites are typically indicated by a lock icon in the browser to let site visitors know the information they're sending online is hidden from prying eyes. Cybercriminals could exploit the bug to access visitors' personal data as well as a site's cryptographic keys, which can be used to impersonate that site and collect even more information. You can use the Heartbleed Test website (http://filippo.io/Heartbleed/) to test your external websites and external-facing web appliances to see if they are vulnerable. I encourage you to make a quick test of your systems ASAP. If you use Google Chrome, I encourage you to install the Chromebleed plug-in which displays a warning if the site you are browsing is affected by the Heartbleed bug.
<urn:uuid:6253f382-d113-4150-8ada-1749d9a6e236>
CC-MAIN-2017-04
http://www.expta.com/2014/04/the-heartbleed-security-flaw-are-you.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00257-ip-10-171-10-70.ec2.internal.warc.gz
en
0.911924
238
2.859375
3
Pop Quiz: Windows 7 Permissions

Applies to the "Managing files and folders" objective of Exam 98-349.

Q: A user has created a network share and assigned the share and NTFS permissions to Full Control and Read & execute. What level of access will users have when accessing the network share?
- Full Control
- No access
- Read & execute

Answer: Read & execute. When share and NTFS permissions are combined, the most restrictive permission is the effective permission.

Quick Tip: Granting users the share permission Full Control and setting NTFS permissions as required is the most efficient method of managing files and folders.

Share and NTFS Permissions on a File Server

Bonus Question: Which Windows 7 applet is used to manage printer drivers? (The answer, of course, will be revealed next time!)

Answer to bonus question from last week: The New Technology File System (NTFS) is required when configuring BitLocker, encrypting file system (EFS), and compression.

Andy Barkl, MCT/MCITP/MCSA, A+, Network+, Security+, CCNA has been studying technology for 30 years. Over the last 15 years, he has spent much of his time imparting the knowledge and experience he has gained through IT exams, over 300, to help others be prepared and successful. He teaches classes in Phoenix, Ariz., where he has lived most of his life. He can be reached by e-mail at firstname.lastname@example.org.
<urn:uuid:9a3f2e29-87a4-4b98-a9ac-d7894ece1f2b>
CC-MAIN-2017-04
https://mcpmag.com/articles/2014/02/11/windows-7-permissions.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00193-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940013
319
2.96875
3
There are lots of useful ways of looking at a file to determine whether it’s malware, and if it is, what it’s trying to do. Some of those ways involve dissecting the file on disk for clues to a file’s structure (static analysis), and some of those ways involve watching the file in action (dynamically) to see what it actually does. These are both useful pieces of the puzzle, and in most instances researchers will first use static and then dynamic methods of analyzing the file. In Part 1 – Static Analysis we covered some methods of static analysis, namely Text View, Hex View and Assembler View. All three methods are ways of looking for clues that tell you what a file might be trying to do. For instance, is the file armored in some way to dissuade analysis? Is the text in the file indicative of a professional piece of software, or is it more casual, using curse-words or “leetspeak”? Does the file have structures that seem to be indicative of exploiting vulnerabilities in other software, or does it seem to be trying to achieve persistence? These things are not conclusive evidence alone, but help create a picture of what the file is trying to do. The other important part of static analysis is to figure out what sorts of dependencies a file has. The first and most obvious dependency is the type of operating system – does this file run on Windows, OS X or Linux, for example? A file may have other requirements to run too, depending on what programming language it was written in, or if it tries to spread (such as through an instant messaging or peer-to-peer file sharing app) or infect certain types of files. Once we have given the file a thorough look with static analysis methods, we hopefully have a good picture of how we need to set up our test environments. You know that scene in Jurassic Park where they’re driving through the park and a goat is brought up as a snack for the T-Rex? (Apologies, I could only find the scene with added sheep-commentary.) 
In malware analysis, we have a similar idea for tempting malware to do its thing. Researchers use what we call a “sacrificial goat machine” – or just “goat” for short. This machine is set up to be the tastiest possible treat for the malware, by giving it all the conditions it needs to do what it intends to do, and appearing as much as possible to be a real user’s machine rather than a safely quarantined test machine. In order to do this quickly, most researchers have several standard “images” that are either physical or virtual machines that can be quickly taken back to a known-clean state. This is helpful for either repeating analysis on one file if needed, or getting ready to analyze other files. Usually these include images for different OS versions, to see if the malware behaves differently on one versus another. If, for instance, a researcher specializes in just Mac threats they might have images for all the supported versions of OS X plus any versions that are in beta. The same goes for researchers specializing in other operating systems as well. (Though things get complicated when you throw in different Linux flavors or the limitations of different carrier or handset-manufacturers’ versions of Android.) Once a researcher has an image all set up, the next thing he or she needs to do is start up any recording tools they might have, so they can see what changes are made by the file. A lot of malware is essentially silent, if you’re just looking at it on your screen, so we need to have tools that will report any system changes or network traffic. And those tools need to be smart enough not to be fooled by rootkit techniques that try to hide the changes. When everything’s all ready to go, the researcher will start the file up (usually just double-clicking it), and then let it do its thing for a few minutes. 
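The "record every system change" idea can be illustrated with a before-and-after snapshot of the filesystem. This is only a toy sketch of the concept: real dynamic-analysis tools also watch the registry, processes, and network traffic in real time, and must defeat rootkit hiding techniques that a naive diff like this cannot see:

```python
import hashlib
from pathlib import Path

def snapshot(root):
    """Map every file under root to a SHA-256 digest of its contents."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def diff(before, after):
    """Report files added, removed, or modified between two snapshots."""
    added = sorted(set(after) - set(before))
    removed = sorted(set(before) - set(after))
    changed = sorted(p for p in before.keys() & after.keys()
                     if before[p] != after[p])
    return added, removed, changed
```

Take one snapshot of the goat image before running the sample and one after, and diff() lists every file the sample dropped, deleted, or tampered with.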
Sometimes the file will need a little extra coaxing to perform its various actions, so a researcher will usually spend those minutes interacting with the goat system like a regular user would, by opening files and moving around the system. This can activate various trigger-events that malware sometimes have, hoping to verify that it is on a real user’s machine rather than in an automated honeypot machine. Sometimes it can be helpful to isolate specific parts of a file’s behavior, especially if a sample is going to extraordinary lengths to hide its actions – and for this purpose, we have what’s called a Debugger. These tools were originally created for programmers to help them step through small sections of code, so they could find and correct bugs. Debugging files can be equally useful to a malware researcher that wants to step through small sections of code to figure out certain specific behavior within a file. It can be very helpful to get a file’s decryption routine, or to figure out the passwords it uses to join C&C channels, as well as to identify certain conditions used for trigger events, for instance. If you have ever wondered why you sometimes see really in-depth analysis way after a particular malware was first discovered, it’s often because the researcher went through much of the malware’s code with a debugger. Most malware is pretty small in size, but much of the code is usually convoluted and repetitive. Going through a sample in a debugger may require analyzing a section of code once, changing a variable, stepping back through the code again, then changing another variable and doing it yet again… it can be a very arduous and time-consuming process. This sort of thorough analysis isn’t something that gets done with every sample, but with high profile or particularly tricky malware as needed. 
In the End

Malware analysis can be a fairly quick and dirty process or a months-long process, depending on the skill and effort of the malware author that created it as well as the malware analyst that receives it. If an analyst is looking at his or her umpteenth variant of a family that’s been publicly released, it can be dealt with in a matter of minutes. If it’s the first sample of a heavily armored and feature-rich spyware that’s hitting hundreds of thousands of users, dozens of researchers around the world are probably going to spend a lot of long nights trying to provide useful and juicy tidbits about its behavior. Hopefully we’ve given you some insight into what that process entails, so it’ll seem less mysterious.
<urn:uuid:f9a9646e-d232-40b5-a834-5227915d5451>
CC-MAIN-2017-04
https://www.intego.com/mac-security-blog/how-malware-is-researched-part-2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00193-ip-10-171-10-70.ec2.internal.warc.gz
en
0.947562
1,364
3.078125
3
2.3.12 What are some other hard problems? There are many other kinds of hard problems. The list of NP-complete problems (see Question 2.3.1) is extensive and growing. So far, none of these has been effectively applied towards producing a public-key cryptosystem. A few examples of hard problems are the Traveling Salesman Problem, the Integer and Mixed Integer Programming Problem, the Graph Coloring Problem, the Hamiltonian Path Problem and the Satisfiability Problem for Boolean Expressions. A good introduction to this topic may be found in [AHU74]. The Traveling Salesman Problem is to find a minimal length tour among a set of cities, while visiting each one only once. The Integer Programming Problem is to solve a Linear Programming problem where some or all of the variables are restricted to being integers. The Graph Coloring Problem is to determine whether a graph can be colored with a fixed set of colors such that no two adjacent vertices have the same color, and to produce such a coloring. The Hamiltonian Path Problem is to decide if one can traverse a graph by using each vertex exactly once. The Satisfiability Problem is to determine whether a Boolean expression in several variables has a solution. Another hard problem is the Knapsack Problem, a narrow case of the Subset Sum Problem (see Question 2.3.11). Attempts have been made to make public-key cryptosystems based on the knapsack problem, but none have yielded strong results. The Knapsack problem is to determine which subset of a set of objects weighing different amounts has maximal total weight, but still has total weight less than the capacity of the "Knapsack."
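For small inputs the Knapsack Problem described above can be solved by brute force, which also illustrates why it is hard: the sketch below simply tries every subset, so its running time doubles with each object added (an illustrative toy, not a practical solver or a cryptosystem):

```python
from itertools import combinations

def best_knapsack(weights, capacity):
    """Return the subset of weights with maximal total not exceeding capacity."""
    best = ()
    for r in range(len(weights) + 1):           # try every subset size
        for subset in combinations(weights, r):
            total = sum(subset)
            if total <= capacity and total > sum(best):
                best = subset
    return best

print(best_knapsack([5, 8, 3, 11], 14))  # (3, 11): total weight 14
```

With n objects this examines 2^n subsets, which is exactly the exponential blow-up that makes such problems attractive candidates (if ultimately unsuccessful ones) for building cryptosystems.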
<urn:uuid:e9120b74-aaec-4446-96bf-e49a17499ca1>
CC-MAIN-2017-04
https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/what-are-some-other-hard-problems.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00101-ip-10-171-10-70.ec2.internal.warc.gz
en
0.914887
351
2.9375
3
Are there any pitfalls to using SSID cloaking?

Many organizations use SSID cloaking as a mechanism to add a layer of security to the WLAN. This technique requires that all users have knowledge of the SSID to connect to the wireless network. While this is commonly viewed as a mechanism to improve the security of the WLAN and is a best practice recommended by the PCI Data Security Standard, it can reduce the effective security of the WLAN.

False Sense of Security

Early wireless network deployments relied on SSID cloaking as a mechanism to prevent unauthorized users from accessing the wireless network. Even though this was never intended to be used as an authentication mechanism, some organizations have adopted cryptic SSIDs that are distributed as shared secrets. Tools such as ESSID-Jack and Kismet observe and report the SSID from legitimate stations, allowing attackers to deduce the SSID and easily bypass the intended security mechanism. When the network SSID is cloaked, users will be unable to consult the list of available wireless networks for the WLAN. This could prompt users to select other networks, which could expose vulnerable clients or even be construed as computer trespass in some US states.

Exposure to AP Impersonation Attacks

Attack tools such as KARMA take advantage of the WLAN probing techniques used by wireless clients. When a station probes for a WLAN in its preferred network list (PNL), the station discloses the SSID to a listening attacker. The KARMA attack uses the disclosed SSID to impersonate a legitimate WLAN, luring the station to the attacker. With the Windows XP SP2 wireless client update hotfix described in KB917021, Windows workstations change the behavior of how they probe for wireless networks. Users and administrators can now mark an entry in the PNL as "nonbroadcast". 
When the "Connect even if this network is not broadcasting" option is not selected, the station will not disclose the SSID information when probing for a network, mitigating the KARMA attack. In order for the station to identify the availability of the network however, the AP must have the SSID cloaking feature disabled. If the AP does cloak the SSID, the station must revert to the active network probing mechanism, making SSID cloaking the less-secure option. Though SSID cloaking might seem like an attractive mechanism to aid the security of the WLAN, it effectively reduces security significantly more than it could potentially gain, exposing enterprise WLANs.
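On modern Windows clients, the per-network setting described above lives in the WLAN profile XML as the nonBroadcast element. A sketch of the relevant fragment follows; the profile name is hypothetical and the rest of the profile (connection and security settings) is omitted:

```xml
<WLANProfile xmlns="http://www.microsoft.com/networking/WLAN/profile/v1">
  <name>ExampleSSID</name>
  <SSIDConfig>
    <SSID>
      <name>ExampleSSID</name>
    </SSID>
    <!-- true corresponds to "Connect even if this network is not broadcasting":
         the station will actively probe for the SSID, disclosing it to listeners -->
    <nonBroadcast>true</nonBroadcast>
  </SSIDConfig>
</WLANProfile>
```

Such a profile can be imported with "netsh wlan add profile filename=profile.xml".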
Block traffic between two VLANs, but only in one direction: how to do that?

VLANs and VLAN configurations are useful in all kinds of ways, and this particular configuration will sooner or later be useful to every network administrator out there. It was a real challenge to satisfy this tricky communication security requirement. The problem does not seem like a big deal until you try to make it work. The goal was to build a unidirectional communication filter between two VLANs: allow VLAN 10 to access VLAN 20, but not the opposite. The computers in VLAN 10 needed to access resources in VLAN 20 normally, but computers in VLAN 20 had to be prevented from accessing VLAN 10.

There is actually a simple solution. It took a long time to get to it, and it was team work, so it's worth sharing. The trick, commonly discussed alongside reflexive ACLs, is to allow traffic from one VLAN to another only if the communication was established from the other direction first. It can't be applied to IP traffic as a whole, only to each protocol separately, so you will need more rows in the ACL to handle TCP, ICMP, etc., but it solves the problem.

Here is how it's done. Let's say you have two VLANs, VLAN 10 and VLAN 20:

VLAN 10 INTERFACE = 10.10.10.1 /24
VLAN 20 INTERFACE = 10.10.20.1 /24

VLAN 10 can access VLAN 20, but VLAN 20 can't access VLAN 10. That was the whole problem: allow access in only one direction. To do so, you need to let traffic from VLAN 10 go to VLAN 20, but you also need to let that communication return to VLAN 10 in order to close the loop, since almost every communication needs a return path to the source in order to function. But if you simply allow return traffic into VLAN 10, you allow all communication in both directions, and this is the problem we can solve with the "established" keyword.

We will make an extended named ACL called EASYONE:

ip access-list extended EASYONE
 permit tcp 10.10.20.0 0.0.0.255 10.10.10.0 0.0.0.255 established

The word "established" at the end of this ACL row means that TCP traffic from VLAN 20 to VLAN 10 will only be allowed when it belongs to a communication that was started from VLAN 10, i.e. return traffic.

 permit icmp 10.10.20.0 0.0.0.255 10.10.10.0 0.0.0.255 echo-reply

This echo-reply row will allow VLAN 20 to reply to ping and other ICMP echo requests.

 deny ip 10.10.20.0 0.0.0.255 10.10.10.0 0.0.0.255
 permit ip any any

The deny row blocks all other traffic from VLAN 20 directed at VLAN 10, while "permit ip any any" still allows VLAN 20 to reach its gateway and go further to the Internet and other VLANs.

Finally, we apply the ACL EASYONE to the VLAN 20 Layer 3 interface:

interface vlan 20
 ip access-group EASYONE in

To conclude, the config without comments, indeed easy now that it's done:

ip access-list extended EASYONE
 permit tcp 10.10.20.0 0.0.0.255 10.10.10.0 0.0.0.255 established
 permit icmp 10.10.20.0 0.0.0.255 10.10.10.0 0.0.0.255 echo-reply
 deny ip 10.10.20.0 0.0.0.255 10.10.10.0 0.0.0.255
 permit ip any any
exit
interface vlan 20
 ip access-group EASYONE in

The credit for the solution goes to my mentor and friend Sandra, who built the configuration and lab for it and, more than that, came up with the "established" keyword at the end of the ACL and the whole solution.
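As an aside, Cisco IOS also supports true reflexive ACLs via the reflect and evaluate keywords, which track sessions per flow and therefore also cover UDP (the established keyword only inspects TCP flags). A sketch under the same addressing, with illustrative ACL names; exact platform support varies, so treat this as a starting point rather than a drop-in config:

```
ip access-list extended GOING-OUT
 permit tcp 10.10.10.0 0.0.0.255 10.10.20.0 0.0.0.255 reflect MIRROR
 permit udp 10.10.10.0 0.0.0.255 10.10.20.0 0.0.0.255 reflect MIRROR

ip access-list extended COMING-IN
 evaluate MIRROR
 deny ip 10.10.20.0 0.0.0.255 10.10.10.0 0.0.0.255
 permit ip any any

interface vlan 20
 ip access-group GOING-OUT out
 ip access-group COMING-IN in
```

Flows initiated from VLAN 10 create temporary MIRROR entries on the way out of the VLAN 20 interface, and only matching return traffic is admitted back in.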
Kaspersky Lab, a leading developer of secure content management systems, reports the detection of a malicious program that infects WMA audio files. The objective of the infection is to install a Trojan that gives a cybercriminal control of the user's computer.

The worm, named Worm.Win32.GetCodec.a, converts MP3 files to the Windows Media Audio (WMA) format (without changing the .mp3 extension) and adds a marker with a link to an infected web page to the converted files. The marker is activated automatically during file playback and opens an infected page in Internet Explorer, where the user is asked to download and install a file which, according to the website, is a codec. If the user agrees to install the file, a Trojan known as Trojan-Proxy.Win32.Agent.arp is downloaded to the computer, giving cybercriminals control of the victim PC.

Unlike earlier Trojans, which used the WMA format only to mask their presence on the system (i.e., the infected objects were not music files), this worm infects audio files. According to Kaspersky Lab virus analysts, this is the first such case. The likelihood of a successful attack is increased because most users trust their audio files and do not associate them with possible infections. It should be noted that the file on the counterfeit web page is digitally signed by Inter Technologies and is identified by www.usertrust.com, the resource that issued the digital signature, as trusted.

Immediately after Worm.Win32.GetCodec.a was detected, its signatures were added to Kaspersky Lab's antivirus databases.

About Kaspersky Lab

Kaspersky Lab delivers the world's most immediate protection against IT security threats, including viruses, spyware, crimeware, hackers, phishing, and spam. Kaspersky Lab products provide superior detection rates and the industry's fastest outbreak response time for home users, SMBs, large enterprises and the mobile computing environment. Kaspersky® technology is also used worldwide inside the products and services of the industry's leading IT security solution providers. For more information, visit www.kaspersky.com. For the latest malware news, go to www.viruslist.com.
Writing boot code is useful for many reasons, whether you are:

- Developing your own operating system
- Developing disk encryption systems
- Experimenting and researching
- Or even writing a bootkit

While developing the IDA Bochs plugin at Hex-Rays, we had to write a small MBR and we needed a nice and fast way to compile and debug our code. In the beginning, we were using bochsdbg.exe to debug our code, and little by little, once we coded the "Bochs Disk Image loader" part, we could debug the MBR with IDA and the Bochs plugin.

Now you may be wondering: how can I use the IDA Bochs plugin to debug my MBR? For a quick answer, here are the needed steps:

- Prepare a Bochs disk image
- Prepare a bochsrc file
- Insert your MBR into the disk image
- Open the bochsrc file with IDA
- Start debugging

In case you did not know, bochsrc files (though they are text files) are handled by bochsrc.ldw (an IDA loader). The loader parses the bochsrc file looking for the first "ata" keyword, then locates its "path" attribute. In the following example, the bochsrc loader will detect "c.img":

romimage: file=$BXSHARE/BIOS-bochs-latest
vgaromimage: file=$BXSHARE/VGABIOS-lgpl-latest
megs: 16
ata0: enabled=1, ioaddr1=0x1f0, ioaddr2=0x3f0, irq=14
ata0-master: type=disk, path="c.img", mode=flat, cylinders=20, heads=16, spt=63
boot: disk
...

After finding the disk image file, the loader simply creates a new segment at 0x7C00 containing only the first sector of that file, and then selects the Bochs debugger (in Disk Image loader mode). Once the loader is finished, you can press F9 and start debugging.

As simple as this sounds, this process is really limited:

- What if the MBR loads more code from different sectors? (An MBR with two or more sectors of code)
- What about symbol names?
- What if we want to customize and control the MBR loading process?

Fortunately, IDA Pro provides a rich API (via the SDK or scripting) that will allow us to tackle all these issues.
Preparing a Bochs disk image

If you don't have a Bochs image ready, please use the bximage.exe tool to create a disk image.

Preparing the bochsrc file

Edit your bochsrc file and add the ata0 line (generated by the bximage tool) to it, then run bochsdbg.exe to verify that you can run Bochs properly (outside of IDA). If you see the Bochs debugger prompt, you can press "c" to continue execution, but Bochs will complain because our disk image is not bootable. (As a new disk image, it lacks the 55AA signature at the end of the first sector.)

Inserting the MBR into the disk image

For your convenience, we included a sample mbr.asm file ready for you to compile:

nasmw -f bin mbr.asm

To insert the MBR into the disk image, we can write a small Python function:

def UpdateImage(imgfile, mbrfile):
    """ Write the MBR code into the disk image """
    # open image file
    f = open(imgfile, "r+b")
    if not f:
        print "Could not open image file!"
        return False
    # open MBR file
    f2 = open(mbrfile, "rb")
    if not f2:
        print "Could not open mbr file!"
        return False
    # read whole MBR file
    mbr = f2.read()
    f2.close()
    # update image file
    f.write(mbr)
    f.close()
    return True

Loading bochsrc with IDA

As discussed previously, loading the bochsrc file into IDA is not enough (see above), so we need to write another script that acts like a loader:

def MbrLoader():
    """
    This small routine loads the MBR into IDA.
    It acts as a custom file loader (written with a script)
    """
    import idaapi
    import idc
    global SECTOR_SIZE, BOOT_START, BOOT_SIZE, BOOT_END, SECTOR2, MBRNAME
    # wait till end of analysis
    idc.Wait()
    # adjust segment
    idc.SetSegBounds(BOOT_START, BOOT_START, BOOT_START + BOOT_SIZE, idaapi.SEGMOD_KEEP)
    # load the rest of the MBR
    idc.loadfile(MBRNAME, SECTOR_SIZE, SECTOR2, SECTOR_SIZE)
    # Make code
    idc.AnalyzeArea(BOOT_START, BOOT_END)

What we did is simply extend the segment from 512 to 1024 bytes (our sample MBR is 1024 bytes long) and load into IDA the rest of the MBR code from the compiled mbr.asm binary.
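The bootable check mentioned above (the 55AA signature at offset 510 of the first sector) is easy to script as well. Here is a standalone modern Python 3 sketch, not part of the article's downloadable IDAPython scripts, for checking and stamping the signature:

```python
SECTOR_SIZE = 512

def is_bootable(sector):
    """True if a 512-byte first sector ends with the 0x55 0xAA boot signature."""
    return sector[SECTOR_SIZE - 2:SECTOR_SIZE] == b"\x55\xaa"

def make_bootable(sector):
    """Return the sector padded to 512 bytes with the boot signature set."""
    buf = bytearray(sector.ljust(SECTOR_SIZE, b"\x00"))
    buf[SECTOR_SIZE - 2:SECTOR_SIZE] = b"\x55\xaa"
    return bytes(buf)

blank = b"\x00" * SECTOR_SIZE
print(is_bootable(blank))                  # False
print(is_bootable(make_bootable(blank)))   # True
```

Running make_bootable over a fresh bximage-created disk's first sector is exactly what satisfies the BIOS check Bochs complains about.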
Importing symbols into IDA

When we assemble mbr.asm, a map file will also be generated. We will write a simple parser to extract the addresses and names from the map file and copy them to IDA:

def ParseMap(map_file):
    """
    Opens and parses a map file.
    Returns a list of tuples (addr, addr_name) or an empty list on failure
    """
    ret = []
    f = open(map_file)
    if not f:
        return ret
    # look for the beginning of symbols
    for line in f:
        if line.startswith("Real"):
            break
    else:
        return ret
    # Prepare RE for the line of the following form:
    #   7C1F  7C1F  io_error
    r = re.compile('\s*(\w+)\s*(\w+)\s*(\w*)')
    for line in f:
        m = r.match(line.strip())
        if not m:
            continue
        ret.append((int(m.group(2), 16), m.group(3)))
    return ret

def ApplySymbols():
    """
    This function tries to apply the symbol names in the database.
    If it succeeds it prints how many symbol names were applied
    """
    global MBRNAME
    map_file = MBRNAME + ".map"
    if not os.path.exists(map_file):
        return
    syms = ParseMap(map_file)
    if not len(syms):
        return
    for sym in syms:
        MakeNameEx(sym[0], sym[1], SN_CHECK | SN_NOWARN)
    print "Applied %d symbol(s)" % len(syms)

Putting it all together

Now that we have addressed all of the issues previously mentioned, let us glue everything together with a batch file:

rem Assemble the MBR
if exist mbr del mbr
nasmw -f bin mbr.asm
if not exist mbr goto end

rem Update the image file
python mbr.py update
if not errorlevel 0 goto end

rem Run IDA to load the file
idaw -c -A -OIDAPython:mbr.py bochsrc

rem database was not created
if not exist bochsrc.idb goto end

if exist mbr del mbr
if exist mbr.map del mbr.map

rem delete old database
if exist mbr.idb del mbr.idb

rem rename to mbr
ren bochsrc.idb mbr.idb

rem Start idag (without debugger)
rem start idag mbr

rem Start IDAG with debugger directly
start idag -rbochs mbr

echo Ready to debug with IDA Bochs
:end

If you noticed, we run IDA twice: the first time we run it and pass our script name to IDAPython; the script will carry out the custom loading process and symbol propagation for us.
The second time, we run IDA with the "-rbochs" switch, telling IDA to open the database and directly run the debugger. You can still run IDA just once, with "start idag -c -A -OIDAPython:mbr.py bochsrc", provided you do not call Exit() and you turn off batch mode (with Batch()). And last but not least, how do you debug your MBR code? Please download the files from here. Comments and suggestions are welcome.
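The map-file parsing in ParseMap can also be exercised outside IDA. Here is a Python 3 sketch using the same regular expression; the exact column layout of the sample map text is an assumption based on the comment in the original script:

```python
import re

# Hypothetical nasm-style map excerpt; real map layouts may differ slightly.
SAMPLE_MAP = """\
Real              Virtual           Name
            7C00              7C00  start
            7C1F              7C1F  io_error
"""

def parse_map(text):
    """Return a list of (address, name) tuples parsed from map-file text."""
    lines = iter(text.splitlines())
    # skip everything up to the header line that starts with "Real"
    for line in lines:
        if line.startswith("Real"):
            break
    r = re.compile(r"\s*(\w+)\s*(\w+)\s*(\w*)")
    syms = []
    for line in lines:
        m = r.match(line.strip())
        if m:
            syms.append((int(m.group(2), 16), m.group(3)))
    return syms

print(parse_map(SAMPLE_MAP))  # [(31744, 'start'), (31775, 'io_error')]
```

Feeding the result into MakeNameEx, as ApplySymbols does, is what turns raw 0x7Cxx addresses in the IDA database into readable labels like io_error.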
While it doesn't have that slick feel to its skin, a new 37-inch-long snake robot with a wireless camera and LED light attached to its head can find its way into pipes and through rubble where humans cannot go.

The Carnegie Mellon snakebot is two inches in diameter and tethered to a control and power cable. Its body consists of 16 modules, each with two half-joints that connect with corresponding half-joints on adjoining modules. Carnegie researchers said the snake body has 16 degrees of freedom, letting it twist into a number of configurations and move using a variety of gaits, some similar to a snake's.

The snake robot has been tested in urban search-and-rescue environments, in which it crawls through the rubble of collapsed buildings, in archeological excavations and in conventional fossil-fuel plants. Further development of the robot could enhance its inspection capabilities, including a next-generation robot that will be waterproof. The researchers also envision designing a "tether runner" device that could move along the robot's tether and position itself around bends in a pipe, ensuring that the robot can be retrieved.

One of the first tests of the snakebot was in the Austrian Zwentendorf Nuclear Power Plant, where it crawled through a variety of steam pipes and connecting lines. Though the robot's body twists, turns and rotates as it moves through or over pipes, the view from the video feed was corrected so that it was always aligned with gravity. This new "right-side-up" video feature made controlling the robot more intuitive and helped engineers better understand what the robot was seeing, said Robotics Professor Howie Choset in a statement. The video imagery possible with the snake robot is superior to what is available through a borescope, which has limited ability to change its camera angle.

Further development could enable the snake robot to perform simultaneous localization and mapping (SLAM), a robotic technique that would produce a map of a nuclear plant's pipe network as it exists, said Choset. "Our robot can go places people can't, particularly in areas of power plants that are radioactively contaminated," Choset said. "It can go up and around multiple bends, something you can't do with a conventional borescope, a flexible tube that can only be pushed through a pipe like a wet noodle."

The boiling-water Zwentendorf reactor was built in the 1970s but was never operated. Its lack of radioactive contamination makes it suitable for research, testing and educational purposes.
Where Do Good Questions Come From?

Think of a test question as a product, like a disposable razor, but a lot more expensive and with more long-term impact on your life. The razor, made of plastic and metal, was molded, cut, sharpened, assembled, inspected, packaged, distributed and, finally, bought. You probably never thought much about the process that brought that razor to you, but that's because it's not all that important. But a test question, that's different. It's important to you, at least at the moment you are trying to answer it correctly. And you probably try to understand it and evaluate it from the moment you see it until you move on to the next one.

Does a test question go through a development process similar to that of a razor, from raw material to useful product? How was the question originally written? Or better yet, why was it written? What reviews and changes did it go through? How many people like you actually read it and agreed that it should be on the test? These are great questions (no pun intended) and deserve to be answered.

First of all, a question can't be written until a job skill has been identified. For example, a job skill might be: the test-taker must be able to install a router. (Job skills are usually identified by interviewing experts through a process known as a job task analysis.) Once the skill is identified, one or more questions can be written with the goal of measuring the skill as well as possible. A subject-matter expert (SME) who also has some experience writing test questions is the first person to produce the first draft of the question, which may include graphics as well. Often the SME will get help from colleagues to make sure the question is accurate. All questions, but especially multiple-choice questions, require that the SME follow specific format rules for such questions.

After the initial authoring, the question, along with all the others produced, is sent to an editor.
The editor is not an SME, but does understand the rules of language, style and the formatting of questions. The editor will fix the language and design problems with the sole goal of reducing ambiguity. For example, if the editor notices that, because of wording, two choices of a multiple-choice question are correct (when only one should be), he will rework one of them or alert the original SME to the problem. The result is a better question.

The question is then returned to a group of SMEs who review each one for technical accuracy, representation and relevance. Does the question really measure the test objective? Is it an important question, measuring an important skill? Does the test "need" the question to be balanced across the content domain? Is the question accurate, including a correct answer? The question is usually changed (and may even be deleted) at this stage.

The question is returned to the editor again because changes produced during the technical review have added or changed text. The editor will fix any obvious errors introduced by the technical review.

When all questions have been refined in this way, they are subjected to an actual "field test" of their quality. In what is called a beta test, questions are answered by actual certification candidates in circumstances that mimic the motivation and environment of a real certification test. The beta test provides test results that are subjected to statistical analysis. The analysis will catch those questions that aren't performing properly, and they are removed from further consideration. Obviously, the questions you see on the certification test survived the beta process.

Finally, the final set of questions is published as the actual certification exam. Before the test is released to any candidate, it goes through a series of quality assurance steps. While these steps are focused on the actual functioning of the test, the questions are reviewed once more.
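The statistical analysis applied to beta-test results typically includes classical item statistics such as difficulty (the proportion of candidates answering correctly) and point-biserial discrimination (how strongly getting the item right correlates with overall score). A short Python sketch of that kind of screening follows; it is purely illustrative, and the exact methods and cutoffs any given certification program uses will vary:

```python
import math
import statistics

def item_stats(item_correct, total_scores):
    """Classical item analysis for one question.

    item_correct: 1/0 per candidate (answered this item correctly or not)
    total_scores: each candidate's total test score
    Returns (difficulty p, point-biserial discrimination r_pb).
    """
    n = len(item_correct)
    p = sum(item_correct) / n
    if p in (0.0, 1.0):          # no variance: discrimination is undefined
        return p, 0.0
    m1 = statistics.mean(s for s, c in zip(total_scores, item_correct) if c)
    m0 = statistics.mean(s for s, c in zip(total_scores, item_correct) if not c)
    sd = statistics.pstdev(total_scores)
    r_pb = (m1 - m0) / sd * math.sqrt(p * (1 - p))
    return p, r_pb

# High scorers got the item right, low scorers missed it: strong discrimination.
p, r = item_stats([1, 1, 1, 0, 0], [9, 8, 7, 3, 2])
print(round(p, 2), round(r, 2))
```

Items with near-zero (or negative) discrimination are the ones "not performing properly" and would be dropped before the final form is assembled.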
These several steps make sure that each test question, while not perfect, is as good as it can be at measuring the identified job skill. With enough of these great questions, it is possible to produce a reliable and valuable test score that indicates whether a person should be certified or not.

David Foster, Ph.D., is a member of the International Test Commission and sits on several measurement industry boards.
Apple, Android Prep 'Freak' Fix

Exploiting Crypto Flaw Breaks HTTPS on Devices, Sites

Numerous Apple and Android devices, as well as websites, are vulnerable to a serious flaw, which an attacker could exploit to subvert secure Web connections. The flaw exists in SSL and TLS and results from the ability to force crypto suites to downgrade from a "strong" RSA cipher to a weaker, "export-grade" RSA cipher.

"In case you're not familiar with SSL and its successor TLS, what you should know is that they're the most important security protocols on the Internet," Johns Hopkins University cryptographer Matthew D. Green says in a blog post. "In a world full of untrusted networks, SSL and TLS are what makes modern communication possible."

Security researchers warn that the flaw exists in versions of OpenSSL prior to 1.0.1k and affects all Android devices that ship with the standard browser, although they say Google Chrome is immune. The flaw also exists in Apple TLS/SSL clients, which are used by both Mac OS X clients and iOS mobile devices. The vulnerability has been designated CVE-2015-0204.

It's not clear how many users, devices or websites are vulnerable to the Freak flaw, or whether it has yet been exploited in the wild. But 6 percent - or 64,192 - of the world's 1 million most popular websites (as ranked by Amazon.com Web traffic monitoring subsidiary Alexa) are currently vulnerable, according to the Tracking the Freak Attack site, which is run by researchers at the University of Michigan and can be used to check whether clients are vulnerable to Freak attacks.

Researchers from French computer science lab INRIA, Spanish computer lab IMDEA and Microsoft Research have been credited with discovering the flaw and detailing how it can be exploited. "You are vulnerable if you use a Web browser that uses a buggy TLS library to connect, over an insecure network, to an HTTPS server that offers export ciphersuites," they say.
"If you use Chrome or Firefox to connect to a site that only offers strong ciphers, you are probably not affected."

In recent weeks, the researchers - together with Green - have been alerting affected organizations and governments. Websites such as Whitehouse.gov, FBI.gov and connect.facebook.net - which implements the Facebook "like" functionality - were vulnerable to related attacks but have now been fixed, Green says. He notes, however, that numerous sites, including the public-facing NSA.gov website, remain vulnerable.

Apple, Google Prep Patches

Apple tells Information Security Media Group that it is prepping a patch, which it plans to release next week. OpenSSL released a related patch in January, and content delivery networks - such as Akamai - say they've either put fixes in place or will do so soon.

While Google didn't immediately respond to a related request for comment, a spokeswoman tells Reuters that the company has already prepped an Android patch and distributed it via the Android Open Source Project to its business partners. She notes that it's now up to those businesses - which include such equipment manufacturers as Samsung, HTC, Sony, Asus and Acer - to prep and distribute patches to their customers. But while some OEMs have a good track record of prepping and releasing patches in a timely manner, others delay, or never release patches.

Businesses and users should install related patches as quickly as possible, says information security consultant and SANS Institute instructor Mark Hofman in a blog post. "To prevent your site from being used in this attack you'll need to patch OpenSSL - yes, again. This issue will remain until systems have been patched and updated, not just servers, but also client software," he says. "Client software should be updated soon - hopefully - but there will no doubt be devices that will be vulnerable to this attack for years to come - looking at you Android."
Crypto Wars 1.0 Legacy

Experts say that the Freak flaw is a legacy of the days when the U.S. government restricted the export of strong encryption. "The SSL protocol itself was deliberately designed to be broken," Green says, because when SSL was first invented at Netscape, the U.S. government regulated the export of strong crypto. Businesses were required to use the relatively weak maximum key length of 512 bits if they wanted to ship their products outside the country.

While those export restrictions were eventually lifted, and many developers began using strong crypto by default, the export-grade ciphers still linger - for example in previous versions of OpenSSL - and can be used to launch man-in-the-middle attacks that force clients to downgrade to the weak crypto, which attackers can crack. "The researchers have identified a method of forcing the exchange between a client and server to use these weak ciphers, even if the cipher suite is not 'officially' supported," Hofman says.

The researchers who discovered the Freak flaw have published a proof-of-concept exploit on the SmackTLS website, demonstrating a tool they developed, together with a "factoring as a service" capability they built and hosted on a cluster of Amazon Elastic Compute Cloud - EC2 - servers. The exploit was first used against the NSA.gov website. "Since the NSA was the organization that demanded export-grade crypto, it's only fitting that they should be the first site affected by this vulnerability," Green says. Cracking the key for the NSA.gov website - which, it should be noted, is hosted by Akamai - took 7.5 hours and cost $104 in EC2 power, he adds. Were the researchers to refine their tools, both the required time and cost to execute such attacks would likely decrease.
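The man-in-the-middle downgrade at the heart of Freak can be captured in a toy model. This is emphatically not the real TLS handshake, just an illustration of the logic error: a vulnerable client accepts an export suite it never offered, while a correct client rejects it (all suite names and roles below are simplified placeholders):

```python
EXPORT_SUITES = {"RSA_EXPORT_512"}

def server_pick(offered_suites, supported_suites):
    """Toy server: pick the first offered suite it supports."""
    for suite in offered_suites:
        if suite in supported_suites:
            return suite
    return None

def mitm_rewrite(offered_suites):
    """Toy attacker: replace the ClientHello's suites with export-only ones."""
    return list(EXPORT_SUITES)

def client_accepts(offered_suites, chosen_suite, vulnerable):
    """A correct client checks the server chose something it actually offered;
    a Freak-style vulnerable client skips that check."""
    return True if vulnerable else chosen_suite in offered_suites

client_offer = ["RSA_AES_128"]                       # strong suites only
server_supports = {"RSA_AES_128", "RSA_EXPORT_512"}  # server still has export enabled

tampered = mitm_rewrite(client_offer)
chosen = server_pick(tampered, server_supports)      # "RSA_EXPORT_512"
print(client_accepts(client_offer, chosen, vulnerable=True))   # True: downgrade succeeds
print(client_accepts(client_offer, chosen, vulnerable=False))  # False: downgrade rejected
```

Both bugs are needed: the server must still support export suites, and the client must fail to verify the negotiated suite against what it offered, which is why patching either side closes the hole.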
The researchers have reportedly been quietly sounding related alerts about the Freak flaw in recent weeks to vulnerable governments and businesses, hoping to keep it quiet so that patches could be rolled out in a widespread manner before news of the flaw went fully public. But The Washington Post reports that Akamai published a blog post on March 2, written by its principal engineer, Rich Salz, which brought attention to the problem sooner than the researchers had hoped.

Moral: Encryption Backdoors

In the post-Snowden era, many technology giants have moved to use strong encryption wherever possible, in part to assuage customers' concerns that the NSA could easily tap their communications. Apple and Google also began releasing mobile devices that use - or could be set to use - strong crypto by default. And many U.S. and U.K. government officials have reacted with alarm to these moves. Often citing terrorism and child-abuse concerns, many have demanded that the technology firms weaken their crypto by building in backdoors that government agencies could access.

But Green says the Freak flaw demonstrates how any attempt to meddle with strong crypto can put the users of every mobile device, Internet browser or website at risk. "To be blunt about it, the moral is pretty simple: Encryption backdoors will always turn around and bite you ..." he says. "They are never worth it."
The Past, Present and Future of Storage Solutions

In the town of Lascaux in southern France, there is a system of caves with more than 1,500 etchings on the walls that experts estimate to be roughly 17,000 years old. The primitive people who inhabited this part of the world many millennia ago painted these pictures to portray deer, oxen and other big game animals that served as sustenance for them. When first discovered, many theorized that these depictions were simply the cave-dwellers' rudimentary attempts at art, while others suggested that they were the result of some kind of ancient religious ritual. However, there might have been a more practical reason behind this practice.

Many of these paintings show the animals piled into large pits, where they would have been relatively vulnerable to attack from projectiles. Scientists believe that the hunters of these prehistoric tribes would herd these animals into pits and then strike them, and archaeological digs in France and Spain that have unearthed large concentrations of animal bones and spears in certain spots seem to corroborate this. Thus, the illustrations in the caves at Lascaux could very possibly be an example of proto-information-storage techniques: by recording this ambush style of hunting in pictures, they would have preserved a process central to the very existence of their civilization.

In the information storage field, the progression of technology has always determined not only how much new data could be produced, but also how much could be practically retained. Hence, the practice of storing information can be divided into four distinct periods, each based on its own particular means of preserving data. We've already covered the first, the primitive era, so we can move on to the next three.

Language and the Library of Alexandria

The reason the ancient inhabitants of Lascaux painted their information was that they didn't have a written language.
The rise of codified dialects comprised of letters or characters that represented verbal formulations (i.e., "words") substantially increased humankind's capacity to express more complex ideas and store greater amounts of knowledge. As civilization progressed in places such as Northern Africa, the Middle East, Southern Europe and China, people came up with assorted arrangements to consolidate and organize the data they accumulated.

The most advanced mode of storage during this time period was the library, and there was no better example of this in the ancient world than the Library of Alexandria in Egypt (which was actually a museum that happened to include a library). This institution was established in the third century B.C. by Ptolemy II, the Hellenic king of Egypt at the time and a former general in the army of Alexander the Great (who, incidentally, founded the city itself). More than 100 scholars lived and worked at this library, which is believed to have housed up to a half million documents.

Unfortunately for the Alexandrians, and the rest of the world, they didn't have an offsite backup for all of the data that was kept there. At least 250 years' worth of accumulated knowledge was destroyed when the library was eventually lost to fire, although no completely reliable account of the time and circumstances of this occurrence exists. How significant was this disaster? Well, some have suggested that the loss of the library's data pertaining to mathematics, astronomy and engineering set technological progress back by a minimum of two centuries. If you can imagine the moon landing taking place in 1769 (prior to the American Revolution) instead of 1969, then you can appreciate how important that information was.

Gutenberg's Press and Mass Production

For thousands of years, the method used to record information in print was to write it out by hand.
This incredibly slow method was eventually—though never entirely—supplanted during the Middle Ages by block printing, which used wooden blocks to press ink to paper. This was still inefficient, though, as blocks had to be created for each page.

Contrary to popular belief, the printing press was actually invented in the 11th century in China. However, because of aesthetic and linguistic considerations, the benefits of this new technology were not immediately realized. (Unlike the alphabetic languages, written communication in Chinese involves thousands of characters, which made movable type more problematic.) About four centuries later, a German inventor named Johannes Gutenberg devised a printing press for his language. Some have speculated that Gutenberg, ahem, drew his inspiration from the Chinese press, but no one can say for sure whether he knew such a thing previously existed.

The results were astounding at the time. The number of printed books in Europe increased exponentially and drove a revolution in intellectual life on the continent. The great scientific, religious and philosophical advances of the following centuries were made possible because the printing press facilitated the spread of ideas from illustrious thinkers such as Luther, Newton and Descartes. Also, preservation of information was significantly enhanced by the fact that it was contained in so many different places. It’s not easy to suppress (or lose) knowledge that resides in hundreds of thousands of books spread out over wide geographic areas.

Networks and the Future of Storage

While the Gutenberg printing press was technically refined to the point where output could reach massive scales (major metropolitan newspapers that print hundreds of thousands of issues every day are the best examples of this), the basic model for information creation and preservation remained in place for the next four centuries.
The main mode of recording data was paper, and the means of storing it was the drawers of a desk or cabinet. There was little interactivity between producers and consumers of information, and chances were that if people wanted particular pieces of knowledge, they’d have to go digging for them.

In the 20th century, all of that began to change. Advancements in telecommunications and computing produced another sharp upturn in the amount of information created, as well as new storage technologies such as tapes and discs. This was only the beginning, though. With the rapid adoption and usage of the Internet by the general populace during the mid-1990s—as well as the employment of a variety of smaller networks in organizations—the quantity of data exploded. A study conducted by students and faculty at the University of California, Berkeley, showed that the output of information created between the beginning of human history and 1999—12 exabytes total—had doubled by 2003.

As a result, information storage as a profession took on a whole new level of importance. To meet the challenge of preserving and arranging unprecedented amounts of data, storage professionals generate solutions of incredible technical depth and sophistication. A primary example is the storage area network (SAN). This high-speed network, which operates within the larger network, acts as the conduit between storage devices and servers, and it allows users to access old data and back up new information almost instantly.

The future of information storage solutions might be one that today we would have trouble getting our minds around. Perhaps an infinitesimal apparatus—aided by nanotechnology—will be implanted on or near the hippocampus region of the brain to enhance our own memory. This, in turn, could even be connected to a wireless network.
We could conjure up a bit of information we don’t know anything about simply by asking the device, which could then obtain that data from the network and store it—a kind of virtual omniscience. Although it seems farfetched now, it might be closer than we think.

–Brian Summerfield
Data breaches have become an everyday occurrence and numerous well-known organisations have been named and shamed, denting their reputations and wreaking financial damage. But any organisation, whatever its size or line of business, can be a target. Every organisation has some form of sensitive data such as financial records, customer details and employee information that is highly prized by criminals and the vast majority of organisations rely on technology to run their business. Technology, especially the use of disruptive technologies such as big data and cloud-based services, provides for greater productivity, flexibility and improved information access. But it also increases the chances that sensitive information can be inappropriately accessed, lost or stolen. This document discusses the changes being made to the European data protection landscape and suggests that encryption should be the default choice for protecting data. However, this should just be part of the overall data security strategy, which must be comprehensive and consistent.
The Cardioid code developed by a team of Livermore and IBM scientists divides the heart into a large number of manageable pieces, or subdomains. The development team used two approaches, called Voronoi (left) and grid (right), to break the enormous computing challenge into much smaller individual tasks. Source: LLNL The world’s fastest computer has created the fastest computer simulation of the human heart. The Lawrence Livermore National Laboratory’s Sequoia supercomputer, a TOP500 chart topper, was built to handle top secret nuclear weapons simulations, but before it goes behind the classified curtain, it is pumping out sophisticated cardiac simulations. Earlier this month, Sequoia, which currently ranks number one on the TOP500 list of the world’s fastest computer systems, received a 2012 Breakthrough Award from Popular Mechanics magazine. Now the magazine is reporting on Sequoia’s ground-breaking heart simulations. Clocking in at 16.32 sustained petaflops (20 PF peak), Sequoia is taking modeling and simulation to new heights, enabling researchers to capture greater complexity in a shorter time frame. With this advanced capability, LLNL scientists have been able to simulate the human heart down to the cellular level and use the resulting model to predict how the organ will respond to different drug compounds. Principal investigator Dave Richards couldn’t resist a little showboating: “Other labs are working on similar models for many body systems, including the heart,” he told Popular Mechanics. “But Lawrence Livermore’s model has one major advantage: It runs on Sequoia, the most powerful supercomputer in the world and a recent PM Breakthrough Award winner.” The simulations were made possible by an advanced modeling program, called Cardioid, that was developed by a team of scientists from LLNL and the IBM T. J. Watson Research Center. The highly scalable code simulates the electrophysiology of the heart. 
It works by breaking down the heart into units; the smaller the unit, the more accurate the model. Until now, the best modeling programs could achieve 0.2 mm in each direction. Cardioid can get down to 0.1 mm. Where previously researchers could run the simulations for tens of heartbeats, Cardioid executing on Sequoia captures thousands of heartbeats. Scientists are seeing 300-fold speedups. It used to take 45 minutes to simulate just one beat, but now researchers can simulate an hour of heart activity – several thousand heartbeats – in seven hours.

With the less sophisticated codes, it was impossible to model the heart’s response to a drug or perform an electrocardiogram trace for a particular heart disorder. That kind of testing requires longer run times, which just wasn’t possible before Cardioid. The model could potentially test a range of drugs and devices like pacemakers to examine their effect on the heart, paving the way for safer and more effective human testing. But it is especially suited to studying arrhythmia, a disorder of the heart in which the organ does not pump blood efficiently. Arrhythmias can lead to congestive heart failure, an inability of the heart to supply sufficient blood flow to meet the needs of the body.

There are various types of medications that disrupt cardiac rhythms. Even those designed to prevent arrhythmias can be harmful to some patients, and researchers do not yet fully understand exactly what causes these negative side effects. Cardioid will enable LLNL scientists to examine heart function as an anti-arrhythmia drug enters the bloodstream. They’ll be able to identify when drug levels are highest and when they drop off. “Observing the full range of effects produced by a particular drug takes many hours,” noted computational scientist Art Mirin of LLNL.
“With Cardioid, heart simulations over this timeframe are now possible for the first time.”

The Livermore–IBM team is also working on a mechanical model that simulates the contraction of the heart and pumping of blood. The electrical and mechanical simulations will be allowed to interact with each other, adding more realism to the heart model.

It’s not entirely clear why a national defense lab took on this heart simulation work. Fred Streitz, director of the Institute for Scientific Computing Research at LLNL, would say only that “there are legitimate national security implications for understanding how drugs affect human organs,” adding that the project stretched the limits of supercomputing in a manner that is relatable to the American people.

The cardiac modeling work was performed during the system’s “shakedown period” – the set-up and testing phase – and the team had to hurry to finish in the allotted time span. Once Sequoia becomes classified, it’s unclear if it will still be available to run Cardioid and other unclassified programs, although access will certainly be more difficult since the machine’s principal mission is running nuclear weapons codes.

Sequoia is an integral part of the NNSA’s Advanced Simulation and Computing (ASC) program, which is run by partner organizations LLNL, Los Alamos National Laboratory and Sandia National Laboratories. With 96 racks, 98,304 compute nodes, 1.6 million cores, and 1.6 petabytes of memory, Sequoia will help the NNSA fulfill its mission to “maintain and enhance the safety, security, reliability and performance of the U.S. nuclear weapons stockpile without nuclear testing.”

The Cardioid simulation has been named a finalist in the 2012 Gordon Bell Prize competition, awarded each year to recognize supercomputing’s crowning achievements. Research partners Streitz, Richards and Mirin will present their results at the Supercomputing Conference in Salt Lake City, Utah, on November 13.
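The speedup figures quoted above are easy to sanity-check. The sketch below converts the before-and-after runtimes into a per-beat speedup and estimates the cell count at Cardioid's 0.1 mm resolution; the heart rate (~60 bpm) and the nominal heart volume are assumptions for illustration, not figures from the article.

```python
# Sanity-check of the reported Cardioid speedup and problem size.
OLD_SECONDS_PER_BEAT = 45 * 60          # 2,700 s per simulated beat, before
BEATS_PER_HOUR = 60 * 60                # assuming ~60 beats per minute
NEW_WALL_SECONDS = 7 * 3600             # 7 wall-clock hours per simulated hour

new_seconds_per_beat = NEW_WALL_SECONDS / BEATS_PER_HOUR   # 7.0 s per beat
speedup = OLD_SECONDS_PER_BEAT / new_seconds_per_beat      # ~386x

# Problem size at 0.1 mm resolution, for an assumed 250 cm^3 heart volume.
heart_volume_mm3 = 250_000              # 250 cm^3 expressed in mm^3
cell_volume_mm3 = 0.1 ** 3              # one 0.1 mm cube
n_cells = heart_volume_mm3 / cell_volume_mm3   # ~2.5e8 cells

print(f"speedup ~{speedup:.0f}x, cells ~{n_cells:.1e}")
```

Both numbers land in the same range as the article's claims: a few hundred–fold speedup and a grid fine enough to resolve cellular-scale detail.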
Tobin’s q is a simple ratio first posited by Nobel-winning American economist James Tobin in the 1960s to understand the relationship between a company’s market value and the replacement value of its assets. Analysis shows that this quotient has been growing since financial statements were standardized following the Great Depression. Smoothing economic boom and bust cycles via linear regression, Tobin’s q has more than doubled from 0.4 in 1945 to a predicted 1.1 in any given year currently. This means that in general, markets now value companies at more than the sum of their tangible assets.

How can this be? Non-reportable intangible assets, of course. We know that due to 75-year-old accounting standards, certain intangibles cannot be valued and reported. The unreportable intangibles most frequently cited include human capital and intellectual capital. Yet could these alone have doubled over seven decades? Do corporations of similar revenue have twice the number of employees they once did? No, quite the opposite, as we’ve become more efficient and reliant on technology. Do humans have twice the knowledge capacity we did back in the day? Not only my teenager would fervently disagree with that.

Then what is it that companies have so much more of, that has been accumulating for over half a century, and that is hidden from balance sheets? Ever since Arthur Andersen computerized a GE payroll plant in 1953, companies have become better and better at amassing information assets (leading up to this age of Big Data) and finding ways to leverage them. Yet the value of information isn’t quantified or reported in any way. Even today’s infocentric companies whose business models revolve around collecting, buying and selling data (e.g. Facebook, Google, Experian, Nielsen, etc.) have balance sheets devoid of their most valuable asset.
Furthermore, a study by intellectual capital research firm Ocean Tomo shows that the portion of corporate market value attributable to intangibles has grown from 17% in 1975 to a whopping 81% in 2010. Indeed, information accumulation has not only increased dramatically in businesses, but the importance of information itself has supplanted traditional assets in generating revenue, and therefore in contributing to market value as well.

So what are CEOs to do, knowing that information comprises a majority of their corporate value? First, forget what the accountants say, and listen to what the market is saying. Stop just talking about information as such an important asset and start valuing and managing it like one.

For further reading on the topic of infonomics:

Infonomics: The Practice of Information Economics (Forbes)

Extracting Value from Information (Financial Times, free registration)

Comments or opinions expressed on this blog are those of the individual contributors only, and do not necessarily represent the views of Gartner, Inc. or its management. Readers may copy and redistribute blog postings on other blogs, or otherwise for private, non-commercial or journalistic purposes, with attribution to Gartner. This content may not be used for any other purposes in any other formats or media. The content on this blog is provided on an "as-is" basis. Gartner shall not be liable for any damages whatsoever arising out of the content or use of this blog.
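The ratio itself takes one line to compute. The values below are illustrative round numbers matching the 0.4 and 1.1 figures quoted above, not data from the underlying study:

```python
def tobins_q(market_value: float, replacement_value: float) -> float:
    """Tobin's q: a firm's market value divided by the
    replacement cost of its (tangible, reportable) assets."""
    return market_value / replacement_value

# A q above 1 implies the market prices assets the balance sheet
# cannot see -- e.g. information, brand, human capital.
q_1945 = tobins_q(40, 100)    # 0.4: market value below asset value
q_now = tobins_q(110, 100)    # 1.1: intangibles carry the premium

# Share of market value above tangible replacement cost at q = 1.1.
# (Note this is a different measure than Ocean Tomo's 81% figure.)
intangible_share = 1 - 1 / q_now

print(f"q then: {q_1945}, q now: {q_now}, "
      f"premium share: {intangible_share:.1%}")
```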
LAN (Local Area Network) is a computer network within a small area, such as a home, school, computer laboratory, office building or group of buildings. Most LANs connect workstations and personal computers that share a common communications line or wireless link. A local area network may serve as few as two or three users (for example, in a home network) or as many as thousands of users (for example, in an FDDI network). A system of LANs connected to each other over any distance via telephone lines and radio waves is called a wide area network (WAN).

There are many different types of LANs. Among them, Ethernet has been the most common for PCs. Some types of LANs:

Simple LANs generally consist of one or more switches, with a switch typically connecting to a router, cable modem, or ADSL modem for Internet access.

Complex LANs are characterized by their use of redundant links, with switches running the spanning tree protocol to prevent loops; they can manage different traffic types via quality of service and segregate traffic with VLANs.

A local area network can also include a variety of network devices, such as switches, firewalls, routers, load balancers, and sensors. LANs can maintain connections with other LANs via leased lines, leased services, or the Internet using virtual private network technologies. Depending on how the connections are established and secured, and the distance involved, a network may instead be classified as a metropolitan area network (MAN) or a wide area network (WAN).
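One practical way to see the LAN/WAN boundary in code: two hosts can typically reach each other directly on the LAN only if their addresses fall inside the same network prefix; otherwise traffic goes through a router. The sketch below uses Python's standard ipaddress module; the subnet and addresses are made-up example values.

```python
import ipaddress

# A typical home/office LAN subnet (example values).
lan = ipaddress.ip_network("192.168.1.0/24")

host_a = ipaddress.ip_address("192.168.1.10")   # workstation on the LAN
host_b = ipaddress.ip_address("192.168.1.42")   # printer on the same LAN
remote = ipaddress.ip_address("203.0.113.7")    # host reached across the WAN

for host in (host_a, host_b, remote):
    where = "local (LAN)" if host in lan else "remote (via router/WAN)"
    print(f"{host}: {where}")
```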
Superfast jets, laser blasts and battling robots — these are commonplace in science fiction stories. But in a prolonged effort to create a stronger, deadlier military, the U.S. Air Force plans on making them a reality over the next 30 years. In July, the organization released a document outlining an ambitious strategy to incorporate sophisticated technology into everyday military operations. If planning is successful, the Air Force’s projects could produce fighting men and women who are the envy — and fear — of other military forces.

In the 22-page paper, America’s Air Force: A Call to the Future, Air Force Secretary Deborah Lee James claimed that her branch of government would be challenged to adapt transformative technology in the future. “This strategy challenges our Air Force to forge ahead with a path of strategic agility — breaking paradigms and leveraging technology just as we did at our inception,” she wrote. “This will provide the ability to field the full-spectrum capable, high-end focused force of the future.”

The report highlighted these five technological areas slated for development:

- Hypersonics: The Air Force will continue its efforts to build faster planes — those that fly at speeds above Mach 5 — which will undoubtedly improve attack, evasion and reconnaissance capabilities. Multiple hypersonic aircraft projects have made headlines recently, including the X-51 Waverider, which flew at Mach 5.1 and traveled more than 230 nautical miles in just over 6 minutes in a 2013 test run, the longest air-breathing hypersonic flight ever. (The term “air-breathing” refers to jet engines that draw oxygen from the air to burn fuel.)
- Nanotechnology: The organization hopes to manipulate components at the molecular level to create material that’s both stronger and lighter than what’s currently available. The document refers to “significant implications for air-breathing and space platforms,” but names no specific applications for projects.
- Directed energy: Business Insider claims that “directed energy” is just another term for lasers, and cited the government’s Laser Weapon System (LaWS) as an example. LaWS comprises six lasers strapped together with beams that converge on a target. It’s one of multiple energy weapons in development, including an electromagnetic rail gun and a “slab” laser that fires a beam of 105 kilowatts. The Air Force’s 30-year outline never mentions the term “laser” specifically, but notes that “deep magazines can alleviate the need for acquiring and transporting large stockpiles of munitions into the theater, while providing precise, responsive, and persistent effects.”
- Unmanned systems: The Air Force document cites drones’ capability to operate with increased range, endurance and performance compared to aircraft operated by human pilots. The military would also be able to conduct more dangerous operations with drones without the need to compensate for human safety. In offensive situations, the drones will also be able to “swarm, suppress, deceive, or destroy,” according to the report, with weapons ranging from “kinetic to non-kinetic; permanent to reversible; single-use to self-recharging.”
- Autonomous systems: The document vaguely states that artificial intelligence and robotics will be better able to react to environments and perform situation-dependent tasks. Boston Dynamics is currently developing battle-ready robots for the military, including those that climb up walls and sprint like animals.
7.11 What is digital timestamping?

Consider two questions that may be asked by a computer user as he or she views a digital document or on-line record:

- Who is the author of this record - who wrote it, approved it, or consented to it?
- When was this record created or last modified?

In both cases, the question is about exactly this record - exactly this sequence of bits. An answer to the first question tells who and what: Who approved exactly what is in this record? An answer to the second question tells when and what: When exactly did the contents of this record first exist?

Both of the above questions have good solutions. A system for answering the first question is called a digital signature scheme (see Question 2.2.2). A system for answering the second question is called a digital timestamping scheme. Such systems are described in [BHS93] and [HS91].

Any system allowing users to answer these questions must include two procedures. First, there must be a signing procedure with which (1) the author of a record can "sign" the record, or (2) any user can fix a record in time. The result of this procedure is a string of bytes that serves as the signature. Second, there must be a verification procedure by which any user can check a record and its purported signature to make sure it correctly answers (1) who and what? or (2) when and what? about the record in question.

The signing procedure of a digital timestamping system often works by mathematically linking the bits of the record to a "summary number" that is widely witnessed by and widely available to members of the public - including, of course, users of the system. The computational methods employed ensure that only the record in question can be linked, according to the "instructions" contained in its timestamp certificate, to this widely witnessed summary number; this is how the particular record is tied to a particular moment in time.
The verification procedure takes a particular record, a putative timestamp certificate for that record, and a particular time, and uses this information to validate whether that record was indeed certified at the time claimed by checking it against the widely available summary number for that moment.

One nice thing about digital timestamps is that the document being timestamped does not have to be released to anybody to create a timestamp. The originator of the document computes the hash values himself, and sends them in to the timestamping service. The document itself is only needed for verifying the timestamp. This is very useful for many reasons (like protecting something that you might want to patent).

Two features of a digital timestamping system are particularly helpful in enhancing the integrity of a digital signature system. First, a timestamping system cannot be compromised by the disclosure of a key. This is because digital timestamping systems do not rely on keys, or any other secret information, for that matter. Second, following the technique introduced in [BHS93], digital timestamp certificates can be renewed so as to remain valid indefinitely.

With these features in mind, consider the following situations. It sometimes happens that the connection between a person and his or her public signature key must be revoked. For example, the user's private key may accidentally be compromised, or the key may belong to a job or role in an organization that the person no longer holds. Therefore the person-key connection must have time limits, and the signature verification procedure should check that the record was signed at a time when the signer's public key was indeed in effect. And thus when a user signs a record that may be checked some time later - perhaps after the user's key is no longer in effect - the combination of the record and its signature should be certified with a secure digital timestamping service.
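The linking idea described above can be sketched in a few lines. This is an illustrative toy, not the actual scheme from [BHS93] or any production service: each submitted record hash is folded into a running summary value (the "widely witnessed" number), and a certificate consists of the data needed to recompute that summary.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class ToyTimestamper:
    """Linear hash-chain timestamper: each record hash is folded into a
    running summary value, which a real service would publish widely."""

    def __init__(self) -> None:
        self.summary = h(b"genesis")          # widely witnessed seed value
        self.published = [self.summary]       # one summary per "round"

    def stamp(self, document: bytes) -> tuple[int, bytes]:
        """Return a certificate: (round number, previous summary).
        Only the hash of the document is needed, so the document
        itself never leaves its owner."""
        record_hash = h(document)
        prev = self.summary
        self.summary = h(prev + record_hash)
        self.published.append(self.summary)
        return (len(self.published) - 1, prev)

    def verify(self, document: bytes, cert: tuple[int, bytes]) -> bool:
        """Recompute the linked summary and compare with the published one."""
        round_no, prev = cert
        return h(prev + h(document)) == self.published[round_no]

ts = ToyTimestamper()
cert = ts.stamp(b"important contract, v1")
print(ts.verify(b"important contract, v1", cert))   # True
print(ts.verify(b"important contract, v2", cert))   # False: any edit breaks it
```

A real service batches many record hashes into a single summary per time interval (for instance, via a Merkle tree), so that one widely published number fixes millions of documents in time.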
There is another situation in which a user's public key may be revoked. Consider the case of the signer of a particularly important document who later wishes to repudiate his signature. By dishonestly reporting the compromise of his private key, so that all his signatures are called into question, the user is able to disavow the signature he regrets. However, if the document in question was digitally timestamped together with its signature (and key-revocation reports are timestamped as well), then the signature cannot be disavowed in this way. This is the recommended procedure, therefore, in order to preserve the non-repudiation desired of digital signatures for important documents.

The statement that private keys cannot be derived from public keys is an over-simplification of a more complicated situation. In fact, this claim depends on the computational difficulty of certain mathematical problems. As the state of the art advances - both the current state of algorithmic knowledge, as well as the computational speed and memory available in current computers - the maintainers of a digital signature system will have to make sure that signers use longer and longer keys. But what is to become of documents that were signed using key lengths that are no longer considered secure? If the signed document is digitally timestamped, then its integrity can be maintained even after a particular key length is no longer considered secure.

Of course, digital timestamp certificates also depend for their security on the difficulty of certain computational tasks concerned with hash functions (see Question 2.1.6). (All practical digital signature systems depend on these functions as well.) The maintainers of a secure digital timestamping service will have to remain abreast of the state of the art in building and in attacking one-way hash functions. Over time, they will need to upgrade their implementation of these functions, as part of the process of renewal [BHS93].
This will allow timestamp certificates to remain valid indefinitely.
Even if a data center decides to source 100 percent of its energy needs from renewable sources, blind consumption of energy is no longer acceptable — not for the environment, society or for a business’ bottom line. This makes energy efficiency critical, no matter which way you look at it. Just by installing simple, energy-efficient technologies, data centers can reduce their consumption by 30 percent. It’s well known that cooling is the singular largest consumer of energy in a data center. Servers need to be maintained at low temperatures at all times in order to prevent melt-downs, crashes and emergency shut-downs. While traditionally performed by energy-intensive HVAC systems, new options are available to cool racks using outside air – a resource that is free, simple and uses almost no energy. When built in regions with naturally occurring chilly air (such as Google’s recently launched 11-acre data center in Dublin, Ireland), free cooling is one of the most effective demand-side strategies to slash consumption. And regardless of how data centers choose to cool their servers, many data centers still routinely mix hot and cold air – limiting the capacity and effectiveness of the system. This is easily remedied — and returns up to 25 percent in energy savings — just by placing air tiles in the cold aisle; locating supply vents in the cold aisle and return vents in the hot aisle; and other simple, low-cost methods to separate hot and cold air. Approaching both aspects of energy supply and demand is vital to true sustainability, but the connecting elements to unlock the potential behind this strategy lie in smart management and communication. Demand response, the “killer app” of the smart grid, is the ability of energy companies and businesses to communicate and determine when to best produce and consume electricity. The business benefits of capitalizing on this communication as enabled by smart supply and demand are enormous. 
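The savings figures above are worth putting in dollar terms. In the back-of-the-envelope sketch below, only the 30 percent reduction comes from the text; the baseline load and the energy price are assumptions for illustration.

```python
# Back-of-the-envelope savings for a small data center.
BASELINE_KW = 1_000        # assumed 1 MW average facility load
PRICE_PER_KWH = 0.10       # assumed $0.10/kWh
HOURS_PER_YEAR = 8_760

baseline_kwh = BASELINE_KW * HOURS_PER_YEAR        # 8.76 GWh/yr
baseline_cost = baseline_kwh * PRICE_PER_KWH       # ~$876,000/yr

# "Simple, energy-efficient technologies" -> up to a 30% reduction.
efficient_kwh = baseline_kwh * (1 - 0.30)
saved = baseline_cost - efficient_kwh * PRICE_PER_KWH

print(f"annual savings at a 30% reduction: ${saved:,.0f}")
```

Even at these modest assumptions the annual savings run to hundreds of thousands of dollars, which is why efficiency measures tend to pay for themselves quickly.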
Not only does demand response have the potential to reduce our carbon emissions by 50 percent over the next twenty years, but it also allows enterprises to actually participate in the power financial market – selling back unused energy to utilities at peak times and opening up entirely new streams of revenue.

There are numerous ways that data centers can easily take advantage of demand response. For instance, advancements in weather prediction have made highly precise environmental data available to IT managers at a low cost, allowing them to work with their utilities to pre-heat and pre-cool their data centers to avoid energy-intense times and costs.

It’s already clear today that sustainability in the data center is not an “if” for businesses, but a “when” and “how.” Considering all sides of the energy equation will prove critical to supporting our sustainability, energy security and growing technology demands. And while sustainability provides businesses a tremendous financial benefit, it will also bring positive public sentiment to businesses that are responsibly sourcing and using energy – whether you’re a small start-up or the provider of the world’s largest search engine.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
August Supermoon as Seen Around the Globe / August 12, 2014

There are three supermoons (which occur when the moon is closest to Earth in its orbit) appearing this summer, and the one on Aug. 10, 2014 is the second biggest -- and the brightest -- of them all. Photographers around the world came out to capture the event. NASA scientist Noah Petro said that Sunday's supermoon was about 30 percent brighter and 14 percent larger in the sky than the smallest full moon of the year, which occurred on Jan. 16, Space.com reported. The next supermoon will occur on Sept. 9. -- Jessica Mulholland
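The brightness and size figures Petro cites follow directly from the geometry of the moon's orbit. A quick back-of-the-envelope check, using typical lunar perigee and apogee distances (assumed values, not figures given in the article):

```python
# Sanity check of the "30 percent brighter, 14 percent larger" claim.
# The distances below are typical lunar perigee/apogee values (assumed),
# not numbers from the article.

PERIGEE_KM = 356_500   # approximate closest lunar distance
APOGEE_KM = 406_700    # approximate farthest lunar distance

# Apparent (angular) size scales as 1/distance:
size_ratio = APOGEE_KM / PERIGEE_KM

# Apparent brightness scales as 1/distance**2:
brightness_ratio = (APOGEE_KM / PERIGEE_KM) ** 2

print(f"larger by about {100 * (size_ratio - 1):.0f}%")
print(f"brighter by about {100 * (brightness_ratio - 1):.0f}%")
```

The inverse-square law for brightness is why the brightness difference (~30 percent) is roughly double the angular-size difference (~14 percent).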
Desperate search for power points could be over

In future you may not have to look for a power plug or external power source to charge your laptop, as scientists have developed a new material that can turn the laptop's casing into its battery.

Researchers from Vanderbilt University's Nanomaterials and Energy Devices Laboratory have developed a supercapacitor that can store electricity by assembling electrically charged ions on the surface of a porous material, instead of storing it in chemical reactions as batteries do. The wafer-shaped materials were developed by graduate student Andrew Westover and assistant professor of mechanical engineering Cary Pint.

Pint said, "These devices demonstrate – for the first time as far as we can tell – that it is possible to create materials that can store and discharge significant amounts of electricity while they are subject to realistic static loads and dynamic forces, such as vibrations or impacts."

"Andrew has managed to make our dream of structural energy storage materials into a reality," Pint added.

The material can store energy as well as withstand static and dynamic mechanical stresses. It can store and release electrical charge while subject to stresses or pressures up to 44 psi and vibrational accelerations over 80 g, far greater than those acting on turbine blades in a jet engine.

The duo developed the material by infiltrating ion-conducting polymers into nanoporous silicon that is etched directly into bulk conductive silicon. The device platform is claimed to maintain energy densities of about 10 Wh/kg with a Coulombic efficiency of 98% under exposure to over 300 kPa tensile stresses and 80 g vibratory accelerations. The researchers also claimed that the structurally integrated energy storage material can be used across renewable energy systems, transportation systems, and mobile electronics, among others.
The breakthrough could help in charging a laptop through its casing, powering a car with energy stored in its chassis, or creating a smart home where the drywall and siding store the electricity to power the lights and appliances.

"Battery performance metrics change when you're putting energy storage into heavy materials that are already needed for structural integrity," Pint added. "Supercapacitors store ten times less energy than current lithium-ion batteries, but they can last a thousand times longer. That means they are better suited for structural applications."

"It doesn't make sense to develop materials to build a home, car chassis, or aerospace vehicle if you have to replace them every few years because they go dead."

The material is made of electrodes of silicon that have been chemically treated so they have nanoscale pores on their inner surfaces and then coated with a protective ultrathin graphene-like layer of carbon. A polymer film is sandwiched between the two electrodes and acts as a reservoir of charged ions, similar to the role of the electrolyte paste in a battery. When the electrodes are pressed together, the polymer flows into the tiny pores, like melted cheese into bread.
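A rough sense of scale for the reported ~10 Wh/kg energy density: the casing mass and the comparison battery figure below are illustrative assumptions, not values from the research.

```python
# Rough sketch of what the reported ~10 Wh/kg energy density implies for
# a structural part. The casing mass and battery capacity used here are
# assumptions for illustration only.

ENERGY_DENSITY_WH_PER_KG = 10.0   # reported for the supercapacitor material

def stored_energy_wh(structure_mass_kg: float) -> float:
    """Energy a load-bearing structure of this material could store."""
    return structure_mass_kg * ENERGY_DENSITY_WH_PER_KG

# A hypothetical 0.5 kg laptop casing:
casing_wh = stored_energy_wh(0.5)

# Compared against a typical ~50 Wh laptop battery (assumed figure),
# keeping in mind the article's trade-off: roughly 10x less energy than
# lithium-ion, but roughly 1000x the cycle life.
battery_wh = 50.0
print(f"casing stores {casing_wh:.0f} Wh "
      f"(~{100 * casing_wh / battery_wh:.0f}% of a {battery_wh:.0f} Wh battery)")
```

The arithmetic makes the article's point concrete: a structural supercapacitor is not a battery replacement on energy alone, but it adds capacity in mass that the device had to carry anyway, and it survives far more charge cycles.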
Kay C.M., ASB Industries Inc.
Advanced Materials and Processes | Year: 2013

Product quality, maintenance costs, and production requirements drive engineering improvements where surfacing technologies play an important role in steel production. Areas of concern during steel manufacturing include heat, corrosion, and wear. To enhance equipment life, a number of thermal spray coatings are being used. Friction, grip, and long-wearing surfaces allow proper strip tension from the initial weld joining to final trimming and wrapping. Rolls with HVOF-applied carbide coatings have harder surfaces than strip materials. Optimized surface profiles and high friction coefficients support gripping of strips to rolls without harming the strip's surface finish properties. The most commonly used zinc baths consist of galvanized zinc with minor concentrations of aluminum. Coatings use tungsten carbide/cobalt powders applied via HVOF or detonation gun technology. Success of these coatings depends on the spray parameters, powder manufacturing method, and sealant system. The key to increased life is to reduce the amount of free cobalt in the coating.

Bala N., Baba Banda Singh Bahadur Engineering College | Singh H., Indian Institute of Technology Ropar | Prakash S., Indian Institute of Technology Roorkee | Karthikeyan J., ASB Industries Inc.
Journal of Thermal Spray Technology | Year: 2012

High temperature corrosion accompanied by erosion is a severe problem, which may result in premature failure of the boiler tubes. One countermeasure to overcome this problem is the use of thermal spray protective coatings. In the current investigation, high velocity oxy-fuel (HVOF) and cold spray processes have been used to deposit commercial Ni-20Cr powder on T22 boiler steel. To evaluate the performance of the coatings in actual conditions, the bare as well as the coated steels were subjected to cyclic exposures, in the superheater zone of a coal-fired boiler, for 15 cycles. The weight change and thickness loss data were used to establish kinetics of the erosion-corrosion. X-ray diffraction, surface and cross-sectional field emission scanning electron microscope/energy dispersive spectroscopy (FE-SEM/EDS) and x-ray mapping techniques were used to analyse the as-sprayed and corroded specimens. The HVOF-sprayed coating performed better than its cold-sprayed counterpart in the actual boiler environment. © 2011 ASM International.

Singh H., Guru Nanak Dev University | Karthikeyan J., ASB Industries Inc.
Journal of the Brazilian Society of Mechanical Sciences and Engineering | Year: 2013

Cold spray is one of the various names for describing an all-solid-state coating process that uses a high-speed gas jet to accelerate powder particles toward a substrate where they plastically deform and consolidate upon impact. Traditional thermal spray coating technologies require the melting or partial melting of feedstock material, and then quenching the molten droplets to produce a coating. Cold spray technology belongs to the wide family of thermal spray technology and is a future direction for the deposition of coatings, especially on temperature-sensitive materials. In this paper, the historical background of the cold spray process, the fundamentals of the process, and the influence of the process parameters on coating properties are summarized. The main motivation for this review is to summarize the rapidly expanding common knowledge on cold spray for the researchers and engineers already, or soon to be, involved with this new technology. © The Brazilian Society of Mechanical Sciences and Engineering 2013.

Lahiri D., Florida International University | Gill P.K., Florida International University | Scudino S., Leibniz Institute for Solid State and Materials Research | Zhang C., Florida International University | And 5 more authors.
Surface and Coatings Technology | Year: 2013

Al-based glassy coatings were synthesized using the cold spraying technique to protect 6061 aluminum surfaces from wear and corrosion. Gas atomized Al90.05Y4.4Ni4.3Co0.9Sc0.35 (at.%) powder was used as the starting powder. Dense (98%) coatings with a uniform thickness of ~250 μm were deposited. The coatings retained the glassy structure of the powder with few nanocrystals embedded in the amorphous matrix. Ball-on-disk wear of the coatings showed 600% improvement in wear resistance as compared to the 6061 Al substrate. Potentiodynamic studies of the coatings in varying NaCl concentrations displayed 5 times better corrosion resistance than the 6061 Al substrate, which was attributed to the active passivation and the chemical homogeneity of the coatings. © 2013 Elsevier B.V.

Einarsson J.I., ASB Industries Inc.
Obstetrics and Gynecology | Year: 2011

Objective: In a 3-year period, the main mode of access for hysterectomy at Brigham and Women's Hospital changed from abdominal to laparoscopic. We estimated potential effects of this shift on perioperative outcomes and costs. Methods: We compared the perioperative outcomes and the cost of care for all hysterectomies performed in 2006 and 2009 at an urban academic tertiary care center using the χ test or Fisher's exact test for categorical variables and the two-sided Student's t test for continuous variables. A multivariate regression analysis was also performed for the major perioperative outcomes across the study groups. Cost data were gathered from the hospital's billing system; the remainder of the data was extracted from patients' medical records. Results: This retrospective study included 2,133 patients. The total number of hysterectomies performed remained stable (1,054 procedures in 2006 compared with 1,079 in 2009) but the relative proportions of abdominal and laparoscopic cases changed markedly during the 3-year period (64.7% to 35.8% for abdominal, P<.001; and 17.7% to 46% for laparoscopic cases, P<.001). The overall rate of intraoperative complications and minor postoperative complications decreased significantly (7.2% to 4%, P<.002; and 18% to 5.7%, P<.001, respectively). Operative costs increased significantly for all procedures aside from robotic hysterectomy, although no significant change was noted in total mean costs. Conclusion: A change from majority abdominal hysterectomy to minimally invasive hysterectomy was accompanied by a significant decrease in procedure-related complications without an increase in total mean costs. © 2011 The American College of Obstetricians and Gynecologists.
The advent of virtualisation is changing the way we think about datacentres, servers and networks. Not only does virtualisation shrink the footprint of the server population, it also simplifies the physical network. However, there is a knock-on effect - the original hardware server was probably protected, but the virtual server is not.

When a server is virtualised, it is layered on top of an operating system called the hypervisor. This is the master supervisor of the inputs and outputs for the server. When another virtual machine (VM) is added to the server, the hypervisor manages all the network linkages and any connections between the two VMs. One advantage is that there are no physical cables, but the downside is that any security gateways that may have existed between the original servers are now absent.

As far as has been made public, there have been no instances of VM hacks. That does not mean that VMs are more secure; it just means that hackers have either not cracked the techniques yet, or that virtualisation is not yet commonplace enough to attract their attention while there are easier pickings elsewhere in the physical server world.

In a recent report from analyst firm Quocirca, only 17% of its 301 respondents had consolidated their servers to any degree. Clive Longbottom, service director for business process analysis at Quocirca, admits that some of these deployments may only be test sites. In addition, the survey shows that 14% of these consolidations did not involve virtualisation.

There are two ways in which an attack might be mounted. One is to hit the VM, but the jackpot would be to find some way to compromise the hypervisor, because all of the data passes through this point. The hypervisor is only an operating system in the same sense that DOS was an operating system in the past.
It has minimal functionality and therefore far less code than Windows or Linux - fewer than 50,000 lines compared to more than 50 million in Windows Server 2003. This leaves less room for the hacker and makes the job of initially hardening the hypervisor much easier.

Last September, VMware patched 20 flaws in its software, and on March 18 this year it patched seven low-grade but potential security bugs in the free version of its server software, so there is no guarantee that vulnerabilities do not exist.

Tamar Newberger, vice-president of marketing at Catbird, a fledgling company in the virtual security market, said, "There have not been any well-publicised attacks and a couple of vulnerabilities have been caught and fixed by the suppliers. There have been a few reports of proof-of-concept attacks which could mean a big one will come along soon. Our problem is to try to persuade people who are not doing anything to protect themselves to act. It is like it was in the early 90s trying to sell a firewall."

The number of companies springing up to protect or embrace virtual security continues to increase. Hezi Moore, CTO at virtual security supplier Reflex Security, said, "If you have not had a break-in recently, why do you still lock your door? If somebody gets access to the hypervisor, the theory is that they will also be able to access the VMs. Having gained access to one machine it may be possible that they could attack others."

Graham Titterington, principal analyst at Ovum, is not so pessimistic: "We are in uncharted territory, but I think virtualisation is generally a good thing, but there is always the danger that we might get taken by surprise by something we have not fully appreciated. Virtualisation environments are pretty well designed and the boundaries between the VMs seem fairly rigid.
"I think the most likely point of attack is the hypervisor with something like a denial-of-service attack or possibly to put something in one of the VMs that will hog the CPU cycles."

Virtual security software mirrors the physical world by providing intrusion detection, triggers for unusual traffic and anomalous behaviour, and firewalls. Reflex Security's Virtual Security Appliance, for example, does this by loading itself as a virtual environment within each physical server to protect the VMs housed there.

"Most datacentres are not well protected," Moore said. "They concentrate their security on the gateway and there is very little security beyond that. Putting a security device within the datacentre is expensive, disruptive and takes up bandwidth. The main argument is the expense, which may be £1m, and people do not want to spend that. With virtual security, this comes down to around £10,000 and no-one is going to say no to that."

Another route of attack in the virtual world mirrors the Trojan horses that attack users today. VMs are portable as long as the underlying hypervisor is from the correct manufacturer. This opens up the possibility of a different kind of distribution, whereby a complete server can be downloaded in its virtual form, either as an appliance or as a test server. It is quite possible that malware could be intentionally or accidentally included within the VM. The number of offerings at the moment is small and they are probably harmless, but if this is a future trend, it will be exploited at some point.

Detection of rogue applications tends to rely on spotting behaviour that is irregular relative to what has previously been observed, but if the malware is present from the first day, its activity could be taken as normal. The only protection is to treat all externally produced VMs with extreme caution until they prove to be benign.

Anything with an IP address is potentially vulnerable, and patching has become an everyday chore.
Physical servers are well catered for and can be checked easily, but virtual servers may not always be online. The great thing from a security angle is that an infected or malfunctioning VM can be instantly replaced by a clean back-up VM within minutes or even seconds. This is one of the selling points of virtualisation, but can anyone be sure that the new instance is fully patched? If the virtual server has been dormant for a while it may not be, and there is no software on the market that can guarantee to patch all operating systems and applications on all VMs. The virtual environment suppliers and a clutch of third parties are tackling the problem, but they cannot pretend that they can cover every distribution of every operating system. Longbottom advises that adopters should take care in choosing their hosted operating systems: the more varied the environment, the greater the headache.

"One of the biggest areas that has to be looked at is that you work against images, you do not work against physical implementations," Longbottom said. "If you have 17 instances of an image running, you only have one physical image. That physical image is the one to patch and in order you take down each physical image and replace it with an image of the updated physical image. That should ensure that everything is up to the latest level of patching. Because you are working in a virtualised environment, you minimise the amount of downtime involved."

Where mission-critical systems are concerned, the environment has its own answer to the downtime problem. Longbottom went on to explain that virtualisation means a lot more is being made of the utilisation rate of the hardware, and some of that can be affordably traded to make the system failsafe. So even for less critical servers, it is affordable to run two images as a load-balanced pair. When one is taken down to be refreshed from the updated physical image, the other will pick up the load.
There will be a slight hit on performance, but only slight and not for long.

In the mobile computing world, virtualisation has a lot to recommend it. It can be heavily defended much more easily and at lower cost, so the security holes that are being punched in current systems - by allowing employees to work away from the office or by allowing partners to access the corporate network - can be fixed more easily. Such systems can even be effectively quarantined on a single server or two and yet still retain a great deal of functionality.

The virtual world is a fascinating enigma. Systems are possibly just as vulnerable as before, but in new and undefined ways. Until some weakness is discovered and exploited, the best anyone can do is to treat the VM world as a mirror of the physical world, on a better-the-devil-you-know basis.

The upside is that virtualisation is enforcing certain best practices that make securing the environment easier. Each server becomes a tight little farm of VMs which can be treated relatively inexpensively as a ring-fenced community. If anything goes wrong it can be quickly recovered, especially if it is a system such as a web server with static content. Where data-intensive activity is occurring, some transactions may be lost in the process, but if that is critical there are ways to minimise and even eliminate that eventuality.

Virtualisation is catching on. Quocirca's survey suggests that 87% of its sample are at least thinking about introducing virtualisation. This means that deployments may reach the critical mass that will make hackers take virtualised environments seriously as targets. Until that time, if it ever arrives, all any manager can do is build the barricades and post watchmen to scan the horizon.
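The image-refresh strategy described in the article — patch one golden image, then cycle each running VM onto it while its load-balanced partner carries the traffic — can be sketched roughly as follows. The drain, redeploy and restore helpers are hypothetical placeholders standing in for real hypervisor and load-balancer calls, not any actual API:

```python
# Sketch of a rolling image refresh: replace each running VM with a fresh
# instance of the patched golden image, one at a time, so its load-balanced
# partner always absorbs the traffic. The helper callables are hypothetical
# placeholders, not a real hypervisor or load-balancer API.

from typing import Callable, List

def rolling_refresh(vms: List[str],
                    drain: Callable[[str], None],
                    redeploy: Callable[[str], None],
                    restore: Callable[[str], None]) -> List[str]:
    """Refresh VMs one at a time so the service never loses all capacity."""
    refreshed = []
    for vm in vms:
        drain(vm)      # shift this VM's load to its partner(s)
        redeploy(vm)   # replace it with an instance of the patched image
        restore(vm)    # bring it back into the load-balanced pool
        refreshed.append(vm)
    return refreshed

# Example with no-op placeholders standing in for real infrastructure calls:
log = []
result = rolling_refresh(
    ["web-a", "web-b"],
    drain=lambda vm: log.append(("drain", vm)),
    redeploy=lambda vm: log.append(("redeploy", vm)),
    restore=lambda vm: log.append(("restore", vm)),
)
```

The ordering is the whole point: because only one member of the pair is ever out of the pool, every instance ends up on the patched image with, as the article puts it, only a slight and temporary hit on performance.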
The discovery of the Higgs boson is a major scientific achievement, the culmination of 48 years of dedicated effort by the global High Energy Physics (HEP) community. The hunt for the elusive particle began in 1964, when theoretical physicist Peter Higgs, among others, described the mechanism that would explain the origin of mass. It took many years for the theory to be accepted by the HEP community, and then useful technology was developed, on many fronts, which accelerated the process of discovery. Of course, the Large Hadron Collider (LHC) played a pivotal role in the discovery, but the introduction of grid-enabled computing was really the key to the community's success.

The roots of grid computing go back to the Supercomputing '95 Conference in San Diego, California. At the event, a team led by Ian Foster (from Argonne National Laboratory and the University of Chicago, US) demonstrated the successful execution of a number of applications running over 17 geographically distributed sites participating in the I-Way experiment. The project used middleware called I-Soft that would later, in collaboration with Carl Kesselman and his colleagues at the University of Southern California, US, become the Globus Toolkit.

In the US, Globus Toolkit continues to provide homogeneity, with the eXtreme Science and Engineering Discovery Environment (XSEDE), Open Science Grid (OSG), and many other projects depending on it. In Europe, several countries and domains embraced the concept, and since 1996 additional middleware varieties have been funded and developed for specific applications. But with projects hitting up against four-to-five-year funding cycles, some have fallen by the wayside. Still, enough have survived that navigating the disparate middleware presents challenges, especially in regard to global collaboration, and federated e-Infrastructures have found that heterogeneity is difficult to sustain in terms of development and funding.
This is probably why the number of prevailing options in Europe dropped from five in 2007, to four in 2011, among them gLite, ARC, Globus Toolkit and UNICORE – with UNICORE being the only one that does not include Globus components. Of the four, only Globus Toolkit and UNICORE are common to PRACE and EGI and have the ability to bridge the e-Infrastructures by offering a common interface to the user. In the US, OSG continues to depend heavily on both Globus Toolkit and Condor Project software as well as community-developed software for handling its massive amounts of data and jobs. In late 2002, the HEP community formed a coordinated effort known as the Large Hadron Collider (LHC) Computing Grid, or LCG, which leveraged LCG-2 middleware. This would become their high-throughput highway to the LHC at CERN (European Organization for Nuclear Research) near Geneva, sited between Switzerland and France. LCG involved high-throughput distributed resources from the OSG in the US and Europe’s Enabling Grids for E-sciencE (EGEE, which became European Grid Infrastructure, EGI, in 2010). There were four major experiments at CERN, but the ATLAS and CMS (Compact Muon Solenoid) projects were launched to cross-check and verify Higgs boson findings. ATLAS and CMS each represent a vast multinational collaboration of more than 3,000 physicists from 41 countries and 179 institutes, with some overlap. They built upon research by many projects which leveraged the Large Electron Positron (predated LHC at CERN); the US Department of Energy’s Tevatron Collider at Fermilab; and the Stanford (University-US) Linear Accelerator Center (SLAC). In 2010, high energy capability was introduced to the LHC (first operational in 2008). That’s when the HEP community finally had what they needed to prove Higgs’ theory on the 4th of July, 2012. EGI Deputy Director Catherine Gater chronicled the five years leading up to the discovery in an International Science Grid This Week (iSGTW) feature. 
While the global HEP community was first to embrace grid technologies to this extreme, today research teams from all arenas span the globe in pursuit of life-transforming discoveries. Their workflows include a variety of resources and leverage advanced networks to engage the high-throughput systems represented by EGI and OSG, plus high-performance supercomputers (HPC), storage, visualization resources, and expertise offered by the Partnership for Advanced Computing in Europe (PRACE) and the eXtreme Science and Engineering Discovery Environment (XSEDE) in the US. To facilitate this diversity, XSEDE includes access to OSG as a supported resource allocation request. There is also a joint process that allows EU-US collaborative teams to submit unified requests for allocations of PRACE and XSEDE resources (the 2012 deadline is September 15, 2012).

Last spring's EGI Community Forum in Munich, Germany, was co-located with the Initiative for Globus in Europe's (IGE) annual user conference, and the European Globus Community Forum (EGCF). During the conference, IGE signed a memorandum of understanding with the European Middleware Initiative (EMI), a close collaboration of Europe's major middleware providers. IGE and EMI deliver middleware components for deployment by European e-Infrastructure providers that facilitate multinational collaboration. Through IGE and EMI's relationship with EGI, a quality assurance process was established to specify requirements, test, solicit feedback, and apply lessons learned in an effort to continuously improve EGI's offerings.

EMI is a three-year project that engages European users and global infrastructure providers to assess specific needs, identify redundancies, and develop a collection of consolidated and harmonious software components. Deliverables include three major releases and subsequent minor revisions, as necessary.
Each set is designed to comply with open-source guidelines and to integrate with Europe's mainstream operating systems. Major releases include Kebnekaise (EMI-1, May 12, 2011); Matterhorn (EMI-2, May 21, 2012); and Monte Bianco (EMI-3, February 28, 2013).

Although many consider the Globus Toolkit to be US software, it is open source and its developer and user communities include many Europeans who recognize its value. On October 25, 2010, IGE's roadmap was presented by Steve Crouch (UK-University of Southampton) and Helmut Heller (Germany-LRZ) at the first EGI Technical Forum in Amsterdam. At that time, EGI's Unified Middleware Distribution (UMD) officially recognized IGE as a technology provider. Their plan included timelines for the integration of resources by European e-Infrastructure providers, including EGI, PRACE, and EU-IndiaGrid2.

Globus Toolkit has been widely used in Germany since the country's D-Grid initiative began in 2005. The Leibniz Supercomputing Centre (LRZ) in Munich installed it on its supercomputers in 2002. Europe's fastest computer, the SuperMUC, became operational at the LRZ in August this year. SuperMUC and LRZ are committed to serving IGE-supported middleware and will most likely be driving forces for the future development and use of Globus Toolkit by Europe's scientific community.

Globus Online Software-as-a-Service

At the GlobusWORLD 2012 conference in Chicago last April, Foster (Globus Project co-founder) quoted the late Steve Jobs (Apple), who said "Start with the customer experience and work back toward the technology – not the other way around." Applying this philosophy and a commitment to continuous improvement, Foster and the Globus team recently launched a new effort that leverages cloud technologies to develop the Globus Online software-as-a-service (SaaS) offering.
With hosted, professionally operated services and intuitive Web 2.0 interfaces, Globus Online aims to increase usability and functionality dramatically relative to past grid software. The SaaS model streamlines the process of delivering new features and enables the service's capabilities to be rapidly refined based on early user feedback. When EU countries add the Globus Toolkit (in particular, GridFTP and MyProxy servers) to their middleware stack, they can take advantage of Globus Online services without requiring additional software.

[Photo] From left: Steve Tuecke (Globus Online, UChicago/Argonne) and IGE Program Director Helmut Heller (LRZ) at the 2011 EGI Technical Forum in Lyon, France.

At the March IGE meeting, the University of Chicago's Steve Tuecke, Globus Online co-founder, presented its capabilities and anticipated future development with European interoperability in mind. Globus Online's features for high-performance, secure file transfer were recently integrated with the ATLAS PanDA workload management system and are in the testing phase. An upcoming Globus service that simplifies big-data storage and sharing could substantially enhance how the HEP community manages the massive amounts of data generated by the LHC and the new subatomic field of physics research launched by the Higgs boson discovery. Future development will target additional services to offer a comprehensive research data management solution delivered using SaaS approaches.

Of course, the biggest challenge faced by multinational collaborations is satisfying the security and privacy policies of every institution, government, and network along the way. Globus Online incorporates Globus Nexus, a service that manages user identities, including profiles, groups, and information about resources connected to the Globus research cloud.
Like all Globus services, the Globus Nexus features may be accessed via a Web browser, the command line, and a RESTful programming interface that enables organizations to better integrate Globus services into their infrastructure.

The EGI Technical Forum 2012 takes place next week, from September 17-21, in Prague, Czech Republic, at the Clarion Congress Hotel. GlobusEUROPE 2012 is co-located and scheduled for Monday, September 17. The event is hosted by EGI.eu in partnership with CESNET, the consortium of Czech universities and the Czech Academy of Sciences that represents the country in the EGI Council. HPC in the Cloud is covering the event live, so check back for more coverage soon.
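The reliability layer that a hosted transfer service such as Globus Online puts over raw file movement — retry failed transfers and verify integrity before declaring success — can be illustrated with a simple loop. This is a conceptual sketch only, not the real Globus API; do_transfer and the checksum argument are hypothetical stand-ins for actual service calls:

```python
# Conceptual sketch of "fire-and-forget" transfer reliability: retry a
# transfer with backoff and verify a checksum before declaring success.
# This illustrates the idea only; it is NOT the real Globus API, and
# do_transfer is a hypothetical stand-in for a real data source.

import hashlib
import time
from typing import Callable

def reliable_transfer(do_transfer: Callable[[], bytes],
                      expected_checksum: str,
                      max_attempts: int = 3,
                      base_delay_s: float = 0.0) -> bytes:
    """Retry a transfer until its MD5 checksum matches, or give up."""
    for attempt in range(1, max_attempts + 1):
        data = do_transfer()
        if hashlib.md5(data).hexdigest() == expected_checksum:
            return data
        time.sleep(base_delay_s * attempt)   # linear backoff between retries
    raise RuntimeError(f"transfer failed after {max_attempts} attempts")

# Example: a flaky source that delivers corrupted bytes once, then succeeds.
payload = b"ATLAS event data"
attempts = iter([b"corrupted", payload])
good = reliable_transfer(lambda: next(attempts),
                         hashlib.md5(payload).hexdigest())
```

Pushing this retry-and-verify bookkeeping into a hosted service, rather than into every user's scripts, is precisely the usability argument the article makes for the SaaS model.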
"The death of democracy is not likely to be an assassination from ambush. It will be a slow extinction from apathy, indifference and undernourishment." -- Robert Maynard Hutchins (1899-1977), former dean, Yale Law School; president, University of Chicago There should be little doubt that the difficulties experienced in counting votes accurately and quickly in the recent elections raise some difficult and important questions for the future of our republic. However, the urgency of these questions -- for example, the December 12 and December 18 deadlines in the Electoral College process -- and the vehemence with which they have been debated may divert our attention from other issues that are even more important in the long term. The first of these concerns the largest voting block in the presidential election: those eligible voters who didnt vote at all. That group amounted to about 50 percent of the total eligible voting population in 2000, which is about what its been in all presidential elections since 1971. Its even higher in non-presidential election years, averaging more than 60 percent for federal elections in the same period. In the earlier part of this century it was typical for less than 40 percent of eligible voters to stay home in presidential elections, and in the 1800s less than 20 percent failed to vote. This same 80 percent voter turnout level is still common today in most democratic nations, and in some, such as Germany and Japan, voter turnout usually approaches 90 percent. Various studies have looked into why voter turnout has declined in the United States, and the results are not comforting. Surveys conducted by Harvards Vanishing Voter Project between November 14 and November 19 last year indicated that 60 percent to 70 percent of citizens were "discouraged" by what was happening in the campaign and that half of the public believes the election was "unfair" to voters. 
The study shows that 86 percent of citizens feel that "most politicians are pretty much willing to say whatever it takes to get themselves elected." Other polls by the Harvard project reveal that citizens are not only uninterested in elections but ill informed about candidates and issues as well.

The Important Question: Why?

"When 100 million people fail to vote in a presidential election ... the reason is more than simply apathy," wrote former presidential advisor John Dean in a recent FindLaw column. "To tag over half the voting population with indifference, unconcern, passivity, lethargy or simply laziness may describe behavior, but it doesn't explain it. And an explanation is needed, if one can be given for 100 million excuses." It is possible that an all-out effort to register more voters is needed. Studies indicate that high percentages of registered voters do indeed vote. Perhaps mandatory registration would be desirable, or maybe even compulsory voting, wherein non-voters without legitimate excuses must pay fines, as is the case in Australia and Belgium. Some pundits have suggested that a viable multi-party system is necessary to give voters a more meaningful choice than is now provided by the Democratic/Republican-dominated system, or if not that, a binding "none of the above" option on all ballots.

Revitalizing through Reform

If we don't know which of these reforms to invoke, it is because we haven't given enough attention to the problem and to finding out the root causes of non-voting. Rather than sweep the unpleasantness of the recent election under the rug, it is important that we face up to the problems it revealed: archaic or motley voting systems, citizen disenchantment and non-participation. We need to revitalize our democracy by reforming our elections. As noted in the Encyclopedia Britannica article on electoral processes: "Elections ... serve to reinforce the stability and legitimacy of the political community in which they take place.
Like national holidays commemorating common experiences, elections serve to link the members of a body politic to each other and thereby confirm the viability of the political community. By mobilizing masses of voters in a common act of governance, elections lend authority and legitimacy to the acts of those who wield power in the name of the people.

"Elections can also confirm the worth and dignity of the individual citizen as a human being. Whatever other needs he may have, participation in an election serves to gratify the voter's sense of self-esteem and self-respect. It gives him an opportunity to have his say, and he can, through expressing partisanship and even through nonvoting, satisfy his sense of belonging to or alienation from the political community." (15th edition, Macropaedia Volume 6).

The corollary to this is that widespread alienation, expressed by nonvoting, reflects a growing frustration of this "profound human craving for personal fulfillment," which can be the death knell of our democracy. That is why we cannot ignore the broader issues revealed, not just in the 2000 election, but in the changing pattern of voter participation over the past century.
<urn:uuid:d9953ccf-09a6-4123-8567-dec40cae417d>
CC-MAIN-2017-04
http://www.govtech.com/magazines/gt/gov-and-all-that.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00297-ip-10-171-10-70.ec2.internal.warc.gz
en
0.964101
1,002
2.953125
3
Originally published May 15, 2012 Hadoop is one of the up-and-coming technologies for performing analysis on big data, but to date very few universities have included it in their undergraduate or graduate curriculums. In a February 2012 article from InfoWorld, those already using the technology issued the warning that “Hadoop requires extensive training along with analytics expertise not seen in many IT shops today.” A ComputerWorld article singled out MIT and UC Berkeley as having already added Hadoop training and experience to their curriculums. Other educational institutions need to seek out practitioners in their area or poll alumni to determine if individuals that can impart their knowledge to college students are available and if so, prepare a curriculum to start training the next generation of IT employees and imbue them with the skills they will require to meet the challenges of the 21st century. Hadoop is one of the newest technology solutions for performing analysis and deriving business intelligence on big data. On the TechTarget website, it is defined as “… a free, Java-based programming framework that supports the processing of large data sets in a distributed computing environment. It is part of the Apache project sponsored by the Apache Software Foundation.” Hadoop is a combination of many tools and software products of which the primary two are HDFS (Hadoop Distributed File System) and MapReduce. In its current form, these components run primarily on the Linux operating system. Both of these components are Free Open Source Software (FOSS) and are licensed under the Apache License, Version 2.0. HDFS is a file system that distributes the data to be analyzed across all the servers, which are typically inexpensive commodity hardware with internal or direct attached storage, available in a server farm. The data is replicated across several nodes so the failure of any one node does not disrupt the currently executing process. 
The HDFS file system maintains copies of the master catalog across many of these nodes so it always knows where specific chunks of the data reside. Support for very large datasets is provided by this mechanism of distributed storage. As defined on the Apache Hadoop website, "… MapReduce is a software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in-parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner." This software, typically written in Java, is used to map the input data that defines specifications for how it can be broken into chunks and how it is to be processed in parallel on multiple nodes of the cluster. The output of these map tasks is then used as input to the reduce tasks. The data and processing usually reside on the same nodes of the cluster to provide the scalability to handle the very large datasets that are typically processed using MapReduce. The basis of this processing is the mapping of the input data into key-value pairs. This is very similar to XML where each combination of start and end tags contains a specific value within a group of elements (e.g., <FirstName>Alex</FirstName>). The reduce tasks combine the outputs of the map tasks into smaller sets of values that can then be used in additional analysis tasks. To manage the processing, the Hadoop framework provides a job control mechanism that passes the required data to each of the nodes in the cluster, then starts and monitors the jobs on each of the processing nodes. If a particular node fails, the data and processing are automatically switched to a different node in the cluster, preventing the failure of a process due to a node becoming unavailable.
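The key-value flow described above can be sketched in a few lines. This is not the Hadoop Java API itself; it is an illustrative Python stand-in (Hadoop Streaming does let mappers and reducers be written in languages other than Java) showing the classic word-count pattern: map emits (key, value) pairs, the framework shuffles them by key, and reduce collapses each group into a smaller result:

```python
from itertools import groupby
from operator import itemgetter

# Map phase: emit (key, value) pairs -- here, (word, 1) for each word.
def map_task(line):
    for word in line.lower().split():
        yield (word, 1)

# Shuffle phase: the framework sorts and groups the pairs by key
# between the map and reduce phases.
def shuffle(pairs):
    return groupby(sorted(pairs), key=itemgetter(0))

# Reduce phase: combine all values seen for a key into one result.
def reduce_task(word, pairs):
    return (word, sum(count for _, count in pairs))

lines = ["big data needs big clusters", "data drives analysis"]
mapped = (pair for line in lines for pair in map_task(line))
results = dict(reduce_task(word, group) for word, group in shuffle(mapped))
print(results["big"], results["data"])  # each appears twice in the input
```

In a real cluster the same three phases run across many nodes, with HDFS supplying each mapper its local chunk of the input.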
According to an October 2010 article in InfoWorld, the initial use of the Hadoop framework was to index web pages, but it is now being viewed as an alternative to other business intelligence (BI) products that rely on data residing in databases (structured data), since it can work with unstructured data from disparate data sources that database-oriented tools are unable to handle as effectively. The article goes on to state that corporations "… are dropping data on the floor because they don't have a place to put it" and Hadoop clusters can provide a data storage and processing mechanism for this data so it is not wasted. A very recent InfoWorld article examines the issues involved in the detection of cyber criminals by combining big data with traditional structured data residing in a data warehouse. The article mentions that the biggest problem will be identifying the network activity and behavior of individuals who are accessing the system for legitimate reasons as opposed to those out to steal sensitive information for nefarious purposes. It also mentions the inability of security information and event management (SIEM) and other intrusion detection systems (IDS) currently used for this purpose to correctly detect and report these types of events. They generate mounds of information that cannot be adequately analyzed to help identify the good users versus the bad users on their systems to protect the enterprise and its data.
A November 2011 article in ComputerWorld mentions that JPMorgan Chase is using Hadoop "… to improve fraud detection, IT risk management, and self service applications" and that eBay is using it to "build a new search engine for its auction site." The article goes on to warn that anyone using Hadoop and its associated technologies needs to consider the security implications of using this data in that environment because the currently provided security mechanisms of access control lists and Kerberos authentication are inadequate for most high security applications. It was noted that most government agencies utilizing Hadoop clusters are firewalling them into "… separate 'enclaves' … " to protect the data and ensure that only those with proper security clearance can see the data. One of the individuals interviewed for the article suggested that all sensitive data in transit or stored in a Hadoop cluster be encrypted at the record level. Given all these security concerns, many executives do not view Hadoop as being ready for enterprise consumption. An article in ComputerWorld states that IT training in these skills can be obtained from organizations such as Cloudera, Hortonworks, IBM, MapR and Informatica. Cloudera has been offering this training for three years and they also offer a certification at the end of their four-day training program. According to the education director at Cloudera, their certification is deemed valuable by enterprises and organizations are starting to require the Cloudera Hadoop certification in their job postings. Hortonworks just started offering training and certification classes in February 2012 while IBM has been doing this since October 2011; the big difference between the two is that Hortonworks is targeting IT professionals with Java and Linux experience and IBM is targeting undergraduate and graduate students taking online classes in Hadoop.
Upon completion of these classes, they are qualified to take a certification test; however, when the article was written, approximately 1% of students had taken the certification exam. A recent ComputerWorld article mentions that the terms “Hadoop” and “data scientist” are starting to show up in job postings and that some of the most well-known organizations are posting these job requirements. The article mentions that Google has reported that the search term “data scientist” is 20 times higher – so far – in the first quarter of 2012 than it was in the last quarter of 2011 and that there were 195 job listings on Dice.com that mentioned this term. This indicates that the market for technical skills in IT and statistics is growing very quickly as businesses are realizing that this new technology can provide real value to their organizations. They will require a new IT specialty called “data science” to analyze the data extracted and processed using Hadoop and using statistical analysis to derive beneficial insights. To address the shortage of individuals entering the workforce with the skills necessary to effectively utilize technologies like Hadoop, educational institutions need to offer courses in data analysis and data mining using statistical modeling methods as well as more specialized courses in Hadoop technologies like HDFS and MapReduce. These courses should have heavy emphasis on setting up Hadoop, HDFS, Java and any other software required for the environment to operate correctly. Since most students will be performing these tasks on a laptop or in a virtual machine (VM) environment, it may be more desirable to provide a preloaded VM to the students so they can see the end-state they need to achieve. This VM could also be used for the initial programming courses in Java so the students are not burdened with setting up the environment until they are in more advanced courses in operating system technologies.
<urn:uuid:b4503fdf-f8cf-4fd8-81a2-9e83c0e534d8>
CC-MAIN-2017-04
http://www.b-eye-network.com/channels/1531/view/16067
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00443-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950764
1,767
3.078125
3
A 256-node Hadoop system at the University of Texas at Austin is breaking down the barriers that have traditionally kept high performance computing relegated to technical experts. Nearly 70 students and researchers at the Texas Advanced Computing Center (TACC) have used the cluster to crunch big datasets, and provide potential answers to questions in the fields of biomedicine, linguistics, and astronomy. There’s been a lot of hype over Apache’s Hadoop in the last few years, and with good reason. With the emergence of big data, new technologies like Hadoop promise to make it easier to sort through huge datasets and tease out the patterns, without burdening users with low-level plumbing, like I/O, memory structures, and job queuing. What’s notable about the TACC’s Hadoop cluster is that it represents the first Hadoop implementation running on a supercomputer at a U.S. high performance computing center. Until the folks at TACC loaded Hadoop on their 256-node Dell cluster (dubbed Longhorn) in the fall of 2010, you couldn’t find Hadoop running on an academic supercomputer, according to Aaron Dubrow, a science and technology writer at TACC. In the 3.5 years that the TACC cluster has been online, it’s seen more than one million hours of data intensive computations across 19 different projects, and has been the basis for dozens of papers and presentations ranging from flow cytometry (FCM) to natural language processing. Longhorn helped accelerate the identification of cell types using FCM, which is a technology used by medical researchers. Thanks to the cluster’s ability to automatically create and schedule parallel tasks based on the user’s job specification, the FCM processing got an immediate boost, and eliminated the need to rewrite the open-source software to handle big data sets. The cluster was also used by linguistic researchers to show how language is connected across time and space. 
A UT linguistics professor applied the TextGrounder algorithm against a collection of British and American books from a century ago. The results were then meshed with a geobrowser to display where words have their roots. Others are using the 96-TB Hadoop cluster to help sort the wheat from the chaff on the Internet as it relates to one topic in particular: Autism. UT researchers are using visualization techniques to help the parents of autistic children find information and support on the Web more quickly. TACC is also working with Intel to find out how Hadoop clusters can be goosed to run scientific workloads faster, particularly as it relates to speedier interconnects. The groups shared their work together with a white paper that was recently published.
<urn:uuid:f1457894-49ee-4e6c-b264-32f618b66797>
CC-MAIN-2017-04
https://www.hpcwire.com/2013/05/30/tacc_s_hadoop_cluster_breaks_new_ground/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00443-ip-10-171-10-70.ec2.internal.warc.gz
en
0.9396
573
2.59375
3
The U.S. started the transition to digital television (DTV) in 2009 with full implementation by 2015. DTV is an innovative type of over-the-air broadcasting technology that enables TV stations to provide dramatically clearer pictures and better sound quality. It is more efficient and flexible than the traditional way of broadcasting known as analog. For example, DTV makes it possible for stations to broadcast multiple channels of free programming all at once (called multicasting), instead of broadcasting one channel at a time. DTV technology can also be used to deliver future interactive video and data services that analog technology can't provide. This switch to DTV left what is called "white space" in the spectrum that many have theorized could be used for high-speed WiFi, or Super WiFi. The FCC is currently evaluating a number of possibilities that would help track white space use. The move to using this free spectrum for wireless data would signal billions of dollars in innovation and research. Companies from Google to Microsoft are betting on this enhancement to wireless growth. The move would also help alleviate the coming shortage of bandwidth. The approval of Super WiFi could also benefit consumers in that there would be a wider choice of wireless providers. Currently only a few companies are licensed to deliver digital networking to access the Internet. Yet with more spectrum available, other companies could bid on the right to deliver wireless digital content. This could drive competition in both prices and services available. Imagine having a wireless device without needing a cell phone plan. All you would have to do is pay for the data package you need. The Super WiFi Summit, on August 27th-29th 2013, will host industry discussions on the use and capabilities that will come with Super WiFi implementation.
A subject of great importance for anyone entering the white space arena is FCC spectrum regulation and how it might affect companies wanting to participate in development. Insiders will be on hand to examine the influence of TV auctions and the FCC's plans for unlicensed spectrum. Edited by Ashley Caputo
<urn:uuid:febca5f0-548b-4a0f-9cae-f5b6d4ecc088>
CC-MAIN-2017-04
http://www.mobilitytechzone.com/topics/4g-wirelessevolution/articles/2013/05/02/336676-super-wifi-coming-the-us.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00351-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94969
404
2.875
3
Text Mining Makes Sense of Social Media
September 21, 2012

Text mining is taking a curious turn toward social media, according to “Mining the Blogosphere: Researchers Develop Tools That Make Sense of Social Media” on Science Daily. We learn in the article that several Concordia computer scientists are helping computers get closer to “reading” an online blog and understanding it. The system they created, called BlogSum, allows organizations to pose questions and then find out how a large number of people online would respond by examining real-life self-expression. Leila Kosseim, associate professor in Concordia’s Faculty of Engineering and Computer Science and one of the lead researchers on the project, explains: “Huge quantities of electronic texts have become easily available on the Internet, but people can be overwhelmed, and they need help to find the real content hiding in the mass of information.” Kosseim also comments: “The field of natural language processing is starting to become fundamental to computer science, with many everyday applications — making search engines find more relevant documents or making smart phones even smarter.” When tested against similar technology or even human subjects, BlogSum was ranked superior. The possibilities this technology opens up are vast, from marketing research on consumer preferences to gauging voter intentions in upcoming elections. We look forward to seeing it advance the world of search. Andrea Hayden, September 21, 2012
<urn:uuid:7817f58c-cf62-4671-98ee-e10700464988>
CC-MAIN-2017-04
http://arnoldit.com/wordpress/2012/09/21/text-mining-makes-sense-of-social-media/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00259-ip-10-171-10-70.ec2.internal.warc.gz
en
0.923456
298
2.53125
3
Cloud computing brings great promise, but also confusion to the IT industry. Key questions are answered here. Everyone in the IT industry is talking about cloud computing, but there is still confusion about what the cloud is, how it should be used and what problems and challenges it might introduce. This FAQ will answer some of the key questions enterprises are asking about cloud computing. What is cloud computing? Gartner defines cloud computing as "a style of computing in which massively scalable IT-related capabilities are provided 'as a service' using Internet technologies to multiple external customers." Beyond the Gartner definition, clouds are marked by self-service interfaces that let customers acquire resources at any time and get rid of them the instant they are no longer needed. The cloud is not really a technology by itself. Rather, it is an approach to building IT services that harnesses the rapidly increasing horsepower of servers as well as virtualization technologies that combine many servers into large computing pools and divide single servers into multiple virtual machines that can be spun up and powered down at will. How is cloud computing different from utility, on-demand and grid computing? Cloud by its nature is "on-demand" and includes attributes previously associated with utility and grid models. Grid computing is the ability to harness large collections of independent compute resources to perform large tasks, and utility is metered consumption of IT services, says Kristof Kloeckner, the cloud computing software chief at IBM. The coming together of these attributes is making the cloud today's most "exciting IT delivery paradigm," he says. Fundamentally, the phrase cloud computing is interchangeable with utility computing, says Nicholas Carr, author of "The Big Switch" and "Does IT Matter?" The word "cloud" doesn't really communicate what cloud computing is, while the word "utility" at least offers a real-world analogy, he says. 
"However you want to deal with the semantics, I think grid computing, utility computing and cloud computing are all part of the same trend," Carr says. Carr is not alone in thinking cloud is not the best word to describe today's transition to Web-based IT delivery models. For the enterprise, cloud computing might best be viewed as a series of "online business services," says IDC analyst Frank Gens. What is a public cloud? Naturally, a public cloud is a service that anyone can tap into with a network connection and a credit card. "Public clouds are shared infrastructures with pay-as-you-go economics," explains Forrester analyst James Staten in an April report. "Public clouds are easily accessible, multitenant virtualized infrastructures that are managed via a self-service portal." What is a private cloud? A private cloud attempts to mimic the delivery models of public cloud vendors but does so entirely within the firewall for the benefit of an enterprise's users. A private cloud would be highly virtualized, stringing together mass quantities of IT infrastructure into one or a few easily managed logical resource pools. Like public clouds, delivery of private cloud services would typically be done through a Web interface with self-service and chargeback attributes. "Private clouds give you many of the benefits of cloud computing, but it's privately owned and managed, the access may be limited to your own enterprise or a section of your value chain," Kloeckner says. "It does drive efficiency, it does force standardization and best practices." The largest enterprises are interested in private clouds because public clouds are not yet scalable and reliable enough to justify transferring all of their IT resources to cloud vendors, Carr says. "A lot of this is a scale game," Carr says. "If you're General Electric, you've got an enormous amount of IT scale within your own company. 
And at this stage the smart thing for you to do is probably to rebuild your own internal IT around a cloud architecture because the public cloud isn't of a scale at this point and of a reliability and everything where GE could say 'we're closing down all our data centers and moving to the cloud.'" Is cloud computing the same as software-as-a-service? You might say software-as-a-service kicked off the whole push toward cloud computing by demonstrating that IT services could be easily made available over the Web. While SaaS vendors originally did not use the word cloud to describe their offerings, analysts now consider SaaS to be one of several subsets of the cloud computing market. What types of services are available via the cloud computing model? Public cloud services are breaking down into three broad categories: software-as-a-service, infrastructure-as-a-service, and platform-as-a-service. SaaS is well known and consists of software applications delivered over the Web. Infrastructure-as-a-service refers to remotely accessible server and storage capacity, while platform-as-a-service is a compute-and-software platform that lets developers build and deploy Web applications on a hosted infrastructure. How do vendors charge for these services? SaaS vendors have long boasted of selling software on a pay-as-you-go, as-needed basis, preventing the kind of lock-in inherent in long-term licensing deals for on-premises software. Cloud infrastructure providers like Amazon are doing the same. For example, Amazon's Elastic Compute Cloud charges for per-hour usage of virtualized server capacity. A small Linux server costs 10 cents an hour, while the largest Windows server costs $1.20 an hour. Storage clouds are priced similarly. Nirvanix's cloud storage platform has prices starting at 25 cents per gigabyte of storage each month, with additional charges for each upload and download. What types of applications can run in the cloud? 
Technically, you can put any application in the cloud. But that doesn't mean it's a good idea. For example, there's little reason to run a desktop disk defragmentation or systems analysis tool in the cloud, because you want the application sitting on the desktop, dedicated to the system with little to no latency, says Pund-IT analyst Charles King. More importantly, regulatory and compliance concerns prevent enterprises from putting certain applications in the cloud, particularly those involving sensitive customer data. IDC surveys show the top uses of the cloud as being IT management, collaboration, personal and business applications, application development and deployment, and server and storage capacity. Can applications move from one cloud to another? Yes, but that doesn't mean it will be easy. Services have popped up to move applications from one cloud platform to another (such as from Amazon to GoGrid) and from internal data centers to the cloud. But going forward, cloud vendors will have to adopt standards-based technologies in order to ensure true interoperability, according to several industry groups. The recently released "Open Cloud Manifesto" supports interoperability of data and applications, while the Open Cloud Consortium is promoting open frameworks that will let clouds operated by different entities work seamlessly together. The goal is to move applications from one cloud to another without having to rewrite them. How does traditional software licensing apply in the cloud world? Vendors and customers alike are struggling with the question of how software licensing policies should be adapted to the cloud. Packaged software vendors require up-front payments, and make customers pay for 100% of the software's capabilities even if they use only 25% or 50%, Gens says. This model does not take advantage of the flexibility of cloud services. 
Oracle and IBM have devised equivalency tables that explain how their software is licensed for the Amazon cloud, but most observers seem to agree that software vendors haven't done enough to adapt their licensing to the cloud. The financial services company ING, which is examining many cloud services, has cited licensing as its biggest concern. "I haven't seen any vendor with flexibility in software licensing to match the flexibility of cloud providers," says ING's Alan Boehme, the company's senior vice president and head of IT strategy and enterprise architecture. "This is a tough one because it's a business model change. … It could take quite some time." What types of service-level agreements are cloud vendors providing? Cloud vendors typically guarantee at least 99% uptime, but the ways in which that is calculated and enforced differ significantly. Amazon EC2 promises to make "commercially reasonable efforts" to ensure 99.95% uptime. But uptime is calculated on a yearly basis, so if Amazon falls below that percentage for just a week or a month, there's no penalty or service credit. GoGrid promises 100% uptime in its SLA. But as any lawyer points out, you have to pay attention to the legalese. GoGrid's SLA includes this difficult-to-interpret phrase: "Individual servers will deliver 100% uptime as monitored within the GoGrid network by GoGrid monitoring systems. Only failures due to known GoGrid problems in the hardware and hypervisor layers delivering individual servers constitute failures and so are not covered by this SLA." Attorney David Snead, who recently spoke about legal issues in cloud computing at Sys-Con's Cloud Computing Conference & Expo in New York City, says Amazon has significant downtime but makes it difficult for customers to obtain service credits. "Amazon won't stand behind its product," Snead said. "The reality is, they're not making any guarantees." How can I make sure my data is safe? Data safety in the cloud is not a trivial concern. 
Online storage vendors such as The Linkup and Carbonite have lost data, and were unable to recover it for customers. Secondly, there is the danger that sensitive data could fall into the wrong hands. Before signing up with any cloud vendor, customers should demand information about data security practices, scrutinize SLAs, and make sure they have the ability to encrypt data both in transit and at rest. How can I make sure that my applications run with the same level of performance if I go with a cloud vendor? Before choosing a cloud vendor, do your due diligence by examining the SLA to understand what it guarantees and what it doesn't, and scour through any publicly accessible availability data. Amazon, for example, maintains a "Service Health Dashboard" that shows current and historical uptime status of its various services. There will always be some network latency with a cloud service, possibly making it slower than an application that runs in your local data center. But a new crop of third-party vendors is building services on top of the cloud to make sure applications can scale and perform well, such as RightScale. By and large, the performance hit related to latency "is pretty negligible these days," says RightScale CTO Thorsten von Eicken. The largest enterprises are distributed throughout the country or world, he notes, so many users will experience a latency-caused performance hit whether an application is running in the cloud or in the corporate data center.
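The pay-as-you-go prices and uptime guarantees quoted in this FAQ lend themselves to simple back-of-the-envelope arithmetic. The sketch below uses the article's own figures (10 cents per server-hour, 25 cents per GB-month, a 99.95% uptime promise) purely as illustrative inputs, not as current pricing:

```python
HOURS_PER_MONTH = 730  # average hours in a month (8760 / 12)

# Compute: per-hour server pricing, e.g. $0.10/hr for a small Linux server.
def monthly_compute_cost(rate_per_hour, hours=HOURS_PER_MONTH):
    return rate_per_hour * hours

# Storage: per-gigabyte-per-month pricing, e.g. $0.25/GB-month.
def monthly_storage_cost(rate_per_gb, gigabytes):
    return rate_per_gb * gigabytes

# SLA: how much downtime does an uptime percentage actually allow per year?
def allowed_downtime_hours(uptime_pct, period_hours=365 * 24):
    return period_hours * (1 - uptime_pct / 100)

print(round(monthly_compute_cost(0.10), 2))       # about $73 per month
print(round(monthly_storage_cost(0.25, 100), 2))  # 100 GB -> $25 per month
print(round(allowed_downtime_hours(99.95), 1))    # roughly 4.4 hours/year
```

The last figure is worth noticing when reading SLAs: even a 99.95% guarantee permits several hours of outage a year before any credit is owed, and as the article notes, how that percentage is measured matters as much as the number itself.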
<urn:uuid:26099667-7698-4de3-a4f4-756b6bd81ed2>
CC-MAIN-2017-04
http://www.networkworld.com/article/2268449/virtualization/faq--cloud-computing--demystified.html?page=1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00077-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949917
2,258
2.578125
3
Understanding DB2 Universal Database character conversion In today's world, many database applications work with multiple database components on multiple platforms. A database application could be running on a Windows® system, but interact with a DB2 UDB for z/OS® database through a DB2 Connect™ server running on AIX®. The information that flows between these various servers may go through several character conversions, and in most cases these conversions are transparent to the user. There are occasions, however, when some configuration is required. In such cases, it is useful to understand how these conversions work, and which component handles them. For instance, consider these situations: - "I stored an exclamation mark ( ! ) in a column of one of my DB2 UDB for OS/390 tables from SPUFI. When I retrieve the same column from the DB2 UDB for Linux™, UNIX® and Windows command line processor (CLP), the exclamation mark gets converted to a square bracket ( ] )." - "I have Spanish information in my DB2 UDB for Linux, UNIX, and Windows database. My Java™ application retrieves the Spanish data (accents), but they are all corrupted, even though I can see the contents are correct with the CLP." These are some examples of the questions and issues that come from DB2 UDB customers. This article addresses all of these and similar issues by describing the DB2 UDB character conversion process. The article focuses on the following products and versions: DB2 UDB for Linux, UNIX, and Windows Version 8.2, DB2 UDB for iSeries™ 5.3 and DB2 UDB for z/OS Version 8. The information may be applicable to prior versions of these products. In order to understand how the character conversion process works, you need to understand some key concepts. Figures 1, 2, and 3 provide an overview of some of these concepts. Figure 1. Character conversion key concepts -- The ASCII encoding scheme Figure 2. Character conversion key concepts -- The EBCDIC encoding scheme Figure 3. 
Character conversion key concepts -- The Unicode encoding scheme

In the figures, the solid shaded borders in each code page indicate hexadecimal numbers. A code page is a table that maps each character of an alphanumeric set to its binary representation. Figure 1 shows several code pages for different languages and platforms. In the figure, for code page 1252, the letter 'K' is represented as binary '01001011' (or '4B' in hex notation). This same character may have a different binary representation in another code page.

A code point is the location of a character within the code page. In Figure 1, the code point '4B' corresponds to the character 'K' in code page 1252.

A character set is a defined set of characters. For example, a character set can consist of the uppercase letters A through Z, the lowercase letters a through z, and the digits 0 through 9. The same character set can be repeated in several code pages. For example, in Figure 1 and Figure 2, the cells with dotted backgrounds in the different code pages represent the same character set.

Code pages can be classified as follows:
- A single-byte code page (sometimes referred to as a single-byte character set or SBCS) is a code page that can hold at most 256 (2^8) code points. The actual number of code points in an SBCS might be less. For example, for territory identifier US, codeset ISO 8859-1, locale en_US on AIX is SBCS code page 819.
- A double-byte code page (sometimes referred to as a double-byte character set or DBCS) is a code page that can hold at most 65,536 (2^16) code points. The actual number of code points in a DBCS might be less. For example, for territory identifier JP, codeset IBM-932, locale Ja_JP on AIX is DBCS code page 932.
- A composite or mixed code page contains more than one code page. For example, an Extended UNIX Code (EUC) code page can contain up to four different code pages, where the first code page is always single-byte.
For example, IBM-eucJP for Japanese (code page 954) refers to the encoding of the Japanese Industrial Standard characters according to the EUC encoding rules.

In the mainframe (z/OS, OS/390®) and iSeries (i5/OS™, OS/400®) world, the term Coded Character Set Identifier (CCSID) is used instead of code page. A CCSID is a 16-bit unsigned integer that uniquely identifies a particular code page. For example, the US-English code page is denoted by CCSID 37 on the mainframe; the German code page is CCSID 273. Some of these code pages include code points for characters specific to their language; some contain the same characters but represent them with different code points in different CCSIDs. The CCSID is based on the Character Data Representation Architecture (CDRA), an IBM architecture that defines a set of identifiers, resources, services, and conventions to achieve a consistent representation, processing, and interchange of graphic character data in heterogeneous environments. OS/400 fully supports CDRA; OS/390 supports some of the elements of CDRA.

An encoding scheme is a collection of the code pages for various languages used on a particular computing platform. Common encoding schemes are:
- American Standard Code for Information Interchange (ASCII), which is used on Intel-based platforms (like Windows) and UNIX-based platforms, such as AIX. Figure 1 showed a simplified representation of the ASCII encoding scheme.
- Extended Binary Coded Decimal Interchange Code (EBCDIC), an encoding scheme designed by IBM. It is typically used on z/OS and iSeries. Figure 2 showed a simplified representation of the EBCDIC encoding scheme.
- The Unicode character encoding standard, which provides a unique code point for every character in the world, regardless of the platform, program, or language. It contains close to 100,000 characters and is growing.
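The practical difference between these encoding schemes is easy to observe with Python's built-in codecs, which include both ASCII-based and EBCDIC code pages. (This is purely an illustration; DB2's own conversion tables are a separate mechanism.)

```python
# One character, different code points: 'K' sits at 0x4B in the
# ASCII-based Windows code page 1252, but at 0xD2 in EBCDIC CCSID 37.
assert "K".encode("cp1252") == b"\x4b"
assert "K".encode("cp037") == b"\xd2"

# One code point, different characters: 0x5A is '!' in EBCDIC CCSID 37
# but ']' in EBCDIC CCSID 500 -- the classic cause of the
# "my exclamation mark turned into a square bracket" symptom.
assert b"\x5a".decode("cp037") == "!"
assert b"\x5a".decode("cp500") == "]"
```

Mismatches like the 0x5A example occur when data tagged with one CCSID is interpreted under another, which is exactly what the conversion process described in this article is designed to prevent.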
The Unicode standard has been adopted by such industry leaders as IBM, Microsoft, and many others. Unicode is required by modern standards such as XML, Java, LDAP, and CORBA 3.0, and is the official way to implement ISO/IEC 10646. Figure 3 shows a simplified representation of the Unicode encoding scheme.

The DB2 UDB character conversion process

From the previous discussion, it should be clear that the concept of "code page" is crucial to understanding character conversions. A code page can be defined at different levels:
- At the operating system where the application runs
- At the application level, using specific statements depending on the programming language
- At the operating system where the database runs
- At the database level

Defining the code page at the operating system where the application runs

On Windows, the code page is derived from the ANSI code page setting in the Windows registry. You can review your settings from the Regional Settings control panel. Figure 4 shows the regional settings on a Windows XP machine.

Figure 4. Regional settings on a Windows XP machine

In a UNIX-based environment, the code page is derived from the locale setting. The command locale can be used to determine this value, as shown in Figure 5. The command localedef can compile a new locale file, and the LANG variable in /etc/environment can be updated with the new locale.

Figure 5. Regional settings on a UNIX machine using locale

For iSeries and z/OS, contact your system administrator.

Defining the code page at the application level

This article does not discuss application code page settings in detail, as the focus is mainly on the database side. However, it does mention some concepts that may be useful. By default, an application code page is derived from the operating system where it is running. For embedded SQL programs, the application code page is determined at precompile/bind time and at execution time.
At precompile and bind time, the code page derived at the database connection is used for precompiled statements, and any character data returned in the SQLCA. At execution time, the user application code page is established when a database connection is made, and it is in effect for the duration of the connection. All data, including dynamic SQL statements, user input data, user output data, and character fields in the SQLCA, is interpreted based on this code page. Therefore, if your program contains constant character strings, you should precompile, bind, compile, and execute the application using the same code page. For a Unicode database, you should use host variables instead of using string constants. The reason for this recommendation is that data conversions by the server can occur in both the bind and the execution phases; this could be a concern if constant character strings are used within the program. These embedded strings are converted at bind time based on the code page which is in effect during the bind phase. Seven-bit ASCII characters are common to all the code pages supported by DB2 Universal Database and will not cause a problem. For non-ASCII characters, users should ensure that the same conversion tables are used by binding and executing with the same active code page. For ODBC or CLI applications, you may be able to use different keywords in the odbc.ini file or the db2cli.ini file to adjust the application code page. For example, a Windows ODBC application can use the keyword TRANSLATEDLL to indicate the location of DB2TRANS.DLL, which contains codepage mapping tables, and the keyword TRANSLATEOPTION, which defines the codepage number of the database. The DISABLEUNICODE keyword can be used to explicitly enable or disable Unicode. By default, this keyword is not set, which means that the DB2 CLI application will connect using Unicode if the target database supports Unicode. 
If it doesn't, the DB2 CLI application will connect using the application code page. When you explicitly set DISABLEUNICODE=0, the DB2 CLI application will always connect in Unicode, whether or not the target database supports Unicode. When DISABLEUNICODE=1, the DB2 CLI application always connects in the application code page, whether or not the target database supports Unicode.

Java applications using the Universal Type 4 driver don't need a DB2 UDB client installed on the client machine. The Universal JDBC driver client sends data to the database server as Unicode, and the database server converts the data from Unicode to the supported code page. The character data that is sent from the database server to the client is converted using Java's built-in character converters, such as the sun.io.* conversion routines. The conversions that the DB2 Universal JDBC Driver supports are limited to those supported by the underlying Java Runtime Environment (JRE) implementation. For CLI and legacy JDBC drivers, code page conversion tables are used.

Application programs running on z/OS use the application encoding CCSID values specified on the DB2 UDB for z/OS installation panels. In addition, the application encoding bind option can also define the CCSID for the host variables in the program. For dynamic SQL applications, use the APPLICATION ENCODING special register to override the CCSID. It is also possible to specify the CCSID at an even more granular level by using the CCSID clause in the DECLARE VARIABLE statement. (For example: EXEC SQL DECLARE :TEST VARIABLE CCSID UNICODE;)

Defining the code page at the operating system where the database runs

The discussion in this section is exactly the same as explained above.

Defining the code page at the database level

The code page is defined differently depending on the DB2 UDB platform.
On DB2 UDB for Linux, UNIX, and Windows

A database can have only one code page, and it is set when you first create the database with the CREATE DATABASE command using the CODESET and TERRITORY clauses. For example, the following command creates the database "spaindb" with codeset 1252 and territory es (country code 34), which determine the code page for Spanish on the Windows platform. (Refer to Supported territory codes and code pages for a list of codeset and territory IDs for different countries.)

CREATE DATABASE spaindb USING CODESET 1252 TERRITORY es

After the database is created, you can review the code page settings by issuing the command get db cfg for spaindb, as shown in Figure 6.

Figure 6. Reviewing the code page of your DB2 UDB for Linux, UNIX, and Windows database

Table 1 provides descriptions of each of the fields shown in Figure 6.

Table 1. Code page database configuration parameters description
|Database territory||Determines the territory identifier for a country.|
|Database code page||Indicates the code page used to create the database.|
|Database country/region code||Indicates the territory code used to create the database.|
|Database collating sequence||Indicates the method to use to sort character data.|
|Alternate collating sequence||Specifies the collating sequence that is to be used for Unicode tables in a non-Unicode database. Until this parameter is set, Unicode tables and routines cannot be created in a non-Unicode database.|

Collating sequences are discussed in more detail in the section Other considerations. If you create the database using the default values, the code page is taken from the operating system's information. Once a database is created with a given code page, you cannot change it; instead, you must export the data, drop the database, recreate it with the correct code page, and import the data.

On DB2 UDB for iSeries

On DB2 UDB for iSeries, a code page can be specified per physical file or table within a database.
Therefore, an iSeries database can hold multiple code pages, even ASCII code pages. To specify the code page to use for a physical file or table, use either of these two approaches:
- Create a physical file and use the CCSID clause. For example, the following command creates a one-member file with CCSID 62251 using Data Description Specifications (DDS) and places it in a library called DSTPRODLB:

CRTPF FILE(DSTPRODLB/ORDHDRP) TEXT('Order header physical file') CCSID(62251)

This assumes the DDS source exists and has been correctly defined. Note that the CCSID value is only valid for source physical files FILETYPE(*SRC). iSeries tables are stored in data physical files FILETYPE(*DATA), so you must use DDS and specify the CCSID that way.
- Use the SQL CREATE TABLE statement with the CCSID clause. For example, the following SQL statement creates the table DEPARTMENT with CCSID 37 specified for two columns:

CREATE TABLE DEPARTMENT
(DEPTNO CHAR(3) CCSID 37 NOT NULL,
DEPTNAME VARCHAR(36) CCSID 37 NOT NULL,
PRENDATE DATE DEFAULT NULL)

To review the current value of the code page at the job level, display the job attributes from an OS/400 command line. If you scroll down to the third page of the display, you will see the code page settings, for example:

Language identifier . . . . . . . . . . . . . . . : ENU
Country or region identifier . . . . . . . . . . : CA
Coded character set identifier . . . . . . . . . : 37
Default coded character set identifier . . . . . : 37

If the CCSID is not specified, the code page that is used is the one specified in any of these three layers:
- Distributed Data Management (DDM)
- Job (that is, at the OS level)
- User profile (that is, at the system level)

The client sends its code page in the DDM request (DDM layer). At the OS/400 level, the CCSID is determined following this priority:
- Use the CCSID in the field definition.
- If there is no CCSID in the field definition, the file-level CCSID is used.
- If there is no file-level CCSID specified, the CCSID of the current job is used. The CCSID of the job is determined as follows: - Check the user profile and get the CCSID from there. - If the user profile does not specify a CCSID, the value is determined from the system value QCCSID. On DB2 UDB for z/OS On DB2 UDB for z/OS, the CCSID (code page) needs to be specified when you install the DB2 UDB for z/OS subsystem on panel DSNTIPF. This is shown in Listing 1. Listing 1. Application programming defaults panel: DSNTIPF DSNTIPF INSTALL DB2 - APPLICATION PROGRAMMING DEFAULTS PANEL 1 ===> _ Enter data below: 1 LANGUAGE DEFAULT ===> IBMCOB ASM,C,CPP,IBMCOB,FORTRAN,PLI 2 DECIMAL POINT IS ===> . . or , 3 STRING DELIMITER ===> DEFAULT DEFAULT, " or ' (COBOL or COB2 only) 4 SQL STRING DELIMITER ===> DEFAULT DEFAULT, " or ' 5 DIST SQL STR DELIMTR ===> ' ' or " 6 MIXED DATA ===> NO NO or YES for mixed DBCS data 7 EBCDIC CCSID ===> CCSID of SBCS or mixed data. 1-65533. 8 ASCII CCSID ===> CCSID of SBCS or mixed data. 1-65533. 9 UNICODE CCSID ===> 1208 CCSID of UNICODE UTF-8 data. 10 DEF ENCODING SCHEME ===> EBCDIC EBCDIC, ASCII, or UNICODE 11 APPLICATION ENCODING ===> EBCDIC EBCDIC, ASCII, UNICODE, cssid (1-65533) 12 LOCALE LC_CTYPE ===> PRESS: ENTER to continue RETURN to exit HELP for more information Table 2 provides descriptions of each of the relevant fields shown in Listing 1. Table 2. Code page parameters descriptions |MIXED DATA||Indicates whether the EBCDIC CCSID and ASCII CCSID fields contain mixed data or not.| |EBCDIC CCSID||Indicates the default CCSID for EBCDIC encoded data.| |ASCII CCSID||Indicates the default CCSID for ASCII-encoded character data.| |UNICODE CCSID||Indicates the default CCSID for Unicode. 
DB2 UDB for z/OS currently only supports CCSID 1208 for Unicode.|
|DEF ENCODING SCHEME||Indicates the default format in which to store data in DB2.|
|APPLICATION ENCODING||Indicates the system default application encoding scheme that affects how DB2 UDB for z/OS interprets data coming into DB2.|
|LOCALE LC_CTYPE||Specifies the system LOCALE LC_CTYPE. A locale is the part of your system environment that depends on language and cultural conventions. An LC_CTYPE is a subset of a locale that applies to character functions. For example, specify En_US for English in the United States or Fr_CA for French in Canada.|

If the language only uses a single-byte CCSID, the mixed and double-byte CCSIDs in the CCSID set default to the reserved CCSID 65534. Because of the complexity and large number of characters in some languages, such as Chinese and Japanese, those languages use double-byte and mixed character sets. All the single-byte and mixed CCSIDs are stored in the macro called DSNHDECP in the DB2 UDB for z/OS subsystem parameter job. In addition, the DB2 UDB for z/OS catalog table SYSIBM.SYSSTRINGS has to point to a conversion table for all required code page conversions.

DB2 UDB for z/OS uses the ASCII CCSID value to perform conversion of character data that is received from ASCII external sources, including other databases. You must specify a value for the ASCII CCSID field, even if you do not have or plan to create ASCII-encoded objects. To store data in ASCII format in a table, you can use the CREATE statement with the CCSID ASCII clause at the table, table space, or database level. For example, at the table level, use the CREATE TABLE statement as follows:

CREATE TABLE T1 (C1 int CCSID ASCII, C2 char(10) CCSID ASCII)

The CCSID ASCII value from the above CREATE TABLE statement is taken from panel DSNTIPF.
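The Unicode CCSIDs from the panel fields above, 1208 (UTF-8) and 1200 (UTF-16/UCS-2), are two byte-level encodings of the same code points. A quick sketch of the difference, with Python used purely as an illustration:

```python
# 'é' (U+00E9) takes two bytes in both forms, but with different values.
assert "é".encode("utf-8") == b"\xc3\xa9"      # UTF-8, CCSID 1208
assert "é".encode("utf-16-be") == b"\x00\xe9"  # UTF-16 big-endian, CCSID 1200

# Seven-bit ASCII characters pass through UTF-8 unchanged...
assert "A".encode("utf-8") == b"A"

# ...while a CJK character costs 3 bytes in UTF-8 but 2 in UTF-16.
assert len("語".encode("utf-8")) == 3
assert len("語".encode("utf-16-be")) == 2
```

This is why UTF-8 is a natural fit for character data columns (ASCII-heavy content stays compact) while UTF-16 is used for graphic data columns.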
The following statements use the default encoding scheme specified in panel DSNTIPF:
- CREATE DATABASE
- CREATE DISTINCT TYPE
- CREATE FUNCTION
- CREATE GLOBAL TEMPORARY TABLE
- DECLARE GLOBAL TEMPORARY TABLE
- CREATE TABLESPACE (in the DSNDB04 database)

If the CCSID values of the DSNTIPF panel fields are not correct, character conversion produces incorrect results. The correct CCSID identifies the coded character set that is supported by your site's I/O devices, local applications such as IMS and QMF, and remote applications such as CICS Transaction Server. Never change the CCSIDs of an existing DB2 UDB for z/OS system without specific guidance from IBM DB2 UDB Technical Support; otherwise you may corrupt your data!

Character conversion scenarios

The previous sections showed how the code page value can be determined and changed for an application or database on different platforms. This section describes the character conversion process using two generic scenarios. (It assumes the code page values for the application and the database have already been established.) The fundamental rule of the conversion process is that the receiving system always performs the code page conversion.

Scenario 1: Client to DB2 UDB server conversion

This generic scenario, shown in Figure 7, represents such cases as:
- Application running on a Linux, UNIX, or Windows client to DB2 UDB for Linux, UNIX, and Windows server
- Application running on an iSeries client to DB2 UDB for Linux, UNIX, and Windows server
- Application running on an iSeries client to DB2 UDB for iSeries server
- Application running on an iSeries client to DB2 UDB for z/OS server
- Application running on a z/OS client to DB2 UDB for Linux, UNIX, and Windows server
- Application running on a z/OS client to DB2 UDB for iSeries server
- Application running on a z/OS client to DB2 UDB for z/OS server

Figure 7.
Client to DB2 UDB server conversion

Scenario 2: Client to DB2 Connect Gateway to DB2 UDB server conversion

This generic scenario, shown in Figure 8, represents such cases as:
- Application running on a Linux, UNIX, or Windows client to DB2 UDB for iSeries
- Application running on a Linux, UNIX, or Windows client to DB2 UDB for z/OS

Figure 8. Client to DB2 Connect Gateway to DB2 UDB server conversion

In Figures 7 and 8, when the operating system where the application runs is Linux, UNIX, or Windows, a DB2 UDB for Linux, UNIX, and Windows client may need to be installed. If the application is written in Java using the JDBC Type 4 driver, a DB2 UDB for Linux, UNIX, and Windows client is not required.

In both generic scenarios, no code page conversion happens if the code pages are the same on all the systems. This is unlikely when Linux, UNIX, or Windows applications (which use the ASCII encoding scheme) access DB2 UDB for iSeries or z/OS data (which use the EBCDIC encoding scheme), unless Unicode is used on all of these systems.

Character conversion example

The following example illustrates the character conversion process. Let's say you have the following configuration:
- An ODBC application is running on a Windows machine, and by default the application is using the operating system's code page, which for this example is 1252 (Windows, English).
- The AIX server where DB2 Connect runs is set to use Unicode.
- The iSeries server uses code page 65535 and has a database with table DEPARTMENT defined as:

CREATE TABLE DEPARTMENT
(DEPTNO CHAR(3) NOT NULL,
DEPTNAME VARCHAR(36) CCSID 37 NOT NULL,
PRENDATE DATE DEFAULT NULL)

The columns DEPTNO and PRENDATE will use the iSeries code page of 65535 as the default. When the Windows application sends a request such as:

SELECT DEPTNO FROM DEPARTMENT

The following conversion occurs:
- The Windows application sends a request to the DB2 Connect server with code page 1252.
- The DB2 Connect server converts it to code page 1208 (Unicode), then sends it to the iSeries server.
- The iSeries server converts it to CCSID 65535 and accesses the data in the DEPARTMENT table.
- Since the data obtained from the table is in CCSID 65535, it is sent to the requester (in this case the DB2 Connect server) in that code page.
- The DB2 Connect server converts the data to code page 1208, then sends it to the Windows application.
- The Windows operating system converts code page 1208 back to 1252.

Enforced subset conversion

During code page conversion, a character in the source code page X might not exist in the target code page Y. For example, suppose a multinational company stores information in both Japanese and German. A Japanese application inserts data into a DB2 UDB database that was created using the German code page. Many of the Japanese characters will not have a code point in the CCSID used by DB2. One way past this problem is to map only those characters from the source CCSID that have a corresponding character in the target CCSID. Characters that do not map are substituted by a reserved code point. (Every code page reserves at least one code point for substitution.) Characters that cannot be mapped to the target code page are lost forever. This approach is called enforced subset conversion.

Another approach is called round-trip conversion. A round-trip conversion between two CCSIDs ensures that all characters making the "round trip" arrive as they were originally: code points converted from CCSID X to CCSID Y and back to CCSID X are preserved, even if CCSID Y is not capable of representing those code points. This is implemented by using conversion tables.
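Enforced subset conversion can be imitated with Python's codec machinery, where each unmappable character is replaced by a substitution character (here '?', code point 0x3F in ASCII-based code pages). This is an illustration only, not DB2's actual conversion tables:

```python
# Japanese text has no representation in a Latin-1-based code page such
# as Windows 1252; an enforced subset conversion substitutes every
# unmappable character with the code page's substitution code point.
text = "Grüße こんにちは"
converted = text.encode("cp1252", errors="replace").decode("cp1252")
assert converted == "Grüße ?????"  # the German survives, the Japanese is lost

# Once substituted, the original characters are gone forever; converting
# the result back to a Japanese-capable encoding cannot recover them.
assert "こ" not in converted
```

A round-trip conversion avoids this loss by reserving code points in CCSID Y for characters it cannot display, so the X-to-Y-to-X trip is lossless.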
Using other DB2 UDB for Linux, UNIX, and Windows conversion code page tables

When you want to use a different version of the conversion tables, such as the Microsoft version, you must manually replace the default conversion table (.cnv) files, which reside in the .../sqllib/conv directory on the UNIX and Linux platforms or ...\sqllib\conv on Windows. These are the external code page conversion tables used to translate values between various code pages. Before replacing the existing conversion table files in the sqllib/conv directory, you should back up the files.

DB2 UDB Unicode support

On all platforms, DB2 UDB supports the International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) standard 10646 Universal 2-Octet Coded Character Set (UCS-2). UCS-2 is implemented with the Unicode Transformation Format, 8-bit encoding form (UTF-8). UTF-8 is designed for ease of use with existing ASCII-based systems. The code page/CCSID value for data in UTF-8 format is 1208; the CCSID value for UCS-2 is 1200. UTF-8 was chosen as the default format for character data columns, with UTF-16 for graphic data columns.

A DB2 UDB for Linux, UNIX, and Windows database created using default values will create tables in ASCII. To create a Unicode database, use the CREATE DATABASE command with the CODESET and TERRITORY clauses, as follows:

CREATE DATABASE unidb USING CODESET UTF-8 TERRITORY US

The tables in this Unicode database will default to code page 1208. You cannot define a table with an ASCII code page in a Unicode database. The opposite, however, is possible; that is, you can create a Unicode table in a non-Unicode database by invoking the CREATE TABLE statement with the CCSID UNICODE clause. For example:

CREATE TABLE unitbl (col1 char(10), col3 int) CCSID UNICODE

For this to work, you first need to activate the database configuration parameter alt_collate. Once set, this parameter cannot be changed or reset.
UPDATE DB CFG FOR nonunidb USING ALT_COLLATE IDENTITY_16BIT

In DB2 UDB for iSeries, the CCSID clause can be used on individual columns. For example, the following SQL statement creates the table U_TABLE, which contains one character column called EMPNO and two Unicode graphic columns. NAME is a fixed-length Unicode graphic column and DESCRIPTION is a variable-length Unicode graphic column. The EMPNO field contains only numerics, so Unicode support is not needed there. The NAME and DESCRIPTION fields are both Unicode fields, and may contain data from more than one EBCDIC code page.

CREATE TABLE U_TABLE
(EMPNO CHAR(6) NOT NULL,
NAME GRAPHIC(30) CCSID 1200,
DESCRIPTION VARGRAPHIC(500) CCSID 1200)

Refer to Supported CCSID mappings for a list of valid CCSID values on iSeries.

Similar to DB2 UDB for Linux, UNIX, and Windows, in DB2 UDB for z/OS you can store and retrieve Unicode data if you have used the CCSID UNICODE clause on the object definitions, as in the following CREATE TABLE statement:

CREATE TABLE DBTBDWR.WBMTEBCD
(CUSTNO CHAR(8),
CUSTBU CHAR(6),
CUSTEXT CHAR(3),
CNAME VARCHAR(80) FOR MIXED DATA)
IN TEST.CUSTTS CCSID UNICODE

DB2 UDB for z/OS performs character conversions using LE's (Language Environment's) ICONV, unless z/OS Unicode Conversion Services have been installed. To learn how to set up z/OS Unicode Conversion Services for DB2, review informational APARs II13048 and II13049. To review the conversions that have been installed, use the command /d uni,all from the console, as shown in Figure 9.

Figure 9. Reviewing installed character conversions on z/OS

For example, Figure 9 shows that there is a conversion from 1252 to 1208, and from 1208 to 1252 (that is, from Windows English to Unicode and vice versa). Using the Unicode encoding scheme on all systems avoids character conversion and improves performance.

Other considerations

A collating sequence is an ordering for a set of characters that determines whether each character sorts higher, lower, or the same as another.
The collating sequence maps each code point to the desired position of the character in a sorted sequence. For example, the collating sequence in ASCII is: space, numeric values, uppercase characters, lowercase characters. The collating sequence in EBCDIC, on the other hand, is: space, lowercase characters, uppercase characters, numeric values. An application designed to work against an ASCII database may run into problems if used against an EBCDIC database because of this difference. You can also create custom collating sequences; for details, refer to the Application Development Guide: Programming Client Applications manual (see Related topics).

Special considerations for federated systems

Federated systems do not support certain data mappings. For example, DB2 UDB federated servers do not support mappings for the LONG VARCHAR data type. As a result, the scenarios discussed may not work. Review the Federated Systems Guide (see Related topics) for more details.

Moving data with different code pages

You cannot back up a database with a given code page and restore it into another database with a different code page. On DB2 UDB for Linux, UNIX, and Windows, you should instead use the export or db2move utility, create the new database with the desired code page, and import or db2move the data back. When you use this method, DB2 UDB performs the character conversion correctly.

Dealing with binary data

Columns defined with the BLOB data type or with the FOR BIT DATA clause are passed as binary from the source to the target, and the code page used is zero. This indicates that no code page conversion is to happen.

Character conversion problem determination and problem source identification

When you encounter problems with character conversions, first identify which code pages are being used by the application and by the DB2 UDB database server involved.
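The ASCII-versus-EBCDIC ordering difference described earlier can be simulated by sorting on the encoded bytes. Here Python's cp037 codec stands in for an EBCDIC (CCSID 37) collating sequence, purely as an illustration:

```python
values = ["a", "A", "1"]

# ASCII-based collating: digits sort before uppercase, uppercase before lowercase.
assert sorted(values) == ["1", "A", "a"]

# EBCDIC collating: lowercase ('a' = 0x81) sorts before uppercase
# ('A' = 0xC1), and both sort before digits ('1' = 0xF1).
assert sorted(values, key=lambda s: s.encode("cp037")) == ["a", "A", "1"]
```

An ORDER BY that behaves one way against an ASCII database can therefore return rows in a different order against an EBCDIC database holding the same data.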
Identifying code pages in the DB2 UDB environments

In addition to the methods discussed in previous sections for determining the code page value for the operating system or the DB2 UDB database server, the following methods show you the code page at both the target and the source. The discussion in this section is geared towards DB2 UDB for Linux, UNIX, and Windows.

Using the CLP with the "-a" option to display the SQLCA

When you use the "-a" option of the CLP to display the SQLCA information, the code pages of both the source application (in this case the CLP) and the target database are shown. Figure 10 shows an example of connecting from the CLP on Windows to a database also on Windows.

Figure 10. Displaying the SQLCA information to review code page values

In Figure 10, note the line that reads:

sqlerrmc: 1 1252 ARFCHONG SAMPLE QDB2/NT 557 557 0 1252 1

The first instance of "1252" indicates the code page used at the target, which is Windows US-English. The second instance indicates the code page used at the source, which is also Windows US-English.

Figure 11 shows another example, this time connecting from a Windows CLP (where DB2 Connect is installed) to a DB2 UDB for z/OS target database.

Figure 11. Displaying the SQLCA information to review code page values -- another example

In Figure 11, note the line that reads:

sqlerrmc: 1208 TS56692 HOSTV8 QDB2 1252

The 1208 indicates that the target DB2 UDB for z/OS subsystem is using Unicode. The 1252 indicates that the source CLP application is using Windows US-English.

Using a CLI trace for CLI, ODBC, or JDBC Type 2

To identify the code page for a CLI or JDBC Type 2 application, you can use the trace facilities offered by DB2 CLI on Linux, UNIX, and Windows. By default, this trace facility is disabled and uses no additional computing resources. When enabled, text log files are generated whenever an application accesses the CLI driver.
You can turn on the CLI trace by adding the following DB2 CLI keywords to the db2cli.ini file:

[COMMON]
trace=1
TraceFileName=\temp\clitrace.txt
TraceFlush=1

where \temp\clitrace.txt is an arbitrary name for the directory and trace file where the traces will be stored. You can find the application and database code pages from the SQLConnect() or SQLDriverConnectW() calls in the trace output file. For example, the application and the database server are using the same code page (1252) in the following messages:

(Application Codepage=1252, Database Codepage=1252, Char Send/Recv Codepage=1252, Graphic Send/Recv Codepage=1200, Application Char Codepage=1252, Application Graphic Codepage=1200

Verifying that the conversion tables or definitions are available

In most cases of conversion problems, the conversion tables or definitions between the source and the target code page have not been defined. The conversion tables in DB2 UDB for Linux, UNIX, and Windows are stored under the sqllib/conv directory, and they normally handle most conversion scenarios. On iSeries, the IBM-supplied tables can be found in the QUSRSYS library. You can also create your own conversion tables using the Create Table (CRTTBL) command. The Globalization topic in the iSeries Information Center includes a list of the conversion tables. (See Related topics.) You can also run the following query to see a list of character set names:

SELECT character_set_name FROM sysibm.syscharsets

On DB2 UDB for z/OS, as shown in Figure 9, you may need to issue the /d uni,all command to display the conversions that have been installed. If the output of this command does not list a conversion you need, say from 1252 to 1208, you should add a conversion entry to the Unicode Conversion Services (see the sample JCL hlq.SCUNJCL(CUNJIUTL)), such as:

CONVERSION 1252,1208,ER

You should also verify in the DB2 UDB for z/OS catalog table SYSIBM.SYSSTRINGS that an entry for the given conversion is present.
For example, to view the list of entries in this table, issue this query:

SELECT inccsid, outccsid FROM sysibm.sysstrings

If these steps do not lead you to a solution, contact DB2 UDB Technical Support. You may be asked to collect a DB2 trace and to format it with the fmt -c option (formerly known as the DDCS trace). This trace will show what is being passed and received between the source and the target.

This article provided you with an overview of the character conversion process that occurs within and between DB2 UDB databases. It first explained key concepts such as code pages, code points, and encoding schemes, and indicated how to review their current values and how to update them. Next it provided generic scenarios to show the character conversion process. It explained some special considerations during conversions, and provided a two-step method to determine the cause of conversion problems.

Ideally, you should try to avoid character conversions to improve performance by using the same code pages between the source and the target of the conversion. But if your database scenario is complex and you cannot avoid character conversion, this knowledge will help you make the process as smooth as possible.

- The manual Application Development Guide: Programming Client Applications describes how to create a custom collating sequence for a code page.
- The DB2 UDB for Linux, UNIX, and Windows Version 8 Information Center includes more information about character encoding issues.
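As an aside, the SQLCA method shown in Figures 10 and 11 lends itself to scripting. The helper below is a hypothetical convenience, not an official DB2 tool; it assumes the whitespace-separated sqlerrmc display format produced by the CLP's "-a" option, and uses the simple heuristic that the first and last tokens of three or more digits are the target and source code pages.

```python
def codepages_from_sqlerrmc(sqlerrmc):
    """Extract (target_codepage, source_codepage) from a displayed sqlerrmc string.

    Heuristic sketch: in the CLP "-a" display, the first and last tokens
    consisting of three or more digits correspond to the target and source
    code pages, as in the two examples shown earlier. Returns None if no
    such tokens are found.
    """
    pages = [int(t) for t in sqlerrmc.split() if t.isdigit() and len(t) >= 3]
    if not pages:
        return None
    return pages[0], pages[-1]

# Example from Figure 11: target 1208 (UTF-8), source 1252 (Windows US-English)
print(codepages_from_sqlerrmc("1208 TS56692 HOSTV8 QDB2 1252"))
```

Run against the Figure 10 string, it reports (1252, 1252), matching the same-code-page scenario described in the text.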
News Article | February 12, 2016

Space enthusiasts can watch in real time as Japan gears up to launch its sixth satellite into space this month. The Japan Aerospace Exploration Agency (JAXA) will be blasting its Astro-H spacecraft from the Tanegashima Space Center in Kagoshima to study the history of galaxy clusters and the presence of black holes.

The spacecraft was originally set to launch atop an H-IIA rocket at 3:45 a.m. EST (0845 GMT) on Friday, but the launch has been postponed due to weather issues. According to JAXA, a freezing layer of clouds, which exceeds the restrictions for a suitable launch, is expected to appear during the scheduled launch time. Strong winds were also expected to hinder preparations.

Japan is partnering up with NASA for the ASTRO-H mission. NASA's Goddard Space Flight Center provided two telescope mirrors and one scientific instrument for the spacecraft. Astro-H is an X-ray observatory dedicated to examining energetic events across the universe, such as the evolution of galaxy clusters and powerful supernova explosions.

The Astro-H observatory will be 10 times more sensitive to X-ray light than the Suzaku spacecraft, its predecessor which had operated for a decade. Astro-H will examine high-energy light through four advanced instruments and four co-aligned focusing telescopes. Robert Petre, NASA's Astro-H project scientist, said they see X-rays from sources throughout the universe where particles reach significantly high energies. "These energies arise in a variety of settings," said Petre. This includes extreme magnetic fields, stellar explosions, or regions with strong gravity. "X-rays let us probe aspects of these phenomena that are inaccessible by instruments observing at other wavelengths," added Petre.

Astro-H has two identical soft X-ray telescopes with mirror assemblies provided by Goddard.
As X-rays can penetrate matter, the mirrors perform a special function that astrophysicists call "grazing incidence optics." Like skipping a stone on water, X-ray light skimming the surface of curved mirror segments is deflected toward the telescope's focal point. The first soft X-ray telescope provided by the Goddard team focuses X-ray light onto an advanced wide-field camera. The second soft X-ray telescope sends light into a soft X-ray spectrometer (SXS). "The technology used in the SXS is leading the way to the next generation of imaging X-ray spectrometers," said Caroline Kilbourne of Goddard's SXS team. It will be able to identify tens of thousands of X-ray colors while simultaneously capturing sharp images.

The Astro-H observatory is a collaborative project of JAXA, NASA, the European Space Agency (ESA), the Canadian Space Agency (CSA), and Yale University experts. The date of the launch will be announced as soon as it is determined.

News Article | March 30, 2016

Somewhere in the vastness of outer space, a young X-ray astronomy satellite is floating ceaselessly — alone. Only a few weeks after its launch, Japan's ASTRO-H or Hitomi satellite went missing on March 26. The Japan Aerospace Exploration Agency (JAXA) confirmed on Saturday that it has indeed lost contact with the satellite.

Hitomi was gone too soon. What could have happened? Could JAXA eventually regain contact with Hitomi? There are many possibilities: it could either still be whole or it could have already been broken into space debris.

Hitomi was blasted into space on Feb. 17 to study gamma rays and X-rays, and observe galaxy clusters and black holes. It was supposed to go online on March 26 at 03:40 a.m. ET. When the appointed time passed, Hitomi did not clock in. It may or may not be a coincidence, but 40 minutes after the appointed time, the United States Joint Space Operations Center caught signals for five space objects orbiting near Hitomi.
Are these smaller pieces of the satellite? Or are these merely asteroid pebbles? Astrophysicist Jonathan McDowell said that, should the debris belong to Hitomi, they could be minor bits blowing off the satellite. It may not mean complete destruction.

In a stunning turn of events, JAXA reported on Monday, March 28, that it had picked up fleeting transmissions from Hitomi. What's more, data for the satellite itself showed a sudden change of course. Getting an empirical answer will be the tricky part. Moriba Jah of the University of Arizona, Tucson, said there is not enough data collection or data sharing to immediately assess what caused Hitomi's lost transmission.

But all hope is not yet lost. JAXA said it is working toward recovery of the space probe, and Jah said that the space agency is skilled at that. "The interesting thing about the Japanese is they tend to be very good at resurrecting things that would otherwise be dead," said Jah, who is director of Space Object Behavioral Science at the university. If there was indeed a collision, they could trace the trajectories back to when the objects were at a minimum distance from each other. "That's probably the point at which their trajectories become one again. That could give an idea of when the collision actually occurred," added Jah.

On 12 January 2016, the Japan Aerospace Exploration Agency (JAXA) presented their ASTRO-H satellite to the media at the Tanegashima Space Center, situated on a small island in the south of Japan. The satellite, developed with institutions in Japan, the US, Canada and Europe, is now ready to be mounted on an H-IIA rocket for launch on 12 February. ASTRO-H is a new-generation satellite, designed to study some of the most powerful phenomena in the Universe by probing the sky in the X-ray and gamma-ray portions of the electromagnetic spectrum.
Scientists will investigate extreme cosmic environments ranging from supernova explosions to supermassive black holes at the centres of distant galaxies, and the hot plasma permeating huge clusters of galaxies. ESA contributed to ASTRO-H by partly funding various elements of the four science instruments, by providing three European scientists to serve as science advisors, and by contributing one scientist to the team in Japan. In return for ESA's contribution, European scientists will have access to the mission's data.

Traditionally, Japan's astronomy satellites receive a provisional name consisting of the word 'ASTRO' followed by a letter of the Latin alphabet – in this case H, because it is the eighth project in JAXA's astronomical series. JAXA will announce the new name after launch.

ASTRO-H is a new-generation satellite for high-energy astrophysics, developed by the Japan Aerospace Exploration Agency (JAXA) in collaboration with institutions in Japan, the US, Canada, and Europe. Its four instruments span the energy range 0.3-600 keV, including soft X-rays, hard X-rays and soft gamma rays. ESA's contribution consists of funding the procurement of a number of items on the various instruments, three European scientists who will serve as advisors to the mission's core science programme, and one full-time scientist based at the Institute of Space and Astronautical Science (ISAS), Japan, to support in-flight calibration, science software testing and data analysis. Support to European users will be provided by scientists at ESA's European Space Astronomy Centre in Madrid, Spain, and at the European Science Support Centre at the ISDC Data Centre for Astrophysics, University of Geneva, Switzerland.

A new X-ray telescope run by the Japan Aerospace Agency has gone silent a little more than a month after its launch.
JAXA reported online March 27 that the telescope, ASTRO-H (aka Hitomi), stopped communicating with Earth. U.S. Strategic Command’s Joint Space Operations Center also reported seeing five pieces of debris alongside the satellite on March 26. Attempts to figure out what went wrong with the spacecraft, which launched February 17, have not been successful. Up until now though, ASTRO-H seemed to be functioning. In late February, mission operators successfully switched on the spacecraft’s cooling system and tested some of its instruments. ASTRO-H carries four instruments to study cosmic X-rays over an energy range from 0.3 to 600 kiloelectron volts. By studying X-rays, astronomers hope to learn more about some of the more feisty denizens of the universe such as exploding stars, gorging black holes, and dark matter swirling around within galaxy clusters. Earth’s atmosphere absorbs X-rays, so the only way to see them is to put a telescope in space. The project, led by the Japan Aerospace Exploration Agency (JAXA), aims to collect a wealth of new data on everything from the formation of galaxy clusters to the warping of space and time around black holes. ASTRO-H will launch Feb. 12 from the Tanegashima Space Center, with participation from NASA, the European Space Agency (ESA), and research institutions around the world. "This is the next, big X-ray observatory," said Andrew Szymkowiak, a Yale senior research scientist in astronomy and physics who is part of the ASTRO-H mission. "We're going to clean up on new information about galaxy clusters and supernova remnants." Many objects in deep space—including black holes, neutron stars, and galaxy clusters—emit X-rays as well as visible light; however, those X-rays have wavelengths that are 1,000 to 100,000 times shorter than visible light. The best way to study X-rays from deep space is to use an orbiting telescope, because Earth's atmosphere blocks X-rays from reaching land-based telescopes. 
ASTRO-H will maintain orbit near the equator and gather data for three years. It will be outfitted with an array of innovative technologies, including four telescopes, a soft X-ray spectrometer (SXS), a soft X-ray imaging system (SXI), a hard X-ray imaging system (HXI), and a soft gamma-ray detector (SGD). Meg Urry, Yale's Israel Munson Professor of Physics and Astronomy, and Paolo Coppi, professor of astronomy and physics, are members of the ASTRO-H scientific working group. They will be among the first scientists to get a look at the data collected by ASTRO-H. "This will be a powerful observatory," Urry said. "We're using novel technology to learn about objects that are very far away, in more detail than ever before." In particular, Urry noted, the gear aboard ASTRO-H boasts better energy resolution by a factor of 30 and a sensitivity level that is orders of magnitude better than previous technology. Data from ASTRO-H will aid Urry's ongoing research into the formation and evolution of black holes and their host galaxies, and Coppi's work exploring deep space objects that are surrounded by dense gasses. For Szymkowiak, the mission represents the culmination of a long-term personal commitment, as well. In 1983, while working for NASA's Goddard Space Flight Center, he was part of a team that developed a new way to build an X-ray spectrometer with the potential to collect information on a wider expanse of diffuse objects in deep space. The idea was to test the instrument on a rocket being readied by the Japanese space program. But getting the experiment launched into orbit in working order proved challenging, with several failed attempts. There was one version of the experiment that was launched but didn't make it into orbit; another attempt achieved orbit but stopped working after 17 days. Szymkowiak's instrument, the SXS, now will be the central piece of technology aboard ASTRO-H. 
Here's how it works: refrigeration units will cool specialized detector elements to near absolute zero. When X-rays emitted by objects in deep space are absorbed by the detector elements, they will increase in temperature. Scientists will use that temperature rise to measure the energy of the X-ray. The SXS is expected to generate the most accurate X-ray measurements of any instrument to date. "Our team has been working on this experiment for 30 years," Szymkowiak said. "While we've really enjoyed working with our Japanese colleagues, during the many weeks of instrument integration, testing, and launch rehearsals, it is going to be so rewarding to finally get to reap the scientific rewards." The principal investigator for ASTRO-H is Tadayuki Takahashi of JAXA and the University of Tokyo. The lead investigators for the United States are at the Goddard Space Flight Center. The scheduled launch date for ASTRO-H is Feb. 12, with an extended launch window until Feb. 29. ASTRO-H is the eighth JAXA satellite dedicated to astronomy and astrophysics. As with other JAXA missions, it will be renamed after its launch.
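The temperature-to-energy relationship behind the SXS described above is simple to sketch: an absorbed photon of energy E raises the detector element's temperature by roughly dT = E / C, where C is the element's heat capacity. The numbers below are hypothetical stand-ins chosen only to illustrate the millikelvin scale involved; they are not the instrument's real parameters.

```python
# Illustrative microcalorimeter arithmetic (hypothetical values, not SXS specs).
KEV_TO_JOULE = 1.602176634e-16   # 1 keV expressed in joules
HEAT_CAPACITY_J_PER_K = 1e-12    # hypothetical detector heat capacity at ~0.05 K

def temperature_rise_kelvin(photon_energy_kev, heat_capacity=HEAT_CAPACITY_J_PER_K):
    """Temperature rise dT = E / C for one absorbed X-ray photon."""
    return photon_energy_kev * KEV_TO_JOULE / heat_capacity

# A hypothetical 6 keV X-ray photon produces a rise of roughly a millikelvin,
# which is why the detector must sit near absolute zero to resolve it.
print(temperature_rise_kelvin(6.0))
```

Because the rise is so small relative to the operating temperature, measuring it precisely is what yields the sharp energy resolution the article describes.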
Fann N.,U.S. Environmental Protection Agency | Hollingsworth J.W.,Environmental Health Policy Committee of the American Thoracic Society | Pinkerton K.E.,Environmental Health Policy Committee of the American Thoracic Society | Rom W.N.,Environmental Health Policy Committee of the American Thoracic Society | And 2 more authors. Environmental Health Perspectives | Year: 2012 Background: Exposure to ozone has been associated with adverse health effects, including premature mortality and cardiopulmonary and respiratory morbidity. In 2008, the U.S. Environmental Protection Agency (EPA) lowered the primary (health-based) National Ambient Air Quality Standard (NAAQS) for ozone to 75 ppb, expressed as the fourth-highest daily maximum 8-hr average over a 24-hr period. Based on recent monitoring data, U.S. ozone levels still exceed this standard in numerous locations, resulting in avoidable adverse health consequences. Objectives: We sought to quantify the potential human health benefits from achieving the current primary NAAQS standard of 75 ppb and two alternative standard levels, 70 and 60 ppb, which represent the range recommended by the U.S. EPA Clean Air Scientific Advisory Committee (CASAC). Methods: We applied health impact assessment methodology to estimate numbers of deaths and other adverse health outcomes that would have been avoided during 2005, 2006, and 2007 if the current (or lower) NAAQS ozone standards had been met. Estimated reductions in ozone concentrations were interpolated according to geographic area and year, and concentration-response functions were obtained or derived from the epidemiological literature. Results: We estimated that annual numbers of avoided ozone-related premature deaths would have ranged from 1,410 to 2,480 at 75 ppb to 2,450 to 4,130 at 70 ppb, and 5,210 to 7,990 at 60 ppb. 
Acute respiratory symptoms would have been reduced by 3 million cases and school-loss days by 1 million cases annually if the current 75-ppb standard had been attained. Substantially greater health benefits would have resulted if the CASAC-recommended range of standards (70-60 ppb) had been met. Conclusions: Attaining a more stringent primary ozone standard would significantly reduce ozone-related premature mortality and morbidity.
Modern medicine can cure malaria in almost all cases. Still, 2,200 people die from malaria every day; young children in sub-Saharan Africa account for most of the disease's victims. Most of the deaths can be blamed on a lack of access to effective antimalarial drugs. Many health facilities in the developing world particularly those in remote rural communities in poor countries have trouble maintaining adequate supplies of effective antimalarials. Novartis International saw an opportunity to help by leveraging technology to improve the availability of these life-saving drugs. The pharmaceutical giant embarked on a project that focused on eliminating stockouts and increasing access to malaria medicines for hundreds of millions of people in rural areas. The result of its efforts, SMS for Life, has helped reduce the number of deaths from this disease throughout Tanzania. The SMS for Life system consists of an SMS management tool and a Web-based reporting tool. The SMS application stores a single registered mobile telephone number for one healthcare worker at each health facility. Once a week, the system automatically sends an SMS message to each of those telephone numbers and asks for a report of the current stock of antimalarial drugs at the facility. Each healthcare worker sends a message back to report inventory levels, using a short code number so that the message is sent free of charge. A standard message format is used to capture stock quantities, with formatting errors handled through follow-up automated SMS messages to a facility. Using the Web-based reporting tool, the data captured from the SMS stock-count messages are collected and stored centrally on a secure website that requires a unique user ID and password for access. The website provides current and historical data on stock levels of antimalarial medicines and malaria rapid diagnostic tests at the health facility and district levels. 
It also incorporates Google mapping of district health facilities, with stock level overlays and stockout alerts, SMS messaging statistics and usage statistics. Statistical tools can be used to provide early warnings of malaria outbreaks. SMS for Life's effectiveness has been impressive. It has given healthcare workers visibility into antimalarial stock levels and has led to more efficient stock management. The program, which was piloted in 2010, is being rolled out across Tanzania and is moving to other African countries, where it is expected to save hundreds of lives every day. Novartis donated the technologies and resources for the design and development of the SMS for Life solution and the pilot implementation. The service is now offered on a commercial basis by Vodafone. Pricing is set to be commercially sustainable but affordable, even for developing countries. Read more about the 2012 Computerworld Honors Laureates.
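To make the weekly reporting flow concrete, here is a minimal sketch of how a stock-report SMS could be parsed on the server side. The message format used here (alternating "drug-code quantity" pairs) is a hypothetical illustration; the article does not specify SMS for Life's actual message format, and a malformed message would trigger the follow-up reminder SMS the article mentions.

```python
def parse_stock_report(message):
    """Parse a hypothetical stock-report SMS like "ALU6 120 ALU12 40".

    Returns a dict mapping drug code -> quantity on hand. Raises ValueError
    for a malformed message, which in the real system would prompt an
    automated follow-up SMS asking the health worker to resend.
    """
    tokens = message.split()
    if not tokens or len(tokens) % 2 != 0:
        raise ValueError("malformed report: expected code/quantity pairs")
    report = {}
    for code, qty in zip(tokens[::2], tokens[1::2]):
        if not qty.isdigit():
            raise ValueError(f"non-numeric quantity for {code}")
        report[code] = int(qty)
    return report

# A stockout alert is then just a check for zero quantities:
def stockouts(report):
    return [code for code, qty in report.items() if qty == 0]
```

For example, `parse_stock_report("ALU6 120 ALU12 0")` would flag `ALU12` as a stockout to display on the district dashboard.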
Reputation attacks target both individuals and companies, and their goal is to ruin the victim's reputation. While attack techniques are varied, the consequences are often the same: a damaged reputation, resulting in many cases in financial loss.

Attackers can use several methods to ruin a company's reputation. Until now, the most common attacks have been based on distributed denial of service (DDoS). The objective of these attacks is to flood corporate online services with millions of illegitimate requests from botnets. In this way, business performance is affected, causing direct financial losses and the corresponding damage to corporate image and reputation.

Corporate websites are also the target of "defacement" attacks. These consist of exploiting a server or Web application vulnerability to modify pages or introduce other content into the pages served by the corporate web server. When users and potential customers visit a corporate web page and find it has been modified by a third party, their confidence in the company is seriously affected.

Another method used by hackers that has proven successful is publishing false information on forums and blogs. Seemingly genuine news items, quotes included (false, of course), strategically distributed on several online sites can spread like wildfire and achieve their goal: to convince a large number of users that the information is true. Many urban legends that are still popular today were originally created in a similar way, and have managed to affect highly prestigious multinational companies.

In a similar vein, there have also been false rumors aimed at manipulating stock market prices. First, attackers send true stock market information as spam to potentially interested parties. After several messages, once attackers consider they have sufficiently gained people's trust, they send false information to manipulate stock prices.
Google, a reference point on the Web

Google's strategic position on the Internet has seen it become a reference when searching for information, but it also has a key role in establishing corporate reputations, good or bad. Consequently, Google is also used to attack the reputation of third parties. The best known method is "Google bombing", which causes specific websites to appear at the top of search results. Attackers study the way in which Google indexes and orders web pages during searches, and try to push critical content regarding a specific brand or company to the top of the results list. When users search for that brand in Google, the first links displayed include pages aimed at damaging its reputation. Although Google has improved its algorithm to counter these attacks, they are still common practice.

PageRank is another Google-based mechanism exploited to ruin corporate reputations. It consists of algorithms developed by Google to quantitatively measure the relevance or importance of web pages on a scale of 0 to 10. A company's PageRank usually represents its popularity; if the value is high, it is usually considered to be a reliable source accessed by many important sites. Google currently penalizes companies who exchange links and artificially try to increase PageRank. Attackers exploit this by inserting penalized links on legitimate web pages. This way, they get the site penalized, its PageRank decreases, and its reputation is damaged.

Other ways of attacking a reputation

CastleCops is a volunteer security community focused on making the Internet a safer place. Its free services include malware and rootkit cleanup, malware and phishing research, and malware and hash databases. CastleCops accepts donations via PayPal. Attackers took advantage of this to begin a campaign aimed at discrediting CastleCops. They stole PayPal users' passwords using Trojans and phishing techniques, and made several donations to CastleCops.
When users realized someone had sent their money to CastleCops, they blamed CastleCops for the fraud. Consequently, CastleCops was forced to return all the money and invest resources in managing all the complaints and requests. CastleCops' reputation was undoubtedly damaged.

Most of the methods described above are essentially malware-based. For example, botnets are used to carry out distributed denial of service attacks and to launch spam that contains false information to ruin companies' images. Most defacements also use automated attack tools. In the case of Google, malware is also used to automate the insertion of links and spam on Web 2.0 sites that allow users to add content. In the case of CastleCops, Trojans were used to steal PayPal users' credentials.

There are numerous scenarios in which viruses, Trojans and other malware types can damage a company's reputation. In 2004, even Google was affected by the MyDoom worm, which disabled many of its servers for several hours. Worse still, the search engine underwent the attack hours before being floated on the stock market. Other search engines such as Altavista, Yahoo! and Lycos were also affected by the worm.

Phishing techniques, which are still as popular as ever, can also damage companies. These attacks are critical for banks, since they cause financial losses and strike fear in users. In the same way, specially crafted Trojans (mainly banker Trojans) have become one of the worst Internet threats. The main danger lies in the fact that they are designed to affect specific entities and, in many cases, operate totally invisibly; when users access their online bank, their access credentials are sent to hackers. In 2006, Trojans accounted for 53 percent of all new malware created, and 20 percent of these were banker Trojans. During 2007, there have already been over 40 percent more attacks than in the whole of 2006.
My son often has expressed a desire to be bitten by a radioactive spider so that he could become Spiderman, his favorite superhero. Since he's actually afraid of spiders, this is probably more a conceptual than real-life desire. Either way, he wants to believe Spiderman will be there to protect us from Green Goblin, Sandman, Rhino and Doc Ock. But every superhero is only as good as the tools in his or her crime-fighting bag, and I've often wondered how Spiderman's webbing -- effective in so many contexts and situations -- could contain a powerful villain like Rhino, never mind stop a runaway train from plummeting over the end of the track and killing all the passengers, as nearly happened in Spiderman 2, starring Tobey Maguire, Kirsten Dunst and James Franco. Now, thanks to physics students at the University of Leicester in England, I too can believe in Spiderman. The students set out to answer one question: Could a material with the strength and toughness of a spider's web really stop four crowded subway cars? Their answer: A group of three fourth-year MPhys students calculated the material properties of webbing needed in these conditions - and found that the strength of the web would be proportional to that of real spiders. Their paper, Doing whatever a spider can, was published in the latest volume of the University of Leicester's Journal of Physics Special Topics.Students James Forster, Mark Bryan and Alex Stone first calculated the force needed to stop the four R160 New York City subway cars. To do this, they used the momentum of the train at full speed, the time it takes the train to come to rest after the webs are attached, and the driving force of the powered R160 subway car. The students found the force Spiderman's webs exert on the train to be 300,000 newtons. They were then able to calculate the strength and toughness of the webs. 
Well, it turns out that the "Young's modulus" -- or stiffness -- of the web would be 3.12 gigapascals, which puts it within the range of silk spun by orb-weaver spiders. And the silk's toughness was calculated at nearly 500 megajoules per cubic meter, the equivalent of silk spun by Darwin's Bark Spider, which previous researchers have said makes webs "more than twice tougher than any previously described silk, and over 10 times better than Kevlar." Let this be a warning to you super villains out there. This stuff's real.
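The impulse-momentum step the students used can be sketched numerically. All values below are hypothetical stand-ins (the article does not give the actual train mass, speed, stopping time, or driving force), so the result only illustrates the order of magnitude of the web force, not the paper's exact 300,000-newton figure.

```python
# Back-of-the-envelope impulse-momentum sketch with hypothetical inputs.
TRAIN_MASS_KG = 160_000     # hypothetical: four loaded R160 subway cars
TRAIN_SPEED_MS = 25.0       # hypothetical full speed, m/s
STOPPING_TIME_S = 15.0      # hypothetical time for the webs to stop the train
DRIVING_FORCE_N = 100_000   # hypothetical motor force still pushing the train

def web_force(mass, speed, stop_time, driving_force):
    """Average force the webs must exert to stop the train.

    The webs must both remove the train's momentum (m * v over the stopping
    time) and counteract the motor force still driving it forward.
    """
    return mass * speed / stop_time + driving_force

force = web_force(TRAIN_MASS_KG, TRAIN_SPEED_MS, STOPPING_TIME_S, DRIVING_FORCE_N)
print(f"web force ~ {force:,.0f} N")
```

With these made-up inputs the force lands in the hundreds of kilonewtons, the same regime as the students' 300,000-newton result.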
If you want to find all the end-of-month Saturdays and Sundays of a whole year, start with the first day of the year:
1. Find the date of the first Saturday of the month.
2. Keep adding 7 to that date, each time checking the number of days left in the month. If the remainder is less than 7 and greater than 0, you are at the last Saturday and Sunday of the month. If it is more than 7, add 7 and go back to the start of the loop. If it is equal to zero (the Saturday is itself the last day of the month), you can subtract 6 to get the last Sunday.
Note that for this to work you need to calculate the number of days in that month in advance; only then can you compare.
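A runnable version of this approach (in Python, since the original post names no language) might look like the following: it finds the first occurrence of the weekday in the month, then steps forward in 7-day jumps while the next jump still lands inside the month.

```python
import calendar
import datetime

def last_weekday_of_month(year, month, weekday):
    """Return the date of the last given weekday in the month.

    weekday uses the datetime convention: 0=Monday ... 5=Saturday, 6=Sunday.
    """
    days_in_month = calendar.monthrange(year, month)[1]
    d = datetime.date(year, month, 1)
    # Step 1: first occurrence of the weekday in the month.
    d += datetime.timedelta(days=(weekday - d.weekday()) % 7)
    # Step 2: keep adding 7 days while that still fits in the month.
    while d.day + 7 <= days_in_month:
        d += datetime.timedelta(days=7)
    return d

def month_end_weekends(year):
    """(last Saturday, last Sunday) for every month of the year."""
    return [(last_weekday_of_month(year, m, 5), last_weekday_of_month(year, m, 6))
            for m in range(1, 13)]
```

Using the library's `monthrange` takes care of the "calculate the number of days in that month in advance" step mentioned above, including leap-year Februaries.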
Answer: 7.1 pounds In 2009, a team of researchers at the Massachusetts Institute of Technology began attaching transmitter chips to thousands of pieces of ordinary garbage. They tossed this "smart trash" into the bin, according to The Wall Street Journal, sat back and watched the path our garbage often takes. Of this 7.1 pounds, for instance, less than one-quarter gets recycled. On average, American communities spend more money on waste management than on fire protection, parks and recreation, libraries or schoolbooks, according to U.S. Census data on municipal budgets.
Many people are familiar with the concept of a mnemonic [nəˈmɑnɪk] — a memory device that uses a phrase based on the first letter of words in a series. Perhaps the most popular of these in the field of networking is the one for the OSI Model (All People Seem To Need Data Processing). Well, for those that deal with TCP/IP a lot, I thought it might be helpful to have a mnemonic for the TCP flags as well. What I've come up with and use regularly is:

Unskilled Attackers Pester Real Security Folks

Unskilled = URG
Attackers = ACK
Pester = PSH
Real = RST
Security = SYN
Folks = FIN

TCP flag information is most helpful to me when looking for particular types of traffic using Tcpdump. It's possible, for example, to capture only SYNs (new connection requests), only RSTs (immediate session teardowns), or any combination of the six flags really. As noted in my own little Tcpdump primer, you can capture these various flags like so:

Find all SYN packets
tcpdump 'tcp[13] & 2 != 0'

Find all RST packets
tcpdump 'tcp[13] & 4 != 0'

Find all ACK packets
tcpdump 'tcp[13] & 16 != 0'

Notice the SYN example has the number 2 in it, the RST the number 4, and the ACK the number 16. These numbers correspond to where the TCP flags fall on the binary scale. So when you write out:

U A P R S F

…that corresponds to:

32 16 8 4 2 1

So as you read the SYN capture tcpdump 'tcp[13] & 2 != 0', you're saying find the 13th byte in the TCP header, and only grab packets where the flag in the 2nd bit is not zero. Well if you go from right to left in the UAPRSF string, you see that the spot where 2 falls is where the S is, which is the SYN placeholder, and that's why you're capturing only SYN packets when you apply that filter.
# tcpdump 'tcp[13] & 2 != 0' 12:40:04.649404 IP 10.5.1.42.51584 > 126.96.36.199.http: S 1524039069:1524039069(0) 12:40:04.708459 IP 188.8.131.52.http > 10.5.1.42.51584: S 1416742397:1416742397(0) ack 1524039070 win 8190 You’ll notice that when I netcat‘d to Google on port 80 from another terminal, tcpdump shows only two out of the three steps involved in the three-way handshake. It didn’t show the third because the final step is simply an ACK from my side, i.e. no SYN flag set. Remembering these flags and how to make use of them can go a long way in helping low-level network troubleshooting/security work by isolating what it is you want to see and/or capture. And of course the better you can isolate the problem, the faster you can solve it. [ CREATED: April 2004 ]
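The flag-to-bit mapping above can also be expressed programmatically. Here's a small Python sketch (my own helper, not from the original primer) that builds the corresponding tcpdump filter expression from flag names:

```python
# Each TCP flag occupies one bit of byte 13 of the TCP header,
# in UAPRSF order: URG=32, ACK=16, PSH=8, RST=4, SYN=2, FIN=1.
FLAGS = {"URG": 32, "ACK": 16, "PSH": 8, "RST": 4, "SYN": 2, "FIN": 1}

def tcpdump_filter(*flags):
    """Build a tcpdump expression matching packets with any of the given flags set."""
    mask = sum(FLAGS[f] for f in flags)
    return f"tcp[13] & {mask} != 0"

print(tcpdump_filter("SYN"))         # tcp[13] & 2 != 0
print(tcpdump_filter("SYN", "FIN"))  # tcp[13] & 3 != 0
```

Summing the bit values lets you match several flags at once with a single mask, which is exactly how the binary scale in the table above works.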
<urn:uuid:b4385854-1c20-4767-89dd-d9169f66b24f>
CC-MAIN-2017-04
https://danielmiessler.com/study/tcpflags/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00124-ip-10-171-10-70.ec2.internal.warc.gz
en
0.852571
736
2.59375
3
Augmented reality has evolved hugely in recent years within classrooms. Educators will feel far less overwhelmed when trying to introduce AR in their classrooms, as there are many great apps that don’t require a lot of knowledge in the field. Augmented reality works well in schools because it brings close-to-real-life experiences to students, immersing them in the experience. It’s dynamic learning: watch their faces when they have the opportunity to explore space, the human body, cells or chemistry elements, and you’ll appreciate how eager and engaged they become with some simple AR apps. Chromville (Free): Educational app using the eight multiple intelligences. Students color their characters and then they come to life with the Visual Arts app. Elements 4D (Free): AR Chemistry app that brings the elements to life. It includes lesson plans for all levels: elementary, intermediate, and high school. Anatomy 4D (Free): Bring the human body to life with this AR app. Have students learn about the different systems and human anatomy with this app. Field Trip (Free): Field Trip, your guide to the cool, hidden, and unique things in the world around you. Field Trip runs in the background on your phone. When you get close to something interesting, it will notify you and if you have a headset or bluetooth connected, it can even read the info to you.
<urn:uuid:be5750d5-8849-4021-880c-9bcbeba7b25b>
CC-MAIN-2017-04
https://estorm.com.au/uncategorized/augmented-reality-tools-for-education/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00032-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941382
284
3.03125
3
SQLCA is the SQL Communications Area. You can consider it a collection or group of variables that are updated after each SQL statement executes. An application program that contains executable SQL statements must provide exactly one SQLCA. Whenever an SQL statement executes, the SQLCODE and SQLSTATE fields of the SQLCA receive a return code. Although both fields serve basically the same purpose (indicating whether the statement executed successfully) there are some differences between the two fields. SQLCODE: DB2 returns the following codes in SQLCODE: If SQLCODE = 0, execution was successful. If SQLCODE > 0, execution was successful with a warning. If SQLCODE < 0, execution was not successful. SQLCODE 100 indicates no data was found. The meaning of SQLCODEs other than 0 and 100 varies with the particular product implementing SQL. SQLSTATE: SQLSTATE allows an application program to check for errors in the same way for different IBM database management systems. An advantage to using the SQLCODE field is that it can provide more specific information than the SQLSTATE. Many of the SQLCODEs have associated tokens in the SQLCA that indicate, for example, which object incurred an SQL error.
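The SQLCODE branching described above can be sketched as follows. This is an illustrative Python version of the return-code logic only; a real host program (COBOL, PL/I, C) would test the SQLCODE field of the SQLCA directly after each EXEC SQL statement:

```python
def classify_sqlcode(sqlcode):
    """Classify a DB2 SQLCODE according to the rules above."""
    if sqlcode == 0:
        return "success"
    if sqlcode == 100:
        return "no data found"       # positive, but the special "no rows" case
    if sqlcode > 0:
        return "success with warning"
    return "error"                   # negative codes indicate failure

# SQLCODE -803 (duplicate key) is an error; +100 means no row was found.
print(classify_sqlcode(-803))  # error
print(classify_sqlcode(100))   # no data found
```

Note the order of the checks: 100 must be tested before the general "greater than zero" case, since it is technically a warning code with a distinct meaning.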
<urn:uuid:e9edf70e-d47f-40a3-a6e7-9fb72d5cb8b4>
CC-MAIN-2017-04
http://ibmmainframes.com/about2721.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00518-ip-10-171-10-70.ec2.internal.warc.gz
en
0.914807
264
2.84375
3
Every time you visit a web page Internet Explorer makes a copy of the content of these web pages as files on your computer. These files are called Temporary Internet Files and are used to allow for faster displaying of web sites that you visit. Depending on the setting, when you visit a web site Internet Explorer will compare the content of that web site with the information stored in the Temporary Internet Files and only download the content from the Internet if it has changed. This enables you to browse the web much quicker because you do not have to download these files. In most cases Temporary Internet Files do not pose a problem, but there are some situations that make it important for you to clean out these files. One common reason is that you are worried about your privacy. If you give your computer to someone they would potentially be able to discover information about web pages that you visit because copies of these web sites are stored locally on the computer. If privacy is not an issue, then you may be concerned with how much space these files take up. Temporary Internet Files, by default, take up 10% of your system partition's disk space. In the past this was not a problem as drives were smaller, but with drives ranging up to hundreds of GB of storage this can equate to large amounts of disk space allocated towards these files. You only need 50 MB for the Temporary Internet Files to work efficiently, so there is a lot of wasted disk space that we can reclaim for our personal use. In this tutorial we will discuss how you can manage Internet Explorer and address these concerns. In this section of the tutorial we are going to go over the options on how you can configure Internet Explorer to use Temporary Internet Files. Click on the Start button and then click on Control Panel. Then double-click on the Internet Options icon. Once you double click on the Internet Options icon you will be presented with a screen similar to Figure 1 below. Figure 1.
General Tab of Internet Options To access the settings for the Temporary Internet Files you will click on the Settings button designated by the blue box in Figure 1 above. When you click on the Settings button you will be presented with a screen similar to Figure 2 below. Figure 2. Temp Internet Files Settings The settings dialog shown in Figure 2 is broken down into two sections as described below: Check for newer versions of stored pages - The options in this section tell Internet Explorer what it should do when you visit a web page. Every time you visit a web site Internet Explorer stores a copy of this web page in the Temporary Internet Files folder. When you revisit that same web page, whether or not that information is downloaded again or taken directly from the locally stored copy is decided based upon the setting you choose in this section. Temporary Internet Files folder - This section gives you information about the actual folder where the Temporary Internet Files are stored as well as the ability to manage the settings associated with these files/folders. To exit from the Settings dialog, click on the OK button and then click on the OK button again. A common question is "How do I Delete or clear the Contents of the Temporary Internet Files folder?", and we will give you step by step instructions on how to do this below. Step 1: Click on the Start button and then click on Control Panel. Then double-click on the Internet Options icon. You will now be presented with a screen similar to Figure 3 below. Figure 3. General Tab of Internet Options Step 2: Click once on the Delete Files button designated by the red box in Figure 3 above. This will bring up a confirmation box similar to Figure 4 below. Figure 4. Confirmation to delete Temporary Internet Files Step 3: Click on the checkbox labeled Delete all offline content if you would like to delete content that you marked as viewable when you are not connected to the Internet.
If you do not have offline content you can leave this unchecked. Step 4: You should then click on the OK button which will start the process of deleting all of your Temporary Internet Files. This can take a while so do not be concerned if it looks like the Internet Options screen has become frozen. When it is done deleting the files, the Internet Options screen will go back to normal and you will be able to press the OK button to close the window. Now your Internet Explorer Temporary Internet Files have been deleted from your computer. As always if you have any questions please feel free to post them in our computer help forums. BleepingComputer.com: Computer Support & Tutorials for the beginning computer user.
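To put the 10% default and the 50 MB recommendation from this tutorial in perspective, here's a quick back-of-the-envelope calculation (a Python sketch; the example partition sizes are arbitrary):

```python
RECOMMENDED_MB = 50  # the cache size this tutorial says is sufficient

def default_cache_mb(partition_gb):
    """IE's default allocates roughly 10% of the system partition to Temporary Internet Files."""
    return partition_gb * 1024 * 0.10

for partition_gb in (40, 120):
    allocated = default_cache_mb(partition_gb)
    print(f"{partition_gb} GB partition: ~{allocated:.0f} MB allocated, "
          f"~{allocated - RECOMMENDED_MB:.0f} MB reclaimable")
```

Even a modest 40 GB system partition gives up roughly 4 GB to the cache by default, which is why shrinking the folder size in the Settings dialog can reclaim so much space.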
<urn:uuid:4d54157f-fb9c-433a-9353-2a582d65993f>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/tutorials/manage-temporary-internet-files/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00178-ip-10-171-10-70.ec2.internal.warc.gz
en
0.911898
1,340
2.984375
3
Google has conducted a study of the reasons hard drives fail, using information gathered from more than 100,000 of its own disks. The study, which Google says is "...unprecedented in that it uses a much larger population size than has been previously reported" presents "...a comprehensive analysis of the correlation between failures and several parameters that are believed to affect disk lifetime." The study's key finding was "...the lack of a consistent pattern of higher failure rates for higher temperature drives or for those drives at higher utilization levels. Such correlations have been repeatedly highlighted by previous studies, but we are unable to confirm them by observing our population. Although our data do not allow us to conclude that there is no such correlation, it provides strong evidence to suggest that other effects may be more prominent in affecting disk drive reliability in the context of a professionally managed data center deployment." Google's data was collected by tapping into drives' self monitoring facility (SMART) and "confirm the findings of previous smaller population studies that suggest that some of the SMART parameters are well-correlated with higher failure probabilities. "We find, for example, that after their first scan error, drives are 39 times more likely to fail within 60 days than drives with no such errors. First errors in reallocations, offline reallocations, and probational counts are also strongly correlated to higher failure probabilities." "Despite those strong correlations, we find that failure prediction models based on SMART parameters alone are likely to be severely limited in their prediction accuracy, given that a large fraction of our failed drives have shown no SMART error signals whatsoever.
This result suggests that SMART models are more useful in predicting trends for large aggregate populations than for individual components. It also suggests that powerful predictive models need to make use of signals beyond those provided by SMART." The full report is available for download at http://labs.google.com/papers/disk_failures.pdf
<urn:uuid:ebbaac77-4433-474d-af27-31cbaa1129ba>
CC-MAIN-2017-04
http://www.computerweekly.com/news/2240021933/Everything-Google-knows-about-disk-failure
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00408-ip-10-171-10-70.ec2.internal.warc.gz
en
0.956252
422
2.515625
3
Check out this cool video from the NASAexplorer YouTube channel, showing how scientists with the Lunar Reconnaissance Orbiter (LRO) beamed an image of the Mona Lisa to the spacecraft from Earth. NASA says this was done as a demonstration of laser communication with a satellite at the moon, which could be used for future moon missions as a way to communicate rather than with radio waves. The image of the famous da Vinci painting traveled almost 240,000 miles in digital form, NASA says. While it's likely they chose the Mona Lisa image because of its historical significance, I can't help but wonder if NASA scientists were poking a bit of fun at the Mona Lisa Moon alien conspiracy theorists out there. Keith Shaw rounds up the best in geek video in his ITworld.tv blog.
<urn:uuid:62695421-79ae-44ff-bf0f-54b80173fce9>
CC-MAIN-2017-04
http://www.itworld.com/article/2715290/consumer-tech-science/how-nasa-beamed-the-mona-lisa-to-the-moon.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00096-ip-10-171-10-70.ec2.internal.warc.gz
en
0.910648
264
3.296875
3
In this article you will learn how to install MongoDB under Linux / UNIX environments. Note: When this article was written (May 2013) we were using the Debian-based operating system Ubuntu 12.04 LTS. This works under Ubuntu 10.04 LTS, Ubuntu 13, CentOS / RHEL / Red Hat Enterprise Linux, Debian 6 and Debian 7. What is MongoDB? MongoDB is a document database server that provides high performance, high availability, and scalability for web applications. It has features like document embedding, which makes reads and writes fast. Indexes can include keys from embedded documents and arrays, and features like replication with automatic master failover provide high availability. MongoDB is used in many production application environments and it is commonly used by modern web applications. In this article we will cover installation of the MongoDB server using documentation and references provided by the MongoDB docs, with the help of a bash script we will create to install MongoDB easily. For the commands below, either run them as root (and omit sudo), or be logged in as a user with sudo privileges. sudo apt-get update sudo apt-get -y upgrade The first command updates the software repositories. The second command upgrades the software packages installed against the updated repositories. Now install the MongoDB server by creating a bash script, which we will name "install_mongodb.sh": sudo vim install_mongodb.sh Content of the bash script: apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10 echo "deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen" | tee -a /etc/apt/sources.list.d/10gen.list apt-get -y upgrade apt-get -y install mongodb-10gen Save and exit the file. Now, the above script first calls the "apt-key" command, which registers the public key of the custom 10gen MongoDB repository.
After this, a custom 10gen repository list file named "10gen.list" is created under the "/etc/apt/sources.list.d" directory. The script then runs upgrade again to download packages from the newly created repository, and installs the MongoDB server package "mongodb-10gen". The above is only an explanation of what the bash script contents do; please note that you do not have to perform those steps one by one. Make the script executable: sudo chmod +x install_mongodb.sh Now, execute the bash script you created earlier using the command below: sudo bash ./install_mongodb.sh After the process completes without any errors, you have successfully installed MongoDB on your server / machine. You should get output at the end of something like the below, with a unique generated process id (pid): mongodb start/running, process 2156 When the installation is complete it automatically starts the MongoDB server's service; you can verify this by checking its process ID (PID) using the command below: ps aux | grep mongo If you see the process id with the user and command running, your MongoDB is up and running.
<urn:uuid:6c174ef5-213c-417e-8ce2-8640413193b0>
CC-MAIN-2017-04
http://www.codero.com/knowledge-base/content/11/311/en/how-to-install-mongodb.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00306-ip-10-171-10-70.ec2.internal.warc.gz
en
0.841221
730
2.65625
3
This course is intended for learners with no existing knowledge of MPLS. It is intended to give you an overview of basic MPLS usage, configuration, and verification. We start with a quick synopsis of the differences between routing packets based on Layer-3 header information as compared to switching PDUs via a Label (sometimes called a “Tag”). Then we dive into the structure of the MPLS Label and how each field is derived. You are introduced to the four protocols capable of generating, and distributing, MPLS Labels between routers, and then we go into a deep-dive of LDP (the Label Distribution Protocol) complete with lab examples and detailed sniffer traces. Other facets of MPLS are also discussed such as Penultimate-Hop Popping, MPLS TTL usage, and Conditional Label Advertisement. This course should serve as a great foundational MPLS course so that you’re ready to dive into more advanced MPLS courses such as MPLS VPNs and MPLS Traffic Engineering.
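As a taste of the label structure the course covers, here is a short Python sketch (mine, not course material) that packs and unpacks the 32-bit MPLS label stack entry defined in RFC 3032: a 20-bit label value, a 3-bit EXP/traffic-class field, a 1-bit bottom-of-stack flag, and an 8-bit TTL:

```python
def pack_label(label, exp, bos, ttl):
    """Pack the four fields of an MPLS label stack entry into a 32-bit integer."""
    assert 0 <= label < 1 << 20 and 0 <= exp < 8 and bos in (0, 1) and 0 <= ttl < 256
    return (label << 12) | (exp << 9) | (bos << 8) | ttl

def unpack_label(entry):
    """Reverse of pack_label: return (label, exp, bottom_of_stack, ttl)."""
    return entry >> 12, (entry >> 9) & 0x7, (entry >> 8) & 0x1, entry & 0xFF

# Label 100, EXP 0, bottom of stack, TTL 64
entry = pack_label(100, 0, 1, 64)
print(hex(entry))           # 0x64140
print(unpack_label(entry))  # (100, 0, 1, 64)
```

Seeing the fields as plain bit shifts makes later topics, such as TTL propagation and the bottom-of-stack bit in stacked VPN labels, much easier to reason about.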
<urn:uuid:4b7eba09-f981-4d8a-938e-66bbed9c15b9>
CC-MAIN-2017-04
https://streaming.ine.com/c/rs-intro-to-mpls
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00059-ip-10-171-10-70.ec2.internal.warc.gz
en
0.950762
205
3.0625
3
Tyvak Nanosatellite Systems has been awarded a contract to build a series of miniaturized satellites to support missions under NASA‘s Pathfinder Technology Demonstrator program. NASA said Wednesday its Small Spacecraft Technology Program will use CubeSats from the company with government-furnished technology payloads to conduct multiple flight demonstrations. Tyvak will develop a six-unit CubeSat equipped with solar arrays designed to supply at least 45 watts of power while in orbit, with an option to create up to four additional small spacecraft that will carry technology payloads, the agency added. “The increasing capabilities and resulting significant expansion in applications of small spacecraft represent a paradigm shift for NASA and the larger space community,” said Steve Jurczyk, associate administrator of NASA’s space technology mission directorate in Washington. “The satellites will be used to demonstrate and characterize novel small satellite payloads in low-Earth orbit,” said John Marmie, project manager at Ames Research Center. The project seeks to demonstrate the capacity of small spacecraft technologies to foster the development of commercial space capabilities for future NASA missions that require advanced control systems for precision pointing, communications systems and propulsion systems. NASA added that the PTD project falls under the Small Spacecraft Technology program, which NASA’s Ames Research Center and Glenn Research Center manage on behalf of the space agency’s space technology mission directorate.
<urn:uuid:52b41976-34d4-41a4-99eb-7d2b239545c4>
CC-MAIN-2017-04
https://blog.executivebiz.com/2017/01/nasa-taps-tyvak-nanosattelite-systems-for-small-spacecraft-to-help-conduct-cubesat-technology-missions/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00547-ip-10-171-10-70.ec2.internal.warc.gz
en
0.864385
290
2.9375
3
Whether it's local or from the Web, there are several ways to get data into R for further work. Once you've installed and configured R to your liking, it's time to start using it to work with data. Yes, you can type your data directly into R's interactive console. But for any kind of serious work, you're a lot more likely to already have data in a file somewhere, either locally or on the Web. Here are several ways to get data into R for further work. [This story is part of Computerworld's "Beginner's guide to R." To read from the beginning, check out the introduction; there are links on that page to the other pieces in the series.] If you just want to play with some test data to see how they load and what basic functions you can run, the default installation of R comes with several data sets. Type data() into the R console and you'll get a listing of pre-loaded data sets. Not all of them are useful (body temperature series of two beavers?), but these do give you a chance to try analysis and plotting commands. And some online tutorials use these sample sets. One of the less esoteric data sets is mtcars, data about various automobile models that come from Motor Trend. (I'm not sure what year the data are from, but given that there are entries for the Valiant and Duster 360, I'm guessing they're not very recent; still, it's a bit more compelling than whether beavers have fevers.) You'll get a printout of the entire data set if you type the name of the data set into the console, like so: mtcars There are better ways of examining a data set, which I'll get into later in this series. Also, R does have a print() function for printing with more options, but R beginners rarely seem to use it. Existing local data R has a function dedicated to reading comma-separated files.
To import a local CSV file named filename.txt and store the data into one R variable named mydata, the syntax would be: mydata <- read.csv("filename.txt") (Aside: What's that <- where you expect to see an equals sign? It's the R assignment operator. I said R syntax was a bit quirky. More on this in the section on R syntax quirks.) And if you're wondering what kind of object is created with this command, mydata is an extremely handy data type called a data frame -- basically a table of data. A data frame is organized with rows and columns, similar to a spreadsheet or database table. The read.csv function assumes that your file has a header row, so row 1 is the name of each column. If that's not the case, you can add header=FALSE to the command: mydata <- read.csv("filename.txt", header=FALSE) In this case, R will read the first line as data, not column headers (and assigns default column header names you can change later). If your data use another character to separate the fields, not a comma, R also has the more general read.table function. So if your separator is a tab, for instance, this would work: mydata <- read.table("filename.txt", sep="\t", header=TRUE) The command above also indicates there's a header row in the file with header=TRUE. If, say, your separator is a character such as | you would change the separator part of the command to sep="|" Categories or values? Because of R's roots as a statistical tool, when you import non-numerical data, R may assume that character strings are statistical factors -- things like "poor," "average" and "good" -- or "success" and "failure." But your text columns may not be categories that you want to group and measure, just names of companies or employees. 
If you don't want your text data to be read in as factors, add stringsAsFactors=FALSE to read.table, like this: mydata <- read.table("filename.txt", sep="\t", header=TRUE, stringsAsFactors=FALSE) If you'd prefer, R allows you to use a series of menu clicks to load data instead of 'reading' data from the command line as just described. To do this, go to the Workspace tab of RStudio's upper-right window, find the menu option to "Import Dataset," then choose a local text file or URL. As data are imported via menu clicks, the R command that RStudio generated from your menu clicks will appear in your console. You may want to save that data-reading command into a script file if you're using this for significant analysis work, so that others -- or you -- can reproduce that work. The 3-minute YouTube video below, recorded by UCLA statistics grad student Miles Chen, shows an RStudio point-and-click data import. Copying data snippets If you've got just a small section of data already in a table -- a spreadsheet, say, or a Web HTML table -- you can control-C copy those data to your Windows clipboard and import them into R. The command below handles clipboard data with a header row that's separated by tabs, and stores the data in a data frame (x): x <- read.table(file = "clipboard", sep="\t", header=TRUE) You can read more about using the Windows clipboard in R at the R For Dummies website. On a Mac, the pipe ("pbpaste") function will access data you've copied with command-c, so this will do the equivalent of the previous Windows command: x <- read.table(pipe("pbpaste"), sep="\t") There are R packages that will read files from Excel, SPSS, SAS, Stata and various relational databases. I don't bother with the Excel package; it requires both Java and Perl, and in general I'd rather export a spreadsheet to CSV in hopes of not running into Microsoft special-character problems.
For more info on other formats, see UCLA's How to input data into R which discusses the foreign add-on package for importing several other statistical software file types. read.csv() and read.table() work pretty much the same to access files from the Web as they do for local data. Do you want Google Spreadsheets data in R? You don't have to download the spreadsheet to your local system as you do with a CSV. Instead, in your Google spreadsheet -- properly formatted with just one row for headers and then one row of data per line -- select File > Publish to the Web. (This will make the data public, although only to someone who has or stumbles upon the correct URL. Beware of this process, especially with sensitive data.) Select the sheet with your data and click "Start publishing." You should see a box with the option to get a link to the published data. Change the format type from Web page to CSV and copy the link. Now you can read those data into R with a command such as: mydata <- read.csv("http://bit.ly/10ER84j") The command structure is the same for any file on the Web. For example, Pew Research Center data about mobile shopping are available as a CSV file for download. You can store the data in a variable called pew_data like this: pew_data <- read.csv("http://bit.ly/11I3iuU") It's important to make sure the file you're downloading is in an R-friendly format first: in other words, that it has a maximum of one header row, with each subsequent row having the equivalent of one data record. Even well-formed government data might include lots of blank rows followed by footnotes -- that's not what you want in an R data table if you plan on running statistical analysis functions on the file. Help with external data R enthusiasts have created add-on packages to help other users download data into R with a minimum of fuss. 
For instance, the financial analysis package Quantmod, developed by quantitative software analyst Jeffrey Ryan, makes it easy to not only pull in and analyze stock prices but graph them as well. All you need are four short lines of code to install the Quantmod package, load it, retrieve a company's stock prices and then chart them using the barChart function. Type in and run the following in your R editor window or console for Apple data: install.packages("quantmod") library(quantmod) getSymbols("AAPL") barChart(AAPL) Want to see just the last couple of weeks? You can use a command like this: barChart(AAPL, subset='last 14 days') chartSeries(AAPL, subset='last 14 days') Or grab a particular date range like this: barChart(AAPL, subset='2013-01::2013-03') Quantmod is a very powerful financial analysis package, and you can read more about it on the Quantmod website. There are many other packages with R interfaces to data sources such as twitteR for analyzing Twitter data; Quandl and rdatamarket for access to millions of data sets at Quandl and Data Market, respectively; and several for Google Analytics, including rga, RGoogleAnalytics and ganalytics. Looking for a specific type of data to pull into R but don't know where to find it? You can try searching Quandl and Datamarket, where data can be downloaded in R format even without needing to install the site-specific packages mentioned above. Removing unneeded data If you're finished with variable x and want to remove it from your workspace, use the rm() remove function: rm(x) Saving your data Once you've read in your data and set up your objects just the way you want them, you can save your work in several ways. It's a good idea to store your commands in a script file, so you can repeat your work if needed. How best to save your commands? You can type them first into the RStudio script editor (top left window) instead of directly into the interactive console, so you can save the script file when you're finished.
If you haven't been doing that, you can find a history of all the commands you've typed in the history tab in the top right window; select the ones you want and click the "to source" menu option to copy them into a file in the script window for saving. You can also save your entire workspace. While you're in R, use the function: save.image() That stores your workspace to a file named .RData by default. This will ensure you don't lose all your work in the event of a power glitch or system reboot while you've stepped away. When you close R, it asks if you want to save your workspace. If you say yes, the next time you start R that workspace will be loaded. That saved file will be named .RData as well. If you have different projects in different directories, each can have its own .RData workspace file. You can also save an individual R object for later loading with the save function: save(mydata, file="mydata.rda") Reload it at any time with: load("mydata.rda") Sharon Machlis is online managing editor at Computerworld. Her e-mail address is firstname.lastname@example.org. You can follow her on Twitter @sharon000, on Facebook, on Google+ or by subscribing to her RSS feeds: articles; and blogs. Read more about business intelligence/analytics in Computerworld's Business Intelligence/Analytics Topic Center. This story, "Beginner's guide to R: Get your data into R" was originally published by Computerworld.
Scientists at the Johns Hopkins University Applied Physics Laboratory have developed a quantum algorithm intended to compute measurements such as the radar cross section of an aircraft. The research was funded through an Intelligence Advanced Research Projects Activity program for exploring computational resources for quantum computers, APL said Aug. 14. APL scientists aim to further the development of stealth aircraft technology, the laboratory said. During an experiment, researchers used the linear system algorithm to encode the problem of computing radar cross sections, which quantify the amount of power scattered by an object illuminated by a radar beam. That scattered energy determines a radar's ability to locate the target, and efforts are underway to lower the radar cross sections of planes, missiles, tanks and ships, according to APL. Scientists also used a quantum computer to estimate such measurements and model complex objects for the study.
Knowing where to focus your resources in a time-critical environment is key to achieving a successful outcome. When your primary IT services are performing badly — or worse still, suffering from a complete failure — you want your resolution team spending more time fixing the problem and less time identifying it and searching for the needle in the haystack. Incident.MOOG uses a suite of algorithms that combine to allow your team to speed up the resolution process so your services get back online sooner. Some of those techniques, such as Situational Management or Probable Root Cause, make it easier to know what to work on first; they give you directions to where the needle is, if you will. Other algorithms have impact at the other end of the problem, reducing the volume of data that needs to be sifted through, making the haystack smaller. The key step in this part of the process is Noise Reduction, the ability to remove the unimportant events, reducing the volume of data that the Incident.MOOG algorithms need to analyze and, more importantly, the volume of data that the people actually fixing your outage need to look at.

Reducing the Noise

For many years the concept of noise reduction began and ended with deduplication: every time a repeat event is encountered, you increment a counter on the parent alert and discard the repeated event. Hundreds of ping-fail events collapse to a single alert. Simple and effective, but no longer sufficient when managing modern systems. In Incident.MOOG, the concept of noise reduction still begins with deduplication — it is, after all, a wonderfully simple idea — but Incident.MOOG goes a lot further. Every event that enters Incident.MOOG is analyzed and assigned a numerical value, a value that indicates how important that event is within the context of the rest of the system. In Incident.MOOG we call this attribute Entropy. The higher an alert's Entropy, the more important it is; the lower the Entropy, the less important it is.
High Entropy events are the needles, the things to examine first; low Entropy events are the events that can be safely ignored, the noise, the haystack. Even with a basic Entropy threshold, large proportions of the inbound events can be ignored because they don't contain useful information — and, importantly, without losing the events that need remedial action.

What is Entropy?

Entropy is a term that is used in a variety of scientific and engineering fields, and has its root in thermodynamics. In the field of information theory, entropy, or "Information Entropy" as it is more formally known, is a concept created by Claude Shannon. There is a story that Shannon was discussing his theories about "lost information" with John von Neumann, and specifically what to call his new concept. Von Neumann is said to have replied:

"You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage."

What Entropy Really is…

Put yourself in the place of your network operations engineers, where deduplication is the only mechanism for noise reduction. You still see thousands of alerts every day. Through experience and tribal knowledge you know which alerts are of no consequence and can be safely ignored — the process heartbeat messages, the polled and un-thresholded CPU utilisation messages, the temporary network connectivity failures. None of these alerts need remedial action; they contain little useful information; they have a low Entropy. But can such alerts be distinguished automatically from the important, actionable events? The failure of the disk array on your DB cluster, for example, can't be ignored; that needs action.

In Incident.MOOG, the Entropy of an alert has multiple components, for example:

What is the text of the alert? When does it appear and how often? Where is it coming from?
We call these components the Semantic, Temporal, and Topological Entropies, and they combine to form an overall measure of Entropy for the alert. Semantic Entropy is derived using Natural Language Processing techniques. Words and phrases are assigned a score according to how common, or rare, those words are. Combining those scores gives a value for how much information is contained in the text of the message. But that’s not the whole story. An alert that always contains the same text and that appears every few hours carries far less meaning than a similar alert that appears once every few days. This is where Temporal Entropy comes in. Randomly occurring alerts carry more meaning than frequent and regularly occurring ones. Finally, there is the concept of Topological Entropy, a measure of importance derived from where in your network an alert comes from. Is the alert for a development server, or from a switch at the core of the network? An alert from the former is likely to have a lower Topological Entropy than the latter. Of course there is some pretty complex math going on behind the scenes to calculate the values for the different types of entropy. But the underlying concept of an Alert Entropy is a simple and incredibly powerful model for noise reduction, far more powerful and sophisticated than the straightforward act of deduplication, and far more relevant to modern IT operations. Get started today with a free trial of Incident.MOOG—a next generation approach to IT Operations and Event Management. Driven by real-time data science, Incident.MOOG helps IT Operations and Development teams detect anomalies across your production stack of applications, infrastructure and monitoring tools all under a single pane of glass.
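As a rough sketch of how a semantic-entropy score plus deduplication might work (hypothetical code, not Moogsoft's actual implementation; the alert messages and scoring details are invented for illustration):

```python
import math
from collections import Counter

# Hypothetical alert history: mostly routine noise plus one real failure.
history = [
    "heartbeat ok from web01",
    "heartbeat ok from web02",
    "heartbeat ok from web01",
    "cpu utilisation 73 percent on web01",
    "disk array failure on db cluster dbc1",
]

# Word frequencies over the alert history drive the "semantic" score:
# rare words carry more information (higher surprisal) than common ones.
word_counts = Counter(w for msg in history for w in msg.split())
total = sum(word_counts.values())

def semantic_entropy(message):
    """Sum of -log2 p(word) over the message's words, in bits."""
    return sum(-math.log2(word_counts.get(w, 1) / total) for w in message.split())

# Deduplicate: one alert per distinct message, with a repeat counter.
alerts = Counter(history)

# Rank alerts so the information-rich "needles" come first.
ranked = sorted(alerts, key=semantic_entropy, reverse=True)
for msg in ranked:
    print(f"{semantic_entropy(msg):6.1f} bits  x{alerts[msg]}  {msg}")
```

In this toy ranking the one-off disk-array failure scores highest and the repeated heartbeat noise scores lowest, which is the behavior the article describes; a real system would blend in the temporal and topological components as well.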
Finally, a legitimate health benefit to excessive drinking! Well, the author of a new study showing that the more alcohol seriously injured patients had in their blood, the less likely they were to die in the hospital might prefer we not put it that way. "This study is not encouraging people to drink," University of Illinois at Chicago injury epidemiologist Lee Friedman said in a statement. That, of course, is because drunk people are (among other things) more prone to getting hurt, what with their stumbling around and driving drunk and getting belligerent and what have you. "However," he said, "after an injury, if you are intoxicated there seems to be a pretty substantial protective effect. The more alcohol you have in your system, the more the protective effect." Friedman, an assistant professor of environmental and occupational health sciences at UIC, conducted his study by analyzing Illinois Trauma Registry data for about 190,000 patients treated at trauma centers from 1995 through 2009 and tested for blood-alcohol content upon admission. From UIC:

The study examined the relationship of alcohol dosage to in-hospital mortality following traumatic injuries such as fractures, internal injuries and open wounds. Alcohol benefited patients across the range of injuries, with burns as the only exception. The benefit extended from the lowest blood alcohol concentration (below 0.1 percent) through the highest levels (up to 0.5 percent). "At the higher levels of blood alcohol concentration, there was a reduction of almost 50 percent in hospital mortality rates," Friedman said. "This protective benefit persists even after taking into account injury severity and other factors known to be strongly associated with mortality following an injury."

Obviously, as Friedman emphasizes, this isn't a green light to a life of inebriation. Heavy alcohol consumption is extremely dangerous in the short- and long-term.
So other than highlighting an interesting phenomenon, how can science use this research? One way, Friedman suggests, is that by understanding the protective effects of booze in situations of bodily trauma, "we could then treat patients post-injury, either in the field or when they arrive at the hospital, with drugs that mimic alcohol." The study is available on the website of the journal Alcohol and will appear in the December issue of the print edition.
On the way home from New York last week I saw the movie "Imitation Game", which is the story of Alan Turing, who led the effort to break the Enigma code with fellow mathematicians at Bletchley Park in the UK during World War II. Although I rarely watch movies on long plane rides, I was fascinated by this movie, since I had just blogged about data encryption. I am certainly not an expert on the subject of encryption or the history of Enigma, so most of what I am posting here comes from this movie and Wikipedia. Please send me comments if you find anything that is incorrect or would like to add any insights. The Enigma machine was invented by the Germans after World War I, and was used commercially and by other countries to encrypt messages for many years. During World War II, an improved version was used by the German military. It had a series of stepping rotors and plug boards which enabled a possible combination of 159 x 10^18 settings, which were changed on a daily basis. This is roughly equivalent to 2^67 in binary terms. In those days, using manual methods, it was nearly an impossible code to break, especially since the settings were changed daily. (The AES 256 that we use today has 2^256 possible combinations.) In the movie the British had an Enigma machine which had been captured by the Poles, and although they could intercept the message at 6AM, which provided the settings for that day, it only gave them 18 hours to decipher the code before they had to start over again. To counter Enigma, Alan Turing built a machine to try to crack the code through brute force. At first, this was not fast enough to break through all the possible combinations. In the movie, one of the code ladies remarked that she could tell when a certain operator was sending messages due to his use of certain sequences of characters. This helped to reduce the number of permutations that had to be processed and led to the cracking of the code.
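As a quick sanity check on that keyspace figure (a minimal sketch using only Python's standard math module):

```python
import math

# Enigma's military configuration allowed roughly 159 x 10^18 key settings.
enigma_keys = 159e18

# Express the keyspace as a power of two for comparison with modern ciphers.
bits = math.log2(enigma_keys)
print(f"Enigma keyspace ~= 2^{bits:.1f}")  # about 2^67

# AES-256, by contrast, has 2^256 possible keys.
print(f"AES-256 keyspace is about 2^{256 - round(bits)} times larger")
```

Huge as the Enigma keyspace was for manual methods, it is tiny next to a modern 256-bit cipher.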
In the end it was operator mistakes, laziness, and failure to systematically introduce changes in encryption procedures that cracked the code. As is often the case, this result was due to a people failure rather than a technology failure. Alan Turing was a pioneering computer scientist who is best known for his efforts to decipher the Enigma code. He is also known for the hypothetical device known as the Turing Machine, which can be adapted to simulate the logic of any computer algorithm. If you have a chance to visit Silicon Valley, you should stop by the Computer History Museum in Mountain View, where you can learn more about Alan Turing and see an actual Enigma machine on display. I also recommend seeing the Imitation Game movie, which is interesting in terms of how people can work together to solve problems. The title "Imitation Game" is not made very clear in the movie. Turing introduced the "Imitation Game" in a paper he wrote to pose the question of whether a computer could "imitate" a man. The title was less relevant to the computer that broke the Enigma code than to the breaking of the man.
An overview of storage area networks and Fibre Channel components.

BY GREG SCHULZ

Internet, e-commerce, large databases, data warehousing/mining, and video applications depend on an ever-increasing amount of data. As a result, these storage-intensive applications have special requirements. Disaster tolerance, extended distances, and worldwide 24x7 availability have placed an emphasis on storage being scalable, modular, open, highly available, fast, and cost-effective. Parallel SCSI storage and RAID arrays have gone a long way toward addressing these requirements. However, the "virtual data-center" vision has been limited by existing storage architectures. As a result of the emergence of storage area networks (SANs), storage is entering a period of change similar to what computer networks went through in the late 1980s and early 1990s. Networks have migrated from proprietary interfaces such as SNA and DECnet to open TCP/IP on Ethernet. Simple hub and spoke configurations gave way to robust switched networks with multiple sub-nets, zones, segments, and the Internet. Networks have evolved from being a mechanism for access to computer systems from terminals or PCs, to being able to transfer and share files and support distributed applications including e-mail and Web-hosting.

New storage interfaces

Storage interfaces such as parallel SCSI, which sit between host systems and storage devices, are in some cases becoming a hindrance to growth. That is not to say interfaces such as SCSI are dying. SCSI will continue to co-exist in many environments and can be part of an overall SAN strategy. However, for applications requiring large amounts of storage, high performance, shared storage over long distances, and high availability, a new storage model is required.
Storage area networks

Key potential benefits of Fibre Channel and SANs are reduced storage management effort and costs. Management can be reduced in the following ways:

- Consolidated storage (disk and tape) and storage management;
- Shared storage pools for dynamic allocation;
- Removal of redundant costs and complexity;
- Elimination of vendor-specific "islands" of storage;
- Simplified storage planning and procurement;
- LAN-free and serverless backup; and
- Disaster recovery and replication.

In Figure 2, a SAN is represented in a logical manner similar to the way a LAN would be shown. Like a LAN or WAN, underneath the SAN there is an underlying infrastructure. Today, Fibre Channel is the primary enabling technology for building SANs. The Fibre Channel standard has been refined over recent years, as has the interoperability of various components (host bus adapters, switches, and devices). Currently, Fibre Channel supports speeds of 100MBps or 200MBps, with various topologies, including arbitrated loop, point-to-point, and switched fabric.

Figure 1: Traditional storage architectures include storage devices that are directly attached to servers.

Fibre Channel is an ANSI-standard protocol supporting flexible wiring topologies. Fibre Channel supports several upper-level protocols (ULPs), including SCSI, TCP/IP, FICON, and VI for different application requirements. A SAN is a network for storage that can include hubs, switches, directors, host bus adapters (HBAs), and routers used for accessing storage. A benefit of a SAN is that you can isolate all storage I/O on a separate network, so that traditional network traffic is not impacted by storage I/O traffic. A SAN is often depicted as an open-ended "cloud," or network, with virtually unlimited bandwidth and host connectivity. Various servers can plug into and gain access to common pools of storage and services in a transparent manner.
Fibre Channel overview

Fibre Channel is a high-speed serial interface for connecting computers and storage systems (e.g., RAID/JBOD arrays, tape drives/libraries). Fibre Channel provides attachment of servers and storage systems across distances of 10km and beyond, enabling floor-to-floor, building-to-building, and campus-wide distances. It supports multiple standard protocols (e.g., FICON, TCP/IP, and SCSI) concurrently over the same physical cable or media, which can simplify cabling and infrastructure costs. This interface also allows standard SCSI packets to be transported over fiber-optic or copper interconnects using SCSI_FCP (SCSI Fibre Channel protocol). End users can incorporate existing SCSI devices in a SAN via Fibre Channel-to-SCSI converters such as bridges and routers.

Figure 2: In a SAN configuration, storage is attached directly to the storage network.

Not all storage subsystems are designed to take advantage of Fibre Channel, and the performance of some applications may not be improved because some products have internal constraints that prevent them from running at faster rates. For these systems, Fibre Channel provides distance and connectivity benefits.

Fibre Channel SAN environments consist of several components, depending on the topology and applications.

HBAs and device drivers: HBAs attach to host I/O buses or interfaces such as PCI or the SBus. In addition to providing a physical interface between the host bus and the Fibre Channel interface, HBAs can support various protocols, including SCSI, FICON, TCP/IP, and VI. Today, some of the major differences between HBAs are the level of interoperability with other adapters, protocols supported, operating systems support, and physical media interface support. For redundancy, Fibre Channel environments should be built around dual switches and/or directors to eliminate performance bottlenecks and single points of failure.
The goal is to build a storage network using similar techniques and principles used for traditional networking combined with storage I/O channels. Given the network type of flexibility provided by Fibre Channel topologies, redundancy can be configured into a storage configuration in many ways. Using redundant HBAs attached to separate switches and/or directors, storage systems can be configured to isolate against HBA failure, cable failure, or failures at the switch or I/O controller level. Fibre Channel's distance and performance capabilities enable many applications to benefit from increased redundancy and disaster recovery, including in-house disaster recovery.

Cabling and GBICs: Fibre Channel cabling includes copper for distances up to 30m and fiber-optic cable for distances to 10km and beyond. Mixed-media topologies are fully supported in a Fibre Channel environment, with conversion being handled by GBICs (small interface modules that house a transceiver for a particular medium). The GBIC provides an adapter type of function and enables hubs or switches to support multiple media types such as copper and fiber optics.

Fibre Channel hubs: A Fibre Channel hub provides much the same functionality as an Ethernet hub or concentrator. A hub provides self-healing capabilities using port bypass circuitry to prevent a device failure or physical change from disrupting the loop. A hub is essentially a loop in a box that simplifies cabling and increases loop resiliency. Hubs can also be used to create entry-level SANs that can be migrated to switch-based fabric environments, thus reducing the cost per port. On one hand, hubs provide simple and easy-to-implement "starter" SANs for small environments at a low cost. On the other hand, Fibre Channel hubs provide shared bandwidth and access, which can result in performance degradation as more host systems are added, the size of the loop or number of devices is increased, or traffic increases.
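The shared-versus-switched bandwidth point can be illustrated with rough arithmetic; this sketch assumes a 100MBps link rate and is purely illustrative:

```python
# Illustrative arithmetic: shared loop (hub) vs. switched fabric bandwidth.
LINK_MBPS = 100  # one Fibre Channel link at 100MBps

def per_device_bandwidth_hub(devices):
    """On a hub/loop, all attached devices share one link's bandwidth."""
    return LINK_MBPS / devices

def per_device_bandwidth_switch(devices):
    """On a non-blocking switch, each port gets the full link rate."""
    return LINK_MBPS

for n in (2, 8, 16):
    print(f"{n:2d} devices: hub {per_device_bandwidth_hub(n):6.2f} MBps/device, "
          f"switch {per_device_bandwidth_switch(n):6.2f} MBps/device")
```

The hub's per-device share shrinks as the loop grows, while the switched fabric scales, which is why the article recommends switches once traffic or device count climbs.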
Switches and directors: A Fibre Channel fabric consists of one or more switches or directors that provide increased bandwidth, as opposed to the shared bandwidth of hubs. A Fibre Channel switch provides the same function as a standard network switch, in that it provides scalable bandwidth between various sub-nets, segments, or loops. Unlike a hub or loop, which has shared bandwidth, a switch provides scalable bandwidth as users or devices are attached. Switches are used to create fabrics by interconnecting various loops or segments with Inter Switch Links (ISL). Switches can also be used to isolate local traffic to particular segments, much like traditional network switches isolate LAN traffic.

Figure 3: Interconnecting switches can increase bandwidth between ports and improve overall SAN performance.

A Fibre Channel director is a large port count, non-blocking, scalable enterprise-class switch with full redundancy. Fibre Channel directors support multiple protocols, including FICON, SCSI, and IP concurrently. A Fibre Channel director can be used to implement large SANs ranging from hundreds to thousands of ports with less complexity, given the number of native ports and fewer ISLs required. Director-class products enable multiple SAN "islands" or smaller switches to be brought together to simplify management, similar to how a large IP router/switch like a Cisco Catalyst 6500 ties a LAN together. When directors and switches are configured together, the director may be referred to as a core device and the switches as edge devices.

Bridges and routers: A Fibre Channel bridge, or router, provides the ability to migrate existing SCSI devices to a Fibre Channel SAN environment. On one side of the bridge are one or more Fibre Channel interfaces, and on the other side are one or more SCSI ports. The bridge enables SCSI packets to be moved between Fibre Channel and SCSI devices.
Other new bridges or routers include Fibre Channel to iSCSI for accessing storage over Ethernet and Fibre Channel to ATM gateways for SAN/WAN.

Fibre Channel subsystems: Current Fibre Channel storage devices include JBOD and RAID disk arrays, solid state disks, and tape drives and libraries. Most Fibre Channel RAID arrays today still have SCSI disk drives and Fibre Channel host interfaces.

SAN software: SAN software today includes backup packages to access Fibre Channel tape devices, file- or data-sharing software, and volume managers to provide host-based mirroring, disk striping, and other volume and file system capabilities. SAN software also includes data replication, virtualization, remote mirroring, extended file systems, shared file systems, network management, and serverless backup.

The key to configuring storage for performance and database applications is to avoid contention or bottlenecks. So, when creating a SAN for database environments, avoid making the mistake of trying to use a single Fibre Channel interface or loop to support all of your storage. Instead, use multiple Fibre Channel HBAs to spread I/O devices such as RAID arrays on different interfaces to avoid contention.

The simplest and easiest way to implement a SAN is to buy a "SAN in a box," an enclosure that essentially includes all necessary SAN components. As a next step, you might implement small production SANs, based on hubs or switches, that enable groups of systems to share storage and resources. A subsequent step would be to interconnect various sub-SANs, with zoning or volume mapping to isolate storage to specific host systems for data integrity. Volume mapping, or masking, enables a shared storage device such as a LUN on a RAID array to be mapped to a specific host system. Volume mapping ensures that only the authorized or mapped host can access the LUN in a shared storage environment. The main advantages of using hubs for simple SANs in the past were low cost and availability.
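The volume mapping (LUN masking) idea described above amounts to a lookup from host identifiers to permitted LUNs. This is a hypothetical sketch with invented host names; real arrays enforce the policy in firmware or management software:

```python
# Hypothetical LUN-masking table: which hosts may see which LUNs
# on a shared RAID array. Host names and LUN numbers are invented.
lun_map = {
    "host-db01":  {0, 1, 2},   # database server sees LUNs 0-2
    "host-web01": {3},         # web server sees only LUN 3
}

def can_access(host, lun):
    """True if the array should expose this LUN to this host."""
    return lun in lun_map.get(host, set())

assert can_access("host-db01", 1)
assert not can_access("host-web01", 0)    # masked, preserving data integrity
assert not can_access("host-unknown", 3)  # unmapped hosts see nothing
```

Only the mapped host can reach a given LUN; every other initiator on the shared fabric is simply never shown it.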
End users are now shifting toward switches as a starting point and toward directors to connect multiple sub-SANs or create larger SANs. The shift toward switches and directors is being driven by reduced cost per port, increased functionality, management tools, and interoperability. To increase the bandwidth between a host and a SAN, additional HBAs can be added and attached to separate switch or director ports. As shown in Figure 3, using switch ports to interconnect switches or directors can increase overall port count; load-balancing is important to prevent saturating or causing blockage on these ISLs.

Tips and comments

Whether you are ready to implement a SAN or you are investigating the technology for future implementation, the following are some points to consider:

- SANs can be implemented in phases and can include existing storage devices;
- Costs for SAN components are dropping, while features, functions, and interoperability are increasing;
- Similar to a standard network environment, which may include sub-nets or switched segments, you can configure a SAN with multiple sub-SANs or switched segments where certain systems and storage can be isolated and mapped to specific hosts;
- Fibre Channel directors can be used as large high-performance switch or core devices as well as being combined with smaller switches configured as edge devices;
- SAN software for functions such as data sharing, file replication, mirroring, and other applications will continue to evolve; and
- Fibre Channel is not the only possible infrastructure for SANs. Products based on an early version of the iSCSI standard are starting to appear, which will allow end users to build a SAN with standard Ethernet/IP networks.

Greg Schulz is an FC/9000 market development manager at Inrange Technologies (www.inrange.com) in Mt. Laurel, NJ.
- Fabric: A collection of one or more switches that combines to create a virtual fabric where the various endpoints (ports or buses) have virtual connections or cross points to each other in a non-blocking manner. Non-blocking access means that ports do not have to share common bandwidth as in a hub or concentrator, thereby improving I/O performance.
- Logical unit numbers (LUNs): LUNs describe a logical or physical device and are referred to as logical physical volumes or partitions.
- Upper-level protocols (ULPs): ULPs operate at the FC-4 level in the Fibre Channel specification. ULPs include SCSI (SCSI_FCP), TCP/IP, VI, FICON, and ATM.
- Virtual Interface (VI): VI is designed for high-speed, low-latency memory-to-memory or system-to-system messaging.
- Volume mapping: A method for mapping specific storage devices or volumes to particular host systems.
- Zoning: A method for creating virtual storage pools using host-based software, HBAs, or switches.
Specific applications (some versions of eDirectory, for example) might provide alternate utilities that back up NICI when the application backs up its own data. In this circumstance, see the application's documentation. Use the information in this chapter when the application does not provide these alternate utilities.

Backing up and restoring NICI requires two things:

- Backing up and restoring directories and files
- Backing up and restoring specific user rights on those directories and files

The exact sequence of events required is platform-dependent. NICI stores keys and user data in the file system and in system-specific and user-specific directories and files. The NICI installation program protects these directories and files by setting the proper permissions on them, using the mechanism provided by the operating system. Uninstalling NICI from the system does not remove these directories and files; therefore, the only reason to restore these files to a previous state is to recover from a catastrophic system failure or a human error. Also, overwriting an existing set of NICI user directories and files might break an existing application.

When you back up and restore NICI, it is critical that you maintain the exact permissions on the directories and files. NICI's operation and the security it provides depend on these permissions being set properly. Typical commercial backup software should preserve permissions on the NICI system and user directories and files. You should check your commercial backup software to see if it does the job before you do a custom backup of NICI.

You should always back up the existing NICI directory structure and its contents, if any, before doing a restore. If you lose the machine key, it is unrecoverable. Because the user data and keys could be encrypted by using the machine key, losing it results in a permanent loss of user data. To do a restore of NICI only, you must understand which specific files must be restored.
During restoration, it is important that the correct access rights be restored for the correct owner. On UNIX and Windows systems, the name of the user-specific directory reflects the ID of the owner, but on both systems the owner ID might change between the time of the backup and the time of the restore. It is important for security reasons that you know which account is being restored and that you assign the directory name and access rights accordingly. The existence of a user account on the system with the same ID as an account that was backed up does not mean that the current account is the actual owner of the information being restored.
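As an illustration of the "preserve exact permissions" requirement, the sketch below (hypothetical code, not a NetIQ-supplied tool; the file name "nicifk" and layout are stand-ins) shows how a tar archive carries permission bits alongside the file data:

```python
import os
import stat
import tarfile
import tempfile

# Hypothetical sketch: back up a NICI-style key file with tar so that
# its permission bits travel with it. Paths here are fabricated.
src = tempfile.mkdtemp()
secret = os.path.join(src, "nicifk")       # stand-in for a NICI key file
with open(secret, "w") as f:
    f.write("key material")
os.chmod(secret, 0o600)                    # owner-only access, as NICI sets

# A tar archive records each member's mode, uid, and gid alongside its
# data, which is exactly the metadata a NICI restore must bring back intact.
archive = os.path.join(src, "nici-backup.tar")
with tarfile.open(archive, "w") as tar:
    tar.add(secret, arcname="nicifk")

# Confirm the permission bits were captured in the archive.
with tarfile.open(archive) as tar:
    info = tar.getmember("nicifk")
print(f"on disk: {oct(stat.S_IMODE(os.stat(secret).st_mode))}, "
      f"archived: {oct(info.mode)}")
```

The same principle applies to whichever backup tool you use: verify, as the chapter advises, that it records and restores mode and ownership, not just file contents.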
Singh B. (University of Sydney), MacDonald L.M. (CSIRO), Kookana R.S. (CSIRO), Van Zwieten L. (Australian Department of Primary Industries and Fisheries), and 12 more authors. Soil Research | Year: 2014

The application of biochar technology for soil amendment is largely based on evidence about soil fertility and crop productivity gains made in the Amazonian Black Earth (terra preta). However, the uncertainty of production gains at realistic application rates of biochars and lack of knowledge about other benefits and other concerns may have resulted in poor uptake of biochar technology in Australia so far. In this review, we identify important opportunities as well as challenges in the adoption of biochar technology for broadacre farming and other sectors in Australia. The paper highlights that for biochar technology to be cost-effective and successful, we need to look beyond carbon sequestration and explore other opportunities to value-add to biochar. Therefore, some emerging and novel applications of biochar are identified. We also suggest some priority research areas that need immediate attention in order to realise the full potential of biochar technology in agriculture and other sectors in Australia. © 2014 CSIRO.
<urn:uuid:6e1d93ff-9f89-4a4f-9169-270efa69b3ff>
CC-MAIN-2017-04
https://www.linknovate.com/affiliation/environment-protection-authority-132395/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00527-ip-10-171-10-70.ec2.internal.warc.gz
en
0.914585
248
2.578125
3
Photo: Press conference on the introduction of the ADA Restoration Act, July 26, 2007. On July 26, 1990, a landmark piece of legislation became law. The Americans with Disabilities Act (ADA) was signed by President George H.W. Bush, who called it "the world's first comprehensive declaration of equality for people with disabilities." What was first intended to increase access to physical spaces such as government offices, improve employment opportunities and create real social integration now includes the technological realm as well. The ADA addresses the need to make telephone communications services accessible to individuals who have impaired hearing or speech. Yet according to a Brown University study, only 54 percent of federal Web sites and 46 percent of state Web sites meet the World Wide Web Consortium (W3C) disability guidelines. "When the original version of the ADA was enacted 17 years ago, the Internet was not a factor," explains Ron Graham, writer of the Access Ability blog. "However, as times and technologies evolve, so should our laws." Today marks the 17th anniversary of the ADA. In those 17 years, much has improved for people with disabilities -- many have gained better employment, and accessibility has become more prevalent in everyday life. Yet for some, the glimmer of hope has been snuffed out simply because of advancements in medicine and court decisions. The Supreme Court has been chipping away at the protections of the ADA, leaving millions of citizens vulnerable to a narrow interpretation of the law. Take Tony Coelho, for example: Coelho has epilepsy, and was on the receiving end of discrimination much of his life. In 1998 the Supreme Court decided, in Sutton v. United Airlines, that the effects of "mitigating measures" (such as taking medication) are required to be considered in determining if someone has a disability under the ADA.
So because of the improvements in anti-seizure medications and other medical devices, Coelho and others with epilepsy are no longer protected under the Act. The irony is that the Americans with Disabilities Act was written by Coelho. People with other disorders and disabilities, such as diabetes and muscular dystrophy, and those who use hearing aids are also "not disabled enough" to be protected under the ADA -- people who were intended to be. "The rulings that have been handed down under current guidelines allow employers to say somebody is too disabled to do a job, but not disabled enough to be covered by the ADA," said Graham. "Just because a person manages his/her disability with prosthetics, hearing aids, or medications does not mitigate the existence of the disability or its impact on the total functionality of that individual." Restoration: Today, the ADA Restoration Act of 2007 was introduced in Congress. This bi-partisan legislation means to restore the initial meaning of the ADA, ensuring that being "not disabled enough" no longer hampers the lives of people with varying degrees of disability. "The language change in the ADA Restoration Act removes the hurdle of people claiming discrimination having to first prove the degree of disability, thus allowing the discrimination claim to be heard rather than collectively dismissed because the claimant failed to demonstrate they were a party covered by the law," continued Graham. "The Supreme Court's interpretation has created a vicious circle for Americans with disabilities," said Congressman Jim Sensenbrenner, co-sponsor of the Act. "It has created a broad range of people who benefit from 'mitigating measures' such as improvements in medicine, who still experience discrimination from employers, yet have been labeled 'not disabled enough' to gain the protections of the ADA. This is unacceptable." House Majority Leader Steny H. Hoyer pointed out the issues the bill intends to improve:
"Among other things, the bipartisan House bill -- which already has more than 130 co-sponsors -- will restore the original intent of the ADA by:" "The fact is," Hoyer continued, "the Supreme Court has improperly shifted the focus of the ADA from an employer's alleged misconduct, and onto whether an individual can first meet -- in the Supreme Court's words -- a 'demanding standard for qualifying as disabled.'" In an official proclamation, current President Bush stated: "On the anniversary of the Americans with Disabilities Act (ADA), we celebrate our progress towards an America where individuals with disabilities are recognized for their talents and contributions to our society ... I call on all Americans to celebrate the vital contributions of individuals with disabilities as we work towards fulfilling the promise of the ADA to give all our citizens the opportunity to live with dignity, work productively, and achieve their dreams."
<urn:uuid:e99a29d1-1718-4de4-8a06-2d810a16e5be>
CC-MAIN-2017-04
http://www.govtech.com/health/Americans-With-Disabilities-Act-Sees-Possible.html?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00034-ip-10-171-10-70.ec2.internal.warc.gz
en
0.965176
931
3.1875
3
James Gosling, the inventor of Java, discusses the unique technologies to be used by Sun's new real-time application server. SAN FRANCISCO -- James Gosling, the father of Java, has been working on some innovative technologies to address unique needs, such as real-time application servers and refactoring of large systems. And Gosling is expected to demonstrate the real-time application server he has been working on at the JavaOne conference here on May 19. Gosling, a Sun Microsystems vice president and fellow, in an interview on May 16 here, told eWEEK that Sun has had a real-time JVM (Java Virtual Machine) "that's been out there for a while and it's a virtual machine that was designed for pretty high-end real-time uses -- the kind of thing you could control an F-16 with. It's got timing numbers in the five to 10 microsecond range." So he said he did a few tweaks on the Sun application server to run it on top of the real-time VM. "And we ended up with numbers that are really cool," he said. The system features a real-time garbage collector with intense scheduling, priority management and priority inversion control -- all the little components it takes to make real-time systems really real-time, he said. System requests can take milliseconds or microseconds, "but there are occasionally requests that take as long as a second -- between garbage collector pauses, paging pauses, queuing pauses, threading pauses," Gosling said. In one benchmark, the system ran a maximum roundtrip time of 15 milliseconds, Gosling said. However, there is no free lunch, he said. "The real-time app server, and actually all the real-time systems, have sort of a tradeoff between deterministic timing and throughput," Gosling noted. "So in order to get deterministic timing, you have to give up some throughput. But when you want deterministic timing, you just don't load the system very heavily."
Gosling said Sun's target for the technology is anybody who runs application servers and needs really fast, deterministic timing. Yet Gosling admits that the real-time application server, "for almost everybody is really kind of silly. Because most people don't need stuff that is this tightly constrained. "But the Wall Street folks, who are just anal about every last millisecond -- because every last millisecond in terms of getting trades in is a really big deal -- they'll happily give up a certain percentage of throughput to get tighter timing guarantees." Sun has a prototype of the real-time application server running, "and we've got a little bit of an early access program going on," Gosling said with a smile. "It's taking two quite solid and proven technologies and putting them together in a somewhat unconventional way -- the real-time VM and the app server," he said. Meanwhile, one of the projects Gosling worked on a few years ago, a refactoring engine known as Jackpot, has made a resurgence. Refactoring is the process of restructuring an existing body of code, altering its internal structure without changing its external behavior, Gosling said. "We have, after much rebuilding of internal infrastructure, early access versions of that [Jackpot] and it allows you to do customizable refactoring," he said. The tool enables developers to write their own refactorings "very easily," he said. It even features its own specialized scripting language for refactoring, "and that's looking pretty wonderful," he gushed. The Jackpot tool is targeted at developers primarily in larger enterprises that have vast amounts of code in big, complex applications. "A lot of enterprises have coding standards and one of the things you can do with this refactoring engine is establish a number of patterns that detect coding standard violations," Gosling said. "And then you just run that continuously against all your code."
Gosling said the tool works at a very large scale, which makes it particularly useful "for some of the quite enormous applications out there -- like three million lines of code or more." And once a system gets that big, even simple things in a code base become difficult to change, Gosling said. "The world gets very fragile, because you try to change this one little API and it has this ripple effect all over the world" of your code. Indeed, Gosling asks, "How do you search and replace something in over a million lines of code?" The Jackpot engine will do that, he said. The tool will help developers transform code fragments in large bodies of code, he said. The biggest system Sun tested the Jackpot refactoring engine on was eight and a half million lines of code, which included all of NetBeans, all of the Java Development Kit and all of Sun's tools, Gosling said. "It was basically all the software that Sun owns jammed together as one big system," he said. "But you can actually find systems bigger than that. And systems of that complexity are pretty tough to change."
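A toy illustration of the kind of codebase-wide, pattern-driven rewrite described above. Jackpot itself matches syntax trees rather than raw text, and the API names below are invented for the example -- only the shape of the task is the same:

```python
import re

# Toy version of a codebase-wide rewrite: migrate every call of a
# deprecated API to its replacement. A real engine like Jackpot works on
# syntax trees, not text; the API names here are invented for illustration.

PATTERN = re.compile(r"Logger\.warn\(")
REPLACEMENT = "Logger.warning("

def rewrite(source: str) -> str:
    """Replace every occurrence of the deprecated call in one pass."""
    return PATTERN.sub(REPLACEMENT, source)

code = 'if failed:\n    Logger.warn("disk full")\n    Logger.warn(msg)\n'
print(rewrite(code))
```

Run across millions of lines, a pass like this is mechanical; the hard part Jackpot addresses is matching structure rather than strings, so renames inside comments or unrelated identifiers are not touched.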
<urn:uuid:01a13b47-5320-4b74-af66-fffdfdf7db42>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Application-Development/Sun-Aims-for-Speed-and-Control-with-RealTime-App-Server
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00244-ip-10-171-10-70.ec2.internal.warc.gz
en
0.960099
1,097
2.515625
3
When talking about the role of optical switches, it could be said that without switches there would be no communication networks. Although this view is a bit exaggerated, it underlines the importance of optical switches. In the first generation of telecommunication networks, namely the telephone switching system, large numbers of switches were used to form switching units that carried out circuit switching between users. Today, with the development of DWDM technology, the all-optical network is gradually emerging as the next-generation communication network, attracting more and more attention and becoming a research hot spot. In a DWDM-based all-optical network, optical switches are indispensable for converting and transferring optical signals of different wavelengths across the network. The optical switch is the core component for switching and even for protecting circuits. Optical switches are widely used and play an irreplaceable role in optical networks. Technical Indexes of Optical Switches When evaluating a new optical switch, seven technical indexes must be considered. Types of Optical Switches An optical switch has one or more input ports and two or more output ports, usually described as a 1xN or NxN optical switch. Optical switches built on different principles and technologies have different characteristics and suit different applications. Depending on its fabrication technology, an optical switch can be classified as a mechanical optical switch, an opto-micro-mechanical optical switch, a MEMS (Micro-Electro-Mechanical Systems) optical switch, or another type. Among them, the mechanical optical switch and the MEMS optical switch are the most mature and most commonly used in the field. In addition, there are also liquid crystal, thermo-optic, acousto-optic, waveguide, solid-state and magneto-optic switches.
On the other hand, based on application, optical switches can be divided into mechanical optical switches, rack-mount optical switches, benchtop optical switches and so on. Below is a brief introduction to some commonly used types. Opto-Mechanical Optical Switch The mechanical optical switch has a long development history and is currently the most widely deployed type. These devices achieve switching by moving a fiber or other bulk optic elements by means of stepper motors or relay arms. The benefits of the traditional mechanical optical switch are low insertion loss (<2 dB), high isolation (>45 dB), and insensitivity to polarization and wavelength. In general, opto-mechanical optical switches collimate the optical beam from each input and output fiber and move these collimated beams around inside the device. This allows for low optical loss, and allows distance between the input and output fiber without deleterious effects. The drawbacks of the traditional mechanical optical switch are its slow switching time and its bulk compared with the alternatives, which make it difficult to build large switch matrices. However, with the development of technology, newer micro-mechanical devices overcome this. The new generation of opto-micro-mechanical optical switch offers wider bandwidth in a compact, small package, which can significantly reduce the number of switch elements in a matrix and the corresponding number of drives. MEMS Optical Switch The MEMS (Micro-Electro-Mechanical Systems) optical switch is a free-space micro-optical switch made of semiconductor material. It is an advanced optical switch technology that has attracted wide attention. The MEMS optical switch is compact, lightweight and easy to scale, combining the advantages of the mechanical and waveguide optical switches while overcoming their defects.
Because it integrates electrical, mechanical and optical functions in a single device, it can transparently carry services at different rates and of different types, and it is now widely used in industry. Thermo-Optic Switch This technology is commonly used to make miniature optical switches. In general, thermo-optic switches are based on waveguides made in polymers or silica. For operation, they rely on the change of refractive index with temperature created by a resistive heater placed above the waveguide. Their slowness does not limit them in current applications. There are two basic types: the digital optical switch (DOS) and the interferometer optical switch. Acousto-Optic Switch In this kind of switch, an acoustic wave is used to control the deflection of light. Because there are no moving parts, it is more reliable. In general, the loss of a 1×2 acousto-optic switch is lower than 2.5 dB. Waveguide Optical Switch The waveguide optical switch is a newer type of optical switch built on a waveguide structure. Electro-optic, acousto-optic, thermo-optic and magneto-optic effects are all used in waveguide optical switches. Thanks to its small size, the waveguide optical switch is widely applied in OXC equipment. Magneto-Optic Switch The principle of the magneto-optic switch is the Faraday rotation effect. Compared with the traditional mechanical optical switch, it offers faster switching and higher stability. In addition, compared with other non-mechanical optical switches, it has a lower driving voltage and little crosstalk. The magneto-optic switch will therefore be a very competitive type of optical switch in the future. Liquid Crystal Optical Switch The working principle of the liquid crystal optical switch is based on polarization control: along one path light is reflected by the polarizer, while along the other path light passes through.
Because liquid crystal has a high electro-optic coefficient, it is one of the most effective electro-optic materials. Additionally, the switching speed of the liquid crystal optical switch can reach the sub-microsecond level, and with continued progress it may reach the nanosecond level in the future. Optical Bypass Switch Beyond the types above, there is also the optical bypass switch, whose name often confuses users. An optical bypass switch is an optical switch with a protection-switching function. It is typically used for network failure recovery. It provides a permanent and trouble-free access port for in-line network security and monitoring devices. The optical bypass switch automatically switches network traffic through added in-line devices or bypasses devices that are about to be removed. Using a heartbeat, the optical bypass switch protects network traffic against both signal and power loss on the attached in-line device. This kind of optical switching technology is also used in many other devices and is widely applied in optical line protection for PDH, SDH, C/DWDM, power communication and CATV systems. Application & Prospect of Optical Switches The optical switch plays a very important role in the optical network: it is not only the switching core of key equipment in WDM networks but also a key component throughout optical networks. The main applications of optical switches are as follows: With the development of optical transport network technology, new optical switch technologies are constantly emerging while existing ones continue to improve. As optical transmission networks develop toward ultra-high speed and large capacity, network survivability and network protection switching and recovery become the critical issues.
Optical switches play an important role in this area of protection and recovery. To keep pace with continual network upgrades, the switching-matrix sizes of optical switches may continue to grow, and their switching speeds will face ever higher requirements. In short, large-capacity, high-speed, low-loss optical switches will be needed in future networks and will play an increasingly important role in the development of optical networking.
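The decibel figures quoted earlier (insertion loss under 2 dB, isolation above 45 dB) translate to power ratios via the standard conversion, sketched here:

```python
import math

# Standard decibel-to-power-ratio conversion behind the figures quoted
# above: insertion loss under 2 dB, isolation above 45 dB.

def db_to_power_ratio(db_loss):
    """Fraction of optical power remaining after a loss given in dB."""
    return 10 ** (-db_loss / 10)

# A 2 dB insertion loss still passes about 63% of the input power:
print(f"2 dB loss -> {db_to_power_ratio(2.0):.3f} of input power")

# 45 dB of isolation means the blocked port leaks only ~0.003% of the power:
print(f"45 dB isolation -> {db_to_power_ratio(45.0):.1e} leakage")
```

This is why a sub-2 dB loss is considered low: well over half the light survives the switch, while 45 dB of isolation suppresses the unwanted path by more than four orders of magnitude.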
<urn:uuid:a1f0b52a-1d1a-4dab-97d4-5a8d39a3a5dd>
CC-MAIN-2017-04
http://www.fs.com/blog/technology-and-application-of-optical-switch-in-optical-network.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00152-ip-10-171-10-70.ec2.internal.warc.gz
en
0.920207
1,629
3.515625
4
For decades, wind farms have inspired and frustrated engineers devising reliable sources of renewable energy. Traditional, land-based farms can't generate the horsepower to do anything other than serve as supplemental energy sources. This image shows an offshore wind farm in Denmark. The generally constant surface of the ocean means wind blows much stronger than it does over terrain. In several Eastern Seaboard states, offshore wind farms like this are in various stages of development. Photo courtesy of Sandia.gov
<urn:uuid:c760ee73-2f45-4454-ad92-0278d45aa2f0>
CC-MAIN-2017-04
http://www.govtech.com/technology/Green-Initiatives-Offshore-Wind-Farms-Increase.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00152-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951406
96
3.375
3
Today, 100 organizations in over 40 countries worldwide celebrate Safer Internet Day. In the European Union (EU), conferences and campaigns showcase already existing safer Internet activities of the private and the public sector, from filtering technologies to media literacy programs. The purpose is to raise awareness -- in particular at schools, among parents and teachers -- about the best ways for protecting minors in an online environment of growing importance for our daily lives. "The Internet offers tremendous opportunities to all. But many remain unaware of its darker side, from child pornography to sexual grooming online by pedophiles," said EU Information Society and Media Commissioner Viviane Reding. "Today, I am calling upon all decision-makers in the private and in the public sector to help make the Internet a safer place also for the most vulnerable of our society. In today's digital age learning how to avoid online pitfalls is a valuable life skill that all young people need to know. For this purpose, we need to spread the message about safer Internet use among teachers and parents as well as children themselves." To mark Safer Internet Day, the results of a competition to create Internet safety awareness material, as part of a worldwide blogathon, will be published. The competition involved more than 200 schools in 29 countries, which teamed up to create safety material during the last three months. Entries in the competition addressed one of three themes: e-privacy, "netiquette", and the power of image.
Safer Internet events organized in the EU at national level include this year the following: - In Germany, visitors are being quizzed on their Internet safety knowledge and video clips are being broadcast by over 10 TV stations and shown in over 250 cinemas - In the Netherlands, Princess Maxima is the special guest at an event featuring theatre, music and stories - In Portugal, a national contest for schools on safer Internet use is being launched and awareness sessions are taking place in schools nationally - In Luxembourg, a week-long exhibition on safer Internet use has been running - In Bulgaria, the results of a nationwide safer Internet competition for school children are being presented, with the participation of around 1000 children - In Slovenia, young people are showcasing art projects and Slovenian national television is broadcasting Internet safety clips. The United Kingdom is holding the Crossing Borders and Dissolving Boundaries conference in London. This conference, sponsored by both the Home Office and the Cyberspace Research Unit at University of Central Lancashire, is aimed to increase awareness of the risks that exist in the online world, including issues such as cyberbullying and social networks. The conference will also discuss the challenges associated with successfully educating children and young people, parents, teachers and others involved in education and child services, about the risks and opportunities associated with the Internet and social networking. UK Home Office Minister Vernon Coaker, under-secretary of state for policing, security and community safety, said "Protecting children from harm is of the highest priority for the government. We will do everything we can to ensure children are protected -- whether that is in communities or on the Internet. "The Internet brings huge opportunities for children -- but also big risks which we all need to be aware of. 
"It's particularly important that people such as teachers, youth group leaders and other child welfare workers are fully aware of the issues surrounding the Internet."
<urn:uuid:376e29e5-809d-4798-838e-fbc4b73ef6db>
CC-MAIN-2017-04
http://www.govtech.com/security/Safer-Internet-Day-2007.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00574-ip-10-171-10-70.ec2.internal.warc.gz
en
0.949416
679
2.703125
3
Google Reveals 'Little Box Challenge' Rules for Contest's $1M Prize Whoever succeeds in building a better, smaller inverter "will help change the future of electricity," he wrote. "A smaller inverter could help create low-cost microgrids in remote parts of the world. Or allow you to keep the lights on during a blackout via your electric car's battery. Or enable advances we haven't even thought of yet." The competition calls for registered teams to submit a technical approach and testing application for their project by July 22, 2015, and up to 18 finalists will be notified of their selection for final testing at the testing facility in October 2015, according to the rules. Those 18 entrants will be required to bring their inverters in person to a testing facility in the United States by Oct. 21, 2015, for reviews and judging. The grand prize winner is expected to be announced in January 2016, according to Google. The idea of the Little Box Challenge was first previewed by Google in May, but few details were initially released, according to an earlier eWEEK report. Improved inverters are needed because by 2030, roughly 80 percent of all electricity will flow through the devices and other power electronic systems, making them critically important for future electricity infrastructure and use, according to Google. Google, which is a huge consumer of electricity for its modern data centers, offices and operations around the world, is always looking for ways of conserving energy and using renewable energy sources. The company has been making large investments in wind power for its data centers since 2010. Energy production is known to have a huge impact on Earth's climate. The company has a goal of powering its operations with 100 percent renewable energy in the future. In January 2013, Google announced an investment of $200 million in a wind farm in western Texas near Amarillo, as the company continued to expand its involvement in the renewable energy marketplace.
Google has also invested in the Spinning Spur Wind Project in Oldham County in the Texas Panhandle. Other Google renewable energy investments include the Atlantic Wind Connection project, which will span 350 miles of the coast from New Jersey to Virginia to connect 6,000 megawatts of offshore wind turbines; and the Shepherds Flat project in Arlington, Ore., which is one of the world's largest wind farms with a capacity of 845 megawatts. Shepherds Flat began operating in October 2012. Today's power inverters are cooler-sized boxes that are used in homes equipped with solar panels, according to Google. They convert direct current (DC) power generated by the panels to alternating current (AC) power that can be used in homes and businesses. They're big and expensive relative to the systems they serve.
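A minimal numeric sketch of the waveform an inverter must synthesize from its DC input; the 120 V RMS / 60 Hz figures are standard US line values assumed for illustration, not taken from the article:

```python
import math

# Sketch of the AC output an inverter must synthesize from DC: a 60 Hz
# sine wave at US line voltage. 120 V RMS / 60 Hz are standard US values
# assumed here for illustration, not figures from the article.

PEAK = 120 * math.sqrt(2)   # ~170 V peak corresponds to 120 V RMS
N = 10_000                  # samples across exactly one period

samples = [PEAK * math.sin(2 * math.pi * k / N) for k in range(N)]
rms = math.sqrt(sum(v * v for v in samples) / N)
print(f"peak = {PEAK:.1f} V, RMS = {rms:.1f} V")  # RMS = peak / sqrt(2)
```

The engineering challenge in the contest is not this waveform math but doing the conversion at high power density, with small passive components and little waste heat.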
<urn:uuid:129a4d9f-d071-4c21-bfe8-eb92a342fdd1>
CC-MAIN-2017-04
http://www.eweek.com/cloud/google-reveals-little-box-challenge-rules-for-contests-1m-prize-2.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00538-ip-10-171-10-70.ec2.internal.warc.gz
en
0.970011
548
2.65625
3
The next time you buy a computer, you'll likely get one tied to the ancient computing past --- Unix. Yes, you heard that right, Unix. Here's why. Unix was originally developed at AT&T's Bell Labs research center in the early 1970s, became a favorite in universities, and from there migrated to commercial tech startups. A darling of programmers, it was notably user-unfriendly. I first encountered it in the pre-Web Internet days, when I had to use it to access a variety of now-vanished Internet resources. It made DOS look simple. So why might your next computer run Unix? Because the descendants of Unix now rule the computing world. Various variants of Unix were used by Apple to develop its Darwin operating system, which in turn is at the core of both Mac OS X and iOS. The free open-source Linux operating system is also a Unix-like operating system, and Android is a Linux-based operating system. And these days, Android, Mac OS X, and iOS rule the computing world, not Windows. Smartphones and tablets, after all, are merely computers in a small, convenient, mobile form factor. And a recent Gartner report shows just how much the descendants of Unix dominate computing. In 2012, 505.5 million Android devices and 212.8 million iOS and Mac OS X devices shipped, for a total of 718.3 million devices, compared to 346.5 million for Windows. And that doesn't take into account any other variants of Unix, such as Linux servers and computers. By 2014, 1.12 billion Android devices and 338.1 million iOS and Mac OS X devices will ship, according to Gartner, for a total of 1.46 billion devices, compared to 363.8 million for Windows, which means that descendants of Unix will outsell Windows by better than a four-to-one margin. Luckily for you, you won't have to face Unix's original dauntingly convoluted commands. Instead, you'll find various user-friendly graphical interfaces driven largely by touch. They're big productivity boosters.
But if you look underneath them, you'll find some of the guts of an operating system first developed 40 years ago.
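The "better than four-to-one" margin can be checked directly from the Gartner figures quoted above:

```python
# Checking the shipment arithmetic against the Gartner figures quoted
# above (all numbers in millions of units).
android_2014 = 1120.0            # "1.12 billion Android devices"
apple_2014 = 338.1               # iOS and Mac OS X combined
windows_2014 = 363.8

unix_descendants = android_2014 + apple_2014
ratio = unix_descendants / windows_2014
print(f"2014: {unix_descendants:.1f}M vs {windows_2014}M -> {ratio:.2f} : 1")

# 2012, for comparison: 718.3M vs 346.5M, roughly two-to-one.
print(f"2012: {718.3 / 346.5:.2f} : 1")
```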
<urn:uuid:8429a0d5-5084-4c92-b325-f2c404beb37b>
CC-MAIN-2017-04
http://www.itworld.com/article/2705718/data-center/why-your-next-computer-will-run---unix-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00538-ip-10-171-10-70.ec2.internal.warc.gz
en
0.951123
455
2.78125
3
Tech View: Network Science and the Internet: Lessons Learned To demonstrate the ability of network science to predict the behavior of large-scale complex networks, network scientists are applying their techniques to the Internet. Network science is a brand-new discipline that relies on large datasets and computational power to study large-scale networks and to find properties that are common to them. These networks might be biological, technological (power grids, communication networks including the Internet), or social, but the premise of network science is that they share essential features that can be used to predict their behavior. The immediately apparent similarity among networks, regardless of their own unique complexity, is their high-level topology: nodes connected to other nodes via links over which information or data is passed. This distilled topology is easily represented as a graph, and for these graphs network scientists, using tools from mathematics and techniques from statistical physics, build models to predict the networks’ behavior. In the case of the physical Internet, nodes are devices such as routers and switches. The physical connectivity of these nodes has to be inferred from measurements since the links between the nodes cannot in general be directly inspected. The data that network science has relied on came from an early study in which traceroute, a widely used tool for determining the path of a packet through a network, was used to “get some experimental data on the shape of multicast trees.” Although traceroute was not originally intended for such a purpose and the original data collectors were very much aware of their data’s limitations, their study represented an improvement over traditional approaches because of their use of actual data, incomplete as it was. 
When the traceroute data was used to infer the physical Internet, what stood out when graphing the data was that a few nodes had many connections, while most nodes had only a few. This observation is a hallmark of power-law node degree distributions where the frequency of an event decreases very slowly as the size of the event increases. For example, very large earthquakes occur only very rarely, but small earthquakes occur often. Power law distributions are especially popular with physicists for whom they often confer the ability to make predictions. The finding of power law node degree distributions for the Internet came as a big surprise and didn’t fit with the classical random graph models that have been studied by mathematicians during the past 50 years; these models are unable to capture the observed high variability in node degrees. To account for this new power law phenomenon in graphs representing real-world complex networks, network scientists developed novel graph models capable of reproducing the observed power law node degree distributions. Scale-free models, the term applied to these new models, gained further legitimacy when mathematicians placed the physicists’ largely empirical results on solid grounds by providing rigorous proofs. The new models also carried new implications for the robustness of the Internet—since they predict the presence of highly connected nodes (or hubs) in the core of the network; the existence of such hubs means the Internet is vulnerable to attacks concentrated on these critical nodes. This is the much-publicized Achilles heel of the Internet. But do the measurements support the claims made by network science regarding the Internet? Since the network science approach and conclusions are almost entirely data-driven, this question cannot be answered unless the underlying data—the traceroute data—is rigorously examined; something that was not done at the beginning. 
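The earthquake comparison above can be made concrete with a few illustrative numbers: a power-law tail shrinks far more slowly than an exponential one, so very large events remain plausible rather than vanishingly rare:

```python
import math

# Illustrative numbers only: tail probabilities P(X >= x) for a
# Pareto-style power law (alpha = 2) versus an exponential distribution.
# The power-law tail shrinks slowly, so very large events stay plausible
# -- the earthquake pattern described in the text.

def power_law_tail(x, alpha=2.0):
    return x ** (-alpha)            # valid for x >= 1

def exponential_tail(x, rate=1.0):
    return math.exp(-rate * x)

for x in (2, 10, 50):
    print(f"x={x:>2}: power law {power_law_tail(x):.1e}, "
          f"exponential {exponential_tail(x):.1e}")
```

At fifty times the typical scale, the power-law tail is still measurable while the exponential tail is effectively zero; that high variability is what the traceroute-derived degree data appeared to show.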
The problems start with the use of traceroute to describe the Internet's physical structure. Firstly, traceroute operates at the IP layer, not the physical layer, and therefore cannot accurately describe the Internet's physical topology. The Internet from an engineering standpoint is not a single topology but many. Where users and traceroute "see" a simple layout of nodes (routers) linked together, an engineer sees a stack of layers, each of which performs a particular function using its own specific protocols. This stacked, or layered, architecture is one reason for the great success of the Internet. Applications running at one layer don't need to know anything about other layers. And changes at one layer have no impact on other layers. But there are other, just as substantial, problems with traceroute and thus with the data collected with it. Given that the traceroute data is largely inadequate to infer the physical Internet, a different, more realistic approach is needed to analyze the Internet. A more grounded approach would be to reverse-engineer the decisions made by ISPs and network planners when designing the actual physical infrastructure. Engineers don't design networks with power laws or any other mathematical construct in mind, but rather by figuring out the most efficient way to get the anticipated traffic from one place to another within the parameters of what is feasible and cost-effective. In the language of mathematical modeling, they try to solve, at least heuristically, a constrained optimization problem. In particular, the selection of links between nodes is anything but random, and therein lies the main difference between the network scientists' scale-free models and an engineer's approach. When feasibility and economic constraints are considered, it makes sense to place high-degree nodes at the edge of the network where ISPs multiplex their customers' traffic before sending it towards the backbone.
The view that emerges through reverse-engineering—high-degree nodes at the network edge with low-degree (though high-capacity) nodes at the core—directly conflicts with the scale-free models, which place the highly connected nodes at the network core. In short, the scale-free modeling approach for the physical Internet collapses under scrutiny of the data and when viewed from an engineering perspective. Physical devices in the Internet can and do fail; for this reason, engineers built in redundancy and designed routing protocols with the ability to bypass nonworking devices for working ones. This system has worked very well, and the robustness of the Internet to router or link failures has exceeded anyone's expectations as the network has grown from a handful of nodes to millions of nodes over a 40-year span. The current problems with network science, at least as it applies to the Internet, are that it has depended on inaccurate, incomplete data, has produced models from that data that conflict with reality, and has paid little or no attention to model validation. Still, network science may have a role to play if it learns the lessons from being carelessly applied to the Internet. A relevant mathematical theory is needed for correcting the Internet's real shortcoming: the trust model on which the system was originally designed. This model has been broken for some time; viruses, worms, and spam are the evidence, but the more serious threat is that the critical protocols (e.g., BGP) that ensure the Internet's viability could be hijacked to do real damage. Currently network researchers are working with engineering insights, but as the Internet scales ever larger, there is more urgency for a relevant mathematical theory to aid and ultimately replace engineering intuition. But any new model would have to build on rigorously vetted data and incorporate domain knowledge.
If these steps are taken, a more nuanced and true-to-life framework could be developed and used for predicting the behavior of tomorrow's Internet.

Walter Willinger, a member of the Information and Software Systems Research Center at AT&T Labs Research in Florham Park, NJ, has been a leading researcher into the self-similar ("fractal") nature of Internet traffic. His paper "On the Self-Similar Nature of Ethernet Traffic" is featured in "The Best of the Best - Fifty Years of Communications and Networking Research," a 2007 IEEE Communications Society book compiling the most outstanding papers published in the communications and networking field in the last half century. More recently, he has focused on investigating the topological structure of the Internet and on developing a theoretical foundation for the study of large-scale communication networks such as the Internet.

SIAM Fellow (2009)
AT&T Fellow (2007), for fundamental contributions to understanding the behavior of large data networks
ACM Fellow (2005), for contributions to the analysis of data networks and protocols
IEEE Fellow (2005), for the analysis and mathematical modeling of Internet traffic
2006 "Test of Time" Paper Award from ACM SIGCOMM
Co-recipient of the 1996 IEEE W.R.G. Baker Prize Award from the IEEE Board of Directors
W.R. Bennett Prize Paper Award from the IEEE Communications Society (1994)

Tech View: Views on Technology, Science and Mathematics
Sponsored by AT&T Labs Research
This series presents articles on technology, science and mathematics, and their impact on society -- written by AT&T Labs scientists and engineers. For more information about articles in this series, contact: email@example.com
Argonne supercomputer shows mechanisms behind supernovas

This Argonne supercomputer visualisation shows the mechanism behind the violent death of a short-lived, massive star. The image shows energy values in the core of the supernova. Different colours and transparencies are assigned to different values of entropy. By selectively adjusting the colour and transparency, the scientists can peel away the outer layers and see what is happening in the interior of the star.
The one thing flash drive recovery had going for it was easy direct access to the underlying storage medium. Inside the casing of that USB thumb drive or camera card is an industry-standard NAND flash memory chip. Whereas hard disk platters can realistically only be read by the drive that recorded to them, that memory chip can be removed from the circuit board and read by any number of device programmers available online. The real challenge is turning that raw memory dump into something useful. What you get directly from the memory chip looks nothing like what your PC sees when the drive is plugged in a USB port. Instead, it looks more like a jigsaw puzzle spread out on your coffee table. Intermixed with user data are bits of information the drive uses for its own internal operation. To get at individual files, this information must be stripped, error correction codes applied, and finally all the pieces reassembled in the correct order. Imagine our dismay the first time we opened the case of a failed USB thumb drive to find something like this staring back at us. What the heck is this?!?! Well, that, sports fans, is the latest craze- a monolithic USB thumb drive or “monolith” for short. Instead of many discrete components installed together on a circuit board, many flash drives today contain what appears to be a single chip with four gold fingers to plug into a USB port. The monolith has many things going for it- smaller size, water-proof, and cheaper to manufacture (I imagine, as this seems to be the sole driving force behind every innovation in the storage industry these days). But what protects the drive from the elements also protects the drive from recovery engineers. How do you get at the NAND flash memory when it’s buried in an enclosure like this? Is there even a separate memory element to be gotten at or is this some sort of radical new design? Naturally, we had to get to the bottom of this. 
Check out the video below to learn more about how Gillware recovered data from a failed monolithic USB flash drive.
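The reassembly pipeline described above — strip the drive's internal bookkeeping, apply error correction, put the pieces back in order — begins by separating user data from the per-page "spare" (out-of-band) areas of the raw dump. Here is a minimal sketch of that first step; the 2048-byte page with a 64-byte spare area is one common NAND geometry, but real chips and controllers vary, and real recoveries must also handle scrambling, ECC, and page reordering.

```python
def strip_spare_areas(raw_dump: bytes, page_size: int = 2048, spare_size: int = 64):
    """Split a raw NAND dump into user data and per-page spare (OOB) areas.

    This is only the first step of a recovery pipeline; it does not undo
    the controller's scrambling, error correction, or page ordering.
    """
    step = page_size + spare_size
    data, spare = bytearray(), bytearray()
    for off in range(0, len(raw_dump) - step + 1, step):
        data += raw_dump[off:off + page_size]
        spare += raw_dump[off + page_size:off + step]
    return bytes(data), bytes(spare)

# Tiny fabricated dump: two pages of user data interleaved with spare bytes.
dump = (b"A" * 2048 + b"x" * 64) + (b"B" * 2048 + b"y" * 64)
user, oob = strip_spare_areas(dump)
print(len(user), len(oob))  # → 4096 128
```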
Innovative buildings make data center temperature monitoring easier
Friday, Mar 22nd 2013

For data center operators, temperature monitoring has long been one of their top ways to combat a facility's main concern: overheating. Servers, if left unchecked, will generate so much heat that the temperature in the storage location will increase to the point that nodes and other vital server components will break down, meaning that data centers are, in essence, their own worst enemy. To combat this issue, facility managers turned to air conditioning units to keep server room conditions sufficiently cool. According to Energy Star, a joint program between the U.S. Environmental Protection Agency and the Department of Energy, data centers would be kept at around 55 degrees F. However, keeping a data center this cool requires operators to use enormous amounts of energy. This often presented data center operators with a catch-22 situation in which there was no ideal solution. On one hand, data centers that are too cold will make annual energy bills costly. On the other hand, server rooms that are too warm will require constant repairs because of equipment failures. To address this seemingly insurmountable problem, facilities are increasingly turning to more unique temperature monitoring setups.

Turning data centers into hot water

Swedish data center company Bahnhof will be channeling its excess server heat into hot water tanks at its new proposed facility. The organization's latest project, according to Wired, is transforming a 130-square-foot old gas plant in Stockholm into a five-level location in which each floor is a server room. Instead of using air conditioning units to keep all of the equipment in the 35-megawatt facility cool, the excess heat generated by the servers will be used to heat up water. That hot water will then be sold to Stockholm authorities, thereby ensuring that Bahnhof is contributing to municipal energy supplies.
"All this heat generated in the data center will be pumped out by a heat pump in the district heating system," said Bahnhof CEO Jon Karlung, according to Wired. This temperature monitoring system, according to Karlung, will ensure that the new data center has as low a carbon footprint as possible. Since the facility will not be using as much electricity, operators can rest easy knowing that less fossil fuel needs to be burned to keep the location operational. "I think it's an elegant mix: culture and technology and business," Karlung said. "They kind of strengthen each other." Additional temperature monitoring techniques While the Stockholm facility's temperature monitoring technique is energy efficient and effective, it is not a realistic alternative for many data centers. Bahnhof is building the location with energy efficiency in mind, which is a luxury most existing managers do not have. To better manage costs at legacy data centers, business development executive Jack Pouchet offered a few energy efficiency tips in a recent Data Center Knowledge article. One key tip that Pouchet said could help data center operators is to use temperature monitoring equipment. For example, Energy Star reported that for every 1 degree F warmer a server room is, facilities can reduce their energy bills by up to 5 percent. However, even incremental changes in server temperature put a location at greater risk. But, by using a temperature sensor to accurately keep track of internal conditions, facility operators can ensure that server rooms never get too hot and that energy bills are sufficiently reduced. "Take temperature, humidity and airflow management to the next level through containment, intelligent controls and economization," he wrote. "From an efficiency standpoint, one of the primary goals of preventing hot and cold air from mixing is to maximize the temperature of the return air to the cooling unit." 
Additional energy saving advice from Pouchet included using more efficient equipment, utilizing virtualization to reduce the number of servers needed and optimizing infrastructure.
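The Energy Star rule of thumb quoted above — up to roughly 5 percent savings per degree F of setpoint raise — lends itself to a quick back-of-the-envelope estimate. This sketch assumes the percentage compounds per degree, which is our modeling assumption, not something the Energy Star guidance specifies.

```python
def estimated_cooling_savings(annual_bill: float, degrees_raised: float,
                              pct_per_degree: float = 0.05) -> float:
    """Rough estimate of annual savings from raising the cooling setpoint.

    Applies the "up to ~5% per degree F" rule of thumb, compounded per
    degree. Treat the result as an upper-bound illustration, not a quote.
    """
    remaining = annual_bill * (1 - pct_per_degree) ** degrees_raised
    return annual_bill - remaining

# Raising the setpoint 5 degrees F on a $100,000 annual cooling bill:
print(round(estimated_cooling_savings(100_000, 5), 2))  # → 22621.91
```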
Of all the environmental monitoring equipment available, a humidity sensor may not seem like the most important of instruments. For facilities managers concerned with keeping operations running at all costs, external environmental issues such as electricity usage and temperature frequently rank among the major potential sources of problems. However, if left unaccounted for, humidity levels that are either too low or too high can wreak havoc by destroying vital equipment and rendering supplies and research useless. The very best operations managers in just about every industry know that even something as simple as moisture can be damaging, which is why a humidity sensor is one of their best allies.

How a humidity sensor can benefit operators in any industry

No facility, regardless of who owns it or how it is used, is immune to the unique dangers posed by unchecked moisture and humidity levels.

Data Center Infrastructure Management: This is perhaps the industry most aware of why humidity is a vital environmental factor to manage. If levels are too high in a server room, then excess moisture will begin to collect on vital wiring and other equipment. This can cause machines to corrode and servers to short circuit. On the flip side, if levels are too low then excess static electricity can build up in the data center.

Food Service: In an industry that depends on having fresh produce supplies on hand at all times, moisture levels must be monitored with a humidity sensor. Too much moisture and food supplies could rot more quickly, while not enough moisture in the air can dry out produce and make it inedible.

University and Medical Research: Scientists must account for all environmental factors, including humidity, to make sure their tests and experiments are as accurate as possible. Nothing is more devastating to a researcher who spends countless hours on a critical project only to have the results rendered moot by moisture damage.
For all of your humidity monitoring and equipment needs regardless of industry, be sure to turn to the experts at ITWatchDogs.
The fast-moving world of Internet time has left the federal government behind when it comes to protecting your private information. That was the central conclusion of a report issued this week by watchdogs at the Government Accountability Office, which said that ensuring the privacy and security of personal information collected by the federal government remains a challenge, particularly in light of the increasing dependence on networked information systems that can store, process and transfer vast amounts of data. For example, GAO has found that many challenges arise in protecting the privacy of personal information because of agencies' use of Web 2.0 and data-mining technologies. From the GAO: "These challenges include updating federal laws and guidance to reflect current practices for collecting and using information while striking an appropriate balance between privacy concerns and the government's need to collect information from individuals. They also involve implementing sound practices for securing and applying privacy protection principles to federal systems and the information they contain. Without sufficient attention to these matters, Americans' personally identifiable information remains at risk." It's not like the feds haven't tried to protect personal information. The Privacy Act of 1974 and the E-Government Act of 2002 both have provisions to protect personal data gathered by the government, but time has passed both of their protections by, the GAO found. GAO identified privacy issues in three major areas:
- Applying privacy protections consistently to all federal collection and use of personal information. The Privacy Act's protections only apply to personal information when it is considered part of a "system of records" as defined by the act. However, agencies routinely access such information in ways that may not fall under this definition.
- Ensuring that use of personally identifiable information is limited to a stated purpose.
Current law and guidance impose only modest requirements for describing the purposes for collecting personal information and how it will be used. This could allow for unnecessarily broad ranges of uses of the information.
- Establishing effective mechanisms for informing the public about privacy protections. Agencies are required to provide notices in the Federal Register of information collected, categories of individuals about whom information is collected, and the intended use of the information, among other things. However, concerns have been raised about whether this is an effective mechanism for informing the public.
In the end the GAO recommended two big steps federal agencies should take:
- Ensure the implementation of a robust information security program as required by FISMA. Such a program includes periodic risk assessments; security awareness training; security policies, procedures, and practices, as well as tests of their effectiveness; and procedures for addressing deficiencies and for detecting, reporting, and responding to security incidents.
- Prevent data breaches by limiting the collection of personal information, limiting the time such data are retained, limiting access to personal information and training personnel accordingly, and considering the use of technological controls such as encryption when data need to be stored on mobile devices.
The GAO report says it and agency inspectors general have continued to report on vulnerabilities in security controls over agency systems and weaknesses in their information security programs, potentially resulting in the compromise of personal information. Federal agencies reported 13,017 such incidents in 2010 and 15,560 in 2011, an increase of 19%.
DELL EMC Glossary

Network-attached storage (NAS) is an IP-based file-sharing device attached to a local area network. NAS serves a mix of clients and servers over an IP network. A NAS device uses its own operating system and integrated hardware and software components to meet a variety of file service needs. Network-attached storage (NAS storage) enables server consolidation by eliminating the need for multiple file servers, and storage consolidation through file-level data access and sharing. NAS typically uses multiple protocols to perform filing and storage functions. These include TCP/IP for data transfer; SMB (CIFS) and NFS for remote file service; and NFS, SMB and FTP for data sharing.

Who uses network-attached storage, and why

Organizations across a wide range of industries use network-attached storage (NAS) to:
- Consolidate server and storage infrastructure
- Streamline data access and file sharing across a heterogeneous client and server environment
- Simplify management and increase efficiency
- Increase scalability
- Strengthen data protection and security

How network-attached storage works

A NAS device is an open-system computer with storage capacity connected to a network that provides file-based data storage services to other devices on the network. NAS uses standard file protocols such as SMB (Server Message Block) and NFS (Network File System) to allow Microsoft Windows, Linux, and UNIX clients to access files, file systems, and databases over the IP network.

Benefits of network-attached storage

Network-attached storage allows organizations to:
- Provide comprehensive data access and file sharing
- Increase efficiency with centralized storage
- Gain flexibility to support UNIX and Windows clients
- Simplify management
- Scale storage capacity and performance
- Increase availability with efficient data replication and recovery options
- Secure data with user authentication and file locking
Criminals use computers. Police forces around the world use computers, too. But when police need to investigate a possible crime, the methods they are allowed to use vary a lot from one country to another. Police authorities in Germany have been prohibited from "hacking" into a suspect's computer by a February 2007 supreme court ruling. The German court determined that hacking techniques couldn't be used because no legal framework exists at present. This ruling leaves room for further debate, and Germany's Interior Minister Wolfgang Schäuble will reportedly push for the legal changes needed to allow the police to perform such activities, known as "online house searches". German law enforcement would like to search the contents of suspects' computers without the suspects knowing about it. Privacy advocates are concerned about such measures. This formed the basis of a survey we conducted – should legitimate law enforcement authorities, such as the police, be allowed to use computer applications that would in other circumstances be considered malware? Should they be allowed to use hacking techniques to investigate suspects? Out of the 1,020 respondents, 23% were in favor, 11% were undecided, and 65% were against. Approximately 70% of the responses were from one of five locations: Sweden, Germany, Great Britain, Finland, and the United States. Over 91% of Germans were against such techniques, while only 56% of Britons were against them. Considering the geopolitical factors and events such as the 2005 London bombings might explain the differences between these countries. Respondents' comments noted that many would be willing to allow secret hacking techniques as long as law enforcement first obtained a warrant. Could such "official" hacking software be a good thing? If the Internet is seen as a training camp for terrorists (as Minister Wolfgang Schäuble has suggested), then hacking tools would be very useful and a potential benefit. 
Evidence could be gathered quickly and covertly from individuals operating within isolated cells. Covert collection of evidence is essential if all the cell members are to be identified in a timely fashion. Recent reports from the UK say that Scotland Yard has uncovered evidence of a bomb plot against the headquarters of Telehouse Europe. Detectives recovered computer files showing that suspects had targeted a "high-security internet hub" in London. On the other hand – much of this benefit is predicated on the theory that the tools will be properly handled. Police are generally trained in law enforcement and criminal investigation, not data security. It could be exceedingly difficult to corral and maintain hacking software. Once a suspect's computer is compromised, it might be infected by malware that then causes harm to innocent others. There is also the problem of the amount of data collected. "Online house searches" could yield such quantities of data that the signal is overwhelmed by noise. The UK plot was uncovered with a series of raids. Police are trained to do physical investigations. Does the potential benefit of data collection with hack tools outweigh the potential distraction from the police's primary task? And how should antivirus companies react to the existence of such malware? Detect it? Avoid detecting it on purpose? Avoid detecting hacking software used by governments… of which country? Germany? USA? Israel? Egypt? Iran? So should police hack? As it often is in life, even if the question is simple and straightforward, it might be hard to come up with a simple answer for it.
Certain data types, such as text documents, may transfer faster if the data is compressed while in transit. ExpeDat can apply ZLIB compression to transferring files, which typically reduces the size of compressible data types to about one-third of their original size. Checking the compression checkbox will enable content compression for individual file transfers. The data will be compressed while in transit and automatically decompressed on the other side. Compression is not available for Streaming Folders or Object Handlers. Compression should only be used on files known to contain compressible data, such as text documents. For these data types, compression may increase transfer speeds by three times or more, on top of MTP's throughput gains. ExpeDat uses the same compression algorithm as Zip and Gzip. Data which compresses well using those utilities will also compress well with ExpeDat. Compression is very CPU intensive. Data files which are already compressed, encoded, or encrypted will not benefit from compression. If the server or client is CPU limited, enabling compression may severely reduce performance. If you are on a very fast network, even compressible data may move faster without compression due to the CPU overhead of compressing it. See Tech Note 0014 for tips on deciding when to enable compression. The best way to tell whether a given file type will benefit from compression when transferring to a particular server is to experiment with the setting both on and off to see which is faster. Compression also uses extra memory, as network data must be buffered for the compression engine. If your network is very fast and your latency is very high, then you may need to increase the size of the compression buffer. The buffer should be at least twice the bandwidth-delay product of your network path. (Multiply the path bandwidth in bits per second by the latency in milliseconds, then divide by 4194304000. The result is the number of megabytes recommended.)
The default value of 16 is adequate up to a gigabit network with 75ms latency, or a 100 megabit network with 750ms latency. The compression buffer setting only affects buffers used by the MTPexpedat client. If you find it necessary to increase this value, you should increase the server StreamSize as well. Server administrators may choose to disable compression using the NoCompression option. Since compression changes the amount and timing of data being sent, the progress bar may be disabled or inaccurate while using compression.
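The buffer-sizing rule in parentheses above is easy to put into a few lines of code. The function name here is ours, not part of ExpeDat; the divisor 4194304000 bundles together the millisecond-to-second conversion, the factor of two, the bits-to-bytes conversion, and the bytes-to-megabytes conversion.

```python
def recommended_compression_buffer_mb(bandwidth_bps: float, latency_ms: float) -> float:
    """Recommended compression buffer size in MB: at least twice the
    bandwidth-delay product, per the rule quoted in the ExpeDat docs."""
    return bandwidth_bps * latency_ms / 4_194_304_000

# The two example paths from the docs (gigabit/75 ms and 100 megabit/750 ms)
# have the same bandwidth-delay product, and both come out just under 18 MB,
# close to the 16 MB default described as adequate for those cases:
print(round(recommended_compression_buffer_mb(1_000_000_000, 75), 2))
print(round(recommended_compression_buffer_mb(100_000_000, 750), 2))
```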
All U.S. passports issued since June 2007 are electronic passports, or epassports, which have advanced digital security features. All epassports have a small gold logo printed on the cover. The epassport is based on a contactless smart card chip embedded in the cover. Think of it as a computer with special security software inside your passport. Contactless refers to the fact that it is a wireless device, but it can only communicate over very short distances of an inch or two. U.S. epassports do not use RFID tags (Radio Frequency IDentification), which are used mostly for simple, insecure object-related identification and tracking, such as the whereabouts of warehouse pallets and products.

How epassports work

The contactless smart card chip securely stores information and uses its computer to provide enhanced security that protects the privacy and safety of the passport holder. When the government makes the epassport book, it places a digital version of the identifying information printed inside, including the photograph, on the epassport chip. The information is "signed" using a type of electronic seal, called a digital signature, which prevents any alteration of the stored electronic data. Passport terminals at border control communicate with the epassport chip and check the "seal" to prove that the passport was issued by the U.S. government and that the information stored in the chip has not been changed. Several other U.S. epassport security features prevent anyone from "skimming" or reading data out of the passport without your knowing it, by standing next to you with a special reader, for example.

1. There is a radio frequency shield in the passport cover, so it cannot be read or even detected by any reading device when it is closed.

2. The epassport chip is "locked" with a key that is unique to each epassport. The border agent must first physically open your passport book to get the printed key to access the chip's stored information.
3. The smart card chip encrypts, or scrambles, the data before transmitting it to the passport terminal, making the information useless to any eavesdropper.

4. The epassport chip only communicates over very short distances of one or two inches once it has been opened and unlocked.

A more secure travel document

The epassport is a far more secure travel document than a traditional paper passport because it provides an additional way of authenticating the printed information with a sealed electronic copy. It is virtually impossible to counterfeit an epassport, because no one can duplicate the authentic U.S. digital seal on the electronic data. Furthermore, any change to the chip information breaks the seal, so tampering is evident to a border agent. If stolen, your passport picture could be replaced by a fraudulent one on the printed data page, but the digital copy of your picture on the chip can't be changed without detection. The sealed digital photograph ensures that you, as the bearer of the passport, are indeed the person to whom it was issued. At passport control, border agents can compare the person, the printed page and the chip information. These all have to match to confirm the identity of the person presenting the passport.

Four things to remember about the epassport
– Based on smart card technology
– Virtually impossible to counterfeit
– Far more secure travel document than a traditional paper-only passport
– Built-in digital security protects your privacy and safety
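The "seal and verify" idea can be sketched in a few lines. Real epassports use public-key digital signatures under the ICAO PKI, not a shared-secret HMAC; the HMAC below is only a stand-in to show how any change to the sealed data breaks the seal, and the record format and key are made up for the example.

```python
import hashlib
import hmac

ISSUER_KEY = b"demo-issuer-key"  # stand-in for the issuer's signing key

def seal(record: bytes) -> bytes:
    """Compute a tamper-evident 'seal' over the chip data.

    Real epassports use asymmetric digital signatures, so verifiers never
    hold the signing key; this symmetric sketch only shows tamper detection.
    """
    return hmac.new(ISSUER_KEY, record, hashlib.sha256).digest()

def verify(record: bytes, signature: bytes) -> bool:
    """True only if the record exactly matches what was sealed."""
    return hmac.compare_digest(seal(record), signature)

record = b"DOE<<JANE|USA|photo-hash:abc123"   # fabricated example record
sig = seal(record)
print(verify(record, sig))                              # → True
print(verify(record.replace(b"JANE", b"EVIL"), sig))    # → False
```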
<urn:uuid:f56736bd-14db-4b1f-8ffc-48fbe8affc94>
CC-MAIN-2017-04
https://www.justaskgemalto.com/us/us-electronic-passport/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281574.78/warc/CC-MAIN-20170116095121-00511-ip-10-171-10-70.ec2.internal.warc.gz
en
0.915601
730
3.015625
3
Row, Row, Row Your Bot

"We have taken off-the-shelf kayaks for 500 dollars from LL Bean and turned them into robots," Leonard said. "We now have about 12 kayaks." Leonard said that MIT had been testing the kayak-bots on a lake in Maine until the neighbors noticed the fleet of autonomous kayaks operating by themselves. "They called the local news hotline," he said. MIT has since moved the kayaks to the Charles River near the school. The research then led to a new use. "We're developing autonomous rescue kayaks," Leonard said. He said that these kayaks could also be used to provide communications in flooded areas. "We already have the technology to do this," he said.

NASA's Vladimir Lumelsky said that the space agency was researching ways to use robots in space, but was finding it a very difficult problem. He said that plans to use robots to repair the Hubble Space Telescope had to be scrapped because NASA couldn't figure out how to make it work at an affordable price. Lumelsky said that while NASA was making great progress on manipulator arms, the problem was really sensors. There weren't enough sensors available to provide adequate input to human operators, and the agency wasn't ready to let autonomous robots loose near people.

Closing out the panel was Stephen Welby, director of DARPA's Tactical Technology Office, the organization that's supporting much of today's robotics research. He disagreed with Lumelsky that robotics was too hard to accomplish, saying that his agency had already funded and fielded a large number of successful robotics projects. He disclosed, for example, that the third prototype of the GlobalHawk unmanned aerial reconnaissance vehicle was pressed into service in Iraq and Afghanistan, and by the time the agency got it back, it had racked up more hours aloft than any other aircraft in the Air Force's inventory. "It can do things people can't do," Welby said, "like staying at altitude for 35 hours straight."

Welby also noted that his agency is making great strides in robotics by offering prizes to companies that can accomplish a task for the agency. Most recently, competitors were asked to design a robot that could make its way through the Nevada desert entirely on its own. The first time the contest was run, no robot succeeded, but in the contest last year, five vehicles made the whole trip. Next, Welby said, comes the tough test. DARPA will offer a prize for a robotic vehicle that can operate in traffic. That contest will take place in November 2007. For that task, some improvements are vital. Welby said that sensors are a key to success; another is learning. "They must be able to learn from the environment," he said. But he noted that robotics research is only at the very beginning. "We're at the Wright Brothers stage of robotics," Welby said.
<urn:uuid:719ff6a5-3aea-445c-9697-19c3442b8126>
CC-MAIN-2017-04
http://www.eweek.com/c/a/Application-Development/Robots-Clean-Floors151and-Save-Lives-Panel-Says/1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00199-ip-10-171-10-70.ec2.internal.warc.gz
en
0.983224
631
2.53125
3
On the outside, this pair of robotic legs by a team of US-based researchers just looks like an experiment to see how life-like we can make robots. But a key component within the machine could allow it to help people with serious spinal injuries learn how to walk again. The robotic legs are designed to mimic human neural architecture, musculoskeletal architecture, and sensory feedback pathways, in order to make the robot move exactly like we do. The robot uses an artificial form of a central pattern generator (CPG)--in humans, this is a neural network in the spine that helps us walk in rhythm. An array of different sensors helps this robot walk with the same effortlessness as we mere mortals possess. For instance, load sensors pick up on pressure when the robot's foot touches a surface. While the robot can at present only do a walking gait, it is already helping scientists understand how humans learn to walk. Thanks to the robot, researchers from the University of Arizona were able to conclude that even before babies learn to walk, their bodies already possess a simple CPG, which eventually matures to allow for more complex movements. Ultimately, these legs will help scientists better understand spinal cord injuries and, most significantly, what we can do to help paralyzed individuals walk again on their own. For that reason alone, this development could be pretty incredible. This story, "Robotic legs can walk like a human, could teach people to walk again" was originally published by PCWorld.
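The CPG idea, rhythm emerging from coupled oscillators rather than explicit step-by-step control, can be sketched with two phase oscillators, one per leg, coupled so that they lock half a cycle apart. This is a generic minimal model with made-up constants, not the researchers' actual controller:

```python
import math

def simulate_cpg(steps=5000, dt=0.01, omega=2 * math.pi, k=1.0):
    """Two leg oscillators whose mutual coupling drives them toward antiphase."""
    th1, th2 = 0.0, 0.1                # start nearly in phase
    for _ in range(steps):
        d1 = omega + k * math.sin(th2 - th1 - math.pi)
        d2 = omega + k * math.sin(th1 - th2 - math.pi)
        th1 += d1 * dt                 # forward-Euler integration
        th2 += d2 * dt
    return th1, th2

th1, th2 = simulate_cpg()
phase_gap = (th2 - th1) % (2 * math.pi)
# The gap settles at pi: the legs swing in strict alternation, a walking gait,
# without any controller ever commanding "left, right, left, right" explicitly.
```

No matter how the two oscillators start out, the coupling pulls them into alternation, which is the essential trick a spinal CPG performs.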
<urn:uuid:2414ea36-c0e7-4c32-8261-3ad16385d7d0>
CC-MAIN-2017-04
http://www.itworld.com/article/2723538/it-management/robotic-legs-can-walk-like-a-human--could-teach-people-to-walk-again.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00347-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929131
383
3.703125
4
The UR (uncommitted read) isolation level provides read-through locks, also known as dirty read or read uncommitted. Using UR can help to overcome concurrency problems. When you're using an uncommitted read, an application program can read data that has been changed but is not yet committed. UR can be a performance booster, too, because application programs bound using the UR isolation level will read data without taking locks. This way, the application program can read data contained in the table as it is being manipulated. For example: suppose a user updates some data (changes a name from Sam to Samuel) and it is not yet committed. Now a second user tries to access that data. With UR (uncommitted read), the second user will get the updated data (i.e., Samuel) even though the change has not been committed to the database. So UR is a real relief for projects like yours that involve quick updates and reads.
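In DB2 you get this behavior either by binding the plan or package with ISOLATION(UR), or per statement with the WITH UR clause, e.g. `SELECT NAME FROM EMP WITH UR`. The semantics can be sketched in a toy model (this illustrates dirty-read behavior only, not DB2 internals; all names are made up):

```python
class Row:
    """Toy model of committed vs. in-flight data to contrast UR with a
    committed-read isolation level such as CS (cursor stability)."""

    def __init__(self, value):
        self.committed = value
        self.uncommitted = None            # pending change, not yet committed

    def update(self, value):               # user 1 changes the row in a transaction
        self.uncommitted = value

    def commit(self):
        if self.uncommitted is not None:
            self.committed, self.uncommitted = self.uncommitted, None

    def rollback(self):
        self.uncommitted = None

    def read(self, isolation="CS"):
        if isolation == "UR" and self.uncommitted is not None:
            return self.uncommitted        # dirty read: sees in-flight data, no lock taken
        return self.committed              # committed-read: only committed data

row = Row("Sam")
row.update("Samuel")                       # user 1 updates but has not committed
assert row.read("UR") == "Samuel"          # user 2 with UR sees the uncommitted change
assert row.read("CS") == "Sam"             # a committed-read level would not
row.rollback()
assert row.read("UR") == "Sam"             # the risk: the dirty value never officially existed
```

The last assertion shows the trade-off behind UR's speed: a UR reader can see a value that is later rolled back.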
<urn:uuid:35da35ef-5886-4c1f-b4cc-4320f8b78a61>
CC-MAIN-2017-04
http://ibmmainframes.com/about911.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00163-ip-10-171-10-70.ec2.internal.warc.gz
en
0.932902
199
2.671875
3
The vast majority of the fastest-growing jobs in the United States depend on math and science skills, yet the number of women obtaining science, technology, engineering and mathematics (STEM) degrees continues to stagnate. CDW-G surveyed 300 college women and recent graduates to better understand this trend. “As a leading technology provider to higher education and prominent employer in the technology industry, this is a topic that is important to us,” said Aletha Noonan, vice president of higher education, CDW-G. “We wanted to explore women’s experience in STEM to discuss how we can help build a more inclusive and engaging environment, while contributing to a stronger female STEM pipeline.” CDW-G surveyed two distinct groups: - Women in STEM – students who plan to graduate with a STEM major, or who have graduated with an undergraduate or graduate STEM degree in the last five years - Former STEM students – women who left their STEM major Both groups experienced negative stereotypes, discomfort asking questions in class and a lack of female role models. Almost two-thirds of survey respondents struggled with confidence in STEM. However, there are several actions higher education institutions can take to encourage female STEM students. Survey respondents suggest universities and colleges help connect students with influential females in STEM, create internship opportunities for women and bring in more female role models to speak on campus. “These findings directly correlate to what we see in higher education. Strong role models and internships play a huge part in helping to spark young women’s interest in STEM, boosting their confidence and keeping them engaged,” said Maureen Biggers, director of Indiana University’s Center of Excellence for Women in Technology (CEWiT). 
“To this end, at Indiana University, we launched CEWiT, a center designed to promote the participation, empowerment and achievement of women in technology rich fields.” Released today at EDUCAUSE, CDW-G’s Women in STEM: Igniting Engagement infographic illustrates barriers facing female college STEM students and the role higher education institutions can play in advancing engagement. In August and September 2016, CDW-G surveyed 300 college women in STEM, including 150 current female STEM students – those who intend to graduate with a STEM major or have graduated with an undergraduate or graduate STEM degree in the last five years – and 150 former female STEM students – those students who left their STEM major. The margin of error for this survey is ±5.62% at a 95% confidence level.
<urn:uuid:bdfedbfe-254b-4bf8-aff5-111819f882ab>
CC-MAIN-2017-04
http://www.cdwnewsroom.com/women-in-stem/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280900.71/warc/CC-MAIN-20170116095120-00493-ip-10-171-10-70.ec2.internal.warc.gz
en
0.941975
522
2.609375
3
When the first computer viruses popped up, their behavior was so similar to that of their biological counterparts that security researchers simply chose to appropriate the already existing expression. And it is that very same similarity that has now – years and years later – helped medical researchers glean crucial insights into how a particular virus still manages to avoid being beaten. “It turns out there are a lot of similarities between the way spammers evolve their approaches to avoid filters and the way that the HIV virus is constantly mutating,” writes Microsoft’s Steve Clayton, and so the Redmond giant has offered some of its technology to help the researchers in their quest for developing a cure for the (relatively) modern blight. The project brought together experts from a number of institutions that test potential vaccines for HIV: MIT, Harvard, the Center for the AIDS Programme of Research in South Africa, the Ragon Institute at Massachusetts General Hospital, and the KwaZulu-Natal Research Institute for Tuberculosis and HIV. With all the testing going on, these various institutions have an immense amount of data that had to be analyzed in order to detect patterns in the virus’ evolution. “That’s where David Heckerman and Jonathan Carlson of Microsoft Research, along with a Microsoft Computational Biology Tool called PhyloD, come in,” explains Clayton. “This software enables efficient data mining which then leads to specific cell analysis that helps detail virus patterns for further analysis. PhyloD contains an algorithm, code and visualization tools to perform complex pattern recognition and analysis – enabling Heckerman and his colleagues to learn how different individual immune systems respond to the many mutations of the virus.” Of course, such a tool is not enough – massive computer power is needed in order to make the analysis last days instead of months (or years).
Luckily, Microsoft has that at its disposal as well, so it took only a few days to receive the results. In the end, they “discovered” six times as many possible attack points on the HIV virus. The algorithm that PhyloD employs was originally designed to detect the different tactics that spammers use to try and bypass email spam filters in Hotmail, Outlook and Exchange. And while spam and HIV both still present great problems, interdisciplinary approaches to research such as this one do raise hope for the future.
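The kind of pattern mining involved can be hinted at with a toy association test. PhyloD itself also corrects for the viral phylogeny, but at its core it asks whether an immune-system feature and a viral mutation co-occur more often than chance would allow; a plain one-sided Fisher exact test over a 2x2 table (with hypothetical patient counts) captures that basic idea:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """P(observing >= a co-occurrences by chance) for the 2x2 table
    [[a, b], [c, d]] under the hypergeometric null hypothesis."""
    n, row1, col1 = a + b + c + d, a + b, a + c
    total = comb(n, col1)
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        p += comb(row1, k) * comb(n - row1, col1 - k) / total
    return p

# 20 hypothetical patients: does viral mutation X co-occur with immune marker Y?
#                 mutation X   no mutation
#  marker Y            9            1
#  no marker           1            9
assert fisher_one_sided(9, 1, 1, 9) < 0.001    # strong association
assert fisher_one_sided(5, 5, 5, 5) > 0.5      # no association
```

Run over every (marker, mutation) pair in a large dataset, a test like this surfaces the candidate "attack points" that merit laboratory follow-up, which is why raw computing power matters so much.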
<urn:uuid:eeaa63f5-fe1a-4c30-b7d0-c9f86038be5b>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2011/12/05/microsoft-spam-detecting-algorithm-helps-with-hiv-research/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00309-ip-10-171-10-70.ec2.internal.warc.gz
en
0.948682
484
3.0625
3
Manufacturing Breakthrough Blog
Monday December 19, 2016
One of the major differences between traditional cost accounting and throughput accounting is how inventory is appraised. The simple change of counting money as generated at the time of sale, and not at the time a product is produced, is a shift in reality from a seller’s market to a buyer’s market. Cost accounting assumes that everything produced would eventually be sold (seller’s market). But in a buyer’s market, this is not the case. The bottom line is that you make a profit only when you sell products and not when you produce them. So the inevitable conclusion is that you need to measure sales rather than production. In this sense, throughput per unit for a given product is calculated as sales revenue per unit minus the cost of the raw materials or component parts added, as follows:
- t = throughput per unit
- t = sales revenue per unit – raw material or component parts cost
So the total throughput per period of time for a product is the “t” value multiplied by the total quantity sold for that product. For example, if T1 denotes the total throughput for product 1, then T1 is calculated as follows:
- T1 = throughput per unit of product 1 x quantity (q) of product 1 sold
- = t1 x q1
From this equation, if T represents the total throughput for a manufacturer for a given time period, total throughput is calculated as follows:
- T = the sum of throughput for each of n products sold by the company = T1 + T2 + T3 + … + Tn
Before leaving this discussion of throughput, it’s important to remember that throughput measures output in dollars and not activity (e.g. work-in-process inventory). Activity that does not contribute to sales or the conversion of materials into products sold is essentially waste. Let’s now turn our attention to the subject of inventory. The authors, Srikanth and Umble, explain that their definition of inventory is different than the traditional definition in two important ways.
First, their definition of inventory does not add value to the product as it progresses through the process. Traditional cost accounting dictates that as material progresses through the process, it absorbs both labor and overhead. Because of this, the inventory value of material increases as it is processed through the various steps in the process. If, for example, a part’s raw materials are valued at $100, using the traditional costing method, that same part could be valued at $110 after the first step in the process. After it passes through all of the steps in the process, those same raw materials could grow to $175 or more by the time the part is delivered to the finished goods stocking area. Under the authors’ definition, the value of the part remains unchanged at $100 as it passes through the various processing steps. Therefore, the authors’ definition of inventory is simply the amount of money tied up in materials the company intends to sell.
- I = purchased material value of raw materials, purchased parts, work-in-process and finished goods inventories
The authors rightfully point out that this assumption of increasing “value” is very misleading. Not only has no value been created, it is very possible that value has actually been lost. As materials progress through manufacturing operations, they actually lose flexibility, meaning that they could become limited to single-type products. And if there is no demand for the product, not only was no value (or throughput) created, material was consumed and must be replaced when a different product is ordered. In order to avoid these distortions, the authors value inventory at the original value (or cost) of the material. Labor and other expenses incurred in the production process are accounted for in the next category – operating expense. Operating Expense (OE) includes all of the money spent by the system with the exception of the money spent to purchase inventory (i.e.
truly variable expenses) since this latter expense has already been accounted for in the definition of throughput. Operating Expense is money spent by the company to convert Inventory into Throughput.
- OE = actual spending to turn I into T
The authors explain that there are two critical differences between their definition and the traditional concept of the cost of operations. First, there is no fundamental difference between direct labor and indirect labor because both assist in the conversion of inventory into throughput. Therefore, all personnel-related expenses are included in OE. The second difference is that for the most part, OE includes actual expenses. That is, it counts real money or checks written, as opposed to elements such as variances. Under the traditional cost accounting procedure, if an operator is producing parts at a faster rate than the engineering standard for the operation, then that worker will be generating a positive variance and the cost of operations will be reduced. And this applies even if there is no demand for the product! Under the authors’ definitions, the same situation results in no change in throughput (T), an increase in inventory (I), and no change in operating expense (OE) because the operator’s wages don’t change. The standard cost system views labor costs as infinitely variable, while under the authors’ method (i.e. the Synchronous Management measurement system), normal labor costs (excluding overtime) are viewed as fixed in the short term. In part 4, we will demonstrate a simple example using T, I, and OE to illustrate the two authors’ methodology. As always, if you have any questions or comments about any of my posts, leave me a message and I will respond. Until next time. L. Srikanth and Michael Umble, Synchronous Management – Profit-Based Manufacturing for the 21st Century, Volume One – 1997, The Spectrum Publishing Company, Wallingford, CT
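As a quick numeric sketch of the three measures defined above (hypothetical figures, not from the book):

```python
# Product data: throughput per unit t = sales price - raw material cost.
products = {
    "product_1": {"price": 50.0, "material_cost": 20.0, "sold": 1000},
    "product_2": {"price": 80.0, "material_cost": 35.0, "sold": 400},
}

def total_throughput(products):
    """T = sum over products of (price_i - material_cost_i) * quantity_sold_i.
    Only units actually sold count; units merely produced generate no T."""
    return sum((p["price"] - p["material_cost"]) * p["sold"]
               for p in products.values())

T = total_throughput(products)   # 30 * 1000 + 45 * 400 = 48,000
OE = 35_000.0                    # all conversion spending, including all labor
net_profit = T - OE              # 48,000 - 35,000 = 13,000
```

Note what is absent: no labor or overhead is allocated to individual units, so producing extra unsold units would raise I but leave T and OE, and hence profit, unchanged.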
<urn:uuid:f3b81b61-0b33-4aae-b88f-798c7b0348ef>
CC-MAIN-2017-04
http://manufacturing.ecisolutions.com/blog/posts/2016/december/local-optimum-versus-global-optimum-part-3.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00217-ip-10-171-10-70.ec2.internal.warc.gz
en
0.942118
1,209
2.859375
3
Email Worm Mimail Lives On
15 Jan 2004
A new modification of Mimail sent in a mass spam distribution
Kaspersky Lab, a leading information security software developer, has detected a mass mailing of a Trojan program, small.cz, which downloads Mimail.p, a new version of the Mimail email worm, from a remote server. To date, isolated incidents of infection by this malicious software have been reported in various countries throughout the world. The Trojan has been sent in the guise of a message from the payment system PayPal. The sender's address is falsified as "firstname.lastname@example.org", the message topic appears as "PAYPAL.COM NEW YEAR OFFER", and the attachment is named paypal.exe. When run, the Trojan in the file connects to a remote server, downloads Mimail.p and installs it in the system. Mimail, which was created in Russia and first appeared on the Internet at the beginning of August 2003, is a classic email worm, which spreads via mail messages. The new modification of the worm differs from previous versions only by the fact that it is compressed using UPX. This makes it more difficult for some anti-virus programs to detect Mimail.p. After installation, Mimail.p begins the process of replication. The worm first secretly scans several directories of the infected computer and extracts email addresses. It then sends copies of itself to these addresses using its built-in procedure. The worm has dangerous side effects, which can cause significant harm to users. In particular, Mimail.p tracks the activity of E-Gold and PayPal payment system applications installed on the infected computer. It extracts confidential information and sends it to a number of anonymous addresses belonging to the worm's author. In the same way, the worm steals other confidential data such as user names and passwords for email, access to system information, etc. Protection against Mimail.p has already been added to the Kaspersky® Anti-Virus database. More detailed information about this malicious program can be found in the Kaspersky Virus Encyclopaedia.
<urn:uuid:93677800-9e33-44a8-a81e-ee9a92ff25c9>
CC-MAIN-2017-04
http://www.kaspersky.com/au/about/news/virus/2004/Email_Worm_Mimail_Lives_On
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00217-ip-10-171-10-70.ec2.internal.warc.gz
en
0.913491
436
2.65625
3
The Lego Computer
By Tim Moran | Posted 2011-01-28
Tim Moran’s round-up of offbeat stories includes the most boring day of the 20th century and Hedy Lamarr’s role in cell phone history.
Building an Ancient Computer From Legos
His name is Andrew Carol. By day, he’s an Apple software engineer working on OS X; by night, he’s the digital engineer who created an ancient analog computer—out of Legos! A recent story on Fastcodesign.com explains how he crafted a working replica of the ancient Greek Antikythera Mechanism, circa 100 B.C., which was designed to predict astronomical events such as eclipses. According to the story, the computing device was lost until 1901, when divers discovered it under the sea off Greece. Fast forward 100 years, at which time high-resolution X-ray tomography showed that the ancient Greek engineers had devised a unique “computer,” using gears of great precision, to predict celestial events with uncanny accuracy. In 2010, Carol made a fully functioning replica out of children’s building blocks—1,500 Lego Technic parts and 110 gears. It took him 30 days to design, prototype and build the machine. As someone who spent many hours playing with Legos with my boys (my biggest accomplishment was making a cool tower), I’m awed by Carol’s feat.
Beauty and the Cell Phone
Depending on one’s demographic, the name Hedy Lamarr might not be familiar. The beautiful actress appeared in numerous “B+” movies (Samson and Delilah, Her Highness and the Bellboy, White Cargo, etc.), and died in 2000 at the age of 87. What’s little known about this star is that she was also a mathematician and inventor, and her work in communications paved the way for the cell phone, according to a recent article on the site iO9.com. With avant-garde composer George Antheil, Lamarr patented a “secret communication system” in 1942 based on frequency-hopping, spread-spectrum technology.
During World War II, this technology was used to “keep torpedoes from being detected or manipulated by enemy forces.” This spread-spectrum work formed the basis for the wireless communications boom—cell phones, Wi-Fi, etc. Says the article: “It all flowed from the original patent that Hedy came up with to fight Nazis.” We hear you now, Hedy. A Boring Day in the Life Did you ever wonder what the most boring day of the 20th century was? Nor have I. But computer programmer William Tunstall-Pedoe did, and, like a good software hound, he did something about it. He actually calculated the most objectively dull day since 1900: April 11, 1954. A recent story on Telegraph.com.uk explains that Tunstall-Pedoe created a program called True Knowledge to function as a more intelligent way to search the Internet. But as a sideline—for fun!—he decided to determine what the most unremarkable day in the last 100 years had been. To that end, True Knowledge was fed some 300 million facts about people, places and events in the news, and it eventually determined that April 11, 1954 was the day on which less of import happened than on any other day since the turn of the 20th century. Apparently, no problem is too small or obscure to warrant wasting complex algorithms and huge amounts of computing power.
<urn:uuid:61472442-6cec-4d9f-b920-04b0f336ce37>
CC-MAIN-2017-04
http://www.baselinemag.com/c/a/Innovation/The-Lego-Computer-800547
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00036-ip-10-171-10-70.ec2.internal.warc.gz
en
0.953596
754
2.75
3
(Translated from the original Italian) One more thing to worry about is the real security of satellite infrastructure. In a technological civilization, satellites play a vital role in the management and transmission of information of all kinds. Satellites in fact silently do the work that we enjoy every day, but we often forget this crucial aspect of communications. Are these powerful systems of communication actually safe? Is it sufficient just to be in orbit thousands of miles above our heads in order to ward off the danger of an attack? In using satellites, are we sure that nobody could listen in on our communications? Of course not! The main concern is the possibility of compromising those satellite communications in the context of warfare. Consider that satellite communications are widely used in military applications, particularly in those regions where other communication infrastructures are insufficient or absent, like the Middle East and Africa. Security researchers have demonstrated that satellite phones can be easily intercepted and deciphered. It is concerning enough that any common computer can be used to hack the two encryption systems used to protect satellite phone signals, so anyone with a computer and a radio could conceivably eavesdrop on calls, and a multitude of satellite phones are vulnerable. With a few thousand dollars it is possible, according to the researchers' announcement, to buy the equipment and software needed to intercept and decrypt satellite phone calls from hundreds of thousands of users. The academics have summarized the threat in a single sentence: "Do not Trust Satellite Phones". The two main standard encryption algorithms that have been compromised are known as GMR-1 and GMR-2, which are implemented by the satellite phone operators. The problem really affects only those companies that use the ETSI GMR-1 and GMR-2 encryption algorithms.
The speed with which it is possible to decipher a call is linked to the computing power applied, but keep in mind that with suitable equipment it is possible to decipher the communications in real time. The researchers are convinced that the main problem is related to the encryption algorithms and the "security through obscurity" approach applied: attempting to use secrecy of design and implementation to provide security, while preventing the security community from testing them. In publishing the hacking procedure as a proof of concept, the researchers hoped to prompt the ETSI organization to set new standards based on stronger encryption algorithms. As was already revealed in the past with GSM communications, an approach that hides the algorithms used for encrypting communications is certainly wrong, and represents a risk to the integrity of the overall infrastructure. Due to this incorrect approach in the management of the algorithms, many organizations have implemented extra layers of cipher software in their satellite phones, with the unintended result of increasing their vulnerability. A consequence of the announcement is that satellite handsets with built-in encryption mechanisms based on the hacked algorithms are no longer secure, which could pose a considerable threat to the business and military sectors. Hostile governments and criminals are actually able to monitor satellite phone networks on a large scale. If the situation regarding satellite encryption algorithms is worrying, certainly the security of the satellites themselves is not any better. A report released in 2011, titled the "2011 Report to Congress of the U.S.-China Economic and Security Review Commission", revealed that some U.S.-operated satellites were vulnerable to attacks, and that on more than one occasion attackers had taken control of the systems. Sensitive satellite systems have been successfully breached, according to the report: "Satellites from several U.S.
government space programs utilize commercially operated satellite ground stations outside the United States, some of which rely on the public Internet for 'data access and file transfers,' according to a 2008 National Aeronautics and Space Administration quarterly report.† The use of the Internet to perform certain communications functions presents potential opportunities for malicious actors to gain access to restricted networks." Information regarding several attacks on satellite control systems is in the public domain, and these events have been confirmed also by the National Aeronautics and Space Administration (NASA). Below is a brief list of events:
- On October 20, 2007, Landsat-7, a U.S. earth observation satellite jointly managed by the National Aeronautics and Space Administration and the U.S. Geological Survey, experienced 12 or more minutes of interference.
- On June 20, 2008, Terra EOS [earth observation system] AM–1, a National Aeronautics and Space Administration-managed program for earth observation, experienced two or more minutes of interference. The responsible party achieved all steps required to command the satellite but did not issue commands.
- On July 23, 2008, Landsat-7 experienced 12 or more minutes of interference. The responsible party did not achieve all steps required to command the satellite.
- On October 22, 2008, Terra EOS AM–1 experienced nine or more minutes of interference. The responsible party achieved all steps required to command the satellite but did not issue commands.
In the report, the responsibility for the attacks was assigned to China, but similar hacks can be conducted by any hostile foreign government. We must consider that compromised satellites are a serious risk: the exposure could affect communications in the business and military sectors, and can also cause the loss of sensitive and strategic technological information. My last consideration is related to threats to satellite systems.
In our imagination we make the mistake of considering foreign governments as the only possible sources of attacks. The proof that this view is wrong arrived in recent weeks, when the group Anonymous announced that it had successfully hacked a NASA satellite. The group has also published on Pastebin evidence of its knowledge of a NASA project. Clearly the situation merits a high level of attention given the looming threat. Cross-posted from Security Affairs
<urn:uuid:8f53eb1d-4412-4748-a63f-6a0670529cc0>
CC-MAIN-2017-04
http://www.infosecisland.com/blogview/19993-Hacking-Satellite-Communications.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00338-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94324
1,164
2.984375
3
Researchers from German security consultancy SR Labs have created a whole new class of attacks that can compromise computer systems via ubiquitous and widely used USB-connected devices (storage drives, keyboards, mice, smartphones, etc.) USB devices, as we all know, can carry malware in their flash memory storage, but what researchers Karsten Nohl and Jakob Lell discovered is that it’s possible to reverse-engineer USB devices’ firmware (i.e. the controller chip that makes them function), and reprogram it to contain attack code. This malicious firmware, which they dubbed BadUSB, can be used by attackers to take over a target computer system, redirect the user’s internet traffic by forcing the computer to use a specific DNS server, make the computer install additional malware, change files, spy on the user, and so on. The two are set to present several demonstrations of these attacks at the upcoming Black Hat security conference, and while it’s good to know such attacks are possible, the bad news is that we, as users, can’t do much to prevent them apart from stopping the use of USB devices altogether. The malicious firmware can’t be detected by antivirus solutions, and reformatting the drive does nothing to remove it. And if you don’t have advanced knowledge in computer forensics, it’s practically impossible to make sure that a USB device’s firmware hasn’t been altered. More bad news is that a thusly compromised USB device can infect a computer, but also that a compromised PC, i.e. malware on it, can easily modify the USB devices’ firmware without the user noticing it. The researchers have reprogrammed the controller chips manufactured by Taiwan-based Phison Electronics, and have inserted them in memory drives and Android-running smartphones.
According to Tech2, Taiwan-based Alcor Micro and Silicon Motion Technology also manufacture similar chips, and even though the researchers haven't tested them, it's very probable they can be reprogrammed as well, since chip manufacturers are not required to secure the firmware. "The next time you have a virus on your computer, you pretty much have to assume your peripherals are infected, and computers of other people who connected to those peripherals are infected," Nohl commented for Ars Technica. "No effective defenses from USB attacks are known. Malware scanners cannot access the firmware running on USB devices. USB firewalls that block certain device classes do not (yet) exist. And behavioral detection is difficult, since a BadUSB device's behavior when it changes its persona looks as though a user has simply plugged in a new device," the researchers pointed out. "To make matters worse, cleanup after an incident is hard: Simply reinstalling the operating system – the standard response to otherwise ineradicable malware – does not address BadUSB infections at their root. The USB thumb drive, from which the operating system is reinstalled, may already be infected, as may the hardwired webcam or other USB components inside the computer. A BadUSB device may even have replaced the computer's BIOS – again by emulating a keyboard and unlocking a hidden file on the USB thumb drive. Once infected, computers and their USB peripherals can never be trusted again." There are two options to prevent this type of attack from becoming an everyday occurrence: one, users should never lend their USB devices to other people, nor use devices they received from others; two, controller chip manufacturers should implement defenses that prevent the firmware from being modified by unauthorized parties.
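One of the researchers' points is that a BadUSB device works by presenting an extra persona, e.g. a thumb drive that also registers as a keyboard. As a purely illustrative sketch (not a real detector — the article stresses the firmware itself cannot be inspected from the host — and the device names below are invented), a host-side heuristic could at least flag composite devices that expose both a mass-storage and a HID interface:

```python
# Illustrative heuristic over USB descriptor data. Interface-class codes
# follow the USB-IF defined class codes: 0x03 = HID (keyboards, mice),
# 0x08 = mass storage.
HID = 0x03
MASS_STORAGE = 0x08

def suspicious_devices(devices):
    """Flag devices exposing BOTH a storage and a HID interface --
    a thumb drive that also types keystrokes is a classic BadUSB persona."""
    flagged = []
    for name, interface_classes in devices:
        classes = set(interface_classes)
        if HID in classes and MASS_STORAGE in classes:
            flagged.append(name)
    return flagged

# Hypothetical enumeration snapshot (vendor names are made up):
snapshot = [
    ("Acme ThumbDrive 16GB", [0x08]),        # plain storage: fine
    ("Generic Keyboard",     [0x03]),        # plain HID: fine
    ("Odd Composite Device", [0x08, 0x03]),  # storage + keyboard: flag it
]

print(suspicious_devices(snapshot))  # ['Odd Composite Device']
```

Note that such a check is easily evaded (the malicious persona can appear only later), which is exactly the behavioral-detection difficulty Nohl and Lell describe.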
<urn:uuid:f8f17bc5-5b69-423b-92d1-901cc2d7d8f6>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2014/07/31/malicious-usb-device-firmware-the-next-big-infection-vector/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00394-ip-10-171-10-70.ec2.internal.warc.gz
en
0.931465
751
2.734375
3
What you describe, confusedhelp, is not uncommon, as Animal indicates, and nothing to be concerned about. Cookies are text string messages given to a Web browser by a Web server. Whenever you visit a web page or navigate different pages with your browser, the web site generates a unique ID number which your browser stores in a text (cookie) file that is sent back to the server each time the browser requests a page from that server. Cookies allow third-party providers such as ad serving networks, spyware or adware providers to track personal information. The main purpose of cookies is to identify users and prepare customized Web pages for them.
- Persistent cookies have expiration dates set by the Web server when it passes the cookie and are stored on a user's hard drive until they expire or are deleted. These types of cookies are used to store information between visits to a site and to collect identifying information about the user, such as surfing behavior or preferences for a specific web site.
- Session (transient) cookies are not saved to the hard drive, do not collect any information and have no set expiration date. They are used to temporarily hold information in the form of a session identification stored in memory as you browse web pages. These types of cookies are cached only while a user is visiting the Web server issuing the session cookie and are deleted from the cache when the user closes the session.
Cookies can be categorized as:
- Trusted cookies are from sites you trust, use often, and want to be able to identify and personalize content for you.
- Nuisance cookies are from sites you do not recognize or often use, but which have somehow put a cookie on your machine.
- Bad cookies (i.e. persistent cookies, long-term and third-party tracking cookies) are those that can be linked to an ad company or something else that tracks your movements across the web.
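The persistent-versus-session distinction above comes down to whether the cookie carries an `Expires` or `Max-Age` attribute. A minimal sketch using Python's standard library (the cookie names and values here are invented for illustration):

```python
# A cookie with an Expires/Max-Age attribute is persistent; one without is
# a session (transient) cookie that vanishes when the browser closes.
from http.cookies import SimpleCookie

jar = SimpleCookie()
jar["session_id"] = "abc123"                  # no expiry -> session cookie
jar["prefs"] = "theme-dark"
jar["prefs"]["max-age"] = 60 * 60 * 24 * 365  # one year -> persistent

def is_persistent(morsel):
    return bool(morsel["expires"] or morsel["max-age"])

print(is_persistent(jar["session_id"]))  # False
print(is_persistent(jar["prefs"]))       # True
```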
The type of persistent cookie that is a cause for some concern is the "tracking cookie", because it can be considered a privacy risk. These types of cookies are used to track your Web browsing habits (your movement from site to site). Ad companies use them to record your activity on all sites where they have placed ads. They can keep count of how many times you visited a web page, store your username and password so you don't have to log in, and retain your custom settings. When you visit one of these sites, a cookie is placed on your computer. Each time you visit another site that hosts one of their ads, that same cookie is read, and soon they have assembled a list of which of their sites you have visited and which of their ads you have clicked on. Cookies are used all over the Internet, and advertisement companies often plant them whenever your browser loads one of their banners. Cookies are NOT a "threat". As text files, they cannot be executed to cause any damage. Cookies do not cause any pop-ups or install malware, and they cannot erase or read information from a computer. As Microsoft's description of cookies puts it: "Cookies cannot be used to run code (run programs) or to deliver viruses to your computer." To learn more about cookies, please refer to:
Flash cookies (or Local Shared Objects) and Evercookies are a newer way of tracking user behavior and surfing habits, but they too are not a threat and cannot harm your computer.
1. An Evercookie uses a number of new storage methods to persist cookie data. When Evercookie finds that other types of cookies have been removed, it recreates them so they can be reused over and over.
2. Flash cookies are cookie-like data stored on a computer and used by all versions of Adobe Flash Player and similar applications. They can store much more information than traditional browser cookies, and they are typically stored within each user's Application Data directory with a ".SOL" extension, under the Macromedia\FlashPlayer\#SharedObjects folder.
Unlike traditional cookies, Flash cookies cannot be managed through browser controls, so they are more difficult to find and remove. However, they can be viewed, managed and deleted using the Website Storage Settings panel at Macromedia's support site. From this panel, you can change storage settings for a website, delete a specific website, or delete all sites, which erases any information that may have been stored on the computer. To prevent any Flash cookies from being stored on your computer, go to the Global Storage Settings panel and deselect the option "Allow third-party Flash content to store data on your computer". For more information, please refer to:
As long as you surf the Internet, you are going to get cookies, and some of your security programs will flag them for removal. You can minimize the number of cookies which are stored on your computer by referring to:
<urn:uuid:8eedde99-ed57-4836-88b8-46a1e05fd43c>
CC-MAIN-2017-04
https://www.bleepingcomputer.com/forums/t/448745/sas-and-norton/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00118-ip-10-171-10-70.ec2.internal.warc.gz
en
0.940614
977
3.3125
3
Two posts last week introduced the topic of Location-Based Services (LBS) and an earlier post this week explained Geofencing—which is a common LBS feature. Another common LBS feature is breadcrumbing, but what is it? Breadcrumbing uses a GPS-enabled device to collect historical location data including route, speed, direction, stops and stop duration at specified time intervals. This data is presented on a map as a “breadcrumb trail” of position markers. This data can be used to improve field performance, increase fleet safety and give managers an enhanced view of field activity. So what are some of the uses and benefits of breadcrumbing? - Breadcrumbing can reduce fuel and maintenance costs, increase productivity, improve safety and increase awareness of field activity that can improve overall customer service. - By recording driving speed, businesses can deter speeding and gain valuable documentation in case of accidents or insurance disputes. - Improve field performance by acting as a “supervisor in the field,” protecting against unapproved routes, excessive work breaks and unauthorized stops. - Control off-hours use of vehicles. - Accurate and automatic audit trail that can be compared to time and attendance records to verify actual worker time on duty. - Route optimization plans the most efficient routes and “breadcrumbing” provides confirmation that they were followed. - Breadcrumbing provides data such as travel times between stops, slowdown areas and wait times. By basing calculations on actual, accurate data from the field, route optimization systems can create “real-world” routes that reduce miles driven, improve utilization of fleet equipment and raise mobile worker productivity.
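As a rough sketch of how a breadcrumb trail might be processed, the snippet below computes per-leg speed from timestamped GPS fixes and flags stops. The sample points, field layout and 1 km/h stop threshold are illustrative assumptions, not any vendor's actual format:

```python
# Toy breadcrumb-trail analysis: distance via the haversine formula,
# speed per leg, and a simple speed-threshold stop detector.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def analyze_trail(trail, stop_kmh=1.0):
    """trail: list of (timestamp_seconds, lat, lon) breadcrumbs.
    Returns per-leg speeds (km/h) and indices of legs slow enough to be stops."""
    speeds, stops = [], []
    for i in range(1, len(trail)):
        t0, la0, lo0 = trail[i - 1]
        t1, la1, lo1 = trail[i]
        hours = (t1 - t0) / 3600.0
        kmh = haversine_km(la0, lo0, la1, lo1) / hours
        speeds.append(kmh)
        if kmh < stop_kmh:
            stops.append(i)
    return speeds, stops

# Three fixes 60 s apart: the vehicle moves, then sits still.
trail = [(0, 40.7128, -74.0060), (60, 40.7200, -74.0060), (120, 40.7200, -74.0060)]
speeds, stops = analyze_trail(trail)
print([round(s, 1) for s in speeds], stops)
```

A real LBS platform would add heading, stop duration, geofence checks and map-matching on top of this kind of per-leg arithmetic.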
<urn:uuid:bcf9ee70-3d4a-4426-8fe9-6d7048fa960b>
CC-MAIN-2017-04
http://blog.decisionpt.com/what-is-breadcrumbing
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00540-ip-10-171-10-70.ec2.internal.warc.gz
en
0.929872
347
2.53125
3
iSCSI – What Does It Mean for Your Storage Network?
As the Internet and related activities continue to expand, the amount of data that needs to be stored is also increasing. Enterprises and other organizations require effective ways to store and maintain this data. In recent years, many enterprises have seen a significant increase in the volume of data produced. And this amount of data continues to increase, particularly in Web-based and e-Commerce environments. A good example is e-mail, which impacts worldwide storage by producing more data than is generated by new Web pages. This traffic is typically multimedia intensive. E-mail and Internet-related enterprise/commercial transactions combined have caused a dramatic increase in storable data moving across Internet Protocol (IP) networks. A new method is needed to bring improved storage capabilities to IP networks and reduce the limitations associated with Fibre Channel SANs. The solution, as is widely known, is Internet Small Computer Systems Interface (iSCSI), or SCSI over IP. But what does this new technology mean to your storage environment? This article will answer the following questions:
- What will iSCSI mean to your storage network?
- What will the upcoming availability of iSCSI mean to customers who perhaps had considered storage networking to be too expensive?
- What, in simple terms, will users need to implement an iSCSI-based SAN?
- How will an iSCSI-based SAN compare in terms of performance and cost to Fibre Channel?
What iSCSI Means to Your Storage Network
Internet SCSI (iSCSI) is a draft standard protocol for encapsulating SCSI commands into Transmission Control Protocol/Internet Protocol (TCP/IP) packets and enabling I/O block data transport over IP networks. iSCSI can be used to build IP-based SANs. This simple yet powerful technology can help provide a high-speed, low-cost, long-distance storage solution for Web sites, service providers, enterprises and other organizations.
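To make "encapsulating SCSI commands into TCP/IP packets" concrete, here is a deliberately simplified sketch. The header below is a toy format invented for illustration; the real iSCSI PDU begins with a 48-byte Basic Header Segment defined by the standard, which this does not reproduce:

```python
# Toy framing of a SCSI command for transport over a TCP byte stream.
# Only the idea is real: a SCSI Command Descriptor Block (CDB) travels
# inside a length-prefixed header on the wire.
import struct

READ_10 = 0x28  # standard SCSI opcode for READ(10)

def frame_command(lun, cdb):
    """Prefix a CDB with a toy header: magic, LUN, CDB length (big-endian)."""
    return struct.pack(">4sBB", b"TOY0", lun, len(cdb)) + cdb

def parse_frame(data):
    magic, lun, cdb_len = struct.unpack(">4sBB", data[:6])
    assert magic == b"TOY0", "not one of our frames"
    return lun, data[6:6 + cdb_len]

# Build a 10-byte READ(10) CDB: opcode, flags, 4-byte LBA, group number,
# 2-byte transfer length, control byte.
cdb = struct.pack(">BBIBHB", READ_10, 0, 0x1000, 0, 8, 0)
frame = frame_command(0, cdb)
lun, recovered = parse_frame(frame)
print(lun, recovered == cdb)  # 0 True
```

The point of the article's HBAs and TOEs is precisely that this packing/unpacking (plus TCP itself) can be offloaded to hardware instead of burning host CPU cycles.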
An iSCSI Host Bus Adapter (HBA), or storage network interface card (NIC), connects storage resources over Ethernet. As a result, core transport layers can be managed using existing network management applications. High-level management activities of the iSCSI protocol (such as permissions, device information and configuration) can easily be layered over or built into these applications. For this reason, the deployment of interoperable, robust enterprise management solutions for iSCSI devices is expected to occur quickly. First-generation iSCSI HBA performance is expected to be well suited for the workgroup or departmental storage requirements of medium- and large-sized enterprises. The availability of TCP/IP Offload Engines (TOEs) will significantly improve the performance of iSCSI products. (TOEs are based on session-layer interface card (SLIC) technology, which can be used to improve the performance of servers, network-attached storage (NAS) and iSCSI storage devices.) Performance comparable to Fibre Channel is expected when vendors begin shipping 10 Gigabit Ethernet iSCSI products in 2003.
Benefits of iSCSI
By combining SCSI, Ethernet and TCP/IP, iSCSI delivers the following key advantages:
- Builds on stable and familiar standards: Many IT staffs are already familiar with the technologies.
- Creates a SAN with a reduced TCO: Installation and maintenance costs are low, since the TCP/IP suite reduces the need for hiring specialized personnel.
- Ethernet transmissions can travel over the global IP network and therefore have no practical distance limitations.
- Provides a high degree of interoperability: Reduces disparate networks and cabling, and uses regular Ethernet switches instead of special Fibre Channel switches.
- Scales to 10 Gigabit: Comparable to OC-192 SONET (Synchronous Optical Network) rates in Metropolitan Area Networks (MANs) and Wide Area Networks (WANs).
Who Can Use iSCSI?
iSCSI SANs are most suitable for enterprises with a need for streaming data and/or large amounts of data to store and transmit over the network. This includes:
- Businesses and institutions with limited IT resources, infrastructure and budget. These organizations should look for iSCSI equipment that functions over the standard Gigabit Ethernet Category-5 copper cabling already in place in most buildings today.
- Geographically distributed organizations that require access to the same data on a real-time basis. For example, work team members who need the latest project data without waiting 24 hours for traditional replication/backup/reconciliation procedures.
- Internet Service Providers (ISPs).
- Organizations that need remote data replication and disaster recovery. For example, a high-technology company in San Jose, California remains susceptible to disaster if it uses a Fibre Channel SAN: original and backup data copies could be lost in the same earthquake due to distance limitations.
- Storage Service Providers (SSPs).
<urn:uuid:dbeb8020-887c-48d4-9486-18e9bc3ae349>
CC-MAIN-2017-04
http://www.enterprisestorageforum.com/ipstorage/features/article.php/1547831/iSCSI-ndash-What-Does-It-Mean-for-Your-Storage-Network.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00292-ip-10-171-10-70.ec2.internal.warc.gz
en
0.886901
1,026
2.75
3
Perhaps the public's obsession with zombies can be redirected away from horror movies and toward health issues, suggests a new paper in the journal Emerging Infectious Diseases. The hope is that zombies can do for public health awareness what they did for Jane Austen: tack on some zombies and suddenly boring things turn exciting. Rabies awareness, in particular, could benefit from the shambling hordes, apparently because of the similarities between the actual symptoms of rabies and the fictional symptoms of zombies. "Zombie popularity may be a perfect opportunity to increase awareness of rabies," the UC Irvine team led by Brandon Brown wrote. The most prominent resemblance between those afflicted with rabies and zombiism begins at the mouth; both ailments are primarily transmitted through biting. While the pathogenesis for zombification is less consistent, rabies spreads through infected saliva entering the body. In addition, victims indicate infected status with increased production of fluid from the mouth; in the case of rabies, increased salivation occurs to improve chances of transmission. Rabies control in practice may be similar to hypothetical control of zombie outbreaks. For example, in 2008, Indonesian officials in Bali killed roughly 50,000 dogs in 5 days after an outbreak of rabies. This sparked a great deal of controversy, leading to the primary alternative of mass vaccination. If a zombie apocalypse were to occur, surviving humans might not have the capacity for mass vaccination. The sole option may be to kill the undead for human survival; however, the ethics of destroying something that was once human might be called into question.
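The culling-versus-vaccination tradeoff mentioned above can be caricatured with a toy compartmental model. Everything here, including the rates and population, is an invented illustration, not an epidemiological model of rabies (or zombies):

```python
# Discrete-time toy outbreak in a well-mixed population, tracked as
# fractions: s = susceptible, i = infectious. Culling removes infectious
# animals; vaccination removes susceptibles from the pool.
def simulate(days, beta=0.3, removal=0.0, vacc=0.0, s0=0.99, i0=0.01):
    s, i = s0, i0
    for _ in range(days):
        new_inf = beta * s * i
        s = max(s - new_inf - vacc * s, 0.0)
        i = max(i + new_inf - removal * i, 0.0)
    return i  # infectious fraction at the end

baseline = simulate(100)
culled = simulate(100, removal=0.2)    # cull 20% of infectious per day
vaccinated = simulate(100, vacc=0.05)  # immunize 5% of susceptibles per day

print(baseline > culled and baseline > vaccinated)  # True
```

Either intervention leaves fewer infectious individuals than doing nothing; which is preferable in practice is, as the article notes, as much an ethical and logistical question as a numerical one.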
<urn:uuid:8dbd3d1d-5d09-4ea7-8db2-b369ba86214c>
CC-MAIN-2017-04
http://www.nextgov.com/health/2013/04/emerging-infectious-diseases-better-public-health-outcomes-and-zombies/62314/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00504-ip-10-171-10-70.ec2.internal.warc.gz
en
0.94775
315
2.9375
3
The aim of this talk by Clemens Hopfer from the 30th Chaos Communication Congress is to give an understandable insight into wireless communication, using existing systems as examples of why there are different communication systems for different uses. Although wireless communication systems like Wi-Fi, GSM, UMTS, Bluetooth or DECT are always surrounding us, radio transmission is often seen as "Black Magic". Digital wireless communication systems differ significantly from analog system designs, although the actual transmission is still analog. Digital modulations, coding, filtering etc. enable highly scalable and adaptive wireless systems, making it possible to design quad-band LTE/UMTS/CDMA/GSM radios on a single chip. The talk briefly describes system concepts and modulation and coding basics, along with the challenges of mobile communication systems. This will include the following topics:
- System concepts
- Digital Modulations
- Channel coding principles
- Channel Access
- High Frequency basics
- Radio Propagation.
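As a taste of the "Digital Modulations" topic, here is a minimal binary phase-shift keying (BPSK) sketch: each bit selects the carrier phase, and a coherent correlator recovers it. The carrier frequency and sample counts are arbitrary illustration values, and no channel noise is modeled:

```python
# BPSK: bit 0 -> carrier phase 0, bit 1 -> carrier phase pi.
import math

SAMPLES_PER_BIT = 16
CARRIER_CYCLES_PER_BIT = 2

def modulate(bits):
    """Generate carrier samples whose phase encodes each bit."""
    signal = []
    for bit in bits:
        phase = math.pi if bit else 0.0
        for n in range(SAMPLES_PER_BIT):
            t = n / SAMPLES_PER_BIT
            signal.append(math.cos(2 * math.pi * CARRIER_CYCLES_PER_BIT * t + phase))
    return signal

def demodulate(signal):
    """Coherent detection: correlate each bit interval with the carrier.
    Positive correlation means in-phase (bit 0), negative means inverted (bit 1)."""
    bits = []
    for i in range(0, len(signal), SAMPLES_PER_BIT):
        corr = sum(
            s * math.cos(2 * math.pi * CARRIER_CYCLES_PER_BIT * (n / SAMPLES_PER_BIT))
            for n, s in enumerate(signal[i:i + SAMPLES_PER_BIT])
        )
        bits.append(0 if corr > 0 else 1)
    return bits

data = [1, 0, 1, 1, 0, 0, 1]
print(demodulate(modulate(data)) == data)  # True
```

Real systems layer channel coding, filtering and synchronization on top of this, which is exactly what separates the talk's simple examples from a deployed GSM or LTE radio.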
<urn:uuid:7d730132-9e4e-4f24-aa1e-40fa9071eae2>
CC-MAIN-2017-04
https://www.helpnetsecurity.com/2014/01/10/the-basics-of-digital-wireless-communication/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279489.14/warc/CC-MAIN-20170116095119-00504-ip-10-171-10-70.ec2.internal.warc.gz
en
0.906865
202
2.921875
3