LittleBits Brings Internet of Things to DIYers
LittleBits Electronics, the 3-year-old company that was created to enable anyone to build electronic devices, now is making it easy for anyone to make electronic devices that connect to the Internet.
LittleBits officials on July 23 launched cloudBit, a new module that enables people to turn any object—from a thermostat and doorbell to a fish feeder and camera—into a connected device, which they said will greatly expand participation in the burgeoning Internet of things (IoT).
CloudBit "gives anyone the power to turn any object into an internet-connected smart device in a snap—no soldering, wiring or programming required," Colin Vernon, head of cloud platform at littleBits, wrote in a post on the company blog. "With the cloudBit, you can snap the internet to anything."
The company has created an ever-growing library of hardware modules—more than 50 so far—aimed at enabling anyone to quickly create gadgets and systems without needing high levels of programming, soldering or wiring skills. The modules snap together via magnets. It's a goal similar to that of the Raspberry Pi Foundation, which builds low-cost mini-computers that students and enthusiasts can use to learn how to program.
With cloudBits, anyone can now take their own systems or any other object and connect them to the Internet.
"That means the Internet of Things is now open and accessible—enabling anyone to prototype, test ideas, and participate in a field that could change the world and how we live," Vernon wrote. "People can recreate popular connected devices (like a smart thermostat), invent their own, or build solutions to their unique needs. Our mission is to put the power of electronics in the hands of everyone, from the simplest circuits to powerful internet-connected devices."
CloudBit and a limited-edition Cloud Starter Bundle are available at littlebits.cc. The Cloud Starter Bundle comes with six electronic modules, an insert card with five tutorials (with another 100 available online) and two accessories that can be used to connect objects to the Internet. CloudBit is available for $59; the bundle for $99.
CloudBits can be paired with other modules in the littleBits library and objects can be made to communicate through the Web via a button or motion sensor, or in a machine-to-machine fashion by connecting multiple cloudBits together. LittleBits also has created the Cloud Control Web app to enable users to remotely trigger or read from the cloudBits.
LittleBits is partnering with the IFTTT service to make connections easier, and with RadioShack, which will begin putting littleBits technologies on its store shelves in August. They will be available in select markets next month and in 2,000 stores this fall.
The Stuxnet and Aurora attacks have shown us that malware development has become a professional job. These threats targeting the process industry were written by highly intelligent developers, financed by huge investors, and possibly even by governments.
Yet every time a new attack is discovered, experts are left wondering how the malware was developed so quickly. And while the experts are scratching their heads about the attack du jour, the cyber criminals are already working on a new, even stealthier attack. What's even more troubling, the criminals are getting increasingly ambitious, raising the stakes even higher. In the old days, they were satisfied stealing money from bank accounts, but now the ultimate goal is stealing data and proprietary corporate information. We're not far from a world in which the criminals are trying to gain total control of industrial processes to impose destruction or possibly harm the health of the population.
Attacks on the rise
In early 2010, the networks of several Fortune 100 companies, including Google China, were hacked by what was later called the Aurora attacks. More than 30 large companies fell victim to the attack, even though they were running their networks with security and intrusion prevention software. This illustrates just how sophisticated the attack was.
Aurora was able to penetrate these networks through an unpatched security leak in Internet Explorer (a so-called zero-day leak) that – up until then – had not been discovered. Of course, by the time the malware was finally detected, the targeted corporate information had already been stolen. At the time, security experts described Aurora as "the most sophisticated malware ever" – although it turned out to be more of an inconvenience than an attack with devastating consequences.
But it wasn't long before Aurora was supplanted by Stuxnet in late 2010. The Stuxnet developers far exceeded Aurora in one key aspect: unlike its predecessor, Stuxnet did not rely on a single zero-day leak; it used no fewer than four. This malware wasn't meant to attack many individual computers – it was meant for a networked group of them. To do this, however, the malware needed to make physical contact with the devices through USB sticks, scanners, or shared printers. Despite this limitation, Stuxnet succeeded in infecting dozens of industrial enterprises all over the world. There are indications the main target was nuclear reactors in Iran. Considering this, even though the malware was detected in the nick of time, its potential for destruction could have been devastating.
Protecting the process industry
Stuxnet shows just how plausible such a threat scenario is – not just in Iran, where the patching policy might not be as strong as it should be – but also in North America and Europe. Even organizations that implement security measures are vulnerable to attacks. For instance, in the Dutch process industry, control systems are not attached to the corporate network, providing some protection against a large attack. Yet even though the process systems are on their own "island," they do have infrastructural connections to "the mainland," even if only through a handful of people who have access to both.
While this approach does create a buffer of sorts, it’s by no means fail safe. In the United States, organizations tend to take a fully networked approach, making a trade-off between productivity and security. As for the threat of malware in process industries, unfortunately, organizations may have to make tough choices between amplifying security and maintaining optimal productivity.
To properly combat the threat of these attacks, the first step is to fully grasp the urgency of process control systems security. On an individual level, employees who are potential targets should be made aware of the risks and given security training, whether they are involved in process control or not. The training could be as basic as reminding them to be extremely careful when clicking on links in emails and on social networking sites, or banning USB flash drives from the workplace. These measures can easily be enforced with software policy solutions.
However, to really tackle this problem, it will have to be addressed at an international level. The most practical approach would be for governments to come to an agreement, similar to the way they handled nuclear threats. They should commit to refraining from developing or financing these attacks. In addition, governments need to commit to procedures that prevent further participation, while pledging to investigate and punish responsible parties. Going even further, corporations should band together and take a similar approach. For example, with Stuxnet, all corporations that had Siemens SCADA (Supervisory Control and Data Acquisition) systems installed could have shared information and protective measures.
Besides political, police and judicial organizations, the entire international industrial sector should cooperate to minimize the risks of cyber attacks. Understandably, enterprises are not keen on openly admitting that their systems have been hacked; however, other organizations will benefit from the knowledge, and sharing should therefore be encouraged. When information about a cyber attack is shared at an early stage, other companies can take measures against it. The industrial sector could also agree to fully cooperate in investigations of cyber attacks, even if this means that production has to suffer temporarily, or that certain corporate secrets need to be disclosed to the investigators. While that last condition seems like a bitter pill to swallow, the alternative is far worse.
What Applications Are Quad-Core Chips Good For?
New quad-core chips may be better for some applications than dual-core, but which ones? eWEEK IT expert Kevin Closson, chief software architect for PolyServe-HP, has some answers.

Q: What kinds of applications will benefit the most from the Intel and AMD quad-core chips?
A: Certainly databases like Oracle 10g and Microsoft SQL Server 2005 can benefit from running on quad-core chips, because they have already been optimized to run on SMP [symmetric multiprocessor] servers. But there are plenty of other applications that might not see any advantage. I think people are going to have a lot of unpleasant surprises with this. Just because the operating system kernel can schedule parts of your application to run on different cores doesn't mean there's going to be any improvement in performance. It might even make performance worse if there is a lot of thrashing, if the software isn't smart enough to take advantage of multiprocessing. This is something that's very hard to benchmark. Unless the vendors come out and announce that their software has been specifically optimized for quad-core processors, you can't be sure. Oracle and SQL Server will do well. But at this point I don't think anyone knows if things like Microsoft Exchange or common application tier software packages will see performance go up or down with quad-core.

Q: Will quad-core chips be more effective than dual-core chips?
A: Not necessarily. It all depends on whether the software is smart enough to take advantage of the extra cores. For example, in the 1990s, the Sybase database ran well on dual-processor machines, even though it assigned all network I/O to a single processor. But when it scaled up to four- and eight-processor machines, performance dropped because that single I/O processor became a bottleneck; the software didn't know how to spread the network I/O around to multiple processors. Sybase eventually fixed that problem, but now you're going to get similar problems cropping up unexpectedly as people transition from dual-core to quad-core chips. Also, there's the question of clock speed. AMD's quad-core Barcelona chip is going to slow down the cores compared to their dual-core chips. That saves power consumption, but it might not be good for performance. Ideally, it's better for performance to have one superfast processor than lots of slower processors, because then you have less overhead. Imagine you are a customer waiting in a bank for access to a row of teller windows. When you finally arrive at a window, you only get a certain slice of time with your teller. When that time is up you have to interrupt your transaction and go stand in line again. That's the way multicore servers work. Now if the tellers are slow, you're going to get interrupted often and spend a lot of time going back to wait in line before your transaction is completed. But if you had only one superfast teller, a teller who was as fast as all the slow tellers put together, then you would get your transaction done without losing any time to interruptions, even though you would still have to wait your turn in line. It's like that for multicore chips. More isn't necessarily better, unless you have really smart software that knows how to take advantage of the hardware. Unfortunately, it doesn't look like a lot of the software that is designed to run on so-called "commodity" servers is smart in this sense.
A busy person’s iPhone can chew off more energy in a year than a refrigerator, but that doesn’t mean it needs to belly up to the wall trough every time its battery starts running low.
Wireless power won't be here tomorrow, but it's coming. Energy is everywhere, in every action, most of it bleeding away to be recycled back into the endless machinations of the universe. Scientists and engineers are moving ever closer to figuring out how to harvest power from our environment, ourselves, and our devices themselves—from nanoscale pillars that could offset a device's energy use by turning waste heat into electricity, to a spongey cell-phone case that works up a charge by sitting on a vibrating car dashboard.
This isn’t just wireless charging; this is harvesting energy from the world around us. Rather than use acres of solar panels or skyscraping wind turbines, energy harvesting engineers want to power your mobile devices from things like heat differentials, ambient vibrations, and your walk to work. That’s not just an engineering challenge, but also a design challenge. Tapping into the energy of everywhere shouldn’t add friction to the pace of modern life.
A few wireless power harvesters have already made their way to the shelves, but the biggest advances are still being worked out in the lab.
Click here for a Quartz review of some of the promising paths that engineers and designers are taking to power the mobile Web.
Illustration: Opening government and protecting privacy are initiatives that can often travel in opposite directions.
"Transparency" is an up-and-coming buzzword that is finding its way into the national conversation at the federal, state and local levels. Its continued rise to prominence is pretty well assured when the new administration takes office because President-Elect Obama has been associated with federal transparency initiatives for years and has, at least for some federal agency CIOs, made transparency an important part of the transition dialog.
But what does transparency mean for the agency head or IT manager who has been instructed to make his or her agency "more transparent"? What are some of the key issues and architectural considerations that need to be addressed?
First, some background. Although "transparency" as a term has earned recent cachet, the debate about it is quite old, often found in discussions about government openness or implicit in discussions about public disclosure policies and the Freedom of Information Act. And while its current use focuses on opening up government processes, it has also been used as a political tool to bring about changes in the private sector. In their book Full Disclosure: The Perils and Promise of Transparency, authors Fung, Graham and Weil explore the public policy implications of transparency generally and identify "targeted transparency" as a tool available to federal, state and local leaders to help redress wrongs and increase public safety. They cite federal mandates for public disclosure of automobile rollover risks as an example of its use. In that case, the federal government required disclosure by private companies of specified product safety data with the intent of helping consumers make more informed choices and to leverage natural market forces to bring about long-term improvements. It worked.
But for the agency head, transparency means something different -- it means opening up the records, information and processes of the agency to timely public inspection and, further, opening up communication lines for the public to talk back. In other words, we're now talking about providing a means for them to comment on what they see or would like to see.
It is this kind of transparency that President-Elect Obama helped to champion by cosponsoring the Federal Funding Accountability and Transparency Act of 2006 and more recently co-sponsoring the Strengthening Transparency and Accountability in Federal Spending Act of 2008 (S.3077). The president-elect's description of the 2006 Act is that it "...created the public Web site USASpending.gov, makes information about nearly all federal grants, contracts, loans and other financial assistance available to the public in a regularly updated, user-friendly and searchable format. The Web site includes the names of entities receiving federal awards, the amounts of the awards, information on the awards including transaction types, funding agencies, location and other information."
In his floor speech on June 3, 2008 introducing the 2008 bill, Senator Obama commented that the new bill "...will improve government transparency and give the American people greater tools to track and monitor nearly $2 trillion of Government spending on contracts, grants and other forms of assistance."
While transparency sounds like a good idea and has met with successes, it is not without its challenges and even contradictions. For example, during the same period that transparency has become a common talking point, so too have mandates to protect personal privacy information (PII): opening government and protecting privacy are initiatives that can often travel in opposite directions.
The following are some issues that surface immediately when a transparency program is initiated. A tremendous amount of information is available about each -- this discussion serves simply to introduce each concept and its relationship to transparency efforts.
Today, there's more concern than ever about privacy due to the advent of identity theft, stalking and other potential abuses of online data. As a result, many laws have been passed in the U.S. and elsewhere, forbidding display on Web sites of "personally identifiable information" -- data that makes it possible to identify a specific individual. While this may sound simple, it can rapidly become quite complex because combinations of otherwise innocuous data can sometimes be used to identify an individual. That is, one datum may not in itself be PII (for example, a birth date), but if a site includes a birth date, the person's sex and the person's home ZIP code, the combination of these three data enable individual identification in more than 80 percent of the cases. Yet none of these three facts, in themselves, constitutes PII. Drafting legislation or building business logic to accommodate this kind of fuzzy situation can be challenging.
Then there's the problem of old records. Mandating that new documents don't contain PII is one issue; the greater challenge is what to do about old documents. In many cases, old paper documents have been scanned to images which means there is no simple, fully reliable method for electronically reviewing the document to identify PII information. So, reliably cleaning old documents can be time consuming and expensive.
Protecting privacy also has its counter-arguments. It is generally true, for example, that the value of a document is inversely proportional to the quantity of data removed from it: the more data removed, the fewer legitimate uses the document is likely to have. For example, a death certificate without a signature is worthless as a legal document but displaying signed death certificates online could facilitate certain crimes by providing a template of a person's signature.
Many agencies have access to what amounts to proprietary trade secret information about private companies. As those agencies open their doors to greater public inspection, they need to make determinations about how to protect trade secrets. While it would be clear that any federal agency having access to the formula for Coke would have to take actions to protect that information, other issues are less clear. For example, the Strengthening Transparency and Accountability in Federal Spending Act of 2008 includes a provision that will require agencies to post online facsimiles of and text-searchable versions of all contracts (in addition to the original RFP, award, and other related information). To the extent that the contract contains detailed financial or other information related to the awardee, it is possible that trade secrets could be revealed.
Security has been the watchword for government in recent years, particularly since 9/11. Security and transparency are another contradictory pair: the safest data is that information that no one knows and that can never be relayed. It also happens to be the most useless, though in some cases that might be the point. Still, no matter where you stand on government secrets, few would ever argue for eliminating all secrets. So, transparency needs to be linked to a keen understanding of what is and is not secure. For some agencies, such as Housing or Land Management, security per se may be a relatively minor issue (which they would make up for in Privacy and Trade Secret concerns), but for many agencies (such as the Department of Defense or the Nuclear Regulatory Commission), it is a core concern.
When it comes to developing systems, security can't be overlaid as an afterthought -- it needs to be baked into any solution from the beginning, while making sure that it doesn't turn into the sole, dominant concern.
A particular type of transparency has been tried in several states which highlights how transparency can have an impact on the agency workforce. In those states, the names and salaries of all agency employees have been made public. Although this is an interesting approach to opening government, it can also result in complaints from government employees who don't necessarily want their paychecks shared with the world any more than the rest of us would.
As with many things IT, the most challenging aspects of trying to become more transparent are not technical -- they are cultural, political and business-related. Getting people onto the same page can be challenging and may be an ongoing process -- information an agency can't possibly imagine releasing today may become an obvious release with no more than a change of administration or slight alteration in a regulation.
To accommodate the changing set of constraints and freedoms, any IT solution must be flexible and able to quickly adapt to new business rules and new technologies. Fortunately, IT solutions can be architected with just such flexibility.
As for the cultural, political and business needs, the data that must be reviewed and analyzed are simple to list though sometimes hard to accumulate:
With this groundwork in place, it is only then possible to plan the technical implementation.
There are two general approaches for agencies to use when making data available, each with its own set of advantages and disadvantages, though it appears a blended approach will probably prove to be the best in the long run: the first is to build a prepackaged Web interface with predefined ("canned") reports and searches; the second is to publish raw data feeds that expose complete agency datasets for others to analyze and build upon.
A significant advantage of the first is that it allows almost anyone to get some information with little technical training or experience. But, this approach has two key disadvantages. First, to make changes or updates usually requires expensive programming resources and such efforts take time. This means the site will always tend to run behind the current need. Second, some individual or group will almost always have a new or different requirement which they cannot fulfill using the prepackaged interface. This can tie up agency resources dealing with frustration or complaints.
The advantage of the second approach is that by making the full set of data available to the public the agency leverages the time and skill of citizens who can put in the time to analyze the data the way they find important. This can maximize government IT budgets because much of the actual development is offloaded to private industry. The disadvantage, of course, is that only technical people would have immediate access to the data; the non-technical user will not be able to carry out the potentially routine simple queries he or she would like to do without enlisting the help of a technologist.
A striking example of the advantages of the second approach is Washington, D.C.'s "Apps for Democracy." Under this program, the city put up a relatively small investment to organize and administer a contest which resulted in more than 40 applications submitted -- and donated -- to the city from public and private sources. All the applications used feeds of raw data freely available from the city's data portal.
Current indications are that a combination of the two approaches might serve the widest audience: established or "canned" reports for the most commonly used data, and data feeds to access complete agency datasets to enable more complex and/or specialized manipulations. Where agencies need to choose one or the other for concentrated effort, recent suggestions are to concentrate on putting the infrastructure and data feeds in place. In a paper, Government Data and the Invisible Hand, published in the Yale Journal of Law & Technology, Vol. 11, 2008, authors Robinson, Yu, Zeller and Felten argue that federal agencies should concentrate on building the infrastructure to support data feeds rather than trying to meet every user's needs through a canned interface.
Obviously, there are also political advantages associated with the second approach. When the government doesn't take the second approach and instead gets into the business of building information-providing Web sites, it is liable to become involved in a never-ending game of catch-up in order to respond to citizen requests for changes. In contrast, if agency concentration is on providing raw data, the response to a constituency demanding a new or specific view of existing data becomes a (polite and political) suggestion that they access the raw data and create the view themselves.
In this way, potential IT critics can be turned into willing collaborators. At the end of the day, they may still object to the policies or actions of the agency but at least they have arrived at those conclusions in an atmosphere of greater collegial cooperation rather than what often seems like a battle royal just to extract simple data from a seemingly unwilling source.
Modern Web technology has moved well beyond being a simple way to display data. Today it encompasses myriad methods for engaging in dialogs and distributed conversations, small or large. Citizens are accustomed to working with such things as wikis, forums, blogs and video-sharing sites, and so expect them. But this is not a complete list, as it is likely we will see new kinds of applications within the next five years that will be similarly transformative.
All of these applications depend on a back-end infrastructure to store, manage and display the data so by concentrating on building an infrastructure that enables data sharing and exchange, agencies are to a certain degree future-proofing their offerings because as new forms of presentation or visualization become available, they will be built on the in-place infrastructure.
This article has only scratched the surface of what is involved in transparency initiatives. It doesn't even directly address what might be the expected returns on investment though obviously this would be important to understand.
Nor does it deal with the deeper internal cultural issues. For an agency accustomed to working away in relative anonymity, the idea of transparency and near real-time public feedback could be shocking and certainly might be considered a distraction. And in some cases, it would be.
But in many ways, transparency is a logical corollary to the idea that it is a good thing to involve citizens in the business of government. And in a democracy, we understand that good government depends on citizen participation.
In recent years it has been noticeable that the number of people carrying a smart phone has increased exponentially. This is down to their low price and availability; even children as young as 12 have a smart phone. However, most people who own a smart phone are not aware of the data hidden in even the simplest and most innocent things they do on their phones. This includes armed forces staff. This article will look at the issues and possible repercussions of the availability of such easily obtained data.
Let's consider a scenario: an armed forces staff member is on patrol. They take a picture of themselves and upload it to a social media site. Their personal profile on this site is not secured, or is so loosely restricted that anyone can view their photos. A militant group happens to be doing some research on their "enemy". They use Google's advanced search, happen upon the right combination of words or phrases, and find this picture. What could possibly happen?
First off, the basics:
What is a geotag?
Geotagging is the addition of geographical data to the metadata of an object, in this case a picture that has been taken by armed services personnel.
A geotag on a photograph from an iPhone, for example, captures the GPS coordinates of the location where it was taken, as longitude and latitude.
Obtaining geotag information
Using free tools that are widely available on the internet it can take seconds to reveal the geotag information. It requires very little effort and absolutely no training. Ideal for militant groups who would want to find this information relatively quickly.
Below is an example. For this example I will be using a picture of the blue ball in snooker, but imagine this photo was a team photo taken at a base on foreign soil.
Here I’m using Evigator’s TAGView software
(available @ http://www.evigator.com/)
1 – Locate the image and open it using the Open Image Icon.
2 – Press Open
3 – The Image will be analysed and you will have a screen similar to below:
4 – Sample data from the analysed picture.
As you can see from the above, the geotag data and various information about the device the picture was taken on are highlighted. Also note the mapped location of where it was taken. Getting this information took less than 3 seconds once the image was loaded into the program.
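For readers who want to see just how little effort this takes, the same fields can be pulled out with a few lines of script. The Python sketch below uses the Pillow imaging library to read the EXIF GPSInfo block from a JPEG and convert it to decimal degrees; the file name is a placeholder, and the snippet assumes the photo actually contains GPS tags (the exact EXIF rational format varies between Pillow versions, so treat this as an illustration rather than production code).

```python
# Minimal sketch: extract GPS coordinates from a JPEG's EXIF metadata.
# Assumes Pillow is installed and "photo.jpg" contains a GPSInfo block.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def gps_from_jpeg(path):
    exif = Image.open(path)._getexif() or {}
    # Locate the GPSInfo IFD and translate its numeric tag IDs to names.
    gps_raw = next((v for k, v in exif.items() if TAGS.get(k) == "GPSInfo"), {})
    gps = {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}
    if not gps:
        return None

    def to_degrees(dms, ref):
        # EXIF stores latitude/longitude as degrees, minutes, seconds.
        deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
        return -deg if ref in ("S", "W") else deg

    lat = to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon

print(gps_from_jpeg("photo.jpg"))
```

Command-line tools such as exiftool report the same fields, which is exactly why disabling location services on the device itself is the safer habit.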
Security Risks & Repercussions
So what are the security risks? Well, as already pointed out, the information could reveal any number of things: barracks, bases, patrol points or even patrol patterns. This information not only puts the staff member who uploads the pictures in danger but also endangers their entire deployment group.
Potential death is not the only issue. With profiles left insecure, that one member could be singled out and profiled by the militant group, leading to potential blackmail, kidnapping or the endangerment of family members.
What should the armed forces be doing?
There are many things the armed forces could be doing. The key thing is to offer the training necessary to remind their staff of the issues around geotags and smart phones. They could ban personal phones completely; however, some servicemen and women would still find a way to take them into active duty.
A one hour basic training session that shows the dangers is all that is needed. The session could cover basic security settings of their social networking profiles and turning off the location services on any of their devices.
A one hour session could be the difference between life and death in most cases during deployment.
This article has been geared towards the threat from militant groups; however, it's not just militant groups. It could be anyone: stalkers, thieves, even an enraged ex could use these techniques.
Part 2 will be released soon.
BURNABY, BC--(Marketwired - May 30, 2014) - Research published today presents groundbreaking evidence verifying the presence of entanglement in D-Wave's commercially available quantum computer. The paper entitled "Entanglement in a quantum annealing processor" authored by scientists at D-Wave and the University of Southern California has been published in the peer-reviewed journal Physical Review X (PRX).
The results of the research prove the presence of an essential element in an operating quantum computer: entanglement. This is when the quantum states of a collection of particles (or qubits) become linked to one another. The research demonstrates entanglement in two- and eight-qubit subsections of one of D-Wave's 512-qubit processors, a record number for a solid-state quantum processor, throughout the critical stages of a quantum annealing algorithm.
Dr. Federico Spedalieri of USC Viterbi Information Sciences Institute and co-author of the paper played a crucial role developing the framework for this research. "There's no way around it. Only quantum systems can be entangled. This test provides the experimental proof that we've been looking for," said Dr. Spedalieri.
"The research published in PRX is a significant milestone for D-Wave and a major step forward for the science of quantum computing. The findings are further proof of the quantum nature of our technology," said Vern Brownell, CEO of D-Wave. "Building and improving the science of our technology in collaboration with the greater scientific community is important to us and we'll continue to conduct research that enables us to better understand the characteristics and power of our quantum processor."
The PRX paper provides four levels of evidence that the eight-qubit unit cell is entangled, including:
(a) a demonstration of an avoided crossing of two energy levels,
(b) a partial restoration of a density matrix of the system with calculations of standard entanglement measures,
(c) calculations of an entanglement witness using measured populations and energy spectra of the system,
(d) measurements of a susceptibility-based entanglement witness, which reports entanglement of the ground state.
These findings demonstrate entanglement within D-Wave's processors at the most critical stages of the quantum annealing procedure.
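For readers unfamiliar with the term used in items (c) and (d): in the standard textbook formulation (a general definition, not a detail taken from the paper itself), an entanglement witness is a Hermitian operator $W$ chosen so that

$$\operatorname{Tr}(W\rho) \ge 0 \quad \text{for every separable state } \rho,$$

while $\operatorname{Tr}(W\rho) < 0$ for at least some entangled states. Measuring a negative expectation value of such an operator therefore certifies entanglement without having to reconstruct the full quantum state.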
D-Wave will perform additional research that addresses the extent of spatial entanglement and will also continue to explore the computational advantages of quantum algorithms. D-Wave has published more than 70 peer-reviewed papers to date.
The paper published today is available here on the PRX website.
About D-Wave Systems Inc.
Founded in 1999, D-Wave's mission is to integrate new discoveries in physics and computer science into breakthrough approaches to computation. The company's flagship product, the 512-qubit D-Wave Two™ computer, is built around a novel type of superconducting processor that uses quantum mechanics to massively accelerate computation.
In 2013, D-Wave announced the installation of a D-Wave Two system at the new Quantum Artificial Intelligence Lab created jointly by NASA, Google and USRA. This came soon after Lockheed Martin's purchase of an upgrade of their 128-qubit D-Wave One™ system to a 512-qubit D-Wave Two computer. With headquarters near Vancouver, Canada, the D-Wave U.S. offices are located in Palo Alto, California and Vienna, Virginia. D-Wave has a blue-chip investor base including Bezos Expeditions, Business Development Bank of Canada, Draper Fisher Jurvetson, Goldman Sachs, Growthworks, Harris & Harris Group, In-Q-Tel, International Investment and Underwriting, and Kensington Partners Limited. For more information, visit: www.dwavesys.com.
It's not too often that the public gets to pick the central component of a major museum exhibit, but that's what the National Archives has in mind.
The agency has opened an online poll in which history buffs, students, service organizations, and anyone else can choose the opening document to be displayed in the David M. Rubenstein Gallery's "Records of Rights" exhibition when it opens on November 8, 2013.
"Records of Rights" showcases original and facsimile National Archives documents -- everything from the Declaration of Independence, the Constitution to the Bill of Rights -- that detail "how Americans throughout our history have debated and discussed issues such as citizenship, free speech, voting rights, and equal opportunity," the agency said.
According to the National Archives, the documents currently under consideration are:
- The 1868 joint resolution proposing the 14th Amendment to the states. The 14th Amendment established the principle of "equal protection of the laws" and granted citizenship to "all persons born or naturalized in the United States."
- The 1971 certification of the 26th Amendment. The amendment lowered the voting age from 21 to 18.
- The Americans with Disabilities Act, 1990, which expanded Federal civil rights laws to include disabled Americans and banned discrimination in employment, public services, public accommodations, transportation, and telecommunications.
- Executive Order 9981, 1948. Signed by President Harry S. Truman, this order desegregated the U.S. Armed Forces.
- The Immigration Reform Act, 1965. These amendments to a 1952 immigration law ended the country-based immigration quotas that had favored immigrants from western and northern Europe.
Voting is open now until October 14. Go here to vote.
Big Data: From Information to Knowledge
Transforming Information to Knowledge
Remember that our wonderful definition for Big Data involves two steps: 1) transforming data to information, and 2) transforming information to knowledge. Neither step is easy, and both can involve a great deal of computation. But how do you do these transformations? The ultimate answer lies with the individual doing the analyses and the particular field of study.
However, let's briefly touch on one possible tool that could be useful -- neural networks.
Neural networks can be used for a variety of things, but my comments are not directed at using neural networks in the more traditional ways. Rather, consider taking the data you are interested in, or even the information, and training a neural network with some defined inputs and defined outputs. Hopefully, you have more than just a couple of outputs, since you can often just create a number of 2D plots to visualize the outputs as a function of the inputs (unless you have a huge number of inputs). Once you have a trained net, a very useful exercise is to examine the details of the network itself. For example, examining the weights connecting the inputs to the hidden layer and from the hidden layer to the outputs can possibly tell you something about how important various inputs are to the output, or how combinations of inputs affect the output or outputs. This is even more useful when you have several, possibly many, outputs and you want to examine how inputs affect each of the outputs.
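As a concrete illustration of this idea, here is a small sketch using scikit-learn (the data and feature names are invented purely for illustration; it is a toy demonstration of the weight-inspection approach described above, not a recipe from the article):

```python
# Toy sketch: train a small neural net, then inspect its weights to gauge
# which inputs influence the outputs most. Data and feature names are made up.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                      # four input features
y = np.column_stack([                               # two outputs to predict
    3.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=1000),
    2.0 * X[:, 1] - 1.0 * X[:, 3] + rng.normal(scale=0.1, size=1000),
])

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)

# coefs_[0] is the (n_inputs x n_hidden) weight matrix; summing absolute
# weights across each input's row gives a crude importance score per feature.
importance = np.abs(net.coefs_[0]).sum(axis=1)
for name, score in zip(["feat_a", "feat_b", "feat_c", "feat_d"], importance):
    print(f"{name}: {score:.2f}")
```

Weight inspection of this kind is crude -- permutation importance or partial-dependence analysis is usually more reliable -- but it captures the spirit of turning a trained model back into information about which inputs matter.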
Neural networks could enjoy a renaissance of sorts in Big Data if they are used to help examine the information and perhaps even turn it into knowledge.
This and the previous two articles are intended to be a starting point for discussing Big Data from the top, while the first article in the series started at the bottom. But the topic of Big Data is so broad and so over-hyped that it is difficult to concisely say what Big Data is and why it is a real topic and not YABW (yet another buzzword). There are several facets to Big Data that must be carefully considered before diving in head first. Some of the facets that I have tried to discuss are:
- What is Big Data?
- Why is it important or useful?
- How do you get data into "Big Data"?
- How do you store the data?
- What tools are used in Big Data, and how can these influence storage design?
- What is Hadoop, and how can it influence storage design?
- What is MapReduce, and how does it integrate with Big Data?
Hopefully, the discussion has caused you to think and perhaps even use Big Data tools like Google to search for information and create knowledge (sorry -- had to go there). If you are asking more questions and wondering about clarification, that means you have gotten what I intended from the article.
And now, back over to Henry!
Jeff Layton is the Enterprise Technologist for HPC at Dell, Inc., and a regular writer of all things HPC and storage.
Henry Newman is CEO and CTO of Instrumental Inc. and has worked in HPC and large storage environments for 29 years. The outspoken Mr. Newman initially went to school to become a diplomat, but was firmly told during his first year that he might be better suited for a career that didn't require diplomatic skills. Diplomacy's loss was HPC's gain.
A buffer is memory where pixels can be drawn to or read from.
The information and state variables associated with each buffer are stored in memory allocated when the buffer is created with screen_create_buffer(). Note that memory is allocated to store all information pertaining to the buffer, but not for the buffer itself. When buffers are created by the composited windowing system through calls to screen_create_window_buffers() and screen_create_pixmap_buffer(), it isn't necessary to create buffer objects with screen_create_buffer(). screen_create_buffer() is used to create buffers that must be attached to windows or pixmaps.
Usage flags are used when allocating buffers. Depending on the usage, different constraints such as width, height, stride granularity or special alignment must be observed. The usage is also valuable in determining the amount of caching that can be set on a particular buffer.
Depending on which function was called, the buffers can be queried using the SCREEN_PROPERTY_RENDER_BUFFERS property with either screen_get_window_property() or screen_get_pixmap_property() API functions.
Hi, welcome to the Certification Kits presentation on how to build your very own CCENT/CCNA home lab. A common question we get is, how many routers do I really need in my lab? As you can see on the slide on the screen, one router will give you the ability to run the commands on it and allow you to memorize the correct syntax and the context in which to run the commands. However, two routers are really required to see if anything works. So what do I mean by that? The Cisco CCNA exam is built around the propagation of data in route tables. The only way you can see this in action is to have at least two routers. Two routers will allow you to see the data propagate, route table information propagate, and path elections. In addition, you'll be able to see some basic device elections. With three or more routers, you'll get all those things we just talked about, be able to experience more complex topologies and see full device elections. So for those reasons, we really do suggest that you have three routers in your lab. You may say, you know what, I have seen another video or article that said I only need two routers, so why are you saying that I need three routers? That's a good question and we're going to cover that between this and the next slide.
Now, if you take a look at this slide you'll see an OSPF topology from our free CCNA study guide. We want a configured environment so we know that R2 will be the designated router. Well, there are actually a couple of ways we can accomplish this. At a real high level, if we only had two routers, say R1 and R2, we could set R1 with a priority of 0 and then R2 would, by default, be the designated router. But that is not real world, as most companies you will work for, probably any company you work for, will have many more than two routers. So, once we add the third router into the mix, would R2 still be the designated router? I don't know, but by changing the priority on R2 and leaving R3 at the default priority, we can make sure it is. So I think you can start to see why three routers is our preferred topology, as this can be applied to many of the CCNA concepts that you'll see.
So let's take another look at an example of why we believe three routers is the right way to go. If we have just a simple two-router scenario, we basically have one subnet, and you can argue we could have another one on each Ethernet, so we have a total of 3 subnets in this environment. Now let's compare that to our EIGRP lab that we have in our lab workbook. This is a 32-page lab, and what we have here is dual WAN links between R1 and R3. That could be like New York and San Francisco, so it's a real company; they have to have redundancy here. We have some LAN links here, so you understand the different encapsulation protocols between WAN and LAN. So we have multiple paths, we could have multiple costs on the different LAN links, we could have load balancing, and we could have more complex route tables, because now we have a total of 6 different subnets, where before we were tapped out at a maximum of three. So, I think you can see clearly that this is a much more realistic scenario of what you will see in the real world, whereas the two-router setup is so simplistic it really doesn't get to the meat and potatoes of routing and switching.
So hopefully you'll agree, from those couple of scenarios we just showed you, that you can see the benefits of having three routers. If you can afford it, go that way; if you can't, two routers is still a great environment. Now let's just take a real quick look at some of the features. We have this table on our website at certificationkits.com, and I'm going to show you where at the end of the presentation. When you go through this article, you'll see the different models, what the memory requirements are for IPv6, whether or not they support CCP, the max IOS version (whether it's 15.x for your 2801 and your 1841s, 12.4 for the 1700 series, or 12.3 for the 2500 series), and the different integrated ports, slots, and such.
Now, something I think we really need to address, because we've been getting a lot of questions about it: there have been a couple of articles and videos out there about the 1721 routers. People just want a bunch of 1721 routers in their lab, and we don't actually think that's a really great idea, but maybe we can explain that to you quickly. You see, it's important, if you look down here at the lab alert, that you get the right mix of routers in your lab. Not every router has to have the same capabilities, functions and features. You'll want maybe one router to be your dual Ethernet router that can do your NAT/PAT stuff, and then every other router might not necessarily have to have dual Ethernet in it. So, different features, functionalities, operating system versions, IOS versions. It could be a mix of full-featured routers and more basic routers, such as you see in the CCIE labs. For instance, the current CCIE lab version 4 has two 2501 routers in it. That's a CCIE lab; they're just acting as edge routers, so they don't need to be as powerful as the core routers, or have as many features. So again, the key is getting the right mix of routers and switches in your lab.
So, let's just talk a little about the 1721s, because we're getting a lot of questions about them. The pros: it's small, it's quiet, it supports 12.4, and it's cheap. Great! The cons, and why I'm not a real big fan of it: it's not easily stackable or rackable, and there is this bulky external power supply which has a high failure rate. Generally when they came from Cisco they were 64/32. Now, the big thing about this is, hey, it's a 12.4 router, great. However, if they only have 64/32 memory, you're going to have to purchase a memory upgrade. And as you start to add that cost to it, then, hey, you want to make it a dual Ethernet router, just add a WIC-1ENET to it. Well, that router had a specialized module for the second Ethernet port that wasn't as mass-produced as the other routers or modules, and thus that module is kind of on the expensive side. You're going to be $35 to $40 into that module by the time it's shipped and all that stuff, so you might have been better off getting a much better router in my opinion, like a 2611XM. It's more expandable, more features, things along those lines.
So, what am I talking about as far as it not being stackable and rackable and whatnot? We generally have two scenarios with people in their home labs: either stack everything on their desk like here on the left, or have a sweet rack like over here on the right.
So if you're stacking it over here on the left, you can see how these 1721s stack up on there, but they're plastic and they fall and break. You've got this big power cube on the side. I'm just not a real big fan of them, as you can tell. At least on these you can have them nice and neat: you have your 2600 series, 2500 series, 1800 series, whatever it is, they are all rackable. If you do decide to go with the rack, you've got this sweet little setup. You can put it under your desk or you can put it on top of the desk. It keeps everything nice and compact so that nobody is complaining, tripping over it, and pulling out your cables.
So, now how many switches do I need? Just like with the routers, one switch will give you the ability to run the commands on the switch, memorize the correct syntax and the context in which to run the commands, and allow you to do some of the VLAN labs. Two switches will allow you to see VTP domain information and VLAN information propagate. In addition, you will see basic device elections like on your routers. With three or more you will have full device elections and more complex scenarios. Now, every now and then, we will get some really smart Cisco people. Maybe they're at the CCNP level or higher, or really great CCNAs. And they'll say, you know what, you don't really need that third switch; on that second switch you could go and VLAN it, do this, that, and the other, and simulate everything you're talking about there with the full device elections and such, and some of the other scenarios we're going to cover. And I do agree with that, but here's the problem. We're trying to get someone to understand this from the ground up, and you're talking about $35 more for this extra switch. If you knew how to do this off the top of your head, you're probably already a CCNA, so you don't need a lab such as this, and you've already passed it. But let me just show you now what you can do with three switches.
So now with three switches and this slide, you can see -- and again this is from our free CCNA study guide -- we have a scenario in which we are talking about spanning tree, and there are a lot of different concepts that we're going to talk about: the root bridge, the designated bridge, the non-designated ports, root ports, forwarding ports, blocking ports, and which are doing which. And you can't really do this and experience it in a real-world scenario unless you have the three switches. If you took one out, it just doesn't happen. If you VLAN one off you could probably do a lot of it, but it just doesn't sink in for the student as easily, because that introduces some other issues.
So, the next thing is switch features: which switches do we recommend, which ones should you look for or stay away from, things along those lines. You're going to stay away from the 1900 series (the 1912, 1924), the plain 2900 series (the 2912, the 2924, the 2924M) and also the 3512 and 3524. A lot of people get those confused with the 3550 and 3560, because the 3550s and 3560s are Layer 3; as you can see on the slide, the 3512s and 3524s are not Layer 3 switches. Now, a really great model for the CCNA exam in your lab scenario is the 2950 switch, and that comes in SI and EI versions. The EI, or enhanced image, supports enhanced QoS, 802.1s (multiple instance spanning tree) and 802.1w (rapid spanning tree), and you're going to see some of that stuff on the exam. If you can, get a 2950 that supports that, which are generally your C models and your T models, things along those lines. Now, some people say that Cisco says you should have the 2960, so why are we not saying that you absolutely have to have a 2960 in your lab? Well, think about it from Cisco's perspective. When they wrote the exam, the 2950 had been sunset for five years; are they going to tell you to go buy something that is five years old for your lab? No, they want you up to date on the latest technology and hardware, so that when you're working in the real world, you're going to be familiar with that hardware and IOS. But the reality is the 2950 and 2960 are pretty similar; you will be fine with either. Now that brings us to the 3550 and 3560 switches. They are Layer 3 switches, they can route, a really cool feature. I like to have a 3550 in your CCNA lab; it is touched on a little bit. But again, if you can't afford it, it's not a big deal. You definitely have to have 3550s and 3560s once you get up to your CCNP exams.
Now, something I don't want you to overlook: getting a bunch of equipment and having it on your desk is not helpful if you don't have really good labs to follow. Whether it's a Cisco Press study guide, Todd's book, or some of the other books out there like the Bryant Advantage ones, they really don't have much in regards to labs. They have a lot of great theory, and their study guide might be 600-700 pages in a smaller-size format, but the labs might encompass 20-30 pages total. Most of the labs are one to two pages. Our lab workbook is 450 pages, 8.5" by 11", so there is a lot of information in our lab workbook. Also, with the labs that we sell, we include a Cram Sheet and a How and Why We Subnet Workbook with over 100 examples and exercises that you have to go through; this book is also over 100 pages. We have our Exam Prep Tools, which give you our Exam Test Engine, Electronic Flash Cards, TFTP Server, Subnet Calculator, Binary Bits Game, and 40-plus Instructional Videos. We have this cool poster that we also send out that has comparisons of IPv4 and IPv6. So again, just getting hardware without good labs to follow really isn't going to be that helpful to you.
And finally, as I mentioned earlier in the presentation, if you go to certificationkits.com, and you go over to lab suggestions, and you click CCNA, you’ll see all that information I just talked about, and actually there’s more information there and there’s some tables showing the various routers, switches, IOS versions, what you need. Take some time to chew on it, there’s a lot of good information there. I also might suggest if you go down to the CCNA study center, you will see that we have a free CCNA study guide there, use it! There’s a lot of good information. If you have any questions for us, please feel free to go over here to the Contact Us and shoot us an email. We are here to help you pass your Cisco Certifications, whether its CCNA, CCNP, or CCIE. Thank you and have a great day. | <urn:uuid:7963553c-e662-4a2a-9c25-620e1f406835> | CC-MAIN-2017-04 | https://www.certificationkits.com/diy-ccna-lab-kit-video/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00062-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960254 | 3,229 | 2.765625 | 3 |
Healthcare is one of many industries that will be disrupted by Internet of Things (IoT) technologies. To keep you updated, we’ve compiled several recent articles that discuss the potential impact of IoT on the medical field.
The (Internet Of Things) Doctor Will See You Now – And Anytime
We know that healthcare providers around the world (whether in the public or private sector) are increasingly requiring patients to engage more fully in managing their own state of health. This means health services have to move from a focus on institutional health record management, onward to making that health data available for all patients and their devices. Crucially, doctors must now allow patients to become an active part of the data collection process. This could have a huge impact on the health of nations. Read entire article.
After Big Data—Keep Healthcare Ahead with Internet of Things
In a way, healthcare has spearheaded the forefront of the universal connectivity—some warning signs simply can’t wait for someone to come and check every 6 hours. Telemetry monitors, pulse oximetry, bed alarms are just some examples of how interconnected “things” make for a timely alert system detecting the smallest deviation from normal. One purpose of this near-time update is obvious—early detection leads to early intervention and improved outcome. Read entire article.
Google Takes Aim at Diabetes with Big Data, Internet of Things
Google’s life science team is once again planning to tackle diabetes with the help of big data analytics and innovative Internet of Things technologies. With the formation of a new partnership that enlists the aid of the Joslin Diabetes Center and Sanofi, a multinational pharmaceutical developer, Google hopes to reduce the burden of Type 1 and Type 2 diabetes on both patients and providers. Read entire article.
How The Internet of Things Will Affect Health Care
imagine the value to a patient whose irregular heart rate triggers an alert to the cardiologist, who, in turn, can call the patient to seek care immediately. Or, imagine a miniaturized, implanted device or skin patch that monitors a diabetic’s blood sugar, movement, skin temperature and more, and informs an insulin pump to adjust the dosage. Such monitoring, particularly for individuals with chronic diseases, could not only improve health status, but also could lower costs, enabling earlier intervention before a condition becomes more serious. Read entire article.
Download the Healthcare tech data sheet to learn how Internap provides a HIPAA-compliant environment. | <urn:uuid:9f89248d-db0b-477a-8f1b-1c9d530e9d09> | CC-MAIN-2017-04 | http://www.internap.com/2015/09/03/news-roundup-healthcare-internet-of-things/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00548-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.909353 | 503 | 2.796875 | 3 |
After having agreed on a draft of an official cybersecurity strategy earlier this month, Japan’s National Information Security Center (NISC) is looking to establish a Cyber Security Center – an agency equivalent to the U.S. NSA – and allow it to monitor Internet-based communications.
In order for such actions to be legal, it is necessary first to change some laws, namely Article 21 of the Japanese Constitution and Article 4 of Japan’s Telecommunications Business Law.
“Japan is an island nation, and connected through submarine cables via landing stations. We can tap into these to watch malicious communications. We are not proposing deep packet inspection, for example. The ability to monitor headers and to use lists to stop distributed denial of service attacks might be sufficient,” NISC panel member Motohiro Tsuchiya stated for Defense News.
According to the proposal, the NISC, headed by Prime Minister Shinzo Abe, would serve as cybersecurity command and ultimately create the Cyber Security Center by the end of March 2016.
A Cyber Defense Corps as a standing unit of Japan’s army (“Self Defense Forces”) would also be instituted, and would be responsible for responding to cyber attacks.
Japan has finally realized that that the cyber espionage campaigns that their ministries and defense and other companies have been subjected to are not going away, and that attacks against critical infrastructure-related companies will eventually happen.
The lax data protection laws will also have to be changed.
The proposal, which is now open for input by the public, is to be finalized by July. | <urn:uuid:e5da40a3-f77f-4129-84b4-e4ee0049daf2> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2013/06/14/japan-aims-to-monitor-internet-based-communications/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284405.58/warc/CC-MAIN-20170116095124-00180-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.961778 | 321 | 2.515625 | 3 |
The era of the ever-shrinking transistors may be coming to a end. According to market research and consulting firm iSuppli, Moore’s Law is going to run out of money before it runs out of technology. If true, this would be bad news indeed for the IT-industrial complex, since semiconductor components (CPUs, GPUs, memory devices, etc.) depend on Moore’s Law for their roadmaps, and many businesses directly or indirectly count on the ensuing technological advancements to drive revenue growth and worker productivity.
Moore’s Law, of course, is the observation that the density of transistors on computer chips doubles approximately every two years. Intel co-founder Gordon Moore originally described the trend in a 1965 paper, at a time when transistor densities were actually doubling ever year. More importantly though, Moore observed that the cost per transistor decreased in concert with the shrinking geometries. And it is really this aspect of the model that is breaking.
In fact, it has been apparent for some time that the Moore’s Law curve is running counter to the escalating costs of semiconductor manufacturing, which are rising exponentially as process technology shrinks. This is the result of the increased cost of R&D, testing, and the construction of semiconductor fabrication facilities.
The price tag on a new 45nm fab is over a billion dollars today. AMD’s new foundry partner, Globalfoundries, is constructing a 32nm fab in New York with a budget of $4.2 billion, and Intel has already committed $7 billion to upgrade its fabs to produce 32nm chips. You have to sell a lot of chips to recoup those kinds of costs. And those are just capital expenditures.
In the iSuppli announcement, Len Jelinek, the firm’s director and chief analyst for semiconductor manufacturing, explained it thusly:
“The usable limit for semiconductor process technology will be reached when chip process geometries shrink to be smaller than 20 nanometers (nm), to 18nm nodes. At those nodes, the industry will start getting to the point where semiconductor manufacturing tools are too expensive to depreciate with volume production, i.e., their costs will be so high, that the value of their lifetime productivity can never justify it.”
The operative word is “never.” The iSuppli study predicted that in 2014, when the 18nm and 20nm process nodes are introduced, there will be no economic incentive to build volume semiconductor components below those geometries.
If true, this will tend to level the playing field for semiconductor vendors and especially fabless chip companies. For example, Intel would lose its current chip manufacturing advantage if everyone was stuck on the same process node. More importantly, if transistor size becomes a constant, much more of the burden of computer advancement will be shifted onto other elements of the ecosystem, mainly the folks that do design — chip/device, board, system, and even software.
There would also be increased pressure to abandon legacy architectures in favor of more efficient designs that need proportionally less silicon to do comparable work. Products based on x86 processors and Ethernet networks have been able to advance partly thanks to the ever-shrinking semiconductor components upon which they are based. Without that crutch, more advanced processor designs and interconnects may come to the fore.
To a certain extent, this is already occurring in the high performance computing sector. Moore’s Law is already too slow to keep up with the performance demand of HPC users, and the difference is being made up by aggregating more chips together and attaching accelerators like GPUs, Cell processors and FPGAs. That’s why interconnect technologies have become so important in HPC, which has largely abandoned Ethernet in favor of InfiniBand, and why x86 chips are playing a supporting role on some supercomputers, like the Roadrunner machine at Los Alamos National Lab and the TSUBAME super at Tokyo Tech. I imagine if Moore’s Law comes to a halt or even slows down, non-legacy architectures will become more commonplace in HPC and even generally throughout the ecosystem.
Of course, none of this may come to pass. Moore’s Law is periodically declared dead and has thus far defied its doomsayers. Additional transistor density may be achieved in other ways, such as 3D semiconductor structures. And there’s no shortage of more exotic approaches like carbon nanotubes, silicon nanowires, molecular crossbars, and spintronics. In any case, whatever happens in 2014, we’re bound to be living in interesting times. | <urn:uuid:f4d2e551-5974-4c95-9ba8-9d6f8216bbb3> | CC-MAIN-2017-04 | https://www.hpcwire.com/2009/06/17/the_end_of_moores_law_in_five_years/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00300-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937812 | 965 | 2.640625 | 3 |
This type of virus infects the Master Boot Record or DOS Boot Record of a hard drive, or the Floppy Boot Record of a floppy drive.
Depending on the settings of your F-Secure security product, it will either automatically delete, quarantine or rename the suspect file, or ask you for a desired action.
More scanning & removal options
More information on the scanning and removal options available in your F-Secure product can be found in the Help Center.
You may also refer to the Knowledge Base on the F-Secure Community site for more information.
A boot virus (also known as a boot infector, an MBR virus or DBR virus) targets and infects a specific, physical section of a computer system that contains information crucial to the proper operation of the computer's operating system (OS).
Though boot viruses were common in the early 90s, they became much rarer after most computer motherboard manufacturers added protection against such threats by denying access to the Master Boot Record (the most commonly targeted component) without user permission.
In recent years however, more sophisticated malware have emerged that have found ways to circumvent that protection and retarget the MBR (e.g, Rootkit:W32/Whistler.A).
How a boot virus infects
Boot viruses differ based on whether they target the Master Boot Record (MBR), the DOS Boot Record (DBR) or the Floppy Boot Record (FBR):
- The MBR is the first sector of a hard drive and is usually located on track 0. It contains the initial loader and information about partition tables on a hard disk.
- The DBR is usually located a few sectors (62 sectors after on a hard disk with 63 sectors per track) after the MBR, and contains the initial loader for an operating system and logical drive information.
- The FBR is use for the same purposes as DBR on a hard drive, but it is located on the first track of a diskette.
A boot virus can be further subdivided into either overwriting or relocating:
- An overwriting boot virus overwrites MBR, DBR or FBR sector with its own code, while preserving the original partition table or logical drive information.
- A relocating boot virus saves the original MBR, DBR or FBR somewhere on a hard or floppy drive. Sometimes, such an action can destroy certain areas of a hard or floppy drive and make a disk unreadable.
All boot viruses are memory-resident . When an infected computer is started, the boot virus code is loaded in memory. It then traps one of BIOS functions (usually disk interrupt vector Int 13h) to stay resident in memory.
Once resident in memory, a boot virus can monitor disk access and write its code to the boot sectors of other media used on the computer. For example, a boot virus launched from a diskette can infect the computer's hard drive; it can then infect all diskettes that are inserted in the computer's floppy drive. | <urn:uuid:ab6d48c6-7d95-4021-8193-7a83a284f4d0> | CC-MAIN-2017-04 | https://www.f-secure.com/v-descs/boovirus.shtml | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00420-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.909069 | 621 | 2.953125 | 3 |
Really sensitive chips:
While Duke is looking to advanced chips for DNA processing, Stanford researchers say they have developed a new biosensor microchip that could speed up the drug development process. According to Stanford researchers , the microchips, packed with highly sensitive "nanosensors," analyze how proteins bind to one another, a critical step for evaluating the effectiveness and possible side effects of a potential medication. A single centimeter-sized array of the nanosensors can simultaneously and continuously monitor thousands of times more protein-binding events than any existing sensor. "You can fit thousands, even tens of thousands, of different proteins of interest on the same chip and run the protein-binding experiments in one shot," said Shan Wang, a professor of materials science and engineering, and of electrical engineering, who led the research effort. | <urn:uuid:e67bb292-8ab0-453e-a637-1b874f9133d1> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2869207/data-center/high-tech-healthcare-technology-gone-wild.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00081-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937437 | 166 | 2.90625 | 3 |
In the study of economics there is a technique called Pareto optimality. Pareto Optimality, or Pareto Efficiency, is a guiding force of economic efficiency. Simply put, it is the principle that there exists a balancing point between opposing interests where neither party benefits more than the other. But how does this relate to cybersecurity? Let us explain.
In Part 1, we discussed cybersecurity, primarily from the defensive point of view, and provided a simplistic explanation of Pareto Optimality. But defense isn’t the only side at play. We tend to think, however, in terms of defense, which leans toward a linear thought process. | <urn:uuid:d66a1583-5f09-456b-85c9-3d88c21d4b95> | CC-MAIN-2017-04 | https://www.entrust.com/series/cybersecurity-and-pareto-optimality/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00567-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950783 | 134 | 2.65625 | 3 |
What You'll Learn
- Create and modify tables to organize data and enhance appearance
- Use the Mail Merge task pane with different data sources and main documents to create form letters, envelopes, and mailing labels
- Use styles to create consistently formatted documents and to facilitate changing paragraph and character formatting
- Create and insert Quick Parts to reuse data and content and create documents with consistent standards
- Use existing templates to provide consistent document editing and formatting, and create custom form templates
Who Needs To Attend
This course is targeted to students who have mastered the basics of creating documents in Word 2013 and who are looking to extend their knowledge of the program's features. | <urn:uuid:2136433b-74ee-4b36-8641-0c7db41e36a4> | CC-MAIN-2017-04 | https://www.globalknowledge.com/ca-en/course/117643/microsoft-word-2013-level-2/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00255-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.840267 | 132 | 3 | 3 |
There's been a lot of talk about “the Internet of things”--the hypothetical, interconnected Web of physical objects that will communicate with each other and function as a unified force for good in our lives.
There's just one teeny, tiny problem: how exactly will all of these objects talk to each other? Who gets to design a way for, say, any coffeemaker to talk to any light sensor on the market to make coffee when the sun comes up?
The Outercurve Foundation is announcing today the acceptance of a new project into its ranks that could be a significant step towards solving this not-so-tiny problem of device interconnectedness.
The project is called Mayhem, a scripting system that's spinning out of the Applied Sciences Group at Microsoft. project into the newly-formed Innovators Gallery. The idea behind Mayhem is to enable users to interconnect services and devices within the Windows ecosystem.
At this point, the open source faithful might be wondering why they should care about any of this. Windows? Devices? Hang on, we're getting there.
First off, Mayhem is open source, and has been from its inception. According to Paul Dietz, Mayhem Project Leader, as soon as the team at the Applied Science Group started putting Mayhem together, they realized that in order to meet the challenge of hardware communication on a near-universal scale, they were going to have to rely on open source development practices.
What's also interesting about Mayhem is the way it works. Instead of trying to figure out how a bazillion devices can actually communicate with each other, Deitz explained that no actual data gets exchanged when Mayhem devices talk to each other--just a signal. Because Mayhem just deals with signals, it simplifies the communication enormously. Mayhem enables users to build whatever reaction they want based on the signal, Deitz explained.
So, in the example I outlined earlier (which actually came from Deitz), the coffeemaker doesn't have to know what the light sensor is saying; it just knows that when it gets a signal from the light sensor, it just has to start the reaction to the signal: brewing the coffee.
Mayhem's donation to the Outercurve Foundation marks an important event for the Foundation itself: it's the inaugural project in a new gallery for Outercurve, known as the Innovator's Gallery.
Executive Director Paula Hunter outlined the issue neatly: the Innovator’s Gallery solves a significant issue for the Outercurve staff and board of directors: what to do about open source projects that would do well within the Outercurve umbrella, and yet didn't quite fit within the existing gallery structure in the Foundation.
“Innovation is happening out on the margins,” Sam Ramji, President of the Outercurve Board said, “and open source is very innovative.”
For Ramji and the Outercurve board, they needed a way to bring in these kinds of forward-thinking open source projects that were out on the margins and not quite fitting in Outercurve's model.
Now that Mayhem is within Outercurve, contributors will have an easier time contributing to the Mayhem project. Of course, that doesn't mean the Mayhem folks aren't willing to sweeten the pot a bit for the occasion of Mayhem's launch within Outercurve.
To foster the creation of additional add-ons in Mayhem, which are collections of signals and reactions for existing devices, Outercurve is hosting the “Make Your Own Mayhem” Contest 2012.
According to the press announcement, “[d]evelopers are invited to submit any number of creative add-ons to Mayhem by midnight (Pacific Time), April 30, 2012. Submissions will be evaluated by judges Johnny Chung Lee, Rapid Evaluator, Google; IBM Fellow John Cohn, and MK Haley, Associate Executive Producer – Faculty, Carnegie Mellon University Entertainment Technology Center. Awards include Honorable Mention, Most Awesome Add-on, People’s Choice (most ‘Likes’ on entry video) and the Mayhem Master’s Award 2012, awarded to the developer of the best collection of Mayhem add-ons. Over US$5000 in prizes will be awarded.”
The project is licensed under the MS-PL, and currently has ties to the Windows environment to put reactions together. But if the Mayhem framework remains open, there’s no reason other operating systems, mobile or otherwise, wouldn’t be able to take advantage of Mayhem signals.
It’s an interesting approach, and one of the more elegant ways I've seen thus far to addressing the real mechanics of the “Internet of things.”
Read more of Brian Proffitt's Zettatag and Open for Discussion blogs and follow the latest IT news at ITworld. Drop Brian a line or follow Brian on Twitter at @TheTechScribe. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook. | <urn:uuid:a3d2780b-530f-45f5-a9f7-fcf0f1946589> | CC-MAIN-2017-04 | http://www.itworld.com/article/2730580/mobile/battling-device-mayhem-with-mayhem.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00071-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938224 | 1,043 | 2.546875 | 3 |
Giant Mirrors Light Norwegian Valley
/ August 20, 2013
The Norwegian town of Rjukan is located in a valley between steep hills -- so steep, in fact, that the town is impenetrable by sunlight for five months of the year.
But three sets of giant mirrors, with a surface area of 538 square feet, are poised to shed some light on Rjukan during its dark months. The mirrors, located on surrounding mountains, will get their first real-world test in September, the month in which the darkness descends on the town. The mirrors will be remotely controlled via a computer at the town hall, in order to reflect sunlight into a 2,150-square-foot area of the town square.
The idea was first considered more than 100 years ago, but the technology to turn the idea into reality did not yet exist. The installation cost the town of more than 3,000 residents approximately $850,000. The mirrors will be powered with solar and wind energy. | <urn:uuid:6c2da3c7-23de-4f01-8322-efa080dcbb5e> | CC-MAIN-2017-04 | http://www.govtech.com/photos/Photo-of-the-Week---Giant-Mirrors-Light-Norwegian-Valley.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00375-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.97104 | 208 | 2.578125 | 3 |
Definition: An organization of information, usually in memory, for better algorithm efficiency, such as queue, stack, linked list, heap, dictionary, and tree, or conceptual unity, such as the name and address of a person. It may include redundant information, such as length of the list or number of nodes in a subtree.
Specialization (... is a kind of me.)
external memory data structure, passive data structure, active data structure, persistent data structure, recursive data structure.
See also abstract data type.
Note: Most data structures have associated algorithms to perform operations, such as search, insert, or balance, that maintain the properties of the data structure.
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 15 December 2004.
HTML page formatted Mon Feb 2 13:10:39 2015.
Cite this as:
Paul E. Black, "data structure", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 15 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/datastructur.html | <urn:uuid:0df459b2-ab8f-4b10-8974-1660ca70f93c> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/datastructur.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00549-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.882873 | 254 | 2.765625 | 3 |
First time creating SQL Stored Procedure and using the Debugger as described in Kent Milligan's pdf.
Thank you Kent!
First part of mysql definition is:
CREATE PROCEDURE test/mysql
(IN out_loc CHARACTER (4),
IN cto_num DECIMAL(13,0),
IN bat_dat DECIMAL(7,0)
I called mysql from Run Sql Scripts with: call testlib/mysql('82 ',8268773,1140627)
My question: why does the character variable show on the Console tab like:
MYSQL.OUT_LOC = SPP:0000800000000270
Decimal variables show like:
MYSQL.CTO_NUM = 0000008268773.
MYSQL.BAT_DAT = 1140627. | <urn:uuid:7ee9a9f9-515d-41f1-b9d0-8c7682ac400c> | CC-MAIN-2017-04 | http://archive.midrange.com/midrange-l/201408/msg00929.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00365-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.689653 | 180 | 2.625 | 3 |
SSL Certificate Installation in Apache
Apache Server SSL Certificate Installation
If you are installing an Extended Validation SSL Certificate, use our Apache EV SSL Certificate Installation Instructions. If you are installing any other certificate, follow the instructions below.
Copy the Certificate files to your server.
Download your Intermediate (DigiCertCA.crt) and Primary Certificate (your_domain_name.crt) files from your Customer Area, then copy them to the directory on your server where you will keep your certificate and key files. Make them readable by root only.
Find the Apache config file to edit.
The location and name of the config file can vary from server to server - especially if you use a special interface to manage your server configuration.
Apache's main configuration file is typically named httpd.conf or apache2.conf. Possible locations for this file include /etc/httpd/ or /etc/apache2/. For a comprehensive listing of default installation layouts for Apache HTTPD on various operating systems and distributions, see Httpd Wiki - DistrosDefaultLayout.
Often, the SSL Certificate configuration is located in a <VirtualHost> block in a different configuration file. The configuration files may be under a directory like /etc/httpd/vhosts.d/, /etc/httpd/sites/, or in a file called httpd-ssl.conf.
One way to locate the SSL Configuration on Linux distributions is to search using grep, as shown in the example below.
Type the following command:
Where "/etc/httpd/" is the base directory for your Apache installation.
Identify the SSL <VirtualHost> block to configure.
If you need your site to be accessible through both secure (https) and non-secure (http) connections, you will need a virtual host for each type of connection. Make a copy of the existing non-secure virtual host and configure it for SSL as described in step 4.
If you only need your site to be accessed securely, configure the existing virtual host for SSL as described in step 4.
Configure the <VirtualHost> block for the SSL-enabled site.
Below is a very simple example of a virtual host configured for SSL. The parts listed in bold are the parts that must be added for SSL configuration:
Adjust the file names to match your certificate files:
- SSLCertificateFile should be your DigiCert certificate file (eg. your_domain_name.crt).
- SSLCertificateKeyFile should be the key file generated when you created the CSR.
- SSLCertificateChainFile should be the DigiCert intermediate certificate file (DigiCertCA.crt) If the SSLCertificateChainFile directive does not work, try using the SSLCACertificateFile directive instead.
Test your Apache config before restarting.
It is always best to check your Apache config files for any errors before restarting, because Apache will not start again if your config files have syntax errors. Run the following command: (it is apache2ctl on some systems)
You can use apachectl commands to stop and start Apache with SSL support:
Note: If Apache does not start with SSL support, try using "apachectl startssl" instead of "apachectl start". If SSL support only loads with "apachectl startssl" we recommend you adjust the apache startup configuration to include SSL support in the regular "apachectl start" command. Otherwise your server may require that you manually restart Apache using "apachectl startssl" in the event of a server reboot. This usually involves removing the <IfDefine SSL> and </IfDefine> tags that enclose your SSL configuration.
Visite nuestras instrucciones en español para Apache Instalar Certificado SSL.
If your web site is publicly accessible, our SSL Certificate Tester tool can help you diagnose common problems.
For help moving your certificates to additional servers or across server platforms, see our OpenSSL export instructions.
If you need to disable SSL version 2 compatibility in order to meet PCI Compliance requirements, you will need to add the following directive to your Apache configuration file:
If the directive already exists, you will probably need to modify it to disable SSL version 2.
Also you can visit our page with instructions to fix common Apache SSL Errors for additional tips.
Apache Server Configuration
For information about Apache server configurations that can strengthen your SSL environment:
Instructions for disabling the SSL v3 protocol.
Information about enabling perfect forward secrecy.
Installing your SSL Certificates in Apache
How to install your SSL Digital Certificate. | <urn:uuid:c033397a-a909-4d72-898b-e779c62c9ce2> | CC-MAIN-2017-04 | https://www.digicert.com/ssl-certificate-installation-apache.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281331.15/warc/CC-MAIN-20170116095121-00365-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.820109 | 968 | 2.609375 | 3 |
Posted 26 July 2006 - 06:40 PM
It is easy to convert partitions to NTFS. The Setup program makes conversion easy, whether your partitions used FAT, FAT32, or the older version of NTFS. This kind of conversion keeps your files intact (unlike formatting a partition). If you do not need to keep your files intact and you have a FAT or FAT32 partition, it is recommended that you format the partition with NTFS rather than convert from FAT or FAT32. Formatting a partition erases all data on the partition and allows you to start with a clean drive.
Whether a partition is formatted with NTFS or converted using the convert command, NTFS is the better choice of file system. For more information about Convert.exe, after completing Setup, click Start, click Run, type cmd, and then press ENTER. In the command window, type help convert and then press ENTER.
In order to maintain access control on files and folders and support limited accounts, you must use NTFS. If you use FAT32, all users will have access to all files on your hard drive, regardless of their account type (administrator, limited, or standard.)
NTFS is the file system that works best with large disks. (The next best file system for large disks is FAT32.)
There is one situation in which you might want to choose FAT or FAT32 as your file system. If it is necessary to have a computer that will sometimes run an earlier version of Windows and other times run Windows XP, you will need to have a FAT or FAT32 partition as the primary (or startup) partition on the hard disk. Most earlier versions of Windows cannot access a partition if it uses the latest version of NTFS. The two exceptions are Windows 2000 and Windows NT 4.0 with Service Pack 4 or later. Windows NT 4.0 with Service Pack 4 or later has access to partitions with the latest version of NTFS, but with some limitations: It cannot access files that have been stored using NTFS features that did not exist when Windows NT 4.0 was released.
For anything other than a situation with multiple operating systems, however, the recommended file system is NTFS.
Once you convert a drive or partition to NTFS, you cannot simply convert it back to FAT or FAT32. You will need to reformat the drive or partition which will erase all data including programs and personal files on the partition.
-from Windows XP help. | <urn:uuid:9de1f003-6162-4bce-8461-73d11fe51b3a> | CC-MAIN-2017-04 | https://www.bleepingcomputer.com/forums/t/60107/changing-fat32-to-ntfs-windows-xp-pro/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282140.72/warc/CC-MAIN-20170116095122-00273-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.905055 | 516 | 2.8125 | 3 |
RTP (Real-time Transport Protocol)
Application layer protocol RTP is accessible in the TCP/IP protocol suite. Assigned port for this protocol is 5004 and it belongs to the working groups “AVT” and “FEC Framework”. As a standardized packets format, Real-time Transport Protocol (RTP) is used to deliver the audio or video or both on the IP networks. IETF standards association working group of Audio and Video Transport was built up it, at first.
Features of RTP are included end to end communication and data streams transmission in real time manners. But transfer of data to more than one destination is done with the IP multicast support in case of RTP. Moreover, RTP as a primary audio/video transport standard within the IP networks is used along with payload format and connected profile. Today, communication and entertainment systems with streaming media (telephony) are being used it extensively. And some common examples of such systems are such as teleconference applications and television services. | <urn:uuid:f6f7e411-41bd-4d83-b239-beca2f25cf6b> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/tag/real-time-transport-protocol | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00485-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.935993 | 210 | 2.859375 | 3 |
More than 90% of user-generated passwords will be vulnerable to hacking in a matter of seconds, according to Deloitte’s Canadian Technology, Media & Telecommunications (TMT) Predictions 2013 report.
“Passwords containing at least eight characters, one number, mixed-case letters and non-alphanumeric symbols were once believed to be robust. But these can be easily cracked with the emergence of advance hardware and software,” said Duncan Stewart, Director of Research, Deloitte Canada and co-author of TMT Predictions 2013.
“A machine running readily available virtualization software and high-powered graphics processing units can crack any eight-character password in about five hours.”
It’s human behavior and a tendency for password re-use that puts password security at risk. Moving to longer passwords or to truly random passwords is unlikely to work, since people just won’t use them. Multi-factor authentication using tokens, cellphones, credit cards or even biometrics are likely solutions.
The report also reveals that the PC is not dead, as more than 80% of Internet traffic measured in bits will continue to be generated on traditional personal computers (desktops and laptops). And of the total time spent on PCs, tablets and smartphones combined, more than 70% will be using PCs. This includes both work and home usage.
Deloitte also predicts that “mobile advertising” will thrive, and that very few additional companies will adopt a bring-your-own-computer (BYOC) policy where the employer pays for the PC. At the same time, 50% of Fortune 500 companies will allow employees to bring their own personally-owned and paid for computers. | <urn:uuid:cf32e784-6ff9-4b83-b35e-fb97d959f4fc> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2013/01/18/the-end-of-strong-password-only-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00266-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929416 | 355 | 2.59375 | 3 |
What is NFC?
According to Wikipedia, NFC — no, not the NFL’s National Football Conference — “allows for simplified transactions, data exchange, and wireless connections between two devices in close proximity to each other, usually by no more than a few centimeters.”
Sounds similar to Bluetooth, doesn’t it? Well, it is similar. They actually complement each other quite nicely. Overall, NFC is lower-power and will likely be used for authenticating two devices quickly. NFC has a lower bit rate than Bluetooth (424 Kbits/s versus 2.1 Mbits/s), therefore it isn’t the best choice for a constant connection (e.g., Bluetooth headset, keyboard, mouse, etc.).
However, it can be used to simplify the Bluetooth-pairing process. For example, you have a mobile device that you want to connect to your laptop — let’s say to transfer some pictures. You tap your phone on the laptop, enter in a PIN, and you are paired. No messy Bluetooth pairing process. Just a simple tap-and-go. Now you can move your phone several feet away from the laptop because you are connected via Bluetooth, which provides extended range.
Now, that is just one example of how NFC can be used. Check out the below video by Google:
Google I/O Conference — Near-Field Communication
Why Should you Care?
NFC has been around for approximately 10 years. In the past, many skeptics claimed that it was a “solution looking for a problem.” Indeed, that was arguably the case at that time. However, with the growing trend of “internet of things” — appliances, cars, houses and TVs will likely be interconnected, coupled with the ubiquity of mobile devices — NFC is well suited to bridge technology of the physical and digital worlds. On the mobile payment/wallet side, NFC is well suited to be the key enabling technology behind it.
— Microsoft: ‘Microsoft Confirms NFC Support in Windows Phone’
Now, these plans center on having embedded NFC chips; the technology required to do NFC transactions is embedded in the device itself. The other alternative is using a third-party solution, such as Device Fidelity, which makes SIM/memory cards and cases that include the required NFC technology.
Each approach has its pros and cons. On the embedded side, it’s clearly the easiest path as everything you need should be included; we’ll discuss the Secure Element later. The downside is that carriers and handset makers control a large part of the value chain, potentially limiting features and functions (e.g., carriers in some regions may block out functionality for a variety of reasons). But, of course, for the general consumer, embedded will likely be the dominant approach.
When using a technology such as Device Fidelity, the main advantage here is control — regardless of what carrier you are using, device features, regions and so on. On the downside, it requires extra hardware (and software) to manage in order to gain NFC capabilities.
So what are the projections? Well, research firm ABI indicated that “by 2016, 552 million handsets will have NFC.” With the current growth of mobile devices, and rapid release cycles, it is likely that it will be difficult to buy a phone without NFC in the near future. In my personal opinion, it will be difficult to purchase a mobile device without NFC capabilities after mid-to-late 2012. And if Apple ever includes NFC, that very well could be the tipping point.
Either way, NFC is making its way into our lives. The applications are quite numerous. Just take a look at what is being tested in some cities, such as mobile payments, public transportation, medical record access, event ticketing and more.
In the next blog entry, I will cover the applications of NFC. So, stay tuned for NFC: It’s all about the Consumer first. | <urn:uuid:bdee73de-6746-4c22-8379-012308b2523b> | CC-MAIN-2017-04 | https://www.entrust.com/near-field-communication-nfc-what-is-it-why-should-you-care/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00476-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948625 | 821 | 2.875 | 3 |
What is computer performance? Those of us in the field think we know what it means, but do we really? Is the user’s view of performance what we actually measure?
I found myself pondering over this question when I found myself talking to one of my computer-illiterate friends. He’s super-smart, but has no patience to learn the technical underpinnings of his PC. I can relate – I don’t actually know how my flat panel TV works, and have only a vague notion of how my car moves me from one place to another. I just expect them to work. That’s how he feels about PCs. When he is browsing, opening a program or trying to print, he blames the PC itself for any slowdowns. I found him searching for faster machines, using the GHz rating to determine the relative speed.
Any performance analyst worth their salt is shaking their heads right now. Clock rating is only one of so many components that we need to look at to understand performance. We pat ourselves on the back; we are so much more knowledgeable than the average user. But do we have tunnel vision too? Is our pool of data including all aspects of an end user experience?
We work in silos of data ourselves. My friend’s is very limited – he only sees one piece of hardware and one aspect of that hardware as the problem. But when I did performance for a living, we weren’t all that much better. We had a metric called response time (yes, mainframes measure that), but it wasn’t really what the user saw. That number represented the point when data arrived at the mainframe with a request to when the request came back to the mainframe. All the back-end network, potentially other servers and the internet was completely ignored.
First, we need to know what we are measuring and what we should be measuring. We want to clock from the moment the end user hits “enter” to when he receives a response on whatever device he chose. Fortunately, there are solutions that can simulate this or actually measure user interactions. We simply have to employ them to get a real number.
Second, we have to understand what a transaction is to our user. Not what we think of it as in IT terms, but the real business transaction that we deliver. It helps to have a way to map it – which servers does it traverse? Which networks? What data stores does it need? We have always tried to measure the components of response time, the “speeds and feeds,” or more accurately, the “using and the waiting times,” but now, we can’t understand where the problem is unless we know the transaction path. Again, there are tools that help you build a CMDB, automatically discovering the assets and relationships. But you have to know that this needs to be done.
Finally, you have to move this all to a proactive approach, where you set thresholds and monitor, to detect and repair issues before the user sees them (or buys a new PC, because his is “too slow.”) You need to do this because most users blame the owners of a web site or program for their performance woes, not their PC. And that means they blame you. And understand this – cloud will not fix this problem for you – it only makes it more difficult. Get this right now, using the right automation tools, so you can limit those panicked help desk calls. Be a performance hero by understanding what your users mean by performance. | <urn:uuid:f464533c-e00e-42c7-bc5c-5d071d160d90> | CC-MAIN-2017-04 | http://www.bmc.com/blogs/buried-in-data-getting-our-heads-out-of-the-sand-in-performance/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00292-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967282 | 737 | 2.515625 | 3 |
The epigenetics market, valued as a USD 4.38 billion market in 2016, is expected to become a USD 14.32 billion market by 2021, showing a CAGR as high as 26.72%.
Epigenetics is the study of changes in gene expression caused by certain base pairs in DNA, or RNA, being "turned off" or "turned on" again, through chemical reactions. In biology, and specifically genetics, epigenetics is mostly the study of heritable changes that are not caused by changes in the DNA sequence; to a lesser extent, epigenetics also describes the study of stable, long-term alterations in the transcriptional potential of a cell that are not necessarily heritable.
The term also refers to the changes themselves- functionally relevant changes to the genome that do not involve a change in the nucleotide sequence. Examples of mechanisms that produce such changes are DNA methylation and histone modification, each of which alters how genes are expressed without altering the underlying DNA sequence.
Global Epigenetics Market for Therapeutics, Research & Development- Market Dynamics
The report lists several driving and restraining factors of the global epigenetics market. Some of them are listed below.
The epigenetics market is segmented on the basis of mechanism, applications, and geography. The mechanism sector can be segmented into three different categories namely DNA Methylation, RNA Interference and Histone Modifications. The application segment can again be divided into categories such as Therapeutics and Research & Diagnostics. On the basis of geography, the market is divided into North America, Europe, APAC and the Rest of the World.
Some of the key players in the market are:
Key Deliverables in the Study | <urn:uuid:161f2352-c9d7-44ec-85b9-8d36ee30441e> | CC-MAIN-2017-04 | https://www.mordorintelligence.com/industry-reports/global-epigenetics-applications-market-growth-trends-forecasts-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00412-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939415 | 350 | 2.5625 | 3 |
In a new study, researchers from the Cambridge Crystallographic Data Centre (CCDC) in the UK and the US Department of Energy's (DOE's) Argonne National Laboratory have teamed up to capture neon within a porous crystalline framework. Neon is the most unreactive element and is a key component in semiconductor manufacturing, but it had never been studied within an organic or metal-organic framework (MOF) until now. These new results, which include critical studies carried out at the Advanced Photon Source (APS), a DOE Office of Science user facility at Argonne, also point the way towards a more economical and greener industrial process for neon production.

Although neon is best known for its iconic use in neon signs, its industrial applications have recently become dominated by its use in excimer lasers to produce semiconductors. Despite neon being the fifth most abundant element in the atmosphere, the cost of the pure gas has risen significantly over the years, increasing the demand for better ways to separate and isolate it.

In 2015, CCDC scientists presented a talk at the annual American Crystallographic Association (ACA) meeting on the array of elements that have been studied within an organic or metal-organic environment. They challenged the crystallographic community to find the next, and possibly last, element to be added to the Cambridge Structural Database (CSD). A chance encounter at that meeting with Andrey Yakovenko, a beamline scientist at the APS, resulted in a collaborative project to capture neon – the 95th element to be observed in the CSD.

Neon's low reactivity, along with the weak scattering of X-rays due to its relatively low number of electrons, means that conclusive experimental observation of neon captured within a crystalline framework is very challenging. By conducting in situ high-pressure gas flow experiments at X-Ray Science Division beamline 17-BM at the APS, using the X-ray powder diffraction technique at low temperatures, the researchers have now managed to elucidate the structure of two different metal-organic frameworks (MOFs) with neon gas captured inside them.

"This is a really exciting moment representing the latest new element to be added to the CSD and quite possibly the last, given the experimental and safety challenges associated with the other elements yet to be studied," said Peter Wood, senior research scientist at the CCDC and lead author of a paper on this work in Chemical Communications. "More importantly, the structures reported here show the first observation of a genuine interaction between neon and a transition metal, suggesting the potential for future design of selective neon capture frameworks."

The structure of neon captured within a MOF known as NiMOF-74, a porous framework built from nickel metal centers and organic linkers, shows clear nickel-to-neon interactions forming at low temperatures. These interactions are significantly shorter than would be expected from a typical weak contact.

"These fascinating results show the great capabilities of the scientific program at 17-BM and the Advanced Photon Source," said Yakovenko. "Previously we have been doing experiments at our beamline using other much heavier, and therefore easily detectable, noble gases such as xenon and krypton. However, after meeting co-authors Pete, Colin, Amy and Suzanna at the ACA meeting, we decided to perform these much more complicated experiments using the very light and inert gas – neon. In fact, only by using a combination of in situ X-ray powder diffraction measurements, low temperature and high pressure have we been able to conclusively identify the neon atom positions beyond reasonable doubt."

"This is a really elegant piece of in situ crystallography research and it is particularly pleasing to see the collaboration coming about through discussions at an annual ACA meeting," said Chris Cahill, past president of the ACA and professor of chemistry at George Washington University.

This story is adapted from material from the Cambridge Crystallographic Data Centre, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier.
Mice were bred in specified-pathogen-free facilities at the University Hospital Zurich and Washington University, and housed in groups of 3–5 under a 12 h light/12 h dark cycle (from 7 a.m. to 7 p.m.) at 21 ± 1 °C, with sterilized chow food (Kliba No. 3431, Provimi Kliba) and water ad libitum. Animal care and experimental protocols were in accordance with the Swiss Animal Protection Law and approved by the Veterinary Office of the Canton of Zurich (permits 123, 130/2008, 41/2012 and 90/2013). The following mice were used in the present study: C57BL/6J, PrnpZH1/ZH1 (ref. 3), co-isogenic C57BL/6J PrnpZH3/ZH3 and PrnpWT/WT control mice6, and Schwann cell-specific DhhCre::Gpr126fl/fl mutants3, 4. Mice of both genders were used for experiments unless specified. Archival tissues from previous studies1, 6 were also analysed in the current study. No statistical methods were used to predetermine sample size. The experiments were not randomized, and the investigators were not blinded to allocation during experiments and outcome assessment except where stated.

Sciatic nerves from postnatal day 2–5 were dissected using microsurgical techniques. Nerves were dissociated in serum-free DMEM supplemented with 0.05% collagenase IV (Worthington) for 1 h in the incubator, then mechanically dissociated using fire-polished Pasteur pipettes. Cells were filtered through a 40-μm cell strainer and washed in Schwann cell culture medium (DMEM, Pen-Strep, Glutamax, 10% FBS) by centrifugation at 300g for 10 min. Resuspended cells were plated on 3.5 cm Petri dishes previously coated with 0.01% (w/v) poly-L-lysine and laminin (1 mg/ml). Laminin (Cat. No. L2020; from Engelbreth-Holm-Swarm murine sarcoma basement membrane) and poly-L-lysine were obtained from Sigma-Aldrich.

Full-length recombinant PrP (recPrP, residues 23–231) and the globular domain (GD, residues 121–231) were purified as previously described21, 22, 23. The generation of the GST fusion FT-PrP expression vector (pGEX-KG FT-PrP) was described previously; a modified purification protocol was used24. The FT-PrP expression vector was transformed into the BL21 (DE3) strain of Escherichia coli (Invitrogen). Bacteria were grown in Luria-Bertani medium to an OD of 0.6, and expression of the fusion protein was induced with 0.5 mM isopropyl-1-thio-β-D-galactopyranoside (AppliChem). Cells were then grown for another 4 h at 37 °C with shaking at 100 rpm. Cells were pelleted at 5,000g for 20 min at 4 °C (Sorvall centrifuge, DuPont). The pellet was resuspended on ice in lysis buffer (phosphate-buffered saline supplemented with complete protease inhibitors (EDTA-free, Roche), phenylmethylsulfonyl fluoride (Sigma) and 150 μM lysozyme (Sigma)) and incubated on ice for 30 min. Triton X-100 (1%), MgCl2 (10 mM) and DNase I (5 μg/ml, Roche) were then added, and the lysate was incubated on ice for a further 30 min. The lysate was then centrifuged for 20 min at 10,000g at 4 °C. Glutathione sepharose beads were washed with PBS and incubated with the cell lysate for 1 h at 4 °C on a rotating device. Beads were packed into a column and washed with PBS until a stable baseline was reached, as monitored by absorbance at A280 using an ÄKTAprime (GE Healthcare). The fusion protein was cleaved on the beads with 5 U/ml thrombin (GE Healthcare) for 1 h at room temperature under agitation. For thrombin removal, benzamidine sepharose beads were added and incubated for 1 h at 4 °C on a rotating wheel. Protein preparations were analysed on 12% NuPAGE gels followed by Coomassie or silver staining.
To achieve a higher purity of the protein, we next applied it to a sulfopropyl (SP) sepharose column equilibrated with 50 mM Tris-HCl buffer, pH 8.5. Elution was performed with a linear NaCl gradient of 0–1,000 mM. Fractions containing the protein were collected and concentrated (AMICON; MWCO 3500). The protein was then injected in 500 μl portions into a size-exclusion chromatography system (TSK-GEL G2000SW column, Tosoh Bioscience) and eluted with a linear gradient using PBS. Pure fractions were combined, concentrated and stored at −20 °C. The purity of FT-PrP was >95–98% as judged by a silver-stained 12% NuPAGE gel.

SW10 cells and clones derived from them were all grown in DMEM supplemented with 10% fetal bovine serum (FBS), penicillin-streptomycin and Glutamax (all obtained from Invitrogen). HEK293T cells, their clonal variant HEK293(H) cells, and clones derived therefrom overexpressing various GPCRs were grown in DMEM-F12 medium supplemented with 10% FCS, penicillin-streptomycin and Glutamax (all obtained from Invitrogen). All cell lines were regularly monitored for mycoplasma contamination. The authenticity of SW10 and its derivatives was established by monitoring the expression of Schwann cell-specific markers (Extended Data Fig. 6a).

Human Gpr126 (NM_020455), Gpr124, Gpr64, Gpr56, Gpr133, Gpr56 and Gpr176 expression plasmids (pCGpr126-V5, pCGpr124-V5, pCGpr65-V5, pCGpr56-V5, pCGpr133-V5, pCGpr56-V5 and pCGpr176-V5) were generated by PCR amplification of the respective cDNAs followed by TOPO cloning into the pCDNA3.1/V5-His-TOPO vector. Each cDNA was in frame with the V5 tag (sequence: GKPIPNPLLGLDST) at the C terminus. HEKGPR126 and HEKGPR176 cells were generated by transfecting 1 μg of plasmid into one well of a subconfluent 6-well plate using 3 μl Fugene (Roche) according to the manufacturer's protocol. Twenty-four hours after transfection, cells were transferred to a 10-cm dish and grown in selective medium containing 0.4 mg/ml G418 (Invitrogen) until resistant colonies emerged. A limiting dilution was carried out to obtain clonal lines. Membrane expression of the transgene was assessed in the selected clones by confocal microscopy using a 1:100 dilution of anti-V5 antibody (Invitrogen) and the Cytofix/Cytoperm kit (Pharmingen, Cat. No. 554714), according to the manufacturer's protocol.

Cerebellar granule neurons were generated from 7–8-day-old PrnpZH1/ZH1 mice as described previously25. Cultures were plated at 350,000 cells per cm2 in Basal Medium Eagle (BME, Invitrogen) with 10% (v/v) FCS and maintained at 37 °C in 5% CO2. pCDNA-PrPC was generated by cloning murine PrPC into the pCDNA3.1 vector as described previously26. A site-specific mutagenesis kit (Stratagene) was used to introduce alanine substitutions into the QPSPG and KKRPK domains of PrPC. Primers used for generating the Ala-QPSPG plasmid were: forward, GTG GAA GCC GGT ATC CCG GGG CGG CAG CCG CTG CAG GCA ACC GTT ACC C; reverse, GGG TAA CGG TTG CCT GCA GCG GCT GCC GCC CCG GGA TAC CGG CTT CCA C. Primers for Ala-KKRPK were: forward, CTA TGT GGA CTG ATG TCG GCC TCT GCG CAG CGG CGC CAG CGC CTG GAG GGT GGA ACA CCG; reverse, CGG TGT TCC ACC CTC CAG GCG CTG GCG CCG CTG CGC AGA GGC CGA CAT CAG TCC ACA TAG. Transfections were performed with Lipofectamine 2000 (Invitrogen) according to the manufacturer's protocol, using 3 μg of DNA per well of a 6-well plate. Cells were washed 24 h after transfection with PBS, and fresh medium was added.
HEK293T and HEKGPR126 cells growing in T75 flasks at 50% density were treated with recombinant FT or GD (2 μM, 20 min). Cells were washed twice in PBS and lysed in IP buffer: 1% Triton X-100 in PBS, 1× protease inhibitors (Roche) and Phospho stop (Roche) for 20 min on ice followed by centrifugation at 5000 rpm for 5 min at 4 °C. BCA assays were performed to quantify the amount of protein, and 500 μg of protein was used for immunoprecipitations. 2 μg anti-V5 antibody was added to the cell lysate and incubated on a wheel rotator overnight at 4 °C. On the following day, Protein G dynabeads (Invitrogen) were added to the samples and incubated for a further 3 h on the wheel at 4 °C. Beads were washed three times for 5 min each using the IP buffer followed by addition of 2× sample buffer containing DTT (1 mM final). Samples were heated at 95 °C for 5 min, loaded on 4–12% Novex Bis-tris gels (Invitrogen), and migrated for 1.5 h at 150 V followed by western blotting. Immunoprecipitations were performed by adding 2 μg of POM2 antibody to 500 μl of cell medium and incubating overnight on a wheel rotator at 4 °C. Protein G beads were then added, and incubation on a wheel rotator at 4 °C was performed again. RNA extraction and quantitative PCR were performed as described previously1. The following primers were used: EGR2 forward: 5′-AATGGCTTGGGACTGACTTG-3′; EGR2 reverse: 5′-GCCAGAGAAACCTCCATT-3′; GAPDH forward: 5′-CCACCCCAGCAAGGAGAC-3′; GAPDH reverse: 5′-GAAATTGTGAGGGAGATGCT-3′. Adult zebrafish were maintained in the Washington University Zebrafish Consortium facility ( http://zebrafishfacility.wustl.edu/) and all experiments were performed in compliance with institutional protocols. Embryos were collected from harem matings or in vitro fertilization, raised at 28.5 °C, and staged according to standard protocols27. The gpr126st49 and gpr126st63 mutants were described previously7, 8. gpr126st63 or gpr126st49 mutants were collected from homozygous mutant crosses and wild-type larvae were collected from AB* strain crosses and raised to 50 hpf. FT treatment of gpr126 mutants was performed as previously described15. Briefly, egg water was replaced with either 20 μM FT in egg water or egg water containing an equivalent volume of DMSO. At 55 hpf, larvae were washed twice and raised in egg water to 5 dpf. Wild-type and gpr126 larvae were fixed in 2% paraformaldehyde plus 1% tricholoroacetic acid in phosphate buffered saline, and Mbp and acetylated tubulin immunostaining was performed as described previously8, 28. Expression scoring was performed with observers blinded to treatment according to the following rubric: strong, strong and consistent expression throughout PLLn; some, weak but consistent expression in PLLn; weak, weak and patchy expression in PLLn; none, no expression in PLLn. n = three independent replicate gpr126st63 assays and one gpr126st49 assay. n = 87 DMSO-treated gpr126st63 larvae, 81 Prp-FT-treated gpr126st63 larvae, 27 DMSO-treated gpr126st49 larvae, 25 Prp-FT-treated gpr126st49 larvae. Fluorescent nerve images were analysed using the Fiji software29. A rectangular region-of-interest (ROI) was drawn longitudinally over the fluorescent nerve. The longitudinal grey-scale histogram of the myelin basic protein (Mbp) was normalized pixel-by-pixel to the corresponding intensity of the acetylated tubulin (AcTub). The size of the measured ROIs was kept constant across different treatment modalities. 
SW10 cells were grown in P75 flasks at 50% density, rinsed with PBS, and detached from culture flasks with dissociation buffer containing EDTA (GIBCO). After detaching, cells were washed to remove residual EDTA and counted using a Neubauer chamber. Batches of 105 SW10 cells were transferred to FACS tubes, treated with HA-tagged recombinant peptides for 20 min, washed, and incubated with Alexa-488 conjugated anti-HA antibody for 30 min. After further washes and centrifugations, cells were resuspended in 200 μl FACS buffer (PBS +10% FBS) and analysed with a FACS Canto II cytofluorimeter (BD Biosciences). Data were analysed using FloJo software. Schwann cells were lysed in cell-lysis buffer (Tris-HCl 20 mM, NaCl 137 mM, Triton-X-100 1%) supplemented with protease inhibitor cocktail (Roche complete mini). The lysate was homogenized by passing several times through a 26G syringe, and cleared by centrifugation at 8,000g, 4 °C for 2 min. in a tabletop centrifuge (Eppendorf 5415R). Protein concentration was measured with the BCA assay (Thermo Scientific). 10 μg total protein was boiled in 4 × LDS (Invitrogen) at 95 °C for 5 min. After a short centrifugation, samples were loaded on a gradient of 4–12% Novex Bis-Tris Gel (Invitrogen) for electrophoresis at constant voltage of 200 V. Gels were transferred to PVDF membranes with the iBlot system (Life technologies). Membranes were blocked with 5% Top-Block (Sigma) in PBS-T for 1h at room temperature. Primary antibody was incubated overnight in PBS-T with 5% Top-Block. Membranes were washed three times with PBS-T for 10 min and incubated for 1 h with secondary antibodies coupled to horseradish peroxidase at room temperature. After three washes with PBS-T, the membranes were developed with a Crescendo chemiluminescence substrate system (Millipore). Signals were detected using a Stella 3200 imaging system (Raytest). Monoclonal antibodies against PrPC were obtained and used as described previously4. Fab3 and Fab71 antibodies were generated using the phage display technology and their epitopes were mapped with overlapping peptides. Anti AKT, p-AKT were obtained from Cell signaling and used at 1:2,000 dilutions for western blotting. The anti-p75NGF receptor antibody was obtained from Abcam and used at a 1:200 dilution for immunofluorescence. Anti V5 antibody was from Invitrogen and used at a dilution of 1:500 for western blot and 2 μg antibody was used for immunoprecipitation on 500 μg of cell lysate. In the direct cAMP ELISA assay, cAMP levels were assessed with a colorimetric competitive immunoassay (Enzo Life Sciences). Quantitative determination of intracellular cAMP was performed in cells or tissues lysed in 0.1 M HCl to stop endogenous phosphodiesterase activity and to stabilize the released cAMP. SW10 or HEK293T cells (100,000 cells per well) were plated in 6-well plates to ~50% density. Cells were treated with conditioned medium or recombinant peptides (2 μM, unless specified) for 20 min unless otherwise mentioned. Cells were lysed with 0.1 M HCl lysis buffer (Direct cAMP ELISA kit, Enzo). To ensure complete detachment of cells, cell scrapers were used. Lysates were homogenized with a 26G needle and syringe before clearing by centrifugation at 600g for 10 min. The subsequent steps were performed according to the manufacturer’s protocol based on competition of sample cAMP with a cAMP-alkaline phosphatase conjugate. 
To measure in vivo cAMP changes, BL6, PrnpZH3/ZH3 or PrnpZH1/ZH1 mice were intravenously injected with 600 μg of either FT or, as a control, uncharged FT ( ). Twenty minutes after infusion, mice were killed and all organs were collected. For cAMP assays, organs were homogenized in 0.1 M HCl. Subsequent steps were performed according to the manufacturer’s protocols as described above. Cyclic AMP levels were calculated using a cAMP standard curve in the case of ELISA based assay. Finally, cAMP concentrations were normalized to total protein content in each sample. cAMP changes are represented as fold changes to the respective controls. For each experiment, at least three independent biological replicates were used. For in vivo assays, groups of 8–16 mice were used for each experiment. For normalization purposes, the median value of the respective control sample was defined as 1. All measurements within each panel were normalized to this control value. For in vivo assays, sample sets were coded and investigators were blinded to their identities. The assignment of codes to sample identities was performed only after the cAMP values were plotted for each set. We designed two CRISPR short-guide RNA (sgRNAs) against exon 2 of Gpr126 (upper Guide CCTGTGTTCCTCTCTCAGGT and lower Guide AACAGGAACAGCAGGGCGCT). The DNA sequences corresponding to the sgRNAs were cloned into expression plasmids and transfected with EGFP-expressing Cas9-nickase plasmids. Single EGFP-expressing Schwann cells were isolated with a FACS sorter (Aria III). To determine the exact sequence of indels induced by genome editing, we amplified the sgRNA-targeted locus by PCR and subcloned the fragments into blunt-TOPO vectors. Ten colonies per cell line were sequenced and showed distinct indels on each allele. A clonal subline devoid of Gpr126 was used for further studies. This cell line possessed insertions on both the alleles; a 49-bp insertion at position 118 and a 5-bp insertion at position 84 on each allele. Both insertions led to a frameshift and to the generation of premature stop codons leading to early translation termination. Luciferase reporter constructs were generated containing a 1.3-kB sequence upstream of the transcription-starting site of Egr2. SW10 Schwann cells were transfected with Egr2 reporter construct and a renilla plasmid using lipofectamine 2000. After one day in vitro, Schwann cells were treated with recombinant full-length PrP (23–231), the globular domain of PrP (121–231) or PBS control. Luciferase activity was measured 24 h after stimulation with Dual-Luciferase Reporter Assay System (Promega) according to the manufacturer’s recommendations. Results were normalized to renilla transfection controls. Glass coverslips were placed in 12-well plates (Thermo Scientific) and coated with 0.01% w/v Poly-l-lysine solution (Sigma) overnight at room temperature. Coverslips were washed three times with ddH O and dried for 2 h in a laminar-flow hood. Schwann cells were seeded and cultured at 50% density. Cells were treated with recombinant FT-PrP, full length recPrP or C1-PrP for 20 min, and washed with serum-free DMEM. Cells were further washed with PBS followed by fixation with 4% paraformaldehyde. Fixed cells were incubated in blocking buffer (PBS+10% FBS) for 1 h. Cells were treated with various primary antibodies followed by washes and incubation with Alexa 488 and Alexa 647 tagged rabbit or mouse secondary antibodies (Life Technologies). 
Imaging was performed by Leica SP2 confocal microscope using a 20× objective; images were processed by Image J software. Transmission electron microscopy was performed as previously described6. Briefly, mice under deep anaesthesia were subjected to transcardial perfusion with PBS heparin and sciatic nerves were fixed in situ with 2.5% glutaraldehyde plus 2% paraformaldehyde in 0.1 M phosphate buffer, pH 7.4 and embedded in Epon. Ultrathin sections were mounted on copper grids coated with Formvar membrane and contrasted with uranyl acetate/lead citrate. Micrographs were acquired using a Hitachi H-7650 electron microscope (Hitachi High-Tech, Japan) operating at 80 kV. Brightness and contrast were adjusted using Photoshop. For quantification of Remak bundles and onion bulb-like structures, images were captured at 1,500× magnification and axon numbers and abnormal onion bulb-like structures were counted manually. Quantification was performed in a blinded fashion by assigning numbers to the images and upon completion of quantification genotypes were revealed. HA-tagged and untagged synthetic peptides were produced by EZ Biosciences. A stock solution of 2 mM was prepared by dissolving the peptides in PBS and they were used at a final concentration of 2 μM unless specified. The sequences of all the peptides used in this study can be found in Extended Data Table 1.
Quantum dot photosensitizers as a new paradigm for photochemical activation Interfacial triplet-triplet energy transfer is used to significantly extend the exciton lifetime of cadmium selenide nanocrystals in an experimental demonstration of their molecular-like photochemistry. Photosensitizers are an essential component of solar energy conversion processes, in which they are used to generate the highly reactive excited states that enable energy conversion (e.g., photochemical upconversion).1, 2 Typically, molecular triplet photosensitizers are used for such applications, but to improve the solar energy conversion process, the identification and preparation of next-generation triplet photosensitizers is required. However, the design of such photosensitizers—suitable for solar energy conversion and photocatalytic applications—remains a challenge.3 Semiconductor nanocrystals are stable light-emitting materials that can be systematically tuned to produce intense absorptions and photoluminescence. Futhermore, semiconductor nanocrystals offer several advantages over molecular photosensitizers, e.g., simple preparative synthesis, photochemical stability, size-tunable electronic and photophysical properties, high molar extinction coefficients, and trivial post-synthetic functionalization. Moreover, the inherently large, energy-consuming singlet-triplet gaps that are characteristic of molecular sensitizers can be avoided with the use of semiconductor nanocrystals that feature closely spaced excited-state energy levels.4 The characteristic broadband light absorption properties of these materials can be extended into the near-IR and can thus potentially be exploited for numerous triplet excited-state reactions, such as photoredox catalysis, singlet oxygen generation, photochemical upconversion, and excited state electron transfer. In this work,5 we have investigated the possibility of using quantum dots as effective alternatives to molecular triplet photosensitizers. With our experiments, we show definitively that triplet energy transfer proceeds rapidly and efficiently from energized semiconductor nanocrystals to surface-anchored molecular acceptors. In particular, we find that cadmium selenide (CdSe) quantum dots can serve as effective surrogates for molecular triplet sensitizers and can easily transfer their triplet excitons to organic acceptors (see Figure 1). These semiconductor nanomaterials are thus highly suited to energy-conversion applications. Figure 1. Artistic illustration of quantum dot-to-molecule triplet energy transfer and example subsequent reactions. The nanoparticle-to-solution triplet exciton transfer methodology we used in our experiments is shown in Figure 2. Quantum dots are typically capped with a ligand shell, and in our experiments we used oleic acid (OA) to ensure solubility of the dots (while also preventing inter-particle aggregation). This ligand periphery also serves as an insulating layer that prevents collisional quenching with the freely diffusing molecules in the solution. As a consequence, bimolecular energy transfer cannot proceed in the solution within the approximately 30ns-lifetime of CdSe excitons. To circumvent this limitation, we modified the CdSe surfaces by replacing some of the native OA ligands with molecular triplet acceptors that bear a carboxylic acid moiety, e.g., 9-anthracene carboxylic acid (ACA). 
We then purified the resultant nanocrystals by successive preciptation/centrifugation washing cycles that provided the desired donor/acceptors (termed CdSe/ACA). Figure 2. Schematic representation of the quantum dot-to-solution triplet energy transfer process. ). Cl: Chlorine. Ph: Phenyl. PDT: Photodynamic therapy. Schematic representation of the quantum dot-to-solution triplet energy transfer process. 5 The associated energy (E) levels, as well as the various triplet-triplet energy transfer (TTET) and decay pathways investigated in this study are also depicted. Cadmium selenide (CdSe) nanocrystals—capped with oleic acid (OA)—are used as the light-absorbing triplet sensitizer, in conjunction with 9-anthracenecarboxylic acid (ACA) as the triplet acceptor. The long-lived ACA triplets enable exothermic triplet energy transfer to freely diffusing 2-chlorobisphenylethynylanthracene (CBPEA) and dioxygen (O). Cl: Chlorine. Ph: Phenyl. PDT: Photodynamic therapy. Since the energy transfer process in our experiments takes place at the molecule–nanoparticle interface (i.e., resembling an intramolecular process), the dynamics occur on ultrafast timescales. To monitor the photoinduced processes in these materials, we used femtosecond transient absorption spectroscopy to observe the excited-state dynamics following a 500nm laser pulse (100fs full width at half-maximum, 0.2μJ). We find—see Figure 3(a)—that direct triplet-triplet energy transfer (TTET) between the CdSe excited states and the surface-anchored molecular acceptors occurs within hundreds of picoseconds after the laser pulse (with an average rate constant of 2.0×109s−1) and with a nearly quantitative yield. Consequently, the CdSe photoluminescence was completely quenched, and the CdSe exciton ground-state recovery correlated with the molecular ACA triplet excited-state (3ACA*) signal growth (which occurred at a rate of 2.2×109s−1). Figure 3. (a) Ultrafast differential transient absorption (TA) spectra of CdSe-OA quantum dots suspended in toluene. These spectra were obtained upon selective excitation of CdSe, with the use of a 500nm laser pulse. The inset shows the TA kinetics for the growth of the molecular ACA triplet excited state (3ACA*) at 441nm. ΔA: Change in absorbance. 〈k〉 : Average TTET rate constant. (b) TA difference spectra (from a 1mJ laser pulse with 505nm excitation wavelength, 5–7ns full width at half-maximum) measured at selected delay times after the laser pulse for CdSe/ACA CBPEA in deaerated toluene (at concentrations of 5 and 6μM, respectively) at room temperature. The inset illustrates the triplet energy transfer reaction between 3ACA* and CBPEA. It shows TA decay kinetics at 430nm (red), as well as the rise and decay at 490nm (blue) along with their biexponential fits (solid and dashed lines, respectively). (a) Ultrafast differential transient absorption (TA) spectra of CdSe-OA quantum dots suspended in toluene. These spectra were obtained upon selective excitation of CdSe, with the use of a 500nm laser pulse. The inset shows the TA kinetics for the growth of the molecular ACA triplet excited state (ACA*) at 441nm. ΔA: Change in absorbance. 〈k〉: Average TTET rate constant. (b) TA difference spectra (from a 1mJ laser pulse with 505nm excitation wavelength, 5–7ns full width at half-maximum) measured at selected delay times after the laser pulse for CdSe/ACA CBPEA in deaerated toluene (at concentrations of 5 and 6μM, respectively) at room temperature. 
The inset illustrates the triplet energy transfer reaction betweenACA* and CBPEA. It shows TA decay kinetics at 430nm (red), as well as the rise and decay at 490nm (blue) along with their biexponential fits (solid and dashed lines, respectively). 5 The 3ACA* has an extremely long lifetime because of the strongly spin-forbidden nature of the T # S transition (i.e., between the lowest energy triplet excited state and the singlet ground state). Our results also show the decay of the 3ACA* excited states that formed on the CdSe surfaces had lifetimes on the order of milliseconds. This result represents a remarkable six-order-of-magnitude increase from the initial CdSe excited-state lifetime. Such a long excited-state lifetime is promising for numerous applications because it provides the opportunity for additional chemical reactivity within the bulk solution. As a proof of concept, we used a secondary freely diffusing molecular triplet acceptor—2-chlorobisphenylethynylanthracene (CBPEA)—in solution (toluene), to demonstrate the extraction of triplet energy from the CdSe surface. We observed—see Figure 3(b)—near-quantitative TTET between the 3ACA* and the triplet excited CPBEA (3CPBEA*) states, which can thus enable highly efficient triplet energy extraction from the initially prepared CdSe excitons. Our results also show that once the triplet exciton energy is transferred to the freely diffusing acceptors in a deoxygenated solution, they eventually undergo triplet-triplet annihilation. This leads to upconverted emission from the 1CBPEA* (at 490nm), with a lifetime of hundreds of microseconds, as shown in Figure 4(a). Moreover, in an aerated solution of CdSe/ACA we detected the characteristic photoluminescence (centered at 1277nm) of singlet oxygen (1O ), which results from the quenching of 3ACA* by freely diffusing ground-state oxygen. In contrast—see Figure 4(b)—we observed no such signal when we used CdSe nanoparticles that were devoid of ACA ligands. This work therefore represents the first example of 1O sensitization by semiconductor nanocrystals via a mechanism other than Förster energy transfer. Figure 4. (a) Delayed fluorescence (DF) spectra of triplet-sensitized upconversion emission that occurred as a result of TTET from CdSe/ 3ACA* to CBPEA, followed by triplet-triplet annihilation. The inset shows the emission decay kinetics of the integrated delayed emission. (b) Near-IR singlet oxygen phosphorescence emission from CdSe-OA and CdSE-OA/ACA in aerated toluene (both at a concentration of 4μM), under 505nm excitation at room temperature. (a) Delayed fluorescence (DF) spectra of triplet-sensitized upconversion emission that occurred as a result of TTET from CdSe/ACA* to CBPEA, followed by triplet-triplet annihilation. The inset shows the emission decay kinetics of the integrated delayed emission. (b) Near-IR singlet oxygen phosphorescence emission from CdSe-OA and CdSE-OA/ACA in aerated toluene (both at a concentration of 4μM), under 505nm excitation at room temperature. 5 In summary, we have conducted proof-of-concept experiments in which we show that CdSe quantum dots are effective surrogates for more-ubiquitous molecular triplet photosensitizers in energy conversion processes. The high photostability, broad absorption spectra, and tunable optical properties of such quantum dots give rise to their superior properties. 
We have also demonstrated that the behavior of semiconductor quantum dots mimics the classical behavior of molecular triplets, and that triplet excitons in nanocrystals can be efficiently transferred to a bulk solution through successive (and nearly quantitative) triplet energy transfer steps. This photofunctionality may be exploited for numerous triplet excited-state reactions, including photoredox catalysis, singlet oxygen generation, photochemical upconversion, and excited-state electron transfer. Our current research activities include generalizing this approach across a range of semiconductor nanocrystalline materials, probing the ‘molecular’ nature of these materials, and applying these long-lived excited states in a range of photoactivated chemistry. This work was supported by the US Air Force Office of Scientific Research (FA9550-13-1-0106) and the Ultrafast Initiative of the US Department of Energy, Office of Science, Office of Basic Energy Sciences, through Argonne National Laboratory (under contract DE-AC02-06CH11357). Department of Chemistry North Carolina State University (NCSU) Cédric Mongin received his PhD in organic chemistry (on photoswitchable molecular cages) from the University of Bordeaux, France, in 2013. Since 2014 he has been a postdoctoral researcher in the Castellano research group at NCSU, where his research focuses on the exploitation of semiconductor quantum dots as promising triplet sensitizers. He will begin his independent career later this year, as a faculty member in chemistry at the École Normale Supérieure de Cachan, France. Sofia Garakyaraghi received her BS in chemistry (with a minor in mathematics) from the College of William and Mary in 2013. She is currently a PhD candidate under the guidance of Felix Castellano. She is studying the excited-state dynamics of various molecular- and nanocrystal-based systems with the use of ultrafast spectroscopy. Felix (Phil) Castellano earned a BA from Clark University in 1991 and a PhD from Johns Hopkins University in 1996 (both in chemistry). Following a National Institutes of Health postdoctoral fellowship at the University of Maryland's School of Medicine, he accepted a position as assistant professor at Bowling Green State University in 1998. He was promoted to associate professor in 2004, to professor in 2006, and was appointed director of the Center for Photochemical Sciences in 2011. In 2013 he moved his research program to NCSU, where he is currently a professor. He was appointed as a fellow of the Royal Society of Chemistry in 2015. His current research focuses on metal-organic chromophore photophysics and energy transfer, photochemical upconversion phenomena, solar fuel photocatalysis, energy transduction at semiconductor/molecular interfaces, and excited-state electron transfer processes. 2. T. F. Schulze, T. W. Schmidt, Photochemical upconversion: present status and prospects for its application to solar energy conversion, Energy Environ. Sci. 8, p. 103-125, 2015. 5. C. Mongin, S. Garakyaraghi, N. Razgoniaeva, M. Zamkov, F. N. Castellano, Direct observation of triplet energy transfer from semiconductor nanocrystals, Science 351, p. 369-372, 2016.
News Article | January 13, 2014
Venture investors may have turned their attention to earlier stage ad-tech companies at the right time now that big investments and strong IPOs could give their valuations a boost. On Monday morning, Turn announced an $80 million Series E round, which included investments by Fidelity Investments and BlackRock, in the latest sign that investors are returning to embrace the sector. September and October 2013 saw two big public offerings for RocketFuel and Criteo, which were both embraced by public investors at the time. Thanks to this attention from massive investment managers, early-stage ad-tech companies may have more leverage at the bargaining table. The word that public investors are embracing the ad space is good news for venture investors who continued to invest billions in the sector in the face of initially tepid public-market response. Investors had been concerned by the lack of enthusiasm for an earlier wave of companies like Millennial Media, which is down 68.8 percent from its high as of market close on Monday, or YuMe Inc., which priced well below its target when it made its initial offering in April 2013. As these advertising companies got the cold shoulder from public markets, funding declined, and investors moved more heavily into earlier-stage investments in 2012 and 2013, according to data from CrunchBase. For public investors, these later-stage investments represent an opportunity to buy earlier and for a bigger pop than if they were to invest in those companies for the first time at their public offerings. Given the number of early-stage companies that are coming to market, investors interested in the sector will have plenty to choose from. Despite falling commitments, investors still poured $1.52 billion into 331 companies in 2013, down from $1.93 billion in 2012, but still up from the $1.02 billion invested in 2009, according to CrunchBase data. “There is continual innovation going on in the ad-tech space. Every 12-to18 months you see a new wave coming in,” said Jeff Crowe, a managing partner with Norwest Venture Partners and an investor in Turn. As public investors have grown more comfortable with advertising technologies through public investments, they’re getting a better understanding of the opportunities presented by private companies, Crowe said. He declined to comment on whether Turn benefited from the phenomenon. Now, however, a host of companies stand to benefit through renewed interest in their own public offerings. “You’re going to see more IPOs coming out of the sector,” Crowe said. “There were half a dozen in the last 12 months, and you could easily see a half a dozen more in the next 12 months.”
News Article | October 26, 2015
The deal is valued at $3.4 billion, according to Reuters calculations. Ctrip's U.S.-listed shares were up 28 percent at record $94.66 in early trading, while Qunar was up 20 percent at near five-month high of $49.71. Overseas spending by Chinese tourists is expected to rise 23 percent this year to $229 billion, and will nearly double to $422 billion by 2020, according to a report by consultancies China Luxury Advisors and the Fung Business Intelligence Centre. (bit.ly/1H4r6t4) The deal would also improve profitability at both the companies after a pricing war, involving heavy promotions and discounts to customers, has hurt the Chinese online travel sector over the past couple of years. Ctrip's adjusted operating margin fell to 4.8 percent in 2014 from 23.6 percent a year earlier, while that of Qunar deteriorated to a negative 46 percent from negative 10 percent. "This deal significantly increases the likelihood of a profitability improvement trajectory over the next couple of years for both companies ... Ctrip and Qunar are likely to have 70-80 percent of the hotel and air ticket market," Summit Research analyst Henry Guo wrote in a note to clients. Ctrip.com will own roughly 45 percent of Qunar and Baidu will take a 25 percent stake in Ctrip.com. Ctrip.com has a market valuation of $10.6 billion, while Qunar is valued at $5.2 billion. Bloomberg earlier on Monday reported the plan to merge, citing unidentified people familiar with the matter. Such mergers are becoming increasingly common in China's tech sector as a way of dealing with fierce competition between rival companies. Earlier this month, Meituan.com and Dianping Holdings - which provide online reviews and deals for restaurants and retail and leisure businesses - said they would merge after being fierce rivals for years. Didi Dache and Kuaidi Dache, two leading taxi-hailing firms, combined in a share swap worth $6 billion earlier this year. Four Ctrip.com representatives will join Qunar's board of directors, including CEO Liang and Chief Operating Officer Jane Sun. Baidu's Chief Executive Robin Li and Tony Yip, the firm's head of investments, have been appointed to Ctrip.com's board. JPMorgan advised Ctrip on the deal. Baidu was advised by Williams Capital Advisors, LLC. | <urn:uuid:8f7d45c5-54d8-4304-85cd-714d0231ac5b> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/aca-446820/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00228-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944444 | 9,883 | 2.796875 | 3 |
Until recently, the U.S. Border Patrol, San Diego Sector, was an underfunded, understaffed operation with inadequate resources. But over the last five years, the federal government has transformed it into a well-equipped security operation with over 2,000 agents. Before the Clinton Administration initiated Operation Gatekeeper in 1994, apprehensions of illegal migrants in the area ran well over a half million annually. Since then, the numbers have dropped by 75 percent.
Operation Gatekeeper provided extensive funding and resources to restore integrity and safety to the San Diego Sector. In addition to major increases in personnel, resources andinfrastructure, the sector recently acquired geospatial and IT systems to assist in covert detection and speedy identification of smugglers and migrant traffic. These applications have already advanced the sectors intelligence and surveillance capabilities well beyond what they were in the mid-1990s.
At that time, the 66 miles from Imperial Beach, Calif., to the Anza Borrego Desert in the east -- officially designated the San Diego Sector -- was the most vulnerable portion of the U.S. southern international boundary, accounting for nearly one-fourth of all illegal border crossings throughout the Southwest. As illegal traffic climbed to tens of thousands annually, crime rates and property damage in border communities soared, and the quality of life in neighborhoods deteriorated.
But soon after Attorney General Janet Reno visited the area in 1993 and saw the situation first hand, funding and resources poured in. By the end of the decade, wire fencing through urban and beach areas of the border were replaced with a wall and lit with powerful lighting. Today, a fleet of helicopters with forward-looking infrared systems provides continual aerial coverage, and Zodiac boat teams patrol the river and ocean approaches. Special units with vans, bicycles and ATVs and on horseback cover the sector from the heavily populated urban coast to the remote eastern terrain.
Tracking with Technology
To further counter the movement of illegal traffic, San Diegos technical division began developing applications using GIS to integrate and analyze data from GPS, remote sensing, false infrared aerial imagery, night-vision scopes, seismic and infrared detectors and digitally scanned fingerprints.
One of the first applications was a GIS-based network of seismic and infrared sensors placed in strategic locations to expand surveillance and detection capabilities. Persons or vehicles passing over, through or nearby these devices trigger signals that are picked up and relayed by repeaters to the dispatch center and to each of the sectors nine stations. The time and the coordinates of the alerting sensor are displayed on a 3-D map of the area, enabling the dispatcher to estimate the migrants rate of travel.
Supervisory Border Patrol Agent John Block said the technical division is also entering the GPS survey coordinates of U.S. roads and migrant foot trails in the sector into ArcView. "Were looking at how the trail networks are changing and at how our operations are impacting movements along the border," Block said. "An increase in the number of trails helps us gauge the effectiveness of our operations. It also tells us if the traffic is moving elsewhere along the border or out of our area."
Vegetation stress analysis is also being field-tested. The sector is evaluating the ability of ERDAS Imagine software to analyze modified infrared aerial photography for chromatic differences between healthy vegetation and vegetation stressed by the passage of people and the waste they leave behind. Patrol Agent Daniel Isenberg explained that stressed vegetation shows up as a different color in the infrared spectrum than that of healthy vegetation. "The difference in color may tell us where migrants and smugglers are forming new routes, where they are laying up during the day and something about the amount of traffic along that path. From that we can change our response patterns accordingly."
Isenberg said the orthographically corrected, two-meter infrared imagery, procured through a partnership with the San Diego Association of Governments, will be flown quarterly and cover a strip one kilometer wide from the beach to the eastern edge of the sector. Vegetation stress and other indications from one quarter will be compared with imagery from the previous quarter.
In 2001, the division will also use ERDAS Imagine in conjunction with GIS and GPS to classify trail networks according to the estimated physical cost of crossing a particular terrain. Each type of soil, ground slope, vegetation, season, fence line and detection device in a given area is assigned a numerical level of difficulty then translated into a color-coded, scalable impedence factor for that terrain. The higher the factor, the greater the physical cost of getting through the area. The impedence factor for a given area is overlaid with migrant foot trails that have been GPS surveyed, and two are superimposed on a 2-D or 3-D basemap. "The composite lets us see exactly where the traffic is going, the high-impedence areas they are trying to avoid and their estimated rate of march," Isenberg said. "It also helps us determine where to put sensors and other resources in the area."
GIS query functions also have a key role in the sectors automated fingerprint identification (IDENT) system. Able to digitally scan fingerprints into a database in seconds, IDENT dramatically speeds the process of identifying repeat arrestees. According to Senior Patrol Agent Brandon Steele, individuals with a high number of repeat arrests for illegal crossings are often smugglers.
The first time someone is arrested, his or her fingerprints are digitally scanned into an Oracle database and assigned a number from the Fingerprint Identification Number System (FINS). The entry also contains the arrestees photograph and biographical data. "After that," Brandon said, "we identify the subject by FIN, not by name, because subjects often give false names."
With each arrest, agents do a GIS query of the database for FINs matching those of the arrestee. They can use the GIS to query more than 1.5 million sets of FINs, photos and bios for those matching the person being held, or look for recidivists with the most arrests for a particular month or year.
When a match is found, a unique symbol assigned to the FINs is displayed on the sector map. "We can click on a symbol and bring up a dialog window with the subjects number, photograph and biographical data. We can see when and where the subject was apprehended in the past, the area he is operating in and, often, with whom he is operating," Brandon said. "The first indication that somebody may be a smuggler is the number of hits that come up with a particular symbol. If two subjects apprehended in the same event have a lot of hits, its a good indication theyre working together for the same organization."
Block said it is too soon to assess the effectiveness of GIS and other technologies in helping detect and apprehend smugglers and illegal traffic. "Were bringing the stations online now with these new systems, so we really havent had enough time to quantify what they are accomplishing in terms of countering migrant traffic."
For now, however, there is little doubt that the integration of advanced technologies in border patrol operations will significantly enhance the agents ability to interdict and limit smuggling and illegal migration in the San Diego Sector. | <urn:uuid:0ecee124-9e30-4db0-852b-e47451918daf> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/High-Tech-Borders.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00046-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.9423 | 1,454 | 2.609375 | 3 |
Would you trade your sensitive personal information for a cookie? Probably not? Well, that’s just what 380 New Yorkers did in a ‘Please Enable Cookies’ experiment conducted at an arts festival in Brooklyn in 2014. In a twist on the concept of web ‘cookies’ an artist asked passersby to provide personally identifiable information such as their mother’s maiden name, the last four digits of their social security number, and their fingerprints in exchange for gourmet cookies including flavors such as pink pistachio peppercorn and chocolate chili fleur de sel. It’s possible that the participants felt assured that the information would not be used for any objectionable activity, such as identity fraud, but increasingly we see consumers trading privacy in exchange for convenience, or cookies.1
While consumers reportedly worry about security and value their privacy, they continue to freely give up their personal information. This disparity in what consumers say compared to what they actually do has been highlighted in a number of experiments showing that individuals are often willing to give up private or sensitive personal information for small rewards, as demonstrated by the ‘Please Enable Cookies’ study. Online services and social media accounts are a goldmine of information for those with malicious intent because personal information such as your name, your first school and mother’s maiden name may be just a few clicks away2. Does continuing to expose personal information online in this way put an individual at greater risk of having their identity stolen?
The ID:A Labs team conducted a study to determine if a person’s online exposure – the degree to which they share personal information online – might be a predictive measure of whether they are at greater risk of identity fraud. In the study, we created an “exposure-fraud score” aimed at using an individual’s level of online exposure to predict the likelihood of the consumer becoming a victim of identity fraud. The score had some success, which suggests a correlation between the degree of privacy an individual uses online and their likelihood of being a victim of identity theft.
The above figure shows that an individual with a higher exposure-fraud score has a higher likelihood of becoming an identity theft victim and the below chart shows that the score was able to effectively rank order the fraud victims. And more surprisingly is the highest risk decile have a fraud victimization rate that is 4x higher than the lowest risk decile.
So why does this matter? As online interactions increase and the demand for more customized services grows, consumers will be asked to share personal information with a diverse group of organizations. These individuals will be at greater risk for identity fraud, creating a need for tools to combat increasingly sophisticated threats. It’s a trend that ID:A Labs will continue to explore to better understand how broader access to personal information impacts fraud rates.
To learn more download our Online Privacy vs. Security executive summary.
Dr. Stephen Coggeshall is the Chief Analytics and Science Officer at ID Analytics, LLC
1 Beckett, Lois. ProPublica. (1 October 2014) How Much of Your Data Would You Trade for a Free Cookie? An artist tests whether New Yorkers will give away their mother’s maiden name or part of their Social Security number for a homemade cookie. https://www.propublica.org/article/how-much-of-your-data-would-you-trade-for-a-free-cookie
2 The Fraud Practice. (3 April 2014) Social Media is a gold mine for fraudsters http://fraudpractice.com/fraudblog/?page_id=1641 | <urn:uuid:8e164d0c-1905-4f47-87c4-e4fb05aa4c62> | CC-MAIN-2017-04 | http://www.idanalytics.com/blog/fraud-risk/less-privacy-online-increases-identity-fraud-risk/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281746.82/warc/CC-MAIN-20170116095121-00430-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921496 | 741 | 3.09375 | 3 |
Big data analytics continues to hold currency in the commercial world, especially where operating margins are tight and customers discerning, yet it’s always interesting to discover a story where data was analyzed and value extracted that has, let’s say, more of a “human interest” than those in the business world.
Such is the case with the analysis of openly-available data sets on both the prescription and the consumption of antibiotics drugs in England. As the costs of medical care and medicines themselves continue to rise, understanding how and where antibiotic drugs are consumed is of increasing interest to healthcare bodies and local government administrations alike as they balance the ability to treat us as patients efficiently with the need to ensure they are getting value for the public money they’re spending.
So, it was that in mind that EXASOL recently joined forces with Antibiotic Research UK, the world’s first charity created to develop new antibiotics in the fight against superbugs, to conduct research into this area by taking massive data sets widely available on UK government portals in order to understand what is going on with antibiotics. The project was also run to coincide with an EU initiative around Antibiotics Awareness Day on November 18. Here’s what we found out:
- In England, doctors prescribe 59% more antibiotics in December than they do in August, despite the fact that illnesses treated by antibiotics are not seasonal and, more worryingly, antibiotics are being prescribed and consumed to treat viral conditions, which is a fruitless exercise. Meaning: How many times have we gone to the doctor with a cold or the flu, only to be told that antibiotics cannot cure them? Well, it seems that some doctors are still prescribing them.
- The number of prescriptions per head peaked in 2012 with a total of 3.8 million prescriptions to English patients, but this number has dropped 5.6% since.
Meaning: This either means that fewer people are seeing their doctor, or we are getting healthier.
- However, there is a widening deprivation gap: The gap between prescriptions in the least and most deprived areas of England is widening and the difference in prescribing between the bottom and top 1% by deprivation is 20%.
Meaning: Those that have fewer means are consuming more antibiotics, consumption rates are linked to social standing.
- Doctors in London prescribe 21% fewer antibiotics than those in the North of England. The data also reveals that the most deprived coastal towns in Lincolnshire, Norfolk and Essex are prescribing the most antibiotics in the country, with Clacton-on-Sea, the UK’s most deprived area, prescribing almost twice the national average.
Meaning: Again, this shows that patients who have the least are more likely to be getting ill and therefore need antibiotics.
While this is a project that is very focused on healthcare in England, it highlights the benefits of analytics and how public bodies and administrations can quickly use vast amounts of data to understand how many drugs are being prescribed and to whom. As part of the research, we created a visual heat map so that users can see easily where the most affected areas are.
If you’d like to learn more about this project, our findings were covered in articles that appeared on BBC News and in New Scientist magazine, where you can also find the heat map that our analysis generated as well as a list of the openly-available data sets that were used. There’s also a news release on our website.
Big data has many uses, but when it’s data that means something to you and me (after all, we’re all patients and potential consumers of antibiotics), it drives home the point much more succinctly on the true value of data analytics and insights. | <urn:uuid:674e1fb8-64b0-4f48-bc34-5702b6622c9f> | CC-MAIN-2017-04 | http://www.exasol.com/de/blog/2015-11-26-big-data-analytics-fights-superbugs-and-drugs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00338-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965357 | 763 | 2.609375 | 3 |
The idea of using some consistent form of personal identification has often been touted as the answer to establishing that someone actually is who they claim to be. With the continued development of electronic commerce and services, the ability to verify the identity of consumers and clients becomes ever more important. Technology is certainly helping to drive the debate, but perhaps equally important is the increasing concern over identity theft and the ease with which individuals' identities may be stolen or "borrowed," or entirely new identities may be manufactured.
Currently, two proposals relating to creation of national IDs are wending their way through the federal government. Both are based on provisions in laws that, broadly speaking, have nothing to do with identification. As immigration and health-care reforms worked their ways through Congress several years ago, provisions relating to identity were attached with the hope that methods of consistent and reliable identification would help achieve the underlying goals of the legislation.
Social Security Numbers
In its last overhaul of immigration laws, Congress tucked in a provision requiring the Department of Transportation to facilitate the use of Social Security numbers on all state drivers' licenses, to make them acceptable for identification purposes by federal agencies. The provision was meant to stop illegal immigrants from using fake drivers' licenses to obtain federal benefits.
Proposed implementing regulations from the National Highway Traffic Safety Administration appeared this summer and were greeted by public outcry, including thousands of public comments; typical federal regulations provoke only a handful of comments from the most affected parties. The proposed regulations were also the subject of congressional hearings.
Critics fear that the widespread use of Social Security numbers on drivers' licenses would create a de facto national identification card that can easily be linked to a myriad of personal information. Ironically, before the federal proposal appeared, some states began to move away from the use
of Social Security numbers on drivers' licenses. In Virginia, drivers may use a Social Security number on their license, but they may also choose to have a randomly
generated number serve as their drivers' license ID.
While Congress attempted to prevent the Social Security number from becoming a national identifier by restricting its use by federal and state governments as part of the Privacy Act of 1974, the numbers have commonly been used as one of the most reliable indicators of identity in all sorts of commercial transactions.
Mike Benzen, chief information officer for the state of Missouri, noted that Social Security numbers could not realistically form the basis for a national identification system because they are not reliable. He said there are many people with multiple Social Security numbers -- the Social Security Administration apparently has no practical way to screen an application to ensure the individual has only one number.
National Health Identifier
The other bureaucratic track that has received more publicity is the issue of a national health identifier. Again, as part of the Health Insurance Portability and Accountability Act of 1996, Congress required the Department of Health and Human Services to come up with a way to consistently identify health-care records so that health-care information could be transferred to insurers and medical personnel quickly and efficiently.
While the ability to link individual medical records and access them quickly could make diagnosis and treatment far more efficient and accurate, the identifier was seen by many as a major step toward creating a national identification system. Bob Gellman, a Washington privacy consultant who has worked on medical privacy for a number of years, noted that the information would be available to a wide segment of the economy. He pointed out that the health-care establishment -- providers, payers and insurers -- would have access.
"Health care represents one-seventh of the overall economy," he noted. "That includes many institutions you might not think of at first."
These include supermarkets that fill prescriptions, to say nothing of employers, because they provide medical coverage. Then there are scores of federal and state agencies, including law enforcement. As a result of public outcry, Vice President Al Gore indicated that the administration would not pursue an identifier at this time. Bills have also been introduced in Congress to either repeal the provision or to make any final plan subject to congressional approval.
Whether a national identifier will speed the evolution of electronic commerce is another issue. Although a truly verifiable identifier would resolve identity issues in electronic transactions, those working with e-commerce do not believe its success requires a national identifier as much as it does a practical way to verify the identity of an individual or entity.
Dave Temoshok of the General Services Administration said the federal government is trying to stay away from an all-encompassing individual identifier, preferring to develop identity verification through such technologies as digital signatures and public key escrow that would allow confidence in transactions between agencies and private parties. He noted that issuing of a digital signature certification is the first step toward creating an electronic identity, but that agencies would then verify that signature through further documentation.
Voting in Mexico
However, a national form of identity whose authenticity is widely accepted and trusted can be a boon for commerce. In Mexico, a voter identification card has been designed and issued that includes photographic and biometric data on the holder. While the card is only required for purposes of identity when voting, it has been quickly accepted as the most trustworthy form of identification and is frequently used in commercial transactions as well. A member of the Mexican elections commission said that people in the country's remote areas have never had a reliable way of proving who they are and have embraced the advantages of such an identifier. But, he added, the only form of proof required to get one, since documentation is often unavailable in rural areas, is to provide election officials with personal data and to have several witnesses vouch for your veracity.
The lack of precise and verifiable information that needs to be presented in Mexico to obtain such an identifier is a good illustration of one of the underlying problems in creating an accepted measure of identity -- the data on which a card is based can easily be incorrect or fraudulent. As Thomas Vartanian, a Washington, D.C., attorney who has written extensively on Internet issues, noted, an electronic identity is no better than the identifying information on which it is based.
In an article he wrote for American Banker, Vartanian points out, "Authentication must be the starting point in any electronic network transaction. Lacking the customary modes of physical identification, the parties to a faceless transaction in cyberspace need proof from a third party that each party is who he or she purports to be." He suggested the possibility of a national identification verification standard that would include traditional means of identification -- driver's license, passport, Social Security number, employee identification -- that as a whole would represent as reliable a means of identification as can practically be achieved.
Those opposing a national ID number worry about the Big Brother aspects of such a system -- both government and business having rapid and ready access to an individual's personal data, much of which consumers might prefer to keep private or, at a minimum, less vulnerable to dissemination. But as the need to reliably identify individuals increases in the electronic environment, some measure of uniform identity seems inevitable.
As Missouri's Benzen pointed out, the amount of transactional data we create today already allows companies and governments to track our lives, and a unique identifier probably won't change that reality too much. However, a national ID runs counter to many Americans' views on freedom and democracy. The Transportation Department's proposed regulations for requiring a Social Security number on drivers' licenses met stiff resistance from a coalition of groups from the left and the right of the political spectrum. Such resistance does not bode well for easy acceptance of a national ID.
Harry Hammitt is editor/publisher of Access Reports, a newsletter published in Lynchburg, Va., covering open-government laws and information-policy issues.
December Table of Contents | <urn:uuid:9add9e16-8c8b-4809-af68-6c63bad08a5c> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Can-a-National-ID-Tell-Us.html?page=2 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280872.69/warc/CC-MAIN-20170116095120-00458-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964568 | 1,565 | 2.78125 | 3 |
As seminal punk band NOFX once sang, “Electricity / All we need to live today / A gift for man to throw away.” The data center industry has a love-hate relationship with electricity. It’s obviously a crucial resource that enables the productivity and innovation gains of cloud and large-scale computing, but it comes from polluting power plants, it’s expensive, and it’s delivered from an increasingly unreliable power grid in the United States. Data centers are also using more and more electricity every day.
New developments in electric generation and delivery as well as data center design innovations could help develop the much-hyped smart grid, bringing cost savings, increased reliability, and cleaner power generation. How can data centers and the electric grid work together to create the future of electricity?
The Electric Catch-22
The transformers hooking data centers to the grid might as well be chains. Servers, cooling, storage—infrastructure can’t run without access to grid power. That’s not necessarily a bad thing. Electricity has enabled countless innovations. But the state of the power grid, especially in the United States, is worrisome.
We’ve noted on the blog before that blackouts can be caused by squirrels (a favorite fact of mine). Many components of the grid have reached the end of their lifespan, with original pieces built in the early 1900s and many structures still running after 70 years. Data centers are on the front lines when it comes to negative business impact from the aging grid. Recent natural disasters like Hurricane Sandy may have made this dramatically apparent, but even common brownouts lead to server downtime after UPS batteries die and generators run out of fuel.
In addition to unreliability, grid electricity comes overwhelmingly from polluting power plants that run on coal or natural gas. Renewable energy only contributed about 13% of the total United States power in 2012. A 10 MW data center emits 33,000 – 91,000 metric tons of CO2 even at the relatively low PUE of 1.2. This electric use is attracting attention from the media and activist groups, and while many data centers are striving for efficiency, there’s no escaping the grid.
Finally, energy is simply expensive. Demand spikes, hot days, and the cost of fuel at generation plants can all lead to dramatic increases in costs.
Instead, data center managers and researchers at universities across the world are starting to look at ways data centers can turn their energy use into a benefit rather than a necessary evil. Here are three of the ways future data centers can help build more reliable, more efficient “smart grids”.
1) Increasing on-site generation and cleaning up the grid
This is one area where data centers have already taken action. Rooftop solar panels and large solar arrays are relatively common and some lucky data centers are located near hydroelectric or geothermal power generators, allowing them to use entirely renewable energy. If a facility has some on-site generation, there are two primary models to use both that energy and the grid: grid ties and transfer switches.
Grid ties combine on-site electric generation from rooftop solar, hydroelectric turbines, or wind turbines with grid sources. Electricity produced on-site reduces the net draw from the grid, which fills in the blanks when on-site generation can’t cover the entire server and equipment load. When there is excess energy generated, it feeds back into the grid for a net profit.
Transfer switches keep the on-site generated energy separate from the grid entirely. The equipment only receives energy from one source at a time. Without a very steady source like geothermal or hydroelectric, this approach suffers the same reliability issues as the grid, if not worse, because solar and wind power are not always available.
When using renewable energy, data centers must choose between performance and green power. Battery technology is improving but still cannot store enough renewable energy to power a facility during extended periods of low generation. Alternatively, performance adjustments can be made to lower the power draw, but in an enterprise data center this is unlikely to be a real solution due to SLAs and the requirement for constant uptime.
Alternatively, companies can support increased renewable generation by the power companies operating the grid itself. Google and others have made large scale investments into wind farms. Renewable Energy Credits also support the development of renewable generation on the grid.
2) Migrating data center loads cross-country to avoid peak demand
Here is where things start to get crazy with smart grids and data center infrastructure management (DCIM). These new tools can help avoid brownouts or blackouts as well as peak-demand, when energy rates increase dramatically.
Hardware, software, sensors, and controls can be tightly integrated into data center operations and tied to the electric grid with programming and data center automation software. With real-time pricing and energy information from grid providers, this software can migrate entire data center loads geographically according to increasing and decreasing grid loads.
These tools still need development, but experiments have been performed as proof-of-concept. The data centers studied took about eight minutes to move their loads and reduced their energy use by about 10%. Another study found energy decreases of up to 46% by moving data center loads.
By dynamically moving workloads, the overall demand is lower and energy cost is less. This improves grid reliability for everyone, not just data centers. The opportunity is greatest for non-critical loads, since critical workloads are riskier to move. Rescheduling routine backup and storage for off-peak hours is one method of demand reduction that can already be implemented in data centers.
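To make the idea concrete, here is a minimal, hypothetical Python sketch of the decision logic a DCIM tool might use to place a movable, non-critical workload at whichever site currently has the cheapest power. The site names, prices, and capacities are invented for illustration and are not drawn from the studies above.
    # Illustrative only: pick the cheapest site with room for a movable workload.
    SITES = {
        "cheyenne": {"price_per_kwh": 0.06, "free_capacity_kw": 400},
        "seattle": {"price_per_kwh": 0.11, "free_capacity_kw": 250},
        "newark": {"price_per_kwh": 0.19, "free_capacity_kw": 600},
    }

    def pick_site(load_kw, sites=SITES):
        """Return the cheapest site that can still absorb the workload."""
        candidates = [(info["price_per_kwh"], name)
                      for name, info in sites.items()
                      if info["free_capacity_kw"] >= load_kw]
        return min(candidates)[1] if candidates else None  # None: keep it where it is

    print(pick_site(200))  # "cheyenne" at these made-up prices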
3) Energy storage and micro-grids
Data center technology can give the grid a boost in other ways besides feeding it excess renewable energy or reducing peak demand. Energy storage is developing rapidly, enabling self-healing smart grids within a data center facility itself. These systems store large amounts of energy and constantly monitor the flow of electricity throughout the facility, allowing power to be rerouted during emergencies. Generators are still necessary but their use can be minimized.
Micro grids and backup systems (even current UPS systems) can be combined with DCIM for frequency regulation or boosting power during peak demand. One can even imagine a scenario where data centers are sitting at a low internal load with fully charged battery systems, selling power back to the grid to meet peak demand. Many power utilities pay hourly for frequency regulation, enabling a new (though minor) revenue stream for data centers whose UPS systems and batteries are sitting idle the majority of the time. Of course, this has to be balanced with SLAs to ensure that unexpected outages don't cause downtime.
These solutions may be a ways off from wide scale implementation, but it’s exciting to see how such a major consumer of electricity can actually help improve the generation methods and reliability of the larger grid. Instead of being power hogs, the data center industry can aim to help save electric infrastructure in the United States through innovative power management, onsite generation, and new storage technology.
Posted By: Joe Kozlowicz | <urn:uuid:0bf3b722-607d-40a9-b43d-5fae8dd157ce> | CC-MAIN-2017-04 | https://www.greenhousedata.com/blog/three-ways-data-centers-can-build-the-future-of-electricity-and-smart-grids | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00394-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937201 | 1,453 | 2.90625 | 3 |
DDoS Attacks: What Can You Do?
The perilous Internet has fostered some pretty frightening things. The ever-lasting hot topics include Denial of Service (DoS) attacks, viruses, phishing and other types of scams.
While writing May's article on IRC botnets, we realized that the most frightening attack isn't discussed in detail very often. Distributed Denial of Service (DDoS) attacks are the same as a DoS attack, in that someone is simply sending you packets as fast as possible, but DDoS attacks can come from thousands of computers at once. Most people do not realize how effective DDoS attacks can really be.
Attackers use automated methods to infect hundreds of thousands of computers. They can be configured to start attacking a victim immediately, or more recently, they can be controlled remotely via IRC. When the compromised computers are controlled interactively, the attacker can command them to try and infect others before beginning the main DDoS attack. Once a large army is built, the fun begins. If the attacker points enough hosts toward one target, the result will be catastrophic for smaller ISPs, and possibly even the backbone provider your ISP connects to.
Cisco provides a few methods to help alleviate DoS attacks, but they are all useless in a truly large attack. Committed Access Rate (CAR) is an IOS feature that can rate-limit packets based on a number of criteria. Basically, you'd want to rate-limit the number of packets per second that get sent to the computer that is under attack. More specifically, if you know what host is executing the attack, you can simply drop all traffic from that IP address. But we're talking about real-world DDoS attacks here, and you simply cannot do that for the 100,000 computers that are attacking.
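Under the hood, CAR is essentially a token-bucket policer: packets are forwarded while tokens remain and dropped (or marked) once the configured rate is exceeded. The Python sketch below illustrates only the token-bucket idea; it is not Cisco configuration, and the rate and burst values are arbitrary.
    import time

    class TokenBucket:
        """Toy token-bucket policer, the mechanism behind rate-limiting features like CAR."""
        def __init__(self, rate_pps, burst):
            self.rate = rate_pps          # tokens (packets) replenished per second
            self.capacity = burst         # maximum bucket depth
            self.tokens = burst
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True               # conform action: forward the packet
            return False                  # exceed action: drop it

    limiter = TokenBucket(rate_pps=1000, burst=1500)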
Cisco's answer to DDoS attacks involves rate-limiting and Reverse Path Forwarding (RPF). Unicast RPF basically looks in the routing table to verify that the source of a packet is valid. If the routing table entry indicates that an outgoing packet toward the questionable source would have used the same interface the packet just came in on, then the packet is valid. This type of check is valuable, but it really doesn't help in this case. Most DDoS traffic is "valid" in this sense. So that's about it. Cisco's last suggestion is to contact law enforcement, but we all know how that would go:
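The reverse-path check itself is simple to picture. Below is a small Python sketch of strict unicast RPF against a made-up routing table: a packet passes only if the best route back to its source address points out the interface the packet arrived on.
    import ipaddress

    # Hypothetical routing table: prefix -> outgoing interface
    ROUTES = {
        ipaddress.ip_network("10.1.0.0/16"): "eth0",
        ipaddress.ip_network("192.168.5.0/24"): "eth1",
        ipaddress.ip_network("0.0.0.0/0"): "eth2",  # default route
    }

    def best_route_interface(src_ip):
        """Longest-prefix match, as a router's forwarding lookup would do."""
        matches = [(net.prefixlen, iface) for net, iface in ROUTES.items()
                   if ipaddress.ip_address(src_ip) in net]
        return max(matches)[1] if matches else None

    def passes_strict_urpf(src_ip, arrival_iface):
        return best_route_interface(src_ip) == arrival_iface

    print(passes_strict_urpf("10.1.2.3", "eth0"))  # True: source is reachable via eth0
    print(passes_strict_urpf("10.1.2.3", "eth1"))  # False: likely spoofed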
"100,000 Internet hosts are attacking you?"
"Yes, sir, please help."
The only logical response to that is: "You can try to contact every ISP on the planet and request that they turn off Internet access for their infected hosts." Cisco also offers a "DDoS Protection Service," but this is laughable in the event of a large attack.
Now that we've established that there's nothing a company can do to stop a large-scale DDoS attack, let's turn the table and look at this from a service provider's perspective.
Small- to medium-sized ISPs will probably have noticed the huge increase in traffic when this DDoS attack happened. In many cases, it will be disrupting service for more than just your company. The ISP's only option is to drop all packets destined for your company. If the ISP can do this effectively at their borders, then no other customers will be affected. If the ISP is willing to work with you, it is possible that they can simply drop packets destined for the target of the DDoS attack, assuming there is only one target.
If you're lucky, only one machine is targeted for the DDoS attack, and an upstream service provider will block traffic to the target for you. This would work nicely, except we're forgetting one important point: there are possibly 100,000 or more machines on university networks attacking you. Chances are your ISP will not have a large enough Internet connection to facilitate the DDoS traffic as well as real traffic. Whether they drop packets or not, they will still make it all the way to your ISP's borders, eating up bandwidth along the way.
The next step is for your ISP to ask their ISPs to drop traffic destined for the targets. Larger ISPs will normally oblige these requests, and assuming their routers are hefty enough, this can allow the smaller ISP to operate while the DDoS attack marches on.
This doesn't help you, though. Someone is going to have to block traffic to the targets of this attack, which means that you're effectively cut off from the Internet. There's really nothing else that can be done in this large-scale attack scenario. Hopefully there is only one target, and your ISP (or your ISP's ISP) is willing to block packets destined to that one server.
Tier 1 ISPs have 10Gb/s (or larger) links to many different exchanges. When a DDoS attack of sufficient size happens to transit a tier 1 ISP, they will suffer as well. Looking at our example, where your company is under attack and your ISP is effectively crippled, we hope that the upstream ISP has enough bandwidth to block these packets. Tier 1 ISPs like Sprint and UUNet have million dollar routers that can take the punishment, right?
No, they cannot.
The North American Network Operator's Group holds quarterly meetings to discuss Internet operations. At the last NANOG meeting in Seattle, AOL network engineer Vijay Gill spoke about large-scale DDoS attacks.
Mr. Gill noted that attacks of less than 2Gb/s aren't even worth worrying about, which is amusing in itself: Most large universities on the Internet don't even have enough bandwidth to raise the eyebrows of AOL's engineers. Unfortunately, when there are thousands of compromised university computers attacking a single target, even the Tier 1 ISPs can crumble. According to Gill, "you simply cannot throw away all the packets quick enough, no matter how fast your router is. There is nothing you can do."
It all boils down to Mr. Gill's assessment. If enough computers are attacking, your ISP and possibly a few backbone providers will be affected. Their last concern is to keep your company's website up during this attack -- they will be having to deal with their own routers and congested links.
There is nothing you can do. | <urn:uuid:55fbecb3-41bd-4923-9cfd-db7243f01f6e> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsecur/article.php/3520361/DDoS-Attacks-What-Can-You-Do.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00448-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960332 | 1,281 | 2.84375 | 3 |
- Compiled vs. Interpreted
- Object Orientation
- Methodologies and Workflows
- Different Types of Programmers
- The Joy of Coding
Programming can be an intimidating subject to newcomers. With all its various forms and features and classifications, it can deter people from tinkering, which I see as a high crime.
In this short primer I’ll explain all the various components so as to hopefully remove the mystique and make it approachable. The world needs more coders, and it can be a source of great enjoyment to become one.
To start off, let’s define programming.
- The creation of instructions that turn ideas into actions performed by computers.
That’s my definition. There are other more technical ones that don’t fit the purpose of this primer as well. The key point is creativity. You’re bending computers to your will.
In programming people refer informally to whether a given programming language is low level or high level. This basically deals with three things:
- How fast the language is when running on a computer
- How much code you must write to perform a given task
- How intuitive the language is to humans
These are inversely related.
So for a low level language you can consider low to be close to the computer, and high level to be close to the human. And when you write in a low level language you have to write lots of unintuitive code to do anything, but it runs faster. And when you write in a high level language you can write very little code, which is quite elegant, but that takes longer to run on a system.
Low Level: Verbose, Unintuitive, Fast [assembler, c]
High Level: Concise, Intuitive, Slow [ruby, python]
There are a few basic types of programming language:
- Modern (hybrid)
[ NOTE: Modern isn’t formal; it’s my name for it. Another good name might be ‘hybrid’. ]
- Compiled languages go from the source code to an executable, and that executable can only run on a certain type of CPU. Executables tuned for a particular CPU tend to be very fast, and are generally created using a low level language. The downside is that you can only run the executables on certain computers (based on which CPU it has).
- Interpreted languages don’t have an executable step—they just run from the source code and are interpreted (translated) by the installation/environment into machine code. Interpreted code generally runs much slower and is created using high level languages. The upside of interpreted languages is that you can run the programs anywhere the environment is installed.
- Modern (hybrid) languages have elements of both, and are designed for enterprises. They are both compiled and interpreted, and get many benefits from both, e.g., platform independence and speed.
Object Orientation is a way of building computer programs so that they are both intuitive to humans and functional for computers.
Objects have both attributes and methods. Attributes are basically characteristics, or data, about the object. And methods are operations or actions that can be performed on the object from other places in the program.
When you build a program using Object Orientation you basically build a collection of objects that interact with each other in various ways.
There are a few core tenets of OO worth mentioning.
- Encapsulation is basically access control between the data within an object and the entities that call the object. It's done by hiding the object's internal data and exposing it only through accessors and mutators. Accessors ask objects about themselves, and mutators are public methods that modify the state of an object.
- Abstraction is a method of controlling complexity by presenting a model, view, and controller that others can use to interact with the object.
- Inheritance is the concept that when you make something from something else, you get the attributes for that thing. So if you have an object called mammal, with two eyes and live birth for example, then if you make a dog that inherits from mammal, then the dog would start with two eyes and live birth as well.
- Polymorphism means having multiple methods with the same name but different functionality. Overriding polymorphism happens at runtime, and overloading polymorphism happens at compile time.
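To make the mammal/dog example concrete, here is a minimal sketch in Python (used for illustration only; the class and method names are invented) showing inheritance and overriding polymorphism together:
    class Mammal:
        eyes = 2
        live_birth = True

        def speak(self):
            return "..."

    class Dog(Mammal):           # Dog inherits eyes and live_birth from Mammal
        def speak(self):         # overriding polymorphism: same method name, new behavior
            return "Woof"

    rex = Dog()
    print(rex.eyes, rex.live_birth, rex.speak())  # 2 True Woof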
Here are some of the most important programming languages.
- c: Includes c++ for these purposes. This is pretty much the king of all programming languages given its role in creating Windows, Linux, etc. Learn it if you want to know the roots of computing, how to do things more manually than with modern languages, or to just be well-rounded.
- Java: Highly enterprise focused. Very much like c++. Learn it if you want to do enterprise development in a Java shop, or if you want to be well-rounded.
- Lisp: A high level language often used by AI types.
- Scala: An object oriented script-like language that compiles into Java bytecode.
- Ruby: An elegant high-level, interpreted, object-oriented language popular with people looking to quickly implement ideas. Also serves as the foundation of Ruby on Rails.
- Python: An elegant high-level, interpreted, object-oriented language popular with people looking to quickly implement ideas.
- Objective C: The language used for creating mobile applications for Apple’s iOS platform before they came out with Swift.
- Swift: The language used for creating mobile applications for Apple’s iOS platform that replaces Objective C. Implements a number of features for making development easier and safer than Objective C.
- Go/Golang: Statically typed and loosely based on C. Created at Google and now used on a number of Google production systems.
- Fortran / Cobol: Attack yourself.
- PHP: An object oriented language originally designed to run CGI scripts for web applications, which is now the centerpiece for a number of web frameworks. Known for being insecure, but this is largely due to the context in which it was implemented, i.e. by web programmers looking to get the most done in the shortest amount of time.
- Lua: The preferred scripting language for games and a number of embedded systems. Known for being fast.
Typing has to do with assigning a type to the variables, expressions, functions, or modules that can exist within a program. Types include integers, strings, arrays, etc.
So Typing basically locks down each part of your program into one of these for the purpose of reducing bugs. If you don’t do this you’ll have people trying to multiply “dogs” the integer with “dogs” the string, which causes all sorts of drama—especially in large applications.
Static typing takes place at compile time, meaning that when you run the app it’s already locked in. Dynamic typing takes place at runtime.
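The "dogs" mix-up above looks like this in practice. Python is dynamically typed, so the error only appears when the offending line actually runs; adding optional type hints lets a static checker such as mypy flag it before the program starts. This is a small illustration with invented function names:
    def kennel_cost(dogs, price):
        return dogs * price

    print(kennel_cost(3, 25.0))      # fine: 75.0
    # kennel_cost("dogs", "dogs")    # TypeError at runtime: can't multiply str by str

    def kennel_cost_typed(dogs: int, price: float) -> float:
        return dogs * price          # a static checker rejects kennel_cost_typed("dogs", "dogs")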
There are a number of ways to build software at scale. Here are few of the main ones—each having its own set of advantages and disadvantages.
The main types are: sequential (do one, don’t go back), incremental (a series of mini-waterfalls), iterative (work on small pieces to find problems), and rapid (create prototypes quickly).
Here are some of the main named approaches:
- Waterfall: A sequential approach where the development is seen to flow downwards through several phases, such as requirements, design, development, testing, integration, deployment, and maintenance. A key concept in Waterfall is not revisiting a phase once you leave it. This is an older model that was supplanted by Agile in many enterprises in the 2010’s, but Waterfall is making a strong comeback.
- Agile: An iterative approach where the requirements and solutions evolve via collaboration between cross-functional teams. The central concept is lighter structure and more focus on human feedback.
- Spiral: A hybrid approach that combines waterfall and rapid prototyping approaches.
- REPL stands for Read Evaluate Print Loop, and is the simple interactive computing environment that allows you to test your code in a given language. IRB is an example of a REPL (for Ruby).
- Just in Time Compilation (also known as dynamic translation) improves the runtime performance of programs that run from byte code, or virtual machine code. Byte code is interpreted, so it's slower than compiled code, but faster than traditionally interpreted languages. .NET and Java are JIT languages. The byte code is kept in memory and segments are compiled as they are needed. This also allows security checks to be applied as code is run. Caching is also used during translation to get near-compiled speeds.
- Dynamic Compilation is used by languages like Java to improve performance as an application runs. Code is optimized as it is run, so after this has happened on a large system for a few minutes, the system gets faster. Many apps can't wait for the initial slowdown and use another approach.
- A Data Structure is a way of storing and organizing information in a computer so it can be used efficiently, e.g.: array, record, hash, union, set, object
- Arity is how many parameters something takes. So, an arity of 0, 1, or n.
- A Programmer is someone who can solve problems by manipulating computer code. They can have a wide range of skill levels—from just being “ok” with basic scripting to being an absolute sorcerer with any language.
- A Hacker is someone who makes things. In this context, it’s someone who makes things by programming computers. This is the original, and purest definition of the term, i.e., that you have an idea and you “hack” something together to make it work. It also applies to people who modify things to significantly change their functionality, but less so.
- A Developer is a formally trained programmer. They don’t just solve problems or create things, but do so in accordance with a set of design and implementation principles. These include things like performance, maintainability, scale, robustness, and (ideally) security.
Basically, all three solve problems using code. Programmer is the umbrella term which means problem solver, a Hacker is the creator/tinkerer, and a Developer is a formally trained programmer who doesn't just solve problems but does so in a structured and disciplined way, likely learned as part of a formal education.
Programming is about shaping the world through a tangible implementation of your ideas. These can be artistic or creative ideas, or ways to find answers to questions by leveraging technology.
Either way, programming is quickly becoming a mandatory skill rather than an extracurricular one. Embrace it.
- This is a mid-level primer, not an absolute baseline. Notice that I’m not talking about what variables are, etc. | <urn:uuid:d0833278-8969-484c-88fc-a96da2aadd67> | CC-MAIN-2017-04 | https://danielmiessler.com/study/programming/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00267-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930052 | 2,285 | 3.625 | 4 |
Edelman provides comprehensive public relations and marketing services to computer security companies in the United States. Their recent survey shows that nearly one in three Internet users in the US has been affected by a computer virus or hacker in the past two years. The independent survey questioned more than one thousand adults nationwide and also showed that Internet users feel far more secure at work than on their home computers.
According to the survey, 32% of the respondents who use the Internet said they had been impacted by an attacker or computer virus in the past two years. Additionally, 43% of Americans said they felt less secure on their home computers versus 17% who felt they were less protected from viruses and attackers at work. The results also indicate that West Coast residents are most likely to feel insecure on their office computers, and the most wealthy and educated feel the most vulnerable on the Internet.
The margin of error for this survey is +/-3.1% at the 95% level of confidence. The survey was conducted utilizing a Random-Digit-Dialing (RDD) methodology to help ensure that every American household with a telephone had an equal chance of being contacted. | <urn:uuid:f35bdb2c-5932-4eed-859e-bb7425c3bbac> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2003/08/22/edelman-cybersecurity-survey-viruses-impact-one-third-of-american-internet-users/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00569-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.976644 | 231 | 2.546875 | 3 |
This column is always happy to see networks being pushed into service for the advancement of science, and in this spirit we can report that two years after its dishes were mothballed, the historic Goonhilly Satellite Earth Station in Cornwall will soon be operational once more.
The site's owner, BT, today announced that it has leased the station to a consortium, Goonhilly Earth Station Ltd (GES), for use at the forefront of radio astronomy projects and deep space network comms.
The project was the brainchild of Ian Jones, MD of space sector company Orbit Research, who is now CEO of GES.
The consortium, which includes QinetiQ, the UK Space Agency and the Harwell International Space Innovation Centre, plans to upgrade the existing antennae to enable communication with space missions.
It has also partnered with Oxford University to probe the origins of the universe in a project that will see it link up with the e-MERLIN network run out of Jodrell Bank in Cheshire.
GES also hopes to reopen the site's visitor centre as a space themed exhibit.
Goonhilly, and in particular its 25.9m diameter dish Arthur (the first open parabolic dish in the world) played a pivotal role in the development of satellite communications, not just in the UK but worldwide.
It beamed the first ever satellite television pictures into Britain in 1962, and went on to broadcast the Apollo 11 Moon landing live, among other historic events. More recently, it played a pivotal role in providing alternative routes for data after US comms networks were damaged in the 9/11 terror attacks.
Arthur is a grade 2 listed structure, and as such cannot be demolished, so it is fantastic news to see it being revived for such a worthy cause, and its history is not lost on Ian Jones, who said: "As a child I can remember being inspired by the Apollo missions - my work as a satellite communication design engineer brought me to Goonhilly to design, build and test mobile satellite communication systems."
"I want Goonhilly to continue to provide inspiration to the next generation of scientists and engineers. It is our vision that the UK will continue to be [a] recognised world leader in space science," he added.
Photo courtesy: Chris McHugh/Rex Features | <urn:uuid:53495960-ef6e-4ed7-a752-3be9f0d6a2bf> | CC-MAIN-2017-04 | http://www.computerweekly.com/microscope/opinion/Good-golly-Goonhilly | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00505-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958516 | 472 | 2.671875 | 3 |
Saudi Arabia, with over 65% share, is the largest producer of meat among the GCC countries, followed by Kuwait (14%). Unlike many other GCC nations, Saudi Arabia has a significant quantity of domestic meat production, though not enough to meet the domestic demand. Regarding consumption, Saudi Arabia continues to lead with over 47% share of the total meat consumption by GCC countries; the United Arab Emirates stands second with a share of 29% due to the high tourist-inflow. As of 2015, the edible meat industry in the Saudi Arabia region was valued at USD XX.XX billion and is expected to grow at a CAGR of X% for the next five years.
Owing to support from the Saudi Arabian government, the food manufacturing and processing sector has grown fast over the past few years. The government has offered support in the form of direct subsidies for select food production equipment, duty-free imports of raw supplies, interest-free loans and highly subsidized benefits.
Growing population (both domestic and expat) is another factor responsible for the increased consumption. Urbanization and growing popularity of retail format, together, are enhancing the consumption of processed food, milk and meat. Strong economic growth leading to higher protein consumption, growing number of domestic, as well as, expat population, coupled with rising preference for red meat (both sheep and bovine meat) are driving the meat market in the region.
Restraints and Challenges
As the region has a high Islamic population, the biggest constraints for the market in the region are strict quality checks and phytosanitary norms, along with Halal requirements. Processes such as stunning and automated mechanical slaughter do not accommodate Halal procedures, making these methods of slaughter unsuitable for the region. Meat processing companies must be very careful in their selection of meat and non-meat ingredients. In addition to these existing constraints, the WHO's classification of red meat and processed meat as possibly carcinogenic is reducing the consumption of these products in certain sections of the educated population.
Due to its location and lack of natural resources necessary for livestock production, most of Saudi Arabia’s edible meat is imported, to meet its domestic demand. Saudi Arabia aims to close this gap and achieve self-sufficiency, which is part of its food security plan. Currently, Saudi Arabia can meet only 56% of its edible meat demand, i.e., chicken, beef, goat and sheep, with domestic supplies. With several projects, underway, the production capacity is expected to double by 2020. Major poultry companies have already done so with the support of the government. Hence there are a lot of opportunities in the country regarding trade, food processing and infrastructure requirements. As many importers in this region import live animals, there exist great opportunities for setting up slaughter houses and processing plants in the region.
About the Market
o Drivers: What are the key factors driving growth in the market?
o Restraints: Most relevant threats and restraints which hinder the growth of the market?
o Opportunities: Sectors of high return or quick turn around on investment?
o Market Concentration: Porter’s 5 Forces Analysis quantified by a comprehensive list of parameters.
o Market Share Analysis: Top players in the market (by value and volume).
o Company Profiles: Pertinent details about leading, high growth, and innovation-motivated stakeholders with contact, operations, product/service offerings, financials and strategies & insights | <urn:uuid:2e220e18-6be8-4f00-aaa4-230aa008816b> | CC-MAIN-2017-04 | https://www.mordorintelligence.com/industry-reports/edible-meat-industry-in-the-kingdom-of-saudi-arabia-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00413-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947582 | 710 | 2.515625 | 3 |
8 Best Practices for Encryption Key Management and Data Security
From centralization to support for standards, these encryption key management and data security best practices can help you protect your organization’s confidential data and comply with regulatory mandates.
by Gary Palgon
Data encryption is an important element of an organization’s response to security threats and regulatory mandates. What many organizations are finding is that while encryption is not difficult to achieve, managing the associated encryption keys across their lifecycle quickly becomes a problem that creates a new set of security vulnerabilities and risks making important data inaccessible to the authorized users who need it.
Confidential data resides in hundreds of places throughout an organization. It’s found in many different forms. Today’s business environment is compliance-driven, competitive and increasingly fraught with crimes of opportunity from financially motivated hackers and frustrated employees. This creates a mounting demand for effective, practical, automated, risk-mitigating ways to manage keys throughout their lifecycle so that “good guys” are granted access and the bad guys are thwarted. User and application access to these resources must be controlled, managed and audited so that authorized access is quick and reliable, all while preventing malicious attacks. Moreover, a thorough approach to key management must ask: “Who guards the guards?” The administration of keys must itself have built-in protection against internal maliciousness.
Experts in data protection urge organizations to use the following two-step process to manage data security risk and comply with regulatory requirements:
Step 1: Eliminate as much collection and storage of sensitive data as possible—if you don’t really need it, get rid of it (or never collect it in the first place);
Step 2: Encrypt, hash, or mask the remaining sensitive data at rest and in transit.
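As a rough illustration of Step 2, the Python sketch below shows the three options side by side: masking and hashing with the standard library, and encryption using the third-party cryptography package (an assumption; any vetted library or HSM-backed service could fill that role). Key generation and storage are deliberately oversimplified here.
    import hashlib
    from cryptography.fernet import Fernet  # assumes the 'cryptography' package is installed

    def mask_card(pan: str) -> str:
        """Masking: keep only the last four digits for display."""
        return "*" * (len(pan) - 4) + pan[-4:]

    def hash_value(value: str, salt: bytes) -> str:
        """Hashing: one-way, useful when you only need to match a value, never recover it."""
        return hashlib.sha256(salt + value.encode()).hexdigest()

    key = Fernet.generate_key()             # in practice the key comes from a key manager
    cipher = Fernet(key)
    token = cipher.encrypt(b"4111111111111111")
    print(mask_card("4111111111111111"))    # ************1111
    print(cipher.decrypt(token))            # original value, recoverable only with the key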
Encryption has become an increasingly important weapon in the security arsenal for data at rest in databases, files, and applications and for data in transit. Encryption is a perfect companion to strong perimeter and firewall protection. It is also one of the most important ways to protect against internal threats, which some estimates put as high as 73 percent of all breaches. Your firewall and perimeter security can’t protect you from the folks inside the fort, but encryption can.
Encryption resources such as keys, hash algorithms, certificates, and digital signatures are dynamic and fluid. They must be changed, cycled, or renewed regularly. Furthermore, they must be archived under time-based management so that they are available to retrieve historic data.
Encryption is hard for companies to perform on their own, as is the associated encryption key management. Keys proliferate exponentially as companies manage the data encryption lifecycle. If not managed properly, a new problem emerges: how to control and protect access to the keys to ensure they don’t get into the wrong hands and that they are available when needed (today and in the future). The following overview describes eight best practices in encryption key management and data security.
Best Practice #1: Decentralize encryption and decryption
One critical issue in designing a data protection plan is whether encryption and decryption will take place locally and be distributed throughout the enterprise, or will be performed at a central location on a single-purpose encryption server. If encryption and decryption are distributed, the key manager must provide for the secure distribution and management of keys.
Solutions that provide encryption at the file, database field, and application level provide the highest level of security while allowing authorized individuals ready access to the information. Decentralized encryption and decryption provide higher performance and require less network bandwidth, increase availability by eliminating points of failure, and ensure superior protection by moving data around more frequently but securely.
Best Practice #2: Centralize key management with distributed execution
A solution that employs a hub-and-spoke architecture for distributed key management allows encryption and decryption nodes to exist at any point within the enterprise network. Spoke key-management components are easily deployed to these nodes and integrated with the local encryption applications. Once the spoke components are active, all encryption and decryption of the formerly clear text data is performed locally to minimize the risk of a network or single component failure having a large impact on overall data security. The key manager should manage the generation, secure storage, rotation, export, and retirement of the keys used for encryption at the spokes.
Best Practice #3: Support multiple encryption standards
Even if you choose specific encryption standards for your organization, you may find that mergers and acquisitions or the need to work with business partners in your ecosystem will require support of other standards. Choosing a security solution that supports all industry-standard encryption algorithms ensures your organization will conform to government and regulatory requirements now and in the future.
Best Practice #4: Centralize user profiles for authentication and access to keys
A “user” is any application or person requiring access to sensitive data. Access to these resources should be based on user profiles in the key manager. Users can be assigned and issued credentials (for example, RSA certificates) to provide access to encryption resources associated with their user profile. User profiles are managed through an administrative role in the key manager. In compliance with the PCI DSS mandate and as a best practice, no single administrator or user has access to the actual keys themselves.
Best Practice #5: Do not require decryption/re-encryption for key rotation or expiration
A key profile should be associated with every encrypted data field or file. This key profile allows the application to identify the encryption resources that must be used to decrypt the data field or file, making it unnecessary to decrypt and then re-encrypt data when keys change or expire. The current key will be used to encrypt freshly created data. For existing data, the key profile will be looked up to identify and load the key that was originally used for the encryption. This is a very critical feature for large databases and 24/7 operations and provides for seamless key rotation.
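A common way to implement this is to store a key identifier alongside each ciphertext, so existing records keep decrypting with the key that produced them while new writes use the current key. The Python sketch below is a simplified, hypothetical illustration of that bookkeeping, again using the cryptography package; a real key manager adds secure storage, access control, and auditing around it.
    from cryptography.fernet import Fernet  # assumption: 'cryptography' package available

    KEYS = {1: Fernet.generate_key(), 2: Fernet.generate_key()}  # versioned key store
    CURRENT_VERSION = 2

    def encrypt_field(plaintext: bytes) -> dict:
        """Encrypt with the current key and record which key version was used."""
        token = Fernet(KEYS[CURRENT_VERSION]).encrypt(plaintext)
        return {"key_version": CURRENT_VERSION, "ciphertext": token}

    def decrypt_field(record: dict) -> bytes:
        """Use the record's key profile; no re-encryption is needed when keys rotate."""
        return Fernet(KEYS[record["key_version"]]).decrypt(record["ciphertext"])

    old_record = {"key_version": 1,
                  "ciphertext": Fernet(KEYS[1]).encrypt(b"123-45-6789")}
    print(decrypt_field(old_record))  # still readable after rotating to key version 2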
Best Practice #6: Keep comprehensive logs and audit trails
Extensive audit logging that occurs in every component of the distributed architecture is an important component of key management. Every access to sensitive data must be logged with details about the function, the user (individual or application), the encryption resources utilized, the data accessed, and when the access took place.
Best Practice #7: Use one solution to support fields, files, and databases
One benefit of the distributed execution model is that the security software doesn’t know or care what kind of data it is encrypting. To get started, define which fields need to be protected and specify how they are to be protected. Once activated, information is available based on user rights, allowing access (to the full value or a predefined masked value) or denying access. Look for a security solution that operates without requiring any alteration of field sizes.
Best Practice #8: Support third-party integration
Enterprises often have a large number of external devices (e.g., POS terminals) dispersed throughout their network. These devices do not typically have standard database-oriented applications and are dedicated to a single function using proprietary software. Using a security solution that integrates with third-party applications will protect confidential information as it moves throughout your organization’s extended network.
Several best practices have emerged for encryption key management and data security. Many security vendors incorporate these methodologies into their solutions, making it much easier for you to protect the confidential data entrusted to your organization and comply with regulatory mandates.
- - -
Gary Palgon is vice president of product management for Atlanta-based nuBridges, where he is responsible for defining strategy for the company’s widely-used data protection and managed file transfer solutions. Reach him directly at email@example.com. | <urn:uuid:e3887829-20ed-40fc-a629-10db496fb838> | CC-MAIN-2017-04 | https://esj.com/articles/2008/07/01/8-best-practices-for-encryption-key-management-and-data-security.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00047-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921466 | 1,593 | 2.625 | 3 |
A load balancer is a device that acts as a reverse proxy and distributes network or application traffic across a number of servers. Load balancers are used to increase capacity (concurrent users) and reliability of applications. They improve the overall performance of applications by decreasing the burden on servers associated with managing and maintaining application and network sessions, as well as by performing application-specific tasks.
Load balancers are generally grouped into two categories: Layer 4 and Layer 7. Layer 4 load balancers act upon data found in network and transport layer protocols (IP, TCP, FTP, UDP). Layer 7 load balancers distribute requests based upon data found in application layer protocols such as HTTP.
Requests are received by both types of load balancers and they are distributed to a particular server based on a configured algorithm. Some industry-standard algorithms are round robin, weighted round robin, least connections, and least response time.
Layer 7 load balancers can further distribute requests based on application specific data such as HTTP headers, cookies, or data within the application message itself, such as the value of a specific parameter.
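Round robin, the simplest of the algorithms mentioned above, can be sketched in a few lines of Python (the server names are placeholders; a production balancer would also health-check the pool and, at Layer 7, inspect headers or cookies before choosing):
    import itertools

    POOL = ["app-01", "app-02", "app-03"]  # placeholder back-end servers
    _rotation = itertools.cycle(POOL)

    def pick_server():
        """Plain round robin: each new request goes to the next server in the pool."""
        return next(_rotation)

    for _ in range(4):
        print(pick_server())               # app-01, app-02, app-03, app-01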
Load balancers ensure reliability and availability by monitoring the "health" of applications and only sending requests to servers and applications that can respond in a timely manner. | <urn:uuid:41023df3-6809-4f68-b689-f46e2031d818> | CC-MAIN-2017-04 | https://f5.com/es/education/glossary/load-balancer | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00533-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.934203 | 240 | 3.359375 | 3 |
Wan Z. (Sun Yat Sen University; CAS Qingdao Institute of Oceanology; Development of Guangdong Higher Education Institutes), Shi Q. (Sun Yat Sen University), and 6 more authors.
Journal of Natural Gas Science and Engineering | Year: 2013
Mud volcanoes are a common geological phenomenon in tectonically compressed areas on land and offshore. Mud volcano eruptions hold great significance for research on tectonic activity, the sedimentary environment and oil and gas accumulation. Methane emitted from mud volcanoes is also a source of greenhouse gas. Many mud volcanoes have developed in the southern Junggar Basin, Northwest China, but they have been studied very little. In this study, the chemical composition, stable carbon isotopes and gas origin of these mud volcanoes are analysed. The major gas component from the mud volcanoes in the southern Junggar Basin is methane, with an average value of 92.81%. The other gas components are ethane (4.8-2.93%), propane (0.01-0.05%), CO2 (0.11-5.36%) and N2 (0-3.63%). The methane carbon isotope ratios (δ13C1) are between -38.92‰ and -42.82‰, and ethane carbon isotope ratios (δ13C2) are -20.50‰ to -22.95‰. All these data have similar characteristics to other mud volcanoes around the world. Based on the C1 (methane)/(C2 (ethane)+C3 (propane)) and δ13C1, δ13C2 results, the released gas is a coal-type thermogenic gas. The gas is from a middle-low Jurassic coal-measure source. © 2013 Elsevier B.V. Source | <urn:uuid:f8b0fd7f-d126-437c-b1c9-6972c9c4d4bd> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/development-of-guangdong-higher-education-institutes-2599234/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00257-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.874564 | 397 | 3.03125 | 3 |
Primer: Grid Computing
By David F. Carr | Posted 2006-05-06
Grid computing harnesses the power of many computers for a common task.
What is it? An approach to pooling the computational resources of many computers—typically, low-cost ones built from commodity components—to accomplish tasks that otherwise would require a more powerful computer or supercomputer. A true grid should be more flexible than a traditional server cluster or server farm, where many machines are assigned to perform the same function in parallel. That is, it ought to be possible to dynamically reassign computers participating in a grid from one task to another, or to make grids at different locations cooperate to perform a demanding task.
Who came up with this definition? Argonne National Laboratory computer scientist Ian Foster has been the most outspoken advocate of what he calls The Grid, a vision in which computing power will eventually flow worldwide like electricity on the power grid. According to Foster and his colleagues, a grid becomes a grid when it crosses organizational boundaries (for example, between companies and independent departments) and uses standard protocols to accomplish a significant task.
What are some examples? SETI@home, the Search for Extraterrestrial Intelligence organization's effort to harness idle computer cycles on the world's PCs to analyze radio-telescope data for evidence of intelligent signals, is a grid based on volunteer resources. Hewlett-Packard has designed a corporate version of a "cycle scavenging" grid for an automaker, using idle time on engineering workstations to perform simulation tasks (see Reference box). There are many other examples of academic grids performing scientific number-crunching, sometimes with computers at many universities linked to solve a given problem. The early commercial examples also often have a scientific or engineering bent, such as genetic analysis within biotech firms or oil field analysis by petroleum companies. Other number-crunching applications include analytic models run by financial institutions.
What about similar buzzwords? There's a lot of overlap, particularly with concepts such as utility computing and on-demand computing. However, utility computing is also associated with a particular business model where users or organizations only pay for the computer cycles they use. On-demand computing allows the amount of computing power available to applications or organizations to expand and contract based on demand.
Similarly, many grid computing standardization efforts focus on using XML Web services to let nodes in a grid communicate with each other. And service-oriented architecture (SOA) is often mentioned as being complementary to grid computing because defining the components of a system as loosely coupled services is one way of dividing up the processing workload between nodes. However, a system built around SOA principles and Web services is not necessarily a grid, and not all grids incorporate Web services.
Where do I get this technology? Globus Alliance, formed by Foster and his allies, is working on standards for grid computing. Globus offers an open-source software product called the Globus Toolkit. However, most grids today are either custom-built or created with proprietary technology from vendors such as Platform Computing and United Devices. Sun Microsystems offers racks of computers pre-configured for grid computing, and will sell you time on The Sun Grid, a utility computing offering. IBM and HP offer their own assortments of hardware, software, utility computing and consulting.
In addition, SAS has produced a grid version of its statistical analysis package, in partnership with Platform Computing, and SAP is also grid-enabling some of its software. | <urn:uuid:93ed30a5-f632-455f-9d65-258bf9611ee1> | CC-MAIN-2017-04 | http://www.baselinemag.com/storage/Primer-Grid-Computing | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00193-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947946 | 729 | 2.921875 | 3 |
As part of Green House Data’s recent acquisition of FiberCloud, the company gained three data centers in the state of Washington, each connected via redundant fiber.
These network links are further improved through Multiple Protocol Label Switching (MPLS) network technology, which increases data center Quality of Service by allowing administrators better control over traffic shaping and faster receipt of data packets at endpoints.
MPLS is a network protocol that increases speeds through network shaping. It forwards the majority of packets at network Layer 2, the switch level, where normally data would have to be passed up to the routing level, Layer 3. (Networks are often described with a 7-layer OSI model, from the physical layer 1, which carries bits, up to the application layer 7, which carries application data.)
The ingress router, where the data enters the network, labels the packet header (called the label stack), and this label is stripped at the egress router when it exits the network.
Sometimes this is referred to as a "layer 2.5" protocol, as the definition of this network layer is somewhat ambiguous and sits outside the strict data link layer 2 and network layer 3.
By adding a shorter path label instead of having the router read full length network addresses, every router on the network path doesn’t have to lookup the address in a routing table. It also means packets can transfer on any network regardless of network protocol, reducing dependence on certain link modes.
MPLS can even be stacked, so the top level label is used to deliver the packet to a destination, where that label is stripped and a second label is then used for the next destination, and so forth.
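The forwarding behaviour is easy to model. The Python sketch below walks a packet through a tiny, invented label-switched path: the ingress router pushes a label, a core router swaps it using a simple table lookup instead of a full routing-table match, and the egress router pops it. All labels and router names are made up for illustration.
    # Hypothetical label table: (router, incoming label) -> (action, next router, outgoing label)
    LFIB = {
        ("ingress", None): ("push", "core-1", 100),
        ("core-1", 100): ("swap", "core-2", 200),
        ("core-2", 200): ("pop", "egress", None),
    }

    def forward(packet):
        router, label = "ingress", None
        while True:
            action, next_router, new_label = LFIB[(router, label)]
            print(f"{router}: {action} label {label} -> {new_label}")
            router, label = next_router, new_label
            if action == "pop":
                return router, packet      # the egress router falls back to a normal IP lookup

    forward({"dst": "203.0.113.7", "payload": "..."})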
Different label-switched paths can be used to shape network traffic, so administrators can control the flow of data on the network via MPLS. Pre-defined paths can be set for latency thresholds, jitter, packet loss, and downtime. This helps meet agreed-upon Service Level Agreements.
The three primary advantages of MPLS in a data center service provider environment are to engineer network traffic, controlling how it is routed through the network, managing capacity, and prioritizing some services over others; using the same infrastructure to transport data and IP routing; and improving network resiliency. | <urn:uuid:59262260-e6c9-4462-bee0-55afab7809ec> | CC-MAIN-2017-04 | https://www.greenhousedata.com/blog/improving-data-center-qos-through-mpls-network-connections | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00193-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.909985 | 455 | 2.703125 | 3 |
Cybercriminals continue to respond with lightning speed when they see an opportunity to exploit a national or global news story to spread malware. In fact criminals are inventing “breaking news” that appears to relate to high-profile current events.
The Commtouch Security Lab continually analyzes malicious campaigns that exploit breaking news using the CNN name and other prominent news outlets to lure email recipients to malicious sites. The average time between an actual news event and its exploitation hovered around 22 hours during the last three months.
On Friday, September 6, malware distributors invented fake news designed to take advantage of public interest in the possibility of a U.S. airstrike against Syria. The emails used the subject line, “The United States Began Bombing,” and were crafted to appear as a legitimate CNN news alert. It is an example of the cybercriminal community harnessing the interest and anxiousness about current events to increase the success of their malicious campaigns.
Prior to the Syria-related example, the average start time for a virus attack was already decreasing. In March 2013, when the new Pope was elected, the first malware and phishing attacks began after 55 hours. In April 2013, after the Boston Marathon bombing, it took 27 hours to see the first related attacks exploiting interest in the event.
Further examples include the newborn royal baby and news about the NSA whistleblower Edward Snowden. But examples such as the recent Syria-related campaign in September show that spammers are not waiting around – they are becoming even “faster” than the events themselves. | <urn:uuid:4ee75ebb-1520-4a33-b774-1af4d2429a1b> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2013/09/27/cybercriminals-exploit-most-news-within-22-hours/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00495-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951008 | 318 | 2.671875 | 3 |
Viruses move to mobile phones
14 Jun 2004
Kaspersky Lab detects Cabir, the first network worm for mobile phones
Kaspersky Lab, a leading information security software developer, has detected Cabir, the first network worm which propagates via mobile networks. It infects telephones running Symbian OS. So far, Cabir does not seem to have caused any security incidents.
It seems that the worm was created by a virus writer going under the name of Vallez. This pseudonym is used by 29a, an international group of virus writers. The group specialises in creating proof-of-concept viruses. Among the group's creations are Cap, the first macro virus to cause a global epidemic; Stream, the first virus for additional NTFS streams; Donut, the first virus for .NET; and Rugrat, the first Win64 virus.
Preliminary analysis of the malicious code shows that Cabir is transmitted as an SIS file (a Symbian distribution file), but the file is disguised as the Caribe Security Manager utility, part of the telephone security software. If the infected file is launched, the telephone screen will display the inscription "Caribe". The worm penetrates the system and will then be activated each time the phone is started. Cabir scans for all accessible phones using Bluetooth technology, and sends a copy of itself to the first one found.
Analysis of the worm's code has not so far detected any malicious payload.
The worm is coded to run under Symbian OS, used in many Nokia telephones. However, it is possible that Cabir will function on handsets produced by other manufacturers.
A full description of Worm.Symbian.Cabir.a is available in the Kaspersky Virus Encyclopaedia. | <urn:uuid:408db331-5a3f-4bee-a200-8485f71b2b10> | CC-MAIN-2017-04 | http://www.kaspersky.com/au/about/news/virus/2004/Viruses_move_to_mobile_phones | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00551-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.899155 | 366 | 3 | 3 |
Although MIT developed the first computer password in 1961, it's a technology that can be traced back far earlier. After a quick Internet search, I discovered that the use of passwords dates back to the Spartan military in 700 BC. So much for progress. One would think that after 2,700+ years we would have come up with a better way of doing things. Are there other possible options?
While the technology has changed since MIT's work in 1961, the concept remains the same: using a complex password of the right length and changing it frequently will provide a reasonable level of security. However, after conducting an informal survey of peers and contacts, I was shocked to see the average person is dealing with and managing over 200 passwords (some had over 300!) because we need a password for just about everything -- from banking to benefits, social networking to photo sharing, news and travel, shopping and entertainment, to online thermostats and garage door openers (very cool by the way).
The complexity of managing so many passwords has created the need for password managers, and there are many to choose from: cloud-based versions, offline versions, and options to view the data on every type of device. They provide a reasonable level of security, but again, anything can be hacked, so this is really just an interim fix. I've also found the use of password managers is a very opinionated and emotional topic. Those I spoke with are either big fans or huge opponents.
Is biometrics an option? Apple made headlines with its recent release of the iPhone 5s, including a new kind of fingerprint scanner they are calling Touch ID. I won't go into the details as many others have, however, I will point out that this is nothing new. Back in 2004, IBM introduced a fingerprint reader built into the ThinkPad T42. I remember how excited I was to see it and use it at the time, but I don't remember using it since. As I look at the notebook computer I'm writing this blog on, I see that the same fingerprint scanner is built in and yet I've not used it until now. After re-enabling the software, it seems to work fine. It scanned my fingers and I was able to log in. I discovered that not much has changed since 2004. So why don't I use it? Well, because without a lot of other technology enabled, working, and synchronizing, it's not secure and doesn't meet the corporate standards of an acceptable solution. But to lock your children out from purchasing music, it's perfect!
Dual-factor authentication is a good answer to the problem. Just like an ATM card, the combination of something you have (the card) and something you know (the PIN) provides the best mix of security and ease of use. While a number of online services, such as Facebook, have enabled this capability for users who opt in, I can only guess most people don't use it because it's another step.
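To make the "something you have" half concrete, here is a rough sketch of how a time-based one-time password, the kind generated by smartphone authenticator apps, is computed. The base32 secret is just an example value:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, digits=6, period=30):
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period                 # 30-second time step
        msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()   # HMAC-SHA1, per RFC 6238
        offset = digest[-1] & 0x0F                           # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # prints a 6-digit code that changes every 30 seconds

The service holds the same secret and accepts a small window of recent codes, so possession of the device storing the secret becomes the second factor.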
Over the years, some companies have tried to create a single password that works across all services. It has never caught on, most likely due to a lack of trust. To me, having a single password is the perfect solution. One place to sign up, one username and password (or a biometric combination), and it works everywhere. All web sites and computer hardware would connect and authenticate to it. I'm guessing, however, that's not going to happen very soon since it would require the cooperation of most companies who aren't motivated to work together on this.
I'd love to see all of these technologies combined together: I sit down at a PC and it auto detects the smartphone in my pocket. The smartphone prompts me to scan my finger with the built-in scanner, and after I do, I am securely logged into the PC. Sounds like a dream.
I'll continue to wait patiently. How about you?
When we think about progress in HPC, most of us use hardware speed, as reported in listings like the Top500, as our yardstick. But, is that the whole story – or even its most important component? HPC hardware and the attendant systems software and tools suites are certainly necessary for progress. But to harness HPC for practical problem solving, we also need the right math, as expressed in our solvers and applications algorithms. Hardware is tangible and visible while math is seen through the mind’s eye – and is easily overlooked. Lately, there hasn’t been much public discussion of HPC’s math. Where has it gone? Has it matured to the point of invisibility – or is it still a vibrant and dynamic part of HPC? Let’s take a look.
“Unglamorous but Critical”
From the early days of HPC, math was clearly seen as a vital element. In December of 1982, the Report of the Panel on Large Scale Computing in Science and Engineering, also known as the Lax Report, was published. One of its recommendations called for (emphasis mine):
Increased research in computational mathematics, software, and algorithms necessary to the effective and efficient use of supercomputer systems
Twenty years later, in July of 2003, the Department of Energy (DOE)’s Office of Science published: A Science-Based Case for Large-Scale Simulation, also known as the SCaLeS Report (Volume 1, Volume 2). Among other things, it reiterated the critical role of solvers in HPC (emphasis mine):
Like the engine hidden beneath the hood of a car, the solver is an unglamorous but critical component of a scientific code, upon which the function of the whole critically depends. As an engine needs to be properly matched to avoid overheating and failure when the vehicle’s performance requirements are pushed, so a solver appropriate to the simulation at hand is required as the computational burden gets heavier with new physics or as the distance across the data structures increases with enhanced resolution.
Solvers & Speedup
When improvements in HPC hardware performance are discussed, mention is often made of Moore’s Law and the desire to keep pace with it. Perhaps less well known is the observation that algorithm speedups have historically matched hardware speedups due to Moore’s Law. For example, consider this excerpt from the SCaLeS Report:
The choice of appropriate mathematical tools can make or break a simulation code. For example, over a four-decade period of our brief simulation era, algorithms alone have brought a speed increase of a factor of more than a million to computing the electrostatic potential induced by a charge distribution, typical of a computational kernel found in a wide variety of problems in the sciences. The improvement resulting from this algorithmic speedup is comparable to that resulting from the hardware speedup due to Moore’s Law over the same length of time (see Figure 13).
Figure 13 (top): A table of the scaling of memory and processing requirements for the solution of the electrostatic potential equation on a uniform cubic grid of n × n × n cells.
Figure 13 (bottom): The relative gains of some solution algorithms for this problem and Moore’s Law for improvement of processing rates over the same period (illustrated for the case where n = 64).
Algorithms yield a factor comparable to that of the hardware, and the gains typically can be combined (that is, multiplied together). The algorithmic gains become more important than the hardware gains for larger problems. If adaptivity is exploited in the discretization, algorithms may do better still, though combining all of the gains becomes more subtle in this case.
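As a back-of-the-envelope illustration of the scale of those algorithmic gains (textbook complexity estimates, not the report's exact figures), compare rough operation counts for solving the discrete electrostatic potential problem on an n × n × n grid:

    n = 64
    estimates = {
        "banded Gaussian elimination": n**7,  # ~N^(7/3) work for N = n^3 unknowns
        "point Jacobi iteration":      n**5,  # roughly n^2 sweeps of O(n^3) work each
        "full multigrid":              n**3,  # optimal: work linear in the number of unknowns
    }
    for name, ops in estimates.items():
        print(f"{name:28s} ~{ops:.1e} operations")
    print(f"algorithmic gain, elimination -> multigrid: ~{n**7 / n**3:.0e}x")
    # roughly 2e7 for n = 64, the same order of improvement hardware gained from Moore's Law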
Time to Solution
So, if hardware gains and algorithmic gains could be “multiplied together,” what would that imply? If we are currently targeting a 1,000-fold increase in hardware speed over the present decade and if algorithmic gains keep pace, then in ten years we’ll have improved our problem-solving capability by a factor of 1,000,000. Thus we’d be able to solve today’s problems in one millionth of their current solution time, or use today’s time to solution to tackle problems a million times harder. Sounds pretty impressive. Is the necessary math on track to make this happen?
Obviously, things aren’t as simplistic as I’ve made them out to be. To get the multiplicative effect, algorithms and hardware architectures should be independent of one another. But in real HPC life, algorithms and hardware architectures interact. Fast algorithms are usually “complicated” and complicated algorithms are best implemented on “simple” uncomplicated architectures. Historically, when new, more complicated, hardware architectures are introduced we revert to simpler and slower solvers. Consequently, the optimistic estimates of improvement in time to solution may not materialize. In fact, time to solution could go up. This effect can go largely unnoticed by the general community because simpler solvers can require lots of mathematical operations and faster architectures spit out more operations per second. Thus in this situation, applications codes can run “fast” but produce solutions slowly.
As we move toward extreme scale HPC hardware, the interaction of algorithms and hardware architectures is becoming more important than ever. Last year, DOE’s Office of Advanced Scientific Computing Research (ASCR) published a Report on the Workshop on Extreme-Scale Solvers: Transition to Future Architectures. In it, the following observation is made (emphasis mine):
The needs of extreme-scale science are expected to drive a hundredfold increase in computational capabilities by mid-decade and a factor of 1,000 increase within ten years. These 100 PF (and larger) supercomputers will change the way scientific discoveries are made; moreover, the technology developed for those systems will provide desktop performance on par with the fastest systems from just a few years ago. Since numerical solvers are at the heart of the codes that enable these discoveries, the development of efficient, robust, high-performance, portable solvers has a tremendous impact on harnessing these computers to achieve new science. But future architectures present major challenges to the research and development of such solvers. These architectural challenges include extreme parallelism, data placement and movement, resilience, and heterogeneity.
The extreme-scale solver report goes on to address the issue of solver dominance:
Increasing the efficiency of numerical solvers will significantly improve the ability of computational scientists to make scientific discoveries, because such solvers account for so much of the computation underlying scientific applications.
This figure, taken from the extreme-scale solver report, shows that for a typical application as processor count and problem size increase, the time spent in the application’s solver (blue), relative to the time spent in the rest of the application’s code (pink), grows and soon dominates the total execution time.
What’s to be done about this – especially as we anticipate the move to exascale architectures?
In an attempt to find some answers, ASCR has formed an Exascale Mathematics Working Group (EMWG) “for the purpose of identifying mathematics and algorithms research opportunities that will enable scientific applications to harness the potential of exascale computing.”
At ASCR’s request, the EMWG has organized a DOE Workshop on Applied Mathematics Research for Exascale Computing (ExaMath13). ExaMath13 is taking place on 21-22 August and encompasses 40 presentations, selected on the basis of two-page position papers submitted to the EMWG a few months ago. About two-thirds of the presenters are from the DOE Labs, with the rest coming from universities. The seventy-five submitted position papers from which the 40 presentations were selected may be found at the EMWG website. They make interesting reading and reinforce one’s optimism about the applied math community’s commitment to meeting the challenges posed by exascale architectures.
As the ExaMath problem is complex, it’s not surprising that most of the position papers deal with intricate mathematics. However, a few also address the bigger picture. To mention just one of those, Ulrich Ruede’s paper, entitled: New Mathematics for Exascale Computational Science?, summarizes the challenges faced by the applied math community particularly well:
I believe that the advent of exascale forces mathematics to address the performance abyss that widens increasingly between existing math theory and the practical use of HPC systems. Tweaking codes is not enough – we must turn back and analyze where we have not yet thought deeply enough, developing a new interdisciplinary algorithm and performance engineering methodology. Beyond this, exascale opens fascinating new opportunities in fundamental research that go far beyond just increasing the mesh resolution.
So, it looks like HPC’s math is back in the foreground. There are lots of bright folks in the applied math community. Let’s see what they come up with to address the difficulties posed by ExaMath.
As microprocessors push up against the limits of miniaturization, many are reflecting on what the post-silicon era has in store. Recently Sandia National Laboratories published an article describing the steps it is taking to extend the pace of computational progress over the coming decades. Some of the forward-leaning technologies include self-learning supercomputers and systems that greatly outperform today’s best crop while using less energy.
As history tells us, many of today’s established technologies would have seemed impossible at one time. Think about explaining Internet-connected smart phones to a pre-mobile, pre-Web generation – and that wasn’t that long ago.
“We think that by combining capabilities in microelectronics and computer architecture, Sandia can help initiate the jump to the next technology curve sooner and with less risk,” said Rob Leland, head of Sandia’s Computing Research Center.
Leland is leading a new initiative focused on next-generation computing called Beyond Moore Computing that encompasses Sandia’s efforts to advance computing technology beyond the exponential trend that was observed by Gordon Moore in 1965.
Moore’s law can be extended for a few more process shrinks, but from an energy perspective the cost is no longer sustainable. The industry needs technology that uses less energy at the transistor device level, Leland said.
Scientists at Sandia anticipate that multiple computing device-level technologies will evolve to fill this gap, as opposed to one dominant architecture. So far, there exist about a dozen candidates, including tunnel FETs (field effect transistors), carbon nanotubes, superconductors, and paradigm-changing approaches like quantum computing and brain-inspired computing.
Leland makes the case that Sandia Labs, a multi-program laboratory operated by Sandia Corporation, a subsidiary of Lockheed Martin Corp., is well positioned to shape future computing technology.
The lab has decades of supercomputing experience, both on the hardware and software side, extending to capability computing and capacity computing. Leland references two key facilities in particular that will contribute to next-gen computing: the Microsystems and Engineering Sciences Applications (MESA) complex, which carries out chip-level R&D; and the Center for Integrated Nanotechnology (CINT), a Department of Energy Office of Science national user facility operated by Sandia and Los Alamos national laboratories.
This is really an inflection point, where it is difficult to predict what tomorrow’s computers will look like. “We have some ideas, of course, and we have different camps of opinion about what it might look like, but we’re really right in the midst of figuring that out,” Leland said.
One way that computing’s progress has been limited is the mandate for backwards software compatibility. Many computers are running code that was optimized to run on a different architecture.
“To break out of that, we have to find different architectures that are more energy efficient at running old code and are more easily programmed for new code, or architectures that can learn some behaviors that once required programming,” notes Erik DeBenedictis of Sandia’s Advanced Device Technologies department. He expects that computers are about a decade away from being able to manage both old and new code in an efficient manner.
DeBenedictis is pushing for breakthroughs beyond the transistor level. He cites cognitive computers and technologies that move data more efficiently as being crucial for the kinds of big data problems that are becoming so prominent.
This new generation of cognitive computers would be self-learning and able to share some of the programming burden. DeBenedictis makes the point that “while computers have gotten millions of times faster, programmers and analysts are pretty much as efficient as they’ve always been.” Smarter computers have the promise of ameliorating this bottleneck.
As for a timeline, Advanced Device Technologies department manager John Aidun says that post-silicon technology is coming sooner than one might think. Looking through the lens of national security, Sandia thinks this new tech will be needed sooner than industry would develop it on its own. Hence, the concerted efforts in this direction. Aidun estimates Sandia could have a prototype within a decade.
The lab is working to accelerate the process by identifying computer designs that leverage new device technologies and demonstrate fabrication steps that would lower the risk for industry. Mobile computing is an area that’s getting a lot of attention. Mobile meets a lot of the requirements of UAVs and satellites. On-board processing for satellites and other sensors would mean less need for data transfer.
Again, with history as a guide, the next big thing in computing may be an extension of a current technology, a mix of technologies (as in heterogeneous computing), or something entirely different and new.
If your job (or even your personal life) requires you to do anything substantial with numbers, chances are you use a spreadsheet app to do it. As a Mac user, you've got plenty of choices among spreadsheet apps, but for most of us the choice comes down to three: Microsoft's Excel 2011; Apple's Numbers (version 3.2); and the browser-based Sheets section of Google Docs.
The one to use is really a personal choice, and that decision is not the focus of this article. (I personally prefer Excel, possibly because I've been using it for nearly 30 years). But regardless of the app you use, the question here is: How well do you know how to use it, really?
As a spreadsheet vet, I gave that question some thought and came up with the following list of things that I think every savvy spreadsheet jockey--not beginners, but people who've been using one of these apps for a while--should know. I'm not talking about any specific task. Rather, these are the techniques and concepts that I think you should know in order to graduate from casual to serious user.
1. Format Numbers
Because numbers can take many forms (decimals, integers, percentages), you need to apply formatting to make it clear what they mean. For example, most people would find it easier to understand 25% as opposed to 0.25. So, after you enter the number in a cell and select that cell:
Excel: Many often-used number formatting options are visible in the Home ribbon. You can also use the Format > Cells menu, then click Number in the dialog box that appears. All number formats are listed down the left edge of the dialog box; select one, and its options appear on the right.
The Custom option (recently added to Numbers as well) is especially useful, as you can combine text with your formatted number. For example, a format of #,##0.00 "widgets" would format your number with a comma if needed, two decimal places, and the word widgets after the number. Your cells will still be treated as numbers for use in calculations, but they will display with the defined text.
Numbers: Click the Format icon (the paintbrush) in the toolbar, then select the Cell entry in the resulting sidebar. Select the option (Automatic, Number, and so on) you want to use from the pop-up menu. You may need to set other values: For example, if you choose Numeral System, you'll need to set values for Base, Places, and how to represent negative numbers. (Numbers also includes special number formats such as Slider, Stepper, Pop-up Menu, and more; these can be used to create intuitive data entry forms.)
Sheets: All number formats can be found in the Format > Number menu; each formatting option appears in its own submenu. As in Excel, you can create custom number formats that mix text and numbers--but you have to find the option first, as it's buried in the Format > Number > More Formats submenu.
2. Merge Cells
Another useful formatting trick is to merge cells. Merged cells are what they sound like: two or more cells merged into one. This is a great way to center a header above a number of columns, for example. Merged cells are a powerful way to get away from the strict column-and-row layout of a typical spreadsheet.
To merge cells, you want to have a value only in the first cell you intend to merge, as values in any other cells will be wiped out by the merge. Select the range of cells to merge, by clicking on the first cell (the one containing the data) and dragging through the range you wish to merge.
Excel: Click the Merge entry in the Home ribbon, and then select one of the Merge options that appear in the pop-up menu--Merge and Center is what I use most often.
Numbers: Select Table > Merge Cells.
Sheets: Select Format > Merge Cells, then choose one of the Merge options, such as Merge Horizontally.
You can also merge cells vertically, which can be useful in tables where you have a parent cell (Salesperson, for instance) that contains multiple rows of data (for example, Product Sold and Units Sold).
3. Use Functions
You probably already know how to use basic formulas to do basic arithmetic on cell contents. But functions, which let you manipulate text and numbers in many other ways, are how you really unlock the potential of spreadsheets.
If quantity mattered most, then Excel would win, with (if I counted correctly) 398 unique functions. Google Sheets comes in a close second with 343, and Numbers has 282. But the total count is irrelevant, as long as the app has the functions you need.
All three apps share a large set of commonly used functions. For instance, to add up numbers across a range of cells, they all offer =SUM(RANGE) (where RANGE is a reference to the range of cells to be summed in the parentheses). To find the average of a range of numbers, they have =AVERAGE(RANGE). To round off a number to two decimal places, you can use =ROUND(CELL,2).
With 250-plus functions in each app, there's no way I can describe even a reasonable portion of them. But here are some of the less-obvious ones that I use all the time; they also happen to exist in the same form in all three apps:
=COUNT(): Counts all numeric entries in a range. Nonnumeric values will be skipped. To include nonnumeric values, use =COUNTA(RANGE) instead.
=MAX(RANGE) and =MIN(RANGE): Return the largest and smallest values in a range. Related to these two, I also often use =RANK(CELL,RANGE), which returns the rank of a given cell within the specified range.
=NOW: Inserts the current date and time, which is then updated each time the spreadsheet recalculates. (In both Excel and Sheets, you need to add a set of parentheses: =NOW().)
=TRIM(CELL): If you work with text that you copy and paste from other sources, there's a good chance you'll find extra spaces at the beginning or end of some lines of text. The TRIM function removes all those leading and trailing spaces but leaves the spaces between words.
Beyond these examples, the best way to get to know the functions in each app is to play around with its function browser. In Numbers, you'll see the browser as soon as you type an equal sign (=); it appears in the right sidebar and provides a nice description and example of each function. In Excel, select View > Formula Builder (in the Toolbox). In Sheets, select Help > Function List, which simply opens the Sheets webpage showing the list of functions.
4. Distinguish Between Relative and Absolute References
In the functions listed above, CELL and RANGE are references to either an individual cell or a range of cells. So =ROUND(C14,2) will take the value in cell C14 and round it off to two digits; =SUM(A10:A20) will add up all the numbers in cells A10 through A20.
You can enter these cell locations either by typing them or by clicking (or, for ranges, clicking and dragging) the mouse.
Spreadsheet apps are also quite smart; if you copy =SUM(A10:A20) and paste it into the column to the right, it will automatically change to =SUM(B10:B20). This is called relative addressing, as the functions' contents are relative to where they're placed; it's the default for formulas in all three apps.
If you don't want the cell references to change when you copy or move a formula, all three apps offer a mode called absolute addressing. An absolute address doesn't change when copied to a new location. All three apps use the same symbol for creating one: a dollar sign before the row and/or column symbols in a formula. So instead of typing A10:A20, for example, you type $A$10:$A$20 to create a fixed formula that always refers to those cells, regardless of where you put it.
You can also lock only one direction: $A10:$A20 will always refer to column A, but if you copy the formula over one column and down 50 rows, it would change to $A60:$A70. Similarly, A$10:A$20 would lock the rows; copy this formula over one and down 50, and it would change to B$10:B$20.
If you're typing cell addresses directly, all three apps let you simply type the dollar sign manually. But if you're selecting cells with clicks and drags, Numbers has another way of switching between relative and absolute addressing.
Cell references added via clicking and dragging appear in small colored bubbles, with a triangle to the right; you click the triangle to pop up Numbers' absolute/relative cell-addressing window. But while this method works, I find it more time-consuming than simply typing the dollar signs where I want them to be.
5. Name Cell References
Referring to cells by location may be convenient, but it can also make it hard to figure out exactly what a given formula is doing. It also means you need to remember the location of often-used cells, which can be tricky in a large spreadsheet. If you name cells (and ranges), however, you can make the formula easier to read, as well as make reusing those cells in other formulas easier.
Consider this formula as an example: =PMT(C5/12,C6,C7). Just by reading it, you can probably guess that it returns a payment of some sort, and maybe you can tell that cell C5 contains an annual interest rate. But really, it's not easy to discern what this formula is doing. Here's the same formula using named cells: =PMT(INT_RATE/12,TERM,LOAN_AMT). Now it's a lot clearer what's going on, and you no longer need to remember that cell C5 is the annual interest rate.
Excel: Select the cell or range you'd like to name, then select Insert > Name > Define, which will pop up a new window. Type the name you'd like to create in the first box, then click Add. Repeat for as many names as you'd like to define. Once you've defined all your names, Excel even provides a way to apply them to existing functions. Select Insert > Name > Apply, and you'll get a little window showing all your named cells and ranges. Hold down the <Shift> key, click on the first name in the list, then click the last name in the list to select them all. Click OK, and Excel will insert the names into any function that references a named cell or range.
Once you've named a cell or range, the spreadsheet always uses it in formulas--even if you click on a cell, Excel will insert its name in the formula.
Numbers: Sadly, it doesn't support named ranges.
Sheets: Select the cell or range you'd like to name, then select Data > Named Range. This will display a sidebar where you can type the name of the range and (if necessary) change the cell reference. Click Done, and you've created a named range (even if it's just one cell). I'm not aware of any way to apply newly created names to existing formulas. Unlike Excel, Sheets won't use a name unless you specifically type it in.
6. Extract Data From Ranges
One of the most-common uses of a spreadsheet is to create tabular data and then extract values from that data. Consider the following worksheet for a company that sells shipping supplies:
Your job is to answer coworkers' queries, such as "What's our cost on the packing peanuts?" and "How many rolls of tape do we have on hand?" You could, of course, just look at the table every time someone asked a question, but consider that the real-world version of the table may have hundreds or thousands of rows. There has to be a better way.
And there is: The VLOOKUP and HLOOKUP functions pull data out of tables, by matching a lookup value to a value in the table. (These functions are identical in all three apps, so I'll explain how they work in Numbers.)
VLOOKUP is used when your data is as shown in the table above: each item is on its own row, with multiple columns of associated data. HLOOKUP is used when each item is in its own column, with multiple rows of associated data.
The layout of the formula is the same in each app:
VLOOKUP(LOOKUP_VALUE, TABLE_RANGE, COLUMN_NUMBER TO RETURN (ROW_NUMBER for HLOOKUP), REQUIRE EXACT MATCH)
Using a few VLOOKUP formulas, you can create a lookup tool to quickly return all the data about a given product. Here's the same worksheet as above, but with a product-lookup table added to the top. I've also included the actual formulas that are generating the results, so you can see how VLOOKUP works.
As one example, here's the formula in the On Hand row: VLOOKUP($A$2,Table 1::$A$2:$G$8,5,0)
$A$2 is an absolute reference to the value to match in the table--the contents of the green box, in other words. Table 1::$A$2:$G$8 is the range of cells in which Numbers will search for a match for whatever's in $A$2. (This is a great example of why naming ranges would be handy in Numbers.) The 5 tells Numbers to return the value in the fifth column (the first column is column one); this is the column that holds the quantity on hand.
Finally, that trailing zero is very important: It tells the spreadsheet to return only exact matches. If you leave that off either lookup formula, Numbers will return fuzzy matches--matches that come close to matching the lookup value. In this case, that would be bad--if you make a typo in your lookup cell, you don't want to see a closely matched product, you want to see error messages, letting you know there was something wrong with the lookup.
The other formulas are basically identical, differing only in which column number's data is returned.
7. Perform Logical Tests
Many times, you need to set a cell's value based on the results of one or more other cell values. For instance, in the worksheet for the shipping supplies company, the Order Alert column is either blank (if there's plenty of stock on hand) or it contains the Order Soon! warning (when inventory is getting low).
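To produce that kind of alert, all three apps share the same IF syntax. A formula along the lines of =IF(E2<=F2, "Order Soon!", ""), where E2 and F2 are hypothetical cells holding the quantity on hand and the reorder threshold, displays the warning when stock runs low and shows nothing otherwise.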
IPv4 Address Exhaustion
IPv4 address exhaustion is the depletion of the pool of unallocated Internet Protocol Version 4 (IPv4) addresses. The IP address space is managed globally by the Internet Assigned Numbers Authority (IANA), and by five regional Internet registries (RIRs) responsible in their designated territories for assignment to end users and local Internet registries, such as Internet service providers. On 31 January 2011, IANA's free pool was officially exhausted when the last IP ranges were assigned to the RIRs. IPv6 is the ultimate solution to IPv4 address exhaustion. Carrier Grade NAT (CGN) is an integral part of IPv6 migration.
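For a rough sense of the scale involved, a short illustrative Python snippet using the standard ipaddress module contrasts the entire IPv4 space with a single IPv6 /64 subnet (the address shown is from the IPv6 documentation prefix):

    import ipaddress

    ipv4_total = ipaddress.ip_network("0.0.0.0/0").num_addresses           # all of IPv4
    ipv6_one_subnet = ipaddress.ip_network("2001:db8::/64").num_addresses  # one /64 subnet
    print(f"entire IPv4 address space: {ipv4_total:,}")        # 4,294,967,296
    print(f"a single IPv6 /64 subnet:  {ipv6_one_subnet:,}")   # 18,446,744,073,709,551,616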
Like so many other ideas grafted on to the Internet, online education was expected to revolutionize how students learn, especially in colleges and universities. Unlike a lot of the ideas that were tried and then bombed, online learning appears to be working, but not how the dot-coms, as well as numerous public and private universities, originally envisioned.
Just a few years ago, many in higher education looked at the rising demographics and the World Wide Web and concluded that technology would efficiently, and less expensively, address demand for post-secondary education. University leaders assumed faculty would easily migrate their teaching and content over to the Web and that students from ages 18 to 50 would flock to the online courses.
By the late 1990s, a wide range of schools, including Columbia University and the state university system in California, decided to set up online versions of their university courses. Part of the impetus was money: to make a buck without investing in bricks and mortar. The other driving force was fear. Many academic institutions were sure that fast-moving Internet firms were going to take over higher education in terms of online learning.
"With the rise of the dot-coms, there were moments of near panic in the campus community," said Kenneth C. Green, director of the Campus Computing Project and a visiting scholar at Claremont Graduate University. "They thought the dot-coms were going to eat them alive."
But like so many other notions about business and society that the Internet was supposed to transform, it didn't quite happen. The dot-coms never managed to make money out of cyber learning. Nor did a lot of brand name universities.
As a result, online education projects were dropped, scaled back or reinvented.
"Online education is not a matter of looking at the technology and looking at the faculty and throwing the two together and hoping a less-costly way of teaching will result," Green pointed out. "It's not just hot links to content. It's a matter of building, not just posting. And after you build it, you've got to maintain it, update it and continue to support it."
Universities, in particular state-supported academic institutions, have learned some valuable lessons about what works and what doesn't work for online education since the concept was hyped so strongly just a few years ago. That knowledge is beginning to pay off as a number of state-supported online education programs grow and prosper.
Universities have learned that the target customer for online post-secondary education is primarily an adult, partly because they make up the biggest sector of the market. Students who live in dorms and attend classes on a campus where there's a library and other academic facilities represent just 20 percent of the post-secondary population. The rest attend community colleges or for-profit schools, commute to class or take a course through either a distance education or an online program.
Not only is the adult market large, its needs vary. Some adults are looking to change careers with a new degree, while others are looking for courses that will enhance their current job. For the latter, corporations have proven adept at offering e-learning to their employees, who want a discrete educational experience that's shorter than the traditional semester timeframe.
For-profit universities, such as Phoenix University and Jones International, have stepped up to fill this niche and have done quite well. For example, Phoenix University now has the largest population of any academic institution in the country with more than 100,000 students and has the largest Web demographics with an estimated 30,000 students studying online.
Another lesson learned has to do with pricing. Columbia University launched Fathom, its online education project, as a for-profit venture. Today the courses are free. Other e-learning programs have also had to change course
Independence Day took on new meaning this year as North Korea allegedly launched cyberattacks against the U.S. and South Korea. Twenty-five Web sites, including those of the Federal Trade Commission, the Secret Service, the Transportation Department, and The Washington Post, were shut down July 4 by a cyberattack allegedly from North Korean hackers, the Associated Press reported.
Access requests generated by malware crippled the Web sites of South Korea's presidential office, defense ministry, and the National Assembly, the South Korea Communications Commission reported.
The National Intelligence Service in South Korea told South Korean lawmakers Wednesday that North Korea was behind the attacks, according to the AP, citing an aide to one of the lawmakers. NIS, South Korea's spy agency, said it could not confirm the report but was working with officials in the U.S.
The U.S. Department of Homeland Security has issued a notice to federal agencies on handling such attacks. "We see attacks on federal networks every day, and measures in place have minimized the impact to federal Web sites," spokesperson Amy Kudwa told Reuters.
Denial of Service
Both U.S. and South Korean Web sites were hit with denial-of-service attacks. DoS attacks often are intended to prevent a Web site or Internet service from functioning properly, temporarily or indefinitely. A DoS attack overwhelms a target with requests that exhaust its bandwidth or server capacity.
Typically, attackers have far less bandwidth per machine and need to band together to facilitate DoS attacks, according to Jose Nazario, a security analyst with Arbor Networks. In this case, 12,000 personal computers in South Korea and 8,000 in the U.S. were hijacked to bring down government, financial institutions, and media Web sites.
Politically motivated DoS attacks have increased around the world both by number and severity, according to Nazario. Notable examples include Olympic Web sites taken down by hackers in Korea in 2002, Web sites in Estonia taken down in April 2007, attacks between Russia and Ukraine, and China's DoS attacks against Cable News Network's Web site.
A security specialist who was part of the team that discovered the McColo spamming botnet has a different take on the attacks. "What confuses this whole image is the suspected political rhetoric, as shown from some press sources regarding North Korea or China," said Jart Armin of HostExploit. "Of course, as this is repeated around the Internet, the word 'suspected' is soon lost in translation."
"In fact these DoS attacks are from remote file inclusion (RFI) hackers using compromised servers in such places as Morocco and Malaysia," Armin added. "The hackers themselves are of Indonesia and Brazil origin, some of which were also clients of 3FN (a rogue ISP that hosts botnets and other illegal malicious content) before their closedown by the FTC."
DoS attacks violate both the Internet Architecture Board's Internet Proper Use Policy and the acceptable-use policies of nearly all Internet service providers.
On Wednesday, most of the more than two dozen Web sites attacked were back to normal, while others were still not up and running at full capacity, thanks to hackers who initiated additional attacks on seven other Web sites, including Ahnlab, a company that provides online security services, according to the Korean press agency Yonhap News.
The NIS said U.S. authorities are cooperating to track down those responsible for the attacks.
The Federal Communications Commission (FCC) is an independent United States government agency. The FCC was established by the Communications Act of 1934 and is charged with regulating interstate and international communications by radio, television, wire, satellite and cable. The FCC’s jurisdiction covers the 50 states, the District of Columbia, and U.S. possessions. –FCC Website
The FCC recently released its "long-awaited" plan on national broadband in the U.S., and though it has created quite a vision of where the U.S. should go on Internet access, is it the right agency to do this? According to the FCC website, the FCC was created to form policy and regulate communications, not design and legislate future use of such communications channels. So if not the FCC, then who?
It’s time that the President create a cabinet position that plans, implements and monitors the state of technology in these United States. Technology is a key driver for development, investment and business in the U.S. and is as important an issue as Energy or the Interior, which are already cabinet-level positions. There’s no doubt that in many areas, the U.S. leads in technology adoption–fiber-to-the-home, broadband adoption, number of homes with personal computers–but in other areas has fallen seriously behind (most developed countries have a higher penetration of mobile phones, for instance, and should I even mention the state of computing in our public schools?). In December, President Obama appointed a new position, a cyber security coordinator who is tasked with protecting U.S. computing and network assets. This is one area that should report directly to a cabinet-level position. Cyber security, networking, telephony, Internet, wireless systems all should have a leader who sets the plan for future investment and strategy.
The FCC has done a good job of elevating the debate, but it’s not the right department to lead it. It’s time now to create one, or risk falling further behind.
The industrialization of hacking has introduced a wave of threats that are increasingly sophisticated, coming from more effective and efficient actors profiting from attacks on IT infrastructure. If you think about it, just 10 years ago we were focused on less sophisticated attacks such as Blaster and Slammer. Over time, we have moved from stopping simple viruses and macroviruses of the 1990s to worms, spyware and rootkits, along with advanced persistent threats (APTs) and crimeware.
In this video we examine rootkits, a set of software components used to maintain a persistent and undetectable presence on a computer. Despite its reputation, not all rootkits are inherently malicious; some rootkits are designed to mask cheating in video games or to bypass software product activations.
That said, most rootkits today are indeed bundled with malware such as keyloggers, or they take control of the system as a zombie member of a botnet to launch other attacks. Rootkits are classified based on the level of the system in which they operate, from firmware rootkits up to userland rootkits. The difficulty of detecting rootkits depends on their sophistication and their classification.
Watch the video below to learn more about how rootkits operate.
Computer-Related Injuries Up, Be Careful Out There!
Computer-related injuries are on the rise. The number of injuries grew by 732 percent, the American Journal of Preventive Medicine reports, while the rate of household computer ownership grew by only 309 percent over the same period. Falling monitors and trip-inducing cords are partly to blame.
We are hurting ourselves with our computers, and more so than ever before, reports a new study from the American Journal of Preventive Medicine.
Back pain, blurred vision and mouse-related woes aside, researchers reported in the July 2009 issue of the AJPM that there has been a more than sevenfold increase in computer-related injuries due to tripping over computer equipment and head injuries due to falling computer monitors, among other incidents.
The AJPM reports that data from the national Electronic Injury Surveillance System database shows that more than 78,000 cases of "acute computer-related injuries" were treated in U.S. emergency departments from 1994 through 2006, with approximately 93 percent of the injuries occurring at home.
During the 13-year period of the study, acute computer-related injuries increased by 732 percent - though during this same time, household computer ownership grew by only 309 percent, according to the AJPM.
"The computer part most often associated with injuries was the monitor," reported the AJPM in a statement, though such injuries have declined from their peak of 37.1 percent in 1994, due to heavier cathode ray tube monitors being replaced with smaller and easier-to-lift liquid crystal display monitors.
Children under the age of 5 had the highest injury rates, followed by adults 60 years of age and older. Tripping or falling, and hitting or getting caught on equipment, were most often to blame, followed by injuries to the head.
"Future research on acute computer-related injuries is needed as this ubiquitous product becomes more intertwined in our everyday lives," said Lara B. McKenzie, with the Nationwide Hospital Center for Injury Research and Policy, in the AJPM statement. | <urn:uuid:91303b04-558f-4e5e-bb14-05af89899b19> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Desktops-and-Notebooks/ComputerRelated-Injuries-Up-Be-Careful-Out-There-214159 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00048-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.966753 | 415 | 2.734375 | 3 |
Since we recently wrote about logical data recoveries from hard drives and how they structure information, it seems like a good time to tackle a question our data recovery engineers are frequently asked.
Let’s say you go to the store (or go online) and buy a new 320GB hard drive for $30 or whatever the latest special is on these now basic-level drives. Granted, you’re more likely to buy larger drives these days, but just bear with me for this example. You plug the USB connection to your laptop, go to disk manager, and see the drive show up as 298 GB. Or maybe it’s 297.44 GB with a 668 MB Western Digital Smartware virtual CD-ROM partition. Either way, the question is: Where did those extra GB’s promised on the box go?
The answer is that kilobytes are counted one way by a computer, and another way by people. A Windows operating system relies on binary and counts in base 2, and the marketing conventions are based on familiar base-10 numbers.
Here’s how it works: Hard drives store data with microscopic patches of metal that are either magnetized or not. One patch is a bit, and together eight of them make a byte.
So, one byte is simply eight tiny switches — eight 1s or 0s — resulting in 256 different combinations that can be assigned a value. In ASCII code, for example, the byte “01100101” is the letter “e.” The number “1” in the text we read is “00110001” in binary code.
As you can see, bytes are based in binary – the switches are either on or off, electricity is there or not, the pull of a magnetic field exists or it doesn’t. Ones and zeroes.
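Those two byte values can be checked with a couple of illustrative lines of Python:

    print(format(ord("e"), "08b"))  # -> 01100101, the letter "e" in ASCII
    print(format(ord("1"), "08b"))  # -> 00110001, the character "1"
    print(int("01100101", 2))       # -> 101, the decimal ASCII code for "e"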
Operating systems, such as Windows, live in binary. Numbers expand by the power of 2. Under this system, a kilobyte is defined as 2 to the 10th power (2^10) of bytes, or 1,024 bytes.
The convention for the retail market, on the other hand, is to look at how many bytes there are, divide by a thousand – and bam – there’s the number of kilobytes. The advantage is that people are used to base 10 numbers, and this maintains the traditional definition of kilo. As in, kilo equals one thousand or 10^3.
In the familiar base 10 system, 1 gigabyte = 1,000 megabytes = 1,000,000 kilobytes = 1,000,000,000 bytes. Giga is billion; mega is million; kilo is thousand.
The difference between these counting systems explains why an operating system reports the capacity differently from how we’d count bytes in a base 10 system.
Let’s go back to our original example — the 320 GB hard drive. The box says that there are 320 GB because there are in fact 320,000,000,000 bytes. But an operating system lives in base 2 and defines 1,073,741,824 bytes to be 1 GB. So if you divide 320,000,000,000 by 1,073,741,824, you get about 298 GB, which is what Windows shows the drive’s capacity to be.
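The same arithmetic, written out as a quick Python check:

    bytes_on_box = 320_000_000_000       # the advertised capacity, counted in base 10
    one_binary_gb = 2**30                # 1,073,741,824 bytes, how the OS counts a GB
    print(round(bytes_on_box / one_binary_gb, 2))  # -> 298.02, the "298 GB" Windows reports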
So when you get a new drive and the capacity displayed seems to be less than promised, don’t worry. You’re still technically getting all the bytes you paid for. It’s just counted differently by your computer.
Relevant, over-used joke: There are 10 types of people in this world. Those who know binary, those who don’t, and those who weren’t expecting this joke to be in base 3.
Building the Human Foundation for IT Security
Many of our modern technologies have brought with them a trade-off between reward and risk. We depend on our cars, for instance, even though we all know that accidents happen. To mitigate the risk of driving, we maintain our automobiles and wear our seat belts. Auto manufacturers have added air bags and reinforced the structure of their cars. But no matter how safe and secure the car itself, we are all still vulnerable to the other drivers on the road.
Today, computers and IT systems send information across a different kind of highway, creating boundless opportunities, again with startling risks. And while the equipment we use for IT gets better and better, we find that there is one element of uncertainty in this new world that is the same as ever — the behavior of human beings.
According to a December 2002 survey of 638 U.S. organizations conducted by the Computing Technology Industry Association (CompTIA), nearly one-third of companies had experienced between one and three major security breaches over the previous six months. In nearly all cases, human error was the most likely cause. Eighty percent of those respondents felt that a lack of security knowledge or training, and a resultant failure to follow security procedures, was the root cause of their troubles. When asked about possible solutions to this problem, 96 percent of the respondents said they would recommend security training for their IT staff, with 73 percent favoring a comprehensive security certification. Yet 69 percent of these companies had trained less than one-quarter of their IT staff, and 22 percent had trained none of their IT employees on security to date.
Information systems today power our telephone systems, upon which emergency services depend. They power hospitals, storing patient records and tracking care. They power airports, banks and other essential pieces of our modern lives. The risks involved in interrupting these systems carry far more gravity than ever before. And as good as our technology is getting, it’s still only as secure as the IT professionals who implement and configure it.
The U.S. Department of Homeland Security in its National Strategy to Secure Cyberspace has said that the two major barriers to improving cyber-security are a lack of familiarity, knowledge and understanding of security issues, and an inability to find sufficient numbers of adequately trained or appropriately certified personnel to create and manage secure systems. The department’s conclusion echoes the opinion of many in the IT industry. And for the first time, companies are working together to build a foundation of the skills necessary to effectively design and administer the security of IT systems.
The Role of Certification
Certifications in general provide a helpful marking point for IT professionals to validate their skills and for employers to evaluate current and potential staff members. Certifications, as much as college degrees, are a wonderful tool for documenting what we’ve learned and helping employers distinguish between the professionals who possess the required skills and those who don’t. They help both employers and IT professionals evaluate skill sets.
Also like a professional with a college degree, an IT professional who has obtained a security certification is not necessarily a full-blown expert on IT security. A decade ago, certification seemed like a Holy Grail for IT professionals. A certified professional had “arrived,” it was thought. But anyone who’s ever worked with a lawyer or a doctor knows that a person with five or 10 years of experience is generally quite preferable to someone who’s just completed her residency or passed his bar exam.
Much like a bar exam, certification could never substitute for years of experience. It shows only that a person basically knows what it takes—the person possesses the baseline skills and has done the homework necessary to be fit for that role. The certification proves that the foundational skills are there.
But as we’ve discussed, this distinction is increasingly critical. With IT so pervasive in our economy today, the stakes are much higher for companies that employ unskilled network administrators. A decade ago, the biggest price a company would generally pay for unskilled IT administration was downtime or wasted resources. And those things are bad enough. But when compared to someone hacking private information and then disclosing it, or shutting down a major service, or changing public records, it pales.
Today there is malicious intent; hackers aren’t just hacking for the fun of it. Today a criminal doesn’t have to go to the bank to rob it. He can be hundreds or thousands of miles away, at any time of the day or night. Today a skilled hacker could potentially shut off runway lights, overturn a criminal conviction or open up a dam, with catastrophic consequences.
Security Certifications From Microsoft and CompTIA
Recently Microsoft, together with industry consortium CompTIA, made an announcement that it has taken the next step in this direction by creating formalized, standardized certifications for IT security professionals. The new certifications are part of a security-specialization program for both the Microsoft Certified Systems Administrator (MCSA) and Microsoft Certified Systems Engineer (MCSE) credentials. These programs provide a way for IT professionals to assess their current skills, develop those skills they still need and validate their ability to design and manage a secure computing environment.
While the core MCSA and MCSE certifications examine the participant’s ability to implement baseline security measures, the new MCSA: Security and MCSE: Security designations go beyond that baseline and look specifically at things like managing and troubleshooting service packs and security updates, and being able to implement and troubleshoot secure communications channels.
In addition, CompTIA’s Security+ certification is included in the program, allowing IT professionals to obtain platform-neutral security expertise. CompTIA represents more than 15,000 members across the IT industry, including hardware and software manufacturers, solution providers, distributors and educational organizations. Released in December 2002, Security+ has been adopted by a number of certification providers as a component of their certification programs.
The beauty of the CompTIA model is that it relies on industry cooperation. Companies across the industry, even intense rivals, sit down across the table and agree on what these certifications need to contain. With Security+, not only did major industry players come to the table, but for the first time government agencies contributed as well—The National Institute of Standards and Technology, the FBI and the Department of Defense, to name a few. To my knowledge, it’s probably the first time in the IT world that training and certification have been acknowledged by the federal government as an important element in solving a nationwide problem.
For IT professionals who are interested in obtaining the security certification but are not sure how to begin, an up-front, cost-free skills assessment can help. The assessment will provide a tailored road map for obtaining the knowledge and skills required to fulfill the role of security specialist with a clear view for anyone looking to develop core security skills.
This alone can save a company and its IT professionals a great deal of time—imagine being asked to become a security specialist without even knowing what skills that job requires. Fortunately you don’t have to spend a month or more just figuring out what you need to know, as Microsoft and CompTIA have done that legwork for you.
Both the Microsoft security specializations and the Security+ designation have undergone a rigorous, methodical process to ensure t | <urn:uuid:67839a73-9ed4-43e5-915a-843d613269f9> | CC-MAIN-2017-04 | http://certmag.com/microsoft-certification-building-the-human-foundation-for-it-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00470-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955094 | 1,500 | 2.625 | 3 |
NASA said today new data show carbon dioxide-based snow, or what's more commonly known as dry ice, falls on the Red Planet's south pole -the only known such weather in our solar system.
Frozen carbon dioxide requires temperatures of about minus 193 degrees Fahrenheit (minus 125 Celsius) and the new analysis is based on data from observations in the south polar region during southern Mars winter in 2006-2007, identifying a tall carbon dioxide cloud about 300 miles (500 kilometers) in diameter persisting over the pole and smaller, shorter-lived, lower-altitude carbon dioxide ice clouds at latitudes from 70 to 80 degrees south.
In the news: Google Glass goes high-fashion
Instruments on NASA's Mars Reconnaissance Orbiter (MRO) in orbit around Mars detected the snow for this latest study, NASA said. The presence of carbon dioxide ice in Mars' southern polar caps isn't a surprise, NASA said, for example, the Phoenix Lander mission in 2008 observed falling water-ice snow on northern Mars.
NASA said Mars' south polar ice cap is the only place on Mars where frozen carbon dioxide persists on the surface year-round. Just how the carbon dioxide from Mars' atmosphere gets deposited has been in question. It is unclear whether it occurs as snow or by freezing out at ground level as frost. These results show snowfall is especially vigorous on top of the residual cap.
"These are the first definitive detections of carbon dioxide snow clouds," said Paul Hayne of NASA's Jet Propulsion Laboratory in a statement. "We firmly establish the clouds are composed of carbon dioxide -- flakes of Martian air -- and they are thick enough to result in snowfall accumulation at the surface."
Check out these other hot stories: | <urn:uuid:d292755a-6fc3-4216-a5bc-cf1b95f401af> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2223101/security/nasa--dry-ice---snowfall-lands-on-mars---no-word-on-school-closings.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.22/warc/CC-MAIN-20170116095120-00194-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.901654 | 356 | 3.78125 | 4 |
After a successful lift off late Friday night, NASA's lunar orbiter is powered up, communicating and on its way to the moon.
NASA's Lunar Atmosphere and Dust Environment Explorer (LADEE) observatory lifted off at 11:27 p.m. ET atop a U.S. Air Force Minotaur V rocket, which started out as a ballistic missile but was converted into a space launch vehicle. It was the first NASA mission to launch from the Wallops Flight Facility on Wallops Island, Va.
LADEE's ascent into space could be seen up and down the East Coast, even as far away as Maine.
NASA's lunar orbiter blasted off Friday night atop a U.S. Air Force Minotaur V rocket. The spacecraft now is headed to the moon. (Image: NASA)
The observatory separated from its rocket, powered up and began communicating with ground controllers soon after liftoff, according to the space agency.
It is expected to reach the moon in about 30 days and then enter lunar orbit and begin its work.
The launch wasn't without a glitch, however.
NASA reported that during technical checkouts soon after the launch, the spacecraft commanded itself to shut down its reaction wheels, which are used to position and stabilize the spacecraft. Engineers are working on the problem and feel they have plenty of time to get the reaction wheels working again before LADEE enters lunar orbit.
A normal spacecraft checkout takes a couple of days, and this anomaly may add a couple more days to the process, NASA said.
"The LADEE spacecraft is working as it was designed to under these conditions. There's no indication of anything wrong with the reaction wheels or spacecraft," said S. Pete Worden, Ames center director, in a written statement. "The LADEE spacecraft is communicating and is very robust. The mission team has ample time to resolve this issue before the spacecraft reaches lunar orbit. We don't have to do anything in a rush."
He added that this is not an unusual event for a spacecraft.
The orbiting observatory is expected to study the moon's atmosphere, giving scientists information that should help them better understand Mercury, asteroids and the moons orbiting other planets. However, that's not the spacecraft's only mission.
About a month after launch, it is scheduled to begin a limited test of a high-data-rate laser communication system. If that system works as planned, similar systems are expected to be used to speed up future satellite communications, as well as deep space communications with robots and human exploration crews.
This will be the space agency's first test of laser communications, though in 2017, NASA is expected to launch a Laser Communications Relay Demonstration, which is expected to run tests for two to five years.
Using a laser for communications, instead of radio systems, would enable robots -- similar to the Mars rover Curiosity -- as well as astronauts to send and receive far greater data loads, whether they're in orbit around Earth, on the moon or on a distant asteroid.
The two-way laser communications system can deliver six times more data with 25% less power than the best radio systems.
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is firstname.lastname@example.org.
Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center.
This story, "After successful launch, NASA probe heads to the moon" was originally published by Computerworld. | <urn:uuid:30a34995-00a1-42c2-a477-64030b34a087> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2169716/data-center/after-successful-launch--nasa-probe-heads-to-the-moon.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00361-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949927 | 744 | 3.09375 | 3 |
In a climate of slow global economic growth, the opportunity presents itself for new and emerging industries to gain prominence in place of failing institutions. In the case of the recent worldwide economic slowdown, IT has emerged as a powerful driver of growth in nearly every developed country.
New technology innovations can transform economies. Take digitization – the mass adoption of connected digital services by consumers, enterprises, and governments – for example. According to the Global Information Technology Report 2013 by the World Economic Forum, “digitization boosted world economic output by nearly $200 billion and created 6 million jobs in 2011.” The report highlights how digitization also drives improvements in GDP per capita and can lead to a lower unemployment rate.
Technology innovations change the way individuals live and work and introduce new ways of doing business. They can also introduce new challenges, such as global cybercrime and privacy issues, that society and policy makers need to grapple with. As a society, we need to balance the risks and rewards of the new technologies that emerge around us. Even with the unintended consequences that come along with new technologies, it is clear that exporting and sharing them worldwide is a force for good. The overwhelming benefits of a more connected, smaller world far outweigh the inherent risks.
Tech Makes the World Go Round . . . And Get Smaller
As we look forward over the next ten years, there are a number of technologies that could have a massive, economically disruptive impact. I recently read the McKinsey Global Institute’s 2013 Disruptive Technologies report and was intrigued by the list of the top 12 technologies that they believe will “transform life, business and the global economy.” According to the report, these technologies share four key characteristics: high rate of technology change, broad potential scope of impact, large economic value and substantial potential for disruptive economics. They include:
- Mobile internet
- Automation of knowledge work
- The internet of things
- Cloud technology
- Advanced robotics
- Autonomous and near-autonomous vehicles
- Next-generation genomics
- Energy storage
- 3D printing
- Advanced materials
- Advanced oil and gas exploration and discovery
- Renewable energy
Another common theme binding these technologies together is human advancement. These technologies have the ability to grow the global economy and make our everyday lives easier. But, they have the potential to introduce new challenges and risks that we as a society need to solve.
Take advanced robotics, for example. The further automation of some workforces will create a streamlined workflow that can generate a consistent product at a standardized rate without human labor. The benefits are clear and adoption of this technology is generally good for our global economy. However, the human cost, such as lost jobs, cannot be underestimated. We’ll need to come together as a society – of consumers, business leaders and policy makers – to solve these challenges and ensure that we’re benefiting from the adoption of these new technologies.
IT: Breaking Down Borders
While the global impact of these innovations might be clear, what does it mean for your organization?
Organizations need to place bets on the technologies that can have the biggest impact to their business. Competitive advantage often comes to early adopters who leverage technology to transform their business. As disruptive technologies come to market, IT needs to quickly evaluate and implement those that make sense. And, they need to experiment with those technologies that they may not need right away.
As the world gets smaller and more connected organizations need to ensure they have a global R&D strategy. For years, many companies were focused on offshoring R&D to India. Today, tech hotspots are popping up around the world in places like the U.K.’s Tech City, Germany, Singapore and Brazil. The key for organizations is to leverage the pools of innovation around the world and set up shop in countries where talented engineers and scientists can interact directly with customers and prospects.
Clearly, technology innovations hold tremendous promise for our global economy and for our individual businesses. A decade ago, we had no clue the impact that mobile technology would have on our lives and today it’s ubiquitous. Regardless of the upheaval brought about by new technology advances and the new challenges they can introduce, disruptive innovations are the key to a prosperous future. I for one am looking forward to what the future holds! | <urn:uuid:5486b2a1-9794-49cc-8178-fad159235103> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2475477/it-management/tech-makes-the-world-go-round.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00269-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942518 | 883 | 2.96875 | 3 |
Image: Colorado is one of a number of states where state and local governments are prohibited by law from directly providing broadband service. image © 2005 Matthew. GNU Free Documentation License, Version 1.2. Article reprinted courtesy of Stateline.org
Colorado is one of a number of states where state and local governments are prohibited by law from directly providing broadband service, for example, free municipal wireless connections. So a recommendation in the Federal Communication Commission's National Broadband Plan has state officials scrambling. Released in March, the plan calls for Congress to ensure that state and local governments don't pose any barriers to making broadband available. If approved, the action could override the state laws.
"The worst thing that could happen in the state of Colorado is for a law like that to be rolled back, and we don't have accompanying policies in place," says John Conley, executive director of the state's Statewide Internet Portal Authority. To deal with that and other possible federal actions, Colorado has formed a broadband council to review the plan, as well as state policy, and deliver guidance to state lawmakers in the coming year.
Colorado's situation reflects the dynamically changing broadband environment in the United States today and the efforts of state officials not only to keep up with the changes in the plan, but to get out ahead of them. And since March, the FCC, its plan and other factors shaping the landscape of broadband in the nation have been on a wild ride.
What's at stake, states recognize, is the potential of broadband to improve the delivery of health care, public safety, education, and other services-and to make their workers and businesses competitive in a global economy. According to a new report released June 21 by the Pew Center on the States, "Bringing America Up to Speed: States' Role in Expanding Broadband," states -- with an infusion of $7.2 billion in federal stimulus funds and guidance from the FCC's broadband plan -- have stepped up efforts to ensure universal access to fast, reliable broadband connections and to give their residents the skills and resources they need to understand the benefits of broadband and get the most out of it.
Three weeks after the release of the plan, however, a federal court ruling, Comcast v. FCC, effectively undermined the FCC's authority to regulate many aspects of broadband, including oversight of management of Internet service providers and enforcement of a range of objectives, such as network neutrality, which prohibits providers from restricting all forms of content. "The effect of the Comcast decision," says Austin Schlick, general counsel for the FCC, "made their services unregulated and unregulatable under the current legal framework." Perhaps worried that Congress would step in to set its own standards, a group of providers, including Verizon, AT&T and Comcast, banded together to voluntarily impose net neutrality on themselves, developing guidelines to manage their networks. Net neutrality advocates have said that such an agreement, while nice, is no substitute for clear rules.
The court decision also casts some of the National Broadband Plan's recommendations into doubt. For one thing, it could upset the FCC's plan to reform the Universal Service Fund, which now guarantees funding for universal telecommunications services to everyone, to also provide broadband to all Americans, according to FCC Chairman Julius Genachowski.
The ruling also threatens, he says, the plan's recommendations that would protect consumers and promote competition by ensuring transparency in broadband access services, safeguard the privacy of consumer information, facilitate access to broadband services by persons with disabilities, protect against cyber-attacks, ensure next-generation 911 services for broadband communications and preserve a free and open Internet.
In all of these areas, states have direct and indirect interests as well. For example, the funding for a broadband plan proposal to expand E-rate, a grant program that enables many schools and libraries to be connected to the Internet, could be in question. "Over the years, we have done very well with that program to get Internet to our schools," says Craig Orgeron, strategic services director at Mississippi's Department of Information Technology Services, who waits along with officials from other states to see how broadband's regulatory limbo will affect the E-Rate program and other areas.
After the ruling, the FCC voted June 17 to begin the process of reclassifying broadband as a telecommunications service like traditional phone lines, over which the FCC has more clearly delineated regulatory authority. Genchowski, though, has called for an approach that would scale back some of this authority that he has said would be inappropriate for broadband -- such as regulating Internet content -- an approach similar to the FCC's regulation of wireless telephone.
But U.S. Congressman Lee Terry, a Republican from Nebraska and a proponent of broadband for revitalizing economies in rural areas, argues that Congress needs to step in and decide the next steps for broadband and broadband regulation. He says that the FCC is "usurping the Congressional role in broadband planning." He is not alone; more than half of Congress, including members of both parties, has expressed concern about the new reclassification plan. In one of several Congressional letters sent to the FCC before its vote, more than 70 House Democrats urged, "the significant regulatory impact of reclassifying broadband service ... should not be done without additional direction from Congress."
That could slow the process, however. Harold Feld, legal director for Public Knowledge, a public interest group focused on digital rights, notes, "Democrats and Republicans are fairly far apart on what sort of action they'd like to see." It took 20 years, he says, for Congress to act when similar FCC authority over cable television was in question.
"I don't know what the shakeout will be," says Barbara Esbin, a senior counsel with law firm Cinnamon Mueller who spent more than a decade with the FCC in the Media Bureau and the Cable Services Bureau. "But if I were a state regulator or broadband director, I would be watching this very closely."
Right now, state officials like Colorado's Conley are hoping just to get some clarity. Conley, the go-to person when others in the state have questions related to state or national broadband issues, says he is disconcerted by the murkiness that currently shrouds some important national broadband matters. "If someone asks me what you have to do to meet the net neutrality requirement, I don't know," he says. "And I don't know where to look."
Although the federal ruling casts uncertainty on aspects of FCC authority over broadband, it does not affect many of the recommendations in the FCC's broadband plan. Indeed, the FCC and other agencies already have begun implementing some of the suggestions, including changing regulations regarding utility pole attachments and taking steps to auction broadband spectrum..
For states, perhaps the most significant recent development has been the announcement of a new round of National Telecommunications and Information Administration (NTIA) grants for broadband mapping and planning activities, funded out of the $350 million the Recovery Act had designated for states to map the availability, speed, and location of broadband services. The new grants, in addition to the $100 million already granted for state mapping, cover three additional years beyond the initial two that the first round of grants had covered and expand funding to include state task force planning work and programs to increase computer ownership and Internet use.
"The NTIA broadband mapping program has allowed us to take a more centralized
approach and to take more resources in the state to focus on broadband," notes Stuart Freiman, broadband program manager for the Rhode Island Economic Development Corp. For Freiman, the gamut of state actions suggested by the plan and supported by this planning grant -- everything from improving Internet adoption and digital literacy to using broadband to bolster education and integrate broadband applications across state public safety agencies -- "have created a fantastic opportunity for states to deal with issues they maybe haven't addressed in the past or have ignored because they thought it was being taken care of."
In all, the FCC has more than 60 action items from the plan slated for 2010 implementation. But one of the most prominent measures, auctioning off some new airwaves to commercial providers for broadband applications, has erupted in a dispute over whether a dedicated public emergency broadband network should be owned by government or private carriers. Public safety officials, as well as a number of state and local government groups, including the Council of State Governments and the National Governors Association, argue that these airwaves should be dedicated to a public emergency broadband network. Paying for a public safety network might be difficult, however, and the FCC has suggested that such a network could be constructed less expensively on existing public safety airwaves and supplemented by empowering public safety agencies to take over commercial bandwidth in emergency situations. Congress weighed these arguments in a public hearing June 17 as it considers legislation to build a national public safety broadband network.
One of the FCC's first actions relating to the plan, to reduce the cost and time it takes broadband providers to access the country's 49 million utility poles that the FCC regulates, was influenced by existing programs in some states. FCC General Counsel Phoebe Yang says the move was modeled after attachment guidelines in Connecticut and New York, which regulate their own poles, that can halve the number of days the process might take in other states. When the FCC implements these new rules, those poles still regulated by those other states will lag behind.
By informing the FCC of similar best practices, as well as challenges without current solutions, states will continue to play a crucial role in developing many of the federal regulations that will tumble out in the coming months and years. "We'd love to have input on the infrastructure issues, particularly around the impact of the plan's recommendations on traditional wireline carriers. We rely on the states to communicate that to us. Nothing is self-effectuating. Nothing is pre-decided," Yang says, noting that there are numerous issues, such as broadband adoption by those with disabilities, where the front lines are at the state level.
States also are moving ahead to use their authority to modernize policies and bolster broadband availability. On June 15, Governor Pat Quinn made Illinois the latest state to revamp its telecommunications law, overhauling obsolete standards from a 1985 law written in the days before widespread cell phone and broadband adoption. State officials say the new law will stimulate greater private investment in broadband and wireless technologies. | <urn:uuid:d402ca2e-f197-4ffd-b73a-743501a04e83> | CC-MAIN-2017-04 | http://www.govtech.com/budget-finance/States-Ride-Broadband-Wave.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281424.85/warc/CC-MAIN-20170116095121-00479-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.959656 | 2,093 | 2.734375 | 3 |
DELL EMC Glossary
Apache Hadoop is an open-source framework that allows for parallel processing of large data sets and collective mining of disparate data sources. Hadoop consists of the Hadoop Distributed File System (HDFS), YARN (Yet Another Resource Negotiator), and other components, such as MapReduce. YARN acts as the operating system managing other applications like MapReduce, which is responsible for processing large datasets in a parallel manner
The Apache Hadoop open-source framework extends to include additional software components, such as Spark, Zookeeper, Pig, and Hive, along with hundreds of others. These additional components address the ingestion, security, scripting, processing, visualization, and monitoring of data. Not all components are required and the use of multiple components is completely dependent on individual workflow needs.
What can you do with Hadoop?
Hadoop analytics help provide better understanding of customer behavior, operations activities, sales patterns, and more. Hadoop assists the science, medical, and pharmaceutical industries by helping researchers who are applying new analytic methods to massive quantities of data to make discoveries that could not otherwise be made using smaller data sample sizes. Hadoop also proves invaluable when evaluating Internet of things data, where countless appliances, machines, vehicles, devices, garments, accessories and more are producing massive amounts of insight-rich data every day.
What makes Hadoop a component of Big Data?
Apache Hadoop allows for the quick, streamlined mining of the various data sources that you have been collecting. This data in turn enables you to gain valuable business insight. The cross correlation of data helps you to make smarter decisions, better products and services, and more informed predictions about future trends and behavior.
Why a Data Lake for Hadoop?
At Dell EMC, we maintain that a Data Lake is essential for true Hadoop environments because the more data that is available for analytics, the richer the data insights will be. A Data Lake takes various data, traditionally held in separate silos, and consolidates them into a single repository that is Hadoop enabled. This consolidation of data allows you to work from a single data source and to manage, control, and protect that source in a unified manner.
Why a Data Lake from Dell EMC for Hadoop?
- Lower operating costs: With the capabilities made possible by a Data Lake from Dell EMC, you’ll require less storage capacity and physical space to house the same amount of data. The Data Lake from Dell EMC is simpler to manage and consumes fewer IT resources for storage administration. Due to these storage efficiencies, you can keep more data for longer rather than disposing the older data sets.
- Faster time to results: With a Data Lake from Dell EMC, you no longer have to move data because the Data Lake enables analytics in-place.
- Scale and flexibility: Although direct-attached storage (DAS) is the conventional approach to deploying and managing Hadoop, there are benefits to decoupling compute and storage with a Data Lake, especially if your Hadoop workload does not linearly scale along with the amount of data. | <urn:uuid:25ee76cb-214a-46b2-a02d-338edf76a7ff> | CC-MAIN-2017-04 | https://www.emc.com/corporate/glossary/hadoop.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00507-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.911356 | 660 | 2.71875 | 3 |
DELL EMC Glossary
Software-Defined Storage (SDS)
Storage defined storage implies storage software isn’t defined by the hardware it runs on - software is decoupled from hardware and can run on any industry standard hardware that can be procured. Software defined storage should be able to be consumed in choice of flexibility as a downloadable software or as an appliance based model depending on the organization’s implementation standpoint.
Who uses Software-Defined Storage (SDS) and why?
By adopting Software-Defined Storage (SDS), Enterprise IT organizations can incrementally add capacity a few nodes at a time so that as demand increases, storage can grow to meet the demand — as opposed to having to invest in more and more hardware for traditional storage arrays. SDS will enable them to have more flexibility in their procurement model and significantly lower storage costs.
How does the Software-Defined Storage (SDS) work?
Software-Defined Storage breaks traditional hardware lifecycle and the traditional storage appliance model. With software abstracted from hardware, customers can acquire hardware and software independently. Customers can deploy them on the hardware of their choice rather than being locked into a narrow proprietary hardware platform or SDS can be downloaded as software.
Benefits of Software-Defined Storage (SDS)
Turbo charge server performance - Extreme performance and capacity with enterprise grade protection.
Build better clouds – Build better clouds and develop modern apps at 65% cheaper than public cloud Analyze more data – Enable the organizations to store, protect & analyze more data than ever.
Reduce CAPEX & OPEX costs – Improve operational efficiency and reduce the storage costs by up to 73%. | <urn:uuid:24356205-0c54-4782-a081-e65fff8e3edc> | CC-MAIN-2017-04 | https://www.emc.com/corporate/glossary/software-defined-storage.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00507-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920222 | 346 | 2.703125 | 3 |
Using sudo to Keep Admins Honest? sudon't!
The consensus among many Unix and Linux users seems to be that sudo is more secure than using the root account, because it requires you type your password to perform potentially harmful actions. While sudo is useful for what it was designed for, this thinking is flawed, and usually comes from inexperience.
The concept behind sudo is to give non-root users access to perform specific tasks without giving away the root password. It can also be used to log activity, if desired. Similar functionality can be found in operating systems with role-based access control (RBAC). Solaris 10, for instance, has greatly improved RBAC capabilities; so you can easily allow a junior admin access to Web server restart scripts with the appropriate access levels, for example. And while Linux has recently acquired RBAC capabilities through the integration of SELinux, sudo remains in common use, even though more widespread use of RBAC will eventually make it a redundant choice.
Sudo is supposed to be configured to allow a certain set of people to run a very limited set of commands, as a different user. Unfortunately, sysadmins and home users alike have begun using sudo for everything. Instead of running 'su' and becoming root, they believe that 'sudo' plus 'command' is a better alternative. Most of the time, sysadmins with full sudo access just end up running 'sudo bash' and doing all their work from that root shell. This is a problem.
Using a user account password to get a root shell is a bad idea.
Why is there a separate root account anyway? It isn't to simply protect you from your own mistakes. If all sysadmins just become root using their user password by running sudo bash, then why not just give them uid 0 (aka root) and be done with it? For a group of sysadmins, the only reason they should want to use sudo is for logging of commands. Unfortunately, this provides zero additional security or auditing, because an attacker would just run a shell. If sysadmins are un-trusted such that they need to be audited, they shouldn't have root access in the first place.
Surprisingly, the home-user rational makes its way into the workplace as well. The recurring argument is that running a root shell is dangerous. Partially to blame for this grave misunderstanding are X login managers, for allowing the root user to login. New users are always scolded and told that running X as root is wrong. The same goes for many other applications, too. As time progressed, people started remembering that "running as root" is wrong, passing this notion down to their children, but without any details. Now that Ubuntu Linux doesn't enable a root account by default, but instead allows full root access to the user via sudo, the world will never be the same.
People praise sudo, while demeaning Windows at the same time for not having any separation of privileges by default. The answer to security clearly is a multi-user system with privilege separation, but sudo blurs these lines in its most common usage. The Ubuntu usage of sudo simply provides a hoop to jump through, requiring users to type their password more often than they'd like. Of course this will prevent a user's web browser from running something as root, but it isn't security.
We'd really like to focus on the enterprise, where sudo has very little place.
The sudo purists, or sudoists, we'll call them, would have you run sudo before every command that requires root. Apparently running 'sudo vi /etc/resolv.conf' is supposed to make you remember that you're root, and prevent mistakes. Sudoists will also say that it protects against "accidentally left open root shells" as well. If there are accidental shells left on computers with public access, well that's an HR action item.
Sudo doubters will quickly point out that using sudo without specifically defined commands in the configuration file is a security risk. Sudoist user account passwords have root access, so in essence, sudo has un-done all security mechanisms in place. SSH doesn't allow root to login, but with sudo, a compromised user password removes that restriction.
In a true multi-user environment, every so often a root compromise will happen. If users can login, they can eventually become root, and that's just a fact of life. The first thing any old-school cracker installs is a hacked SSH program, to log user passwords. Ideally, this single hacked machine doesn't have any sort of trust relationship with other computers, because users are allowed access. The next time an administrator logs into the hacked machine, his user account is compromised. Generally this isn't a big deal, but with sudo, this means a complete root compromise, probably for all machines. Of course SSH keys can help, as will requiring separate passwords for administrators on the more important (non user accessible) servers; but if they're willing to allow their user account access to unrestricted root-level commands, then it's unlikely that there's any other security in place elsewhere.
As we mentioned, sudo has its place. Allowing a single command to be run with elevated privileges in an operating system that doesn't support such things is quite useful. Still, be very careful about who gets this access, even for one item. As with all software, sudo isn't without bugs.
No matter where you choose to fit sudo into your workflow, do not use it for full root access. Administrators keep separate, non-UID 0 accounts for a reason, and it's not for "limiting the mistakes." Everything should be done from a root shell, and you should have to know an uber-secret root password to access anything as root. | <urn:uuid:5ed1f231-5ca9-4fc8-921e-0110b7a8d136> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsecur/article.php/3641911/Using-sudo-to-Keep-Admins-Honest--sudont.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280483.83/warc/CC-MAIN-20170116095120-00443-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953513 | 1,193 | 2.78125 | 3 |
Editor’s note: This post by Sean Lawson provides context on cyber conflict, an area of interest at the nexus of national security and technology. – bg
Recently, Dr. Thomas Rid of the War Studies department of King’s College in London published an article in the Journal of Strategic Studies titled, “Cyber War Will Not Take Place.” Rid’s essay relies upon a definition of war taken from the work of Carl von Clausewitz to assess whether cyber attacks can be accurately described as “stand-alone” acts of war. His conclusion is that we have yet to see any cyber attacks that, on their own, meet Clausewitz’s definition. What’s more, he predicts that we are unlikely to see stand-alone acts of cyber war in the future. Nonetheless, he does acknowledge that cyber threats are real and that various cyber tools and techniques are becoming increasingly important in international conflict, including those used for sabotage, espionage, and subversion.
At a time of increasing concern over prospective cyber threats, it is not surprising that Rid’s essay has added fuel to the ongoing debate between cyber security proponents and the so-called “cyber skeptics.” For example, cyber war expert, author, and CEO Jeffrey Carr has written a spirited response to Rid’s essay. In this post, I argue that Carr’s response misses a key component of Rid’s argument, that the debate between Rid and Carr is exemplary of an emerging debate over the definition of “war” more generally, and that the complexities of cyber conflict demand that we move beyond the kind of binary thinking exhibited in this debate.
First, Carr provides three examples of cyber attacks that he says meet the Clausewitzian definition of war provided by Rid because all three are “lethal, instrumental, and political.” His three examples:
- Kyrgyz Intelligence assassinates Gennady Pavlyuk. Kyrgyz intelligence cracked Pavlyuk’s email account and used the information they obtained to lure him out of the country under false pretenses resulting in his murder.
- Mossad assassinates Mahmoud Al-Mabhouh. Israel’s Mossad mounts an operation to assassinate Hamas leader Mahmoud Al-Mabhouh which includes infecting Al-Mabhouh’s computer with a trojan horse virus.
- Iran’s IRGC arrests 30 dissidents after cracking U.S. hosted webservers.
None of these are acts of war in the conventional sense of the term. These are 1) subterfuge in support of assassination, 2) espionage in support of assassination, and 3) espionage in support of political repression. In the first case, Kyrgyz intelligence supposedly assassinated one of its own citizens. That is not war as we typically understand it. In the third case, espionage was used to aid in carrying out an act of political repression. But neither of those acts by themselves (espionage nor arresting one’s own citizens) are war. The only example that might be considered war is the second case. But even here, given the ongoing state of violent conflict between the Israelis and Palestinians, which has included many assassinations, it is hard to see this event as somehow distinct.
But more importantly, Carr’s response misses a key component of Rid’s argument, namely, that it was about whether stand-alone cyber attacks have been or will be acts of war. Not only is it questionable whether any of Carr’s counter examples in their totality are “acts of war,” it is clear that in none of them can the cyber attack components be seen as stand-alone acts of war. The cyber attacks in each example were not the direct causes of the ultimate outcomes. Email hacking did not directly kill Gennady Pavlyuk. The trojan horse did not kill Mahmoud Al-Mabhoud. Cracked servers did not directly arrest those Iranian protestors. All of those actions (assassinations and arrests of political dissidents) have occurred, do occur, and will continue to occur without the aid of cyber attacks. The use of cyber attack tools and techniques in support of them in these cases does not make them nor the use of cyber tools and techniques “acts of war.”
Second, the debate between Thomas Rid and Jeffrey Carr is exemplary of an emerging debate that is less about the definition of “cyber war” and more about the definition of “war” in general. There is an emerging debate between expansionists and traditionalists. Expansionists argue that current definitions of “war,” either from the classic theorists like Clausewitz or the law of war, are inadequate and should be expanded to include a wide range of acts that traditionally would not be considered war. The traditionalists argue that existing definitions of war are more than adequate, that while the practice of war might change (including weapons and tactics) the fundamental nature of war does not: it is still about damage, destruction, injury, or death inflicted for political purposes, usually by state actors.
In this instance, Carr makes an expansionist argument when he claims that “traditional thinking about warfare has been made obsolete by our dependence upon cyber-space-time.” In a previous essay, he cited a NATO study of the legal lessons learned as a result of the 2008 cyber attacks against the country of Georgia. That report concluded that the cyber attacks, by themselves, did not count as “armed attack” (the legal term for what we colloquially call an “act of war”) under current definitions in the law of war. In response, the authors proposed that “new approaches to traditional law of war principles need to be developed.” Therefore, they advocated that the advent of “new bloodless types of warfare” like cyber war mean that “the definition of ‘attack’ should not be strictly connected with established meanings of death, injury, damage and destruction” (p. 30). Because the cyber attack on Georgia (and practically all other cyber attacks to date) do not come close to meeting traditional definitions of war from law of war, theorists like Clausewitz, or even common understandings of the term, the response has been to call for the redefinition of war itself to include a whole host of “bloodless” acts.
Of course, Rid is taking a traditionalist approach in this debate. He is arguing that the fundamental nature of war has not changed and is using the work of a widely-cited, well-respected classic theorist to support his argument. Others, such as Maj. Gen. Charles Dunlap, Jr. (ret) have also taken the traditionalist position. Dunlap has argued convincingly that the law of war definitions of “armed attack” are more than adequate for evaluating cyber attacks. In doing so, he is following closely the analysis provided by Michael Schmitt more than a decade ago, which is the foundation for the “effects-based” approach to determining when a cyber attack rises to the level of armed attack, when it is war. In the traditionalist view, there are no “bloodless” acts of war. Violence, death, destruction, damage, injury are required. Even then, not every act of this sort is armed attack or war (see examples 1 and 3 above).
My own views are more in line with those of the traditionalists than the expansionists. I believe that it is dangerous (for many reasons that I will not elaborate here) to expand definitions of armed attack and war. I merely wish to call attention to the fact that the debate over the definition of cyber war is becoming a debate over the definition of war in general. This is an important distinction. The outcomes of this debate will have profound impacts on the future of politics, economics, security, and individual liberties.
Finally, because the outcomes of this debate will be so important, it is all the more disappointing that issues of cyber security are so often framed in such binary terms. For example, because Rid does not accept that all malicious cyber activity is “war,” Carr lumps him into the category of cyber war “skeptics.” He made a similar move with the authors of a recent OECD report that claimed that cyber attacks do not have the ability to cause systemic shocks that are global in scope. Nonetheless, in each case, Rid and the authors of the OECD report make it clear that they take cyber threats seriously. They merely seek to be more realistic in their assessments of the impacts of cyber attacks and more precise in their categorization of the varying types of malicious actions in/through cyberspace. To be fair, though Rid’s position is more nuanced than Carr admits in his response, Rid nonetheless invites Carr’s application of a binary, proponent/skeptic categorization because his essay largely framed cyber war as a yes/no question.
Effectively addressing the complex challenges of cyber security in a globalized world demand that the public debate about cyber security move beyond such framings. Are cyber attacks war? There needs to be room for answers like, “Maybe, it depends” and “No, but there are still serious challenges.” The truth will likely lie somewhere in the gray, muddled middle between yes and no, black and white. Engaging with the messy complexity of cybersecurity challenges is essential to ensuring that war remains a continuation of politics by other means instead of politics (and every other aspect of daily life) becoming a continuation of war by other means.
[Cross-posted from Forbes.com.] | <urn:uuid:aafd71c8-5e85-4107-b47a-91ace5a3500b> | CC-MAIN-2017-04 | http://www.fedcyber.com/2012/04/22/cyber-war-and-the-expanding-definition-of-war/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00011-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951541 | 1,995 | 2.625 | 3 |
In March, the nation of Turkey attempted to shut Twitter down within the country. There are several lessons network engineers should glean from Turkey’s efforts to shut down a single service, including lessons on resilience, indirection, and the interaction between Internet governance and the technology decisions engineers make when designing protocols and networks.
First -- because of resilience -- it’s become very difficult to turn off “one service” on the Internet. Turkey tried to shut down Twitter by blocking access to Twitter’s DNS records. Using DNS to block access to a service isn’t very effective, though; it’s easy enough to switch to a public DNS server, such as those run by Google, thus restoring any services rendered unreachable through this sort of blocking.
Let’s assume, though, that Turkey had decided to block the IP addresses of Twitter servers rather than the DNS records. Would this have worked? Probably not. Just about any service can be deployed across a number of different IP addresses, even changing IP addresses on a periodic basis to make it difficult to find and block each individual instance of the service. Putting the service behind a large-scale network address translator (NAT), would make it virtually impossible to block without blocking a large number of sites that are “innocent bystanders.”
Indirection such as NAT is often considered a very bad thing in network and protocol design and engineering. If we had a truly “transparent” Internet, where every person or service had to be identified before sending traffic, we’d certainly have a lot less spam. But without the indirections, Twitter service could not have been restored to people living in Turkey.
So the first lesson is this: To block any particular service, you almost have to block the entire Internet. IP networks are just too good at routing around blocked paths or dealing with mapping information being removed from one source. Resilience is a two-edged sword -- individual services are much more reliable, but they’re also much harder to block or otherwise shut down.
Second, this level of resilience comes with another sort of cost in terms of security. DNS servers are often used to reflect or amplify denial-of-service attacks specifically because of the resilience built into the DNS system as a whole. Are we potentially facing another version of the CAP theorem? Just as a database cannot be made to be consistent, available, and partitionable all at the same time, maybe network protocols cannot be resilient, reliable, and secure all at the same time.
Finally, while we sit outside Turkey, smugly condemning an attempt to block Twitter, "free speech zones" are becoming increasingly common in the US. Free speech isn’t just about technology. It’s also about accepting that you’re not going to agree with anyone all of the time -- and, in fact, you might just find what they say offensive. This is perhaps a little more of a personal lesson, but as engineers we need to realize that, while we can’t anticipate all the potential consequences of every decision we make, there is still some interaction between the technical world and the political one.
Engineering decisions have social outcomes as well as technical ones. It’s important to remain as neutral as possible, providing technology for a narrow set of requirements at hand, but it’s also important to get our heads out of the technology sandbox and try to come to terms with the real-world implications of what we’re building from time to time. | <urn:uuid:114d59cb-b995-4448-b1a9-df1b293763b6> | CC-MAIN-2017-04 | http://www.networkcomputing.com/networking/turkeys-twitter-shoot-lessons-learned/2020812065 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00131-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965351 | 725 | 2.71875 | 3 |
What email address or phone number would you like to use to sign in to Docs.com?
If you already have an account that you use with Office or other Microsoft services, enter it here.
Or sign in with:
Signing in allows you to download and like content, which the author will be aware of.
Embed code for: Technology and Early Childhood Education-Individual Assignment
Select a size
Caritas institute of community education
2016 - 2017
Higher diploma in early childhood education
Technology and Early Childhood Education
Student : CHENG On Ki, Angel
Student No : 10125412
Date of Submission : November 29, 2016
Now, we are contact with the numerous technology in the society. Believe that each family has one or more of the technology. Such as television, smart phones, computers etc. Today's children and parents are very different. Social environment due to technological progress. Has long been changed the original mode of life. Smart phones from luxury into a necessity. Toys become i-pad. And the video games take the place of the park. As a result of rapid social change, schools need to instill more knowledge about ICT. At present, that a gradually add technology elements of early childhood education. Parents need to help children in the Internet search data to complete their homework. The education sector has different voices on the use of technology by young children. Using ICT in Early Childhood Education must be benefits and potential risks. How we balance the line in the teaching, it is worth to explore.
Now, I will like to tell you about the using of ICT in early childhood teaching. Kindergarten teaching model based on the child's interest and specific themes. Also ICT is into the activities, and not additional instillation information. Below I will design an activity for the K2, contains of 20 students. They will be divided into four groups to carry out activities. First, let children for a two-week theme of the campaign activities. So that children from the gather information and finishing the process of understanding the types of sports, forms, characteristics etc. Second, teachers allow young children to share their understanding of the sports through discussion. The most profound impression of painting as a picture and the use of sentences to describe the scene or express feelings. Third, teachers collect the works of students, with students scan the digital production version of the archive. Then children will create their own digital recording of the sentence, made of digital books. Production is completed, teachers play digital books and children to share the results. The digital book print stored in the book corner for children to read and read. This activities in the language of learning area. That language learning include listening, speaking, reading and writing. In the writing, children use pictures and words to express it. In the speaking, children will create their own digital recording of the sentence. In the reading and listening, get your visual and auditory enjoyment by playing digital books. This activity target group is K2. According to Developmental Characteristics of Children from 2 to 6 Years Old "Age 4 to 5 years, children able to tell stories from pictures; able to speak fairly fluently and clearly. " (Hong Kong Education Bureau, 2006). I believe this is effective in their activities, and their ability to permit reached.
According to The Curriculum Development Council "Children are active learners who are curious and interested in exploration. Given proper resources and adults’ assistance, children can construct knowledge on their own. A safe, comfortable, enjoyable and challenging environment is conducive to children’s learning." (Hong Kong Education Bureau, 2006). So that appear of ICT to bring them freshness. ICT can deliver content and activities that originate and support strong and productive emotions for children. Eventually they can serve as the environment and the tools for development of a child. In addition, using ICT for supporting children's learning. First, ICT to enhance children's speaking skills. According to NAEYC" Technology should be used as a tool to enhance language and literacy ... With technology, adults and children can hear and practice accurate pronunciations so they can learn one another’s languages. If teachers do not speak a child’s language, they may use technology to record the child’s speech for later translation and documentation of the child’s progress. "( NAEYC ,2012). In activities, children to record their own stories. They will listen to their recordings, and have the right to modify. The teacher will teach children to pronounce and speak, make their skills to be better. That some children are more shy, and are afraid to give speeches under the masses. The use of tape recorders can increase their chances of speaking. Second, ICT to enhance children 's reading opportunities. Teachers will be in accordance with the description of children pictures inserted text notes. Promote the use of written language in children the opportunity to express. Third, ICT to increase children's learning interest. The process of making digital books, children need to draw a picture of the sport. Then convert the picture from scanner to computer file and sound recording. This activity allows them to make their own, a story of sound and picture. Comparing the use of traditional painting , it would more appealing. Use technology to make activities interesting. May make children have a higher initiative and participation.
The role and potential of using ICT in the early childhood education. It affects the education of young children. In terms of benefits, ICT benefits education systems to provide quality education in alignment with constructivism, which is a contemporary paradigm of learning. ICT can promote active learning because it plays the role of empowerment, provides rich teaching functions and creates authentic learning opportunities. For teacher, ICT can create a teacher's ability to explore new teaching strategies because IT equipment is flexible and can be used in a variety of ways for teachers to explore. A the same time, ICT may to bring potential risks and concerns. If harmful physical effects of prolonged computer use by children. Excessive exposure of ICT, children to other learning lost interest. For example, Digital books let children have visual and auditory characteristics. They are slowly losing interest in traditional books. ICT effects on children's social development. In this activities, children and peers do not have much chance of communication. Their speech object is a tape recorder. So that children no interaction and communication in this environment. There are advantages and disadvantages in ICT, We need to find a balance in teaching. We should integration of new technologies into many other ordinary everyday activities in early childhood education, rather than replacing them. That can make effective use of information and communication technology in early childhood care and education.
EDB(2006). Guide to the Pre-primary Curriculum. November 25, 2016.
Kalas,I.(2010). Recognizing the potentialal of ICT in early childhood education. November
NAEYC(2012). Technology and Interactive Mediaas Tools in Early Childhood Programs Serving
Children from Birth through Age 8. November 25, 2016. < chrome-extension://ikhdkkncnoglghljlkmcimlnlhkeamad/pdf-viewer/web/viewer.html?file=http%3A%2F%2Fwww.naeyc.org%2Ffiles%2Fnaeyc%2Ffile%2Fpositions%2FPS_technology_WEB2.pdf>.rve as the environment and the tools for development of a child. In addition, using ICT for supporting children's learning. First, ICT to enhance children's speaking skills. According to NAEYC" Technology should be used as a tool to enhance language and literacy ... With technology, adults and children can hear and practice accurate pronunciations so they can learn one another’s languages. If teachers do not speak a child’s language, they may use technology | <urn:uuid:e3601e78-ed8b-4848-aca1-2d2ab25e0627> | CC-MAIN-2017-04 | https://docs.com/angelcheng/7314/technology-and-early-childhood | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284352.26/warc/CC-MAIN-20170116095124-00341-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940713 | 1,624 | 2.609375 | 3 |
Ransomware facts and mitigation tips
In the world of cyber security threats, ransomware is a comparatively new term that has become a major concern in recent years. In the US, May 2016 was the second-worst month for ransomware attacks in history. The threat came to widespread attention only in the last few years and has already caused huge losses of data.
What is Ransomware?
Ransomware is a type of malware that accesses a victim’s files, locks and encrypts them, and then demands that the victim pay a ransom to get them back. Ransomware infections threaten computer users with the destruction of data if they don’t pay the money to the crooks who created the infections. Cybercriminals use these attacks to try to get users to click on attachments or links that appear legitimate but actually contain malicious code. Ransomware is like the “digital kidnapping” of valuable data, from personal photos and memories to client information, financial records and intellectual property. Any individual or organization could be a potential ransomware target.
What does Ransomware do?
Ransomware prevents you from using your PC normally and demands a ransom before you can regain access. It can target any PC user, whether on a home computer, an endpoint in an enterprise network, or a server used by a government agency or healthcare provider. Ransomware can:
- Prevent users from accessing Windows.
- Encrypt files so users can’t use them.
- Stop certain apps from running (like your web browser).
In every case, ransomware demands that users pay money (a ransom) to regain access to their PC or files. Once executed on a system, ransomware generally takes one of two forms: it either locks the computer screen or encrypts predetermined files with a password. There is no guarantee that paying the fine or doing what the ransomware demands will restore access to the PC or files.
How does Ransomware work?
When ransomware first hit the scene a few years ago, computers predominantly got infected when users opened e-mail attachments containing malware, or were lured to a compromised website by a deceptive e-mail or pop-up window. Today, ransomware attacks are typically carried out using a Trojan that enters a system through, for example, a downloaded file or a vulnerability in a network service. The program then runs a payload (the part of malware, such as a worm or virus, that performs the malicious actions), which locks the system in some fashion, or claims to lock the system but does not. Payloads may display a fake warning purportedly from an entity such as a law enforcement agency, falsely claiming that the system has been used for illegal activities.
Protection against Ransomware:
- Regularly back up your data to an external device, to the cloud, or both, so that your data remains available even if a ransomware attack happens (a minimal backup sketch appears after this list).
- Make sure all of your operating system and anti-virus/anti-malware programs are set to update automatically.
- Think before you click an unknown link, because almost all ransomware infections start with a click on a link from a bogus email, a hijacked social media account, or another malicious source on the internet.
- Enable spam email detection to avoid receiving unwanted mail containing malicious attachments.
- Always check who the email sender is. Verify digital signatures or certificates for any company, or be sure the sender is trusted, before clicking any mail attachment.
- Double-check the content of the message before sending.
- Keep all machines clean to prevent any kind of cybercrime.
- Get two-factor authentication system for strong security.
- Scan any USB or external device every time you plug it in.
- Use stronger passwords to enhance the security of your computer and accounts.
- When in doubt, throw it out. If any link or mail attachment looks suspicious just ignore it.
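To make the backup tip concrete, here is a minimal sketch of a scheduled backup in Python. It archives a source folder into a timestamped ZIP file on a second drive or network share; the paths are hypothetical, and a real setup would also rotate old archives and keep at least one copy offline or in a cloud bucket so that an infection on the live machine cannot reach every backup.

```python
import shutil
from datetime import datetime
from pathlib import Path

# Hypothetical locations -- adjust to your own environment.
SOURCE_DIR = Path.home() / "Documents"   # data worth protecting
BACKUP_DIR = Path("E:/backups")          # external drive or network share

def make_backup() -> Path:
    """Create a timestamped ZIP archive of SOURCE_DIR inside BACKUP_DIR."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    base_name = BACKUP_DIR / f"documents-{stamp}"
    # shutil.make_archive appends the .zip extension itself.
    return Path(shutil.make_archive(str(base_name), "zip", root_dir=SOURCE_DIR))

if __name__ == "__main__":
    print("Backup written to", make_backup())
```

Run it from Task Scheduler or cron so that backups happen without anyone having to remember them.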
Ransomware is a very challenging threat for both users and antimalware companies, and it is still growing. Fifty new ransomware families were seen within the first five months of 2016 alone, more than in 2014 and 2015 combined. Cybercriminals have also constantly improved ransomware’s hostage-taking tactics with the use of increasingly sophisticated encryption technologies. The ransomware threat is as real as it gets, but paying shouldn’t be an option, as paying the ransom does not guarantee that victims regain access to their locked files. So there is no alternative to prevention if you want to escape ransomware.
Growing up Digital – Cell Phone Safety for Kids
My niece’s birthday is coming up, and – no surprise here – she’s begging her parents for a smartphone. They’ve put it off for a few years, but with all of her extracurricular activities, it’s beginning to sound like a great idea.
With my mind on identity theft protection, I immediately began researching if this is a safe idea, and if we can make it even safer for her. She’s a responsible kid, but smartphones open up a whole world of vulnerability for young people. Even the most mature of young people could fall victim to the vortex of social media, shopping, and gaming.
Maturity seems to be a major deciding factor for many parents, but experts say that sixth grade is a great time to start talking about smartphones. Even so, did you know that more than 25% of children ages 2-5 already have their own tablets or smartphones? This percentage jumps to a totally reasonable 56% in the 10-13 age range.
Experts also agree that human interaction is ideal for the development of young people in a wide age range, but if your middle schooler is begging you for an iPhone, you shouldn’t feel guilty for giving them one – particularly if you’re smart about how and where they use the device.
Here are some quick cell phone safety tips for keeping your child safe with their new-found independence:
- Test the waters with a “dumb” phone. Parents can control who can call and be called; set limits on texting; and it doesn’t include a data plan.
- Share the experience. Limit use of smartphones and tablets to shared family areas so that use can be monitored.
- Warn your kids about scams. Shared electronics time is the perfect opportunity to teach kids about phishing scams via email and SMS text. “Smishing” scam artists send text messages with links in them. The rule is: if you don’t know the sender, don’t click. The same rules apply to phishing scams via email, an appropriate lesson for users of smartphones and tablets.
- Set time limits. Half an hour of screen time is recommended for children 4-5 years old; an hour for ages 5-10; and two hours for high school aged kids.
- Download educational apps. Look for age limits in the app store, and fill your kid’s phone or tablet with apps that are fun, constructive, and learning-level appropriate. And be sure to review this recent blog post on the “do not download” apps for kids.
- Monitor interactive gaming. Online games now have an interactive component in which players who don’t know each other are required to interact. Teach your older kids how to be safe in this environment. We provided some identity protection tips earlier this year: Its fun and games until an identity gets stolen.
- Monitor social media accounts. Talk with your children about what is, and is not, appropriate when managing their social media accounts.
- Put the phone down and get outside. Set aside time to engage with your kids outside, over dinner, or with other forms of entertainment like reading or board games.
Share these cell phone safety and usage guidelines with your kids and set expectations before buying any kind of smartphone. Everyone should be on the same page before committing to a data plan and contract – this will mitigate future arguments about screen time and paying the bills.
I’m so glad I had the chance to do all this research for my niece – now all we have to do is help her pick a phone. That shouldn’t be too hard, right?
Be sure to stay tuned with the IdentityForce blog to learn more about how to keep you and your family safe in the digital age.
Image courtesy of Flickr user Joris Louwes.
Lv J.,Shandong Agricultural University |
Lv J.,Sino German Cooperative Research Center for Zoonosis of Animal Origin |
Lv J.,Key Laboratory of Animal Biotechnology and Disease Control and Prevention of Shandong Province |
Lv J.,Tsinghua University |
And 38 more authors.
Virus Research | Year: 2012
This study aimed to determine the transmission characteristics of H9N2 avian influenza viruses (AIVs) derived from the air. Eight H9N2 AIVs were isolated from chicken houses between 2009 and 2010. We analyzed the phylogenic and pathogenic traits of these isolates. In addition, the transmission characteristics of two airborne isolates in guinea pigs were determined under experimental conditions. Phylogenetic analyses indicated that the homologies of the HA and NA genes of the eight isolates were 95.4-99.7% and 86.6-99.8% respectively. The isolates were able to replicate in lung tissues of guinea pigs without prior adaptation. Both airborne isolates could transmit among guinea pigs by direct contact. No infection was detected in aerosol-contact animals, although H9N2 AIV aerosols were detected in the air of the isolators. An aerosol infection dose experiment showed that the aerosol median infective dose (ID50) of H9N2 AIV for guinea pigs was 3.58×10⁶ copies, demonstrating that the aerosols could infect guinea pigs at certain concentrations under experimental conditions. In conclusion, H9N2 AIV aerosols were infectious to mammals, suggesting that urgent attention will need to be paid to their transmission. © 2012 Elsevier B.V. Source
Liu D.,Shandong Agricultural University |
Chai T.,Shandong Agricultural University |
Xia X.,Institute of Military Veterinary PLA |
Gao Y.,Institute of Military Veterinary PLA |
And 7 more authors.
Science of the Total Environment | Year: 2012
There is a rather limited understanding of the antibiotic resistance of airborne S. aureus and the transmission of the antibiotic-resistant genes it carries. Therefore, we isolated 149 S. aureus strains from samples collected from the feces, the indoor air and the outdoor air of 6 chicken farms, tested them against 15 types of antibiotics, and performed REP-PCR trace identification. Strains that were 100% homologous were selected for research on the carriage and transmission of antibiotic-resistant genes. The results revealed that 5.37% of strains (8/149) were resistant to methicillin (MRSA), and 94% of strains (140/149) were resistant to compound sulfamethoxazole, etc. In addition, these strains displayed resistance to multiple antibiotics (4, 5 or 6 types), and 3 strains were resistant to 9 antibiotics. It should be noted that the antibiotic resistance of some strains isolated from the feces and the indoor and outdoor air was essentially the same, and strains with the same REP-PCR trace identification result carried the same types of antibiotic-resistant genes. The results showed that airborne transmission not only spreads epidemic diseases but also threatens the public health of a community. © 2012 Elsevier B.V. Source
The Satellite Sentinel Project (SSP), which Clooney co-founded, has been innovatively using information from satellites to help prevent humanitarian disasters before they happen—rather than reporting the aftermath of a conflict. Near real-time satellite data, provided free by one of the world's biggest commercial satellite operators, DigitalGlobe, is used to deter atrocities and to monitor military movements along the troubled border of Sudan and South Sudan, enabling responses that avoid civilian casualties. As the SSP motto goes: “The world is watching because you are watching.”
Other advocacy groups like Amnesty International and Human Rights Watch also use satellite images to monitor human rights abuses. Images taken of Burma, Syria and Zimbabwe have shown the destruction of civilian areas, including razed villages and bomb damage. These can be quite powerful for those seeking to raise public awareness and pressure for political intervention, aid or sanctions. Their increasing value in this area is supported by the fact that the Office of the Prosecutor of the International Criminal Court now has its own in-house team with an expertise in satellite data.
Steep changes in satellite technologies, particularly the increased availability of data at scales which allow the identification of ground-based objects, are obviously providing exciting new opportunities for NGOs to monitor remotely. But they also raise questions about who else is using satellite images for monitoring purposes, and what they are using them for.
In practice, governments have used satellites for many years, especially to police extensive areas where ground inspections would be a burdensome logistical exercise with high associated costs. Within Europe they are used by regulatory bodies to monitor fraud for farming subsidy payments, to patrol borders for oil spills and boats carrying illegal immigrants, and to check compliance with legislation concerning the environment, deforestation, and water usage. There are also examples of the police using archives of satellite images to investigate crimes. | <urn:uuid:fed6180c-6dcc-437b-808d-1d5a92fb6fd3> | CC-MAIN-2017-04 | http://www.nextgov.com/big-data/2013/09/anyone-can-take-pictures-you-satellite-and-theres-nothing-you-can-do-about-it/69916/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00305-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94321 | 369 | 2.734375 | 3 |
In CertMag’s Study Guide, we try to show readers how they can build up their brainpower using various tools and techniques, but by and large, most people wouldn’t associate these with the word “fun.” That’s not to say these educational means aren’t engaging, interesting and outstanding. But fun? Not really.
Your mental development doesn’t have to be all work and no play, though — IT pros can sharpen their intellectual skills through various kinds of games that give the mind a workout. After all, the brain is a muscle of sorts, and it needs to be exercised.
In fact, recent medical research shows bad memory and sluggish synapses are not necessarily inevitable consequences of old age but will almost certainly result from one’s failure to consistently exercise the mind and body.
Chances are, you’re already aware of many of the games that boost the brain. But you might not be aware of exactly how they do so. Here’s a quick overview of the some of the best-known games out there and what they can do for your mental enhancement.
Most of us know the game by its diminutive title, but the iteration that was established in 15th-century Europe and continues to this day is properly referred to as Western Chess. More than 600 million people play it on a regular basis, making it one of the most popular recreational games in the world.
I won’t go into the specifics of how chess is played here. What I will explain, though, is how it benefits the brain. For one, the game is based on patterns of movement and, thus, requires a great deal of memorization. Indeed, the most proficient players typically can glance at a board midgame for a few seconds, look away and then recite where all the pieces are.
Also, it helps with relational thinking, or understanding how dynamic units interrelate within a whole system. And with chess, these complex conditions constantly are shifting, which means participants must be mentally agile for competitive play.
The first crossword puzzles appeared in New York-based daily newspapers early in the 20th century and have since become a staple of all kinds of periodicals. Variants on these include word jumbles and the board game Scrabble.
One obvious advantage of crossword puzzles is they help build vocabulary. These games force people to really think about words and their meanings, which are often manifold (such as “duck,” “blast” or “run,” for example). In addition, crosswords necessitate a problem-solving mentality, as verbal clues — often very complex ones — are provided for participants to decipher.
Although it’s a relative newcomer in the world of mind games, Sudoku has grown significantly in popularity during the past couple of years, and it now appears in almost as many newspapers as crosswords. Perhaps this is to be expected — many people view this grid-based exercise as the numerical equivalent of those word puzzles.
It might come as a surprise to the uninitiated, though, that the numbers in Sudoku aren’t based on any kind of arithmetic. Rather, the numerals can be thought of as nine “characters,” which can (and frequently have been) substituted with almost any set of symbols. The rules are always the same, but the complexity of the puzzles can vary greatly.
Sudoku works well as a brain game because it’s founded entirely on logic. Players have to analyze a board and identify potential options and conflicts. Because of this process, it also encourages experimentation. Participants will often try out strategies, only to go back a few moves when they’ve discovered their attempted solutions are unworkable. | <urn:uuid:ac260a7c-0981-4215-9460-64ce1aebf8d3> | CC-MAIN-2017-04 | http://certmag.com/head-games/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00029-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965168 | 781 | 2.640625 | 3 |
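That trial-and-error process (try a value, follow the consequences, and back up when a conflict appears) is exactly the backtracking strategy a programmer would use to have a machine solve the puzzle. A compact, purely illustrative Python version looks like this:

```python
def solve(board):
    """Solve a 9x9 Sudoku in place; empty cells are 0. Returns True if solvable."""
    for r in range(9):
        for c in range(9):
            if board[r][c] == 0:
                for v in range(1, 10):
                    if allowed(board, r, c, v):
                        board[r][c] = v          # tentative move
                        if solve(board):
                            return True
                        board[r][c] = 0          # conflict downstream: back up
                return False                     # no value fits this cell
    return True                                  # no empty cells left

def allowed(board, r, c, v):
    """Check the row, column and 3x3 box constraints for value v at (r, c)."""
    if v in board[r]:
        return False
    if any(board[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(board[br + i][bc + j] != v for i in range(3) for j in range(3))
```

The same ingredients the code relies on — constraint checking, hypothesis testing and systematic backtracking — are what make the puzzle such a good logic workout for humans.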
Temperature is the hottest topic (pardon the pun) when it comes to maintaining the proper environmental conditions in a data center—particularly in the context of energy consumption and cost—but humidity is also important. But with ASHRAE’s recently expanded allowable and recommended ranges for temperature and humidity, is water vapor still a concern?
Humidity: What You Can’t See Can Hurt Your Data Center
Liquid water is generally a bad thing in your data center, but in the air, it’s something you need in the right proportions. Too much humidity can lead to condensation, which can in turn cause corrosion or—in sufficient amounts—electrical shorts. But too little humidity promotes buildup of electrostatic charge, and discharges of static electricity can damage or destroy sensitive electronics.
Part of the solution is data center measurement and monitoring. Installing humidity sensors (along with temperature sensors) provides information that enables maintaining proper environmental conditions. That’s clear enough. What’s not so clear is what exactly you should measure. Traditionally, relative humidity (RH) has been the metric of choice, with 45% to 55% RH being the espoused ideal range. But inherent difficulties with RH mean it is being used less frequently as a metric. Furthermore, the American Society of Heating, Refrigerating and Air Conditioning Engineers (ASHRAE) recently expanded its recommended and allowable temperature ranges—as well as its humidity ranges. Given, then, that the traditional view of maintaining 45% to 55% RH in the data center seems to be in flux, what’s the best approach?
Measuring Humidity: Absolute or Relative?
The difficulty with attempting to maintain a relative humidity in the data center is that RH is dependent on temperature: it is a measure of how much water vapor the air holds, expressed as a percentage of the maximum it can hold at a given temperature. Because warmer air can hold more water, air at, say, 50% RH and 65°F contains significantly less water than air at 50% RH and 80°F. The difficulty in the context of data centers is that these facilities deal with both warm air and cool air, which are ideally kept as separate as possible. Cool air flows into a server inlet, is heated and then is ejected as warm exhaust. The water content of this air hasn’t changed during this process (barring, of course, condensation), but the relative humidity of the exhaust is lower than that of the cool air at the server inlet.
An alternative to relative humidity is absolute humidity, which can be expressed as, for instance, the mass of water per mass of dry air. A more familiar measure of absolute humidity is the dew point: the temperature at which water in the air begins condensing (or the temperature at which the RH is 100% for a given air mass). The advantage of measuring and monitoring dew point temperatures instead of relative humidity is that the dew point at the server inlet is the same as that at the server exhaust outlet. A sensor can thus be placed at the server inlet (or the outlet) without the need to worry about getting a humidity measurement at the outlet as well.
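For a rough sense of why dew point is the more stable metric, the sketch below uses the widely published Magnus approximation to convert temperature and relative humidity into a dew point, then asks what RH the same air would show after being heated by a server. Treat it as an estimate for building intuition, not a substitute for calibrated sensors.

```python
import math

# Standard Magnus-formula constants for ordinary indoor temperatures.
A, B = 17.62, 243.12

def dew_point_c(temp_c: float, rh_percent: float) -> float:
    """Approximate dew point (deg C) from dry-bulb temperature and relative humidity."""
    gamma = math.log(rh_percent / 100.0) + (A * temp_c) / (B + temp_c)
    return (B * gamma) / (A - gamma)

def relative_humidity(temp_c: float, dew_point: float) -> float:
    """Invert the approximation: RH (%) of air at temp_c with the given dew point."""
    num = math.exp((A * dew_point) / (B + dew_point))
    den = math.exp((A * temp_c) / (B + temp_c))
    return 100.0 * num / den

# Cool supply air at the server inlet: 22 C at 45% RH.
dp = dew_point_c(22.0, 45.0)
print(f"Dew point: {dp:.1f} C")                                    # roughly 9-10 C
# The exhaust carries the same water vapor but is much warmer.
print(f"RH of 35 C exhaust: {relative_humidity(35.0, dp):.0f}%")   # far below 45%
```

The same parcel of air that reads 45% RH at the inlet reads roughly 20% RH in the hot exhaust, even though its dew point (about 9.5°C) has not changed, which is exactly why a single dew-point reading tells a more consistent story than two RH readings.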
Naturally, then, measuring absolute humidity enables companies to have less of a “moving target” with regard to maintaining a specific humidity in their data centers. Furthermore, the recently updated ASHRAE temperature and humidity ranges show a clear recognition of absolute humidity as being important—not just relative humidity.
Expanded ASHRAE Guidelines
A recent whitepaper from The Green Grid (“Updated Air-Side Free Cooling Maps: The Impact of ASHRAE 2011 Allowable Ranges”) discusses the new ASHRAE recommended and allowable ranges in the context of free cooling. Over the recommended temperature range (18°C to 27°C, or about 64°F to 81°F), a portion (temperatures below about 23°C or about 73°F) has a corresponding maximum humidity of 50% (RH). The other portion has a maximum absolute humidity of about 0.011 (measured in grams of water per grams of dry air). The previous (2004) ASHRAE recommended range maintained the traditional RH values of 40% to 55%.
Interestingly, however, the new ASHRAE guidelines still maintain the same overall humidity range—although the recommended humidity varies to some extent with temperature, the absolute humidity should never fall below about 0.006 (same units as above), nor should it ever exceed 0.011. For certain allowable ASHRAE ranges (which should only be used when the data center’s IT equipment can withstand them), the absolute and relative humidity can go beyond the recommended range. Not only do these expanded guidelines give data center operators more leeway with their cooling infrastructure, they also enable more use of free cooling (air-side or water-side economization)—in many areas of the world, throughout the entire year.
Should You Worry About Humidity?
The wider ASHRAE guidelines mean that facilities do not need to sweat humidity as much as they did when 40% to 55% RH was the rule. Furthermore, a growing recognition of absolute humidity (such as dew point) as a better metric means less measurement variation from one side the server (the inlet) to the other (the exhaust outlet). Of course, humidity is still a concern: too much or too little can still cause problems for your IT equipment.
Maintaining a certain humidity range when using mechanical cooling methods often requires addition of (or removal of) water from the air, but this generally involves a fairly closed system. When free cooling is used extensively, the natural variations in temperature and humidity of outside air can complicate the situation, simply by making humidifiers work harder, for instance. Just “opening the windows” of your facility sounds like a great cooling option (it’s certainly cheap), but doing so on particularly rainy or dry days can wreak havoc on your equipment, unless you take steps to regulate water content in the air.
So, should you still sweat humidity in the data center? In some sense, yes: too much or too little water vapor in the air is problematic—nothing about that situation has changed. But as the expanded ASHRAE recommended and allowable operating ranges indicate, many companies and manufacturers are recognizing that the old, tight limits on temperature and humidity are not as necessary as once thought. Thus, although maintaining proper humidity is still critical, it’s not as difficult as it once was (thought to be).
Perhaps the more important industry trend to note is the shift from relative humidity toward absolute humidity as the superior metric. Absolute humidity is just that—absolute, in the sense that it is a measurement of the actual water content of air. Relative humidity measures the percent capacity of air at a given temperature, which can be problematic, because data centers deal with cool air and warm air simultaneously. The current challenge for most facilities is selecting the right temperature and humidity range—whether the recommended range or an allowable range—to maximize the potential for free cooling while still adequately protecting equipment from heat and condensation. So—stay cool and dry, but keep an eye on your energy usage while you’re at it.
Photo courtesy of Sam Bald | <urn:uuid:dffdb8b8-7236-41b1-959d-c619aea676b6> | CC-MAIN-2017-04 | http://www.datacenterjournal.com/humidity-in-the-data-center-do-we-still-need-to-sweat-it/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00425-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92474 | 1,499 | 3.171875 | 3 |
Public APIs let customers connect to you in new ways, but the interface must be easy for outside developers to work with.
When Neil Fantom, a manager at the World Bank, sat down with the organization's technology team in 2010 to talk about opening up the bank's data to the world at large, he encountered a bit of unfamiliar terminology. "At that time I didn't even know what 'API' meant," says Fantom.
As head of the World Bank's open data initiative, which was announced in April 2010, Fantom was in charge of taking the organization's vast trove of information, which previously had been available only by subscription, and making it available to anyone who wanted it. The method of doing that, he would learn, would be an application programming interface, or API.
The API would put thousands of economic indicators, including rainfall amounts, education levels and birth rates -- with some metrics going back 50 years -- at the disposal of developers to mix and match and present in any way that made sense to them. The hope was that this would advance the bank's mission of fighting poverty on a global scale by tapping the creativity of others. "There are many people outside the bank who can do things with the data set we never thought about," says Fantom.
One developer, for instance, created an app that married the bank's rainfall data to Google Maps to estimate how much rainwater could be collected on rooftops and subsequently used to water crops in different parts of the world. Another app provides facts about energy consumption and shows individuals what they can do to fight climate change.
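Because the bank exposes the data through a conventional REST interface, pulling an indicator takes only a few lines of code. The sketch below requests total population figures for one country as JSON; the endpoint pattern and the SP.POP.TOTL indicator code reflect the bank's public API as generally documented, but verify them against the current developer documentation before relying on this.

```python
import requests

def fetch_indicator(country: str, indicator: str, start: int, end: int):
    """Return (year, value) pairs for a World Bank indicator, newest first."""
    url = f"https://api.worldbank.org/v2/country/{country}/indicator/{indicator}"
    params = {"format": "json", "date": f"{start}:{end}", "per_page": 100}
    response = requests.get(url, params=params, timeout=30)
    response.raise_for_status()
    meta, rows = response.json()   # first element is paging metadata
    return [(row["date"], row["value"]) for row in rows]

if __name__ == "__main__":
    for year, value in fetch_indicator("BR", "SP.POP.TOTL", 2005, 2010):
        print(year, value)
```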
Fantom and the World Bank aren't alone in such pursuits. A decade ago, open APIs were a novelty, but in the past few years they've been put to use at an accelerating rate. ProgrammableWeb, a website that tracks public APIs, listed more than 8,800 in early April. According to the site, it took eight years, from 2000 to 2008, for the number of APIs to reach 1,000, and then just another 18 months to hit 2,000. The jump from 7,000 to 8,000 took just three months.
The APIs cover a wide range of categories, including business, shopping, messaging, mapping, telephone, social, financial and government, according to ProgrammableWeb. They're becoming as necessary to an organization as a website. "In business today, an open API is more or less table stakes. It's something you have to have," says Stephen O'Grady, an analyst at RedMonk, an analysis firm that focuses on developers. "Increasingly, your traction is going to be driven by how open and how programmatically manipulable your product is."
An Evolving Model
When Best Buy first launched its API, BBYOpen, in 2009, it gave developers access only to the chain's products catalog, with descriptions and prices for all the items it had on sale, in the hopes that doing so would bring in more customers. That was part of a deliberate strategy to start slowly, says Steve Bendt, director of emerging platforms at Best Buy. "We had to prove these things over time," he says. "We started to prove out that this is a very vibrant and viable area to pursue."
But external developers wanted more, so the company added the ability to access reviews and ratings for products, find nearby stores, check whether certain products were available at particular stores, and purchase items through the website via mobile app, perhaps with a single click if the user had linked a credit card to the app.
It's been a hit. The mobile apps ShopSavvy, RedLaser and Milo all use BBYOpen. The makers of the app get a commission on sales through Best Buy's affiliate program. Shoppers can search for an item, or scan a bar code, and get information on pricing from various sellers.
Of course, that could mean that a customer using the app might wind up buying from a Best Buy competitor, but Bendt says that since websites and mobile apps have changed how people shop, what's important for Best Buy is to be in the mix. "If we're not in the consideration set, that's a missed opportunity." And the fact that the API makes it possible for people to find out if products they've purchased are available for pickup at nearby stores helps give Best Buy a competitive edge over online-only retailers, he says. "Now you can search for, buy and pick up within a matter or 20 to 40 minutes," says Bendt.
Tips for Creating Open APIs
Here's what you need to know about creating open APIs to your data:
Make it easy. Outside developers -- those at your customers' shops -- may have great ideas for how to use the data you make available, but the API itself needs to be understandable and easy to work with. Clear documentation and helpful tools are must-haves.
Make sure your licensing terms are clear and fair. Successful APIs tend to have MIT-style open-source software licenses.
Use REST unless you absolutely need SOAP. About three quarters of all APIs are REST-based, according to ProgrammableWeb, with SOAP a distant second.
Be prepared for cultural resistance. Some of the data "owners" may be reluctant to share the jewels. You might explain how the World Bank, Best Buy, Bloomberg and others have used the technique to reach customers in new ways and/or further their organization's mission.
- Neil Savage
The idea of an in-store pickup option actually came from external developers, Bendt says, and it took the chain some effort to adapt its legacy system to make inventory data available through the API; the data needed to be reformatted to be compatible. "The systems were built at a time before Web services and APIs were in active use," he explains. "It wasn't built in a way to expose it externally to the developer."
The specifics of how the team did that varied depending on the data source, but generally they tried to expose some snapshot of the data, updated as frequently as possible. If the data proved useful, they found ways to make it available in closer to real time.
Getting existing systems to work with the new API was also a challenge at the World Bank, says Malarvizhi Veerappan, the bank's open data systems lead. Her group originally struggled with latency issues because their 8,000 economic indicators were not all directly linked to each another. It was important, she says, to create a structure that could incorporate all that historical data and grow as new information accumulated.
We didn't want the API to be a separate application. We wanted it to be part of everything else we did with the data. Malarvizhi Veerappan, open data systems lead, World Bank
"We didn't want the API to be a separate application. We wanted it to be part of everything else we did with the data," she says. "We needed to connect it back to our data system. It did require our improving our internal data system."
As the API grew, the team added performance monitoring and instituted policies to ensure good traffic flow. The organization also increased server capacity and added server redundancy to ensure availability of the API.
When financial information provider Bloomberg LP launched its Open Market Data Initiative in February 2012, the new open API -- BLPAPI -- was actually Version 3 of the software development kit the company had already been using internally, says Bloomberg CTO Shawn Edwards. In the old days, Bloomberg customers were given a dedicated terminal that connected them to the company's mainframe, which delivered market data, news and analysis.
Bloomberg's project has since evolved into a software package that customers install on their own systems. Even before making it open, the company used the API to develop specific applications that allow customers to manipulate Bloomberg data on their own desktops.
We're not in the business of selling software. We're going to win their business by providing the best services and the best data. Shawn Edwards, CTO, Bloomberg
With the launch of its open API, the company is now allowing customers to create their own apps, such as watch lists for selected securities or their own trading systems. It also allows outside developers to create apps that draw on other data sources besides Bloomberg's. "We're not giving away market data. What this allows people to do is integrate with other services," Edwards says. "The API is a piece of software that connects to the Bloomberg cloud."
It makes sense to let others do the app development, he explains. "We're not in the business of selling software," he says. "We're going to win their business by providing the best services and the best data."
When Bloomberg put out the open API, it decided to remove some of the features that the previous versions supported. There was discussion as to whether the API should be backward-compatible. "We said no," Edwards says. That meant some customers wound up with features that no longer worked, but Edwards says it makes the API less cluttered with obsolete functions.
Like most open APIs, the BLPAPI supports a variety of languages, so a developer can choose the best one for his app. Someone running an overnight batch process might choose Perl, or the recently released Python version. An electronic trading system would probably run on C or C++. Quantitative analysts, or quants, generally use the data in Matlab. The API also supports Java, .Net and C#, and Edwards says some developers are using an R wrapper as well.
One key to making an API successful lies in making it easy to use. Back in 2000, RedMonk's O'Grady says, APIs often used Web services protocols, but those proved too complex. Now about three-quarters of all APIs are REST-based, according to ProgrammableWeb, with SOAP a distant second. "Because developers overwhelmingly preferred this, it's now the dominant protocol for API systems," O'Grady says.
The Importance of Clarity
Another important requirement is having extensive, clear documentation, and tools to help developers do their jobs. Bloomberg's initial documentation was aimed more at the financial experts who are its customers, but it had to be reworked to tell developers what they needed to know.
Bloomberg will soon attempt to make BLPAPI easier for developers to use by providing a replay tool that will allow them to perform trial runs of their apps. Best Buy's BBYOpen also gives developers a set of tools, including a test console to run apps and an automatic widget generator. The World Bank offers a query builder that lets developers select options.
Tools and ideas for APIs don't all flow outward from the organizations; external developers often provide information and frameworks to help one another out. BBYOpen, for instance, offers libraries created by developers in Java, .Net, PHP and other languages. At the World Bank, there's a discussion forum where developers can ask questions and get answers from their peers.
"They don't wait for us to respond to questions in the forum," says Veerappan, who is working to add features to the forum and convert it into a knowledge base. "It's kind of interesting to see the knowledge that other developers have gained in the API."
Successful APIs tend to have MIT-style open-source software licenses; the World Bank, for example, uses an open source attribution license. O'Grady says one key to success is being very clear about the terms of service, and not having an overly restrictive license that discourages use.
For instance, he says Stack Overflow, a collaboratively edited question-and-answer site for programmers, has a very nice API, but the terms of using it are difficult to navigate. And he notes that Twitter irritated some developers by being too demanding about issues such as how the time stamp was formatted, or insisting that the word tweet be capitalized. While developers are unlikely to shun a widely used service such as Twitter for being difficult to work with, O'Grady says, "if your product isn't that popular [it's possible that] people will abandon it."
Another nontechnological challenge to creating an open API is getting other people in your organization to cede some control, because they're likely accustomed to dealing with proprietary information and maintaining authority over their brand. "I had to do a lot of convincing," Bloomberg's Edwards says. "It's a different way of thinking, when you've been controlling your product." But he says it was important to distinguish between the market data Bloomberg sells and things like the symbology and software that the company doesn't need to control. "The time for all these proprietary interfaces is gone," he says. "It doesn't add value anymore."
You've got to release the right kind of data with the right documentation. Really, it comes down to what customer problems are you going to solve by doing what you do. Steve Bendt, director of emerging platforms, Best Buy
Best Buy's Bendt faced similar concerns. "It was tough when we first started talking about an API platform," he says, noting that colleagues wondered, "What are they going to build? What if they create a bad experience?" The company addressed that with rules about how developers could use the data: They must attribute it to Best Buy, for instance, and they can't appropriate it for other purposes. Best Buy doesn't preapprove apps, but it does regular audits to make sure apps comply with the terms of service. | <urn:uuid:048375c0-b094-4a7f-a0f2-a92add3147a8> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2169725/applications/open-apis--an-indispensable-link-to-customers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00451-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967003 | 2,774 | 2.59375 | 3 |
IBM has patented a technique that helps online and cloud-based businesses improve their ability to eliminate fraud by analyzing browsing behavior to determine whether customers are who they say they are after accessing a website or app via a computer, tablet or other mobile device.
IBM’s patented invention can help web site operators, cloud service providers and mobile application developers more efficiently and effectively detect and deal with threats by using analytics to thwart fraudsters.
For example, when individuals access a banking or shopping site, they subconsciously establish characteristics of how they interact with the site, such as clicking certain areas more often than others; using the up and down arrow keys on the keyboard to navigate; relying solely on the mouse; or tapping or swiping the screen of a tablet or smartphone in a distinct manner.
Similar to how individuals recognize changes in the behavior of a family member or friend on the phone – even when the audio is fuzzy – by the words they use, how they answer the phone, their mannerisms, etc., IBM’s invention helps businesses analyze and identify sudden changes in online behavior.
If the invention detects a change in behavior, it triggers a secondary authentication measure, such as a security question. This helps businesses and website operators avoid unintentionally hindering legitimate customer activities or transactions.
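The patent announcement does not disclose IBM's actual algorithms, but the general idea — build a per-user profile of interaction habits and ask for extra proof of identity when a new session deviates from it — can be sketched in a few lines. The feature names and the threshold below are purely illustrative assumptions, not IBM's method.

```python
from math import sqrt

# Hypothetical per-session interaction features, each scaled to 0..1:
# share of navigation done with the mouse, normalized taps per minute,
# and fraction of clicks landing in the user's usual screen regions.
PROFILE = {"mouse_ratio": 0.82, "taps_per_min": 0.40, "familiar_click_area": 0.90}

def deviation(session: dict, profile: dict = PROFILE) -> float:
    """Euclidean distance between this session's features and the stored profile."""
    return sqrt(sum((session[k] - profile[k]) ** 2 for k in profile))

def needs_step_up_auth(session: dict, threshold: float = 0.35) -> bool:
    """Trigger a secondary check (e.g. a security question) on large deviations."""
    return deviation(session) > threshold

# A session that suddenly relies on the keyboard and taps in unusual places:
suspect = {"mouse_ratio": 0.20, "taps_per_min": 0.75, "familiar_click_area": 0.55}
if needs_step_up_auth(suspect):
    print("Behavior changed - ask a security question before accepting the transaction.")
```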
“Our invention improves the effectiveness of authentication and security systems with insights derived from real-time data analytics,” said Keith Walker, IBM Master Inventor and co-inventor on the patent. “For example, if an individual suddenly changes how they interact with an online bank or store, such as due to a broken hand or using a tablet instead of a desktop computer, I want these web sites to detect the change, and then ask for extra identity confirmation before accepting a transaction. Our experience developing and testing a prototype, which flawlessly confirmed identities, shows that such a change would more likely be due to fraud, and we all want these sites to provide more protection while simultaneously processing our transactions quickly.”
As commerce is increasingly conducted online and via the cloud, a new generation of criminals is using digital channels – such as mobile devices, social networks and cloud platforms – to probe for weaknesses and vulnerabilities, including the ability to steal login and password information from the ecommerce sites we use every day. Despite strong passwords and authentications systems, troublesome fraudulent charges remain a reality in today’s digital world.
IBM received U.S. Patent #8,650,080: “User-browser interaction-based fraud detection system” for the invention. | <urn:uuid:61d54348-ff38-4873-bb83-ac959c368d5a> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2014/06/02/ibm-eliminates-fraudulent-behavior-in-the-cloud/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00270-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.922432 | 523 | 2.53125 | 3 |
Fitz-Diaz E.,Av University and 3000 |
Hudleston P.,Av University and 3000 |
Siebenaller L.,Av University and 3000 |
Kirschner D.,Av University and 3000 |
And 3 more authors.
Journal of Structural Geology | Year: 2011
Analysis of mesoscopic structures and related veins allows the history of deformation and role of fluids to be established for part of the central Mexican Fold-Thrust Belt (MFTB). The MFTB developed in mostly carbonate rocks with prominent lateral facies changes associated with two platforms and two basins. Fluids played a key role in the deformation, both physically and chemically, by reducing strength and inducing extensional fracturing through raised pore pressure and by providing the medium for solution transfer on the grain-scale. Veins preserve portions of water related to deformation as fluid inclusions. Lithology and lithological variations strongly control deformation styles, with thrusts dominant in the platforms and folds dominant in the basins, and also influence fluid behavior by controlling both porosity and permeability. Structural observations allow distinguishing veins (dominantly calcite) of several generations, emplaced early, during and late/after deformation (V1-V3 respectively). δ13C and δ18O analyses in calcite from veins and host rock show that the veins confined within thrust slices are isotopically buffered by the host rock and differ in isotopic composition from veins emplaced along major thrusts or crosscutting thrust slices. δD analyses in fluid inclusions and clay minerals strongly suggest rock interaction with meteoric fluids in the west (hinterland) and with fluids close to SMOW in the less deformed eastern (foreland) side of the cross-section. By focusing on a single stratigraphic interval exposed across the width of the fold-thrust belt, we propose a conceptual model that explains the differences in vein-rock isotopic composition, the differences of in isotopic composition of aqueous fluids active during deformation, and the progression of clay dehydration reactions as being related to variations in temperature and intensity of deformation in a growing tectonic wedge. © 2011 Elsevier Ltd. Source | <urn:uuid:55b27b7a-f2db-4c27-8b8b-733ff68c7b0c> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/av-university-and-3000-2756948/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00270-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91403 | 458 | 2.890625 | 3 |
In an attempt to develop batteries that are five times more powerful and cheaper than today's batteries in less than five years, the U.S. Department of Energy is launching an Energy Innovation Hub, called the Battery and Energy Storage Hub.
The project will be conducted by a new institution called the Joint Center for Energy Storage Research, which will be located at the Argonne National Laboratory in Illinois. The hub will be the “most advanced energy storage research program in the country,” according to Energy.gov, and will cost $120 million.
Improved battery storage will affect two important energy sectors: transportation and the grid.
Moving new technologies from labs to the private sector quickly is intended to boost the U.S. economy while also supporting emerging technologies such as solar and wind power. A five-fold improvement in both battery effectiveness and price is expected to encourage widespread adoption of technology that may now be considered experimental or not entirely practical.
Such widespread adoption could also prove culturally transformative -- just as improvements in microprocessor, touch-screen and imaging technologies put mini computers and cameras in nearly everyone's pockets, a reduction in the cost of battery technology could make it more reasonable for people to purchase home-energy systems that provide 80 percent of their needed power, U.S. Energy Secretary Stephen Chu told ComputerWorld.
It's “very, very important for American industrial competitiveness that research be intimately linked with manufacturing in a way that will propel the United States forward,” he said. “This is what the whole Hub concept is about.”
Photo: Argonne scientists Ira Bloom (front) and Javier Bareño prepare a sample of battery materials for Raman spectroscopy, which is used to gather information regarding the nature of the materials present in the sample. Photo courtesy of Argonne National Laboratory. | <urn:uuid:024f88ed-3490-4758-b1d1-2e3aeb1ad9e9> | CC-MAIN-2017-04 | http://www.govtech.com/e-government/DOE-Batteries-Harder-Better-Faster-Stronger.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00296-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945953 | 371 | 3.265625 | 3 |
In its simplest form, Service-Oriented Architecture (SOA) is the process of building scalable distributed systems that treat all software components as services. SOA provides the framework for independent services to interact with each other across a network. This allows a complex distributed system to be assembled quickly and cost-effectively from individual services. SOA is most commonly implemented using Web service technologies.
A service is re-usable, easy-to-program, and independent of programming language or platform. It can be best thought of as a reusable application function, used as a component in a business process. A service is able to provide this function over and over again to various service requesters. It is this ability to reuse the service, and the practice of breaking down each business process into a series of services, that generates the efficiency benefits of a SOA. | <urn:uuid:81624e0d-7237-47e0-b9ab-2a541483a137> | CC-MAIN-2017-04 | https://www.infotech.com/research/soa-basics-what-is-soa | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00232-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938887 | 173 | 3.1875 | 3 |
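As a small, hedged illustration of the idea, the sketch below exposes one reusable business function — a shipping-cost calculation invented for the example — as a web service using only Python's standard library. Any requester, regardless of its own language or platform, can call it over HTTP and reuse the same function in many business processes.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def shipping_cost(weight_kg: float, express: bool) -> float:
    """The reusable business function behind the service (illustrative only)."""
    return round(4.0 + 1.5 * weight_kg + (6.0 if express else 0.0), 2)

class ServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parsed = urlparse(self.path)
        if parsed.path != "/shipping-cost":
            self.send_error(404)
            return
        qs = parse_qs(parsed.query)
        weight = float(qs.get("weight_kg", ["1"])[0])
        express = qs.get("express", ["false"])[0].lower() == "true"
        body = json.dumps({"cost": shipping_cost(weight, express)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), ServiceHandler).serve_forever()
```

A requester written in Java, C# or any other language only needs to issue GET /shipping-cost?weight_kg=3&express=true and parse the JSON reply, which is what makes the function reusable as a component across business processes.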
Members of the BYU Supercomputing team recently posted a tutorial for getting started with SLURM, the scalable resource manager that has been designed for Linux clusters.
SLURM is currently the resource manager of choice for NUDT’s Tianhe-1A, the Anton Machine built by D.E. Shaw Research, and other clusters, including the Cray “Rosa” system at the Swiss National Supercomputer Centre and Tera100 at CEA.
In essence, SLURM functions as an allocation mechanism to divvy up resources on both an exclusive and non-exclusive basis, as well as a framework for starting, executing and monitoring jobs on a set of designated nodes. It also manages scheduling conflicts by handling the queue of jobs.
As Dona Crawford from Lawrence Livermore noted about their use of SLURM for their BlueGene/L and Purple systems, using SLURM reduced “large job launch times from tens of minutes to seconds.” She went on to note that “This effectively provides us with millions of dollars’ worth of additional compute resources without additional cost. It also allows our computational scientists to use their time more effectively. SLURM is scalable to very large numbers of processors, another essential ingredient for use at LLNL. This means larger computer systems can be used than otherwise possible with a commensurate increase in the scale of problems that can be solved. SLURM’s scalability has eliminated resource management from being a concern for computers of any foreseeable size. It is one of the best things to happen to massively parallel computing.”
One of the advantages that SLURM users point out is that it’s relatively simple to get started and there are a wide array of modular elements that help to extend the core functionality. For those who want a bare-bones setup (as the one described in the accompanying video), it takes well under an hour to get it up and running. | <urn:uuid:7051d596-ad43-4de0-a1d4-8398c9a8a0b0> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/09/04/up_and_running_with_slurm/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281151.11/warc/CC-MAIN-20170116095121-00260-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952574 | 403 | 2.828125 | 3 |
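A bare-bones setup really only needs a job script and a couple of commands. The sketch below is a minimal batch job written in Python (SLURM reads the #SBATCH directives from the comment lines at the top of the script regardless of the interpreter named in the shebang); the partition name is a site-specific assumption, so substitute your own.

```python
#!/usr/bin/env python3
#SBATCH --job-name=hello-slurm
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=00:05:00
#SBATCH --partition=batch          # hypothetical partition name; use your site's
#SBATCH --output=hello-%j.out      # %j expands to the job ID

import os
import socket

# Run by the batch step on the first allocated node; report where we landed.
print("Running on", socket.gethostname())
print("Job ID:", os.environ.get("SLURM_JOB_ID"))
print("Tasks in allocation:", os.environ.get("SLURM_NTASKS"))
```

Submit it with "sbatch hello.py", watch the queue with "squeue -u $USER", and cancel with "scancel <jobid>"; those three commands cover most day-to-day interaction with the scheduler.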
Carnegie Mellon researchers call the project Six Degrees of Francis Bacon (SDFB). But what it is is a great big data mining project that tries to trace the influence and ideas of Bacon, William Shakespeare, Isaac Newton and more than 6,000 others from the 16th and 17th centuries, to let scholars and students reassemble and discuss or debate the era's networked culture.
The project, the researchers say, pulls together centuries of books, articles, documents and manuscripts that have been scattered and divided in order to understand the role of linked connections in spreading ideas and knowledge.
"Francis Bacon may not have 'liked' or commented on a Facebook post by Shakespeare, but reassembling the early modern social network gets us a long way toward understanding what he or anyone else could have known, jokes and references they would have understood, sensitive information they might have encountered," said Christopher Warren, assistant professor of English in the Dietrich College of Humanities and Social Sciences in a statement. One of the great historical arguments has been around whether or not Bacon in particular or someone else write some of Shakespeare's plays?
Warren notes that dense accounts exist of small groups and communities, giving us partial views of the early modern network, but this is the first attempt to bring it together in one place, in a visual way.
"Six Degrees of Francis Bacon is many things, but above all it's a tool for asking questions. It allows people to click on this historical network to see who's connected, recreating this whole world and then raising even more questions about how an idea, say, religious toleration, or the circulation of blood, got from person A to person B, why it took this route and not that route, and so on," Warren stated.
The project, which has support from a Google Faculty Research Award, uses data mining to develop the visual social network. Crawling through sources to create an initial list of 6,000 people from the period, the project already has investigated more than 19 million potential connections. To get the project to its current point of visualizing this 6,000-person world, the CMU team worked with Georgetown University's Daniel Shore, a Milton expert whose current research focuses on tracing syntax, and they are developing a partnership with London- and Cambridge-based scholars Ruth and Sebastian Ahnert, who study the shape of 16th-century letter-writing networks.
From the SDFB web site: "Our current goal is to improve the project and increase our impact on the scholarship of the early modern period in three ways. First, we aim to expand the range of the texts from which we can infer associations. The entries of the ODNB include limited and partial information, but a fuller range of sources - biographies and scholarly work from the nineteenth century to the present - will allow us to extend these limits, correct these partialities, and thereby increase the accuracy of the reconstructed network. Second, we intend to develop and refine our statistical methods to incorporate different types of documents and entities, as well as develop a computational framework for handling the increased scale of the project. Finally, we will build and improve the interactive front end interface to make it intuitive, attractive, and flexible enough to meet the needs of scholars, teachers, and students of the early modern period."
Unlike published prose, SDFB is extensible, collaborative, and interoperable: extensible in that actors and associations can always be added, modified, developed, or, removed; collaborative in that it synthesizes the work of many scholars; interoperable in that new work on the network is put into immediate relation to previously mapped relationships, the researchers stated.
This sets campaign security apart from its corporate counterpart. “If an e-commerce site is offline for a few hours, they can recover, unless it’s right before Christmas,” said Jeremy Epstein, a computer security researcher at SRI International and technical adviser to nonprofit lobbying organization Common Cause. “If a campaign site is offline at critical junctures — right before the election, right before contribution deadlines — the impact could be much worse,” he said.
While a DOS attack can interrupt the information flow, an equally dangerous exploit would be to radically deface information, by invading blogs, for instance, or by infiltrating a Web page.
“Campaigns face the threat of having false messages injected into the media by hackers,” said Lt. Gen. Harry Raduege Jr., chairman of the Deloitte Center for Cyber Innovation and a former director within the U.S. Defense Department. “Before the error is caught, significant damage can occur, which then escalates into valuable time being expended to supply correct messaging across multiple media sources and in trying to reverse negative impressions and perceptions.”
Such false messaging falls under the general heading of hacktivism, a broad term that refers to the use of illicit cyberstrategies to advance political ends. This is perhaps the most dangerous threat when it comes to political campaigns, because hacktivism doesn’t just disrupt money or messaging. It threatens the very system.
“To the degree that actors in a democracy start using cyberattacks to further political ends, it pollutes the kind of civil society we are supposed to be seeking,” said McAfee’s Gann.
Perilous as it may be, hacktivism also is the most visible among the evolving cyberthreats posed to political campaigns.
“Four years ago, we didn’t have nearly as much of this as we have seen in the last year,” Skoudis said. “Anonymous has shown that you can get a lot of press doing these things. You can achieve real goals here.”
Since the last election cycle, campaigns’ growing reliance on online donations has opened a new avenue for those seeking personal gain. Blogs and social media create new opportunities for attacking content, while sophisticated infiltration tools are making it possible for invaders to gain greater access to inside information culled from campaign servers.
As in the corporate world, campaigns also have come to rely more on mobile devices, thus opening up systems to a range of potential threats.
Before considering the options when it comes to prevention and remediation, it’s important to consider one further element that separates a campaign’s cyberneeds from those faced by users in the corporate world.
While no one in the world of IT would choose to dawdle in the face of a cyberbreach, speed is an even greater consideration in the realm of political campaigns. Campaigns happen in real time, unfolding not only in a matter of days but sometimes hours.
Think about candidates like Herman Cain or Rick Santorum, who came from nowhere to become leading candidates in the span of a week or so, Epstein said. “Suddenly their websites became high-profile targets — but without months and an appropriate budget to plan for it.”
Against this backdrop, careful planning and speedy remediation become critical elements of any cyberstrategy.
Building the Bulwarks
Campaign security begins at the level of policy, said Mark Patton, general manager of the security business unit at GFI Software.
Candidates and senior staff "need to set the tone for Web policies in the office and on the road to make it clear that IT and Web security are priorities of running a successful campaign," Patton said. "Policies need to be created, socialized, approved and supported from the highest levels of the campaign. Make them official and discuss them often for them to hold weight, especially in an environment of nonpermanent staff."
Seattle relies on wireless technology to monitor wastewater levels and warn officials of potential discharges from relief valves scattered throughout the city.
The innovative system collects rainfall and wastewater data, then wirelessly transmits it to a dedicated, interactive Web site. That information helps Seattle, which averages 30 to 40 inches of rainfall annually, predict and respond to discharges from what are known as combined sewer overflows (CSOs).
The system sends early warnings as wastewater rises to critical levels and secondary alarms as actual overflows occur. Alarms can be sent via cell phone, e-mail, pager and fax, allowing city staff to mitigate or avoid wastewater overflows.
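A minimal sketch of that two-stage alerting logic, assuming made-up thresholds, site names, and a stand-in notify() function rather than the city's actual software, might look like this in Python:

WARNING_LEVEL = 0.80     # fraction of the overflow level
OVERFLOW_LEVEL = 1.00

def notify(channel, message):
    print(f"[{channel}] {message}")    # stand-in for cell phone, e-mail, pager or fax

def check_site(site, level, rainfall_inches):
    if level >= OVERFLOW_LEVEL:
        notify("pager", f"{site}: overflow occurring (level {level:.2f}, rain {rainfall_inches:.2f} in)")
    elif level >= WARNING_LEVEL:
        notify("email", f"{site}: early warning, wastewater rising (level {level:.2f}, rain {rainfall_inches:.2f} in)")

check_site("CSO-14", 0.85, 1.4)    # triggers an early warning
check_site("CSO-27", 1.02, 2.1)    # triggers an overflow alarm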
"It shortens the response time, and then you can quickly get to the location and investigate the site to stop or prevent the dry weather overflows," said Hirad Mousavi of the Seattle Public Utilities' Resource Planning Division.
Although Seattle currently uses data generated by the system internally, the Web site eventually could be opened to the public, said Tim Croll, director of the Seattle Community Services Department.
"My vision was that people could check out a map online and see if there has been discharge from any of the CSOs. Windsurfers could check it before going out. Red means a discharge and green means all is OK," Croll said. "We don't have that yet, but we're going in that direction."
Wireless reporting is done via cellular digital packet data (CDPD). The system uses standard Internet communication protocols that can support sophisticated decisions based on environmental input. It's also programmed with smart logic, enabling the technology to cross-reference wastewater levels with an automated rainfall-data-gathering system to predict, measure and identify overflows.
The project has received national attention, earning honorable mention in Public Technology Inc.'s 2001 Solutions Award program. And although the system still isn't finished, it has met a number of objectives, according to city officials:
- Cost savings. A low-voltage power supply and localized street-level antenna eliminated all hardwiring requirements and significantly reduced installation and data collection costs. The estimated savings over two years exceeds $3 million.
- Durability. The system operates underground within a harsh environment and can accurately monitor wastewater flows.
- Reliability. Wastewater events are confirmed by two independent devices before an alert is generated, significantly reducing operational and maintenance response costs by minimizing false alarms.
- Efficiency. The system helps the city use limited resources more effectively by allowing it to send crews to a potential overflow site before such an event actually occurs.
The system also provides a shared data management platform that is available to all city personnel and the general public.
Wireless technology was vital to the system's cost-effectiveness, according to Croll.
"Wireless was driven largely by economics," he said. "We could have dropped the big bucks and paid for a hard phone line -- the rental of it -- into every one of these sites. That would have been very foolish economically.
"The Web-based [design] just makes it a lot easier," he added. "We can access it; it's in one place and I think it gives us the most potential to move to the next iteration of having a Web-based public notification system. That's the ultimate vision."
Seattle officials say the system helps the city cost-effectively comply with the conditions of its permit for CSOs. The permit requires Seattle to report wastewater discharge locations, times, durations, volumes and related weather information to the Washington Department of Ecology (DOE) each month. Dry weather overflows are prohibited and must be reported to DOE within 24 hours from the time Seattle becomes aware of them. Furthermore, the city must take corrective action immediately. | <urn:uuid:447295cb-1a78-43ee-b493-ea8a81a60a2a> | CC-MAIN-2017-04 | http://www.govtech.com/e-government/High-Water-Warning.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00380-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949621 | 769 | 2.828125 | 3 |
There are still too many stories about 911 call centers around the country lacking in their ability to respond the way most residents expect.
But there is good news as well, and one example is Escambia County, Fla., which installed a new mapping system that pinpoints 911 calls originating from cell phones. The FCC requires that 67 percent of wireless calls be locatable within 50 meters. That may not be helpful if the caller is in a building with many stories like an apartment or office complex.
With the new PlantCML system, a map pops up on the call taker's monitor and displays a red and yellow circle marking the caller's address or location. Previously, call takers had to type latitude and longitude coordinates into MapQuest or Google Maps. The graphics pinpoint the caller's location while the operator maintains a conversation with the caller or dispatches a first responder.
The deployment of the mapping software took place in November and December 2009 and has worked well. There's a question about how well it will pinpoint calls from some of the older cell phones, however.
The mapping software deployment was part of three 911 projects, aided by three state grants.
For more information on the 911 mapping system, go to Emergency Management's Web site. | <urn:uuid:d3446bee-390b-418a-9c37-ab2d6a38b81e> | CC-MAIN-2017-04 | http://www.govtech.com/geospatial/911-Mapping-System-Pinpoints-Cell-Phone.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280221.47/warc/CC-MAIN-20170116095120-00380-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948232 | 260 | 2.609375 | 3 |
The Oxford Supercomputer systems will carry out 15 trillion calculations per second, with some applications able to generate the equivalent of nearly 600 copies of the Oxford English Dictionary on CD in a single hour. The Pillar Axiom storage system, a network attached storage (NAS) system, handles all of this information as the Supercomputer works through the most complex of mathematical models.
"In order to handle the ten-fold increase in computing power and all of the associated increase in data, the storage system has to be flexible and efficient," said Dr. Jon Lockley, centre manager at the Oxford Supercomputing Centre. "One day our systems could be trying to crack the make-up of complex proteins; the next, it could be predicting climate change in 50 or 100 years. Therefore, the ability to expand and reconfigure the storage easily is vital." The type of data, volume and speed at which the storage system needs to handle the results varies dramatically from project to project.
"The Pillar Axiom provides the team at Oxford with 'what if?' predictability modelling within the array," said Paul Sleep, sales director at NexStor. This ensures the team is able to maintain performance should additional tasks be added, bringing more capacity and more storage horsepower online. The Axiom proved its flexibility, showing how easily it could adapt as storage demands changed, as well as its green credentials of energy efficiency and reduced space requirements.
API to make IoT connectivity simpler
Two Google engineers have proposed a way for IoT devices to be easily connected to web pages. The move could pave the way for simpler installation of Internet of Things (IoT) sensors.
The engineers, Reilly Grant and Ken Rockot, said their WebUSB API would enable hardware manufacturers to set up and control devices from web sites. The proposal would also make connecting USB devices and complex IoT sensors easier.
When connecting devices, users either need the right drivers to set them up or have to log into a small web server on the device itself. WebUSB allows the device to contact a web page and be configured from there.
“For lots of devices it does because there are standardized drivers for things like keyboards, mice, hard drives and webcams built into the operating system. What about the long tail of unusual devices or the next generation of gadgets that haven’t been standardized yet? WebUSB takes “plug and play” to the next level by connecting devices to the software that drives them across any platform by harnessing the power of web technologies,” said the engineers on the WebUSB website.
The engineers were quick to point out that the API will not provide a general mechanism for any web page to connect to any USB device. They said that historically, hosts and devices have trusted each other too much to let arbitrary pages connect to them.
They added that there are published attacks against USB devices “that will accept unsigned firmware updates that cause them to become malicious and attack the host they are connected to; exploiting the trust relationship in both directions.”
According to the engineers, WebUSB could replace native code and native SDKs with cross-platform hardware support and web-ready libraries.
API connects IoT to the net
The proposed mechanism has also been designed to be backward-compatible with USB devices without needing special firmware.
“For devices manufactured before this specification is adopted information about allowed origins and landing pages can also be provided out of band by being published in a public registry,” the two said.
The code is still a work in progress and is unofficial and hosted at W3C’s Web Platform Incubator Community Group (WICG). The engineers are welcoming members of the WICG to contribute to the project.
Christian Smith, President and Co-Founder of TrackR, told Internet of Business that he sees WebUSB providing the standard to allow a seamless connection between hardware with USB and software.
“It would allow me to take a mechanical design file from Google drive, automatically download the calibration settings for a 3D printer, plug in the 3D printer, and be able to print directly from the web. WebUSB short circuits the complications to hardware and allows your USB devices to have instant access to updatable drivers, files, and printers,” he said. | <urn:uuid:157f5b99-663f-478d-ba13-038dc15abcea> | CC-MAIN-2017-04 | https://internetofbusiness.com/google-webusb-iot-devices-internet/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279410.32/warc/CC-MAIN-20170116095119-00316-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944153 | 607 | 2.625 | 3 |
We all yearn for the more innocent time when the acronym DOS stood for your Disk Operating System, or even the Dept. of State for the better traveled. Today, however, it is a term that brings a chill to many technologists -- Denial of Service. Initially, this was largely the realm of minor miscreants, who wanted no more than to target specific Web sites they thought would be cool to disrupt. But now a greater chill has begun to set in as a result of the selective targeting of routers.
Of late, the hacker community has taken to discussing 'router protocol attacks' on listservs, on Usenet, and at conferences. Attacks against routers can have serious consequences for the Internet at large. Routers can be used for direct attacks against the routing protocols that interconnect the networks comprising the Internet, thereby causing serious service availability issues on a large scale. By dealing with such threats to their infrastructures, network managers will be protecting both their own interests and the interests of all networks to which they connect.
Crackers perceive router attacks as attractive for several reasons. Unlike computer systems, routers are generally buried within the infrastructure of an enterprise. Often, they are less protected by monitors and security policies than computers, providing a safer harbor within which the miscreant can operate. Many routers are poorly deployed, with the vendor-supplied default password the only wall between network security and ruination. Documents circulate supplying advice on procedures for breaking into a router and changing its configuration. Once compromised, the router can be used as a platform for scanning activity, for 'spoofing' connections (disguising the origin of packets), and as a launch point for DoS attacks.
According to Laurie Vickers, a Senior Analyst at Cahners In-Stat Group, "A router is the gateway to a company. They have been the target of hackers and Script Kiddies for quite some time now, but what seems to be occurring is that the hackers are growing more sophisticated. They're finding that the front door is locked, so they go around back and see that the patio door has been left open."
Vickers asserts that router attacks can prove devastating to networks as managers try to determine "Which box will it be? Routers often integrate VPN services and/or firewalls, and these make them even juicier targets." Once the router is compromised, the entire network can be up for grabs.
A further area for concern is what Carnegie Mellon's Computer Emergency Response Team (CERT) Coordination Center refers to as the shrinkage of 'Time-To-Exploit'. In other words, once a vulnerability in a system or device has been discovered, it takes less time to exploit it, perhaps less time than it takes to author or deploy a security patch.
Further, don't look for a particular group or individual to target your systems. Tools used to initiate DoS attacks and to propagate the 'attack toolkits' (the collection of instructions used for the attack) are increasingly automated. Scripts are frequently used for scanning, exploitation, and deployment. | <urn:uuid:b7ba79d4-f55d-4dee-b1e9-4aab1522d68a> | CC-MAIN-2017-04 | http://www.cioupdate.com/reports/article.php/911521/DoS-Attacks-Go-For-the-Throat.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00224-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955788 | 631 | 2.5625 | 3 |
IBM has tripled the performance of the world’s most powerful supercomputer.
Big Blue has announced Blue Gene/P, the second generation of the world’s fastest machine. Blue Gene/P nearly triples the performance of its predecessor, Blue Gene/L.
IBM said the new computer remains the most energy-efficient and space-saving computing package ever built.
Blue Gene/P is designed to operate continuously at speeds exceeding one "petaflop" – or one quadrillion operations per second.
The system is 100,000 times more powerful than a home PC and can process more operations in one second than the combined power of a stack of laptop computers nearly 1.5 miles high, said IBM.
Dave Turek, vice-president of deep computing at IBM, said: "We see commercial interest in Blue Gene developing now in energy and finance, for example."
Many have been worried about the future of DSL, especially given the high performance of fiber optics and the increased focus on wireless technologies. Before looking into some future DSL technologies, it helps to look at which DSL technologies have come and gone, as well as which have survived and evolved. After all, learning about the past and present is useful when predicting the future of technology.
The Rise of DSL
One of the key traits that has made DSL a successful broadband technology is the fact that DSL uses the same twisted copper wires that telephone networks use. At the dawn of the broadband era, barely a decade ago, alternatives to DSL included many solutions that did not use existing wiring, such as ISDN (Integrated Services Digital Network) lines and T1/T3 solutions. One of the reasons these alternatives were not widely adopted was their pricing, which was directly tied to their use of special wiring that was not readily deployed.
Early DSL came in two basic flavors: ADSL (Asymmetric Digital Subscriber Line) and SDSL (Symmetric Digital Subscriber Line), the former mostly intended for consumers while the latter was, and still is, targeted at certain businesses and enterprises with greater upstream speed requirements. While SDSL offered equal upstream and downstream speeds, it would be the high download speeds of ADSL that most consumers would eventually find attractive, and thus modern forms of DSL still offer asymmetric communication that favors faster download speeds over upload speeds.
Some forms of early DSL did not make it, such as IDSL, which used DSL technology over ISDN lines. SDSL may also be considered something of a failure by some standards, as it is not nearly as common as ADSL and ADSL-derived standards. The one thing these standards seem to have in common is a lack of performance for their price. Looking at modern DSL technologies, it is hard to argue that they are anything but a good value considering their performance levels. The performance offered by the least expensive DSL plans today is similar to that of high-end DSL plans from only a few years ago, and DSL prices have consistently improved over time in relation to performance.
The Current State of DSL
There are many kinds of DSL (Digital Subscriber Line) services on the market today, but VDSL (Very high speed Digital Subscriber Line) seems to be the dominant technology at this moment. VDSL is currently in its second generation, which is often referred to as VDSL2 by industry insiders. VDSL2 comes with a wide range of profiles, which means that even when two carriers offer VDSL2-class services, the underlying technology is not necessarily the same, or even entirely compatible.
AT&T uses Alcatel-Lucent VDSL2 equipment in most of their curb-side cabinets, which in turn are primarily fed data via fiber optics. This puts AT&T in a position to deliver the next generation of U-Verse on VDSL technology, fiber optics, or a combination of the two. While it is unlikely that AT&T will move to FTTH (Fiber To The Home), meaning a 100 percent fiber optic network such as Verizon's Fios, they do have plenty of options to consider that might offer impressive speed boosts to consumers.
Companies such as Versatek market 100 Mbps downstream / 20 Mbps upstream street cabinet systems and DSLAM components, which providers such as Summit Broadband have adopted. While their offerings may not necessarily reach the limits of these components, they are still impressive and easily demonstrate that DSL still has a lot of life left in it, though it is clear that copper wiring is approaching its limits in its current form.
Another option that started to gain traction in the industry around 2005 was the use of naked DSL, sometimes referred to as NDSL. Naked DSL is a form of broadband that could easily take off in the future because the entire line is available to the DSL service. There is speculation that, by allowing data to be carried over a greater range of frequencies, additional performance can be gained via multi-spectrum transmission. It is important to note that the current and past use of the term NDSL has been primarily a business distinction, while the future use might be more of a technical specification. That is to say, NDSL using existing technologies that work with or without an analog phone is very different from proposed technologies that require a greater range of frequencies and thus cannot be paired with an analog telephone service.
GDSL, a Future FTTH Contender?
With fiber optics seeming poised to take over the broadband arena in the coming decade, providers with extensive infrastructure are looking to find new ways to compete. On the DSL front, gigabit DSL (GDSL) is one such technology that appears promising. There are various GDSL specifications being proposed and debated, but most of them rely on naked DSL, which means that carriers such as AT&T would probably offer GDSL with some form of VoIP technology since using an analog phone in conjunction NDSL would not be possible.
What makes some of the GDSL specifications impressive is their use of multiple pairs of copper wires. While current DSL systems almost uniformly use a single pair, GDSL uses multiple pairs to create a MIMO (Multiple-Input/Multiple-Output) approach. MIMO technologies have proven very effective in wireless networks, but whether MIMO will prove equally effective for consumer-grade DSL is a question that may be answered someday soon. The question is: When?
Unfortunately, the first specifications for this generation of GDSL technologies have yet to be finalized. This means that GDSL might be over a year away, as carriers will need to evaluate the technology and make the final decisions that will ultimately affect the future of broadband.
As CIO of the Madison County, Ohio, Department of GIS and Information Technology, Rob Slane developed an advanced emergency management system that provides emergency personnel with critical data in near real time.
The Madison County Emergency Management Information System gives emergency personnel the exact location of 911 calls, and provides an interactive digital map with aerial and digital photographs of the geographic location from which the call was made. The award-winning GIS can also track severe weather, and model the location and characteristics of a hazardous chemical plume in the event of a spill or accident, then notify residents and businesses with automated, reverse 911 telephone calls.
What kind of information do emergency personnel receive from 911 calls?
When the call comes in, the address is displayed for the dispatcher at the PSAP [public safety answering point] on his or her terminal. The mapping interface then takes that address and pinpoints it on a digital map.
With that you have the street center lines; the jurisdictions; an actual digital image of the house; the aerial photograph; and you have the ability to pull up oblique aerial photography, and can actually spin that house around and look at it from different angles.
You can measure area, height, distance -- that sort of thing. It's quite a bit of information right there in a matter of seconds for the dispatcher.
Can first responders see all that information upon receiving one 911 call?
They can if they have a notebook [computer] in their vehicle with the mobile version set up. Currently we don't have that in every single vehicle; it's been a funding issue getting the hardware.
The Health Department and Emergency Management directors -- those types of people -- have notebooks and the software loaded. It's not limited to first responders.
How does the weather tracking and chemical plume modeling work?
If you had some sort of hazardous chemical release, whoever arrives on the scene fires up our 911 GIS software. They're pinpointing their location -- the location of the release, the accident, the intersection -- that sort of thing.
Now they hit the live weather data, because with [the software], you need to start entering some information so it can model the plume: wind direction, speed, ambient temperature, precipitation, humidity.
Once that's done, [the application] creates the plume, and then the plume is dumped over into the GIS and plotted on the digital map. So now you see a good estimation of where that plume is going to go and whom it's going to affect.
Once you have that plume on the map, you can start looking at where you want to set your roadblocks so you don't have the public entering that plume. You're going to have the EPA [Environmental Protection Agency] coming in from Columbus; somebody coming from Springfield, Ohio.
Once that's done, you have the ability to select all those addresses that are in the plume, within different levels of the plume or within a certain distance of the plume, and then you can launch reverse 911 calls. | <urn:uuid:73bf5ea5-a8bb-4f78-b16b-f255a8061fd6> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Rob-Slane.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279915.8/warc/CC-MAIN-20170116095119-00426-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939674 | 623 | 3.046875 | 3 |
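A toy sketch of those last two steps, selecting the addresses that fall inside the modeled plume and queuing reverse 911 calls, is shown below in Python. The plume polygon, addresses, and phone numbers are invented; a real system would pull them from the GIS and the telephony platform.

def point_in_polygon(x, y, polygon):
    # Standard ray-casting test; polygon is a list of (x, y) vertices.
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        if (yi > y) != (yj > y):
            x_cross = (xj - xi) * (y - yi) / (yj - yi) + xi
            if x < x_cross:
                inside = not inside
        j = i
    return inside

plume = [(0, 0), (6, 1), (9, 4), (5, 5), (1, 3)]          # invented plume footprint
addresses = [
    ("101 Main St",   "555-0101", (4.0, 2.5)),
    ("77 River Rd",   "555-0102", (8.0, 1.0)),
    ("12 Hilltop Ln", "555-0103", (2.0, 2.0)),
]

call_queue = [(addr, phone) for addr, phone, (x, y) in addresses
              if point_in_polygon(x, y, plume)]
for addr, phone in call_queue:
    print(f"Reverse 911 call queued: {phone} ({addr})")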
How do we know an earthquake's epicenter?
Sophisticated technology and old-fashioned science make pinpoint accuracy possible
- By Henry Kenyon
- Jul 16, 2010
The Washington, D.C., metro area was jolted awake in the early hours of Friday morning by a rare event -- an earthquake measuring 3.6 on the Richter scale. Major earthquakes don't often happen east of the Rocky Mountains, an area that is relatively geologically stable.
So how does a university seismology department or the U.S. Geological Survey determine the location and strength of an earthquake? The USGS has seismic measuring stations located across the country. The location, strength and depth of an earthquake are determined by detecting the series of waves it generates.
Earthquakes always generate two types of waves, said Terry Tullis, professor emeritus of geological sciences at Brown University. Geologists and USGS use an automated system that measures the interval between the arrival of the primary and secondary waves. The difference between the P and S waves is used to measure the distance of the earthquake from the sensor. “The farther away it is, the longer a delay is between when the P wave gets there and the S arrives,” he said. It's much like the technique of estimating how far away lightning is by counting the seconds until the thunder arrives, but far more precise.
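That relationship is easy to sketch. Using representative crustal wave speeds (roughly 6 km/s for P waves and 3.5 km/s for S waves, not values tied to any particular station), the S-minus-P delay converts to distance like this:

VP = 6.0    # representative P-wave speed, km/s
VS = 3.5    # representative S-wave speed, km/s

def distance_from_sp_delay(delay_seconds):
    """Rough distance (km) to the quake from the S-minus-P arrival delay."""
    return delay_seconds / (1.0 / VS - 1.0 / VP)

for delay in (2.0, 5.0, 10.0):
    print(f"S-P delay of {delay:4.1f} s -> roughly {distance_from_sp_delay(delay):5.0f} km away")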
The exact location of an earthquake is quickly determined by triangulating its location between several seismic stations. Once the distance of an earthquake can be determined, its relative size can be measured as well. There are two major scales used to measure earthquakes: the Richter scale, which measures the overall strength of an earthquake; and the modified Mercalli intensity scale, which measures the amount of shaking. “A given earthquake will have a bulls-eye pattern of Mercalli intensities, the highest being right where the earthquake was, and they decay as you go away,” Tullis said.
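Triangulation itself can be illustrated with a toy best-fit search: given three stations and their estimated distances (all numbers below are invented), pick the grid point whose distances match best. Real networks use far more stations and proper least-squares inversion.

stations = [((0.0, 0.0), 30.0), ((50.0, 0.0), 36.0), ((0.0, 40.0), 32.0)]   # (x, y) in km, distance in km

def misfit(x, y):
    return sum((((x - sx) ** 2 + (y - sy) ** 2) ** 0.5 - d) ** 2
               for (sx, sy), d in stations)

candidates = ((x * 0.5, y * 0.5) for x in range(-40, 161) for y in range(-40, 161))
best_x, best_y = min(candidates, key=lambda p: misfit(*p))
print(f"Estimated epicenter near x = {best_x:.1f} km, y = {best_y:.1f} km")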
The East Coast is more geologically stable than the West Coast. However, when earthquakes do occur in the eastern part of the country, they are felt over a greater distance because the waves attenuate less. Tullis said the rocks under the eastern part of the country are older and colder than in the West, which has several large active faults. The cooler rocks underlying the eastern seaboard attenuate seismic energy less than those in the West. "It's a little bit like the difference between banging on a sponge and a piece of crystal. One of them rings and the other dampens it out," he said.
Major eastern earthquakes, such as the New Madrid earthquakes that took place in Missouri in 1811 and 1812, were reputed to have rung church bells as far away as Boston. By comparison, a similar earthquake in California would not be felt so far from the epicenter.
Earthquakes constantly take place in the eastern United States, but most are so small as to be undetectable by humans. Tullis noted that geologists don’t understand the nature of most of these earthquakes, but they appear to involve slipping on faults that are artifacts from former geological times. “Earthquakes do occur due to slips on faults in the East, it’s just that when they do occur, they’re not always on faults that we know, or if we know them, ones that we know very well,” he said. | <urn:uuid:05cdb4f5-28d7-4731-873e-d5725ecf9269> | CC-MAIN-2017-04 | https://gcn.com/articles/2010/07/16/washington-dc-earthquake.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285001.96/warc/CC-MAIN-20170116095125-00452-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.970946 | 719 | 4.09375 | 4 |
Business intelligence is the study of a company’s data with the goal of enabling managers and owners to make better decisions. One section of business intelligence commonly talked about is business performance, more specifically the tool used to measure it – Key Performance Indicators (KPIs). Do you know what KPIs are?
Below is an overview of KPIs for business.
The Key Performance Indicator (KPI) is a tool used to measure the performance of a business or its employees. Many businesses use this tool to look at the overall performance and success of all operations or of specific ones. To many, the terms performance and success are synonymous.
How do KPIs work?
Most modern versions of this tool come in the form of software applications that track specific data and criteria set by managers or owners. The software allows them to compare these criteria, commonly referred to as Score Cards, with the established goals and gauge overall performance or success.
This data, usually collected from spreadsheets, databases or even manual data entry, is displayed to the user in an easy to read format called a dashboard. The dashboard is typically a graph or similar visual display.
A common dashboard is the traffic light. Let's say, for example, that a company is measuring the success of its latest marketing campaign. A green light indicates that the expected number of conversions is being met or exceeded, yellow means actual conversions are slightly below expectations, and red means they are well below expectations.
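The underlying logic is simple enough to sketch; the 90 percent warning threshold here is an arbitrary example, not a standard:

def kpi_status(actual, expected, warn_ratio=0.9):
    """Return 'green', 'yellow' or 'red' for an actual-versus-expected KPI."""
    if actual >= expected:
        return "green"
    if actual >= warn_ratio * expected:
        return "yellow"
    return "red"

print(kpi_status(actual=510, expected=500))   # green: goal met
print(kpi_status(actual=470, expected=500))   # yellow: slightly below goal
print(kpi_status(actual=300, expected=500))   # red: well below goal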
Benefits of KPIs
The biggest benefit of these tools is that they allow users to easily gauge the performance of a business. Beyond that you can set many KPIs with triggers that will alert you when the measurements are poor. This will allow the company to figure out ways to fix issues before they can cause bigger problems.
For most businesses, effective KPIs are tailored to their specific needs. KPIs should be measurable, achievable, specific and result-oriented. The best way for a business to figure out which will be the most effective is for the manager or owner to look at the aspects that matter most to the business.
This can be hard to figure out, especially for business owners who often think that everything related to their business is important. A business intelligence expert or IT partner can help define what really matters most and help to implement the tools needed.
If you are looking for a better way to measure the success or performance of your business, please contact us today. | <urn:uuid:3aabda3e-b8b0-4561-8eb1-a089f4c804e3> | CC-MAIN-2017-04 | https://www.apex.com/exactly-kpi/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00363-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950242 | 507 | 2.53125 | 3 |
Net Neutrality is commonly misunderstood. However, it could easily affect our society for generations to come considering the widespread use of the Internet and the innovations that it fosters. This article will briefly explain what Net Neutrality is, why the FCC is involved and better solutions for solving the problem.
The Internet has generally worked on a “First Come, First Serve” basis. Meaning, as information flows through the Internet, it is processed and forwarded in the order it was received. This gives every Internet user equal access to all applications and services on the Internet. For example, an Internet user may have a DSL Internet connection from AT&T and may use it to gain access to services from Vonage. Although Vonage is a competitor, AT&T’s network treats their packets of information just the same as they would treat packets from their own services.
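The difference between neutral and preferential handling can be illustrated with a toy forwarding queue (the traffic labels are invented, and this is a conceptual sketch rather than how routers are actually built):

from collections import deque

arrivals = ["vonage", "carrier_tv", "netflix", "carrier_tv", "vonage"]

# Neutral handling: packets leave in the order they arrived.
queue = deque(arrivals)
neutral_order = [queue.popleft() for _ in arrivals]

# Preferential handling: the carrier's own traffic jumps the queue.
preferential_order = ([p for p in arrivals if p == "carrier_tv"] +
                      [p for p in arrivals if p != "carrier_tv"])

print("neutral:      ", neutral_order)
print("preferential: ", preferential_order)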
The neutral Internet has provided opportunity for many innovative ideas and business models to grow and prosper. Equal access has allowed many of those ideas to begin with little or no funding. Facebook and Google are well-known examples: Facebook was started by Mark Zuckerberg when he was a college student at Harvard, and Google's first servers were in a friend's garage near Stanford.
Why the FCC is Involved
Some major Internet Service Providers (ISPs) have attempted to block or slow down traffic from web hosts the ISP did not want its customers to access. Recent examples include Comcast requiring Level3 (host for Netflix) to pay for faster access to its customers and Metro PCS blocking traffic from Vonage and Skype. These practices have alarmed customers, industry professionals and web-based service providers, especially when some ISPs have a monopoly or duopoly in certain areas they serve. They may prevent customers from accessing desired services, stifle ideas and prevent new and innovative business models from having a chance at success.
In an attempt to prevent these problems and keep the status quo of the Internet, the FCC passed a weak set of stipulations preventing land-based ISPs from unnecessarily blocking or slowing down content and an even weaker set of stipulations for wireless ISPs. These actions are being challenged in court and in Congress. The long-term effects of the actions are in doubt, especially given the government's poor track record of solving problems with rules and regulations.
For the record, the FCC is not attempting to regulate the Internet. It is only attempting to limit ISPs from selectively blocking or slowing down access to legitimate web sites and services.
Competition Solves the Problem
The Net Neutrality debate exists because there is not enough competition in the broadband market. Corporations like Comcast and Verizon must maximize their profit and act in the best interest of their shareholders. Their list of priorities does not contain the idealistic goal of protecting an open Internet. This does not make them evil. It is just a fact. How can an open Internet be in sync with the responsibilities of Comcast and Verizon? That is simple. Competition.
Christopher Yoo, director of the University of Pennsylvania Law School Center for Technology, Innovation and Competition, agrees. He is quoted in PCWorld as saying the net neutrality debate is less important than spurring broadband competition and implementing the FCC’s national broadband plan, released last March. The net neutrality debates in recent years “probably generated much more attention than they deserved.” If broadband competition was “robust enough, all these issues would go away.”
Real time applications like Netflix, online gaming and VoIP (such as business Hosted PBX services) are rapidly becoming the most popular applications on the Internet. Could Verizon and Comcast block or slow down some of this content while going head-to-head against a competitor that does not? Not likely since losing revenue would not be maximizing their profit potential. And that would be far more effective than any regulation government could ever put in place. | <urn:uuid:c46be376-8ce8-438c-b6c3-4a8302021080> | CC-MAIN-2017-04 | http://www.hostmycalls.com/2011/01/27/broadband-competition-will-solve-net-neutrality-better-than-the-fcc/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00179-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960206 | 788 | 3.671875 | 4 |
Europe is the second largest market for data center cooling, and there has been a shift in the cooling technologies used in this region. Green data centers, which use renewable sources of energy to power their cooling, are growing in number. Another approach is to site data centers in cooler areas so that the environment provides a natural cooling effect. Little attention has been given so far to the cost of energy used by different IT systems, but with rising energy costs, enterprises are looking for solutions that can help them save money. The demand for such systems is on the rise and has become one of the driving factors behind decisions to deploy new technologies.
To meet the requirements of their customers, IT organizations are deploying new applications, which is leading to a shortage of space, power and cooling. These issues have led many organizations to realize that data center costs are now a significant part of their limited budgets. To take full advantage of their systems, they need to spend effectively, investing in technologies that increase the lifespan of the data center as well as business agility. Data center cooling is a primary target for energy efficiency improvements, which help bring down the costs related to data center energy consumption. Companies are now working on new cooling technologies as well as refining old ones. Some of the cooling technologies in use at present are chillers, air conditioners, economizers and hot huts. With data centers being notable contributors to greenhouse gas emissions, green data centers are gaining traction. These data center cooling systems maximize energy efficiency and minimize environmental impact.
The consistent demand for data centers is also a major contributor to the cooling systems market.
There are many companies which are working towards improving the efficiency of their already existing data center cooling systems.
The factors challenging the data center cooling systems market are the system’s adaptability requirements and erratic power supply requirements in some regions.
Skanska, for example, has developed eOPTI-TRAX, a new data center cooling technology that can reduce the expenditure incurred by these cooling systems. As the demand for data centers grows, there will be a need for more effective cooling solutions. Though major players like IBM, Schneider Electric and HP are present in the data center cooling market, other vendors like Skanska are challenging the status quo with their superior solutions.
Unified Communications (UC) combines all forms of business communications into a single, unified system that provides powerful new ways to collaborate. Encompassing several communication systems or models, UC includes unified messaging, collaboration and interaction systems, real-time and near real-time communications, and transactional applications. Terminating Private Branch Exchange (PBX) and Public Switched Telephone Network (PSTN) circuits so that calls can be transported across an IP network and delivered to another phone system is traditionally referred to as Voice over Internet Protocol (VoIP). This IP-based solution would become the driving force behind the obsolescence of traditional phone switching equipment at the customer site. This new technology would come to be known as IP Telephony.
VoIP is a method of taking analog audio signals, like the kind you hear when you talk on the phone, and turning them into digital data that can be transmitted over an Internet Protocol (IP) network, or the internet.
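The general shape of that process, sampling audio, quantizing it, and grouping the samples into small frames for the network, can be sketched as follows. This is not RTP or any specific codec, just an illustration of the idea:

import math

SAMPLE_RATE = 8000                              # telephony sampling rate, samples per second
SAMPLES_PER_FRAME = SAMPLE_RATE * 20 // 1000    # 20 ms of audio per packet = 160 samples

# "Analog" input: one second of a 440 Hz tone, quantized to 8-bit values.
samples = [int(127 + 100 * math.sin(2 * math.pi * 440 * n / SAMPLE_RATE))
           for n in range(SAMPLE_RATE)]

packets = []
for seq, start in enumerate(range(0, len(samples), SAMPLES_PER_FRAME)):
    payload = bytes(samples[start:start + SAMPLES_PER_FRAME])
    packets.append({"sequence": seq, "timestamp": start, "payload": payload})

print(f"{len(packets)} packets per second, {len(packets[0]['payload'])} audio bytes each")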
In the networking industry, IP Telephony (IPT) is a term that is used synonymously with the term VoIP. Cisco Systems uses the term IPT to define a portion of their entire suite of VoIP products called Unified Communications. Cisco’s IPT suite of products consists of media control protocols, hardware, and software.
The first step toward learning the potential of this technology is understanding the terminology. As a summary of the explanation above:
- VoIP refers to a way to carry phone calls over an IP data network, whether on the Internet or on an internal network. A primary attraction of VoIP is its ability to help reduce expenses because telephone calls travel over the data network rather than the phone company’s network.
- IP telephony encompasses the full suite of VoIP enabled services including the interconnection of phones for communications; related services such as billing and dialing plans; and basic features such as conferencing, transfer, forward, and hold. These services might previously have been provided by a PBX.
- IP communications includes business applications that enhance communications to enable features such as unified messaging, integrated contact centers, and rich-media conferencing with voice, data, and video.
- Unified communications takes IP communications a step further by using such technologies as Session Initiation Protocol (SIP) and Presence along with Mobility solutions to unify and simplify all forms of communications independent of location, time, or device.
Cisco IP telephony solutions comprise two categories: call-processing software and endpoints. Telephones and endpoint devices allow your organization to efficiently run voice, data, and video communications over a single, converged network.
Author: Paul Stryer | <urn:uuid:34ad74c6-04d6-49d5-a933-6d2b0fcb443d> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2009/03/18/what-is-cisco-ip-telephony-part-1/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285289.45/warc/CC-MAIN-20170116095125-00297-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931583 | 545 | 2.703125 | 3 |
A striking number of U.S. travelers, while aware of the risks, are not taking the necessary steps to protect themselves on public Wi-Fi and are exposing their data and personal information to cyber criminals and hackers, according to research released today by AnchorFree, the global leader in consumer security, privacy and Internet freedom.
The PhoCusWright Traveler Technology Survey 2013 polled 2,200 U.S. travelers over the age of 18 revealing new insights into travelers’ online behavior and their understanding of cyber risks.
It is estimated that 89 percent of Wi-Fi hotspots globally are not secure. The increased use of smartphones and tablets to access unsecured public Wi-Fi hotspots has dramatically increased the risk of threats. Travelers were three times more likely to use a smartphone or tablet than a laptop to access an unsecured hotspot in a shopping mall or tourist attraction, two times more likely in a restaurant or coffee shop and one and a half times more likely at the airport.
While most travelers are concerned about online hacking, very few know how, or care enough, to protect themselves. Looming threats — from cyber thieves to malware and snoopers — are skyrocketing on public Wi-Fi and travelers need to be vigilant in protecting themselves.
Further to this point, a striking 82 percent of travelers surveyed reported that they suspect their personal information is not safe while browsing on public Wi-Fi, yet nearly 84 percent of travelers do not take the necessary precautions to protect themselves online. The top three concerns cited when using public Wi-Fi are the possibility of someone stealing personal information when engaging in banking or financial sites (51 percent), making online purchases that require a credit or debit card (51 percent) and making purchases using an account that has payment information stored (45 percent). Travelers were less concerned about using email or messaging services on public Wi-Fi (18 percent).
Cyber-security threats are not the only issues people face while traveling. Thirty-seven percent of international travelers —which equates to 10 million U.S. travelers annually— encountered blocked, censored or filtered content including social networks (40 percent) such as Facebook, Twitter and Instagram during their trip. Top websites that were also blocked include video and music websites such as Hulu and YouTube (37 percent), streaming services such as Pandora and Spotify (35 percent), email (30 percent) as well as messaging sites such as Skype and Viber (27 percent).
To avoid the threat of hacking and cyber attacks, more than half of travelers (54 percent) try not to engage in online activities that involve personally sensitive information while one in five (22 percent) avoid using public Wi-Fi altogether because they believe their personal information is at risk. Only 16 percent reported using a VPN such as Hotspot Shield. | <urn:uuid:c5c6594b-3f43-41d6-968c-a28b76660eb8> | CC-MAIN-2017-04 | http://hospitalitytechnology.edgl.com/news/Four-in-Five-Travelers-Fear-Mobile-Use-of-Unsecured-Public-Wi-Fi-89521 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280086.25/warc/CC-MAIN-20170116095120-00115-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94421 | 573 | 2.65625 | 3 |
Building Firewalls with iptables, Part 1
Exposing any system, no matter how briefly, to an untrusted network is suicidal. A firewall, while not a 100% secure solution, is absolutely vital. The Linux world gives us an excellent firewall utility in netfilter/iptables. It is free and runs nicely on feeble old PCs. Netfilter/iptables is flexible, powerful, and enables fine-grained control of incoming and outgoing traffic. The two main functions this series will address are building firewalls and sharing Internet connections, which commonly go hand-in-hand. In Part 1 we'll cover basic concepts; Part 2 will offer examples of rulesets for various uses.
Netfilter/iptables is included with the 2.4/2.5 Linux kernel for firewall, network address translation (NAT), and packet mangling functions. Netfilter works inside the kernel, while iptables is the table structure for the user-defined rulesets. Netfilter/iptables is the descendant of our old friends ipchains and ipfwadm (IP firewall administration); for simplicity, let's call it iptables from this point forward.
Some other excellent uses for iptables are for building firewalls for individual Unix/Linux/BSD workstations and also for building firewalls for subnets to protect other platforms. It's free, so why not construct layers of defenses? Depending solely on a gateway firewall is not enough.
iptables reads only packet headers, and as a result does not inspect payload. It also does not perform authentication. For extra security, combine it with a proxy server such as squid. For Windows users, AnalogX is a popular proxy server noted for its ease of use. (Beware that the default configuration is completely insecure. Do not "set it and forget it," as it installs wide open.)
What It Does
The typical setup is to have two network interfaces -- one "outward" and one "inward" (or call them public and private). iptables reads incoming (and outgoing -- don't forget egress filtering!) packet headers and compares them to the rulesets, then forwards the acceptable packets from one interface to the other. Rejected packets are dropped on the spot -- boom splat -- or are directed in other ways, as you prefer.
Packets must traverse tables and chains. iptables has three built-in tables: filter, NAT, and mangle. (The mangle table is for specialized packet alterations, which we will not cover in this series.) Chains are the lists of rules in each table that match packets and then tell what to do with them. Target is any rule that applies to a matching packet. You'll see these terms a lot.
Unlike ipchains and ipfwadm, iptables uses stateful packet inspection. iptables inspects the source and destination IP addresses, the source and destination ports, and the sequence numbers of incoming packets. In a sense, iptables "remembers" which packets are already permitted on an existing connection. This provides a significant gain in security -- ephemeral ports are open only for as long as they are needed, as opposed to requiring all manner of permanent holes in the firewall to accomodate the various protocols. Malicious packets with altered headers are detected and dropped, even when they contain an allowed destination address and port.
Starting and Stopping iptables
This depends on your individual flavor of Linux; a nice rc script does the job, or you can run it manually from the command line. Please consult the docs for your distribution. Part 2 in this series will have sample scripts.
As always, the more you understand about TCP/IP, the more this stuff makes sense. iptables rules filter and match on packet headers and TCP/IP protocols -- any of them.
iptables is commonly included in Linux distributions; it would be very unusual to not have it. Run iptables --version to see what's on your system. If for some inexplicable reason you do not have it, see Resources at the end of this article.
man iptables is a complete reference for all the commands and options, or run iptables --help for a quick reference. To view your existing iptables rules, run:
# iptables --list
This is what iptables looks like with no rules defined:
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
As shown in the above example, every packet must first traverse one of three built-in chains: INPUT, OUTPUT, or FORWARD.
Filter is the most commonly used table. Here is the basic syntax for all iptables rules:
iptables [-t table] command [match] [target/jump]
Not every piece of this is required, nor does it need to be in this order; however, this is the usual method, and as always, I encourage verbosity for the sake of clarity.
The filter table is the default if none is specified. The three most common targets in the filter table are ACCEPT, DROP, and REJECT. DROP drops packets dead, with no further processing. No messages are sent at all to anyone. REJECT sends back an error message to the sending host. DROP is very useful, although at times it may have undesirable side effects, such as leaving a messy trail of dead sockets.
This example rule blocks traffic from a specific IP range because it belongs to a notoriously noxious spammer, and we don't want the spammer's spew polluting our nice systems:
# iptables -t filter -A INPUT -s 192.0.2.0/24 -j DROP
See how it follows the syntax described above. (See man iptables for definitions of the various switches and commands.) Now let's say your users are becoming increasingly vindictive and resentful towards spammers, which is understandable, but certain retaliatory tactics are simply not permissible, at least not from your network. We can also block all outgoing packets directed to the spammer's IPs easily enough with this slightly different syntax:
# iptables -t filter -A OUTPUT -d 192.0.2.0/24 -j DROP
Notice the -A switch. Use this to append rules to existing chains.
Spammers are shifty, experts at playing whack-a-mole (in the role of the mole) by continually changing IPs and DNS. Suppose our ignominious spammer moves to a new IP range, and the old IP address is then reassigned to some saintly nuns, whose bits are worthy to traverse your network. Simply delete the rule with the -D switch:
# iptables -t filter -D OUTPUT -d 192.0.2.0/24 -j DROP
Crafting rules to cover every contingency is a nice way to consume mass quantities of time. For those who would rather not, the basic principle is "deny all, allow only as needed." Let's set up the default rules for each chain:
# iptables -P INPUT DROP
# iptables -P FORWARD DROP
# iptables -P OUTPUT ACCEPT
-P sets the policy for the chain. Only the three built-in chains can have policies. These policies permit unfettered outgoing traffic, but no incoming traffic. At the very least, we want to hear from the nuns:
# iptables -t filter -A INPUT -s 192.0.2.0/24 -j ACCEPT
Stay tuned for Part 2, which will offer more sample rules and scripts. | <urn:uuid:fe13e1f2-4b53-4c1b-9c04-51cf9d9b6e45> | CC-MAIN-2017-04 | http://www.enterprisenetworkingplanet.com/print/netsecur/article.php/2213171/Building-Firewalls-with-iptables-Part-1.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00509-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.910696 | 1,632 | 2.53125 | 3 |
Computer scientist, inventor and university physicist Carver Mead is perhaps best known for coining the phrase “Moore’s law,” helping to popularize Gordon Moore’s 1965 observation that the number of transistors on an integrated circuit doubles about every 24 months. Mead was also instrumental in the prediction’s tremendous staying power.
One of Mead’s most significant contributions to computing was a technique called very large-scale integration (VLSI), which enabled tens of thousands of transistors to be fitted onto a single silicon chip. In 1979, Mead taught the world’s first VLSI design course and created the first software compilation of a silicon chip. His 1980 textbook “Introduction to VLSI Design,” coauthored by Lynn Conway, launched the Mead and Conway Revolution. Mead and his contemporaries set the stage for the “microchip revolution” in the Pacific Northwest. His methods of complex chip design have catalyzed decades of progress.
In the 1980s, Mead grew frustrated with the limits of traditional CPU design, and turned to mammalian brains for inspiration. Three decades hence, this field of neuromorphic computing is back in the spotlight with efforts like the Human Brain Project. Mead, now 79, maintains a professor emeritus position at Caltech, where he taught for over forty years. In a recent interview with MIT Technology Review, Mead details why it’s important for computer engineers to explore new forms of computing.
In Mead’s view, one of the thorniest challenges for the chip industry is power dissipation. For decades now, the focus has been on faster and faster chips, but the heat issue can’t be ignored. Mead notes that “It’s a common theme in technology evolution that what makes a group or company or field successful becomes an impediment to the next generation. … Everyone was richly rewarded for making things run faster and faster with lots of power. Going to multicore chips helped, but now we’re up to eight cores and it doesn’t look like we can go much further. People have to crash into the wall before they pay attention.”
These limitations are what prompted his interest in neuromorphic designs. “I was thinking about how you would make massively parallel systems, and the only examples we had were in the brains of animals,” he tells MIT Technology Review, “We built lots of systems. We did retinas, cochleas—a lot of things worked. A lot of my students are still working on this. But it’s a much bigger task than I had thought going in.”
Mead is also directing his energy into developing a unified framework to explain both electromagnetic and quantum systems. This is summarized in his book Collective Electrodynamics. Mead is skeptical, yet supportive, of current quantum computing projects.
“We don’t know what a new electronic device is going to be. But there’s very little quantum about transistors,” he says. “I’m not close to it, but I’m generally supportive of these people doing what they call quantum computing. People have got into trying to build real things based on quantum coupling, and any time people try to build stuff that actually works, they’re going to learn a hell of a lot. That’s where new science really comes from.”
Mead’s viewpoint is refreshing and inspirational. He reminds us that all new technologies start small before becoming “part of the infrastructure that we take for granted.” Even “the transistor was [once] a tiny little wart off a big industry,” he quips. | <urn:uuid:50843436-67ca-4e34-a332-4cbace2fd3b5> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/11/25/carver-mead-quantum-computing-neuromorphic-design/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00233-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963609 | 768 | 3.5 | 4 |
Recently we've been comparing Telnet with the Secure Shell (SSH) protocol for allowing remote access to a device such as a router or switch. Now, we're going to compare File Transfer Protocol (FTP) and Trivial File Transfer Protocol (TFTP) for a Cisco router or switch. Both protocols can be used to manage device files for backup purposes.
To begin, the Cisco Internetwork Operating System (IOS) has many options for saving files. When you look at the options with the copy command, you can see the many different locations to which your files can be saved. As shown in example 1, you can see all the copying options that are available to use.
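On a typical router, entering copy ? in privileged EXEC mode produces output along these lines (abbreviated; the exact list depends on the IOS version and the file systems present on your device):

RTR1# copy ?
  flash:           Copy from flash: file system
  ftp:             Copy from ftp: file system
  nvram:           Copy from nvram: file system
  running-config   Copy from current system configuration
  startup-config   Copy from startup configuration
  tftp:            Copy from tftp: file system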
Note: This display shows some of the most common options. The options may vary depending on the IOS version running on the device. You can see that it's possible to copy to and from the device using TFTP and FTP.
You also may have learned that these protocols differ in how they deliver files to their destinations. FTP uses the Transmission Control Protocol (TCP), which provides reliability and flow control and can guarantee that the file will reach its destination while the connection is established. TFTP uses the User Datagram Protocol (UDP), which doesn't establish a connection and therefore cannot guarantee that files get to their destinations. When you compare these protocols on that basic difference, you may conclude that FTP will always be the better option due to its reliability. However, TFTP is a simpler protocol; its server uses less memory when supporting clients and can be a scalable solution for applications such as IP telephony.
So with the basic differences established, let's make a comparison between TFTP and FTP with our copy options. I will be using freeware applications that act as an FTP server and a TFTP server. Example 2 displays the main log page of the SolarWinds TFTP server.
Example 3 displays the FileZilla FTP server. If you don't already know, FTP requires usernames and passwords for access. We will see why this is important later.
These applications are necessary because the routers act as the client devices for these sessions. Additionally, the next example shows the test bed used for the comparison.
To copy the running configuration file to the TFTP server in the example, you enter the privileged EXEC command copy run tftp. This command is interactive, prompting you to enter the IP address of the TFTP server and the file name you want the file to have when it arrives at the destination. If the command copy tftp run is entered instead, a file can be merged with the running-config. In that case you will be prompted to specify the IP address of the TFTP server, the file name on that server, and the file name you want it to have on this device (if copying to flash). Example 5 displays the file being sent to the TFTP server and how it can be viewed once it has been received from the router.
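A copy run tftp session looks roughly like the following (the server address 192.168.1.50, the destination file name, and the timing figures are placeholders for illustration):

RTR1# copy running-config tftp:
Address or name of remote host []? 192.168.1.50
Destination filename [rtr1-confg]? RTR1.TXT
!!
1030 bytes copied in 2.489 secs (414 bytes/sec)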
Here you can see that the file (RTR1.TXT) was successfully received on this device. This is important for backing up or upgrading files such as the startup-config, vlan.dat, SDM files, and even the IOS. Additionally, you may find it necessary to have a router act as a TFTP server for backup purposes. This is accomplished with the global configuration command tftp-server <file system>:<file name>. The following examples display how a file can be staged for delivery via TFTP.
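A minimal sketch of staging a file this way (the IOS image name shown here is just a placeholder; substitute whatever file actually resides in flash):

RTR2(config)# tftp-server flash:c2800nm-advipservicesk9-mz.124-15.T1.bin

Other routers can then pull that file with copy tftp flash, pointing at RTR2's address.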
This is useful in scenarios where redundant routers need to back up one another's files and functions. By having routers back up one another's IOS images, or by having a dedicated TFTP server, multiple layers of redundancy can be achieved.
Copying files using FTP is similar but requires more setup. Example 7 displays the configuration commands necessary for FTP.
As mentioned before, File Transfer Protocol uses usernames and passwords for authentication. Therefore, routers or switches are required to have a username and password configured for FTP. This is done with the global configuration commands ip ftp username <username> and ip ftp password <password>.
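For example (the credentials below are placeholders; they must match an account that actually exists on the FTP server):

RTR1(config)# ip ftp username ftpuser
RTR1(config)# ip ftp password ftppass123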
If there is a requirement for the router to operate in passive mode (meaning the FTP server provides the client the dynamic port to be used for the data connection, as opposed to active mode, where the client provides the server the dynamic port to be used for the data connection), it can be enabled with the global configuration command ip ftp passive.
Lastly, for security purposes, the FTP server may have an IP filter or access list that specifies which IP addresses are allowed to connect. The command ip ftp source-interface <interface name> is used to specify which interface's IP address the router will source its FTP traffic from; otherwise the router will select the IP address of the interface that faces the server. Examples 8 and 9 illustrate how to configure the commands and how to verify their configuration.
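Putting the optional pieces together might look like the following sketch (Loopback0 is just an assumed interface, and the exact verification output will vary with your IOS version and password-encryption settings):

RTR1(config)# ip ftp passive
RTR1(config)# ip ftp source-interface Loopback0
RTR1(config)# end
RTR1# show running-config | include ip ftp
ip ftp username ftpuser
ip ftp password ftppass123
ip ftp passive
ip ftp source-interface Loopback0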
With all of the setup in place, you can now copy a file to and from the FTP server as illustrated below.
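A sketch of the FTP copy, again with a placeholder server address and file name (the timing figures are illustrative only):

RTR1# copy running-config ftp:
Address or name of remote host []? 192.168.1.60
Destination filename [rtr1-confg]? RTR1-FTP.TXT
Writing RTR1-FTP.TXT !
1030 bytes copied in 8.216 secs (125 bytes/sec)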
As you can see, it appears to take much longer to send this file than it took earlier via TFTP. But if you were sending an extremely large file across a WAN or to a distant location, you would find that FTP is more useful because of its window sizing and the sequence numbers used for reassembly and reliability.
In conclusion, you can see the differences between using FTP and TFTP. Backing up and upgrading files on IOS-based devices can be accomplished with these commands. Using these protocols is a necessary skill that should be practiced by any technician or network engineer.
Author: Jason Wyatte | <urn:uuid:3e5144cf-bcbd-4b65-9b8c-8c1d69b45014> | CC-MAIN-2017-04 | http://blog.globalknowledge.com/2009/07/15/ftp-vs-tftp/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00537-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91378 | 1,195 | 2.921875 | 3 |