Whether you’re at the cash machine, online to your bank or credit card company or on the phone to your insurance or mortgage provider, until now, the need for greater security has meant added complexity and cost for user and provider alike.
In future, this problem is sure to grow. Consumer-facing organisations want the efficiencies to be gained from e-commerce technologies, and are moving inexorably towards a Web-based interface with their customers.
That could mean asking consumers to navigate increasingly complex layers of password-based authentication, which discourages them from trusting the security of online transactions — only 10 per cent of consumers bank online for this reason. Consumers could also be faced with remembering growing numbers of passwords, while enterprises will need to divert scarce resources to helping users recall those passwords and will continue to bear the costs of theft or mistakes following authentication failures.
Yet it doesn’t have to be like that. Security technology can ensure that you keep what’s yours while enabling you to get on with life, letting technology take care of the details. Strong authentication of users that is both easy to use and cost-effective is the answer.
Authentication in a complex world
Consumers in today’s world spend a growing amount of time authenticating their identities to banks, insurance companies, utilities and phone companies, for instance. Before such organisations can process any transactions or information, they need to know that users are who they say they are. In other words, authentication of identity is critical or no trust can exist between the two parties.
Right now, that process consists of what you know — almost invariably a user name and password combination — and, where stronger authentication is required, what you have. The latter usually takes the form of a hardware token or software application that generates a second code or PIN, and is known as two-factor authentication.
Names and passwords have a long tradition, going back centuries. They worked well when the numbers to be dealt with were small and a person’s identity could be confirmed by looking at them. In today’s world, that’s not practical, yet reliance continues to be placed in this method, despite its well-publicised weaknesses.
The key problem is that passwords are too easily discovered or guessed — they are often found written down on sticky notes stuck to monitors, for instance. Even when they’re not, passwords can often be derived from well-known information about the user such as their birthday, or spouse, partner or pet’s name. Further, because it’s hard to remember passwords that aren’t standard words — especially as the number of passwords required increases — the average password can often be discovered by a computer attack. This can be achieved using a dictionary or, more time-consuming but ultimately effective, a brute-force lookup that checks every possible combination of characters.
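To make that concrete, the short Python sketch below (illustrative only, not any real cracking tool) counts how many guesses a naive brute-force search needs to find a short lowercase password, and how few a dictionary attack needs when the password is an ordinary word.

```python
import itertools
import string

# Illustrative only: count how many guesses a naive brute-force search needs
# to find a short, all-lowercase password by trying every combination.
def brute_force_guesses(target: str, max_len: int = 4):
    attempts = 0
    for length in range(1, max_len + 1):
        for candidate in itertools.product(string.ascii_lowercase, repeat=length):
            attempts += 1
            if "".join(candidate) == target:
                return attempts
    return None  # not found within max_len characters

# A dictionary attack is cheaper still: try each word from a word list.
def dictionary_guesses(target: str, wordlist):
    for attempts, word in enumerate(wordlist, start=1):
        if word == target:
            return attempts
    return None

print(brute_force_guesses("cat"))                          # found after roughly 2,000 guesses
print(dictionary_guesses("rex", ["spot", "rex", "fido"]))  # found on guess 2
```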
In a corporate environment, end user education as a cornerstone of company security policy can often be the answer to this problem, along with forcing users to update their passwords regularly, and checking the strength of passwords using cracking programs. For consumer applications however, none of these options is realistic. Give customers what they perceive to be a hard time, and a business risks driving them into the arms of the competition.
The mobile future secured
Passwords on their own are too weak to enable full trust. The alternative is two-factor authentication, which has proven close to unbreakable and is the strongest form of authentication available. Its drawback in a consumer application is that it is not realistic to expect consumers to carry an additional, special device whose sole function is authentication.
A much better answer is to reap the benefits of two-factor authentication by generating a new password for every authentication using a device that the user already has with them. Research shows that the one device most users both possess and carry with them is their mobile phone.
The way this could work is that the user initiates a transaction and enters their PIN or access code; the service provider then sends a randomly generated password via SMS to their phone, which the user enters to complete the authentication. This proves that they are the right person — a miscreant is highly unlikely to know the user name and password and also possess the phone. If they are using a browser, the user must enter the access code into the same browser from which they requested it. The ideal solution would also provide non-repudiation, encrypt the link where possible, and generate passwords that are truly random.
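As a rough illustration of that flow, here is a minimal Python sketch of the server side of an SMS one-time-password exchange. It is not a description of any vendor's actual product; verify_pin() and send_sms() are placeholder stubs standing in for a user store and an SMS gateway.

```python
import secrets
import time

# Placeholder stubs: a real deployment would check the PIN against a user store
# and hand the message to an SMS gateway.
def verify_pin(username: str, pin: str) -> None:
    if not pin:
        raise PermissionError("wrong PIN")

def send_sms(phone_number: str, message: str) -> None:
    print(f"SMS to {phone_number}: {message}")

pending = {}  # session_id -> (code, expiry timestamp)

def start_authentication(session_id: str, username: str, pin: str, phone: str) -> None:
    verify_pin(username, pin)                     # factor 1: something the user knows
    code = f"{secrets.randbelow(10**6):06d}"      # fresh random one-time password
    pending[session_id] = (code, time.time() + 120)
    send_sms(phone, f"Your one-time access code is {code}")  # factor 2: something they have

def complete_authentication(session_id: str, submitted_code: str) -> bool:
    # The code must come back through the same session (browser) that requested it.
    code, expiry = pending.pop(session_id, (None, 0.0))
    return code is not None and time.time() < expiry and secrets.compare_digest(code, submitted_code)
```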
This form of strong authentication shows huge promise. Trials by a number of service providers suggest there are few drawbacks, with the small cost of sending an SMS being offset by the security of knowing they are dealing with the right person.
Compared to other forms of two-factor authentication, the advantages are that:
- such a system would need no extra infrastructure, so deployment costs on a per-user basis will be low;
- because the user is familiar with the hardware, there are no additional training or help desk costs to be borne;
- in some cases, it may help compliance with government, industry, or enterprise regulations for data protection;
- it can be deployed in very large numbers to cover mass markets;
- the user need carry no extra devices around, adding convenience and enabling enterprises to differentiate themselves;
- consumer confidence both in the strength of that security and the protection of their investments from access by the unauthorised will be increased, leading to customer satisfaction and retention.
Only the need for a mobile phone network limits coverage and, even in the US where SMS is not as popular as it is in Europe, trials show that messages both work and travel quickly — one long-distance trial reported a delay between the UK and the US west coast of just four seconds.
When authentication via SMS becomes widespread, businesses and consumers will benefit. In the financial services area, banks and insurance companies are clear beneficiaries. In business to consumer applications, healthcare — ensuring that the consumer is matched, critically, with the right medical records — and bill payment will be transformed. Service providers and enterprises will be able to offer unfettered access to remote users’ desktops no matter where they are, secure that the user can prove their identity.
From a business-to-business perspective, such technology can facilitate supply and buy-side e-commerce, with partners and suppliers being able to authenticate and so gain access to secured extranets, increasing trust between the parties conducting transactions.
Right now, access to information is critical for businesses and consumers alike and this trend is set to grow. What’s needed is a way of authenticating people on a mass-market scale, and using a widely-adopted, easy-to-use technology such as SMS means that access can be secure, more cost-effective and more convenient.
RSA Security’s RSA Mobile, built on its patented, time-synchronous technology and algorithms that deliver proven security to around 13 million end users, provides a platform for consumer-facing organisations to build such a solution. So with RSA Security ready to bring its secure technologies to this market and to fully incorporate industry standards such as Liberty and SAML into future releases, the time is right for this technology.
Both industry and consumers need it, the pre-conditions have been met and the demand is there.
Infosecurity Europe is Europe’s largest and most important information security event. Now in its 8th year, the show features Europe’s most comprehensive FREE education programme, and over 200 exhibitors at the Grand Hall at Olympia from 29th April – 1st May 2003. www.infosec.co.uk
The International Space Station hit a major milestone today, marking 15 years in space.
On Nov. 20, 1998, the Russian Space Agency, now known as Roscosmos, launched a Proton rocket that carried a pressurized module, named Zarya, or sunrise, into Earth's orbit.
Zarya was the first piece of the International Space Station. It provided the main point of orientation control, communications and electrical power while the station was eventually built out with other modules and elements.
That one fateful launch was the beginning of what has become the largest international cooperative effort in space.
Today, the space station is home to a rotating crew of astronauts from organizations and countries including NASA, the European Space Agency, Russia and Japan. More than 200 people from 15 countries have lived and worked there.
The orbiting station, which has been built out to the size of a football field and carries several robotic arms, a talking robot and a humanoid robot, has also been the site of about 1,500 scientific experiments.
According to NASA, the station, with a mass of almost 1 million pounds, is second only to the moon as the brightest object in the night sky.
Fifty-two computers run the systems onboard the station, which travels the equivalent distance to the moon and back in about a day.
"There's much more to be learned aboard the station and I look forward eagerly to the milestones of the coming years," said Charles Bolden, NASA administrator in a video posted online today. "It's our home in orbit where we're learning to live and work in space for the long term. It's integral to our exploration strategy. It's a unique global resource."
He also noted that scientists from 69 countries have been able to send their experiments to be conducted on the space station.
"Astronauts from many countries demonstrate what is achievable when nations work together toward common goals that improve life everywhere," Bolden added.
A few weeks after Zarya was launched, NASA's Space Shuttle Endeavour lifted off, carrying Unity, the first U.S. piece of the station.
Built on opposite sides of the planet, the two modules were joined, making the space station an international effort.
Since that first pairing, the space station grew over the years, piece by piece. Additions to the orbiter came from three continents.
The space station was considered complete on Feb. 24, 2011, when assembly was finished on the Italian module, Leonardo.
The first crew to live aboard the space station launched on a Soyuz spacecraft on Oct. 31, 2000, as Expedition 1, and consisted of one NASA astronaut, Commander Bill Shepherd, and two Russian cosmonauts, Sergei Krikalev and Yuri Gidzenko.
There have been humans living and working on the orbiter ever since.
Over the years, astronauts have conducted 174 spacewalks, totaling nearly 1,100 hours -- the equivalent of nearly 46 days -- to build and maintain the station.
Cyber-attacks in the healthcare environment are on the rise, with recent research suggesting that critical healthcare systems could be vulnerable to attack.
In general, the healthcare industry is proving lucrative for cybercriminals because medical data can be used in multiple ways, for example fraud or identity theft. This personal data often contains information regarding a patient’s medical history, which could be used in targeted spear-phishing attacks.
Dangerous attacks – what are the risks?
Cybercriminals have found medical data to be far more valuable than credit card fraud or other online scams. This is because medical information contains everything from a patient’s medical history to their prescriptions, and hackers are able to access this data via network-connected medical devices, now standard in hi-tech hospitals. This opens up new possibilities for attackers to breach a hospital’s or a pharmaceutical company’s perimeter defences. If a device is connected to the internet and left vulnerable to attack, an attacker could remotely connect to it and use it as a gateway for attacking the rest of the network.
The danger is that, because most of these devices are not on segregated networks and are directly connected to other medical computers or life-sustaining medical hardware, attackers could make their way to servers or databases housing sensitive and confidential patient records. Furthermore, whilst accessing medical data is a serious concern, there’s also the risk of tampering with medical equipment that’s keeping patients properly medicated. In such cases, future cyber-attacks could lead to the loss of human life.
The healthcare security spend – how much is enough?
Despite increasing attacks on healthcare organisations, 10 per cent or less of IT spend is put towards security, leading many recent reports to suggest that healthcare organisations are not taking the security of patients seriously. However, while 10 per cent may seem small, healthcare organisations usually have large budgets, which means this could represent far more than what a small or medium-sized company would allocate towards security.
What is more of a concern is that, while organisations continue to put pressure on healthcare companies to secure patient data, 87 per cent of healthcare organisations are still leaving data at risk. Until now, these organisations have focused their investment on quality services, medication and personnel; protecting patients’ medical data should be met with the same level of interest and involvement. With the growing number of implantable and internet-connected medical devices, medical organisations need to account for the fact that, in a cybercriminal’s control, such devices could be used to end life, not only protect it.
Securing the data: Keeping the attackers at bay
The majority of healthcare organisations have been shown to fail at basic security practices, such as disabling concurrent logins from multiple devices, enforcing strong authentication, and isolating critical devices and the servers that store medical data from a direct internet connection. Organisations must start by fixing these shortcomings.
Furthermore, healthcare companies should implement security policies, and invest in Intrusion Detection Systems, access control lists and even regular pen-testing drills for identifying network, software and procedural issues.
Going forward, it is vital for companies to invest in training personnel to correctly identify security threats, as they’re usually the ones most prone to social engineering techniques or spear-phishing attacks. Healthcare professionals who handle medical equipment should be trained and instructed on best security practices and medical device security, as they could be directly responsible for a potential security breach or patient-related issues caused by mishandling such hardware or software.
Flash storage is one of the main components that makes low power electronics so flexible. Unlike common DRAM, which needs constant refreshing in order to retain its contents, flash memory will stay written for about 10 years without power. However, flash pays for that longevity in access times, which are much slower than those for DRAM. The perfect memory would be nonvolatile like flash yet provide access faster than the current generation of DRAM. Quantum dots, with their nicely tunable electronic properties, look like they may fit the bill.
Researchers in Germany have been exploring the suitability of self-assembled arrays of quantum dots for nonvolatile storage. A quantum dot is a small clump of atoms that is confined in a way that restricts the motion of the electrons, making the whole thing act like a single atom. The properties of the dot can be modified by changing the size of the clump or the constituent atoms.
In the quantum dot-based storage array, the researchers have been looking at the constituent atoms, trying out silicon and germanium, and more complicated mixtures of gallium, indium, arsenic, aluminum, and antimony (for those of you keeping count, these are III-V materials). Experimentally, they have found that quantum dots can have access times of around 10ns, faster than the current generation of RAM, and they require a refresh rate as low as 0.7Hz. Further calculations show that more suitable combinations would result in a storage time of one million years while maintaining the same access time.
Quantum dots can do this because there is more design freedom in setting them up. Normal flash memory relies on the huge potential barrier created by a silicon oxide layer. The probability of an electron tunneling across the barrier is so low that the data will stay there for 10 years. However, to get electrons across that barrier when writing data to a flash cell requires a lot of energy, energy that destroys the silicon oxide layer. This is why flash memory has a limited number of write cycles in it.
Quantum dots, in contrast, have tunable properties, so the barrier can be kept low. In the current work, the barrier was four times lower than that of silicon dioxide. Additionally, the data can be stored as an absence of electrons, called holes. These holes behave exactly like positively charged electrons, except that they are heavier. The confinement of the quantum dot makes them even heavier than normal, which reduces the chances of them tunneling out of the quantum dot. The result is a very low refresh rate.
Based on the known properties of the materials used and the behavior of quantum dots, the researchers predict that they will be able to make quantum dots that can store data for one million years with an access time of 10ns. If they can make these in volume and at the same data density as standard flash modules, you can say goodbye to hard drives, flash, and RAM. Personally, I won't miss any of them.
Applied Physics Letters, 2007, DOI: 10.1063/1.2824884
Addendum: Symbolic Links
A lot of people not steeped in Unix culture do not know what symbolic links are, particularly Mac users who don't understand the differences between Mac OS aliases and Unix symbolic links, or "symlinks." But it's such a simple concept, it's worth taking a few seconds to explain it.
Symlinks are files that contain a string that is interpreted as a file path. Let's look at that sentence a piece at a time.
- "Symlinks are files..." That means they can be moved, renamed, and deleted just like any other file.
- "...that contain a string" Yes, they really do contain a plain old text string. Depending on the OS, the contents of symlink files may not be accessible. But rest assured, at the lowest level it's just a file containing text.
- "...that is interpreted as a path" This is the only vaguely tricky part for people who have never used a system that required them to understand paths. It is explained further below.
A path is a series of words separated by an aptly-named "path separator" character. In Unix-style systems (like Mac OS X), the forward-slash (or just "slash") character "/" is the path separator. Each component of the path corresponds to a file or directory. Every path component but the last one must be a directory name. The last path component can be either a directory or file name. Thus, a path leads to a particular file or directory in the file system. Some examples of paths appear below:
Paths may be either "absolute" or "relative." Absolute paths begin from the very top of the directory tree and therefore always correspond to the same file or directory no matter where the symlink file containing that path is moved.
Relative paths correspond to the file or directory located at a particular path starting from the current directory. The "current" directory for symlinks is the directory that contains the symlink file. This all sounds complex, but it's really not. Examples:
Let's say we have two symlink files:
- symlink1 contains a text string that spells out the following absolute path: /Food/Apple
- symlink2 contains a text string that spells out the following relative path (note the missing slash at the start): Food/Apple
Now let's look at the simple directory structure below:
/MyDisk/
    Food/
        Apple
    Special/
        Food/
            Apple
No matter where we put symlink1 on MyDisk, it will always point to the Apple file in the Food directory at the top level of MyDisk.
symlink2, on the other hand, could point to either the Apple file in the top level Food directory, or the Apple file in the Food directory inside the Special directory, or, perhaps, to nothing at all! Examples:
Example 1: symlink2 pointing to the "non-special" Apple file:
/MyDisk/
    symlink2 -> Food/Apple
    Food/
        Apple          (symlink2 points to this)
    Special/
        Food/
            Apple
Example 2: symlink2 pointing to the Apple file in the Food directory inside the Special directory. Note that the Special directory could be moved anywhere on the filesystem and symlink2 would still point to the same file. This is the benefit of using relative links.
/MyDisk/
    Food/
        Apple
    Special/
        symlink2 -> Food/Apple
        Food/
            Apple      (symlink2 points to this now)
Example 3: symlink2 pointing to nothing at all!
/MyDisk/
    Food/
        symlink2 -> Food/Apple
        Apple
    Special/
        Food/
            Apple
This last example is what's called a "broken" symlink. Any attempt to get at the file that symlink2 points to would fail with a "file not found" error. This can also happen if any of the files that either symlink1 or symlink2 point to are removed or renamed. Symlinks are "dumb" in this respect. They just contain a string, after all. If that string does not correspond to an existing file, the symlink is considered "broken."
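For readers who want to experiment, the following Python sketch recreates the layout above on a Unix-style system (a scratch temporary directory stands in for /MyDisk) and shows how the same relative link string resolves differently depending on where the symlink file lives.

```python
import os
import tempfile

# Recreate the /MyDisk layout in a scratch directory and show how absolute
# and relative symlinks resolve.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "Food"))
os.makedirs(os.path.join(root, "Special", "Food"))
open(os.path.join(root, "Food", "Apple"), "w").close()
open(os.path.join(root, "Special", "Food", "Apple"), "w").close()

# symlink1 stores an absolute path; symlink2 stores the relative string "Food/Apple".
os.symlink(os.path.join(root, "Food", "Apple"), os.path.join(root, "symlink1"))
os.symlink("Food/Apple", os.path.join(root, "Special", "symlink2"))

print(os.readlink(os.path.join(root, "symlink1")))                  # the stored absolute path
print(os.path.realpath(os.path.join(root, "Special", "symlink2")))  # .../Special/Food/Apple

# Move symlink2 to the top level: the same stored string now resolves to the other
# Apple file, because relative links are interpreted from the symlink's own directory.
os.rename(os.path.join(root, "Special", "symlink2"), os.path.join(root, "symlink2"))
print(os.path.realpath(os.path.join(root, "symlink2")))             # .../Food/Apple
```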
Mac OS aliases are very different beasts. They are files that uniquely point to a specific file or directory, and continue to point to it regardless of where the file or directory (or the alias file itself) is moved or renamed. (How it does this is beyond the scope of this discussion.)
Both symbolic links and aliases are useful. Sometimes you want to point to a particular file or directory in a robust manner that will not break when things are moved or renamed, but sometimes you simply want to point to "whatever file happens to be located at the following path..." Exercise for the reader: why are relative symbolic links used in Framework Bundles instead of absolute symbolic links or aliases?
IBM predicts five innovations that will change our behaviour over the next five years.
As 3D and holographic cameras get more sophisticated and miniaturized to fit into mobile phones, people will be able to interact with photos, browse the web and chat with friends in the form of 3D holograms.
Scientists are working to improve video chat to become holography chat - or "3-D telepresence". The technique uses light beams scattered from objects and reconstructs them into a picture of that object.
Batteries that react to the environment
Instead of the heavy lithium-ion batteries used today, scientists are working on batteries that use the air we breathe to react with energy-dense metal. If successful, the result will be a lightweight, powerful and rechargeable battery capable of powering everything from electric cars to consumer devices.
These would lead to the development of battery-free electronic devices that can be charged using a technique called energy scavenging. Some wrist watches already use this - they require no winding and charge based on the movement of your arm. The same concept could be used to charge mobile phones for example - just shake and dial.
Rise of the 'citizen scientist'
Sensors in phones, cars, wallets and even tweets will collect data that will give scientists a real-time picture of environments.
Simple observations such as when the first thaw occurs and when mosquitoes first appear, for example, will provide a rich resource in datasets. Laptops will even be used to detect seismic activity. If connected to a network of other computers, this will help to map the aftermath of an earthquake quickly, speeding up the work of emergency responders and potentially saving lives.
Personalised commuter information
Advanced analytics will personalise recommendations for commuters, so they will be directed where to go in the fastest time. Adaptive traffic systems will intuitively learn traveller patterns and behaviour to provide more dynamic travel safety and route information to travellers than is available today.
IBM researchers are developing new models that will predict the outcomes of varying transportation routes to provide information that goes beyond traditional traffic reports, after-the-fact devices that only indicate where you are already located in a traffic jam, and web-based applications that give estimated travel time in traffic.
Computers will help to energize your city
The energy poured into the world's data centres could be recycled for a city's use to combat the excessive heat and energy that they give off.
Up to 50% of the energy consumed by a modern data centre goes toward air cooling. Most of the heat is then wasted because it is just dumped into the atmosphere. But new technologies, such as on-chip water-cooling systems, mean that the thermal energy from a cluster of computer processors can be efficiently recycled to provide hot water for an office or houses.
Capacity and QoS are major considerations in a converged network and affect one another. QoS is needed to prevent applications from using more than a fair share of bandwidth and degrading the performance of other applications. At the WAN interface, QoS is needed to allocate expensive wide area capacity among applications.
Bandwidth and QoS requirements are easy to figure in a multilayered design because the traffic flow is fairly predictable. You can also have end-to-end QoS in a multilayered design. End-to-end QoS is critical when you have real-time applications, such as a voice conversation or video presentation, and you have non-real time applications that can interfere with the real-time applications. For example, if the real-time and non-real time applications arrive at the same layer at the same time, the network must pass the real-time packets first, as well as keep latency and jitter low. QoS end-to-end is the answer.
Consider Call Admission Control (CAC) as an alternative to QoS. CAC limits the amount of traffic allowed onto the network at the ingress point. Because you know that the network will be congested at various times during the day, you can disallow additional traffic by using CAC. Also consider using traffic-shaping techniques using a traffic-shaping device. A combination of QoS, CAC and traffic shaping will provide optimal performance for applications on a converged network.
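Traffic shaping itself is often built on a token-bucket algorithm. The sketch below is a generic Python illustration of that idea, not a representation of any particular vendor's traffic-shaping feature, and the rate and burst figures are made up.

```python
import time

class TokenBucket:
    """Admit a packet only when enough byte 'tokens' have accumulated;
    anything over the configured rate must wait, be queued or be dropped."""

    def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        # Refill at the configured rate, but never beyond the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False  # over the rate: shape (delay), queue or drop

# Shape a flow to roughly 1 Mbit/s (125,000 bytes/s) with a 10 kB burst allowance.
shaper = TokenBucket(rate_bytes_per_sec=125_000, burst_bytes=10_000)
print(shaper.allow(1500))  # True while the flow stays within its allowance
```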
Managing link speed mismatches is the last element of traffic management. The mismatches, called chokepoints or bottlenecks, are a basic design issue whenever a large capacity link generates traffic destined for a low capacity link. To avoid the mismatches, carefully analyze the traffic and the device capabilities, then upgrade the interface (if needed) and apply a combination of CAC and QoS.
For more information on QoS, see the Enterprise QoS Solution Reference Network Design Guide.
Sun Library Flaw Opens Door to Remote Attacks
There is a vulnerability in a Sun Microsystems Inc. code library that enables a remote attacker to execute code on a user's machine. The flaw also affects libraries derived from the Sun library, including any BSD-derived libraries with XDR/RPC routines and the GNU C Library with sunrpc. The vulnerability is located in the Sun Network Services Library, which enables developers to incorporate XDR (External Data Representation) into their applications. XDR is a standard for the description and encoding of data and is used to transfer data between computers with different architectures.
Researchers at eEye Digital Security Inc. discovered an integer overflow in the xdrmem_getbytes() function. Depending on the location and use of the vulnerable routine, an attacker may be able to exploit this vulnerability remotely.
Projectors are a critical tool for engaging large groups and play an important role in higher education.
A projector is a tool to teach the masses. This is especially applicable to higher education, where students pile into auditorium-sized lecture halls and standard classrooms to absorb knowledge on the big screen. That’s why experts say projectors are one of the most prominent tools a college has on its campus for teaching purposes.
“You need to have a much larger image to make sure the rest of the audience in that classroom or lecture hall can see it,” says Richard McPherson, Senior Product Manager of Projectors at NEC Display Solutions. “That’s the biggest reason, that they can have a much bigger screen size. Realistically, you can’t do that cost-effectively with flat panel displays. You can do video walls, but that’s a different cost structure. You can’t manipulate it in the same manner.”
But, with big tools like projectors come big decisions that cost big bucks. Before investing in a projector for a college classroom, projector experts recommend exploring three key areas: the space that needs the projector, the color brightness and the return on investment.
Once you select the perfect projector, the next step is setting it up correctly and then using it to the best advantage going forward. This guide will help make sure that you select the right projector and know how to use it to make lessons come to life.
ARM has added a wealth of wireless technologies to its arsenal, but has no immediate plans to design a modem for mobile or Internet of Things devices.
The company offers CPU and GPU designs that are widely used in smartphones and tablets. A missing piece is a cellular modem that ARM customers could license and put in mobile devices.
ARM once considered developing a modem, but the design is complex and there are different protocols, technologies and customer requirements, said Ian Smythe, director of marketing in the CPU group at ARM.
For example, a device maker might want a modem that supports CDMA in the U.S., while phones in China use a different form of LTE than in the U.S. and Europe.
"It's quite difficult to come out with a one-size-fits-all, slam-dunk design," Forsythe said.
Qualcomm, Intel, MediaTek and Samsung are among the few companies developing modems today. Apple licenses the ARM CPU architecture, but buys modems from Qualcomm.
An ARM design could make modems more accessible to chip makers. In particular, there could be an appetite for modems in IoT devices, where ARM is growing at a rapid pace, said Jim McGregor, principal analyst at Tirias Research.
One example for the use of a modem could be in emergency and rescue situations, where data would need to be transferred to remote locations. Cellular connectivity will also be needed for remote IoT devices feeding data in real time to repositories, McGregor said.
ARM has the expertise to build a modem, but the company is fairly conservative and might not want to make the hefty investment required, McGregor said. Companies like Freescale and Nvidia have left the modem business due to the challenges involved.
There are other areas of wireless connectivity where ARM has a huge presence, such as with its Cordio Bluetooth radio. ARM will continue to focus on those areas, Smythe said.
ARM will also continue developing adjacent technologies such as CPUs for modems. The company on Thursday announced the Cortex-R8 CPU design for modems to support the buildout of 5G networks, which are expected to be deployed by 2020. Cortex-R8 will go into self-driving cars, robots, base stations and data-center equipment.
The 5G mobile standard could provide data-transfer rates that touch 10Gbps. Carriers will aggregate different types of wireless networks, including unlicensed spectrum and Wi-Fi, into 5G, which will boost mobile broadband speeds.
The convergence of a melting pot of wireless technologies could expand the use of 5G beyond just mobile devices. 5G could also be used for communication in large-scale IoT deployments.
Modems typically establish connections, while technologies like the Cortex-R8 process and prepare data for transmission, through steps like error-checking.
The Cortex-R8 will work with real-time operating systems and spur development of hardware for 5G networks. The first devices with Cortex-R8 could come out next year.
Autonomous cars need a new kind of horsepower to identify objects, avoid obstacles and change lanes. There's a good chance that will come from graphics processors in data centers or even the trunks of cars.
With this scenario in mind, Nvidia has built two new GPUs -- the Tesla P4 and P40 -- based on the Pascal architecture and designed for servers or computers that will help drive autonomous cars. In recent years, Tesla GPUs have been targeted at supercomputing, but they are now being tweaked for deep-learning systems that aid in correlation and classification of data.
"Deep learning" typically refers to a class of algorithmic techniques based on highly connected neural networks -- systems of nodes with weighted interconnections among them.
It's all part of a general trend: as more data is transmitted to the cloud via all sorts of systems and devices, it passes through deep-learning systems for answers, context and insights.
For example, Facebook and Google have built deep-learning systems around GPUs for image recognition and natural language processing. Meanwhile, Nvidia says Baidu's Deep Speech 2 speech recognition platform is built around its Tesla GPUs.
The new Teslas have the horsepower to serve as regular GPUs. The P40 has 3,840 CUDA cores, offers 12 teraflops of single-precision performance, has 24GB of GDDR5 memory and draws 250 watts of power. The P4 has 2,560 cores, delivers 5.5 teraflops of single-precision performance, has 8GB of GDDR5 memory, and draws up to 75 watts of power.
Additional deep-learning features have been added to the GPUs. Speedy GPUs usually boast double-precision performance for more accurate calculations, but the new Teslas also handle low-level calculations. Each core processes a chunk of information; these blocks of data can be strung together in order to interpret information and infer the answers to questions about, for example, what objects are included in images, or what words are being spoken by people who are talking to each other.
Deep-learning systems rely on such low-level calculations for inferencing mostly because double-precision calculations -- which would deliver more accurate results but require more processing power -- would slow down GPUs.
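As a rough illustration of what such reduced-precision arithmetic looks like, the following Python/NumPy snippet quantizes a handful of made-up float32 weights to int8 and back. It is a toy example, not Nvidia's actual scheme.

```python
import numpy as np

# Toy example: map a few made-up float32 "weights" to 8-bit integers with a
# single scale factor, then map them back. The int8 values are cheap to move
# and multiply, at the cost of a small rounding error.
weights = np.array([0.42, -1.30, 0.07, 2.15], dtype=np.float32)

scale = np.abs(weights).max() / 127.0
quantized = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale

print(quantized)     # e.g. [ 25 -77   4 127]
print(dequantized)   # close to, but not exactly, the original values
```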
Nvidia earlier this year released the Tesla P100, which is faster than the upcoming P4 and P40. The P100 is aimed at high-end servers and used to fine-tune deep-learning neural networks.
The new Tesla P4 and P40 GPUs have low-level integer and floating point-processing for deep learning and can also be used for inferencing and approximation at a local level. The idea is that certain types of systems and cars can't always be connected to the cloud, and will have to do processing locally.
Low-level processing for approximations is also being added by Intel to its upcoming chip called Knights Mill, which is also designed for deep learning.
The Tesla P4 and P40 succeed the Tesla M4 and M40, which were released last year for graphics processing and virtualization. The new GPUs will be able to do those things as well.
The Tesla P40 will ship in October, while the P4 will ship in November. The GPUs will be available in servers from Dell, Hewlett Packard Enterprise, Lenovo, Quanta, Wistron, Inventec and Inspur. The server vendors will decide the price of the GPUs.
37.3 million users around the world were subjected to phishing attacks in the last year, a massive 87 percent increase over the number of users targeted in 2011-2012.
According to the results of Kaspersky Lab research into the evolution of phishing attacks, the attacks were most frequently launched from the U.S., the U.K., Germany, Russia and India. Those most often targeted are users in Russia, the U.S., India, Germany, Vietnam, the U.K., France, Italy, China and Ukraine, which together account for 64 percent of all phishing attack victims within the observed period.
Yahoo!, Google, Facebook and Amazon are the top targets of malicious users. Online game services, online payment systems, and the websites of banks and other credit and financial organizations are also common targets, as are email services, social networks, online stores and auction venues, blogs, IT company websites, and telecom operator websites.
The number of fraudulent websites and servers used in attacks has more than tripled since 2012, and more than 50 percent of the total number of individual targets were fake copies of the websites of banks and other credit and financial organizations.
The Top 30 websites that are copied the most often by phishers are mostly services and companies whose names are known by a mass audience. The number of attacks against one or another online resource may correspond directly to its popularity.
Depending on the country, the list of the websites that are visited may change — this is typically influenced by local user preferences.
For example, in the U.S. the top three most targeted sites are Yahoo!, Facebook and Google. The list for Russia goes like this: Odnoklassniki.ru, VKontakte, and Google Search.
Internet users can encounter links to phishing sites either by surfing the web or via email, but according to the research, the overwhelming majority of phishing attacks are launched against users when they are surfing the web, taking the form of banners on legitimate websites, messages on forums and blogs, and private messages on social networks.
Free Speech Online: Where Is the Line Drawn?
When America’s Founding Fathers drafted the First Amendment, no one could have imagined there would eventually be a technology — the Internet — that would allow Americans to speak to people around the globe within seconds. Now, more than 200 years later, some wonder if the existence of this modern technology complicates one of America’s basic rights: the freedom of speech.
“The Internet doesn’t change the dynamic in any fundamental way. What it does is it presses hard on some existing problems,” said John Palfrey, faculty co-director of the Berkman Center for Internet & Society at Harvard University and a principal investigator with the OpenNet Initiative.
“If I say something that’s harmful about you online, it can be read instantaneously by billions of people around the world at basically no cost. The number of people who can hear [that] speech [can] be vastly greater than it could have been before, and many more people are holding the megaphone that could reach that large group of people.”
Free-speech advocates argue that despite this scope and speed, it’s unnecessary to create laws that restrict speech online. Others disagree. On the Internet, there’s no segregation of material, no cellophane wrapper, nothing to protect children from seeing graphic pornography unless you’re proactive.
Welcome to the Village Screen
Before the rise of technology, communities came together at the village green, but now with the Internet, people from around the globe meet on the “village screen,” and that presents some unique challenges, according to Gene Policinski, vice president and executive director of the First Amendment Center.
“We have more opportunities to express ourselves than we had even 20 years ago,” Policinski explained. “Speech that might have gone unnoticed, that might have caused no harm, now gets noticed [and] can be global and eternal. We’re seeing comments about one’s employer, one’s principal or one’s teacher — that might have been scrawled on the wall or in a note — now posted on a Facebook page.”
Even though this is a new wrinkle in the free speech debate, Policinski doesn’t see the need for new laws.
“I’m very wary of proposals that restrict speech just on the Web for some special reason,” he explained. “I’m sure when the telegraph, telephone, radio and TV were new, everybody thought we needed special kinds of regulations [on] that speech.”
Brock Meeks, director of communications for the Center for Democracy & Technology, agrees.
“We want prosecutors to use the laws that are on the books right now to go after the perpetrators of crime on the Internet, not to create new laws just because something is being carried out in cyberspace,” he said. “To put those kinds of restrictions online or to treat the Internet differently than the nonelectronic world just doesn’t work.”
The government has tried to do this before and failed.
Take the Communications Decency Act (CDA), which “was the very first piece of legislation that tried to put restrictions on how people spoke on the Internet,” Meeks said. In 1997, the Supreme Court struck down the CDA except for Section 230, which says “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
This decision gave the Internet the same free speech protection as print.
“You can print four-letter words in magazines and newspapers, and it’s not against the law,” Meeks said. “[But] you can’t say it on TV on open broadcast networks without getting in trouble [with] the [Federal Communications Commission]. On the Internet, those standards don’t exist, those laws do not transfer, and the 1996 ACLU v. Reno [decision] cemented that First Amendment protection.”
There are certain forms of speech that are not protected under the First Amendment, though, such as defamation, certain types of incitement to violence and child pornography — that speech is illegal both in the online and offline mediums, Palfrey said.
Policinski believes the government should define these limits “with great caution.”
“One person’s hate speech is another person’s political statement,” he explained. “The First Amendment really exists to protect speech on the fringe because if it’s speech we all agree is fine, it doesn’t need to be protected. So by definition, the First Amendment protects speech that pushes the limits of what you or I or someone else might find comfortable.”
That’s exactly what happened during the Civil Rights Movement — people talked about issues many Americans weren’t comfortable with. If we didn’t have the freedom of speech, the United States might be in a different place today.
“Civil rights advocates would have been labeled hate speakers for trying to upset the customs, habits and sometimes laws of the nation regarding segregation,” Policinski said. “You just have to imagine what our society would be like today had they been prevented from speaking even though at the time perhaps the majority of Americans didn’t want to hear what they had to say.”
What About Pornography?
In 2006, there were 4.2 million pornographic Web sites, 420 million pornographic Web pages and 68 million daily pornographic search engine requests, according to the Internet Filter Review.
Because of this prevalence, Donna Rice Hughes, president of Enough Is Enough, doesn’t believe the status quo works, especially when a simple search for “water sports” can return sites with urination pornography. The Internet has thrown open the doors to pornography for adults and, even more unsettling, for children, she said.
“The early pioneers in the Internet industry will tell you behind closed doors that one of the ways they [made] and still do make money is because of the access that people have to pornography,” Hughes explained. “But having an entire generation of youth fed a steady diet of very hard-core material is not worth that price.”
According to Hughes, there are three types of pornography: child pornography, obscenity and indecency. In the U.S., it is a federal crime to make, produce, distribute or possess child pornography.
Obscenity — also not protected under the First Amendment — refers to hard-core material or deviant forms of pornography such as bestiality, incest and rape. “[Still], it’s everywhere on the Internet because the federal obscenity statutes are not being aggressively enforced,” she said.
The third kind of pornography is indecency, which is “programming [that] contains patently offensive sexual or excretory material that does not rise to the level of obscenity,” according to the FCC. Indecency is constitutionally protected for consenting adults, but not for minor children.
The Child Online Protection Act (COPA), which was enjoined in 1998 as soon as it was signed into law, would have protected minors from these forms of pornography on the Internet. After being jostled about in the courts for 10 years, the Supreme Court declined to hear the case again in January, effectively killing COPA, said Hughes, who was on the COPA Commission.
“It never went into effect and it never will go into effect because it’s dead now,” she explained. “The net result is that all these years there has not been a cyber brown wrapper, if you will, to screen minor children from getting into any of these porn sites online.”
Hughes would like to see the same standards of decency for broadcast applied to the Internet.
“The Internet shouldn’t get a free pass,” she said. “Since it has become the M.O. of how we communicate, then shouldn’t we have some rules for the road?
“If you could turn on the television and see people having sex, women having sex with dogs, people urinating in sexual ways, then that would be the same as the Internet. With television, if you want to get something that’s adult, you have to opt in to get it. When you turn on the Internet, you’ve got everything.”
But Hughes doesn’t believe this will change, as evidenced by what happened to the CDA and COPA.
“To go in and shift the paradigm to where everything’s locked down, and if you want free access to everything you’ve got to start opting out of the safe zone, that’s a huge jump from where we are. I don’t think it’s going to happen,” she said.
Enough Is Enough has developed a three-pronged solution to provide a safe environment for children.
First, end users — especially those responsible for children — need to be educated on the dangers that exist on the Internet and implement safety measures to protect kids. Second, the technology industry must implement IT solutions and develop family-friendly policies. Third, there must be aggressive enforcement of existing laws and enactment of new laws to stop “the sexual exploitation and victimization of children using the Internet,” according to the organization’s Web site.
“You can’t expect parents and the public to enforce the law, and you can’t expect government to parent kids,” Hughes said. “Everybody’s got a unique role, and if everyone’s doing their part, then you’ve got a very strong chance that kids are going to be much safer online. But we’ve still got a long way to go in each of those areas.”
How Do Other Countries Tackle This Issue?
Not every country is as tolerant of free speech as the U.S. According to the 2007 OpenNet Initiative study, 25 out of 41 countries surveyed engaged in Internet censorship, and that number is on the rise, Palfrey said.
The most basic form of censorship can be found in Saudi Arabia, where there is a single gateway that everyone has to go through.
“Whenever somebody tries to access the Internet from Saudi Arabia, it goes through this proxy system,” Palfrey said. “The request from the user is judged against a blacklist, which says, ‘Is this site acceptable material or not?’ If it’s on the blacklist, they do not return the page.”
In direct contrast to that is China’s filtering system, which is a complicated multi-level strategy with a gateway at every possible level, and many people share the responsibility of filtering the Internet.
“They [effectively] erected the Great Firewall of China around the edge of the country, [which] turned out to be porous,” Palfrey said. “So at the Internet service provider level, there are blocks for material that [is] deemed to be harmful; there are blocks on search engines, including Google and others based in the United States; there are blocks through blog servers; there are blocks at the university level; there are blocks at the cybercafe level; and so forth.”
China is one of the most repressive filtering regimes. Anything that is a threat to its form of government or way of life is censored, Meeks said.
“Let’s look, for example, at the big earthquake that happened in China,” he explained. “People got all upset because there [were] a lot of schools that crumbled and children died. People got on the Internet criticizing the way the government handled that construction. The Chinese government stepped in and started to shut down access to information about construction and arrested people who were speaking out against the government.”
But Meeks doesn’t believe the Internet can be censored effectively even in China.
“[China has] their hand on the information pipe, and they squeeze it pretty tight,” he said. “There are ways to get around that, and people are finding ways to circumvent the Chinese censors all the time. But it’s kind of like escalating warfare. The Chinese clamp down harder, and then new tools spring up and find better and faster ways of circumventing that censorship. The Chinese government [then] retaliates by finding out what those are and clamping down even harder — so it goes back and forth.”
One might argue that any type of censorship runs contrary to the nature of the Internet, which is inherently about the free flow of information.
“One of the great advantages of being able to use the Internet is that people feel empowered to say things that they may not say face-to-face,” Meeks said. “If you are being censored, it chills the way you speak; it chills the way you use the Internet. It drops to the lowest common denominator, so things become no more useful than the dialogue taking place in an elementary school classroom.”
– Lindsay Edmonds Wickman, editor (at) certmag (dot) com | <urn:uuid:b44c3c85-ae55-405b-bb80-f5e41eeabe5a> | CC-MAIN-2017-09 | http://certmag.com/free-speech-online-where-is-the-line-drawn/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00479-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.952711 | 2,813 | 2.875 | 3 |
The Internet of Things (IoT) extends the end node far beyond the human-centric world to encompass specialised devices with human-accessible interfaces, such as smart home thermostats and blood pressure monitors, as well as devices without human interfaces, including industrial sensors, network-connected cameras and traditional embedded systems.
As IoT grows, the need for real-time scalability to handle dynamic traffic bursts also increases. There may also be a need to handle very low-bandwidth small data streams, such as a sensor identifier or a status bit from a door sensor, as well as large high-bandwidth streams such as high-def video from a security camera. Consider the following examples and the applicability of network-connected devices to IoT.
Homes and offices
Utility meters send complex data packets to service providers where centralised systems provide real-time monitoring to proactively detect and remediate problems such as blackouts, water leaks and circuit overloads.
Data is analysed to improve efficiency by determining needs, spotting trends, and predicting demand. By virtue of its smart IoT fixtures, the city of Oslo reduced energy costs by 62%.
From heartbeat-sensing fitness bands to step-counting smartphone apps, wearables are the public face of IoT. A portable device is connected to a service that aggregates data and, increasingly, shares it across social media, with a doctor or even a gym. The cloud-based services also push back analytics, motivational graphics and music, and location-based maps.
Hospitals utilise several smart devices, both standalone and those wired to nurses’ station monitors. Soon, these will be interconnected through a highly available and secure network with server-based applications that can track patient conditions by correlating all data – not just nurses’ readings – allowing better monitoring, data logging and big data analytics. An IoT-connected network helped St. Luke’s Medical Center reduce patient-bed turnaround time by 51 minutes.
Factories and warehouses
The flow of materials must be monitored and optimised for efficiency. Location sensors are embedded in components moving through assembly lines and inventory systems. The locations of forklifts, pallets and workers are tracked as well, while centralised software directs the activity in real time to respond effectively to customer requests.
By implementing IoT-based predictive maintenance and quality control, BMW reduced auto-warranty costs by 5% and cut the scrap rate of defective vehicles by 80%.
Dynamic application delivery
The applications mentioned above are only a sample, and there are plenty more; what they share is a need for dynamic application delivery. When an IoT node performs a service request, such as sending a medical data packet, the application delivery controller (ADC) determines which server, virtual or physical, can handle the request.
The packet is then sent to the appropriate server for processing, while measuring the performance of the application and availability of the server. Application delivery technology can also remember which application server is handling a specific IoT node’s service requests.
When subsequent packets arrive from the same IoT node as part of the same request, the session will continue with the same server, ensuring continuity of the traffic stream and reducing the need for renegotiation.
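As a rough illustration of this persistence idea, the short Python sketch below keeps a small mapping from an IoT node's identifier to the server that first handled it, so later packets from the same node land on the same server. It is not tied to any particular ADC product, and the node IDs and server names are invented placeholders.

# Minimal sketch of ADC-style session persistence, assuming each IoT
# node presents a stable identifier with every request.
servers = ["app-server-1", "app-server-2", "app-server-3"]  # hypothetical pool
sticky_table = {}  # node_id -> assigned server

def pick_server(node_id):
    # Reuse the server already assigned to this node, if any.
    if node_id in sticky_table:
        return sticky_table[node_id]
    # Otherwise choose one (a simple hash-based spread) and remember it.
    server = servers[hash(node_id) % len(servers)]
    sticky_table[node_id] = server
    return server

# Subsequent packets from the same node stay on the same server,
# avoiding renegotiation of the session.
print(pick_server("door-sensor-0042"))
print(pick_server("door-sensor-0042"))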
An application delivery controller also monitors the health of application servers. Common statistics are processor and memory utilisation, server response time, and how different protocols are handled.
When the servers slow down or become unresponsive, advanced load balancers dynamically route traffic to other servers to reduce client interruption.
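A similarly simplified sketch shows how such health statistics might drive routing decisions; the thresholds, metrics and server names below are invented for illustration and are not drawn from any real product.

# Hypothetical health check: route new traffic away from servers that
# report high CPU load or slow responses.
server_stats = {
    "app-server-1": {"cpu": 0.45, "response_ms": 30},
    "app-server-2": {"cpu": 0.97, "response_ms": 480},  # struggling
    "app-server-3": {"cpu": 0.60, "response_ms": 55},
}

def healthy(name, max_cpu=0.90, max_response_ms=250):
    stats = server_stats[name]
    return stats["cpu"] <= max_cpu and stats["response_ms"] <= max_response_ms

def route_request():
    candidates = [s for s in server_stats if healthy(s)]
    if not candidates:
        raise RuntimeError("no healthy servers available")
    # The least-loaded healthy server gets the next request.
    return min(candidates, key=lambda s: server_stats[s]["cpu"])

print(route_request())  # "app-server-1" in this made-up example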
Evolution of the load balancer
Modern load balancers focused on application delivery are more sophisticated and operate from Layer 4 up through Layer 7, the application layer, making them more in tune with application server software, how client responses should be handled, and the specific services being requested by IoT end nodes.
ADCs provide packet encryption/decryption, reducing server workload and making it possible to apply advanced policies and processing on secured traffic streams while maintaining end-to-end security.
Global Server Load Balancing (GSLB) allows the intelligent distribution of end-node traffic across private and public clouds based on proximity, performance or manually defined business rules for optimal data handling and communication.
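The same idea can be approximated in miniature: the toy example below simply maps a requesting node's region to the nearest endpoint and falls back to a manually defined order, whereas a real GSLB deployment would also weigh measured latency, server health and business rules. All names and URLs here are placeholders.

# Toy global server load balancing: choose an endpoint by region first,
# then by a manually defined fallback policy.
endpoints = {
    "eu-west": "https://eu.iot.example.net",
    "us-east": "https://us.iot.example.net",
    "apac": "https://ap.iot.example.net",
}
fallback_order = ["us-east", "eu-west", "apac"]  # hypothetical business rule

def resolve(region, available=endpoints):
    if region in available:
        return available[region]
    for candidate in fallback_order:
        if candidate in available:
            return available[candidate]
    raise RuntimeError("no endpoint available")

print(resolve("eu-west"))  # nearest endpoint for a European node
print(resolve("sa-east"))  # no local endpoint, so the policy picks us-east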
To facilitate the dynamic cloud infrastructure, modern ADCs have also been adapted to integrate into virtual environments.
An IoT application may include millions of participating devices. The number of connections and the amount of data is neither consistent nor predictable.
Here are a few of the more pressing challenges to providing the backend connectivity and customer satisfaction in IoT applications.
Handling huge amounts of traffic
A motion detector video camera maintains a minimal connection to a cloud-based application server, perhaps a periodic ‘heartbeat’ packet that provides operational status.
When the motion detector is tripped, the camera transmits streams of high definition video to be stored and analysed.
IoT systems designers must plan to manage any quantity of data in unpredictable bursts without dropping packets, overloading the network or overwhelming servers – all while accommodating BI analytics software operating in real time.
Maintaining fast response times and quality of service
An embedded industrial application or a warehouse application that directs workers to pick up and deliver materials is a failure if it freezes or is slow to process location-awareness packets.
The server infrastructure and network design of an IoT application must focus on both maintaining fast response time and ensuring robust quality of service (QoS), especially in real-time location-aware applications.
Security, privacy and regulatory compliance
Whether an IoT application is industrial or consumer, enterprise or personal, data must be protected in transit and at rest.
Applications store current and historical data about an individual’s health, location and finances along with the location and quantity of inventory, business orders and more. Data must be secured against theft and tampering.
This can be challenging when data is transmitted across the Internet or even secured private networks and VPN tunnels.
Government regulations such as HIPAA or restrictions on transporting data across international borders may also apply.
Key IoT security tasks include ensuring that application-level protections, such as DDoS attack mitigation, extend all the way out to end-points, and incorporating measures that confirm the identity of entities requesting access to data, including multi-factor authentication.
The Internet of Things is now
The Internet of Things includes the connected refrigerator plus thousands of medical devices in hospitals; smart utility meters; GPS-based location systems; fitness trackers; toll readers; motion detector security cameras; smoke detectors; and embedded systems.
Each of those IoT end nodes requires connectivity, processing and storage, some local, some in the cloud. This means scalability, reliability, security, compliance and application elasticity to adapt to dynamic requirements and ever-changing workloads.
Now is the time for network administrators to fully scope out all of their ‘Internets’ and how everything interconnects, from how ERP software systems maintain monitoring rules and governance to how APIs talk to M2M application platforms, to how asset and device management mechanisms orchestrate version control and location metrics.
Sourced from Atchison Frazer, KEMP Technologies
Do we really need another programming language? There is certainly no shortage of choices already. Between imperative languages, functional languages, object-oriented languages, dynamic languages, compiled languages, interpreted languages, and scripting languages, no developer could ever learn all of the options available today.
And yet, new languages emerge with surprising frequency. Some are designed by students or hobbyists as personal projects. Others are the products of large IT vendors. Even small and midsize companies are getting in on the action, creating languages to serve the needs of their industries. Why do people keep reinventing the wheel?
The answer is that, as powerful and versatile as the current crop of languages may be, no single syntax is ideally suited for every purpose. What's more, programming itself is constantly evolving. The rise of multicore CPUs, cloud computing, mobility, and distributed architectures has created new challenges for developers. Adding support for the latest features, paradigms, and patterns to existing languages -- especially popular ones -- can be prohibitively difficult. Sometimes the best answer is to start from scratch.
Here, then, is a look at 10 cutting-edge programming languages, each of which approaches the art of software development from a fresh perspective, tackling a specific problem or a unique shortcoming of today's more popular languages. Some are mature projects, while others are in the early stages of development. Some are likely to remain obscure, but any one of them could become the breakthrough tool that changes programming for years to come -- at least, until the next batch of new languages arrives.
Experimental programming language No. 1: Dart
Experimental programming language No. 2: Ceylon
Gavin King denies that Ceylon, the language he's developing at Red Hat, is meant to be a "Java killer." King is best known as the creator of the Hibernate object-relational mapping framework for Java. He likes Java, but he thinks it leaves lots of room for improvement.
Among King's gripes are Java's verbose syntax, its lack of first-class and higher-order functions, and its poor support for meta-programming. In particular, he's frustrated with the absence of a declarative syntax for structured data definition, which he says leaves Java "joined at the hip to XML." Ceylon aims to solve all these problems.
King and his team don't plan to reinvent the wheel completely. There will be no Ceylon virtual machine; the Ceylon compiler will output Java bytecode that runs on the JVM. But Ceylon will be more than just a compiler, too. A big goal of the project is to create a new Ceylon SDK to replace the Java SDK, which King says is bloated and clumsy, and it's never been "properly modernized."
That's a tall order, and Red Hat has released no Ceylon tools yet. King says to expect a compiler this year. Just don't expect software written in "100% pure Ceylon" any time soon.
Experimental programming language No. 3: Go
Interpreters, virtual machines, and managed code are all the rage these days. Do we really need another old-fashioned language that compiles to native binaries? A team of Google engineers -- led by Robert Griesemer and Bell Labs legends Ken Thompson and Rob Pike -- says yes.
Go is a general-purpose programming language suitable for everything from application development to systems programming. In that sense, it's more like C or C++ than Java or C#. But like the latter languages, Go includes modern features such as garbage collection, runtime reflection, and support for concurrency.
Equally important, Go is meant to be easy to program in. Its basic syntax is C-like, but it eliminates redundant syntax and boilerplate while streamlining operations such as object definition. The Go team's goal was to create a language that's as pleasant to code in as a dynamic scripting language yet offers the power of a compiled language.
Go is still a work in progress, and the language specification may change. That said, you can start working with it today. Google has made tools and compilers available along with copious documentation; for example, the Effective Go tutorial is a good place to learn how Go differs from earlier languages.
Experimental programming language No. 4: F#
Functional programming has long been popular with computer scientists and academia, but pure functional languages like Lisp and Haskell are often considered unworkable for real-world software development. One common complaint is that functional-style code can be difficult to integrate with code and libraries written in imperative languages like C++ and Java.
Enter F# (pronounced "F-sharp"), a Microsoft language designed to be both functional and practical. Because F# is a first-class language on the .Net Common Language Runtime (CLR), it can access all of the same libraries and features as other CLR languages, such as C# and Visual Basic.
F# code resembles OCaml somewhat, but it adds interesting syntax of its own. For example, numeric data types in F# can be assigned units of measure to aid scientific computation. F# also offers constructs to aid asynchronous I/O, CPU parallelization, and off-loading processing to the GPU.
After a long gestation period at Microsoft Research, F# now ships with Visual Studio 2010. Better still, in an unusual move, Microsoft has made the F# compiler and core library available under the Apache open source license; you can start working with it for free and even use it on Mac and Linux systems (via the Mono runtime).
Experimental programming language No. 5: Opa
Building a Web application today typically means juggling several languages at once: markup and JavaScript on the client, another language on the server, and SQL for the database. Opa doesn't replace any of these languages individually. Rather, it seeks to eliminate them all at once, by proposing an entirely new paradigm for Web programming. In an Opa application, the client-side UI, server-side logic, and database I/O are all implemented in a single language, Opa.
Naturally, a system this integrated requires some back-end magic. Opa's runtime environment bundles its own Web server and database management system, which can't be replaced with stand-alone alternatives. That may be a small price to pay, however, for the ability to prototype sophisticated, data-driven Web applications in just a few dozen lines of code. Opa is open source and available now for 64-bit Linux and Mac OS X platforms, with further ports in the works.
Experimental programming language No. 6: Fantom
Should you develop your applications for Java or .Net? If you code in Fantom, you can take your pick and even switch platforms midstream. That's because Fantom is designed from the ground up for cross-platform portability. The Fantom project includes not just a compiler that can output bytecode for either the JVM or the .Net CLI, but also a set of APIs that abstract away the Java and .Net APIs, creating an additional portability layer.
But portability is not Fantom's sole raison d'être. While it remains inherently C-like, it is also meant to improve on the languages that inspired it. It tries to strike a middle ground in some of the more contentious syntax debates, such as strong versus dynamic typing, or interfaces versus classes. It adds easy syntax for declaring data structures and serializing objects. And it includes support for functional programming and concurrency built into the language.
Fantom is open source under the Academic Free License 3.0 and is available for Windows and Unix-like platforms (including Mac OS X).
Experimental programming language No. 7: Zimbu
Most programming languages borrow features and syntax from an earlier language. Zimbu takes bits and pieces from almost all of them. The brainchild of Bram Moolenaar, creator of the Vim text editor, Zimbu aims to be a fast, concise, portable, and easy-to-read language that can be used to code anything from a GUI application to an OS kernel.
Owing to its mongrel nature, Zimbu's syntax is unique and idiosyncratic, yet feature-rich. It uses C-like expressions and operators but defines its own keywords, data types, and block structures. It supports memory management, threads, and pipes.
Portability is a key concern. Although Zimbu is a compiled language, the Zimbu compiler outputs ANSI C code, so binaries can be built on any platform that has a native C compiler.
Unfortunately, the Zimbu project is in its infancy. The compiler can build itself and some example programs, but not all valid Zimbu code will compile and run properly. Not all proposed features are implemented yet, and some are implemented in clumsy ways. The language specification is also expected to change over time, adding keywords, types, and syntax as necessary. Thus, documentation is spotty, too. Still, if you would like to experiment, preliminary tools are available under the Apache license.
Experimental programming language No. 8: X10
Parallel processing was once a specialized niche of software development, but with the rise of multicore CPUs and distributed computing, parallelism is going mainstream. Unfortunately, today's programming languages aren't keeping pace with the trend. That's why IBM Research is developing X10, a language designed specifically for modern parallel architectures, with the goal of increasing developer productivity "times 10."
X10 handles concurrency using the partitioned global address space (PGAS) programming model. Code and data are separated into units and distributed across one or more "places," making it easy to scale a program from a single-threaded prototype (a single place) to multiple threads running on one or more multicore processors (multiple places) in a high-performance cluster.
X10 code most resembles Java; in fact, the X10 runtime is available as a native executable and as class files for the JVM. The X10 compiler can output C++ or Java source code. Direct interoperability with Java is a future goal of the project.
For now, the language is evolving, yet fairly mature. The compiler and runtime are available for various platforms, including Linux, Mac OS X, and Windows. Additional tools include an Eclipse-based IDE and a debugger, all distributed under the Eclipse Public License.
Experimental programming language No. 9: haXe
Lots of languages can be used to write portable code. C compilers are available for virtually every CPU architecture, and Java bytecode will run wherever there's a JVM. But haXe (pronounced "hex") is more than just portable. It's a multiplatform language that can target diverse operating environments, ranging from native binaries to interpreters and virtual machines.
Mayol M. (CREAF, Cerdanyola del Valles 08193, Spain); Riba M. (University of Barcelona); Gonzalez-Martinez S.C. (Forest Research Center, Madrid 28040, Spain); Bagnoli F. (National Research Council, Italy); and 10 more authors.
New Phytologist | Year: 2015
Despite the large body of research devoted to understanding the role of Quaternary glacial cycles in the genetic divergence of European trees, the differential contribution of geographic isolation and/or environmental adaptation in creating population genetic divergence remains unexplored. In this study, we used a long-lived tree (Taxus baccata) as a model species to investigate the impact of Quaternary climatic changes on genetic diversity via neutral (isolation-by-distance) and selective (isolation-by-adaptation) processes. We applied approximate Bayesian computation to genetic data to infer its demographic history, and combined this information with past and present climatic data to assess the role of environment and geography in the observed patterns of genetic structure. We found evidence that yew colonized Europe from the East, and that European samples diverged into two groups (Western, Eastern) at the beginning of the Quaternary glaciations, c. 2.2 Myr before present. Apart from the expected effects of geographical isolation during glacials, we discovered a significant role of environmental adaptation during interglacials at the origin of genetic divergence between both groups. This process may be common in other organisms, providing new research lines to explore the effect of Quaternary climatic factors on present-day patterns of genetic diversity. © 2015 New Phytologist Trust.
A focal point requirement of the 2002 No Child Left Behind Act (NCLB), federal legislation aimed at improving the performance of U.S. primary and secondary schools, was to implement accountability systems that analyze student and educator data, and report those results to the U.S. Department of Education. These reporting systems were heralded as an effective way to help state departments of education collect statistics to assess teacher proficiency and student progress.
"It's really important to be able to follow individual students from grade to grade, school to school, district to district and see how they are doing over time," said Jim Hull, a policy analyst at the National School Boards Association (NSBA). "We haven't been able to do that before."
In addition, educational data systems offer the advantage of getting assessment data back to educators faster than before. "You had the old-fashioned assessment test that's taken in April and no one gets results until October or November. It's not a useful timeframe; those kids have moved on," said Ann Flynn, director of education technology programs at the NSBA.
While there is little doubt that collecting more specific data - and publishing the results more quickly - is beneficial to educators, many states have struggled with how to best implement data systems. Limited funding, institutional resistance to change, and schools' use of various student information systems have been impediments.
New Mexico officials, however, believe they have solved some of those issues with the state's Student Teacher Accountability Reporting System (STARS).
Longitudinal Student Data
STARS is a statewide, "longitudinal" educational information system that collects data from students through all grade levels, starting in kindergarten and continuing through the 12th grade. Although NCLB doesn't require longitudinal systems, states such as Florida have shown that having that kind of long-term data can be a useful measurement when assessing how well a school, district or state is meeting educational benchmarks for schools and individual students. Florida has been electronically collecting its longitudinal student data since the 1980s, allowing the state to make decisions based on comprehensive, accurate and timely data about its schools.
The STARS system collects and aggregates a variety of student data: demographics and achievement information, exam scores on state- and federally mandated tests, districts' financial information, and teacher licensing data. "At a minimum, the system collects information on students, teachers, staff, programs and schools," said Philip Benowitz of Deloitte Consulting, the engagement director for the STARS project. "But there's no limit to what the system could collect. We're still in the early stages of understanding what makes sense and what's really valuable."
Moreover, the system standardizes data so it can be reported to the federal government as required by NCLB.
But Benowitz asserts that STARS has more value than just for NCLB compliance. New Mexico can provide data to the school districts for their own analyses and use. "The intent and the spirit is to put the data in the hands of educators and analysts who can make a difference in student achievement - the classroom teacher, the principal, the state educational analyst," he said. "People who can help to improve the curriculum and student achievement."
Overcoming Interoperability Issues
New Mexico CIO Roy Soto said it was a challenge to determine the best way to collect and consolidate data. "New Mexico is no different from any other state. We have 89 school districts, all collecting data in a different form and fashion."
Unlike many other states, New Mexico had been collecting student-level data since 1997 with the STARS predecessor, the Accountability Data System (ADS). But ADS had maintenance and system integrity issues. Before making critical implementation decisions on a new system, the state conducted several legislative audits. After careful consideration of the results, the state chose a data warehouse solution and put out an RFP to find a vendor.
"We basically took the audits, with specific emphasis on what needed to be fixed, and put it into our request for proposal," said Robert Piro, CIO of the New Mexico Public Education Department (PED). "Deloitte Consulting presented us with a solution based on eScholar and Cognos."
With eScholar, an educational data collection and analysis tool, and Cognos business intelligence software, Deloitte created a commercial-off-the-shelf system that allows school districts to collect data as they've always done.
"In New Mexico, there are a dozen or more student information system vendors that have systems in place in one or more of the 89 school districts. The last thing we wanted to do is mandate that they all use the same system," Benowitz said. With the STARS solution, school districts can continue using their existing systems and produce a flat extract data file that can be uploaded to the data warehouse automatically.
New Mexico implemented the system in nine months. "We started the prototype in December of 2005, and then did a pilot with 11 districts in spring 2006," Piro said. "We're now in our second year of data collection."
One of New Mexico's biggest hurdles was change management. Since the system was implemented in less than one year, there was pushback at the district level from some educators.
"When you have so many different entities that are basically independent, doing things a certain way, it's hard," Soto said. "Some people saw it as, 'Here comes Big Brother.'"
Although school districts could keep their internal systems, the move to STARS required a redefinition of processes for what kind of data to collect and when to collect it. This caused some consternation from districts that already had workflows in place.
Daryl Landavazo, New Mexico's STARS IT project manager, said the districts have been collecting student-level data for some time. "So the assumption was, 'We're using data; we know how to report, and we know what we're collecting,'" he said. "But that's not always the case."
To combat resistance, the STARS team marketed a proof-of-concept system to both the school districts and the Legislature. "We showed them the proof of concept, and said,
Historically, the Earth's climate has ranged from the snowball Earth, with glaciers in tropical regions, to a hothouse with ice-free poles. The transitions between different states—climate changes—are driven by changes in the factors that force the climate. These include differences in incoming solar radiation (both the amount and its distribution), the concentration of greenhouse gases in the atmosphere, and the amount of sunlight the planet reflects back into space (albedo).
In addition to the primary forcings, there are many feedbacks in the system. For example, rising temperatures melt ice, which normally reflects sunlight back into space; this effect will tend to reinforce any trends. They will also increase the average levels of water vapor in the atmosphere. Water vapor both acts as a greenhouse gas, trapping additional heat, and may increase the cloud cover, which can alter the planet's albedo. (At present, the net impact of increased water vapor as a feedback is uncertain.)
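For a rough sense of how these factors combine, the textbook zero-dimensional energy balance is a useful sketch (it deliberately ignores feedbacks and geography, so treat it as an illustration rather than a climate model):

\frac{S_0}{4}(1 - \alpha) = \sigma T_e^4

Here S_0 is the incoming solar radiation (about 1361 W/m^2), \alpha is the planetary albedo (about 0.3), \sigma is the Stefan-Boltzmann constant, and T_e is the effective emission temperature. Plugging in the numbers gives T_e of roughly 255 K, about 33 K colder than the observed surface average near 288 K; that gap is maintained by greenhouse gases, which is why changes in solar input, in albedo, or in greenhouse gas concentrations all act as forcings on the climate.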
One of the most dramatic examples of historic climate change is the Paleocene-Eocene Thermal Maximum (PETM), a sudden rise in temperatures that occurred about 55 million years ago. The PETM was accompanied by a geologically sudden influx of carbon into the atmosphere, indicating that the warming was caused by changes in greenhouse forcings. The change in climate was accompanied by a significant disruption in terrestrial ecosystems, and a major extinction event in the oceans.
Over the past century, the Earth has undergone a change in temperature that cannot be accounted for by accompanying changes in solar activity. The pattern of warming, however, is consistent with increased greenhouse forcings, and atmospheric concentrations of carbon dioxide, a long-lived greenhouse gas, have gone up dramatically due to land use changes and the combustion of fossil fuels. As a result, a large majority of scientists and scientific organizations have concluded that the current period of climate change is being driven primarily by human activity.
E-mail, the venerable old standard for internet text messages, dating back to the early 1980s (and to the early 1970s in other forms), has long been the "killer app" of the internet. While many companies try to build the next great thing that will capture users around the world, none of these compare to the success of e-mail. It is likely the single most entrenched application-layer protocol in use today.
Thanks to STELLARWIND and other NSA programs, we have also seen that it has failed in a very real, and very important way.
But this isn’t exactly news is it?
The security issues around e-mail – or rather, the complete lack of security in the protocol – have been well understood for decades. Yet, in all those years of existence, all we who care about security have managed to do is glue ineffective solutions onto it.
In the last year, since the scope of NSA’s spying has been made clear to the world – and the reminder that the NSA isn’t the only player in this game, the use of STARTTLS has spread dramatically. Many people have worked hard to make this happen – and it really has made things better. Kinda.
While STARTTLS does enable TLS, thus encrypting the data over the wire, it’s far from perfect:
- Opportunistic – In most cases the encryption is opportunistic, meaning that certificates aren't validated, and if something goes wrong in the TLS negotiation, the connection will fail open – passing data in the clear.
- Server Trust – E-mail as it exists today places complete trust not only in the sending server, but also in the recipient's server – and every other relaying server. Any of these can log all correspondence, as the data is only encrypted during transport. TLS can be added or dropped at any point in the chain, and this exposes multiple possible intercept points.
Based on the famous “SSL added and removed here” NSA slide, we see that even if the messages are sent over an encrypted connection, that doesn’t mean they stay encrypted when moved around within a company. There are so many failure points that can lead to an attacker being able to collect data.
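To see how fragile this protection is in practice, consider a simplified Python sketch of a sending client; the host name is a placeholder, and real mail software adds many more wrinkles, but it shows how easily a failed STARTTLS negotiation can quietly fall back to plain text when the code chooses to fail open.

import smtplib

def send(message, mail_from, rcpt_to, host="mail.example.org"):
    server = smtplib.SMTP(host, 25)
    try:
        # Opportunistic encryption: ask for STARTTLS if the server offers it.
        # Depending on library defaults, the server certificate may not be
        # verified at all, so a man in the middle can present any certificate.
        server.starttls()
    except smtplib.SMTPException:
        # Fail-open: if the server doesn't advertise TLS (or an attacker in
        # the path strips the capability), the message still goes out in the clear.
        pass
    server.sendmail(mail_from, rcpt_to, message)
    server.quit()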
So if groups like the NSA want e-mail, it takes some extra effort thanks to STARTTLS, but it doesn’t solve the problem.
PGP / GPG
PGP, and the compatible (and likely more commonly used) GPG are one of the best options people have to encrypt their email – but there’s still a great deal of data exposed.
- UX – The user experience for most applications goes beyond unfriendly, to the point of being actively hostile. I recently walked an experienced developer through setting up a key with Gpg4win – the process of getting everything setup and working was beyond painful. The only GPG wrapper that I’ve seen that isn’t unreasonably difficult is GPGTools for OSX.
- Web of Trust – The web of trust model that PGP uses is both genius and terrible. When a news organization started using PGP, I noticed that none of their keys were signed by anyone – so I tried explaining the concept to one of their journalists. After several tweets we moved the conversation to e-mail, which led to several multi-page emails. By the end, I think he was more confused than ever. Last I checked, their keys still aren’t signed.
- Metadata – Metadata kills. It’s scary but true – people die based on who they talk to. How long till an investigative journalist reporting on an organization not friendly to the US gets droned for emailing the wrong people? PGP doesn’t hide who is emailed, the subject, or any of the headers. PGP encrypted email leaks useful information like a sieve.
- Client Integration – Not many email clients natively support PGP, so most users have to encrypt manually, or use a third-party add-on. This can lead to interesting information leaks, such as saving unencrypted drafts to the server.
- Mobile – Using PGP on a mobile device can be risky, as it requires storing the private key on devices that are likely to have known security issues. Many people recommend against it, as it puts the private key at too much risk.
So while PGP / GPG protects the content of email, it still is subject to metadata collection.
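A small sketch makes the metadata problem concrete: even if the body below were a perfectly encrypted PGP blob, everything carried in the headers still travels in the clear. The addresses and subject here are invented.

from email.mime.text import MIMEText

# Pretend this payload is a PGP-encrypted body.
body = "-----BEGIN PGP MESSAGE-----\n...ciphertext...\n-----END PGP MESSAGE-----\n"

msg = MIMEText(body)
msg["From"] = "journalist@example.org"
msg["To"] = "source@example.net"
msg["Subject"] = "Notes on the leaked report"

# The headers (sender, recipient, subject, plus whatever relays add in transit)
# remain readable to every server that handles the message.
print(msg.as_string())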
For those that don't like PGP, or want a better chance of having native support, there's S/MIME. Like PGP, S/MIME has its share of issues which leave users with less protection than they realize.
- Certificates – To use S/MIME, you must have a certificate issued from a CA. The question is, how trustworthy is it? CAs have been hacked to issue bad certificates in the past, and nobody knows what an NSL could be used for when issued to a CA.
- Key Escrow – Some CAs generate the private key on their side to allow them to provide a key escrow service. While this can be useful if you ever lose your key and want to read your email again, it also runs the risk of an unauthorized party getting access to the key.
- Metadata – The metadata issues noted for PGP applies here as well. The subject, the recipients, the headers are all in clear text.
There are other various complaints around S/MIME that are well documented, and have been discussed countless times. The point is, it’s another partial solution just glued onto email in an attempt to make it do something it was never designed to do. Be secure.
The list goes on…
Much work has gone into various other add-ons, such as SPF and DKIM, that attempt to do things that could have been done by default if e-mail had been designed with authentication, privacy, and security in mind.
When e-mail was designed, none of these issues were considered – people have been trying to find ways to fix that mistake for years. E-mail was open, plain text, security and privacy weren’t considered – or at least not to the extent required.
For a system like e-mail to be secure, security has to be part of the core design, considered at every step. When security & privacy are an afterthought, something to just glue on – it’s impossible to achieve either.
There are many efforts underway now to improve the situation, some such as Mailpile, I greatly respect. Their goals are noble, their purpose is true – but I also think they are fighting the wrong battle. E-mail as we know it is flawed beyond repair – we can make it leak less, but for all the work trying to overcome the design flaws, it’ll always be flawed. There’s only one way for e-mail to ever be secure.
Time for action!
Over the past few months I’ve been working on a specification for a system to replace e-mail. I don’t know if it’ll ever go anywhere – but that’s not the point. E-mail as we know it needs to be replaced – it can’t be fixed. We need to be discussing how to eliminate email, not new ways to glue partial solutions on to it. STARTTLS isn’t the answer, PGP isn’t the answer, S/MIME isn’t the answer – an entirely new protocol is.
I’m hoping to make the first public draft available in the next few weeks. If it’s well received, I’ll try to do what I can to get the system built. If not, then hopefully others will write competing specifications, and I’ll aide them as I can.
My goal for the specification I’m writing is to encourage discussion – to get people talking about how to solve the problem. If it contributes in any way to a new system, a system that’s designed from the beginning to be secure, then it’s worth every minute that’s been invested in it. We need to find a viable option that can replace the monstrosity that we have today.
I encourage everyone to think about solutions to this – how can we build a viable replacement to e-mail that meets the privacy and security goals, while being user-friendly, and meeting the requirements of business and government environments. It's a hard problem to solve, but it can be solved. It's up to us to do it.
In SQL Server, statistics can be created using CREATE STATISTICS command or using CREATE INDEX command. At the feature level, the statistical information created using CREATE STATISTICS command is equivalent to the statistics built by a CREATE INDEX command on the same columns. The only difference is that the CREATE STATISTICS command uses sampling by default while the CREATE INDEX command gathers the statistics with fullscan since it has to process all rows for the index anyway.
A typical command will look like:
CREATE STATISTICS [IX_Stats_City]
ON Person.Address (City) -- example table and column; substitute your own
WITH SAMPLE 50 PERCENT;
In this command, we are sampling 50% of the rows. For bigger tables, a random sampling may not produce accurate statistics. Therefore, for bigger tables, you may need to use the resample option on UPDATE STATISTICS. The resample option will maintain the fullscan statistics for the indexes and sample statistics for the rest of the columns.
Statistical information is updated when approximately 20 percent of the data rows have changed. Though there are some exceptions to this rule, we will keep this guideline as generic. We can also manually update statistics using UPDATE STATISTICS.
The Query Optimizer needs up-to-date statistics to make smart query optimization decisions. It is generally best to leave the “AUTO UPDATE STATISTICS” database option ON (the default setting). This helps to ensure that the Optimizer statistics are valid, so that queries are properly optimized when they are run. Additionally, SQL Server uses AUTO_CREATE_STATISTICS, which causes the server to automatically generate all statistics required for the accurate optimization of a specific query.
From SQL Server 2005 version, SQL Server maintains modification counters on a per-column basis rather than a per-row basis as was done in earlier versions. Therefore, sysindexes.rowmodctr is an approximation of what earlier versions of SQL Server would have shown, but the column is not used to determine when auto statistics occurs.
In SQL Server 2000, sp_updatestats would iterate over all the objects in the database and update statistics for every object, regardless of whether there had been any changes to the table (that is, rowmodctr was zero). This has changed from SQL Server 2005, so that, if the sysindexes.rowmodctr value is zero, then the index/statistics is skipped because its statistics are already fully up to date. Running sp_updatestats on a database with objects requiring no update will send a message like:
0 index(es)/statistic(s) have been updated, 0 did not require update.
Though this command still works in the latest version of SQL Server, it is recommended to move to the new syntax of UPDATE STATISTICS.
UPDATE STATISTICS is used much like sp_updatestats, and the two are often used interchangeably. Updating statistics ensures that any query that runs gets up-to-date statistics to satisfy its needs. A typical command would look like:
UPDATE STATISTICS Sales.SalesOrderDetail
WITH FULLSCAN, ALL
This command computes statistics by scanning all rows in the Sales.SalesOrderDetail table. FULLSCAN and SAMPLE 100 PERCENT have the same results. Use caution when using FULLSCAN on large tables as it can take time and also affect performance of the system. It is ideal to do the same during non-peak hours or during maintenance windows. FULLSCAN cannot be used with the SAMPLE option.
There are some conditions in which it might be appropriate to turn off auto statistics or disable it for a particular table. For example, when a SQL Server database is under very heavy load, sometimes the auto update statistics feature can update the statistics on large tables at inappropriate times, such as the busiest time of the day. In such cases, you may want to turn autostats off, and manually update the statistics (using UPDATE STATISTICS) when the database is under a comparatively lesser load.
UPDATE STATISTICS Sales.SalesOrderDetail
WITH FULLSCAN, NORECOMPUTE
The above command forces a full scan of all the rows in the Sales.SalesOrderDetail table, and turns off automatic statistics for the table. To re-enable the AUTO_UPDATE_STATISTICS option behavior, run UPDATE STATISTICS again without the NORECOMPUTE option. To know when the statistics were last updated, use the STATS_DATE function.
Similar to the STATS_DATE function, we can also use another command DBCC SHOW_STATISTICS for the same data for a specific table and index like:
DBCC SHOW_STATISTICS ('[HumanResources].[Shift]', 'PK_Shift_ShiftID')
At the same time, you need to analyze what will happen if you turn off the auto update statistics feature. While turning this feature off may reduce some stress on your server, by not running at inappropriate times of the day, it could also cause some of your queries to be not properly optimized, which could put extra stress on your server during busy times. It is a fine line of trade off which only an experienced DBA can make based on application workload and query patterns.
As with many other optimization issues, you will need to test to see if turning this option on or off is more effective for your environment.
Statistics is an important concept inside SQL Server and keeping it up-to-date is essential. In this blog post, we have dealt with the basics of statistics and how to update the same. In future posts we will expand on the same.
There is a long history of cable industry vertical integration, tracing back to the 1972 creation of HBO by a predecessor of the cable company Cablevision
In the 1940s, the U.S. Department of Justice put a wallop on Hollywood that would have made James Cagney, a popular movie tough guy of the era, tip his fedora hat.
The provocation came from a series of business practices prevalent among the five major movie studios of the day. At the top of the list was “block-booking,” which required that movie theaters accept and run a package of short films and features as a quid pro quo for getting rights to a single attractive release. A cousin, “blind-booking,” meant theaters had to sign up for a broad slate of releases for a forthcoming season based only on thin descriptions studios provided. Taken together, the take-it-or-leave-it practices made the nation’s 18,000 or so movie theaters little more than passive delivery vessels for grand plans drawn up in Hollywood.
How did the studios pull off these feats of business bullying?
Easy. They owned the theaters.
Or at least they owned the theaters that mattered – grand palaces like Paramount Pictures’ El Capitan Theatre in Los Angeles. These opulent theaters exhibited first-run movies that drew big audiences and had enormous influence on the popularity of films. At the time, Paramount controlled about 8 percent of the first-run theaters in major population centers, while Fox and Warner Bros. each accounted for about 3 percent of the total. All together, the five major studios controlled only 17 percent of the nation’s theaters, but those theaters accounted for nearly half of the industry’s film rental revenue. And by maintaining a grip on the theaters that mattered most, the studios were able to dictate terms to all that followed.
One of the intentions of the Justice Department, which began litigation against the studios in 1938, was to crack apart this system of vertical integration – common ownership both of film production and exhibition. On that count, the government was successful. After a 1948 U.S. Supreme Court decision in favor of the government, the studios were forced to divest their exhibition holdings, effectively ending the studio oligopoly and opening up new opportunities for non-aligned filmmakers.
But the forced divestiture didn’t spell the end of vertical integration in the film industry, or the media sector at large. To the contrary, the idea of aligning content and distribution under common ownership and control remains a widely shared ambition. In cable television, particularly, it’s a prevalent theme. The proposed combination of Comcast Corp. and NBC Universal, dramatic though it may be, is only the latest rendition.
There is a long history of cable industry vertical integration, tracing back to the 1972 creation of Home Box Office by a predecessor of the cable company, Cablevision, and including the 1995 acquisition of a controlling interest in Turner Broadcasting System by Time Warner Inc., which itself owned cable systems. When Rupert Murdoch’s News Corp. controlled both DirecTV and Fox Cable Networks, it was practicing the art of vertical integration. The same is true for some of the same movie studios that drew the aim of the Justice Department throughout the 1940s. Movielink, an Internet movie platform, was a joint venture originally involving MGM Studios, Paramount and Universal Studios. They not only supplied content to the venture, they controlled the delivery infrastructure that made it work.
Ironically, the Justice Department’s attack on Hollywood ended up producing a back-handed endorsement of vertical integration. Although the DOJ argued that vertical integration of film production, distribution and exhibition was illegal per se, “the majority of the court does not take that view,” according to the Supreme Court in its ruling. Instead, the court noted that the legality of vertical integration turned on a range of considerations, including whether its impact was to restrain competition rather than merely support legitimate business ambitions.
Although it broke up the Hollywood oligopoly, the Supreme Court’s ruling in United States v. Paramount Pictures Inc. has contributed to the legal case for the legitimacy of media industry integration.
Not only has it given rise to modern-day, technology-infused derivations such as Hulu, an online video platform owned by three television networks, it has also inspired some old-school variations. Today, for instance, if you see a movie at the El Capitan on Hollywood Boulevard, you're taking part in a throwback to Hollywood's golden era. Not just because the theater has been beautifully refurbished, but because its new owner, The Walt Disney Co., happens to be one of the world's most prominent film producers. In Hollywood, that's what they call a sequel.
A 110-core chip has been developed by Massachusetts Institute of Technology as it looks for power-efficient ways to boost performance in mobile devices, PCs and servers.
The processor, called the Execution Migration Machine, tries to determine ways to reduce traffic inside chips, which enables faster and more power-efficient computing, said Mieszko Lis, a postgraduate student and Ph.D. candidate at MIT, during a presentation at the Hot Chips conference in California.
The chip is a general purpose processor and not an accelerator like a graphics processor, Lis said, adding that it was an experimental chip.
"It's not the kind of thing you buy for Christmas," Lis said.
Typically a lot of data migration takes place between cores and cache, and the 110-core chip has replaced the cache with a shared memory pool, which reduces the data transfer channels. The chip is also able to predict data movement trends, which reduces the number of cycles required to transfer and process data.
The benefits of power-efficient data transfers could apply to mobile devices and databases, Lis said on the sidelines of the conference.
For example, data-traffic reduction will help mobile devices efficiently process applications like video, while saving power. It could also help reduce the amount of data sent by a mobile device over a network.
Fewer threads and predictive data behavior could help speed up databases. It could also free up shared resources for other tasks, Lis said.
The researchers have seen up to 14 times the reduction in on-chip traffic, which significantly reduces power dissipation. According to internal benchmarks, the performance was 25 percent better compared to other processors, Lis said. Lis did not specify the competitive processors used for benchmarks.
The chip has a mesh architecture with the 110 cores interconnected in a square design. It is based on custom architecture designed to deal with large data sets and to make data migration easier, Lis said. The code was also written specially to work with the processor.
Top chip makers have moved away from adding cores, topping out at between 12 and 16 cores in processors. But the MIT researchers crammed 110 cores in the 10 millimeter by 10 millimeter size of the chip, Lis said. The chip was made using the 45-nanometer process.
The mesh architecture is also used in chips from Tilera, which can scale up to 100 cores. But Lis said the 110-core chip is not based on Tilera's architecture, nor is it a successor.
Hospitals are reporting a new threat of infection -- from computer malware.
Computer viruses are worming their way into everything from fetal monitors to radiology departments’ picture archiving and communication systems, which store and share images from X-rays and other diagnostic equipment, reports Technology Review, a publication of the Massachusetts Institute of Technology.
Kevin Fu, a computer scientist at the University of Michigan and the University of Massachusetts, Amherst, raised the red flag on Oct. 18, during a panel meeting of the National Institute of Standards and Technology’s Information Security and Privacy Advisory Board, according to the report. Malware attacks threaten thousands of network-connected devices, Fu reportedly said at the meeting. An increase in the number of such attacks poses challenges for hospital IT departments.
Health IT experts have had trouble countering the attacks because manufacturers of devices frequently ban modifications to their equipment, including virus protection, Technology Review reported. Interconnected medical equipment often runs on Microsoft Windows operating systems that are vulnerable to viruses.
Manufacturers fear that modifications, including installation of updated versions of Windows that fix many security vulnerabilities, will jeopardize devices’ Food and Drug Administration approval status, according to the report.
One hospital IT executive told the publication that trying to protect all of a hospital's software-controlled equipment would require the installation of more than 200 firewalls. Mark Olson, chief information security officer at Beth Israel Deaconess Medical Center, in Boston, said his hospital has 664 pieces of medical equipment on which manufacturers will not allow software modifications or updates, according to the report.
Americans have not set foot on the moon's surface since 1972. During his presidency, George W. Bush wanted to change that.
"We do not know where this journey will end, yet we know this: Human beings are headed into the cosmos," Bush said during a speech at NASA headquarters on Jan. 14, 2004, in Washington. "Mankind is drawn to the heavens for the same reason we were once drawn into unknown lands and across the open sea. We choose to explore space because doing so improves our lives and lifts our national spirit."
Specifically, mankind is drawn to the moon, Bush said. He proposed spending $12 billion over five years to build a spacecraft that would return humans to the moon by 2020. "Establishing an extended human presence on the moon could vastly reduce the cost of further space exploration, making possible ever more ambitious missions," he said, such as sending humans to Mars for the first time.
The 10-year anniversary of Bush's ambitious plans comes during a bleak time for U.S. space exploration.
The Space Shuttle program was dismantled in 2011, extinguishing hopes for sending American astronauts to space without collaboration with international space agencies. This year's proposed budget for NASA, outlined in an appropriations bill Monday night, was a slim $17.6 billion, just $2 billion more than it was in 2004. Although the budget includes money for asteroid detection, it shrinks funding for a planetary science program that creates and oversees missions to outer planets and moons.
The situation is not all bad, however. Last week, the Obama administration granted the International Space Station a four-year extension, promising to keep the laboratory orbiting Earth until 2024. The move has strong bipartisan support in Congress, and with people who generally do not want to see the 16-year-old station plummet to the bottom of the Pacific Ocean.
Private companies are currently working to send commercial spacecraft to the moon, some by as early as next year. But sustaining human life in zero gravity for prolonged periods of time is a baby science, so people won't come aboard just yet. For now, and likely for the next few years, the only living things headed to the moon are basil and turnips.
Correction: An earlier version of this article incorrectly reported NASA's 2004 budget. It was $15.47 billion.
Tools to tighten the Internet of Things
The Internet of Things (IoT) is coming, and there’s no doubting its potential. Government IT managers don’t care that your fridge can tell your smartphone what you need to buy next, but they do appreciate that advances in connectivity and data collection will enable major improvements to services that government provides citizens.
Those improvements will come from linking the embedded computing systems that drive much of the country’s infrastructure and that outnumber the more familiar servers, PCs and laptops many times over. With the IoT, systems will become even more numerous and capable, and that’s one of the key factors in the growth of Smart Cities. But it poses a massive security problem.
Market researcher International Data Corp. sees strong growth for the IoT in a number of areas over the next few years, including government. It projects a 7.2 percent compound annual growth rate in environmental monitoring and detection through 2018, for example, and 6.3 percent CAGR for public infrastructure assets management.
Other large growth areas are public safety, emergency response and public transit.
“For IT, typical drivers for this growth are cost and time savings,” said Scott Tiazkun, senior research analyst for IDC’s Global Technology and Industry Research organization. “There’s the convenience factor in having all of these sensors in many places that automatically send data back versus having to send a person out to do a reading, which also decreases the chance for errors.”
Typically, however, these kinds of embedded systems have been built with cost and performance in mind and not security. Now that they are also becoming more interconnected, that vulnerability has become increasingly attractive to attackers looking for protected information or who want to disrupt public services.
The Department of Homeland Security says many of the public infrastructure sites that have recently been successfully attacked were insufficiently protected, and at times administrators weren’t even aware they needed to be secured.
Some parts of the government are keenly aware of potential security problems. Embedded computer systems play a part in just about every area of military technology, for example, and the Defense Advanced Research Projects Agency started its High Assurance Cyber Military Systems program in 2012 specifically to create technology for embedded systems “that are functionally correct and satisfy appropriate safety and security properties.”
Fortunately, it seems the security industry has begun to take notice of the needs of the IoT, though it’s debatable how far traditional IT security systems and techniques can be made to work for embedded systems. But tools specifically aimed at this market are being developed and some are already out.
Computer scientists at the University of California, San Diego, have developed a tool that allows hardware designers and system builders to test for security as they build their devices, for example. It tracks a system’s security-specific properties and makes sure they stay secure. It also detects problems in non-critical subsystems that can affect other, more critical ones.
On the software side, Real-Time Innovations has introduced what it claims is the first secure messaging software for critical industrial systems. Its machine-to-machine communication doesn’t need the centralized brokers or system administrators required by traditional IT security, which ensures the low communication latencies needed by such systems.
These tools, and others like them, will be needed. Embedded system security is still an unknown territory for many government organizations. As the IoT becomes a reality, that could put a lot of public systems and infrastructure at risk.
Posted by Brian Robinson on Jun 20, 2014 at 10:57 AM | <urn:uuid:84d93474-9c21-4d0b-9ba0-bf5383f0582b> | CC-MAIN-2017-09 | https://gcn.com/blogs/cybereye/2014/06/internet-of-things.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00399-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.951524 | 739 | 2.609375 | 3 |
The password. It really gives you power doesn’t it? You’re the only one that has the “key” to the workstation or something else that has to be kept away from prying eyes. If you’re using a password than there must be something worth protecting, so why not make this protection a good one?
Choosing a good password
There are two ways to choose a password. You can either use a password generation utility or you can make a password by yourself. If you’re going to do it by yourself than there are several things you have to keep in mind.
Some of the things you should not use include: your name (as well as names of family members, friends, etc.), phone number, address, nickname, computer name, words from a dictionary, name of the company you work for, etc. The idea is to basically not use any kind of information that may be linked with you directly.
A good password includes the following: upper-case and lower-case letters mixed together with special characters, and is at least six characters long. Also, never repeat the same character within a password. Example of a good password: y_R6t*n!b
Using a password generator and safekeeping
A random password generator utility is a wise choice when making hard-to-crack passwords. Also, when you generate a good password, it will be pretty hard to remember so a password manager is a good thing to use. There are many software titles that do this job and two of them are presented below – one for Linux and one for Windows.
Figaro’s Password Manager – [ Download ]
Figaro’s Password Manager is a GNOME application that allows you to securely store your passwords which are encrypted with the blowfish algorithm. If the password is for a website, FPM can keep track of the URLs of your login screens and can automatically launch your browser. In this capacity, FPM acts as a kind of bookmark manager. The program is extremely easy to use and is open source free software.
Included with the program comes a nifty password generator, here’s how it looks:
myPasswords Professional – [ Download ]
myPasswords Professional is a password manager for Windows that uses Blowfish encryption to ensure your information is safe. It can export your databases to Microsoft Excel worksheets, HTML, text, and CSV files. It can import your existing Critical Mass and myPasswords databases and your sensitive information can be masked. The program is very configurable and it’s interface is simple which makes accessing information fast and easy. After a swift installation I doubt you’ll have any problems getting around the program, if you do – there’s a good help file to learn from.
Also included with the program comes a random password generator that makes your password creation extremely easy.
To make users create strong passwords, and in that way improve the security of a system, it’s a good idea to define the type of password that can be created. There are several ways to do this:
- make them use a password generator
- setup some guidelines like how much the password has to be long, what characters have to be used, etc.
- check the integrity of existing passwords with a cracking program and alert users with a weak password.
There are various cracking programs that you can use, some of them are:
It’s wise to change the password frequently as well as avoiding having people look at you when you type your password. There’s never enough paranoia when it comes to protecting your data.
Many applications, that need identification in order to be used, have a default password. Although this password may be easy to remember, you should change it as soon as possible. Lists of default passwords can be found all over the net and that’s probably one of the first things an attacker is going to try using. The same thing applies for any situation when a password is assigned to you, login and change it, right away.
An example of a list of default passwords can be found here.
For much more information on passwords and other methods of authentication, I recommend reading the excellent Authentication: From Passwords to Public Keys by Richard Smith.
As it says on the Addison-Wesley book page:
“[This book] gives readers a clear understanding of what an organization needs to reliably identify its users and how different techniques for verifying identity are executed.”
And, to close this article, here are two interesting articles you might be interested in: | <urn:uuid:46c94b2d-f436-447d-b013-64d715c3def2> | CC-MAIN-2017-09 | https://www.helpnetsecurity.com/2002/05/24/basic-security-with-passwords/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00399-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.921082 | 952 | 2.625 | 3 |
Sharing Drive Space
Grids can share more than just processors; they also can share drive space for both greater storage room and more robust data availability. Usually this is done with mountable networked file systems, such as AFS (Andrew File System), that old Unix distributed storage favorite; NFS (Network File System); or DFS (Distributed File System). The grid software, in turn, can provide virtual storage with an overreaching file system that spans both drives and file systems. A grid also can be used to set multiple computers to work on a single problem. In such cases, grids must support IPC (interprocess communication) between programs running on different systems. Typically, grids that support such activity borrow MPC (massively parallel computing) message-passing models Common supercomputer message-passing models that have been borrowed by grids are MPI (Message Passing Interface) and PVM (Parallel Virtual Machine).To date, most grid systems, such as Sun Microsystems Inc.s N1 Grid Engine and DataSynapse Inc.s GridServer, have been proprietary designs. Recently, however, open-source approaches based on Linux systems have gained popularity. The most influential of these efforts, which has IBMs backing, is The Globus Alliance. This consortium creates open-source tools, the Globus Toolkit 3.2, for building grids. Globus uses Java and Web services to help developers create grid-capable applications. Globus supporters are not the only ones working toward an open source-based grid. Dell, EMC, Oracle and Intel are working on the Linux-based "Project MegaGrid," which will run on Dells PowerEdge servers. The business case driving all of these efforts is the same: Provide customers with a utility model for their computing needs. Hewlett-Packards Adaptive Enterprise, IBMs On Demand Business and Suns N1 take different takes on grids central themes, but are all designed to provide low-cost computing power to customers. This commercial aspect to grid is also relatively new. Traditionally, grids have been used in scientific and academic environments, where they shared the same jobs as its cousins, supercomputing and grids. Now, however, as the technology has matured and open source has brought the price of grid development down, companies are taking it to the marketplace. In particular, financial companieswith their vast need for real-time processinghave become important grid customers. So it is that as Microsoft prepares to launch its Bigtop, the Redmond giant will face several opponents with mature technologies. IBM is currently the grid leader, according to financial services magazine Waters, with Oracle and DataSynapse following. Thus, this is one market battle where Microsoft will face a stern test. Check out eWEEK.coms for the latest news, views and analysis on servers, switches and networking protocols for the enterprise and small businesses.
Another advantage of this borrowing is that application developers dont have to reinvent programs that can make use of a grids parallel computing resources. | <urn:uuid:27ae6028-beb8-46a1-bd12-51c4e571824f> | CC-MAIN-2017-09 | http://www.eweek.com/c/a/IT-Infrastructure/Grid-Computing-101/1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00167-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.948942 | 613 | 3.1875 | 3 |
Data Privacy Tool
You may also be interested in our Data Privacy Scorebox to assess your organisation's level of data protection maturity.
Law No 09-08 dated on 18 February 2009 relating to protection of individuals with regard to the processing of personal data and its implementation Decree n° 2-09-165 of 21 May 2009 ("Law").
Definition of personal data
Pursuant to article 1 of the Law,the personal data is defined as any information regardless of their nature, and format, relating to identified or identifiable person.
Definition of sensitive personal data
Personal data which reveal the racial or ethnic origin, political opinions, religious or philosophical beliefs or union membership of the person concerned or relating to his health, including his genetic data (article 1.3 of the Law).
Data Protection National Commission (Commission Nationale de Protection des Données Personnelles).
The processing of Personal Data is subject:
- to a prior authorization from the Data Protection National Commission (Commission Nationale de Protection des Données Personnelles) when the processing concerns:
- sensitive data (e.g. revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, including genetic data)
- using personal data for purposes other than those for which they were collected
- genetic data, except for those used by health personnel and that respond to medical purposes
- data relating to offenses, convictions or security measures, except for those used by the officers of the court, and
- data which includes the number of the national identity card of the concerned person, or
- to a prior declaration to be filed with the Personal Data Protection Commission.
The declaration and authorization includes a commitment that the personal data will be treated in accordance with the Law.
The prior declaration and authorization shall include, but is not limited to, the following information:
- the name and address of the person in charge of the processing and, if applicable, its representative
- the name, characteristics and purpose(s) of the processing envisaged
- a description of the category or categories of data subjects, and the data or categories of personal data relating thereto
- the recipients or categories of recipients to whom the data are likely to be communicated
- the envisaged transfers of data to foreign states
- the data retention time
- the authority with which the Data subject may exercise, if any, the rights granted to him by law, and the measures taken to facilitate the exercise of these rights
- a general description allowing a preliminary assessment of the appropriateness of the measures taken to ensure the confidentiality and security of processing, and
- overlap, interconnections, or any other form of data reconciliation and their transfer, subcontracting, in any form, to third parties, free of charge or for consideration.
The personal data must be :
- treated fairly and lawfully;
- collected for specific, explicit and legitimate purposes;
- adequate, relevant and not excessive;
- accurate and necessary and kept up-to-date; and
- kept in a form enabling the person concerned to be identified.
As a general rule, the processing of a personal data must be subject to the prior consent of the concerned person.
However, the processing of personal data can be performed without the approval of the concerned person provided that the information relates to the:
- compliance with a legal obligation to which the concerned person or the person in charge of the processing are submitted
- execution of a contract to which the concerned person is party or in the performance of pre-contractual measures taken at the request of the latter
- protection of the vital interests of the concerned person, if that person is physically or legally unable to give its consent
- performance of a task of public interest or related to the exercise of public authority, vested in the person in charge of the processing or the third party to whom the data are communicated
- fulfilment of the legitimate interests pursued by the person in charge of the processing or by the recipient, subject not to disregard the interests or fundamental rights and freedoms of the concerned person.
The personal data must be subject to prior authorization from the National Commission before any transfer to a foreign state.
Furthermore, the person in charge of the processing operation can transfer personal data to a foreign state only if the said state ensures under its applicable legal framework an adequate level of protection for the privacy and fundamental rights and freedoms of individuals regarding the processing to which these data is or might be subject.
However, the data processor can transfer personal data to any foreign state which does not satisfy the conditions mentioned above (i.e. ensure an adequate level of protection of privacy and fundamental rights and freedoms of individuals), if the person to whom the data relates has expressly consented to the transfer.
Article 23 of the Law provides that the data processor is required to implement all technical and organizational measures to protect personal data in order to prevent it being damaged, altered or used by a third party who is not authorized to have access, as well as against any form of illicit processing.
In addition, the data processor who carries out processing on his own behalf must choose a subcontractor that provides sufficient guarantees with regard to the technical and organizational measures relating to the processing to be carried out while ensuring compliance with these measures.
The Data Protection National Commission ensures compliance with the provisions of the Law.
Article 50 to 64 provides that non-compliance with the provisions of the Law is punishable by a fine ranging from MAD 10,000 to MAD 600,000 and/or imprisonment between three months and four years.
When the offender is a legal person, and without prejudice to the penalties which may be imposed on its officers, penalties of fines shall be doubled.
In addition, the legal person may be punished with one of the following penalties:
- the partial confiscation of its property
- seizure of objects and things whose production, use, carrying, holding or selling is an offence, and
- the closure of the establishment(s) of the legal person where the offense was committed.
Direct prospecting by means of an automated calling machine, a fax machine, e-mails or a similar technology , which uses, in any form whatsoever, an individuals' data without their express prior consent to receive direct prospecting is prohibited.
However, direct prospecting via e-mails may be authorized if the recipient details have been received directly from him.
Unwanted emails can only be sent without consent in the following cases:
- the contact details were provided in the course of a sale
- the marketing relates to a similar product, and
- the recipient was given a method to opt-out of the use of their contact details for marketing when they were collected.
General Data Protection principles apply. | <urn:uuid:aedd34a7-deda-480e-a3a7-15df1ad530f6> | CC-MAIN-2017-09 | https://www.dlapiperdataprotection.com/index.html?t=contacts-section&c=MA | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00167-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.928368 | 1,382 | 2.546875 | 3 |
Chapter 22: Job Control Language
This chapter presents a discussion of JCL (Job Control Language) as used for jobs run on a modern IBM mainframe running a descendant of the OS operating system, such as z/OS.
First, we must define the term “job”. A job is a unit of work for the computer to execute. The job comprises identification statements, control statements, possibly program text, and usually data. There are conventions to label the statements that are not program text and data so that the Control Program part of the Operating System can determine which is what.
The paradigm for a job is a sequence of cards, each card with one statement. The standard card type is the IBM 80 column card, an example of which is shown below. The use of these cards persisted well into the time at which programs and data could be entered through a computer terminal. Today, in classes associated with this textbook, we skip the card input and create text files for submission as jobs. Just remember that each line of text should be imagined as being the content of an 80 column card.
Photo by the author of a card in his collection
The card pictured above is a “6/7/8/9 card” used on a CDC–7600. This was a control card used to indicate the end of a specific job. In modern terms, a “7/8/9 card” would have been an EOD (End of Data) and a “6/7/8/9 card” an EOF (End of File). The 7/8/9 cards were green and the 6/7/8/9 cards were orange; this as a convenience to the programmer. The only computer–readable data on any card is found in the pattern of column punches.
transition from card input of jobs to other means was driven by the simple
inconvenience of handling boxes containing hundreds of cards. The key feature that facilitated that change
was the introduction of system disk drives big enough to store significant
amounts of user programs and data. This
change was not driven by hardware only; it was some time after the introduction
of disk drives that the software designers were able to develop a stable
operating system based on the use of such drives. Your instructor recalls using a Xerox Sigma–7
The first step in transitioning from card input was the ability to catalog a card deck on a disk file maintained by the computer center. Though the jobs remained card based, they became very short: access this file, change these statements, add these statements, and then run. Soon thereafter, the cards went away.
Next, it is important to dispel a misunderstanding that would be almost comical, had it not actually occurred during the teaching of a course based on this textbook. We begin by considering the first few lines of a program that your author assigns as a first lab.
//KC02263R JOB (KC02263),'ED BOZ',REGION=3M,CLASS=A,MSGCLASS=H,
//FFFPROC JCLLIB ORDER=(TSOEFFF.STUDENT.PROCLIB.ASM)
//JESDS OUTPUT PAGEDEF=V06483,JESDS=ALL,DEFAULT=Y,CHARS=GT15
//STEP1 EXEC PROC=HLLASM
//ASM.SYSIN DD *
The above text is the block of job control language that precedes the text of the first assembler language program. Note that many of the lines begin with “//”. Several students decided that these mandatory lines were optional, since they were obviously comments.
The structure of a comment in either a programming language or an execution control statement depends on the language or operating system. It is peculiar to that system. The fact that the “//” character sequence introduces in–line comments in both C++ and Java does not imply similar functioning in other situations. In IBM Job Control Language, comments are prefixed by “//*”, with the asterisk being very significant.
The Job Control Language
There are six types of job control statements that will interest us at this time. These are:
marks the beginning of a job. It gives
the user identification,
accounting information, and other site–specific data.
marks the beginning of a job step by specifying a program or
procedure to be executed.
request the allocation of an I/O device and describes the data set
on that device. It must use the logical device name from the program.
//* This is a comment in the job control language.
/* This terminates an input stream data set.
// This can be used to mark the end of a job.
Logical and Physical Devices
One of the advantages of the structure of the JCL is the ability to define a logical device using a DCB macro within the code, and use the DD control statement to link that logical device to an actual physical device. With the DCB, the code specifies the logical properties of a device. For example a logical printer might be described as PS (Physical Sequential) with record length of 133 bytes (one control character and 132 characters to be printed). The DD statement might then associate this logical device either with the standard output stream or with a dedicated disk file that can be saved and accessed by another job.
Much of this is discussed in chapters 5 and 6 of the IBM Redbook Introduction to the New Mainframe: z/OS Basics [R_24].
The Job Card
This identifies the beginning of a job. It must include a name to associate with the job. For use in our classes, that name is most often the user ID. The name must begin in column 3 of the “card”, following the “//” characters. Remember that none of this is free–form input.
In general, the format of the JOB statement starts as follows.
//name JOB (account number),programmer name
Consider our example from the listing of a lab exercise.
//KC02263R JOB (KC02263),‘ED BOZ’,REGION=3M,CLASS=A,MSGCLASS=H,
The user name consists of from one to eight alphanumeric characters, with the first one being alphabetic. The standard for our course is the user ID with a single letter following it. The job card above shows a user ID of KC02263, with the letter “R” appended.
The next entry in this statement is the keyword “JOB” identifying this as a JOB card.
This is followed by the account number in parentheses. For our student use, the account number is the same as the user ID. This is followed by the programmer name, which is enclosed in quotes as the name contains a space.
The next entry, REGION=3M, specifies the amount of memory space in megabytes required by the step. This could have been specified by REGION=3072K, indicating the same allocation of space. The two size options here are obviously “K” and “M” [R_25, page 16–4].
The entry CLASS=A assigns a job to a class, roughly equivalent to a run–time priority. According to R_25 [page 20-15] the “class you should request depends on the characteristics of the job and your installation’s rules for assigning classes”. This assignment works.
The entry MSGCLASS=H assigns the job log to an output class [R_25, page 20-24]. Depending on the MSGLEVEL statement (see below), the job log will have various content.
The next line of text in the above example should be considered as a continuation of the job card, in that the information that is found there could have been on the job card.
The notify line indicates what user is to be given information about the execution of the job; the level of information is indicated by the integers associated with MSGLEVEL. The first number specifies which job control statements are to be printed in the listing. There are three possible choices.
0 Only the JOB statement is displayed. This is the default for many centers.
job control statements are displayed including those generated from a cataloged
procedure. This is the default for a student job. Note that a cataloged procedure is
a sequence of control statements that have been given a name and placed in a library
of cataloged procedures.
2. Only those job control statements appearing in the input stream are displayed.
The second number inside the parentheses specifies whether or not the I/O device allocation messages are to be printed. A 1 (the default) indicates that all allocation and termination messages are to be printed, regardless of how the job terminates.
The EXEC Statement
The execute statement begins a job step that is associated with the program name or procedure name that controls that step. Each EXEC can begin with an optional step name, which must begin in column 3 and be unique within the job.
There are three standard forms of the execute statement.
//step name EXEC PGM=program name
//step name EXEC PROC=procedure name
//step name EXEC procedure name
The step name is optional, but if it exists it must be unique in the job. For example, we have this line in the job control language of our first lab assignment. This calls for the H–level assembler to be invoked. The procedure takes care of a number of steps that are required, and can be mechanically created.
//STEP1 EXEC PROC=HLLASM
In some more advanced JCL, there is a control logic that requires step names. In this example, we assign names just to show that we can do that.
The PGM option is rarely used by students, who commonly use cataloged procedures. This author views stored procedures in the same light as programming macros; they are predefined sets of statements that have proven useful in the past.
The second and third lines are equivalent, indicating that the default is to execute a cataloged procedure. This expands into a sequence of program EXEC and DD statements.
Here is an example of the ASMFC cataloged procedure [R_09, page 384]. This is given without explanation in order to show the expansion of a very simple cataloged procedure.
//ASM EXEC PGM=IEUASM,REGION=50K
//SYSLIB DD DSNAME=SYS1.MACLIB,DISP=SHR
//SYSUT1 DD DSNAME=&SYSUT1,UNIT=SYSSQ,SPACE=(1700,(400,50)), X
//SYSUT2 DD DSNAME=&SYSUT2,UNIT=SYSSQ,SPACE=(1700,(400,50))
//SYSUT3 DD DSNAME=&SYSUT3,SPACE=(1700,(400,50)), X
//SYSPRINT DD SYSOUT=A
//SYSPPUNCH DD SYSOUT=B
There are a number of parameters to the EXEC statement, but none of these need concern us here. The student who is interested is referred to [R_25, Chapter 16].
The DD (Data Definition) Statement
Any data sets used by the program must be described in DD statements. These must follow the EXEC statement for the particular step in which the data sets are accessed. In the lab examples used with the course associated with this textbook, the DD statements follow the assembler procedure invocation and its associated program input.
For more information, the reader should consult Chapter 6 of Introduction to the New Mainframe [R_24] or Chapter 12 of the MVS JCL Reference [R-25].
The general format of the DD statement is rather flexible, but all have this form.
//proc.ddname DD options
part of the name is the procedure step.
In our programs, we use “GO” for this.
The second part of the name is identical to that used in the DCB macro in the source program, and it further describes the data set referenced in that macro. In general, we have the following sets of relationships within the job.
Here is an example of the linkage between DCB and DD as found in our lab 1.
FILEIN DCB DDNAME=FILEIN, X
PRINTER DCB DDNAME=PRINTER, X
//GO.PRINTER DD SYSOUT=*
//GO.FILEIN DD *
What we have in the above example is a use of the standard input and output data streams. The input stream data set is simply the stream that includes the text of the program and the job control language. The “DD *” indicates that the stream is to be taken as the sequence of 80–character lines immediately following. This stream ends with “/*”.
The following represents the last lines in a job intended to print out the text of three lines. Note the three lines of input text immediately following the DD.
//GO.FILEIN DD *
The statement “DD SYSOUT=*” indicates that the output associated with the ddname PRINTER is to be routed to the standard output stream, called SYSOUT.
The flexibility of this linkage between the DCB and DD statements is illustrated in the following fragment, taken from another lab exercise associated with this textbook. We have taken the above and changed only the DD statement. We have as follows:
PRINTER DCB DDNAME=PRINTER, X
//GO PRINTER DD DSN=KC02263.SP2008.LAB10UT,SPACE=(TRK,(1,1),RLSE),
//GO.FILEIN DD *
The print output is now saved as a text file, called SP2008.LAB1OUT in the user area associated with the user KC02263, which was at the time your author’s user ID. Neither the name “SP2008” nor the name “LAB1OUT” can exceed eight characters in length.
In this version of the DD statement, we use the DSNAME operand, abbreviated as DSN. This identifies the data set (disk file) name to be associated with the output and specifies a few options. The two we use are the disposition option and the space allocation option.
The data set disposition operand has the general form as follows.
DISP=(file status, normal disposition, error disposition)
terms indicates the status of the data set in relation to this job step. The options are:
OLD An existing sat set is used as input only to this step.
SHR An existing disk data set that can be shared with other jobs concurrently.
MOD A partially completed sequential data set. New records to added at the end.
NEW A new output data set is to be created for this job step.
The second term indicates the disposition of the data set in case of a normal termination of the process associated with the step. There are five options for this one.
the data set.
PASS Pass the data set to a later job step.
DELETE Delete this existing data set.
CATLG Catalog and keep the data set.
UNCATLG Remove this data set from the catalog, but keep it.
The third term specifies disposition in the case of an abnormal termination. The option PASS is not available, as it is presumed that an abnormal termination will be associated with corrupt data. Note that our JCL says DISP=(NEW,CATLG,DELETE), indicating to create a new file and catalog it if the job terminates normally. If the job has an abnormal termination, just discard the file.
The space operand has the following format. It is used only for DASD (Direct Access Storage Device, read “disk”) data sets.
term indicates the measure of storage space to be used. In order to understand this, one should
review the architecture of a typical disk unit.
The two options for this term are CYL (cylinder) and TRK
(track). Our JCL has the option SPACE=(TRK,(1,1),RLSE),
indicating that one track is to be allocated initially for our data set and that additional disk space is to be allocated one track at a time when the existing allocation is exhausted.
The RLSE option indicates that the unused space on the DASD (disk drive) is to be released and made available for data storage by other programs when this program terminates and the data set is closed. [R_25, page 12–12].
One option worth mention just for historical reasons is the LABEL option. This was used when accessing data sets on magnetic tape, either 7–track or 9–track. The label was an identifier assigned to an individual physical tape. It was physically written on the label of the tape (to be read by the computer operator) and written in the header record of the tape (to be read by the Operating System). This option would insure that the correct tape was mounted, so that the desired data (and not some other) would be processed.
who is interested in tape labels is referred to a few references, including
[R_02, page 449; R_24, page 203, and R_25, chapter 12]. | <urn:uuid:c4315858-b193-4428-9f0b-7a7b3551ca2b> | CC-MAIN-2017-09 | http://edwardbosworth.com/My3121Textbook_HTM/MyText3121_Ch22_V01.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170708.51/warc/CC-MAIN-20170219104610-00343-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.913037 | 3,680 | 3.140625 | 3 |
Technology and process upgrades implemented since the controversial 2000 presidential election have made electronic voting machines more secure and reliable to use, the Caltech-MIT Voting Technology Project said in a report last week.
Even so, the only way to ensure the integrity of votes cast with the systems is to have mandatory auditing of the results and of all voting technologies used in an election, the 85-page report cautioned.
Rather than setting security standards for election equipment, the better approach for safeguarding ballot integrity is to hand-count a sufficiently large and random sample of the paper records of votes cast electronically, it said. "The 2000 United States presidential election put a spotlight on the fragility and vulnerability of voting technology," the report said. "It became clear that providing robust, accurate, and secure voting systems remained an important open technical problem" for the United States.
The Voting Technology Project is a joint initiative between MIT and Caltech. It was launched originally to investigate the causes of the voting problems in Florida in 2000 and to make recommendations based on the findings.
Some progress has been made since 2000, said Michael Alvarez, professor of political science at Caltech and co-director of the Voting Technology Project. The antiquated lever-activated punch-card voting systems that led to the infamous hanging chad fiasco in Florida have been mostly replaced with newer, more reliable optical scan and electronic voting systems, he said.
In the upcoming Nov. 6 elections, nearly three out of five counties will use optical-scan technology, with the rest relying on some form of direct-record electronic systems. A very small number of counties will use purely hand-counted paper ballots.
In the past 10 years, there has also been a move away from all-electronic voting systems to electronic systems that support a voter verifiable paper ballot trail, the report noted. That trend has been driven largely by security concerns related to direct-record electronic (DRE) voting machines from companies such as Diebold.
DRE machines processed and stored all ballots electronically and offered little way for voters and election officials to determine for certain whether votes were being recorded and counted correctly. Studies conducted by numerous researchers over the past few years have shown DRE systems to be highly vulnerable to all sorts of tampering and compromises because of their poor design and engineering.
Because of such concerns, much attention has been paid to ensure that votes cast electronically this year have a paper record that can be counted and verified manually if needed. States such as California in particular have led the effort to get voting machine vendors to implement better security. The report pointed to California's decertification of all DRE machines in 2007 as one example of such efforts.
Post-election auditing technologies and approaches have also improved substantially since 2000, thanks mainly to efforts by security researchers and cryptographers, Alvarez said. This year, he said, at least half of all states will conduct post-election audits based on sound statistical principles. Others, including California, have been conducting pilot risk-limiting audits to identify potential issues before votes are cast.
Another big improvement since 2000 is the growing use of centralized statewide voter registration databases. Those databases have enabled quicker voter identification and have given states a better way to address vote loss due to registration problems, Alvarez said. Voter registration databases have also made it easier for state election officials to roll out early voting facilities, he said.
In 2000, between 4 million to 6 million votes were lost nationwide because of voting equipment and ballot problems and because of voter registration problems. But thanks to new technologies and improved processes implemented since then, the number of lost votes is expected to be dramatically lower.
Even so, concerns remain. The increased interest in Internet voting and voting by mail is worrisome, Alvarez noted. Both methods are inherently insecure and vulnerable to tampering and fraud. The federal system to certify electronic voting technologies to specific security standards has also been costly to implement and not particularly effective, he said.
When voters go to the polls this year they will see little that is new in terms of technology Alvarez said. "We haven't had an opportunity to improve voting technology" because of the recession, he said. "The problem that states and counties have had with public finances have made it difficult for election officials to invest in new technologies. We will hopefully see that change as public finances improve."
Jaikumar Vijayan covers data security and privacy issues, financial services security and e-voting for Computerworld. Follow Jaikumar on Twitter at @jaivijayan or subscribe to Jaikumar's RSS feed . His email address is email@example.com. | <urn:uuid:56c78dd3-e98a-4468-b468-4fe51e859ccc> | CC-MAIN-2017-09 | http://www.computerworld.com/article/2492684/government-it/despite-e-voting-improvements--audits-still-needed-for-ballot-integrity.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00215-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.965317 | 938 | 2.609375 | 3 |
Threats to mobile devices are on the rise. But it’s not just malware, worms, and other viruses that pose a danger — mobile phone spam, a new spin on an old nuisance, is emerging as a problem. Mobile phone spam, also called short message service (SMS) spam, sends unwanted text messages, usually advertisements, to a user’s mobile phone.
Unlike traditional spam that targets an email account, mobile phone spam can have an immediate, direct impact on a user’s wallet. This type of spam can be costly for the recipient because many mobile phone users are charged for each text message they receive, including spam. This spam can also cause big problems for a user with their mobile device carrier. An infected phone can also send spam text messages — which frequently results in the victims’ accounts being closed by the cell providers.
McAfee Labs threat researchers expect to see a rise in mobile spam, including advertisements for pills and other pharmaceuticals and various phishing schemes that attempt to lure a user into clicking on a potentially malicious link. These text messages appear to have been sent from a legitimate company, tricking the recipient into providing personal and financial information.
How can you protect yourself from mobile phone spam? Here are three ways you can lessen the impact of spam on your mobile device: | <urn:uuid:5ed832d4-3f2e-43a0-b8e6-14a6193cd0bd> | CC-MAIN-2017-09 | https://www.mcafee.com/mx/security-awareness/articles/how-to-fight-spam-text-messages.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00391-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.937747 | 270 | 2.953125 | 3 |
Intelligent Grids Power a Smarter Future
by Meir Shargal
Regardless of who you ask, the forecast is often the same: in the next two decades, the world’s demand for energy will essentially double. Simultaneously, CO2 regulations and reduction goals — be it for utilities or consumers worldwide — will also explode. Smart grids, which use information technology to help deliver, manage, and monitor electricity for utilities, could provide the capabilities we’ll need.
The International Energy Agency estimated in its Energy Technology Perspectives 2010 report that the global deployment of smart grids can help reduce CO2 emissions by between 0.9 and 2.2 gigatonnes annually by 2050, which is equivalent to the annual emissions of between 300 and 730 mid-sized power plants.1
If you use information-enabled energy, by turning the data you collect from the grid into intelligence, you can create more power with fewer resources.
However, while many are embracing the concept of smart grids, issues ranging from managing renewable energy to ensuring high levels of security still need to be resolved. For example, currently the U.S. National Institute of Standards and Technology is “taking aggressive action to respond to this critical national need” of developing smart grid standards.2
Energy from different sources Another issue is the integration of renewable energy, such as solar and wind power, into a utility’s main grid. With traditional operations, such as coal-fired plants, utilities built their systems to manage reliable, consistent energy that comes from one source. With renewable energy, those resources are intermittent and never guaranteed. They also enter into a utility’s grid at different places, unlike the traditional single source, such as coal being burned at the utility plant.
You need both eyes and ears to be able to control when to take advantage of renewable energy. Incorporating grid management is key when you move from centralized generation, such as coal plants, to distributed, renewable energy generation. To create a smart grid, utilities basically combine their electric grid with a communications network that, through sensors and other devices, gives them intelligence on what is happening on their grid. They can then use that intelligence to make energy decisions and enable consumers to better manage their power usage.
How a utility will manage the large amount of data these smart grids generate is another key issue. Traditionally, utilities generate bills based on monthly meter readings. With smart meters in place and linked to the grid, each meter can send a reading every 10 minutes. Add that new influx of meter data to the flood of data sent by sensors and other devices on the smart grid network, and utilities risk being swamped by a huge wave of data.
To achieve the greatest efficiencies, much less ensure systems aren’t overwhelmed, utilities will need to increase their back office systems’ scalability and reliability. Then they have to integrate that data with their legacy systems in order to turn it into intelligence.
A new level of risk
Security is another concern when creating smart grids. As utilities link their traditional energy grids to communications networks, new vulnerabilities emerge on transmission and distribution networks that need to be protected from cyber attacks. This presents a completely new level of risk that utilities need to consider when building their smart grids.
A smart grid will vary from utility to utility. Each will focus on different requirements and take advantage of different technologies. To succeed, however, they will all have to be smart, secure, and sustainable.
Today CSC provides industry-relevant business solutions and services to help execute smart meter and smart grid programs worldwide for utilities who together supply energy to more than 58 million customers. For information on CSC’s perspective on how smart grids will enable the new energy economy, go to www.csc.com/smartgridPOV.
Meir Shargal is CSC’s Smart Utility practice lead. | <urn:uuid:255a19c1-da8f-425f-8994-69fe8dedd47d> | CC-MAIN-2017-09 | http://www.csc.com/cscworld/publications/56901/56905-the_green_corner_intelligent_grids_power_a_smarter_future | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00567-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.938873 | 795 | 3.125 | 3 |
June 18 — Lawrence Berkeley National Laboratory has been named an Intel Parallel Computing Center (IPCC), a collaboration with Intel aimed at adapting existing scientific applications to run on future supercomputers built with manycore processors. Such supercomputers will potentially have millions of processor cores, but today’s applications aren’t designed to take advantage of this architecture.
Most scientific applications, such as those used to study climate change, combustion, astrophysics, materials, etc., are designed to run on parallel systems, meaning that the problem is divided into smaller tasks so more of the calculations can be done simultaneously to reduce the time to solution for the scientists. With the growing use of manycore processors, such as Intel’s Xeon and Xeon Phi processors which can which can have more than 60 cores in each processor, applications will need to have even more parallelism. Unless applications are modernized, they will not be able to take advantage of the greater computing performance promised by manycore processors.
The Berkeley Lab IPCC will be led by Nick Wright of the National Energy Research Scientific Computing Center (NERSC), and Bert de Jong and Hans Johansen of the Computational Research Division (CRD).
“Although manycore processors will significantly increase supercomputing performance, that’s only part of the equation,” said Wright, who leads NERSC’s Advanced Technologies Group. “To fully capitalize on this capability, we need to modernize the applications our user community uses to advance scientific discovery. Intel Parallel Computing Centers such as ours are helping to support the community to attack this problem.”
Optimizing applications for manycore is important for NERSC, which announced in April that its next-generation supercomputer will be a Cray XC supercomputer using Intel’s next-generation Xeon Phi processor, which will have more than 60 cores. NERSC is working with its 5,000 users to help them adapt their codes to the new system, which will is expected to be delivered in 2016.
The Berkeley Lab IPCC will focus on increasing the parallelism of two widely used applications: NWChem and CAM5, the Community Atmospheric Model. NWChem is a leading application for computational chemistry and CAM5, part of the Community Earth System Model, is widely used for studying global climate. Modernizing these codes to run on manycore architecture will enable the scientific community to pursue new frontiers in the fields of chemistry, materials and climate research. Because both NWChem and CAM5 are open source applications, any improvements made to them will be shared with the broader user community, maximizing the benefits of the project.
“Enabling NWChem to harness the full power of manycore processors allows our computational chemistry and materials community to accelerate scientific discovery, tackling more complex scientific problems and reducing the time researchers have to wait for simulations to complete,” says de Jong, who leads CRD’s Scientific Computing Group and is a lead developer of the NWChem software. “Advances made by our IPCC will be shared with the developer community, including lessons learned and making our code available as open source.”
The goal is to deliver enhanced versions of NWChem and CAM5 that at least double their overall performance on manycore machines of today. The research and development will be focused upon implementing greater amounts of parallelism in the codes, starting with simple modifications such as adding or modifying existing components and going as far as exploring new algorithmic approaches that can better exploit manycore architectures.
“The open-source scientific community truly depends on CAM components running effectively at NERSC. And climate scientists have always been early adopters of cutting-edge architectures,” says Johansen, a computational science researcher at Berkeley Lab. “With more performance and more parallelism, scientists can accelerate their simulations and more accurately represent atmospheric dynamics. This collaboration with Intel will help climate science developers leverage NERSC’s and Intel’s network of resources and manycore expertise.”
Berkeley Lab is an ideal collaborator for this project. The lab is home to NERSC, the U.S. Department of Energy’s most scientifically productive supercomputing center with more than 5,000 users running about 700 different applications. CRD is home to fundamental research programs in computer science, applied mathematics, and computational science where researchers investigate future directions in scientific computing and work to develop new tools and technologies to fully exploit the increasing power of supercomputers.
According to Wright, NERSC staff will conduct extensive outreach and training to share what they have learned with NERSC’s broader user community. This will supplement the training and outreach efforts NERSC is already doing to support its users on its current flagship supercomputer “Edison,” a Cray XC30 supercomputer that uses Intel Xeon “Ivybridge” processors. Additionally, the work will be part of the NERSC’s Application Readiness program to help prepare users for the expected 2016 delivery of “Cori,” a Cray XC supercomputer architected with Intel’s next-generation Xeon Phi processor (named “Knights Landing”), which will have more than 60 cores per processor.
Berkeley Lab is the first Department of Energy laboratory to be named an IPCC. Other IPCCs are located at leading universities and research institutions around the world.
About Berkeley Lab Computing Sciences
The Lawrence Berkeley National Laboratory (Berkeley Lab) Computing Sciences organization provides the computing and networking resources and expertise critical to advancing the Department of Energy’s research missions: developing new energy sources, improving energy efficiency, developing new materials and increasing our understanding of ourselves, our world and our universe. ESnet, the Energy Sciences Network, provides the high-bandwidth, reliable connections that link scientists at 40 DOE research sites to each other and to experimental facilities and supercomputing centers around the country. The National Energy Research Scientific Computing Center (NERSC) powers the discoveries of 5,500 scientists at national laboratories and universities, including those at Berkeley Lab’s Computational Research Division (CRD). CRD conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation.
Source: Lawrence Berkeley National Laboratory | <urn:uuid:925f436f-ba54-4349-b759-246669e9033f> | CC-MAIN-2017-09 | https://www.hpcwire.com/off-the-wire/berkeley-lab-intel-collaborate-updating-scientific-codes-manycore-architectures/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00443-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.919444 | 1,295 | 2.953125 | 3 |
Water Robots Hit the Waves
/ November 22, 2011
Last Thursday, four self propelled robots called Wave Gliders left San Francisco for a 60,000 kilometer journey. These robots, each of which is about the size of a dolphin, are built by Liquid Robotics, and will travel together to Hawaii, then split into pairs. One pair will head to Japan while the other ventures to Australia, IEEE Spectrum magazine reports.
Solar-powered sensors aboard the wave gliders will measure water temperature, clarity and salinity, and oxygen content; gather information on wave features and currents; and collect weather data. The point of the expedition, so to speak, is to “push the boundaries of science, and prove to the world that this type of technology is ready to increase our understanding of the ocean,” Graham Hine, senior vice president of operations, told IEEE Spectrum.
The collected data is streaming via the Iridium satellite network and will be made freely available in accessible form on Google Earth’s Ocean Showcase. For researchers who register, the data will be available in a more complete form.
Photos courtesy of Liquid Robotics | <urn:uuid:46e086ca-df8a-4fb6-a168-cf923d6535d3> | CC-MAIN-2017-09 | http://www.govtech.com/photos/Photo-of-the-Week-Water-Robots-Hit-the-Waves-11222011.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00388-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.90216 | 231 | 3.109375 | 3 |
Not having an API is becoming like not having a website, but the interface has got to be easy for outside developers to work with.
When Neil Fantom, a manager at the World Bank, sat down with the organization's technology team in 2010 to talk about opening up the bank's data to the world at large, he encountered a bit of unfamiliar terminology. "At that time I didn't even know what 'API' meant," says Fantom.
As head of the bank's Open Data Initiative, announced in April 2010, Fantom was in charge of taking the group's vast trove of information, which previously had been available only by subscription, and making it available to anyone who wanted it. The method of doing that, he would learn, would be an application programming interface.
The API would place thousands of economic indicators, including rainfall amounts, education levels and birth rates -- some metrics going back 50 years -- at the disposal of developers to mix and match and present in any way that made sense to them. The hope was that this would advance the bank's mission of fighting poverty on a global scale by calling on the creativity of others. "There are many people outside the bank who can do things with the data set we never thought about," says Fantom.
"There are many people outside the bank who can do things with the data set we never thought about," says Neil Fantom, a manager at the World Bank.
One developer, for instance, created an app that married the bank's rainfall data to Google Maps to estimate how much rainwater could be collected on rooftops and subsequently used to water crops in different parts of the world. Another app provides facts about energy consumption and shows individuals what they can do to fight climate change.
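For a developer, pulling one of those indicators down is a single HTTP request. The sketch below shows roughly what that looks like in Python. The endpoint layout and the example indicator code (SP.POP.TOTL, total population) follow the World Bank's publicly documented data API, so treat the details as illustrative rather than an exact description of what the bank shipped in 2010.

```python
# Minimal sketch: fetch one World Bank indicator over the public data API.
# Endpoint layout and the indicator code are illustrative, based on the API
# as publicly documented; specifics may differ from the original 2010 release.
import json
import urllib.request

BASE = "https://api.worldbank.org/v2"

def fetch_indicator(country="BR", indicator="SP.POP.TOTL", start=1960, end=2012):
    """Return [(year, value), ...] for one country and one indicator."""
    url = (f"{BASE}/country/{country}/indicator/{indicator}"
           f"?format=json&date={start}:{end}&per_page=200")
    with urllib.request.urlopen(url) as resp:
        # The API returns a two-element array: [paging metadata, data records].
        meta, rows = json.load(resp)
    return [(r["date"], r["value"]) for r in (rows or []) if r["value"] is not None]

if __name__ == "__main__":
    for year, value in fetch_indicator()[:5]:
        print(year, value)
```

From there, joining the results to a map layer or a charting library is ordinary application code, which is exactly the work the bank wanted to hand to outside developers.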
Fantom and the World Bank aren't alone in this trajectory. A decade ago, open APIs were a novelty, but in the last few years they've been adopted at an accelerating rate. ProgrammableWeb, a website that tracks public APIs, listed more than 8,800 in early April. According to the site's data, it took from 2000 to 2008 for the number of APIs to reach 1,000, and then another 18 months for that to double. The jump from 7,000 to 8,000 took just three months.
The APIs cover a wide range of categories, including business, shopping, messaging, mapping, telephone, social, financial and government, according to the ProgrammableWeb website. They're becoming as necessary to an organization as a website. "In business today, an open API is more or less table stakes. It's something you have to have," says Stephen O'Grady, an analyst at RedMonk, an analysis firm that focuses on developers. "Increasingly, your traction is going to be driven by how open and how programmatically manipulable your product is."
When Best Buy first launched its API, BBYOpen, in 2009, it gave developers access only to the chain's products catalog, with descriptions and prices for all the items it had on sale, in the hopes that doing so would bring in more customers. That was part of a deliberate strategy to start slowly, says Steve Bendt, director of emerging platforms at Best Buy. "We had to prove these things over time," he says. "We started to prove out that this is a very vibrant and viable area to pursue."
What you need to know about creating open APIs to your data:
Make it easy. Outside developers -- those at your customers' shops -- may have great ideas for how to use the data you make available, but the API itself needs to be understandable and easy to work with. Clear documentation and tools to help are must-haves.
Make sure your licensing terms are clear and fair. Successful APIs tend to have MIT-style open-source software licenses.
Use REST unless you absolutely need SOAP. About three quarters of all APIs are REST-based, according to ProgrammableWeb, with SOAP a distant second.
Be prepared for cultural resistance. Some of the data 'owners' may be reluctant at first to share the jewels. You might explain how the World Bank, Best Buy, Bloomberg and others have used the technique to reach customers in new ways and/or further their organization's mission.
But external developers wanted more, so the company added the ability to access reviews and ratings for products, find a nearby store and check whether a product is available there, and purchase the item through the website or mobile app in question, perhaps with a single click if the user has linked a credit card to the app.
It's been a hit. The mobile apps ShopSavvy, RedLaser and Milo all use BBYOpen as part of their apps. The makers of the app get a commission on sales through Best Buy's affiliate program. Shoppers can search for an item, or scan a bar code, and get information on pricing from various sellers.
Of course, that might mean a customer using the app might wind up buying from a competitor instead, but Bendt says that since websites and mobile apps have changed how people shop, what's important for Best Buy is to be in the mix. "If we're not in the consideration set, that's a missed opportunity." And the fact that the API makes it possible for customers to find out if a product is available for pickup at a nearby store once they've purchased it helps provide a competitive edge over online-only retailers, he says. "Now you can search for, buy and pick up within a matter or 20 to 40 minutes."
Legacy data issues
The idea of an in-store pickup option actually came from external developers, Bendt says, and it took the chain some effort to adapt its legacy system to make inventory data available through the API; the data needed to be reformatted to be compatible. "The systems were built at a time before web services and APIs were in active use," he explains. "It wasn't built in a way to expose it externally to the developer."
The specifics of how they did that varied greatly depending on the data source, but generally the team would try to expose some "snapshot" of the data, updated as frequently as possible. If the data proved useful, they found ways to make it available in closer to real time.
Best Buy's strategy was to start slowly, says Steve Bendt,the retailer's director of emerging platforms. Over time, it's added more data for external developers to incorporate into apps..
Getting existing systems to work with the new API was also a challenge at the World Bank, says Malarvizhi Veerappan, open data systems lead. Her group originally struggled with latency issues because their 8,000 different economic indicators were not all directly linked to one another. It was important, she says, to create a structure that could incorporate all that historical data and grow as new information accumulated.
"We didn't want the API to be a separate application. We wanted it to be part of everything else we did with the data," she says. "We needed to connect it back to our data system. It did require our improving our internal data system."
As the API grew, the team added performance monitoring and instituted policies to ensure good traffic flow. The organization also increased server capacity and added server redundancy to assure availability of the API.
When financial information provider Bloomberg LP launched its Open Data Initiative in February 2012, the new open API -- BLPAPI -- was actually version 3 of the software development kit the company had already been using internally, says Shawn Edwards, Bloomberg's chief technology officer. In the old days, Bloomberg customers were given a dedicated terminal that connected them to the company's mainframe, which delivered market data, news and analysis.
Getting existing systems to work with the new API was also a challenge at the World Bank, says Malarvizhi Veerappan, open data systems lead.
(The name "Open Data Initiative" for both the World Bank and Bloomberg projects is just a coincidence; neither has any formal relationship with the Open Data Initiative that is about making use of publicly available data from various government sources.)
Bloomberg's project has since evolved into a software package that customers install on their own systems. Even before making it open, the company used the API to develop specific applications that allow customers to manipulate Bloomberg data on their own desktops.
With the launch of its open API, the company is now allowing customers to create their own apps, such as watch lists for selected securities or their own trading systems. They also allow outside developers to create apps that draw on other data sources as well as Bloomberg's. "We're not giving away market data. What this allows people to do is integrate with other services," Edwards says. "The API is a piece of software that connects to the Bloomberg cloud."
It just makes sense to let others do the app development, he explains. "We're not in the business of selling software," he says. "We're going to win their business by providing the best services and the best data."
When Bloomberg put out the open API, it decided to remove some of the old features that the previous versions supported. There was discussion as to whether the API should be backward compatible. "We said no," Edwards says. That meant some customers wound up with deprecated functions, but Edwards says it makes the API less cluttered with out-of-date functions.
Like most open APIs, the BLPAPI supports a variety of languages, so developers can choose the best one for their app. Someone running an overnight batch process might choose Perl, or the recently released Python version. An electronic trading system would probably run on C or C++. Quantitative analysts, or quants, generally use the data in Matlab. The API also supports Java, .Net, and C#, and Edwards says some developers are using an R wrapper as well.
One key to making an API successful lies in making it easy to use. Back in 2000, RedMonk's O'Grady says, APIs often used web services protocols, but they proved too complex. Now about three quarters of all APIs are REST-based, according to ProgrammableWeb, with SOAP a distant second. "Because developers overwhelmingly preferred this, it's now the dominant protocol for API systems," O'Grady says.
The importance of clarity
Another important requirement is having extensive, clear documentation, and tools to help developers do their jobs. Bloomberg's initial documentation was aimed more at the financial experts who are its customers, and had to be reworked to tell developers what they needed to know.
""The API is a piece of software that connects to the Bloomberg cloud," says Shawn Edwards, Bloomberg's chief technology officer. "We're not giving away market data. What this allows people to do is integrate with other services."
The BLPAPI also tried to make work easier for developers by providing a replay tool that allows them to perform trial runs of their apps, but that was not available when it first launched. Best Buy's BBYOpen also gives developers a set of tools, including a test console to run apps and an automatic widget generator. The World Bank offers a query builder that lets developers select options.
Tools and ideas don't all flow outward from the organizations; external developers often provide information and frameworks to help each other out. BBYOpen, for instance, offers libraries created by developers in Java, .NET, PHP and other languages. At the World Bank, there's a discussion forum where developers can ask questions, and others jump in with solutions.
"They don't wait for us to respond to questions in the forum," says Veerappan, who is working on giving the forum more features and converting it into a knowledge base. "It's kind of interesting to see the knowledge that other developers have gained in the API," she says.
Successful APIs tend to have MIT-style open-source software licenses; the World Bank, for example, uses an Open Source Attribution License. O'Grady says one key to success is being very clear about the terms of service, and not having an overly restrictive license that discourages use.
He says Stack Overflow, a collaboratively edited question-and-answer site for programmers, has a very nice API, for instance, but that the terms of using it are difficult to navigate. Twitter irritated some developers, he adds, by being too insistent about issues such as how the time stamp was formatted, or insisting that the word "tweet" must be capitalized. While developers are unlikely to shun Twitter for being difficult to work with, O'Grady says, "Certainly in some cases if your product isn't that popular people will abandon it."
Another non-technological challenge to creating an open API is getting other people in the organization, who are used to dealing in proprietary information and maintaining authority over their brand, to cede some control. "I had to do a lot of convincing," Bloomberg's Edwards says. "It's a different way of thinking, when you've been controlling your product." But he says it was important to distinguish between the market data Bloomberg sells and things like the symbology and software that the company doesn't need to control. "The time for all these proprietary interfaces is gone," he says. "It doesn't add value anymore." | <urn:uuid:ef16261d-ab8f-44ec-91cd-a82d6c534a70> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2165357/applications/open-your-data-to-the-world.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00264-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.967419 | 2,751 | 2.609375 | 3 |
The idea of building a robotic manufacturing facility in space might have been in the realm of a Star Wars, Star Trek or other science fiction story, but like some of the technologies in those tales, reality may soon imitate art.
First off, you may recall that NASA is looking for an asteroid weighing about 500 tons that could be moved into within the moon's orbit so astronauts can examine it as early as 2021.
[MORE: The sizzling world of asteroids]
Because asteroids are loaded with minerals that are rare on Earth, near-Earth asteroids and the asteroid belt could become the mining centers for remotely operated excavators and processing machinery. In 20 years, an industry barely imagined now could be sending refined materials, rare metals and even free, clean energy to the Earth from asteroids and other bodies," according to NASA scientists in a recently published paper entitled: "Affordable, Rapid Bootstrapping of the Space Industry and Solar System Civilization."
The scientists say two fundamental developments make this prospect possible: robotics and the discovery of fundamental elements to make plastic and rubber and metals existing throughout space. Another critical technology also is coming in at just the right time: manufacturing in the form of 3D printers that can turn out individual pieces that can be assembled into ever-more-complex machinery and increasingly capable robots.
"Now that we know we can get carbon in space, the basic elements that we need for industry are all within reach," said one of the paper's authors, NASA physicist Phil Metzger said. "That was game-changing for us. The asteroid belt has a billion times more platinum than is found on Earth. There is literally a billion times the metal that is on the Earth, and all the water you could ever need. The idea is you start with resources out of Earth's gravity well in the vicinity of the Earth. But what we argued is that you can establish industry in space for a surprisingly low cost, much less than anybody previously thought."
Metzger said that when the scientists wrote this paper we were focused on the moon as a source of near-Earth resources, but near Earth asteroids work equally well and offer several additional advantages. "It takes less fuel to bring resources away from the lower gravity of an asteroid, and since the ultimate goal is to move the industry to the asteroid main belt starting with asteroids first will help develop the correct technologies," Metzger said.
A near-Earth asteroid or other nearby body presumably will contain enough material to allow a robotic system to mine the materials and refine them into usable metal or other substances. Those materials would be formed into pieces and assembled into another robot system that would itself build similar models and advance the design.
"The first generation only makes the simplest materials, it can include metal and therefore you can make structure out of metal and then you can send robots that will attach electronics and wiring onto the metal," Metzger said. "So by making the easiest thing, you've reduced the largest amount of mass that you have to launch."
Metzger said the first generation of machinery would be akin to the simple mechanical devices of the 1700s, with each new generation advancing quickly to the modern vanguard of abilities. They would start with gas production and the creation of solar cells, vital for providing a power source. Each new robot could add improvements to each successive model and quickly advance the mining and manufacturing capabilities. It would not take long for the miners to produce more material than they need for themselves and they could start shipping precious metals back to Earth, riding on heat shields made of the leftover soil that doesn't contain any precious material.
Perhaps the most unusual aspect of the whole endeavor is that it would not take many launches from Earth to achieve, Metzger said. Launch costs, which now run at best $1,000 per pound, would be saved because robots building themselves in space from material gathered there wouldn't need anything produced by people. Very quickly, only the computer chips, electronics boards and wiring would need to come from Earth.
"We took it through six generations of robotic development and you can achieve full closure and make everything in space," Metzger said. "We showed you can get it down to launching 12 tons of hardware, which is incredibly small." For comparison, that would be less than half the weight of the Apollo command and service modules flown on a moon mission.
The operation the scientists acknowledge, would take years to establish, but not as long as one might think.
The payoff for Earth would be felt when the first shipments of materials began arriving from space. A sudden influx of rare metals, for instance, would drive down the price of those materials on Earth and allow a similar drastic reduction in manufacturing costs for products made with the materials, Metzler stated.
The article was published in the Journal of Aerospace Engineering.
Check out these other hot stories: | <urn:uuid:8051c37f-bba6-4a1e-9e42-2da3ce41c740> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2224711/data-center/nasa--asteroid-based-manufacturing-not-science-fiction.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00384-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.9671 | 988 | 4 | 4 |
One of technology's missions is to allow people to think less about the technology they’re using and focus more on the thing they’re using the tech for. A $900,000 National Science Foundation grant issued to researchers at Rice University will allow for the best properties of optical and electronic computer networks to be merged, creating a more efficient network architecture that allows researchers to get results faster and spend more time thinking about their research.
The three-year grant project, called Big Data and Optical Lightpaths-Driven Networked Systems Research Infrastructure (BOLD), is led by Eugene Ng, who said he wants to eliminate a bottleneck experienced even by researchers using advanced supercomputing clusters like the one at Rice. Ng, an associate professor of computer science and electrical and computer engineering, worked with his team to establish preliminary findings that point toward the possibility of big strides in networking architecture.
“I’m very optimistic,” Ng said. “The early simulations we have done in past work have shown basically that these kinds of networks can get nearly optimal performance without the complexity and cost of using conventional network technologies.”
Optical and electrical networks each have their strong points, and Ng’s research will attempt to marry the two, eliminating a common bottleneck in computing. It’s always possible to throw more money at computer networks to increase their capacity, Ng said, but increasing capacity becomes less efficient and more difficult as demand increases, so they’re looking to create a more scalable, efficient solution.
Electric packet switches have the benefit of low latency, while optical networks have the benefit of carrying very high bandwidth streams. But the problem comes when it’s time to convert optical data into electrical signals that computers can use. Such exchanges are costly, both in terms of time and physical energy, Ng said.
“The purpose [of our research] is to meet the data demands of research where large amounts of data must be moved from measurement instruments to compute centers where they’re stored in file systems," Ng said. "While you’re processing [the data], they need to be repeatedly extracted from the file system into the compute node, and then newly generated data from the processing could also be exchanged with the compute node and be stored back to the file system.” No matter how fast a computing cluster is, processing speed is typically limited by the bottleneck created by the network’s ability to move the data.
Ng said his research will focus on evolutions in hardware as well as the software and middleware that will make using the new network architecture user-friendly. “The user essentially specifies simple functions that they want to apply to the processing of the data. And the middleware deals with how to move the data, how to store the data, how to schedule that computation to run, and utilize the compute resources in a cluster as efficiently as possible,” Ng said. Once this technology becomes available, Ng said he’d like to see researchers focusing more on their science and algorithms and worrying less about their computers and waiting for jobs to finish.
“The infrastructure is going to accelerate their work and make it more efficient,” he said. “Oftentimes the CPU may sit idle doing no computation because they’re waiting for the data to arrive, so all those CPU cycles are wasted if the network can’t keep up with it.”
The project’s first milestone will be a small prototype system built in the Rice campus lab where the team can begin experimenting, Ng said. The next step will be to integrate the system with Rice’s existing supercomputer cluster so that anyone using that cluster for research can begin reaping the rewards of increased speed and efficiency.
Though the grant supplied three years of funding for the project, Ng said he expects what they produce to stay in use much longer than that. | <urn:uuid:92f2cd1e-dbff-451d-8276-1dc3a25a2339> | CC-MAIN-2017-09 | http://www.govtech.com/Researchers-Target-Network-Bottlenecks.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00084-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.953642 | 802 | 2.875 | 3 |
Black Box Explains...Insertion loss
Insertion loss is a power loss that results from inserting a component into a previously continuous path or creating a splice in it. It is measured by the amount of power received before and after the insertion.
In copper cable, insertion loss measures electrical power lost from the beginning of the run to the end.
In fiber cable, insertion loss (also called optical loss) measures the amount of light lost from beginning to end. Light can be lost many ways: absorption, diffusion, scattering, dispersion, and more. It can also be from poor connections and splices in which the fibers don’t align properly.
Light loss is measured in decibels (dBs), which indicate relative power. A loss of 10 dB means a tenfold reduction in power.
Light strength can be measured with optical power meters, optical loss test sets, and other test sets that send a known light source through the fiber and measure its strength on the other end. | <urn:uuid:faf28ec2-d413-47e3-8ad0-0485bed9c8b4> | CC-MAIN-2017-09 | https://www.blackbox.com/en-us/products/black-box-explains/black-box-explains-insertion-loss | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00084-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.954694 | 204 | 3.390625 | 3 |
As wearables continue to evolve, leagues are finding themselves asking many new questions about its use.
There is no question that wearable technology has an amazing amount of potential when used by players in professional sports leagues, but the specific way in which collected data is used is starting to generate a massive number of ethical questions.
Athletes already have massive amounts of data collected and analyzed about their performances on the field.
For many years, leagues have been measuring how fast athletes move, how far they run, how fast they throw, how frequently they score and a great deal more. In fact, the data collection has become quite specific. It’s possible to know the average speed of a pitcher during his or her second inning of play while at a home game, while playing on an even numbered day of the month. With wearable technology, the amount of data collected is even greater, with a larger amount of specificity.
Wearable technology measures precise performance factors, health metrics and even tracks a player’s sleep.
A recent tech conference held in Toronto, Canada held a panel on wearables and brought up the issue of privacy that is inherent to this increasingly popular trend in pro sports. While it is not unheard of for a team to want to know everything it can about its players in order to ensure the best possible performance while reducing the risk of injury, what is not yet outlined is at what point does it cut into the rights of the player to his or her own privacy.
Among the key factors being discussed in this wearables debate is that the evolution of technology has occurred more quickly than the collective bargaining agreements that decide the way that pro leagues and their players interact. For instance, the NFL now has its players wearing radio-frequency identification (RFID) chips that are located in their shoulder pads. This allows the movements of each player to be tracked and transmitted in real-time. That tech allows broadcasters to share distance traveled during a run and other interesting data while the game is still in play.
However, new wearable technology can also help to track a great deal more and provides a broader amount of information about a player’s health and lifestyle. The question now being asked is: at what point has the tracking gone too far. | <urn:uuid:a230ba58-9379-450b-b2b7-7b521d4a841a> | CC-MAIN-2017-09 | http://www.mobilecommercepress.com/tag/sports-wearables/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00436-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.979488 | 456 | 2.625 | 3 |
Cloud storage, especially object storage, is often marketed by touting its “durability,” with many providers boasting eleven or thirteen “nines”, in other words 99.999999999% reliability. It sounds great—as close to 100% reliable as you can get. But what is durability in relation to storage, and do you really need those eleven nines?
All storage resides on an underlying media, in most cases hard disk drives, and in some cases flash storage arrays. Regardless of where the media is located within the data center, different technologies can access it, split it up, and share it among different hosting products. You can read more about types of cloud storage to see how the primary platforms of file, block, and object differ.
Durability is a measurement of the tiny errors that occur in files due to these underlying media. When you write, read, and rewrite gigabytes, terabytes, and petabytes of information to the same drive, one or more individual bytes can get corrupted or lost.
Not every service provider even offers a durability rating as it can be difficult to measure and guarantee. A more important question to ask your cloud hosting provider is about how they are protecting against data loss generally. What technologies are in play? What are your odds of recoving data? How can you tie in backup?
For object storage, which is designed around storing massive quantities of files, especially media-rich files like documents, images, and video, durability becomes especially important. Once you reach the petabytes, dropping even a single nine of durability, say from 99.999999999% to 99.99999999% might mean losing 90 or 200 extra files in the case of data loss.
One method of fighting byte loss is erasure coding. When a file is copied to cloud storage, erasure coding splits it up and adds an extra piece of the file that is a duplicate. This means that when a single file is lost, it can be reconstructed from the pieces spread across the entire storage area.
So instead of worrying about the number of nines, which is hard to prove anyway, ask if erasure coding or another backup method is available to ensure the availability of your data at all times.
Erasure coding may not be available for all forms of cloud storage, however. Deduplication is another way that copies can be kept without storing two complete duplicate versions of every file for every backup. The system only copies newly changed files to your backup, keeping the storage footprint down. If the backup is corrupted, it can not be reconstructed, unlike erasure coding, but a deduped backup of block or file storage is a good way to hedge your bets against data loss.
A full copy is also faster to restore than one rebuilt from erasure-coded storage.
When planning your cloud storage, the vital questions become “What type of storage is best suited for my environment?” and “Can this data stand reduced durability?” Critical business data that you need for daily operations should absolutely have a full backup and preferably multiple backups in geographically separated data centers.
If you have lower file volume, in the gigabytes or a few terabytes, durability is much less important, as losing a few bytes and corrupted files will be much smaller proportionally compared to a petabyte or exabyte environment. | <urn:uuid:a570f29b-d89c-49b6-8037-10977e0366c5> | CC-MAIN-2017-09 | https://www.greenhousedata.com/blog/what-is-cloud-storage-durability-do-you-really-need-11-nines | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00436-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.947766 | 687 | 2.796875 | 3 |
A load balancer is a hardware or virtual software device that intelligently distributes application and network traffic across multiple servers. The goal of the load balancer is to make sure that all users are served information as quickly as possible, and that more work gets done in less time. Today’s application delivery controllers (ADCs) offer load balancing technology as one of their features.
In addition to optimizing server resources, maintaining application availability, and improving application performance, the load balancer provides health monitoring across all connected servers. This capability ensures that servers and application are responding correctly. If, for any reason, the load balancer senses a problem, it removes unresponsive servers from the pool while maintaining a balanced traffic flow.
While a traditional load balancer satisfied an organization’s requirements two decades ago, they can’t keep up with modern availability, acceleration, and security demands. Application delivery controllers have bridged the gap by providing load balancing plus new content delivery features.
Learn more about the A10 portfolio of high-performance ADCs. Download the white paper titled the Evolution of ADCs: The A10 Advantage Over Legacy Load Balancers. | <urn:uuid:746db1dd-4db3-4d76-a8be-6b468ba80c0b> | CC-MAIN-2017-09 | https://www.a10networks.com/resources/glossary/load-balancer | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00612-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.901109 | 235 | 2.640625 | 3 |
In the Tactical Deception Field Manual FM 90-2 of the US Army, the concept of deception is described as those measures designed to mislead enemy forces by manipulation, distortion, or falsification of evidence to induce him to react in a manner prejudicial to his interests. In the cyber world the deception concept and deception techniques have been introduced in the early 1990 with the use of honeypots .
Honeypots are decoy systems that attract attackers to attempt to compromise them , whose value lies in being probed, attacked or compromised . In addition, honeypots can be used to gain advantage in network security. For instance they provide intelligence based on information and knowledge obtained through observation, investigation, analysis, or understanding .
Deception techniques such as honeypots are powerful and flexible techniques offering great insight into malicious activity as well as an excellent opportunity to learn about offensive practices. In this post I will be introducing how to create a honeypot for research purposes to learn about attack methods.
If you want to learn more about computer deception I recommend to read Fred Cohen articles. In regard to honeypots in I definitely recommend the landmark book authored by Lance Spitzner in 2002 and published by Addison-Wesley. One of the many things Lance introduces on his book is the concept of level of interaction to distinguish the different types of honeypots. Basically, this concept provides a way to measure the level of interaction that the system will provide to the attacker. In this post I will be using a medium interaction honeypot called Kippo.
A important aspect before running a honeypot is to make sure you are aware of the legal implications of running a honeypots. You might need to get legal counsel with privacy expertise before running one. The legal concerns are normally around data collection and privacy, especially for high-interaction honeypots. Also you might need permission from your hosting company if you would for example run a honeypot on a virtual private server (vps). Lance on his book as one full chapter dedicated to the legal aspects. Regarding hosting companies that might allow you to run a honeypot you might want to check Solar vps, VpsLand or Tagadap.
Let’s illustrate how to setup the Kippo SSH honeypot. Kippo is specialized in logging brute force attacks against SSH. It’s also able to store information about the actions the attacker took when they manage to break in. Kippo is considered a low interaction honeypot. In addition I will be demonstrating how to use a third party application called Kippo-graph to gather statistics and visualize them.
Based on the tests made the easiest way to setup Kippo is on a Debian linux distro. To install it we need a set of packages which are mentioned in the requirements section of the project page. On my case I had a Debian 6 64 bits system with the core build packages installed and made the following:
Using apt (advanced packaging tool) which is the easier way to retrieve, configure and install Debian packages in automated fashion. I installed subversion to be able to then download Kippo. Plus, installed all the packages mentioned in the requisites. Then verified python version to make sure is the one needed. During the installation of the mysql-server package you should be prompted to enter a password for the mysql.
# apt-get update
# apt-get install subversion python-zope python-crypto python-twisted mysql-server ntp python-mysqldb
# python –V
Check the status of MySQL, then try to login with the password inserted during the installation:
# service mysql status
# mysql -u root -p
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 42
Server version: 5.1.66-0+squeeze1 (Debian)
Check if we have a timesource configured and NTP is syncing:
Download Kippo using svn. Create the initial configuration file and then login into MySQL and create the necessary database and tables:
#svn checkout http://kippo.googlecode.com/svn/trunk/ /opt/kippo
#cp kippo.cfg.dist kippo.cfg
mysql -u root –p
mysql> CREATE DATABASE kippo;
mysql> USE kippo;
mysql> SOURCE /opt/kippo/doc/sql/mysql.sql
mysql> show tables;
Edit the kippo.cfg file and change the hostname directive, ssh port, and banner file. Also uncomment all the directives shown above regarding the ability of Kippo to log into the MySQL database. Make sure you adapt the fields to your environment and use strong passwords:
ssh_port = 48222
hostname = server
banner_file = /etc/issue.net
host = localhost
database = kippo
username = root
password = secret
Edit the file /etc/issue.net on the system and insert a banner similar to the following:
This system is for the use of authorized users only. Individuals using this computer system without authority, or in excess of their authority, are subject to having all of their activities on this system monitored and recorded by system personnel. In the course of monitoring individuals improperly using this system, or in the course of system maintenance, the activities of authorized users may also be monitored. Anyone using this system expressly consents to such monitoring and is advised that if such monitoring reveals possible evidence of criminal activity, system personnel may provide the evidence of such monitoring to law enforcement officials.
Verify which username and password is used to deceive the attacker that he got the correct credentials and break in:
# cd /opt/kippo/data
# cat userdb.txt
Then add a non-privileged user to be used to launch Kippo. Its also needed to change the ownership of the Kippo files and directories to the user just created:
# useradd -m –shell /bin/bash kippo
# cd /opt/
# chown kippo:kippo kippo/ -R
# su kippo
$ cd kippo
Starting kippo in background…Generating RSA keypair…
By default – as you might noticed in the kippo.cfg – Kippo runs on port 2222. Because we start Kippo as a non-privileged used we cannot change it to port 22. One way to circumvent this is to edit the /etc/ssh/sshd_config file and change the listening port to something unusual which will be used to manage the system. Then create an iptables rule that will redirect your TCP traffic destined to port 22 to the port where Kippo is running.
#cat /etc/ssh/sshd_config | grep Port
#service ssh restart
#iptables -t nat -A PREROUTING -i eth0 -p tcp –dport 22 -j REDIRECT –to-port 48022
Depending on your setup you might need or not additional firewall rules. In my case I had the system directly exposed to the Internet therefore I needed to create additional firewall rules. For the iptables on Debian you might want to check this wiki page.
Create a file with the enforcement rules. I will not be including the redirect rule because will allow me to have control when to start and stop redirecting traffic.
# Sample firewall configuration
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:RH-Firewall-1-INPUT – [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -p icmp –icmp-type any -j ACCEPT
-A RH-Firewall-1-INPUT -m state –state ESTABLISHED,RELATED -j ACCEPT
-A RH-Firewall-1-INPUT -m state –state NEW -m tcp -p tcp –dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -m state –state NEW -m tcp -p tcp –dport 2222 -j ACCEPT
-A RH-Firewall-1-INPUT -m state –state NEW -m tcp -p tcp –dport 48022 -j ACCEPT
-A RH-Firewall-1-INPUT -m state –state NEW -m tcp -p tcp –dport 48080 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT –reject-with icmp-host-prohibited
I will be allowing ICMP traffic plus TCP port 22 and 2222 for Kippo and 48022 to access the system. Then the 48080 will be for the kippo-graphs.
Note that you might want to add the –source x.x.x.x directive to the rules that allow access to the real ssh and http deamon allowing only your IP address to connect to it.
Then we apply the iptables rules redirecting the contents of the file to the iptables-restore command. Then we need a small script for each time we restart the machine to have the iptables rules loaded as documented on the Debian wiki.
#iptables-restore < /etc/iptables.rules
/sbin/iptables-restore < /etc/iptables.up.rules
Change the file mode bits
#chmod +x /etc/network/if-pre-up.d/iptables
Subsequently we can install kippo-graphs. To do that we need a set of additional packages:
#apt-get install apache2 libapache2-mod-php5 php5-cli php5-common php5-cgi php5-mysql php5-gd
After that we download kippo-graph into the the webserver root folder, untar it, change the permissions of the generated-graphs folder and change the values in config.php.
# wget http://bruteforce.gr/wp-content/uploads/kippo-graph-0.7.2.tar –user-agent=””
# md5sum kippo-graph-0.7.2.tar
#tar xvf kippo-graph-0.7.2.tar
# cd kippo-graph
# chmod 777 generated-graphs
# vi config.php
Edit the ports configuration settings, under apache folder, to change the port into something hard to guess like 48080. And change the VirtualHosts directive to the port chosen.
#service apache2 restart
Then you can point the browser to your system IP and load the kippo-graphs url. After you confirmed its working you should stop apache. In my case I just start apache to visualize the statistics.
With this you should have a Kippo environment running plus the third party graphs. One important aspect is that, every time you reboot the system you need to: Access the system using the port specified on the sshd config file ; Apply the iptables redirection traffic ; Stop the apache service and start Kippo. This can be done automatically but I prefer to have control on those aspects because then I now when I start and stop the Kippo service.
#ssh vps.site.com -l root -p 48022
#iptables -t nat -A PREROUTING -i eth0 -p tcp –dport 22 -j REDIRECT –to-port 2222
#service apache2 stop
Stopping web server: apache2 … waiting .
$ cd /opt/kippo/
Starting kippo in background…
Loading dblog engine: mysql
Based on my experience It shouldn’t take more than 48 hours to have someone breaking in the system. You can than watch and learn. In addition after a couple of hours you should start seeing brute force attempts.
If you want to read more about other honeypots, ENISA (European Network and Information Security Agency) just recently released a study about honeypots called “Proactive Detection of Security Incidents II: Honeypot”. It’s the result of a comprehensive and in-depth investigation about current honeypot technologies. With a focus on open-source solution, a total of 30 different standalone honeypots were tested and evaluated. It’s definitely a must read.
In a future post I will write about the findings of running this deception systems to lure attackers.
The use of Deception Tecniques : Honeypots and decoys, Fred Cohen
The Art of Computer Virus Research and Defense, Peter Szor, Symantec Press
Honeypots. Tracking Hackers, Lance Spitzner, Addison-Wesley
Designing Deception Operations for Computer Network Defense. Jim Yuill, Fred Feer, Dorothy Denning, Fall | <urn:uuid:ac620d50-2699-4024-b9d5-82d21dce9a11> | CC-MAIN-2017-09 | https://countuponsecurity.com/tag/network-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00488-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.837009 | 2,823 | 2.65625 | 3 |
This may seem like a simple question but for a lot of system administrators who “inherit” systems or are unfamiliar with operating systems that have been forced upon them, it can be very confusing, especially if you're coming from a proprietary UNIX® operating system such as Solaris™ or HP-UX to a Linux®-based distribution.
For most of us old-school UNIX people, the reliable “uname” utility is what we are most familiar with. Execute it with the -a option and you get something like:
SunOS sungod 5.10 Generic_137138-09 i86pc i386 i86pc
Cryptic to most but for a seasoned Solaris administrator it means that the host “sungod” is running Solaris 10 on an x86 (non-SPARC) system and the current kernel patch level is 137138-09.
If you run the same command on a Linux system, you might see something like:
Linux greenlantern 188.8.131.52-0.3-default #1 SMP 2010-09-20 11:03:26 -0400 \ x86_64 x86_64 x86_64 GNU/Linux
At most, you can determine that the host “greenlantern” is in fact a Linux system running a default kernel version 2.26.47 and it is a 64-bit system because of the “x86_64” in the statement.
The “uname” utility was first introduced as part of the UNIX Programmer's Workbench (PWB) in 1973. Not only is “uname” a utility, it is a system call – uname() conforms to System Vr4 and POSIX.1-2001. It extracts information from the running kernel.
Linux distributions are built off of standard kernels but are packaged and bundled differently. Some distributions are Debian-based while others might be Red Hat-based. The collection of packages and how the packages were compiled and ultimately delivered are what make Linux distributions unique.
Most UNIX and Linux operating systems have some form of a release file detailing the operating system version and release information. This file, usually in the /etc directory, is a simple text file.
Some operating systems adhere to POSIX while others strive to be Linux Standard Based (LSB). Of course there are more standards and this fact reminds me of Andrew Tanenbaum's famous statement, “The nice thing about standards is that there are so many of them to choose from.”
For those systems which comply with LSB, you can use the lsb_release(8) utility. For example, running the lsb_release command on my openSUSE system reveals the following:
$ lsb_release -r -i -c -d
Distributor ID: SUSE LINUX
Description: openSUSE 11.1 (x86_64)
Much more informative than the “uname” utility. It should be noted that the utility just parses various configuration files such as those in /etc. Specifically, on SUSE systems it examines the following files:
$ ls -l /etc/SuSE-*
-rw-r--r-- 1 root root 24 Dec 3 2008 /etc/SuSE-brand
-rw-r--r-- 1 root root 38 Dec 4 2008 /etc/SuSE-release
Here is a list of some operating systems, related commands, and their release files which will help you determine the specific version and release of your operating system:
Operating System | Command or Configuration Files
AIX uname -a
FreeBSD uname -a
HP-UX uname -a
OpenSUSE and Novell SUSE /etc/SuSE-brand
Red Hat /etc/redhat-release
Finally, many system administrators are confused when they apply all of the available updates to their system via their local software repositories but are still running the same minor revision.
For example, if you are running openSUSE 11.1 and you perform a “zypper update” to install available software updates, this will not bring your system up to openSUSE 11.2. To do this, you must specifically issue the distribution upgrade command. (zypper dist-upgrade).
This is because when you perform a normal update, it is only examining the repositories your current system has configured. For example, here is my list of repositories (zypper lr):
$ zypper lr
# | Alias | Name | Enabled | Refresh
1 | NVIDIA | NVIDIA | Yes | Yes
2 | NVIDIA-11.1 | NVIDIA-11.1 | Yes | No
3 | Packman Repository| Packman Repository | Yes | Yes
4 | adobe-linux-i386 | Adobe Systems Inc | Yes | No
5 | google | Google - i386 | Yes | No
6 | google-chrome | google-chrome | Yes | Yes
7 | google-testing | Google Testing - i386 | Yes | No
8 | openSUSE 11.1-0 | openSUSE 11.1-0 | No | No
9 | repo-debug | openSUSE-11.1-Debug | No | Yes
10 | repo-non-oss | openSUSE-11.1-Non-Oss | Yes | Yes
11 | repo-oss | openSUSE-11.1-Oss | Yes | Yes
12 | repo-source | openSUSE-11.1-Source | No | Yes
13 | repo-update | openSUSE-11.1-Update | Yes | Yes
On the other hand, Red Hat distributions such as CentOS would be updated to the next minor revision (e.g., 5.4 to 5.5) because of the way the repositories are structured.
For system administrators maintaining patch levels and an accurate inventory of their systems, it is imperative they know how to determine the exact operating system version you are running.
Hopefully, this post has provided some guidance on clarifying how to find this important information.
Cross-posted from Security Blanket Technical Blog | <urn:uuid:a639f979-73c9-4836-bdda-46e56697ba83> | CC-MAIN-2017-09 | http://www.infosecisland.com/blogview/9657-Which-Linux-or-UNIX-Version-Am-I-Running.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00608-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.873088 | 1,307 | 2.6875 | 3 |
One of the main concerns of the organizers of the Olympic Games to be held in Athens this summer is security, but not only physical security, computer security as well. The emphasis placed on avoiding problems with the computers that will manage huge amounts of data during the games will be proportional to the magnitude of this global event.
The information that must be protected at any Olympic Games is so valuable that it justifies all efforts to guard it. However, in companies, where the scale of the IT structure is not usually on the level of the Olympic Games, financial investment in security is not always enough to protect information. On the one hand, it is possible that security investment is insufficient, and therefore inefficient. On the other hand, it is just as absurd to leave a system unprotected, as it is to overprotect it, as, in this case, money invested becomes money wasted.
When you evaluate the expenditure to be made on an IT security structure, there are three aspects that must be taken into account. First, you must know the value of the data or systems to be protected. This is probably some of the information most difficult to obtain in a company. How much is a company’s know how? Or even more difficult, what is the current value of the project of a new product that is still at the development stage? The number of variables to be considered is endless, and in many cases, impossible to quantify objectively. The best way to obtain this data is through indirect calculation, that is, by measuring not total losses, but financial loss caused by loss of information.
Just imagine, for example, the cost of having your company’s network halted for an hour. If you divide your annual turnover by the number of working hours, you will see the cost of having your servers at a standstill for an hour.
The second aspect to be considered is the investment to be made on security systems. Under no circumstance should you have a budget that exceeds the value of the information to be protected. This would be like keeping an old stained rag in a safe, as the cost of the safe is greater than the cloth. A security system like this would be redundant. (Unless of course the rag was stained by Leonardo da Vinci, and called the Mona Lisa, then maybe some additional expenditure on extra security measures might be in order).
Finally, you have to calculate how much it would cost for an attacker to breach security measures and access protected information. This should be very high, that is, to obtain certain information must be far more costly than the information itself. In this way, you are setting up an intangible barrier that is very difficult to get over, since, if it is not worth breaking into a system, almost nobody will try to do it. At least, most attackers will be dissuaded from doing it.
As usually happens when you try to assess a security risk, establishing the right measuring standards is rather complicated, as there is no perfect metric and, even if there was, it needs to be capable of adapting to every business alternative. In fact, a parameter which is valid for a certain business vision is completely different for another, irrespective of how similar businesses might be.
Luckily enough, you can be helped by computer security experts with the necessary experience and knowledge to draw up a close approximation of your IT security needs and the investments to be made. On the contrary, to establish an investment policy based on the opinions of unknowledgeable people can lead to highly undesirable effects.
To sum up, leave computer security to experts that are up-to-date with this area and know the issues involved. This is the best way to ensure that you are investing just what you need in security systems, no more, no less. | <urn:uuid:e80cbec3-0710-4907-aa9f-38583072c62d> | CC-MAIN-2017-09 | https://www.helpnetsecurity.com/2004/06/02/how-much-should-you-invest-in-it-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00132-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.966 | 760 | 2.515625 | 3 |
SPECIAL REPORT—Green IT
Air flow control can yield more efficient data centers
Projects explore new techniques for improving air distribution in data centers
- By Rutrell Yasin
- Apr 03, 2009
You don’t necessarily have to spend money to save money on data center power use. The Energy Department’s Lawrence Berkeley National Laboratory (LBNL) is engaged in several projects with industry partners to demonstrate how cooling and information technology systems can work together to more effectively manage air flow in data centers, thus improving energy efficiency.
One project, with Intel, IBM and Hewlett-Packard, funded by the California Energy Commission, will explore the possibility of using temperature sensors that are already inside servers to directly control the computer room air conditioning (CRAC) units that regulate the flow of cool and hot air into and out of the data center.
The idea is to ensure that the right amount of cool air is being delivered to the server inlet. CRAC systems in most data centers typically focus on cooling the entire room, but that can result in uneven and inefficient distribution.
Additionally, LBNL is working on a separate demonstration to show how data center managers can use wireless temperature sensors to directly control computer room air handlers, which push air into ducts, said William Tschudi, project leader of LBNL’s Building Technologies Department.
“The idea is you have a finer mesh of being able to monitor temperature and then control the computer room air handlers to give [the facility] exactly what it wants rather than oversupplying air,” Tschudi said.
“We’re also working with Sun Microsystems on the demonstration of different cooling technologies," he said. "All companies are trying to demonstrate different pieces” to improve energy efficiency. The results of some of these demonstrations will be shared with people attending the Silicon Valley Leadership Group’s conference in the fall, he added.
Demonstrations on air management and cooling techniques are just part of government and industry efforts to advance innovation and spur greater energy efficiency in data centers. The Environmental Protection Agency is leading efforts to establish an Energy Star specification for enterprise servers so IT managers can buy systems that deliver performance but reduce energy consumption. EPA also is working on an Energy Star rating for data centers with the Green Grid, a consortium of industry and government organizations.
But measuring energy efficiency in data centers could be a tougher nut to crack, experts say.
A view of two networks
The LBNL and Intel demonstration is slated to happen by the summer.
“Right now we’re working on how to get the IT network to work with the building control network,” Tschudi said.
Those two networks are separated, but the LBNL/Intel team is developing a management console that will give data and facility managers a view of both networks, said Michael Patterson, senior power and thermal architect at Intel’s Eco Technology Program Office.
The demonstration is being conducted at an Intel data center in California, which has IBM and HP servers and Liebert CRAC units, Patterson said.
The goal is not to develop a product, Patterson added. Because the California Energy Commission is funding the project, the goal is to document the results of the demonstration so data center and facility operators can learn from the team’s efforts.
“They can learn what the challenges are, how we did the interconnection and what some of the tricky bits were so if they want [they can] implement the same control strategy into their data center,” Patterson said. “So they can go into it smart rather than blindly and hoping for success.”
Blowing hot and cold
CRAC systems in most data centers pump pressurized air to maintain server inlet temperatures within a proper range. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommends inlet temperatures of 64.4 to 80.6 degrees Fahrenheit. ASHRAE also recommends a dew point (absolute humidity) range of 41.9 to 59 degrees Fahrenheit.
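Expressed as a simple compliance check, the recommended envelope looks something like the short Python sketch below; the numeric bounds are just the ASHRAE figures cited above, and the function name is an invention for this example.

# Illustrative check of a server inlet reading against the ASHRAE
# recommended envelope (all values in degrees Fahrenheit).
ASHRAE_INLET_TEMP_F = (64.4, 80.6)   # recommended dry-bulb temperature range
ASHRAE_DEW_POINT_F = (41.9, 59.0)    # recommended dew-point range

def within_recommended_envelope(inlet_temp_f, dew_point_f):
    temp_ok = ASHRAE_INLET_TEMP_F[0] <= inlet_temp_f <= ASHRAE_INLET_TEMP_F[1]
    humidity_ok = ASHRAE_DEW_POINT_F[0] <= dew_point_f <= ASHRAE_DEW_POINT_F[1]
    return temp_ok and humidity_ok

print(within_recommended_envelope(75.0, 50.0))  # True: inside the envelope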
The cooling units are positioned around the perimeter of a standard data center and come in a couple of configurations: CRAC units either receive chilled water from the building's central utility plant, or the facility has localized air-conditioning plants with a CRAC unit in each, Patterson said.
There is a cooling component that takes heat out and, at the same time, an air flow component. Motorized fans in the unit move the cool air around the room, usually beneath raised floors and up through perforated tiles to servers mounted in racks. The hot air is blown out of the servers, usually into hot aisles, and returned to the cooling unit.
During the era of mainframe computers, there was no need for air flow segregation. A lot of cold air was dumped into the room, the computer released heat back into the room, and that warm air was returned to the cooling unit.
Ultimately, managers shouldn’t be concerned with the temperature returning to the CRAC unit, Patterson said. What really matters is the inlet temperature to the server.
“To maximize efficiency you want to have just enough air flow and just enough cooling through chilled water or the refrigeration system in the CRAC,” Patterson said. “You can’t get this balance with the temperature sensor in the return air to the CRAC unit.”
However, you can if you tap into the temperature at the inlet of the server. Most server manufacturers put a front panel temperature sensor in their systems that reads the temperature of the air coming into the server, Patterson explained.
“If we can control that temperature and provide the front of the server with enough air flow, then we will have done our job to provide the most efficient cooling possible,” he said.
Essentially, the demonstration is intended to show how data center and facility operators can replace the control functionality of the cooling system with instrumentation that is already in the servers.
“We’re not saying add extra sensors or redesign servers or spend additional money when a new data center is spun out. The beauty of the project is that we are demonstrating the integration of the facility and the computer, removing the wall between them,” Patterson said.
The building control system will be able to communicate with the management server that monitors systems for hard-drive failures or memory upgrades and request information on the front-panel temperatures. The team is deploying complex algorithms that will allow the sensor readings to tell the cooling system whether the air is cold enough, which in turn will drive the chilled-water pump, Patterson said.
The team also will use sensors to measure the temperature at the bottom and top of the server rack to determine if there is enough air flow. Too little air flow means a large temperature differential between the bottom and top.
“With this thermo map of server inlets, we are going to have the control system be smart to modulate the whole load to significantly reduce the amount of energy we’re going to be using in the data center,” Patterson said.
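The article does not reproduce the team's algorithms, but the control strategy it describes (let the hottest server inlet drive the chilled-water supply, and let the bottom-to-top rack temperature spread drive fan speed) can be sketched roughly as follows. This is a minimal illustration, not the project's code; the sensor-reading helpers, setpoints and step sizes are all invented for the example.

```python
# Illustrative sketch only -- not the demonstration project's control code.
# The sensor helpers below are placeholders for data that would come from the
# servers' front-panel sensors via the management network.

TARGET_INLET_F = 75.0      # keep inlets comfortably inside the ASHRAE range
MAX_RACK_DELTA_F = 5.0     # a large bottom-to-top spread implies too little airflow

def read_inlet_temps():
    """Placeholder: front-panel inlet temperatures for each server, in Fahrenheit."""
    return [72.1, 74.3, 76.0]

def read_rack_delta():
    """Placeholder: top-of-rack minus bottom-of-rack temperature, in Fahrenheit."""
    return 3.2

def control_step(valve, fan):
    """Return new (chilled_water_valve, fan_speed) settings, each between 0.1 and 1.0."""
    hottest_inlet = max(read_inlet_temps())
    # Open the chilled-water valve a little if the hottest inlet runs warm, close it if cool.
    valve += 0.05 if hottest_inlet > TARGET_INLET_F else -0.05
    # Speed the fans up if the rack shows a large vertical temperature spread.
    fan += 0.05 if read_rack_delta() > MAX_RACK_DELTA_F else -0.05
    valve = min(1.0, max(0.1, valve))
    fan = min(1.0, max(0.1, fan))
    return valve, fan

if __name__ == "__main__":
    valve, fan = 0.5, 0.5
    for _ in range(3):
        valve, fan = control_step(valve, fan)
        print(f"chilled-water valve {valve:.2f}, fan speed {fan:.2f}")
```

A real building-management integration would read those values over the management network the article describes, and would add safety limits and smoothing before driving pumps and fans.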
The project team is expecting a more than 70 percent reduction in energy use in the particular cooling units, he said. Most data centers run the fans in the cooling systems at 100 percent all the time.
“We only need 47 percent of the peak air flow on the average, so we’re going to only use 10 percent of the power compared to if these [cooling system fans] were turned on to run at full speed,” Patterson said.
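The 47-percent/10-percent arithmetic is consistent with the fan affinity laws, under which fan power scales roughly with the cube of fan speed (and therefore of airflow):

$$\frac{P_{\text{reduced}}}{P_{\text{full}}} \approx \left(\frac{Q_{\text{reduced}}}{Q_{\text{full}}}\right)^{3} = 0.47^{3} \approx 0.10$$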
Benchmarking data centers
There is no silver bullet for improving energy efficiency in data centers, LBNL’s Tschudi said. A lot of areas interact with one another, and improvements can be made in power conversion and distribution, load management, server innovation and cooling equipment, he said.
But coming up with metrics to benchmark those improvements could be difficult, some industry experts say.
“We suspect the federal government is the largest operator of data centers probably in the world,” said Andrew Fanara, the EPA Energy Star product development team lead. As such, the opportunity is there for the federal sector to lead the way in improving data center operations, he said.
However, there has to be a way for data center operators to benchmark performance against the entire facility and measure against themselves over time to improve their efficiency, he added.
EPA has worked with various types of facility managers to come up with Energy Star ratings for facilities from schools to supermarkets. So EPA decided to design a benchmark specifically for data centers, whether they are in a stand-alone facility or inside another commercial office building. The agency is working with the Green Grid to fine-tune that protocol, Fanara said. It will provide advice to data center operators on measuring the performance and energy efficiency of IT equipment.
“Unless you have the means to measure your performance, how do you know the investments are taking you in the right direction?” Fanara asked.
At the end of the research and analysis stage, EPA could have an Energy Star benchmark for data centers, though the analysis isn’t finished, he said.
So far, the Green Grid has proposed the Power Usage Effectiveness (PUE) and its reciprocal Data Center Infrastructure Efficiency (DCiE) benchmarks that compare an organization's data center infrastructure to its existing IT load.
After initial benchmarking using the PUE/DCiE metrics, data center operators have an efficiency score. They can then set up a testing framework for the facility to repeat and can compare initial and subsequent scores to gauge the impact of ongoing energy efficiency efforts.
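Both metrics are simple ratios of total facility power to IT equipment power. The formulas, with an invented worked example (the kilowatt figures are illustrative only):

$$\mathrm{PUE} = \frac{P_{\text{total facility}}}{P_{\text{IT}}}, \qquad \mathrm{DCiE} = \frac{1}{\mathrm{PUE}} = \frac{P_{\text{IT}}}{P_{\text{total facility}}}$$

$$\text{Example: } \mathrm{PUE} = \frac{2{,}000\ \mathrm{kW}}{1{,}250\ \mathrm{kW}} = 1.6 \;\Rightarrow\; \mathrm{DCiE} = \frac{1}{1.6} = 62.5\%$$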
DOE has also developed the Data Center Energy Profiler tool, which offers a first step to help companies and government agencies identify potential savings and reduce environmental emissions associated with energy production and use.
DC Pro, an online tool, provides a customized overview of energy purchases, data center energy use, savings potential and a list of actions that can be taken to realize these savings.
Microsoft has developed a suite of sophisticated reporting tools to measure efficiency in its own data centers, said Kim Nelson, executive director of e-government at Microsoft.
The company uses the Green Grid’s PUE but has its own tools — based on business intelligence capabilities — that measure server utilization and CPU and wattage usage per server.
“We measure PUE and carbon emission factors that are generated by where you live geographically," she said. "We’ve been reporting those to EPA.”
Because EPA collects information from different organizations and agencies, Microsoft will need to evaluate whether the information it has given to the agency can be reasonably collected across the board.
IBM officials also have collected information on energy efficiency in the company's data centers and sent it to EPA for consideration and analysis.
“We provided a year’s worth of data on six of our data center buildings to EPA as part of their data collection process for the Energy Star building work for data centers,” said Jay Dietrich, program manager for IBM’s corporate environmental affairs group.
There is no meaningful metric for measuring workload at this point, Dietrich said. Data center operators can be very efficient with facilities power and IT power, but if they are not optimizing the amount of work their servers are doing, those metrics might not be the most efficient answer to a particular application, he said.
For now, EPA is just going to get data on IT power and the power needed to run the facility, he said. But the agency is interested in exploring how to introduce that workload component, as is the Green Grid, Dietrich said.
Many data centers don’t have sufficient instrumentation to get the information they need to come up with some measure of efficiency in the data center, Intel’s Patterson said, adding that the company is working to promote a minimum level of instrumentation.
However, “you can’t wait until you have the right instrumentation suite out there. You may never actually start,” Patterson said.
Data center managers can still go around with a clipboard and write things down, Patterson said. “If you don’t measure, you can’t improve and you don’t know where to focus your effort for improvement,” he added. | <urn:uuid:a2bf6124-c384-4dfe-957f-bdc4a0ff87c1> | CC-MAIN-2017-09 | https://gcn.com/articles/2009/04/06/cool-it.aspx?admgarea=TC_DATACENTER | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00308-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.938083 | 2,546 | 2.75 | 3 |
Cloud Infographic: The History Of Cloud Technology
Cloud computing is gaining acceptance by the day, so it is no longer a newcomer in the IT infrastructure space. At the same time, many still resist the technology, citing concerns such as security and availability, so it hasn't fully matured either. In other words, there's no time like the present to look back at its history.
The general idea behind the technology dates back to the 1960s, when John McCarthy wrote that “computation may someday be organized as a public utility.” Then grid computing, a concept that originated in the early 1990s as an idea for making computer power as easy to access as an electric power grid, also contributed to cloud computing. For a detailed look at the differences between utility, grid and cloud computing, see “Cloud Computing vs Utility Computing vs Grid Computing: Sorting The Differences.”
Continue Reading: The History of Cloud Computing
Infographic Source: http://www.nttcom.tv | <urn:uuid:8baa6556-d3ed-4671-a967-b85eb16b5e52> | CC-MAIN-2017-09 | https://cloudtweaks.com/2012/11/cloud-infographic-the-history-of-cloud-technology/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00484-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.947375 | 206 | 2.828125 | 3 |
It's still unclear exactly what passwords were stolen by Russian hackers, and how dangerous the breach might be. But whether or not you're affected, here are five ways to keep your passwords safe.
Use two-factor authentication
Passwords by their very nature are insecure. That's why many sites, including Google (Gmail), Facebook, and Twitter, offer two-factor authentication. It requires not just a password, but also a second code that reaches you in a variety of ways, such as via a text message or a smartphone app. Yes, it's more work than just a password. But it's also much safer. For details about how to set it up, check out this guide from PC World.
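Many of the smartphone-app codes used as a second factor are based on the time-based one-time password (TOTP) scheme described in RFC 6238, though each site decides its own method. A minimal sketch of how such a code is derived is below; the Base32 secret shown is invented for illustration, and a real secret comes from the site you enroll with (often displayed as a QR code).

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Derive a time-based one-time code (RFC 6238 style) from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                  # 30-second time step
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    # Example secret for illustration only.
    print(totp("JBSWY3DPEHPK3PXP"))
```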
Don't use the same password everywhere
Remembering passwords is tough, particularly because you have so many of them. So you'll be tempted to use one or two for all of the sites into which you log in. Don't do it. If you do that and one of them is stolen, someone may be able to break into your accounts on other sites as well.
Use a password generator
If you're worried about someone breaking into one of your accounts by cracking your password, you'll need to build strong passwords. That can be hard to do by yourself. There are quite a few programs that will do it for you. Norton has a free safe password generator online that you can use. From the same site you can download the free Norton Identity Safe that stores them for you.
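If you would rather generate passwords yourself than use an online tool, any modern language's cryptographic random-number facilities will do. A minimal sketch in Python follows; the character set and length are reasonable defaults chosen for the example, not a standard.

```python
import secrets
import string

def make_password(length: int = 16) -> str:
    """Generate a random password from letters, digits and common punctuation."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    print(make_password())   # different every run, e.g. 'q7T!fWz2...'
```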
Change your passwords regularly
If you regularly change your passwords, you may be able to limit the damage if one of your accounts is breached. It often takes quite some time between when an account is breached and when that information is made public. So if you change your passwords regularly, any breached account will be exposed for a shorter amount of time.
Watch out for signs of breaches
There are a few things you can do to check whether an account of yours has been breached. Regularly check your credit card statements and bank accounts for unusual activity, and call immediately if you find any. And check to see whether there are updates or posts to any of your social media accounts that you didn't make. | <urn:uuid:89750193-90ac-474c-aecc-e6a3ad768c1d> | CC-MAIN-2017-09 | http://www.itworld.com/article/2693574/security/russian-hacker-breach--five-ways-to-better-protect-your-passwords.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00428-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.968108 | 452 | 2.609375 | 3 |
Hello Friends, today I will explain how to secure your computer in just 7 simple steps. This will help you protect your computer from viruses, Trojan attacks and hackers. These 7 tips will improve your computer and online security. Online computer and internet security is critical right from the moment you switch on your brand new computer or connect to the internet for the very first time. Here are 7 simple steps to improve your PC security.
7 Tips to improve your PC security
1. Install anti-virus and anti-spyware software
Install anti-virus and anti-spyware software on your computer before you start surfing for the first time. The difference between a computer virus and spyware: a virus is a malicious piece of computer code that can be implanted on any computer, can destroy your computer's file system, and can be transferred from one computer to another, spreading like a biological virus. Spyware is a program that collects information about you without your knowledge or consent; it does not spread like a virus.
2. Keep your computer firewall ON all the time
Most anti-virus software comes with a firewall. If the operating system you are using is Microsoft Windows XP Service Pack 2 (SP2) or Macintosh OS X, it has a built-in firewall. Usually, the firewall is off when the computer is shipped to you, so make sure you read the instructions on how to turn it on. A firewall prevents unsolicited direct communication between your computer and another computer (such as a hacker's computer).
3. Turn on the automatic software updates feature
Turn on the automatic updates feature of your anti-virus, anti-spyware, OS, and firewall, and stay current. This is a good online security measure: it is important that you have the most current protection. Hackers search the internet for computers that are either unprotected or lack the latest protection features. Once hackers break into your computer and install software on it, they can steal the login details of your online bank accounts and other membership sites like eBay and PayPal, and also send spam emails that appear to originate from your computer. Spam that appears to come from your email address can result in your account being revoked!
4. Store your computer information safely
Storing your computer information safely can help the technician who is fixing or restoring your computer. For example, on Windows, click Start and then choose Run. This brings up a small window in which you type 'msinfo32' without the quotes, which opens the System Information window. In that window, choose the File menu and then Export, and export your system information to a CD. For other operating systems, search Google for information on storing system information on a CD.
5. Backup important files
It is important to back up important files. Determine what you would do to restore your computer if it were attacked: pretend that your computer's file system has been corrupted, then work out what steps you would take to restore it. You will realize that having a backup can make things easy for you.
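As a simple illustration of the idea, the sketch below zips a folder into a date-stamped archive. The paths are placeholders that you would point at your own documents folder and a separate drive; this is a minimal example, not a substitute for proper backup software.

```python
import datetime
import pathlib
import shutil

def backup_folder(source: str, dest_dir: str) -> pathlib.Path:
    """Zip up a folder into a date-stamped archive inside dest_dir."""
    stamp = datetime.date.today().isoformat()
    archive_base = pathlib.Path(dest_dir) / f"backup-{stamp}"
    return pathlib.Path(shutil.make_archive(str(archive_base), "zip", source))

if __name__ == "__main__":
    # Placeholder paths: use your own documents folder and an external drive.
    print(backup_folder("C:/Users/me/Documents", "D:/Backups"))
```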
6. Use strong password authentication
When you sign up for an online membership such as an online bank, PayPal or eBay, do not use weak passwords that make it easy for people who know you, or who have some information about you, to hack your account. Your significant other's name, child's name, pet's name and the like are weak. Use something stronger, such as the first letters of the address of the house where you were born concatenated with your birth year, or something along those lines. It is best if the password is NOT a meaningful word!
7. Protect your personal information
If you are asked to give out personal information like phone numbers, addresses, SSNs or identification numbers on the internet, use extra caution. Find out exactly why the information is needed and how it will be used. If there is a link in an email that asks you to log in by clicking on it, don't! Genuine emails usually don't ask you to log in directly by clicking a link. If you want to log in to your membership accounts, always open a new browser window and type the URL of the website yourself (to log in to your PayPal account, type paypal.com into the new browser window instead of clicking a link in an email that asks you to log in).
If you are giving out credit card information, the page that accepts it must use secure encryption. The URL usually begins with https instead of the regular http. If you right-click and select Properties, the Connection section should read something like 128-bit encryption (High) and should also show a 1024-bit key exchange.
Following these 7 steps to online computer security can protect you and your computer from online attacks.
I hope you all like this post. If you like it, please comment.
With the Carbon to Collaboration Initiative, Cisco is committed to reducing carbon emissions by a minimum of 10 percent, starting with a dramatic reduction in the company’s air travel in 2008.
In addition, Cisco will invest US$20 million in collaborative technologies that will reduce the need for physical travel at Cisco, by combining its Unified Communications technologies, which include voice and data, with a rich-media and video experience to create virtual interactions across distances. People located across the country and around the globe, for example, will be able to work together as effectively as if they were sitting in the same room.
How Cisco TelePresence Supports the Connected Urban Development Program
The Cisco TelePresence videoconferencing solution supports the CUD program, encouraging telecommuting to help decrease the number of vehicles on the road, especially during peak hours. TelePresence creates a high level of virtual interaction across distances, without compromising communications and collaboration among people. | <urn:uuid:7fd8c0e0-9815-44b8-ae7e-ba7526df9f33> | CC-MAIN-2017-09 | http://www.cisco.com/c/en/us/about/consulting-thought-leadership/what-we-do/industry-practices/public-sector/our-practice/urban-innovation/connected-urban-development/further-cud-information/thought-leadership/carbon.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00124-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.942856 | 195 | 2.578125 | 3 |
Attention: Virus Zircon.c is spreading on the Internet
14 Mar 2002
Kaspersky Lab, a leading international data-security software developer, reports the detection of a new version of the Internet-worm known as Zircon. Zircon.c, which is distinct from two previous forms, has achieved wide distribution across the Internet.
Zircon.c spreads via e-mail in the form of an e-mail message with the attachment 'patch.exe'. The message's subject field may contain text in Japanese or the word 'Important', while the message's body is blank.
To avoid infection from Zircon.c Kaspersky Lab recommends that users do not launch the attachment 'patch.exe' and immediately delete the e-mail together with its attachment.
Instructions detailing how to defend against this Internet-worm have already been added to the Kaspersky anti-virus database.
Further details regarding this Internet-worm are available in the Kaspersky Lab Virus Encyclopedia.
For most people, a home router is their window to the world -- the World Wide Web.
But it is a broken window, according to some top security experts, who say there is little that average consumers can do to protect themselves from skilled cyber attackers, even if they use rigorous passwords and encryption, because the software running the devices is obsolete and riddled with known vulnerabilities.
"The big issue is that the software being shipped on these devices is obsolete the day you buy it, and there is no update stream," said Jim Gettys, system software architecture researcher at Alcatel-Lucent Bell Labs.
"I did an inventory of the age of the packages inside a number of these devices and they are three to four years old on Day One," he said. "And without an update stream, you start with existing vulnerabilities, and it just gets worse from there."
Gettys pointed to a 2010 research paper titled "Familiarity Breeds Contempt," by several University of Pennsylvania professors [http://www.acsac.org/2010/openconf/modules/request.php?module=oc_program&action=view.php&a=&id=69&type=2], who found that the longer a piece or system of software is in use, the more likely it is for attackers to find vulnerabilities, because they become familiar with the code.
Michael Brown, writing recently in PCWorld, [http://www.pcworld.com/article/2097903/asus-linksys-router-exploits-tell-us-home-networking-is-the-vulnerability-story-of-2014.html] said vulnerable routers and other connected devices are leaving home networks, "wide open to attack," meaning hackers from anywhere in the world can, "access your files, slip malware into your network, or use your own security cameras to spy on you -- all without ever laying a finger on your hardware."
Security guru and author Bruce Schneier, CTO of Co3 Systems, wrote recently that, "the computers in our routers and modems are much more powerful than the PCs of the mid-1990s," and warned that if security vulnerabilities in them are not fixed soon, "we're in for a security disaster, as hackers figure out that it's easier to hack routers than computers. At a recent Def Con, a researcher looked at 30 home routers and broke into half of them -- including some of the most popular and common brands."
To cure the problem, he said, would require, "flushing the entire design space and pipeline inventory of every maker of home routers."
Not everyone is quite so pessimistic. There are any number of blog posts that offer advice on securing home routers -- at least to a better level than the default settings in place when the device is first taken out of the box. And those experts argue that a little security can matter a lot. Some of them say it is like the common story of two men with a bear chasing them. One says to the other, "I don't have to outrun the bear. I just have to outrun you."
In other words, if you take basic security precautions, you will be more secure than the average user, and therefore much less likely to be attacked.
Robert Siciliano, CEO of IDTheftSecurity and a blogger for McAfee, recently offered a brief list [http://blogs.mcafee.com/consumer/secure-home-wifi] that includes logging in to the router settings, changing the default username and password that control the configuration settings, and enabling the WPA2-PSK with AES encryption protocol, making sure to use a passphrase, which is usually at least 10 characters.
He said, if possible, users should also change the Service Set Identifier (SSID) of the network connection from the default name.
Siciliano said he uses the latest versions of N and AC home routers, "which are the equivalent to the security of Windows 7 or 8," but are much more expensive than the basic $15 to $40 models. They cost $150-200 or more.
But he contended that the newer routers on the market, "have a grade of security that most average consumers need not be concerned about in relation to the amount of WiFi hackers in play. And as exploits are discovered, either ethically or not, patches will be administered or recommendations will be made to upgrade hardware."
He said it is possible for "those versed in WiFi hardware and software" to wipe and replace the default firmware with custom versions that provide additional security. But this would be beyond the capabilities of 99% of users.
Besides encryption and changing default user names and passwords, Brown and others recommend:
- Password protection for the different ports that handle various types of traffic such as HTTP, FTP (file transfer protocol), HTTPS (encrypted web traffic) and Remote Desktop.
- Reset everything -- passwords, user names etc., if a hard reset is required, since that common troubleshooting step frequently restores the weak, default password without letting the user know.
- Disable UPnP (Universal Plug and Play) -- a recommendation of the federal Department of Homeland Security.
- Disable anonymous access to your FTP service, unless you don't mind sharing your files with anyone and everybody. Users can access their FTP settings in the router's HTML configuration pages, and those can be accessed with a browser. The default address for a router is in its user manual.
- Put the router into so-called "pin-hole" mode, where every port is blocked by default until the user opens them. "It takes a bit of work, but it's very secure," Brown wrote.
To that list, Mark Stanislav, security evangelist at Duo Security, recommends, "turning on automatic updates, disabling Internet-facing remote administration, and keeping an eye on security notices."
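A quick, low-effort way to check several of the items above, namely which management ports the router actually answers on from inside your own network, is a simple connection test. The sketch below assumes the common 192.168.1.1 router address and a handful of well-known ports; adjust both for your own device, and only probe equipment you own.

```python
import socket

ROUTER = "192.168.1.1"   # adjust to your router's LAN address (often printed on the device)
PORTS = {21: "FTP", 22: "SSH", 23: "Telnet", 80: "HTTP", 443: "HTTPS", 8080: "Alt HTTP"}

for port, name in PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        status = "open" if s.connect_ex((ROUTER, port)) == 0 else "closed/filtered"
    print(f"{name:>8} ({port}): {status}")
```

Any port that reports "open" and that you don't knowingly use is a candidate for password protection or disabling in the router's configuration pages.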
Stanislav acknowledged that changing passwords doesn't improve things greatly, but said it is worthwhile because, "it's too common that an attacker leverages default credentials to start an attack against a target."
Gettys agrees with that much -- he described changing passwords as "basic hygiene," and said it offers some protection since many of today's attacks are "simple-minded. Since so many routers have their passwords left at known defaults, the default passwords are often used as a way in to be able to install malware," he said.
Still, there is general agreement that routers could, and should, be more secure. Dan Crowley, senior security consultant at Trustwave, said the use of threat intelligence and other research can, "help users and manufacturers incorporate best security practices moving forward. These include performing automated scanning and penetration testing on home routers during the development, production and active phases so manufacturers are continuously identifying and remediating vulnerabilities in their products."
He added that security should not be left to the user. "Security needs to be transparent to the user. We can't expect anyone except computer security experts to be computer security experts. Make the default option choices be the secure ones."
Stanislav said government pressure on router manufacturers might be required, since consumers tend to focus only on what will give them the fastest WiFi. "I think attention from the FTC could go a long way when vendors fail to handle basic information security best practices," he said.
Schneier believes the current situation is a disaster in the making. In his essay, he noted that the embedded systems manufacturing system is fragmented -- it includes the manufacturers of chips, system manufacturers and then brand-name companies that may add a user interface. None of them, he said, do much engineering.
So security patches are rarely applied. "No one has that job. Some of the components are so old that they're no longer being patched," he wrote.
Beyond that, he said many times that source code is not available, and some drivers and other components are "binary blobs," with no source code at all.
"No one can possibly patch code that's just binary," he wrote, adding that the result of all this is, "hundreds of millions of devices that have been sitting on the Internet, unpatched and insecure, for the last five to ten years. Hackers are starting to notice."
The problem with routers and modems is particularly severe, he said, because they are the interface between the user and the Internet, so turning them off is rarely feasible, and they are generally on all the time.
"We have an incipient disaster in front of us," he wrote. "It's just a matter of when. We simply have to fix this."
Gettys said security of home routers could be improved significantly, within two to three years, but it would take a different mindset in the industry. "This is not a technology problem: this is primarily a cultural and business problem," he said.
"The base software can be kept up to date and automatically upgraded for a tiny fraction of what you pay your ISP each month. So it's not that there is no money available; it's just not going from where it comes -- which is you, directly or indirectly -- to where it needs to be expended -- into keeping the software up to date on these devices," he said.
But he calls the prospect of that kind of change, "not very likely," which he said will lead to, "a long, painful future."
This story, "Home Routers: Broken Windows to the World" was originally published by CSO. | <urn:uuid:6ed14be3-0315-4d00-a48e-e67a7fd744a1> | CC-MAIN-2017-09 | http://www.cio.com/article/2376196/router/home-routers--broken-windows-to-the-world.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00476-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.959588 | 1,957 | 2.6875 | 3 |
Programming Using the CCA API
The API we’re going to use is the Common Cryptographic Architecture (CCA). There are more than 80 function calls, but we’ll examine only a few of them that let you do authentication and encryption. You’ll want to get a copy of the Cryptographic Services ICSF Application Programmer’s manual for future reference.
On each function call, the first four parameters are always the same. These are variables for the return code, reason code, exit data length, and exit data. The first three are full-word binary values (PIC S9(8) COMP in COBOL). Normally, you won't have exit data, so code the exit data length with a value of zero, and provide a character exit data variable of 4 bytes of nulls. The return code and reason code variables will be filled in on return from the function call. Normally, you can ignore the reason code if the return code is zero. However, there are times when you can receive a non-zero reason code with a zero return code, and the reason code can be important. For example, a reason code of 10000 means that your key has been re-encrypted using a new master key. Depending on how you've written your application, this may mean that you need to save the new encrypted key value for future use.
The first thing you may want to do in any application you write is to determine whether the ICSF hardware is present and working. ICSF doesn’t support a status call. One method to accomplish this task is to call CSNBRNG. This function will generate a random bit string, but the real value here is that the function doesn’t require any keys or difficult setup to use and will tell you if the hardware is functioning. If you receive a return code greater than zero, for example 16, you can be sure that either the hardware isn’t present, the master keys are not established, or the CSF system proc isn’t executing. In this case, there’s no point in continuing. The CSNBRNG function requires only two additional parameters: the key form and the returned random bit string. The 8-byte key form parameter can be either “RANDOM” or “ODD.”
Many of the functions require a key value parameter. This is always provided as a 64-byte area and can consist of either a key label or key token. A key label is the name of a key you’ve defined and stored in the key data set using a TKE terminal or through API function calls. The key label must be blank padded to the full 64 bytes. A key token is also 64 bytes, but is formatted by the CCA API interface and can contain a working key encrypted by the master key, or an exported key encrypted by a key encrypting key. Key encrypting keys are special keys in the hardware used to import and export key values into and out of the hardware. For example, an exported key can be used to transfer a key value from one ICSF hardware to another where each have different master keys. Key encrypting keys are also called importer and exporter keys.
The key can be a key label that’s already defined or a working token value. In either case, the key must have been defined for authentication use and not for encryption. Strictly speaking, a key is a key, but ICSF enforces the use of a key for a specific purpose. For example, when defining a key for single DES authentication, use the MAC type. For Triple-DES authentication, use the DATAM type. For encryption, use the DATA type for all key lengths.
To create a MAC for some data, you can call the CSNBMGN function. If you just need to create a MAC for data in memory, you can do this with one call to the routine. Two of the parameters are the data and the length of the data. The rule count should be three. The rule array depends on the algorithm you choose and the form of the returned MAC you desire. Each entry in the rule array consists of 8 bytes. To select DES, use “X9.9-1”; for Triple-DES, use “X9.19OPT.” Often, the form of the code is 9 bytes of hex characters with a blank in the middle. For this option, use “HEX-9.”
For example, to calculate a MAC for a string of data using Triple-DES, you might code something like the COBOL example in Figure 4. You can use other languages, such as Assembler, PL/1, or C.
WS-RULE2, ONLY, indicates you’re calling the function only once with all the data. The WS-CHAINING variable is used and maintained internally. You use this variable when you must call the CSNBMGN function multiple times to calculate only one MAC. The variable provides continuity for CSNBMGN to keep track of intermediate results and thus cannot be modified between calls. On the first call, you always start with the variable containing all NULL values.
The resulting MAC is returned in the WS-MAC variable, provided you receive a zero return code. | <urn:uuid:f4610f63-3761-4ca3-b060-b42db20f293c> | CC-MAIN-2017-09 | http://enterprisesystemsmedia.com/article/implementing-icsf-hardware-cryptography-on-z-os/3 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00052-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.865563 | 1,110 | 2.578125 | 3 |
Optical networking uses thin glass or plastic optical fiber to transmit information in the form of light pulses. It is far more reliable and offers greater transmission capacity than conventional copper-wire networks.
SONET and Synchronous Digital Hierarchy (SDH) are the most common optical transport protocol standards used in optical networking. They both meet the needs of traditional voice traffic, where all traffic is high-priority and patterns are generally predictable.
Extremely demanding enterprise networking solutions can use Dense Wavelength-Division Multiplexing (DWDM) platforms. These deliver high-speed Ethernet connectivity and carrier interconnect, in addition to managed Storage Area Network (SAN) extension services.
DWDM typically supports all point-to-point and ring topologies, along with a variety of transmission distances. Transparent and protocol-independent, DWDM can carry SONET, SDH, storage protocols, data, and video.
Using DWDM, multiple signals can be transmitted simultaneously on one optical fiber, with each signal on a different wavelength. This allows multiple traffic types to be aggregated on to a single wavelength and transmitted over long distances uninterrupted, to deliver different types of services.
One recent optical networking innovation is the Cisco Reconfigurable Optical Add/Drop Multiplexer (ROADM). It delivers uninterrupted high-speed, high-capacity services to customers on meshed and multi-ring networks. A ROADM can be configured remotely to add or drop capacity at each network node, so capacity can be managed as needed.
Each ROADM offers outstanding manageability and scalability for transport networks, and ROADMs have proven to be vitally important for wavelength services delivery. A ROADM-enabled DWDM node can scale to 40 wavelengths. By using automated signal and power management, a ROADM can eliminate truck rolls to intermediate locations to support or redirect services.
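The aggregate capacity of such a node is simply the number of wavelengths multiplied by the line rate carried on each. Assuming, purely for illustration, 10 Gb/s per wavelength (the rate per wavelength varies by deployment):

$$C_{\text{fiber}} = N_{\lambda} \times R_{\lambda} = 40 \times 10\ \mathrm{Gb/s} = 400\ \mathrm{Gb/s}$$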
The Importance of Global Digital Literacy
With the world economy becoming more connected via networks and business environments becoming more computer-intensive in day-to-day job tasks, the issue of digital literacy stands out as a key differentiator of success at the individual, regional and national levels. Right now, too many people in too many places are lagging behind in these critical competencies. According to David Saedi, CEO of Certiport, which operates the foundational computer skills certification IC3, this gap in digital literacy needs to be addressed by educators and IT pros with a strong sense of social responsibility.
Certiport held its annual PATHWAYS Conference at the beginning of this month in Orlando, Fla. The event brought together technical pedagogues from around the world, who shared their insights on bringing IT education to a global workforce through certification and other means. “We understand that they need a forum to come face-to-face to exchange data, programs and best practices from various areas of the world,” Saedi said of the conference. “They realize that they have a lot more in common than their separate geographies allow them to share. Once they connect, they find they have a number of topics to share information on. They’re focused more on the society around them – that’s one of the benefits of this gathering – so they see the direct effect of what they do through the measurable outcome of certification.”
Expanding digital literacy is more than just a nice-sounding concept, he added. It serves the global economy by bringing more skilled professionals into the international workforce, whether they end up going into a technical field or not. “What we have done is to identify how digital literacy benefits everybody in the community, especially the ones who are at the lowest end and being least served by their communities,” Saedi explained. “The most recent analyses show that IT needs to be diffused across core curriculums, and virtually everybody needs to know the components of IT that enable them to participate in the digital economy. At Certiport, we don’t look at IT as a specific elitist niche. We look at it as diffusing these communications and technology components that need to be taught to everyone.
“Every one of the participants here carries that torch,” he added. “They want to make sure their communities are better equipped and that they’re getting the best value out of the infrastructure investments they’ve made. There’s something very important that we’ve stumbled across in the past two years and have now grabbed onto: the value of the individual as a change agent that allows IT diffusion to happen. It’s not just policies and funding. It’s the individual who says, ‘I will make an impact on my environment.'”
For more information, see http://www.certiport.com. | <urn:uuid:e8798603-2b5e-41e8-9fec-bf62e466da2f> | CC-MAIN-2017-09 | http://certmag.com/the-importance-of-global-digital-literacy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00348-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.956289 | 606 | 2.625 | 3 |
What are “cookies”?
“Cookies” (also known as HTTP cookies, web cookies or browser cookies) are simply small pieces of data stored as text files on your computer whenever you visit certain websites. Their typical purpose is to help sites remember particular actions you have taken there in the past. For example, cookies may track when you have logged into a site, visited certain pages or clicked certain buttons.
On this website, cookies are used to:
- Remember when you have logged into a site.
- Remember your user preferences, searches and favourites.
- Track your usage of a site, via Google Analytics©.
- Track the success of our marketing campaigns.

Additionally, RX websites use a small set of carefully selected third-party providers to:
- Target more relevant advertisements to you (DoubleClick™)
- Enable social media sharing (AddThis©, Facebook©, Twitter©, YouTube©)
You can view a full list of all cookies used by RX on this website in the “What cookies do we use and why?” section.
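To make the idea of a cookie concrete, the sketch below uses Python's standard library to build the kind of Set-Cookie headers a server sends to your browser. The cookie names echo entries from the table further down, but the values and attributes are invented for illustration and are not the exact values this site sets.

```python
from http import cookies

jar = cookies.SimpleCookie()
jar["ISM.ScreenSize"] = "1920x1080"     # illustrative value; the real cookie stores your screen size
jar["ISM.ScreenSize"]["path"] = "/"
jar["session_id"] = "abc123"            # invented session identifier
jar["session_id"]["httponly"] = True

# Each line printed below is a header the server would send in its HTTP response:
#   Set-Cookie: ISM.ScreenSize=1920x1080; Path=/
#   Set-Cookie: session_id=abc123; HttpOnly
print(jar.output())
```

The browser simply stores these name/value pairs and sends them back on later requests, which is how a site "remembers" you between pages.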
Are cookies harmful?
Cookies are small text files, not programs, so they cannot themselves carry viruses or install anything on your computer. Despite this, if you do wish to disable or remove cookies, please see the “Help” section of your browser or mobile device. Each browser or device handles the management of cookies differently, so you will need to refer to the appropriate “Help” documentation. However, as mentioned, please be aware that cookies are essential for certain features of an RX site to work properly.
Why are we telling you this?
What cookies do we use and why?
The following shows the full list of platform cookies, used throughout this RX website.
| Cookie | Provider | Expiry | Purpose |
|---|---|---|---|
| ASP.NET_SessionId | Infosecurity Magazine | Session | Necessary for us to identify whether or not you are logged in to the website and to perform other essential site functions. It doesn't store any personal information and is deleted when you finish browsing the website. |
| ISM.ScreenSize | Infosecurity Magazine | Session | Allows the website to store your screen size so we can optimise the delivery of images to your device. |
| ISM.Cookies | Infosecurity Magazine | 24 months | Set when you dismiss the "Privacy and Cookies" message displayed at the bottom of the Infosecurity Magazine website. Once set, it ensures you will not be shown the message again. |
| _ga | Google Analytics | 18 months | Google Analytics© is an analytics solution that provides information about your activity on the Infosecurity Magazine website. This helps us to understand what works on the site and better tailor it to your needs. |
| _utma | Google Analytics | 18 months | Another Google Analytics© cookie that provides information about your activity on an RX website; as with WebAbacus, this helps us to understand what works on the site and better tailor it to your needs. It is used to determine unique visitors to an RX website and is updated with each page view. |
| _utmb | Google Analytics | 30 minutes | Used to establish a user session with an RX website. |
| _utmc | Google Analytics | None | Determines whether or not a new session has been created. |
| _utmz | Google Analytics | 6 months | Identifies how you arrived at the site, whether via a direct method, a referring link, a website search or a campaign such as an advertisement or email link. Used to calculate search engine traffic, advertisement campaigns and page navigation; updated with each page view. |
| _gads | DoubleClick | 18 months | Used to improve our advertising, for example to target advertising based on what's relevant to you, to improve reporting on campaign performance and to avoid showing you ads you have already seen. |
| _atuvc | AddThis | Session | Provided by Clearspring Technologies Inc., this cookie provides the option to share content to your favourite social networks. AddThis collects basic information on how you use the service but any data is always anonymous. |
Last week we brought up the question, “How did end users learn to expect fast websites?” We covered how Pavlov discovered conditioning through experiments with his dog, which left us wondering: perhaps humans are conditioned to expect instant web gratification.
Almost 100 years after Pavlov, Wolfram Schultz, now at Cambridge, stuck probes into the brains of rats and began to quantify the conditioning phenomenon in terms of neuronal activation.
Dopamine (DA) is a neurotransmitter in our brain commonly coupled with reward. Dopamine is released while you are eating an ice-cream sundae, when you win a prize, even during sex. Cocaine, nicotine, and amphetamines all cause increases in dopamine transmission as well. Gamblers and other addicts all experience increased DA levels right before and during their favorite activities.
Schultz discovered that deep down in our midbrain regions (the areas associated with reward), there are certain patterned firings of a population of dopamine neurons that signal both reward and reward expectation. His lab showed that once a rat has learned the association between a stimulus and a reward, the activation of its midbrain dopamine neurons follows the same kind of conditioned behavior as well.
This graph shows normal baseline activity of rat Dopamine neurons. At some point, R, a reward (sugar water) is given to the rat. The probe in the rat’s brain detects a spike of activity immediately following the reward.
After conditioning the rat to learn that a flashing light signals sugar-water, the burst of activation shifts. It moves from immediately following the sugar-water reward to immediately following the Conditioned Stimulus (CS) – the flashing light.
At this point, the light itself is the perceived reward. Now, the actual reward is “guaranteed” to follow (or at least that is what the rat has learned). This is the underlying mechanism that caused Pavlov’s dog to salivate at the bell. The dopamine response to the stimulus has already triggered a downstream chain of physiological reactions to get prepared for food.
* It should be noted that the same amount of time passed on every conditioning run between the CS and the reward.
Here is where Performance Engineers and Ecommerce Directors should start shaking in their boots. When Schultz withheld the sugar after flashing the signal light, there was a drop in activity immediately following the exact moment the sugar-water should have been delivered. This temporally precise drop is commonly referred to as the Dopamine Reward Prediction Error.
Schultz also found that delaying the reward by 500 ms, 1000 ms, and 2000 ms caused the same depression in activity (right after the reward was supposed to be delivered), followed by a new spike in activity once the reward was delivered. So the rat's neurons can also detect slight latencies in the delivery of their reward.
These two studies might just show that, unconsciously, our brains have conditionally learned when a webpage must load, and that we possibly experience a burst of dopaminergic neuronal activity after pressing the GO button. Does this mean that there is a dip in activity when the website takes longer than we expect? Who knows what cascading set of events the drop in dopamine activity triggers. Perhaps it's web stress, perhaps it's frustration, perhaps it's like an addict being denied their drug fix.
If user expectations are determined by conditional learning, where did we learn that websites are supposed to load in 3 seconds or faster, and why is the expectation changing? I suspect that the most popular sites are driving this learning behavior! The Googles, Yahoos and Amazons of the world, which have the technology, infrastructure, investment, and manpower to keep making their websites faster, do so and spend great effort leading the way. Additionally, the web has changed completely since dial-up, and webpages are now delivered at speeds of 1 Mbps to 100 Mbps.
Another remaining question is how much impact other knowledge and outside influence have on this learning. Is querying for a keyword in a search engine a different signal than clicking "Submit Order" to process your credit card on an ecommerce site? (Most of us don't mind that the latter takes a little longer.)
Behind the scenes, our brain has learned how fast a page should load and can detect slow ones, even a delay as small as 250 ms. There are still a lot of open questions regarding what factors are involved in this learning behavior, how much learning from one site transfers to another, what the impact is over time, and what the repercussions of a negative experience are. We clearly need more scientific research in this area to understand our new digital lives and how we can trick the mind into overcoming these speed-trap pitfalls.
Intro to Political Geography
Gerrymandering (Internal Borders) Found within states The intentional manipulation of borders to benefit one political group or organization Based on voting records, race, and anticipated future voting Using the two blank maps provided you will complete each map as noted. Each district must contain five (5) voters each Map 1 – Majority Republican Map 2 – Majority Democrat Follow Up Questions What impact does Gerrymandering have on legislative districts? Is this fair to voters? Why or why not? What is a realistic solution to Gerrymandering? Gerrymandering Exercise Part 1 - Answer the following questions once your group has been formed: Choose what your nation will be Industrial – needs coal, iron ore, timber Military – gold, population to recruit, areas for bases Trade – anything that can be traded of value Part 2 – Build your nation, Name, Flag, Slogan – should represent the idea from above Part 3 – Chose up to 6 territories that you want to claim for your nation Part 4 – Turn in maps Borders and Boundaries Game Wallerstine’s World-System’s Theory Assumes that all nations are a part of a world economic system void of independent economies and based on capitalism Create a world-economy of control and dependency Core: High wealth, political and economic control Semi-Periphery: economically strong and growing, influence over the periphery but follows the core Periphery: Dependent and responsive economically and politically to core and, to an extent, semi-periphery nations Three Tier Structure Core Processes that incorporate higher levels of education, higher salaries, and more technology * Generate more wealth in the world economy Semi-periphery Places where core and periphery processes are both occurring. Places that are exploited by the core but then exploit the periphery. 
* Serves as a buffer between core and periphery Periphery Processes that incorporate lower levels of education, lower salaries, and less technology * Generate less wealth in the world economy * * * Core=US, Canada, Western Europe, Australia and Japan Semi-Periphery-Mexico, Venezuela, Argentina, Uraguay, Brazil, South Africa, Russia and Eastern Europe, Turkey, Saudi Arabia, India and China-exploited by Core and in turn exploit the Periphery Periphery=rest of Africa, rest of South American and Central America, Central and Asia and most of Middle East-exploited by everyone * Landlocked Non-Landlocked A State that borders on any type of water system (river, ocean, lake) that allows them access to large seas or the oceans A State that has no access to water and must go through another State in order to reach a port UNCLOS United Nations Law of the Sea which determines territorial vs international waters Territorial Waters Up to 12 nautical miles out from the coast Contiguous Zone Up to 12 nautical miles from the edge of territorial waters Exclusive Economic Zones Up to 200 nautical miles, states determine what economic activity may or may not take place International Waters Water outside of a specific nations control which is shared by all nations without limitations A border above the basic surface of the earth and extending into the upper atmosphere Territorial Airspace Airspace up to 12 nautical miles from the designated coast line that is exclusively controlled by a specific nation with a vertical ceiling that is undefined (depends on the county and ranges from 19-99 miles) International Airspace Airspace outside of a specific nations control which is shared by all nations without limitations Air Space Sovereignty - Complete control over a territory’s political & military affairs Territoriality – The attempt by an individual or group to affect, influence, or control people, phenomena, and relationships, by delimiting and asserting control over a geographic area Territorial Integrity – A government has the right to keep the borders and territory of a state intact and free from attack Statehood Vocabulary A separate entity composed of three or more States that forge an association and form an administrative structure for mutual benefit in pursuit of shared goals. An alliance system that binds political decisions and economies together. 
History: Created Post-WWII to rebuild Europe economically, Counter the Soviet threat, and create a place to hold a dialogue to settle disputes Modern Threats: Donald Trump – NATO, NAFTA, TPP Brexit – EU Supranationalism Economic An agreement between States that offers all parties economic opportunity through trade and commerce Examples OPEC – Organization of Petroleum Exporting Countries – controls oil prices by controlling production WTO – World Trade Organization – set trade rules for member nations NAFTA – North Atlantic Free Trade Agreement – removes tariffs (import taxes) in North America (Mexico, United States and Canada) on agreed upon products Military An agreement whereby two or more States pledge to aid each other if attacked or to share military material and technology Examples NATO – North Atlantic Treaty Organization – member nations pledge to protect each other in case of attack (counters the Warsaw Pact which did the same for Communist Eastern Europe) UN – United Nations – Can authorize the use of member nations military forces to act as peace keepers or an aggressive force AU – African Union – Same as UN, except only in Africa Political An agreement whereby two or more States work together for economic and military benefits – a pledge to help each other Examples UN – United Nations – A world forum to discuss global issues AL – Arab League (also has military powers like the UN and AU) – North African and Middle Eastern states working to promote growth and stability G20 – Global 20 – The 20 largest economies discussing political and economic issues Genetic Boundary Classification Antecedent - physical landscape defined the boundary without any human modification Ex: Mongolia and China (Desert) Subsequent – A boundary that has undergone a regular modification process Ex: China and Vietnam Superimposed-forcibly drawn boundary that cuts across a unified cultural boundary Ex: Kurds and the modern Middle East Relict boundary – A boundary no longer serves a purpose but still affects the lives of people living there Ex: East-West Germany Movement of power from the central government to regional governments within the state. Conflicts within a State (centrifugal forces) cause friction among the population which leads to the break in unity of the State’s government and potentially to the breaking up of the State itself Examples Scotland-England-Wales-Northern Ireland: Autonomy within the United Kingdom Italy – Sardinia has some economic independence USSR – post 1991, breaks into 13 separate States Devolution Examples of Devolution - Economic Brazil – Southern Brazil attempted (unsuccessfully) to break away from the north. Wealth in the south supports poverty in the north United States – Northern manufacturing and Southern agriculture lead to conflicts on wealth distribution and slavery European Union – Great Britain refuses to use the Euro. They maintain their own currency and economic independence and have currently voted to leave the EU Israel-Palestine Cause: Superimposed Borders Issues: As per British mandate in 1948, a territory is created for Jewish settlement. Palestinian (Muslim) settlement already exist leading to continuous conflict between these two sides still today Ukraine and Russia Cause: Cultural Devolution Issues: Protests in Ukraine removed the democratically elected pro-Ruissian president and replaced him with a pro-western president. 
Crimea – invaded and absorbed by Russia, Eastern Ukraine – war between Russian separatists and Ukrainians Basque Region in Spain Cause: Cultural Devolution Issues: Basque separatists have fought (physically and politically) to separate from Spain. Basque’s speak their own language and have a culture separate from Spain proper Conflicts within States United Nations Peace Keepers – Soldiers under the control of the United Nations move into an area to act as local police and prevent future conflicts Example: Golan Heights in Syria Demilitarized Zone (DMZ) – An area that separates two groups (usually States) where no military personnel or weaponry is allowed to be placed thus separating the militaries Example: North and South Korea No Fly Zone – Air space restriction for military aircraft to prevent the use of them against groups within a nation Example – Iraq post-1991 UN attack Responses to Devolution Reading Quiz #1 Nov 21st/22nd – Ch. 8, Sec 1-2, pg. 261-275 Reading Quiz #2 Nov 30th/Dec 1st – Ch. 8, Sec 3-4, pg. 276-295 Unit Essay Exam and Map Quiz December 12th (Odd) – December 13th (Even) Multiple Choice/End of Unit December 14th (Odd) – December 15th (Even) Final Review December 19th (Even) – December 21st (Odd) Final Exam December 20th (Even) – December 22nd (Odd) Unit 4 and End of Semester Calendar A State is a politically organized territory that is run by an independent government and is recognized by a large portion of the world Ex: The United States, Mexico, Russia, Syria State A political entity within a State. A way of dividing up a State into smaller sections. Ex: Nebraska and Iowa in the United States or Quebec in Canada State/Province The state has a governor who works in unison with the state’s legislature to pass laws within the framework of the federal and state’s constitution The negotiated treaty between the two states will allow for joint military exercises to help the two states be prepared to counter any threats, both internal and external, in the region state vs. state (Its all in the context) Nation - Refers to a group of people with similar cultural traits whose boundaries may or may not follow political lines Nation-State – A major portion of the state’s population share a common culture Multinational State –A state that contains two or more ethnic groups with traditions of self determination that agree to coexist peacefully by recognizing each other as distinct nationalities Multiethnic State – A state containing multiple ethnic groups Stateless Nation – A state with a nation of people with no political or cultural control Political - A separation based on a negotiated settlement between two different States or states Ex: Manmade lines such as latitude and longitude. Straight lines don’t exist in nature Physical – A boundary based on a geographical barrier Ex: Lake, river, ocean, mountain range Cultural – A boundary based on cultural grouping Ex: Mongolia and China, Kosovo and Serbia Compact A small, condensed shape where no single point is far from the center of the nation Ex: Poland, Belgium Fragmented A nation that is broken into multiple pieces and may or may not be spread out over distances. 
  Most such nations are separated because of water. Ex: Denmark, Indonesia
- Elongated – a nation whose territory is significantly longer than it is wide. Ex: Vietnam, Chile
- Perforated – a nation whose territory completely surrounds another territory. Ex: South Africa – Lesotho; Italy – San Marino and Vatican City
- Protruded – a nation where a portion of its territory extends in an elongated fashion from the main territory; sometimes referred to as a panhandle. Ex: Thailand, Oklahoma, Nebraska
- Enclave – a country or part of a country that is surrounded by another country
- Exclave – a territory legally or politically attached to a territory with which it is not touching

* * *

Core = US, Canada, Western Europe, Australia and Japan
Semi-Periphery = Mexico, Venezuela, Argentina, Uruguay, Brazil, South Africa, Russia and Eastern Europe, Turkey, Saudi Arabia, India and China – exploited by the Core and in turn exploit the Periphery
Periphery = rest of Africa, rest of South America and Central America, Central Asia and most of the Middle East – exploited by everyone | <urn:uuid:e17f8c96-4790-41bc-8319-972077b54197> | CC-MAIN-2017-09 | https://docs.com/anthony-razor/7164/intro-to-political-geography | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00044-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.916683 | 2,500 | 2.734375 | 3 |
Deleted files can often be recovered, and that's a problem when you're passing your PC or PC-related tech along to someone else. Whether it's sensitive financial data, business documents, or scandalous photos that could be used to blackmail you, you probably don't want people getting their hands on your private stuff.
Fortunately, you can take steps to protect your data, whether you're getting rid of a PC, external hard drive, or USB stick. Here's how! (And here's how to wipe mobile devices clean.)
Mechanical hard drives vs. internal solid-state drives vs. external drives
Deleted files can be recovered from some types of drives, but not others. Here's a quick summary of how different drives handle deleted files.
Mechanical hard drives: Old-school mechanical hard drives--the kind with a spinning magnetic platter--are still used in PCs. If your PC doesn't have an SSD, it has a mechanical hard drive. Files you delete from these drives can be recovered. When you delete a file from such a drive, the drive just marks the file's data as deleted. Until it's overwritten in the future, people can scan the drive and recover the marked-as-deleted data.
Internal solid-state drives: Solid-state drives use a feature called TRIM. When you delete a file from a solid-state drive, the operating system informs the drive that the file was deleted. The drive then erases the file's data from its memory cells. This is done to speed things up--it's faster to write to empty cells--but it has the benefit of ensuring files you delete from internal SSDs can't be recovered.
External solid-state drives and other removable media: TRIM is used only for internal SSDs. In other words, if you have an external SSD in an enclosure and you connect it to your computer via USB, TRIM won't erase files you delete. This means deleted files can be recovered from that external SSD. Deleted files can also be recovered from USB flash drives, SD cards, and other types of removable media.
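To picture why files deleted from a mechanical drive remain recoverable, here is a deliberately simplified Python sketch. The file table, disk contents, and field names are invented for illustration; real file systems such as NTFS are far more complex, but the principle is the same: deletion flips a flag, and the bytes survive until something overwrites them.

```python
# Toy model of deletion on a mechanical drive: only a flag changes,
# and the underlying bytes stay readable until they are overwritten.
disk = bytearray(b"SECRET REPORT....")          # pretend platter contents
file_table = {"report.txt": {"offset": 0, "length": 13, "deleted": False}}

def delete(name):
    file_table[name]["deleted"] = True          # no data is erased here

def raw_read(name):
    entry = file_table[name]
    return bytes(disk[entry["offset"]:entry["offset"] + entry["length"]])

delete("report.txt")
print(raw_read("report.txt"))                   # b'SECRET REPORT' -- still present

disk[0:13] = b"NEW FILE DATA"                   # space reused by a new file
print(raw_read("report.txt"))                   # old content finally gone
```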
If you have a PC with a solid-state drive, you just need to reinstall your operating system to erase your data. If you have a PC with a mechanical drive, you'll need to ensure your drive is wiped before reinstalling your OS. If you have an external drive, you'll need to wipe that, too.
Reset your PC With Windows 8
For many years, geeks had to use third-party tools to wipe their mechanical drives before disposing of them. Windows 8 added a feature that makes wiping deleted files and restoring your operating system much easier.
Use the Reset Your PC feature in Windows 8 or 8.1 to reset your PC to its factory state. You'll be able to choose a "Fully clean the drive" option when going through this process. Windows will overwrite your drive with junk data and then reinstall the Windows operating system. Afterwards, you'll have a like-new system without any recoverable files. Yes, it's really that simple.
Wipe your drive and reinstall Windows 7
Windows 7 doesn't have this wiping feature built-in. If you just reinstall Windows 7 on your PC using a Windows 7 installer disc or your PC's recovery feature, your drive won't be wiped. Deleted files could theoretically be recovered from your drive.
To avoid this, you'll want to use a disk-wiping tool like Darik's Boot and Nuke (DBAN) before reinstalling Windows. This tool wipes your computer's hard drive by overwriting it with junk data. If you're disposing of the PC or internal drive, you're done--you can leave the PC in this state. If you're passing along the PC to someone and want to give them a working copy of Windows, you can then reinstall Windows on the PC and pass it along.
For a full rundown of DBAN and other secure erasure tools, check out PCWorld's guide to securely erasing your hard drive. Be careful when using tools like DBAN! They will overwrite an entire drive, including any recovery partitions and other data you might want to keep. Back up any data you want to keep before wiping your drive.
Clean external drives
Perform a full format of an external drive to wipe away any deleted files. To do so, connect the drive to your computer, right-click it in Windows Explorer or File Explorer, and select Format. Be sure to uncheck the Quick Format box to perform a full format-- a quick format won't fully erase the deleted files from your drive. Repeat this process for each drive you want to wipe.
On Windows XP, data could be recovered from a drive even after a full format. Starting with Windows Vista, Microsoft says a full format will overwrite your drive's data. There's no way to perform a full format from Windows 7's installer, so that's why you have to use a tool like DBAN when reinstalling Windows instead of using the normal Format option.
You can also use other dedicated drive-wiping tools. For example, CCleaner includes a Drive Wiper tool under Tools > Drive Wiper.
Wipe free space
If you've already reinstalled Windows and don't want to wipe your drive and reinstall Windows again, you can try using a tool that wipes a drive's free space, which should obliterate any leftover data left lurking in the shadows. For example, CCleaner's Drive Wiper tool can wipe only the free space on a drive if you'd like.
Just wiping a drive's free space isn't an ideal solution, however. If you have any sensitive files that haven't yet been deleted, CCleaner won't touch them. A full drive wipe is more fool-proof because it ensures everything on your drive is wiped away before you set up a clean system from scratch.
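The basic idea behind free-space wipers is simple: fill the empty blocks with junk, then delete the filler. The sketch below is a minimal Python illustration of that idea, not how CCleaner implements it; the file name and chunk size are arbitrary choices, and running it will temporarily fill the target volume.

```python
# Minimal free-space wipe sketch: write zeros until the volume is full,
# then delete the filler file. Illustration only; run with care.
import os

def wipe_free_space(target_dir, chunk_mb=64):
    filler = os.path.join(target_dir, "wipe_filler.bin")
    chunk = b"\x00" * (chunk_mb * 1024 * 1024)
    try:
        with open(filler, "wb") as f:
            while True:
                f.write(chunk)                  # overwrite free blocks with zeros
                f.flush()
                os.fsync(f.fileno())
    except OSError:
        pass                                    # "disk full" means the job is done
    finally:
        if os.path.exists(filler):
            os.remove(filler)                   # hand the space back

# Example call on a hypothetical external drive letter:
# wipe_free_space("E:\\")
```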
Check your work: Try to recover deleted files yourself
Use a file-recovery program like Recuva, created by the same people who make the popular CCleaner utility, to test whether you can recover any deleted files from a drive. Recuva scans your internal or external drives for deleted files, displays information about them, and allows you to recover them. Be sure to perform a "Deep Scan" when prompted--it's slower, but will find more bits of deleted files. If you wiped the drives properly, Recuva should find no files you can recover.
Recuva performs the same sort of trick an attacker would use to recover your data. Of course, some attackers--particularly criminal organizations that target businesses--may use more advanced disk forensics tools to get at that sensitive business data.
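Recuva's internals are not documented here, but the deep-scan technique that it and most forensics tools rely on, known as file carving, is easy to sketch: scan the raw bytes for known file signatures and pull out whatever lies between them. The example below looks only for JPEG markers in a disk image and ignores fragmentation, so treat it as a demonstration of the concept rather than a recovery tool; the image file name is a placeholder.

```python
# Simplified file-carving sketch: search a raw disk image for JPEG signatures.
JPEG_SOI = b"\xff\xd8\xff"      # start-of-image marker
JPEG_EOI = b"\xff\xd9"          # end-of-image marker

def carve_jpegs(image_path, max_size=10 * 1024 * 1024):
    data = open(image_path, "rb").read()        # fine for a small demo image
    found, pos = [], 0
    while True:
        start = data.find(JPEG_SOI, pos)
        if start == -1:
            break
        end = data.find(JPEG_EOI, start, start + max_size)
        if end != -1:
            found.append(data[start:end + 2])   # keep the end marker too
        pos = start + 3
    return found

# for i, blob in enumerate(carve_jpegs("drive.img")):   # hypothetical image file
#     open(f"recovered_{i}.jpg", "wb").write(blob)
```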
Use encryption to protect all your files
Set up encryption on your drive if you're deeply worried about people recovering your deleted files. Encryption secures all your files, including both current files and deleted files. You can enable encryption with the BitLocker feature built into Professional versions of Windows or the free TrueCrypt that works on all versions of Windows. TrueCrypt can create encrypted containers or encrypt entire drives.
You'll have to provide an encryption passphrase to access your files, which will be saved to your drive in encrypted form. Even if you delete encrypted files from such a drive, the deleted files will just be meaningless gibberish without your encryption key. An attacker who wanted to recover deleted files--or access the current files on the drive--would need your encryption key.
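BitLocker and TrueCrypt encrypt whole volumes, which is not something to reimplement yourself. Still, the principle that ciphertext on disk is worthless without the key can be shown in a few lines of Python using the third-party cryptography package; the file name and message below are placeholders, and this is not how the volume-level tools work internally.

```python
# Principle only: what reaches the disk is ciphertext, so recovering the
# raw bytes without the key yields gibberish. Requires "pip install cryptography".
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # keep this somewhere other than the drive
cipher = Fernet(key)

plaintext = b"Account 12345678, balance $9,999"
ciphertext = cipher.encrypt(plaintext)

with open("statement.enc", "wb") as f:
    f.write(ciphertext)                # an attacker carving this file learns nothing useful

print(cipher.decrypt(ciphertext))      # only the key holder gets the plaintext back
```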
There's another, more extreme option for protecting your data. When the military gets rid of a hard drive containing the nuclear launch codes, they don't just wipe it and set it by the curb. No, they go out of their way to destroy it just to be sure--they may even melt it down or crush it into powder. For magnetic hard drives, you can pay to have the drive degaussed--this eliminates the magnetic field and thus all the data. Or you could just smash it with a hammer and a railroad spike if you want to save cash.
Most people shouldn't be destroying drives, as it's a waste of still-usable hardware. On the other hand, if you're a business and you have an old hard drive containing customers' financial information, you may want to destroy that drive rather than risk that data falling into the wrong hands.
Remember to consider your sensitive data before getting rid of a computer or external drive. The biggest challenge here is simply knowing you need to run these tools--many people don't realize that previously deleted files can be recovered.
This story, "Definitely deleted: How to guarantee your data is truly gone before recycling old PCs and drives" was originally published by PCWorld. | <urn:uuid:c3deb118-9e07-4455-ab4f-d4ecd1af1e23> | CC-MAIN-2017-09 | http://www.itworld.com/article/2699222/security/definitely-deleted--how-to-guarantee-your-data-is-truly-gone-before-recycling-old-pcs-and-drives.html?page=3 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00572-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.945061 | 1,804 | 2.59375 | 3 |
If you're worried about being out of shape, or suspect you might have a disease like diabetes, just breathe into this Toshiba tube.
It's part of a prototype medical breath analyzer that's small enough to be used in small clinics or gyms.
By detecting trace gases that are exhaled, it could be used to monitor health indicators such as fat metabolism and help diagnose disease, Toshiba said.
"The main feature of this analyzer is its compact form," said a spokesman for Toshiba. "It's the size of a personal computer. Previously developed devices were larger and could only be used in facilities such as hospitals."
Another merit is speed, providing analysis results in about 30 seconds, he said.
Some doctors believe that breath analysis could one day be a vital tool for medical testing, along with blood tests and tissue imaging.
Toshiba used gas analysis technologies from its semiconductor and other manufacturing operations to develop the device. An infrared laser shines on the exhalation while a spectrum analysis component checks for telltale signs of organic compounds.
Using a quantum cascade laser, which is a semiconductor laser used in gas analysis, allowed the analyzer to have a small form factor while retaining the accuracy of larger, floor-mounted devices, Toshiba said.
The current version of the device can measure organic compounds such as acetone, which can indicate obesity and diabetes, and acetaldehyde, which is involved in the chemistry of hangovers.
Toshiba plans to improve the analyzer so it can also detect carbon monoxide, methane, nitric oxide and other constituents, allowing it to check on conditions such as smoking, intestinal bacteria, asthma and Helicobacter pylori, a stomach bacterium linked to ulcers and cancer.
In conjunction with the manufacturer, Waseda University in Tokyo will begin research next month into measuring acetone in exhaled breath in an attempt to measure fat metabolism. The results could yield new approaches to formulating diets and food supplements.
Toshiba said it wants to work with universities and hospitals to pool knowledge of breath analysis for diagnostic and other applications.
It plans to commercialize the analyzer in 2015, first in Japan and possibly overseas in the future, the Toshiba spokesman said. | <urn:uuid:38f8b292-bbc5-41e3-9e69-c68ee27de520> | CC-MAIN-2017-09 | http://www.itworld.com/article/2701246/data-center/compact-breath-sniffer-could-warn-of-diabetes.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171043.28/warc/CC-MAIN-20170219104611-00216-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.945082 | 465 | 2.984375 | 3 |
This article is excerpted from “Cloud Opportunities in HPC: Market Taxonomy,” published by InterSect360 Research. The full article was distributed to subscribers of the InterSect360 market advisory service and can also be obtained by contacting email@example.com.
In Life, the Universe, and Everything, the third book of Douglas Adams’ whimsical Hitchhiker fantasy trilogy, cosmic wayfarer Ford Prefect describes how an object, even a large object, could effectively be rendered invisible to the general populace by surrounding it with an “SEP field” that causes would-be observers to avoid recognizing Somebody Else’s Problem. “An SEP,” Ford helpfully explains, “is something we can’t see, or don’t see, or our brain doesn’t let us see, because we think that it’s somebody else’s problem.”
If we were to reinterpret SEP to stand for “Somebody Else’s Processing,” we would be well on the way to a definition of cloud computing.
The term “cloud” comes from the engineering practice of drawing a cloud in a schematic to represent an external resource that the engineer’s design will interact with — a part of the workflow that he or she will assume is working but that is not part of that specific design. For example, a processor designer might draw a cloud to represent a memory system, with arrows indicating the flow of data in and out of the memory cloud. Cloud computing takes this concept to an organizational level; entire sections of IT workflows can now be virtualized into resources that are someone else’s concern.
Cloud computing is therefore a new instantiation of distributed computing. It is built on grid computing concepts and technology and further enabled by Internet technologies for access. Cloud computing is the delivery of some part of an IT workflow — such as computational cycles, data storage, or application hosting — using an Internet-style interface. This definition includes Web-immersed intranets as conduits for accessing private clouds.
Cloud computing is currently driven by business models that attempt to utilize or monetize unused resources. Grid, virtualization, and now cloud technologies have attempted to find and tap idle resources, thus reducing costs or generating revenue. The most interesting difference between cloud computing and earlier forms of distributed computing is that in developing ultra-scale computing centers, organizations such as Google and Amazon incidentally built out significant caches of occasionally idle computing resources that could be made generally available through the Internet. Furthermore these organizations found that they had developed significant skills in constructing and managing these resources, and economies of scale allowed them to purchase incremental equipment at relatively lower prices. The cloud was born as an effort to monetize those skills, economic advantages, and excess capacity.
This is important because, from a business-model point of view, the cloud resources came into existence at no cost, with minimal incremental support requirements. The majority of the costs are borne by the core businesses, and therefore, at least initially, customers of the excess capacity do not need to foot the bill for capital expenditures. Costs associated with staff training, facilities, and development are similarly already fully amortized and absorbed by the parent businesses. There is little more appealing than being able to sell something that you get for free.
With such an appealing proposition in play, many other organizations are scrambling to see whether they have an infrastructure — public or private — that can be exploited for gain through cloud computing. However, when significant excess capacity does not exist, or if it cannot be leveraged in a timely or reliable fashion, it is not clear what sustainable business models exist for cloud computing.
High-end, public cloud computing offerings represent a convergence of grid and Internet technologies, potentially enabling workable new business models. Smaller, private clouds are a technical evolution that expands the ease of use and deployment of grids in more organizations.
As cloud computing technologies mature, InterSect360 Research sees several possible business models that could evolve. Although we emphasize High Performance Computing in our analysis, cloud computing transcends HPC, and similar models will exist in non-HPC markets.
Utility Computing Models
Cloud computing provides a methodology for extending utility computing access models. Utility computing is not new; it has been touted for several years as a way for users to manage peaks in demand, extend capabilities, or reduce costs. Traditionally, limitations in network bandwidth, security issues, software licensing models, and repeatability of results have acted as barriers to adoption, and all of these still need to be addressed with cloud.
There are four major variations on the potential utility computing models with cloud:
Cycles On Demand
The cycles-on-demand model is the most basic approach to cloud computing. The cloud supplier provides hardware and basic software environments, and the user provides application software, application data, and any additional middleware required. In this case users are simply buying access to computer processors, which they provision and manage as needed in order to run their applications, after which the resources are “returned” to the cloud provider. Users are charged for the time the resources are in use, plus possibly some overhead costs. The demands are relatively low on the cloud provider, and relatively high on the user in terms of making sure there is effective utility generated by the rented resources.
Storage on Demand

The storage cloud model complements the cycles-on-demand model both in terms of operational approach — users buy disk space at a cloud provider's facility — and in terms of providing a more complete solution for cycles users — a place to put programs and data between job runs. In the storage-on-demand approach the cloud is used:
- As the final (archival) stage in hierarchical storage management schemes (even if it is a two-level hierarchy: local disk and cloud). On the consumer side this is essentially the concept used for PC backup services.
- As a file-sharing buffer where users can place data that can be accessed at a later time by other users. This approach is at the heart of photo-sharing sites and, arguably, of social sites such as Facebook and LinkedIn. The same concept is also used for shared science databases in areas such as genomics and chemistry.
Software as a Service
Software as a service (SaaS) extends the basic cycles-on-demand model by providing application software within the cloud. This model addresses software licensing issues by bundling the software costs within the cloud processing costs. It also addresses software certification and results repeatability issues because the cloud provider controls both the hardware and software environment and can provide specific system images to users.
SaaS also has advantages for providers, allowing them to sell services along with the software and to use the cloud as a demonstration platform for direct sales of software products. In addition, the user is able to turn much of the system administration task over to the provider. The major drawback to this strategy is that users generally run a series of software packages as part of their overall R&D workflow, in which case data would need to be moved into and out of the cloud for specific stages of the workflow, or the cloud provider must support an end-to-end process.
Environmental Hosting

Environmental hosting is the use of a service to support virtually all computational tasks, with servers, storage, and software all being maintained by a third party. This concept can include constructs such as platform as a service (PaaS) and infrastructure as a service (IaaS). Arguably, environmental hosting in the cloud is an oxymoron; however, it represents the upper end of the utility computing spectrum and a logical destination of cloud strategies. This approach addresses software, result repeatability, and most networking issues by simply providing dedicated resources all in one (logical) place. It addresses many of the technical security issues, but not a consumer organization's security problem of inserting a third party into the workflow process.
In addition to the models for those who would consume resources through the cloud, there are applications that are made possible by the combination of Internet communications and large computing resources. This is inclusive of the opportunities for organizations to become cloud computing service providers, either externally or internally. In addition, there is the potential for some secondary markets to be enabled by the adoption of cloud technologies.
Restructuring of Internet-Based Service Infrastructures
One of the most interesting aspects of cloud computing is that Internet companies with value-add and expertise in intellectual property or content (as opposed to purchasing, managing, and running computer hardware systems) could move their internal computing architecture to the cloud, while maintaining system management and operating control in-house. With this strategy, an organization would move the bulk of its computing to the cloud, keeping only what is necessary for communications and cloud management; in doing so, it converts internal costs for systems, software, staff, space and power into usage fees in the cloud. Cloud technology and service providers facilitate and accelerate the industry's evolution towards a network of interrelated specialty companies, as opposed to groups of organizations each performing the same set of infrastructure functions in house. The major issue potentially holding this model back would be cost; i.e., the premium that users would be willing to pay for a service versus a do-it-yourself solution.
This strategy would replace personal computers with an advanced terminal that connects to a cloud utility holding all of the user's data and software. The advantage for users is that they would be relieved of the burden of purchasing, maintaining, and upgrading their personal systems. They would also have professional support for such tasks as system back-up and system security, and would be able to access their computing environment from any Web-connected device.
This strategy may represent the evolutionary future of the Internet, particularly as more devices become Web-enabled and the relationship between the Web and the personal computer is weakened by competing devices, such as smart phones. The main challenge to this model is overall bandwidth on the Internet. Side effects of such an evolution would include replacing the role of the operating system with a Web browser and whatever backend environment the cloud supplier chose to provide, and creating a new product class of Web terminals.
InterSect360 Research Analysis
We see cloud computing as part of the logical progression in distributed computing. It is not completely revolutionary, nor is it a panacea that will provide any service that can be imagined. The business models must be considered in terms of cost and control, barriers and benefits.
Of all the cloud business models, InterSect360 Research believes that SaaS has the highest potential for success within HPC. It addresses several of the major dampening factors associated with cloud and provides additional revenue opportunities in the services arena. It also targets industrial users, who would be the most likely to pay a premium for the product, without attempting to develop competing solutions. Furthermore, companies can adopt SaaS models in the cloud in a phased or tiered way, first proving the concept in private clouds before giving themselves over to public or hybrid models. (This same phenomenon persists with private and public grids today.)
Organizations that have experience with the software and in house operations may look to SaaS options for peak load management and capacity extension. However, we believe the greater opportunity is for selling packaged cloud computing, software, and start-up services to companies testing HPC solutions. Our research indicates that there are major start-up barriers to using HPC solutions among small and medium companies. These barriers include finding the expertise for the creation of the organization’s first scalable digital models.
The major barrier for SaaS adoption in HPC is the fragmentation of the applications software sector of the industry. The boutique nature of the opportunity may indicate there is not sufficient volume to merit the ISVs' investment to create and market cloud-enabled versions of their applications. Interestingly, in a recursive manner, small SaaS providers could theoretically tap into larger cycles-on-demand cloud providers to supply the computing resources.
Similarly, implementation of environmental hosting within current cloud environments for HPC organizations would currently entail significant amounts of effort by the user organization to set up and manage storage and software environments. It would also be limited by software licensing issues for industrial users in particular. Thus market opportunities for this option are very limited at this time. That said, a small organization could conceivably do all its computing in the cloud, keeping all its data on a cloud storage system, using only internally developed, open-source, or SaaS software, and trusting in its small size as part of a herd to provide security.
Finally, we note that Web-based software services are not new to the market; they currently range from income tax preparation services to on-line gaming companies. SaaS fits into cloud markets based on the concept of work being sent to outside party and results returned, without the sender having knowledge of exactly how those results are generated. For some users, SaaS may inherently make sense. Ultimately the best way to help users adopt HPC applications may be to make them Somebody Else’s Problem. | <urn:uuid:bdc36e50-ee47-41c6-9def-8a1383a8d815> | CC-MAIN-2017-09 | https://www.hpcwire.com/2009/11/02/cloud_computing_opportunities_in_hpc/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00392-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.949127 | 2,679 | 2.765625 | 3 |
Originally published February 6, 2008
The enormous interest in master data management (MDM) that has appeared in the past couple of years has not yet generated a great deal of methodological progress. Hopefully, as data professionals, consultants, and vendors grapple with the complex issues involved, the situation will improve. A central problem, however, is that there is little agreement about what master data is. It is usually defined by examples, like product, customer, or account, as if to say “I know it when I see it”. Alternatively, master data is defined using generalities such as that it is simply highly shared data, or that it is data used by an application, but which is not produced by the application.
Definitions do matter. They tell us something fundamental about what is being defined. In the case of master data, there is a special need for a greater understanding because MDM is still at an early level of maturity. For several years, I have been using an approach to categorizing data that provides a detailed definition of master data. I have found this approach useful in that it can be practically applied to master data management problems.
A fundamental question about data is whether it is homogenous. In other words, are the boxes we see in a data model, or the tables contained in a physical database, all the same in terms of their properties, behaviors, and management needs as data? The fact that we are even talking about master data management indicates that there are qualitative differences among entities (at the logical level) or tables (at the physical level). There is, in fact, strong evidence that we can categorize data within a taxonomy that recognizes the different roles that data plays in the operational transactions of the enterprise.
Figure 1 shows a taxonomy of data related to segregating the management needs of data from a perspective of the use of data in operational transactions. It divides data into 6 distinct categories.
Figure 1: The Six Layers of Data
The first category of data in this scheme is metadata. What is meant by this is the metadata that truly describes data. For a logical data model, this will be the descriptive information about entities, attributes, and relationships. For a physically implemented database, this will be information about tables and columns. The latter is found in the system catalog of a database, but it is increasingly being materialized as tables in databases too.
Metadata, as the term is used here, is important because it has semantic content that needs to be managed. Tables and columns have meanings. The metadata has to be ready before a database can be implemented and should remain unchanged for the lifespan of the database. If it has to change, there is likely to be significant impact. For instance, if the datatype of Customer Last Name has to be increased from Char(20) to Char(40), then many programs, screens, and reports will be affected.
Below metadata in the hierarchy shown in Figure 1 is reference data. “Reference data” is used to mean many things today, but in the sense used here, it describes what are usually termed “code tables”. These are also called “lookup tables” and “domain values”. Reference data tables usually consist of a code column and a description column. Typically, these tables have just a few rows in them. In general, the data in these tables changes infrequently. Because of this apparent structural simplicity, low volume, and slow rate of change, these tables get very little respect. However, they can represent anywhere from 20% to 50% of the tables in an implemented database. Also, although they receive little attention, IT professionals fear changing the values in them.
Reference data tables share something with metadata – their physical values have semantic content. For instance, a customer preferred status of “bronze” may mean that a customer with this status has 30 days to pay their bills and can only be extended $1,000 of credit. No other kind of data in a database has this property. The semantic property is why this data is used to drive business rules. If business rule logic refers to actual data values, it is a near certainty that these values will come from reference data tables. Reference data can be defined as follows:
Reference data is any kind of data that is used solely to categorize other data found in a database, or solely for relating data in a database to information beyond the boundaries of the enterprise.
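As a concrete, invented illustration of the definition above, the sketch below shows a tiny code table and the business-rule logic that keys off its values, echoing the "bronze" example earlier; the codes, limits, and field names are made up.

```python
# Reference data: a small, slowly changing code table whose values carry meaning.
CUSTOMER_STATUS = {"BRZ": "Bronze", "SLV": "Silver", "GLD": "Gold"}   # code -> description

# Business rules expressed directly against the reference values.
CREDIT_RULES = {
    "BRZ": {"payment_days": 30, "credit_limit": 1_000},
    "SLV": {"payment_days": 45, "credit_limit": 10_000},
    "GLD": {"payment_days": 60, "credit_limit": 50_000},
}

def credit_limit(customer):
    # The customer record (transaction structure data) points at a code,
    # and the code's semantics drive the rule.
    return CREDIT_RULES[customer["status_code"]]["credit_limit"]

print(credit_limit({"name": "Acme Ltd", "status_code": "BRZ"}))   # 1000
```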
Next in the hierarchy of Figure 1 is enterprise structure data. This is data that allows us to report business activity by business responsibility. Examples are Chart of Accounts and Organization Structure. One of the main issues with this kind of data is managing hierarchies, which may be incomplete or “ragged”. Additionally, this category of data is notoriously difficult to manage when it comes to change. For instance, a product line may be reassigned from one line of business to another. Inevitably, historical reports have to be produced from the perspective of the product line being the responsibility of either line of business. One example would be the need to see the performance of the recently assigned line of business as if it had been responsible for the product line for the past 5 years.
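The restatement problem is easier to see with a toy example. In the hypothetical sketch below, a product line is reassigned to a different line of business, and the same sales history must roll up under either hierarchy depending on which view a report asks for; all names and figures are invented.

```python
# Enterprise structure data: the same history rolled up under two hierarchies.
hierarchy_before = {"ProductLineA": "ConsumerDivision"}
hierarchy_after  = {"ProductLineA": "IndustrialDivision"}   # reassigned

sales_history = [            # (year, product line, revenue)
    (2005, "ProductLineA", 120), (2006, "ProductLineA", 150),
    (2007, "ProductLineA", 180), (2008, "ProductLineA", 210),
]

def rollup(hierarchy):
    totals = {}
    for year, line, revenue in sales_history:
        division = hierarchy[line]
        totals[division] = totals.get(division, 0) + revenue
    return totals

print(rollup(hierarchy_before))   # history as it was originally reported
print(rollup(hierarchy_after))    # "as if" the new owner had always held the line
```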
Operational transactions always have parties to them. These are the things that have to be present for a transaction to occur, and are represented in Figure 1 by the transaction structure data layer. The most common entities given as examples of this category of data are product and customer. It can be defined as follows:
Transaction structure data is data that represents the direct participants in a transaction, and which must be present before a transaction executes.
Thus, we have to know something about a product and a customer before we can actually sell the product to the customer.
Transaction structure data typically consists of entities with large numbers of attributes, which makes them very easy to spot in data models. This class of data inevitably has problems of identity management. It is easy to appreciate for customers, whose names may be incorrectly captured or change. Yet even products can change their identifiers as they pass through their life cycle or are rebranded. Standardization of identity is extremely difficult to achieve for this class of data, even though it is the subject of many initiatives in this regard.
Another characteristic of transaction structure data is the fact that it is usually implemented as single tables that contain hidden subtypes. Certain columns in a product table, for example, will only apply to certain kinds of products, or to products at a certain point in their life cycle, or to some kind of externally imposed grouping such as dangerous products. Sorting out what columns are relevant to a particular product record is a difficult and frequently neglected MDM challenge.
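A hypothetical flattened product table makes the hidden-subtype issue visible: some columns only mean anything for some kinds of product, and nothing in the table itself says which. The field names below are invented for illustration.

```python
# Hidden subtypes in one flat "product" table: columns apply only to some rows.
products = [
    {"sku": "P-001", "kind": "chemical",  "flash_point_c": 38,   "voltage": None},
    {"sku": "P-002", "kind": "appliance", "flash_point_c": None, "voltage": 230},
]

# Which columns are meaningful for each hidden subtype.
RELEVANT = {
    "chemical":  {"sku", "kind", "flash_point_c"},
    "appliance": {"sku", "kind", "voltage"},
}

def meaningful_fields(record):
    return {k: v for k, v in record.items() if k in RELEVANT[record["kind"]]}

for product in products:
    print(meaningful_fields(product))
```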
Transaction activity data, the fifth layer in Figure 1, is the normal “event” data that we see in operational transactions in an enterprise. It has been the focus of IT from the early days of automation. Transaction audit data, the final layer in Figure 1, tracks the state changes in transaction activity data. It is what is usually found in transaction logs, although this kind of table is also frequently seen in databases too.
At this point, a definition of master data can be provided. It is the aggregation of reference data, enterprise structure data, and transaction structure data. As has been shown, each of these is rather different in its properties, behaviors and management needs. However, they do form a group that is distinct from the other three layers in Figure 1.
Accepting that there are different kinds of data with different management needs is important. It means that “one-size-fits-all” approaches to MDM are likely to be unsatisfactory. It also means that the perspective that there is nothing special about master data, and that MDM is just the application of the same old data management techniques, is wrong. Both the “one-size-fits-all” and the “same-old, same-old” views still enjoy considerable acceptance. This is true even among MDM vendors and consultants, although, for obvious reasons, they tend to only express these views in private.
What the taxonomy in Figure 1 shows is that there really are different categories of data, and that it really does make sense to think of master data as different from other kinds of data and as having specific management requirements. The case for MDM is thus a genuine one.
Recent articles by Malcolm Chisholm | <urn:uuid:497d2ead-c6a0-4a49-bb4d-306e8bbafa82> | CC-MAIN-2017-09 | http://www.b-eye-network.com/view/6758 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00037-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.950959 | 1,730 | 2.640625 | 3 |
Some people worry that with the growth of technologies like artificial intelligence (AI) and virtual reality (VR), jobs will become scarcer. History has shown, though, that every time a new technology hits the market, the jobs it eliminates are usually replaced by new jobs related to that technology. This infographic from Futurism gives us a good glimpse at what such a future, shaped by the boom in artificial intelligence and virtual reality, may look like.
Neuro-implant technicians as well as other neuro-scientists of all kinds will be needed to deal with the neuro-implant boom which is set to happen in the next few years.
Smart home technicians will also be needed to install smart home technology. Smart home engineers will be needed to problem solve current smart home technology models and innovate new systems of smart home technology.
Budding program designers can look forward to programming virtual reality. A VR experience specialist will be able to refine the VR experience for all aspects of work, play, home, entertainment, shopping, family, etc.
More and more professors will become freelance professionals since teaching will move into the on demand realm. Starting your own university won’t seem like such a foreign concept anymore; many professors will carry their own custom teaching style, course materials, and marketing plan.
We might see the return of local farming as the public becomes more aware of the growing environmental damage caused by industrial farming.
Some people will become professional data collectors – gathering a terabyte of information or more every day while they go about their normal routine – and they will be compensated handsomely for it.
This information rich infographic has a lot more to say about what the possible jobs of the future are and how they are going to come into existence – so check it out if you want to see more about what careers of the future could be like!
By Jonquil McDaniel | <urn:uuid:c1709f4d-c985-4641-880c-6b1497ff3a7e> | CC-MAIN-2017-09 | https://cloudtweaks.com/2016/10/jobs-future-ai-vr/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00213-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.952043 | 382 | 2.640625 | 3 |
HPC clusters first emerged in universities and research centers that required extra compute power but had limited budgets. The development of open source Linux-based operating systems and management tools was a natural evolution.
However, because open source tools are created by a variety of different organizations, no one solution offers a complete all-in-one product. Organizations therefore often need to tap into multiple open source projects to obtain everything they need to manage a cluster environment. They then need to get everything working together and educate their users on how to use each component.
According to IDC, software remains the biggest roadblock for HPC users because parallel software is lacking and many applications run into issues such as scalability. The number of cores per processor and per cluster continues to increase and new programming paradigms are needed to increase efficiency. As clusters grow larger and more complex, sophisticated management tools are required. Setting up and monitoring a cluster can become very difficult – this is especially true for heterogeneous environments that are dependent on multiple generations of technology that support a variety of applications and multiple user groups. The complexity further increases when organizations deploy multiple clusters — whether they are within the same datacenter or across a global organization – or move to a cloud-based model.
When addressing open source software for HPC
Despite the pervasiveness and benefits of open source software, it is not without its pitfalls for those organizations that lack the expertise needed to integrate, maintain and operate a stack of open source software. The associated costs can manifest in many ways – such as increased time spent on system administration, troubleshooting problems due to lack of formal support channels, or the cost of regression testing in-house. Administrators may also experience reduced productivity due to cluster downtime, support issues and issues due to low utilization of resources.
Organizations often experience additional education expenses associated with maintaining an open source environment. They will likely also experience creeping operational costs as they find themselves not only in the research business, but in the software maintenance business.
Comparisons between open source and commercial software often assume the alternatives have equal merit
This can be true for database software or scripting languages in areas where open source software is more widely recognized. However, in more specialized areas, open source software may not exist at all or provides less functionality and requires more effort to install and integrate. Some needed capabilities that emerge as critical elements include Web management consoles and portals, software provisioning tools, reporting and analysis tools, and tools to facilitate application integrations.
Often, additional factors contribute to cost and risk when using open source software. These include a lack of reliable technical roadmaps and the burden of important software maintenance tasks such as performing updates or applying security patches.
So how do you weigh the benefits of open source software?
Total cost of ownership (TCO) varies based on unique requirements for each organization. Measuring your TCO is about finding the right balance. Clearly open source software is here to stay and organizations should consider many factors when deciding on the degree to which to deploy commercial or open source software. Some of these considerations include the cost of software, risk associated with deploying a variety of solutions, the complexity of the environment, and the expertise of the staff. Often these considerations are interrelated – such as in the case of selecting open source software for lower cost reasons, but in doing so creating a higher element of risk and added complexity in the environment.
The decision whether to deploy open-source or commercial software for HPC is sometimes painted as an either/or choice. In practice, however, organizations have a range of different options. The figure below illustrates a range of alternatives between purely open and commercial solutions. The shape of the TCO curve will vary depending on the environment. For most organizations, being at one extreme or the other is likely to be expensive and limits their downstream flexibility.
Productivity is Key
This is particularly true when evaluating the total cost of ownership. If we focus purely on acquisition cost, it is easy to overlook other factors, such as putting measures in place to track throughput and utilization of workloads. Utilization is crucial to getting the most from your infrastructure investment. Improving utilization minimizes additional resource acquisition costs and helps ensure that wait times for resources are minimal so that results are achieved in less time. Another consideration related to productivity is the time it takes to install a complete cluster environment, have it ready for full productive use and educate users on how to use the cluster so they can run their simulations and analyses productively.
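A back-of-the-envelope calculation shows why utilization dominates the effective cost of a cluster. The figures below are placeholders, not IBM or IDC numbers; the point is simply how quickly the cost of a delivered core-hour falls as utilization rises.

```python
# Placeholder numbers only: effective cost per delivered core-hour vs. utilization.
def cost_per_core_hour(capex, annual_opex, years, cores, utilization):
    total_cost = capex + annual_opex * years
    delivered_hours = cores * 24 * 365 * years * utilization
    return total_cost / delivered_hours

for utilization in (0.30, 0.60, 0.90):
    cost = cost_per_core_hour(capex=500_000, annual_opex=150_000,
                              years=3, cores=1_000, utilization=utilization)
    print(f"utilization {utilization:.0%}: ${cost:.3f} per core-hour")
```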
Future proofing is very important
As cluster environments evolve, we need to ensure that extensibility exists in the environment to support future requirements. As user requirements and applications evolve, features such as system monitoring and alerting tools, workload management systems, support for an increasing number of specialized workload types, and ease-of-use features including user-centric web portals become increasingly important.
For more information
TCO estimates will vary based on many factors including the nature of the installation, in-house capability, types of applications and cost of down-time. As you choose between open-source and commercial alternatives, there are many different costs related to administration and productivity.
IBM Platform Computing has developed a whitepaper and recorded Webcast that guide you through evaluating the true cost of deploying and managing an HPC environment. If you are interested in receiving a TCO evaluation of your HPC environment, please contact IBM. | <urn:uuid:eda35803-c9e9-480f-8443-c116c442faa6> | CC-MAIN-2017-09 | https://www.hpcwire.com/2012/12/17/open_source_software_in_hpc/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00213-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.948763 | 1,098 | 2.625 | 3 |
The Trails open-source framework, inspired by Ruby on Rails, makes Java development easier.
LAS VEGAS--Chris Nelson said he found developing in Java was just too hard, so he decided to do something to make it easier. Now he's about to deliver on it.
Nelson, an independent developer and director of the Cincinnati Java Users Group, is the founder of the Trails framework, a new open-source framework aimed at making Java easier for developers.
Some might call Nelson a flatterer, as imitation is considered the finest form of flattery and Trails gets some of its notions from the popular, though non-Java, Ruby on Rails framework. But Nelson said Trails was simply "inspired" by Ruby on Rails but is not a Java-based clone of it.
"Developing J2EE [Java 2 Platform, Enterprise Edition] is just too hard," Nelson said in a talk at TheServerSide Java Symposium here on March 23. "Things like Hibernate, Spring, etc., make it easier, but its still too hard. Ruby on Rails raises the bar," he said.
However, that leaves three choices: "Switch to Ruby on Rails, suffer, or make Java better," Nelson said.
Nelson noted that aside from frameworks such as Spring, other methods also have been used to simplify Java development, but they, too, fall short.
He said typical RAD (rapid application development) tools may look slick, but they do not generate a real domain model, they don't scale up in complexity, they require manual user interface design, and they don't leave the developer with a solid architecture.
Code generation is another example of a method to simplify development, but "that's a lot of code," Nelson said. "What if you want to change it? You still have to maintain it. Model-driven architecture [MDA] takes this approach. It tends toward complexity."
The solution is domain-driven development, which is developing an application with a rich domain model. You can "develop the domain model and have the rest of the app be automagically created," Nelson said, adding that domain-driven development is another name for object-oriented development.
"Trails is a domain-driven development framework inspired by Ruby on Rails, but is not a port of Rails."
Indeed, Trails started as an experiment, with Nelson asking, "What can Java learn from Rails?"
Nelson said Trails brings all the pieces together. It uses Hibernate for persistence, Tapestry for component-oriented Web MVC (model-view-controller), Spring for dependency injection and Acegi for security.
"Dont reinvent; integrate the best solutions," Nelson said. He called Trails a "metaframework; a framework of frameworks."
Nelson said he has a 0.9 version of Trails ready this week, but the 1.0 version should be ready for beta in May for the JavaOne conference.
His goal is to get Trails as mature as possible and then start to support other frameworks, possibly JavaServer Faces with Facelets, he said.
Future directions for Trails include instance-based security, support for method invocation, more documentation and more demos, Nelson said.
The Trails project can be found here.
Check out eWEEK.coms for the latest news, reviews and analysis in programming environments and developer tools. | <urn:uuid:c7bbe3ae-44ad-45d3-b718-1bd20b726f33> | CC-MAIN-2017-09 | http://www.eweek.com/c/a/Application-Development/OpenSource-Framework-Means-Happy-Trails-for-Java-Developers | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00389-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.951865 | 706 | 2.6875 | 3 |
Many researchers and journalists are calling the high-pressure system causing California’s historic drought the “Ridiculously Resilient Ridge,” a weather phenomenon identified using technology developed in the 1930s.
For the past 12 months, California has been drier than a sandbox in the Sahara thanks to a stubborn high-pressure system perched over the Gulf of Alaska since December of 2012. This system has kept rainfall in the Golden State at historic lows, with precipitation 10 inches lower than average in most places across the state.
California Gov. Jerry Brown has described it as a “mega-drought,” while California’s Catholic bishops have asked the public to pray for rain. The state has even attempted cloud seeding, where the clouds are sprayed with silver iodide in an attempt to squeeze out every last drop.
But for meteorologists, the solution is simple: that high pressure system over the Gulf of Alaska needs to weaken so precipitation from the Pacific Ocean can make its way to California.
The problem is that system has stayed strong for the past 13 months.
“Given the remarkable persistence and distinct structure of this high-impact feature of recent atmospheric circulation, I would argue that it’s now worthy of a proper name,” wrote Daniel Swain, a Ph.D. hopeful in the Environmental Earth System Science Department at Stanford University, about the high-pressure system.
“With this in mind,” Swain continued, “here’s a visual reminder of the spatial and temporal character of the Ridiculously Resilient Ridge of 2013.”
That phrase proved to be just catchy enough for weather experts and the journalists that cover them. “Ridiculously Resilient Ridge” has now become the accepted name of that high pressure system, which is more than 4 miles high and 2,000 miles wide.
“I first heard about the name a few days ago,” said the National Weather Service’s (NWS) Dr. Warren Blier. “It’s alliterative, it’s relatively accurate, but it’s not the term I would use. It doesn’t sound particularly scientific.”
Of course, weather events are often named for the benefit of informing the public. For example “Dust Bowl” may not have sounded scientific either back in the 1930s, but it accurately explained the widespread drought conditions that crippled the country.
And it was during the Dust Bowl era that the technology used to detect the so-called Ridiculously Resilient Ridge was developed.
“The [NWS] uses radiosondes technology to get a large-scale picture of atmospheric conditions, just as they did in the 1930s,” Blier said. “Radiosondes attached to weather balloons helped us identify this persistent warm front.”
Radiosondes evolved from temperature and pressure gauges hung from kites and balloons in the 19th century. Since the 1950s, radiosonde technology has been relatively stagnant.
“It’s not complicated stuff,” Blier added. “The radiosonde package is more sophisticated, but the idea is still the same as when they were first invented.”
When released, the radiosondes use electronic signals to relay information back to researchers and meteorologists around the world to measure pressure, humidity and temperature. Radiosondes helped identify a high-pressure system in 1976-77 that caused similar drought conditions in California.
New Technologies Help With Rapidly Changing Conditions
While the radiosondes are important, Blier was quick to note there have been significant technological advances since then that have greatly improved the accuracy of NWS forecasts in the short-term.
For example, commercial airliners have been equipped with data-gathering instruments so researchers at the National Oceanic and Atmospheric Administration have a constant stream of information used to forecast. Blier credited that system, known as AMDAR, along with improvements in radar and satellite technology, with helping save lives before tornadoes and other quick-forming weather phenomenon.
“We can predict five to six days out now, where as back in the 1970s, we could only predict two to three days out,” Blier said. “We didn’t have a way to issue a tornado warning until someone saw a tornado touch down.”
Technology that detects rapid changes is also very important to California water monitors. David Rizzardo heads the new snow survey section of the California Department of Water Resources. He said a vast network of monitors and sensors surround lakes, reservoirs and other watersheds throughout the state. These monitors use microwave and satellite signals to dispatch real-time information to water monitors in Sacramento.
“It is our first line of defense for emergency response if there’s a flood,” Rizzardo said. “It’s vitally important because it’s a dam safety issue, and we can make real-time adjustments based on the data.”
Yet, like the NWS, Rizzardo said for big-picture data gathering, old techniques and technologies are often deployed. For example, the Department of Water Resources still uses manual water surveys, where people measure samples with buckets.
“It’s the best climate record we have,” Rizzardo said. “It’s been used to measure snow levels in the Sierra Nevada since 1909.”
In fact, researchers know that this current drought is historic, in-part, because of those bucket measurements.
“Older technology can be incredibly useful for the big picture,” said the NWS' Blier. | <urn:uuid:f04ccc03-8bed-48b1-bf6d-458ccfbabf6f> | CC-MAIN-2017-09 | http://www.govtech.com/The-Weather-May-Change-but-Drought-Technology-Stays-the-Same.html?flipboard=yes | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00389-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.941783 | 1,191 | 3.640625 | 4 |
DRM stands for “Digital Rights Management.” It is the technology that controls your access to copyrighted works such as movies, music, literature and software. The product files are encrypted so that you can only use them on authorized devices. For example, if you have downloaded music onto your PC, and want to access it with your smart phone, you may need to enter a username and password in order to be authorized to do so. Additionally, there may be a maximum number of devices from which you can access your purchased files.
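Real DRM schemes enforce device limits with cryptography and licensing servers, so the sketch below is only a toy illustration of the bookkeeping idea behind a device cap, with made-up user and device names and an assumed limit of five devices.

```python
# Toy sketch of a device-authorization limit; not how FairPlay or any real DRM works.
MAX_DEVICES = 5
authorized = {"alice": {"laptop", "phone"}}

def authorize(user, device):
    devices = authorized.setdefault(user, set())
    if device in devices:
        return True                    # already authorized
    if len(devices) >= MAX_DEVICES:
        return False                   # a device must be deauthorized first
    devices.add(device)
    return True

print(authorize("alice", "tablet"))    # True  (3 of 5 slots used)
print(authorize("alice", "tv"))        # True  (4 of 5)
print(authorize("alice", "car"))       # True  (5 of 5)
print(authorize("alice", "console"))   # False (limit reached)
```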
While it is convenient to carry around your media on your mobile device, keep in mind that you still need to adhere to licensing agreements. Amazon MP3 Music Service, for example, specifies that you must only use their digital content for “your personal, non-commercial, entertainment use.” Apple iTunes authorizes you to use iTunes Products on five iTunes-authorized devices at any time, except for content rentals which may only be used on one device at a time. The exception is iTunes Plus Products, which do not do not contain security technology that limits your usage. According to Apple, “you can copy, store, and burn iTunes Plus Products as reasonably necessary for personal, noncommercial use.” | <urn:uuid:6661a36d-c1e0-4977-a900-14a85b666778> | CC-MAIN-2017-09 | https://www.justaskgemalto.com/us/what-is-a-drm-and-how-does-it-work-on-mobile-device/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00565-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.939948 | 253 | 3.03125 | 3 |
Printer maker Epson just revealed a new machine that spits out pages at a stunning rate of 14 per minute. Wait! Don't stop reading yet. That pace only sounds bad when we’re talking about printers. This behemoth isn’t printing paper—its making it, from materials you'd otherwise toss in the trash.
The company recently announced PaperLab, the world’s first in-office paper recycling system, as Ars Technica first reported. You stick used paper in this machine and a few minutes later, fresh sheets of A4 or A3 paper spit out.
Epson says this is the first “compact office paper making system” that can recycle paper without the use of water—an important ingredient in traditional paper recycling. “Given that water is a precious global resource, Epson felt a dry process was needed,” the company said in its PaperLab announcement. Despite the water-free claim, Epson says PaperLab does need a small amount of water to maintain humidity within the machine.
Epson will show off a prototype of PaperLab during the Eco-Products 2015 conference beginning next Thursday in Tokyo. Then in 2016, PaperLab will roll out in Japan as a test market, with further international releases “to be decided at a later date.”
The current prototype will take up a good amount of space at 8.5 feet wide, 3.9 feet deep, and nearly six feet tall. That’s not something you’d want to keep in your living room, but could work well in an office with a big storeroom.
Epson has yet to announce pricing or a specific launch date in Japan.
Why this matters: The ability to make your own paper at the push of a button could be a huge win for companies looking to save money. But there are a lot of unknowns about this machine right now. We don’t know how much waste paper it takes for PaperLab to make 14 fresh sheets, for example, or how much energy this machine consumes. Those are both key points that will help determine if using PaperLab makes more economic and environmental sense than dumping waste paper in blue buckets.
PaperLab’s three-step process
Although Epson is staying mum on how much power and waste paper the device requires, the company did explain its three-step process for creating new paper.
Once you put waste paper into the machine, PaperLab turns the sheets back into fibers, which Epson says destroys confidential documents—making this a paper shredder and paper maker in one.
Next, the fibers are bound together with other material to increase paper strength and whiteness. At this stage, the machine can also add color, fragrance, or flame resistance to your new batch of paper.
Finally, the machine uses pressure forming to create the final product, which can be business cards, A4, or A3 paper at various thicknesses.
PaperLab may not be coming to the U.S. anytime soon, but this is a fascinating idea that offices around the world will want to watch closely.
This story, "World's first in-office paper recycling machine turns used paper into clean, white sheets" was originally published by PCWorld. | <urn:uuid:770ce10f-96c5-4719-99b2-738979e116ea> | CC-MAIN-2017-09 | http://www.itnews.com/article/3011588/business/worlds-first-in-office-paper-recycling-machine-turns-used-paper-into-clean-white-sheets.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00441-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.943013 | 671 | 2.65625 | 3 |
Serious game tests GIS, disaster response skills
- By Patrick Marshall
- Jul 09, 2013
They may not be much at coming up with catchy names for software, but a team of students and faculty at the Rochester Institute of Technology have done a pretty impressive job of using geographic information systems to sharpen spatial-thinking skills of first responders.
The “Serious Game for Measuring Disaster Response Spatial Thinking,” which runs on ESRI's ArcMAP, presents a scenario in which toxins have washed up on the bank of the Rhine River in Bonn, Germany, after a flood. The player is then given a series of questions for remediation, such as, "Which feature class would you like to buffer in relation to risk of population?"
Once the player chooses, the game repopulates with new information based on that choice. The operation is completed by Python scripts so that the player doesn’t actually have to know how to perform the operation using ArcGIS tools.
"We really wanted them to focus on looking at the map and reasoning about the relationships between the various entities," said Brian Tomaszewski, project leader and assistant professor in RIT’s Department of Information Sciences and Technologies. "You do need to have a basic understanding of the some of the functions, such as a buffer. But the plumbing behind the scenes is taken care of through Python programming. Then they can quickly see the results of their actions and continue making decisions."
At the end of the game scenario, the players receive a score reflecting their spatial thinking skills, as well as a discussion of which choices would have been better and why. As a result, Tomaszewski said, the game not only can be used to demonstrate the potential effectiveness of GIS in disaster response, it can also train responders to select more appropriate responses in particular situations.
The Serious Game was the result of a 10-week class taught by Tomaszewski in the fall of 2012. "The students have very diverse backgrounds," he said. "The computer science students did a lot of the coding. Other students did the compilation of GIS data sets. Other students did documentation."
Tomaszewski said that, in addition to creating more disaster scenarios, he wants to port the game to the Web to make it accessible to more people. "For people learning about disaster management, being able to think spatially, to understand relationships between things — distance, scale and so forth — is important," he said. "It's a way of reasoning about the world that GIS can help enable."
Patrick Marshall is a freelance technology writer for GCN.
A multimodel database management system (DBMS) is a database system that supports more than one data model against a single, integrated back end. The data is stored and exposed through a variety of logical models or views, typically a flexible combination of key-value pairs, documents and graphs.
ArangoDB is a flexible multimodel database that uses a combination of documents, key-value pairs and graphs to store data. It is easy to use thanks to a graphical interface, and it is licensed under the Apache License. It is a fast database that takes up less space than conventional NoSQL databases.
The Alchemy database is a hybrid database that combines an RDBMS and a NoSQL datastore. It can store unstructured and structured data, as the Alchemy database does not have limits on tables, columns or indexes. It operates on commodity hardware and is easy to install.
Calling for better mine safety
A close public/private partnership produces an innovative emergency communications system
- By Doug Beizer
- Jul 01, 2009
Communications technology in many underground mines is primitive, consisting of phones and handsets connected by wire. It works, but it's vulnerable. If an accident severs the line, the system fails, and miners lose contact with people above ground. If it's a dangerous accident, one that leaves people trapped underground, the loss of communications can be deadly.
In the wake of multiple fatal mining accidents, Congress passed a law in 2006 that requires better communications and miner-tracking systems. But developing new technology for tough underground conditions has been demanding, particularly with the law’s tight three-year time frame for fielding a system.
“By government contract perspectives, we had to move at light speed,” said David Snyder, a senior mine electrical engineer at the National Institute for Occupational Safety and Health’s Office of Mine Safety and Health Research.
To meet the goals of the law, called the Mine Improvement and New Emergency Response, or MINER Act, NIOSH officials worked closely with L-3 Communications and other industry partners. A collaborative relationship was critical to keep the project on schedule, particularly because the requirements were vague.
“The MINER Act wasn’t an engineering document, so there were a lot of things that were left for interpretation that ultimately will be interpreted by the Mine Safety and Health Administration,” Snyder said.
For example, the MINER Act states that communications must work after an accident occurs, but it doesn’t say for how long. It also requires the ability to electronically track where people are in a mine, but it doesn’t say with what precision.
Snyder said the ambiguity created trade-offs and system design options that the agency had to evaluate with the help of its development partners.
For its part, L-3 pulled together a team that included Virginia Tech’s Department of Mining and Minerals Engineering and a company called Innovative Wireless Technologies, which provided the wireless solution.
A critical goal of the project was to create a redundant system that would be capable of surviving an emergency. The challenges included limited power and radio range underground and restricted paths into mines for a backup system.
Project leaders originally planned to use a wireless technology called ultra-wideband that was in development for the Defense Advanced Research Projects Agency. It doesn’t require much power or a dedicated frequency, and it can provide location-tracking capabilities. However, developers determined it didn’t work well underground.
“We morphed our original thinking into a mesh solution, and we found 900 MHz to be the ultimate frequency to propagate in a mine. It is the sweet spot,” said Vic Young, who directs L-3’s mine safety programs but used to work in the company’s military communications division.
With a traditional wireless network, fixed base stations communicate with mobile nodes, such as radio handsets, but the system is vulnerable if those base stations go down.
In a mesh system, fixed mesh nodes with backup batteries allow for dynamic routing of communications so that the loss of any particular node does not take the whole system down. In addition, the miner radios also act as nodes for the network, providing further system resiliency. The radio handsets can also communicate with one another independent of the network.
L-3 installed a pilot system in International Coal Group, Wolf Run Mining Company's Sentinel mine near Philippi, W.Va., in October 2008.
The system, called Accolade, consists of a network of mesh nodes that extend from the mine’s surface through an elevator shaft and a portal to one of the working sections underground. The network provides untethered voice and text communications and miner location tracking through handsets that each miner carries.
The system tracks miners by measuring which node is receiving the strongest signal from the handset. A situational awareness display at the surface provides a mine map showing miners' locations. The map data is also available via the Web.
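As a rough illustration of that location logic (not L-3's actual algorithm), the sketch below simply picks, for each handset, the fixed node reporting the strongest signal; the node names and readings are invented.

```python
# Minimal sketch: place each miner at the mesh node that hears his handset
# the loudest (highest received signal strength, in dBm).
def locate_miners(signal_reports):
    """signal_reports maps handset ID -> {node name: RSSI in dBm}."""
    locations = {}
    for handset, readings in signal_reports.items():
        if readings:
            # Higher dBm value means stronger signal (-55 beats -82).
            locations[handset] = max(readings, key=readings.get)
    return locations

reports = {
    "miner_07": {"portal": -82, "crosscut_2": -55, "section_4": -71},
    "miner_12": {"portal": -48, "crosscut_2": -80},
}
print(locate_miners(reports))  # {'miner_07': 'crosscut_2', 'miner_12': 'portal'}
```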
“The biggest benefit of the wireless system is being able to communicate from anywhere in the travel ways,” said Chuck Dunbar, director of acquisitions and planning for International Coal Group. “You don’t have to go find a wired mine phone to talk to the outside or to the working sections.”
In the future, the International Coal Group hopes the ability to communicate wirelessly from an active section of a mine will lead to safety and productivity improvements, such as safety inspections, supply requests, and other maintenance and timekeeping reports.
Officials say the system could also work in other applications, such as underground mass transit systems.
Doug Beizer is a staff writer for Federal Computer Week.
While password security is not ironclad, experts say passwords need to be in the mix, with device-assisted authentication as an additional layer
Google's security team is experimenting with ways to replace passwords for logging in to websites. But while acknowledging passwords alone are no longer enough to protect users, security experts believe they shouldn't be tossed.
Google is testing device-assisted security as a possible password replacement. Ideas include a small Yubico cryptographic card that could be inserted into a USB reader to log in to a Google account or some other supporting website, Wired.com reported Friday. Such a mechanism would have to be supported by the Web browser.
Other authentication options might include someone tapping their smartphone or a smartcard-embedded finger ring on a computer. Details on Google's thinking are contained in a research paper that is scheduled to appear this month in the engineering journal IEEE Security & Privacy Magazine. Google Vice President of Security Eric Grosse and engineer Mayank Upadhyay wrote the paper.
Google was not able to make Grosse or Upadhyay available for an interview, but said in an emailed statement, "We're focused on making authentication more secure, and yet easier to manage. We believe experiments like these can help make login systems better."
The diminishing effectiveness of passwords is seen everyday by the amount of spam spewing from hacked Web mail and social media accounts, such as Facebook and Twitter. Consulting firm Deloitte Touche Tohmatsu predicts that more than 90% of passwords generated this year would take only seconds for a hacker to crack.
While passwords fail to provide ironclad security, experts believe they need to be part of the mix with device-assisted authentication as an additional layer.
"The cell phone is the weakest option, but any sort of two-factor authentication is a serious improvement," said Chester Wisniewski, security adviser for Sophos. "It is important for people to know that this doesn't replace passwords, it simply augments them."
The general principle for strong authentication advises using something you know, something you have and something you are, such as a password, a USB token and a fingerprint reader, respectively. While expecting consumers to have all three would be impractical, having a couple of them would be much stronger security than having just one.
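To make the "something you have" factor concrete, here is a minimal sketch of how a time-based one-time password, the kind a phone app or hardware token displays, can be derived from a shared secret using only the standard library; the secret below is a throwaway example.

```python
# RFC 6238-style TOTP sketch: the server and the device share a secret and both
# derive the same short-lived code from the current 30-second time window.
import base64, hashlib, hmac, struct, time

def totp(shared_secret_b32, period=30, digits=6):
    key = base64.b32decode(shared_secret_b32)
    counter = int(time.time()) // period              # current time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # demo secret; real deployments provision one per user
```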
"The notion of authentication strategy is if you start mixing these things, then it's a lot harder for a bad guy to break the system," said Eve Maler, an analyst with Forrester Research.
New layers of authentication are also being invented, Maler said. For example, when a person makes a purchase through PayPal, the online payment site will check the authenticity of the request through algorithms that consider multiple factors, such as the IP address of the computer, what's being purchased and for how much.
"They're silently observing your behavior," Maler said.
Such techniques can go a long way toward augmenting passwords, Maler said. However, users will still have to choose much stronger passwords than they do today. In its 2012 list of worst passwords used on the Web, SplashData found the top three passwords to be "password," "123456" and "12345678."
The use of any device in authentication opens up the possibility of having it lost or stolen. One answer would be biometrics to establish the identity of the user. "There needs to be an accompanying mechanism to ensure that your device can only be used by you," said Dan Olds, an analyst with the Gabriel Consulting Group.
How far Google can take its ideas toward widespread adoption remains to be seen. But recognizing and trying to solve the password problem is a step in the right direction.
"It's about time we got serious about replacing passwords," said Andrew Storms, director of security operations at nCircle. "Maybe news of Google's experiments will encourage other vendors to look seriously at alternatives."
Read more about access control in CSOonline's Access Control section.
This story, "Google looks to kill passwords, but experts say not so fast" was originally published by CSO. | <urn:uuid:a7a82fbf-5591-44fb-98b8-10616e45ed8e> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2163763/byod/google-looks-to-kill-passwords--but-experts-say-not-so-fast.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00085-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.95395 | 836 | 2.578125 | 3 |
Field Encryption on the IBM i just got easier.
SQL Field Procedures are a new DB2 feature in version 7.1 that allows a user-specified "exit" program to be called whenever data is read from, inserted into, or updated in a field (column). This is somewhat similar to database column triggers; however there are two distinct advantages:
- Field Procedures allow data to be modified on a Read operation, which allows the exit program to automatically decrypt the field value before it is returned to the customer's application.
- Field Procedures provide a separate internal space to store the encrypted version of the field value. This allows organizations to encrypt numeric fields such as packed decimal, signed decimal and integer data types without having to store the encrypted values in a separate file.
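As a rough sketch of what registering a field procedure on a column can look like, the DDL below attaches a hypothetical exit program to a card-number column; the library, program, table and connection details are invented, and the exact syntax should be checked against IBM's DB2 for i documentation. The statement can be issued through any SQL interface, for example the ibm_db Python driver:

```python
# Hypothetical sketch: attach an encryption exit program to an existing column
# so DB2 calls it on every read, insert and update of that field.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=MYDB;HOSTNAME=ibmi.example.com;PORT=446;PROTOCOL=TCPIP;"
    "UID=dbadmin;PWD=secret;", "", "")

ddl = """
ALTER TABLE MYLIB.CUSTOMERS
  ALTER COLUMN CARD_NUMBER
  SET FIELDPROC MYLIB.ENC_FP
"""
ibm_db.exec_immediate(conn, ddl)   # column access is now routed through ENC_FP
ibm_db.close(conn)
```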
We're excited about Field Procedures since it will allow customers to implement column-level encryption on the IBM i without modifying their applications. This is especially important if a customer is running a canned application and/or does not want to modify their source code.
The networks are now used for research, but commercial grids are not far away.
Imagine if computing power were as easily accessible as electricity.
Businesses essentially would be able to tap into a vast network of supercomputers and pay only for the power they use, rather than spend millions of dollars on the hardware itself.
Such a scenario could become a reality in a few years as international efforts to develop computing grids heat up and major computer makers such as IBM seek to develop powerful networks that companies can tap into as needed.
In the last month, major initiatives to create powerful grids linking thousands of computers have been announced in Great Britain and the United States.
Two weeks ago, the National Science Foundation contracted with Intel Corp., IBM and Qwest Communications International Inc. to build a $53 million grid. Once completed in 2003, the grid will link computers at four research centers and will be able to perform 11.6 teraflops (trillion floating-point operations per second) and store more than 450 trillion bytes of data over a 40 billion-bit-per-second optical network, according to the NSF.
A week earlier, IBM, of Armonk, N.Y., announced it had been chosen to help build the United Kingdom National Grid and disclosed plans to build its own grid by using 50 computing "farms" around the world.
While today's largest computer grids are used primarily for scientific and research purposes, such as mapping human genes, the development of standard grid protocols could lead to commercial grids.
Under such a scenario, a company in need of more computing power could tap into the grid and gain access to a teraflop of CPU power or more, paying only for what it uses.
But such commercial computer grids likely are still several years away as research has yet to overcome such obstacles as security and bandwidth concerns.
However, ongoing research is benefiting businesses in other ways by enabling computer and software companies to develop new technologies, said Andy Butler, an analyst with Gartner Group Inc., in Egham, England.
"Computer grids are a very good testbed for leveraging architecture designs that will probably then feed back into more modestly scaling commercial applications," Butler said.
Already, large corporations such as processor makers are creating internal grids to handle complex computer simulations. "Within Sun Microsystems [Inc.], we have a compute farm that totals about 3,000 CPUs, which we use in the course of designing the microprocessors that we build here," said Peter Jeffcock, group marketing manager in Sun's Volume Systems Products Group, in Menlo Park, Calif. "We leverage that power to run different simulations. For example, we can test to see what will happen to a chip if we supply it with 5.1 volts or 5.2 volts, as well as what will happen if we do all that at different temperatures."
"In most companies, you typically find that workstation CPUs are only being used 5 [percent] to 10 percent of the time in a 24-hour day," Jeffcock said. "If you integrate them into a local computer farm, you can get system use in the 80 [percent] to 90 percent level."
In a move that could persuade even more companies to adopt grid computing, Sun last month released its Grid Engine software to the open-source community.
The software essentially locates idle computer resources, matches them to individual job requirements and delivers networkwide computing power to the desktop, effectively managing an organizations computing resources and job distribution.
The free Grid Engine software has proved a valuable resource to Matt Ferris, systems manager with Motorola Inc.'s Semiconductor Products Sector, in Tempe, Ariz.
"We've been using it since February to help us handle test and verification jobs on wireless and broadband circuit designs," Ferris said. By using the software, he said, Motorola's engineers can submit 40 to 50 jobs into a queue, where the software will automatically distribute the workload to various computer systems as they become available, including overnight when workers have gone home.
"Before, we had to do it by hand, hunting around for machines where people weren't logged in," Ferris said. "Now they just put it in a queue; they come in the next morning and review the results."
Because of the enormous potential to offer users access to vast amounts of computing power, it's only a matter of time before grids become commonplace in corporate environments, said Sun's Jeffcock. "We think of this as being something that is inevitable," he said. "The gains are so big, and the benefits to the companies are so huge, that this is an inevitable transition. It's just a matter of when and how fast."
John Patrick, IBM's vice president for Internet strategies, agreed. "I've often been asked, 'What's the next big thing for the Internet?'" Patrick said. "Until now, I didn't have the answer. I'm very confident now that the next big thing will be grid computing."
Cisco has created all sorts of magic inside its boxes to optimize the forwarding of packets.
IP Routing explained in detail
The logic behind IP forwarding is listed in the steps below, assuming the received packet is IPv4. This is process switching, explained in 11 steps:
- A frame enters one of the router's interfaces.
- The first thing the router does is check the frame check sequence (FCS). If the FCS check fails, it means there was an error and the frame is dropped.
- If the FCS is OK, the router looks at the Ethernet Type field to get the packet type. The packet is extracted by discarding the data link header and trailer.
- The IPv4 header checksum is verified; if there is a mismatch, the packet is dropped. If it is OK, we get to the next step.
- The router reads the destination IP address to see whether one of its own interfaces has that IP. If so, the packet has arrived at its destination (that router). The router reads the protocol field in the header to determine which upper-layer protocol software should receive the packet payload.
- Otherwise, if the packet's destination is not the local router, the packet is routed. If the TTL written in the header is greater than 1, routing proceeds. If not, the packet is dropped and an ICMP Time Exceeded message is sent to the packet's source.
- If it is OK, the router looks in its routing table for the most specific prefix match for the destination IP.
- The matching routing table entry holds the next-hop IP and outgoing interface. With that information, the router can look up the next hop's Layer 2 address. With Ethernet as the outgoing interface, that is a MAC address found in the ARP table; the ARP table caches the IP-to-MAC bindings the router has needed in the recent past.
- Once the MAC address is known, the router can generate the data link header and trailer around the packet.
- Before building that frame from the packet, the router decrements the TTL field value (so the IPv4 header checksum has to be recalculated afterwards). Once all fields are ready, the frame is generated with the destination MAC address and so on.
- The frame goes out the interface toward the destination.
So the process is not as simple as we might have imagined at first, but it depicts the normal routing steps for any router. There is one important thing to mention here: the router needs to search through the whole routing table every time it has to resolve a next-hop address in order to forward packets. This search considerably slows down the whole routing process, particularly on routers with huge routing tables.
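To see why that lookup is the expensive part, here is a minimal sketch of a longest-prefix match over a routing table; the prefixes, next hops and interface names are invented.

```python
# Step-8 sketch: walk the whole table and keep the most specific (longest)
# prefix that contains the destination address.
import ipaddress

ROUTING_TABLE = {                  # prefix -> (next-hop IP, outgoing interface)
    "0.0.0.0/0":     ("203.0.113.1", "Gi0/0"),
    "10.0.0.0/8":    ("10.255.0.2",  "Gi0/1"),
    "10.20.0.0/16":  ("10.20.0.2",   "Gi0/2"),
    "10.20.30.0/24": ("10.20.30.2",  "Gi0/3"),
}

def lookup(destination):
    dst = ipaddress.ip_address(destination)
    best_net, best_hop = None, None
    for prefix, hop in ROUTING_TABLE.items():       # full table walk per packet
        net = ipaddress.ip_network(prefix)
        if dst in net and (best_net is None or net.prefixlen > best_net.prefixlen):
            best_net, best_hop = net, hop
    return best_hop

print(lookup("10.20.30.40"))   # ('10.20.30.2', 'Gi0/3'), not the shorter prefixes
```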
Advancements in route lookup process
Over time, Cisco invented a few methods to speed up steps 8 and 9 from the list above. The idea behind these technologies is to speed up the routing table lookup so the next-hop address can be found in as few CPU cycles as possible.
CEF, Fast Switching and Process Switching are those technologies, and in today's routers Fast Switching and CEF (Cisco Express Forwarding) are the ones used.
Fast Switching – route once, forward many
Fast Switching is the older of the two, and we can look at it as adding a caching function to steps 8 and 9 from above. When the first frame comes into the router, it goes through all the steps mentioned above (basically normal process switching), but afterwards the router saves the result of the routing table lookup from steps 8 and 9 in a route cache. The route cache is organized to speed up the cache lookups that follow. The cache keeps only the destination IP, the next-hop address and the data link header that was used with the first frame. Future packets destined to the same IP will match the cache entry, which allows the router to forward them quickly because all the needed information is already prepared in the cache.
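A hedged sketch of that route-once, forward-many idea: the first packet to a destination pays for the slow table walk, and later packets are answered from the cache. The addresses are invented, and a real route cache stores a prebuilt data link header rather than a simple tuple.

```python
# Fast-switching-style demand cache: populate on the first miss, reuse afterwards.
def slow_table_lookup(destination):
    # Stand-in for the full process-switching lookup described earlier.
    return ("10.20.30.2", "Gi0/3")

route_cache = {}   # destination IP -> (next-hop IP, outgoing interface)

def forward(destination):
    if destination not in route_cache:              # cache miss: pay the full cost once
        route_cache[destination] = slow_table_lookup(destination)
    return route_cache[destination]                 # cache hit for every later packet

forward("10.20.30.40")   # first packet: full lookup
forward("10.20.30.40")   # subsequent packets: served from the cache
```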
Don’t be too happy about it, it’s not used anymore with new Cisco equipment because CEF resolves some drawbacks of using Fast Switching.
CEF enables fast Layer 2 header construction and faster output interface lookup.
If you followed all the steps mentioned above, you will easily notice that the most difficult, and thus slowest, step in the whole routing process is the Layer 2 frame rewrite. The Layer 2 frame rewrite is basically the route lookup plus the construction of the whole Layer 2 header needed to send the packet to the next-hop router. That rewrite is what CEF is all about. CEF is a technology of pre-prepared Layer 2 headers, one for every next hop the router has. This enables the router to prepend the Layer 2 header without having to gather all the data that goes inside it every time.
Fast L2 header construction using adjacency table
A CEF-enabled router keeps an adjacency table in its memory, in which it stores already constructed Layer 2 headers, exactly one header for every directly connected neighbor. The table of prepared Layer 2 headers is built from routing table information: the output interface and the next-hop address. The next hop's MAC address, if not yet resolved, is obtained using ARP or a similar Layer 3-to-Layer 2 mapping table and used in the header construction.
The routing process then becomes a matter of selecting one entry from the adjacency table, encapsulating the packet and forwarding it, really fast.
But is the process of selecting one entry from the adjacency table also up to speed?
Fast lookup using FIB
A CEF-enabled router uses the FIB to find the right entry in the adjacency table really fast. The FIB is the Forwarding Information Base, a table that stores, for every destination prefix, a pointer to one of the adjacency table entries. Each FIB entry tells us which Layer 2 header from the adjacency table will be used to encapsulate a packet that needs to be forwarded to a particular destination.
The FIB is filled by reading data from the routing table and sorting and normalizing it for faster lookup. The routing table, or RIB, is not really lookup friendly, and the situations where recursive lookups are needed to deduce the real next-hop address make it even worse.
The FIB resolves all that. When the CEF FIB is used for lookups, the RIB is only consulted to rebuild the FIB when something in the routing changes.
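Putting the two tables together, here is a conceptual sketch of the idea; real CEF builds these structures in optimized memory, not Python dictionaries, and the MAC addresses below are invented. The FIB maps a destination prefix to an adjacency entry, where a ready-made Layer 2 header is waiting.

```python
# Conceptual CEF sketch: the adjacency table holds prebuilt L2 headers and the
# FIB points each destination prefix straight at one of those entries.
ADJACENCY = {   # next hop -> prebuilt Ethernet header fields
    "10.20.30.2": {"dst_mac": "aa:bb:cc:00:00:02", "src_mac": "aa:bb:cc:00:00:01",
                   "ethertype": 0x0800, "interface": "Gi0/3"},
    "10.255.0.2": {"dst_mac": "aa:bb:cc:00:00:22", "src_mac": "aa:bb:cc:00:00:21",
                   "ethertype": 0x0800, "interface": "Gi0/1"},
}

FIB = {   # destination prefix -> adjacency entry to use
    "10.20.30.0/24": "10.20.30.2",
    "10.0.0.0/8":    "10.255.0.2",
}

def cef_rewrite(prefix):
    """The rewrite step reduced to one FIB hit plus one adjacency hit."""
    return ADJACENCY[FIB[prefix]]   # header is ready; just prepend it and send

print(cef_rewrite("10.20.30.0/24")["dst_mac"])
```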
Altogether about the magic
CEF is a technology that uses the FIB and the adjacency table to make routing decisions and Layer 2 frame construction several times faster, and thus makes routing itself faster.
CEF can be implemented in the router's software or, when even faster high-end routing is needed, in hardware using TCAM memory modules. In software or in hardware, the FIB speeds up the lookup of the desired destination prefix, but when hardware is used, CEF enables the FIB lookup to return the desired destination IP in a single search iteration: it always finds the result on the first try, almost magically.
If you want to know more about the TCAM memory used to speed up FIB lookups in hardware, read further here: TCAM and CAM memory usage inside networking devices
With the news that an American woman has received a pacemaker with a wireless connection to the Internet, the so-called “Internet of Things” has taken on a new dimension.
Reuters reported this week that a 61-year-old woman became the first American recipient of the pacemaker, which was approved by the FDA just last month and allows the doctor to monitor how her heart is doing. At least once a day, a server will communicate with the pacemaker over the Internet and get an update. If there is anything unusual, the server can contact the doctor and patient, literally calling the doc on the phone in the middle of the night, if necessary.
The Reuters article quotes the doctor as saying that in the future, wireless devices could monitor high blood pressure, glucose levels or heart failure.
The technology is part of a much broader trend of reaching out to objects in the physical world to bring them into the Internet, so to speak, to build an “Internet of Things.” RFID, short-range wireless technologies and sensor networks are enabling this to happen as they become more commonly used. IPv6, with its greatly expanded address space, allows for many more devices to connect to the Internet.
If all things are connected, all things can be tracked. The earliest applications have centered around tracking shipments in a supply chain, but if the tracking devices are left in objects when they are in use, that could be extremely powerful.
It’s a little scary to think of connecting one’s heart to the Internet. I know the connection is being used in a very narrow way, but if it were at all possible for hackers to tamper with the pacemaker, they probably would, given what we know about what some are capable of. | <urn:uuid:8d981c1c-0c35-49c2-bf53-1204893c7601> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2246930/lan-wan/-the-internet-of-things--now-includes-a-human-heart.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00137-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.959555 | 362 | 2.8125 | 3 |
Authors: Ed Skoudis and Lenny Zeltser
Publisher: Prentice Hall PTR
Every computer user, whether a system administrator or a home user, needs to defend their computer from all sorts of attacks. This book provides basic knowledge about the types of malicious code, with definitions and practical advice. Getting to know your enemy on the opposite side of the Internet is the first step toward a good and solid defense.
About the authors
Ed Skoudis is a computer security consultant with International Network Services. Ed’s expertise includes hacker attacks and defenses, the information security industry, and computer privacy issues. He is an instructor with the SANS Institute, where he teaches popular courses on incident response and computer attack.
An interview with Ed Skoudis is available here
Lenny Zeltser is an information security consultant and an instructor at the SANS Institute, where he teaches courses in malware reverse-engineering. Lenny holds a number of professional certificates including CISSP and GSE and is currently pursuing an MBA degree at MIT.
Inside the book
Malicious code or malware is a term that we hear a lot almost every day. The book kicks off with an introduction to the term. Skoudis talks about malicious code as a prevalent problem, describing malicious users and other reasons that make it possible for malware to spread. Here you encounter a simple but descriptive table presenting the types of malicious code, followed by a history of malicious code. It's a very friendly introduction to the serious content that follows.
What follows is some basic knowledge about viruses. The author describes several infection mechanisms and techniques that a virus can use when infecting executable files, the boot sector, documents and other virus targets. After the infection, virus propagation mechanisms are presented. Of course, the end of the virus story belongs to security mechanisms and some malware self-preservation techniques.
Viruses pose a constant threat so it’s important to know as much as possible about them. After all, they are the foundation for worms which are the subject of the next chapter.
The third chapter is dedicated to worms, but it starts with a comparison of viruses and worms. That way one can get a better picture and find out the main differences. Skoudis starts with the definition and history of worms. If you aren't familiar with some of the worms that were, or still are, notable, they are previewed in a table with their characteristics.
Each worm has components which present the building blocks implemented in it. Those are the worm warhead, the propagation engine, the target selection algorithm, the scanning engine and the payload, and each of them is described. Since there's no slowdown in the appearance of worms, Skoudis also talks about what we can see coming and presents ethical worms, with their pros and cons. To help you defend against worms, you'll also find advice on defense mechanisms here.
Every time you browse the Web or read HTML e-mail, you routinely encounter mobile code, which can be malicious. The author presents browser scripts, ActiveX controls, Java applets, mobile code in e-mail clients, and mobile code in distributed applications. All of these techniques are described with simple examples, and the most important security measures are summarized. This chapter concentrates on the threat that comes in the form of lightweight programs downloaded from a remote system and executed locally, and on suitable defense mechanisms.
The following illustrated threat is a backdoor program. After basic definition, different kinds of backdoor access are shown. The author writes about installing backdoors, starting them automatically in Windows and UNIX operating systems, and also detecting them.
Unlike a backdoor, a Trojan horse is a program that appears to have some useful purpose but really masks some hidden malicious functionality. Skoudis starts by explaining the danger that file names and extensions bring. It's very important to understand the tricks used to hide malicious code in harmless-looking files and how attackers perform name-based attacks. Steganography, a technique for hiding data, is also described.
RootKits, covered in chapter seven, are different from viruses and Trojan horses since they modify the operating system in order to gain access for someone not authorized to use it. Skoudis distinguishes two types of RootKits, user-mode and kernel-mode. First, he analyzes user-mode RootKits for UNIX and Windows operating systems, including their use and defenses. One of the interesting parts of this chapter is the table that shows the development of the Linux RootKit family.
Next the author writes about kernel-mode RootKits, which get at the heart of the operating system. The main difference is that kernel-mode RootKits modify the operating system kernel itself. That way, the attacker can mask his presence more efficiently. First, Skoudis explains what the kernel is and how it can be manipulated in general, touching on both Windows and Linux. Here you also find some defense suggestions, and it's worth paying attention to them.
What you see next is a discussion of six different levels of malware infiltration. A problem discussed here is the possibility for an attacker to alter the functioning of the BIOS and the CPU themselves. The author deals with flashing the BIOS, denial-of-service attacks, microcode and so on. This chapter simply describes possibilities for an attacker that go beyond the knowledge of an end user or even some administrators. But it's very useful to know what one can expect.
What are presented next are three malware attack scenarios. These case studies include the theory presented in previous chapters. Each of the scenarios is based on common mistakes made by computer users, system administrators and security personnel. The covered scenarios are: surfing the Internet, ignored system administrator notice and buffer overflow vulnerability exploited by a worm on the Internet.
Skoudis also writes about building a malware analysis laboratory and offers some good advice about which hardware to use, and then presents a process and tools for putting malware through detailed analysis so one can determine its functionality and purpose.
At the end of the book Skoudis presents links to some useful web sites for keeping up with malware.
My 2 cents
Today’s hostile computing environment doesn’t allow the user to ignore the threat of malware. Each chapter of this book is devoted to one type of malware: viruses, worms, malicious code, backdoors, Trojan horses, User-lever RootKits, and kernel-level manipulation.
Anyone interested in keeping their system safe from attackers should read this book. Although it contains more beginner-level knowledge, it also has good practice examples and scenarios that can help system administrator and security personnel to develop a safe computer environment.
“Malware: Fighting Malicious Code” presents a good start for getting knowledge about malicious code. It’s clearly written, easy to understand and informative. | <urn:uuid:c0f24498-fd50-44b3-8118-1bcceec2ea06> | CC-MAIN-2017-09 | https://www.helpnetsecurity.com/2004/07/26/malware-fighting-malicious-code/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00313-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.926187 | 1,398 | 3.046875 | 3 |
The previous blog post illustrated how someone with malicious intent could pull off a way to compromise a system. It starts by gaining access to the system using a SQL injection. After the initial access, a foothold is established. Then the position is strengthened by uploading the tools of choice. Next, the privileges are escalated and a shell with full system privileges is gained.
However complex this attack method might seem, it will probably happen whenever there is a motive or incentive for the attacker to do it. An incentive for such activity could be to steal trade secrets, intellectual property, credit cards or any other information that the attacker could monetize. Nonetheless, there are other motives that serve as an incentive for an attacker to compromise a system. Brian Krebs, a former Washington Post reporter, has put together a great chart listing the various ways the bad guys can monetize hacked systems (Krebs, 2012). One of the attack methods that tends to gain popularity is to use SQL injection for malware distribution. Basically, by introducing malicious code into the web server, an attacker can turn the web server into a mechanism to deliver malicious code to browsers by taking advantage of client-side vulnerabilities in unpatched browsers. This mechanism was used by the Asprox botnet (Borgaonkar, 2010) (Pelaez, 2008). More recently this attack gained the connotation of a watering hole or strategic web compromise when it targets a trustworthy web site (Kindlund, Caselden & Chen, 2014). Steven Adair and Ned Moran explain it perfectly in their article about trusted websites delivering dangerous results (Adair & Moran, 2012).
How does an attacker perform this? What are the mechanics behind such a method? As the reader noticed in the previous attack scenario, there were some key aspects that were important for the attacker to be successful. One item is the xp_cmdshell stored procedure being enabled, or the ability to have an out-of-band channel to accelerate the speed of the time-based SQL injection technique. But in the watering hole attack scenario there is no need for any of those factors. The attacker only needs a SQL injection point, and from there he can inject a malicious script that will be appended throughout the database. As a consequence, when a user browses to the web page, the data is retrieved from the database and rendered in the browser. Then the malicious code is executed, putting him at the mercy of all kinds of client-side exploits.
The figure below illustrates these steps using a SQL statement that is famous due to the Asprox Trojan (Analysis, 2008) (Shin, Myers & Gupta, 2009). It uses the special SQL Server tables sysobjects and syscolumns in an attempt to get access to the user-defined tables and fields in the website's database. Through a loop it goes over every table column and appends a string containing the malicious <script> tag.
This SQL statement is encoded in hex format and inserted into another SQL statement in order to evade defenses. The reader can practice this technique and use SQLmap to invoke a SQL shell that allows him to execute SQL statements. This prepared statement is then executed, which results in infecting the database data. For reference, a picture of what a DBA will see when looking into the affected database is also shown.
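As a harmless illustration of that encoding step (the inner statement below is a benign stand-in, not the actual Asprox payload), the sketch shows how an arbitrary statement gets hex-encoded and wrapped so the real text only takes shape once the server CASTs and EXECs it:

```python
# Benign sketch of the obfuscation trick: hex-encode a statement and wrap it in a
# DECLARE/CAST/EXEC shell so the payload is not visible as plain text in the request.
inner_sql = "SELECT 1"                                  # harmless stand-in
hex_payload = inner_sql.encode("ascii").hex().upper()

wrapper = (
    "DECLARE @S VARCHAR(4000);"
    f"SET @S=CAST(0x{hex_payload} AS VARCHAR(4000));"
    "EXEC(@S);"
)
print(wrapper)
# DECLARE @S VARCHAR(4000);SET @S=CAST(0x53454C4543542031 AS VARCHAR(4000));EXEC(@S);
```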
From this moment onwards the web server is infected. When a user browses through the infected web pages, his browser will download and execute the evil JavaScript within the <script> tags (Stuck, 2009) (Mendrez, 2009). This evil script can, among other things, scan the visitor's machine for client-side vulnerabilities and deliver the appropriate exploit payload. Similar to using guided missiles, this attack can be very effective, and it is worth mentioning James Lee's presentation "Using Guided Missiles in Drivebys" at Defcon 17.
As demonstrated using this environment the reader could get a practical understanding of how a typical watering hole attack is executed. The next step might be to explore the client side vulnerabilities and exploits by taking advantage of the evil script that is inserted into the database. The reader is encouraged to further learn, practice and explore this vector of attack with tools such as the Browser Exploitation Framework (BeEF) developed by Wade Alcorn and others, or the Social Engineering Toolkit (SET) from David Kennedy.
Continuing the Journey
Even though the tools used are extremely functional and almost no knowledge is needed to run an exploit against a vulnerable server using SQLmap or Metasploit, this is only the first step in building hands-on information security skills. Some of the techniques used are low hanging fruit. Nonetheless, the reader should start with them in order to advance to more complex methods and techniques using an incremental approach. A proposed next step would be to further expand this environment to model business networks with endpoint and boundary defenses such as a proxy, an IDS/IPS, a HIDS, etc. Also, introduce Linux-based systems such as an e-commerce site and test other techniques and exploits (Rocha, 2012). As well, the reader could create scenario-based challenges and simulations like the ones Ed Skoudis promotes in his presentation "Using InfoSec Challenges to build your skills and career" that emphasize the development of critical thinking (Skoudis, 2012).
Further practice of reconnaissance, scanning, exploitation, keeping access and covering tracks will be doable. In addition to offensive skills, the reader might want to practice defensive skills. When the attacker launches a specific technique, what does it look like? Which opportunities does it give a defender to identify and detect it at the network or database level? How does it look at the operating system level? How would the reader be able to better prepare for, identify, contain, eradicate and recover from each of these and other attack scenarios? Could the correlation between the logs from the DNS server and the database server be used to detect such an incident? Which IDS signatures would be needed to detect this kind of traffic? These and other suggestions have also been encouraged throughout the previous chapters.
It's this never-ending cat-and-mouse game that makes our industry a very interesting place to be. Like a game, it involves defenders trying to build a secure system, and attackers innovating, progressing and taking it to the next level by circumventing those measures using different tools and techniques. Then the defender improves the system, and so on. This healthy competition between the attacker and the defender will make us smarter and better at security. As Jon Erickson mentions in his book, "The net result of this interaction is positive, as it produces smarter people, improved security, more stable software, inventive problem-solving techniques, and even a new economy".
Although there are plenty of books and open source information that describe the methods and techniques demonstrated, the environment was built from scratch. The tools and tactics used are not new. However, they are relevant and used in today’s attacks. Likewise, the reader can learn, practice and look behind the scenes to better know them and the impact they have.
The main goal was to demonstrate that hands-on training is a very valuable and cost-efficient training delivery method that allows a better practical understanding of security. This method has advantages in building up your skills, not only from an incident handling and hacking techniques perspective but also from a forensics perspective. One can practice and improve the ability to determine past actions which have taken place and understand all kinds of artifacts which occur within the outlined scenarios. For instance, one could simulate an actual forensic investigation! On the other hand, from an intrusion analyst's perspective, the reader can capture the full contents of the network packets during the exercises and work on mastering TCP/IP and intrusion detection techniques. In addition to that, the data set can also be fed to intrusion detection devices in order to measure how effective they will be in detecting the attacks.
Practice these kinds of skills, share your experiences, get feedback, repeat the practice, grow to be proficient, improve your performance and become fluent.
Krebs, B. (2012, 10 15). The scrap value of a hacked pc, revisited. Retrieved from http://krebsonsecurity.com/2012/10/the-scrap-value-of-a-hacked-pc-revisited/
Borgaonkar, R. (2010). An analysis of the asprox botnet. Manuscript submitted for publication.
Pelaez, M. (2008, 8 15). Obfuscated sql injection attacks. Retrieved from https://isc.sans.edu/diary/Obfuscated SQL Injection attacks/9397
Kindlund, D., Caselden, D., & Chen, X. (2014, 02). [Web log message]. Retrieved from http://www.fireeye.com/blog/uncategorized/2014/02/operation-snowman-deputydog-actor-compromises-us-veterans-of-foreign-wars-website.html
Adair, S., & Moran, N. (2012, 05 12). [Web log message]. Retrieved from http://blog.shadowserver.org/2012/05/15/cyber-espionage-strategic-web-compromises-trusted-websites-serving-dangerous-results
Analysis, X. (2008). Asprox trojan and banner82.com . Retrieved from http://xanalysis.blogspot.ch/2008/05/asprox-trojan-and-banner82com.htm
Shin, Y., Myers, S., & Gupta, M. (2009). A case study on asprox infection dynamics. Manuscript submitted for publication, Computer Science Department, Indiana University.
Stuck, F. (2009). An overview of a sql injection attack. Retrieved from http://geek37.net/Portfolio_SQL_Injection_Presentation.html
Mendrez, R. (2009). Another round of asprox sql injection attacks. Retrieved from http://labs.m86security.com/2010/06/another-round-of-asprox-sql-injection-attacks/
Rocha, L. (2012, Nov 23). Hands-on lab – ecommerce – part 1. Retrieved from https://countuponsecurity.com/2012/11/23/hands-on-lab-ecommerce-part-1/
Skoudis, E. (2012, March). [Web log message]. Retrieved from https://blogs.sans.org/pen-testing/files/2012/03/Put-Your-Game-Face-On-1.11.pdf
We have entered 2016, yet cyber attacks continue to plague the world. Call it unfortunate or attribute it to the present times, but security strategies need to be tightened by people all over the world. Hackers have taken the meaning of risk to a different level altogether, so it will be nothing uncommon for them to launch some of the most unthinkable state-of-the-art attacks, including machine compromises, jailbreaks, attacks on medical facilities and other malware issues.
Despite consistent efforts in providing high-security network software and alerting customers about how to protect their data centers, carriers, and enterprises, security strategists do not rule out chances of deadly strikes.
Here’s taking a look at what the security industry is predicting in terms of cyber attacks; all of which can become deadly if not dealt well in advance.
Viruses were always present, but brace yourself for newer ones! 'Headless worms' is the appropriate term for malicious code that is likely to travel from device to device and can manifest itself in smartphones, medical kits and smart watches. That doesn't mean computers are safe! With such code having a greater chance of multiplying across several other connected devices, there is extra reason to be anxious.
The world is witnessing a new high of cloud infrastructure, which includes ‘software-based computers’. That certainly puts extra pressure on people and companies to be on the lookout for malware that can crack these cloud-based systems. Given how enterprises seem to be largely dependent on virtualization, incidences of cyber attacks can increase by several notches. As experts say, planning is the key and firms need to change the existing ‘cyber security mindsets’.
Not to mention mobile devices running apps that only pave the way for hackers to attack both private and public clouds.
The Sandbox Phenomenon
A number of corporations have taken to testing new software in a ‘sandbox’ prior to having them on the networks. Basically, this is an effective way to perform greater inspection and thereby detect possible threats. Hackers are smarter than before and know exactly how to create ‘two-faced malware’ to achieve their targets.
In reality, the volume of cyber attacks will only rise, thanks to technology. However, threats can best be overcome with a positive attitude and by working collaboratively to ward off these issues. Frankly, it's a difficult but attainable objective.
Congress has voted on the Environmental Protection Agency’s rules controlling greenhouse gas emissions around 10 times over the past few years, and next year the agency is wading into the most controversial parts of the rulemaking.
The politically charged debates conflate facts and myths, and important points with superfluous ones. This primer clears all of that up. Here are the five things you need to know.
1. EPA is Washington’s least-favorite option to address global warming.
EPA regulations were supposed to be the stick to prod lawmakers to enact comprehensive climate-change legislation, such as the cap-and-trade bill the House narrowly passed in 2009. Similar efforts in the Senate collapsed in 2010 and all that was left on the table was the stick of EPA, which in reality turned out to be a polarizing lightning bolt in Congress.
“In 2009, many of my members supported some type of legislation covering greenhouse-gas emissions,” said Christopher Guith, vice president for policy at the U.S. Chamber of Commerce’s Institute for 21st Century Energy. “However, no two industries, and no more than a handful of companies, could agree on an exact approach or language that they could support. The only durable consensus was they didn’t want EPA unilaterally regulating greenhouse-gas emissions.”
Many environmentalists who were instrumental in crafting the cap-and-trade bills in 2009 and 2010 agree.
“Sure, crafting a law that put together a consensus of legislators would have been my preferred outcome,” said Fred Krupp, president of the Environmental Defense Fund, in a recent interview. “In the current situation, lacking consensus in Congress, the Supreme Court has said that EPA has not only the authority but really the responsibility to move forward.”
It’s illustrative of Capitol Hill’s ability to do nothing. And it's also ironic that Washington finds itself left fighting over what everyone could agree they didn’t want after politics soured the better (or less bad) options.
2. The rules are coming because of a suite of court rulings and scientific findings, not because Obama is trying to secretly enact a backdoor cap-and-trade system.
A complicated and politically fraught trail of administrative actions both in the George W. Bush administration and in President Obama’s first four years, along with a web of legal decisions, have compelled EPA to regulate greenhouse-gas emissions under the Clean Air Act.
A 2007 Supreme Court decision found that greenhouse gases fit into the broad definition of a pollutant under the Clean Air Act, which gave EPA the authority (but not the obligation) to regulate these emissions.
A 2009 “endangerment finding” by Obama’s EPA found that greenhouse-gas emissions endanger the public health and welfare, and thus now EPA had the obligation to regulate the emissions under the Clean Air Act. A D.C. Circuit ruling in June upheld EPA’s authority to regulate these emissions, and subsequent challenges to that decision have not succeeded, including one effort that failed last week.
Some Republicans argue that Obama is forcing these rules when he has the power not to and that he is secretly trying to impose a backdoor cap-and-trade system to combat global warming after congressional efforts to do so failed. Obama and top officials at EPA argue that science is compelling the regulations, not an overzealous regulatory regime or a secret climate agenda.
This argument might still be politically potent, but it’s substantively moot since Obama won reelection and the reality is these rules are coming whether you like them or not. Even when and if a Republican wins the White House, court rulings have backed up EPA to a point where Congress would have to change the Clean Air Act—a herculean task by any measure—before these rules are stopped.
EPA might try to ultimately implement some form of cap-and-trade under the Clean Air Act because that offers industry more flexibility in complying with the rule compared to a command-and-control system. This would be good for industry but bad for politicians.
3. EPA’s greenhouse-gas regulations are a package of several different rules.
Here’s what EPA has done so far: In 2009, the agency promulgated rules controlling greenhouse-gas emissions from cars in 2009; in Jan. 2011, EPA started requiring companies operating major sources of greenhouse-gas emissions, like power plants and oil refineries, to apply for a permit to emit those gases; and in March 2012, EPA proposed draft standards limiting greenhouse-gas emissions from new (but not existing) power plants.
The list of what EPA has not done is longer and will be more difficult to accomplish, both substantively and politically. EPA has not yet finalized the regulations for new power plants; it’s expected to do so in the first part of next year. A much bigger lift: proposing rules to control greenhouse-gas emissions from existing power plants. A timeline for these rules is highly uncertain. Sources close to the agency don’t expect action on this for another year or more, but environmental groups and states will keep the legal pressure on EPA to follow through.
Meanwhile, EPA has not yet proposed draft standards limiting greenhouse-gas emissions from either new or existing oil and natural gas refineries. A timeline for both sets of these rules are also uncertain.
4. They’re not actually designed to reduce overall greenhouse-gas emissions.
If that statement seems counterintuitive to you, that’s because it is. Technically these regulations, which EPA is promulgating under the Clean Air Act, are designed to control greenhouse-gas emissions, not reduce them. “We do not in fact have any overall projection of what kind of greenhouse-gas emissions will be avoided as a result of this,” Gina McCarthy, assistant administrator for EPA’s Office of Air and Radiation, said on a conference call way back in Nov. 2010 when EPA was a couple months away from implementing the first part of the regulations. “GHG permitting is not a process for reducing overall GHG emissions.”
In fact, global greenhouse-gas emissions rose 3.2 percent last year despite U.S. carbon emissions being at a 20-year low, thanks in part to increased natural gas that’s offsetting coal (the former burns 50 percent less carbon emissions than the latter). This worldwide rise is due in large part to the developing economies of China and India building hundreds of new coal plants and racing to get their people out of poverty.
5. The political fight over EPA will wear on.
Expect Congress to keep fighting over what to do with EPA’s rules for the indefinite future. Republicans might employ the Congressional Review Act to try to nullify any and all parts of these regulations, but those efforts will probably not succeed (just like the GOP’s CRA efforts on other EPA rules didn’t succeed). Nonetheless, these fights will be particularly potent between now and 2014, when a full third of the Senate is up for reelection and political messaging will be turned up.
In another ironic twist, the Obama administration might try again to use EPA as a stick to prod Congress to act when the agency gets ready to move forward on the rules affecting existing power plants. The threat of those rules might hang more heavily over Congress because their potential impact on the economy will be far greater than the rules affecting only new power plants. | <urn:uuid:01ae0601-6ea9-4b2b-8c26-02357b9180ff> | CC-MAIN-2017-09 | http://www.nextgov.com/health/2012/12/what-you-need-know-about-epas-carbon-rules/60372/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00081-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.946727 | 1,533 | 2.765625 | 3 |
Tremors generate tweets in new USGS earthquake program
The Twitter Earthquake Detector could spread the word before official alerts
- By Dan Campbell
- Dec 21, 2009
The U.S. Geological Survey is testing the use of Twitter as a means to quickly collect and disseminate earthquake-related information. The popular social networking Web site and blogging tool is being used as a means to gather firsthand accounts of seismic events as they unfold.
Funded by the American Recovery and Reinvestment Act, the Twitter Earthquake Detector (TED) program is an “exploratory effort” intended to gather real-time earthquake-related messages, according to USGS. The idea is to have people who actually feel a tremor or observe its effects to tweet their observations.
The TED system applies location, time and keyword filtering to track accounts of tremors. The system allows for first impressions and even photos of the event to be delivered to the public from within or near the quake’s epicenter prior to any official report.
“Many people use Twitter, so after an earthquake, they often rapidly report that an earthquake has occurred and describe what they’ve experienced,” said Paul Earle, a USGS seismologist. “Twitter reports often precede the USGS’s publicly released, scientifically verified earthquake alerts.”
TED monitors Twitter for tweets that contain the word “earthquake” in all languages. The system also queries Twitter after USGS or another contributing network to the Advanced National Seismic System detects an earthquake, Earle said.
The TED program is intended to augment rather than replace other USGS earthquake projects that rapidly detect and report earthquake locations and magnitudes in the United States and globally. Tweets typically provide the initial information to the public faster than official scientific alerts, which can take between two and 20 minutes, depending on the location of the event. The program has great potential, particularly in areas where seismic instrumentation is sparse, USGS said.
“In densely instrumented regions, like California, locations and magnitudes are produced within two to three minutes of an event,” said Michelle Guy, a USGS scientist and software developer. But the “time increases up to 20 minutes in sparsely instrumented regions.”
“Analyzing the tweets provides an early indication of what people experience before the quantitative information” is analyzed and delivered, Guy said.
However, USGS, which publishes the location and magnitude of about 50 earthquakes a day, cautioned that tweets should be viewed as a preview and supplement to the official report. Twitter-based accounts are admittedly anecdotal and could even prove to be false positives.
“The basic difference is speed versus accuracy,” Guy said.
The tweets are subsequently attached to the official earthquake alert and report with a summary of the cities and an interactive map showing their origin. The tweets are open to the public to search and analyze. The program may be reviewed at twitter.com/USGSted as well as www.USGS.gov/socialmedia.
Earle said people are integral to the success of the TED program. “Without their tweets, we would have no system,” he said.
Dan Campbell is a freelance writer with Government Computer News and the president of Millennia Systems Inc. | <urn:uuid:8727d5f7-e4f4-4cf5-a34b-596465e688e8> | CC-MAIN-2017-09 | https://gcn.com/articles/2009/12/21/usgs-earthquake-twitter-tweets.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00257-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.941817 | 687 | 2.671875 | 3 |
Recent technological advances in high-voltage direct current (HVDC) transmission has led many to see it as a crucial element in an efficient, smart power system of the future, providing power corridors connecting distant supply and demand centers, with minimal energy losses.
With its strong central control, China is implementing the biggest and most rapid roll-out. At the same time it is developing a HVDC manufacturing base that is displacing imports, and could eventually challenge the handful of established global players, helping to keep a lid on prices as the global market takes off.
Quest for clean and renewable power is increasing globally year by year. Governments are looking at different ways to solve their energy crisis; interconnection of HVDC systems is one in that. Lots of investment is going into connecting different power grids and thousands of Megawatts of power is being sent everyday across these grids.
The first long distance High Voltage Direct Current was sent in 1882 over 57 km and only 1.5kW was sent in Germany. Now the longest transmission is the Rio Madeira transmission link in Brazil which has a length of 2385km and sends 7.1GW of power.
In these 130 years the concept of Direct Current has again come into relevance with people realising its advantages over long distance transmission and how the problems that were earlier faced can be overcome. Thomas Edison popularised the concept of DC everywhere but it never really caught the imagination of the people. Now after numerous researches and new innovations in this field, the industries are again looking at HVDC to overcome the problems of HVAC transmission.
The average size of the HVDC transmission systems has increased in the recent years. The market for this transmission system is also increasing with more countries getting involved with the project and installing more HVDC grids.
HVDC has various advantages like for long distances it is much cheaper to transmit power, the transmission losses are less for larger distances, they do not have any maximum transmission distance and one of the very big advantage is that it allows the power to be transferred from one AC grid to another having different frequencies. This helps in linking incompatible grids, brings stability and increases the economy.
The main concerns with HVDC are that its converter stations are expensive and the system of controlling the power flow must be well communicated so the multi-terminal systems are costly. There are big companies getting involved in the HVDC market and are coming up with innovative ideas to solve some of the issues concerning this market.
In HVDC the basic process at the transmitting end is to convert the AC to DC and at the receiving end convert this DC back to AC. These conversions can be done by using rectifiers and inverters. The other important devices used in this are filters, thyristors, Insulated Gate Bijunction Transistor (IGBT) and Voltage Source Converter (VSC). There is a lot of research going on in the VSC field because it is one of the key aspects to reduce the losses. The power can be sent by overhead lines or undersea cables.
What the Report Offers
1) Market Definition for the specified topic along with identification of key drivers and restraints for the market.
2) Market analysis for the HVDC transmission systems Market, with region specific assessments and competition analysis on a global and regional scale.
3) Identification of factors instrumental in changing the market scenarios, rising prospective opportunities and identification of key companies which can influence the market on a global and regional scale.
4) Extensively researched competitive landscape section with profiles of major companies along with their share of markets.
5) Identification and analysis of the Macro and Micro factors that affect the HVDC transmission systems market on both global and regional scale.
6) A comprehensive list of key market players along with the analysis of their current strategic interests and key financial information. | <urn:uuid:ccc65b26-6a7d-4155-859d-776ac413ce06> | CC-MAIN-2017-09 | https://www.mordorintelligence.com/industry-reports/china-high-voltage-direct-current-hvdc-transmission-systems-market-industry | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00433-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.952455 | 790 | 3.34375 | 3 |
Europe has spent hundreds of billions of euros rescuing its banks but may have lost an entire generation of young people in the process, the president of the European Parliament said.
Since the region's debt crisis erupted in Greece in late 2009, the European Union has created complex rescue mechanisms to prop up distressed countries and their shaky banking sectors, setting aside a total of 700 billion euros.
But little has been done to tackle the devastating social impact of the crisis, with more than 26 million people unemployed across the EU, including one in every two young people in Greece, Spain and parts of Italy and Portugal.
That crippling level of unemployment has led to protests and outbreaks of violence across southern Europe, raising the threat of full-scale social breakdown, including rising crime and anti-immigrant attacks that can further rattle unstable governments.
"We saved the banks but are running the risk of losing a generation," said Martin Schulz, a German socialist who has led the European Parliament, the EU's only directly elected institution, since January last year.
"One of the biggest threats to the European Union is that people entirely lose their confidence in the capacity of the EU to solve their problems. And if the younger generation is losing trust, then in my eyes the European Union is in real danger," he told Reuters in an interview.
Figures released last week showed 57 percent of Greeks aged 15 to 24 are out of work, and a similar scourge is tearing apart the fabric of Spain, where some university graduates in their 30s have never had a job. ( link.reuters.com/dab48s )
European Union heads of state and government will discuss the fallout from the debt crisis at a summit on March 14-15.
There are plans for a "youth employment guarantee", which would ensure that people under 25 receive either an offer of work, further education or work-related training at least four months after leaving education or being employed.
That is part of a 6-billion-euro initiative to tackle youth unemployment in the worst-hit regions of Europe and head off the prospect of life-long joblessness. But political analysts say it is a case of too little, too late.
Schulz, 57, who finished high school but did not go to university and began his career as an apprentice bookseller, said he had recently taken part in a debate where he was challenged by a Spanish woman over the issue of young people being abandoned for the sake of rescuing wealthy banks.
"She effectively raised the question: 'You have given 700 billion euros for the banking system, how much money do you have for me?'" he said. "And what is my answer?
"If we have 700 billion euros to stabilise the banking system, we must have at least as much money to stabilize the young generation in such countries," he said.
"We are world champions in cuts, but we have less idea ... when it comes to stimulating growth." | <urn:uuid:8a0ec0da-5e79-4e46-a37d-4347a677d07b> | CC-MAIN-2017-09 | http://www.banktech.com/compliance/banks-saved-but-europe-risks--losing-a-generation-/d/d-id/1296224?cid=sbx_banktech_related_slideshow_default_customers_want_it_all_balancing_simplici&itc=sbx_banktech_related_slideshow_default_customers_want_it_all_balancing_simplici | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00605-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.974391 | 599 | 2.515625 | 3 |
Does true literacy now include the ability to write software programs, just as it does the ability to read, write and do sums?
2012 saw a surge of sentiment in the press and blogosphere that we should think of programming as a vital cultural skill. And the year included many stories about newly emerging massive open online courses (MOOCs), which would provide tools to help people learn to sling some code.
TEST YOURSELF: The 2012 tech news quiz
Perhaps the most famous advocate of this idea was New York city Mayor Michael Bloomberg, who, despite his busy schedule, vowed to learn how to program himself. "My New Year's resolution is to learn to code with Codecademy in 2012!" he wrote on Twitter at the beginning of the year.
Backed by venture capital firm Union Square Ventures, Codecademy is a project that offers basic Web programming skills in an online format. The project attracted over 400,000 participants, including Mayor Bloomberg, to learn how to program in the year 2012.
"As technology becomes the driving force in our economy, the ability to program and understand programming is becoming more important," wrote Andy Weissman, a partner for Union Square, which raised over $12 million for Codecademy.
Farhad Manjoo, technology writer for online news magazine Slate, also argued that because computers now touch pretty much every aspect of our lives, we should be at least somewhat knowledgeable about how they operate. "The fact that any moron can use a computer has lulled us into complacency about the digital revolution," he wrote. "Theres no better way to learn how computers work than to start programming."
Not everyone thinks teaching coding to the masses is a good idea, however. Bloomberg's proclamation set off a backlash from programmers and others who warned people away from learning the practice, at least if they were pursuing it only to become more well-rounded in their education.
"It's actually damn hard to learn to code if you have no background in engineering or math. And frankly, Codecademy has been no help," wrote Audrey Watters, an education writer, after trying the service. "If you were to sit me down in front of a blank IDE and ask me to build something, I wouldn't have any clue how to begin."
Coding is one of many practices that we humans rely on that, for the most part, only specialists understand, Atwood argued. We have electricians to fix the lights, doctors to remedy our ailments, plumbers to stop the leaking faucets. "If your toilet is clogged, you shouldn't need to take a two-week in-depth plumbing course on toiletcademy.com to understand how to fix that," he wrote.
As the year progressed however, more options became available for those wishing to learn how to code. The Khan Academy, a popular resource for mathematics and science education and interactive video tutorials, also launched a curriculum for learning basic programming and Web illustration.
The year 2012 could also remembered for the rise of the MOOCs. Universities and colleges have offered distance education online classes for well over a decade, but the new generation of MOOCs offer classes for little cost, on flexible schedules, and that require no prerequisites. Unlike services such as Khan's and Codecademy, MOOC classes are fully-fledged college classes; many of them are actual classes that universities repurposed for the Web. They are perfect for the hard-driving autodidact who may have quickly worked through all that Khan Academy offers.
And computer science curricula play a central part in many MOOCs. Coursera, which has attracted over 2 million users, offers a range of advanced computer science in areas such as artificial intelligence and robotics. Udacity offers many computer and networking classes, including those on advanced topics such as software debugging and testing, HTML5 Web design, and parallel programming. And edX draws from classes taught at Harvard University and the Massachusetts Institute of Technology in fields as diverse as quantum computation and SaaS (software-as-a-service).
"As we learned from Wikipedia, demand for knowledge is so enormous that good, free online materials can attract extraordinary numbers of people from all over the world," observed technology writer Clay Shirky, in a blog post. | <urn:uuid:c46220dd-3b3d-4c60-884c-902dc8529c21> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2162381/software/2012--the-year-that-coding-became-de-rigueur.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00181-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.968176 | 880 | 2.859375 | 3 |
Researchers at Stanford University have made progress toward designing a battery with a lithium anode, a development that could increase battery power in electronics.
Reporting in the journal Nature Nanotechnology, the researchers described how they designed a lithium-metal anode in order to boost the energy storage density.
The anode of a battery discharges electrons into the current cycle. In a regular lithium-ion battery, it is usually made of graphite or silicon.
Lithium is known for its high energy density and lightweight properties, but it has proven problematic in battery research.
Using it as an anode results in metal deposits that pose serious safety concerns and low energy efficiency during charge and discharge cycles, according to the team, which includes former U.S. Secretary of Energy Steven Chu.
However, lithium metal would be the optimal choice as an anode material because of its high energy density.
In its approach described in the report, the team managed to coat a lithium metal anode with a special protective barrier. It consists of a honeycomb-like structure of hollow carbon nanospheres about 20 nanometers thick.
The coating isolates the lithium metal depositions, according to the researchers.
“The cycling Coulombic efficiency can be highly stable at (about) 99 percent for more than 150 cycles,” the researchers wrote, adding that the efficiency must be improved to over 99.9 percent for practical batteries.
In rechargeable batteries, Coulombic efficiency is often expressed as a percentage to describe the energy used during discharge compared with the energy used when charging.
“The lithium metal anode technology we developed can impact consumer electronics such as smartphones and laptops, and electrical vehicles,” Yi Cui, an associate professor in Stanford’s Department of Chemistry, wrote in an email.
“It can also impact grid-scale energy storage. We can enable high energy density and low-cost batteries.”
The technology could be commercialized in five years, Cui added. | <urn:uuid:3e98ca5a-6f84-46b1-b20c-b3b643abb885> | CC-MAIN-2017-09 | http://www.cio.com/article/2459041/lithiummetal-battery-could-boost-gadget-power.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00001-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.937887 | 411 | 4.0625 | 4 |
As more and more of our lives migrate online, congress, state legislatures and local governments are grappling with how to address the issues and challenges this presents. The problem is that by the nature and structure of the Internet itself, content and commerce on the Internet are virtually impossible regulate.
The federal government is better positioned than a state or local government, and even they have a very difficult time. Internet servers and mirror sites located all over the world allow people to operate businesses from whatever jurisdiction they choose. If tightly controlled countries like China and Iran can’t completely stop their citizens from accessing websites they deem objectionable, it is virtually impossible in a free, democratic society like the United States.
Internet gambling is an excellent case study of how difficult it is to effectively regulate many aspects of the Internet.
Just as Internet poker was gaining in popularity in the early 2000s, congress passed the Unlawful Internet Gaming Enforcement Act (UIGEA), which it thought would put an end to Internet gambling. But a 2010 study by the respected Internet gaming research along with data firm H2 Gambling Capital found that Internet poker was booming despite the law. The study found there were between 10 million and 11 million people playing Internet poker for money in the United States.
Leaving aside personal feelings about gambling and whether Internet gambling should be legal, it is virtually impossible to enforce a law that 11 million people are violating in the privacy of their own homes. Between 2006 (when UIGEA passed) and 2011, billions of hands of poker were played, and billions of dollars changed hands. And for every hand of poker played, the companies operating the games took a percentage. But none of that money went to U.S. companies, and no governmental entity in the United States earned any tax revenue from it. Instead of ending Internet gambling in the U.S., UIGEA simply forced it offshore.
Companies that were operating in the U.S. market withdrew, and were replaced by companies that were less interested in respecting the law. Places like Gibraltar, The Isle of Man and Alderney became the homes to multi-million dollar businesses. Their governments and regulators created a friendly regulatory structure and welcomed the businesses. Many of these countries had a very small GDP. Revenue from the new gambling enterprises provided a huge boost in tax revenue, and they had no incentive to enforce U.S. law.
In 2011, after years of trying, the United States’ Department of Justice was able to seize the domain names and shut down the U.S. operations of the world’s two biggest Internet poker companies. But their enforcement actions did not arise from great police and detective work, nor did it come from tightly written, effective statutes. It came from an informant who the FBI was able to arrest because of a spat between the informant and his employers.
According to a survey by Poker Voters of America (disclosure: they are a former client), there were at least 532 Internet poker sites in operation in 2006. The closure of two of those sites, through complete luck, hardly constitutes the triumph of government’s ability to regulate the Internet.
And if proof is needed, less than 24 hours later, it was easy to find many sites ready and willing to accept wagers from U.S.-based players.
The biggest action that slowed down Internet poker in the U.S. was the decision by ESPN to stop accepting advertising from Internet poker companies -- a decision that came in the wake of the Department of Justice’s actions, but was completely voluntary.
Fast forward to 2013. Federal law remains unchanged; it is still illegal (at least on paper) to gamble for money over the Internet. Despite that, anyone with a computer and a credit card can be playing Internet poker within a few minutes. For that matter, if blackjack is your game, you can find that too. The same is true with slots, craps, bingo and roulette. You can even play backgammon for money if you want.
Gambling has a long history and seems to be about as certain in society as death and taxes. Neither the Internet nor gambling is going away. And when the two are combined, it is a matter of "where" the games will take place, not "if" they will take place. The only question is, will they take place on computer servers owned by foreign companies located in foreign territories? Or will they take place on servers located in the United States that are owned and operated by companies based here.
This story was originally published by Techwire.net.
(Editor's note: Then Assemblymember Lloyd Levine was the first state legislator in the country to introduce legislation to legalize Internet gaming at the state level. Since that time, he has served as a consultant in the Internet gaming industry, been featured panelist, speaker and moderator and many Internet gaming conferences, and is a frequent contributor to gaming publications around the world. In part two of this series, Levine will look at the current efforts by various state legislatures to legalize intrastate, Internet poker.) | <urn:uuid:625d2f4c-b7ec-4747-929b-bcbe303a01d9> | CC-MAIN-2017-09 | http://www.govtech.com/internet/Internet-Gaming-Law-vs-Reality-2.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00353-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.9746 | 1,032 | 2.578125 | 3 |
Pointer records are used to map a network interface (IP) to a host name. These are primarily used for reverse DNS.
- Name: This usually represents the last octet of the IP address.
- System (PTR to): This will be the value (the reverse DNS) for your host / computer within your domain.
- TTL: The TTL (Time to Live) is the amount of time your record will stay in cache on systems requesting your record (resolving nameservers, browsers, etc.). The TTL is set in seconds, so 60 is one minute, 1800 is 30 minutes, etc.
- Best Practice Tip
If you plan on changing your reverse DNS TTL to a low value a few hours before you make the change (especially for mail servers). This way you won’t have any downtime during the change. Once your reverse DNS changes you can always raise your TTL to a higher value again.
Reverse DNS overview:
Reverse DNS is setup very similar to how normal (forward) DNS is setup. When you delegate forward DNS the owner of the domain tell the registrar to have your domain use certain name servers. Reverse DNS works the same way in that the owner of the IPs needs to delegate the reverse DNS to DNS Made Easy name servers as well. The owner of the IPs is usually the ISP, the hosting provider, or your own group of they are directly delegated from ARIN.
For reverse DNS you will have to setup your reverse DNS domain. This is a special domain that ends with “in-addr.arpa”. This domain is created in DNS Made Easy in the same manner as any other domain (Add Domains). You will need to ask the organization that owns those IPs (usually your ISP or your hosting provider) what domain name to create as it is based on how large of a block of IPs you have and how they are delegated to your group.
Then you will have to have the organization that owns those IPs (usually your ISP or hosting provider) delegate the reverse DNS for your IPs to DNS Made Easy (similar to how you delegated the DNS for your domains to DNS Made Easy).
If you only have a few hosts that you need reverse DNS for, it may be easier to just have the owner of those IPs set the entries in their reverse DNS domain for your hosts.
We have a full step by step tutorial that you can view at:
Example 1 – PTR record for the 192.168.1.0/27 block (addresses 192.168.1.1 – 192.168.1.30) and the reverse DNS for 192.168.1.10. This PTR record is created in the “27/1.168.192.in-addr.arpa” zone.
PTR record details:
- Name: 10.27/1.168.192.in-addr.arpa. is the host which are we are making an entry for. The domain / zone name is always appended to your domain. So in the data entry screen we only enter 10. The format of your reverse zone is dependent on how your provider delegates it, for example our ISP could have used 27-1.168.192.in-addr.arpa. instead of 27/1.168.192.in-addr.arpa. You must ask your provider for the correct syntax of your zone as reverse DNS will not resolve unless this is set up in the same syntax as the delegation.
- Data / System : mail.example.com. (including the trailing dot). You must include the trailing dot to keep the reverse DNS domain name from being appended to the end of your record.
- TTL (time to live) – The 1800 indicates how often (in seconds) that this record will exist (will be cached) in other systems.
- The end result of this record is that 10.27/1.168.192.in-addr.arpa. points to mail.example.com. | <urn:uuid:4e08806f-5730-4820-9c83-5aec78092f72> | CC-MAIN-2017-09 | http://help.dnsmadeeasy.com/managed-dns/dns-record-types/ptr-record/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00529-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.919286 | 842 | 2.8125 | 3 |
New and improved fast paced technologies are offering us exciting ways to live, are helping us at work, and continue pushing the boundaries of innovation and efficiency. However, along with all of the good comes the bad – the raised risk and exposure to security threats.
In part one and part two of our three part series on cyber threats facing the Asia Pacific (APAC) region, we established that APAC has the worst record in terms of cyber security, with its countries some of the most vulnerable in the world and the discovery of breaches taking over three times longer than the global average.
That being said, the interest and awareness of cyber security starting 2017 is higher than ever following a year filled with some of the world’s biggest hacks, many of which affected APAC tremendously, also proving that cyber attacks can impact any industry, from finance to applications.
With APAC having emerged as the globe’s go-to mobile application core, you can find apps galore for messaging, social networking, games, and more. This heightened demand comes from most of APAC’s countries being mobile-first nations, with smartphones exceeding all other devices used. Additionally, a study recently found that people spend more time in app usage than on the mobile web, which should put it into perspective – the app world is huge.
Applications under constant cyber attack certainly is nothing new – not only to APAC. And with so many users, the trove of data within each application is on the radars of hackers. And APAC is no stranger to mobile malware affecting some of its’ most popular apps – such as Pokémon Go in Taiwan and Japan – used to breach and expose the personal information of users, along with stealing any forms of currency stored in the app.
Mobile malware – a closer look
Cybercriminals are working around the clock seeking new ways to exploit vulnerabilities in mobile apps in order to gain the targeted user data, and malware plays a key part in allowing attackers to access and disrupt apps and devices.
Mobile malware can be found lurking behind every unknown click. As seen in May of 2016, where users around APAC were exposed to updates masked as Google updates which actually contained malware capable of snooping on calls and texts, gathering stored data, and accessing various apps.
The threat of mobile malware isn’t going anywhere, and a large part of the blame can be placed on insecure app development and failing to follow basic application security protocols. But as the app world continues to grow at a rapid pace, application developers should be ready for the world from the start by placing security in the lead through a secure SDLC and by using the right source code analysis solution to ensure your app will be safe from the threats ahead.
To learn about our security solutions, click here
Sign up today & never miss an update from the Checkmarx blog
Interested in trying CxSAST on your own code? You can now use Checkmarx's solution to scan uncompiled / unbuilt source code in 18 coding and scripting languages and identify the vulnerable lines of code. CxSAST will even find the best-fix locations for you and suggest the best remediation techniques. Sign up for your FREE trial now.
Checkmarx is now offering you the opportunity to see how CxSAST identifies application-layer vulnerabilities in real-time. Our in-house security experts will run the scan and demonstrate how the solution's queries can be tweaked as per your specific needs and requirements. Fill in your details and we'll schedule a FREE live demo with you. | <urn:uuid:826f3d7a-8c26-4fa7-85a7-e1da1f002f46> | CC-MAIN-2017-09 | https://www.checkmarx.com/2017/02/13/cyber-threats-facing-apac-applications/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00053-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.941375 | 727 | 2.6875 | 3 |
Jason | 5th December 2012
First off, there are a number of targets you should be aware of when filtering for @mentions of Twitter usernames:
twitter.user.id - The ID of the Twitter user that sent this Tweet
twitter.user.screen_name - The @username of the Twitter user that sent this Tweet
twitter.mentions - An array of Twitter @usernames that are mentioned in this Tweet
twitter.mention_ids - An array of Twitter user IDs that are mentioned in this Tweet
twitter.in_reply_to_screen_name - The @username of the Twitter user this Tweet is replying to. Note: This @username will also appear in the twitter.mentions and twitter.mention_ids arrays
twitter.retweet.user.id - The ID of the Twitter user that sent this retweet
twitter.retweet.user.screen_name - The @username of the Twitter user that sent this retweet
twitter.retweet.mentions - An array of Twitter @usernames that are mentioned in this retweet
Both Tweets and retweets
interaction.author.id - The author's ID on the service from which they generated a post. For example, their Twitter user ID
Secondly, there are two important syntax rules you should be familiar with when writing your CSDL. These rules are useful to know both when filtering for @mentions, and filtering for other keywords:
Use of [email protected] symbols when filtering for Twitter @usernames
You should not use the [email protected] symbol when filtering for usernames. Twitter usernames are passed on to us from Twitter as the bare username, without the appended [email protected] symbol. Further details of how our CSDL filtering engine works with regard to @Mentions, URLs and punctuation can be found on the documentation page - The CSDL Engine : How it Works.
Use of the IN and CONTAINS_ANY operators
contains_any - Matches if one of your comma separated keywords or phrases are contained as words or phrases in the target field. For example, twitter.user.location contains_any “New, Old” will match locations such as “New York”, but not “Oldfield”.
in - Matches if your comma separated keywords or phrases are an exact match of the full content of the target field. For example, twitter.user.location in “New York” would match the location “New York”, but not “New York, NY”.
How to filter for users sending Tweets
The best targets to use if you want to filter on a list of users who are sending Tweets are twitter.user.id or twitter.user.screen_name. If you are only interested in people sending retweets, you would want to use the twitter.retweet.user.id or twitter.retweet.user.screen_name targets. If you would like to receive both Tweets and retweets, you will be better off using interaction.author.id in conjunction with ‘interaction.type == “twitter”’.
How to filter for Twitter users @mentioned in Tweets
You should use the twitter.mentions or twitter.mention_ids targets. People often try to filter incorrectly for Tweets containing mentions of @usernames using the following CSDL:
twitter.text contains_any "@DataSift, @DataSiftDev, @DataSiftAPI"
The DataSift filtering engine filters keywords by first stripping out any @mentions or links from the main body of text, and filtering them separately using the twitter.mentions targets and links augmentation respectively, so you should never be able to find a @mention by filtering on twitter.text or interaction.content.
Below is an example of a correct way to filter for @mentions of Twitter usernames within Tweets:
twitter.mentions in "DataSift, DataSiftDev, DataSiftAPI"
You could also use twitter.mention_ids to filter on the Twitter user ID, rather than the @username:
twitter.mention_ids in [155505157, 165781228, 425158828] | <urn:uuid:c65c56f6-c568-443f-a80d-83db5759e77f> | CC-MAIN-2017-09 | http://dev.datasift.com/blog/how-best-filter-twitter-mentions | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00049-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.83686 | 904 | 2.671875 | 3 |
by Gregory R.Scholz, Northrop Grumman Information Technology
Wireless networks are described as both a boon to computer users as well as a security nightmare; both statements are correct. The primary purpose of this article is to describe a strong security architecture for wireless networks. Additionally, the reader should take from it a better understanding of the variety of options available for building and securing wireless networks, regardless of whether all options are implemented. The security inherent with IEEE 802.11 wireless networks is weak at best. The 802.11 standard provides only for
Wired Equivalent Privacy
, or WEP, which was never intended to provide a high level of security . For an overview of 802.11 and WEP, see reference . Wireless networks can, however, be highly secure using a combination of traditional security measures, open standard wireless security features, and proprietary features. In some regard, this is no different than traditional wired networks such as Ethernet, IP, and so on, which have no security built in but can be highly secure. The design described here uses predominantly Cisco devices and software. However, unless explicitly stated to be proprietary, it should be assumed that a described feature is either open standard or, at least, available from multiple vendors.
Customer needs range from highly secure applications containing financial or confidential medical information to convenience for the public "hot spot" needing access to the Internet. The former requires multiple layers of authentication and encryption that ensures a hacker will not be able to successfully intercept any usable information or use the wireless network undetected. The latter requires little or no security other than policy directing all traffic between the wireless network and the Internet. Security is grouped into two areas: maintaining confidentiality of traffic on the wireless network and restricting use of the wireless network. Some options discussed here provide both, whereas others provide for a specific area of security.
The level of security required on the wireless network is proportional to the skill set required to design it. However, the difficulty of routine maintenance of a secure wireless network is highly dependant on the quality of the design. In most cases, routine maintenance of a well-designed wireless network is accomplished in a similar manner to the existing administrative tasks of adding and removing users and devices on the network. It is also assumed that security-related services such as authentication servers and firewall devices are available on the wired network to control the wireless network traffic.
It is not necessarily the case that one can see the user or device attempting to use the wireless network. This is the most alarming part of wireless network security. In a wired network, an unauthorized connected host can often be detected by link status on an access device or by actually seeing an unknown user or device connected to the network. The term "inside threat" is often used to refer to authorized users attempting unauthorized access. This is the inside threat because they exist within the boundaries that traditional network security is designed to protect. Wireless hackers must be considered more dangerous than traditional hackers and the inside threat combined because if they gain access, they are already past any traditional security mechanisms. A wireless network hacker does not need to be present in the facility. This new inside threat may be outside in the parking lot.
is the new equivalent to the traditional war dialing. All that is required to intercept wireless network communications is to be within range of a wireless access point inside or outside the facility.
Physical Wireless Network
In a highly secure environment, a best practice is to have the wireless access points connect to a wired network physically or logically separate from the existing user network. This is accomplished using a separate switched network as the wireless backbone or with a
(VLAN) that does not have a routing interface to pass its traffic to the existing wired network. This network terminates at a
Virtual Private Network
(VPN) device, which resides behind a firewall. In this manner, traffic to and from the wireless network is controlled by the firewall policy and, if available, filters on the VPN device. The VPN device will not allow any traffic that is not sent through an encrypted tunnel to pass through, with the exception of directed authentication traffic described later. With this model, the wireless clients can communicate among themselves on the wireless network, but there is no access to internal network resources unless fully encrypted from the wireless client to the VPN. This design may be further secured by configuring legitimate wireless-enabled devices to automatically initiate a VPN tunnel at bootup and by enabling a software firewall on the devices that does not allow communication directly with other clients on the local wireless subnet. In this manner, all legitimate communication is encrypted while traversing the wireless network and must be between authenticated wireless clients and internal network resources.
Many security measures available relate to access controlled through individual user authentication. Authentication can be accomplished at many levels using a combination of methods. For example, Cisco provides
Lightweight Extensible Authentication Protocol
(LEAP) authentication based on the IEEE 802.1x security standard. LEAP uses
Remote Authentication Dial-In User Service
(RADIUS) to provide a means for controlling both devices and users allowed access to the wireless network.
Although LEAP is Cisco proprietary, similar functionality is available from other vendors. Enterasys Networks, for example, also uses RADIUS to provide a means for controlling
Media Access Control
(MAC) addresses allowed to use the wireless network. With these features, the access points behave as a kind of proxy, passing credentials to the RADIUS server on behalf of the client. When these features are properly deployed, access to the wireless network is denied if the MAC address of the devices or the username does not match an entry in the authentication server. The access points in this case will not pass traffic to the wired network behind them. For security, the authentication server should be placed outside the local subnet of the wireless network. The firewall and VPN devices must allow directed traffic between the access points and the authentication server further inside the network and only to ports required for authentication. This design protects the authentication server from being attacked directly.
In addition to authenticating users to the wireless network, the VPN authentication and standard network logon can be used to control access further into the wired network. In this solution, the VPN client has the ability to build its tunnel prior to the workstation attempting its network logon, but after the device has been allowed on the wireless network. After the tunnel is built, specific rules on the VPN and the firewall allow the traditional network logon to occur. A robust VPN solution also treats the users differently based on the group to which they are assigned. Different IP address ranges are assigned to each group, allowing highly detailed rules to be created at the firewall controlling access to internal network resources based on user or group needs. The policy on the firewall must be as specific as possible to restrict access to internal resources to only those clients for whom it is necessary. Building very specific policy for users' access will also allow an
Intrusion Detection System
(IDS) to better detect unauthorized access attempts.
LEAP also provides for dynamic per-user, per-session WEP keys. Although the WEP key is still the 128-bit RC4 algorithm proven to be ineffective in itself , LEAP adds features that maintain a secure environment. Using LEAP, a new WEP key is generated for each user, every time the user authenticates to use the wireless network. Additionally, using the RADIUS timeout attribute on the authentication server, a new key is sent to the wireless client at predetermined intervals. The primary weakness of WEP is due to an algorithm that was easy to break after a significant number of encrypted packets were intercepted. With LEAP, the number of packets encrypted with a given key can be tiny compared to the number needed to break the algorithm.
When using LEAP for user and device authentication, WEP encryption is automatically enabled and cannot be disabled. However, if added security is needed, a VPN, as described earlier, can provide any level of encryption desired. Using a VPN as the bridge between the wired and wireless network is recommended regardless of the underlying vendor or technology used on the wireless network.
(IPSec) is a proven, highly secure encryption algorithm available in VPNs. By requiring all wireless network traffic to be IPSec encrypted to the VPN over the WEP-encrypted 802.11 Layer 2 protocol, any data passed to and from wireless clients can be considered secure. All traffic is still susceptible to eavesdropping, but will be completely undecipherable.
Aside from WEP and LEAP, some vendors provide other forms of builtin security. Symbol Technologies' Spectrum24 product provides Kerberos encryption when combined with a Key Distribution Center. Kerberos is more lightweight than IPSec and, therefore, may be better suited to certain applications such as IP phones or low-end
personal digital assistants
(PDAs). Other methods of automating the assignment and changing of WEP keys are also available, such as Enterasys' Rapid-Rekey . Wireless vendors have realized that security has become of critical importance and most, if not all, are working on methods for conveniently securing wireless networks. When available, most vendors seemingly prefer to use open-standard, interoperable security mechanisms with proprietary security being additionally available.
Bringing it all together
Numerous options are available to secure a wireless network. A highly secure design will include, at a minimum, an authentication server such as RADIUS, a high-level encryption algorithm such as IPSec over a VPN, and access points that are capable of restricting access to the wireless network based on some form of authentication. When all the security options are tied together, the wireless network requires explicit authentication to allow a device and the user on the wireless network, the traffic on the wireless network is highly encrypted, and traffic directed to internal network resources is controlled per user or group by an access policy at the firewall or in the VPN.
There is no substitute for experience and research when designing a network security solution. Using network security and design experience to exploit available technologies can further increase security of a wireless network. For example, grouping users into IP address ranges based on access requirements allows firewall access policy to help restrict unnecessary access. This can be accomplished using
Dynamic Host Configuration Protocol
(DHCP) reservations, assigning per-user or -group IP address ranges to the VPN tunnels or statically assigning addresses. Using a centralized accounts database for all authentication helps avoid inadvertently allowing an account that has been disabled in one part of the network to access resources through the wireless network. To use an existing user database for authentication while providing for dynamic WEP keys, use a LEAP-enabled RADIUS server that has the ability to query another server for account credentials. As with most network designs, a solid understanding of the available technologies is paramount to achieving a secure environment.
Utilizing all the security described in this article would yield the following design. When a device first boots up, it receives an IP address within a specified range on a segregated portion of the network. This IP range is based on the typical usage of the device and is most useful for machines dedicated to specific applications. As a user attempts to log onto a wireless device, a RADIUS server authenticates both the MAC address and the username of the device. If the user authentication is successful, access is granted within the wireless network. In order for traffic to leave the wireless network to access other network resources, a VPN tunnel must be established. Again, the IP address assigned to the tunnel can be controlled based on individual user authentication to help enforce access policy through the firewall. When the tunnel is established, firewall access policy will restrict access to resources on the network. Most, if not all, of the authentications required may be automated to use a user's existing network logon and transparently complete each authentication. This is not the most secure model, but it would be as secure as any single signon environment.
A secure wireless network is possible using available techniques and technologies . After researching needs and security requirements, any combination of the options discussed here, as well as others not discussed, may be implemented to secure a wireless network. With the right selection of security measures, one can ensure a high level of confidentiality of data flowing on the wireless network and protect the internal network from attacks initiated through access gained from an unsecured wireless network. At a minimum, consider the current level of network security and ensure that the convenience of the wireless network does not undermine any security precautions already in place in the existing infrastructure.
"Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications," IEEE Standard 802.11, 1999 Edition.
"802.11," Edgar Danielyan,
The Internet Protocol Journal
, Volume 5, Number 1, March 2002.
"War Driving," Andrew Woods,
, last viewed August 11, 2002.
"Cisco Aironet® Product Overview," Cisco Systems, , last viewed August 11, 2002.
"IEEE Standard for Local and Metropolitan Area Networks?Port-Based Network Access Control,&quto; IEEE Standard 802.1X, 2001.
"Remote Authentication Dial-In User Service," C. Rigney, S. Willens, A. Rubens, and W. Simpson, IETF
, June 2000.
"Security of the WEP Algorithm," Nikita Borisov, Ian Goldberg, and David Wagner,
, last viewed August 11, 2002.
"802.11 Wireless Networking Guide," Enterasys Networks, June 2002,
"Wireless LAN Security in Depth," Sean Convery and Darrin Miller, Cisco Systems,
, last viewed August 11, 2002.
"Making IEEE 802.11 Networks Enterprise-Ready," Arun Ayyagari and Tom Fout, Microsoft Corporation, May 2001, last viewed August 11, 2002.
GREGORY SCHOLZ holds a BS in Computer and Information Science from the University of Maryland. Additionally, he has earned a number of certifications from Cisco and Microsoft as well as vendor-neutral certifications, including a wireless networking certification. After serving in the Marine Corps for six years as an electronics technician, he continued his career working on government IT contracts. Currently he works for Northrop Grumman Information Technology as a Network Engineer supporting Brook Army Medical Center, where he performs network security and design functions and routine LAN maintenance. He can be reached at: | <urn:uuid:038dc1a0-e1ed-40b4-a05f-e4c8728cadd4> | CC-MAIN-2017-09 | http://www.cisco.com/c/en/us/about/press/internet-protocol-journal/back-issues/table-contents-14/wireless-networks.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00225-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.929662 | 2,948 | 2.8125 | 3 |
Overview of Computer Architecture
The topic of our study is a Stored Program Computer, also called a “von Neumann Machine”. The top–level logical architecture is as follows.
Recall that the actual architecture of a real machine will be somewhat different, due to the necessity of keeping performance at an acceptable level.
The Fetch–Execute Cycle
This cycle is the logical basis of all stored program computers.
Instructions are stored in memory as machine language.
Instructions are fetched from memory and then executed.
The common fetch cycle can be expressed in the following control sequence.
MAR ← PC.   // The PC contains the address of the instruction.
READ.       // Read memory at that address; the instruction is returned in the MBR.
IR ← MBR.   // Place the instruction into the IR.
This cycle is described in many different ways, most of which serve to highlight additional steps required to execute the instruction. Examples of additional steps are: Decode the Instruction, Fetch the Arguments, Store the Result, etc.
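As an illustration, the whole cycle can be written as a few lines of simulator code in Python. The sketch below is a minimal model of a hypothetical machine; the instruction format, the opcodes, and the single accumulator register are invented for illustration and are not part of any real instruction set.

# Minimal sketch of the fetch-execute cycle for a hypothetical machine.
# Memory holds (opcode, operand) pairs; a real machine encodes these as bits.
MEMORY = [("LOAD", 7), ("ADD", 3), ("HALT", 0)]

def run():
    pc = 0                       # Program Counter: address of the next instruction
    acc = 0                      # a single accumulator register, purely illustrative
    while True:
        mar = pc                 # MAR <- PC
        mbr = MEMORY[mar]        # READ: memory returns the word into the MBR
        ir = mbr                 # IR <- MBR
        pc = pc + 1              # PC now points to the following instruction
        opcode, operand = ir     # decode the instruction
        if opcode == "LOAD":     # execute it
            acc = operand
        elif opcode == "ADD":
            acc = acc + operand
        elif opcode == "HALT":
            return acc

print(run())                     # prints 10 for the three-instruction program above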
A stored program computer is often called a “von Neumann Machine” after one of the originators of the EDVAC.
This Fetch–Execute cycle is often called the “von Neumann bottleneck”, as the necessity for fetching every instruction from memory slows the computer.
Modifications to Fetch–Execute
As we have seen before, there are a number of adaptations that will result in significant speed–up in the Fetch–Execute Cycle.
Advanced techniques include Instruction Pre–Fetch and pipelining. We may discuss these later.
For the moment, we discuss an early strategy based on
1. Instructions are most often executed in linear sequence.
2. Memory requires at least two cycles to return the instruction.
Here is the RTL (Register Transfer Language) for the common fetch sequence.
At the beginning of fetch, the PC contains the address of the next instruction.
1. MAR ← PC, READ.   // Initiate a READ of the next instruction.
2. PC ← (PC) + 1.    // Must wait on the memory to respond.
                     // Update the PC to point to the next instruction.
3. IR ← MBR.         // Get the current instruction into the Instruction
                     // Register, so that it can be executed.
NOTE: In almost all computers, when an instruction is being executed, the PC has already been updated to point to the following instruction.
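To make this concrete, here is a minimal sketch in C (not part of the original notes) of a fetch-execute loop for a toy machine. The memory size, the 16-bit instruction format, and the opcodes are invented for illustration and match no real architecture; the point is the three RTL steps above, including the fact that the PC is incremented before the instruction is executed.

#include <stdint.h>
#include <stdio.h>

#define MEM_WORDS 256

static uint16_t memory[MEM_WORDS];    /* main memory of 16-bit words        */
static uint16_t PC, IR, MAR, MBR;     /* special purpose registers          */
static uint16_t R[8];                 /* general purpose registers          */

/* One pass through the fetch-execute cycle for an invented instruction
   format: the high 4 bits are the opcode, the low bits name registers
   or hold a small constant.  Returns 0 on HALT.                            */
static int step(void) {
    MAR = PC;                  /* 1. MAR <- PC, READ                        */
    MBR = memory[MAR];         /*    the memory responds by filling the MBR */
    PC  = PC + 1;              /* 2. PC <- (PC) + 1 (in hardware this       */
                               /*    overlaps the memory wait)              */
    IR  = MBR;                 /* 3. IR <- MBR                              */

    switch (IR >> 12) {        /* execute: decode the opcode and act        */
        case 0x0: return 0;                                     /* HALT     */
        case 0x1: R[(IR >> 8) & 7] = IR & 0xFF;         break;  /* LOAD #n  */
        case 0x2: R[(IR >> 8) & 7] += R[(IR >> 4) & 7]; break;  /* ADD      */
        default:  fprintf(stderr, "bad opcode\n");      return 0;
    }
    return 1;
}

int main(void) {
    memory[0] = 0x1005;        /* LOAD R0, #5                               */
    memory[1] = 0x1103;        /* LOAD R1, #3                               */
    memory[2] = 0x2010;        /* ADD  R0, R1                               */
    memory[3] = 0x0000;        /* HALT                                      */
    while (step()) { }         /* run until HALT                            */
    printf("R0 = %u\n", (unsigned)R[0]);   /* prints R0 = 8                 */
    return 0;
}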
The Data Path
Imagine the flow of data during an addition, when all arguments are in registers.
1. Data flow from the two source registers into the ALU.
2. The ALU performs the addition.
3. The data flow from the ALU into the destination register.
The term “data path” usually denotes the ALU, the set of registers, and the bus. This term is often used to mean “data path timing”, as illustrated below.
Here is a real timing diagram for an addition of the contents of MBR to R1.
B: The inputs to the ALU are stable
C: The ALU output is stable.
D: The result is stable in R1.
The ALU (Arithmetic Logic Unit)
The ALU performs all of the arithmetic and logical operations for the CPU.
These include the following:
Arithmetic: addition, subtraction, negation, etc.
Logical: AND, OR, NOT, Exclusive OR, etc.
This symbol has been used for the ALU since the mid 1950’s.
It shows two inputs and one output.
The reason for two inputs is the fact that many operations, such as addition and logical AND, are dyadic; that is, they take two input arguments.
For operations with one input, such as logical NOT, one of the input busses will be ignored and the contents of the other one used.
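As a rough model (again, not from the original notes), the ALU can be written in C as a pure function of an operation code and two operands; for a monadic operation such as NOT, the second operand is simply ignored, which mirrors the unused input bus described above.

#include <stdint.h>

/* Invented operation codes for a toy 16-bit ALU model. */
enum alu_op { ALU_ADD, ALU_SUB, ALU_AND, ALU_OR, ALU_XOR, ALU_NOT };

/* The ALU is combinational logic: two inputs in, one result out. */
static uint16_t alu(enum alu_op op, uint16_t a, uint16_t b) {
    switch (op) {
        case ALU_ADD: return (uint16_t)(a + b);
        case ALU_SUB: return (uint16_t)(a - b);
        case ALU_AND: return (uint16_t)(a & b);
        case ALU_OR:  return (uint16_t)(a | b);
        case ALU_XOR: return (uint16_t)(a ^ b);
        case ALU_NOT: return (uint16_t)(~a);     /* monadic: b is ignored */
    }
    return 0;                                    /* unreachable           */
}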
The Data Path of a Typical Stored Program Computer
Note the standard way of depicting an ALU. It has two inputs and one output.
The Central Processing Unit (CPU)
The CPU has four main components:
1. The Control Unit (along with the IR) interprets the machine language instruction
and issues the control signals to make the CPU execute it.
2. The ALU (Arithmetic Logic Unit) that does the arithmetic and logic.
3. The Register Set (Register File) that stores temporary results related to the
computations. There are also Special Purpose Registers used by the Control Unit.
4. An internal bus structure for communication.
The function of the control unit is to decode the binary machine word in the IR (Instruction Register) and issue appropriate control signals, mostly to the CPU.
Design of the Control Unit
There are two related issues when considering the design of the control unit:
1) the complexity of the Instruction Set Architecture, and
2) the microarchitecture used to implement the control unit.
In order to make decisions on the complexity, we must place the role of the control unit within the context of what is called the DSI (Dynamic Static Interface).
The ISA (Instruction Set Architecture) of a computer is the set of assembly language commands that the computer can execute. It can be seen as the interface between the software (expressed as assembly language) and the hardware.
A more complex ISA requires a more complex control unit.
At some point in the development of computers, the complexity of the control unit became a problem for the designers. In order to simplify the design, the developers of the control unit for the IBM–360 elected to make it a microprogrammed unit.
This design strategy, which dates back to the Manchester Mark I in the early 1950’s, turns the control unit into an extremely primitive computer that interprets the contents of the IR and issues control signals as appropriate.
The Dynamic–Static Interface
In order to understand the DSI, we must place it within the context of a compiler for a higher–level language. Although most compilers do not emit assembly language, we shall find it easier to understand the DSI if we pretend that they do.
What does the compiler output? There are two options:
1. A very simple assembly language. This requires a sophisticated compiler.
2. A more complex assembly language. This may allow a simpler compiler, but it requires a more complex control unit.
The Dynamic–Static Interface (Part 2)
The DSI really defines the division between what the compiler does and what the microarchitecture does. The more complexity assigned to the compiler, the less that is assigned to the control unit, which can be simpler, faster, and smaller.
Consider code for the discriminant used in solving quadratic equations:
D = B^2 – 4·A·C
In assembly language, this might become
1 LR %R1 A // Load the value into R1
2 LR %R2 B // Load the value into R2
3 LR %R3 C // Load the value into R3
4 MUL %R1, %R3, %R5 // R5 has A·C
5 SHL %R5, 2 // Shift left by 2 is multiplication by 4
6 MUL %R2, %R2, %R6 // R6 has B^2
7 SUB %R6, %R5, %R7 // R7 has B^2 – 4·A·C
8 SR %R7 D // Now D = B^2 – 4·A·C
Many of these operations can be performed in parallel. For example, there are no dependencies among the first three instructions.
The proper sequencing of instructions depends on the dependencies present.
The following is a dependency graph of this set of 8 assembly language instructions.
This analysis is much more easily done in software by the compiler than in hardware by any sort of reasonably simple control unit.
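The core of that analysis can be sketched in C. The structure below is hypothetical, but it captures the essential check: a later instruction depends on an earlier one when it reads a register that the earlier instruction writes (a read-after-write dependency), and two such instructions may not be reordered or issued in parallel.

#include <stdbool.h>

/* Simplified view of one instruction: one destination register and
   up to two source registers; -1 marks an unused field.                */
struct instr {
    int dest;
    int src1, src2;
};

/* True if 'later' must wait for 'earlier' (read-after-write).          */
static bool depends_on(struct instr later, struct instr earlier) {
    return earlier.dest >= 0 &&
           (later.src1 == earlier.dest || later.src2 == earlier.dest);
}

int main(void) {
    struct instr mul = { 5, 1, 3 };    /* MUL %R1, %R3, %R5 writes R5   */
    struct instr shl = { 5, 5, -1 };   /* SHL %R5, 2 reads (and writes) R5 */
    /* The SHL depends on the MUL, so they cannot be swapped; the three
       initial loads share no registers and are independent.            */
    return depends_on(shl, mul) ? 0 : 1;
}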
An EPIC Compiler
The compiler for Explicitly Parallel Instruction Computing is complex. It might emit the equivalent of the following code.
(1) LR %R1 A
(3) LR %R3 C
(2) LR %R2 B
(4) MUL %R1, %R3, %R5
(5) MUL %R2, %R2, %R6
(6) SHL %R5, 2
(7) SUB %R6, %R5, %R7
(8) SR %R7 D
Note that the compiler needed some sophisticated analysis to postpone the instruction “LR %R2 B” to the second instruction slot.
Nevertheless, creating a compiler of such complexity is much easier than creating a control unit of equivalent complexity.
The Register File
There are two sets of registers, called “General Purpose” and “Special Purpose”.
The origin of the register set is simply the need to have some sort of memory on the computer and the inability to build what we now call “main memory”.
When reliable technologies, such as magnetic cores, became available for main memory, the concept of CPU registers was retained.
Registers are now implemented as a set of flip–flops physically located on the CPU chip. These are used because access times for registers are two orders of magnitude faster than access times for main memory: 1 nanosecond vs. 80 nanoseconds.
These are mostly used to store intermediate results of computation. The count of such registers is often a power of 2, say 2^4 = 16 or 2^5 = 32, because N bits address 2^N items.
The registers are often numbered and named with a strange notation so that the assembler will not confuse them for variables; e.g. %R0 … %R15. %R0 is often fixed at 0.
NOTE: It used to be the case that registers were on the CPU chip and memory was not. The advent of multi–level cache memory has erased that distinction.
The Special Purpose Registers
These are often used by the control unit in its execution of the program.
PC   the Program Counter, so called because it does not count anything. It is also called the IP (Instruction Pointer), a much better name. The PC points to the memory location of the instruction to be executed next.
IR   the Instruction Register. This holds the machine language version of the instruction currently being executed.
MAR  the Memory Address Register. This holds the address of the memory word being referenced. All execution steps begin with PC → MAR.
MBR  the Memory Buffer Register, also called MDR (Memory Data Register). This holds the data being read from memory or written to memory.
PSR  the Program Status Register, often called the PSW (Program Status Word), contains a collection of logical bits that characterize the status of the program execution: the last result was negative, the last result was zero, etc.
SP   on machines that use a stack architecture, this is the Stack Pointer.
Another Special Purpose Register: The PSR
The PSR (Program Status Register) is actually a collection of bits that describe the running status of the process. The PSR is generally divided into two parts.
ALU Result Bits:
C   the carry–out from the last arithmetic computation.
V   set if the last arithmetic operation resulted in overflow.
N   set if the last arithmetic operation gave a negative number.
Z   set if the last arithmetic operation resulted in a 0.
Control Bits:
I          set if interrupts are enabled. When I = 1, an I/O device can raise an interrupt when it is ready for a data transfer.
Priority   a multi–bit field showing the execution priority of the CPU; e.g., a 3–bit field for priorities 0 through 7. This facilitates management of I/O devices that have different priorities associated with data transfer rates.
Mode       the privilege level at which the current program is allowed to execute. All operating systems require at least two modes: Kernel and User.
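As an illustration only (exact flag rules differ from one architecture to another, and this is not from the original notes), the four ALU result bits can be computed in C after an 8-bit addition as follows.

#include <stdbool.h>
#include <stdint.h>

struct psr_bits { bool C, V, N, Z; };

/* Compute the C, V, N, and Z bits produced by the 8-bit addition a + b. */
static struct psr_bits flags_after_add8(uint8_t a, uint8_t b) {
    uint16_t wide = (uint16_t)((uint16_t)a + (uint16_t)b);  /* keep carry */
    uint8_t  sum  = (uint8_t)wide;
    struct psr_bits p;
    p.C = (wide > 0xFF);                         /* carry out of bit 7    */
    p.V = ((~(a ^ b) & (a ^ sum)) & 0x80) != 0;  /* two's-complement ovfl */
    p.N = (sum & 0x80) != 0;                     /* result is negative    */
    p.Z = (sum == 0);                            /* result is zero        */
    return p;
}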
The CPU Control of Memory
The CPU controls memory by asserting control signals.
Within the CPU the control signals are usually called READ and WRITE.
Reading Memory:   First place an address in the MAR.
Assert a READ control signal to command memory to be read.
Wait for memory to produce the result.
Copy the contents of the MBR to a register in the CPU.
Writing Memory:   First place an address in the MAR.
Copy the contents of a register in the CPU to the MBR.
Assert a WRITE control signal to command the memory.
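A sketch of those two sequences in C follows; the register names are taken from these notes, while the memory array and its size are invented for illustration.

#include <stdint.h>

static uint16_t memory[65536];      /* hypothetical 64K-word memory        */
static uint16_t MAR, MBR;

/* Reading: address -> MAR, assert READ, wait, result arrives in the MBR.  */
static uint16_t read_memory(uint16_t address) {
    MAR = address;
    MBR = memory[MAR];        /* stands in for "assert READ and wait"      */
    return MBR;               /* the CPU then copies the MBR to a register */
}

/* Writing: address -> MAR, data -> MBR, assert WRITE.                     */
static void write_memory(uint16_t address, uint16_t data) {
    MAR = address;
    MBR = data;
    memory[MAR] = MBR;        /* stands in for "assert WRITE"              */
}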
The memory control unit might convert these control signals into the Select and R/W signals used by the memory chips.
In interleaved memory systems, the memory control unit selects only the addressed bank of memory. The other banks remain idle.
Example: The CPU Controls a 16–Way Interleaved Memory
Consider a 64 MB (2^26 byte) memory with 16 banks that are low–order interleaved.
The address format might look like the following.
Bits 25 – 4    Address to the chip
Bits 3 – 0     Bank select
Address bits 25 – 4 and the R/W signal are sent to each of the 16 banks.
An enabled–high decoder is used to select the bank when (READ + WRITE) = 1.
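The address split for this example can be sketched in C; the constants follow directly from the 2^26-byte, 16-bank configuration above.

#include <stdint.h>
#include <stdio.h>

/* 64 MB = 2^26 bytes, 16 = 2^4 banks, low-order interleaved:
   bits 3 - 0 select the bank, bits 25 - 4 address the chips in a bank.   */
static void decode_address(uint32_t addr) {
    unsigned bank      = (unsigned)(addr & 0xF);
    unsigned chip_addr = (unsigned)((addr >> 4) & 0x3FFFFF);
    printf("address 0x%07X -> bank %u, chip address 0x%06X\n",
           (unsigned)addr, bank, chip_addr);
}

int main(void) {
    decode_address(0x13);   /* consecutive addresses fall in different    */
    decode_address(0x14);   /* banks, so the banks can operate in         */
    decode_address(0x15);   /* parallel on a sequential access pattern    */
    return 0;
}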
Each I/O device is connected to the system bus through a number of registers.
Collectively, these form part of the device interface.
These fall into three classes:
Data Contains data to be written to the device or just read from it.
Control   Allows the CPU to control the device. For example, the CPU might instruct a printer to insert a CR/LF after each line printed.
Status    Allows the CPU to monitor the status of the device. For example, a printer might have a bit that is set when it is out of paper.
There are two major strategies for interfacing I/O devices.
Memory–Mapped I/O:   I/O devices are designated through specific memory addresses.
   Load Reg KBD_Data    This would be an input, loading into the register.
   Store Reg LP_Data    This would be an output, storing into a special address.
Isolated I/O (Instruction–Based I/O):   Uses dedicated I/O instructions.
   Input Reg Dev     Read from the designated Input Device into the register.
   Output Reg Dev    Write from the register to the designated Output Device.
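As a sketch of the first strategy, memory-mapped I/O, in C: the addresses, register layout, and status bits below are invented (a real device defines its own in its datasheet), and the style assumes a bare-metal or embedded target, since a hosted operating system would not allow access to these addresses. The device's data, control, and status registers are reached with ordinary load and store instructions at fixed addresses.

#include <stdint.h>

/* Hypothetical line-printer interface mapped at fixed addresses.        */
#define LP_DATA    ((volatile uint8_t *)0xFF00u)   /* data register      */
#define LP_CONTROL ((volatile uint8_t *)0xFF01u)   /* control register   */
#define LP_STATUS  ((volatile uint8_t *)0xFF02u)   /* status register    */

#define LP_BUSY      0x01u      /* hypothetical status bits              */
#define LP_OUT_PAPER 0x02u

/* Ask the printer to insert a CR/LF after each line (invented bit).     */
static void lp_enable_crlf(void) {
    *LP_CONTROL |= 0x04u;
}

/* Print one character: poll the status register until the device is
   ready, then an ordinary STORE to the data register performs the I/O.  */
static void lp_putc(char c) {
    while (*LP_STATUS & LP_BUSY) {
        /* spin: the status register lets the CPU monitor the device     */
    }
    *LP_DATA = (uint8_t)c;
}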
As we shall see, Isolated I/O can also use addresses for the I/O devices. | <urn:uuid:55a403f1-7262-488a-91b5-00285ce8fc3f> | CC-MAIN-2017-09 | http://edwardbosworth.com/My5155_Slides/Chapter09/ComputerArchitectureOverview.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00169-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.896991 | 3,045 | 4.21875 | 4 |
The internet is a relatively new technology, but it is one of the fastest ever in gaining popularity and spreading across the world. We can hardly imagine our normal daily life without the internet. Sometimes we don't even realize how many devices and services around us are actually internet equipped. But what makes all these devices work? How does the internet work?
That's today's theme: we will cover the most important parts of internet technology and the way these parts cooperate to give us the ability to use network resources. To begin, the simplest way to look at the internet is by splitting it into two main components: hardware and protocols. | <urn:uuid:691bb217-f6e8-462d-9c44-961b19fba0dd> | CC-MAIN-2017-09 | https://howdoesinternetwork.com/tag/networking-learning | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00221-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.939519 | 126 | 3.40625 | 3 |
A microcontroller is a small computer on a single integrated circuit containing a processor core, memory, and programmable input/output peripherals.
Microcontrollers are used in automatically controlled products and devices, such as automobile engine control systems, implantable medical devices, remote controls, office machines, appliances, power tools, toys, and other embedded systems. By reducing the size and cost compared to a design that uses a separate microprocessor, memory, and input/output devices, microcontrollers make it economical to digitally control even more devices and processes. Mixed signal microcontrollers are common, integrating analog components needed to control non-digital electronic systems.
Global Microcontroller market report segments the global market by applications, components, and geographies. The report also profiles the leading companies that are active in the field of developing and manufacturing microcontrollers, along with their product strategy, financial details, developments, and competitive landscape.
The market is segmented into four geographies; namely North America, Latin America, Europe, and Asia-Pacific. The current and future market trends for each region, along with Porter’s five force model analysis, market share of leading players, and competitive landscape have been analyzed in this report.
Market share analysis, by revenue of the leading companies, is also included in this report. The market share analysis of these key players are arrived at, based on key facts, annual financial information, and interviews with key opinion leaders, such as CEOs, directors, and marketing executives. Global Microcontroller market also provides company profiles of the key market players in order to present an in-depth understanding of the competitive landscape.
With market data, you can also customize the MMM assessments that meet your company’s specific needs. Customize to get comprehensive industry standards and deep dive analysis of the following parameters:
1. Data from Manufacturing Firms
· Fast turn-around analysis of manufacturing firms with response to recent market events and trends
· Opinion from various firms about different applications
· Qualitative inputs on macro-economic indicators, mergers & acquisitions in each region
2. Shipment/Volume Data
· Value of components shipped annually in each geography tracked
3. Trend Analysis of Application
· Application Matrix, which gives a detailed comparison of application portfolio of each company, mapped in each geography
4. Competitive Benchmarking
· Value-chain evaluation using events, developments, market data for vendor in the market ecosystem, across various industrial verticals and market segmentation
· Seek hidden opportunities by connecting related markets using cascaded value chain analysis
Please visit http://www.micromarketmonitor.com/custom-research-services.html to specify your custom Research Requirement
| <urn:uuid:4ad1b171-ca6e-448b-97d0-96f1acd20649> | CC-MAIN-2017-09 | http://www.micromarketmonitor.com/market-report/microcontroller-reports-8883152037.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00041-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.884121 | 653 | 2.65625 | 3 |
It's been a couple of years and a couple of million dollars. Finally, researchers and graduate students who have spent years developing intelligent water sensors released them into the Sacramento River on Wednesday, about 80 miles east of San Francisco.
That area of the river is a mixing zone where fresh water meets salt water from the nearby San Francisco Bay. Altogether, the water within the Delta region supplies two-thirds of California's drinking water.
Researchers hope that their sensors will be able to help track environmental spills and the flow of water, which could also help improve salmon spawning.
"This is the way of the future," said Alexandre Bayen, associate professor at the University of California, Berkeley who is supervising the project, called the Floating Sensor Network. "We're moving from an age when humans were deploying things and baby-sitting them to an age where you just put the robots in the water, they do their job, they come back or they call you if they have a problem."
Researchers from University of California, Berkeley, San Francisco State University, The Center for Information Technology Research in the Interest of Society and the Lawrence Berkeley National Laboratory helped with the project. | <urn:uuid:8bb10124-3c7a-4303-b0d1-6843ce02e405> | CC-MAIN-2017-09 | http://www.itworld.com/article/2726620/networking/uc-berkeley-tests-floating-robot-sensors-to-track-water-flow--environmental-concerns.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00393-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.954316 | 248 | 3.484375 | 3 |
Dilts T.E., University of Nevada, Reno |
Weisberg P.J., University of Nevada, Reno |
Yang J., University of Nevada, Reno |
Olson T.J., University of Nevada, Reno |
And 2 more authors.
Annals of the Association of American Geographers | Year: 2012
In arid regions of the world, the conversion of native vegetation to agriculture requires the construction of an irrigation infrastructure that can include networks of ditches, reservoirs, flood control modifications, and supplemental groundwater pumping. The infrastructure required for agricultural development has cumulative and indirect effects, which alter native plant communities, in parallel with the direct effects of land use conversion to irrigated crops. Our study quantified historical land cover change over a 150-year period for the Walker River Basin of Nevada and California by comparing direct and indirect impacts of irrigated agriculture at the scale of a 10,217 km 2 watershed. We used General Land Office survey notes to reconstruct land cover at the time of settlement (1860-1910) and compared the settlement-era distribution of land cover to the current distribution. Direct conversion of natural vegetation to agricultural land uses accounted for 59 percent of total land cover change. Changes among nonagricultural vegetation included shifts from more mesic types to more xeric types and shifts from herbaceous wet meadow vegetation to woody phreatophytes, suggesting a progressive xerification. The area of meadow and wetland has experienced the most dramatic decline, with a loss of 95 percent of its former area. Our results also show Fremont cottonwood, a key riparian tree species in this region, is an order of magnitude more widely distributed within the watershed today than at the time of settlement. In contrast, areas that had riparian gallery forest at the time of settlement have seen a decline in the size and number of forest patches. © 2012 Taylor and Francis Group, LLC. Source
Sedinger J.S., University of Nevada, Reno |
White G.C., Colorado State University |
Espinosa S., 100 Valley Road |
Partee E.T., 15 East 4th Street |
Braun C.E., Grouse Inc.
Journal of Wildlife Management | Year: 2010
We used band-recovery data from 2 populations of greater sage-grouse (Centrocercus urophasianus), one in Colorado, USA, and another in Nevada, USA, to examine the relationship between harvest rates and annual survival. We used a Seber parameterization to estimate parameters for both populations. We estimated the process correlation between reporting rate and annual survival using Markov chain Monte Carlo methods implemented in Program MARK. If hunting mortality is additive to other mortality factors, then the process correlation between reporting and survival rates will be negative. Annual survival estimates for adult and juvenile greater sage-grouse in Nevada were 0.42 ± 0.07 (x ̄ ± SE) for both age classes, whereas estimates of reporting rate were 0.15 ± 0.02 and 0.16 ± 0.03 for the 2 age classes, respectively. For Colorado, average reporting rates were 0.14 ± 0.016, 0.14 ± 0.010, 0.19 ± 0.014, and 0.18 ± 0.014 for adult females, adult males, juvenile females, and juvenile males, respectively. Corresponding mean annual survival estimates were 0.59 ± 0.01, 0.37 ± 0.03, 0.78 ± 0.01, and 0.64 ± 0.03. Estimated process correlation between logit-transformed reporting and survival rates for greater sage-grouse in Colorado was ρ 0.68 ± 0.26, whereas that for Nevada was ρ 0.04 ± 0.58. We found no support for an additive effect of harvest on survival in either population, although the Nevada study likely had low power. This finding will assist mangers in establishing harvest regulations and otherwise managing greater sage-grouse populations. © The Wildlife Society. Source
Coates P.S., U.S. Geological Survey |
Casazza M.L., U.S. Geological Survey |
Ricca M.A., U.S. Geological Survey |
Brussee B.E., U.S. Geological Survey |
And 8 more authors.
Journal of Applied Ecology | Year: 2016
Predictive species distributional models are a cornerstone of wildlife conservation planning. Constructing such models requires robust underpinning science that integrates formerly disparate data types to achieve effective species management. Greater sage-grouse Centrocercus urophasianus, hereafter 'sage-grouse' populations are declining throughout sagebrush-steppe ecosystems in North America, particularly within the Great Basin, which heightens the need for novel management tools that maximize the use of available information. Herein, we improve upon existing species distribution models by combining information about sage-grouse habitat quality, distribution and abundance from multiple data sources. To measure habitat, we created spatially explicit maps depicting habitat selection indices (HSI) informed by >35 500 independent telemetry locations from >1600 sage-grouse collected over 15 years across much of the Great Basin. These indices were derived from models that accounted for selection at different spatial scales and seasons. A region-wide HSI was calculated using the HSI surfaces modelled for 12 independent subregions and then demarcated into distinct habitat quality classes. We also employed a novel index to describe landscape patterns of sage-grouse abundance and space use (AUI). The AUI is a probabilistic composite of the following: (i) breeding density patterns based on the spatial configuration of breeding leks and associated trends in male attendance; and (ii) year-round patterns of space use indexed by the decreasing probability of use with increasing distance to leks. The continuous AUI surface was then reclassified into two classes representing high and low/no use and abundance. Synthesis and applications. Using the example of sage-grouse, we demonstrate how the joint application of indices of habitat selection, abundance and space use derived from multiple data sources yields a composite map that can guide effective allocation of management intensity across multiple spatial scales. As applied to sage-grouse, the composite map identifies spatially explicit management categories within sagebrush steppe that are most critical to sustaining sage-grouse populations as well as those areas where changes in land use would likely have minimal impact. Importantly, collaborative efforts among stakeholders guide which intersections of habitat selection indices and abundance and space use classes are used to define management categories. Because sage-grouse are an umbrella species, our joint-index modelling approach can help target effective conservation for other sagebrush obligate species and can be readily applied to species in other ecosystems with similar life histories, such as central-placed breeding. © 2016 British Ecological Society. Source
Shanthalingam S., Washington State University |
Goldy A., Washington State University |
Bavananthasivam J., Washington State University |
Subramaniam R., Washington State University |
And 13 more authors.
Journal of Wildlife Diseases | Year: 2014
Mannheimia haemolytica consistently causes severe bronchopneumonia and rapid death of bighorn sheep (Ovis canadensis) under experimental conditions. However, Bibersteinia trehalosi and Pasteurella multocida have been isolated from pneumonic bighorn lung tissues more frequently than M. haemolytica by culture-based methods. We hypothesized that assays more sensitive than culture would detect M. haemolytica in pneumonic lung tissues more accurately. Therefore, our first objective was to develop a PCR assay specific for M. haemolytica and use it to determine if this organism was present in the pneumonic lungs of bighorns during the 2009-2010 outbreaks in Montana, Nevada, and Washington, USA. Mannheimia haemolytica was detected by the species-specific PCR assay in 77% of archived pneumonic lung tissues that were negative by culture. Leukotoxin-negative M. haemolytica does not cause fatal pneumonia in bighorns. Therefore, our second objective was to determine if the leukotoxin gene was also present in the lung tissues as a means of determining the leukotoxicity of M. haemolytica that were present in the lungs. The leukotoxin-specific PCR assay detected leukotoxin gene in 91%of lung tissues that were negative for M. haemolytica by culture. Mycoplasma ovipneumoniae, an organism associated with bighorn pneumonia, was detected in 65%of pneumonic bighorn lung tissues by PCR or culture. A PCR assessment of distribution of these pathogens in the nasopharynx of healthy bighorns from populations that did not experience an all-age die-off in the past 20 yr revealed that M. ovipneumoniae was present in 31%of the animals whereas leukotoxin-positive M. haemolytica was present in only 4%. Taken together, these results indicate that culture-based methods are not reliable for detection of M. haemolytica and that leukotoxin-positive M. haemolytica was a predominant etiologic agent of the pneumonia outbreaks of 2009-2010. © Wildlife Disease Association 2014. Source | <urn:uuid:c2ce9c1c-81c4-4dc6-9790-7d9befdb4de2> | CC-MAIN-2017-09 | https://www.linknovate.com/affiliation/100-valley-road-51929/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00569-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.913351 | 1,943 | 2.765625 | 3 |
Data center water usage and conservation is a critical aspect of green building design and environmental sustainability. Most data centers use large amounts of water for cooling purposes in order to maintain an ideal operating temperature for servers, hardware and equipment. But how do water conservation efforts affect the cost and operational efficiencies of a data center?
While 70% of the earth’s surface is covered in water, only 2.5 percent is fresh water, most of which is currently ice. Historically, the demand for fresh water has increased with population growth, and the average price has risen around 10-12 percent per year since 1995. In contrast, the price of gold has increased only 6.8 percent and real estate 9.4 percent during this same period.
So how much water do data centers use? While the average U.S. household uses 254 gallons per day, a 15 MW data center consumes up to 360,000 gallons of water per day. As the cost of water continues to rise with demand, the issue becomes one of both economics and sustainability.
How are data centers addressing the problem?
In order to control costs in the long term, data center operators are finding creative ways to manage water usage. Options include using less freshwater and finding alternative water sources for cooling systems.
- Reduced water usage – Designing cooling systems with better water management, resulting in less water use.
- Recycled water – Developing systems that run on recycled or undrinkable water (i.e., greywater from sinks, showers, tubs and washing machines). Internap’s Santa Clara facility was the first commercial data center in California to use reclaimed water to help cool the building.
- No water – In some regions, air economizers that do not require water can be used year round.
While using less freshwater provides long-term cost and environmental benefits, alternative solutions also create new challenges. The use of recycled water can have negative effects on the materials used in cooling systems, such as mild steel, galvanized iron, stainless steel, copper alloys and plastic. Water hardness (measure of combined calcium and magnesium concentrations), alkalinity, total suspended solids (TSS – e.g. sand and fine clay), ammonia and chloride can cause corrosion, scale deposits and biofilm growth.
Data center operators must proactively identify susceptible components and determine a proper water treatment system. Implementing a water quality monitoring system can provide advanced warning for operational issues caused by water quality parameters.
With the rising cost and demand for freshwater, conservation measures are essential to the long term operations of a data center. Internap is committed to achieving the highest levels of efficiency and sustainability across our data center footprint, with a mix of LEED, Green Globes and ENERGY STAR certifications at our facilities in Dallas, Los Angeles, Atlanta, New Jersey, and Santa Clara.
To learn more, download the ebook, Choosing a Green Colocation Provider. | <urn:uuid:6141a486-9f8c-49fe-9951-64f09cadebdb> | CC-MAIN-2017-09 | http://www.internap.com/2014/07/11/data-center-water-conservation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00445-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.917089 | 601 | 3.59375 | 4 |
Boeing, NASA, Lockheed Martin and GE are among the large corporations that for decades have used additive manufacturing, known more popularly as 3-D printing.
Additive manufacturing is also used prominently in the medical and dental industries -- about 80,000 hip implants have been made to date using 3-D printers, and every day some 15,000 tooth crowns and fillings are made with parts from 3-D printers, said Terry Wohlers, an industry analyst.
It was only about six or seven years ago that people began invoking dimensions to give "additive manufacturing" the trendier 3-D printing name. The rise of a movement among consumers known as "maker culture," a type of do-it-yourself philosophy geared toward engineering-related pursuits such as 3-D printing, robotics and electronics, is one possible explanation for the name change.
But analysts also point to a singular event: the expiration in the late 2000s of a key patent held by Stratasys covering fused deposition modeling. Growth in the consumer market has been impressive since then, because the technology, also known as material extrusion, is now used in other companies' 3-D printers.
The extrusion process produces an object by melting and depositing molten plastic through a heated extrusion tip. Like other additive manufacturing processes, it adds one layer upon another until the part is complete. Alternative methods include material jetting, which uses an inkjet print head to deposit liquid plastic layer by layer. Another is powder bed fusion, which uses an energy source, like a focused laser, to build parts from plastic or metal powder.
Those three processes are the most popular, Wohlers said.
3-D printing has some challenges, both for consumers and industry. For consumers, the quality of the lower-cost machines isn't great, said Wohlers. They're hard to set up, sometimes pieces are missing, and their reliability and output are not always very good, he said.
And for the average consumer, versus the technically adept do-it-yourselfer, there still aren't many compelling applications, some say. Instead of making a new toy or replacing a household tool with your computer, "it's still more convenient to go to the hardware store or toy store," said Pete Brasiliere, an industry analyst with Gartner.
For the enterprise, 3-D printing can have a useful place. If you want to print 1 million devices or products at high quality, experts agree it's better to go with a traditional subtractive process. "But if you want to do one, 10, or even 100, 3-D printing has advantages for low-quantity, high-product value," Brasiliere said.
Others cite additive manufacturing's boutique appeal. 3-D printing will never replace the high-volume manufacturing of mass-produced items like the iPhone, said Brasiliere, but for low-volume components that have very specific requirements around the material, design and performance, "3-D printing makes sense," he said. | <urn:uuid:f9109bdb-d527-48a1-852d-9f0a665e2fb1> | CC-MAIN-2017-09 | http://www.itworld.com/article/2705724/it-management/3-d-printing-comes-of-age.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00445-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.954589 | 627 | 2.859375 | 3 |
One of the best things about covering technology is that you're always on the edge of a completely new generation of stuff that will make everything completely different than it ever was before, even before the last generation made everything different.
"Completely different" always seems pretty much the same, with a few more complications, higher costs and a couple of cool new capabilities, of course.
Unless you look back a decade or two and see that everything is completely different from the way it was then…
Must be some conceptual myopia that keeps us in happy suspense over the future, nostalgic wonder at the past and bored annoyance with the present.
The next future to get excited about is going to be really cool, though.
You know how long scientists have been working on quantum computers that will be incomparably more powerful than the ones we have now because they don't have to be built on a "bit" that's either a 1 or a zero? They would use a piece of quantum data called a qubit (or qbit; consistent with everything in the quantum world, the spelling wants to be two things at once), which can exist in several states at the same time. That would turn the most basic function in computing from a toggle switch to a dial with many settings.
Multiply the number of pieces of data in the lowest-level function of the computer and you increase its power exponentially.
Making it happen has been a trick; they've been under development for 20 years and probably won't show up for another 10.
Teams of Austrian scientists may cut that time down a bit with a system they developed they say can create digital models of quantum-computing systems to make testing and development of both theory and manufacturing issues quicker and easier.
They did it the same way Lord of the Rings brought Gollum to life: putting a living example in front of cameras and taking detailed pictures they could use to recreate the image in any other digital environment.
Rather than an actor, the photo subject was a calcium atom, drastically cooled to slow its motion, then manipulated with lasers and put through a set of paces predicted by quantum-mechanical theory while the results were recorded.
Abstracting those results lets the computer model predict the behavior of almost any other quantum particle or environment, making it possible to use the quantum version of a CAD/CAM system to develop and test new approaches to the systems that will actually become quantum computers, according to a paper published in the journal Science by researchers from the University of Innsbruck and the Institute for Quantum Optics and Quantum Information (IQOQI).
Far sooner than quantum computers will blow our digitized minds, transistors made from graphene rather than chunkier materials will allow designers to create processors far more dense – and therefore more potentially powerful – than anything theoretically possible using the silicon and metallic alloys we rely on now.
Graphene is a one-atom-thick layer of carbon that offers almost no resistance to electricity flowing through it, but doesn't naturally contain electrons at two energy levels, as silicon does. Silicon transistors flip on or off by shifting electrons from one energy level to another.
Even silicon doesn't work that way naturally. It has to be "doped" with impurities to change its properties as a semiconductor.
For graphene to work the same way, researchers have to add inverters that mimic the dual energy levels of silicon. So far they only work at 320 degrees below zero Fahrenheit (77 kelvins).
Researchers at Purdue's Birck Nanotechnology Center built a version that operates at room temperature, removing the main barrier to graphene as a practical option for computer systems design.
The researchers, led by doctoral candidate Hong-Yan Chen, presented their paper at the Device Research Conference in Santa Barbara, Calif., in June to publicize their results with the inverter.
Real application will have to wait for Chen or others to integrate the design into a working circuit based on graphene rather than silicon.
Systems built on graphene have the potential to boost the computing power of current processors by orders of magnitude while reducing their size and energy use, but only if they operate in offices not cooled to 77 degrees Kelvin.
It will still be a few years before graphene starts showing up in airline magazines, let alone in IT budgets. We'll probably be tired of them, too, by the time quantum computers show up, but there's just no satisfying some people.
Read more of Kevin Fogarty's CoreIT blog and follow the latest IT news at ITworld. Follow Kevin on Twitter at @KevinFogarty. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook. | <urn:uuid:5a5d1588-7a1e-4761-afe6-64aba0b09cc8> | CC-MAIN-2017-09 | http://www.itworld.com/article/2736618/development/breakthroughs-bring-the-next-two-major-leaps-in-computing-power-into-sight.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00390-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.952701 | 969 | 3.171875 | 3 |
Earthquakes are something to tweet about
USGS's Twitter Earthquake Detector helps monitor quakes in real time
- By Henry Kenyon
- Jul 16, 2010
Agency: U.S. Geological Survey
Project: Twitter Earthquake Detector
Social media has a variety of uses that appeal to the federal government, and one such area is disaster response and management. The U.S. Geological Survey is developing a prototype site that monitors Twitter feeds to provide scientists with real-time data about earthquakes.
Paul Earle, director of operations at the National Earthquake Information Center in Golden, Colo., said the goal behind the Twitter Earthquake Detector (TED) effort, launched last year, is to demonstrate a way to rapidly detect earthquakes and provide an initial damage assessment.
TED taps into the Twitter API and searches for keywords such as “earthquake.” It then pulls and aggregates the information, including photographs, providing USGS scientists with a map based on the number of tweets coming from a geographic area. That information is useful because there is a time lag between an earthquake and its official verification. The Twitter data can fill that gap, Earle said.
The project was inspired by several earthquakes, notably a May 2008 quake in California. Earle cited commentary that Twitter reports about the incident spread faster than USGS alerts. He cautioned that although Twitter is useful in detecting and providing an initial assessment of an earthquake, it does not provide scientifically precise data. However, Earle said, he saw the potential for using Twitter data to complement earthquake measurements.
Although there is no timetable to launch the new site, Earle said he wants to see TED operating and integrated into disaster response efforts within the next six months.
The Twitter site represents an evolution of USGS’ efforts to distribute data. He said alerts are still sent via facsimile and e-mail messages. Social media sites also provide real-time information, usually within seconds of an incident. Earle said such short-term information is qualitative, but it can provide scientists with additional information about an earthquake.
Earle said Twitter can be a potentially useful tool for scientists. It is also an inexpensive tool. He noted that the cost of developing the site is considerably less than the cost of a modern seismometer.
People can receive earthquake data from the @USGSTED Twitter account. The site sends maps of earthquake zones to account holders. | <urn:uuid:43d6c2cd-f86d-490a-8eb9-c92e13b39c01> | CC-MAIN-2017-09 | https://fcw.com/articles/2010/07/19/web-app-usgs-twitter.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00566-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.941129 | 524 | 2.59375 | 3 |
Facebook has released a programming language called Hack, which marries the ease of PHP with the rigorous safety controls of older languages such as C++.
PHP programmers should easily understand Hack, which replicates many of the same features and functions of PHP, and adds a few of its own for greater productivity, said Bryan O'Sullivan, a Facebook engineer on the project.
Over the past year, Facebook has converted nearly all of its PHP code base to Hack, which makes up the core of its website.
Both projects set out to strengthen a popular dynamic programming language so it can be more easily used by large software teams to design mission-critical applications.
Individuals would also benefit by using Hack, O'Sullivan said, both in terms of increasing performance of their websites and improving the overall quality of their code.
Hack requires Facebook's HHVM (Hip Hop Virtual Machine) to run. HHVM is a virtual machine that compiles PHP, normally an interpreted language, into byte code, so it can run more quickly.
Hack is basically an extension of the PHP language with built-in static typing, a feature found in more traditional programming languages such as C/C++ and Java, O'Sullivan said.
With dynamic typing, "there is no explicit information in the source code that describes what kind of information the program is dealing with," O'Sullivan said.
In contrast, static typing requires the programmer to define the data type for each variable before that program is compiled or run.
Though it takes extra work to implement, static typing prevents run-time errors occurring when the wrong data type is entered into the program, either by human input or some other computer function.
"There are certain kinds of errors and crashes that can occur," if the programmer is not careful about what data is assigned to variables, O'Sullivan said. "These latent errors can hide for a long time in a dynamically typed languages."
The HHVM virtual machine has a built-in type checker to ensure that all of the typed information is correct. Hack even allows the programmer to define unique data types.
"Syntactically, Hack is very close to PHP. We allowed it to be possible to run PHP and Hack code side-by-side so you can gradually convert your language codebase from PHP to Hack," O'Sullivan said.
Certain deprecated PHP features, however, are not supported in Hack, and neither are a handful of features that don't work well with static typing.
Hack also comes with a number of additions not found in PHP. One is Collections, a way to create arrays with more nuance than the array function offered by PHP itself, O'Sullivan said.
Hack also eases the use of closures through Lambda expressions. Closures, which were added in Java 8, "make it easy to succinctly write fairly complicated data transformations," O'Sullivan said.
Hack's Lambda expressions provide a way to create closures "with a fewer number of keystrokes, which is a big win for productivity," he said.
Facebook has supplied a number of text editor plug-ins on the Hack website to help coders write in the language, though the company is hoping volunteers will build a few more elaborate ones.
O'Sullivan didn't reveal any specific plans to offer the Hack augmentations back to the keepers of PHP, though he did note that the company plans to "work closely with the open-source community," to further develop the language. | <urn:uuid:cc604eb3-79c3-4b44-a2c9-72ad906b2a33> | CC-MAIN-2017-09 | http://www.networkworld.com/article/2176311/application-performance-management/facebook--39-s-hack-programming-language-builds-code-safety-into-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00090-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.952391 | 715 | 2.859375 | 3 |
The Mask Raises Network Security Worries in an Age of Cyberwarfare
What if your network was compromised for the past five years and you didn't know?
That seems to have been the situation for many victims of one of the greatest security threats to have been discovered recently.
On February 11, Kaspersky Labs announced its discovery of a particularly insidious piece of malware dubbed "The Mask" – also known as "Careto" (Spanish for "mask" or "ugly face"), the name given by the attackers to one of the two primary backdoor implants used on target machines. Kaspersky has detected at least 380 unique victims of the attack across at least 31 countries, concentrated among energy companies, government offices, private equity firms, research institutions, and political activists. Kaspersky further concedes that many more victims could remain undetected.
Kaspersky reports that The Mask was active for at least five years, until January of this year. This means that, for years, major public and private sector organizations have had their networks and data deeply compromised without knowing it.
Some samples of The Mask were found to have been compiled even before then, in 2007. Disturbingly, this is the same year as the origins of major cyberweapons like Stuxnet and Duqu. What's more, Kaspersky reports that The Mask is a more sophisticated piece of malware than Duqu because of the former's capacity for flexibility and customization.
Working through a highly complex combination of modules and plug-ins, The Mask would secretly gather and steal data from all manner of systems and networks – including remote and virtualized ones – while monitoring all file operations. It would then hide its tracks in highly sophisticated ways, including replacing real system libraries, entirely wiping log files (as opposed to simple deletion), and blocking the IP addresses of renowned computer research entities (including Kaspersky) from its command and control servers.
For these reasons, and because of the unique and sophisticated way this malware would work from a network infrastructure management perspective, security experts hypothesize that The Mask was created or sponsored by a nation-state, similar to Kaspersky's conclusions about the Stuxnet worm.
"The attack is designed to handle all possible cases and potential victim types," Kaspersky reports. Kaspersky has uncovered versions of The Mask that affect Windows, Mac OSX, and Linux. Kaspersky also reports that there are mobile versions of The Mask, including one known to attack Nokia devices. While Kaspersky has not been able to obtain a sample to 100% confirm, the computer security firm believes that versions of The Mask affect both iOS and Android devices. The Mask also works through a variety of browsers, including Internet Explorer, Firefox, Chrome, Safari, and even Opera.
"Depending on the operating system, browser and installed plugins," Kaspersky notes, "the user is redirected to different subdirectories, which contain specific exploits for the user’s configuration that are most likely to work."
Among these exploits are plugin modules that attack anti-malware products (including those by Kaspersky), intercept network traffic, obtain PGP keys, steal email messages, intercept and record Skype conversations, gather a list of available WiFi networks, and provide other network functions to facilitate other modules. One module even creates a framework for extending the reach of The Mask with new plugins.
The Mask also has the ability to profile its targets. Its modules would automatically determine details of its victims' systems and software and then customize attacks using that information. It can even figure out if it is targeting a remote desktop portal or a virtualized environment.
"The installer module can detect if it is being executed in a VMware or Microsoft Virtual PC virtual machine," reports Kaspersky.
Network administrators and information security officers should find these revelations particularly disturbing. The fact that something so flexible, complex, and sophisticated could compromise so much information across so many platforms and go undetected for several years is bad enough. The fact that it is probable that this is the work of a nation-state or nation-state-sponsored group is yet more disconcerting. As cyber warfare ramps up, so must cyber defenses. The logical consequence may be greater government oversight (some may prefer the more uncharitable characterization "intrusion") over private sector systems, particularly in essential industries like energy, banking, and transportation.
For now, basic security measures are still the best protection against these kinds of attacks. Use up-to-date antimalware and firewall software. Don't open suspicious attachments or click suspicious links. Use air gaps where practicable, and when transferring files across the air gap, use media with small storage space filled with random files to prevent malware from storing itself on your USB stick or CD and leaping the air gap.
In the present case, The Mask appears to have been focused primarily on obtaining information. Still, such information – especially considering the sheer volume of system information accessible to those behind The Mask – could be used to develop and enable outright destructive Stuxnet-like attacks in the future. Therefore, those who were compromised should not consider themselves out of the woods yet. New security measures, perhaps right down to a complete network infrastructure overhaul, may be necessary to avoid serious system disruptions down the line.
And then there is the even bigger question: If something as sophisticated as The Mask went undetected in the wild this long, then what else is still out there?
Photo courtesy of Shutterstock.
Joe Stanganelli is a writer, attorney, and communications consultant. He is also principal and founding attorney of Beacon Hill Law in Boston. Follow him on Twitter at @JoeStanganelli. | <urn:uuid:eb9a6f82-36bc-4969-8cc4-45ccf0dd2ba2> | CC-MAIN-2017-09 | http://www.enterprisenetworkingplanet.com/print/netsecur/the-mask-raises-network-security-worries-in-an-age-of-cyberwarfare.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00442-ip-10-171-10-108.ec2.internal.warc.gz | en | 0.946882 | 1,158 | 2.703125 | 3 |