In the race to build the first exascale supercomputer, several nations have set their sights on 2020 as the make-or-break year. With six years to go until that deadline, we can expect exascale efforts to take on a new urgency. Japan, which is among the economic regions seeking exascale glory, just got a little closer to that goal. On December 26, the Japanese Ministry of Education, Culture, Sports, Science and Technology selected RIKEN to develop an exascale supercomputer by 2020.
RIKEN, the large Japanese research institution which relies on supercomputing to pursue advances in a diverse range of scientific disciplines, has been tasked with keeping Japan at the leading edge of computing science and technology.
“Exascale supercomputing is expected to make possible high-resolution simulations, contributing to advances in a wide range of areas including drug discovery, weather forecasting, and astrophysics,” noted the Ministry in a December 26th statement.
RIKEN was selected for this seminal project based on its experience developing and operating Japan’s current top number cruncher, the K computer, which was ranked as the fastest supercomputer in the world in 2011. With 10.51 petaflops LINPACK, the system is still a strong TOP500 contender, as the current number-four machine. Getting to exascale will require fielding a system that is 100 times faster than “K” and 30 times faster than the current TOP500 record-holder, China’s Tianhe-2 supercomputer, which achieved 33.86 petaflops on the LINPACK benchmark.
“The RIKEN Advanced Institute for Computational Science (AICS) will now have two important missions,” notes Kimihiko Hirao, director of the RIKEN Advanced Institute for Computational Science (AICS) of Japan, “continuing to operate and manage the K computer for public use with the aim to generate useful research outcomes, and the successful development of the Exascale Supercomputer scheduled for completion by 2020. We ask for support from our associates around the world and in Japan as we launch our new project, which will be a great boon for science and technology, as well as industry.”
While nations like the United States, China, Japan and the EU would like to be first to break the exascale barrier, advancing science and technology also requires collaboration, sharing and a degree of openness. At SC13, the US and Japan signed a memorandum of understanding (MOU) in recognition of a new partnership focused on expanding the use of petascale computing in the scientific and engineering communities. Aside from demonstrating that there is still much petascale-level work to be done, the initiative reflects the importance of global collaboration in the pursuit of scientific progress. Going forward, it will be interesting to see how exascale-focused nations balance the drive to compete with the advantages of cooperation.
By Shamma Iqbal and Helena Marttila
This April, the Indian government quietly passed the 2011 Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules (the “Rules”). Among other things, the Rules require written consent for the processing of “sensitive personal information” in India and that organizations processing personal information in India implement reasonable security practices and procedures. As drafted, the Rules apply to organizations that process personal information, including sensitive personal information, in India regardless of where the information originates or whether the information relates to Indian or non-Indian citizens. The Rules also do not differentiate between “data controller” and “data processor” and thus it is likely that they apply to all organizations engaging in data processing activities in India, whether or not the processing is performed on behalf of other organizations.
Much ambiguity surrounds the interpretation and practical effect of the Rules, and the Indian government had not provided any clarification on the Rules at the time of writing, although it is expected to respond to questions posed by industry stakeholders on the meaning of certain provisions in the coming weeks.
The key features of the Rules, and their potential application, are discussed below:
1. Definition of Sensitive Personal Information. The Rules provide an exhaustive definition of “sensitive personal data”, which is similar to the definition contained in the EU Privacy Directive. This definition encompasses passwords, financial information, physical, physiological and mental health condition, sexual orientation, medical records and history, and biometric information. The definition excludes any information that is freely available or in the public domain.
3. Authorization for Processing Sensitive Personal Information. Article 5(1) of the Rules requires that organizations based in India obtain authorization in writing from any “provider” of sensitive personal information in order to process such information in India. While the term “provider” is not expressly defined in the Rules, it is possible to construe a distinction in the Rules between “provider” and “person” meaning that the term “provider” encompasses corporate bodies only. According to this reading, the requirement for written consent will apply only where the sensitive personal information to be processed in India is provided by a corporate body. Thus, where sensitive personal information is transferred from a non-Indian organization to an organization based in India, the exporting organization should provide a written authorization (which may be by letter, fax or email) to permit the recipient organization in India to process the information. On the other hand, and on the assumption that “provider” does not encompass natural persons, the Rules do not require written consent where sensitive personal information is obtained directly from individuals.
4. Transparency Requirement. Pursuant to Article 5(3) of the Rules, when collecting personal information directly from natural persons, organizations based in India should furnish such individuals with certain information about the processing, including the fact that personal information is being collected and the purpose for which such information will be used. As stated above, the Rules appear to make a distinction between “provider of personal information” and “person.” According to this reading, the Rules do not require written consent where personal information, including sensitive personal information, is collected from individuals directly.
5. Authorization for Third-Party Disclosures. In order for an organization in India to disclose personal or sensitive personal information to a third-party, it should have the prior permission of the “provider” of such information (i.e., the organization that originally provided the information). The Rules provide that onward transfers of personal information may alternatively be addressed through contractual arrangements between the provider of the information and the recipient based in India.
6. International Transfers. Under the Rules, organizations based in India may transfer personal information, including sensitive personal information, internationally in two specific situations: (i) where there is a contract in place between the transferring organization and the receiving organization or (ii) where the individual to whom the information relates has consented to the transfer.
7. Security. The Rules provide that an Indian organization will be deemed to have complied with reasonable security practices and procedures where they have implemented a comprehensive documented information security program and policies that contain managerial, technical, operational and physical controls commensurate with the information assets and the nature of the business. In the event of a data breach incident, the organization may be required to evidence, upon request, that it has implemented its documented security controls. An organization that has implemented International Standard IS/ISO/IEC 27001 or an approved industry code of practice is deemed to have complied with reasonable security practices and procedures, provided that compliance with the standard or code of practice is audited on an annual basis.
If you missed it, President Obama designated October as National Cybersecurity Awareness Month. The move was designed to engage and educate the public to increase awareness about cybersecurity issues and increase the resiliency of the nation in the event of a cyber incident.
As Cybersecurity Awareness Month comes to a close, now would be the best time to evaluate your security measures to keep your information safe.
Small and medium businesses (SMBs) are increasingly becoming the main target for security hackers. According to the security company Symantec, cyber attacks on SMBs rose 300 percent in 2012 from the previous year. Additionally, a report from McAfee found almost ninety percent of SMBs in the US do not use data protection for company and customer information. These targets are attractive because they tend to have weaker online security.
Businesses are increasingly dependent on the internet for their daily operations. With vital information to protect, a regular assessment of your business’ security protocol should become habit. Consider what information your organization collects, how you store information internally, who has access to the information and what measures your organization takes to protect this data. Is it secure? Would your customers feel secure with your data storage techniques?
These three areas should be among the first on your list when evaluating your cybersecurity:
Spam or unsolicited junk mail can cause harmful viruses to enter your company’s network. Alternatively, you could even be distributing spam mail without knowing it. To avoid sending spam, use clear, easy to understand subject headers, and provide all recipients with the option to opt off of your distribution list. Be sure you also include your valid physical postal address.
In addition to spam, your company could also experience phishing attacks, which can be detrimental to your company's security. Phishing attacks can enter your network and gain access to email, monitor your keystrokes to learn passwords, and even hijack your website. Furthermore, viruses and spyware can enter your computer through email downloads and by clicking on suspicious links. In order to protect your company's and customers' secure information, be sure you use the latest security software, web browser versions, and operating systems, and that all programs are completing automatic software updates. If an email or link looks suspicious, delete the email and contact your technical support team.
In addition to keeping your technology safe, it is also important to train your employees to stay safe. Encouraging employees to follow good password policies, regularly back up work, and stay watchful will help your data stay secure.
Now that you know what threats you may encounter, it is important to have a cybersecurity plan in place. Your cybersecurity plan should include action plans for prevention, resolution and restitution.
Stay safe by automating software updates, scanning all new devices, and using a firewall and spam filters. Implementing a commitment to security from the top level all the way throughout your organization will help keep your employees and customers information safe and secure.
Does your company allow employees to bring their own device? If so, this can create additional security issues. Read our blog post about how BYOD can affect your business.
The first line of defense in preventing network compromise and data breaches is early recognition and investigation of potentially suspicious network activity. Early detection will help your company understand how your network is being used and whether any of that usage is malicious.
Antivirus or antimalware software and monitoring should be utilized in addition to other safe network practices. Antivirus and antimalware software regularly scans your company's network for out-of-the-ordinary occurrences. This provides you with an overview of current network activity and helps in detecting any suspicious activity.
Knowing the security risks your company may face is important in protecting your company's and customers' data. Implementing security practices throughout your organization will instill a culture of safety and security. Use the above methods to protect yourself and your organization, and always be vigilant to avoid falling victim to a cyber attack.
Astrophysicists and cosmologists have come up with a new time-lapse simulation of the universe’s evolution that is the most comprehensive and detailed yet. The Illustris simulation, as it’s called, spans 13.8 billion years of cosmic evolution and follows thousands of galaxies taking into account gravity, hydrodynamics, cooling, the course of stellar population and other complex processes.
Developed by a team of scientists from the Massachusetts Institute of Technology and several other institutions and executed on powerful supercomputers, the model traces the history of the universe, starting soon after the Big Bang and continuing through to the present day, capturing 13.8 billion years of change with unprecedented fidelity.
The research team reports that the massive simulation once again confirms the standard theory of the universe and matches key astronomical observations, including the distribution of galaxies and the richness of neutral hydrogen gas in galaxies of all sizes.
A paper describing the research appears in the May 7 issue of the journal Nature. Besides MIT, the 10-author team includes researchers from the Harvard-Smithsonian Center for Astrophysics (CfA); the Heidelberg Institute for Theoretical Studies in Germany; the University of Heidelberg; the Kavli Institute for Cosmology and the Institute of Astronomy, both in Cambridge, England; the Space Telescope Science Institute in Baltimore; and the Institute for Advanced Study in Princeton, N.J.
Aside from being a stunning achievement in its own right, Illustris offers important insight into the rate at which certain types of galaxies develop.
“Some galaxies are more elliptical and some are more like the Milky Way, [spiral] disc-type galaxies,” explains Mark Vogelsberger, an assistant professor of physics at MIT and lead author of the Nature paper. “There is a certain ratio in the universe. We get the ratio right. That was not achieved before.”
The model also provides clues on the tendency of matter to redistribute in the universe, prodded by supernovas and other phenomena. This finding could be used to fine-tune experiments performed with space-based telescopes, such as NASA’s WFIRST survey, and EUCLID, the European Space Agency’s program.
Illustris showcases a cubic chunk of the universe measuring 350 million light-years on each side, which is found to contain 41,416 galaxies. The amount of data is such that the complete simulation required several months of computing time at multiple computing centers, including the Harvard Odyssey and CfA/ITC cluster; the Ranger and Stampede supercomputers at the Texas Advanced Computing Center; the CURIE supercomputer at CEA/France; and the SuperMUC computer at the Leibniz Computing Centre in Germany. The largest run incorporated 8,192 compute cores, and spanned 19 million CPU hours. For comparison’s sake, it would take the best desktop computers of the day 2,000 years to execute the entire simulation.
Adding to the simulation’s complexity are 12 billion visual “resolution elements,” which enabled the researchers to compare “snapshots” from the simulation with images of the known universe. “[There was] agreement with observational data on small scales and large scales,” affirms Vogelsberger. A close match like this serves as validation of the study’s correctness.
Illustris diverges from earlier efforts in both scope and fidelity. Its predecessor, Millennium, only tracked the evolution of the dark matter web; ordinary matter and galaxies were tacked on in an ad hoc approach. But the Illustris simulation incorporates ordinary matter from the start. Where the former visualization was relatively calm-looking, Illustris is packed with explosions, including hot blasts of gas that emanate from supermassive black holes at the center of galaxies. These ejections are an essential part of galaxy formation, acting as a brake to star formation.
As Simon White, a cosmologist at the Max Planck Institute for Astrophysics in Garching, Germany, who worked on the Millennium Simulation, explains: Illustris is the first simulation that is large enough to capture a representative segment of the universe and also fine-grained enough to incorporate individual galaxies. “It’s the combination of those two things that is new,” he tells Science.
Advances in supercomputing power are what enabled the simulation to handle the 350 million light-year span and all the additional features. “Previous simulations of the growth of cosmic structures have broadly reproduced the ‘cosmic web’ of galaxies that we see in the Universe, but failed to create a mixed population of elliptical and spiral galaxies, because of numerical inaccuracies and incomplete physical models,” the research team explains in the Nature article.
PNRP – The Peer Name Resolution Protocol is a new protocol from Microsoft, and one of the first technologies that could change the way we think about name resolution in computer networking, possibly becoming the next DNS (Domain Name System)-like technology. PNRP is the new DNS, but there are so many differences between them that it deserves an article on this blog.
Just as a reminder, in a few simple words: DNS is the technology that enables us to type a domain name into the browser and leaves it to the Domain Name System to translate that domain name into the IP address of the server where the web page is published.
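As a minimal illustration of that translation step, a single resolver call is all it takes; the hostname below is just a placeholder:

    import socket

    # Ask the system resolver (and ultimately DNS) to translate a
    # domain name into the IP address of the server that hosts it.
    ip_address = socket.gethostbyname("www.example.com")
    print(ip_address)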
As the whole world steps toward IPv6 implementation in the coming years, there are technologies and future services that will not function at their best using DNS. Microsoft was one of the first to respond, developing a new, decentralized technology that relies on neighboring computers for name resolution and relies completely on IPv6 addressing. The Peer Name Resolution Protocol was the answer.
DNS depends on a hierarchical naming structure, while PNRP depends on peer systems to resolve a computer's location. Essentially, PNRP is a referral system that performs lookups on the basis of the data it already knows.
Here is a simple example: if you need to find Computer 1 and you are close to Computers 2 and 3, your system first asks whether Computer 2 knows Computer 1. If Computer 2's response is positive, only then is a link to Computer 1 provided to you. If the reply is negative, the system asks Computer 3 whether it knows Computer 1, using the same method as with Computer 2. If neither computer knows Computer 1, the request is sent to other computers close to your system until it finds one that is familiar with Computer 1.
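The sketch below is not the real PNRP algorithm or API, just a simplified, hypothetical illustration of that referral-style lookup, with all names and data structures invented for the example:

    # Each peer is modeled as a dict: the names/addresses it already knows,
    # plus the neighboring peers it can refer the question to.

    def referral_lookup(target_name, peer, visited=None):
        if visited is None:
            visited = set()
        if id(peer) in visited:            # don't ask the same peer twice
            return None
        visited.add(id(peer))

        if target_name in peer["known"]:   # this peer knows the target
            return peer["known"][target_name]

        for neighbor in peer["neighbors"]: # otherwise, follow referrals
            address = referral_lookup(target_name, neighbor, visited)
            if address is not None:
                return address
        return None

    computer2 = {"known": {"Computer 1": "2001:db8::1"}, "neighbors": []}
    me = {"known": {}, "neighbors": [computer2]}
    print(referral_lookup("Computer 1", me))   # -> 2001:db8::1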
There are a number of ways in which PNRP is different from the DNS service:
LDAP (Lightweight Directory Access Protocol) is a vendor neutral binary protocol that is used to maintain distributed directory info in an organized way that makes it easy to query. LDAP is great for maintaining role-based permissions and secure authentication in environments with diverse access requirements.
Before you can talk about LDAP, you always have to talk about one other thing:
At the end of the day, that's what it's all about, right? Whether via an app on your phone, a browser on your laptop, a thick client at work, or otherwise, the thing you are most interested in is content. Whether consuming, creating, manipulating or otherwise, content is the main reason that we interact with technology. In a world where content is king, access to said content is logically paramount, yes?
There's no question that we generally want people only accessing the content for which they are authorized. This is an increasingly complex notion in today's ever expanding landscape of mobile devices, remote access, service oriented architectures, and more. Fortunately there are many ways in which such things can be configured and controlled. One such method has been tried and true for many a year: LDAP.
Ever heard of LDAP? Regardless, I can almost guarantee you've used it.
One of the (many) unsung heroes of our connected lifestyle, LDAP is a protocol for communicating with a database-like tree service. It's something that most people have used, and perhaps even use on a daily basis, without realizing it. Allow me to take a moment to shed some light on our beloved four letter friend.
What Is LDAP?
As defined above, LDAP stands for "Lightweight Directory Access Protocol". It's a vendor neutral, binary protocol that interacts over TCP and UDP. It's used to maintain distributed directory info in an organized, easy-to-query manner. That means it allows you to keep a directory of items and information about them. Think of an email directory where you have a list of users or email addresses and information about them, such as which department they're in, which associated lists they can send to, etc. Or perhaps a user list that also associates full name, phone number, etc. Sound familiar?
LDAP stores this data by way of records which contain a set of attributes. Think of the attributes like fields in a database. The record itself has a unique identifier, a "Distinguished Name" in LDAP parlance, most often seen as "DN." This is the unique bit of each entry, kind of like the path to a file on your file system. Or perhaps more accurately similar to a street address, since postal addresses begin with the most specific bit first (house number, etc.), as do DNs. Each other attribute in the record has a name and a type, as well as one or more values.
This structure allows for a huge amount of flexibility and correlation of data which makes LDAP beyond useful, and is a large part of what has caused it to proliferate to such a degree. Maybe you want to search a company email directory for all people located in Nashville whose name contains 'Jesse', and you want to return their full name, email address, title and description. No sweat.
An example LDAP record might look something like this:
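(A reconstructed illustration only; every value below is hypothetical, chosen simply to match the attributes discussed in the next paragraph.)

    # Illustrative LDIF entry; all values are made up.
    dn: uid=holmes,ou=detectives,dc=bakerstreet,dc=org
    objectClass: person
    objectClass: organizationalPerson
    objectClass: inetOrgPerson
    uid: holmes
    cn: Sherlock Holmes
    sn: Holmes
    ou: detectives
    title: Consulting Detective
    mail: holmes@bakerstreet.org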
As you can see from the above, I'd be able to query on the UID or the name, on the type of object (person, in this case), get a list of detectives (as denoted by the ou) that would include this and all other detective records, or query on the location stored in the dc attributes, etc. This is, not shockingly, an extremely simple example, but it shows how things are stored and queried, if only the tip of the iceberg. From here the sky's the limit as far as customization and extension.
LDAP Authentication, Authorization & Access Control
This type of structure lends itself extremely well to things like access control and authorization. Which groups is a user in? Only users in the detective group should have access to the clues application, so when someone attempts to log in, ensure they are in the proper group before granting access, etc. The details get complex, but the effect is fantastic.
Okay, so that's the basics of the 'what' of LDAP, but what about the 'how?' How does one get access to all of those handy-dandy records? Well, the process is pretty straight forward from a flow perspective:
- A session begins with a client binding to an LDAP server (DSA, Directory System Agent), default port 389
- The client then sends an operation request (often a search or compare request, for example) to the server, asking for a particular set of information. (Is uid holmes in ou detectives?)
- The server then processes this query, and supplies a response*. (yes)
- The client receives the response and unbinds, then processes the data.
* Note that the client does not have to wait for this response, and is free to fire multiple LDAP requests in series. The responses can return in any order.
There are, of course, a heck of a lot more complex options and steps in there, TLS, adding and deleting entries, modifying entries and more, but that's the general gist of how transactions flow.
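To make that flow concrete, here is a minimal sketch using the third-party Python ldap3 library (assumed to be installed; the host, base DN and credentials are placeholders):

    from ldap3 import Server, Connection, ALL

    # Bind to the DSA on the default port.
    server = Server("ldap://ldap.example.com:389", get_info=ALL)
    conn = Connection(server, user="cn=reader,dc=example,dc=com",
                      password="secret", auto_bind=True)

    # Send a search request: is there a detective with uid holmes,
    # and if so, which attributes does the entry have?
    conn.search(search_base="ou=detectives,dc=example,dc=com",
                search_filter="(uid=holmes)",
                attributes=["cn", "mail", "title"])

    # Process the response, then unbind to end the session.
    for entry in conn.entries:
        print(entry)
    conn.unbind()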
There you have it. LDAP: a lightweight, flexible, robust, broadly used protocol leveraged to access structured information. It's not all sunshine and roses of course, things can get complicated pretty quickly in a large deployment, and there is a heck of a learning curve. All things considered, however, LDAP plays an important role in many modern deployments, and plays it well. So to answer the question...who needs LDAP?
Seems like the answer is "just about everyone."
When an LDAP deployment is misbehaving, people get frustrated fast. With ExtraHop you can get total visibility into LDAP deployments of all sizes, so you can find and fix problems as quickly as possible. See how it works in our interactive online demo.
That's not all. ExtraHop decodes dozens of other protocols that are vital to IT operations, from infrastructure services like DNS, FTP and memcache, to VDI protocols like ICA and many, many others. Check out our protocol support page for more.
One-to-one computing is on the rise at K-12 schools across the U.S. because of the impact computers have on student learning — from improved scores on standardized tests to lowered dropout rates. But providing a computer for every student remains a challenge due to severe budget constraints as schools struggle with teacher layoffs, aging buildings and educational mandates.
“With severe state budget cuts and competing priorities, putting a computer in front of every student may not receive the attention it should,” said Mike Virgil, a former superintendent of the Morris Central School District in Morris, N.Y., who’s currently interim executive director of the Catskill Area School Study Council, a New York Board of Cooperative Educational Services program. “But if we’re not giving students access to technology with quality instruction at school, where will they learn about issues of Internet security, copyright or information quality? Too many students in urban or rural areas still lack access not just to the Internet, but also to computers.”
Some schools are using a new computer sharing technology — not thin client, but one that solves some of the shortcomings of older thin client, network server-based equipment — to carry out their 1-to-1 computing needs for less than they would spend on new computers.
Since many schools can’t afford to purchase and maintain new computers, many have looked to thin client systems — typically stripped-down computer workstations that rely on a network server to do most of the processing.
But in a school setting, thin client systems sometimes don’t deliver the full PC speed, function, flexibility and reliability necessary for multiple users, particularly on aging school networks. The main issue is that when many users share a single operating system and server, performance slows when more computing power is demanded than is available. When this occurs, one user’s problem becomes everyone’s problem — like a stalled car that triggers a rush-hour traffic jam.
With thin client systems, multimedia, streaming video, project-based learning and downloading Internet files are particularly prone to slowdown and crashes. Many applications won’t run or have to be tweaked, and some software programs only allow one user at a time to open them. All these challenges can wreak havoc on student learning, teacher lesson plans and tech support resources.
“When we tested thin clients, performance slows and crashes were a concern,” said Phil Sheridan, a former technology coordinator for the Morris Central School District, who’s currently director of technology at the Delaware-Chenango-Madison-Otsego Board of Cooperative Educational Services, a regional educational agency providing services in partnership with 16 school districts in the greater New York area. “Also, we didn’t want to be required to operate off a single operating system, since that excludes some valuable educational software from being used.”
A new computer sharing technology called VirtuaCore, by Lawrence, Pa.-based Black Box Network Services, essentially turns one computer into two or four fully functional PC workstations. The technology does this by using the excess processing power of desktop PCs already in schools, working off the network to avoid network slowdowns.
Although dual-core and quad-core processors, which combine two or four processors on a single integrated central processing unit chip, are common in today’s desktop computers, few applications require such computing power. Most applications use only one of those processors, leaving the remaining processing power untapped.
By using the excess CPU capacity of a single desktop PC to simultaneously operate multiple, fully functioning workstations, schools can reduce the number of PCs that need to be purchased, replaced or supported by up to 75 percent and reduce energy costs by up to 70 percent, according to the vendor. For teachers and students, it’s just like working on a separate computer.
VirtuaCore allows a PC to understand what to do when extra keyboards, monitors and mice are plugged into an everyday PC. Each workstation operates as an independent standard desktop computer with full functionality. Each workstation consists of only a keyboard, monitor and mouse, which students use independently. The one computer becomes two or four, each capable of running its own operating system such as Windows 7, XP, Linux and others.
“Instead of buying 20 PCs for a computer lab, schools could buy five PCs and use the Black Box computer sharing technology to provide 20 workstations for the same result,” said Sheridan. “Teachers and students don’t know the difference, except the noise and heat is noticeably less with five PCs running instead of 20. Administrators and IT directors appreciate how much it can cut PC replacement, energy and support costs.”
Conversely if a school has 150 computers and 600 students, it can turn the 150 computers into 300 or 600 based on processor specifications. If it does so over a three-year period, that same school would save 21 to 57 percent of its budget in hardware, software and energy consumption costs, saving anywhere from $100,000 to $500,000, according to the company.
“Computer sharing technology is a good option for schools that want to move toward a 1-to-1 student-to-computer ratio, but don’t have the funds,” said Virgil.
The 1-to-1 computer-to-student ratio is important because “students are naturally engaged and motivated by technology,” Virgil said. “As long as there’s enough computers and trained teachers integrating the technology with the curriculum, technology is a marvelous way to improve learning.”
“Computers will enhance learning only when students have easy access to them in their classroom,” said author Harvey Barnett in his Education Resources Information Center (ERIC) Digest article Investing in Technology: The Payoff in Student Learning. “Using computers once or twice a week will have negligible impact on student learning.”
Del Williams is a technical writer based in Torrance, Calif.
vTerminology: A Guide to Key Virtualization Terminology
One of the most important steps in mastering a new technology is learning the associated terminology or vocabulary. In the IT field, this can be a very challenging step, as much of the terminology is often used inconsistently. This white paper defines the terminology associated with IT virtualization. It is mainly vendor-agnostic, but it does provide some vendor-specific terms, product names, and feature names used by VMware, Citrix, and Microsoft.
This section defines many of the most commonly used terms in the virtualization vocabulary. These are considered core, high-level terms. These are straight forward, commonly accepted definitions.
Virtual Machine (VM) - A set of virtual hardware devices, including virtual CPU, virtual RAM, virtual I/O devices, and other virtual hardware devices. Software that resembles and behaves like a traditional, physical server and runs a traditional operating system (OS), such as Windows or Linux.
Many products and technologies today provide a platform on which VMs can be built and run. Although these technologies may have many fundamental differences, they tend to share these characteristics:
- Many VMs can run on each physical host concurrently.
- VMs running on the same host are isolated from one another.
- The OS installed in the VM is unaware that it is running in a VM.
- Administrators and users in one VM cannot access the underlying host OS or the guest OS of other VMs running on the same host.
Virtual Server - A VM running a server OS such as a Windows Server or a Red Hat Enterprise Linux Server. A virtual server typically runs one server-based application.
Virtual Desktop - A VM that is running a desktop OS, such as Windows 7 or Red Hat Enterprise Desktop. A virtual desktop typically has one direct, concurrent user.
VM Template - An object that represents the "golden image" of a particular virtual server build or virtual desktop build, typically including a well-configured OS and applications. Administrators can quickly deploy new VMs by automatically copying the template to create the new VM.
VM Guest OS - The OS that runs in a VM.
Virtual Hardware Device (Virtual Device) - A software component that resembles and behaves like a specific hardware device. The guest OS and software applications in the VM behave as though the virtual hardware device is actually a physical hardware device. A VM is a set of virtual hardware devices that correspond to the set of devices found in traditional, physical servers, such as virtual CPUs, virtual RAM, virtual storage adapters, and virtual Ethernet adapters.
Virtual Network Interface Card (vNIC) - Software that resembles and behaves like a traditional Ethernet Adapter. It has a MAC address, and it receives and sends Ethernet packets.
Virtual SCSI Adapter - Software that resembles and behaves like a traditional SCSI adapter. It can generate SCSI commands and attach multiple virtual disks.
Virtual CPU (vCPU) - Software that resembles and behaves like a traditional, physical CPU. Depending on the underlying technology, vCPUs could be software-emulated or software-modified:
Software Emulated - A process that resembles and behaves like a specific model of a physical CPU that, in some cases, could be different than the model of underlying physical CPU in the host hardware.
Software Modified - A process that provides a filtered, indirect connection to the underlying host CPU. Typically, the vCPU provides subsets of the instruction and feature sets that are available on the host CPU. The vCPU traps and modifies privileged commands but sends other commands directly to the hardware.
Virtual Disk - Resembles and behaves like a physical disk. It may be a file, a set of files, software, or some other entity, but to a VM, it appears to be a SCSI disk. For example, in Microsoft Hyper-V, virtual disks are referred to as VHD files with the file extension .vhd.
Virtual Ethernet Switch (vSwitch) - Software that resembles and behaves like a physical Ethernet switch. It allows vNICS from multiple VMs to connect to virtual ports. It allows physical NICs to connect to virtual ports and serve as uplinks to the physical network. A vSwitch maintains a MAC address table and routes traffic to specific ports, rather than repeating traffic to all ports. It may include other features commonly found in physical Ethernet switches, such as VLANs.
Virtual Network - A network provided by virtual switches. It may be an extension of a traditional network that is built on physical switches and VLANs, or, it may be an isolated network formed strictly from virtual switches.
Virtual Infrastructure - A collection of VMs, virtual networks and storage, and other virtual items that can deploy and run business applications, as an alternative to running applications directly on physical infrastructure. It allows IT personnel to install software applications in traditional OSs, such as Windows and Linux, without needing to know details of the underlying physical infrastructure. The OSs and applications run in VMs, in virtual networks, and on virtual storage.
René Descartes (1596-1650) had an architectural view of knowledge. So do we. We and René think beliefs have a basis, and that fundamental ideas provide a basis we can build on. Putting one thing on top of another to build a structure that shelters us is one of our primordial experiences. If the foundation teeters, so does the structure. And that’s how we think of knowledge.
For logic, that makes sense since logical systems start with axioms and develop via rules of derivation. If you start your argument, "Now, as is well known, the moon is made of cheese," then it doesn’t matter how well you derive what follows, you just can’t trust the idea that, therefore, there are giant mice on the moon. Knowledge needs a firm foundation, at least when it comes to logic.
Descartes’ problem was coming up with the initial axioms. He was engaged in a project of radical self-doubt. After throwing out everything about which we could possibly be wrong, what remains? He worked himself down to the fact that since he is doubting, he must also exist. That may be a firm foundation, but it’s a thin floor on which to rebuild the entire house of knowledge.
So, he came upon a heuristic for deciding which of his ideas are worthy of belief. It is modeled on how we decide which of our perceptions are reliable. Descartes writes: "I call ‘clear’ that perception which is present and manifest to an attentive mind: just as we say that we clearly see those things which are present to our intent eye and act upon it sufficiently strongly and manifestly." He defines "distinct" as perception that is clear but also so "separated and delineated from all others that it contains absolutely nothing except what is clear."
Now that we know the criteria for believing our perceptions, Descartes then applies them to "intuition," by which he means our ability to grasp the truth; this is almost the direct opposite of what we mean by the term today. Intuition is "the conception which an unclouded and attentive mind gives us so readily and distinctly that we are wholly freed from doubt about that which we understand."
He needs clearness and distinction as criteria because otherwise we can’t trust our thoughts, no matter how hard we try. Behind this is not mere wishing it were so; Descartes assumed that God wouldn’t have given us faculties designed to fool us. He gave us faculties by which we can know His world.
Clearness and distinction work not only because they give us a mental state we can trust, but because that state reflects what Descartes assumed was the nature of the world. The things of the world are well-formed and distinct. Clear and distinct knowledge thus reflects the world that it’s knowing.
But Descartes was wrong. The world isn’t any one way. Knowing things means identifying things. Identifying them means classifying them. But how we classify them—and thus how we know them—depends on what we’re up to. If we’re in the forest looking for creatures that carry disease, we’ll distinguish a different set of critters than if we’re looking for creatures that have high-price pelts.
It gets worse as the topics get more interesting. For example, I recently blogged admiringly about a post by Rob Styles that carefully and usefully distinguished content, data and metadata. These are shifty terms that need some precision. Styles’ post nailed it, as far as I was concerned.
But, the next day Tom Matrullo left a comment in response to my post. "Styles’ point is superb, but we need to look at the obvious (and perhaps not so obvious) exceptions," he began. He then listed examples of metadata that are themselves content, including playlists, a list of top-ten films, etc.
Rob is right. Tom is right. Rob clarified. Tom messied it up. Neither has the last word, and we’re all smarter for it. We need both, and we do both at different stages of our investigation of the truth.
Descartes is the apogee of a particular moment in the dialectic of knowing. In that moment, all is clear, distinct and certain. The conversational nature of the Web is the apogee of the other moment, in which the clear and distinct gets to shine but only until someone notices it, points to it and explains why it’s really more complicated than that. Rob Styles’ post is http://tinyurl.com/2tv4dj.
This article by Dan Goodin appears to cover the most facts about the CRIME attack on SSL/TLS. It answers my first question about what the acronym means; CRIME is short for “Compression Ratio Info-Leak Made Easy.”
It also confirms the attack is performed when the communication uses TLS compression. My understanding is that TLS compression is used in SPDY, which is an open networking protocol used by both Google and Twitter.
There is good news. Microsoft Internet Explorer, Google Chrome and Mozilla Firefox are believed to be immune from the attack as IE never supported SPDY, and Chrome and Firefox have been patched. There may be issues with mobile browsers, but that is still to be confirmed.
The CRIME attack will only work when a vulnerable browser or application is connected to a website that supports TLS compression or SPDY. So, to protect your users, you should disable SPDY or TLS compression on your website.
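As one concrete example of that mitigation (assuming an Apache httpd 2.4 server with mod_ssl; other web servers have their own equivalents), TLS compression can be turned off with a single directive in the SSL configuration:

    # e.g., in ssl.conf or the relevant <VirtualHost> block
    SSLCompression off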
Organizations Using the Internet: India, Pakistan, Jammu and Kashmir, and militant groups based in or supported by them
Modified 24 April 2009
It seems impossible to separate these into separate pages, as many so-called "Indian separatist groups" are really Pakistani-based jihadists, and it is impossible to consider either India or Pakistan in isolation without also considering Kashmir.
India — The Indian Union is administratively one extremely large nation, but ethnically it is a collection of many diverse nationalities. It is the result of British imperialism (certainly a fair word, as it was one of the "jewels in the crown" of the British empire). It brought (forced?) together several disparate and historically separate ethnic nations into one political state. Many of these ethnic nationalities have factions desiring independence. There are many legitimate secessionist or autonomy movements.
However, many sites claiming to be maintained by Indian secessionist movements are actually designed by supporters of the terrorist group Lashkar-e-Taiba or allied Pakistan-based organizations (see the Pakistan section below for more).
Many of the sites list below seem to be mostly pro-Pakistan and anti-India, instead of being pro-nationality or pro-autonomy. As with anything else on the net, the reader must remember that these are controversial issues and subject to misleading information. If anyone can offer further information about which sites are really backed by Lashkar-e-Taiba, or knows of sites giving the opposing points of view for any of the issues involved, please let me know! The Pakistan-based anti-Indian viewpoint seems to be disproportionally represented on the net.
- Bharat Rakshak (The Consortium of Indian Military Websites) — Looks very good, although apparently not official — http://www.bharat-rakshak.com
- Indian Armed Forces — http://armedforces.nic.in/
- Ministry of Defence — http://mod.nic.in/
- Bharatvani Institute — A good overview of the Indian point of view, capturing the India/Pakistan, Hindu/Muslim, and Modern / Conservative dichotomies — http://www.voi.org
- Hindutva — http://www.hindutva.org
- India Think Tank — A non-profit organization promoting policies in favor of India — http://www.indiathinktank.net/
- Indo-Pak Links — This looks to be a broad collection — http://www.grandpoohbah.net/indo-pak.htm
- Institute of Defence Studies and Analyses (IDSA) — An organization studying and researching problems of Indian national security: http://www.idsa-india.org
- Secular India — Quite a bit of political analysis — http://www.secularindia.com
- South Asia Terrorism Portal — Concentrates on the terrorism, low intensity warfare and sectarian strife in South Asia — run by the Institute of Conflict Management — http://www.icm-satp.com/
- South Asian Analysis Group (SAAG) — Strategic analysis on Indian and international security — http://www.saag.org
Andhra Pradesh —
- Naxalites (People's War Group) — Maoist movement influential in parts of Andhra Pradesh and neighboring states.
- United Liberation Front of Assam (ULFA) — The militant wing of Assamese secessionism. With several bases in Bhutan and in Assam, it battles the Indian Army. Has been on the net, hosted by Geocities.
- Assam Watch — Documenting the violation of human rights in Assam which led to the secessionist uprising. http://freespace.virgin.net/assam.watch/
- Bodoland — Some information on the Bodoland movement: http://www.bsos.umd.edu/cidcm/mar/indbodo.htm
There is some misinformation out there, a collection of writings being misattributed to Babasaheb Ambedkar regarding Dalitstan (see the list of apparent fake sites below).
- The real Babasaheb Ambedkar web site: http://ambedkar.org/
- Revolutionary People's Front Manipur — Manipuri militants engaged in violent secessionist war against the Indian Army. Has been on the net, hosted by Geocities.
- Manipur People's Liberation Front — Has been on the net, hosted by Geocities.
- PrePak (People's Liberation Party of Kangleipak) — Another militant Manipuri secessionist outfit. Has been on the net, hosted by Geocities.
- Mizoram National Front — Some info on the Mizo secessionist movement: http://www.bsos.umd.edu/cidcm/mar/indmizo.htm
Mughalstan — More than any other internal conflict, this seems to mostly be a manifestation of the India-vs-Pakistan tension. Many of these sites are not seeking separatism or autonomy but instead seeking union with Pakistan or with some larger pan-Islamic movement. As some e-mail pointed out, "These Mughlestan sites may be a handiwork of an Islamic organization who have never reconciled to losing power in India in mid 17th century and count as real sites but then they do not represent any secessionist movement real or political inside India. These same organization may be behind the fake sites [listed below as 'Pages Thought to be Fake']." Also see the Pakistan section and the Jammu and Kashmir section for some more details.
- Hezb-e-Mughalstan (The Party of Mughalstan) — http://www.dalitstan.org/mughalstan/
- Mughalstan Nation — Has been on the net, hosted by Geocities.
- Jihad-e-Mughalstan — Seeking restoration of the Mughalstan Caliphate, which would include Pakistan, parts of India, Bangladesh, and maybe parts of Nepal: http://www.freeyellow.com/members8/mughalstan/
- Supporters of Shariah — Based in the portion of Mughalstan actually in Pakistan: http://www.ummah.net/sos/ See also the Supporters of Shariah site at http://www.shareeah.com/home.htm
- Mughalstan Research Institute — http://www.freespeech.org/delhi/
- Tamil Tribune — A monthly Tamil secessionist newspaper — Has been on the net, hosted by Geocities.
- This appears to be Tamil Islamists in India who back bin Ladin. Has been on the net, hosted by Geocities.
- Also see the Sri Lanka section.
- Tripura Peoples' Democratic Front (TPDF) — The political wing of the ATTF (All-Tripura Tiger Force) which is battling the Indian Army in Tripura and runs a parallel government. Has been on the net, hosted by Geocities.
Pages Thought to be Fake — Or at least pro-Pakistani and anti-India, not purely secessionist movements as the sites claim to be. Before even reading through the material, notice the extremely similar layout of the main pages for Gujarat Swarajya Sangh and the Bengal Liberation Army, to name just one example. Also notice the extensive cross-linking between many of these sites. It would appear that one organization is providing much of this.
- Bengal Liberation Army — Organisation struggling for the secession of West Bengal from the Indian Union. http://west_bengal.tripod.com
- Dalitstan Organisation — This is an umbrella organization including what would appear to be many groups with differing, even conflicting, goals and viewpoints. From what I've been told by those far better informed than myself, it would appear that the collection is really pro-Pakistani and anti-Indian. Also note their hosting of the Hezb-e-Mughalstan pages: http://www.dalitstan.org/
- "Ambedkar Library" — Not really:
- Gujarat Swarajya Sangh (Gujarat Independance Movement) — Advocating a homeland for the Gujaratis and Vaishyas of India. http://hometown.aol.com/mahagujarat/
- Maratha Rashtra Parishad — Advocating the restoration of Shivaji's 18th-century Maratha Empire. http://freeweb.digiweb.com/pages/maratha/
- Pan-Islamic Mughalstan — http://muslimsonline.com/mughalstan/
- Oriya Mahasabha — The Oriya secessionist movement. http://oriya.scripterz.org
- Rajputana Liberation Front — Demanding the secession of Rajasthan, the homeland of the Rajputs. http://rajputana.htmlplanet.com
- Free Tamil Nadu — http://www.dalitstan.org/tamil/
Pakistan — The New York Times, 5 July 2002, pg A8, had an excellent overview of Pakistani militant groups:
Anti-Indian militant groups — originally founded to fight Indian influence
- Lashkar-e-Taiba or Army of the Righteous and Markaz-ud-Dawa-wal-Irshad (MDI) — To quote the U.S. Dept of State, Lashkar-e-Taiba "is the armed wing of the Pakistan-based religious organization, Markaz-ud-Dawa-wal-Irshad (MDI) — a Sunni anti-US missionary organization formed in 1989. One of the three largest and best-trained groups fighting in Kashmir against India, it is not connected to a political party. The LT leader is MDI chief, Professor Hafiz Mohammed Saeed. [...] Has several hundred members in Azad Kashmir, Pakistan, and in India's southern Kashmir and Doda regions. Almost all LT cadres are foreigners — mostly Pakistanis from seminaries across the country and Afghan veterans of the Afghan wars. [...] Based in Muridke (near Lahore) and Muzaffarabad. The LT trains its militants in mobile training camps across Pakistan-administered Kashmir and Afghanistan. [...] Collects donations from the Pakistani community in the Persian Gulf and United Kingdom, Islamic NGOs, and Pakistani and Kashmiri businessmen." Accused of Dec 2001 attack on Indian Parliament, believed to be linked to Al-Qaeda.
- Harakat ul-Mujahedeen (Movement of Holy Warriors) — also known as Harkat Ansar and Al Faran — believed responsible for hijacking an Air India plane in Dec 1999. Operates primarily in Kashmir, against Indian troops and civilian targets, kidnaps tourists, hijacked an Indian airliner 24 Dec 1999. Politically aligned with radical political party Jamiat-i Ulema-i Islam Fazlur Rehman faction (JUI-F). Several thousand armed supporters in Azad Kashmir, Pakistan (especially Muzaffarabad and Rawalpindi), and India's Kashmir and Doda regions. Collects donations from Saudi Arabia and other Gulf states. http://www.ummah.net.pk/harkat/
- Jaish-e-Muhammed (Army of Muhammed) — a spinoff of Harkat ul-Mujahedeen, also known as Committee for the Restoration of Pakistani Sovreignity, suspected of kidnapping and murder of journalist Daniel Pearl, accused of Dec 2001 attack on Indian Parliament, possibly linked to Al-Qaeda, reported to have received training in Afghanistan and support from Osama bin Laden.
Sectarian militant groups — fighting other Muslim sects
- Sipah-e-Sahaba — Extremist Sunni group suspected of killing Shi'ites since late 1990's.
- Lashkar-e-Jhangvi — A more radical offshoot of Sipah-e-Sahaba, accused of car bombing (killing 12) in Karachi May 2002, and bombing at American consulate in June 2002, Al-Qaeda is believed to have funded the consulate bombing.
Al Qanoon — A terror coalition formed in January 2002:
- Formed by members of Lashkar-e-Taiba, Jaish-e-Muhammed, Sipah-e-Sahaba, and Lashkar-e-Jhangvi.
- Draws name and inspiration from Ahmed Omar Sheikh, former leader of Jaish-e-Muhammed, arrested in 2002 for murder of Daniel Pearl.
- Members share distorted view of Islam, hatred of the west, and having trained and fought in Afghanistan.
- Jama'at-Ud-Da'awa Pakistan — without reading Urdu I'm not sure exactly what their position is, although the home page mentions Kashmir, Jihad, Fatawa, and MP3 CD's (speeches and exhortations, not music!) — http://jamatdawa.org/
- Narus, a major U.S. corporation, has sold the Pakistan government the equipment it uses to control and monitor telecommunications.
- Also see the page on al-Qaeda and the Taliban, as the Taleban were a Pakistani political party founded and controlled by ISI, Pakistan's Inter-Services Intelligence agency.
Jammu and Kashmir — Note that I've just copied the web site titles so this is a little more than a list of URLs — they aren't my labels, I'm just quoting the authors. The majority seem to really be Indian or (more commonly) Pakistani, rather than Kashmiri.
- Jammu-Kashmir Facts — http://www.jammu-kashmir-facts.com/
- Jammu and Kashmir — the Complete Knowledge Base — http://www.jammu-kashmir.com/
- Kashmir: A Paradise turned into Hell by Terrorism — http://www.kashmir-information.com/
- Save Kashmir Movement — http://www.savekashmirmovement.org
- The Truth About Kashmir — an Indian perspective — http://www.armyinkashmir.org/index.html
- Pro-Pakistani, Anti-Indian
- Pakistani Government — http://www.pak.gov.pk/govt/kashmir/kashmir-page.html
- Jihad in Kashmir — http://www.homestead.com/kasheer/jihadvkashmire.html
- Desolation or Peace — http://www.kashmir.demon.co.uk/
- Jammu and Kashmir — The slaughter of democracy — Maintained by Gharib Hanif:
- Jihad-e-Kashmir — mainly in Urdu: http://www.jihad-e-kashmir.org.pk/
- Kashmir Council for Human Rights — Lord Eric Avenbury, Chairman British Parliamentary Human Rights Group, is patron of this organisation: http://www.ummah.net/kashmir/kchr/
- Kashmir Home Page — http://www.alumni.caltech.edu/~mughal/kashmir/kashmir.html
- Kashmir Watch — http://www.kashmirwatch.org/
- PakNews Kashmir Conflict — Regular Updates on the Kashmir Conflict: http://www.paknews.org/kashmir/
- The Rape of Kashmir — Graphic website on the ongoing war over Kashmir: http://www.kasheer.uni.cc/
- Pro-Independence, Anti-Indian, Anti-Pakistani
- I'm uncertain exactly how to categorize these...
- Muttahida Jihad Council — http://www.freeyellow.com/members5/unitedstatesofislam/index.html — undoubtedly on Pakistan's side in any Pakistan-vs-India debate, but taking a much broader view for an Islamic "New World Order".
| <urn:uuid:8f739c02-3725-492c-b7f0-5a3270a095a1> | CC-MAIN-2017-04 | http://www.cromwell-intl.com/cybersecurity/netusers/Index/ipk | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00444-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.865337 | 3,368 | 2.53125 | 3 |
The Internet has become an integral part of our daily lives, and not just for entertainment. At work Internet applications are used to communicate, collect data, research, sell products, and handle just about every other business process from hiring to customer relationship management.
Given the amount of sensitive and financial information that is transmitted over the Internet every hour, the Internet is an obvious place for cyber criminals to conduct their illegal activities. Beyond the sheer volume of traffic, the proliferation of insecure web applications makes web-based hacking attacks even more attractive, and even more profitable.
Breaking into computer systems for malicious intent is nothing new. Since the early eighties skilled computer enthusiasts, or hackers, have used their knowledge to break into systems with no redeeming intent. However with the advent of web based applications, the sophistication of hacking attacks has dramatically increased while the amount of skill required to carry out these attacks has proportionately lessened.
Malicious hackers nowadays can make use of a number of tools that help them automate their attacks. Using scanning tools, the attacker performs the first step of the attack: enumeration. In this phase, information is gathered about the intended target. With specific tools, the attacker can scan multiple computers, routers, servers, and web sites at once, looking for information that will help them attack the machine easily. Add to this the ability to conduct the enumeration process with an army of zombie computers, and the number of vulnerable systems they can identify grows with the size of the botnet they control.
Once the targets have been identified, the attacker continues to analyze them, looking for known vulnerabilities. Depending on the attacker's overall goal, they could be searching for any number, or combination, of vulnerabilities to exploit. These can include, but are not limited to:
Once the vulnerabilities are identified, the attacker can move into the last stage of their attack, exploiting the computers.
Using the information found in the vulnerability analysis, the attacker then attempts to exploit the target computers. Again, this process can be automated like the others, and when launched from a large botnet army the attacker can exploit thousands of victims with minimal effort on their part.
Hacking attacks can have detrimental effects on the victim. These effects vary according to the type of attack the hacker launched and what the target of their attack is. Unfortunately for many Web Sites, there are multiple ways to exploit them.
When a web site or network is attacked, the blame falls on the owner. It is their responsibility to ensure that any service or application that they are running is protected against the vulnerabilities that can be used to exploit their property, and that includes their web site.
To protect customers and employees from having their financial or private information stolen, both industry and governments have implemented regulations with the intent of securing against common hacking attacks. To combat credit card fraud, the Payment Card Industry created the Data Security Standard, which requires merchants who process credit cards to take specific measures that help protect against hacking attacks. The European Union, United Kingdom, United States, and Canada are among the governments that have also instituted privacy acts meant to regulate how businesses protect their customer and employee data from malicious hackers.
In addition to the fees and legal ramifications that can come as a result of failing to comply with the different regulations, hacking attacks can also damage a company’s reputation to the point that it loses customers and revenue. A company that is in the news because it has been hacked is sure to lose the trust of even its most loyal customers. The same happens with web sites that are identified as containing spam or malicious scripts. Once this is known, most visitors will stay away. And if losing traffic weren’t bad enough, once the search engines have identified a site as malicious, its placement in search results falls dramatically, rendering any Search Engine Optimization work essentially useless until the problem is corrected.
IBM’s X-Force Trend report stated that, “Web applications remain the Achilles heel for the security industry”. With over 80% of all web sites containing at least one vulnerability, web application security needs to be addressed by any company with a web presence: protecting web applications not only helps to protect your web site from attack, but can also protect your web servers and any other network resources that access them.
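To make the idea of an application-level vulnerability concrete, here is a minimal, generic sketch (my own illustration, not taken from the report or from dotDefender; SQL injection is used here only as a representative example) showing how a query built by string concatenation can be subverted, and how a parameterized query avoids the problem. It uses Python's built-in sqlite3 module purely for demonstration.

```python
import sqlite3

# Toy database standing in for a web application's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")
conn.execute("INSERT INTO users VALUES ('bob', 'user')")

user_input = "nobody' OR '1'='1"  # attacker-controlled form field

# Vulnerable pattern: the input is pasted into the SQL text, so it can
# rewrite the query and return every row.
vulnerable_rows = conn.execute(
    "SELECT name, role FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safer pattern: a parameterized query treats the input strictly as data.
safe_rows = conn.execute(
    "SELECT name, role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable_rows)  # [('alice', 'admin'), ('bob', 'user')] -- data leaked
print(safe_rows)        # [] -- no user is literally named "nobody' OR '1'='1"
```

A filtering layer placed in front of the application, like the one described below, aims to catch this kind of malformed input before it ever reaches the query.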
dotDefender enables companies to address challenges facing their web site in a straightforward and cost-effective manner by utilizing a Security as a Service solution. dotDefender offers comprehensive protection against the vulnerabilities that hacking attacks use against your web site every day.
The reasons dotDefender offers such a comprehensive solution to your web application security needs are:
Architected as plug & play software providing optimal out-of-the-box protection, dotDefender creates a security layer in front of the application to detect and protect against application-level attacks in incoming web traffic that could be used to compromise the web server, steal sensitive information, or disrupt web services. | <urn:uuid:3ef7f0f2-aee8-44ff-b27c-3361630cdc5d> | CC-MAIN-2017-04 | http://www.applicure.com/solutions/hacking-attacks | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00069-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.946648 | 1,013 | 3.015625 | 3 |
Thirty local governments in England tested various technological improvements to voting or vote counting in May 2002. Some jurisdictions used new technologies for the polling place, such as touchscreen voting machines; others tested techniques for voting remotely.
Nine jurisdictions allowed voters to cast their ballots using electronic methods, such as interactive voice response (IVR) technology, PC-based systems and handheld mobile devices via short message service (SMS). Some of these jurisdictions allowed voters to cast ballots from PCs or kiosks in public places such as shopping centers.
"The central government provided funding and overall strategic planning, and the local officials put it into practice and did all the legwork," said Tom Hawthorn, assistant policy manager of the Electoral Commission, a nonpartisan entity that evaluates and reports on the administration of elections to the UK Parliament, European Parliament, Scottish Parliament, and the Welsh and Northern Ireland Assemblies.
“This wouldn’t have happened if the local authorities had to pay for it,” said Alan Winchcombe, electoral deputy of the Swindon Borough Council. Swindon tested online voting using PCs, and telephone voting via IVR, in 19 of its 22 wards. The 19 wards comprised 126,953 voters.
"Swindon was only interested because the central government was paying the bill," Winchcombe said.
Not Just About Turnout
Overall, the pilot tests succeeded, Hawthorn said, although he was careful to define the parameters of success in this instance.
"The big concept for us is multichannel voting," he said. "There is an acceptance that putting in place new ways to vote isn't necessarily going to raise turnout. We're looking at expanding the range of choice voters have of ways they can cast their vote."
In Liverpool, voters could choose to vote over the Internet, with an IVR system or using a handheld mobile device. In St. Albans, voters could use the Net, an IVR system or kiosks scattered throughout the area.
"Voters who have taken part in the tests have been very positive about the choices being made available to them," Hawthorn said. "Remote voting won't be for everybody, and what we're looking at doing is make sure there are a range of options available so people can pick the one that best fits the way they live."
Though increasing turnout is a goal for English elections officials, the May tests were setting the foundation for future electronic voting.
"If we're looking at these pilots as a sound basis for a longer-term, perhaps six or seven year program of development, then they were an excellent beginning," Hawthorn said. "The will is there. The enthusiasm is there. We just have to identify the best technical solutions to the policy aims that we've got. Hopefully, we can put in place procedures that will allow people to vote using whatever technology that we have at the moment or will be developed over the next couple of years."
One Vote, One Identifier
Building confidence in remote electronic voting relies, in part, on the security of voter identity. For the English tests, local elections officials compiled lists of eligible voters and submitted the lists to a vendor that created unique identification measures for those voters.
"Every voter was supplied with a PIN, and they could use the PIN to vote either by the Internet or the telephone," said Winchcombe. "It was a 10-digit PIN that the voters had to enter as two numbers - one block of six numbers and one block of four numbers."
Officials then delivered the PINs to eligible voters.
"They didn't go through the normal mailing system," he said. "We employed people to deliver the PINs to voters' households. The PIN was generated by the vendor. We've got 127,000 registered voters here, and the vendor sent us a random range of 127,000 PINs."
Officials did not want to base PINs on personal information, such as birth dates, to protect against someone guessing another person's PIN and casting a fraudulent vote, Winchcombe said.
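The article does not describe the vendor's actual PIN-generation process, so the following is only a sketch of the stated requirements: a 10-digit PIN per voter, issued at random rather than derived from birth dates or other personal data, and presented as a six-digit block plus a four-digit block. The voter identifiers are invented; the snippet uses Python's secrets module for cryptographically strong randomness.

```python
import secrets

def issue_pin() -> str:
    """Return a random 10-digit PIN formatted as a 6-digit and a 4-digit block."""
    digits = "".join(str(secrets.randbelow(10)) for _ in range(10))
    return digits[:6] + " " + digits[6:]

# One PIN per registered voter; nothing about the voter (birth date,
# address, etc.) feeds into the value. A real system would also need to
# guarantee uniqueness and store the PINs securely for later verification.
voter_ids = ["SW-000001", "SW-000002", "SW-000003"]  # hypothetical identifiers
pins = {voter_id: issue_pin() for voter_id in voter_ids}
print(pins)
```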
For jurisdictions testing online voting from remote PCs, voters visited a specially created Web site using secure sockets layer protocol, entered their PINs and viewed their "ballot."
"Voters could make their choices, confirm their choices and then send the vote off," said Hawthorn. "In most cases, the voters got a receipt back saying their vote had been cast as they wanted."
That receipt is a printed screen shot of the vote confirmation page, which Hawthorn said is a crucial component of establishing an audit trail. Though remote electronic voting holds much promise, verifying voting data is key to ensuring the validity of a particular election and its electronically cast votes.
The central government and local governments have not yet devised a procedure for deciding which pieces of data will be included in the audit trail, he said, though the pilot tests will provide the opportunity to decide how an audit trail will be established.
"We're here to make sure any system is robust enough to withstand, potentially, quite extensive attacks on the integrity of the system and high user demand," Hawthorn said. "We want to make sure that people have the opportunity - that the access points are kept clear - and the data is counted without having been manipulated by a third party and is not susceptible to attack."
By the Numbers
Remote electronic voting holds promise for other countries because English voters generally didn't shy away from using new methods to cast their ballots. In five of the local jurisdictions, voters had the chance to vote online using their home PCs, PCs at the polling place or a public kiosk.
Usage of the new voting methods varied widely by jurisdiction, according to the Electoral Commission's August evaluation of the 2002 elections, which is available online.
In the St. Albans pilot, approximately 50 percent of votes were cast via the Internet or telephone. Four of 10 Liverpool voters cast their ballots using the Internet, telephone or text messaging. In Swindon, however, only 16 percent of votes were cast using remote electronic technologies.
Overall, across the five local jurisdictions that offered multiple methods of remote electronic voting, the Electoral Commission found that 76.5 percent of votes (49,545) were cast at polling places or via mail-in ballots, by far the most popular way to vote. Approximately 14.6 percent of votes (9,479) were cast over the Internet. Voting via IVR accounted for 6.1 percent (3,934 votes), and 2.7 percent (1,772 votes) were cast using SMS on mobile devices.
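A quick arithmetic check (my own, not part of the Commission's report) confirms that the quoted percentages are shares of the votes cast across those five pilot areas:

```python
votes = {
    "polling place / postal": 49_545,
    "internet": 9_479,
    "telephone (IVR)": 3_934,
    "SMS text message": 1_772,
}
total = sum(votes.values())  # 64,730 votes in all
for channel, count in votes.items():
    print(f"{channel}: {count / total:.1%}")
# polling place / postal: 76.5%, internet: 14.6%, IVR: 6.1%, SMS: 2.7%
```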
Making Voting Better
Both the central government and local jurisdictions were pleased by the results of the remote electronic voting tests, said Thomas Barry, policy adviser in the Office of the Deputy Prime Minister.
"In Sheffield, a woman who is a paraplegic was able to vote for the first time using computer voice-recognition technology," Barry said. "That's just one very extreme instance of how new technology increased the opportunity for voting."
Surveys of those who voted electronically - remotely or at polling sites - indicated voters found the technologies easy to use, Barry said, though a degree of uncertainty did emerge.
"There is a perception that electronic voting isn't as secure as the traditional way of voting," he said, noting that the Electoral Commission contacted each police force in the 30 pilot areas; no reports of fraud or the undermining of security were made.
Combating that perception - and informing voters about remote electronic voting in general - will require some public-relations work.
"The central government and the Electoral Commission and the local authorities have to do more to communicate the new strategy to voters," Barry said. "Certainly, this was evident in the analysis of the pilot programs - we can do more to convey the new ways of voting to not only the electorate, but also to the key stakeholders: political parties, the candidates and elections officials."
Swindon's Winchcombe said voters in his borough, when surveyed after the election, reported that they felt safe using the new methods.
"The perception was that it was safe, secure, secret and we wouldn't tamper with it," he said. "They trusted us. If you voted, everything was being recorded in Seattle [by VoteHere, an electronic voting company], and the results were transmitted back to us. When you voted, the server you were talking to was in the United States. We had registered voters from Swindon vote from all over the world - Peru, Brazil, Thailand, Korea."
Vote from Your TV
On the heels of the May pilot, Barry said England already is looking at testing interactive digital TV (IDTV).
One jurisdiction was slated to test voting via IDTV in May, but couldn't due to staffing issues. In the next round of tests, though, officials hope IDTV will be offered as a voting option.
"We will be inviting local authorities to consider using IDTV as another method for remote electronic voting," Barry said. "We conducted some research into the implementation of electronic voting in the UK, and the research concludes that voting by IDTV is possibly one of the most secure ways of transmitting data, and therefore, it's prudent for government to consider this method."
If no local jurisdictions jump at the chance, the Electoral Commission itself has statutory power to create a partnership with a local jurisdiction and submit a joint application to test IDTV, Barry said, adding that next year the pilot test will include many more jurisdictions.
"This year, we had | <urn:uuid:12e810b8-d528-4dee-98f2-0f4eb18c8d56> | CC-MAIN-2017-04 | http://www.govtech.com/security/England-Tests-E-Voting.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00463-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.969813 | 1,940 | 2.8125 | 3 |
On a daily basis, most people using desktop operating systems consume resources and ‘rich content’ from unknown sources on the Internet, typically via technologies ‘under the hood’ of our Web browsers. These include Java, browser plugins like Adobe Flash, PDF readers, HTML5 and others. All are meant to create a rich and seamless user experience. Consider some scenarios that a user might expect to experience while browsing the Web.
When a browser plugin wants to create or access a file, wouldn’t it be great if it could do it automatically without user interaction? If a Java applet could open during a general browsing session and change OS settings to help fix a user’s problem automatically, wouldn’t that be a positive thing? If a PDF file could simultaneously execute code that corresponds to the content of the file, wouldn’t that be handy and increase user productivity?
On the surface, each scenario sounds like a great idea. Allowing these technologies to have unrestricted access to a computer’s resources seems like common sense, but these things all assume that you trust the source of the content and the content itself. Do you trust the PDF file you are opening? Are you sure the Java applet that opened didn’t come from malicious advertisement on your favorite news website?
Malicious usage of these technologies happens every day. Coupled with social engineering and compromised websites (often advertising), malicious code is distributed to its target. Technology companies behind the tools we use every day have reacted to the threats. Instead of unrestricted access to system resources they have taken the completely opposite approach.
Sandboxing is one approach that demonstrates how the concept of system resource trust has changed. PDF readers sandbox content so that code executed within a file is severely restricted in what it can do. Java applets, besides code-signing, also sandbox code execution to limit the potential damage that a malicious applet can inflict. Malicious payloads in PDF files and Java applets face barriers so that they cannot simply reach out and touch their target in your PC.
So, is the problem solved? A quick scan of security conference proceedings will show how researchers have been able to bypass the sandbox security features of Java, browser plugins and PDF readers. The browsers themselves have shown a lack of trust for Java and browser plugins, and now contain sandboxing technology at the browser. These browser sandboxes have also been compromised. Security contests such as Pwnium and Pwn2Own demonstrate flaws in browser sandbox security.
There are reasons for this. The underlying desktop OS was engineered before security became a focus. Seamless user experiences and open, trusting resource allocations were a trademark of desktop operating systems. Process memory hooking — now widely known as a key for viruses such as Zeus — was originally engineered into desktop operating systems to enable richer, more seamless user experiences. Even though applications running in desktop memory processes have sandboxing technologies, the underlying operating system has many flaws that enable malicious code to bypass the application sandbox.
The bad guys have a technology pipeline that is much like that of a legitimate software company. There are entire crops of ‘zero-day’ attacks against all Web browsing technologies that are just waiting to execute their goal: bypass the security mechanisms of your browsing tools and execute malicious code on your PC. | <urn:uuid:67eef4be-0ecc-4821-ab4f-d88e2ca6cc04> | CC-MAIN-2017-04 | https://www.entrust.com/playing-digital-sandbox-balancing-system-trust/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00491-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947681 | 672 | 2.8125 | 3 |
The Human Body Is a Network of Interoperating Systems that Have to Work Together Seamlessly
Genetics is our operating system (OS). Epigenetics, nutrition, hormones, enzymes, vitamins, exercise, lifestyle, et cetera are our OSI layers and protocols. A person's health span (defined as the healthy productive time before the onset of age-associated decline) depends on how smoothly our OS, OSI layers, and protocols work together. Successful healthcare is about preventing diseases and slowing decline to achieve a great health span. Success in this work requires collecting data, analyzing it, and assuring that the healthcare systems we put in place are working together as well as our bodies themselves. As a longtime developer of healthcare IT solutions, this metaphor of body-as-network makes it much easier for me to explain the importance of healthcare data.
ExtraHop can go one step further and provide the monitoring and analytics to make enormous gains in the value we can extract from this data, especially as we transition to FHIR, the new standard for healthcare data exchange.
What Is FHIR?
FHIR (pronounced "fire") stands for Fast Healthcare Interoperability Resources. FHIR is the latest interoperability standard that has emerged from the non-profit Health Level Seven (HL7) organization to enable secure electronic sharing of healthcare data. FHIR incorporates the best features from previously developed HL7 standards using modern paradigms for data transfer, and current security standards to solve clinical and administrative problems in a practical way. FHIR defines provenance or origin of data and security event resources suitable for tracking the origins, authorship, history, status and source of resources.
FHIR uses four main paradigms for communicating health data: REST APIs, Messages, Documents, and Services. FHIR is differentiated from previous standards in many ways, but the two root differences that make it so revolutionary are:
Security: All production health data exchanged using FHIR is required to be secured using TLS/SSL. This makes it much more secure than previous HL7 standards.
Resources: FHIR uses standardized data formats and elements, collectively called "Resources." A Resource is the smallest possible unit of transaction in FHIR, with a known identity providing meaningful data.
FHIR is suitable for use in a wide variety of contexts, including mobile phone apps, cloud communications, EHR-based data sharing, server communication in large institutional healthcare providers, and more. FHIR is open source, free, scalable and flexible.
Why a New Standard? Because Healthcare Is Changing Rapidly
Let's back up for a moment and talk about why we need a new healthcare data exchange standard at all.
On March 23, 2010, President Obama signed the Affordable Care Act (ACA), effecting huge changes in health insurance, patient's rights, Medicare and Medicaid payments, and many other aspects of healthcare. This caused a rise in the number and complexity of Electronic Health Record (EHR) implementations. In 2015, requirements around improving quality and lowering costs and paying physicians based on "value not volume" forced EHR vendors to address the lack of interoperability between the EHRs.
EHRs now must be able to support various payment models, including value based payments, bundled episode-based payment, case rate, package pricing and episode-of-care payment, so that the providers can be reimbursed. This resulted in an urgent need to change the existing model to remedy EHRs' complete lack of interoperability.
The HL7 organization decided to turn to other industries to look for ideas for improving interoperability. What they found pointed strongly to the use of RESTful APIs. HL7 FHIR combines the best features of HL7 Version 2 (V2), HL7 V3, and Clinical Document Architecture (CDA), while leveraging the latest web service technologies and concepts.
Based on their findings, HL7 International decided to use popular, open, and accessible technologies for their next standard. For that reason, FHIR uses a modern web-based suite of Application Programming Interface (API) technology, including an HTTP-based RESTful protocol, HTML and Cascading Style Sheets (CSS) for user interface integration, a choice of JSON or XML for data representation, OAuth for authorization and Atom for results.
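As a purely illustrative sketch of that RESTful paradigm, the snippet below requests a single Patient resource as JSON over TLS using the third-party requests library. The server URL, patient ID and token are placeholders, and the exact media type varies by FHIR version (DSTU2 used application/json+fhir; later releases use application/fhir+json).

```python
import requests  # third-party HTTP client (pip install requests)

BASE_URL = "https://fhir.example-hospital.org/baseDstu2"  # hypothetical endpoint
ACCESS_TOKEN = "..."  # OAuth 2.0 bearer token obtained separately

response = requests.get(
    f"{BASE_URL}/Patient/12345",
    headers={
        "Accept": "application/json+fhir",          # DSTU2-era media type
        "Authorization": f"Bearer {ACCESS_TOKEN}",  # OAuth authorization
    },
    timeout=10,
)
response.raise_for_status()
patient = response.json()           # the parsed FHIR Patient resource
print(patient.get("resourceType"))  # -> "Patient"
```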
To track the development of this new standard, FHIR Connectathons were established very early on to verify specification approaches; they continue at HL7 workgroup meetings.
What is Interoperability, and Why Does it Matter in Healthcare?
Interoperability, in the context of healthcare, means the ability of systems that store and use data to exchange that data with other systems without compromising the security or fidelity of the data, and without an undue amount of effort.
For the purposes of our FHIR discussion, interoperability requires making use of the following paradigms for data storage and exchange:
- REST: Use of small lightweight exchanges with low coupling between systems
- Messages: To allow communication between multiple resources in a single exchange
- Documents: Allows persistence when data spans multiple resources
- Services: Allows the use of custom service if the standard service does not fit the requirement
Basic Terms and Concepts in FHIR
Resource: The basic building block, or modular component, in FHIR is a Resource. Resources define behavior and meaning, have a known identity and location, are the smallest possible unit of transaction, and provide meaningful data that is of interest to healthcare. Resources will be limited to 100 to 150 in total but can be extended if needed for optionality and customization; in this respect a Resource is similar to an HL7 V2 segment.
Resources are data formats and elements, essentially a structured model of a JSON or XML object. Each object will have its own Universal Resource Identifier (URI) with a unique identifier like a Medical Record Number (MRN). Every resource consists of an Identifier, Human Readable summary, Extension, Contained Resources, Metadata, Resource content and Tags. Human readability will allow the data to be viewed in a standard web browser, even if none of the structured data is able to be imported into the receiving system. Resources may also be combined into a Resource Bundle, a collection of multiple resources and even full-message exchanges.
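To make the Resource idea concrete, here is a hand-written example of roughly what a Patient resource looks like, expressed as a Python dictionary mirroring its JSON form. The values are invented; only the general shape (resource type, identity, metadata, human-readable text, identifiers and structured content) follows the FHIR specification.

```python
patient_resource = {
    "resourceType": "Patient",
    "id": "12345",                       # known identity; forms part of the resource URI
    "meta": {"versionId": "2", "lastUpdated": "2016-07-01T10:15:00Z"},
    "text": {                            # human-readable summary, viewable in any browser
        "status": "generated",
        "div": "<div>Jane Doe, female, born 1980-04-02</div>",
    },
    "identifier": [                      # e.g. a medical record number (MRN)
        {"system": "urn:oid:1.2.36.146.595.217.0.1", "value": "MRN-0042"}
    ],
    "name": [{"family": ["Doe"], "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1980-04-02",
}
```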
There are various categories of resources that can be used in FHIR, including:
- Clinical (Medications, Diagnostics, Observations or other General objects)
- Identification (Individuals, Groups, Entities, Devices)
- Workflow (Patient Management, Scheduling, Workflow #1 and #2)
- Administrative (Attribution, Workflow)
- Infrastructure (data payloads such as documents, message headers specifying source and destinations, Composition, Query, Profiles, Value Sets, Information tracking, Documents and Lists, Structure, Exchange)
- Conformance (Terminology, Content, Operations Control, Misc)
- Financial (Support, Billing, Payment, Other)
Additionally, the FHIR specification defines a set of Data Types that are used for the resource elements. There are two categories of data types:
- simple or primitive types, which are single elements, and
- complex types, which are re-usable clusters of elements.
Clinical Document Architecture (Reaffirmation) (CDA(R)) specifies the structure and semantics of "clinical documents" for the purpose of exchange between healthcare providers and patients, e.g.: Discharge Summary, Imaging Report, Admission & Physical, Pathology Report, etc. Different Reaffirmation versions are applicable based on the type of CDA section.
Draft Standard for Trial Use (DSTU): this term denotes versions of FHIR that are being tested but should not be considered finalized, as opposed to "Normative Versions" that are standardized after extensive testing.
Why do we need FHIR when we already have HL7?
HL7 V2 is a well-established standard that works within institutions to connect applications. However, it is a legacy standard with unique syntaxes and custom tools that have caused an explosion of HL7 variants, resulting in problems of data integration between systems and institutions over time. HL7 V2 also has no requirements for privacy, security or authentication, and it limits modern devices and apps that are trying to make use of the vast trove of stored patient data in a meaningful manner. This lack of security and interoperability has made HL7 V2's limitations a barrier to patient engagement.
HL7 V3, though based on a reference model, was overly complex to implement and lacked backwards compatibility with HL7 V2. Hence HL7 V3 was dead on arrival: it was not going to help anyone in healthcare make or save money, nor meet the Centers for Medicare and Medicaid Services' (CMS) new mandatory bundled payments model under the Medicare Access and CHIP Reauthorization Act (MACRA), starting in 2018. This is a significant rule with fundamental changes for Medicare that is related to the Merit-based Incentive Payment System (MIPS). MIPS consolidates components of the Physician Quality Reporting System (PQRS), the Value-based Payment Modifier (VM), and the Medicare Electronic Health Record (EHR) Incentive Program. MACRA mandates interoperability that HL7 V2 and V3 were unable to provide. FHIR will step in to fill this requirement.
The HL7 Clinical Document Architecture (CDA) is an interoperable content standard to help organizations exchange clinical data. CDA takes a document approach, providing the ability to group related content about the patient into a single document format.
In contrast, FHIR presents discrete elements of information, for example - individual lab results, demographic information, medications and more – as data representations called Resources. Discrete resources in FHIR offer much greater potential for interoperability than the CDA document approach.
What will change in healthcare with FHIR?
EHRs have done their job of collecting and storing healthcare data. With FHIR, as patients move around the healthcare ecosystem, their electronic health records will be available to support automated clinical decision support and other machine-based processing, and the data will be structured and standardized. Given the embedded nature of HL7 V2, the transition to FHIR will not happen overnight. The healthcare market will ultimately decide the success of FHIR and whether FHIR survives or coexists with older standards.
What are the different FHIR workflows in healthcare?
- Communications between applications: Initially a broker may be needed, such as an interface engine that acts as middleware between applications that may still use older HL7 versions.
- Patient engagement: FHIR will be used to provide patients with timely data and alerts using REST standard.
- Link the different EHRs: FHIR will make it possible for all patient data stored in many different data repositories, such as Health Information Systems (HIS) and Picture Archiving and Communication Systems (PACS), to be liberated and made available to different departments within and outside the healthcare facility.
- Vendor Neutral Archive (VNA) or Vendor Neutral Repository: Eventually the goal is to be able to store patient data documents and images or potentially any file of clinical relevance in a standard format with a standard interface, such that they can be accessed in a vendor-neutral manner by other systems. This will allow applications to solve creative workflow problems.
How Is FHIR Being Developed?
FHIR is still being developed by HL7: the second Draft Standard for Trial Use (DSTU2) became available in 2015, the first normative edition is planned for 2017, and the second normative edition for 2018. This will be a big step towards population health management. Major EHR vendors like Athenahealth, Cerner and Epic have agreed to support an early implementation effort, known as the Argonaut Project, which will be important for the open API requirements of Meaningful Use Stage 3.
SMART on FHIR is a project that started in 2010, funded by The Office of the National Coordinator for Health Information Technology (ONC), to build an app platform for healthcare allowing innovation, creative use of data within the EHR, enabling third party plug-in apps and support apps to be chosen by clinicians. SMART on FHIR specs provide means for healthcare organizations or developers to access discrete clinical data—such as medications, problems, lab results, immunizations and patient demographics.
What Interoperability Means for Population Health and Healthcare Costs
According to Centers for Disease Control and Prevention (CDC) data from 2016, chronic diseases and conditions are responsible for 7 of 10 deaths each year, and treating people with chronic diseases accounts for 86% of our nation's health care costs.
An enormous amount of healthcare effort and cost goes toward just a few chronic diseases and conditions, including heart disease, stroke, cancer, type 2 diabetes, hypertension, obesity, arthritis, Alzheimer's Disease and other dementias, multiple sclerosis (MS), Lou Gehrig's Disease (ALS), asthma, oral health, cystic fibrosis and chronic obstructive pulmonary disease (COPD). In spite of the barrage of prescription medications administered and invasive surgical procedures performed, the declaration of war on cancer in 1971, and the completion of the Human Genome Project in 2003, no strategy seems to be working to reduce the number of deaths and the cost attributable to these chronic diseases. Unless we understand the root cause of a disease, a treatable disease can become a chronic and progressive lifelong condition. Interoperable healthcare data systems will play a vital role in understanding the root causes of the most costly and deadly diseases.
Can the new healthcare laws, "Value Not Volume" based payments and EHR interoperability in healthcare help in changing the current healthcare conditions?
Yes, I think there is a fair chance. Many of the adverse incentives in healthcare have changed substantially for the better.
Am I Optimistic About Future Positive Changes in Healthcare?
I have been working for the last few years in healthcare applications. Before that I was working in basic research labs, analysing many health related experiences, passionately staying current with many nutrition, health and healthcare topics.
Throughout all this, I could see the limitations of siloed data in healthcare and patient care. Diseases and chronic health conditions are typically treated as if they are just affecting the different parts and organs, not considering the overall impact on the entire human body, and thereby missing the interconnections that complete the picture. Both human health and healthcare data face the same problem, they are both siloed. We need a more holistic approach.
In the near future we will be able to look at the interconnections within the human body, and within healthcare data, in their entirety. This will enable us to paint a complete picture of both. Then it will be time to make use of the enormous amount of EHR data and perform data analytics in a meaningful manner to improve human health, streamline workflows and reduce the cost of healthcare.
So am I optimistic about the future of healthcare as interoperability increases? Yes!
What makes ExtraHop Great for Monitoring FHIR?
FHIR is implemented on top of HL7 and the HTTPS (HTTP Secure) protocol. FHIR is expected to be secured using TLS/SSL. FHIR data will be structured as JSON or XML and standardized. FHIR is for interoperability where we will have access to all healthcare storage systems like HIS and PACS.
In other words, FHIR is being built on technologies that are already well-supported by the ExtraHop platform.
ExtraHop, with protocol metrics for FHIR, will open the door to so many possibilities with FHIR and the apps-based health IT. ExtraHop can drill down into all metrics collected from HL7, DICOM, TELNET, XML, SSL, HTTP along with FHIR, and can connect the dots needed to use that data to address complex human health questions.
Using ExtraHop in conjunction with FHIR will allow IT departments to break free from EHR data silos and get the full clinical value out of each patient's health data. In doing so, ExtraHop can help shift healthcare from a reactive process to a proactive and preventive model. ExtraHop can bridge the current gap between healthcare's siloed data sets by creating meaningful predictive insights from the entire body of structured data, providing a single source of truth.
The increase in interoperability and reduced friction in sharing healthcare data will open the floodgates to enormous improvements in healthcare and disease prevention, and ExtraHop can be a key enabler in these improvements. Just a few possibilities include:
- Identifying the root cause of diseases in patients and populations using disease condition patterns and analytics with a drill down approach
- Identifying the comorbidities associated with the worst health outcomes by alerting on the coexistence of multiple high risk conditions in a patient, even if these conditions are being tracked in separate data silos.
- Recommendations for prescription medications and invasive surgical procedures based on the efficacy, absolute risk reduction and health benefits rather than based on the relative risk reduction, to shift towards outcomes-based pricing.
- Identifying nosocomial (originating in a hospital) infections including sepsis
- Democratization of healthcare data, liberating electronic health records data so that data cannot be cherry picked by a few to make false claims, but data will be widely available for anyone to do research and analysis upon it.
- Creating apps or bundles to address various healthcare issues
A good app or a bundle, distributed widely, can transfer ideas, functionality, workflow and could reshape healthcare IT overnight! In my mind, that's the potential power ExtraHop has to accelerate a positive transformation in healthcare.
Want to learn more about how ExtraHop provides healthcare organizations with insights from wire data that can dramatically improve both doctor and patient experiences? Explore the HL7 analytics scenarios in our free, online demo.
Further Reading on FHIR, HL7, and healthcare data analytics: | <urn:uuid:c3cd7805-d8d7-4bc4-8384-83f39db5c556> | CC-MAIN-2017-04 | https://www.extrahop.com/community/blog/2016/hl7-fhir-hl7-standards-interoperability-future-of-healthcare/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00399-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923551 | 3,720 | 2.71875 | 3 |
Certifying Software Security Professionals: The CSSLP
Humans have been developing computer software for about 65 years now. We’ve come a very long way during that time, and many futurists expect we will see just as much technological advancement in the next 65 years. This amount of change will challenge even the best minds in computer science to keep up.
It can be hard to appreciate just how sophisticated computers have become. But consider this: In 1981, the most sophisticated spacecraft in the world was the U.S. Space Shuttle. The Space Shuttle could launch, orbit the Earth and land without human intervention. The first shuttles were able to accomplish this feat with 104Kb of RAM. Today, you’d be hard-pressed to find anything larger than a wristwatch with only 104Kb of RAM. An average cell phone has more than 2.5 billion bytes of RAM and processes data faster than many of the mainframes used in Mission Control in 1981. It won’t work in the harsh environment of space, but it will do just fine down here on Earth.
However, like the Space Shuttle, a cell phone would be an expensive brick without software. For that reason, reliable and secure software is an essential investment.
Why Software Security?
Until 1975, most programs were written, run and used in the same building. The idea of a computer virus was still science fiction in 1981 when IBM delivered the first PC. Security meant gates and guards, not firewalls and Web filters. During the past 30 years, the explosion in computing required that we change many of our approaches to computer security.
Early efforts at computer security were focused on providing a secure location for the computer. Then the tech community set about building tools that would enhance the security of a particular machine, such as a Web server or an accounting PC. Today, as software becomes more complex, the need for secure software is increasingly critical to software development organizations. That is why (ISC)2 — a certification body that specializes in information security — developed a new certification for software developers, called the Certified Secure Software Lifecycle Professional (CSSLP).
The CSSLP is one way to define a new standard for software development security. (ISC)2 felt the security of software was an important area to investigate. In the course of its research, (ISC)2 found a critical need for specialists in both security and software development and determined that creating a certification program would be the best way to enable widespread adoption of better development security standards.
What the Certification Addresses
When building secure software, it is necessary to address security throughout the life cycle, from concept through maintenance. Although many people might think a security bug is just another kind of coding bug, simply avoiding coding bugs won’t result in secure software. Every year, security flaws arise from incorrect security requirements or design. With more than 14 million software developers worldwide, modern software development organizations must be ready to implement an entire security development life cycle. They also must hire professionals who understand both the principles and practice of secure software development.
Studies sponsored by (ISC)2 have found that professionals who work every day in the field of software development often walk a fine line between profit and process. They must balance the mandate for high productivity with their professional commitment to producing high-quality systems. Those responsible for security must promote security best practices in organizations that often are driven by conflicting priorities. Upon examination, (ISC)2 concluded that these professionals would benefit professionally and financially from clear standards for secure software development and an industry standard recognition of their skills.
The CSSLP is intended for software life cycle professionals who are responsible for improving the security of software and those responsible for developing secure systems or application software. In providing certification opportunities to developers, (ISC)2 aims to establish a base level of professional skill for individuals who wish to pursue this area as a career path.
In a nutshell, the CSSLP is designed to:
- Establish minimum professional standards for a global audience of software developers.
- Provide a portable method for conveying and verifying professional qualifications.
- Encourage opportunities for all organizations to develop software development security capabilities by not tying certification to an enterprise or infrastructure.
- Support specialized areas of information security with critical needs.
Why Certify Individuals?
In the past, other organizations have attempted to certify development organizations or to provide third-party testing for systems. However, organizational certifications tended to localize the expertise to specific geographic or service communities. Moreover, those certifications actually slowed the spread of expertise, since an individual with certified skills might lose that certification by changing jobs.
With the advent of ubiquitous computing, it was necessary to address the need for a global community of professionals who could build skills and drive best practices within every enterprise. Moreover, experts thought a certification program might pave the way for wider acceptance of software certification. By providing an opportunity for professionals to become certified on independent criteria, (ISC)2 is hoping to raise the level of software security throughout the global IT community.
Additionally, rather than certifying only developers of security software (i.e., those who build firewalls and anti-malware programs), the CSSLP is targeted at people who improve the security of all software, including those who improve the security of general-purpose software and those who develop security tools. Consequently, (ISC)2 believes this certification offers benefits to the software community at large.
Certification Body of Knowledge
The field of software security is not easy to master, even on a good day. Just as a pathologist first must learn to become a doctor, a CSSLP-certified professional must learn how to develop software before understanding how it breaks and how to prevent those failures. They must then learn how other people will attack the software and how to prevent those attacks.
These multiple layers of expertise challenge even the best professionals, and as a result, deep dedication to the field is not uncommon. For this reason, the CSSLP CBK, a compendium of secure software development topics, might seem intimidating at first glance.
The CSSLP CBK covers all the stages of normal software development. Candidates must understand requirements, design, coding, testing, deployment, patching, maintenance and disposal. Further, they must learn the security functions associated with each of these stages in the software development life cycle (SDLC).
Additionally, candidates must know how to apply core information security concepts such as risk management, vulnerability assessment, auditing and legal issues. Finally, candidates will be required to show that they understand the mathematical models that represent the engineering foundation for secure software development. (ISC)2 expects that universities will begin to offer graduate degrees in software security as a way to prepare candidates for specialization in this field.
Common Standards of Certification
The CSSLP was designed from the ground up with American National Standards Institute (ANSI) standards in mind. Activities such as job-task analysis and exam-item writing were strictly supervised by (ISC)2 staff to meet ANSI standards. At the same time, the development process was run with an eye toward full globalization of the certification itself.
Today, (ISC)2 supports more than 60,000 certified information professionals in more than 130 countries. Many affiliates of (ISC)2 have operations across several continents. For these reasons, the certification process needed to be universal so certified professionals could move around the world and still know their expertise would be applicable to the local environment.
Software security is a critical element of computing today. Although the CSSLP is new, the pedigree of the organization has been upheld for more than 20 years, and the people behind this creation are confident it will play a positive role in computing for the next 65 years.
James E. Molini, CISSP, CSSLP, is a senior program manager at Microsoft, working in the Identity and Security Division. He has more than 22 years experience in the field of information security, including extensive experience in system and software security, intrusion detection and risk management. He can be reached at editor (at) certmag (dot) com. | <urn:uuid:e652a15a-ddf4-4533-8365-d1c5d4ce6162> | CC-MAIN-2017-04 | http://certmag.com/certifying-software-security-professionals-the-csslp/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00215-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949834 | 1,651 | 2.8125 | 3 |
You know a technology is going mainstream when Microsoft, Cisco, IBM, VMware and Red Hat all make big announcements about embracing it. Such is the case with containers.
But as containers have grown in popularity among developers during the past year, I still get asked by people, “What are containers?” There still seems to be some education needed about exactly what containers are and how they’re used. So, Cloud Chronicles and Network World give you Frequently Asked Questions (FAQ): Containers.
What are containers?
Containers can be thought of as a type of virtualization for the operating system. Typically virtualization refers to hardware, using a software hypervisor to slice up a server into multiple virtual machines. Container technology virtualizes the operating system, abstracting applications from their underlying OS.
Blogger Greg Ferro has a good summary on his blog:
“Containers virtualize at the operating system level, Hypervisors virtualize at the hardware level.
Hypervisors abstract the operating system from hardware; containers abstract the application from the operating system.
Hypervisors consume storage space for each instance. Containers use a single storage space plus smaller deltas for each layer and thus are much more efficient.
Containers can boot and be application-ready in less than 500ms, which creates new design opportunities for rapid scaling. Hypervisors boot according to the OS, typically in 20 seconds, depending on storage speed.”
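The sub-second start-up claim is easy to sanity-check on any machine with Docker installed. The sketch below assumes the docker CLI is on the PATH and that the small alpine image has already been pulled; it times a complete start-run-exit cycle of a throwaway container, a rough end-to-end figure that includes CLI overhead, so it will not match the 500 ms number exactly.

```python
import subprocess
import time

start = time.perf_counter()
# Start a container from the local alpine image, run a no-op command,
# and remove the container when it exits (--rm).
subprocess.run(["docker", "run", "--rm", "alpine", "true"], check=True)
elapsed = time.perf_counter() - start
print(f"container started, ran and exited in {elapsed:.2f} s")
```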
What’s the advantage of containers?
Containers have a couple of appealing qualities, most notably speed and portability. Containers are often described as “lightweight” because they don’t have to boot up an operating system like a virtual machine does - so, containers can be spun up very quickly. The other common advantage associated with containers is their portability; containers can run on top of a virtual machine, on physical or bare metal servers in a public cloud or on-premises - it doesn’t matter.
Are containers new?
No, not at all. The current hype is around Linux Containers, which have been around for more than 10 years. Before Linux containers, Unix had container technology. Even earlier systems from Oracle Solaris had the concept of Zones, which are basically an equivalent of containers.
Why all the hype now about containers?
As more new social, mobile and web-scale applications are being built, containers are seen as an emerging tool for developers to use in these types of applications because of the advantages outlined above. Concurrently, much of the hype about containers has been galvanized by the rise of a company named Docker, which is attempting to commercialize an open source project of the same name that automates the deployment of an application as a container. Basically as interest in containers is growing, companies like Docker and others are making containers easier to use.
What does Docker do?
Docker is an open source tool for packaging applications inside containers; Docker is basically used to make containers. Docker also has what’s called the Docker Hub, which is a registry of containers that have been developed to be used with specific programs, such as MongoDB, Redis, Node.js and others.
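As a small illustration of how the Docker Hub registry is used in practice, the commands below (wrapped in Python's subprocess module, though any shell would do) pull the public redis image and start it as a background container. The container name is arbitrary, and this assumes a local Docker installation with network access to Docker Hub.

```python
import subprocess

# Fetch the official redis image from the Docker Hub registry.
subprocess.run(["docker", "pull", "redis"], check=True)

# Run it detached (-d) as a named container; Docker creates the container
# from the image and starts the redis server inside it.
subprocess.run(["docker", "run", "-d", "--name", "demo-redis", "redis"], check=True)

# Tidy up: stop and remove the container when done.
subprocess.run(["docker", "stop", "demo-redis"], check=True)
subprocess.run(["docker", "rm", "demo-redis"], check=True)
```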
Are containers a replacement for virtual machines?
This one depends on who you ask. Some believe that containers offer a better way to run certain applications compared to just running them on virtual machines. Generally, the theory is that in an environment with multiple operating systems (Windows and Linux, for example), virtual machines are helpful, while in a homogeneous OS environment (all Linux), containers could be more helpful. It also depends on the application. In some circumstances a developer may want a dedicated virtual machine, or perhaps even a whole physical server, for running an application. In other situations a VM can be a good platform for running containers, and in yet other scenarios containers could be best run on bare-metal servers.
Kubernetes is an open source project created by Google that specializes in cluster management. Part of its functionality includes being able to manage Docker, which creates containers. So, think of Docker as an engine for creating containers, and Kubernetes as a tool for managing the scheduling of containers or groups of containers. | <urn:uuid:4a34ee16-ba0e-4385-a44d-95ad6ea66962> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2601434/cloud-computing/faq-containers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00031-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944474 | 863 | 2.75 | 3 |
Lithium is the first alkali metal in the periodic table. It is soft and silvery-white in color. It is also the lightest solid metal. It has high specific heat, high thermal conductivity, low density and low viscosity. Lithium chemical compounds include lithium carbonate, lithium chloride, butyl lithium, lithium hydroxide and lithium bromide. These compounds have specific properties like a high coefficient of thermal expansion, catalytic and fluxing characteristics, and high electrochemical potential.
The primary use of lithium chemicals is in batteries. These are of two types: rechargeable and non-rechargeable. Rechargeable batteries are used in cell phones, laptops, etc. They are lighter and have a higher energy density than nickel-cadmium and nickel-metal hydride batteries. Non-rechargeable batteries come in cylindrical and coin-shaped varieties used in digital cameras and calculators. These are light and have a longer life than alkaline batteries.
Another use of lithium chemicals is in lubricants, where they act as a thickener in grease. In aluminium smelting, lithium helps to reduce power consumption and bring down fluorine emissions. In air treatment, it is used as an absorption material for controlling humidity and in drying systems. The pharmaceutical sector uses it as a treatment for manic depression and in other products.
This report aims to estimate the global lithium chemicals market for 2014 and to project the expected demand by 2019. This market research study provides a detailed qualitative and quantitative analysis of the global lithium chemicals market. It provides a comprehensive review of major drivers and restraints of the market. The global lithium chemicals market is also segmented by major applications and geographies.
An in-depth market share analysis, in terms of revenue, of the top companies is also included in the report. These numbers are arrived at based on key facts, annual financial information from SEC filings, annual reports, and interviews with industry experts and key opinion leaders such as CEOs, directors, and marketing executives. A detailed market share analysis of the major players in the global lithium chemicals market has been covered in this report. Some of the major companies in this market are Chengdu Tianqi (China), Rockwood Lithium (Germany), Talison Lithium (Australia), SQM (Chile), etc.
1.1. ANALYST INSIGHTS
1.2. MARKET DEFINITIONS
1.3. MARKET SEGMENTATION & ASPECTS COVERED
2. RESEARCH METHODOLOGY
2.1. ARRIVING AT GLOBAL LITHIUM CHEMICAL MARKET
2.2. MARKET SIZE ESTIMATION
2.3. TOP DOWN APPROACH
2.4. BOTTOM UP APPROACH
2.5. DEMAND (CONSUMPTION) SIDE ANALYSIS
3. EXECUTIVE SUMMARY
4. MARKET OVERVIEW
4.2. KEY MARKET DYNAMICS
4.2.1. MARKET DRIVERS
4.2.2. MARKET INHIBITORS
4.2.3. MARKET OPPORTUNITIES
4.3. DEMAND SIDE ANALYSIS
5. GLOBAL LITHIUM CHEMICALS MARKET INDUSTRY TRENDS
5.2. INDUSTRY INSIGHTS
5.3. PORTER’S FIVE FORCES ANALYSIS
5.3.1. THREAT OF NEW ENTRANTS
5.3.2. THREAT OF SUBSTITUTES
5.3.3. BARGAINING POWER OF SUPPLIERS
5.3.4. BARGAINING POWER OF BUYERS
5.3.5. INTENSITY OF COMPETITIVE RIVALRY
5.4. INDUSTRY SWOT ANALYSIS
5.5. KEY TRENDS
5.5.1. TECHNOLOGY TRENDS
5.5.2. MARKET TRENDS
5.5.3. PRICE TRENDS
6. GLOBAL LITHIUM CHEMICALS MARKET, BY APPLICATION
6.3. CERAMICS & GLASS
6.5. ELECTRICAL & ELECTRONICS
6.7. LUBRICATING GREASES
6.8. AIR CONDITIONERS
6.9. ALUMINIUM ELECTROLYSIS
7. GLOBAL LITHIUM CHEMICALS MARKET, BY LITHIUM COMPOUND/TYPE
7.2. LITHIUM METAL
7.3. LITHIUM CARBONATE
7.4. LITHIUM HYDROXIDE
7.5. LITHIUM CHLORIDE
7.6. LITHIUM BROMIDE
8. GLOBAL LITHIUM METAL MARKET RESERVES, BY GEOGRAPHY
8.2. SOUTH AMERICA
8.3. NORTH AMERICA
8.5. RoW (ZIMBABWE, PORTUGAL)
9. GLOBAL LITHIUM CHEMICALS MARKET CONSUMPTION, BY GEOGRAPHY
9.2. NORTH AMERICA
9.3. SOUTH AMERICA
10. GLOBAL LITHIUM CHEMICALS COMPETITIVE LANDSCAPE
10.2. MARKET STRUCTURE
10.3. COMPETITIVE LANDSCAPE: MARKET SHARE ANALYSIS
10.4. COMPANY PRESENCE IN GLOBAL LITHIUM CHEMICAL MARKET
10.5. MERGERS AND ACQUISITIONS
10.6. VENTURE CAPITAL FUNDING
10.7. NEW PRODUCT LAUNCHES
10.8. PROJECT DEVELOPMENTS
11. COMPANY PROFILES
11.1. SOCIEDAD QUIMICA Y MINERA DE CHILE S. A.
11.2. FMC CORP.
11.3. ROCKWOOD LITHIUM
11.4. ALBEMARLE CORP
11.5. SICHUAN TIANQI
11.6. TALISON LITHIUM LTD.
11.7. GALAXY RESOURCES
11.8. LITHIUM AMERICAS CORP.
11.9. OROCOBRE LTD.
11.10. QUEBEC LITHIUM INC.
11.11. MOSEDA TECHNOLOGIES INC
11.12. RODINIA LITHIUM INC
11.13. WESTERN LITHIUM USA CORP.
11.14. ADY RESOURCES
Lithium Carbonate (Li2CO3)
The research report provides a detailed analysis of the quantitative as well as qualitative factors affecting the global lithium carbonate market, including a comprehensive review of major market drivers and restraints. The market is further segmented and forecasted for major geographic regions such as North America, Europe, Asia-Pacific, and the Rest of the World, and competitive scenarios of top players in these regions are discussed in detail. We have also profiled leading players of this industry with their recent developments and other strategic industry activities. This report analyzes various marketing trends and establishes the most effective growth strategy in the market. Major companies profiled include ABA Guangsheng Lithium Co., Ltd., FMC, Galaxy Resources, Nordic Mining, and Orocobre.
Lithium Hydroxide (LiOH)
The market is growing at a CAGR of 10.58% between 2015 and 2020. Lithium hydroxide is mainly used in industrial lubricants, as an active cathode material in batteries, etc. Also, due to better properties than other concentrates, it is generally preferred for new battery technologies. Moreover, there is a potential market for lithium hydroxide in the rechargeable battery industry.
Lithium Chloride (LiCl)
This report aims to estimate the global Lithium Chloride Market for 2015 and to project the expected demand of the same by 2020. This market research study provides a detailed qualitative and quantitative analysis of the global Lithium Chloride market. It provides a comprehensive review of major drivers and restraints of the market. The global Lithium Chloride market is also segmented into major application and geographies.
Lithium Bromide (LiBr)
Lithium Bromide (LiBr) and Organolithiums, Lithium Carbonate...
lithium metal (Li)
Global lithium production is poised for sustained growth in the years ahead. Polymers, followed by Li-ion battery applications, are expected to be the largest contributors to future growth in the global lithium market through 2020. Between 2015 and 2020, overall lithium consumption will likely post 11.93% YoY growth in terms of volume. Chile and Australia are the largest producers of lithium, followed by China, Argentina, Zimbabwe and the U.S.A., among others.
Lithium Hypochlorite (LiOCl)
Lithium Hypochlorite (LiOCl) and Organolithiums, Lithium...
Lithium niobate and Organolithiums, Lithium Carbonate...
Lithium salts and Organolithiums, Lithium Carbonate...
Asia-Pacific Lithium Compound
Lithium Compound - Asia-Pacific and Organolithiums, Lithium...
The global lithium market is poised for sustained growth in the years ahead. Polymers held the largest market share, followed by medical applications, between 2015 and 2020. Overall butyl-lithium consumption will likely post 6.32% YoY growth in terms of volume, and in terms of value the market will increase at a CAGR of 13.89%. Chile and Australia are the largest producers of lithium, followed by China, Argentina, Zimbabwe and the U.S.A., among others.
Lithium mineral ore is subjected to separation processes for upgrading the lithium content and removing waste materials. Different separation processes produce concentrates with differential content of lithium. These can be used in chemical-grade and technical-grade applications. Chemical-grade lithium concentrates are most widely used in lithium batteries. Apart from their use in lithium batteries, chemical-grade lithium concentrates have other applications. It is used in lubricants as a thickener in grease. In aluminum smelting, it is used for reducing power usage and increasing electrical conductivity. | <urn:uuid:f8c0a701-08f7-4f8b-991b-25ca8c6527d8> | CC-MAIN-2017-04 | http://www.micromarketmonitor.com/market-report/lithium-chemicals-reports-5145775814.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00517-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.764278 | 2,276 | 2.890625 | 3 |
After Comet ISON's tumultuous close approach to the sun on Thursday, scientists are working to determine whether it still exists or is now nothing more than dust.
The comet, which was first spotted in September 2012, quickly gained the attention of astronomers around the world because it was considered to be a relic from when our solar system was formed, retaining the primordial ices from which it formed 4.5 billion years ago.
Since ISON, thought to be made of rocks, dust and ice, was passing just 700,000 miles from the blazing surface of the sun, scientists had been eager to see if it could withstand the journey and come out on the other side to head back toward the far reaches of the solar system.
NASA reported today that scientists still aren't sure what happened to ISON. It does appear that something made it around the sun, but whether it's enough to still be called a comet is unclear.
The Hubble Space Telescope is expected to be scientists' best chance to find out what remains of ISON when it makes observations in the comet's direction later this month.
"The question remains as to whether the bright spot seen moving away from the sun was simply debris, or whether a small nucleus of the original ball of ice was still there," NASA reported today. "Regardless, it is likely that it is now only dust."
ISON piqued scientists' interest because it has been on a journey that is thought to have taken millions of years to get from the edge of the solar system to a point where it would pass our sun.
And since it's made up of dust and ice from when our solar system was formed, scientists hope that by studying it, they will gain clues to the ancient formation of the solar system and its planets.
"The reason we study comet ISON to begin with is it's a relic," Carey Lisse, a senior research scientist with Johns Hopkins Applied Physics Laboratory, said last week. "It's a dinosaur bone of solar system formation. You need comets in order to build the planets. This comet has been in a deep freeze half way to the next star for the last four and a half billion years. It's just been coming in over the last few millions years and possibly even started around the dawn of man."
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed . Her email address is firstname.lastname@example.org. | <urn:uuid:9a18372e-5b7f-4c06-847d-4bcb484526a1> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2486461/emerging-technology/hubble-may-determine-whether-comet-ison-survived-fiery-trip.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00151-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.970528 | 526 | 3.84375 | 4 |
Computer security experts have started discussing a new type of malware known as “fileless infections.” The main peculiarity of viruses that fall into this category is that they act without leaving traces on disk. This technique helps them overcome the file-based detection that is widely used by anti-virus and anti-spyware programs. Given the malicious nature and odd behaviour of fileless malware, it is no surprise that there is still a lack of sufficient knowledge about such viruses. Fortunately, IT specialists have already managed to gather enough data to sort this malware into two groups.
The first group of fileless viruses is called “Escapers.” The main goal of such malware is to leave the system right after its malicious task is over. Escapers can collect technical data about the operating system and infect it with other malware; when their task is finished, they leave the system. The best-known viruses in this category are PowerSniff and USB Thief. PowerSniff is disguised as a macro file which might carry the code of a highly aggressive virus. USB Thief hides in portable data storage devices. Once such a device is attached to a computer, the infection starts collecting data about the victim’s operating system without leaving a trace.
The second group of fileless infections is called “Residents.” These aggressive exploit kits can also operate on your system while keeping themselves invisible. They usually run an encoded script from the computer registry, so it is no surprise that they are almost undetectable. One member of the “Residents” group is the Kovter virus, which creates unreadable registry keys that carry malicious scripts. Kovter is capable of blocking the computer’s screen and data, so it is suspected to be related to FBI ransomware.
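Because Residents persist through registry entries rather than files on disk, one defensive starting point is simply reviewing autorun registry values. The following is a rough, illustrative Python sketch for Windows; the key path is one common persistence location and the list of suspicious tokens is only an example, not a complete detection rule.

```python
# Illustrative only: list HKCU Run values and flag ones that invoke
# script engines, a pattern registry-resident malware often relies on.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"
SUSPICIOUS_TOKENS = ("mshta", "powershell", "wscript", "cscript", "javascript:")

def autorun_entries():
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, index)
            except OSError:  # no more values to enumerate
                return
            yield name, str(value)
            index += 1

for name, value in autorun_entries():
    flagged = any(token in value.lower() for token in SUSPICIOUS_TOKENS)
    prefix = "SUSPICIOUS " if flagged else ""
    print(f"{prefix}{name}: {value}")
```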
Unfortunately, there is still not much knowledge about fileless viruses. It is clear that they can cause serious damage due to their shifting form and broad-scale abilities. However, their ability to disappear right after finishing their job makes it hard for researchers to find out more about them. The most effective way to protect yourself from these infections is to install reliable security software. If you keep it updated, there is a higher probability that the software will detect this seemingly undetectable malware. | <urn:uuid:dc7265d0-b96c-43b3-a7ad-ecee35926c52> | CC-MAIN-2017-04 | http://www.2-spyware.com/news/post7072.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00545-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960099 | 470 | 3.015625 | 3 |
The Center for Internet and Society (CIS) at Stanford Law School launched a new online privacy initiative called the Cookie Clearinghouse, which will empower Internet users to make informed choices about online privacy. The Clearinghouse is being spearheaded by Aleecia M. McDonald, the Director of Privacy at CIS.
Websites may place small files called “cookies” on an Internet user’s machine, and some types of cookies can be used to collect information about the user without his or her consent. The Cookie Clearinghouse will develop and maintain an “allow list” and “block list” to help Internet users make privacy choices as they move through the Internet. The Clearinghouse will identify instances where tracking is being conducted without the user’s consent, such as by third parties that the user never visited.
To establish the “allow list” and “block list,” the Cookie Clearinghouse is consulting with an advisory board that will include individuals from browser companies including Mozilla and Opera Software, academic privacy researchers, as well as individuals with expertise in small businesses and in European law, and the advisory board will continue to grow over time. The Clearinghouse will also offer the public an opportunity to comment. With this input, the Clearinghouse will develop an objective set of criteria for when to include a website’s cookies on the lists.
The Clearinghouse will create and maintain the lists. Browser developers will then be able to choose whether to incorporate the lists into the privacy options they offer to consumers. Company websites with cookies that have been included on the “block list” will be able to respond to the Clearinghouse to correct any mistakes in classification.
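As a purely illustrative sketch of how a browser might consult such lists when deciding whether to accept a third-party cookie (the domains and the fallback rule below are invented for the example, not the Clearinghouse's actual criteria):

```python
# Hypothetical allow/block lookup; real lists would be published by the
# Cookie Clearinghouse and shipped with or fetched by the browser.
ALLOW_LIST = {"trusted-widgets.example"}
BLOCK_LIST = {"sneaky-tracker.example"}

def accept_third_party_cookie(cookie_domain, visited_domains):
    if cookie_domain in BLOCK_LIST:
        return False
    if cookie_domain in ALLOW_LIST:
        return True
    # Fallback policy: only accept cookies from sites the user has visited.
    return cookie_domain in visited_domains

print(accept_third_party_cookie("sneaky-tracker.example", {"news.example"}))  # False
print(accept_third_party_cookie("news.example", {"news.example"}))            # True
```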
“Internet users are starting to understand that their online activities are closely monitored, often by companies they have never heard of before,” said McDonald, “But Internet users currently don’t have the tools they need to make online privacy choices. The Cookie Clearinghouse will create, maintain, and publish objective information. Web browser companies will be able to choose to adopt the lists we publish to provide new privacy options to their users.”
The need for the Clearinghouse evolved out of an effort by CIS fellows called Do Not Track. Initially, Stanford’s Do Not Track work raised consumer awareness about the way in which “tracking cookies” are used by websites—and by unaffiliated third parties—to compile extensive individual browsing histories that provide those companies with data about individual consumer behavior.
This effort has since progressed to a global standards effort led by the World Wide Web Consortium (W3C). More recently, CIS researchers began a new effort to prevent companies from tracking without the user’s consent. CIS student affiliate Jonathan Mayer wrote a software patch for use in Mozilla’s Firefox browser that limits third-party tracking through cookies.
Mayer’s patch mimics existing functionality in the Safari browser, which already prevents tracking from websites users have not visited. While Do Not Track efforts continue into their third year, the Cookie Clearinghouse is a new opportunity to accelerate Internet users’ ability to make effective online privacy choices. | <urn:uuid:06967315-05f4-45ad-968a-47c66c860578> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2013/06/20/cookie-clearinghouse-to-enable-user-choice-for-online-tracking/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00545-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.939666 | 653 | 2.625 | 3 |
Data Center Consolidation
What are data center consolidation operations?
Data center consolidation is the process of optimizing IT expenditure through more efficient use of information technology, using methods such as server virtualization, storage virtualization, and cloud computing. It enables optimization of operating expenditure through optimal usage of data center resources. | <urn:uuid:60463cf3-8c25-45b0-a2b2-de57a9b37583> | CC-MAIN-2017-04 | https://www.hcltech.com/technology-qa/what-is-data-center-consolidation-operation | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00573-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.865461 | 63 | 2.578125 | 3 |
Bertha, the World's Largest Tunneling Machine
/ January 28, 2014
In July 2013, Bertha -- the world’s largest-diameter tunneling machine -- began digging the SR 99 tunnel beneath downtown Seattle to replace the city's Alaskan Way Viaduct.
Thus far, Bertha has traveled just over 1,000 feet of its 9,270-foot journey -- a journey that's been broken up into 10 zones. Currently Bertha is still in Zone 1, which starts in Seattle's Railroad Way South and ends at South Washington Street. This zone is considered training for Bertha, as crews have built three protected areas where they can crawl through the front of the machine to inspect it.
"Think of it this way: if Bertha were learning to ride a bike, Zone 1 would be her training wheels," according to the WSDOT website.
For a portion of January, Bertha has undergone inspections; crews were investigating the cause of Bertha's slowed progress (results are not yet in). But on Jan. 28, she was on the move again, albeit cautiously.
I dug 2 feet today. This is a testing phase, so we’re taking it slow. http://t.co/VQrOFpL5MU— Bertha (@BerthaDigsSR99) January 29, 2014 | <urn:uuid:cbc80b7b-45f1-466c-93c2-ed7c7aa8f868> | CC-MAIN-2017-04 | http://www.govtech.com/photos/Photo-of-the-Week-Bertha-the-Worlds-Largest-Tunneling-Machine.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00481-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957693 | 281 | 2.5625 | 3 |
Questions and Answers: Shamir's Factoring Device and RSA
What is RSA announcing?
At the Eurocrypt '99 conference this week in Prague, Adi Shamir, a coinventor of the RSA public-key algorithm and a professor at the Weizmann Institute in Israel, is presenting a design for a special hardware device that would speed up the first part of the process of factoring a large number. The design, called TWINKLE, which stands for "The Weizmann Institute Key Locating Engine," is based on opto-electronics. Shamir estimates that the device would be as powerful as about 100 to 1,000 PCs in the factoring process called "sieving," and would cost only about $5,000 in quantity.
Does this mean that RSA can be cracked?
No. Shamir's device offers the possibility of recovering keys less expensively than with a network of PCs, but does not crack RSA in the sense of making it easy to recover keys of any size. Rather, the device speeds up the "sieving" step of known methods of factoring large numbers, which are the primary avenues for attacking the RSA public-key algorithm. The design confirms what was previously expected about the appropriateness of certain RSA key sizes, including 512 bits. Larger RSA key sizes are still out of reach, one of the obstacles being the amount of work and storage involved in the rest of the process of factoring a large number.
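To give a feel for what "sieving" means here, the toy Python sketch below collects numbers of the form x^2 - n that factor completely over a small factor base; gathering such relations is the step that TWINKLE-style hardware is built to accelerate. The modulus and factor base are tiny, made-up values; real factoring efforts work with numbers hundreds of digits long.

```python
# Toy illustration of the sieving (relation-collection) step in factoring:
# find x where x^2 - n is "smooth", i.e. factors fully over a small base.
def is_smooth(value, factor_base):
    for p in factor_base:
        while value % p == 0:
            value //= p
    return value == 1

n = 8051                            # demo modulus (actually 83 * 97)
factor_base = [2, 3, 5, 7, 11, 13]
start = int(n ** 0.5) + 1

relations = [(x, x * x - n)
             for x in range(start, start + 200)
             if is_smooth(x * x - n, factor_base)]
print(relations[:5])
```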
What would it take to build the new device?
Building the device would involve a fair amount of opto-electronic engineering, but it is likely to be feasible.
RSA sponsors competitions that demonstrate that DES can be cracked by a group of determined computer enthusiasts using networked computers. Why can't this be applied to RSA?
Actually, such competitions can be and have been applied to RSA. Since several years before it started the DES Challenges, which offer prizes for successful recovery of a 56-bit DES key, RSA has been awarding prizes for successful factorizations of large numbers, including many RSA numbers. Very few of the RSA numbers have been factored so far, however, the largest one being about 450 bits long, still short of the 512-bit mark targeted by the new device.
Perhaps the new device, if built, may figure into future cracking efforts around the 512-bit level, just as the Electronic Frontier Foundation's Deep Crack device has facilitated the last two cracking efforts for DES.
What can developers do to safeguard their products against advances due to the new device?
One of the benefits of the RSA public-key algorithm is that it has a variable key size, so, in effect, it has variable strength. This is in contrast to DES, which has a fixed, 56-bit key size and is difficult to safeguard if the key size is found to be insufficient. Products based on RSA can be protected against the new device and other developments in factoring technology with appropriate key sizes.
In the 1980s, the "default" key size for many RSA implementations was 512 bits, which even as of this writing has not been broken. Several years ago, recognizing that 512-bit keys might be at risk in the near future, RSA Laboratories recommended that developers choose a minimum key size of 768 bits for user keys and 1024 bits for enterprise keys. (The recommendation for certificate authority "root" keys was 2048 bits.) Products following these recommendations are safe against the new threat, and products that support a variable key size can be safeguarded through the deployment of longer keys.
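As a small illustration of that variable-strength property, the sketch below generates RSA keys at two of the recommended sizes; moving between sizes is purely a parameter change. It assumes the third-party Python "cryptography" package, which is our choice for the example rather than anything referenced in the bulletin.

```python
# Key size is a parameter, so products can adopt longer RSA keys without
# changing the underlying algorithm. Requires the "cryptography" package.
from cryptography.hazmat.primitives.asymmetric import rsa

for bits in (1024, 2048):
    key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
    print(f"generated an RSA key of {key.key_size} bits")
```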
Last week, RSA issued a bulletin about a weakness in the ISO 9796 signature format. Today, you're disclosing that it is possible to break RSA. Is RSA's encryption technology still secure?
Yes, RSA's encryption technology is still secure. Last week's announcement was about a weakness in an alternative format for preparing messages for RSA signatures that is not supported by RSA's products. The weakness was related to the format, and not the RSA public-key algorithm itself. Today's announcement is about a clever design for a hardware device that speeds up known methods of factoring large numbers, not a new method of attacking the RSA public-key algorithm. In both cases, the strength of the methods supported in RSA's products and of the key sizes recommended by RSA Laboratories have been confirmed.
RSA Laboratories will continue to report on these and similar developments. | <urn:uuid:8c8ea60e-3294-4369-8083-cb583e86dc09> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/historical/questions-and-answers-shamirs-factoring-device.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00205-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964503 | 897 | 2.859375 | 3 |
NASA on Thursday confirmed the strongest, most direct evidence yet that water once flowed freely on the surface of Mars. "Detailed analysis and review have borne out researchers' initial interpretation of pebble-containing slabs that NASA's Mars rover Curiosity investigated last year," the space agency said. "They are part of an ancient streambed." Flowing water, scientists believe, increases the odds of an environment supporting life (as we on Earth know it). Maybe, just maybe, the people who think they see rats running around on Mars are onto something! (Just kidding. More like they're on something.) What's significant about the latest find is that the rocks are the "first ever" discovered on the Red Planet to contain streambed gravels. The rocks were found by Curiosity last fall. Examining them over time, researchers actually were able to determine how deep and fast the water flowed at that location. "At a minimum, the stream was flowing at a speed equivalent to a walking pace -- a meter, or three feet, per second -- and it was ankle-deep to hip-deep," Rebecca Williams of the Planetary Science Institute in Tucson, Arizona, said in a statement. "These conglomerates look amazingly like streambed deposits on Earth," Williams said. "Most people are familiar with rounded river pebbles. Maybe you've picked up a smoothed, round rock to skip across the water. Seeing something so familiar on another world is exciting and also gratifying." NASA puts it all in perspective:
The atmosphere of modern Mars is too thin to make a sustained stream flow of water possible, though the planet holds large quantities of water ice. Several types of evidence have indicated that ancient Mars had diverse environments with liquid water. However, none but these rocks found by Curiosity could provide the type of stream flow information published this week. Curiosity's images of conglomerate rocks indicate that atmospheric conditions at Gale Crater once enabled the flow of liquid water on the Martian surface.
This is a great step in answering the eternal question about life on Mars. From here there are two obvious avenues to pursue: 1) Drill down into the Martian soil to look for evidence of fossils, and 2) Try to figure out where those rats live. At least one of them should pay off. Now read this: | <urn:uuid:999bf67d-959a-4cc1-a538-5c1794562b8d> | CC-MAIN-2017-04 | http://www.itworld.com/article/2711433/hardware/best-evidence-yet-that-water-once-flowed-on-mars.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00233-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.957277 | 462 | 3.84375 | 4 |
When an outbreak hits, Magellan speeds up genetic sequencing
- By Henry Kenyon
- Oct 24, 2011
For several years, scientists at the Argonne National Laboratory have been using an automated, software-based system to sequence the genomes of bacteria and microbes. But when that system recently became overwhelmed by a surge in demand, researchers turned to a cloud-based system that virtualized the process, allowing them to sequence hundreds of genomes within hours.
Argonne’s Rapid Annotation using Subsystems Technology program was developed in 2007 to automate the laborious process of making sense of an organism’s genome.
RAST matches segments of the genetic code of a new bacterium or protein against a catalog of sequenced genetic material. The system’s final result is an annotated genome with a list of an organism’s probable genes and proteins.
A human scientist must still verify this final part, but the entire process can be completed in hours rather than the months — or even years — it once took, said Ross Overbeek, an Argonne computer scientist who was involved in RAST’s design.
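To make the idea of catalog-based annotation concrete, here is a deliberately tiny Python sketch that scans a new sequence for exact matches against a catalog of known gene fragments. RAST's real pipeline uses far more sophisticated comparisons; the sequences and labels below are invented for illustration.

```python
# Toy catalog lookup: slide a fixed-length window over a sequence and
# report positions that match known fragments.
CATALOG = {
    "ATGGCTAAAG": "hypothetical protein A",
    "TTGACCGGTA": "transport protein B",
}

def annotate(sequence, k=10):
    hits = []
    for i in range(len(sequence) - k + 1):
        fragment = sequence[i:i + k]
        if fragment in CATALOG:
            hits.append((i, CATALOG[fragment]))
    return hits

genome = "CCATGGCTAAAGTTTTGACCGGTACC"
print(annotate(genome))  # [(2, 'hypothetical protein A'), (14, 'transport protein B')]
```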
But the system wasn’t made to handle a sudden surge in demand. Designed to process 60 to 80 genomes a day, in June 2011 it was hit with an enormous spike in requests resulting from an E. coli outbreak in German hospitals that was resistant to many existing treatments.
Use of the RAST system is free and open to any scientist, but the system was suddenly hit by demands of up to 200 genomes an hour as researchers sought to pin down and characterize the exact strain of E. coli.
The RAST team turned to the Magellan framework, a cloud computing project managed by the Energy Department that was designed to support scientific research by providing additional servers and virtualization tools. RAST was duplicated on Magellan, which greatly increased the power and capability of the system.
There are two versions of RAST. The current version uses the Magellan cloud framework, Overbeek said. RAST’s gene sequencing has increased from 100 to 200 genomes a month to more than 1,000 — with surges as high as 450 a day. However, he added that sequencing demand is elastic and usually falls in the range of 1,000 to 1,500 genomes per month.
But rapid response and processing are essential; otherwise, the entire system backs up. By moving to Magellan, the laboratory can reasonably handle such surges, Overbeek said.
The changes in the technology are allowing researchers to sequence the genomes of almost every known pathogen group. The ability to sequence a genome in a matter of hours is a major change, Overbeek said. “I’m excited by the fact that you can have access to an annotated genome in hours,” he said. “This used to take a year to do back in the 1990s.”
Advances in the technology also allow scientists to process batches of related genomes, a process that Overbeek encourages scientists to pursue. “I would like people to think that sequencing is essentially free,” he said.
For microbial genomes, the technology allows researchers to sequence a disease’s genes almost immediately after an outbreak has occurred and pass the data on to other health care research facilities around the world. “This is something that wouldn’t have happened two or three years ago,” he said.
The technology for automated gene sequencing is advancing so quickly, Overbeek said, that in the not-too-distant future, he foresees hospitals being able to do their own gene sequencing. “The way hospitals deal with outbreaks or even bacteria is going to change over the next few years,” he said.
A second, updated version of the RAST software is now in use, and there is a laptop version that is being beta tested. The beta version of RAST can annotate a genome within 10 minutes and is capable of running on a handheld device such as a smart phone or tablet computer. | <urn:uuid:b8521ac7-65ca-4fa8-ab36-d882574945a1> | CC-MAIN-2017-04 | https://gcn.com/articles/2011/10/24/argonne-magellan-cloud-to-sequence-genes.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00013-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960812 | 860 | 2.921875 | 3 |
I am constantly amused and amazed by the ingenuity shown by fraudsters. Their recent use of friendly email ‘from’ names is not an exception. What makes the attack exceptional, though, is how it is born out of the confluence of several entirely unrelated trends of computing.
The actors in this story?
First of all, the friendly name. That is the display name associated with an email address. For example, Bob’s email address may be CoffeeDude1234@somesite.com. To remind his friends that an email he sends is from him, Bob adds a friendly name, making his email address “Bob Smith <CoffeeDude1234@somesite.com>”. The friendly name – Bob Smith – signals to the recipients of an email what the sender would like to be called, and is transmitted along with the content of the email. Of course, if Bob preferred to, he could let his friendly name be “Queen of Portugal” – that is really up to him.
The second actor: the smartphone. An increasing number of people use smartphones for browsing the web and reading emails. The limited screen size puts constraints on what is presented to the user. Friendly names – yes. Email addresses – no. So now, Bob’s friends will only see “Bob Smith” … or “Queen of Portugal”. Many users opt to only see friendly names when reading emails on traditional computers, but on smart phones, the (lack of) space dictates the decision.
And the third actor: social networking, and a terrifying amount of information about pretty much everybody available to anybody who knows how to ask for it. If you search for Alice, you may learn both her email and the fact that she knows somebody named Bob.
Now, let’s see what happens when all of these pieces are put together.
Our villain would like to make Alice visit a particular site – maybe because this site distributes malware; maybe because it tries to sell her cheap Canadian pharma; or maybe because it will display an enticing work-from-home story designed to trick Alice into parting with her money or becoming an unwitting cog in the villain’s machinery.
Normally, Alice would not visit some random website, and she would ignore requests to click on links in spam messages she gets. But not this time. This time, she will fall for it. Let’s see why!
The villain sends an email to Alice that the villain wants to appear to come from Bob. On one hand, the villain could “spoof” the email, to make it appear to come from CoffeeDude1234@somesite.com – but while it is easy to spoof, it is also relatively easy for spam filters to detect spoofing, so that often leads to the spoofed message being discarded. Instead, the email is sent in the “normal way” from any old email address. Maybe the villain creates BadVillain666@hotmail.com, and then uses Bob’s friendly name, making the email come from “Bob Smith <BadVillain666@hotmail.com>” – which is fine with spam filters, since everybody has the right to pick whatever friendly name they want. And then, the email gets displayed as … you guessed it! Bob Smith. When Alice looks at this email, she believes that it is from Bob.
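A couple of lines of Python make the mechanics plain. The addresses below are the made-up ones from this story, and formataddr/parseaddr come from the standard email.utils module.

```python
# The display ("friendly") name is whatever the sender chooses; it simply
# rides along next to the address in the From header.
from email.utils import formataddr, parseaddr

legit = formataddr(("Bob Smith", "CoffeeDude1234@somesite.com"))
print(legit)               # Bob Smith <CoffeeDude1234@somesite.com>

villain = formataddr(("Bob Smith", "BadVillain666@hotmail.com"))
print(villain)             # Bob Smith <BadVillain666@hotmail.com>
print(parseaddr(villain))  # ('Bob Smith', 'BadVillain666@hotmail.com')
```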
Alice will think that the email was sent by her friend Bob. This is certainly true if she views the email on a phone where only the friendly name is displayed. And with a pretty good chance otherwise, too, given how common it is for people only to display friendly names. This belief may be reinforced by the content of the email – “Alice, take a look at this! Talk to you later. Bob” – followed by the URL of the webpage the villain wants Alice to visit. This is happening today, and passes under the radar for the best anti-spam systems.
Then what is the natural next step for criminals? I believe they will simply develop a collection of different stories that fit the social contexts of most potential victims. “Hey Alice – try out this game and let me know what you think. Bob” may trick Alice to install a Trojan on her device, and “Alice. I went to London for a few days, but was robbed. Can you lend me some money for me to pay the hotel?” is the beginning of a common scam. But those examples – that’s just a start.
For example, let’s think about what will happen when Alice and Bob are married, and the villain poses as Bob, asking Alice for her banking password or social security number: “I will explain why I need it later.” Alice responds with the information, and her response gets delivered to – you guessed it! – the villain with the friendly name Bob. Or when Alice is your elderly mom, you are Bob, and the villain wants your mom to think that you are in dire need of money – “Please lend me $500, I need it today! Can you send it by Western Union?”
With the vast amount of information typical users are making available on social media to anybody who cares to look, we are increasingly vulnerable to attacks of these types. And this genie will not go back into its bottle. | <urn:uuid:033c65ad-af97-40b8-9fb4-8f985557a61e> | CC-MAIN-2017-04 | http://www.itworld.com/article/2703741/security/when-friendly--from--names-become-enemies.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00160-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950234 | 1,118 | 2.9375 | 3 |
It would be foolish to underestimate the importance of Apple [AAPL] Siri or other voice-activated intelligent assistants. These solutions are big steps toward new user interfaces: one day your iPhone may be controlled by your mind.
Sounds far-fetched? Perhaps so, but rest assured work’s already taking place to perfect the mind-controlled computer, a topic researched by Macintosh user interface expert, Jef Raskin (RIP).
“Mind-reading has been wishful thinking for science fiction fans for decades, but their wish may soon come true,” IBM said. “You would just need to think about calling someone, and it happens.” IBM believes it will be making mind-controlled PCs and phones by 2016.
In a December press release, IBM explains: “Scientists in the field of bioinformatics have designed headsets with advanced sensors to read electrical brain activity that can recognize facial expressions, excitement and concentration levels, and thoughts of a person without them physically taking any actions.”
The standard GUI was the only way to use any mass-market device for decades. This changed with the iPhone, which opened the floodgates for other forms of touch-based interface, allegedly also in simultaneous development at other firms. Microsoft had been working for years to perfect such a system, but its solutions failed to gain traction.
Evolution of the interface was inevitable, answering criticisms propagated by Raskin in his “Down With GUIs!” report: “Graphical User Interfaces (GUIs) are not human-compatible. As long as we hang on to interfaces as we now know them, computers will remain inherently frustrating, upsetting, and stressful.”
IBM isn’t the only major technology firm exploring thought as a next evolution for the interface. Intel’s also exploring this -- that's why company CTO Justin Rattner chose to wear brain-controlled bunny ears at IDF 2012 this year. “At Intel Labs we’re part of that perceptual computing effort,” he said. “We decided, of course, since we’re research people, we would go all the way to mindreading. And here’s the prototype.”
These systems are complex -- but we’ve seen work done on complex user interfaces before. Voice control, for example, has been a long-term project which is only now becoming a technology suitable for use by the mass market.
Apple's Siri is powered by the world’s leading voice recognition technology firm, Nuance. The beauty of the system is that each time you use it, it learns a little more about how humans say things, what they say, and grows more forgiving when it comes to dialect and regional accents. As the stored data at Nuance grows, the company becomes capable of delivering even more complex systems.
Today, Nuance is working with chipmakers to develop solutions that will let users control their phone without touching it.
You already see the impact of this work within iOS 6 and Siri (still in beta), which now allows use of your voice to send messages, place phone calls and more.
“...ask Siri to update your status on Facebook, post to Twitter or launch an app. Additionally, Siri takes hands-free functionality even further with a new Eyes Free mode, enabling you to interact with your iPhone using nothing more than your voice,“ Apple explains.
I’m not pretending Siri is perfect. It’s not. But it is a hint of what’s to come.
Interviewed by MIT’s Technology Review, Nuance’s CTO, Vlad Sejnoha, opines that within a year or two you won’t just speak to Siri to tell it to do things, but you’ll be able to ask it questions, such as “When’s my next appointment?”
The phone will detect that you are speaking, wake itself up, and tell you the answer or perform another task. Sejnoha observes: "Just turning on the device is part of the problem, right? So we're going to be smoothing that out, eliminating those problems as well.”
In other words, you won’t need to press a button to activate your phone in order to activate Siri any more.
Life beyond voice
However, as the user interface evolves to become more sensitive to voice controls, the research gathered also feeds into ongoing research for other forms of control. Once you figure out how to make a device responsive to a voice, then some of the technical challenges met in that research can be applied to resolve similar challenges pertaining to other forms of input.
This means it’s not at all surprising that mobile chipmakers are exploring mind-controlled devices as a future for the interface. Qualcomm runs an interesting blog called Qualcomm Spark, publishing articles it commissions for use there. These don’t necessarily reflect company opinion, but it is interesting that one report looks at how mind-controlled devices will change our future.
Written by Emotiv Lifesciences founder and CEO, Tan Lei, this report looks at that company’s work in mind-controlled devices in the form of its EPOC neuroheadset, which reads and interprets brainwave patterns: “The headset's multisensor "arms," which extend to the front and back of your head, pick up electrical signals from different functional parts of the brain. Both subconscious and conscious mental states can be detected using advanced algorithms, allowing the computer to react more naturally to the user’s mental state and even to accept direct mental commands.”
The impact of the technology is promising. For example, in theory it will be able to determine which music make you happy or sad. It can also figure out if you’re happy or sad. This means that if you put your headset on when you’re down in the dumps, the technology will be able to assess your mood and do something to help improve it, perhaps playing you a selection of tracks which generally help cheer you up.
Around the home you may use your mind to turn lights on or off; to change thermostatic controls; change channels on the television; make a phone call or set the security alarm -- or control a wheelchair, or hold a conversation if you’re speech impaired.
Lei admits that we’re only scratching the surface of where these technologies will go. The ability to control devices with your mind is still in its infancy. However solutions such as the Emotiv headset or Mattel’s Mindflex 2; or voice-controlled devices, such as a Siri-controlled iPhone, show that it is possible to develop responsive user interfaces.
Siri as a service layer
Apple today filed a patent in which it describes developing Siri into an intelligence that could potentially manage your experience for you, finding the right app for your need, whether you own it already or not. More on this on AppleInsider.
Clearly in this evolution, Siri is becoming something more than a voice-controlled assistant. It’s becoming an intelligent entity which serves up answers to your needs. What interface you use to interrogate that entity is less important than that entity’s purpose of accurately resolving those queries for you. Siri then becomes a building block which can be accessed transparently.
Raskin would be pleased. “Designers forget that humans can only do what we are wired to do. Human adaptability has limits and today's GUIs have many features that lie outside those limits, so we never fully adapt but just muddle along at one or another level of expertise. It can't be helped: Some of the deepest GUI features conflict with our wiring. So they can't be fixed. Like bad governments, they are evil, well entrenched, and must be overthrown,” he wrote.
The evolution of platform-agnostic, cloud-based application-as-a-service solutions, and the development of universal solutions for cross-platform computing, mean that the PC becomes transparent. Mobile devices become keys to a much wider computing experience that’s inextricably connected to your daily life.
Controlled by voice or -- if IBM is correct -- in five years by your mind, your mobile device, your iPhone if you will, shall also be a direct link to your entire computer experience. And if you need a desktop to work with, you’ll just pop your video glasses on to see it. Siri is part of this journey, the iPhone is another.
Me? I don’t believe IBM has it right when it predicts brain-powered user interfaces within half a decade -- I suspect that as research continues we’ll encounter problems as hard to resolve as those which held back the development of voice control. However, the history of technology across the last fifty years proves such challenges will eventually be resolved, meaning that at some finite point in the future, making a call or editing a movie will be as simple as imagining what you want to happen in your mind.
I just hope we don’t have to put up with any preference-based advertising when that future happens. I don’t want stupid advertising jingles literally forced into my mind.
Got a story? Drop me a line via Twitter or in comments below and let me know. I'd like it if you chose to follow me on Twitter so I can let you knowwhen these items are published here first on Computerworld. | <urn:uuid:05ff227d-bdfe-4bba-a956-8de60d264b3b> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2473188/smartphones/iphone--siri-and-mind-control--the-future-evolution-of-the-smartphone.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00188-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948306 | 1,968 | 2.625 | 3 |
Trying to figure out how a particular innovation is going to impact the future is always a great topic to get people talking. I'm sure that the folks who started the Internet never thought about the Internet of Things (IoT) and where we are headed early in the 21st century. My next blog post will be on that topic.
Then there is the self-driving car. See this article, Breaking Down the Financial Impact of Self-Driving Cars, which looks at the impact of these cars on government finances. The one piece I disagree with in this particular story is in the first paragraph. I don't see us owning self-driving cars. I see the car being something that, like Uber, shows up and takes us where we want to go for a fee. But in the future there will not be an Uber driver. More on this in a moment.
The rest of the story goes on to detail how these cars are going to impact the finances of government and individuals. It notes that the average car is parked 95 percent of the time. Think about that — even here in western Washington, if I drove to work each workday, my commute time would be around five hours a day. The other 19 hours it is parked in my garage at home or in a parking garage (where you might pay up to $30 or more a day to park).
Today in the city of Seattle they have either considered passing, passed, or are debating a measure allowing Uber drivers to unionize and negotiate for salary and benefits. While I'm sure this is a "rich debate," it is really very temporary. In a few years there won't be any Uber drivers at all. Their time would have been better spent debating things like "How do we improve the seismic safety of people living in unreinforced masonry buildings (URMs)?"
Lastly, there will always be more unintended consequences from the adoption of new technologies. These will be factors that we can't envision today — but, will be real, very shortly in the future.
The technology wheel is turning very fast these days! | <urn:uuid:342675e5-4885-409f-99dc-5ad9d4e81fac> | CC-MAIN-2017-04 | http://www.govtech.com/em/emergency-blogs/disaster-zone/self-driving-economy-will-change-things--like-revenue.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00096-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.969752 | 422 | 2.515625 | 3 |
Everyone loves SSL, also known as Transport Layer Security (TLS), right? Well, the good people at Google have decided to make it even better by speeding it up with a feature called TLS False Start.
Setting up an SSL session requires an initial handshake, which is a series of back-and-forth messages between the Web server and browser. The idea behind False Start is to save time by allowing the browser to start sending data before the handshake is complete. This can save 70 to 150ms, depending on the relative global position between browser and server.
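To see the cost being trimmed here, you can time a handshake yourself. The Python sketch below uses only the standard library; example.com is just a placeholder host, and the numbers will vary with your distance from the server.

```python
# Measure TCP connect time vs. TLS handshake time for one connection.
import socket
import ssl
import time

host = "example.com"
context = ssl.create_default_context()

t0 = time.perf_counter()
with socket.create_connection((host, 443)) as raw_sock:
    t1 = time.perf_counter()
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        t2 = time.perf_counter()
        print("TCP connect:   %.1f ms" % ((t1 - t0) * 1000))
        print("TLS handshake: %.1f ms" % ((t2 - t1) * 1000))
```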
False Start is easy to implement as it only requires changes to the browser. According to Google software engineer Mike Belshe, Chrome is the only browser implementing False Start at this time.
Google has yet another trick up its sleeve called TLS Snap Start. Snap Start could eliminate the handshake latency altogether. This feature is more difficult to deploy, as it requires changes to both Web browsers and Web servers.
I applaud Google’s efforts to increase SSL performance and improve our secure browsing experience. | <urn:uuid:076448df-3ec2-40c4-9916-c5cf2905d494> | CC-MAIN-2017-04 | https://www.entrust.com/google-is-speeding-up-ssl/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00124-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941949 | 213 | 2.671875 | 3 |
Happy Earth Day! Every year on April 22, millions of individuals across the globe come together to show their support for environmental protection. Celebrated in 192 countries every year, Earth Day helps us remember that the health of our planet is our collective responsibility and, if we can make smarter, greener choices every day, we can help make our world a better place for future generations.
Sure, it’d be great to buy a hybrid vehicle, start a vegetable garden or invest in greener kitchen appliances. But many of us simply do not have the time or money for such activities. You may be surprised to learn that there are simple actions you can take every day that can have a significant environmental impact, and they take less than 5 minutes to accomplish.
Here are ten simple ways to reduce your personal carbon footprint:
1. Turn your lights off when you leave a room. Okay, we’ve all heard this one before. But when you think about how easy it is, you’ll wonder why you haven’t taken the time to do it before. Lighting is responsible for about 11 percent of a home’s energy bills, so you can save a big chunk of change by flipping the switch, as well.
2. Buy in bulk. Next time you are at the grocery store, consider buying the big package of toilet paper instead of the smaller one. Not only will you save money (the paper will be used eventually), you will avoid the emissions created when packaging individual items. If your family doesn’t use large quantities, consider splitting the package with a friend or neighbor.
3. Use blankets to warm up instead of central heating. Turning your thermostat down by just one degree can save you a ton on your heating bill and make a significant dent in your household’s emissions. Not to mention, your grandmother will love the fact that you are using that blanket she knitted for you for your birthday.
4. Practice patience when baking. How many of you check on your cake or casserole in the oven to see how it is coming along? Most of us are guilty of this impatience when cooking. However, every time you do this, you allow heat to escape and it actually makes your meal take longer to cook (and use more energy). In fact, consider turning off your oven a few minutes before the food is finished. The trapped heat will stay hot enough to finish cooking your meal.
5. Turn the water off while you brush your teeth. Just this simple act can save 8 gallons a day per person. Yes, you read that correctly. Men, consider turning the water off while you shave, as well. So simple, yet so impactful.
6. Switch to paperless billing. Let’s be honest, paying bills by snail mail is so old school. With all of us on the Internet for hours a day, consider asking your cable, electricity, water and gas provider to send you “paperless” bills instead. All it takes is a simple phone call or visit to your provider’s website. In fact, many billers offer a discount for signing up (because it saves them a ton on printing and postage). If you can save $1.44 on each of four monthly bills, you can save $70 a year.
7. Skip the paper coffee cup. Did you know that Starbucks discounts your coffee purchase by 10 cents a cup if you bring your own mug? If you have a daily coffee habit, that can lead to a savings of $36.50 a year. Of course, if you make your own coffee at home (or break the habit altogether), you can save even more, to the tune of $358 a year or $14,000 over the course of your career. (Wow.)
8. Cut the junk mail. Junk mail is not only an annoyance; it’s extremely wasteful. If you are sick of all of the junk mail you receive, register with the Direct Marketing Association’s DMAchoice service. It takes just a second to register and you will see a significant reduction in your junk mail after about three months or so. Every country is different though, so research your local post office and see what their policy is.
9. Flex your flex benefits. If your company allows “working from home,” this is a fantastic way to cut your personal carbon emissions. If the 41 million Americans with telework-compatible jobs worked from home just one day, the environment would be spared 423,000 tons of greenhouse gas, the equivalent of taking 77,000 cars off the road for a year. If you need help convincing your manager to let you WFH, check out this blog post for some tips.
10. Replace one business trip with a video call. For many businesses with geographically-dispersed locations, business trips (even one hour trips) are standard. Consider replacing just one of those trips with a video call instead of an on-site, in-person meeting. Don’t have a video conferencing solution to use? Grab a buddy and download a free 14-day trial of Lifesize Cloud to your PC, smartphone or tablet, and do your next meeting via video.
These are just 10 simple tips to make your life greener, but we know there are many, many more out there. If you have your own tip, share them with us in the comment box below. And if you’re interested in learning more about reducing your carbon footprint with Lifesize, check out the Lifesize Earth Day Infographic. | <urn:uuid:3dc55714-ae44-4d35-bfa3-84a2a5d4762f> | CC-MAIN-2017-04 | http://www.lifesize.com/video-conferencing-blog/reduce-your-carbon-footprint-in-five-minutes-or-less/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00032-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929223 | 1,152 | 3.140625 | 3 |
Information is the lifeblood of today’s business world. With timely and accurate information business decisions can be made quickly and confidently. Thanks to modern technology, today’s business environment is no longer constrained by physical premises or office walls. We can work on laptops, smartphones or tablets and, with nearly ubiquitous internet connectivity, we can work from any location.
With this growing dependence on technology, we also need to accept that there will be times when technology fails us, whether by accident or through malicious intent. We do not expect 100% security in our everyday lives, and we should not expect it in our “technical” lives. What we need to do is design our systems and security programs to be resilient in the event of a failure. This means shifting our thinking away from solely preventing attacks and toward developing strategies that ensure the business can continue to function should an attack happen and succeed. In essence, a change in mind-set is required, and not just in those developing the security programs, but also in senior business management.
To develop this resilience to cyber-attacks, the focus should be on ensuring the business understands the impact of a potential attack and the steps required for them to prevent, survive and recover from it. This requires security not to be viewed only as a purely technical discipline, but also from a business and risk management point of view. This requires technical people who would traditionally focus on point solutions to specific technical threats to translate the potential impact of security incidents into terms and language that business and non-technical people will understand.
Business operates on the principle of risk, and every business decision involves an element of risk. Sometimes the result of that risk is positive, for example increased sales; sometimes it is negative, such as loss of market share. Traditionally, security people with technical backgrounds look at issues in a very black-or-white way: it either works or it does not; it is secure or it is not.
Being resilient involves a change in mind-set whereby you look to identify how secure the business needs to be in order to survive. This is a challenge for both technical and non-technical people. For business people, it requires that they get involved in the decision-making process regarding information security by identifying what the critical assets of the business are and how valuable those assets are.
The risks to those assets then need to be identified and quantified so that measures can be put in place to reduce the levels of risk against those assets to a level that is acceptable to the business. So instead of a checklist approach to security, or an all-or-nothing approach, decisions are more focused on what the business needs and investment can be best directed to the more appropriate areas.
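One deliberately simplified way to picture that quantification step is a small risk register that scores each identified risk as likelihood times impact and compares it against the level the business has agreed to accept. The assets, scores and threshold below are invented purely for illustration.

```python
# Toy risk register: likelihood and impact are each rated 1-5, and any
# score above the agreed risk appetite is queued for treatment.
RISK_APPETITE = 12

risk_register = [
    # (asset,             threat,          likelihood, impact)
    ("customer database", "SQL injection", 4, 5),
    ("public website",    "defacement",    3, 2),
    ("payroll system",    "insider fraud", 2, 4),
]

for asset, threat, likelihood, impact in risk_register:
    score = likelihood * impact
    action = "treat" if score > RISK_APPETITE else "accept / monitor"
    print(f"{asset:17s} | {threat:14s} | score {score:2d} -> {action}")
```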
I often compare developing a resilient approach to security to how kings protected their crown jewels in their castles during the Middle Ages. The core of the castle is the Keep and it is the most secure part of the castle. The Keep was where the most valuable assets were kept. The Keep itself was placed in a very defendable position within the castle walls. Those castle walls were defended in turn by moats, turrets, and drawbridges. Outside the castle walls were where the villagers and farmers lived. In the event of an attack the king would raise the drawbridge leaving those outside open to attack, but these were acceptable losses to protect the crown jewels. Even if the castle walls were breached the crown jewels would remain protected within the Keep.
In today’s security landscape, businesses need to identify what their crown jewels are and protect them accordingly by moving them to the digital equivalent of a Keep. Similarly, they also need to identify what should remain within the village, or even within the castle walls, and be prepared to lose that in the event of a major attack.
Effective security requires rigorous and regular risk assessment exercises, particularly as business environments, technology, and security threats all change so quickly today. These risk assessments should be supported by good security policies outlining the security controls required to manage the identified risks. Key to a resilient approach to security is having an effective incident response plan in place, so that when an attack happens the business can still function and survive.
It is time we moved away from designing our security infrastructure solely to avoid failure, and acknowledged and accepted that failure will happen. How we deal with that failure will determine how well our organizations can recover from security incidents. Instead of looking at how to avoid failure, we need to learn that failure is an option. What is not an option is not being resilient enough to recover from and survive such a failure.
Brian Honan is an independent security consultant based in Dublin, Ireland, and is the founder and head of IRISSCERT, Ireland's first CERT. He is a Special Advisor to the Europol Cybercrime Centre, an adjunct lecturer on Information Security in University College Dublin, and he sits on the Technical Advisory Board for several information security companies. He has addressed a number of major conferences, is the author of ISO 27001 in a Windows Environment, and is co-author of The Cloud Security Rules. | <urn:uuid:bc615fe1-3043-4252-84ce-bb948d535566> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2014/07/31/failure-is-an-option/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00518-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965911 | 1,036 | 2.5625 | 3 |
An advance health care directive, also known as living will, personal directive, advance directive, or advance decision, is a legal document in which a person specifies what actions should be taken for their health if they are no longer able to make decisions for themselves because of illness or incapacity. In the U.S. it has a legal status in itself, whereas in some countries it is legally persuasive without being a legal document. A living will is one form of advance directive, leaving instructions for treatment. Another form is a specific type of power of attorney or health care proxy, in which the person authorizes someone to make decisions on their behalf when they are incapacitated. People are often encouraged to complete both documents to provide comprehensive guidance regarding their care. An example of combination documents includes the Five Wishes in the United States. The term living will is also the commonly recognised vernacular in many countries, especially the U.K. Wikipedia.
Jeffery D.R.,Advance Health
Therapeutic Advances in Chronic Disease | Year: 2013
The development of new pharmacologic agents for the treatment of multiple sclerosis (MS) and advances in testing for exposure to the JC virus have led to changes in the treatment of MS. In addition several new agents are in late stage development for MS and their entry onto the market will provide additional treatment options. In 2012 and in early 2013, it is likely that both teriflunomide and BG-12 will be approved by the United States Food and Drug Administration (FDA) for the treatment of relapsing forms of MS. The therapeutic environment has already changed and is likely to change rapidly over the next several years. Fingolimod was the first oral agent approved for the treatment of MS and this agent is now widely used in patients intolerant of injections and the side effects associated with the older platform therapies. In many settings it is also used as a first-line agent. Owing to the risk of progressive multifocal leukoencephalopathy, natalizumab had previously been reserved for patients with active disease who were intolerant of first-line agents or patients who were worsening despite standard therapy. With the availability of JC virus antibody testing, natalizumab is now being used as a first-line agent in patients negative for JC virus antibodies. Teriflunomide and BG-12 will become available in the next year. Both agents have suitable efficacy and a favorable safety and tolerability profile. There are advantages and disadvantages associated with all of the oral agents. In this article we summarize the clinical trial results regarding the efficacy and safety of the oral agents and discuss the changes that are already taking place in the therapeutic landscape for MS. © The Author(s), 2012. Source
Hopp F.P.,Wayne State University |
Martin L.,Wayne State University |
Zalenski R.,Advance Health
Social Work in Health Care | Year: 2012
This study addresses the need for more information about how urban African-American elders experience advanced heart failure. Participants included 35 African Americans aged 60 and over with advanced heart failure, identified through records from a community hospital in Detroit, Michigan. Four focus groups (n = 13) and 22 individual interviews were conducted. We used thematic analysis to examine qualitative focus groups and interviews. Themes identified included life disruption, which encompassed the sub-themes of living scared, making sense of heart failure, and limiting activities. Resuming life was a contrasting theme involving culturally relevant coping strategies, and included the sub-themes of resiliency, spirituality, and self-care that helped patients regain and maintain a sense of self amid serious illness. Participants faced numerous challenges and invoked a variety of strategies to cope with their illness, and their stories of struggles, hardship, and resilience can serve as a model for others struggling with advanced illness. © 2012 Copyright Taylor and Francis Group, LLC. Source
Veredas F.,University of Malaga |
Mesa H.,Advance Health |
Morente L.,Area de Enfermeria Comunitaria
IEEE Transactions on Medical Imaging | Year: 2010
A pressure ulcer is a clinical pathology of localized damage to the skin and underlying tissue caused by pressure, shear, or friction. Diagnosis, treatment, and care of pressure ulcers are costly for health services. Accurate wound evaluation is a critical task for optimizing the efficacy of treatment and care. Clinicians usually evaluate each pressure ulcer by visual inspection of the damaged tissues, which is an imprecise manner of assessing the wound state. Current computer vision approaches do not offer a global solution to this particular problem. In this paper, a hybrid approach based on neural networks and Bayesian classifiers is used in the design of a computational system for automatic tissue identification in wound images. A mean shift procedure and a region-growing strategy are implemented for effective region segmentation. Color and texture features are extracted from these segmented regions. A set of κ multilayer perceptrons is trained with inputs consisting of color and texture patterns, and outputs consisting of categorical tissue classes which are determined by clinical experts. This training procedure is driven by a κ-fold cross-validation method. Finally, a Bayesian committee machine is formed by training a Bayesian classifier to combine the classifications of the κ neural networks. Specific heuristics based on the wound topology are designed to significantly improve the results of the classification. We obtain high efficiency rates from a binary cascade approach for tissue identification. Results are compared with other similar machine-learning approaches, including multiclass Bayesian committee machine classifiers and support vector machines. The different techniques analyzed in this paper show high global classification accuracy rates. Our binary cascade approach gives high global performance rates (average sensitivity = 78.7%, specificity = 94.7%, and accuracy = 91.5%) and shows the highest average sensitivity score (86.3%) when detecting necrotic tissue in the wound. © 2010 IEEE. Source
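As a much-simplified illustration of the committee idea (the paper trains a Bayesian classifier to combine the κ networks; the sketch below merely averages the per-class probabilities produced by the individual classifiers and picks the largest, and the class names and constants are invented):

#define K_MODELS 5    /* number of classifiers in the committee      */
#define N_CLASSES 4   /* e.g. necrotic, slough, granulation, healing */

/* Average the class probabilities produced by the K classifiers for one
   image region and return the index of the winning tissue class. */
int committee_class(const double probs[K_MODELS][N_CLASSES]) {
    double avg[N_CLASSES] = { 0.0 };
    for (int m = 0; m < K_MODELS; m++)
        for (int c = 0; c < N_CLASSES; c++)
            avg[c] += probs[m][c] / K_MODELS;

    int best = 0;
    for (int c = 1; c < N_CLASSES; c++)
        if (avg[c] > avg[best])
            best = c;
    return best;
}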
Rivera V.M.,Baylor College of Medicine |
Jeffery D.R.,Advance Health |
Weinstock-Guttman B.,State University of New York at Buffalo |
Bock D.,EMD Serono, Inc. |
Dangond F.,EMD Serono, Inc.
BMC Neurology | Year: 2013
Background: Registry to Evaluate Novantrone Effects in Worsening Multiple Sclerosis (RENEW) was a 5-year, phase IV study in which the safety of Mitoxantrone was monitored in a patient cohort from the United States (US). The objective of the study was to evaluate the long-term safety profile of Mitoxantrone in patients with secondary progressive multiple sclerosis (SPMS), progressive relapsing multiple sclerosis (PRMS), and worsening relapsing-remitting multiple sclerosis (RRMS).Methods: Overall, 509 patients (395 SPMS, 81 worsening RRMS, 33 PRMS) were enrolled and treated at 46 multiple sclerosis (MS) treatment centers located in the US. Patients received Mitoxantrone in accordance with the package insert every 3 months. During the treatment phase, patients received laboratory workups and cardiac monitoring every 3 months and then annually for a total of 5 years.Results: Five hundred and nine subjects were enrolled in this trial and received at least one infusion of Mitoxantrone. Overall, 172 (33.8%) completed the 5-year trial (i.e., participated for 5 years ± 3 months [treatment + follow-up]); 337 (66.2%) did not complete the 5-year trial. Annual follow-up data were available for 250 of 509 enrolled patients. Left ventricular ejection fraction reduction under 50% was reported in 27 (5.3%) patients during the treatment phase (n = 509) and 14 (5.6%) patients during the annual follow-up phase (n = 250). Signs and symptoms of congestive heart failure were observed in 10 (2.0%) patients (six during treatment phase and four during the annual follow-up phase). Post-hoc analyses of the risk for cardiotoxicity outcomes revealed that cumulative dose exposure is the primary risk factor associated with the risk of cardiac toxicity with Mitoxantrone. Therapy-related leukemia was reported in three (0.6%) patients who received total cumulative Mitoxantrone doses of 73.5 mg/m2, 107.3 mg/m2, and 97.1 mg/m2 respectively. During the treatment phase, persistent amenorrhea developed in 22% (28/128) of women with regular menses and 51% (25/49) of women with irregular menses at baseline. During the annual follow-up phase, persistent amenorrhea developed in 5% (4/73) of women with regular menses at baseline.Conclusion: RENEW results are consistent with the known safety profile of Mitoxantrone, and provide additional long-term safety data for Mitoxantrone in MS patients. © 2013 Rivera et al.; licensee BioMed Central Ltd. Source
Advance Health | Date: 2013-12-30
A medical screening system includes: a mobile device including: a user interface embodied in an input/output system; a processor; and a memory in communication with the processor, including stored instructions that, when executed by the processor, cause the processor to: provide a medical screening intake system that receives objective medical data, subjective medical data, and medical test results for a person as input through the user interface; receive the objective medical data, subjective medical data, and medical test results for the person as input through the user interface; automatically process the objective medical data, subjective medical data, and medical test results to generate one or more plans of care specifically adapted for the person; and communicate the one or more plans of care specifically adapted for the person to the person as a report embodied in a structured data output. | <urn:uuid:a4a64768-a9d2-4573-9bcc-27877cc88c4a> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/advance-health-489223/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00362-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945916 | 2,043 | 2.640625 | 3 |
By Samprita Thota, Research Analyst
Whatever is a harmonic filter? What are harmonics? What does it mean to have harmonics in my system? Why do I need to filter harmonics? Are there different types of harmonic filters? This article intends to provide an overview on harmonic filters and hopefully answer some of the questions that you might have on them.
Introduction to Harmonic Filters and Harmonics
A harmonic filter is used to eliminate the harmonic distortion caused by appliances. Harmonics are currents and voltages at integer multiples of the fundamental frequency of 60 Hz, such as 120 Hz (2nd harmonic) and 300 Hz (5th harmonic). Harmonic currents provide power that cannot be used and also take up electrical system capacity. Large quantities of harmonics can lead to malfunctioning of the system, resulting in downtime and increased operating costs. The second harmonic has a frequency of 120 Hz, the third harmonic 180 Hz, and so on.
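To make the arithmetic concrete, the sketch below lists the harmonic frequencies as integer multiples of the 60 Hz fundamental and computes total harmonic distortion (THD) as the square root of the sum of the squared harmonic magnitudes, divided by the fundamental magnitude. The current values are invented example numbers, not measurements.

#include <stdio.h>
#include <math.h>

int main(void) {
    const double fundamental_hz = 60.0;
    /* Invented example current magnitudes in amperes: index 0 is the
       fundamental, index n-1 is the nth harmonic. */
    const double amps[] = { 100.0, 2.0, 35.0, 1.5, 20.0, 1.0, 12.0 };
    const int n = sizeof amps / sizeof amps[0];

    double sum_sq = 0.0;
    for (int h = 2; h <= n; h++) {
        printf("Harmonic %d: %.0f Hz, %.1f A\n", h, h * fundamental_hz, amps[h - 1]);
        sum_sq += amps[h - 1] * amps[h - 1];
    }
    printf("THD = %.1f %%\n", 100.0 * sqrt(sum_sq) / amps[0]);
    return 0;
}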
Inside the Harmonic Filter
The harmonic filter is built using an array of capacitors, inductors, and resistors that deflect harmonic currents to the ground. Each harmonic filter could contain many such elements, each of which is used to deflect harmonics of a specific frequency.
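Each tuned branch is, in essence, a series LC circuit whose resonant frequency f = 1 / (2π√(LC)) is chosen to sit at the harmonic it is meant to absorb. A minimal sketch of that relationship follows, with purely illustrative component values sized to land near the 5th harmonic (300 Hz).

#include <stdio.h>
#include <math.h>

/* Resonant frequency of a series LC branch: f = 1 / (2 * pi * sqrt(L * C)). */
static double lc_resonant_hz(double inductance_h, double capacitance_f) {
    const double pi = 3.14159265358979;
    return 1.0 / (2.0 * pi * sqrt(inductance_h * capacitance_f));
}

int main(void) {
    double L = 2.8e-3;   /* henries, illustrative value */
    double C = 100e-6;   /* farads, illustrative value  */
    printf("Branch tuned to about %.0f Hz\n", lc_resonant_hz(L, C));
    return 0;
}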
The Cause and the Effect
Harmonic distortion is caused by equipment that presents non-linear loads. These loads use current in a pulsing manner and at times feed harmonic currents back into the wiring. In non-linear loads, the current waveform is different from the applied voltage waveform. This causes them to produce the following: | <urn:uuid:1e55842b-73c6-415b-bf80-a4768cc6ca75> | CC-MAIN-2017-04 | http://powersupplies.frost.com/sublib/display-market-insight-top.do?id=SBRD-5LLMAB | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00088-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.951478 | 325 | 3.734375 | 4 |
On June 19, 2009, all major terrestrial television broadcasters were required to switch from analog to digital transmissions (though many had already made the switch by that point). People scrambled to get their rebate coupons from the FCC for digital conversion boxes, and there was more than a bit of grumbling when people discovered that they no longer could get a signal strong enough to show a picture. (Cable and satellite companies were more than happy to sign them up for a subscription, however.) Fortunately, all that is in the rear-view mirror as we all merrily cruise down the highway of digital television.
Not so fast. The fact is that there are still thousands of television stations in this country still broadcasting analog signals. These are low-power stations and repeater stations that are used in rural areas to relay signals into sparsely-populated areas. But the FCC has finally turned its attention to these, and last week, it adopted a “Further Notice of Proposed Rulemaking and Memorandum Opinion and Order” that — among other things — establishes the summer of 2012 as the deadline for these low-power stations to convert to digital transmissions.
How big a problem can this be? According to the FCC document, there are more than 7,500 of these low power stations. Only a bit more than half of them have made the switch to digital broadcasts, which still leaves more than 3,000 stations that need to make the transition. Why bother? It's the same problem as with the full-power stations, except on a smaller scale. Digital transmissions require less radio spectrum, which frees up frequencies for other applications. In part, this will allow emergency services such as police, fire, and ambulance services to upgrade their communications systems. The issue has taken on additional importance with the National Broadband Plan, which calls for the use of part of the radio spectrum for wireless broadband services.
The FCC is taking comments for 60 days after the document is published in the Federal Register, with reply comments for an additional 30 days after that. | <urn:uuid:0fede19c-668a-44b5-829f-94ff30532c96> | CC-MAIN-2017-04 | https://hdtvprofessor.com/HDTVAlmanac/?p=1312 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00088-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.974647 | 416 | 2.578125 | 3 |
Enterprises that fail to keep their public Web sites current risk losing customers to more up-to-date competitors. According to a Stanford University study on Web site credibility, users often judge a site based on currency of information. If customers can tell that a Web site has not been updated regularly, they will lose confidence in the site's owner and opt for an alternative.
Risk of Misinformation
In addition to losing credibility, stale Web sites risk providing users with outdated information. If the site lists products or services that are no longer available, customers will feel that they have been misled. If the site contains health or safety information that is no longer accurate, litigation may ensue. | <urn:uuid:d3ceb939-3b32-4d41-8e69-d89a54910beb> | CC-MAIN-2017-04 | https://www.infotech.com/research/how-fresh-is-your-web-content | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00482-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942342 | 138 | 2.53125 | 3 |
Laberge M.,University of Montreal |
Laberge M.,Ste Justine Research Center |
Laberge M.,University of Toronto |
Laberge M.,CINBIOSE Research Group |
And 4 more authors.
Safety Science | Year: 2014
Young workers are frequently injured at work. Education and awareness strategies to prevent injuries among young workers are common but they are often ineffective. These approaches emphasize teaching, rather than learning strategies, and appear to contradict recent competency-based developments in education science. This study aimed to gain insight into the actual safety skills learning process of adolescents in an internship in a high school vocational training program. The results are based on auto and allo-confrontation interviews from an ergonomics intervention study with nine apprentices and five experienced coworkers involved in the training. This technique is suited to obtaining qualitative data on work activities; it consists of interviewing apprentices and co-workers about videotaped work observations to capture the thought processes behind their action. The findings reveal that learning in an actual situation poses challenges because working conditions and also learning conditions are not always optimal. Such conditions prompt apprentices to develop novel strategies to manage unexpected situations. At times, this involved side-stepping a safety rule in order to meet work demands. The use of an ergonomics actual work activity approach allowed the merging of two research topics rarely found together: the socio-ecological paradigm in education and the development of original interventions to prevent occupational injuries among young workers. This intersection of educational theory and injury prevention strategies provides new avenue for improving vocational training programs and developing primary prevention interventions in occupational health and safety programs that target youth. © 2014 The Authors. Source | <urn:uuid:6f6f3af7-45a9-42d8-8369-eff4462e50cb> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/cinbiose-research-group-2653441/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00206-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.926143 | 339 | 2.9375 | 3 |
A group of students at the University of Texas at Austin built and successfully tested a GPS spoofing device to remotely redirect an $80 million yacht onto a different route, the Houston Chronicle reports. The project, which was completed with the permission of the yacht's owners in the Mediterranean Sea this past June, is explained in the video below.
Because the yacht's crew relies entirely on GPS signal for direction, the students were able to lead the yacht onto a different course without the knowledge of anyone on-board. The GPS spoofing device essentially overpowered all other GPS signals until the spoofed signal was the only one that the yacht followed. The yacht's navigation system merely recognized it as another signal, so the yacht changed course without setting off any alarms.
The team then used the GPS spoofing device to convince the ship's crew to redirect onto a different route voluntarily. By changing the signal on the spoofing device, the students led the crew to believe that the ship was drifting off-course to the left. In response, the crew steered the ship to the right, thinking that it would get the ship back on course, when it actually brought the ship off the course entirely.
GPS spoofing is not very common, but it has already raised concerns with international regulators. As this Economist article points out, satellite spoofing is believed to be responsible for a brief daily GPS outage near the London Stock Exchange. The most likely perpetrator, according to the Economist, is a consumer spoofing device used by a delivery driver or anyone concerned that their employer is tracking their driving route.
These consumer spoofing devices, the sale of which has been banned in the U.S., can still be legally purchased in the UK, and are available for as cheap as $78 (£50).
And, of course, North Korea has already experimented with the technology, reportedly blocking GPS signal in South Korea on several occasions. One such attack launched in 2012 affected 1,016 aircraft and 254 ships. | <urn:uuid:8a1cb3c2-c4c8-4604-a242-ff138ac65768> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2225056/opensource-subnet/college-students-hijack--80-million-yacht-with-gps-signal-spoofing.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00418-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.971695 | 402 | 2.59375 | 3 |
COM-MU-NI-CA-TION: noun: \kə-ˌmyü-nə-ˈkā-shən\
The act or process of using words, sounds, signs, or behaviors to express or exchange information or to express your ideas, thoughts, feelings, etc., to someone else
Have you been in a situation where something doesn't go as planned or as intended, and then in the aftermath you find out that it was a miscommunication? It happens every day, with couples, in the workplace and even in technology.
Communication is necessary for the simplest daily tasks, to be successful in school and even needed to win a war. This particular skill set is used in a wide variety and it is important to comprehend the specific process.
The communication process has a few basic elements:
- Source: Object which encodes and sends the message
- Message: Data, words and/or signals
- Channel: Medium or way to transmit the message
- Receiver: Object which receives and decodes the message
Even though we know these basic elements of communication, to do it effectively is a whole different game. There are many things to take into consideration when trying to deliver a message successfully, such as encoding to decoding compatibility, unwanted influences in the channel (better known as noise) and feedback.
For example: In Super Bowl XLVIII, only 12 seconds into the game, Peyton Manning was trying to make some adjustments to the offensive line. He was giving audible commands just a few feet away from the line of scrimmage when suddenly the center snapped the ball; the play ended in a safety.
The noise emitted by the fans in the stadium was such that it prevented the message from being delivered effectively to the offensive line.
It is also good practice to close the loop of effective communication with feedback. This confirms that the message has been received and understood in full.
For example: In a baseball game the catcher signals the pitcher, and then the pitcher answers with a head gesture as a response or feedback.
The catcher uses hand signals to ask for a specific pitch; the pitcher then responds with a head gesture, typically a vertical nod meaning "Yes, that one" or a horizontal shake meaning "Not that one, next…"
In the workplace, if you want to work as a team where everyone understands the goals and the tasks that need to be accomplished, good communication techniques are needed. How many times have you gone to the other side of the building to talk to someone because it was easier than typing an email? This shows that not every communication method is capable of capturing the overall meaning of the message.
What if you work with a team that is spread out across the globe? How would you communicate effectively? Recently I was reading an article by Syed Balkhi, "Effective Communication Tips: Transforming Your Remote Workforce into a Collaborative Unit", where he breaks down some important aspects of communication and the skills needed to have a remote team working cohesively. In the article, Balkhi mentions the ability to listen as a core element of efficient communication. He also breaks down the importance of team building, sharpening email skills, managing stress, and removing barriers in order to build an effective workforce.
Skills such as listening and good encoding are necessary in any kind of situation, and the article offers tips on what to do and what not to do in order to get better. Some of the skills mentioned in Balkhi's article are excellent considerations for a leader and the members of a team to incorporate as common practice. I strive to apply these skills daily and am continuing to grow and learn with my fellow team members.
At Cirrascale, we aim to practice these skills within our company and with our customers every day. This way we can capture the right requirements, develop better products with our partners and provide the best technology solutions possible to our customers.
Overall, there are many ways to deliver a message; however, understanding how and when to do it effectively will be the key to success. | <urn:uuid:5f6b61ed-2e78-4c3d-b93a-537d584c1ce5> | CC-MAIN-2017-04 | http://www.cirrascale.com/blog/index.php/com-mu-ni-ca-tion/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00262-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.960042 | 839 | 3.953125 | 4 |
#include <screen/screen.h>
int screen_destroy_buffer( screen_buffer_t buf );
- buf: The handle of the buffer that you want to destroy. This buffer must have been created with screen_create_buffer().
This function destroys the buffer object associated with the buffer handle. Any resources created for this buffer will be released. The buffer handle can no longer be used as argument in subsequent screen calls. The actual memory buffer described by this buffer handle is not released by this operation. The application is responsible for freeing its own external buffers. Only buffers created with screen_create_buffer() should be destroyed with this function.
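A minimal usage sketch is shown below. It assumes the handle was obtained from screen_create_buffer(), which fills in the handle through a pointer and returns -1 with errno set on failure, and it omits the code that attaches and later frees the application's own external memory.

#include <screen/screen.h>
#include <stdio.h>

/* Create a buffer handle, let the application attach and use its own memory,
   then destroy the handle once it is no longer needed. */
int make_and_release_buffer(void) {
    screen_buffer_t buf;

    if (screen_create_buffer(&buf) == -1) {
        perror("screen_create_buffer");
        return -1;
    }

    /* ... attach external memory to the buffer and use it here ... */

    if (screen_destroy_buffer(buf) == -1) {
        perror("screen_destroy_buffer");
        return -1;
    }
    return 0;   /* the external memory itself must still be freed by the application */
}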
If the function succeeds, it returns 0 and the buffer is destroyed. Otherwise, the function returns -1 and errno is set. | <urn:uuid:fa2aa392-3ac9-49d0-9a08-4b9d713b40e5> | CC-MAIN-2017-04 | https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.screen.lib_ref/topic/rscreen_destroy_buffer.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00564-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.833776 | 159 | 2.703125 | 3 |
Typhoons, volcanoes, tsunamis and other natural phenomena can boil up at any moment and hit islands in the Pacific and Indian oceans, attacking them with a deadly vengeance.
To alert residents of these vulnerable islands to potential danger, the Pacific Disaster Center (PDC), a federally funded nonprofit organization that works closely with state and local governments, uses geo-spatial information and mapping. These tools equip the PDC to warn island residents of approaching natural disasters and manage their aftermath. Located on Maui, Hawaii, the PDC's 45-member staff watches the islands in more than 1.5 million square miles of territory that includes Japan, Australia, Singapore and Alaska.
Island populations present special challenges for emergency response personnel, said James Buika, director of customer application support and training for the PDC. Streets and street addresses are rare in some areas, so first responders rely on GIS maps to direct them to people who may be in danger. For example, GIS images can guide emergency workers to homes that would otherwise be hidden from view by showing tin roofs buried beneath jungle foliage.
"The islands are isolated and have limited resources," Buika said. "This is an attempt to mitigate some of the problems. Everything in a disaster is spatially related." The graphic nature of maps also creates a universal language that overcomes the communication barriers and high illiteracy rates that characterize some island populations, he added.
The PDC's Web site tracks storm movements using satellite imagery from government and commercial sources. The site is updated every minute and new weather events are continually posted by organizations accessing the service, according to Ken Burton, the PDC's technical lead for information systems development. The Java-based service uses ESRI's ARC/IMS GIS software running on a Sun server.
"You have to belong to an organization and have special privileges to post to the site," Burton explained. Typical users are state and local governments, federal agencies, and avid weather watchers. Tapping this resource, weather-watchers can track storms brewing throughout the vast region and, if necessary, issue advance warnings of events from tropical cyclones to volcanic eruptions. The Web site also permits users to post information, send messages, write situation reports and access other resources.
The PDC database includes high-resolution aerial photography, current GIS/vector information and Digital Raster graphics that cover activity in Hawaii and the Pacific-Indian Ocean regions. The site allows registered users to submit queries, collaborate and customize tools within the application.
Along with information submitted by authorized users, the site contains data from the National Oceanographic and Atmospheric Administration (NOAA), Joint Typhoon Warning Center, NASA and National Weather Service.
The PDC also focuses its considerable GIS resources on risk assessment. Buika said the technology is valuable for helping local governments make long-range planning decisions. "We have the critical mass to really look at risk assessment and vulnerability assessment for local communities and regional assessments as well, so people can understand what the hazards really are," he said. "We can provide enough science and engineering information at the detail level for decision-makers to have all the information they need."
For instance, GIS maps can illustrate how a tsunami wave might "bounce" around an island before receding. "It is one thing to talk about what might happen," Buika said. "But when it does happen there can be death and destruction. With our modeling program, we can say how many casualties there might be, how many people will be homeless or in need of shelter, which hospitals could be out because of [poor] structural integrity. It is a loss-estimation model, so that planners will have a very good understanding of the scope of an event."
GIS maps provide valuable insight into other matters, as well. "There are lots of growth issues with islands," Buika said. "We've created lots of problems in our cities and, with this technology, we can present solutions."
Using GIS maps of various proposals helps avoid misunderstanding, he said. "It's had tremendous impact on the [Maui] City Council. Just like a county's general plan, now there is a digital general plan."
In Maui, maps and high-resolution images from PDC help the island cost-effectively assess the impact of growth, perform regional planning and prepare for the unexpected.
Saving money is a vital concern, given the current economic downturn, said Bill Medeiros, GIS manager for Maui County. "The economy has been tough since the Gulf War, and our economy is tourist-driven," he said. Using information from PDC has helped the jurisdiction operate more efficiently.
"We recently had an outbreak of Dengue fever," Medeiros explained. "With GIS from the Pacific Disaster Center and our own resources we were able to track the mosquito and focus our efforts where they were needed. The epidemic went away. But, GIS didn't take over the project. It was a tool."
Medeiros also agrees that visuals provided by GIS data facilitate political deliberations. "It takes some of the contention out if we can show the community the factual data, like aerial photos of property, and have them focus on the issue and not argue over details," he said. Recently, GIS images helped Maui planners identify historic rock walls located within an area slated for development.
GIS maps also are used to create a baseline assessment tool to project the impact - economic and physical - that natural events might have on a community. Estimates of the extent and location of potential damage can help first responders such as civil defense, FEMA and Red Cross workers prepare for a variety of disasters.
On populated islands like Oahu, Maui and Kauai, GIS modeling might address issues such as lost tourism dollars and infrastructure damage. But for small islands isolated from the business of the 21st century, the threats lean toward saltwater inundating the land and ruining farmers' crops, or high winds destroying simply constructed homes.
GIS weather tracking can be a vital tool for island populations, where climate conditions can have a huge impact on livelihoods and survival.
"El Nino in the states is rain; on the islands it's drought," Buika said. "There is no water to replenished aquifers and people have to scramble. These are very fragile ecosystems. Man wasn't meant to live out there. People there depend upon outside assistance."
An Active Role
The PDC also plays a role in managing active emergencies.
"We have the people and tools in place at State Civil Defense in Oahu," Buika said. "During events - whether real or exercises - we will have our people providing information on road closures and shelters and such."
With homeland defense topping the agendas of many government leaders, PDC has begun offering systems to detect and model potential bio-terrorism attacks. At the same time, the organization is addressing humanitarian concerns resulting from the war on terrorism.
Working with other groups, PDC developed an international Web site for dynamic geo-spatial information that supports relief efforts in Afghanistan. Buika said that relief workers, unfamiliar with the country's rugged terrain, may access detailed maps of roads and pathways leading to isolated camps.
Over Hawaii, under more idyllic conditions, PDC's technology can provide images of the islands from distant space that capture amazing detail. "You can see divots on the golf course," Buika joked. "You can see whales breaching off the shore." | <urn:uuid:c307e51a-d77f-4f33-be24-fceca294fde3> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/Eye-in-the-Sky.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00106-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947186 | 1,543 | 2.953125 | 3 |
http://www.gcn.com/vol1_no1/daily-updates/37840-1.html By Rob Thormeyer GCN Staff 12/28/05 The National Institute of Standards and Technology released a revised cryptography manual that gives federal cybersecurity officials guidance on how to encrypt and protect sensitive data. NIST issued the revised Special Publication 800-21-1 - first released in 1999 - to help government organizations as they comply with the Federal Information Security Management Act of 2002, which requires agencies, among other things, to certify and accredit their IT systems. The report "is intended to provide a structured, yet flexible set of guidelines for selecting, specifying, employing and evaluating cryptographic protection mechanisms in federal information systems - and thus, makes a significant contribution toward satisfying the security requirements of" FISMA, NIST said. In particular, the report gives agencies guidance on selecting cryptography products, including performing a risk assessment and identifying security regulations and policies that are applicable to the agency and system. NIST tailored the report for federal managers who are responsible for designing, procuring, installing and operating computer security systems. "The goal is to provide these individuals with sufficient information to allow them to make informed decisions about the cryptographic methods that will meet their specific needs to protect the confidentiality, authentication and integrity of data that is transmitted and/or stored in a system or network," the report said. http://csrc.nist.gov/publications/nistpubs/800-21-1/sp800-21-1_Dec2005.pdf
This archive was generated by hypermail 2.1.3 : Fri Dec 30 2005 - 19:29:29 PST | <urn:uuid:a22e7cd6-76f8-420f-97b4-b9ca4df7ff7f> | CC-MAIN-2017-04 | http://lists.jammed.com/ISN/2005/12/0114.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00408-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.905427 | 393 | 2.609375 | 3 |
Preserving Tissues and Organs
Body tissues such as blood vessels, cartilage and skin -- and organs such as kidneys, livers and hearts -- could become more available for transplants because of a new way to chill body tissues and organs below freezing without forming ice crystals in them.
Tissue "vitrification" chills tissue and organs to a disordered, glasslike solid without ice formation. Organ Recovery Systems of Chicago developed it with support from the National Institute of Standards and Technology (NIST) and the National Institutes of Health.
Organs and some tissues presently are stored for short periods at refrigerator temperatures. Freezing has not been possible before now due to ice crystals, which damage delicate cells and greatly reduce tissue viability or functions. -- NIST
Doctors conducted more than 24,000 organ transplants in the United States in 2002.
Someone is added to the donor waiting list every 12 minutes, and 16 people die each day waiting for an organ transplant. -- National Institute of Standards and Technology
Tag, You're It
Wannado City, an indoor role-playing theme park for children, uses RFID tags to track people's whereabouts in its 140,000-square-foot facility in Sunrise, Fla.
Visitors receive a WannaFinder plastic wristband, a hybrid wireless bracelet that combines a passive, low-frequency transponder and an active RFID tag from Texas Instruments.
The wristband communicates information, including a person's location, via radio signals to a series of reading devices. The information is accessible through WannaFinder touchscreen kiosks scattered throughout the park, where group members access the real-time location of other group members on a map of the park by scanning their wristbands at any kiosk. -- Texas Instruments
An unpatched Windows PC connected to the Internet will last only about 20 minutes, on average, before it's compromised by malicious software, according to the Internet Storm Center, which is part of the SysAdmin, Audit, Network, Security Institute.
That figure is down from around 40 minutes, the center's estimate in 2003. The drop from 40 minutes to 20 minutes is worrisome -- the average "survival time" is not long enough for a user to download the patches that would protect a PC from Internet threats.
Ivan by Satellite
This image of Hurricane Ivan, a Category 4 storm, was collected by ORBIMAGE's OrbView-2 SeaWiFS satellite on Wednesday, Sept. 15, 2004, at approximately 2:30 p.m. EST. | <urn:uuid:09973f22-dbdc-4a55-b73b-4078d459ba49> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/100496349.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279176.20/warc/CC-MAIN-20170116095119-00227-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.915328 | 521 | 2.984375 | 3 |
An optical multimeter is a combination of an optical power meter and a fiber optic light source. It is used to measure the optical power loss of an optical fiber link.
There are the following two kinds of optical multimeter.
1. A separate optical power meter and a separate fiber optic light source.
2. The combination of optical power meter and fiber optic light source, a whole integrated test system.
These instruments can be two separate instruments, and may be a single integrated unit. In short, the two types of optical multimeter has the same measurement accuracy. The difference is usually the cost and performance. Integrated optical multimeter usually has mature function with a variety of performance but the price is higher.
From a technical point of view to evaluate various optical multimeter configuration, basic optical power meter and fiber optic light source standard is still applied. You should pay attention to the selection of the correct type of light source, the operating wavelength, optical power meter, probe, and dynamic range.
In short-distance local area network (LAN), when the endpoint distance is in walking or talking ambit, the technical staff can successful use the of economic combination of optical multimeter at any one end, fiber optic light source at one end and optical power meter at the other end. For the system of long-distance network, the technical staff should equipment the complete portfolio or integrated optical multimeter at each end.
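The loss calculation itself is simple arithmetic: with the source's reference level and the far-end reading both expressed in dBm, the link loss in dB is their difference. The sketch below shows that, plus the milliwatt-to-dBm conversion; the readings are invented for illustration.

#include <stdio.h>
#include <math.h>

/* Convert a power level in milliwatts to dBm. */
static double mw_to_dbm(double mw) {
    return 10.0 * log10(mw);
}

int main(void) {
    double reference_dbm = -7.0;    /* source level measured through the reference jumper */
    double received_dbm  = -10.4;   /* power meter reading at the far end of the link     */

    printf("Link loss: %.1f dB\n", reference_dbm - received_dbm);
    printf("0.5 mW is %.1f dBm\n", mw_to_dbm(0.5));
    return 0;
}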
FiberStore supplies several optical multimeters; the JW3207 Handheld Optical Multimeter is one example. It integrates the functions of an intelligent optical power meter module and of a highly stable light source module in one unit, which can perform closed-loop tests by incorporating both modules. Individual regimes of operation can also be chosen manually, using menu operation to switch functions, a combination that makes your optical fiber tests a lot more convenient. It has a longer battery operating time than the JW3204 optical multimeter, and a wider measurement range than the JW3204A.
1. User self-calibration function
2. Automatic shutdown function set
3. The high stability output of multi-wavelength single-mode (multi-mode) laser
4. Rechargeable lithium battery, reduce user costs
5. Includes all the outstanding functions of handheld intelligent power meter (JW3206)
6. Includes all the outstanding functions of handheld stable light source (JW3108)
7. Switching of the power meter function and that of the light source by menu operation
8. Different light sources and power meters can be built into JW3207 | <urn:uuid:9316a3a1-92ee-479c-a461-a1147ca0e27e> | CC-MAIN-2017-04 | http://www.fs.com/blog/introduction-of-optical-multimeter.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00071-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.844257 | 525 | 2.578125 | 3 |
Three computer experts from Princeton University have found major security flaws in a popular electronic voting machine. Through analysis of the machine's hardware and software, the researchers believe that the Diebold AccuVote-TS voting machine, which is slated for use in 375 counties in the November 2006 elections, is vulnerable to criminal attacks.
In a paper published yesterday, Ariel Feldman, J. Alex Halderman and Edward W. Felten explain and demonstrate how easy it is for criminals to introduce malicious software to the machine. In less than one minute, a virus can be introduced which will steal votes, spread from machine to machine through memory cards, and hide its tracks. The software can even delete itself from the machines at the end of elections.
In a demonstration, the researchers held a mock election between George Washington and the notorious Benedict Arnold. By adding the malicious vote-stealing software, an election which should have ended in a 4-1 win for Washington instead left Arnold ahead 3-2. Both the paper print out and the memory card showed the fraudulent results.
According to the paper, voting machines such as this, called Direct Recording Electronic (DRE), are nothing more than "general-purpose computers running specialized election software," of which computer scientists have been skeptical.
The main findings of the study:
- Malicious software running on a single voting machine can steal votes with little if any risk of detection. The malicious software can modify all of the records, audit logs, and counters kept by the voting machine, so that even careful forensic examination of these records will find nothing amiss.
- Anyone who has physical access to a voting machine, or to a memory card that will later be inserted into a machine, can install said malicious software using a simple method that takes as little as one minute. In practice, poll workers and others often have unsupervised access to the machines.
- AccuVote-TS machines are susceptible to voting-machine viruses -- computer viruses that can spread malicious software automatically and invisibly from machine to machine during normal pre- and post-election activity.
- While some of these problems can be eliminated by improving Diebold's software, others cannot be remedied without replacing the machines' hardware. Changes to election procedures would also be required to ensure security. | <urn:uuid:14318a80-e549-4e93-8adf-c4e8a73224ca> | CC-MAIN-2017-04 | http://www.govtech.com/security/Princeton-Researchers-Find-Flaws-in-Electronic.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00373-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940913 | 466 | 2.953125 | 3 |
Let's assume you know nothing about a person except for that during a three-week period, that person called "a home improvement store, locksmiths, a hydroponics dealer, and a head shop." While you still don't know that person, you certainly have insight into their private life that might allow you to infer a certain scenario. All of that sensitive information came from phone metadata surveillance.
The federal government would have you believe that the NSA's mass surveillance of phone metadata doesn't reveal sensitive personal information about citizens. When defending the NSA program, Senator Dianne Feinstein, who was recently accused of "giving hypocrisy a bad name," stressed that collection contains "just metadata" and "no content of a communication." President Obama also claimed the NSA is "not looking at content." When the judge tossed out the ACLU's legal challenge of the NSA's surveillance, he blew off the possibility of metadata inferring sensitive info as a "parade of horribles." But two Stanford grad students proved that metadata can be used to drill down into sensitive details about a person's private life.
Computer scientists Jonathan Mayer and Patrick Mutchler, both grad students at Stanford, have been studying phone metadata privacy since November 2013. 546 volunteers had Android smartphones running a MetaPhone app, which "submits device logs and social network information for analysis." The researchers "matched phone numbers against public Yelp and Google Places directories to see who was being called. From the phone numbers, it was possible to determine that 57% of the volunteers made at least one medical call. 40% made a call related to financial services. The volunteers called 33,688 unique numbers; 6,107 of those numbers, or 18%, were isolated to a particular identity."
Here's another metadata snapshot example: One study participant had "a long, early morning call with her sister. Two days later, she placed a series of calls to the local Planned Parenthood location. She placed brief additional calls two weeks later, and made a final call a month after."
Another participant called "multiple local neurology groups, a specialty pharmacy, a rare condition management service, and a hotline for a pharmaceutical used solely to treat relapsing multiple sclerosis."
Do the examples above infer very sensitive info about the people making the calls? "A pattern of calls will, of course, reveal more than individual call records," Mayer said. "In our analysis, we identified a number of patterns that were highly indicative of sensitive activities or traits."
"Phone metadata is unambiguously sensitive, even over a small sample and short time window," stated Mayer. "We were able to infer medical conditions, firearm ownership and more, using solely phone metadata." He added, "It would be no technical challenge to scale these identifications to a larger population."
The researchers previously "used the MetaPhone dataset to spot relationships, understand call graph interconnectivity, and estimate the identifiability of phone numbers." This time, the researchers used the crowdsourced data to determine that "metadata surveillance can be used to identify information about callers including medical conditions, financial and legal connections, and even whether they own a gun."
"Many organizations have a narrow purpose, such that an individual call gives rise to sensitive inferences," the researchers explained. They found that numerous calls had "straightforward inferences" such as pharmacies, legal services, firearm sales and repair, adult establishments, marijuana dispensaries, religious organizations, political campaigns and financial services. "Many numbers were associated with specialized products or services, particularly within professional fields," they wrote before further breaking down the medical category into specialty practice areas.
The degree of sensitivity among contacts took us aback. Participants had calls with Alcoholics Anonymous, gun stores, NARAL Pro-Choice, labor unions, divorce lawyers, sexually transmitted disease clinics, a Canadian import pharmacy, strip clubs, and much more. This was not a hypothetical parade of horribles. These were simple inferences, about real phone users, that could trivially be made on a large scale.
Mayer and Mutchler concluded:
The dataset that we analyzed in this report spanned hundreds of users over several months. Phone records held by the NSA and telecoms span millions of Americans over multiple years. Reasonable minds can disagree about the policy and legal constraints that should be imposed on those databases. The science, however, is clear: phone metadata is highly sensitive.
So all those "just metadata" and "no content" assurances from the President and intelligence agency "experts" are apparently just more of the same...lies, damned lies, as proven by science and the Stanford grad students' research.
Follow me on Twitter @PrivacyFanatic | <urn:uuid:4bb78400-f9c0-4b1d-ba69-a42d76e788f0> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2226526/microsoft-subnet/researchers--phone-metadata-surveillance-reveals-very-personal-info-about-callers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00281-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945788 | 1,158 | 2.671875 | 3 |
This part of the webpage introduces the architecture, structure, functions, components, and models of the Internet and other computer networks. It uses the OSI and TCP/IP layered models to examine the nature and roles of protocols and services at the application, network, data link, and physical layers, and it covers the principles and structure of IP addressing and the fundamentals of Ethernet concepts, media, and operations.
Select an article to get more info:
- - What is route recursion
- - Proxy and Reverse Proxy Server
- - Wildcard mask – What’s the difference from subnet mask?
- - What is internet – How does the internet work and why is so important?
- - How we open a web page – what is going on behind?
- - Network – Computer network
- - IPv6 Day – 6 June 2012.
- - Network – More news about way it works
- - VoIP and IP telephony – Defining Basics of Voice
- - Routing – How are routers working?
- - IP Address – Internet Protocol address: Basic about IPv4
- - Routing – Static and Dynamic routes – What is route?
- - LAN – Local Area Network
- - Collisions and collision detection – What are collisions in Ethernet?
- - Data Packet – IP Packet – What is this packet story all about?
- - MAC address (MAC L2 addressing) – What is this physical addressing?
- - WAN – Wide Area Network
- - What is my IP address | <urn:uuid:ea76e00c-b440-40e6-95df-7dc1aa97b321> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/network-fundamentals-intro | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00125-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.87941 | 314 | 3.953125 | 4 |
In September, NASA and the U.S. Forest Service will be testing technologies to improve wildfire imaging and mapping capabilities. An Altair unmanned aerial system is scheduled to fly a series of four or five missions over the Western United States, and will collect detailed thermal-infrared imagery of wildfires. These tests will demonstrate the ability of unmanned aerial systems to continuously collect data for 20 to 24 hours, as well as demonstrate the mobility, imaging and real-time communications capabilities of NASA's unmanned aerial systems.
"These tests will be a ground-breaking effort to expand the use of unmanned aerial systems in providing real-time images in an actual disaster event," said Vincent Ambrosia, principal investigator of the Western States Unmanned Aerial System Fire Mission at NASA's Ames Research Center, Moffett Field, California. "This is a prime example of NASA science and technology being used to solve real-world problems."
The Altair is commanded and controlled through satellite communications which will allow real-time data transfer of fire imagery to virtually anywhere on Earth. Mission data will be sent from the unmanned aerial system to the National Interagency Fire Center in Boise, Idaho, and then distributed immediately to deployed fire fighters.
For the first time, a NASA sensor system will fly on the Altair. This sensor was built to observe fires and other high-temperature sources and can discriminate temperature differences from less than one-half to approximately 1000 degrees Fahrenheit, which is important to improving fire mapping.
"The success of these tests will help to refine the future direction of fire mapping for the wildfire management agencies," said Everett Hinkley, liaison and special projects group leader for the U.S. Forest Service, Salt Lake City.
Another new technology application being tested during the flights is the Collaborative Decision Environment, originally developed by NASA for the Mars Exploration Rover. It is an interactive tool that will allow sharing vast amounts of mission information during flights which can then be shared and visualized by members of the mission team for effective planning and acquisition of imagery over critical fire events. | <urn:uuid:3da45118-d444-444f-b9cc-352a05eeb125> | CC-MAIN-2017-04 | http://www.govtech.com/policy-management/NASA-and-US-Forest-Service-Test.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00181-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.923676 | 415 | 3.390625 | 3 |
The Internet of Things is amazingly powerful and useful — but not always safe to use, and most organizations with IoT implementations need to do a better job of keeping them secure. That's a message that Jerry Irvine, CIO of the Chicago IT services company, Prescient Solutions, would like his fellow CIOs to take to heart. In an interview with The Enterprisers Project, he explains why.
The Enterprisers Project (TEP): Do IoT devices present more of a security threat than more traditional devices? If so, why?
Irvine: The main reason that IoT devices present such significant security threats is due, in most part, to the specific functions these devices are designed to fulfill. Common IoT devices, such as thermostats, garage door openers and even alarm systems, are typically small form factor devices with very little surface area where chips or other devices can be installed. As a result, only basic functionality such as reporting, monitoring and alerting are included within their programming. Additional functionality and features such as security, unfortunately, are not included within the devices themselves. Basic levels of security such as user ID and password are often required in the device's management applications. But these security measures are usually easy to bypass.
The same kinds of chips used in IoT devices have been performing basic functions for businesses, manufacturing, and power companies for over five decades. These devices are called Industrial Control Systems (ICS), Programmable Logic Controllers (PLC) and Supervisor Control and Data Acquisition (SCADA) devices. Only in the past few years have these monitoring devices and controllers become popular in the residential market.
TEP: Given that barring IoT devices from the workplace network may be impractical, what options do IT leaders have for keeping the network secure from any threats they may bring?
Irvine: Initial designs of IoT devices were developed for companies to monitor production environments and allow for remote control and alerting. Because these devices were designed to perform limited and specific automated functions within the production environment, they were on completely different networks from the business networks and end user access. As business, production, and telephony networks have converged over the past decade, these Industrial Control Systems have been combined onto the same networks as everything else. Because the design and configuration of these chips has remained the same since the early 1950s many vulnerabilities and risks associated with their use have been defined. Whether it is commercial ICS or residential IoT devices, the first step in securing them is complete segmentation from the Internet.
TEP: Many IoT devices send and receive machine-to-machine (M2M) communications and this type of data flow is growing exponentially. Should it be a source of concern for CIOs?
Irvine: Machine-to-machine communication has existed for over 50 years. One of the major issues with this type of communication is the proprietary languages or protocols devices speak to each other. These protocols were not designed to exist in an open environment where they can be seen, copied and retransmitted by anyone who has Internet access. They were designed to work within a closed environment with only trusted users and devices. The security of the protocols has not kept pace with the requirements for allowing communications to and from these devices across the Internet.
As a result, these devices can be specifically targeted to cause multiple problems. Malware exists to cause systems outages, failures and even to physically damage hardware (for example, Stuxnet). Additionally, vulnerabilities within these systems can allow malicious users to gain access to other devices within the internal network causing loss, corruption and theft of data, personally identifiable information, and intellectual property. The Target breach originated from an HVAC vendor.
Until new safer protocols, chips, and applications are created, the safest option is to segment ICS and IoT devices on separate networks with no access to the Internet and limit access and communications to them only to specific computers and users. Any external access to these devices should be encrypted via VPN or some other means of communications.
TEP: What advice would you offer to CIOs about keeping their networks secure in an IoT world?
Irvine: Managing and securing ICS and IoT devices is similar to managing and securing computers, laptops, tablets and mobile devices. Devices need to be kept up-to-date. All vendors create and distribute security patches and applications updates that provide fixes to known bugs, vulnerabilities and other risks associated with old code. Patch management is critical in maintaining a secure device. Devices should also be routinely scanned with antivirus, antimalware or other vulnerability scanning solutions. While ICS and IoT devices do not generally allow for antivirus to be installed on them directly, it should be installed and current on all of the computers and devices accessing them.
Users should create unique user IDs and unique complex passwords for all devices. Many ICS and IoT devices detected in production have the default passwords in place. These passwords can be easily found on the Internet and allow access to the devices as well as to all computers and networks they are attached to.
Segmentation must be implemented using firewalls to limit access to these devices to only known and authorized users. Additionally, segmented devices should be configured to monitor and alert administrators to all unauthorized attempts to communicate to these devices. | <urn:uuid:e32eb169-4641-46f1-9fd4-95401d9da93f> | CC-MAIN-2017-04 | https://enterprisersproject.com/article/2016/2/internet-hackable-things-why-iot-devices-need-better-security | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00181-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956552 | 1,098 | 2.59375 | 3 |
The most popular and comprehensive Open Source ECM platform
3D printing, or additive manufacturing (AM), creates objects from a digital 3d model by printing layers of material from different raw materials.
Recent studies are looking at the impact on the environment of 3D printing technology compared to other manufacturing technologies. Researchers are trying to answer the question of “How Green is 3D printing“?
Jer Faludi, Sustainable Design Strategist and PhD candidate at UC Berkeley, working with members of the UC Berkeley mechanical engineering department, studied and compared the impact of creating objects with 3d printers versus creating similar objects using traditional milling. Their conclusion is that “whether you’re milling or doing 3D printing, how you use the tool is the most important factor in its environmental impact. And there are many opportunities for 3D printers to improve, making huge leaps toward greener manufacturing.”
Faludi’s team found that while traditional injection molding won out in terms of environmental impact over 3D printers for large-scale mass production of items, when considering the usage scenario for the production of a small number of items, 3D printers were generally scored more eco-friendly. Mills generally scored better in their use of energy, while 3D printing scored higher in their efficient use of materials and minimum waste.
Energy usage is one area where 3D printing typically scores poorly compared to traditional manufacturing. Nick Owen, director of 3D Print UK, told RTCC that “[3D printing] is not all it’s cracked up to be in terms of [being] environmentally friendly; in fact it’s a very energy thirsty process.”
A study by Cuboyo found that “on the one hand, classic manufacturing is not adapted for low volume production of different objects in terms of environmental impact. On the other hand, the 3D printing technique cannot compete with injection molding for high volume production… 3D printing technology tends to be ecologically interesting for low volume production (<1000 parts) compared to traditional manufacturing (injection molding).”
Joshua Pearce, associate professor at Michigan Technological University, said that “we can get substantial reductions in energy and CO2 emissions from making things at home. And the home manufacturer would be motivated to do the right thing and use less energy, because it costs so much less to make things on a 3D printer than to buy them off the shelf or on the Internet.” Unlike other studies, Pearce’s team found that, using a basic 3D printer, items could actually be created with 41 to 64 percent less energy than making the same items in a factory and then shipping them to locations in the US. | <urn:uuid:2b79c6bb-d062-48b2-aea5-445a71618e26> | CC-MAIN-2017-04 | http://formtek.com/blog/3d-printing-low-volume-production-by-3d-printing-more-eco-friendly-than-traditional-manufacturing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00575-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.940964 | 555 | 3.984375 | 4 |
Regular Expressions in Track-It! E-Mail Monitor Policies
Parts of this document were taken from original Article ID: TIA03556
Found In: Track-It! 8.0, 9.0, 10.0, 10.5 Email Monitor
- Regular expression examples are shown below. There is also information in the help documentation on email policies and a quick search on Google will generate many websites devoted to regular expression syntax. A regular expression is a text string that is used to match a pattern of text, according to syntax rules. You can create matches by entering .NET regular expressions to represent any text that might be in an incoming e-mail sent to any of your Help Desk e-mail addresses.
- The following examples show how to use Regular Expressions to search for a single word, a single occurance of one of a list of words and the occurance of all specified words in an e-mail subject from a user:
- Single word (StringValue) - This simple example will match an e-mail with a single word anywhere in the subject and is not case sensitive.
A Regular Expression rule for matching a subject containing the word printer is shown below:
Entering just the word printer will match e-mails with the following subjects:
"User needs a new printer"
"My printer needs more ink"
"Can you tell me why my PRINTER won't work?"
- Multiple words (StringValue1|StringValue2|StringValue3) - OR - This example will match an e-mail where any one of the words specified is found in the subject. The pipe delimiter | acts as an OR condition in this case. This format will match an e-mail with StringValue1, StringValue2, OR StringValue3 anywhere in the subject and is not case sensitive.
A Regular Expression rule for matching a subject containing one of the words Printer|Desktop|Laptop as shown below:
Entering this phrase Printer|Desktop|Laptop will match e-mails with the following subjects:
User needs a new laptop
My printer needs more ink
Can you tell me why my desktop won't work?
- Multiple words (^(?=.*?\bStringValue1\b)(?=.*?\bStringValue2\b)(?=.*?\bStringValue3\b).*$) - AND - This example will match an e-mail where ALL of the words specified are found within the subject. The syntax above acts as an OR condition and will match an e-mail with StringValue1, StringValue2, AND StringValue3 anywhere in the subject and is not case sensitive.
Entering this phrase for example ^(?=.*?\bres\b)(?=.*?\bnew\b)(?=.*?\bhire\b).*$ will match e-mails with the following subjects:
"RES: John Smith new hire"
"RES: New employee John Smith - hire process start" | <urn:uuid:4cbebc8b-ddaf-48a5-afd6-ba93ed225cd5> | CC-MAIN-2017-04 | https://communities.bmc.com/docs/DOC-19326 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00483-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.829899 | 628 | 2.59375 | 3 |
Brazil's textile and apparel industry is the seventh largest in the world, and is the third largest with regard to knitwear. It turns over US$22 billion per year, produces 7.2 billion garments annually and comprises 30,000 companies employing 1.4 million workers directly and another 1.6 million workers indirectly. Furthermore, it is a growing industry: the past eight years have witnessed US$7 billion in investment, while a further US$12 billion is earmarked for investment over the next seven years. More than 70,000 new jobs have been created since 2000, and the industry is expected to employ upward of 2 million within the next five years.
Additionally, during 2001, the balance of trade turned positive, with the industry exporting more than it imported. The textile and clothing industry in Brazil owns and operates some of the most sophisticated factories in the world, with sectors of the industry located throughout each of the 27 states and related to all aspects of the apparel supply chain.
Although it is a well established industry whose history goes back more than 150 years, a closed post-war economy, followed by a flood of cheap imports and massive inflation, kept it from establishing an export industry until just recently. Brazil currently exports an estimated 7 percent of its total production. Forty-three percent of this is manufactured clothing and 57 percent is textiles, including yarn, fabrics and knitwear.
Left to its own devices, the industry has exported between US$1.2 billion to US$1.3 billion per year over the past eight to 10 years, equivalent in today's terms to 0.3 percent to 0.4 percent of the world's market. But now, ABIT, the Brazilian Textile and Apparel Industry Association (www.abit.org.br), backed by the Brazilian government, has stepped in to promote exports. By 2008, ABIT plans to hold 1 percent of the world's growing textile and apparel market, which is predicted to total US$4 billion in exports.
Quotas to the United States are in most cases currently underutilized and in September 2002 the EU export quotas were abolished. This, combined with the fact that recent heavy investment has meant many of the factories are operating under capacity, means Brazil's textile and apparel industry is poised to focus on the global marketplace.
Additionally, the country is planning to expand its domestic market by 50 percent over the next few years, from approximately $20 million to $30 million.
Brazil boasts Latin America's largest GDP and the ninth largest economy in the world, making it a key prospect for international investment. In 2000, it attracted US$33 billion in direct international investment, compared with US$9 million in 1996. Brazil is the world's fifth largest country, with a surface area of 8.5 million square kilometers, and shared borders with every country in South America except Chile and Ecuador. Portuguese is the official language. Almost 82 percent of its 172.6 million inhabitants live in the major cities. Brasilia is the capital, while So Paulo is the economic, industrial and financial heart of Brazil, accounting for 33 percent of total GDP and 22 percent of total population. The average income of the total population (GNI) equates to US$3,060. The Brazilian currency is the real, introduced in July 1994 in conjunction with the "Real Plan," a series of economic measures designed to end spiraling inflation.
Background to Brazil's Textile and Clothing Industry
It's been 150 years since the Brazilian textile supply chain entered the industrial revolution, and the last few years have seen it enter a new revolution, dictated by the demands of a global industry and the necessity of providing quality, innovation and fashion to the international market.
This revolution encompasses the entire supply chain, from the growing of raw cotton - Brazil is entirely self-reliant and grows 900,000 tons per year, a figure expected to increase to 1.2 million tons by 2005 - through to the finished garment. The country boasts globally renown designers such as Alexandre Herchcovitich, Carlos Miele, Walter Rodrigues, Reinaldo Lourenco and Gloria Coelho and international model celebrities such as Gisele Bndchen, Fernanda Tavares and Caroline Rodrigues.
With 173 million Brazilian consumers, the textile and clothing industry is the seventh largest in the world. Nevertheless, since World War II, it has been monopolized by the domestic sector, mainly due to Brazil's internal political and economic fluctuations. During the war, however, the textile industry was the country's second largest industry. In 1945, more than 1 billion meters of cloth were produced. Brazil's per capita consumption of textile products equates to 11.2 kilograms of fabric per year. In developed countries, this statistic reaches 18 kilos per year per person, leaving considerable room for growth. In a cold country, 25 kilos per person annually is achievable.
Today the clothing industry is the largest manufacturing employer within the country. The industry consumes more than 1 million tons of raw material, 60 percent of which are woven; 40 percent knitted. Clothes represent a major portion of the GDP with an annual return of more than $20 billion a year.
The 1990s were a landmark for the textile industry, with government reforms and policy changes. The government opened the country to imports, which attracted cheap Asian apparel that crippled local trade. Thus, countermeasures were introduced. Two actions in particular - the Brazilian Program of Quality and Productivity and the Brazilian Program of Design - have proved the most significant. They have resulted in major investments, quality improvements and cost reductions that have enabled the industry not only to compete domestically but internationally. The introduction of a kind of "Brazilian brand" or consolidated look has heated up the domestic market whilst attracting international recognition. Moreover, fluctuating exchange rates have made textile exports more attractive.
Scope and Diversity of the Industry
While there is a significant silk industry that covers the supply chain from silk worm to finished product, and that rivals Italian quality, 70 percent of all spinning mills utilize cotton as their principal fiber. Gois is the main cotton producing state, followed by So Paulo, though both will soon be superseded by Mato Grosso, where a huge cotton growing growth program is underway. Crossbreeding research conducted in the 1970s produced a fiber length of up to 90 millimeters, the longest in the world and a variety particularly in demand by sewing thread producers.
The weaving industry currently employs 280,000 people and incorporates some of the most modern companies in the world, including Indego Denim, which may be the world's largest denim producer - hardly surprising for the largest jeans-consuming country in the world.
The state of Minas Gerais is home to many of the largest textile producing companies, including Cedro (2,500 tons per month with plans to expand to 3,500 tons), Cachoeira and Santanense, each over a century old. Some newer companies include Coteminas and So Jos, the first Brazilian company to produce fabric with a width of two meters.
Following a massive reinvestment program, So Jos currently produces 45 million square meters of fabric per year, distributed throughout Brazil and the Mercosul nations, with end uses in the clothing (80 percent), industrial (10 percent) and household textiles (10 percent) sectors.
Coteminas began operation 25 years ago and today operates 11 industrial units within four states. The company consumes 96 thousand tons of fiber per year - nearly 12 percent of Brazil's total cotton consumption - to produce yarns, woven and knitted fabric, T-shirts, socks, bath towels, bathrobes, and bed sheets for the domestic market as well as for export to the United States, Europe and other Mercosul countries.
The leading company for silk production is the Werner Textile Plant founded in 1904, which produces approximately 300,000 meters of fabric per month. It not only produces pure silk and mixed silk textiles, but also several types of crepe, including mousson and Chanel, pure silk georgette, taffeta, gazar, gabardine, crepes and satin with Lycra, and grosgrain.
Synthetic fibers and fabrics are produced principally in So Paulo. Rhodia Poliamide, for example, focuses on the nylon polyamide segment. The company employs 2,100 and turns over $250 million per year, half of which comes from the sale of textile filaments. Fibra Du Pont, Vicunha and Polyerika are also key synthetic fiber and yarn producers.
There are approximately 2,300 registered circular knitting enterprises in Brazil, whose main segments employ approximately 49,500 people in the manufacture of socks, underwear, beachwear, and other cut-and-sew garments. As the most important nucleus of flat bed knitting, Caxia do Sul contains 350 to 400 companies, most of which are medium sized (up to 500 employees) and produce about 8.5 million items per year, generating approximately 7,000 jobs.
In Brazil, a large number of private-capital family companies, most of which are small (up to 100 employees), predominate the clothing industry. Many of these operate on a CMT (cut, make and trim) or purely subcontract sewing basis. Most small companies developed out of small sewing studios, tailor shops and enterprises of immigrants who rushed into the textile field. There have been few foreign investors entering the industry.
Larger companies have often emerged as appendices to the textile plants, specializing in a particular type of fabric and interested in expanding their business by verticality, such as Coteminas. Because of the local production of cotton, and the temperatures within Brazil, 75 percent of all clothes made for the local market are cotton. Products of particular importance to the country include jeans, surf and beach wear, woven shirts, knitted shirts such as T-shirts and polo shirts and lingerie and underwear, while children's wear is on the increase. Currently, only about 15 percent of the country's 20,000 clothing companies export. Of those that do, this accounts for approximately 20 percent of their output.
Apart from traditional clothing, the sewing industry is very diverse, with home domestics such as bed and table linen and towels accounting for a significant amount of production. With the domestic market consuming approximately 150 million mostly branded beachwear items per year, Brazil also has become a leader in international beach fashion. The largest part of production - 87 percent - is concentrated in south and southeastern Brazil, primarily in So Paulo and Santa Catarina. Yet there are also niche areas of production. In Nova Friburgo and Petrpolis, for example, more than 1,000 lingerie producers can be found producing between 360 million and 400 million items per year for noteworthy brands including De Millus, Du Loren, Triumph and Valis re. The largest embroidery producer in the world, the Arp Lace Company, also resides in Nova Friburgo.
The city of Santa Catarina specializes in circular knitwear, table linen, batch towels and gowns. Among its producers is Dhler, Brazil's seventh largest textile producer, which exports about a third of its production to more than 40 countries and employs 3,200 direct employees. Its vertical operations utilize 560 state-of-the-art looms, eight printing machines and 60,000 spindles to produce 1,000 tons of yarn per month and more than 6.8 million square meters of clothing. This accounts for an annual sales volume of more than R$120 million (US$36.2 million) to the domestic market and R$45 million (US$13.6 million) in exports. Products cater primarily to the towel and home domestics markets.
Other Santa Catarina companies include Malwee, which employs 3,500 people and produces 2.5 million knitwear items monthly from three units; and Hering Textil S.A, which ranks No. 1 among Brazilian knitwear companies and is the largest T-shirt producer in the world. Hering employs 4,148 people and outputs 5.5 million items per month, 60 percent of which are produced internally. Production requires approximately 1,000 tons of yarn per month. In 1997, the company celebrated the sale of its five-billionth T-shirt.
Due to attractive local government grants and incentives, northeast Brazil is growing in importance as many companies delocalize or expand garment production in this area and accommodate several key textile plants. The state of Sergipe, for example, produced 11,172 tons in 1996. According to the industry, the labor cost in the northeast is 30 percent to 40 percent lower than in the more industrial areas. Around So Paulo, for example, the average sewing machinist wage would equate to around US$165 to US$195 per month, while in the northeast this figure is more likely to be US$125. The country's minimum wage is around US$55 per month.
Due to the closed nature of Brazil's fashion industry, many brands have grown up internally. Zoomp and M.Officer (jeans and casual), Rosa Cha and Lenny (beachwear), Iodice (knitwear, jeans and high-end fashion) are examples of quality, fashion-conscious Brazilian brands.
All of them have begun exporting to a greater or lesser degree, and in addition to the production capability of their country, it is their branding and design that they most wish to promote. Some companies are setting themselves up to do both. Marisol, for example, has a range of well-established and very attractive brands for men, women and children, franchise stores and is also setting up one of its sewing units exclusively for contract production.
Started in 2000, TexBrasil is a strategic export promotion program operated by ABIT, which addresses the entire textile supply chain. Encouraging the designers, the manufacturers and the technological centers to work together, its aim is to offer the best of Brazilian product to the international marketplace. TexBrasil helps with market research, commercial promotion, trade fairs and so forth, as well as international negotiations such as the recent removal of quota from the EU.
Apart from many internal fairs, TexBrasil takes exhibitors to the key international fairs including Texworld, Heimtextil, MAGIC, Pret--Porter, Lyon Mode City, ASR and FIMI, depending on their product offerings. During 2003, for example, TexBrasil will exhibit at more than 25 international trade fairs. Inside the country, So Paulo Fashion Week and Rio Fashion are the most prestigious events for fashion, with 50 of the most important Brazilian designers showing their wares.
Niki Tait, C.Text FTI, FCFI, heads Apparel Solutions, which provides independent assistance to the apparel industry in the areas of manufacturing methods, industrial engineering, information technology, quick response, etc. She can be reached at tel./fax +44 (0) 1237 423163, e-mail: email@example.com. | <urn:uuid:32cb174a-cdff-48f6-b9ef-bf4935d22adf> | CC-MAIN-2017-04 | http://apparel.edgl.com/magazine/February-2003/Brazil-Plans-Major-Increases-in-Global-Apparel-Exports65113 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00391-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949813 | 3,102 | 2.65625 | 3 |
By exchanging TCP with UDP, two MIT researchers have managed to create the State Synchronization Protocol (SSP) – a new Internet protocol more suited to establishing and sustaining the irregular and low-bandwidth connections typical of mobile devices connecting to wireless networks – and Mosh (“mobile shell”), a remote terminal application that implements it in order to guarantee the security of such connections.
SSP and Mosh were presented at last month’s 2012 USENIX Annual Technical Conference, after Mosh having been tested – and found more than adequate – by users who downloaded it for free from MIT’s website since April.
So what makes SSP so fitting?
First off, the Transmission Control Protocol – used by SSH – works under the assumption that the two endpoints it connects are fixed, and that the information exchanged must be received in the same order it was sent.
Obviously, when it comes to mobile connections, at least one of the endpoints will be moving around, shifting between Wi-Fi, computer and cellular networks, and this is something that TCP isn’t equipped to deal with effectively. Consequently, SSH sessions are easily lost.
Also, in real-time communications, the most important information is the most recent one.
“If there’s an outage for five seconds, you don’t want to wait five seconds and have to see what you missed,” Keith Winstein, one of the researching duo, explained for Computerworld. “You just want it to start up again [where you are now].”
But User Datagram Protocol’s stateless nature – perfect for servers answering small queries from huge numbers of clients – and its transmission model that is not concerned about receiving bytes in the right order but about synchronizing objects / receiving latest screens, makes it suitable for mobile networking.
Another thing that makes all this easier is that SSP doesn’t use IP addresses to identify endpoints. Instead, it uses cryptographic credentials, which also prevent connections from being hijacked by attackers. Consequently, SSP has no problem identifying that one or more of the endpoints are on the move and keeping the connection alive.
While Mosh, the only application that currently uses SSP, may not have a definitive and brilliant future, the researchers believe that the State Synchronization Protocol does and would be perfect for applications such as GMail, GChat, Skype, and others. | <urn:uuid:a5ef0ecb-ce55-4b03-ab80-f27842dff2e2> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2012/07/09/mit-unveils-a-new-internet-protocol-for-mobile-clients/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00299-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95031 | 502 | 2.90625 | 3 |
If you have two routers / two Layer3 switches connected with two L3 links (two paths) you can route with two equal static routes towards the same prefix and the router will load balance traffic across both links.
The idea is to make two same static routes on the same router but with different next-hops. The question was: Which link or which route will be used? And if the traffic will be load balanced, which mechanism will be used to share the traffic across both of links.
ip route 10.0.0.0 255.0.0.0 192.168.10.2
ip route 10.0.0.0 255.0.0.0 192.168.11.2
How Nagle’s algorithm is making TCP/IP better and when is ok to use it. Truth be told, Nagle should be avoided in today’s high-speed networks.
This article it’s not about mathematics, don’t be afraid. I’m running a networking blog and it’s not my intention to speak or write about anything related to mathematics. Biggest math problem that I’ve done in last few years is some simple subneting, EIGRP metric calculation and that is where I stopped with math for now.
On the other hand, I love the theory behind algorithms, specially if the algorithm is used in networking and if it is so simple and powerful as Nagle’s algorithm.
You can guess, John Nagle is the name of the fellow who created the algorithm. He found a solution for TCP/IP efficiency issue also known as “small packet problem”.
Multipath TCP is an extension of TCP that will soon be standardized by IETF. It is a succesful attempt to resolve major TCP shortcomings emerged from the change in the way we use our devices to communicate. There’s particularly the change in the way our new devices like iPhones and laptops are talking across network. All the devices like the networks are becoming multipath. Networks redundancy and devices multiple 3G and wireless connections made that possible.
Almost all today’s web applications are using TCP to communicate. This is due to TCP virtue of reliable packet delivery and ability to adapt to variable network throughput conditions. Multipath TCP is created so that it is backwards compatible with standard TCP. In that way it’s possible for today’s applications to use Multipath TCP without any changes. They think that they are using normal TCP.
We know that TCP is single path. It means that there can be only one path between two devices that have TCP session open. That path is sealed as a communication session defined by source and destination IP address of communicating end devices. If some device wants to switch the communication from 3G to wireless as it happens on smartphones when they come in range of known WiFi connection, TCP session is disconnected and new one is created over WiFi. Using multiple paths/subsessions inside one TCP communication MPTCP will enable that new WiFi connection makes new subsession inside established MPTCP connection without braking TCP that’s already in place across 3G. Basically more different paths that are available will be represented by more subsessions inside one MPTCP connection. Device connected to 3G will expand the connection to WiFi and then will use algorithm to decide if it will use 3G and WiFi in the same time or it will stop using 3G and put all the traffic to cheaper and faster WiFi.
TCP single path property is TCP’s fundamental problem
In datacenter environment there is a tricky situation where two servers are talking to each other using TCP to communicate and that TCP session is created across random path between servers and switches in the datacenter. If there are more paths of course. If there are (and there are!) another two servers talking in the same time, it will possibly happen that this second TCP session will be established using partially the same path as the first TCP session. In that situation there will be a collision that will reduce the throughput for both sessions. There is actually no way to control this phenomenon in TCP world. As in our datacenter example the same thing works for every multipath environment so it it true for example for the Internet.
Answer is MPTCP!
Multipath TCP – MPTCP is better as TCP in that enables the use of multiple paths inside a single transport connection. It meets the goal to work well at any place where “normal” TCP would work.
DCCP transport layer protocol is used to control the datagram congestion. It provides an excellent procedure to stop the internet fall down, if it is caused by the congestion. In fact, this protocol is a brilliant competitor to be used as a substitute of UDP protocol.
DCCP account DCCP congestion control trait by means of a reliable acknowledgments delivery (in form of packets instead of bytes) will provide actually a congestion control with dynamism. DCCP will also make available the negotiable blocking control mechanism, but it will be up to the particular application’s specific requirements too. Moreover, these mechanisms come with a number of specific features, so to go well with different types of applications. The bandwidth consumption can be enhanced as the size of packets in case of DCCP is increased.
Port Numbers – How does Transport layer identifies the Conversations
Computers are today equipped with the whole range of different applications. Almost all of these applications are able in some way to communicate across the network and use Internet to send and get information, updates or check the correctness of user purchase. Consider they all these applications are in some cases simultaneously receiving and sending e-mail, instant messages, web pages, and a VoIP phone calls. In this situation the computer is using one network connection to get all this communication running. But how is it possible that this computer is never confused about choosing the right application that will receive a particular packet? We are talking about the computer that processes two or more communications in the same time for two or more applications running. | <urn:uuid:f5b60895-dd23-4a12-9d70-d8fa813f3e5a> | CC-MAIN-2017-04 | https://howdoesinternetwork.com/category/networking/transportlayer | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00025-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.929788 | 1,254 | 3.3125 | 3 |
What Really Is Cloud Computing? (Triple-A Cloud)
What is cloud computing? Ask a consumer, CIO, and salesman and you’ll likely get widely varying responses.
The consumer will typically think of the cloud as a hosted service, such as Apple’s iCloud, or uploading pictures to Photobucket, and scores more of like services (just keep in mind that several such services existed before it became fashionable to slap the “cloud” label on them).
Some business articles describe cloud computing as turning a capital expense into an operating expense, while others talk about moving from a product to a service. But for a CIO or IT manager, what exactly does all this mean and how does one get there?
Some CIO’s tend to view the cloud as outsourcing large pools of infrastructure as a utility (after all CIO stands for “Can I Outsource?”, right?). But does one have to outsource a third party’s capacity to apply cloud computing concepts, or can it be pursued within existing infrastructures? On the other hand, if an organization has already adopted virtualization (to some extent), some wonder why they should be looking at cloud computing?
Many definitions of cloud computing I’ve heard have an element of truth to them, but are often incomplete and often leave people wanting more and perhaps fail to truly really capture the essence of cloud computing. What if we could simplify our definition of cloud computing as being built upon three key pillars?
CLOUD COMPUTING IS….
So what are the three key ingredients in cloud computing? Abstraction, Automation and Agility.
Let’s take a closer look at each of these three ingredients of cloud computing. After a discussion of these 3 elements we’ll circle back to talk and address some of the original questions about the different models in which we find cloud computing being used.
Abstraction is essentially liberating workloads and applications from the physical boundaries of server hardware. In the past we had servers which would host only one application (hence our focus at times on servers and not applications). Virtualization provides this abstraction by separating workloads from server hardware, eliminating hardware boundaries, dependencies and providing workload mobility. That mobility is even being extended to moving workloads from internal data centers to service providers and vice versa. Today the virtual machine defines the boundary, but in the future as the OS becomes less relevant, we might see “virtual containers” defining our workloads on PaaS (Platform as a Service) infrastructures.
The original motivation for virtualization was a CAPEX (capital expense) play — fewer servers, ports, space, electricity, etc. As virtualization matured many quickly found that the management of virtual machines was significantly easier, and that there was a new way of doing many tasks, which could in turn reduce OPEX (operating expenses). To get here, we need to work towards 100% virtualization and the technical barriers have been all but smashed with today’s technology (more on this later).
Put simply, abstraction enables greater resource utilization and can be used with concepts like multi-tenancy to provide greater economies of scale than were previously possible.
There’s also another kind of abstraction taking place that’s causing a wave of disruption — the abstraction of the application away from the traditional PC. The combination of SaaS, application virtualization, VDI and the proliferation of mobile devices (tablets and smartphones) are all driving this trend. Applications no longer need to be anchored to physical PCs as users want to access their applications and their data from any device and any place. For more on this topic, see this earlier post on The New Application Paradigm.
Both of these types of abstraction are removing traditional boundaries and therefore changing the ways in which we manage infrastructure and present applications to endpoints.
And in looking at server virtualization, we see that the virtualization stack also provides a unifying management layer, which can serve as the foundation for so much more…
Where abstraction provides the foundation for the new paradigm, automation builds on top of that foundation to provide exciting opportunities for organization to reduce OPEX costs and promote agility.
Let’s start with the basics. Thanks to encapsulation (provided by virtualization), new possibilities have emerged with replication, disaster recovery, and even the backup and recovery process itself. There’s agent-less monitoring of many core performance metrics, scripting across VMs and hosts, virtual network switches and firewalls, and of course, near-instant provisioning from templates. Such levels of automation were not easily accessible before the abstraction layer of virtualization was introduced.
Now we have products such as VMware’s vCloud Director which can take all of the elements of an n-tier application, and quickly provision them — including firewall rules and even with multi-tenancy. Imagine deploying an entire N-tier application of multiple virtual machines, complete with network config and firewalls with just a few clicks. Now add to that the concept of a self-service catalog, where business units can request resources for an application over a web form, and upon approval the application is automatically provisioned consistent with the provided specifications, while conforming to existing IT standards and compliance audits.
Those are just some of many angles to automation. Another is orchestration of converged infrastructure (of which the Vblock is one example). Rather than trying to manage the core infrastructure elements of compute, storage and networking as independent silos as many do today, we can instead deploy converged infrastructure with orchestration tools that can unify and transcend across the silos, allowing infrastructure to be managed and provisioned more like a singularity. And many of these orchestration tools can plug directly into the virtualization stack (i.e. vCloud Director) for even more integration.
Now of course there are obstacles to this automation which can include “PSP syndrome” (an adhesion to Physical Server Processes), heavily siloed organizational structure, integration and even multiple hypervisors.
There’s many more angles to automation we haven’t touched on yet, but the key is that abstraction enables new opportunities for automation — and that automation can then be used to pursue….
Why does VMware say that they want infrastructure to be transparent? Let’s answer that question with another question: does the business care about storage, network or server technologies? At the end of the day the business cares mostly about two primary deliverables from IT — the health of their applications (as measured by uptime and other performance metrics) and the time it takes to deploy/provision them.
Success is the rapid and successful execution of business strategy and time is a HUGE component of this. There’s competition, market opportunities, patents and legal issues, first mover advantage and so many reasons why time is…well…money.
CAPEX and OPEX savings can have positive impact on budgets but when you get to a place where you can get major projects done in weeks rather than quarters, that’s a profound paradigm shift which can often be of more value to an organization that just CAPEX and OPEX reductions.
Imagine that the business wants to build a 200 server n-tier application to promote a new initiative and that it has to be PCI complaint. First you have to have the infrastructure (compute, storage, networking) to rapidly provision and then you need to work with the application, networking and security teams to provision those VLANs and firewall rules. If you’ve ever worked in an IT shop which is heavily siloed and uses physical server processes, the technology might be obsolete by the time you finished the solution. The back and forth between departments and ticket processes just to get the VLANs or firewall rules correctly set for the application or make any like adjustments can slow such a project to a crawl.
However if you can successfully leverage abstraction and automation in your IT department, you can get to the point where you can reduce the time to provide solutions to the business by months in many cases. It’s being done today, and that’s one of the biggest reasons why there’s so much excitement in not just IT circles, but business and leadership circles, about cloud computing.
In an earlier post I introduced the concept of a value triangle which is illustrated below. The organization begins their journey by using virtualization (which provides abstraction) to achieve CAPEX benefits. This provides the foundation for automation which enables opportunities for additional OPEX benefits, and of course all of this enables the opportunity for the organization to capture agility benefits which could potentially be of far greater economic value than CAPEX and OPEX (cost-center) combined.
The value of cloud computing is so profound that we all should be doing it this way and should be just calling it “computing”. But we aren’t quite there yet, hence the term “cloud computing”.
The bottom line is that if you can successfully execute on abstraction and then automation, you can begin to align your IT services to the needs of the business and work with the business with a partner-minded relationship, providing the agility to rapidly execute their business plan.
WHAT SHAPE IS YOUR CLOUD?
Clouds can come in many shapes and sizes. Some are internal and some are outsourced. Then there’s the whole private/public/hybrid cloud (complete with academic debate) and let’s not forget PaaS, IaaS and SaaS. Which of these “shapes” should you have in your cloud and what should it look like? Perhaps someday there will be ITIL standards on exactly what form these elements ought to manifest within an ISO 9000 complaint cloud design (no not really — I just said that to get Stevie Chambers all worked up). So where do I start working on this cloud thing and which strategy should I use?
While these are often relevant discussions, it’s often helpful to forget about these “debates” and buzzwords and focus instead on the core elements of cloud computing — abstraction, automation and agility, so that we can better understand the value proposition and consider the best methods to employ towards that end.
Do you need to outsource to a third party to leverage cloud computing? Can you leverage cloud computing in your existing datacenter? How you get there and the best path will vary, but one first must embark on the cloud journey in order to reap the benefits.
I’ve seen at least a half dozen business or IT articles over the past month alone that said either directly or strongly implied that cloud computing meant that the workloads had to be outsourced to some third-party as a utility service. Cloud computing resources can certainly be outsourced to a third party, but they don’t have to be. And even if you did make the decision to outsource it’s no magic bullet — your processes and organization need to evolve to the point where you can achieve the levels of automation – and therefore agility – that you seek.
In the long run, I tend to think that more and more workloads will indeed be moved to third-party service providers ( an upcoming post will deal with this), but for the time being the best path for many organizations may very well be to pursue cloud computing within their own existing datacenters. You must embark on the journey to first achieve abstraction, and then re-engineer your processes and organization model as you work to achieve greater levels of automation and agility.
THE CLOUD JOURNEY
The cloud journey — which is almost always a marathon — begins with virtualization. In the past we’ve had to battle “server huggers” and many other barriers to the adoption of virtualization, but especially with the release of vSphere 5, those technical barriers are gone for the overwhelming majority of workloads. If a given environment can’t virtualize an application effectively today, it’s more than likely a limitation with your architecture or skill sets as the proof is in the results — organizations are having success and creating case studies on virtualizing their ERP systems and database tiers (see the performance section of this blog for just a few examples).
Sometimes we encounter “server huggers” who still want their application to have the familiar and comfortable boundaries of a physical server they can identify, and sometimes in our datacenters we build “Frankenstructures” in which we invest great time and engineering expense into and get an excessively complex infrastructure that we have become too attached to, and the engineering burden begins to weigh us down (another area where converged infrastructure can help). Server huggers, “frankenstructures”, physical server processes and siloed org structures all can be obstacles to the cloud journey.
Clouds can have different evolutionary stages — it takes a journey — perhaps a marathon — to reach the “agility zone” in the cloud journey. Let’s take a step back from the private/public PaaS/IaaS debates and start by focusing on just the core elements of abstraction, automation, and agility. A few key points to summarize:
- Clouds have many shapes and forms, but they all rely on abstraction and automation to enable the potential for agility.
- It is not a requirement to outsource or to move anything to “the cloud”. You can begin the journey in your own datacenter(s) by first pursuing abstraction and then automation.
- Cloud isn’t just about technology. It’s also about organizational structure and processes. Re-engineer your physical server minded processes, refresh your skill sets, and knock down your organizational silos.
- Virtualization alone isn’t enough. Cloud computing requires the effective use of automation (at many different levels) to reduce provisioning and service delivery times.
Building On #AAACloud
This is sort of a “foundation” post and I’m hoping that the conversation can be continued and expanded by using the Twitter hashtag #AAACloud as well as some future content I hope to produce to further build on this discussion. This might include a video post or two, as well as SlideRocket presentations on “What comes after virtualization?” and a second presentation on the value of converged infrastructure. Also there may be some follow-up blog posts on topics such as:
1) Using Multiple Hypervisors
2) Private Datacenters versus Outsourced Service Providers
3) Legacy physical processes
4) Organizational Models
..and I’m sure more will come to mind. Join the conversation on twitter with #AAACloud | <urn:uuid:7036392d-cc4f-4fea-a384-b6aa13c6b95e> | CC-MAIN-2017-04 | http://www.blueshiftblog.com/?p=2089 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00327-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94258 | 3,058 | 2.6875 | 3 |
Over the past few years, the public sector spent considerable time and money making myriad transactions available to the public via the Internet.
People appreciate the convenience, and they appreciate "their" government responding to their wants. The public sector is slowly changing people's perception by creating an image of government agencies that care about their customers and want to nurture the relationship.
Some governments now use the Web to provide information to help citizens become smarter consumers. Public agencies have long collected price and performance data from a wide range of industries, but rarely made it available in a user-friendly form for average citizens. Agencies at all governmental levels are beginning to offer online services that help customers make better decisions on everything from gasoline purchases and investing to choosing hospitals and schools.
That's new ground. In the past, information tended to flow one way: from citizens and businesses to government. Society gained because the data was used to ensure compliance with environmental, safety and fairness laws, as well as other regulations. But many citizens felt little direct benefit from this activity.
Nowadays, some governments are doing more to help people as they go about their lives, doing the seemingly million and one things they must accomplish on any given day.
Florida's Legislature passed the Affordable Health Care for Floridians Act in 2004. The bill directed state government to implement a consumer-focused, transparent health-care delivery system in the state. The bill also stipulated that the state create a mechanism to publicly report health-care performance measures and distribute consumer health-care information.
Florida's Agency for Health Care Administration (AHCA) is revamping its Web presence to report and distribute health-care data to consumers. One expanded site, floridahealthstat.com, delivers health-care data collected by the AHCA's State Center for Health Statistics to consumers.
The site is designed to make it easy for health-care consumers, purchasers and professionals to access information on quality, pricing and performance. One such tool is Florida Compare Care, which was launched in November 2005.
Through the Compare CareWeb site, Florida now reveals data on infections, deaths, complications and prices for each of its 207 hospitals. Residents can use the site to compare short-term-care hospitals and outpatient medical centers in various categories, such as length of stay, mortality, complications and infections.
The Web site lists hospitals' rates of medical problems in seven categories, and provides patient death rates in 10 areas, including heart attacks, strokes and pneumonia.
When the AHCA started devising Florida Compare Care, the agency turned to the Comprehensive Health Information System (CHIS) Advisory Council for help. The council and various CHIS technical workgroups, which include hospital representatives and various other stakeholders, were involved in the Web site's development from the very beginning, said Toby Philpot, the AHCA's deputy press secretary.
"The workgroups have studied the technical issues of reporting performance data, as well as discussed the most appropriate options for reporting and displaying the information on the Web site," Philpot said.
Creating a Web site such as Florida Compare Care is dicey because of the complex nature of the information being presented, he said.
"Because of their expertise, some hospitals treat more high-risk patients," Philpot explained. "Some patients arrive at hospitals sicker than others, and often, sicker patients are transferred to specialty hospitals. That makes comparing hospitals for patients with the same condition but different health status difficult."
To get the most accurate data on the Web site, he said, each hospital's data is risk adjusted to reflect the score the hospital would have if it provided services to the average mix of sick, complicated patients.
The risk adjustment is performed by 3 | <urn:uuid:f5affd3b-a344-4e66-8edc-e4403dd54677> | CC-MAIN-2017-04 | http://www.govtech.com/e-government/99395379.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00409-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955678 | 756 | 2.84375 | 3 |
Chapter 29: Operating Systems
Note: This chapter is copied almost verbatim from the material in
Chapter 21 of the textbook by Peter Abel. It is used by permission.
This chapter introduces material that is suitable for more
advanced assembler programming.
The first section examines general operating systems and the various support programs.
Subsequent sections explain the functions of the program status word and the interrupt
system. Finally, there is a discussion of input/output channels, physical IOCS, and the
These topics provide an introduction to systems programming
and the relationship between
the computer hardware and the manufacturer's software. A knowledge of these features can
be a useful asset when serious bugs occur and when a solution requires an intimate
knowledge of the system.
In an installation, one or more systems programmers, who
are familiar with the computer
architecture and assembler language, provide support for the operating system. Among the
software that IBM supplies to support the system are language translators, such as assembler,
COBOL, and PL/I, and utility programs for cataloging and sorting files.
Operating systems were developed to minimize the need for
operator intervention during the
processing of programs. An operating system is a collection of related programs that provide
for the preparation and execution of a user's programs. The system is stored on disk, and part
of it, the supervisor program, is loaded into the lower part of main storage.
You submit job control commands to tell the system what
action to perform. For example,
you may want to assemble and execute a source program. To this end, you insert job control
commands before and after the source program and submit it as a job to the system. In simple
terms, the operating system performs the following steps:
1. Preceding the source program is a job control command
that tells the operating system to
assemble a program. The system loads the assembler program from a disk library into storage
and transfers control to it for execution.
2. The assembler reads and translates the source program
into an object program and stores it
3. Another job control command tells the system to link–edit
the object program. The system
loads the linkage editor from a disk library into storage and transfers control to it for
4. The linkage editor reads and translates the object
program, adds any required input/output
modules, and stores it on disk as an executable module.
5. Another job control command tells the system to execute
the executable module. The
system loads the module into storage and transfers control to it for execution.
6. The program executes until normal or abnormal
termination, when it returns
processing control to the system.
7. A job command tells the system that this is the end of
the job, since a job may
consist of any number of execution steps. The system then terminates that job and
prepares for the next job to be executed.
Throughout the processing, the system continually
intervenes to handle all input/output,
interrupts for program checks, and protecting the supervisor and any other programs
executing in storage.
IBM provides various operating systems, depending on users'
requirements, and they
differ in services offered and the amount of storage they require. These are some
major operating systems:
DOS Disk Operating System Medium-sized systems
DOS/VSE Disk Operating System Medium-sized systems with virtual storage
OS/VSl Operating System Large system
OS/VS2 Operating System Large system
OS/MVS Operating System Large system
The manufacturer typically supplies the operating system
on reels of magnetic tape, along
with an extensive set of supporting manuals. A systems programmer has to tailor the
supplied operating system according to the installation's requirements, such as the
number and type of disk drives, the number and type of terminals to be supported, the
amount of processing time available to users, and the levels of security that are to prevail.
This procedure is known as systems generation, abbreviated as sysgen.
Operating System Organization
Figure 29–1 shows the general organization of Disk Operating
System (DOS), on
which this text is largely based. The three main parts are the control program,
system service programs, and processing programs.
Figure 29–1 Disk Operating System Organization
The control program, which controls all other programs
being processed, consists
of initial program load (IPL), the supervisor, and job control. Under OS, the
functions are task management, data management, and job management.
IPL is a program that the operator uses daily or whenever
required to load the supervisor
into storage. On some systems, this process is known as booting the system.
Job control handles the transition between jobs run on the
system. Your job commands
tell the system what action to perform next.
The supervisor, the nucleus of the operating system,
resides in lower storage, beginning
at location X'200'. The system loads user (problem) programs in storage following the
supervisor area, resulting in at least two programs in storage: the supervisor program
and one or more problem programs. Only one is executing at any time, but control
passes between them.
The supervisor is concerned with handling interrupts for
input/output devices, fetching
required modules from the program library, and handling errors in program execution.
An important part of the supervisor is the input/output control system (IOCS), known
under OS as data management.
Figure 29–2 Supervisor areas
Figure 29–2 (not an exact representation) illustrates the
general layout of the supervisor
in main storage. Let's examine its contents.
1. Communication Region. This area contains the following data:
00 – 07 The current date, as mm/dd/yy or dd/mm/yy
08 – 11 Reserved
12 – 22 User
area, set to zero when a JOB command is read to provide
communication within a job step or between job steps
23 User program status indicator (UPSI)
24 – 31 Job name, entered from job control
32 – 35 Address: highest byte of problem program area
36 – 39 Address: highest byte of current problem program phase
40 – 43 Address: highest byte of phase with highest ending address
44 – 45 Length of label area for problem program
Scheduler. The channels provide a path between main storage and
input/output devices for all I/O interrupts and permit overlapping of program execution
with I/O operations. If the requested channel, control unit, and device are available, the
channel operation begins. If they are busy, the channel scheduler places its request in a
queue and waits until the device is available. The channel notifies the scheduler when
the I/O operation is complete or that an error has occurred.
Protection. Storage protection prevents a problem program from erroneously
moving data into the supervisor area and destroying it. Under a multiprogramming
system, this feature also prevents a program in one partition from erasing a program in
Handling. An interrupt is a signal that informs the system to interrupt
program that is currently executing and to transfer control to the appropriate supervisor
routine. A later section on the program status word covers this topic in detail.
Loader. The system loader is responsible for loading programs into
storage for execution.
Recovery Routines. A special routine hancl1es error recovery for
device or class of devices. When an error is sensed, the channel scheduler invokes
the required routine, which attempts to correct the error.
Information Block (PIB). The PIB contains information tables that
supervisor needs to know about the current programs in storage.
8. I/O Devices
Control Table. This area contains a table of I/O devices that relate
physical unit addresses (X'nnn') with logical addresses (SYSxxx).
Area. This area provides temporary storage for less used
routines that the
supervisor loads as required, such as OPEN, CLOSE, DUMP, end–of– job handling,
some error recovery, and checkpoint routines.
System Service Programs
System service programs include the linkage editor and the librarian.
Linkage editor. The linkage editor has two main functions:
1. To include input/output modules. An installation
catalogs I/O modules in the system
library (covered next). When you code and assemble a program, it does not yet contain
the complete instructions for handling input/output. On completion of assembly, the
linkage editor includes all the required I/O modules from the library.
2. To link together separately assembled programs. You may
code and assemble a
number of subprograms separately and link-edit these subprograms into one executable
program. The linkage editor enables data in one subprogram to be recognized in another
and facilitates transfer of control between subprograms at execution time.
The operating system contains libraries on a disk known as SYSRES to
catalog both IBM programs and the installation's own commonly used programs and
subroutines. DOS/VS supports four libraries:
1. The source statement library (SSL) catalogs as a book
any program, macro, or
subroutine still in source code. You can use the assembler directive COPY to include
cataloged code into your source program for subsequent assembling.
2. The relocatable library (RL) catalogs frequently used
modules that are assembled but
not yet ready for execution. The assembler directs the linkage editor to include I/O
modules automatically, and you can use the INCLUDE command to direct the linkage
editor to include your own cataloged modules with your own assembled programs.
3. The core image library (CIL) contains phases in
executable machine code, ready for
execution. The CIL contains; for example, the assembler, COBOL, PL/I, and other
translator programs, various utility programs such as LINK and SORT, and your own
production programs ready for execution. To request the supervisor to load a phase from
the CIL into main storage for execution, use the job control command
// EXEC phasename.
4. The procedure library (PL) contains cataloged job control
to facilitate automatic
processing of jobs.
The OS libraries vary by name according to the version of
OS, but basically the OS
libraries equivalent to the DOS source statement, relocatable, and core image are,
respectively, source library, object library, and load library, and they serve the same
Processing programs are cataloged on disk in three groups:
1. Language translators that IBM supplies with the system
include assembler, PL/I,
COBOL, and RPG.
2. Utility programs that IBM supplies include such
special-purpose programs as disk
initialization, copy file–to–file, and sort/merge.
3. User–written programs that users in the installation
write and that IBM does not
support. All the programs in this text are user–written programs. For example, the job
command // EXEC ASSEMBLY causes the system to load the assembler from the CIL
into an available area ("partition") in storage and begins assembling a program. The job
command // OPTION LINK directs the assembler to write the assembled module on
SYSLNK in the relocatable library.
Once the program is assembled and stored on SYSLNK, the
// EXEC LNKEDT tells the linkage editor to load the module from SYSLNK into
storage, to complete addressing, and to include I/O modules from the RL. Assuming that
there was no job command to catalog it, the linkage editor writes the linked phase in
the CIL in a non–catalog area. If the next job command is // EXEC with no specified
phase name, the supervisor loads the phase from the non–catalog area into storage for
execution. The next program that the linkage editor links overlays the previous one in
the CIL non–catalog area.
The job command //
OPTION CATAL instead of //
OPTION LINK tells the system both
to link the program and to catalog the linked phase in the catalog area of the CIL. You
normally catalog production programs in the CIL and for immediate execution use the
job command // EXEC phase name.
Multiprogramming is the concurrent execution of more than
one program in storage.
Technically, a computer executes only one instruction at a time, but because of the fast
speed of the processor and the relative slowness of I/O devices, the computer's ability
to service a number of programs at the same time makes it appear that processing is
simultaneous. For this purpose, an operating system that supports multiprogramming
divides storage into various partitions and is consequently far more complex than a
The number and size of partitions vary according to the
requirements of an installation.
One job in each partition may be subject to execution at the same time, although only one
program is actually executing. Each partition may handle jobs of a particular nature. For
example, one partition handles relatively short jobs of high priority, whereas another
partition handles large jobs of lower priority.
The job scheduler routes jobs to a particular partition
according to its class. Thus a
system may assign class A to certain jobs, to be run in the first partition.
In Fig. 29–4, the job queue is divided into four classes,
and main storage is divided into
three user partitions. Jobs in class A run in partition 1, jobs in classes B and C run in
partition 2, and jobs in class P run in partition 3.
Depending on the system, storage may be divided into many
partitions, and a job class
may be designated to run in anyone of the partitions. Also, a partition may be designated
to run any number of classes.
When an operator uses the IPL procedure to boot the
system, the supervisor is loaded
from the CIL into low storage. The supervisor next loads job control from the CIL into
the various partitions. The supervisor then scans the system readers and terminals for
job control commands.
FIXED STORAGE LOCATIONS
As mentioned earlier, the first X'200' (decimal 512)bytes
of storage are reserved for use
by the CPU. Figure 29–3 lists the contents of these fixed storage locations.
Figure 29–3 Fixed Storage Locations
When a job completes processing, the job scheduler selects
another job from the queue
to replace it. For example, if partition 1 is free, the job scheduler in Fig. 29–4 selects
from the class A queue either the job with the highest priority or, if all jobs have the
same priority, the first job in the queue.
The system has to provide a more or less equitable
arrangement for processing jobs in
each partition. Under time slicing, each partition is allotted in turn a time slice of so many
milliseconds of execution. Control passes to the next partition when the time has expired,
the job is waiting for an I/O operation to complete, or the job is finished.
Figure 29–4 Job queue and partitions
In a multiprogramming environment, a large program may not
fit entirely in a partition. As a consequence, both DOS/VS and OS/VS
support a virtual storage system that divides programs into segments of 64K bytes, which are in turn divided into pages of 2K or
(usually) 4K bytes. On disk, the entire program is contained as pages in a page data set, and in storage VS arranges a page pool for
as much of the program as it can store, as shown in Fig. 29–5. As a consequence, a program that is 100K in size could run in a 64K
partition. If the executing program references an address for a part of the program that is not in storage, VS swaps an unneeded page
into the page data set on disk and pages in the required page from disk into the page pool in storage. (Actually, VS swaps onto disk only
if the program has not changed the contents of the page.) The 16 control registers handle much of the paging operations. Since a page
from disk may map into any page in the pool, VS has to change addresses; this process is known as dynamic address translation (DAT).
Figure 29–5: Page Pool
When running a real–time application such as process control,
a data communications manager, or an optical scan device, you may
not want VS to page it out. It is possible to assign an area of nonpageable (real) storage for such jobs or use a "page fix" to lock
certain pages into real storage.
PROGRAM STATUS WORD: PSW
The PSW is a doubleword of data stored in the control
section of the CPU to control an executing program and to indicate its status.
The two PSW modes are basic control (BC) mode and extended control (EC) mode. A 0 in PSW bit 12 indicates BC mode, and a 1
indicates EC mode. EC mode provides an extended control facility for virtual storage. One of the main features of the PSW
is to control the state of operation.
Figure 29–6: Two Variants of the PSW (Program Status Word)
[Note by ELB: In the original S/360 design, bit 12 of the
PSW was the ASCII bit, with settings to allow running the computer with
either EBCDIC characters (PSW12 = 0) or ASCII characters PSW12 = 1.) The ASCII option was so little used that most system managers
were completely unaware that it existed. When the S/370 design with support for virtual memory was introduced, the designers needed a
bit to indicate what was essentially “S/370 mode” rather than the older “S/360 mode”. Bit 12 was reassigned.]
Users of the system have no concern with certain
operations such as storage management and allocation of I/O devices, and if they
allowed access to every instruction, they could inadvertently access other users' partitions or damage the system. To provide protection, certain
instructions, such as Start I/O and Load PSW, are designated as privileged.
The PSW format is the same in only certain positions for
.each mode. Figure 29–6, just above, illustrates the two modes, in which the
numbered 0 through 63 from left to right. Some of the more relevant fields are explained next.
Bit 14: Wait
state. When bit 14 is 0, the CPU is in the running state and executing instructions.
When bit 14 is 1, the CPU is in wait state; which
involves waiting for an action such as an I/O operation to be completed.
Bit 15: State.
For both modes, 0 = supervisor state and 1 = problem state. When the computer is executing the supervisor
program, the bit is 0 and
all instructions are valid. When in the problem state, the bit is 1 and privileged instructions cannot be executed.
Bits 16–31: Program interrupt code (BC mode).
When a program interrupt occurs, the computer sets these bits according to the
type. The following
list shows the interrupt codes in hex format:
0001 Operation exception
0002 Privileged operation exception
0003 Execute exception
0004 Protection exception
0005 Addressing exception
0006 Specification exception
0007 Data exception
0008 Fixed–point overflow exception
0009 Fixed–point divide exception
000A Decimal overflow exception
000B Decimal divide exception
000C Exponent overflow exception
000D Exponent underflow exception
000E Significance exception
000F Floating–point divide exception
0010 Segment translation exception
0011 Page translation exception
0012 Translation specification exception
0013 Special operation exception
0040 Monitor event
0080 Program event (may be combined with another code)
Condition code. BC mode only; the condition code under EC mode is in bits 18–19.
certain arithmetic instructions set this code.
Bits 40–63: Instruction
address [Often called the PC, or Program Counter]. This area contains the
address of the next instruction to
be executed. The CPU accesses the specified instruction from main storage, decodes it in the control section, and executes it in the
arithmetic/logic section. The first 2 bits of a machine instruction indicate its length. The CPU adds this length to the instruction address
in the PSW, which now indicates the address of the next instruction. For a branch instruction, the branch address may replace
the PSW instruction address.
An interrupt occurs when the supervisor has to suspend normal processing to
perform a special task. The six main classes of interrupts are as follows:
1. Program Check Interrupt. This interrupt occurs
when the computer cannot execute an operation,
such as performing arithmetic on invalid packed data.
This is the common type of interrupt when a program terminates abnormally.
2. Supervisor Call Interrupt. A problem program may issue a request
for input/output or to terminate processing. A transfer from
the problem program to the supervisor requires a supervisor call (SVC) operation and causes an interrupt.
3. External Interrupt. An external device may need attention, such
as the operator pressing the request key on the console
or a request for communications.
4. Machine Check Interrupt. The machine–checking circuits may
detect a hardware error, such as a byte not containing an
odd number of on bits (odd parity). [Note by ELB: This refers to parity memory, in which an 8–bit byte is stored as 9 bits
in memory, with the extra bit (not transferred to the CPU) being the parity bit.]
5. Input/Output Interrupt. Completion of an I/O operation making a
unit available or malfunction of an I/O device
(such as a disk head crash) cause this interrupt.
6. Restart Interrupt. This interrupt permits an operator or another CPU to invoke execution of a program.
The supervisor region contains an interrupt handler for
each type of interrupt. On an interrupt, the system alters the PSW as
required and stores the PSW in a fixed storage location, where it is available to any program for testing.
The PSW discussed to this point is known as the current
PSW. When an interrupt occurs, the computer stores the current PSW and
loads a new PSW that controls the new program, usually the supervisor. The current PSW is in the control section of the CPU,
whereas the old and new PSWs are stored in main storage, as the following indicates:
The interrupt replaces the current PSW in this way. (1) It
stores the current PSW into main storage as the old PSW, and
(2) it fetches a new PSW from main storage, to become the current PSW. The old PSW now contains in its instruction address
the location following the instruction that caused the interrupt. The computer stores the Program Status Words in 12 doubleword
locations in fixed storage; 6 are for old PSWs and 6 are for new PSWs, depending on the class of interrupt. There are eight bytes
allocated for each PSW; for this reason the following addresses appear to be decimal numbers.
Program Old PSW
Let's trace the
sequence of events following a supervisor interrupt. Assume that the supervisor
has stored the address of each of
its interrupt routines as bits 40–63 of the PSW that is stored in the address associated with its interrupt type. Loading the CPU
Program Status Word with the “New PSW” associated with an interrupt type essentially starts the interrupt handler on the next instruction.
that when an instruction executes, the computer updates the instruction address
and the condition code in the
current PSW (in the CPU) as required.
1. A program requests input from disk. The GET or READ macro contains a SVC
(Supervisor Call) to link to
[ELB: a part of the Operating System] for input/output. This is a supervisor interrupt.
2. The instruction address in the current PSW contains the address in the
program immediately following the SVC that caused
the interrupt. The CPU stores this current PSW in the old PSW for supervisor interrupt, location 32.
The new PSW
for supervisor interrupt, location 96, contains supervisor state bit = 0 and
the address of the supervisor interrupt
routine. The CPU moves this new PSW to the current PSW and is now in the supervisor state.
3. The PSW instruction address contains the address of the supervisor I/O
routine, which now executes. The channel scheduler
requests the channel for disk input.
4. To return to the problem program, the supervisor loads the old PSW
from location 32 back into the current PSW. The instruction
links to the PSW instruction address, which is the address in the program following the original SVC that caused the interrupt.
The system switches the PSW from supervisor state back to problem state.
[ELB Note: This design reflects some older strategies that had yet to take full advantage of dynamic memory organizations, based on use of the stack and heap.]
In the event of a program check interrupt, the computer
sets its cause on PSW bits 16-31, the program interrupt code. For example, if
the problem program attempts arithmetic on invalid data, the computer senses a data exception and stores X'0007' in PSW bits 16-31.
The computer then stores the current PSW in old PSW location 0040 and loads the new PSW from 0104 into the current PSW. This
PSW contains the address of the supervisor's program check routine, which tests the old PSW to determine what type of program
check caused the interrupt.
The supervisor displays the contents of the old PSW in
hexadecimal and the cause of the program check (data exception), flushes the
interrupted program, and begins processing the next job. Suppose that the invalid operation is an MP [Multiply Packed] at location
X'6A320'. Since MP is 6 bytes long, the instruction address in the PSW and the one printed will be X'6A326'. You can tell from the
supervisor diagnostic message that the error is a data exception and that the invalid operation immediately precedes
the instruction at X'6A326'.
A channel is a component that functions as a separate
computer operated by channel commands to control I/O devices. It directs
data between devices and main storage and permits attaching a great variety of I/O devices. The more powerful the computer model,
the more channels it may support. The two types of channels are multiplexer and selector.
channels are designed to support simultaneous operation of more than
one device by interleaving blocks of data. The two
types of multiplexer channels are byte-multiplexer and block-multiplexer. A byte-multiplexer channel typically handles low-speed devices,
such as printers and terminals.
A block-multiplexer can support higher-speed devices, and its ability to interleave blocks of data facilitates simultaneous I/O operations.
channels, no longer common, are designed to handle high–speed devices,
such as disk and tape drives. The channel can
transfer data from only one device at a time, a process known as burst mode.
Each channel has a 4–bit address coded as in the following example:
CHANNEL ADDRESS TYPE
0 0000 byte-multiplexer
1 0001 block-multiplexer
2 0010 block-multiplexer
3 0011 block-multiplexer
4 0100 block-multiplexer
5 0101 block-multiplexer
6 0110 block-multiplexer
A control unit,
or controller, is required to interface with a channel. A channel is
whereas a control unit is device–dependent.
Thus a block-multiplexer channel can operate many type of
devices, but a disk drive control unit can operate only a disk drive.
Figure 29–7 illustrates a typical configuration of channels, control units, and devices.
Figure 29–7: Channels, Control Units, and Devices
example, a computer uses a multiplexer channel to connect it to a printer's control
unit. The control unit has a 4–bit address.
Further, each device has a 4–bit address and is known to the system by a physical address. The device address is therefore
a 12–bit code that specifies:
Control unit UUUU
If the printer's device number is 1110 (X'E') and it is
attached to channel 0, control unit 1, then to the system its physical address
0000 00011110, or X'01E'. Further, if two disk devices are numbered 0000 and 0001 and they are both attached to channel 1,
control unit 9, their physical addresses are X'190' and X'191', respectively.
This physical address permits the attaching of 28 , or 256 devices.
Although the supervisor references IJO devices by
their physical numbers, your programs use symbolic names. You may assign a
symbolic name to any device temporarily or (more or less) permanently, and a device may have more than one symbolic name assigned.
The operating system uses certain names, known as system logical units, that include the following.
In addition, you may reference programmer logical units, SYS000-SYSnnn.
The terminal, system reader, or disk device used as input for programs
The terminal, system reader, or disk device used as input for job control for the system
The system name to assign both SYSIPT and SYSRDR to the same terminal, system reader, or disk device
The printer or disk used as the main output device for the system
The device used as the main unit for output
The system name to assign both SYSLST and SYSPCH to the same output device
The disk area used as input for the linkage editor
The console or printer used by the system to log operator messages and job control statements
The disk device where the operating system resides
The disk device for the relocatable library
The disk device for the system library
For example, you may assign the logical address SYS025 to
a disk drive with physical address X'170'. The supervisor stores
the physical and logical addresses in an I/O devices control table in order to relate them. A simplified table could contain the following:
SYSLNK, SYSRES, SYS025
A reference to SYSLST is to the printer, and a reference to SYSLNK, SYSRES, or SYS025, depending on its particular use, is to disk device X'170'. You may assign a logical address permanently or temporarily and may change logical addresses from job to job. For instance, you could use an ASSGN job control command to reassign SYS035 for a program from a disk device X'170' to another disk device X'l72'.
I/O LOGIC MODULES
Consider a program that reads a tape file named TAPEFL.
The program would require a DTFMT or DCB file definition macro to define
the characteristics of the file and tape device to generate a link to an I/O logic module. The assembler determines which particular logic
module, based on (1) the kind of DTF and (2) the specifications within the file definition, such as device number, an input or output file,
the number of buffers, and whether processing is in a work area (WORKA) or a buffer (IOREG). In the following example, the assembler
has generated a logic module named DFFBCWZ (the name would vary depending on specifications within the DTFMT).
When linking a program, the linkage editor searches for
addresses in the external symbol dictionary that the assembler generates. For
this example, the ESD would contain entries at least for the program name and UFFBCWZ. The linker accesses the named module
cataloged on disk (provided it was ever cataloged) and includes it at the end of the assembled object program. One role of a
system programmer is to define and catalog these I/O modules.
On execution of the program, the GET macro links to the
specified file definition macro, DTFMT. This macro contains the
address of the I/O logic module at the end of the object program where the linker included it. The module, combined with information
from the DTFMT, contains all the instructions necessary to notify the supervisor as to the actual type of I/O operation, device,
block size, and so forth.
The only remaining information is to determine which tape
device; the supervisor derives it from the job control entry, which in
this example assigns X'281' as the physical address. The supervisor then (at last) delivers the physical request for input
via a channel command.
example, the printer module, PRMOD, consists of three letters (IJD) and five
option letters (abcde), as IJDabcde.
The options are based on the definitions in the DTFPR macro, as follows:
a RECFORM: FlXUNB (F), VARUNB (V), UNDEF (U)
b CILCHR: ASA (A), YES (Y), CONTROL (C)
c PRINTOV=YES and ERROPT=YES (B), PRINTOV=YES and
ERROPT not specified (Z), plus 14 other options
d IOAREA2: defined (I), not defined (Z)
e WORKA: YES (W), YES and RDONLY= YES (V), neither specified (Z)
A common printer module for IBM control character; two buffers, and a work area would be IJDFYZIW. For one buffer, the module is IJDFYZZW.
Physical I0CS (PIOCS), the basic level of I0CS, provides
for channel scheduling, error recovery, and interrupt handling..
When using PIOCS, you write a channel program (the channel command word) and synchronize the program with completion of the
I/O operation. You must also provide for testing the command control block for certain errors, for checking wrong–length records,
for switching between I/O areas where two are used, and, if records are blocked, for blocking and deblocking.
PIOCS macros include CCW, CCB, EXCP, and WAIT.
Channel Command Word (CCW)
The CCW macro causes the assembler to construct an 8–byte channel command word that defines the I/O command to be executed.
command–code, data–address, flags, count–field
• command-code defines the operation to be performed, such as 1 = write,
2 = read,
X'09' = print and space one line.
• data-address provides the storage address of the first byte where data is to be read or written.
• flag bits determine the next action when the channel completes an
operation defined in a CCW.
You can set flag bits to 1 to vary the channel's operation (explained in detail later).
• count-field provides an expression that defines the number of bytes in the data block that is to be processed.
Command Control Block (CCB)
You define a CCB macro for each I/O device that PIOCS macros
reference. The CCB comprises the first 16 bytes of most
generated DTF tables. The CCB communicates information to PIOCS to cause required I/O operations and receives
status information after the operation.
• blockname is the symbolic name associated with the CCB, used as an old PSW for the EXCP and WAIT macros.
• SYSnnn is the symbolic name of the 110 device associated with the CCB.
• command-list-name is the symbolic name of the first CCW used with the CCB.
Execute Channel Program (EXCP)
The EXCP macro requests
physical I0CS to start an I/O operation, and PIOCS relates the block name to
the CCB to determine
the device. When the channel and the device become available, the channel program is started.
Program control then returns to your program.
block–name or (1)
The operand gives the symbolic name of the CCB macro to be referenced.
The WAIT Macro
The WAlT macro synchronizes
program execution with completion of an I/O operation, since the program
normally requires its completion
before it can continue execution. (When bit 0 of byte 2 of the CCB for the file is set to 1, the WAlT is completed and processing resumes.)
For example, if you have issued an EXCP operation to read a data block, you now WAlT for delivery of the entire block
before you can begin processing it.
block–name or (1)
CCW Flag Bits
You may set and use the flag bits in the CCW as follows:
• Bit 32 (chain data flag), set by X'80', specifies data chaining. When the CCW
has processed the number of bytes defined
in its count field, the I/O operation does not terminate if this bit is set. The operation continues with the next CCW in storage.
You may use data chaining to read or write data into or out of storage areas that are not necessarily adjacent.
In the following three CCWs, the first two use X'80' in
the flag bits, operand 3, to specify data chaining. An EXCP and CCB
may then reference the first CCW, and as a result, the chain of three CCWs causes the contents of an 80–byte input record to be
read into three separate areas in storage: 20 bytes in NAME, 30 bytes in ADDRESS, and 30 bytes in CITY.
CCW 2,NAME,X'80',20 Read 20 bytes into NAME,
,ADDRESS,X'80',30 Read 30 bytes to ADDRESS,
,CITY,X'00',30 Read 30 bytes into CITY,
• Bit 33 (chain
command flag), set by X'40', specifies command
chaining to enable the channel to execute more than one
CCW before terminating the I/O operation.
Each CCW applies to a separate I/O record.
The following set of Channel Command Words could provide for reading three input blocks, each 100 bytes long:
CCW 2,INAREA,X'40',100 Read record-l into
CCW 2,INAREA+100,X'40',100 Read
CCW 2,INAREA+200,X'00',100 Read record-3
• Bit 34 (suppress length indication flag), set by X'20',
is used to suppress an error indication that occurs when the number
of bytes transmitted differs from the count in the CCW.
• Bit 35 (skip flag), set by X'10', is used to suppress
transmission of input data. The device actually reads the data,
but the channel does not transmit the record.
• Bit 36 (program controlled interrupt flag), set by
X'08', causes an interrupt when this CCW's operation is complete.
(This is used when one supervisor SIO instruction executes more than one CCW.)
• Bit 37 (indirect data address flag), as well as other
features about physical IOCS, is covered in the IBM Principles of
Operation manual and the appropriate supervisor manual for your system.
Sample Physical IOCS Program
The program in Fig. 29–8 illustrates many of the features of physical IOCS we have discussed. It performs the following operations:
• At initialization, prints three heading lines by means of command chaining (X'40').
• Reads input records one at a time containing salesman name and company.
• Prints each record.
• Terminates on reaching end–of–file.
Note that the program defines a CCB/CCW pair for
each type of record, and the EXCP/WAIT operations reference the CCB name –
INDEVIC for the reader, OUTDEV1 for heading lines, and OUTDEV2 for sales detail lines. Each CCB contains the name of the
I/O device, SYSIPT or SYSLST, and the name of an associated CCW: INRECD, TITLES, and DETAIL, respectively.
Figure 29–8: Physical IOCS
• Systems generation (sysgen) involves tailoring the supplied operating
system to the installation's requirements, such as the number
and type of disk drives, the number and type of terminals to be supported, the amount of process time available to users,
and the levels of security that are to prevail.
• The control program, which controls all other programs being processed,
consists of initial program load (IPL), the supervisor,
and job control. Under OS, the functions are task management, data management, and job management.
• Initial program load (IPL) is a program that the operator uses daily or
whenever required to load the supervisor into storage.
The system loader is responsible for loading programs into main storage for execution.
• The supervisor resides in lower storage, beginning at location X'200'.
The supervisor is concerned with handling interrupts for
input/output devices, fetching required modules from the program library, and handling errors in program execution.
• Channels provide a path between main storage and the input/output
devices and permit overlapping of program execution
with I/O operations. The channel scheduler handles all I/O interrupts.
• Storage protection prevents a problem program from erroneously moving data into the supervisor area and destroying it.
• An interrupt is a signal that informs the system to interrupt the
program that is currently executing and to transfer control
to the appropriate supervisor routine.
• The source statement library (SSL) catalogs as a book any program, macro, or subroutine still in source code.
• The relocatable library (RL) catalogs frequently used modules that are assembled but not yet ready for execution.
• The core image library (CIL) contains phases in executable machine code, ready for execution.
• Multiprogramming is the concurrent execution of more than one program
in storage. An operating system that supports
multiprogramming divides storage into various partitions. One job in each partition may be subject to execution at the
same time, although only one program is actually executing.
• The PSW is stored in the control section of the CPU to control an
executing program and to indicate its status.
The two PSW modes are basic control mode (BC) and extended control (EC) mode.
• Certain instructions such as Start I/O and Load PSW are privileged to
provide protection against users'
accessing the wrong partitions.
• An interrupt occurs when the supervisor has to suspend normal
processing to perform a special task. The supervisor region
contains an interrupt handler for each type of interrupt.
• A channel is a component that functions as a separate computer operated
by channel commands to control I/O devices.
It directs data between devices and main storage and permits the attachment of a variety of I/O devices.
The two types are multiplexer and selector.
• The operating system uses certain names, known as system logical units,
such as SYSIPT, SYSLST, and SYSLOG.
Programmer logical units are referenced as SYS000-SYSnnn.
• Physical IOCS (PIOCS), the basic level of IOCS, provides for channel
scheduling, error recovery, and interrupt handling.
When using PIOCS, you write a channel program (the channel command word) and synchronize the program
with completion of the I/O operation. .
• The CCW macro causes the assembler to construct an 8–byte channel
command word that defines the I/O
command to be executed. | <urn:uuid:8b89ae45-e536-4332-b983-faca83567b65> | CC-MAIN-2017-04 | http://edwardbosworth.com/My3121Textbook_HTM/MyText3121_Ch29_V01.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00136-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.875897 | 9,430 | 3.8125 | 4 |
In Internet communications, the “last mile” refers to the connection between the user's terminal equipment and the communications service provider's switching facility.
The users involved in the “last mile” are household, individual, and small-business subscribers at the network edge, rather than large commercial users or professional communications companies; as a result, the line between the service provider and these users is usually not the single-mode fiber used in the backbone network.
The “last mile” problem also involves the cabling to each user's terminal equipment. Because every user's premises are different, communications service providers, real estate developers, and users need to consider, according to the actual situation, how user equipment is connected to the network, how it can be moved to other locations, and how a user's service is cancelled.
Because of its high bandwidth, low loss, high fidelity, strong resistance to interference, and other advantages, optical fiber has become the first choice for transmission in communication networks, and the backbone portions of existing networks have largely been converted to fiber. In recent years fiber prices have kept falling and the cost of laying fiber has declined, making it possible to bring fiber access directly to end-user terminals. FTTH has begun to spread in the developed world.
In the office environments of modern intelligent buildings, computer networks are widely used, with Ethernet LAN technology at the core accounting for most of the market. The number of nodes and ports in these networks keeps growing, and transmission speeds keep increasing, from 10 Mbps and 100 Mbps to 1 Gbps. All-optical port devices (network fiber optic transceivers) are gradually extending from trunk connections toward the desktop.
A problem that FTTH must solve is photoelectric conversion, that is, converting between the short-distance twisted-pair signal and the long-distance optical signal. The network devices that ordinary users attach to their computers tend not to have a fiber optic interface and cannot talk to the fiber ports of the backbone communication network directly, so the conversion must be carried out by photoelectric conversion equipment that does have a fiber optic interface. Such a converter provides a fiber interface upward, docking with the backbone communications network, and a TCP/IP network interface downward, docking with computers and other network devices, so that a computer can communicate over the optical fiber by way of photoelectric conversion.
Because a fully optical broadband access network is difficult to implement in practice, the industry has adopted intermediate forms such as fiber to the building (FTTB), fiber to the node (FTTN), fiber to the remote terminal (FTTR), and fiber to the cabinet (FTTC), collectively referred to as FTTx. FTTB is suitable for densely populated multi-dwelling-unit buildings such as rental apartments, residential buildings, and hotels, and has therefore attracted particular attention.
The Optical Time Domain Reflectometer (OTDR) is an essential tool used to test the integrity of fiber optic cables. It can be used to evaluate the length of fiber cables, measure transmission and connection attenuation, and detect the location of faults on fiber links. Based on these functions, it is commonly employed in fiber optic cable maintenance and construction. OTDRs are most effective when testing long cables or cable plants with splices, illustrating where the cables are terminated and confirming the quality of the fibers, connections and splices.
Compared with light sources and power meters, which measure the loss of the fiber optic cable plant directly, the OTDR works indirectly. Because the source and meter duplicate the transmitter and receiver of the fiber optic transmission link, their measurement correlates well with actual system loss. The OTDR, however, uses a unique optical phenomenon, backscattered light, together with the light reflected from connectors or cleaved fiber ends, to measure loss indirectly.
During OTDR testing, the instrument injects a high-power laser or light source pulse into the fiber from one end of the cable and receives the returning light on the same OTDR port. As the optical pulse travels along the fiber, part of the scattered and reflected light returns to the OTDR. The detector measures only this returning light and maps it to time, and therefore to position, along the fiber. By recording the time between transmitting the pulse and receiving its return, together with the speed at which light travels in the fiber, the distance can be calculated. The following picture shows exactly how an OTDR works for fiber optic testing.
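As a minimal sketch of that time-to-distance conversion: the group index used below is an assumed value for standard single-mode fiber, whereas a real OTDR uses the index of refraction configured for the fiber under test.

```python
C = 299_792_458        # speed of light in vacuum, m/s
GROUP_INDEX = 1.468    # assumed group index for standard single-mode fiber

def event_distance(round_trip_seconds: float) -> float:
    """Distance to a reflective event, computed from the two-way travel time."""
    one_way_time = round_trip_seconds / 2       # the pulse travels out and back
    return (C / GROUP_INDEX) * one_way_time     # metres

# A return arriving 10 microseconds after the pulse corresponds to roughly 1 km.
print(round(event_distance(10e-6)), "m")
```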
An OTDR uses Rayleigh scattering and Fresnel reflections to measure a fiber's characteristics. Rayleigh scattering is the diffuse scattering generated as optical signals travel through the fiber; the OTDR measures only the portion that is scattered back toward the OTDR port. The backscatter signal indicates the attenuation (loss per unit distance) of the optical fiber and is traced as a downward-sloping curve, illustrating that the backscattered power decreases along the fiber because both the transmitted signal and the returning backscatter are attenuated.
Rayleigh scattering power is related to the wavelength of the transmitted signal: the shorter the wavelength, the stronger the scattering, which means the backscatter level on the trace of a 1310 nm signal will be higher than that of a 1550 nm signal.
In the longer wavelength region (above about 1500 nm), Rayleigh scattering continues to decrease, but another phenomenon, infrared attenuation (absorption), appears and grows, causing an increase in the overall attenuation. The 1550 nm wavelength therefore has the lowest attenuation, which explains why it is used for long-distance communication. For the same reason, an OTDR operating at 1550 nm also benefits from low attenuation and can be used for long-distance testing.
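As a rough illustration of that wavelength dependence, the snippet below applies the simplified rule that Rayleigh backscatter scales with the inverse fourth power of the wavelength; real fibers deviate somewhat from this ideal.

```python
# Simplified model: Rayleigh backscatter ~ 1 / wavelength**4.
ratio = (1550 / 1310) ** 4
print(f"Backscatter at 1310 nm is roughly {ratio:.1f}x the level at 1550 nm")
```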
Fresnel reflections are discrete reflections caused by individual points along the fiber. These points result from abrupt changes in the refractive index, for example at a glass-to-air gap. At such points a strong reflection is returned, and the OTDR uses this Fresnel reflection information to locate connection points, fiber ends, and breakpoints.
Next, let's look at an important OTDR specification that originates from Fresnel reflections, known as the “dead zone”. Basically, there are two types of dead zones: event and attenuation. Both are expressed as distances that vary with the power of the reflections. A dead zone corresponds to the length of time during which the detector is temporarily blinded by a high amount of reflected light, until it recovers and can read light again. Since the OTDR converts time into distance, stronger reflections mean more recovery time for the detector and therefore a longer dead zone. Dead zones limit the operation of an OTDR considerably, making it unable to locate and resolve faults that fall within them. The following picture shows the dead zone of the OTDR.
The dead zone is a real limitation when testing with an OTDR; however, adopting a visual fault locator (VFL) can serve as an effective solution to this issue. It works as a complement to the OTDR in cable troubleshooting because it covers the range the OTDR fails to monitor on account of the dead zone. A visual fault locator is designed with a visible laser and a universal adapter (FC, SC, ST, etc.), which helps locate faults on the fiber link easily, for instance breaks, bends, or cracks in fiber optic cables, including faults that fall within the OTDR dead zone. The higher power of a visual fault locator can find breaks in fibers or high losses around connectors in simplex cables: if light escapes at a break, it will be visible through the jacket of the fiber. This is especially helpful in finding cable faults near the end of a cable, where the dead zone restrains the OTDR's ability to resolve faults. The VFL can likewise find cracked fibers or bad splices that an OTDR is unable to detect.
In conclusion, an OTDR tester is in essence an optical radar. It sends out a flash of bright light, measures the intensity of the echoes and reflections that return, and uses computation to display a trace and derive a number of measurements. OTDRs are mainly used in optical fiber installation and maintenance for access networks (communications links between telephone exchanges) and user networks (communications links between user sites and telephone poles). Fiberstore (FS.COM) offers a wide range of reliable, high-quality OTDRs; for more information and details, kindly visit the website at www.fs.com or contact email@example.com.
NEWS ANALYSIS: The security risks and loss of privacy that come with using a smartphone for more than just phone calls will be magnified by the Internet of things. But the conveniences might make it seem worth it, provided you don't think too hard about it.
There's been a lot of comment about the coming Internet of things as the marketing hype from companies that would provide the relevant equipment gears up. The predictions about the potential impact made largely by academics seem pretty accurate.
But perhaps we should look at what it's likely to mean to you, especially when you combine the IoT with the vast resources of big data and companies that profit from knowing what you're up to at any given time.
Let's say, for example, that you routinely have a couple of bottles of beer when you get home from work. There's the convenience of making sure your favorite brew is being tracked by your refrigerator and that it's communicating with your beer purveyor to make sure your shopping list includes the beer for restocking so you don't run out.
But by collecting the data from the IoT and by some creative use of big data, your health insurance company probably knows about your beer consumption, too. If you're lucky, that may mean you start getting calls about counseling, but it might also mean you lose your health insurance.
There are lots of scenarios making the rounds, such as getting alerts from your car that you're 15 minutes from home, triggering a change to the temperature setting on your home air conditioner, or receiving a warning when temperatures go up in one room, alerting you to anything from a fire to a window accidentally being left open during the summer. So far, most of the possibilities seem pretty benign.
But the impact could get much more pervasive. AT&T is already well along with its plans for a connected car that will communicate on its own with a variety of services. Not only will such a car connect with the Internet for everything from maintenance requirements to restaurant menus, but it's entirely capable of letting those same services know where you're going and when. It may also communicate enough information to make it apparent why you're driving somewhere.
So far, it looks like a real opportunity to make your quality of life better. You won't run out of beer, your car will get the maintenance it needs, and you will save money on air conditioning. But the question that has to go along with these benefits is, what are you giving up in return for that convenience? What details of your private life will become public, potentially ending up in the wrong hands with access to too much data?
You'll notice that I haven't mentioned the government or the National Security Agency in this discussion. The reason is that this is not about spying on your activity for real or imagined national security purposes. This is about increasing the visibility of your personal life voluntarily—by default—which means that privacy protections are no longer an issue. You are, after all, allowed to tell people, even indirectly, where you are and what you're doing.
When deciding how to allocate DNS resources on a network it’s important to implement some separation between external and internal Domain Name Services. Having all DNS servers configured to handle both external and internal resolution can impact the performance and security of a network.
DNS forwarding is the process by which particular sets of DNS queries are handled by a designated server, rather than being handled by the initial server contacted by the client. Usually, all DNS servers that handle address resolution within the network are configured to forward requests for addresses that are outside the network to a dedicated forwarder.
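As a rough sketch of that split, the logic below routes queries for internal zones to an internal DNS server and everything else to the designated forwarder. The zone names and addresses are invented for illustration; a real deployment would express the same rule in its DNS server's configuration rather than in application code.

```python
# Hypothetical internal zones and server addresses, for illustration only.
INTERNAL_ZONES = ("corp.example.com", "10.in-addr.arpa")
INTERNAL_SERVER = "10.0.0.53"   # authoritative for the internal zones
FORWARDER = "10.0.0.54"         # dedicated forwarder for external names

def choose_server(qname: str) -> str:
    """Send internal names to the internal server, everything else to the forwarder."""
    name = qname.rstrip(".").lower()
    if any(name == zone or name.endswith("." + zone) for zone in INTERNAL_ZONES):
        return INTERNAL_SERVER
    return FORWARDER

print(choose_server("intranet.corp.example.com"))  # -> 10.0.0.53
print(choose_server("www.linkedin.com"))           # -> 10.0.0.54
```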
The popular business-focused social network LinkedIn was unavailable for several hours over the 19th and 20th of June because of a DNS redirection incident which led to users of the service being directed to IPs in the range managed by Confluence Networks, an Internet services company registered in the British Virgin Islands. It's unclear at the time of writing whether the incident is due to a malicious attack on LinkedIn's DNS servers or a misconfiguration on the part of LinkedIn's DNS providers.
The DNS redirection, also known as DNS hijacking, puts users of LinkedIn at risk of having private data made available to third parties. In the normal course of events, LinkedIn users would connect to the service using SSL encryption. That would make it very difficult for a third party to intercept the data in any meaningful form, but, because the Confluence Network servers don't implement SSL, and LinkedIn's session cookies are not set to reject non-encrypted connections, it's possible that connections made during the outage sent session cookie data in the clear to those servers. That data may have included login credentials and passwords if users logged in during the attack.
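For illustration, the snippet below uses Python's standard http.cookies module (the cookie name and value are placeholders) to show the flags that keep a session cookie from being sent over unencrypted connections, the protection described above as missing:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"            # placeholder session token
cookie["session"]["secure"] = True      # browsers send it over HTTPS only
cookie["session"]["httponly"] = True    # and keep it away from page scripts
print(cookie.output())                  # Set-Cookie header carrying both flags
```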
When the protocols that underlie how email works were first developed, little thought was given to security. At the time, most of the networks that were later to join together to form the nascent Internet were in large corporations, universities, and government agencies. Because it was only employees of these organizations that had access to email, there wasn’t much need to authenticate senders.
SMTP (Simple Mail Transfer Protocol), when it was defined in the early 1980s by Jon Postel, didn't include any functionality to make sure that senders of email were authorized to use the email servers. These open email relays accepted incoming mail from everyone.
As the Internet grew and became available to ordinary people, email’s popularity snowballed, and abuse of the email system grew along with it. We’re all familiar with spam email; open relays make it easy for spammers to distribute huge amounts of email with no checks on who they are.
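The difference is easy to see from a sending client's point of view. In the sketch below (hostnames, addresses, and credentials are placeholders), an open relay accepts a message over plain SMTP with no authentication at all, whereas a properly configured submission server requires a login first:

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "anyone@example.org"
msg["To"] = "someone@example.net"
msg["Subject"] = "test"
msg.set_content("An open relay forwards this without asking who the sender is.")

# Against an open relay: plain SMTP on port 25, no credentials required.
with smtplib.SMTP("mail.example.com", 25) as server:
    server.send_message(msg)

# Against a properly configured submission server: TLS plus authentication.
with smtplib.SMTP("mail.example.com", 587) as server:
    server.starttls()
    server.login("user", "password")    # unauthorized senders are rejected here
    server.send_message(msg)
```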
Setting up a new domain name or changing where it points can be one of the most confusing aspects of configuring a new site for less experienced webmasters. The most common question asked of web hosting companies' support teams is: “Why is my domain name not working?” They've followed the instructions carefully, changed the records at their domain registrar so it's using the right domain name servers, and yet, when they or other users try to visit the site by entering the domain name into their browser address bar, they get an error page.
Understanding why this happens requires a basic knowledge of how DNS works. The Domain Name System converts human readable web addresses like “www.example.com” into a set of machine readable numbers called an IP address that looks like this: “255.255.255.255”, or like this:“3ffe:1900:4545:3:200:f8ff:fe21:67cf”. It’s similar to how, in the old days, people used to look up phone numbers; they knew the name of the person they wanted to call and used that to find the number in the alphabetical list of a phone book.
DNS is a bit more complicated. When you enter the web address into your browser’s address bar a number of things happen very quickly.
Unresponsive Domain Name Services result in slow sites that are disadvantaged in the SERPs relative to more speedy competitors
Site speed is one among many factors that Google takes into account when it is deciding how to rank sites.
There are two major speed related signals that Google can use to determine SERP position. The first is the responsiveness of the site as measured by its crawlers. If Googlebot is often left waiting, that’s an indication to Google that the site may not offer the best experience for its users, even if the information is relevant to the query.
Secondly, Internet users are impatient: they want their requests for data fulfilled immediately and aren’t prepared to wait more than a couple of seconds. Slow-loading sites cause visitors to bounce right back to the SERPs to click on the next blue link. Google records the bounce as a signal that the searcher wasn’t satisfied with the results and adjusts the ranking accordingly. | <urn:uuid:061bb87b-1ca3-42f2-8d82-177a29a4c1ec> | CC-MAIN-2017-04 | http://social.dnsmadeeasy.com/2013/06/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00402-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.953814 | 1,038 | 2.90625 | 3 |
Stepanova V.A.,French National Center for Scientific Research |
Stepanova V.A.,Institute of Soil Science and Agrochemistry |
Pokrovsky O.S.,French National Center for Scientific Research |
Pokrovsky O.S.,Russian Academy of Sciences |
And 4 more authors.
Applied Geochemistry | Year: 2015
The concentrations of major and trace elements in the organic layer of peat soils across a 1800-km latitude profile of western Siberia were measured within various dominating ecosystems to evaluate the effect of landscape, latitude position and permafrost coverage on the peat chemical composition. In this study, peat core samples were collected every 10. cm along the entire length of the column, down to 3-4. m until the mineral horizon was reached. The peat samples were analyzed for major and trace elements using an ICP-MS technique following full acid digestion in a microwave oven. Depending on their concentration pattern along the peat column, several groups of elements were distinguished according to their general physico-chemical properties, mobility in soils, affinity to organic matter and plant biomass. Within similar ecosystems across various climate zones, there was a relatively weak variation in the TE concentration in the upper organic layer (green and brown parts of sphagnum mosses) with the latitude position. Within the intrinsic variability of the TE concentration over the peat column, the effects of climate, latitude position, and landscape location were not significantly pronounced. In different landscapes of the middle taiga, the peat columns collected in the fen zone, the low and mature forest, the ridge and the hollow did not demonstrate a statistically significant difference in most major and trace element concentrations over the full depth of the peat column. In live (green) parts and dead (brown) parts of sphagnum mosses from this climate zone, the concentrations of Mn, P, Ca and Cu decreased significantly with increasing moss net primary production (NPP) at various habitats, whereas the other elements exhibited no link with the NPP trends. The Al- and mineral horizon-normalized peat concentration profiles, allowing removal of the occasional contamination by the underlying mineral substrate and atmospheric dust, demonstrated a homogeneous distribution of TEs along the peat column among various climate zones in the non-permafrost regions but significantly non-conservative behavior in the discontinuous permafrost site. The peat deposits in the northern part of western Siberia potentially have very high release of metals to the surface waters and the riverine systems, depending on the persistence of the ongoing permafrost thaw and the increase in the thickness of the active layer. © 2014 Elsevier Ltd. Source
Strakhovenko V.D.,RAS Sobolev Institute of Mathematics |
Roslyakov N.A.,RAS Sobolev Institute of Mathematics |
Syso A.I.,Institute of Soil Science and Agrochemistry |
Ermolaeva N.I.,Russian Academy of Sciences |
And 3 more authors.
Water Resources | Year: 2016
Sapropels of lake systems in Novosibirsk oblast were studied to develop a scientific basis for their rational use. Sapropels from lakes in Novosibirsk oblast have been classified based on the ash content, chemical composition, and genesis. Organic–mineral and mineral–organic calcium and mixed sapropels of macrophyte and macrophyte–plankton genesis have been shown to be predominant by far among sapropel deposits in lakes in Novosibirsk oblast. © 2016, Pleiades Publishing, Ltd. Source
Ermakov N.,Central Siberian Botanical Garden |
Makhatkov I.,Institute of Soil Science and Agrochemistry
Plant Biosystems | Year: 2011
A classification of northern boreal light coniferous forests in the West Siberian plain has been developed using the Braun-Blanquet approach. In the northern part of the West-Siberian plain, boreal coniferous forests occur at the northern limit of their range characterized by a cold continental climate and the prevalence of long-frozen, poorly drained soils in watersheds. All syntaxa were assigned to the class Vaccinio-Piceetea Br.-Bl. in Br-Bl., Siss. & Vlieger 1939. Association Pinetum sibiricae-sylvestris Makhatkov et Ermakov 2010 has been included in the alliance Cladonio stellaris-Pinion sylvestris K.-Lund 1986, order Pinetalia sylvestris Oberd. 1957. Associations Melampyro pratense-Laricetum sibiricae ass. nova hoc loco and Ledo-Pinetum sibiricae ass. nova hoc loco have been included in the alliance Pino sibiricae-Laricion sibiricae Ermakov in Ermakov et Alsynbayev 2004 and the order Ledo palustris-Laricetalia cajanderi Ermakov in Ermakov et Alsynbayev 2004. Results of detrended correspondence analysis ordinations demonstrate a strong floristic integrity of the higher syntaxonomic units and clear syntaxonomic boundary between north boreal forests of the Vaccinio-Piceetea and swamp forests of the Oxycocco-Sphagnetea in spite of transitional floristic features in the majority of communities. © 2011 Copyright Taylor and Francis Group, LLC. Source
Bredoire F.,French National Institute for Agricultural Research |
Nikitich P.,French National Institute for Agricultural Research |
Nikitich P.,Tomsk State University |
Barsukov P.A.,Institute of Soil Science and Agrochemistry |
And 6 more authors.
Plant and Soil | Year: 2015
Aims: Forest-steppe and sub-taiga, two main biomes of southwestern Siberia, have been predicted to shift and spread northward with global change. However, ecological projections are still lacking a description of belowground processes in which fine roots play a significant role. We characterized regional fine root patterns in terms of length and mass comparing: 1) sites and 2) vegetation covers. Methods: We assessed fine root length and mass down to one meter in aspen (Populus tremula) and in grassland stands on six sites located in the forest-steppe and sub-taiga zones and presenting contrasting climate and soil conditions. We distinguished fine roots over diameter classes and also between aspen and understorey in forest. Vertical fine root exploration, fine root densities and total length and mass were computed for all species. Morphological parameters were computed for aspen. Results: In both forest and grassland, exploration was deeper and total length and mass were higher in forest-steppe than in sub-taiga. Exploration tended to be deeper in forest than in grassland and for trees than for understorey vegetation within forest stands. Conclusions: The differences in rooting strategies are related with both pedo-climatic conditions and vegetation cover. Further investigations on nutrient and water availability and on fine root dynamics should permit a better understanding of these patterns and help predicting their future with global changes. © 2015 Springer International Publishing Switzerland Source
Bredoire F.,French National Institute for Agricultural Research |
R Bakker M.,French National Institute for Agricultural Research |
Augusto L.,French National Institute for Agricultural Research |
A Barsukov P.,Institute of Soil Science and Agrochemistry |
And 6 more authors.
Biogeosciences | Year: 2016
Climate change is particularly strong in northern Eurasia and substantial ecological changes are expected in this extensive region. The reshaping and migration northwards of bioclimatic zones may offer opportunities for agricultural development in western and central Siberia. However, the bioclimatic vegetation models currently employed for projections still do not consider soil fertility, in spite of this being highly critical for plant growth. In the present study, we surveyed the phosphorus (P) status in the south-west of Siberia where soils have developed on loess parent material. We selected six sites differing in pedoclimatic conditions and the soil was sampled at different depths down to 1ĝ€m in aspen (Populus tremula L.) forest as well as in grassland areas. The P status was assessed by conventional methods and by isotope dilution kinetics. We found that P concentrations and stocks, as well as their distribution through the soil profile, were fairly homogeneous on the regional scale studied, although there were some differences between sites (particularly in organic P). The young age of the soils, together with slow kinetics of soil formation processes have probably not yet resulted in a sufficiently wide range of soil physico-chemical conditions to observe a more diverging P status. The comparison of our data set with similar vegetation contexts on the global scale revealed that the soils of south-western Siberia, and more generally of northern Eurasia, often have (very) high levels of total, organic and inorganic P. The amount of plant-available P in topsoils, estimated by the isotopically exchangeable phosphate ions, was not particularly high but was intermediate on the global scale. However, large stocks of plant-available P are stored in subsurface layers which currently have low fine-root exploration intensities. These results suggest that the P resource is unlikely to constrain vegetation growth and agricultural development under the present conditions or in the near future. © Author(s) 2016. CC Attribution 3.0 License. Source | <urn:uuid:6a15b66b-4750-406b-a36c-e4cb274b9839> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/institute-of-soil-science-and-agrochemistry-1849500/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00154-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.908493 | 2,003 | 2.546875 | 3 |
Ever wonder how a hummingbird is able to hover in mid-air, appearing for a moment to defy the laws of gravity? You’re not alone. Vanderbilt University mechanical engineer Haoxiang Luo accessed the XSEDE’s Lonestar supercomputer to run 3D simulations of a hummingbird flight in an attempt to deconstruct its mysterious powers.
Science Writer Jorge Salazar details Luo’s research in a recent article on the TACC website. The science has real-world applications too – for the development of micro and unmanned aerial vehicles.
Luo carried out computer simulations of the three-dimensional flow patterns created by the pumping of the hummingbird’s wings using the Lonestar supercomputer of the Texas Advanced Computing Center.
“We used TACC’s Lonestar for both its CPU time and data storage,” said Luo. “The excellent computing power allowed us to complete the simulations in a reasonable amount of time.”
It was known that the vertical force from the wings is equal to the bird’s body weight, but there still was a lot more to uncover.
The research team used a high-speed camera, which captured 1,000 frames per second, to record the flight of a trained female ruby-throated hummingbird. Non-toxic paint was applied in small dots to the leading and trailing edge of its wings, allowing a MATLAB program to track the dots through space and time.
“The instantaneous force characteristics were previously unknown,” Luo reported. “So was the three-dimensional flow stirred up by the bird. We are the first group to be able to directly quantify the time-varying forces within a stroke cycle.”
The experiment shows that there is relationship between wing motion, the force produced by that motion, and the power consumed in beating their wings, explains Salazar.
Despite the bird’s diminutive stature, this was an exceedingly compute-intensive endeavor, hence the need for a leadership-class supercomputer.
“For a hummingbird with only a 10 centimeter wingspan, the unsteady aerodynamics is complex enough to require millions of mesh points to resolve the many, many small vortices stirred up by the wings — the bird essentially is flying in an ‘ocean of vortices.’ Therefore, efficient algorithms and high-performance computing are necessary for this work,” said Luo.
Luo and his colleagues are planning to expand their study of the hummingbird to see how it performs other aerial maneuvers. The mechanical engineers hope to apply their gleanings to man-made air-crafts, while biologists are interested in the way that the flying ability has evolved over time and in different animal species. | <urn:uuid:d1dddb9c-be5c-4bb1-b30a-667f7a6a5458> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/09/02/deconstructing-hummingbirds-hover/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00420-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950357 | 573 | 3.703125 | 4 |
Eye in the sky: NOAA satellite program aids oil spill response
An experimental satellite imagery program has been helping officials battle the slick in the Gulf of Mexico
Federal officials have been using an experimental program from the National Oceanic and Atmospheric Administration that uses advanced satellite imagery to analyze the massive oil slick seeping in the Gulf of Mexico.
Although NOAA officials have done satellite analysis for decades, the new program is unique because it uses high resolution data from satellites from a variety of countries and it’s designed for situations such as oil spills, said Chris Warren, a physical scientist at NOAA and a co-developer of the program.
In the program, the NOAA’s satellite team makes imagery available from a range of satellites to NOAA’s Ocean Service's Emergency Response Division, as well as other federal, state, and local organizations involved in the response effort. NOAA personnel elsewhere in the agency use the data to create advanced models for the how the spill will spread.
“Because the satellites are continuously circling and taking imagery, we were able to provide continuous locations of where the slick was, even when [planes and ships] were not able to be out there” because of recent bad weather, Warren said.
Warren also said the program improves the ability of responders to understand the boundaries of the spill.
“Because of the extent of the spill, the sheer size of it, it’s very difficult for planes and ships to be able to cover the whole area to see where it is every single day, with satellites we’re able to get a much broader view of it and for the most part capture the entire slick in one image,” he said. “That’s been the biggest thing: knowing where it is because [responders] can’t physically go back and forth and scan it with planes and ships just because how big it is.”
Warren said the program was started in response to a request from NOAA’s ocean service's division. Warren said although the program has proven successful in helping with the oil spill in Gulf, it’s still probably about a year or two away from being fully ready.
Ben Bain is a reporter for Federal Computer Week. | <urn:uuid:41c496d3-b1c7-434e-83f1-afad1ff6d6fd> | CC-MAIN-2017-04 | https://fcw.com/articles/2010/05/06/web-noaa-satellite.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00045-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954226 | 456 | 3.140625 | 3 |
If you are considering cloud computing and need to address related data privacy concerns, the articles discussed below provide an explanation of how cloud computing actually works to help you with your analysis.
The National Institute of Standards and Technology (NIST) recently revised its definition of cloud computing:
“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.”
Another recent background resource is the “Cloud Computing: Architectural and Policy Implications” paper released by the Technology Policy Institute, which was written by Professor Christopher S. Yoo. The paper discusses the technical resources used in cloud computing, starting with an explanation of “Key Cloud Computing Concepts,” including “virtualization.” [The NIST also just released the final version of its Guide to Security for Full Virtualization Technologies] Other topics include the economics of cloud computing, as well as architectural implications for access networking and data center interconnectivity. The paper concludes with a discussion of industry impact and regulatory implications.
On the same day the NIST released its newly revised definition of cloud computing, it also released its first privacy and security guidelines. “The key guidelines recommended to federal departments and agencies, and applicable to the private sector, include:
- Carefully plan the security and privacy aspects of cloud computing solutions before engaging them.
- Understand the public cloud computing environment offered by the cloud provider and ensure that a cloud computing solution satisfies organizational security and privacy requirements.
- Ensure that the client-side computing environment meets organization security and privacy requirements for cloud computing.
- Maintain accountability over the privacy and security of data and applications implemented and deployed in public cloud computing environments.” | <urn:uuid:c435cd12-5ef4-4f1c-90b3-63798a978067> | CC-MAIN-2017-04 | https://www.dataprivacymonitor.com/cloud-computing/catching-up-on-cloud-computing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00073-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.915717 | 408 | 3.03125 | 3 |
The Internet has become a fundamental aspect of most of our lives. It goes beyond social media, online shopping, and banking. Critical infrastructures like water, sewer, electricity, and even our roadways all rely on the Internet to some degree.
The Internet’s weak link is the difficulty in reliably identifying individuals. When online, our identities are determined by IP addresses, cookies, and various “keys” and passwords, most of which are susceptible to tampering and fraud.
We need a better strategy. Howard A. Schmidt, the Cybersecurity Coordinator and Special Assistant to the President, points to The National Strategy for Trusted Identities in Cyberspace (NSTIC), which was developed in response to one of the near term action items in the President’s Cyberspace Policy Review.
The NSTIC calls for the creation of an online environment where individuals and organizations can complete online transactions with confidence, trusting the identities of each other and the infrastructure that facilitates the transaction.
The primary goal is to build a cybersecurity-based identity management vision and strategy that addresses privacy and civil liberties interests, leveraging privacy-enhancing technologies for the nation.
The National Strategy for Trusted Identities in Cyberspace is a document released to the public for comment. The Department of Homeland Security has posted the draft at www.nstic.ideascale.com, and will be collecting comments from any interested members of the general public.
Offline, there are currently dozens of identification technologies in play that go beyond the simplicity of Social Security numbers, birth certificates, drivers licenses, and passports These include smart cards, mobile phones, biometrics such as facial recognition, ear canal recognition, fingerprints, hand geometry, vein recognition, voice recognition, and dynamic biometrics among others.
In a future post, we will go into more details on each. However, there is not a consistent standard in the United States to date. In the near future, we may be the adoption of some of these technologies to properly identify who is who. | <urn:uuid:0529333a-948e-4be5-b103-1649591bf19f> | CC-MAIN-2017-04 | http://www.infosecisland.com/blogview/6622-National-Strategy-for-Online-Identification.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00099-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924893 | 418 | 2.953125 | 3 |
February 05, 2010 09:34 ET
New National Park Reserve to protect important boreal forest landscape
HAPPY VALLEY-GOOSE BAY, NEWFOUNDLAND AND LABRADOR--(Marketwire - Feb. 5, 2010) - The Honourable Jim Prentice, Canada's Environment Minister and Minister Responsible for Parks Canada and the Honourable Charlene Johnson, Newfoundland and Labrador's Minister of Environment and Conservation, today announced that they have agreed to take the necessary steps to establish a new national park reserve in the Mealy Mountains area of Labrador. The park reserve will protect roughly 10,700 sq km, which will make it the largest national park in eastern Canada. The provincial government also announced its intent to establish a waterway provincial park to protect the Eagle River, adjacent to the proposed national park reserve. Together these areas will protect over 13,000 sq km.
"As we enter into the International Year of Biodiversity, it is fitting that we are working to establish a national park reserve to protect this spectacular boreal landscape for all time, for all Canadians," said Minister Prentice. "This part of Labrador is not only of ecological significance, it is also of great cultural importance and we are committed to moving forward in a way that recognizes and respects the traditional connections people have with the land."
"The Government of Newfoundland and Labrador is pleased to dedicate an area of Labrador rich in natural and cultural heritage to the people of the province, the country, and indeed the world, to protect these special places for all time," said Minister Johnson. "This initiative demonstrates our understanding of the importance of our ecosystems and our commitment to biodiversity conservation. We are very happy to work toward establishing this national park in our province, and we are most thankful to the Steering Committee that helped make this a reality."
At the announcement, the Ministers unveiled the boundary for the national park reserve along with a conceptual boundary for an adjacent waterway provincial park. They accepted the consensus recommendations of the Steering Committee for the National Park Feasibility Study, and signed a memorandum of understanding outlining the next steps the two governments will take to establish the national park reserve, including the negotiation of a federal-provincial land transfer agreement.
Additionally, a waterway provincial park in the Eagle River watershed will encompass some 3,000 square kilometres of wilderness and include almost the entire length of the Eagle River from the headwaters to the sea.
Together, these parks in the Mealy Mountains, when established, will protect a stunning array of boreal ecosystems and wildlife, along with landscapes of great cultural significance.
Consultations with Aboriginal groups will continue throughout the national park reserve establishment process. As recommended by the Steering Committee for the feasibility study, traditional land use activities by Labradorians will be permitted to continue within the national park reserve, managed to emphasize ecological integrity and conservation measures.
A backgrounder accompanies this news release at http://www.pc.gc.ca/agen/ne/index_e.asp.
Office of the Minister of the EnvironmentFrederic BarilPress Secretary819-997-1441orParks CanadaNational Corporate Communications BranchMedia Relations819-994-3023orNL Department of Environment and ConservationMelony O'NeillDirector of Communications709-729-2575, firstname.lastname@example.org
See all RSS Newsfeeds | <urn:uuid:1c85d462-8c6e-4e35-910f-896e00072089> | CC-MAIN-2017-04 | http://www.marketwired.com/press-release/canada-newfoundland-labrador-commit-creating-new-national-park-reserve-mealy-mountains-1112984.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00311-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.911757 | 696 | 2.515625 | 3 |
What is cloud computing technology architecture ?
Cloud Architecture refers to the various components in terms of databases, software capabilities, applications, etc. engineered to leverage the power of cloud resources to solve business problems. Cloud architecture defines the components as well as the relationships between them.
The various components of Cloud Architecture are:
- On premise resources
- Cloud resources
- Software components and services
The entire cloud architecture is aimed at providing the users with high bandwidth, allowing users to have uninterrupted access to data and applications, on-demand agile network with possibility to move quickly and efficiently between servers or even between clouds and most importantly network security
The various cloud based services have their own distinct and unique cloud architectures:
- Software as a Service (SaaS) involves software hosted and maintained on internet. With SaaS, users do not have to install the software locally.
- Development as a Service (DaaS) involves web based development tools shared across communities.
- Platform as a Service (PaaS) provides users with application platforms and databases, equivalent to middleware services.
- Infrastructure as a Service (IaaS) provides for infrastructure and hardware such as servers, networks, storage devices, etc. running in the cloud, available to users against a pay per usage basis. | <urn:uuid:88cbbc52-6f1a-4ac7-a92f-98809380c9e2> | CC-MAIN-2017-04 | https://www.hcltech.com/technology-qa/what-is-cloud-architecture | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00311-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936531 | 262 | 3.40625 | 3 |
There’s an important effort underway among health care data experts to enable clinicians and medical researchers to share the same data for analytics to improve patient outcomes.
At issue is the structure of electronic health records (EHR) that were originally designed to be used in day-to-day patient care and are not set up to handle much bulkier data types such as X-ray images and genomic tests.
As a recent editorial in the Journal of the American Medical Association notes, a critical shortcoming of EHRs of today is that despite their usefulness they can’t hold and analyze much of the ancillary data that health care experts need in a timely fashion. Ancillary data could include laboratory and imaging test results. (See “Why Digital Medical Records Can’t Hold an X-Ray,” below.)
This condition persists even though available technology is already able to gather some of this information.
“EHRs were never designed to develop insights on large-scale sets of data. They help to collect information that can address inefficiencies of paper records and provide basic error-checking when you saw patients,” says Dr. Graham Hughes, chief medical officer at SAS for the SAS Center for Health Analytics and Insights. Hughes is a developmental neurobiologist and a leader in health informatics.
Addressing this problem is the focus of the Electronic Medical Records and Genomics (eMERGE) network, funded by the National Human Genome Research Institute, a division of the National Institutes of Health. It is a bioinformatics program established in 2007 with seven facilities to develop, disseminate, and apply approaches to research that result from the mapping of the human genome. The program’s coordinating center is located at Vanderbilt University.
This national consortium of scientists and organizations, using supercomputer systems, so far “has captured data sets from 56,000 individuals,” says Dr. Rongling Li, a genetic epidemiologist at the National Human Genome Research Institute, and eMERGE’s program director. Multiply 3 billion pairs of data for each of those 56,000 people and, she notes, “You can see what we mean by really big data.”
Experts say such a program is a way to move beyond the limitations of medical records.
“Even when EHRs advance, unless there are fundamental changes, they will not be able to handle large volumes of genetic data. We need to build a more fluid system,” says Dr. Justin Starren, chief of the Division of Health and Biomedical Informatics at Chicago’s Northwestern University., adding: “We could wait for the mainstream EHR vendors to solve the issue in the near term, or simply try to stuff the genomic data into the current system.”
Neither of those options seems likely, however. Hughes says he doesn’t think EHRs are the answer.
“You don’t need all [the genetic information] stored in the patient’s health record,” he says. “What you do need are new algorithms that will teach a system to say, ‘I know that I need to look at this particular gene…I know that’s a variant.’ Then signals in the EHR would provide some guidance to the doctor as to the implication of what impact these variants could have on that patient’s care.”
The Potential for Data-Driven Benefits
Discussions about data-driven health care improvements have been going on for years in political and public policy circles, not just the medical field. And they continue among experts working to come up with new data models for patient records.
Crunching vast repositories of genomic data has enormous potential for saving lives. Starren offers this example of a maternity patient:
“There was a woman who was on codeine after her delivery and, unfortunately, turned out be among the approximately 6 percent of the population that doesn’t metabolize codeine efficiently. She ended up retaining so much of it in her breast milk that her baby’s respirations were depressed and the child died.”
If there had been an easier way to analyze her gene sequence to show whether this woman was one of these “high metabolizers” during her pregnancy, and there was a process in place to flag the doctor about the variant, either the mother wouldn’t have received codeine, or wouldn’t have initially breastfed her baby.
Hughes says this kind of preventive scenario is not far-fetched. “We can already find data that allows us to suggest very specialized patterns of treatment (for example, surgery, a specific drug, exercise), determining first what’s best for a specific group overall—like 64-year-old black women—and then eventually for individuals within that group,” says Hughes. “The technology is here today, [just] not used widely.”
Once these analytics provide more easily read data, health care economics will also benefit, Li says: “When we get the right diagnosis, and provide the right dose of the right medicine, we’ll save money.”
Such data analysis might eventually help avoid malpractice suits. “It would act as smart surveillance that can troll through this information 24/7 looking for warnings, information that your care team is too harried to look for,” says Hughes.
Analytics could also lead to personalized medicine. “Think about the number of drugs people over 65 take, and how many are necessitated by a genetic influence, like cholesterol,” says Starren. “Where we’re going over the next 10 years is not just checking your blood pressure at a pharmacy. You’ll have your entire genome sequenced and your risks will be sent on to your doctor to guide your individual treatment,” Hughes says.
On the downside, algorithms allowing this kind of genetic sifting raise other issues, such as privacy and ethics. “We haven’t figured out all the unexpected consequences, in areas like insurance or employment, when each individual can be flagged as carrying ‘dangerous’ genes,” says Starren.
In the meantime, though, Starren says, “I think one of the lessons behind this is that we traditionally think of research and clinical care as two separate worlds that have nothing to do with each other.
But as medicine becomes recognized as a big data problem, the researchers and the clinical IT people will see the need to work much more closely together.”
He adds, “If you’re going to be a scientist in this century, you’ll have to follow algorithms.”
Wendy Meyeroff, of WM Medical Communications, is an experienced freelance writer based in Baltimore who specializes in health care and IT topics.
Home page illustration of chromosomes of the human genome, via National Human Genome Research Institute. | <urn:uuid:2fa840f3-0a12-4f2b-9e62-0b1f16c07847> | CC-MAIN-2017-04 | http://data-informed.com/program-seeks-to-merge-electronic-health-data-and-genomic-data/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284270.95/warc/CC-MAIN-20170116095124-00365-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949441 | 1,450 | 2.703125 | 3 |
I've long thought that someday Windows' security problems could foul up the Internet for everyone. That day may be arriving.
It's not just me being paranoid about Windows. It's the ISC (Internet Storm Center), the group that tracks the overall health of the Internet. They're wondering whether the newly discovered "LNK" exploit might be used to slam the brakes on the Internet's high-speed traffic.
decided to raise the Infocon level to Yellow to increase awareness of the recent LNK vulnerability and to help preempt a major issue resulting from its exploitation. Although we have not observed the vulnerability exploited beyond the original targeted attacks, we believe wide-scale exploitation is only a matter of time. The proof-of-concept exploit is publicly available, and the issue is not easy to fix until Microsoft issues a patch. Furthermore, anti-virus tools' ability to detect generic versions of the exploit have not been very effective so far.
The LNK vulnerability is an obnoxious little security hole that's present in all versions of Windows from Windows 2000 on up. There are now numerous attack programs that can use a malicious shortcut file, identified by the ".lnk" extension, to automatically run malware. All a user has to do is view the contents of a folder containing the infected shortcut, and, ta-da, the program is wreaking havoc.
Early versions of the attack required users to plug in a USB key with the malicious software. If that were still required, this would be a minor problem. Now, however, exploits exist that can launch attacks over SMB (Server Message Block) file shares and Windows' WebClient services. While often used over a LAN, these services can also be used over the Internet. And, of course, such tried-and-true-virus-spreading methods as sending a LNK file over an IM (instant message) or in an e-mail will also spread it.
Once in place, a LNK-based malware can do pretty much anything it wants to your Windows PCs -- send someone your credit-card numbers, zap your system, whatever. Or, and this is where it gets even more dangerous, it could be used to take over or knock out SCADA (supervisory control and data acquisition) control systems used to control industrial machinery in power plants and factories. In short, this is software that can use Windows not just to exploit Windows users, but to commit cyber-episonage or warfare
Even Microsoft is worried enough about this vulnerability that the guys from Redmond said, "Anyone believed to have been affected by this issue ... should contact the national law enforcement agency in their country."
So, what can you do about it? Not a lot. You can try Microsoft's recommended work-arounds, but, as Chester Wisniewski, a senior security advisor with Sophos, pointed out, turning off icons for shortcuts is "highly impractical for most environments. While it would certainly solve the problem, it would also cause mass confusion among many users and might not be worth the support calls." Wisniewksi added that Microsoft's other suggestion, disabling WebClient, may be a solution for people who don't use Microsoft SharePoint, but many organizations rely on SharePoint so this is limiting as well.
In short, there's no cure. The attack is about as nasty as it can get, and the 'cures' that we have now may be worse than the diseases. Unless, of course, it turns out the alternative really is to have the Internet itself get smacked around and for factories to be stilled by this Windows exploit of mass destruction. | <urn:uuid:b4c30870-28e9-4c55-a0a3-479bba197e49> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2468586/cybercrime-hacking/can-windows-kill-the-internet-.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00237-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948195 | 744 | 2.578125 | 3 |
It's often said, "There's nothing new under the sun." And that appears to be the case in the world of cybersecurity where hackers most often exploit known vulnerabilities to gain access to private computer files, according to HP's 2015 Cyber Risk Report. Maintaining strong computer security, the report says, is largely a process of plugging up known holes.
While newer exploits may generate more press, the report found that in 2014 the majority of attacks had exploited common misconfigurations of technologies and known bugs in code written years ago. The report found that 44% of breaches came from vulnerabilities that are two to four years old.
"Many of the biggest security risks are issues we've known about for decades, leaving organizations unnecessarily exposed," said Art Gilliland, senior vice president and general manager of enterprise security products at HP. Gilliland urges organizations to use fundamental security tactics to mitigate risk.
According to the report, server misconfiguration was the number one vulnerability of 2014. Access to files and directories provide attackers with crucial information for additional avenues of attack and to determine if their method of attack was successful.
One thing is for sure: the rate of malware attacks is accelerating. AV-Test, an independent anti-malware testing organization, collected 83 million malware samples in 2013. That number almost doubled in 2014 to 140 million and is expected to break 200 million in 2015.
The key takeaway from the report is that security analysts should devote substantial resources to plugging up known holes while also being conscious of possible new lines of attack as new technologies are put in place. These new technologies are important as hackers increasingly focus on finding holes in point-of-sale (POS) and Internet of Things (IoT) technologies.
The HP report also found successfully secured enterprise environments employ complementary protection technologies. A mentality that assumes a breach will inevitably occur instead of only working to prevent intrusions seems a likely best practice. Successfully secured enterprises use all available tools and do not rely on a single product or service.
Some of the report's tactical recommendations include:
- Implementing a comprehensive patching strategy to keep all systems up to date
- Using regular penetration testing and configuration verification to identify potential issues
- Understanding new lines of potential attack that may be introduced in the installation of new technology
- Keeping up with the security industry to learn about attacker's tactics
The report concludes the pace of technology advancement is becoming more rapid, and with that comes the challenge of maintaining security and privacy.
While the escalation in cyberattacks seems relentless, organizations can greatly reduce their risk of breach by upgrading equipment, plugging known vulnerabilities, and listening to security pros for new developments. Employing a variety of security measures can help create a highly functioning network that maintains strong privacy and security for individuals and the company.
The opinions expressed in this Blog are those of Michelle Drolet and do not necessarily represent those of the IDG Communications, Inc., its parent, subsidiary or affiliated companies.
This article is published as part of the IDG Contributor Network. Want to Join? | <urn:uuid:de953f40-e18b-4698-bba7-077a9b72fc09> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2896785/security0/known-issues-pose-biggest-it-security-threats.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00256-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952122 | 620 | 2.625 | 3 |
Analyzing the Societal Effects of YouTube
The domain name YouTube first was activated in February 2005. By February 2008, the site was grabbing one-third of the estimated 10 billion views of online videos that month, up from 15 percent in 2007, according to Internet marketing research company comScore.
Ten billion views a month is a number that speaks for itself: Online video is an explosive new medium, and YouTube has proven to be dominant in this arena. And while for some the site provides mere entertainment, for others, YouTube is proving to be a valuable research tool, as well as a medium for expression or documentation of aberrant behavior.
The challenge in analyzing YouTube as a medium is that its meteoric rise makes it difficult to get a handle on its place in society. “Even though it’s an unavoidable force, the truth is we don’t yet know what kind of unavoidable force it is,” said Andrew Perrin, associate professor of sociology at the University of North Carolina at Chapel Hill.
“I like to think about new media that arrive on the scene in a number of dimensions, and one of them is the interactivity of it — — the extent to which a viewer or consumer is able to direct the content,” he said. “As a medium, YouTube has a certain pack mentality to it because there are only so many decision rules you can use to find stuff to watch on YouTube, and so whatever someone else has watched turns out to be what you’re likely to watch.”
In many cases, people use YouTube casually: Someone e-mails or somehow exposes them to a link to a video clip and so they watch it. And in many cases this will be content that originated elsewhere. “It’s a pretty derivative medium. What people mostly are watching is video that we know of through some other source, and they use YouTube for its capacity to serve that video, more than to be a medium of its own,” Perrin said.
For this reason, Steve Jones, professor in the Department of Communication and associate dean of Liberal Arts and Sciences at the University of Illinois at Chicago, feels it’s too soon to know what place YouTube holds in our society. “It’s certainly not replacing TV. At this point, it’s sitting alongside other video media,” Jones said.
“It’s probably safe to say that, over time, as this generation of high school and college students gets older, they will be quite open to getting what we would consider television content via YouTube, and they would probably be comfortable getting other kinds of content via YouTube. So in that respect, I think YouTube has begun the process of moving video to IP-based distribution, independent of traditional or cable networks.”
Going beyond this migration, however, what YouTube does that video media before it have not done is provide ordinary users a way to expose their content to millions of eyeballs immediately. In the past, people without access to a television network may have been able to record video content and distribute it via tape or DVD, or with some effort get themselves cable access. But these methods lack the widespread, immediate accessibility of YouTube and other online video sites.
Violence on YouTube
Almost since its inception, people have used YouTube to post videos of violent acts. By fall 2006, this had become so widespread that politicians in the U.K. sought to legislate against violence on YouTube, with the House of Commons citing a video on YouTube of a man being kicked in the face until he lost consciousness. U.K. ministers claimed such videos “fuel random acts of violence.”
In the three years it’s existed, news stories of people committing acts of violence and posting them to YouTube have become recurrent, but one recent case sheds particular light on the issue. In April, six teenage girls in Florida were accused of beating a 16-year-old girl and recording the attack with the intention of posting it on YouTube.
They were apprehended before they were able to post the video, but a portion of the recording eventually was released to the media by police, and it ended up on YouTube anyway. (Evidence of the pervasive nature of YouTube: The site now contains the original clip alongside video of the perpetrators appearing in court, the victim’s parents speaking to the news media, various television clips covering the case, dozens of videos uploaded by users commenting on the story and even an amateurishly animated re-enactment of the attack).
This story prompted a range of responses in various news media, with some commentators attempting to pin at least some of the blame for the incident on YouTube itself, and others quick to insist that was preposterous. One could argue YouTube merely reflected violence here, but the camera’s presence during the assault and the intended destination of the footage being YouTube begs the question of whether the site served as a catalyst to violence in this instance.
“I would be cautious about attributing causality to YouTube,” Perrin said. “There are a heck of a lot more people on YouTube than are going out and committing acts of violence in order to get onto YouTube, and so to understand that as a directly causal factor is a little bit of a stretch.”
Jane Brown, professor in the School of Journalism and Mass Communication at the University of North Carolina at Chapel Hill, agreed. “I wouldn’t say it’s a catalyst [to violence],” she said. “It gave the girls a way to promote themselves that they wouldn’t have had otherwise. A lot of media glorifies violence, and there’s so much interest in celebrities these days that it seems a logical extension that adolescents would think, ‘Hmm, if I’m in a video and I get lots of exposure, I too may get some kind of notoriety.’”
Divorce on YouTube
Another recent, high-profile news story with YouTube at its center is that of Tricia Walsh-Smith, a former actress and playwright in New York City in the midst of a divorce battle with her husband, Philip Smith, president of The Shubert Organization, the largest theater owner on Broadway. In an attempt to gain the upper hand, Walsh-Smith took to YouTube to expose details of her marriage and its subsequent, apparently acrimonious split.
The clip begins with Walsh-Smith in her kitchen discussing particulars of her prenuptial agreement and why she may or may not be getting evicted from her Park Avenue apartment, calling herself an idiot and giving herself a tarot-card reading before declaring, “I’m fighting back, and I’m going to do this video and I’ll put it up on YouTube.” She then makes embarrassing claims regarding the couple’s sex life and later goes through their wedding album on camera, describing family members as “bad” or “evil.”
In this instance, YouTube presented a scorned woman with a unique opportunity in the history of social media: an accessible way to instantly smear another party.
“What’s interesting to me is that she would have done this via YouTube rather than her own Web site,” Jones said. “YouTube has made it so easy to post these kinds of things that why would you go to the lengths of putting [up] your own Web site when you can easily upload it to YouTube?”
Perrin said he couldn’t think of any other medium throughout history that would have been so immediate in its effect. “The self-production aspect of YouTube, combined with its broadcast reach, that is a unique form,” he said. “I’m sure there are lots of estranged spouses throughout history who would like the opportunity to have done that. YouTube provides us a space in which people can post stuff without the gatekeepers that have been associated with previous high-bandwidth media.”
In debating the level of causality one can attribute to YouTube, Perrin draws a comparison with what was once a newly emergent communication tool: the telephone. “If you go back and look at some of the really fine social history of what people were worried about in terms of the advent of the telephone, we hear a lot of the same stuff,” he said. The fear was that people would be “driven to do crazy and racy things because they’re allowed and able now to tell people about it quickly through the telephone.”
Walsh-Smith’s clip, meanwhile, provides an interesting study on the difference between using the telephone to smear someone and using online video to do so. In the video, she actually calls her husband, reaching him very briefly before being thrown to his secretary, telling her, “I’m filming at the moment; we’re doing a little video for YouTube.” She then repeats her humiliating claims about their sex life. Walsh-Smith’s husband’s secretary is decidedly nonplussed.
Whereas in the past the prank call itself would have been humiliation enough, now the main act is the video documentation of that call and the decision to broadcast it around the world.
A decision, Brown pointed out, that may have been made hastily and then magnified by the site. “The woman who damns her husband by saying everything she wants to say in public may regret that a couple of weeks later,” she said. “It may be too immediate, not allowing for much thought.”
Politics on YouTube
George W. Bush began his second term as president of the United States the month before YouTube’s inception. So the 2008 presidential campaign has been the first to feel the impact of the site. The campaign has seen televised debates in which citizens questioned candidates via YouTube clips. But more significantly, the analytical dialogue surrounding the campaign has identified a clear shift in presidential politics in what’s being called the “YouTube era.”
What this means is that every second a candidate is in the public eye likely will be recorded, either professionally or amateurishly (perhaps with a camera phone), and sliced up for quick analysis by reporters and political bloggers. John McCain’s statement that he’d be comfortable with the United States military remaining in Iraq for 100 years was recorded by a private citizen attending a town-hall meeting in New Hampshire and was first aired publicly on YouTube. The sound bite is now part of a television ad produced by the Democratic National Committee.
YouTube also affects the political process in ways that go beyond current gaffs or flubs. It provides reporters, political bloggers and, perhaps more importantly, political operatives with unfettered access to all of a candidate’s past statements on political positions, some of which may not agree with a candidate’s current ones.
Mitt Romney was stung by this in the current presidential race when the McCain campaign dug up a YouTube video in which the former Massachusetts governor stated he supported maintaining abortion rights in his state. Romney claims he changed his position on abortion to being pro-life following a November 2004 conversation with a stem cell researcher that he found unsettling. But the clip dated to May 2005, six months after this conversation. The video created doubt regarding Romney’s stance.
So the question is, does YouTube, in its ability to document and broadcast everything a candidates says, make the political process so transparent as to shift it in new directions?
“It changes the landscape dramatically,” Perrin said. “It’s partly about transparency, but also about further increasing the sound-bite-ness of the political landscape. This strikes me as a bad thing for politics. [Candidates now] have to be very careful about not just their whole message, but about each little piece of the message, which is why so many presidential speeches and campaign speeches are so deadly boring and repetitive.”
Jones agreed. “One of the possible long-term effects is that it’s going to cause politicians to even more greatly circumscribe their speech,” he said. “I don’t think that’s a good thing. I’d prefer that our politicians felt that they could speak freely, and then we could judge what they say.” | <urn:uuid:1989de0f-6e45-463a-97e2-622e3ed497a3> | CC-MAIN-2017-04 | http://certmag.com/analyzing-the-societal-effects-of-youtube/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00220-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967366 | 2,560 | 2.734375 | 3 |
An Exploration of Causes and Treatments
General Psychology 105
19 October 2016
Parkinson’s disease is the second most common neurodegenerative disease. Because of its debilitating effects on quality of life, being able to treat Parkinson’s is important for many people. As a result, there is great interest in understanding the mechanisms by which it operates so that new treatments can be devised to counteract its effects. As yet, there is no known cure. The treatments that do exist can only delay the onset of symptoms and minimize existing ones. These treatments generally fall into three categories: drug therapy, physical therapy, and deep-brain stimulation through surgery. Each of these treatments has benefits and downsides, and it must be determined for each individual patient which treatment yields the best results in his or her case.
The symptoms of Parkinson’s are many and vary from patient to patient. The most common motor effects are bradykinesia, tremors, difficulty initiating movement, and rigid muscle movements. Bradykinesia, the slowness of normally automatic movements, takes on the form of decreased blinking and swinging of arms, decreased facial expression owing to lack of stimulation of facial muscles, unintelligible speech due to failures in tongue movement, difficulty swallowing normally, vision problems including blurriness and seeing double, a slow gait, and difficulty walking and maintaining a rhythm, even in the early stages of the disease. Tremors most often occur while at rest and usually occur in the fingers or in the lips and chin. The tremors can occur without the person’s conscious knowledge. The difficulty in initiating movements causes “freezing” where the patient cannot initiate a gait, is incapable of taking another step, or stops just short of the destination. This goes along with an inability to recover from disruptions to balance, which can lead to falls. Additionally, muscles are rigid, leading to jerky movements and increased resistance when a limb is stretched. As the disease progresses, many of the symptoms worsen, and new symptoms develop. For example, the tremor tends to increase as time goes on, and the difficulty initiating movements worsens with the passage of time. In the advanced stages, patients have difficulty with simple tasks like walking, talking, and getting dressed. Although Parkinson’s is primarily a motor disorder, there are several non-motor symptoms related to cognition, sleep, and pain. Given the impact of these symptoms on everyday life and the loss of independence as the disease progresses, it is important for people afflicted with Parkinson’s to find some remedy to allow them to continue to live their lives normally.
The basic explanation for Parkinson's is low levels of the neurotransmitter dopamine in the basal ganglia region of the brain, which is important for initiating and executing movements. These abnormally low levels are caused by the death of dopamine-producing cells. The model of the basal ganglia proposed in the 1980s is that the lack of dopamine there causes the globus pallidus interna to be overly active, which in turn inhibits the motor thalamus and primary motor cortex. It is this suppression of the motor cortex that causes the characteristic slowness. This helps to explain some of the non-motor symptoms, since the dopamine-dependent areas of the brain extend widely and control many things other than movement. As the disease progresses, the neurodegeneration spreads to the limbic system, amygdala, and hippocampus. However, low dopamine levels in the basal ganglia cannot entirely account for Parkinson's disease. Recent research indicates that it is a complex condition influenced by many factors. For example, the dopamine levels do not directly account for muscle rigidity or the tremor. The phenomenon of "freezing" is also not well understood, since patients typically do not have similar trouble with other complex patterns of movement, and most patients benefit from having other external cues. Not all of the causes of Parkinson's are understood. If more is discovered about the mechanisms by which these symptoms are brought about, there will be opportunities for new treatments.
The most straightforward way to treat the disease is to increase the dopamine levels in the brain. The first medical treatments involved injections of dopamine. However, they were ineffective because dopamine cannot penetrate the blood-brain barrier and thus cannot have any effect on the brain. Instead, L-dopa, a precursor molecule that the body can turn into dopamine, is used. In addition to L-dopa, there are a number of other drugs that have remedial effects on Parkinson’s patients.
Dopamine replacement therapy, which uses various drugs, has been shown to help with the symptoms of bradykinesia, rigidity, and tremors. L-dopa is by far the most powerful, effective, and well-tolerated drug available now. It can be used as a treatment by itself. However, it does have several negative side effects. Prolonged use leads to a decrease in effectiveness as it “wears off.” Additionally, L-dopa treatments can themselves cause motor complications and bradykinesia. In other words, when overused, L-dopa can actually exacerbate the symptoms that it is trying to prevent. Additionally, L-dopa is almost no help in treating non-motor symptoms that are completely unrelated to dopamine. The problem of L-dopa causing bradykinesia has led to the development of alternative drugs to delay the onset of such symptoms as long as possible. There are many other beneficial drugs besides L-dopa, which can either be used by themselves or in conjunction with L-dopa. These other drugs act as agonists for dopamine, either by binding to receptors to stimulate dopamine production without themselves being converted to dopamine or by blocking the breakdown of dopamine. While these drugs do have many good effects, their side effects can also be very severe. Physical side effects include hypersexuality, hypotension, nausea, leg edema, and gastrointestinal problems. Psychological effects include impulse-control disorders, such as kleptomania and pathological gambling, and psychosis (such as delusions and hallucinations). Some side effects are reminiscent of schizophrenia, which makes sense given that schizophrenia is characterized by abnormally high dopamine levels. Different patients respond differently to each drug, meaning that the drug regimen is specific to each patient. There is no doubt that all forms of dopamine replacement therapy reduce the severity of impairment, but studies have shown that L-dopa is still the most effective and that all treatments have the side effects of losing their effectiveness and causing motor complications after a few years.
Another effective therapy is physical therapy and a regular exercise routine. Physical rehabilitation can help with ability to move, posture, and ability to retain balance. Research on the effectiveness of physical therapy is still limited, but the studies that have been done show that, without a doubt, it has a positive effect. There are several different variations of physical therapy that can be done. The simplest is physiotherapy consisting of just calisthenics. Other methods are strength training (involving things such as leg curls, leg presses, and chest presses) and aerobic training (involving things such as jogging or treadmill-walking). The studies that have been done so far seem to indicate that simple physiotherapy does not cause a marked improvement but that aerobic training and strength training provide a real benefit on top of an L-dopa or dopaminergic drug regimen. This benefit may even come partly from the regularity and progression of intensity. The authors of a pioneering study comparing the different types of physical therapy showed that patients who underwent aerobic or strength training "showed significant clinical improvements in the motor symptoms of [Parkinson's Disease]" and that physical exercise did more than basic calisthenics to reduce symptoms of bradykinesia and rigidity, causing increased agility, upper and lower body strength, and balance. Information about the long-term effects of physical therapy as a treatment is still uncertain, especially given that trials have lasted no more than a few months at a time. There are no known ill side effects, but it is true that regression occurs if the exercises are not done for a while. Physical exercise has its effect on health by stimulating more areas in the cortex, which improves blood flow to the brain. Additionally, intense physical exercise seems to have a structural impact on the brain by activating certain growth factors and promoting motor, cognitive, and behavioral functions.
A third option for treatment, other than drugs or physical therapy, is stereotactic surgery, also known as deep brain stimulation. It involves implanting electrodes in a patient's basal ganglia in order to be able to stimulate the subthalamic nucleus. The stimulation gives relief very similar to the relief from an L-dopa injection. In fact, deep brain stimulation is very effective for controlling motor fluctuations and tremors and can help with all dopamine-responsive symptoms. Not enough research has been done on deep-brain stimulation to determine what its side effects are.
It should be noted that no treatment does anything to counteract neuron degeneration. This is why there is no cure but only different strategies for managing the symptoms. In addition to the common therapies already discussed, there are many more drugs targeting different types of receptors that are being researched. Genetic influence through gene therapy is also being explored. Another possibility is continuous L-dopa infusion, which has been shown to reduce fluctuations in patients between experiencing and not experiencing symptoms. This would reduce the likelihood of the complications and side effects associated with discrete doses of L-dopa.
In conclusion, Parkinson’s disease is a complex disorder without a cure that affects many people. The best treatment varies from patient to patient, but in general the best currently-available treatment is a combination of dopamine replacement therapy drugs with a regular physical therapy regimen. Since there are still undesirable side effects to the drug regimens, it seems prudent to wait to begin the drugs until the quality of life has reached such a point that more drastic measures are necessary.
Carvalho, Alessandro et al. “Comparison of Strength Training, Aerobic Training, and Additional Physical Therapy as Supplementary Treatments for Parkinson’s Disease: Pilot Study.” Clinical Interventions in Aging 10 (2015): 183–191. Print.
Gil, Carmen, and Ana Martínez. Emerging Drugs and Targets for Parkinson's Disease. Cambridge: Royal Society of Chemistry, 2014. Print.
Griggs, Richard A. Psychology: A Concise Introduction. New York: Worth Publishing, 2014. Print.
Zsigmond, Peter et al. "Stereotactic Microdialysis of the Basal Ganglia in Parkinson’s Disease." Journal of Neuroscience Methods 207.1 (2012): 17-22. Print.
| <urn:uuid:f1bea645-85bd-4d2b-ab87-c5dee16eed9a> | CC-MAIN-2017-04 | https://docs.com/user814893/6839/parkinson | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00036-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949954 | 2,510 | 3.03125 | 3 |
The 2011 version of ASHRAE's guidelines openly endorsed "free cooling". This would have been considered heresy by many only a few years ago, and some are still in shock and have difficulty accepting this new outlook toward less tightly controlled environmental conditions in the data center.
The opportunity to save significant amounts of cooling energy by moderating the cooling requirements and expanding the use of "free cooling" is enormous. However, due to the highly conservative and risk-averse nature of the industry, this will take a while to become a widespread and common practice. Clearly some have begun to slowly explore raising the temperatures a few degrees to gather some experience and to see if they encounter any operational issues with the IT equipment. Ultimately, it is a question of whether the energy (and cost) saved is worth the risk (perceived or real) of potential equipment failures due to higher temperatures (and perhaps wider humidity).
There are clearly some legitimate reasons to keep lower temperatures. The first is a concern about loss of thermal ride-through time in the event of a brief loss of cooling. This is especially true for higher-density cabinets, where an event of only a few minutes would cause an unacceptably high intake IT temperature. This can occur during the loss of utility power and the subsequent transfer to a back-up generator, which, while it typically takes 30 seconds or less, will cause most compressors in chillers or CRAC units to recycle and remain off for 5–10 minutes or more. While there are some ways to minimize or mitigate this risk, it is a valid concern.
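To see why ride-through matters, a rough energy-balance estimate is enough. The sketch below is purely illustrative: the 10 kW load, the 50 cubic meters of contained air, and the 15 K allowable rise are assumed numbers rather than figures from any real facility, and it ignores heat absorbed by walls, racks and plenums.

```python
# Rough, illustrative estimate of how quickly contained air heats up once
# cooling stops. All numbers below are hypothetical assumptions, not
# measurements from any particular data center.

heat_load_w = 10_000        # assumed IT load sharing the air volume (10 kW)
air_volume_m3 = 50.0        # assumed volume of contained cold-aisle air
air_density = 1.2           # kg/m^3, typical for room-temperature air
specific_heat = 1005.0      # J/(kg*K) for air

air_mass_kg = air_volume_m3 * air_density
heat_capacity_j_per_k = air_mass_kg * specific_heat

# With no cooling, assume all IT heat goes into the trapped air (worst case).
rise_per_second = heat_load_w / heat_capacity_j_per_k
print(f"Temperature rise: {rise_per_second:.2f} K/s "
      f"({rise_per_second * 60:.0f} K per minute)")

# Time to climb 15 K (for example, from a 25 C supply to a 40 C intake limit)
allowed_rise_k = 15.0
print(f"Ride-through before a {allowed_rise_k:.0f} K rise: "
      f"{allowed_rise_k / rise_per_second / 60:.1f} minutes")
```

Under those assumptions the trapped air warms by roughly 10 K per minute, so a generator transfer followed by a several-minute compressor restart delay can easily push intake temperatures past common limits.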
The other concern is another common issue: the wide variations in IT equipment intake temperatures that occur in most data centers due to airflow mixing and bypass air from less-than-ideal airflow management. Most sites resort to overcooling the supply air so that the worst areas of higher-density zones (typically ends of aisles and tops of racks) do not overheat from re-circulated warm air from the hot aisles.
However, if better airflow management is implemented to minimize hotspots, it would allow intake temperatures to be slowly raised beyond the conservative 68–70°F. This can be accomplished by a variety of means, such as spreading out and balancing rack-level heat load, adjusting the airflow to match the heat load, and better segregating hot and cold air via blanking panels in the racks and the use of containment systems. If done properly, it is more likely that within one to two years, 75–77°F in the cold aisle would no longer be a cause for alarm to IT users. The key to this is to improve communications and educate both the IT and facilities management about the importance of air management and the opportunity for energy savings, without reducing equipment reliability.
For the complete series on data center energy efficiency download the Data Center Knowledge Executive Guide on Data Center Energy Efficiency in a PDF format compliments of Digital Realty.
| <urn:uuid:4b615c35-04b5-44b4-ba40-35f59d44163b> | CC-MAIN-2017-04 | http://www.datacenterknowledge.com/archives/2012/09/11/improving-cooling-systems-efficiency/2/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00248-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.948978 | 605 | 2.53125 | 3 |
Downloaders and droppers are helper programs for various types of malware such as Trojans and rootkits. Usually they are implemented as scripts (VB, batch) or small applications.
They don't carry out any malicious activities themselves; they just open the way for an attack by downloading, decompressing, and installing the core malicious modules. To avoid detection, a dropper may also create noise around the malicious module by downloading or decompressing some harmless files.
Very often, they auto-delete themselves after the goal has been achieved.
Downloaders and droppers emerged from the idea of malware files that were able to download additional modules (e.g. Agobot, released in 2002).
An interesting example of a modern downloader is OnionDuke (discovered in 2014), carried by infected Tor nodes. It is a wrapper over legitimate software. When a user downloads software via an infected Tor proxy, OnionDuke packs the original file and adds a malicious stub to it. When the downloaded file is run, the stub first downloads malware and installs it on a computer, and then unpacks the legitimate file and removes itself in order to be unnoticed.
Common infection method
Most of the time, the user gets infected by using some unauthenticated online resources. Infections are often consequences of activities like:
- Clicking malicious links or visiting shady websites
- Downloading unknown free programs
- Opening attachments sent with spam
- Plugging in infected drives
- Using an infected proxy (as in the case of OnionDuke)
They may also be installed without user interaction, carried by various exploit kits.
Downloaders are usually tiny and rarely get meaningful, unique names. Usually they are named after the architecture and the platform to which they are dedicated. Some examples:
- TrojanDownloader: MSIL/Prardrukat
They can be used to download various malware of different families. Sometimes, they are distributed by some bigger campaigns like OnionDuke.
Downloaders often appear in non-persistent form. They install the malicious module and remove themselves automatically. In such a case, after a single deployment they are no longer a threat. If for some reason they haven’t removed themselves, they can be deleted manually.
More dangerous variants are persistent. They copy themselves to some random, hidden file and create registry keys to run after the system is restarted, attempting to download the malicious modules again. In such cases, to get rid of the downloader it is necessary to find and remove the created keys and the hidden file.
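For the persistent variants, the obvious first place to look is the standard autostart locations. The read-only sketch below (Windows, Python 3) just lists the classic Run keys; the registry paths are the well-known defaults, and anything it prints still needs to be judged manually before removal.

```python
# Minimal, read-only sketch that lists the classic "Run" autostart entries a
# persistent downloader might add. It only prints what is registered;
# removal should be done by hand after confirming an entry is malicious.
import winreg

RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

for hive, path in RUN_KEYS:
    try:
        key = winreg.OpenKey(hive, path)
    except OSError:
        continue                     # key missing or not readable
    print(f"[{path}]")
    index = 0
    while True:
        try:
            name, value, _type = winreg.EnumValue(key, index)
        except OSError:              # no more values under this key
            break
        print(f"  {name} -> {value}")
        index += 1
    winreg.CloseKey(key)
```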
What remains is to take appropriate steps to neutralize the real weapon carried by the dropper. The difficulty of cleaning the system varies because the payload may be of different types. The most universal approach is to use good-quality, automated anti-malware tools and run a full system scan.
A successfully deployed downloader results in having a system infected by the core, malicious module.
Keeping good security habits, such as being careful about visiting certain websites and not opening unknown attachments minimizes the risk of being affected by malicious downloaders. However, in some cases it is not enough. Exploit kits can still install the malicious software on the vulnerable machine, even without any interaction. That’s why it is important to have good quality anti-malware software. | <urn:uuid:7e5d3fc9-8885-410f-9539-801b68b88cc6> | CC-MAIN-2017-04 | https://blog.malwarebytes.com/threats/trojan-dropper/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00184-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.92183 | 685 | 2.53125 | 3 |
Johnson F.,Sydney Water |
White C.J.,University of Tasmania |
van Dijk A.,Australian National University |
Ekstrom M.,Commonwealth Science and Industrial Research Organisation |
And 6 more authors.
Climatic Change | Year: 2016
Floods are caused by a number of interacting factors, making it remarkably difficult to explain changes in flood hazard. This paper reviews the current understanding of historical trends and variability in flood hazard across Australia. Links between flood and rainfall trends cannot be made due to the influence of climate processes over a number of spatial and temporal scales as well as landscape changes that affect the catchment response. There are also still considerable uncertainties in future rainfall projections, particularly for sub-daily extreme rainfall events. This is in addition to the inherent uncertainty in hydrological modelling such as antecedent conditions and feedback mechanisms. Research questions are posed based on the current state of knowledge. These include a need for high-resolution climate modelling studies and efforts in compiling and analysing databases of sub-daily rainfall and flood records. Finally there is a need to develop modelling frameworks that can deal with the interaction between climate processes at different spatio-temporal scales, so that historical flood trends can be better explained and future flood behaviour understood. © 2016 Springer Science+Business Media Dordrecht Source
The first detection of a gravitational wave depended on large surfaces with excellent flatness, combined with low microroughness and the ability to mitigate environmental noise. Albert Einstein's general theory of relativity predicted that massive, accelerating bodies in deep space, such as supernovae or orbiting black holes, emit huge amounts of energy that radiate throughout the universe as gravitational waves. Although these "ripples in spacetime" may travel billions of light years, Einstein never thought the technology would exist that would allow for their detection on Earth. But a century later, the technology does exist at the Laser Interferometer Gravitational-Wave Observatory (LIGO). Measurements from two interferometers, 3000km apart in Louisiana and Washington State, have provided the first direct evidence of Einstein's theory by recording gravitational-wave signal GW150914, determined to be produced by two black holes coalescing 1.2 billion light years away. At the heart of the discovery lies fused silica optics with figure quality and surface smoothness refined to enable measurement of these incredibly small perturbations. Their design is an important part of LIGO's story. The black hole coalescence was detected as an upward-sweeping 'chirp' from 35 to 300Hz, which falls in the detectors' mid-frequency range that is plagued by noise from the optics.
[Figure: Left and right images show data from the Hanford and Livingston observatories. (Caltech/MIT/LIGO Laboratory)]
"Most impressive are [the optics'] size combined with surface figure, coating uniformity, monolithic suspensions, and low absorption," says Daniel Sigg, a LIGO lead scientist at Caltech. LIGO's optics system amplifies and splits a laser beam down two 4km-long orthogonal tubes. The two beams build power by resonating between reflective mirrors, or 'test masses,' suspended at either end of each arm. This creates an emitted wavelength of unprecedented precision. When the split beam recombines, any change in one arm's path length results in a fringe pattern at the photodetector. For GW150914, this change was just a few times 10-18 meters.
[Figure: Reducing noise sources at each frequency improves interferometer sensitivity. Green shows actual noise during the initial LIGO science run. Red and blue (Hanford, WA and Livingston, LA) show noise during advanced LIGO's first observation run, during which GW150914 was detected. Advanced LIGO's sensitivity goal (gray) is a tenfold noise reduction from initial LIGO. (Caltech/MIT/LIGO Laboratory)]
But the entire instrument is subject to environmental noise that reduces sensitivity. A noise plot shows the actual strain on the instruments at all frequencies, which must be distinguished from gravity wave signals. The optics themselves contribute to the noise, which most basically includes thermal noise and the quality factor, or 'Q,' of the substrate. "If you ping a wine glass, you want to hear 'ping' and not 'dink'. If it goes 'dink', the resonance line is broad and the entire noise increases. But if you contain all the energy in one frequency, you can filter it out," explains GariLynn Billingsley, LIGO optics manager at Caltech. That's the Q of the mirrors. Further, if the test mass surfaces did not allow identical wavelengths to resonate in both arms, it would result in imperfect cancellation when the beam recombines. And if non-resonating light is lost, so is the ability to reduce laser noise.
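To put that displacement in context, a quick calculation using only the figures quoted above (4 km arms and an arm-length change of a few times 10^-18 m) gives the dimensionless strain the instrument had to resolve:

```latex
% Back-of-the-envelope strain implied by the figures quoted above.
\[
  h \;=\; \frac{\Delta L}{L}
    \;\approx\; \frac{4\times10^{-18}\,\text{m}}{4\times10^{3}\,\text{m}}
    \;=\; 1\times10^{-21}
\]
```

This is consistent with the roughly 10^-21 peak strain generally reported for GW150914.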
Perhaps most problematic, the optics' coatings contribute to noise due to stochastic particle motion. Stringent design standards ameliorate these problems. In 1996, a program invited manufacturers to demonstrate their ability to meet the specifications required by initial LIGO's optics. Australia's Commonwealth Science and Industrial Research Organisation (CSIRO) won the contract. "It was a combination of our ability to generate large surfaces with excellent flatness, combined with very low microroughness," says Chris Walsh, now at the University of Sydney, who supervised the overall CSIRO project. "It requires enormous expertise to develop the polishing process to get the necessary microroughness (0.2-0.4nm RMS) and surface shape simultaneously." Master optician Achim Leistner led the work, with Bob Oreb in charge of metrology. Leistner pioneered the use of a Teflon lap, which provides a very stable surface that matches the desired shape of the optic during polishing and allows for controlled changes. "We built the optics to a specification that was different to anything we'd ever seen before," adds Walsh. Even with high-precision optics and a thermal compensation system that balances the minuscule heating of the mirror's center, initial LIGO was not expected to detect gravity waves. Advanced LIGO, begun in 2010 and completing its first observations when GW150914 was detected, offers a tenfold increase in design sensitivity due to upgrades that address the entire frequency range. "Very simply, we have better seismic isolation at low frequencies; better test masses and suspension at intermediate frequencies; and higher powered lasers at high frequencies," says Michael Landry, a lead scientist at the LIGO-Hanford observatory. At low frequencies, mechanical resonances are well understood. At high frequencies, radiation pressure and laser 'shot' noise dominate. But at intermediate frequencies (60-100 Hz), scattered light and beam jitter are difficult to control. "Our bucket is lowest here. And there are other things we just don't know," adds Landry. "The primary thermal noise, which is the component at intermediate frequency that will ultimately limit us, is the Brownian noise of the coatings." To improve signal-to-noise at intermediate frequencies, advanced LIGO needed larger test masses (340mm diameter). California-based Zygo Extreme Precision Optics won the contract to polish them. "We were chosen based on our ability to achieve very tight surface figure, roughness, radius of curvature, and surface defect specifications simultaneously," says John Kincade, Zygo's Extreme Precision Optics managing director. The test masses required a 1.9km radius of curvature, with figure requirements as stringent as 0.3nm RMS. After super-polishing to extremely high spatial frequency, ion beam figuring fine-tunes the curvature by etching the surface several molecules at a time. This allows reliable shape without compromising on ability to produce micro-roughness over large surfaces.
[Figure: Advanced LIGO input test mass champion data. Zygo achieved figuring accuracy to 0.08nm RMS over the critical 160mm central clear aperture, and sub-nanometer accuracy on the full clear 300mm aperture of many other samples. (Zygo Extreme Precision Optics)]
Dielectric coatings deposited on the high-precision surfaces determine their optical performance.
CSIRO and the University of Lyon Laboratoire des Materiaux Avances shared the contract to apply molecule-thin alternating layers of tantalum and silica via ion-beam sputtering. Katie Green, project leader in CSIRO's optics group, says "the thickness of the individual layers are monitored as they're deposited. Each coating consists of multiple layers of particular thicknesses, with the specific composition of the layers varying depending on how the optic needs to perform in the detector." Additionally, gold coatings around the edges provide thermal shielding and act as an electrostatic drive. LIGO's next observation run is scheduled to begin in September 2016. And after Advanced LIGO reaches its design sensitivity by fine-tuning current systems, further upgrades await in the years 2018-2020 and beyond. "One question is how you reduce the thermal noise of the optics, in particular their coatings. But coating technologies make it hard to get more than a factor of about three beyond Advanced LIGO's noise level," says Landry. One possibility is operating at cyrogenic temperatures. But "fused silica becomes noisy at cold temperatures, and you need a different wavelength laser to do this," according to Billingsley. Another way of increasing the sensitivity at room temperature is to use 40km-arm-length interferometers. Other optics-related systems reduce noise. Advanced LIGO's test masses are suspended on fused silica fibers, creating monolithic suspension that reduces thermal noise and raises the system's resonant frequency compared with initial LIGO. "The Q of that system is higher so an entire band shrinks. That means opening up more space at lower frequencies, where binary black holes are," says Landry. In the 17th century, Galileo pointed a telescope to the sky and pioneered a novel way of observing the universe. Now, LIGO's detection of GW150914 marks another new era of astronomy. As advances in glass lenses enabled Galileo's discoveries, so have state-of-the-art optics made LIGO's discoveries possible. And with astronomy's track record of developing new generations of optical devices, both the astrophysical and precision optics communities are poised for an exciting future.
Walton A.,Commonwealth Science and Industrial Research Organisation |
Gardner J.,Commonwealth Science and Industrial Research Organisation
Local Environment | Year: 2015
This research examined community acceptance of policy instruments that could be used to promote ongoing maintenance of domestic rainwater tank systems. Using an online survey of 533 tank owners in South East Queensland, Australia, the research investigated four sets of factors that influence policy acceptance: features of the policy, judgements of policy fairness and effectiveness, contextual framing, and individual attitudes and motivations towards tank maintenance. An experimental design incorporating choice modelling was employed. Results demonstrated that perceptions of policy fairness and effectiveness are important to acceptance. Policies that include enabling features associate with increased perceptions of effectiveness, and policies that use incentives are linked to increased perceptions of both fairness and effectiveness. Individual attitudes and motivations regarding tank maintenance were significant predictors of policy support. Perceptions of a person's own ability to undertake tank maintenance tasks were negative predictors of policy intervention, suggesting that people who believe they can carry out maintenance themselves may not see the need for a policy that encourages tank maintenance to exist. The findings are discussed in relation to issues of policy design. © 2014, © Crown in the Commonwealth of Australia 2014
Bowerman A.F.,Commonwealth Science and Industrial Research Organisation |
Bowerman A.F.,Australian National University |
Newberry M.,Commonwealth Science and Industrial Research Organisation |
Dielen A.-S.,Commonwealth Science and Industrial Research Organisation |
And 10 more authors.
Plant Biotechnology Journal | Year: 2016
Starch phosphate ester content is known to alter the physicochemical properties of starch, including its susceptibility to degradation. Previous work producing wheat (Triticum aestivum) with down-regulated glucan, water dikinase, the primary gene responsible for addition of phosphate groups to starch, in a grain-specific manner found unexpected phenotypic alteration in grain and growth. Here, we report on further characterization of these lines focussing on mature grain and early growth. We find that coleoptile length has been increased in these transgenic lines independently of grain size increases. No changes in starch degradation rates during germination could be identified, or any major alteration in soluble sugar levels that may explain the coleoptile growth modification. We identify some alteration in hormones in the tissues in question. Mature grain size is examined, as is Hardness Index and starch conformation. We find no evidence that the increased growth of coleoptiles in these lines is connected to starch conformation or degradation or soluble sugar content and suggest these findings provide a novel means of increasing coleoptile growth and early seedling establishment in cereal crop species. © 2016 Society for Experimental Biology, Association of Applied Biologists and John Wiley and Sons Ltd. Source
Lapalikar G.V.,Commonwealth Science and Industrial Research Organisation |
Taylor M.C.,Commonwealth Science and Industrial Research Organisation |
Warden A.C.,Commonwealth Science and Industrial Research Organisation |
Scott C.,Commonwealth Science and Industrial Research Organisation |
And 2 more authors.
PLoS ONE | Year: 2012
Two classes of F 420-dependent reductases (FDR-A and FDR-B) that can reduce aflatoxins and thereby degrade them have previously been isolated from Mycobacterium smegmatis. One class, the FDR-A enzymes, has up to 100 times more activity than the other. F 420 is a cofactor with a low reduction potential that is largely confined to the Actinomycetales and some Archaea and Proteobacteria. We have heterologously expressed ten FDR-A enzymes from diverse Actinomycetales, finding that nine can also use F 420H 2 to reduce aflatoxin. Thus FDR-As may be responsible for the previously observed degradation of aflatoxin in other Actinomycetales. The one FDR-A enzyme that we found not to reduce aflatoxin belonged to a distinct clade (herein denoted FDR-AA), and our subsequent expression and analysis of seven other FDR-AAs from M. smegmatis found that none could reduce aflatoxin. Certain FDR-A and FDR-B enzymes that could reduce aflatoxin also showed activity with coumarin and three furanocoumarins (angelicin, 8-methoxysporalen and imperatorin), but none of the FDR-AAs tested showed any of these activities. The shared feature of the compounds that were substrates was an α,β-unsaturated lactone moiety. This moiety occurs in a wide variety of otherwise recalcitrant xenobiotics and antibiotics, so the FDR-As and FDR-Bs may have evolved to harness the reducing power of F 420 to metabolise such compounds. Mass spectrometry on the products of the FDR-catalyzed reduction of coumarin and the other furanocoumarins shows their spontaneous hydrolysis to multiple products. © 2012 Lapalikar et al. Source | <urn:uuid:0e8e2969-9025-48c5-bd3c-6f6845399d70> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/commonwealth-science-and-industrial-research-organisation-523355/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00202-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.920176 | 3,201 | 2.921875 | 3 |
AMD's acquisition of ATI
The most obvious example of this is adding a GPU (graphics processing unit) to the processor package to help better render video and other multimedia applications. AMD has already announced that it plans to combine CPUs and GPUs on the same piece of silicon in 2009 under a program called Accelerated Processing Units, previously referred to as "Fusion." AMD has another initiative called "Torrenza," which looks to spur the development of co-processors for systems that use AMD's Opteron processors.
Part of the reason for AMD's acquisition of ATI in 2006 was to take advantage of the company's graphics portfolio as it moved toward this type of chip development. In this case, the GPU allows the software's instructional threads to run in parallel, breaking the information down into smaller pieces to process them simultaneously, which provides for high throughput and better performance for various applications without relying on increasing the clock speed to increase performance. The result of all this is what AMD calls a heterogeneous microprocessor that has a combination of GPUs and CPUs working together, which should increase performance while reducing power consumption. It also allows applications to take better advantage of the multicore platform.
"It is simply that hardware with a specific purpose is much more efficient. You wouldn't want to decode video on a CPU. You want to decode that video on a dedicated piece of hardware that is off to the side. At the same time, it achieves the same performance at one-twentieth the power."
AMD is moving toward offering these types of subsystems for its chips - GPUs are just one example - that are mostly geared for desktops. Most data center servers run applications that can currently take advantage of multicore technology, such as Web services and financial processing. By 2009, AMD plans to offer its first platform that will support chips that have up to eight processing cores.
That's not to say that AMD is only focusing on chips for PCs. In November, the company introduced its FireStream 9170 GPU for the HPC (high-performance computer) market. In this case, it's focusing on the HPC market, where Nvidia, also a vendor of GPU technology, has taken an interest.
One reason why | <urn:uuid:558a25e0-5042-433c-9589-5d83b513afe1> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Desktops-and-Notebooks/AMD-Sees-Future-in-Accelerated-Computing/1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00320-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956241 | 445 | 2.5625 | 3 |
One of the problems with smartphone apps is that one has no control over where often sensitive permissions and personal content is stored. While we’re allowed a certain amount of input when it comes to downloading the app and installing it: agree to the permissions or else, we have no control over where or how all the data is stored.
We know that it’s probably in the cloud somewhere, but it could be anywhere, even on the phone itself. And each app developer has its own idea about how to handle the stuff.
That is a problem for security—not the app developers’ but ours. And it doesn’t stop at phones. Anyone know where the password for an IoT oven is located, and how securely? The answer is no and maybe not very.
Here’s a solution, though: Create your own cloud with all your own personal data in it, and then allow the smartphone apps and fitness bands to access it when it needs to pull down or write data.
The app developer should be out of the equation, some computer scientists think. The personal data is controlled by the user with an interest in its security, not the developer who may not care much.
“This is a rethinking of the web infrastructure,” Frank Wang says in a CSAIL press release. Wang is a student at MIT’s CSAIL and is one of the concept’s planners. “Maybe it’s better that one person manages all their data. There’s one type of security and not 10 types of security,” he says.
MIT calls its project Sieve.
The idea is that all of a user’s personal data is encrypted in the cloud. When an app needs to use some of the data, it simply requests it. A decryption key is then sent to the app for the relevant chunks of data. Fall out with the developer, and the user can revoke access. Keys can be re-made at any time.
It’s a simple idea that could improve security. The party who cares about the security controls it.
One slight hiccup is just how to implement the encryption. It’s not as simple as the encryption of a file, transaction, or e-mail, say. In this case each piece of data needs an attribute that only allows decryption if the requester has permissions. That’s an amusing and ironic turning of the tables.
A name and city, but not a Social Security number or street name, could be an example of the kind of bespoke personal information delivery allowed for a given app. The technique is called attribute-based encryption and is not in itself a problem to implement. The trouble arises because it's slow to encrypt and decrypt, CSAIL explains in its press release.
The solution is a kind of lumping of data “under a single attribute,” the release says.
“For instance, a doctor might be interested in data from a patient’s fitness-tracking device but probably not in the details of a single afternoon’s run. The user might choose to group fitness data by month,” it says.
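To picture the workflow, the sketch below models the idea in ordinary Python. It is not MIT's Sieve code: it uses plain symmetric keys from the cryptography package instead of attribute-based encryption, and the class, app, and attribute names are invented for illustration. Each attribute group is encrypted under its own key, an app receives keys only for the attributes the user granted, and revoking a grant rotates the key and re-encrypts the stored records.

```python
# Toy model of a user-controlled personal cloud; NOT MIT's actual Sieve code.
from cryptography.fernet import Fernet

class PersonalCloud:
    def __init__(self):
        self.keys = {}      # attribute -> symmetric key
        self.blobs = {}     # attribute -> list of encrypted records
        self.grants = {}    # app name -> set of granted attributes

    def store(self, attribute, record: bytes):
        key = self.keys.setdefault(attribute, Fernet.generate_key())
        self.blobs.setdefault(attribute, []).append(Fernet(key).encrypt(record))

    def grant(self, app, attribute):
        self.grants.setdefault(app, set()).add(attribute)

    def revoke(self, app, attribute):
        self.grants.get(app, set()).discard(attribute)
        if attribute not in self.keys:
            return
        # Rotate the key and re-encrypt so copies of the old key go stale.
        old = Fernet(self.keys[attribute])
        new_key = Fernet.generate_key()
        self.blobs[attribute] = [Fernet(new_key).encrypt(old.decrypt(token))
                                 for token in self.blobs.get(attribute, [])]
        self.keys[attribute] = new_key

    def request(self, app, attribute):
        """Hand back (key, ciphertexts) only if the app holds a grant."""
        if attribute not in self.grants.get(app, set()):
            raise PermissionError(f"{app} has no grant for {attribute}")
        return self.keys[attribute], self.blobs.get(attribute, [])

cloud = PersonalCloud()
cloud.store("fitness-2016-01", b"monthly step summary")
cloud.grant("doctor-app", "fitness-2016-01")
key, blobs = cloud.request("doctor-app", "fitness-2016-01")
print([Fernet(key).decrypt(token) for token in blobs])
```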
All very well and good, one might think of the plan. But why would the app developers go for it?
It would be partly to differentiate themselves from others as being more security-au-fait, the researchers think. And also, the end user might decide to share certain bits of previously unobtainable, unrelated data—data that he or she now owns. Throwing the dog a bone from time to time, as it were.
| <urn:uuid:a1ce3fd3-b672-4d93-ad0a-2a9e19055c8e> | CC-MAIN-2017-04 | http://www.networkworld.com/article/3047821/security/user-controlled-private-clouds-could-help-with-security-think-scientists.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00047-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943705 | 777 | 2.578125 | 3 |
While my research is primarily concerned with drive-by-download attacks, I thought I try to summarize other web-based client-side attacks that are out there, many of which are being researched, neglected and would provide for some cutting edge research opportunities. I will categorize the attacks based on their impact on confidentiality, availability, and integrity.
Attacks described in this section all are concerned with accessing some confidential information on the client side. I look at cookie, history, file, and clipboard stealing attacks as well as attacks that are able to obtain information about protected internal network topology and phishing.
Cookies are pieces of data that are sent by the server to be stored on the client for retrieval at a later time. Cookies are primarily used to allow for tracking of the client across multiple request/response cycles. Cookies, according to the same-origin security policy, can only be retrieved by the server that sets them. As a result, web servers are not able to read cookies from other domains. Cookies themselves are not likely to represent an attack vector on the web client. However, they are a high-value target for attackers, as a cookie, with its purpose of identifying the client, would help with attempts to hijack a session and impersonate a client. Web mail clients, for instance, utilize cookies to identify a user at a later time, so the user does not have to provide their credentials each time they would like to access their mail. If an attacker can access the cookie, unauthorized access to the mail account could be obtained, as demonstrated recently by Perry at Defcon and by Graham with SideJacking with Hamster.
The last attack presented that impacts confidentiality is a social engineering attack called phishing. Social engineering attacks aim at exploiting of the natural human tendency to trust. In a phishing attack, the trust in a web site is abused to fraudulently acquire personal confidential data, such as credentials and bank account information (KYE – Phishing). These web-based client-side attacks present the user with a fraudulent web site, often promoted via SPAM Email, which appear to be from a trusted entity, such as a bank. The web site, however, is, in fact, in the control of the attacker and once the user provides personal information to the web site, the attacker will have obtained this confidential information.
Next, I look at attacks that impact availability. These attacks are concerned with partially or fully consuming the client resources, which reduces or leads to a complete failure of a service the client normally performs. The attacks reviewed are simple crashes, popup floods, browser hijacking, network floods, Web SPAM/junk pages and web pages that commit click fraud.
A denial-of-service is an attack that results in partial or complete consumption of resources that negatively impact a service. In the setting of a web-based client-side attack, a web page could cause the lock-up or crash of the browser or even the operating system or one if its components. Many browser vulnerabilities exist that permit a malicious web server to launch an availability impacting attack.
While the lock-ups and crashes often occur without malicious intent, there are several availability impacting attacks for which malicious intent undoubtedly exist. Pop-up floods are used in advertisement attacks (New Ad Attacks). These attacks lead to the display of many unsolicited pop-up windows. While these pop-ups load, network and computing resources are consumed, significantly reducing the availability of the client. This could even lead to browser hijacking, in which the page cannot be left and/or pop-up cannot be closed.
Since web browsers are capable of loading resources from remote network locations, for instance images, a malicious web page could conceptually flood the network with traffic if a browser doesn't manage its resources carefully. For instance, a web page that contains a million images from different domains could generate a million DNS requests, potentially overwhelming the local DNS server. A web page that contains large data chunks could potentially clog the network. If browsers are pooled to perform flooding of a network, they are referred to as Puppetnets (see Lam's paper on Puppetnets).
Web SPAM/ junk pages are specific malicious web pages that abuse search engine functionality. A search engine is tasked with providing the user with relevant web pages for a given user queries. Web spam/ junk pages abuse the algorithm of the search engine to lead to a high ranking despite the fact that the content of the web pages are not relevant to the user. As such, these pages abuse the client’s resources by displaying non-relevant content. On top of that, these and other pages might be involved in click fraud scams in which a malicious web page could fraudulently simulate clicking of advertisements by the user.
Next, attacks that impact integrity. In the context of web-based client-side attacks, a loss of integrity usually translates into the ability of an attacker to execute arbitrary code on the client machine. Cross site/domain/zone scripting, drive-by-pharming, hosting of malware, and drive-by-download attacks are described.
Cross site/domain/zone scripting is a vulnerability of web pages that allows execution of injected code in the security context of the page when the user visits it. The injected code could be used to steal information, but it could also permit execution of arbitrary code on the client if, for instance, that web page is a trusted page in the context of the web browser.
Drive-by-pharming is a web-based client-side attack that modifies the DNS settings of a user's router by merely having the user visit a malicious web page. These attacks do not impact the integrity of the client machine directly, but rather impact the integrity of network components the client relies on.
Hosting of malware is another type of attack that impacts the integrity of the client. In this attack scenario, the malicious web page hosts malware and uses social engineering to entice the user to download and execute the malware. An example of such a technique is a video codec that contains malware, which is presented as a requirement to view pornographic material (e.g. Fake Celebrity Video Sites Serving Malware). Once the user downloads and executes the malware, the malware has complete control of the machine. Attacks that do not require this user interaction, but rather are capable of pushing and executing malware without the user's notice or consent, are drive-by-download attacks. These attacks are usually triggered by having a user merely visit a web page. | <urn:uuid:a7f3785c-ca5e-4431-b812-f56baacb67e3> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2008/09/09/types-of-web-based-client-side-attacks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00533-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931194 | 1,324 | 2.65625 | 3 |
- The Dish Reflector & Radio Electronics
- The Gateway IDU (In Door Unit)
- The Coax Cables that connect the radio equipment to the IDU
- Laptop Computer - The installers laptop is used for system activation. It will be replaced by the customers LAN or computer later on.
- The Client Router (if present). Note that a Ground Control installation is to provide connectivity to one connected computer directly from the satellite gateway. We do no support clients local networking issues. If the customer wishes to have you configure their network, that is a separate contract that they will negotiate with you directly.
The satellite dish itself is simply a reflector that redirects the satellite signal to focus on the Feed Horn. If you were to break open the white plastic portion of the dish, you would see that inside is a wire mesh that is the part that actually reflects the signal.
The LNB (Low Noise Block) RECEIVES the satellite signal from space.
The Transmitter (or BUC " Block Up Converter") TRANSMITS to the satellite in orbit.
The cone shaped feed horn is designed to cut down ambient signal noise. The Wave Guide directs horizontal or vertical polarity signals to and from both the LNB and BUC. The polarity of the signal is determined by the shape of the waveguide.
Looking at the feed horn, you can see the rings that help cut down on ambient signals. Only signals from the satellite dish should reach the LNB.
IMPORTANT NOTE: Visually check that the sealed feed horn is completely dry. If not, it will have to be replaced (or resealed if possible). Water droplets that form on the feed horn will refract and redirect radio waves so that the system becomes unusable.
Taken together, the Reflector, LNB, BUC and Waveguide are called the Antenna, because the satellite communication uses standard radio frequencies to communicate between dish and satellite.
The LOOK ANGLE is the actual direction of the orbiting satellite measured up from the horizon. On the dish above, the look angle cannot be read directly from either the perpendicular to the dish plane or the feed horn direction. The 1.2 meter Galileo satellite dish has a look angle offset of 17.3 degrees from the dish's perpendicular, or a 34.6 degree offset from the LNB (17.3 x 2 = 34.6 degrees).
The look angle for your installation will be on the installer sheet. You may use an inclinometer to help point the dish, keeping in mind the 34.6 degree offset.
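If you want to sanity-check the pointing figures on the installer sheet, the textbook geostationary look-angle formulas are easy to compute. The sketch below assumes a spherical Earth and ignores site altitude; the Los Angeles coordinates and the 97 degrees West orbital slot are made-up examples, and the last line simply subtracts the 17.3 degree offset described above.

```python
# Standard geostationary look-angle formulas (spherical Earth, no altitude).
# Site and satellite values below are made-up examples; always use the
# figures on your installer sheet.
import math

def look_angles(site_lat_deg, site_lon_deg, sat_lon_deg):
    """Azimuth (degrees true) and elevation (degrees) to a geostationary satellite."""
    lat = math.radians(site_lat_deg)
    delta = math.radians(sat_lon_deg - site_lon_deg)   # satellite minus site longitude
    r_ratio = 6378.0 / 42164.0                         # Earth radius / orbit radius

    cos_gamma = math.cos(lat) * math.cos(delta)        # cosine of the central angle
    elevation = math.degrees(math.atan2(cos_gamma - r_ratio,
                                        math.sqrt(1.0 - cos_gamma ** 2)))
    azimuth = math.degrees(math.atan2(math.sin(delta),
                                      -math.sin(lat) * math.cos(delta))) % 360.0
    return azimuth, elevation

az, el = look_angles(34.05, -118.25, -97.0)   # example: Los Angeles to 97 degrees West
print(f"azimuth {az:.1f} deg true, elevation {el:.1f} deg")
print(f"elevation minus the 17.3 deg dish offset: {el - 17.3:.1f} deg")
```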
INTERESTING NOTE - You may mount a satellite dish upside down as shown below. This technique is useful to hide the reflector from view on top of a building, although the radio will itself be seen.
If you do mount the dish upside down, make sure that no water can pool in the dish if it rains.
The IDU "In Door Unit" is the satellite gateway for which the client LAN or your laptop are connected.
Pictured below is an iDirect gateway:
There are only 4 cables that connect to the back of the gateway. Power, Ethernet, Transmit and Receive. Note the Transmitter cable is marked RED, and the Receive Cable is Blue. Make sure to color code the cables at BOTH ENDS to avoid installation confusion.
Picture below is a Galileo gateway:
Both iDirect and Galileo systems uses a high-grade Belden coax cable, and NOT the standard RG-6 that is used in the satellite TV industry. Belden 1694A coax comes shipped on a 300' spool. Knowing this, you should keep your dish within a maximum cable run of 150 feet to the IDU.
Every Installation will require a Windows 2000, XP, or 7 compatible laptop with a Cross-Over Ethernet cable to commission the satellite gateway. Make sure that you prep your laptop with the appropriate software prior to going on any installation - specifically, the iSite software for the iDirect systems, and the CPE Wizard software for a Galileo system.
Each laptop should have:
Internet Explorer, or any other browser to check to make sure the satellite is online and to access the gateway via it's IP address.
The Training CD-ROM: Easy Access to every step of the process of doing an installation. This may be your only guide in the field.
iSite - The commissioning software for the iDirect gateway.
CPE Installation Wizard - The commissioning software for the Galileo gateway
Satmaster QuickAim - (Optional) With a GPS handheld device, you can find the pointing parameters for any satellite from any location on earth. A very handy application for all satellite installers. The Superbuddy satellite pointing tool has an integrated sat-finder so the Satmaster QuickAim is not required.
After you've commissioned the satellite gateway, you may connect the working connection from your laptop to the customer router or LAN. If the customer does not have a router, they will need to purchase one from any electronics store. We recommend the Linksys WRT54G router, however, any router will work.
Remember... the installer (or Ground Control) is not responsible for configuring the customers network as part of a VSAT installation. However, you may negotiate a contract with the customer directly if they wish to employ your services to configure their network. | <urn:uuid:8014d5e6-ea22-4922-86ed-663ea58209ef> | CC-MAIN-2017-04 | http://www.groundcontrol.com/galileo/ch1-sat-equip.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281001.53/warc/CC-MAIN-20170116095121-00441-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.902536 | 1,107 | 2.625 | 3 |
Denial of Service (DoS) attacks are commonly used to disturb the normal operation of applications. DoS attacks take advantage of a weakness in the system or application and cause it to crash or stop responding. Although this attack does not provide the attacker with any escalated system access, it disturbs the operation of the site.
DoS attacks are explicit attacks that prevent legitimate users from accessing a service. In most cases, a DoS attack floods the victim server with network traffic. This can be achieved by either overloading the ability of the victim server to handle incoming traffic or by sending requests that cause the victim server to behave unpredictably, possibly hanging or crashing the server.
To illustrate a simple denial of service attack, imagine an attacker who creates a program that calls a pizza store. If the program repeats this task continuously, it prevents legitimate customers from ordering pizza because the telephone line is busy. This example illustrates a classic resource consumption technique. Resource consumption is a familiar technique of performing DoS attacks on Web applications. With this technique, the attacker tries to identify operations that are implemented in a poor manner and consume relatively vast resources. The attacker repeats these operations until the server is no longer capable of serving other users. The affected resources can be the server's bandwidth, memory, disk space or CPU time.
For example, consider a Web application that contains forums with millions of messages. The application contains a search engine that enables sophisticated regular expression searches. An attacker can easily create complicated regular expressions that consume a lot of CPU each time a search is initiated. The attacker then writes a script to launch this request over and over again until the application consumes 100% of the Web server CPU. As a result, legitimate users will not be able to access services in the server or will receive very poor performance.
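The expensive-regular-expression case is easy to reproduce harmlessly on a local machine. The snippet below times a classic catastrophically backtracking pattern against strings that almost match; the pattern and inputs are illustrative only and should only ever be run against local test data.

```python
# Harmless local demonstration of the "expensive regular expression" idea:
# a pattern with nested quantifiers backtracks catastrophically on input
# that almost, but not quite, matches.
import re
import time

evil_pattern = re.compile(r"^(a+)+$")

for n in (20, 22, 24, 26):
    text = "a" * n + "!"              # the trailing "!" forces a failed match
    start = time.perf_counter()
    evil_pattern.match(text)
    print(f"{n} characters: {time.perf_counter() - start:.2f} s")

# Each additional character roughly doubles the running time, which is why a
# server that evaluates attacker-chosen patterns can be pinned at 100% CPU.
```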
Another common type of DoS attacks is using Buffer Overflow, i.e. simply sending more data than an application can handle (see Buffer Overflow). For example, sending email messages that have attachments with 256-character file names to Netscape and Microsoft mail application will cause the servers to crash. | <urn:uuid:7116f666-8942-497d-9298-0f9bf3dcb623> | CC-MAIN-2017-04 | https://www.imperva.com/Resources/Glossary?term=denial_of_service_dos | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283475.86/warc/CC-MAIN-20170116095123-00523-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.898382 | 422 | 3.625 | 4 |
The now-infamous oil leak resulting from an April 20 oil rig explosion had, as of May 17, spewed an estimated 5.7 million gallons of oil into the Gulf of Mexico, according to estimates from a PBS NewsHour widget. With 210,000 gallons flowing into the ocean each day, myriad technologies have been deployed to not only stop the leak, but also to track its devastation and cleanup. The New York Times, for example, has created an interactive map showing where the oil has drifted each day, pulling data from the National Oceanic and Atmospheric Administration and U.S. Coast Guard.
Crowdsourcing - outsourcing tasks to a larger group through an open call - could allow the federal government to get another look at the spill's impact. Using online submissions, texts, tweets and e-mails from those experiencing the spill's effects, the Louisiana Bucket Brigade (LABB), a New Orleans-based environmental health and justice nonprofit, is collecting and posting incident reports on its Oil Spill Crisis Map.
The Oil Spill Crisis Map is based on Ushahidi open source software and produced by students at Tulane University, in conjunction with LABB and Radical Designs. Ushahidi, pronounced "ooh-sha-hee-dee," was initially developed to map incidents of violence and peace efforts throughout Kenya after the post-election fallout in early 2008. LABB already was coding Ushahidi for reporting environmental hazards, and students in Tulane Professor Nathan Morrow's GIS classes helped modify the open source application to track the oil spill.
Morrow said what initially prompted Anne Rolfes, founding director of LABB, to implement the map was that in Louisiana, although citizens can report vague environmental hazards to the state's Department of Environmental Quality (DEQ) online 24 hours a day, 7 days a week, the DEQ only accepts detailed reports via phone from 9 a.m. to 3 p.m., Monday through Friday. "She works with all these communities that want better access to reporting accidents and chemical spills," Morrow said. "And that's how the idea came about - to give these citizens a little more voice, so they could write or text in any time they see an accident or smell a bad odor."
Rolfes added that although there was no restriction on frequency, DEQ's response in general is terrible. "This is one [way] we were going to take matters into our own hands," she said.
| <urn:uuid:230f133f-1d32-4b4a-9bb2-0b13086d40fd> | CC-MAIN-2017-04 | http://www.govtech.com/e-government/Technologies-Track-Gulf-of-Mexico-Oil.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279379.41/warc/CC-MAIN-20170116095119-00341-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.96501 | 511 | 3 | 3 |
A team led by Harvard computer scientists, including two undergraduate students, has developed a new tool that could lead to increased security and enhanced performance for commonly used web and mobile applications.
Called RockSalt, the clever bit of code can verify that programs shipped as native machine code comply with a particular security policy.
Native code runs fast, but its use, especially in an online environment, opens the door to hackers who can exploit vulnerabilities and readily gain access to other parts of a computer or device. An initial solution to this problem was offered over a decade ago by computer scientists at the University of California, Berkeley, who developed software fault isolation (SFI).
SFI forces native code to “behave” by rewriting machine code to limit itself to functions that fall within particular parameters. This “sandbox process” sets up a contained environment for running native code. A separate “checker” program can then ensure that the executable code adheres to regulations before running the program.
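RockSalt itself reasons about real x86 machine code, so the Python sketch below is only a conceptual illustration (the instruction format, addresses, and sandbox bounds are invented). At its core, a sandbox-policy checker answers a single question about untrusted code before it runs:

SANDBOX_START = 0x1000
SANDBOX_END = 0x2000        # hypothetical bounds of the sandboxed region

def check(instructions):
    # Reject the program unless every memory access and jump stays inside the sandbox.
    for op, address in instructions:
        if op in ("load", "store", "jump") and not (SANDBOX_START <= address < SANDBOX_END):
            return False    # checker says "no": this code could escape the sandbox
    return True             # checker says "yes": safe to run at native speed

print(check([("load", 0x1234), ("jump", 0x1400)]))   # True
print(check([("store", 0x9000)]))                    # False

RockSalt's contribution is a machine-checked proof, written in Coq, that a "yes" answer from its checker really does imply the sandbox policy holds.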
While considered a major breakthrough, the solution was limited to devices using RISC chips, a processor more common in research than in consumer computing. In 2006, Morrisett developed a way to implement SFI on the more popular CISC-based chips, like the Intel x86 processor. The technique was adopted widely. Google modified the routine for Google Chrome, eventually developing it into Google Native Client (or “NaCl”).
When bugs and vulnerabilities were found in the checker for NaCl, Google sent out a call to arms. Morrisett once again took on the challenge, turning the problem into an opportunity for his students. The result was RockSalt, an improvement over NaCl, built using Coq, a proof development system.
“We built a simple but incredibly powerful system for proving a hypothesis—so powerful that it’s likely to be overlooked. We want to prove that if the checker says ‘yes,’ the code will indeed respect the sandbox security policy,” says undergraduate student Joseph Tassarotti, who built and tested a model of the execution of x86 instructions. “We wanted to get a guarantee that there are no bugs in the checker, so we set out to construct a rigorous, machine-checked proof that the checker is correct.”
Even more impressively, RockSalt comprises a mere 80 lines of code, as compared to the 600 lines of the original Google native code checker. The new checker is also faster, and, to date, no vulnerabilities have been uncovered.
“The biggest benefit may be that users can have more peace of mind that a piece of software works as they want it to,” says Morrisett. “For users, the impact of such a tool is slightly more tangible; it allows users to safely run, for example, games, in a web browser without the painfully slow speeds that translated code traditionally provides.”
Previous efforts to develop a robust, error-free checker have resulted in some success, but RockSalt has the potential to be scaled to software widely used by the general public. The researchers expect that their tool might end up being adopted and integrated into future versions of common web browsers. Morrisett and his team also have plans to adapt the tool for use in a broader variety of processors. | <urn:uuid:4499e9d8-60b7-47eb-a535-d744709ce12d> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2012/07/25/scientists-develop-tool-for-improving-app-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00487-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941837 | 682 | 3.421875 | 3 |
Circuit-level firewalls are fine, but if you want to make your network more secure they will not be enough on their own. A better line of defense is a newer kind of firewall that performs deeper packet analysis: the application layer firewall. Application layer firewalls are also called application gateways or proxy firewalls. These firewalls filter traffic at OSI layers 3, 4, 5, and 7.
Application layer firewalls may have proxy servers or specialized application software added. The role of a proxy service is to manage traffic through the firewall for specific services such as FTP.
Proxy services are different for every protocol they forward. They can also improve access control, data validation, and logging of data transfers. Proxy firewalls sit in the middle between networks and decide which communications are approved to proceed toward the other network.
In a configuration with proxy firewalls, there is no direct connection between the outside and inside networks. For a LAN connected to the Internet, the proxy server provides the only IP address visible on the Internet. When a user or client submits an application layer request, the request goes to the proxy server first. The proxy inspects every request and filters, or even modifies, some of them. The proxy server also copies packets and rewrites the source address to hide the internal address before it forwards the packet to the destination address.
Why Application Layer Firewalls?
To protect the private network we use a proxy server that controls and monitors outbound traffic. All access to the network is managed by the proxy server, which establishes the session state and performs user authentication.
Application layer firewalls are responsible for filtering at layers 3, 4, 5, and 7. Because they analyze application layer headers, most of the control and filtering is actually performed in software.
By filtering above the network layer, a firewall is able to control much more of the information carried in the data. Depending on which application layer firewall you use, application support can vary widely. Some application layer firewalls support a limited number of applications, while others are made to support only a single application. Typically, application layer firewalls are built to control applications such as e-mail, FTP, Usenet news, web services, DNS, Telnet and so on.
Advantages offered by Application Layer Firewalls
Working with Application Layer Firewalls
Application level proxy firewalls have the job of allowing or denying connections from inside the network out to the Internet, and also of permitting or denying communications that are sourced from the Internet and directed at our inside local network. They operate at the application layer for each type of service they want to allow (HTTP, for example). Sometimes the proxy server will block all incoming connections from the Internet to the local network and allow only local users to go out to the Internet. In that case, the only traffic allowed to pass from the Internet into the network is traffic that is a reply to a local user's query; only connections initiated from inside the network are allowed back in.
In a proxy configuration, the application layer firewall normally has two network interfaces. One is used for client connections, and the other is used to reach websites on the Internet. By standing in the middle between the internal and external networks, the application proxy separates trusted from untrusted network connections either physically or logically.
Let’s examine how this works:
- The proxy server takes a request from inside the network to access some webpage.
- The proxy server checks the user against the rules applied to it.
- It uses the Internet connection to load the requested website. In doing so it forwards only Layer 3 and Layer 4 packets that match the firewall rules.
- When returning content to the requesting client, the proxy server forwards only the Layer 5 and Layer 7 traffic and content that the server allows.
Application layer firewalls are built to enable the highest level of filtering for a particular protocol. A proxy server slows down the network because of the significant amount of information it must analyze.
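To make that request/response flow concrete, here is a deliberately simplified sketch of a forwarding proxy with a single allow-list rule. It is illustrative only: the allowed hostname, the listening port, and the rule itself are invented, and a real proxy firewall handles far more of the HTTP protocol (and many other protocols) than this.

import socket, threading

ALLOWED_HOSTS = {"example.com"}          # hypothetical rule: only this destination is permitted

def handle(client):
    request = client.recv(4096)                          # read the client's HTTP request
    host = ""
    for line in request.decode(errors="ignore").split("\r\n"):
        if line.lower().startswith("host:"):
            host = line.split(":", 1)[1].strip()
    if host not in ALLOWED_HOSTS:                        # rule check: deny anything not on the list
        client.sendall(b"HTTP/1.1 403 Forbidden\r\n\r\n")
        client.close()
        return
    upstream = socket.create_connection((host, 80))      # the proxy opens its own outbound connection
    upstream.sendall(request)                            # forward the request on the client's behalf
    while True:
        chunk = upstream.recv(4096)
        if not chunk:
            break
        client.sendall(chunk)                            # relay the response back to the client
    upstream.close()
    client.close()

server = socket.socket()
server.bind(("0.0.0.0", 8080))                           # clients point their browser at this port
server.listen(5)
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle, args=(conn,)).start()

Note that the client never talks to the destination directly: the only outbound address the Internet ever sees belongs to the proxy itself.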
Application Firewall Limitations
There is a big issue with application firewalls, and that is throughput limitation. They can also fill up a lot of disk space by writing many logs.
There are two solutions:
- Use a Context Transfer Protocol (CXTP)
- Monitor only particular applications
By using CXTP, you perform authentication and authorization and then do not analyze the rest of the connection. This improves performance, but without monitoring it is impossible to raise alerts about new attacks. In the second solution, you limit the application layer firewall to processing only particular things on the network, such as e-mail or Telnet.
You can do even more by processing only connections to specific internal resources like servers.
The downside is a security weakness: an attacker who takes ownership of a non-secured device can from there attack every machine on the local network. Another issue is that application layer firewalls do not support every application that exists today. For unsupported applications there is no way to filter traffic at the application layer.
Some application layer firewalls cannot function without client-side software installed. They use this software for the authentication process and other data gathering. This can limit scalability if you need to install the software on many user computers, and it may create management problems if support for thousands of clients is required.
Is the digital divide growing or shrinking in the United States? A new Pew Research Center (PRC) report found that both the digital divide and digital readiness are closely tied to an individual’s socioeconomic status, their race and ethnicity, and their level of access to home broadband and smartphones.
PRC’s new report explores the attitudes and behaviors that underpin people’s preparedness and comfort in using digital tools for learning. For the report, PRC surveyed 2,752 adults, 18 years of age or older, living in all 50 U.S. states and the District of Columbia. Nine hundred and sixty-three respondents were interviewed on a landline telephone, and 1,789 were interviewed on a cellphone, including 1,059 who had no landline telephone.
In the report, PRC attempted to assess respondents’ confidence in using computers, their ability to work with new technology, how they use digital tools for learning, their familiarity with ed tech terms, as well as their capacity to determine the trustworthiness of online information. The report notes that PRC’s findings solely cover people’s learning activities in the digital space, but don’t address the full range of what a person can do online, nor their readiness to perform such actions.
While the phrase “digital divide” has been tossed around since the 1990s, digital readiness is a more recent phrase. Digital divide refers to the difference in access to technology that exists between the haves and have nots. However, PRC explains that there has recently been a pivot to understand people’s preparedness, such as their digital skills and their trust in technology, which may influence their use of digital tools, as separate and apart from their access to the tools. To understand digital readiness further, PRC provides an operational definition in its report, saying that an operational definition must include:
- Digital skills, that is, the skills necessary to initiate an online session, surf the Internet and share content online.
- Trust, that is, people’s beliefs about their capacity to determine the trustworthiness of information online and safeguard personal information.
- These two factors express themselves in the third dimension of digital readiness, namely use–the degree to which people use digital tools in the course of carrying out online tasks.
PRC found that there were several distinct readiness groups among respondents–meaning most respondents were easily clustered along a spectrum of digital readiness.
To assess a respondent’s digital readiness, PRC asked questions about confidence in using a smartphone, whether someone needs help setting up a new digital device, familiarity with ed tech terms (including Common Core Standards, Khan Academy, and distance learning), whether they have trouble determining if an online source is trustworthy, and if they’ve used technology for learning or taken an online course recently. PRC found that answers again followed the cluster pattern they initially found.
Researchers also found that greater digital readiness translates to a higher level of use of technology in learning. The tables below, according to PRC, show results for comparing the use of digital tools in learning to the main components of digital readiness: people’s familiarity with “ed tech” terms, whether people need help in setting up new gadgets, whether they have a hard time determining what information online is trustworthy, and their confidence with computers and the Internet. In each table, the differences reported are significant even when controlling for socioeconomic factors such as age, income, or educational attainment.
While the factors used to measure digital readiness clearly play a role in whether someone uses the Internet in personal learning, other demographic characteristics and access to technology also play a role. However, PRC’s new report challenges the previous notion that access to technology was all that was standing in the way of someone adopting a more digital lifestyle. | <urn:uuid:94e26461-2b5b-4e90-9f7d-d1859b593949> | CC-MAIN-2017-04 | https://www.meritalk.com/articles/demographics-affect-which-side-of-the-digital-divide-youre-on/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00147-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947896 | 785 | 3.5 | 4 |
Definition: Find a path through a weighted graph that starts and ends at the same vertex, includes every other vertex exactly once, and minimizes the total cost of edges.
Also known as TSP.
See also bottleneck traveling salesman, Hamiltonian cycle, optimization problem, Christofides algorithm, similar problems: all pairs shortest path, minimum spanning tree, vehicle routing problem.
Note: Less formally, find a path for a salesman to visit every listed city at the lowest total cost.
The path described above is always a Hamiltonian cycle, or tour; however, a Hamiltonian cycle need not be optimal. The problem is an optimization problem, that is, to find the shortest tour. The corresponding decision problem asks if there is a tour with a cost less than some given amount. See [CLR90, page 969].
If the triangle inequality does not hold, that is, d_ik > d_ij + d_jk for some i, j, k, there is no polynomial time algorithm that guarantees a near-optimal result (unless P=NP).
If the triangle inequality holds, you can quickly get a near-optimal solution by finding the minimum spanning tree. Convert the tree to a path by traversing the tree, say by depth-first search, but go directly to the next unvisited node, rather than repeating nodes. Christofides algorithm starts with a minimum spanning tree, but is more careful about converting the tree to a path with results closer to optimal.
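Here is a rough sketch of that tree-based shortcut in Python, using made-up points in the plane (Euclidean distances always satisfy the triangle inequality). It is only an illustration of the idea, not production code.

def mst_tour(points):
    # Prim's algorithm: grow a minimum spanning tree over the complete graph of points.
    n = len(points)
    dist = lambda a, b: ((points[a][0] - points[b][0]) ** 2 + (points[a][1] - points[b][1]) ** 2) ** 0.5
    in_tree = {0}
    children = {i: [] for i in range(n)}
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: dist(*e))
        children[u].append(v)
        in_tree.add(v)
    # A preorder walk of the tree, going straight to the next unvisited vertex,
    # yields a tour at most twice the optimal length under the triangle inequality.
    tour, stack = [], [0]
    while stack:
        node = stack.pop()
        tour.append(node)
        stack.extend(reversed(children[node]))
    return tour + [0]

print(mst_tour([(0, 0), (0, 2), (3, 1), (1, 4)]))   # e.g. [0, 1, 3, 2, 0]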
If you have suggestions, corrections, or comments, please get in touch with Paul Black.
Entry modified 2 September 2014.
Cite this as:
Paul E. Black, "traveling salesman", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 2 September 2014. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/travelingSalesman.html | <urn:uuid:80e28435-ab42-46d1-b023-9ba8d43664fa> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/travelingSalesman.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00451-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.892919 | 431 | 3.140625 | 3 |
The Energy Department found “no detectable radioactive contamination” in a preliminary test this weekend of the country’s only underground nuclear waste storage site, which experienced a leak on Feb. 14 following a fire on Feb. 5.
But the probe did not include the locations most likely to contain radioactive leakage, according to a joint news release from Energy’s Carlsbad, N.M., Field Office and Nuclear Waste Partnership LLC, which operates the waste site.
“These results were expected because the shafts that were sampled were not in the air flow path coming from the area where the radiation release originated,” the release said.
The Waste Isolation Pilot Plant, located 26 miles southeast of Carlsbad, stores plutonium-contaminated waste in 16 miles of salt caverns mined 2,150 feet below the surface. The joint release said “radiological and air quality instruments were lowered down the salt handling and the air intake shafts, to check for airborne radioactivity and to determine air quality. The preliminary findings indicate no detectable radioactive contamination in the air or on the equipment lowered and returned to the surface. Air quality results were also normal.”
Personnel may be sent as soon as the end of this week to check the stability of the site and attempt to identify the source of the radioactive leak, the joint release said.
The underground site may still contain radioactive leakage, according to Don Hancock, director of the Nuclear Waste Safety Program at the Southwest Research and Information Center, a nuclear watchdog group in Albuquerque.
“Unless the ventilation system had totally failed, there should be no radiation detected where the probes were lowered (which is why DoE, correctly, chose those two shafts to send probes). Those are air intake shafts bringing in air from the outside,” he said in email comments to Nextgov.
“That does not mean that there still isn't radioactivity in the air in the underground at the point of the release, through the underground tunnels and the exhaust shaft where the air travels,” Hancock said. “In fact, below ground monitors continue to show low levels of radioactivity.
“Moreover, that 3,000 feet or so of tunnels from the presumed contamination point (room 7, panel 7) to the exhaust shaft must be assumed to have radioactivity in the salt and any equipment.”
Hancock said WIPP currently stores 3.2 million cubic feet of plutonium-contaminated waste generated during the manufacture of nuclear weapons by the Defense Department and has cost $6 billion to develop and operate since it first opened in 1999.
The Carlsbad Field Office and Nuclear Waste Partnership said the tests last weekend were the “first phase of an underground recovery process that will lead to the resumption of nuclear waste disposal operations at WIPP.”
Joe Franco, manager of the Carlsbad Field Office, described the test results as "a positive way to begin" the recovery process. "In order to get to this point, a lot of work has taken place and it involved a lot of time-consuming activities,” he said. “But the recovery process is underway. We are receiving information that will get us to the next steps in the process.”
Tammy Reynolds, WIPP Recovery Process Manager for Nuclear Waste Partnership said the air sampling process this weekend “is critical in helping determine the proper personal protective equipment needed for our personnel entries” into the underground chambers.
Once the inspection team checks for levels of air and surface contamination between the air intake and salt handling shafts, Energy and its contractor said, the team will make its way down to the area of the repository where operations were being conducted prior to the radiological leak, isolate that source and develop a plan to remove the contamination.
As of Saturday, 17 WIPP employees showed “background” radiological contamination in fecal samples, but there has been no detectable contamination in urine samples, which indicates contamination was not inhaled into the lungs, Energy and Nuclear Waste Partnership said. “The levels of exposure are extremely low, and none of the employees is expected to experience any health effects from the exposures,” they said.
Hancock said the statement on exposure is “a very disturbing part of the release -- because it is false. Urine samples detect SOLUBLE plutonium [emphasis included]. Insoluble plutonium and americium in the lungs would not be detected by urine samples. Fecal samples would detect insoluble plutonium, some of which could remain in the lungs.”
WIPP stores nuclear waste contained in 55-, 85- and 100-gallon drums transported to the facility by truck inside beehive shaped stainless steel storage casks that can resist fires at temperatures of 1,475 degrees centigrade and immersion in 50 feet of water.
The waste stored at WIPP comes from sites in Idaho, Georgia and Los Alamos, N.M. Last week Sen. Tom Udall met with Energy Secretary Ernest Moniz and underscored the urgency of having a plan to store waste from Los Alamos before the beginning of the fire season.
Los Alamos stores 5,600 containers of nuclear waste in dome-like structures made of fabric that “has exceeded its expected in-service life of 15-20 years,” the Energy inspector general reported last June. Forest fires in 2011 triggered an evacuation of the lab and the city of Los Alamos.
This story has been updated with detail and background.
Other groups have demonstrated atomic-scale logic circuits, but the group of four scientists at IBM's Almaden Research Center in California, USA, are the first to integrate multiple logic circuits, said Andreas Heinrich, staff scientist at IBM and one of the researchers involved in the two-year project.
The result is a functioning circuit that is 260,000 times smaller than integrated circuits developed on 0.13 micron process technology, the current leading standard for chip production, Heinrich said. It consumes 1 electron-volt of energy, a tiny sliver of the power used by current PC processors. "One hundred thousand times less energy is consumed in this circuit than in modern processors," Heinrich said.
While the announcement appears to represent a breakthrough in the field of nanotechnology, the molecular circuit model is far from becoming part of a PC. It will take around ten years to get this circuit into a computer, Heinrich said.
The researchers did not embark upon the project with the stated goal of developing the world's smallest circuit, Heinrich said. "This really came out because we were allowed to do this type of exploratory research, and we were quite surprised we were able to reach this level of integration so quickly," he said.
The scientists used a scanning tunnelling microscope to arrange the molecules on a copper surface in patterns that IBM calls "chevrons," which are groups of three molecules in a slightly crooked line, Heinrich said. Using the microscope, a single molecule was knocked into the lattice spacing next to the first group of three molecules, landing very close to the middle molecule. Carbon monoxide molecules naturally repel each other, so the middle molecule is moved away from its previous position, landing next to the centre molecule of the next group of three molecules.
When viewed from overhead, the cascading process resembles the effect created by patterns of toppling dominoes. By linking several groups of three molecules, researchers were able to start a chain reaction of cascading molecules. These cascading molecules were then connected to create arrays.
Computers, despite what the average user might think, are actually very simple, and the basics of computation have not changed since computers were the size of large rooms. Varying levels of electricity flow through the transistors on a processor, opening and closing logic gates. The different voltage levels designate whether a gate's terminal, or its inputs and outputs, is in a low binary position, designated as a "0", or a high binary position, designated as a "1." Different combinations of logic gates allow processors to perform operations.
In this case, the researchers set up the circuit so a cascaded array could be interpreted as a "1," and an upright array as a "0." The arrays intersect at certain intervals, where logic gates were created. The circuit created by the researchers is a three-input sorter, which requires three triggers to start the cascading process.
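As a software analogy only (this says nothing about how the molecular cascades are physically arranged), a three-input sorter over single bits can be expressed with AND and OR combinations, which is what makes it a natural demonstration circuit:

def sort3(a, b, c):
    # Sorted outputs of three bits: OR gives the largest, a majority vote gives
    # the middle value, and AND gives the smallest.
    highest = a | b | c
    middle = (a & b) | (a & c) | (b & c)
    lowest = a & b & c
    return highest, middle, lowest

for bits in [(0, 1, 0), (1, 1, 0), (1, 1, 1)]:
    print(bits, "->", sort3(*bits))   # (0, 1, 0) -> (1, 0, 0), and so on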
Several hurdles remain before IBM researchers can build this technology into PCs.
The test was carried out 4 to 10 degrees above absolute zero (-273 degrees celsius), which would be an austere computing environment. There is no reason why the circuit itself cannot function at room temperature, but the microscope used to arrange the atoms requires that level of intense cooling to operate, and a similar tool that can work at room temperature will have to be engineered, Heinrich said. Even then, the cascades require two to three hours to set up, as each individual atom is dragged into place with the microscope, he said.
The largest obstacle to overcome, according to Heinrich, is figuring out a way to re-arrange the atoms once they have been toppled. After the atoms move across the circuit once, they must be manually reset. The researchers do not yet know how to create an automatic loop circuit, but Heinrich thinks the answer may lie in the application of magnetic fields to the atoms. | <urn:uuid:9ebea835-657b-4249-8e89-fde78e8edda8> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240048115/Domino-rally-effect-powers-miniature-computer-circuit | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00111-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95184 | 823 | 3.703125 | 4 |
In order to work around some of the performance limitations of silicon at the nanoscale, researchers are looking for ways to improve on existing architectures and engineer new materials to prevent performance degradation. A rising tide of interest and funding has spilled into work to discover high performance nanoscale materials that will replace silicon transistors in the next decade.
Dr. Bhagawan Sahu at the Microelectronic Research Center in Austin, Texas is one of several scientists looking for silicon replacements at SWAN, a research center exploring next-generation nanotransistors.
SWAN is one of four nanoelectronics centers that is funded by the Semiconductor Research Corporation’s Nanoelectronics Research Initiative. This effort is backed by international semiconductor firms, including Intel, Texas Instruments, IBM and others, with vested interest in “safeguarding and going beyond Moore’s Law.”
According to a report today from the Texas Advanced Computing Center, Dr. Sahu and his team have made significant progress in their nanoscale materials research. As Aaron Dubrow reported:
“Today’s smallest semiconductor transistors are about 32 nanometers (nm) long. Dr. Sahu and the SWAN team aim to make 10nm transistors, with a thickness of less than one nanometer, using graphene. Since it was discovered in the mid-2000s, graphene has been lauded as the savior of the semiconductor industry. In 2010, Andre Geim and Konstantin Novoselov, of the University of Manchester, UK, were awarded the Nobel Prize in Physics “for groundbreaking experiments regarding the two-dimensional material.”
Made up of a single layer of graphite, graphene is the thinnest material in the world and possesses electron mobilities (a measure of how fast electrons in a material can move in response to external voltages) higher than silicon. These characteristics are attractive features and have generated tremendous interest from the semiconductor industry. However, as scientists learned more about graphene and proved it could be used as a potential material in transistors, initial excitements gave way to a greater appreciation of the design and fabrication challenges ahead.” | <urn:uuid:5950f0e5-b09a-41c1-b560-3d574f9bef43> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/05/27/researchers_scale_silicon_wall/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00350-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937613 | 448 | 3.171875 | 3 |
Windows 2000; Windows XP (32-bit); Windows XP Professional (64-bit); Windows Vista (32-bit); Windows Vista (64-bit); Windows 7 (32-bit); Windows 7 (64-bit); Windows 8 (32-bit); Windows 8 (64-bit);
Infrastructure Mode is the most common wireless network environment. This mode offers the fastest wireless data transfers and allows multiple secure wireless connections.
Examples of Infrastructure Mode Topologies
No Internet Connection
Before you begin
Infrastructure Mode Installation (WEP Security Example)
1. Insert the installation CD into the drive.
2. Select Network Installation, and then click Next.
3. After reading the License Agreement, place a checkmark next to the I Agree statement, and then click Next.
4. Click Next after reading the Firewall Exceptions notification.
5. The installation software will display Searching... while trying to find the printer.
6. If the printer was located (Illustration A shows it connected to USB002), click Setup Wireless or Next.
Note: If you select Next, you will be prompted to choose between a Wireless and a USB installation. Choose 'Set up the Wireless Printing', and then click Next.
If the printer was not found (Illustration B):
- See Before you begin above.
- Restart the PC.
- Verify that Windows Plug-N-Play sees the printer.
- Click Refresh.
7. The installation software will now locate and identify your PC's wireless configuration settings. The message Gathering Wireless information will appear with a progress bar beneath it.
8. Highlight your wireless router or access point, identified by network name (SSID), channel and mode, then select The network name I want to use is in the list, and then click Next.
9. Select your security type (No Security, WEP, or WPA), and then click Next.
Note: This example uses WEP.
10. Enter your Security Key(s) value, and then click Next.
11. Verify the settings, click Show Key(s) to confirm correct entry of the WEP or WPA security keys, and then click Next.
12. A small window will appear with the message "Configuring the wireless printer adapter and verifying it can access the network. Please wait...". Click Next.
13. Successful configuration will yield a port name (e.g. 9300_Series_026DD2_P1).
Suggestion: Write this port name down for future reference.
14. You will receive one last chance to switch to a USB connection. Select Print Using A Wireless Connection, and then click Next.
15. The message Wireless Setup Successful displays. Disconnect the USB cable and relocate the printer to a desired location. Click Next to install the remaining software.
Still Need Help?
Please contact Lexmark Technical Support for additional assistance. NOTE: When calling for support, you will need to know your printer model type and serial number (SN).
Please call from near the printer and computer in case the technician asks you to perform a task involving these devices. | <urn:uuid:ebf7337e-a8de-4669-a7fa-bd36ebb08314> | CC-MAIN-2017-04 | http://support.lexmark.com/index?modifiedDate=06%2F05%2F13&page=content&actp=LIST_RECENT&id=HO3093&locale=en&userlocale=EN_US | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00406-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.78213 | 640 | 2.53125 | 3 |
“9HP” silicon-germanium will vastly increase data flow over network backbones.
IBM has introduced a chip-making technology which it claims will help mobile operators handle the explosive amount of data generated by mobile traffic.
The company’s "9HP" silicon-germanium (SiGe) chip-making process will help mobile networks process the increasing amounts of data flowing through network backbones in applications such as Wi-Fi, LTE cellular, wireless backhaul and high-speed optical communications.
The new "9HP" SiGe technology will have the density of 90nm CMOS, allowing better integration in a production-qualified SiGe BiCMOS technology.
IBM’s new SiGe BiCMOS technology is claimed to offer higher performance, lower power and higher levels of integration than current 180nm or 130nm SiGe chips.
Compatible with the company’s 90nm low-power CMOS technology platform, it will allow foundries to port a range of circuit blocks and standard cell library elements.
The 90nm foundry platform also offers an RF CMOS technology option, giving foundries a broader choice for RF and mixed-signal applications.
IBM fellow David Harame said silicon-germanium is one of the key technologies that have enabled wireless operators to keep up with the explosive growth in data traffic generated from mobile handsets.
"Before SiGe, the high-performance chips used in base stations and optical links were built using expensive, esoteric processes. SiGe provides the necessary performance as well as integration and cost savings via its CMOS base." | <urn:uuid:140d51b0-da33-4d3f-b9a1-7f2c9ff0b979> | CC-MAIN-2017-04 | http://www.cbronline.com/news/telecoms/latest-ibm-chip-will-help-mobile-operators-push-big-data-4288499 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00250-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.895486 | 332 | 2.609375 | 3 |
Serious SSL Vulnerability Found
A vulnerability in the most common data security protocol on the Internet could allow secure Web sessions to be hijacked.
Two security researchers with PhoneFactor, a provider of phone-based two-factor authentication, on Thursday said that they had discovered a serious flaw in the SSL protocol, which is used to protect sensitive data in online transactions.
SSL, short for Secure Sockets Layer, is used for online banking and for secure e-mail and database access, among other things.
Discovered in August and disclosed by PhoneFactor researchers Marsh Ray and Steve Dispensa to a consortium of major tech industry companies and standards groups in September, the vulnerability was slated for disclosure next year, to give affected vendors time to patch their software.
But an independent security researcher discovered the vulnerability on his own and posted it to an Internet Engineering Task Force mailing list on November 4th.
The vulnerability could allow an attacker to conduct a man-in-the-middle attack, whereby he or she could hijack an authenticated SSL session and execute commands. In theory, neither the Web server nor Web browser would provide any indication that the session had been subverted.
"Because this is a protocol vulnerability, and not merely an implementation flaw, the impacts are far-reaching," said Steve Dispensa, CTO of PhoneFactor, in a statement. "All SSL libraries will need to be patched, and most client and server applications will, at a minimum, need to include new copies of SSL libraries in their products. Most users will eventually need to update any software that uses SSL."
Other SSL vulnerabilities have been identified recently. Over the summer, at the Black Hat security conference in Las Vegas, Mike Zusman, principal consultant at Intrepidus Group, and Alex Sotirov, an independent security researcher, disclosed a Web browser design flaw that allowed an attacker to conduct a man-in-the-middle attack against Web sites with Extended Validation (EV) Secure Sockets Layer (SSL) certificates.
Another security researcher, Moxie Marlinspike, demonstrated a separate SSL flaw at the Black Hat conference in Washington, D.C., in February.
The Cisco Environmental Management System minimizes impact to the environment in the definition, design, manufacture, support, and use of our solutions (products, activities, and services) by reusing, recycling, and adopting processes that conserve raw materials, energy, and water. Cisco is committed to:
- Compliance with relevant environmental legislation, regulations, and other requirements to meet customer and community needs and expectations
- Promoting the prevention of pollution
- Continual improvement of our environmental management system
The Cisco Environmental Management System is a continual cycle of planning, implementing, reviewing, and improving Cisco processes and actions to meet its environmental obligations. It serves as a vehicle to ensure that activities, products, and services conform to the ISO 14001 standard and environmental requirements.
Environmental Programs and Resources
Cisco has implemented various environmental programs to help minimize our impact on the environment and achieve our environmental objectives:
- Waste Reduction and Recycling - Helps to make Cisco an environmentally responsible company by using the principles of reducing, reusing, and recycling.
- Scrap Management - Uses Cisco's Internet capability to exchange excess components, materials, and equipment of all types. This program provides resources for the scrapping of any item, when necessary, in an environmentally sound manner.
- Design for Environment (DfE) - Cisco is in the process of creating a DfE tool to be incorporated into the product design process. This program will also include Product Take-back and Restricted Substances policies.
- Hazardous Materials Management - Ensures safe and proper management of hazardous materials and waste, including their handling, disposal, storage, and shipment, as well as ensuring compliance with company and legal requirements pertaining to the management of hazardous waste.
- Energy Management - Focuses on improving energy efficiency in building design and construction, energy conservation best practices in existing and future facilities, reducing energy costs through long-term price contracts, identifying opportunities in new and innovative programs offered through utility companies and with local, state, and national agencies, and on continuously raising energy awareness among Cisco employees.
- Alternative Transportation - Provides Cisco employees with the information, resources, and incentives to help them choose an alternative method of commuting to Cisco sites.
For more information or to ask questions about the Cisco Environmental Management System, send an e-mail message to: firstname.lastname@example.org. | <urn:uuid:4c76ea2b-1b3e-447c-ac13-f0a644d9d37a> | CC-MAIN-2017-04 | http://www.cisco.com/c/en/us/about/satisfaction-approach-commitment/approach-quality/quality-certifications/iso-14001/corporate-environmental-policy.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279650.31/warc/CC-MAIN-20170116095119-00030-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.913738 | 481 | 2.609375 | 3 |
Memory on your Cisco Routers
In preparation of our CCNA exam, we want to make sure we cover the various concepts that we could see on our Cisco CCNA exam. So to assist you, below we will discuss Cisco Router Memory.
This can be a confusing topic for many Cisco CCNA beginners. We have a few different types of memory in our Cisco routers. So where do we start? Let’s start off explaining what each type of memory does on your Cisco router as this will definitely be covered on your Cisco CCNA exam.
DRAM – Dynamic Random Access Memory. This is very similar to RAM in your personal computer. Anything that is stored in DRAM on your Cisco router will be lost when you power your Cisco router off. This memory is divided into two sections. The first is the Main Processor area. In this area you will store the router’s running configuration, your routing tables and any ARP tables. The second location is your Shared Input/Output Memory area. In this area you will store your data packets that are being routed through your router. If the router is fast enough from a CPU perspective, the data packets will fly right through. But at times your router will be overwhelmed with data packets and thus need to buffer them in this memory area. Finally, this memory can be easily popped out of the router and upgraded to a larger size by either putting in a larger DRAM stick or putting in additional DRAM sticks if the router has additional DRAM slots.
Flash – This memory simm is EPROM. EPROM stands for erasable programmable read only memory. So what does that mean to you? Well, the information that is stored in this simm on your Cisco router is not lost when you power the router off. This is where you will store your router’s IOS. You can “flash” the simm and download a different version of the IOS to this simm. This is very similar to what you do to your personal computer’s BIOS when you receive a BIOS upgrade from the computer manufacturer. The only difference is the IOS is the “operating system” for your Cisco router. Different versions of the IOS will support different features on your router. For example, the standard IP version IOS only supports the IP protocol, whereas the Enterprise version IOS will support additional protocols such as AppleTalk and IPX. Finally, this memory can be easily popped out of the router and upgraded to a larger size by either putting a larger flash simm in the router or adding additional simms if the router has multiple flash simm slots.
NVRAM – This is non-volatile random access memory, which means you will not lose what is stored in it when you turn the router off. This is generally a very small memory space, such as 32KB on the Cisco 2500 series routers, compared to the MBs used for DRAM and flash. This is where you will store the router’s configuration file. Whether you use the prompt-based setup menu to configure your router or you use the CLI (command line interface), at the end of either you will want to save your configuration to the router. Keep in mind, as noted above, the configuration is also stored in DRAM, but that is the running-config, which is lost when you power the router off. So again, when you finish configuring the router, you will want to save it to the startup-config, which in turn saves it to NVRAM. This way, when you turn the router off, you don’t have to reconfigure the router every time. **Tip** If your router will not save the configuration that you keep inputting over and over again, check the configuration-register setting (you can see this when the router boots up or by using the show ver command). If it is 0x2142, it will ignore the startup config. Set the config-register back to 0x2102 and you will be good to go. This is a common error beginners run into, causing them to pull their hair out!
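As an illustration, fixing that common mistake typically looks like the following from the router console (prompts abbreviated; your hostname and the rest of the output will differ):

Router# show version
! ...output omitted; the last line reads: Configuration register is 0x2142
Router# configure terminal
Router(config)# config-register 0x2102
Router(config)# end
Router# copy running-config startup-config
Router# reload

After the reload, the router boots with the saved startup-config from NVRAM instead of ignoring it.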
ROM – Read Only Memory. This is where your router reads the microcode to start the boot process and perform basic checks of the router. There are four basic functions that are performed here as follows:
POST – Power On Self Test. Just as your personal computer does a POST to make sure all of its components are working, so does your router. It is here it checks the CPU, the amount of DRAM and flash installed, and all the interfaces. So any modules you may have installed will be checked also at this time.
Bootstrap – The bootstrap program initializes the CPU and starts the boot process of the router by locating and loading the IOS.
RxBoot – This is a mini-IOS. This is used when your IOS is corrupt or your flash simm is blank and needs to have a version of IOS installed. This mini-IOS loads and gives you limited functionality so you can load a new version of IOS on your flash simm.
ROM Monitor – This is a mode that is used for diagnostics or for loading an IOS image over a console session. If you have a problem with the router and you want to get into this diagnostic mode, you can issue a ctrl-break, which will drop you into rommon mode.
Again, you want to review and make sure you understand all of these different types of memory in your Cisco router, as they will definitely be covered on your Cisco CCNA exam. There is no better way to really understand the different types of memory than opening up a real router. Real hands-on experience goes a long way in your Cisco CCNA exam preparation!
I hope you found this article to be of use and it helps you prepare for your Cisco CCNA certification. I am sure you will quickly find out that hands-on real world experience is the best way to cement the CCNA concepts in your head to help you pass your CCNA exam! | <urn:uuid:1f970867-0798-474a-a671-f1b0d492c749> | CC-MAIN-2017-04 | https://www.certificationkits.com/cisco-router-memory/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00516-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.933245 | 1,240 | 2.8125 | 3 |
Data Centre Optimization
Data centre optimization is a multilayered approach to effectively maximize an organization's technology resources and objectives. By designing a comprehensive networking, server and storage environment as well as an effective power and cooling system for your data center, you can achieve your current goals while being flexible enough to manage for future growth. Components of this approach include the following:
Effective data storage management solutions enable an organization to manage data throughout various phases, and better predict future storage needs.
Server consolidation is designed to maximize an organization's server resources to reduce the total number of servers or server locations.
Server virtualization is a method of running multiple, independent virtual operating systems on a single physical computer. It also enables organizations to take best advantage of its hardware investment.
Power and cooling management is the process of designing a modular, energy-efficient system to reduce consumption and maintenance costs and limit the downtime of a server environment. | <urn:uuid:17468ac3-a1c4-414c-b695-3edfc04e092d> | CC-MAIN-2017-04 | https://www.cdw.ca/it-solutions/ca/data-center-optimization/default.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00424-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912973 | 186 | 2.515625 | 3 |
Also known as: attachment
Definition: An attachment is a file that is sent along with an email message by appending itself to the ASCII message. The attachment is transformed into text when sent through the Internet.
The recipient's mail client will then convert this attachment back to original format when it is received.
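For illustration, MIME mail clients most commonly use Base64 for this binary-to-text step; the short Python sketch below (the bytes are invented for the example) shows the round trip:

import base64

binary_payload = bytes([0x89, 0x50, 0x4E, 0x47])            # first bytes of a PNG file
as_text = base64.b64encode(binary_payload).decode("ascii")  # the text that travels inside the message
restored = base64.b64decode(as_text)                        # what the recipient's client rebuilds

print(as_text)                        # iVBORw==
print(restored == binary_payload)     # True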
Post updated August 26, 2015
An old hard drive made by a company with a notorious history arrived at the Gillware data recovery lab today. Big, heavy and full of solder points that make it feel like a starfish on one side, the MiniScribe hard drive came off the assembly line around the same time Madonna recorded “Like a Prayer” and Florence Griffith-Joyner was setting records at the Seoul Olympic Games.
We wonder if this series of hard drives – made by a company based in Longmont, Colo. – is responsible, or at least partly responsible, for a more recent definition of the word “brick.” As a noun, a brick is a piece of technology that no longer works. For example, the smartphone that you got wet is a brick, as is your non-spinning hard drive. It also gets used as a verb, as in to ruin: You bricked your laptop with a bad power supply.
It’s easy to see why the word brick would be used in these cases: it’s a great metaphor. The nonworking electronic device is now simply an inert rectangular mass. But maybe the origin was a bit less metaphorical and a bit more literal than that.
Consider what happened with MiniScribe hard drives. According to news accounts from the time, MiniScribe ran into a financing problem. The company needed money to make more hard drives, but no one was lending to them.
MiniScribe didn’t have hard drives to fulfill orders, but they decided to take orders anyway. That way they could collect money and deal with their financing challenges.
The drives that would fill the orders would be serialized units that would sit uninspected at a warehouse in Singapore for a while. That presented MiniScribe with their fraudulent opportunity: After collecting payment, MiniScribe planned to recall all the sold units by serial number while they remained in the warehouse. The hope was that this plan would give them enough time to turn the money from orders into real hard drives.
In the meanwhile, they just needed something to put into all those boxes that would go to the warehouse to later be recalled. I think you can guess what they used!
A Wikipedia editor delivered the punchline with great dryness: “The decision was made to ship pieces of masonry inside the boxes that would normally contain hard drives.”
That’s right — bricks! You opened those hard drive boxes and you got bricks. Before they actually resorted to “pieces of masonry,” they were placing nonfunctional, obsolete or broken drives in the boxes. But it was the bricks that really captured headlines and imaginations.
The scheme evidently fell apart because MiniScribe was simultaneously laying off workers and going through all sorts of other trouble. Among those laid off were staffers in charge of shipping and warehousing. As soon as they were let go, they told the press about the bricks inside the hard drive boxes. The stories hit just before the holiday season. MiniScribe went bankrupt as huge accounting scandals unraveled and what assets remained were later bought by Maxtor (makers of the famous musical hard drives, who are also now out of business).
Whether this famous episode contributed to the usage of the term brick is purely speculation on our part. I could not find the usage of brick to describe nonfunctional hard drives that pre-dated this incident in the late 1980s, but maybe it’s out there. Word origins are tricky; very often some story about how a word came to be is a false assumption or apocryphal. But the mysteries are fun.
Our work ahead is to un-brick that MiniScribe and try to get our client’s data back. If any reader has a background in etymology or can support or disprove our speculation about the word brick, we’d love to hear from you! | <urn:uuid:996db10c-e0b5-4a37-a5d7-c8c236f9e777> | CC-MAIN-2017-04 | https://www.gillware.com/blog/data-recovery/the-first-hard-drive-to-brick/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00148-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.974292 | 805 | 2.546875 | 3 |
What is Bitcoin?
From the official site bitcoin.org:
Bitcoin is a peer-to-peer digital currency. Peer-to-peer (P2P) means that there is no central authority to issue new money or keep track of transactions. Instead, these tasks are managed collectively by the nodes of the network. Advantages:
- Bitcoins can be sent easily through the Internet, without having to trust middlemen.
- Transactions are designed to be computationally prohibitive to reverse.
- Be safe from instability caused by fractional reserve banking and central banks. The limited inflation of the Bitcoin system’s money supply is distributed evenly (by CPU power) throughout the network, not monopolized by banks.
Bitcoin is an open source project currently in beta development stage.
I’ve been lurking the topic of Bitcoin for a while and can tell you that there is a lot to Bitcoin and a lot to take in before comprehending how the system works. My write-up here is my best attempt to explain Bitcoin in my own words. You could choose to blindly trust the Bitcoin system though you don’t have to since it is open-source and transparent. Since Bitcoin is a new currency that plays by new rules, Bitcoin gets scrutinized far more than today’s paper and coin currency. Here’s a friendly video that brings Bitcoin down to a much more approachable level:
Not sure why you shouldn’t trust the Fed? There’s a video for that too. (This one is 30 minutes long and presents a strong change of view.)
What does that mean?
Bitcoin is a peer-to-peer digital currency.
Bitcoin is currency. This means it can be used in exchange for goods and services.
Bitcoin is digital. This means the products, bitcoins, are virtual and have no physical representation.
Bitcoin is peer-to-peer. This means the system is decentralized. Nobody has control over the system, issues new money, or tracks transactions. For a more technical explanation of Bitcoin, you can read the original paper by Satoshi Nakamoto (.pdf).
What do I do with it?
It’s a form of money, so you do the same thing you would do with other money. You buy things with it or you accept it from others who would like to buy things. You can also convert it from Bitcoins to USD (or other currencies) or buy Bitcoins for USD. Between direct deposit, credit cards, and online bill pay, today’s currency isn’t a whole lot different from a virtual currency. You can see the current conversion rates at various exchanges, of which Mt. Gox is probably the largest.
You can use Bitcoins already to purchase hosting, web services, VoIP service, games, virtual goods, and tangible goods. Find a list of links to websites that are accepting payments in the form of Bitcoin from the Bitcoin wiki Trade page or the Bitcoin Directory.
For some purchases, you may be interested in using an escrow service to ensure money is securely exchanged.
How do I get started?
Bitcoins are comparable to the regular currency you use every day. A bitcoin has a certain value and you can use it to buy something. For example, 1 Bitcoin might allow you to buy one MP3 that you want. You get this bitcoin by receiving it from someone or converting your current cash into Bitcoins through an exchange. You store and access those Bitcoins through your wallet. The wallet contains Bitcoin addresses. Wallet.dat resides in the Bitcoin directory underneath the user’s application data folder on Windows computers. You can also choose to use an “eWallet” through a service called mybitcoin.com. In related reading, knowing how to secure your wallet is always a good idea.
To use the Bitcoin system and establish your wallet, you get started by downloading the official Bitcoin client from Bitcoin.org. The client is available for Windows, Linux, and Mac OS X (10.5 or higher).
The Bitcoin client connects to other Bitcoin peers and keeps track of your Bitcoin balance. A neat feature of Bitcoin is that it promotes anonymity. Your wallet receives money sent to your address but you can generate new and additional Bitcoin addresses at any time and keep them organized in your Address Book. This way, you can generate an address to send and receive Bitcoins that is unique to each vendor or source.
You can receive a few Bitcoin “pennies” to play around with from the Bitcoin Faucet. We Use Coins is a great resource to view when first getting started with Bitcoin.
What is Bitcoin mining?
Instead of new money being introduced into circulation by a central authority such as the Federal Reserve, Bitcoin has a fixed total supply of 21 million bitcoins that will gradually be introduced to the Bitcoin market over time. These coins are introduced after mathematical problems are solved. This is done in much the same way as grid computing. It’s even started quite a hobby with Bitcoin mining. The Bitcoin client has a “Generate Bitcoins” option which will use your spare CPU cycles to work on solving these problems and create Bitcoins for you in exchange.
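The "mathematical problems" are hash puzzles: miners search for a nonce that makes the hash of a block meet a difficulty target. The toy Python sketch below is only a rough illustration of that search loop; real Bitcoin mining uses double SHA-256 over an 80-byte block header and an enormously harder target.

import hashlib

block_data = "example block header"   # invented placeholder data
difficulty = 4                        # toy difficulty: require 4 leading hex zeros

nonce = 0
while True:
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    if digest.startswith("0" * difficulty):
        break                         # a "winning" nonce has been found
    nonce += 1

print("winning nonce:", nonce, "hash:", digest)

Each additional required zero multiplies the expected work by 16, which is why miners moved to GPUs and began pooling their efforts.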
The Bitcoin miners have developed special software that can handle the mining more efficiently and can pool many computers together to work on completing the calculations faster. GPUs, the processors on video cards, have been found to be even better at solving these types of hashes. Using the additional mining software, you can set your CPU or your GPU to start working on crunching these numbers individually, or in a pool. BitcoinMiner.com discusses different mining software and mining pools you can join. I have been using the GUI for the poclbm miner and contributing to the BitcoinPool and BitPenny pools to play around with it.
Because it’s a hobby, people have researched which hardware performs these calculations fastest. Since the return on investment (ROI) of mining depends on how much electricity is consumed, a more efficient card is more “profitable.” ATI video cards tend to outperform NVIDIA cards because their architecture handles the simple integer operations used in the hashing calculations more efficiently. There are detailed comparisons by model of how video cards perform at mining. Mining Bitcoins isn’t a get-rich-quick scheme, and if you’re paying for your own electricity, it will probably be more cost-effective to simply buy Bitcoins. Unless you happen to have one of the efficient video cards already in your computer and want to take up Bitcoin mining as a hobby, you can treat mining as a detail of Bitcoin that you don’t need to worry about.
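If you want to run the numbers anyway, the back-of-the-envelope math looks like this. Every figure below (hash rate, power draw, electricity price, difficulty, reward, exchange rate) is an assumption for illustration -- substitute your own hardware specs and the live network values.

```python
# Illustrative assumptions only -- not current data.
hash_rate       = 300e6        # hashes per second for a hypothetical GPU
power_draw_w    = 200.0        # watts consumed while mining
electricity_usd = 0.12         # dollars per kilowatt-hour
difficulty      = 1_500_000.0  # network difficulty (changes constantly)
block_reward    = 50.0         # BTC per block
btc_price_usd   = 10.0         # assumed exchange rate

# Simple model: each hash "wins" a block with probability 1 / (difficulty * 2**32).
blocks_per_day = hash_rate * 86400 / (difficulty * 2**32)
revenue        = blocks_per_day * block_reward * btc_price_usd
cost           = power_draw_w / 1000 * 24 * electricity_usd

print(f"Expected revenue: ${revenue:.2f}/day")
print(f"Electricity cost: ${cost:.2f}/day")
print(f"Net:              ${revenue - cost:.2f}/day")
```

In a pool you would earn a steady share of the pool's blocks instead of the lumpy solo payouts this model implies, but the long-run average is the same.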
Should I convert all my money to Bitcoins?
Probably not. Bitcoin as a currency is still in its infancy and as software, it’s still in beta. However, as more and more vendors are starting to accept Bitcoins as a valid form of payment, you might convert your money to Bitcoins as you need it. You can buy Bitcoins with Paypal through CoinPal.
How do I track the Bitcoin market?
Since Bitcoin is still young and more transparent than other money markets, you can learn a lot about finances by observing the Bitcoin market. A number of exchange and charting websites let you track prices and trading activity.
Bitcoin will be interesting to watch and see where it goes. It’s becoming more legitimate with every vendor that supports it and as it gets more established. You can learn more through the official Bitcoin FAQ and the Bitcoin wiki. You can also get involved in the community with the Bitcoin Forum or the Bitcoin Reddit community.
A shout-out to anybody that has read Cryptonomicon by Neal Stephenson. | <urn:uuid:11fae299-f93f-4f91-8a4f-d6c565149088> | CC-MAIN-2017-04 | https://www.404techsupport.com/2011/03/bitcoin-a-virtual-currency/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00506-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944244 | 1,558 | 2.890625 | 3 |
There are a lot of heavily technical terms that get used around computer security. Many of them can be a bit hard to explain in a simple manner, so they often get used incorrectly. One of the most frequently (and painfully) misused groups is the terms that differentiate malware from other types of vulnerabilities and threats. I thought I'd clear up the confusion by explaining what malware, trojans, viruses, and worms are and how they're different from one another.
Don't forget to protect your Mac from viruses, malware and everything in between: Download Mac Internet Security X8 and get protected today.
Here’s the basic definition for all the terms we’ll discuss here:
This is a big catchall phrase that covers all sorts of software with nasty intent. Not buggy software, not programs you don’t like, but software which is specifically written with the intent to harm.
This is a specific type of malware that spreads itself once it’s initially run. It's different from other types of malware because it can either be like a parasite that attaches to good files on your machine, or it can be self-contained and search out other machines to infect.
Think of inchworms rather than tapeworms. These are not parasitic worms, but the kind that move around on their own. In the malware sense, they're viruses that are self-contained (they don’t attach themselves like a parasite) and go around searching out other machines to infect.
Do you remember that story you had to read in high school about the big wooden horse that turned out to be full of guys with spears? This is the computer equivalent. You run a file that is supposed to be something fun or important, but it turns out that it’s neither fun nor important, and it’s now doing nasty things to your machine.
Funny thing about software: it’s written by humans. Humans are fallible and sometimes forget to cross t's and dot i's. Sometimes those mistakes create strange behavior in programs. And sometimes that strange behavior can be used to create a hole that malware or hackers could use to get into your machine more easily. That hole is otherwise known as a vulnerability.
The strange behavior that can be used to create a hole for hackers or malware to get through generally requires someone to use a particular sequence of actions or text to cause the right (or is that wrong?) conditions. To be usable by malware (or on a larger scale by hackers), it needs to be put into code form, which is also called exploit code.
So, how do these definitions play out in real life?
Malware is the big umbrella term. It covers viruses, worms and Trojans, and even exploit code. But not vulnerabilities or buggy code, or products whose business practices you don’t necessarily agree with.
Malware = umbrella term.
The difference between malware and vulnerabilities is like the difference between something and the absence of something. Yeah, okay, that’s a bit esoteric. What I mean is malware is a something. You can see it, interact with it, and analyze it. A vulnerability is a weakness in innocent software that a something (like malware or a hacker) can go through.
Flashback is an example of malware that exploited a vulnerability to take over people’s machines. The authors slipped malicious exploit code into otherwise-innocent websites, and this code utilized a vulnerability within Java in order to silently install itself.
Virus is a slightly smaller sort of umbrella term that covers anything that spreads itself without additional human intervention beyond that first double-click.
Virus = smaller umbrella.
It could spread parasitically, meaning the virus code attaches itself to otherwise-innocent files and keeps infecting more and more files whenever an infected file is run. Viruses can be destructive (including spying behavior), or they can be intended to do nothing more than spread. Non-destructive viruses are pretty rare these days, as everything has become financially motivated.
A virus requires the presence of those innocent files in order to spread. The other scenario is that it could spread as a static, self-contained file. The self-contained file sends itself through shared network connections, by attaching itself to emails or IMs, or even just by sending a link in email or IM to download the file. In this latter, static case, the specific type of virus is called a worm.
Worms are no fun.
The difference between a worm and a Trojan is a tricky one that may not seem to matter much if you’re the one being affected. If you got infected with the Melissa email worm way back when, you may remember the difference: you don’t have to worry about just your own machine getting messed up, now you have to worry about those first 50 people in your email address book who’ve now just been sent a copy. (Those people are probably gonna be pretty righteously peeved at you.)
Trojans really have only one purpose, and that is to cause damage.
Don't be fooled!
They often have identical destructive functionality to some viruses; they just lack the ability to spread on their own. Trojans must be planted somewhere people are likely to run across them (like Flashback), or they must be sent directly (like in a targeted attack such as Imuler). This confusion is what leads some people to refer to things as “Trojan viruses,” even though those two terms are mutually exclusive.
Hopefully that clears things up a bit! If you have any questions about malware, trojans, viruses, and worms, drop them in the comments.
Now that you know the difference between each malware, it's time to take action!
Protect your Mac from all known malware: Download Intego VirusBarrier today! | <urn:uuid:d1d9413f-bf95-42fe-917f-7828104ee155> | CC-MAIN-2017-04 | https://www.intego.com/mac-security-blog/whats-the-difference-between-malware-trojan-virus-and-worm/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00167-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.950408 | 1,212 | 3.203125 | 3 |
Previously, we advised that the SSL industry must move to the SHA-2 hashing algorithm for certificate signatures. We thought it would be helpful to provide the reasoning behind the position.
In the context of SSL, the purpose of a hashing algorithm is to reduce a message (e.g., a certificate) to a reasonable size for use with a digital signature algorithm. The hash value, or message digest, is then signed to allow an end-user to validate the certificate and ensure it was issued by a trusted certification authority (CA). In the past, we used MD5 for hashing; we are now primarily using SHA-1 while beginning the transition to SHA-2, and have SHA-3 available for the future.
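As a concrete illustration of the first step of that sign-the-digest workflow, the snippet below hashes the same certificate-like blob with several algorithms and prints the digest sizes; the "certificate" bytes are a stand-in, and the actual signature operation is omitted.

```python
import hashlib

# Stand-in for the encoded certificate contents a CA would actually sign.
certificate_bytes = b"subject=example.com, issuer=Example CA, serial=1234"

for name in ("md5", "sha1", "sha256", "sha512"):
    digest = hashlib.new(name, certificate_bytes).digest()
    print(f"{name:>6}: {len(digest) * 8:>3}-bit digest  {digest.hex()[:32]}...")
```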
Hash attacks are described as follows, in increasing order of difficulty for an attacker:
- Collision – A collision attack occurs when it is possible to find two different messages that hash to the same value. A collision attack against a CA happens at the time of certificate issuance. In a past attack against MD5, the attacker was able to produce a pair of colliding messages, one of which represented the contents of a benign end-entity certificate, and the other of which formed the contents of a malicious CA certificate. Once the end-entity certificate was signed by the CA, the attacker reused the digital signature to produce a fraudulent CA certificate. The attacker then used their CA certificate to issue fraudulent end-entity certificates for any domain. Collision attacks can be mitigated by putting entropy into the certificate, which makes it difficult for the attacker to guess the exact content of the certificate that will be signed by the CA. Entropy is typically found in the certificate serial number or in the validity periods. SHA-1 is known to have weaknesses in collision resistance. (A small sketch of this entropy mitigation appears just after this list.)
- Second-preimage – In a second-preimage attack, a second message can be found that hashes to the same value as a given message. This allows the attacker to create fraudulent certificates at any time, not just at the time of certificate issuance. SHA-1 is currently resistant to second-preimage attacks.
- Preimage – A preimage attack is against the one-way property of a hash function. In a preimage attack, a message can be determined that hashes to a given value. This could allow a password attack, where the attacker can determine a password based on the hash of the password found in a database. SHA-1 is currently resistant to preimage attacks.
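Here is the minimal sketch of the entropy mitigation mentioned under the collision attack: the CA picks an unpredictable serial number at issuance time, so an attacker cannot know in advance exactly which bytes will be hashed and signed. The 64 bits of randomness is an illustrative choice, not any CA's actual policy.

```python
import os

def random_certificate_serial(num_bytes: int = 8) -> int:
    """Return an unpredictable serial number for a to-be-issued certificate."""
    return int.from_bytes(os.urandom(num_bytes), "big")

print("Serial:", hex(random_certificate_serial()))
```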
Attacks against hash functions are measured against the length of time required to perform a brute-force attack, in which messages are selected at random and hashed until a collision or preimage is found. Thanks to the birthday paradox, the time required to find a collision by brute force is approximately 2^(n/2), where n is the bit length of the hash. To find a preimage or second-preimage by brute force, approximately 2^n messages must be hashed. Thus, a hash function is considered broken if a collision can be found in less time than that needed to compute 2^(n/2) hashes, or if a preimage or second-preimage can be found in less time than would be needed to compute 2^n hashes. For common hashes the bit length is: MD5 (128 bits), SHA-1 (160 bits) and SHA-2 (224, 256, 384, or 512 bits).
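You can watch the birthday paradox at work by brute-forcing a collision on a deliberately truncated hash. The sketch below cuts SHA-256 down to 32 bits as a toy stand-in for a weak hash, so a collision typically turns up after roughly 2^16 random messages.

```python
import hashlib
import os

def find_collision(truncate_bytes: int = 4):
    """Brute-force two different messages whose truncated SHA-256 digests match."""
    seen = {}
    attempts = 0
    while True:
        message = os.urandom(16)
        digest = hashlib.sha256(message).digest()[:truncate_bytes]
        attempts += 1
        if digest in seen and seen[digest] != message:
            return seen[digest], message, attempts
        seen[digest] = message

m1, m2, attempts = find_collision()
print(f"Collision after {attempts} attempts:")
print("  message 1:", m1.hex())
print("  message 2:", m2.hex())
```

Doubling the truncated length squares the expected work, which is exactly why digest sizes keep growing.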
The time required to perform a brute-force attack keeps getting shorter due to increases in available computing power (see Moore’s Law). As such, increases in hash function lengths are necessary to maintain an acceptable margin of security. In the past, an attack threshold of 2^64 operations was considered acceptable for some uses, but NIST recommendations now set the bar at 2^80, and this will soon move up to 2^112.
Using the formula 2^(n/2), we can see that a brute-force attack against SHA-1 would require 2^80 computations. Unfortunately, security researchers have discovered an attack strategy that requires only 2^61 computations. This makes the time required to perform an attack fall below current standards. In fact, Bruce Schneier has estimated that the cost of performing a SHA-1 collision attack will be within the range of organized crime by 2018 and within reach of a university project by 2021.
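To put those exponents in perspective, here is the arithmetic at an assumed attack rate; the 10^12 hashes per second is an arbitrary stand-in for a well-funded attacker, not a measured benchmark.

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
hashes_per_second = 1e12   # assumed aggregate rate for a well-funded attacker

for label, work in [("SHA-1 brute force (2^80)", 2**80),
                    ("SHA-1 shortcut attack (2^61)", 2**61),
                    ("SHA-256 brute force (2^128)", 2**128)]:
    years = work / hashes_per_second / SECONDS_PER_YEAR
    print(f"{label:30s} ~ {years:.3g} years")
```

At that rate the 2^61 attack finishes in under a month, while the nominal 2^80 brute force would still take tens of thousands of years -- which is exactly the gap that makes SHA-1's weakened collision resistance alarming.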
The bottom line is SHA-1’s collision resistance is weak and the cost of an attack is dropping; as such, SHA-1 must be replaced with SHA-2.
Certificate owners are encouraged to test and deploy certificates signed with SHA-2. If your application does not support SHA-2, please inform your product vendor and your CA.
Post prepared by Bruce Morton and Clayton Smith. | <urn:uuid:ece4824f-048a-4e2e-ae66-65f78fc04d6c> | CC-MAIN-2017-04 | https://www.entrust.com/need-move-sha-2/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00131-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.932704 | 951 | 2.796875 | 3 |