Asymmetric Encryption is a form of encryption where keys come in pairs: what one key encrypts, only the other can decrypt.
Frequently (but not necessarily) the keys are interchangeable, in the sense that if key A encrypts a message then key B can decrypt it, and if key B encrypts a message then key A can decrypt it. This property is common but not essential to asymmetric encryption.
Asymmetric Encryption is also known as Public Key Cryptography, since users typically create a matching key pair, and make one public while keeping the other secret.
Users can "sign" messages by encrypting them with their private keys. This is effective since any message recipient can verify that the user's public key can decrypt the message, and thus prove that the user's secret key was used to encrypt it. If the user's secret key is, in fact, secret, then it follows that the user, and not some impostor, really sent the message.
Users can send secret messages by encrypting a message with the recipient's public key. In this case, only the intended recipient can decrypt the message, since only that user should have access to the required secret key.
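As a concrete illustration, here is a minimal sketch of both uses (secrecy and signatures), assuming the third-party Python cryptography package is available. Note that modern libraries expose dedicated sign/verify operations built on the same key-pair mathematics, rather than a literal "encrypt with the private key" call; the key names and message below are illustrative only.

```python
# Minimal sketch using the (assumed available) "cryptography" package:
# encrypt with the recipient's public key, sign with the sender's private key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Each user generates a key pair and publishes the public half.
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_public = sender_private.public_key()

message = b"meet at noon"

# Secrecy: anyone can encrypt to the recipient, only the recipient can decrypt.
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
ciphertext = recipient_public.encrypt(message, oaep)
assert recipient_private.decrypt(ciphertext, oaep) == message

# Authenticity: only the sender can sign, anyone can verify with the public key.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = sender_private.sign(message, pss, hashes.SHA256())
sender_public.verify(signature, message, pss, hashes.SHA256())  # raises InvalidSignature if tampered
```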
The key to successful use of Asymmetric Encryption is a Key Management system, which implements a Public Key Infrastructure. Without this, it is difficult to establish the reliability of public keys, or even to conveniently find suitable ones.
In this age of digital by default it is important that all digital content is accessible. This includes not only websites and web pages but also video, audio and documents. This article investigates the needs, challenges and issues around the creation and consumption of accessible documents.
For this article a document is a collection of words and images that can be printed as a whole. The article does not cover interactive books that require the reader to be able to access them electronically.
These documents will include: letters, memos, minutes, reports, user guides, brochures, pamphlets, transcripts of speeches, magazines, novels, etc. They will be held in one or more digital formats.
There is a potential tension between the requirements of the creator of a document, the distributor and the user:
The creator of the document will wish to use tools and technologies that they are familiar with.
The distributor of the document will wish to minimise the number of document formats used for distribution. Multiple versions cost money, cause management issues and increase the risk of different users seeing different content.
The different users will wish to consume the document in different ways (the word 'consume' is used here rather than 'read' because 'read' implies reading words on paper or a screen, whereas the user may have the document read to them, or turned into braille or sign language, or other formats).
The end user must be considered the most important of these roles; if they cannot consume the document then there is no point in creating or distributing it.
This article looks at the requirements of these different players and reviews the alternative technologies available.
It summarises the pros and cons of various solutions and makes tentative suggestions for an optimum solution. It is hoped that this will help organisations that are going digital by default to decide how to distribute accessible documents; it is also hoped that it will show the weaknesses in current technology so that vendors can improve their products.
The article looks first at the end user, then the distribution process, then the creation process. It then looks at the various technologies for creating, distributing and consuming the document, and concludes with some tentative best practice.
The end user experience
To understand how these documents must be created, stored and distributed we must first understand how different end-users will consume them.
However the user consumes the document they need to be able to access more than just the text and the images (or descriptions of the images), they need to be able to:
understand the structure of the document, including sections and sub-sections, lists, tables, notes, quotations, citations, indexes, etc.
navigate to relevant parts easily.
annotate the document.
copy and extract information.
They will not expect to be able to modify the original document without the express authorisation of the owner.
Types of consumer
People with different disabilities will wish to consume the documents in different ways. The following section outlines the different disabilities and methods of consumption:
Non-disabled: a person with no relevant disabilities will want to be able to read the document electronically on some type of screen. The document should be laid out so that its structure is visually apparent by the use of different types and sizes of fonts, use of bullets, indentation, and tables. The reader software should enable the user to navigate the document by table of contents, indexes, bookmarks and searches.
This electronic version of the document should be considered the base version: any other version should contain the same information.
Besides the electronic version, non-disabled people may wish to have a printed version of the document. It must be possible to print all or parts of the document so that the printed version is an accurate reflection of the electronic version.
People who are partially sighted should be able to modify how the document is displayed: size of text, type of font, background-foreground colours, line separation, justification, etc. to enable them to see the content as clearly as possible. The electronic document should interface well with screen-magnifiers.
People who are blind should be able to access the document using a screen-reader. The screen-reader should convey the structure of the document by announcing headings, lists, tables and other structural elements, and assist the user's navigation by providing functions such as jump to next header, or to the end of a list, or to the next chapter.
People who are vision impaired and use braille should be able to access the document and have it presented on a braille display including the structure and the ability to navigate.
It should also be possible to create printed braille from the base document.
People with dyslexia can improve the reading experience by using suitable background-foreground colour and brightness combinations, and by using left-justified text. Having text read out aloud and highlighted at the same time can also improve the experience.
People with hearing impairments have different capabilities of reading written text. If their reading level is good then the base document should be accessible. There is a great deal of pressure from the deaf community for films and TV to be captioned but there is much less pressure for them to be signed; the main area of signing is for news and current affairs where live captioning is inadequate. Signed versions of a document should probably be limited to general introductions to an organisation and documents specifically aimed at the deaf community.
People who do not understand written English may need some introductory document which explains what the organisation does and how to get help in understanding the other documents.
People with learning disabilities may not be able to fully understand the base document. Firstly the base document should be reviewed to see if it can be altered so that it is understandable by a wider range of cognitive abilities, without it becoming patronising for the majority of users. If this cannot be done then a version may need to be created that is easier to understand without losing any of the meaning. This format is often known as 'Easy Read'; it concentrates on simple language and the use of images and videos to match the words.
People who cannot use keyboards and/or pointing devices should be able to access and navigate the base document using assistive technologies such as switches or voice commands.
People with severe dementia and similar conditions cannot understand or make decisions independently. In these cases the document only has to be accessible to their carer. An extreme case is a person in a coma.
Formats required by the consumer
Supporting all these different user types ideally requires the following end-user formats (requirements for readers of these formats are discussed below):
Base document, which includes text and images, the format should support:
Changes to fonts, colours, justification etc
Easy Read documents are needed where the base document is difficult to understand by some users.
Sign language: the base document can be converted into sign language by videoing a signer reading the document (there is research into automatic generation of sign language, but it is not considered advanced enough to be used instead of human signers). At present there is no easy way to support navigation of signed videos.
Audio can be produced either by using text-to-speech software or by recording a human reading the text. At present there is no easy way to support navigation of audio versions of documents; however, if the voice is synchronised with a text version then navigating the text version will provide navigation of the audio.
Possible Distribution Formats
The question is which format(s) should the content be distributed in? The following are some options with pros and cons.
Word processor format
The documents will often be created using a word processor (Microsoft Office (.docx), Open Office (.odt), Apple iWork (.pages)). If it is going to be distributed in this format it needs to be in a format that can be read by all systems: this means .doc or possibly .docx. There are two problems with distributing in this format:
The formatting of these documents by different word processors is not always identical and in a few situations does not work at all. This can be a particular problem with mobile devices that have limited support for these formats.
The content is not intended to be edited or changed by the recipient but the program used to access it is designed to do just that. The recipient should be able to annotate and comment but not to change the original.
For these reasons it is not really a suitable format for distributing the base document. However it is a very common format for creating base documents and therefore there should be methods for converting them into formats suitable for distribution.
PDF is designed to be a final document format. The common tools used to access it, such as Adobe Reader and Apple Preview, do not support change but do provide annotation functions.
PDF used not to work well with screen readers because the format did not include any document structure information; with the publishing of the PDF/UA standard this is no longer the case.
PDF readers are available on all relevant platforms and are installed on most PCs. PDF is therefore a popular format for distribution of finalised documents.
PDF/UA has not been designed to facilitate conversion to other formats; it is possible but not easy.
PDF documents are designed to ensure the page layout is preserved. This is important if the page layout is critical to the design of the document, or if the layout has a legal significance.
The ePub format is growing in importance and is especially popular on mobile devices.
The format does not define the page layout but just the document structure. This means that the document can be rendered differently to suit the display device and user preferences. It is also suitable for converting into other formats including Braille.
It has the functionality to support screen readers as the document structure is defined as part of the format. The common reader tools that are used to access the content enable users to annotate but not to change the original.
The latest version of the EPUB standard (EPUB 3) includes functions for synchronising audio with the text.
The present issue is that not everyone has ePub readers installed on their device. Also not everyone has an ePub creator tool.
Daisy is a format that has been developed to support people with vision impairments. It requires a special reader and development tools. It would appear that the benefits of DAISY are being built into ePub 3 technology. Therefore it is unlikely that Daisy will become a general document distribution format.
MP3 is the common format for audio. The problem is that it does not include any facility for defining structure, for navigation or for annotation.
MP3 versions of the base document may work for short documents or for documents that are designed to be read linearly such as novels. On its own it is not a suitable format for documents such as reports, manuals or magazines.
MP4 (or MOV) is the standard file format for video. It is the format that will be used for sign language. The problem, as with MP3, is that it does not include any facility for defining structure, for navigation or for annotation.
A suggestion is that a video file is created which includes the signed version of the text, an audio track with the spoken words, and a closed-captions track with the written text. This way there is one file that can support users with different disabilities.
Recommended Distribution Formats
Based on the discussion above it would seem that all users can be accommodated by providing two formats: EPUB 3 and video. EPUB has been recommended over PDF/UA because it is designed to support conversion and because of its widespread support on mobile devices.
The base document should be distributed using EPUB 3 format. Given a suitable reader (see discussion below) this format can be used by people with most of the disabilities described above; the one major exception is people who are dependent on sign language for communications.
The format can be converted relatively easily into other formats. This means that users who require another format for technological, preferential or legal reasons can convert the document or have it converted for them.
Sign language cannot be adequately created from an EPUB 3 document. The only solution for this requirement is to create a video of a signer reading the document. If this includes the sound track of the document being read then the video provides a single source that supports multiple users.
It is not recommended that a video is made of every document; rather, a decision should be made for each new document as to whether it is beneficial to make the video up-front or whether it should only be created on request.
The three formats (EPUB, PDF and Video) have different reader technologies.
There are many different readers on the market. They all support the EPUB format but vary in details such as which platforms they run on, the design of the user interface, and the options available for the user to change the look and feel. This means that it is not possible for the distributor of the document to recommend and link to a single reader (this compares to PDF readers where, although there are multiple readers on the market, Adobe Reader can be recommended for all users).
This means that the user has to decide which reader is most suitable for them. Some questions that the user will need to consider are:
Does the document have to be loaded into the reader library before being consumed, or can it be opened from a standard file directory?
Does the reader interface effectively with the assistive technology they use?
Does the reader provide sufficient customisation options to give the user an optimal experience? Options include font style and size, background-foreground colours, justification, hyphenation.
Can the user set up themes so that they can use different sets of options for different types of documents?
Is the customisation interface easy enough to use? Some options should be very easy to change, but ideally there should be a more sophisticated way of changing the options, e.g. a standard of three background-foreground colours to choose from but with the ability to use CSS to define any combination.
Is there an in-built text-to-speech facility?
There are several PDF readers on the market, but not all of them take advantage of the PDF/UA tagging.
Adobe Reader is the leading reader and is available for all major platforms. Not all of the assistive technologies available understand or take advantage of PDF/UA, especially in the mobile environment.
Video players are available on all major platforms. The problems with video players are that they do not provide functions for: defining structure, navigation, searching, annotation, copying or extraction.
Creation and Conversion tools
There are various EPUB creation tools: there are desktop publishing systems that can be used to generate EPUB documents and there are tools that convert from word processors (Microsoft Office or Open Office) to EPUB.
Assuming many of the documents will be written using a word processor this section concentrates on products that convert the source to EPUB.
Calibre is one tool that will convert from .docx and .odt to .epub and the latest version supports more styles and formats than before. The problem is that there is a lack of documentation as to what can be converted and how it is converted. This information is needed as the ideal is to create the document in the word processor and then automatically generate the .epub without any manual intervention.
Calibre and other tools can read EPUB and convert it into other formats.
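As a rough illustration of automating that conversion step, the sketch below shells out to Calibre's ebook-convert command-line tool. It assumes Calibre is installed and on the PATH, and the folder names are hypothetical.

```python
# Sketch: batch-convert word-processor files to EPUB with Calibre's
# "ebook-convert" CLI (assumes Calibre is installed and on the PATH).
import pathlib
import subprocess

source_dir = pathlib.Path("documents")     # hypothetical input folder of .docx files
output_dir = pathlib.Path("epub_output")
output_dir.mkdir(exist_ok=True)

for src in source_dir.glob("*.docx"):
    dest = output_dir / (src.stem + ".epub")
    # Calibre reads the source, preserves headings and lists where it can,
    # and writes a reflowable EPUB.
    subprocess.run(["ebook-convert", str(src), str(dest)], check=True)
    print("converted", src, "->", dest)
```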
There are several tools for converting .doc, .docx and .odt files into PDF/UA. These include Adobe Acrobat, Microsoft Office and Open Office so the process is well supported by the leading players.
There are products that attempt to convert from PDF to other formats but they tend not to use the PDF/UA tagging so the output often loses much of the structure of the original.
To provide accessible documents to the widest possible set of users an organisation should distribute the documents in accessible EPUB format with some also available as videos with the text read out and signed.
To ensure this is practical there needs to be more research so that recommendations can be made about:
The best readers for different users.
The creation of Word documents that can be automatically converted into EPUB documents.
This recommendation is intended to provide the best long term solution to accessible documents. It should be the solution promoted by the accessibility community. However, the creation and reader technologies for EPUB are at present (January 2014) somewhat immature and lacking a complete set of easily implemented functions. There is a need to persuade the providers of EPUB technology to improve the quality and function of their products.
Therefore, for a distributor of accessible documents who requires an immediately available, low risk solution PDF/UA could be the preferred choice. | <urn:uuid:1124e232-8cec-4ac2-8dfc-a0520446d697> | CC-MAIN-2017-04 | http://www.bloorresearch.com/analysis/accessible-documents/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00000-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921844 | 3,424 | 2.671875 | 3 |
DNS stands for the domain name system of the internet. It works like a directory, rather like a phone book, that lets human-readable names stand in for the numeric addresses computers use. This makes navigation of the internet a far easier task than remembering strings of numbers. But phone books were originally designed for ease of use and so was DNS. When it was invented more than 25 years ago, accessibility and efficiency were on everyone's minds; security was not.
In fact, the security issues of DNS have been known for many years. It contains vulnerabilities that can be exploited by hackers, allowing them to hijack websites and perform other exploits that involve redirecting emails and website address lookups. This can cause user traffic to be directed to a bogus website where malicious code is often implanted, looking to harvest information such as passwords or bank account numbers that can be used for fraudulent purposes. This is bad not only for computer users who may be defrauded, but also for the organisation that owns the website that has been faked, as they are unlikely to realise that their users are being hoodwinked, with disastrous consequences for their brand reputations.
To solve this issue, DNSSEC (DNS security extensions) was developed years ago as a set of protocols that provide secure authentication regarding the origin and integrity of DNS records, but its take up has been slow. In order for it to be successful, the root servers that form the backbone of the DNS have to be cryptographically signed--and that has only just happened. But, now that it has, deployment is picking up pace rapidly. Many of the top-level domains that we are all familiar with, such as .org, .net and .gov, have already been signed, allowing a chain of trust to be formed and providing the next-generation infrastructure for the internet that will make it a much safer place.
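As a small illustration of what validation looks like from a client's point of view, the following sketch uses the third-party dnspython package to ask a validating resolver for a record and check whether the AD (authenticated data) flag was set. The resolver address and domain name are assumptions, and a full chain-of-trust verification involves considerably more work than this.

```python
# Sketch: query through a DNSSEC-validating resolver and check the AD flag.
import dns.flags
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.nameservers = ["8.8.8.8"]        # assumption: a DNSSEC-validating resolver
resolver.use_edns(0, dns.flags.DO, 4096)  # set the DO bit to request DNSSEC processing

answer = resolver.resolve("icann.org", "A")            # a signed zone, for illustration
validated = bool(answer.response.flags & dns.flags.AD)  # resolver validated the answer
print("DNSSEC validated:", validated)

# The zone's public signing keys can also be fetched directly for inspection.
for key in resolver.resolve("icann.org", "DNSKEY"):
    print(key)
```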
The extent of the problems with DNS can be seen in recent survey data from the Center for Strategic and International Studies, which released a report based on a survey of 600 organisations in 14 countries in January 2010. One of the key findings was that 57% of respondents had been victims of a DNS attack in the previous year, with nearly half of those reporting multiple monthly occurrences of such attacks.
By implementing DNSSEC, organisations that have a significant web presence will find themselves in a much better position to ward off DNS attacks that can damage their brand or credibility, or even leave them facing legal liability from customers who have become victims of scams following a DNS attack. In a recent webcast by Bloor Research and F5 Networks (DNS Security: why you need to care), 22% of respondents stated that they thought DNSSEC deployment to be complex and a further 67% stated that they did not know enough about it.
A paper published recently by Bloor Research aims to demystify DNSSEC and shows the benefits that organisations will receive from deploying it to guard their web estate. The benefits are real and deployment need not be complex. The paper can be accessed here: Investing in security versus facing the consequences. | <urn:uuid:77998e6e-77ee-411f-aacc-bc9dc57f4b01> | CC-MAIN-2017-04 | http://www.bloorresearch.com/blog/security-blog/securing-the-internets-infrastructure/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00302-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.975305 | 628 | 3.28125 | 3 |
What Is Shift Management?
A Definition of Shift Management
Any company or organization that runs two or three work shifts per day handles shift management. Setting the shift schedule is one of the very first tasks businesses undertake when they begin operating, but shift management must be ongoing once a business is up and running. Shift management involves giving each worker a clear idea of his responsibilities, including his time schedule for reporting to work each week.
Determining Shifts and Other Shift Factors
The block of time during which workers report to work and perform their duties is a shift. Owners and managers often work together to determine the shift times and the frequency of shift changes. You may begin to determine shift times by considering the days and hours that you will be open for business and then set the total number of hours for your workers on a weekly basis. Part-time shifts typically are four hours, and full-time shifts may vary from about eight to twelve hours, depending on the number of shifts you set for your company and the number of hours you plan to be in operation daily and weekly.
Some companies survey workers or ask for their available work times on their employee applications, in an effort to match workers to the shifts that best suit their schedules and personal needs. As your business grows, you will need to determine how many employees need to be on shift throughout the week. Once you have determined the shift schedule for your organization, it is helpful to require workers to clock in and out or use some form of a time system in order to get paid.
Challenges Associated with Shift Management and 24/7 Operations
It can be especially challenging to manage shifts if your organization operates around the clock. You will need to determine when one workday ends and the next begins. You also may need to consider how to calculate overtime if a shift spans two days, and what the pay rate will be if an employee works a shift that spans a paid holiday. These issues are all the more challenging if you have a time and attendance system that operates on a standard 9am – 5pm schedule. There are three areas of shift management that you need to keep in mind when setting your shifts and exploring shift management solutions that will fit your business needs:
- Pay periods and shifts – Typically, pay periods in a standard schedule begin on Sunday and end on Saturday. In this type of a shift schedule, shifts also normally are contained within one workday, and do not overlap from one day to the next. With a 24/7 schedule, however, shifts easily overlap pay periods and days. This becomes challenging if workers in one shift make a rate that is different from workers in another shift.
- Overtime pay – In a standard shift schedule, workers collect overtime for any work put in after the 40-hour workweek. This is fairly easy to track and manage. However, overtime pay can become a shift management nightmare in a 24/7 operations schedule. If a worker’s shift straddles two different pay periods, you need to be prepared to pay the worker fairly and correctly. Some businesses pay for the worker’s shift in her first pay period when the shift began, some pay in the second pay period when the shift ended, and some split the pay between the two. You need to be aware of these issues, especially if your employees are entitled to overtime pay. (A minimal worked example of splitting a shift across a pay-period boundary is sketched after this list.)
- Holiday pay – Holiday pay sometimes is even more of a shift management challenge than overtime pay. You will need to determine how you are going to handle paying employees whose shifts cross over a regular day and a holiday. If your organization gives paid holidays, will your worker get paid for the hours of the regular day and then leave the shift early, when the holiday begins? If it’s a paid holiday but the employee works, what will her compensation be?
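To make the pay-period issue concrete, here is a minimal sketch, with made-up dates and a Sunday-to-Saturday pay period, of how a shift that straddles the boundary can be split so each portion is paid in the correct period.

```python
from datetime import datetime

def split_shift_hours(shift_start, shift_end, period_end):
    """Split one shift's hours across a pay-period boundary.

    period_end is the instant the first pay period closes (e.g. Sunday 00:00).
    Returns (hours_in_first_period, hours_in_second_period).
    """
    if shift_end <= period_end:        # shift entirely in the first period
        return ((shift_end - shift_start).total_seconds() / 3600, 0.0)
    if shift_start >= period_end:      # shift entirely in the second period
        return (0.0, (shift_end - shift_start).total_seconds() / 3600)
    first = (period_end - shift_start).total_seconds() / 3600
    second = (shift_end - period_end).total_seconds() / 3600
    return (first, second)

# Example: a night shift that straddles the Saturday/Sunday pay-period boundary.
start = datetime(2024, 1, 6, 22, 0)     # Saturday 22:00
end = datetime(2024, 1, 7, 6, 0)        # Sunday 06:00
boundary = datetime(2024, 1, 7, 0, 0)   # pay period closes Sunday 00:00
print(split_shift_hours(start, end, boundary))  # (2.0, 6.0)
```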
Shift management solutions can help you plan and schedule shifts for any workforce. If your organization is large, or if you are struggling with the shift management challenges mentioned above, it may be time for you to consider a shift management solution that accounts for employee availability and preferences and sets schedules based on your company’s policies and regulations. | <urn:uuid:402a9cca-00f0-4f54-ac73-89128caaa294> | CC-MAIN-2017-04 | https://www.clicksoftware.com/blog/what-is-shift-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00302-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94509 | 840 | 2.90625 | 3 |
The makings of a federal police e-academy
Training for federal law enforcement officers might soon incorporate many of the online technologies that universities have been using to make instruction more accessible, comprehensive and affordable.
John Besselman is leading a program at the Homeland Security Department’s Federal Law Enforcement Training Center that is exploring using virtual and digital learning capabilities to improve the education of the more than 60,000 students who receive training at the center’s campuses each year. He said he hopes the program, named Train 21, can help the center save space and time, make more efficient use of its employees, and improve the overall effectiveness of its training.
“We have an opportunity to take advantage of the Digital Age,” he said.
The new approach to training could involve offering distance learning programs to some groups of students, online continuing education courses and digital learning resources. The center published a request for information Nov. 10 in the Federal Register.
The center's students come from more than 80 federal agencies. They typically stay at one of the center's campuses while they attend training sessions for anywhere from a few days to several months. Although the center differs from traditional universities, Besselman said the benefits of electronic learning are similar for all students.
For example, he said, allowing established law enforcement officers to complete advanced training remotely could mean significant savings in time and money for employees and agencies.
Although students will still need to travel to a campus for basic training, using information technology could mean they spend less time away from their jobs. Furthermore, the roughly 500 professors who work at the center’s campuses will have more tools for delivering information and interacting with students.
“The generation is such that they are willing, ready and capable of receiving their information in different formats,” Besselman said.
Implementing all the ideas that he envisions could cost as much as $70 million. However, he said many of the initiatives cost relatively little and are easy to implement. For example, making better use of the center’s TV station for training purposes, using podcasts and digitizing materials are low-cost endeavors.
Besselman said he knows that anything new can be scary, but since he opened the Train 21 office at the center’s Glynco, Ga., headquarters campus, he has been surprised by how many people have stopped by with good ideas.
Train 21 is still in its early planning phase and focused on developing detailed plans, prototypes, a proof of concept and a long-term implementation road map, the RFI states. Officials are seeking information about whether organizations can help the center:
• Explore e-learning best practices that have been successful elsewhere and could be applied to law enforcement training.
• Identify e-learning solutions that are successful and cost-effective.
• Design and develop pilot training, learning solutions and acquisition plans.
• Document training and technology prototypes that can serve as blueprints for larger-scale implementations.
• Develop a road map for the long-term evolution of e-learning in a major, multicampus training environment.
Responses to the RFI are due by Nov. 21.
Ben Bain is a reporter for Federal Computer Week. | <urn:uuid:8c626db8-9058-4256-823b-c253097500d4> | CC-MAIN-2017-04 | https://fcw.com/articles/2008/11/17/the-makings-of-a-federal-police-eacademy.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00238-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95571 | 663 | 2.890625 | 3 |
GRAPEVINE, Texas—Master data management is all about control and governance, making sure that there is one set of truth that helps feed clean data to important business processes.
That’s why big data scares the administrators tasked with keeping master data clean: the variety, velocity, volume and complexity—to use Gartner’s definition—make it almost impossible to control.
But big data can actually serve as a complement to master data management, changing the way executives understand data’s role in the enterprise and helping the master data management system adapt more rapidly to new trends.
Just know that the emerging sources, properties and behaviors of data are unruly, and there is nothing you can do about it.
Master data is any non-transactional data that is critical to the operation of a business—such as customer or supplier data. Master data management is the process of managing that data to ensure consistency, quality and availability. New sources of data, such as machine sensor outputs or entries on social networks, are challenging this field.
“Big data is an invader,” said analyst Mark Beyer, speaking at Gartner’s Master Data Management Summit. “It’s coming to your shores, and it’s challenging everything you do.”
Because the information that is considered big data is usually created outside of your organization, it’s highly unlikely you’ll be able to control its form, Beyer said. New sources of information are created constantly and at varying speeds.
The struggle is to figure out when data assets created by machine sensors, social networks, email or any other sources start to identify a relevant trend.
“In a big data environment, anybody can create a new value at any time, and it grows in persistence and in popularity, and pretty soon it starts to look like master data,” Beyer said.
Value, in this case, means a unique piece of data. But these trends have to be considered only a suggestion for a new value, one that needs to be vetted against current master data. The master data management system needs analysis to understand if the new value is a synonym or homonym, or jargon.
This is where unstructured data analytics methods can actually help: mining text to discover the meaning of new words or values, Beyer said.
“I can use master data management to build a structure for managing my big data,” Beyer said. “I’m using what I already know to dive into the next data set.”
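A very small sketch of that idea follows: candidate values harvested from unstructured sources are screened against an existing master-data glossary before anyone treats them as new master data. The glossary entries, threshold and matching method are illustrative assumptions, not a prescribed approach.

```python
# Sketch (hypothetical data): screen candidate values mined from unstructured
# sources against an existing master-data glossary.
from difflib import SequenceMatcher

master_glossary = {
    "ACME Corporation": ["acme corp", "acme inc"],
    "Widget Pro 3000": ["widgetpro 3000"],
}

def best_match(candidate, glossary, threshold=0.85):
    """Return the master record a candidate most likely refers to, if any."""
    best_key, best_score = None, 0.0
    for key, synonyms in glossary.items():
        for known in [key] + synonyms:
            score = SequenceMatcher(None, candidate.lower(), known.lower()).ratio()
            if score > best_score:
                best_key, best_score = key, score
    return (best_key, best_score) if best_score >= threshold else (None, best_score)

# Candidate values harvested from, say, social media or email text.
for value in ["Acme Corp.", "Widgit Pro 3000", "Completely new thing"]:
    print(value, "->", best_match(value, master_glossary))
```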
The Question of Assigning Values to Data Assets
Big datasets are getting popular because they potentially contain value, if used correctly. This has led to data-heavy companies and industry analysts to consider what actual monetary value data has.
According to national insurance and accounting standards boards the answer is “none.” You can’t file a claim for data loss, and you can’t claim data on your balance sheet.
Gartner analyst Doug Laney thinks that should change; he’s conducting research into a field he calls infonomics to consider how best to value data as an asset.
Hand in hand with that study, Laney said, is the consideration of how master data management would change if data had real monetary value, because “we’re moving beyond the notion that information is just a byproduct to information being a resource,” Laney said.
“Imagine any other asset in your organization where you don’t know its cost, and you can’t account for the value it’s generating,” Laney said. “That would be inconceivable to your CFO.”
The result of this thinking will be better management and caretaking of data, to increase its value, and better tracking metrics for how data is used and the value it generates.
“You can use this kind of method to get an idea of the value of all those [analytical] reports,” Laney said.
The process from awareness to mindful action is a journey. Not everyone starts at the same place or progresses in the same way.
The journey begins with awareness. The powerful moment when an individual realizes the impact of actions, decisions, or events. In their own context, using their own words.
Initially, awareness may not include understanding, or even a pathway to action. It serves as an awakening. It stokes the desire to learn. It reveals the *need* to change behaviors.
So where does it start?
The first step of awareness is knowing where you are. Making sense of the awareness, even without understanding. Establishing the context and considering the implications.
Security awareness starts the same way
As people connect security impacts (positive and negative) to actions, decisions, and events, it is important to establish context. They seek a mechanism to make it make sense. Ultimately, they need a way to figure out where they are.
Where someone is helps identify the journey, as well as potential steps. It starts the process of understanding, action, training.
Recognizing the importance of the first step impacts (or improves) the design of security awareness programs. It has three implications for those responsible for security awareness programs.
1. Security awareness requires individual responsibility
There is a key distinction between telling someone something, and when they realize it for themselves.
Each individual is responsible to assess their own level of awareness. They make personal decisions to move, grow, and progress accordingly.
This means our role is creating the environment and the situations that allow people to reclaim their responsibility. To stop disconnecting them and work alongside them.
2. Security awareness programs must help others assess themselves
As people take responsibility, our role shifts to helping them make sense of what they are now aware of. The context and the conversations are important. Especially the conversations.
This is also where questions guide the experience. Basic questions get the process started:
- What is your current level of awareness? How do you know?
- What is your experience? How does your experience shape your awareness?
- What is your knowledge? Does the person know about technology, but not security? Or maybe physical security, but not technology?
Note the open-ended nature of the questions. This is a double-edged sword (and something we need to explore further). The purpose is to help people assess themselves, using their language, experience, and context.
It's an opportunity for us to learn how to meet their needs in the process.
If we provide "the answers," and ask people to conform, we prevent them from a proper self-assessment and risk missing out on crucial information for our efforts.
3. No one-size-fits-all
There is no one size fits all. It's a phrase that gets a lot of lip service. When it comes to individual realizations, however, the Perfect Message Fallacy (PMF) is certain to derail blanket awareness communications.
This is why awareness must be separated from training and development (for more insight, consider Understanding awareness, training, and development).
We need a systematic approach to help people figure out where they are in the process. It needs to provide a sense of confidence and acceptance with their current "location," and a pathway to head down. Autonomy of experience, timing, and outcome is important (and why this is a harder challenge than often considered).
As a starting point, work to help people discover ways to assess themselves. Focus on conversations. Take notes, look for patterns, and let's start a dialog. | <urn:uuid:1b8000d5-50f6-4a59-9d2c-dcac4db6c80c> | CC-MAIN-2017-04 | http://www.csoonline.com/article/2136701/security-awareness/3-truths-for-getting-started-with-security-awareness.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00203-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952002 | 737 | 3.234375 | 3 |
The internet could be running on 1,000 times less energy than it presently uses within five years, say members of a consortium of IT and communications equipment and services suppliers.
"This is equivalent to 7.8GTn of CO2, or 15% of the total world emissions predicted by 2020," said Vicente San Miguel, CTO of Telefónica, one of the consortium members.
The consortium, called Green Touch, has drawn political support from the UK, US, France, Portugal and South Korea.
It is led by Bell Labs, the R&D division of French-US telecommunications equipment maker Alcatel-Lucent.
A thousand-fold reduction is roughly equivalent to being able to power the world's communications networks, including the internet, for three years using the same amount of energy that it currently takes to run them for a single day, Green Touch said.
The first meeting of the consortium will be in February to thrash out roles, responsibilities and deliverables for 2010.
UK energy secretary Ed Miliband said, "The ICT sector is perfectly placed to bring its innovative and technological forces to bear in the low-carbon transition as well as in curbing its own carbon footprint."
The Green Touch founding members are:
- China Mobile
- Portugal Telecom
Academic research labs
- Massachusetts Institute of Technology's Research Laboratory for Electronics (RLE)
- Stanford University's Wireless Systems Lab (WSL)
- University of Melbourne's Institute for a Broadband-Enabled Society (IBES)
Government and non-profit research institutions
- CEA-LETI Applied Research Institute for Microelectronics (Grenoble, France)
- Imec (Leuven, Belgium)
- French National Institute for Research in Computer Science and Control (INRIA)
- Bell Labs
- Samsung Advanced Institute of Technology (SAIT)
- Freescale Semiconductor | <urn:uuid:04faf759-d64f-4fdd-b513-7cbe0d500a1a> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/1280091817/Green-Touch-consortium-aims-to-slash-internets-carbon-footprint | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00229-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.906443 | 425 | 2.625 | 3 |
The purpose of antivirus protection on a computer is to prevent the entrance of viruses. There is certainly good reason for using such software, as there are a great number of viruses which are capable of seriously damaging the data held on the infected system. We have recently seen the appearance of other types of malicious code which do not necessarily destroy the system’s information, or at least not directly, but which should nevertheless be targeted by antivirus software.
A few years ago, when viruses alone constituted the most important threat faced by computers, a new category of specialist software was developed to combat this threat: antivirus programs. The subsequent proliferation of other threats, such as worms and Trojan horses, led to the incorporation of new features in antivirus software in order to protect systems from these threats. While the name “antivirus software” remained, the protection these programs offered needed to be widened to include other elements which were not viruses in the strict sense of the word. For example, according to the strict definition of a worm this is a piece of code which seeks only to multiply, without damaging the information held on the computers which it uses as a platform (the typical behavior of a virus). In this case, and ignoring for a moment that many worms also alter information, an antivirus program should not concern itself with this sort of code, as the system data would be safe.
In reality, antivirus software does detect and eliminate worms, as these not only propagate themselves but usually also cause damage on the computers which they infect. Detecting worms is vital because they can cause entire email systems to collapse in a matter of minutes, and the damage this causes, while indirect, is very noticeable and, what’s more, can be quantified in financial terms.
The same applies to Trojan horses. While they are not damaging in themselves, there is the possibility that a hacker may use them to carry out damaging actions either on the computer on which they have been installed or on others which the attacker is able to access via the infected system.
These three types of harmful software have now been joined by others (I’m not just referring to executable code) which can cause problems or losses of various types on a computer system. This is the software known by the collective name of “Malware”; a term formed by combining the words “Malicious” and “Software”. This concept encompasses spyware, adware, jokes, spam, etc: anything which causes a system to perform tasks which create inconvenience for the user or which are performed without his or her realizing it. In sum, malware is any software which maliciously violates the privacy of a user or computer system or diminishes productivity for financial gain.
Invasion of privacy is one of the effects of adware and spyware, which obtain information without consent. Spam and some hoaxes involve sending emails to users in order to achieve financial gain, and can have a dramatic effect on productivity.
The appearance of these types of malware has meant that antivirus programs have had to take another leap forward to improve the protection they offer users. While the term “antivirus software” would appear to imply that such software only protects against viruses, its range of functions has once again been widened, just as happened when worms and Trojan horses first appeared.
It is often argued that spam is not a type of malware, as it does not contain any software. While this may be true, spam can still be very harmful, if only because of the space it occupies on computers and servers, and the time which has to be spent deleting it. If a company’s employee spends 5 minutes a day deleting unwanted emails, it is easy to calculate the financial impact of this; over the course of a year, 5 minutes a day are the equivalent of more than two working days dedicated solely to deleting spam (on the basis of 8 hours a day, and 200 working days a year). You only need to work out the average daily salary of the company’s employees to see just how much money can be lost as a result.
(Of course, the above calculation could also be used to argue that the coffee machine is one of the greatest causes of losses in any business, as more time is usually spent taking coffee than deleting spam. However, drinking coffee is something which employees enjoy, while the resultant caffeine intake is good for the company’s productivity; by contrast, deleting unwanted emails is not something which anyone likes doing.)
Junk mail has a series of characteristics which make it relatively easy to identify. Almost all of them use very similar messages to try to persuade the user to buy something. Specialized software can use the structure and content of these messages to create a profile of the emails received, and can then use this profile to classify some mail as spam.
The main challenge when creating such profiles is how to avoid labeling as spam messages which are users actually need to receive. For example, it would not be possible to systematically delete any email containing the word “Viagra”, which frequently appears in spam, as in some circumstances this word could appear in a legitimate email. So the analysis must be based on more than one word, or on the appearance of combinations of words or email formats.
A good system for detecting unwanted emails must be capable of learning. In other words, when the system incorrectly identifies a message as being spam, it should be able to “study” the message and learn which characteristics make it of interest to the user. Then, when similar messages are received in the future the system will not reject them.
The system also needs to be able to learn in the opposite situation: that of so-called “false negatives”. Where a user wishes to receive a certain type of email – which in principle could be classified as spam – the system should recognize the characteristics of these and allow the user to receive them. We should not forget that most spam consists of offers and other business communication which could be of interest to the user.
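The sketch below shows the learning idea in miniature: a crude word-frequency filter whose profile shifts when the user corrects it. It is illustrative only; real products use far more sophisticated statistical models and message features.

```python
# Toy word-frequency spam filter that "learns" from user corrections.
from collections import Counter

class TinySpamFilter:
    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def train(self, text, is_spam):
        words = text.lower().split()
        (self.spam_words if is_spam else self.ham_words).update(words)

    def score(self, text):
        """Crude spam score: +1 per known spam word, -1 per known wanted word."""
        total = 0
        for word in text.lower().split():
            total += self.spam_words[word] - self.ham_words[word]
        return total

f = TinySpamFilter()
f.train("buy cheap viagra now", is_spam=True)
f.train("viagra prescription renewal from your doctor", is_spam=False)  # user correction
print(f.score("cheap viagra offer"))        # positive: leans spam
print(f.score("prescription from doctor"))  # negative: leans wanted mail
```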
Spyware and adware are types of harmful software which are used by some unscrupulous individuals to spy on the behavior of Internet users. These applications, also called “spy programs”, are a form of malware, as they invade people’s privacy when using the internet.
Spyware and adware focus mainly on how users click on certain types of advert, and on the time users spend viewing web pages. This data and the email address of the user who is being spied on are then used to create user profiles which are sent to the creators of the spy program. This information is incorporated in large databases of detailed consumer profiles, and these are then sold to advertisers.
There are still numerous myths going around on the Internet describing the terrible disasters which will befall our computers if we open an email with a particular subject line: hard disks will be erased, monitors will be damaged, broadband connections will be rendered unusable, etc.
The great majority of information circulating on the Internet warning people about new viruses is completely false; such rumors, generally spread via email, are referred to as “hoaxes”. Somebody wants to play a trick and sends the hoax out to everyone he knows, asking them to send the message on to everyone in their address book. What does the hoaxer gain from this? Sometimes this is done for entertainment alone, while others reap the benefit at the end: the addresses obtained from sending and resending hundreds of emails are used to create huge distribution lists which can then be used in an advertising emailing, for example.
In situations of uncertainty or where there is already, for example, widespread fear of terrorist attacks, this can degenerate into all-out panic, helping false alarms to proliferate. For this reason it is important to draw a clear distinction between genuine virus alerts and hoaxes.
The whole problem of hoaxes is much more serious and more difficult to combat than one might think, with many of them circulating freely on the internet, and with all attempts to control them apparently doomed to failure. In fact, many experts believe that putting a stop to them is more or less impossible, although we can all help to reduce the number of hoaxes circulating on the internet.
While false virus alarms are perhaps the favorite method used by internet tricksters, it is also worth distinguishing other types of rumor in order to ensure that the issues are not confused yet further. Many of these are little more than varieties of hoax, but others may have a range of implications which can endanger the security of computer systems.
Hoaxes are really a type of “urban legend” which have flourished in tandem with the expansion of means of communication such as the internet. This gives rise to different types of rumors, and these can be classified according to their subject matter and the type of message they generate.
The reasons for combating such rumors are obvious: not only do they waste time, like spam, but they also create a state of alarm and worry which is harmful to both companies (and their employees) and to home users.
The antivirus protection installed in most companies does an excellent job of protecting against viruses, worms and Trojan horses. However, in today’s world we also need to fight many other threats which, while they may not directly damage our computer systems, can cause other indirect damage.
Properly-installed security must address much more than just viruses, and this will lead to higher productivity for everyone and peace of mind for all those concerned with security issues. | <urn:uuid:2ee07968-1530-4f59-870b-380616eded65> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2003/11/28/current-antivirus-software-is-not-enough/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280128.70/warc/CC-MAIN-20170116095120-00047-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963539 | 1,956 | 3.265625 | 3 |
For the automotive and aerospace industries, crash and safety analysis by finite elements is used to shorten the design cycle and reduce costs. Recently, a popular crash/safety simulation set a new speed record. Over on the Cray blog, Greg Clifford, manufacturing segment manager at the historic supercomputing company, explains how LS-DYNA “car2car” simulation reached new heights, running on a Cray supercomputer, pointing the way for engineering simulations that can take advantage of the massive computing power offered by next-generation systems.
The Cray XC30 supercomputer, outfitted with Intel Xeon processors and bolstered by the scalability of the Aries interconnect, enabled engineers to run the “car2car” model, a 2.4-million element crash/safety simulation, in under 1,000 seconds. The results of the LS-DYNA simulation are posted on topcrunch.org, which documents the performance of HPC systems running engineering codes.
The record-setting job turnaround time was 931 seconds, but equally important, the simulation broke new ground by harnessing 3,000 cores. “As the automotive and aerospace industries continue to run larger and more complex simulations, the performance and scalability of the applications must keep pace,” notes Clifford.
Clocking in under 1,000 seconds marks a significant milestone in the ongoing effort to enhance performance. Over the past quarter-century model sizes for crash safety simulations have increased by a factor of 500. At first, the computing power only enabled single load cases, like frontal crashes. Over time, the models grew to support 30 load cases at once, and now incorporate frontal, side, rear and offset impacts.
As further detailed in this paper, researchers from Cray and Livermore Software Technology Corporation found the key to improving LS-DYNA scalability was to employ HYBRID LS-DYNA, which combines distributed memory parallelism using MPI with shared memory parallelism using OpenMP. This was preferable to using MPP LS-DYNA, which only scales to about 1,000 to 2,000 cores depending on the size of the problem.
Clifford writes that, over time, crash/safety simulation has evolved from being mainly a research endeavor to becoming a crucial part of the design process – it was a change that followed the democratization of HPC, as ushered in by Moore’s law-prescribed progress. The automotive and aerospace fields have become full-fledged HPC-driven enterprises, and have reaped the benefits of shorter design times and safer, more-performant end products.
The MPI framework for parallel simulations and the increase in processor frequency provided the foundation for this transformation. But the playing field is changing. With chip speeds leveling off, now software must be mined for hidden inefficiencies. This is why, in Clifford’s opinion, the recent car2car benchmark performance is so significant. It signifies a changing paradigm and where the focus must shift.
Some of the models in use today incorporate millions of elements. Take the THUMS human body model with 1.8 million elements – and safety simulations, which are headed to over 50 million elements.
“Models of this size will require scaling to thousands of cores just to maintain the current turnaround time,” observes Clifford. “The introduction of new materials, including aluminum, composites and plastics, means more simulations are required to explore the design space and account for variability in material properties. Using average material properties can predict an adequate design, but an unfortunate combination of material variability can result in a failed certification test. Hence the increased requirement for stochastic simulation methods to ensure robust design. This in turn will require dozens of separate runs for a given design and a significant increase in compute capacity — but that’s a small cost compared to the impact of reworking the design of a new vehicle.” | <urn:uuid:395e800c-7c25-48f1-ae7a-b8f8ccff972f> | CC-MAIN-2017-04 | https://www.hpcwire.com/2014/05/08/safety-simulation-sets-speed-record/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00561-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91282 | 803 | 2.5625 | 3 |
What cool future passenger aircraft will look like:
Quieter, cleaner, and more fuel-efficient, but not necessarily avant-garde in design: that would describe what NASA researchers have come up with in designing future passenger aircraft. NASA in May said an 18-month study featuring teams of aircraft experts from Boeing, Northrop, GE Aviation and the Massachusetts Institute of Technology used all manner of advanced technologies from alloys, ceramic or fiber composites, carbon nanotube and fiber-optic cabling to self-healing skin, hybrid electric engines, folding wings, double fuselages and virtual reality windows to come up with a series of aircraft designs that could end up taking you on a business trip by about 2030.
3.1.1 WHAT IS THE RSA CRYPTOSYSTEM?
3.1.2 HOW FAST IS THE RSA ALGORITHM?
3.1.3 WHAT WOULD IT TAKE TO BREAK THE RSA CRYPTOSYSTEM?
3.1.4 WHAT ARE STRONG PRIMES AND ARE THEY NECESSARY FOR THE RSA SYSTEM?
3.1.5 HOW LARGE A KEY SHOULD BE USED IN THE RSA CRYPTOSYSTEM?
3.1.6 COULD USERS OF THE RSA SYSTEM RUN OUT OF DISTINCT PRIMES?
3.1.7 HOW IS THE RSA ALGORITHM USED FOR PRIVACY IN PRACTICE?
3.1.8 HOW IS THE RSA ALGORITHM USED FOR AUTHENTICATION AND DIGITAL SIGNATURES IN PRACTICE?
3.1.9 IS THE RSA CRYPTOSYSTEM CURRENTLY IN USE?
3.1.10 IS THE RSA SYSTEM AN OFFICIAL STANDARD TODAY?
3.1.11 IS THE RSA SYSTEM A DE FACTO STANDARD?
OS X 10.x (Client)
When you start your Macintosh investigation it is important to know what version of the operating system is installed on the computer. The version of OS X (10.4, 10.5, 10.6) can shape and direct the analysis, as each version has unique characteristics that affect other artifacts as well as their locations on the disk.
Macintosh operating systems use plist files (.plist) as repositories for system and program settings and information. Plist files can either be in a binary-encoded format (identified by a "bplist" file header) or in XML.
To get the operating system version, the first plist file you will want to examine is "SystemVersion.plist", located in the "/System/Library/CoreServices/" folder. With this knowledge you can be aware of other plists and system artifacts that are unique to the OS under inspection.
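If you prefer to pull this value programmatically rather than through a GUI editor, Python's standard plistlib module reads both encodings. A minimal sketch; the mount point is a placeholder, and the key names shown are those commonly found in SystemVersion.plist, so verify them against your own evidence copy:

```python
# Minimal sketch: read the OS version from a mounted, read-only copy
# of the target disk. plistlib handles both the XML and the binary
# ("bplist") encodings automatically.
import plistlib

# Placeholder mount point; adjust to wherever the evidence is mounted.
PLIST = "/mnt/evidence/System/Library/CoreServices/SystemVersion.plist"

with open(PLIST, "rb") as fp:
    info = plistlib.load(fp)

# Keys commonly present in SystemVersion.plist; verify on your image.
print(info.get("ProductName"))          # e.g. "Mac OS X"
print(info.get("ProductVersion"))       # e.g. "10.6.8"
print(info.get("ProductBuildVersion"))  # e.g. a build string
```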
Forensic Programs of Use
plist Edit Pro (Mac):
plist Editor Pro (Win):
In this short series of articles, I explore the concept and reality of “big data.” What is it and where does it come from? Why is it important? How does it add value to the business? What is its impact on traditional data warehousing and business intelligence? In Part 1, I explored the first two questions: What is it and where does it come from? In Part 2, I examine the next two questions: Why is it important and how does it add business value?
The Importance of Big Data
Strange though it may seem, big data is nothing new! In one sense, based on the definition from Wikipedia I quoted in Part 1 of this series, “Big Data is a term applied to data sets whose size is beyond the ability of commonly used software tools to capture, manage and process the data within a tolerable elapsed time,” big data has existed since the first days of computing. The game-changing IBM System/360 mainframe first shipped in mid-1965 with disks (or DASD, direct access storage devices, as they were then known) that had an unformatted capacity of 7.25 megabytes. (For comparison, the IBM PC/XT shipped in 1983 with a 10MB hard drive.) When combined with main memories of only a few KB, as was often the case, it’s easy to see that it didn’t take very large (by today’s standards) data sets to tax the ability of the systems of the time.
Since the 1990s, data warehousing and data mining have brought numerous examples of big (for their time) data. Walmart’s Teradata-based data warehouse, dating from that period, has been consistently among the largest data warehouses in the world. Walmart is notoriously secretive about their system, but a variety of sources on the Web give figures. In the early 1990s, it started at 340GB. By 2000, it was said to be 70TB. By 2004, it was 500TB or half a petabyte. In 2008, a figure of 4PB was being mentioned. For their time, these are certainly big data, before the term was invented, with correspondingly large storage and management costs. And Walmart, as the world’s largest retailer through most of that time, is certainly renowned for getting value for its money.
From these and other examples, we may draw a couple of conclusions. First, data size has always been pushing the limits of computer storage and processing technology. In that sense, what we’re seeing today is not new. So far, technology has been able to accommodate this data growth, so it is probably a reasonable assumption that it will continue to do so. Second, some highly successful, leading-edge companies have consistently invested in the (relatively) expensive technologies needed to use big data and have reaped significant value from it. This, in a nutshell, is the importance of big data: Big data enables innovative businesses to become leading-edge adopters of new approaches to doing business and thus become particularly successful.
Of course, this is not to say that any business adopting big data will necessarily become a leading-edge adopter of a new business approach. There are other highly significant factors such as the market environment in which the business operates and the organization’s ability to adapt to change. Furthermore, like any technical innovation, big data may confer first-mover advantage on a particular business, but then becomes mandatory for the competition simply to survive. There is hardly a large retailer who hasn’t used data in a manner similar to Walmart; however, for a variety of reasons, perhaps related to Walmart’s market share or business ethos, they have been unable to achieve Walmart’s level of success.
The Value of Big Data
Recognizing that big data has long been with us allows us to look at the historical value of big data, as well as current examples. This allows a wider sample of use cases, beyond the Internet giants who are currently leading the field in using big data. This leads us to the identification of value in two broad categories: pattern discovery and process invention.
Pattern Discovery
First, let me be clear that pattern discovery alone is not of value to the business. The title is simply shorthand for “pattern discovery and innovative reaction!” Clearly, discovering a pattern in, for example, customer behavior may be very interesting, but the real value occurs when we put that discovery to use by changing something that reduces costs or increases sales.
Pattern discovery leads us directly back to data mining. There are probably few of us who haven’t heard and perhaps repeated the “beer and diapers (or nappies)” story: A large retailer, supposedly Walmart, discovered through basket analysis – data mining till receipts – that men who buy diapers on Friday evenings are also likely to buy beer. They rearranged the store layout to place the beer near the diapers and watched beer sales climb. Sadly, this story is now widely believed to be an urban legend or sales pitch rather than a true story of unexpected and momentous business value gleaned from data mining. Nevertheless, it makes the point as well as any number of real examples: there are nuggets of useful information to be discovered through statistical methods in any large body of data, and action can be taken to benefit from these insights.
This particular story illustrates data mining in a single, well-understood data set, with the insight used to target a previously unidentified segment of the customer set – “fathers who buy diapers when supplies run low at home” or some similar categorization. Such uses of big data remain common and provide business value through targeted marketing to smaller and ever more specific micro-segments of the market until we reach the nirvana of the “segment of one.” (In my research, I was surprised to find this concept being discussed as far back as 1989 by the Boston Group, before data warehousing and data mining were advanced enough to make it possible!)
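Mechanically, the discovery step behind such stories is little more than counting co-occurrences across receipts at scale. A toy sketch with invented data:

```python
# Toy basket analysis: count how often pairs of items share a receipt.
# The receipts below are invented purely for illustration.
from collections import Counter
from itertools import combinations

receipts = [
    {"diapers", "beer", "chips"},
    {"diapers", "beer"},
    {"milk", "bread"},
    {"diapers", "wipes", "beer"},
]

pair_counts = Counter()
for basket in receipts:
    pair_counts.update(combinations(sorted(basket), 2))

# "Support" of a pair = share of receipts containing both items.
for pair, n in pair_counts.most_common(3):
    print(pair, f"support = {n / len(receipts):.2f}")
```

Real market-basket tools add measures such as confidence and lift and prune the combinatorial blow-up, but the underlying counting is the same.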
More recently, combining data sets from multiple sources, both related and unrelated, with increasing emphasis on computer logs such as clickstreams and publicly available data sets has become popular. Sometimes referred to as “mining the data exhaust,” this approach can allow specific individuals to be identified without requiring them to opt in by providing individually identifiable information, as was needed previously. Clearly, there is also business value here, but at what cost to privacy and individual choice?
Process Invention
As discussed, mining behavior data allows existing processes to be tweaked and changed to provide better business value. However, the second approach to getting business value from big data involves using the data operationally to invent an entirely new process or substantially re-engineer an existing one. Beyond basket analysis, most retailers in the 1990s also used the cash register data to re-engineer their restocking and supply chain processes. It makes sense that goods purchased at the register deplete stock on the shelves, which must then be replenished. Restocking shelves depletes the store room and supplies must be reordered, and so on. In the past, restocking and reordering were largely triggered by a manual process in which floor or warehouse staff had to first notice that stocks were low and then take action. By automating these processes from triggers in the sales/inventory system, enormous financial and customer satisfaction benefits accrue. The twin terrors of retail – out of stock or overstock situations – are avoided.
Machine sensor data, the first category of big data discussed in Part 1 of this series, is key to this level of process re-engineering. Such data consists essentially of raw, unadulterated operational events that can drive new or re-engineered processes. A prime example of this type of process invention comes from the automobile insurance industry, where on-board sensors of acceleration and braking, among others, transmit driving behavior information in real time to the insurance company. Premiums are adjusted based on measured behavior. This is an entirely new way of offering automobile insurance that was impossible before the advent of this type of big data, and clearly allows new ways of achieving business value.
For those contemplating investment in big data, the most important conclusion from this article is to recognize that there are very specific combinations of circumstances in which big data can drive real business value. Sometimes, of course, it is the price for simply staying in the game, as we saw in the case of retailing. Other times, it can open up a new market niche for profitable exploitation, at least for the first movers. Recognizing and taking advantage of such opportunities demands a close partnership between IT and the business to understand all aspects of the situation.
In Part 3 of this series, I’ll take a look at the tools and techniques for using and managing big data and the importance of understanding the roles of IT and business users.
Hurricane Sandy demonstrated that future upgrades to electric grids should focus not only on making them tougher, but also on making them smarter. Traditional approaches to hardening electric grids typically include burying electric lines and building tougher power poles to withstand storms, but such approaches are expensive and don't account for the heavy winds and mass flooding seen in storms like Sandy. More utility companies, like Commonwealth Edison in Illinois and the Electric Power Board in Chattanooga, Tenn., are taking an adaptive approach, according to an article in the Wall Street Journal.
Rather than try to build an unbreakable system, new smart grid systems accept that damage will occur and attempt to isolate problems so they don't take down the whole system. Chattanooga spent $100 million in federal funds on its new grid, which uses 1,200 smart switches that direct the flow of electricity dynamically, adapting to changing grid conditions. A fallen tree or a flood, for instance, won't take out large parts of the grid, as it would with a traditional electric grid. Chattanooga's upgrades were also much cheaper than traditional methods. Burying electric lines would have cost the city as much as $2 billion, according to David Wade, chief operations officer for Electric Power Board, the city-owned utility.
Chattanooga's smart grid got its first test during a windstorm in July. About 35,000 homes and businesses lost power for as long as three days, and 42,000 locations suffered momentary outages as the smart grid rerouted electric traffic. In a traditional system, the outages would have lasted longer and been more widespread.
One-time password systems provide a mechanism for logging on to a network or service using a unique password which, as the name suggests, can only be used once. This prevents some forms of identity theft by making sure that a captured user name/password pair cannot be used a second time. Typically the user's logon name stays the same, and the one-time password changes with each logon. One-time passwords are a form of so-called strong authentication, providing much better protection for online bank accounts, corporate networks and other systems containing sensitive data.
Today most enterprise networks, e-commerce sites and online communities require only a user name and static password for logon and access to personal and sensitive data. Although this authentication method is convenient, it is not secure because online identity theft – using phishing, keyboard logging, man-in-the-middle attacks and other methods – is increasing throughout the world.
Strong authentication systems address the limitations of static passwords by incorporating an additional security credential, for example, a temporary one-time password (OTP), to protect network access and end-users’ digital identities. This adds an extra level of protection and makes it extremely difficult to access unauthorized information, networks or online accounts.
One-time passwords can be generated in several ways, and each has trade-offs in terms of security, convenience, cost and accuracy. Simple methods such as transaction number lists and grid cards can provide a set of one-time passwords. These methods offer low investment costs but are slow, difficult to maintain, easy to replicate and share, and require users to keep track of where they are in the list of passwords.
A more convenient option is an OTP token, a hardware device capable of generating one-time passwords. Some of these devices are PIN-protected, offering an additional level of security. The user enters the one-time password along with other identity credentials (typically user name and password), and an authentication server validates the logon request. Although this is a proven solution for enterprise applications, the deployment cost can make it expensive for consumer applications. Because the token must use the same method as the server, a separate token is required for each server logon, so users need a separate token for each Web site or network they use.
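Many such tokens implement HOTP, the counter-based scheme standardized in RFC 4226, where each password is derived from a shared secret and a moving counter. A minimal sketch of the core computation (the secret below is the RFC's published test key, not something to deploy):

```python
# HOTP (RFC 4226): a one-time password derived from a shared secret
# and a moving counter. The token increments the counter per use; the
# server keeps its own copy and accepts a small look-ahead window.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# The RFC's published test secret; illustrative only, never deploy it.
secret = b"12345678901234567890"
print(hotp(secret, counter=0))   # 755224 (RFC 4226 test vector)
print(hotp(secret, counter=1))   # 287082
```

Time-based tokens (TOTP, RFC 6238) reuse the same construction with the counter replaced by the number of 30-second intervals since the Unix epoch, which is why token and server only need loosely synchronized clocks rather than a shared counter.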
More advanced hardware tokens use microprocessor-based smart cards to calculate one-time passwords. Smart cards have several advantages for strong authentication including data storage capacity, processing power, portability, and ease of use. They are inherently more secure than other OTP tokens because they generate a unique, non-reusable password for each authentication event, store personal data, and they do not transmit personal or private data over the network.
Smart cards can also include additional strong authentication capabilities such as PKI (Public Key Infrastructure) certificates. When used for PKI applications, the smart card device can provide core PKI services, including encryption, digital signature, and private key generation and storage.
Gemalto smart cards support OTP strong authentication in both Java™ and Microsoft .NET environments. Multiple form factors and connectivity options are available so that end-users have the most appropriate device for their individual network access requirements. All Gemalto OTP devices work with the same Strong Authentication Server and are supported with a common set of administrative tools.
Curtis K., Hampton Roads Sanitation District | Michael Trapp J., Michael Baker International
Archives of Environmental Contamination and Toxicology | Year: 2016
It is widely understood that stormwater drainage has a significant impact on the health of tidal creek systems via regular inputs of runoff from the surrounding watershed. Due to this hydrologic connection, contamination of the upstream drainage basin will have a direct effect on estuaries and tidal creeks that often act as receiving waters. This study focuses on the importance of drainage basin sediments as they enhance the persistence and transport of the fecal indicator bacteria E. coli within a watershed. Experiments presented use microcosm environments with drainage basin sediments and stormwater to investigate E. coli colonization of stagnant waters and to examine the importance of host sources to bacterial survival. A novel method for establishing microcosms using environmental sediments with in situ bacterial populations and sterile overlying waters is used to examine E. coli colonization of the water column in the absence of flow. Colonization of sterile sediment environments also is examined using two common host sources (human and avian). Each experiment uses sediments of varying grain size and organic content to examine the influence of physical characteristics on bacterial prevalence. Results suggest host source of bacteria may be more important to initial bacterial colonization while physical characteristics of drainage basin sediments better explain extended E. coli persistence. Findings also suggest an indirect control of water column bacterial concentration by sediment type and erodibility. © 2016 Springer Science+Business Media New York
Regmi P., Old Dominion University | Miller M.W., Virginia Polytechnic Institute and State University | Holgate B., Old Dominion University | Bunce R., Hazen and Sawyer | and 5 more authors
Water Research | Year: 2014
This work describes the development of an intermittently aerated pilot-scale process (V = 0.34 m3) operated without oxidized nitrogen recycle and supplemental carbon addition, optimized for nitrogen removal via nitritation/denitritation. The aeration pattern was controlled using a novel aeration strategy based on set-points for reactor ammonia, nitrite and nitrate concentrations, with the aim of maintaining equal effluent ammonia and nitrate + nitrite (NOx) concentrations. Further, unique operational and process control strategies were developed to facilitate the out-selection of nitrite oxidizing bacteria (NOB) based on optimizing the chemical oxygen demand (COD) input, imposing transient anoxia, aggressive solids retention time (SRT) operation towards ammonia oxidizing bacteria (AOB) washout, and high dissolved oxygen (DO) (>1.5 mg/L). Sustained nitrite accumulation (NO2-N/NOx-N = 0.36 ± 0.27) was observed while AOB activity was greater than NOB activity (AOB: 391 ± 124 mg N/L/d; NOB: 233 ± 151 mg N/L/d; p < 0.001) during the entire study. The reactor demonstrated a total inorganic nitrogen (TIN) removal rate of 151 ± 74 mg N/L/d at an influent COD/NH4+-N ratio of 10.4 ± 1.9 at 25°C. The TIN removal efficiency was 57 ± 25% within a hydraulic retention time (HRT) of 3 h and an SRT of 4-8 days. Therefore, this pilot-scale study demonstrates that application of the proposed online aeration control is able to out-select NOB in mainstream conditions, providing relatively high nitrogen removal without supplemental carbon and alkalinity at a low HRT. © 2014 Elsevier Ltd.
Hampton Roads Sanitation District and D.C. Water & Sewer Authority | Date: 2013-09-13
A reactor and control method thereof for nitrogen removal in wastewater treatment achieves a measured control of maintaining high ammonia oxidizing bacteria (AOB) oxidation rates while achieving nitrite oxidizing bacteria (NOB) repression, using various control strategies, including: 1) ammonia and the use of ammonia setpoints, 2) operational DO and the proper use of DO setpoints, 3) bioaugmentation of a lighter flocculant AOB fraction, and 4) proper implementation of transient anoxia within a wide range of reactor configurations and operating conditions.
Hampton Roads Sanitation District | Date: 2015-07-22
A method and a system as described herein, including a method and system of treating ammonium containing water in a deammonification MBBR process where partial nitritation and anaerobic ammonium oxidation may occur simultaneously in a biofilm, or in an integrated fixed film activated sludge process where partial nitritation takes place in a suspended growth fraction and anaerobic ammonium oxidation occurs in a biofilm. The method and system include controlling airflow to the reactor to achieve a target pH, a target alkalinity, a target specific conductivity, and/or a target ammonium concentration in the reactor or in the effluent.
D.C. Water & Sewer Authority and Hampton Roads Sanitation District | Date: 2013-11-27
A method and a system for selecting and retaining solids with superior settling characteristics, the method comprising feeding wastewater to an input of a processor that carries out a treatment process on the wastewater, outputting processed wastewater at an output of the processor, feeding the processed wastewater to an input of a gravimetric selector that selects solids with superior settling characteristics, and outputting a recycle stream at a first output of the gravimetric selector back to the processor.
The Brookings Institution published an interesting paper yesterday on the use of big data in education. In it, Brookings vice president Darrell M. West discussed the use of real-time analytics to help shape and guide education policy in the future. For West, data analytics were essential to evaluating school performance and providing educators with feedback:
“It is apparent that current school evaluations suffer from several limitations. Many of the typical pedagogies provide little immediate feedback to students, require teachers to spend hours grading routine assignments, aren’t very proactive about showing students how to improve comprehension, and fail to take advantage of digital resources that can improve the learning process. This is unfortunate because data-driven approaches make it possible to study learning in real-time and offer systematic feedback to students and teachers.”
West said smart data analysis could improve transparency and accountability in schools where administrators and educators were looking to increase students’ grades on standardized tests.
He suggested the use of dashboards for fast and up-to-date performance assessment, and pointed to the Education Department's dashboard as an example of how governments and schools could use real-time data on their policies in action.
Dashboards are definitely a great way to quickly visualize, interpret and process data, but they must be based on credible data to be effective for the end user. In the past, the federal government's dashboard for IT spending has been criticized for being inaccurate, and one can imagine the many pitfalls that come with educational data. As the Columbia Journalism Review argued in May, education data has been known to be flawed, inflated and misleading. Having this type of data is definitely a start toward better education policy, but it shouldn't be the lynchpin of an effective approach.
While many universities see value in integrating cybersecurity education into computer science programs, there has been less thought about the importance of bringing these lessons into other curricula, such as engineering, business, or even marketing programs.
If cybersecurity is every employee's issue -- not just the IT department's -- how do universities prepare all students to apply cybersecurity skills in their next position? Whether they are marketing associates or software engineers, what they do in the digital enterprise either directly or indirectly impacts enterprise security.
EY’s Chad Holmes, principal and cybersecurity leader, talked about the importance of developing cyber talent that can hit the ground running in all industries from transportation and manufacturing to healthcare and human resources.
There is a responsibility beyond IT for organizations to be able to respond to a data breach. And while universities are taking steps toward bringing technical security components to the classroom, cybersecurity application also needs to be taught—across all disciplines.
"When I work with universities across the spectrum, there are innovative ways that they are looking at solving this problem," said Holmes. While there are universities the world over developing advanced degrees in cybersecurity, fewer are as focused on cyber at the undergrad level, and even fewer schools in the K-12 sector are developing curriculum to drive awareness about the security risks of online behavior.
"Starts earlier," said Holmes, "with six to 12 year olds. There needs to be more discussion about how to educate kids around this area."
What might be a formidable obstacle for the cybersecurity industry is re-branding itself as an industry that does more than search for malicious actors or deal with privacy concerns. "That brand issue is driving some of the young folks away from exploring this field," said Holmes.
The brand of cybersecurity has morphed in a lot of different ways, though, said Holmes. "It was tracking down malicious actors and had sites that had negative malicious content, but now it is more about solving community and world problems."
I thought about Baymax and the theme song to Big Hero 6, "We could be heroes."
Anybody's got the power
They don't see it cause they don't understand
Spin around and round for hours
You and me, we got the world in our hands
If you haven't seen this Disney animation film, it's an adorable tale of Hiro, a robotics genius, who joins forces with his brother's university cohorts to fight the bad guy. These talented students in the robotics program at San Fransokyo Institute of Technology are incredibly innovative, even working on teleportation technologies, when they are called upon to track and bring down a malicious actor.
The point is, as technology evolves, so too will the risks to the digital enterprise which means that the talent must become more broad. Holmes said, "Re-branding cybersecurity means focusing on business risk as it relates to stock holder value. How does cyber help beyond just chasing malware. The industry is moving more toward global problem resolution."
Fundamentally if you are covering cyber across multiple domains, you are exploring how to address security issues at every level. "Universities that adopt cyber classes into their core curriculum whether it's business or finance are preparing students with a fundamental understanding of how security impacts business risk," Holmes said.
By way of example, Holmes talked about an area of business risk that has become more pronounced now: marketing. "Whether it's consumer products or a new product line, marketing campaigns, or publicity around a product. When teams release campaigns associated with marketing, they need to be thinking about how it will have a negative impact across the cyber landscape," he said.
Following the entire security life cycle from the early stages of learning will create a broader scope of folks who have a more focused understanding of the ways cyber impacts every aspect of the business.
In order to accomplish this goal, more academic institutions need to work in partnership with enterprises to understand the learning curves around security so that information about the threat landscape can be delivered in context.
Bigger is not always better in the world of supercomputing. While data scientists almost always desire more computational throughput, the key question is how best to deliver that: through traditional, power-hungry X64 processors, or through the cheap, low-power ARM processors that drive smartphones and tablets? The answer is not always clear.
The ARM architecture is estimated to power more than 90 percent of smartphones, and a good chunk of the world’s tablets too. To sate the desire for ever-faster devices, ARM Holdings has funneled more resources into the development of its 32-bit ARM architecture, with the hopes of boosting performance (memory especially) while minimizing electricity consumption and heat.
This keen interest in the ARM architecture has garnered the attention of the HPC community, which is always sensitive to power consumption and cooling issues. Several HPC companies and supercomputer projects have started migrating to the ARM architecture, such as the Barcelona Supercomputing Center, which is developing a supercomputer based on ARM Cortex-A9 systems.
Instead of diving headfirst into the ARM pool, however, smart HPC system builders need a way to predict whether ARM-based systems will, in fact, deliver the expected benefits in power consumption.
To that end, researchers at the National University of Singapore’s Department of Computer Science recently wrote a paper that sheds light on the balance between processing, memory, and network I/O on the one hand, and energy consumption in the latest multicore ARM architectures on the other. The paper was published by SIGMETRICS, a special interest group that promotes the evaluation of computer system performance.
First, researchers Bogdan Marius Tudor and Yong Meng Teo developed a model that can predict the execution time and energy usage of an application for different number of cores and clock frequencies. This gives the user the capability to select the configuration that maximizes performance without wasting energy.
Second, the researchers tested that model against three types of applications, including HPC, Web hosting, and financial workloads. The tests show that the model can deliver a configuration of core counts and clock frequencies that reduce power consumption by 33 percent without impacting performance.
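The published model is considerably more detailed, but the selection idea can be illustrated with a crude stand-in: assume an Amdahl-style runtime and per-core power with a static part plus a dynamic part that rises steeply with frequency, then scan configurations for the cheapest one that still meets a deadline. Every constant below is invented for illustration; this is not the authors' model.

```python
# Crude stand-in for a time/energy model; NOT the paper's actual model.
# Runtime: Amdahl-style; per-core power: a static part plus a dynamic
# part that grows roughly with the cube of the clock frequency.
SERIAL_S, PARALLEL_S = 5.0, 95.0   # invented workload split (seconds at 1 GHz)
P_STATIC_W, K_DYN = 0.3, 0.25      # invented per-core power constants
DEADLINE_S = 40.0

def runtime_s(cores: int, ghz: float) -> float:
    return SERIAL_S / ghz + PARALLEL_S / (cores * ghz)

def energy_j(cores: int, ghz: float) -> float:
    watts = cores * (P_STATIC_W + K_DYN * ghz**3)
    return watts * runtime_s(cores, ghz)

feasible = [(c, f) for c in (1, 2, 4) for f in (0.6, 1.0, 1.4)
            if runtime_s(c, f) <= DEADLINE_S]
best = min(feasible, key=lambda cf: energy_j(*cf))
print("cheapest feasible config:", best,
      f"-> {energy_j(*best):.1f} J in {runtime_s(*best):.1f} s")
```

Even this toy version shows the non-obvious outcome the real model captures: with these constants, four slower cores beat two faster ones on energy while still meeting the deadline.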
But in the final analysis, smaller is not always better. “We observe that low-power multicores may not always deliver energy-efficient executions for server workloads because large imbalances between cores, memory and I/O resources can lead to under-utilized resources and thus contribute to energy wastage,” the authors conclude. “Resource imbalances in HPC programs may result in significantly longer execution time and higher energy cost on ARM Cortex-A9 than on a traditional X64 server.”
If the ARM architecture is to make significant inroads into the HPC world, it will need enhancements in memory and I/O subsystems, the authors say. They expect those enhancements to arrive with the ARM Cortex-A15 and the upcoming 64-bit ARM Cortex-A50 families.
The micro-robots can be used for drug delivery, in vitro fertilisation, cell sorting and cleaning of clogged arteries.
Researchers at the University of Twente in the Netherlands and the German University in Cairo have developed a microrobot, inspired by sperm, that can be controlled with magnets.
The micro-robots resemble a sperm cell, with a magnetic head consisting of a 200 nm-thick cobalt-nickel layer, and can be controlled by oscillating weak magnetic fields.
According to the researchers, the micro-robots can be used for drug delivery, in vitro fertilisation, cell sorting and cleaning of clogged arteries.
The microrobots were designed by Islam Khalil, Sarthak Misra and other researchers at the MIRA Institute for Biomedical Technology and Technical Medicine at the University of Twente.
The findings have been published in Applied Physics Letters, in which the researchers report steering the microrobots to a desired point under the control of magnets.
Dr Sarthak Misra, principal investigator in the study and an associate professor at the University of Twente, said: "Nature has designed efficient tools for locomotion at micro-scales."
"Our microrobots are either inspired from nature or directly use living micro-organisms such as magnetotactic bacteria and sperm cells for complex micro-manipulation and targeted therapy tasks."
Dr Islam Khalil, an assistant professor of the German University in Cairo, said: "MagnetoSperm can be used to manipulate and assemble objects at these scales using an external source of magnetic field to control its motion."
The researchers plan to make the MagnetoSperm even smaller and are currently working on ways to create a magnetic nanofibre that can be used as a flagellum.
Microsoft Open Sources WebGL Browser Technology
In another step in the company's open source initiative, Microsoft last week made some of the tech used for WebGL graphics rendering in its Microsoft Edge browser available in a GitHub repository.
The nonprofit Khronos Group maintains the specification for WebGL, which is implemented in Apple's Safari, Google's Chrome, Mozilla's Firefox and Opera Software's Opera browsers.
Microsoft, for its part, uses a renderer in its Edge browser to map WebGL content to its own DirectX subsystem in Windows. The Edge browser has a transpiler to convert WebGL's OpenGL Shading Language (GLSL) to the High Level Shading Language (HLSL). It's this GLSL-to-HLSL transpiler that is being released as open source code by Microsoft, according to its announcement. Chrome and Firefox browsers have similar functionality, but they use Almost Native Graphics Layer Engine (ANGLE) technology, Microsoft noted.
Microsoft at one time was an early critic of the use of WebGL in browsers, saying that some security risks were involved. Its Internet Explorer 11 browser checked for "unsafe WebGL content," Microsoft claimed.
In bringing its code into open source, Microsoft aims to "improve interoperability across browsers." This GLSL-to-HLSL transpiler code release may not be the last of its kind, as Microsoft "may expand the scope of the release to other subcomponents over time," according to the announcement. However, it doesn't signify that the Edge browser will be going open source.
"At this time we have no plans to open source Microsoft Edge or EdgeHTML, but we understand and value the importance of being more open with our roadmap and our core technologies," Microsoft's announcement clarified.
Kurt Mackie is senior news producer for the 1105 Enterprise Computing Group.
I’ve been hearing more and more about free online college courses lately, and Stanford University’s Cryptography course in particular came up in conversation twice last week, in two separate contexts. This got me wondering what else was available by way of computer and security-related education. As it turns out, there are a ton of what look to be fantastic options out there for those of us who are looking to get a more in-depth look into the subject.
Just like traditional college courses, you can take these classes for the possibility of credit, complete with graded homework and quizzes. But if you’re just looking to get some quality, technical yet approachable information on how to securely use these fantastic computational devices, you can take the self-study route and simply watch the videos and read the lecture notes at your leisure.
- Computer Science 101
If you’re a relative newcomer to computer jargon and concepts, you might want to start with this introductory class from Stanford. The course starts out with discussing how computers work and goes all the way through computer networking and security concepts.
- Internet History, Technology and Security
This is another good beginner-level class from the University of Michigan, covering the early days of computing and the Internet, from the 1940s to today. It will also introduce some very basic networking concepts so you can get a good idea of what makes the Internet go.
- Introduction to Databases
There would be no Internet as we now know it without databases. These are what power websites, banks, video games, not to mention storing information for offices and businesses all over the world. And gaining access to databases is the goal of many information security breaches. This Stanford class will help you understand the structure of databases.
- Introduction to Computer Networks (Stanford) or Introduction to Computer Networks (University of Washington)
Once you have a solid grounding in the workings of individual machines and data storage, you could get more in-depth with networking concepts. This is important background for security concepts, as much of what troubles the Internet involves transporting information from one machine to another. These Stanford and University of Washington classes will get you into the nitty-gritty of the many different ways machines communicate.
- Applied Cryptography (University of Virginia) or Cryptography 1 (Stanford)
Once you’ve got a thorough understanding of how data is stored and transferred, you’ll want to know how to protect it. These two classes by the University of Virginia and Stanford will tell you about cryptography and how to apply this information to protecting data.
Once you’ve gotten this far, you’re ready to get really specific. Malicious Software and Its Underground Economy? Securing Digital Democracy? There are a ton of different classes out there, with more appearing all the time. Here are a couple good sites for keeping up with what class options are out there:
What online courses would you like to see on computer security subjects? Have you ever taken a free online college security course? What was your experience like?
Wednesday of this week marked the 35th anniversary of the launch of Voyager 1; its sister craft, Voyager 2, celebrated the 35th anniversary of its own launch last month. Right now, both space probes are still going strong, hurtling through space, and are getting close to the edge of the solar system; Voyager 1 is 11 billion miles from the sun and Voyager 2 is 9 billion miles away.
I don’t know about you, but 35 was right around the age where I started to find myself walking into a lot of rooms and saying, “Wait, why did I come in here?” That age seemed to mark the beginning of a slow decline in my memory which shows no sign of letting up with my 43rd birthday approaching later this year.
Wait, what was I writing about again? Oh yes, the Voyager spacecraft.
Unlike my brain, Voyager’s on-board memory seems to be as sharp as ever (despite needing the occasional reboot). Both Voyagers are relics of 1970’s technology, featuring computers with a whopping 68kB of memory, eight-track tape recorders and (I’m assuming) “Keep on Truckin’” mud flaps.
The stat about the on-board memory caught my eye. A measly 68kB to power each of those crafts through billions of miles of space and capture invaluable types and amounts of scientific data for three and a half decades, and they’re expected to keep functioning into the 2020’s, when their fuel finally runs out. Very unlike my first big-screen LCD TV that crapped out after only three years. Truly amazing!
Much like when I read about the lines of code that power the Curiosity rover up on Mars, this stat made me wonder how the Voyagers compare to historical and modern day technology, in terms of memory. So, I did a little noodling and Googling around (sorry, I don't Bing) and came up with the following comparison:
The amount of memory on each Voyager (68kB), as well as that on the Apollo 11 lunar and command modules (152kB) and the space shuttle (1MB), is dwarfed by that on Curiosity's on-board computer (256MB). But these are all blown out of the water by the memory that modern-day devices such as the iPhone 4S (512MB), the latest iPad (1GB) and the Samsung Galaxy S III (2GB) carry. Good thing the Galaxy S III has a lot more memory than the iPhone, or Apple would probably sue them for copying.
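A quick back-of-the-envelope script makes the gulf concrete, using the capacities listed above:

```python
# Back-of-the-envelope: how many Voyager memories fit in each device?
KB, MB, GB = 1024, 1024**2, 1024**3
memory = {
    "Voyager": 68 * KB, "Apollo 11": 152 * KB, "Space Shuttle": 1 * MB,
    "Curiosity": 256 * MB, "iPhone 4S": 512 * MB,
    "iPad (3rd gen)": 1 * GB, "Galaxy S III": 2 * GB,
}
for name, size in memory.items():
    print(f"{name:>14}: {size / memory['Voyager']:>9,.1f}x Voyager")
```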
What does this tell us? Well, the obvious main conclusion is that, gee, computing power sure has grown in leaps and bounds since 1977! Other than that, it reinforces the conclusion from my piece on the number of lines of code to power devices through the years: modern day gadgets like smartphones and tablets require a lot more computer power than do spacecraft that take men to the moon, rovers to Mars or eight-track players to the edge of the solar system. You just don’t need touch screens, pinch-and-zoom or speech recognition software to explore Saturn’s rings or drive around Gale Crater.
Now, what was I saying before I mentioned Voyager again?
Voyager - Combined memory for the Computer Command System, Flight Data System and Attitude and Articulation Control System computers for Voyager 1 (Voyager 2 has identical systems). Source: http://voyager.jpl.nasa.gov/faq.html
Apollo - Combined on-board memory for the Lunar Module Guidance Computer and the Command Module's Apollo Guidance Computer for Apollo 11. Source: http://www.doneyles.com/LM/Tales.html
Space Shuttle - Memory for the shuttle's General Purpose Computer after 1991. Source: http://www.popsci.com/node/31716
Curiosity - Memory for the on-board Rover Compute Element. Source: http://en.wikipedia.org/wiki/Curiosity_rover
iPhone 4S - Source: http://en.wikipedia.org/wiki/IPhone_4S
iPad - Memory for the 3rd generation iPad. Source: http://en.wikipedia.org/wiki/IPad
Galaxy S III - Memory for models available in North America, Japan, and South Korea. Source: http://en.wikipedia.org/wiki/Samsung_Galaxy_S_III
By W. Eric Martin
Sometimes there's still life in communication methods once thought dead.
That's the lesson Sheng Guo, chief technology officer for the New York State Unified Court System, learned after the Sept. 11 attack on the World Trade Center destroyed a Verizon switching office, cutting off telephone and e-mail service for 2,500 users at three courthouses in the area.
"We desperately needed something to bring them back online," Guo said.
Over the next three days, Guo and other court employees debated how to get reconnected. The solution came when Guo recalled seeing an ad for Canobeam, Canon's version of a wireless network technology called free-space optics (FSO).
Everything Old Is New Again
Just as Morse code can be sent by opening and closing the windows of a lantern, Alexander Graham Bell used modulated sunlight in the 1880s to create a "photo-phone," a device soon replaced by Bell's better known communications system. After lasers were developed in the 1960s, NASA unsuccessfully tried to use them to communicate with its Gemini 7 orbiter.
The United States Navy and Air Force also experimented with laser communication, but FSO -- as the technology is known today -- only really started to grow when the need for huge bandwidth was seen in the private sector. Today, transmission speeds of up to 1,000 Mbps are possible on FSO systems, purely by use of lasers through open air.
The sending and receiving devices must have a clear line of sight between them -- as was the case with two of the cut-off courthouses, those at 60 Centre St. and 123 Williams St., which were within sight of a third courthouse at 31 Chambers St. that was still connected to CourtNet, the courthouse network. By using the Canobeam equipment, Guo and his team connected the first two courthouses to the 31 Chambers St. courthouse, so they could access CourtNet.
The third courthouse, at 71 Thomas St., was also isolated from the network. A Canobeam was used to connect it to a hotel across the street, which still had Internet access. The courthouse could then piggyback on the hotel's Internet access to ultimately hook up to CourtNet through another downtown courthouse at 25 Beaver St.
After calling Canon the night of Friday, Sept. 14, Guo was convinced the technology would satisfy their immediate needs and tried to work out a deal. "They said they needed a PO [purchase order], but I sent them a signature and told them to trust me," he said. "In this emergency, I said, 'You need to sell us that equipment now, and the PO will follow.'"
To Guo's surprise, Canon made the deal and trained three of his staff members that Sunday on how to install and use the equipment. Data service was restored by the night of Sept 17, said Guo.
"In a typical installation, approval from the building management is required for mounting equipment," he said. "In addition, the contractor was supposed to conduct a site survey and submit a price quote for the fiber wiring from the Canobeam location to the core switch of the courthouse. Because this was an emergency, we went ahead with the installation anyway. The contractor was creative in running temporary fiber cables to bring IP connectivity back on the same day."
FSO is ideal for a quick link during disaster recovery, according to Ken Ito, assistant director for Canon's Broadcast & Communications Division. Unlike radio or microwaves, FSO provides a lot of bandwidth and doesn't require FCC approval.
FSO to the Rescue -- Again
In early 2002, Guo turned to Canobeam once again when Telergy -- the telecom provider that serves courthouses in Buffalo, Rochester and Syracuse -- filed for bankruptcy.
"If they shut down service, we didn't have a backup plan," Guo said. "We could ask someone else to provide service, but normally that would take a lot of time."
Instead of relying on others, the do-it-yourself Guo purchased more Canobeam equipment and connected the courthouses in those three cities to other government buildings in the New York state government network. In the end, the bankruptcy judge prevented Telergy from shutting off service, but the FSO system provided the insurance Guo needed.
Guo has since installed Canobeam in courthouses in Queens and the Bronx to back up existing communications. "In each case, we have three buildings with fiber from building A to building B, and from building B to building C. Ideally we want a ring, a connection from building C to building A to complete the loop," he said. "If it's possible to have fiber, we do that, but in Queens the cost would have been in the $150,000 range. The Canobeam cost $30,000, plus another $10,000 for installation -- a quarter of the cost." Choosing FSO over fiber also bypassed the months of waiting time normally needed for approval to run fiber underground.
Two courthouses in Suffolk County on Long Island have also been connected by Canobeam FSO systems. "In the past, we had a slow connection from the T1, and people used to complain about performance," Guo said. "That's a nonissue now. Everything's working."
Future Orders in the Court
Guo plans to use Canobeam to connect facilities in White Plains and Albany, and court campuses in New York City, but FSO will continue to be used either as backup to a fiber connection or in concert with another medium. "It's possible to have misalignment," he said. Also, when it snows heavily, as it often does in Buffalo and Rochester, the performance of FSO will be degraded as the snow interrupts the laser beam.
"If a bird flies in front of the laser, for example, the network goes down for a fraction of a second. The user doesn't know it, but our system does," Guo said. "It's a potential concern, but I haven't yet run into a situation where it doesn't work."
"We try not to say the technology replaces radios, but complements them," Ito said. "FSO doesn't like fog, but radios do. Radios don't do well in rain, but FSO is okay. There's a good balance between the two."
Ito said that building movement is another concern for those interested in using FSO. Heat during the day might cause one side of a building to expand, creating misalignment of the beam and loss of connectivity. To account for this movement -- as well as earth tremors and other alignment issues -- FSO systems must either spread out the beam, which reduces its effective distance, or include autotracking mechanisms, so they can adjust to moving buildings. Canobeam's DT-50, for example, works best to a distance of about 2 km.
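The geometry behind that trade-off is simple enough to work out. In the small-angle approximation the spot diameter grows linearly with range, so a wider beam tolerates more sway but dilutes its power; the divergence values below are illustrative, not Canon's published specifications:

```python
# Illustrative beam-spread arithmetic (small-angle approximation);
# the divergence values are examples, not Canon DT-50 specifications.
import math

def spot_diameter_m(range_m: float, divergence_mrad: float) -> float:
    # diameter ~ range * angle, for angles of a few milliradians or less
    return range_m * divergence_mrad * 1e-3

for div_mrad in (0.5, 2.0, 5.0):
    print(f"{div_mrad} mrad -> {spot_diameter_m(2000, div_mrad):.0f} m spot at 2 km")

# A receiver displaced 5 cm by building sway at 500 m subtends:
print(f"sway angle: {math.atan2(0.05, 500) * 1e3:.2f} mrad")
```

A half-milliradian beam is roughly a meter wide at 2 km, so centimeter-scale building sway is a meaningful fraction of the spot, which is exactly why tight beams need the autotracking Ito describes.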
"In the future, we will continue to look at this as a good alternative," Guo said, not only for the e-mail sent and received by the court's 15,000 employees and the access to civil and criminal case histories, but also for voice communication over IP phones and videoconferencing.
"We're in a tight budget and are always looking for ways to save money," Guo said. "Many government facilities are clustered in a downtown area and still paying a carrier a few thousand dollars to connect with the building next door. In that case, I'd strongly recommend people look at their network infrastructures. Otherwise you're just paying too much money every single month."
One of the more ingenious forms of computer hacking in the past decade is the development and implementation of ransomware. Ransomware is a type of computer malware which when infected into a user’s computer restricts access to that computer and demands a form of ransom be paid before the system can be accessed. The ransom can range from $100 to upwards of $10,000.
The malware code encrypts all of the files on the user's computer, essentially rendering them useless. The malware also puts up a user interface that displays a message telling the user what has happened, along with instructions on how to make the payment to unlock and decrypt the files. Once the user has sent payment and the payment is verified, the encryption is supposed to be unlocked and the user given access to their files again, but this is not always the case.
The malware commonly known as ransomware works as a trojan horse virus. This means the virus is allowed to infect your computer when you click on a malicious link and are directed to a website that serves the code, or when you download an attachment that injects the code into your computer.
It is called a trojan horse virus because you are tricked into thinking it is a file that you want to open, named something like "invoice.zip" or "nicepicture.zip". Typically the takeover is almost instantaneous, as the malware very rapidly deploys its code on the target computer. Users often are unaware of what they have done before their computer is locked and they have no way to access their files. The code is also smart enough to simply bypass the common virus detection programs, which therefore do nothing to stop its injection.
Once the code has begun to infect your computer, it will submit your data files to an RSA encryption process, encrypting essentially every file on the machine and preventing you from accessing any of them. The encryption program used by the ransomware varies with each instance of the virus.
The specific type of encryption may vary and thus the decryption code, but the process is the same. Because the creators of these viruses have had years to perfect their craft they have also started injecting code in their malware that will delete all of the Shadow Volume copies that you have created as they have identified this as a plausible method of fighting against the virus. By erasing these copies, they force you to pay their ransom in order to get your files back.
Once these files have been encrypted you will see a pop up window which will display instructions on how to get your files back. You will need to pay the requested ransom, they will verify that this payment goes through, and then they will begin to run the decryption program giving you access to your files. Since these are criminals, they are not required to decrypt your files after you pay, so you may or may not get your information back. Most malware programs also leave a residue of code behind that continues to track your online behavior.
Recently we’ve seen a big change in the encrypting ransomware family and we’re going to shed light on some of the newest variants and the stages of evolution that have led the high profile malware to where it is today. In its first evolution of what we know as Cryptolocker, the encryption key was actually stored on the computer and the victim, with enough effort, could retrieve said key. Then you could use tools submitted on forums to put in your key and decrypt all your data without paying the ransom.
In future iterations, malware authors made sure that the only place the key was stored was on a secure server so that you were forced to pay. In newer variants of Crytpolocker the VSS, or Shadow Volume, is almost always deleted at deployment. Malware authors also give the victim a special extended period of time to get their files they waited past the deadline, but the price usually doubles or even triples. This threat is ever evolving.
There are two main ways to prevent ransomware attacks and to minimize any negative effects of an attack.
The best way to prevent these attacks is to never click on a link that you are not 100% sure is a safe location. This means that you should not visit websites whose authenticity cannot be verified, and which look suspicious. You also should avoid clicking on ad banners and on suspicious “too good to be true” offers and promotions. These links quite often lead to a sham website whose only function is to inject this malware code into your computer. Even with the best of intentions, you may accidentally click on a link which you did not intend.
To avoid this mistake becoming very costly, you should always have a regular backup of your files, and this backup should not be attached to your computer. This will ensure that even if you do have a malware attack you can still have access to all of your files.
As always, the best way to prevent this code form ever infecting your computer is to never click on links you do not trust and not to open attachments in email from anyone you do not trust. This will help to prevent an infection in the first place.
If you ever feel you may be a victim of one of these attacks immediately disconnect the affected machine from the network by unplugging the Ethernet cord or powering the machine off, even if the ransomware tells you not to.
Posted by: Systems Administrator Jeremy Smario | <urn:uuid:05779cfb-5383-4ce1-b89b-606c829581a7> | CC-MAIN-2017-04 | https://www.greenhousedata.com/blog/the-truth-about-cryptolocker-and-ransomware | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00286-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964017 | 1,117 | 2.875 | 3 |
Authentication, Authorization, and Access Control
Identification vs. authentication vs. authorization
Numerous ponder the idea of verification in data security. What has a tendency to happen is that they befuddle validation with recognizable proof or approval. They are indeed all different ideas, and ought to be considered such. Identification is just guaranteeing one is someone. One distinguish one's self when one identify with somebody on the telephone that one don't have the foggiest idea, and they ask one who they're addressing. When one say, "I'm Tom." none've quite recently recognized one's self.
In the data security world, this is similar to entering a username. It does not closely resemble entering a watchword. Entering a secret word is a technique for checking that one is who one distinguished one's self as, and that is the following one on our rundown.
Authentication is the way one demonstrates that they are who they say they are. When one claim to be the tommy south by logging into a workstation framework as "smith", its doubtlessly going to approach one for a secret key. None've guaranteed to be that individual by entering the name into the username field (that is the distinguishing proof part), however now one need to demonstrate that one are truly that individual. Most frameworks utilize a secret word for this, which is focused around "something one knows", i.e. a mystery in the middle of one and the framework. An alternate type of verification is displaying something one have, for example, a driver's permit, a RSA token, or a keen card. One can likewise verify through something one are. This is the establishment for biometrics. When one does this, one first distinguishment oneself and afterward submit a thumb print, a retina sweep, or an alternate manifestation of bio-based verification. When one have effectively validated, one have now done two things: one have guaranteed to be somebody, and one have demonstrated that one are that individual. The main thing that is left is for the framework to figure out what one of them is permitted to do. Approval is the thing that happens after an individual has been both distinguished and validated; it's the step figures out what an individual can then do on the framework. Authorization is the procedure of giving somebody consents to do or have something. In multi-client machine frameworks, a framework chairman characterizes for the framework which clients are permitted access to the framework and what benefits of utilization, (for example, access to which document indexes, hours of access, measure of allotted storage room, et cetera).
As it has been mentioned before too, the authorization is basically the process which is used for the permission. Here are the ways through which it can be given;
Least privilege: one might be given the access but he can have it for some places only which means it is limited.
Separation of duties: the duties which are assigned call also be separated hence one can ensure that there are no clashes of any problems.
ACLs: the ACLS, as mentioned above should be the various ones so that one can ensure that he is having the right access and can get benefits out of it.
Mandatory access: there can be some mandatory access which has to be done by all the people who work in organization.
Discretionary access: the access can also be defined as the discrete one and hence one can safe guard the data he has.
Rule-based access control: there can be some controls where the rules can be accesses. Hence those rules are to be followed.
Role-based access control: the role based access means that one must be having some of the role in the organization because of what he can get some access.
Time of day restrictions: one can also see that there are some restrictions which are put up only because of the time of the day which is faced by one.
Authentication can be given this way;
Tokens: one can have some tokens which can define the authentication.
Common access card: the cards can be given to employees.
Smart card: smart cards which can be scanned can be issued.
Multifactor authentication: there can be some multifactor authentications too which can be used.
Besides them all, one can benefit from using the following things;
- TOTP (Algorithm which is online and is time based)
- HOTP ( the algorithm which is one timed and is based on the HMAC)
- CHAP (authorization protocol which is challenge handshake based)
- PAP (9protocol for password authentications)
Single sign-on: a card system with single sign can be introduced.
Access control: the control can be accessed by keeping some logs
Implicit deny: if there is some mistake, then deny can be done which is implicit.
Trusted OS: The OS that one has must be the trusted one.
Here are the authentication factors which are used;
Something one is: it means that identify of that person.
Something one has: it means the company which that person has, or the person he is with.
Something one knows: it can be for someone who is trudges one and is known.
Somewhere one are: the authentication also can be effected if someone is not in the place where he is supposed to be.
Something one do: also, the job which is carried out by one can also reflect the authentication factor.
Here are some ways which can be sued for identifications;
Biometrics is seen as a panacea for confirmation issues, however obviously it isn't. Usually endeavored biometric information incorporates fingerprints, retina sweeps, voice distinguish, and face distinguishes. Fingerprints are the most widely recognized, having generally modest peruses (Us$50 to $200) that give sensibly useful information. Hard information is not accessible on how regularly fingerprints are comparative, yet it is for the most part accepted that false matches are uncommon. Retina outputs are likely just as solid, however once more; hard information is not broadly accessible. Voice and face distinguishment are hard to get right. Its certifications of different sorts have various problems: the peruse programming dependably matches the approaching picture against a set of standard pictures, one for every known client. We would favor on the off chance that it would put out a standardized datum that should be same each time the same client is seen, as a watchword would be, on account of some verification plans require such a datum to use as an encryption key. The client's body is not static. Case in point, a cut finger may refute a unique mark and a stuffed-up nose would negate a voiceprint. The verification framework must be capable, without losing security, to supplant the client's standard picture on short perceive without access to the old confirmation token, and for a few utilization, e.g. restorative, it is especially critical to give benefit dependably to a harmed or debilitated client.
Personal identification verification card:
The identification card is utilized for client verification as a part of each mobile phone (the SIM), is making advances in the MasterCard business, and is utilized by a few organizations for verifying clients on their workstations. It goes about as a key executor, holding a mystery key, for the most part a RSA key. At the point when a server doing verification communicates something specific, customer programming passes it to the shrewd card, which scrambles or decodes it. Shrewd cards have various security issues:
The cards are joined with the customer workstation by physical contact in a USB or hardwired peruse (ISO 7810) or by radio (RFID, ISO 14443); IRDA (tight pillar infrared) is conceivable however I have not become aware of it being utilized. With physical contact the holder knows which have the card is embedded in, however RFID can act at a separation and card skimming, as a cheat may do, has been showed. A little subset of the cards incorporates a keypad so the client can enter a secret word each time the card is to be utilized. This equipment is lavish and effortlessly harmed, and is seldom utilized. The secret key may be judicious on a Visa however keeps its utilization for transitive confirmation that happens habitually, for example, document get to or message recovery. A few cards need to see a secret word (PIN, four to six digits) from the client before they will convey, sent over the standard interface. Once more, this blocks utilizing the keen card for bland transitive verification. Anyhow more awful, in the Visa setting the PIN would need to be given to the dealer's gear and to the criminals swarming his framework. Whatever is left of the cards are constantly dynamic, so if a foe physically takes the card or corresponds with it surreptitiously (RFID just) then he can mimic the holder. Much better would be if the card would oblige the accomplice to validate, e.g. with a X.509 testament that it has been customized to trust.
Login ID, and client ID, username or client name is the name given to a client on a workstation or machine system. This name is normally a shortened form of the client's full name or his or her nom de plume. For instance, an individual known as John Smith may be allotted the username of smith, which is the initial four letters of the last name, took after by the first letter of the first name. In the picture indicated on this page, the username is root. Usernames permit various clients to utilize the same workstation or online administration with their own particular individual settings and records. At the point when utilized on a site, a username permits you to have your particular settings and distinguishing proof with that site or administration.
In data innovation (IT), federal identity management (Firm) adds up to having a typical set of strategies, practices and conventions set up to deal with the personality and trust into IT clients and gadgets crosswise over associations. Single sign-on (SSO) frameworks permit solitary client verification prepare crosswise over numerous IT frameworks or even associations. SSO is a subset of united personality management, as it relates just to validation and specialized interoperability. Centralized character management results were made to help bargain with client and information security where the client and the frameworks they got to were inside the same system - or at any rate the same "area of control". Progressively be that as it may, clients are getting to outer frameworks which are in a general sense outside of their space of control, and outer clients are getting to interior frameworks. The undeniably normal partition of client from the frameworks obliging access is a certain by-result of the decentralization achieved by the coordination of the Internet into each part of both individual and business life. Advancing personality management challenges, and particularly the difficulties connected with cross-organization, cross-space access, have offered ascent to another methodology to character management, referred to now as "unified character management". Firm, or the "organization" of character, depicts the advances, gauges and utilization cases which serve to empower the compactness of personality data crosswise over generally independent security spaces. A definitive objective of character alliance is to empower clients of one area to safely get to information or frameworks of an alternate space flawlessly, and without the requirement for totally repetitive client organization. Personality organization comes in numerous flavors, including "client controlled" or "client driven" situations, and additionally endeavor controlled or business-to-business situations. Alliance is empowered through the utilization of open industry norms and/or unabashedly distributed particulars, such that various gatherings can accomplish interoperability for normal utilization cases. Run of the mill utilization cases include things, for example, cross-space, online single sign-on, cross-area client record provisioning, cross-space qualification management and cross-area client property, etc.
Transitivity figures out if a trust might be reached out outside the two areas between which the trust was structured. You can utilize a transitive trust to augment trust associations with different spaces. You can utilize a no transitive trust to deny trust associations with different areas.
Hence one can find out many of the methods which he can use for the security. All the authentication and the access controls are done so that one can stays safe. So one must take care of these things and should have knowledge about them so that he doesn't get any trouble in the future regarding any type of intrusion. | <urn:uuid:4df50141-e7b7-40d9-b71a-d0d4a6791704> | CC-MAIN-2017-04 | https://www.examcollection.com/certification-training/security-plus-authentication-authorization-and-access-control.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00222-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.949742 | 2,568 | 2.84375 | 3 |
How to implement PKI
Implementation of the public key infrastructure is basic in the life cycle of any PKI. In most case, developing a PKI is normally a very easy task and many organizations are able to carry it out. However, the implementation stage is one that can be quite hectic. So as to avoid the many problems that are encountered during the implementation process, proper and adequate planning is required. This means that an organization or individual must have a clear set of well laid out strategies and procedures to be followed. In some other cases, a lot of resources are required and therefore a huge investment should be made. It is therefore important that the PKI implementation process is handled with a lot of seriousness since it is an activity that can turn to be a white elephant if all the essential aspects are not taken into consideration. Here are the information which should be held by one so that he can implement it successfully;
Implementing Certificate Authorities
In a public key infrastructure, a Certificate Authority is responsible for the creation and distribution of certificates to the end users and other people that will need them in the environment. These are the public and private keys and one's certificate authority is one's clearing house for this. If one is in a private organization, one may have a private certificate authority which is meant for one's own users and private servers. In such a case, one have to ensure that third party individuals who use one's certificates trust them since in most cases, people tend not to have much trust with certificates that one have built on one's own.
If one are implementing a certificate authority, one are probably implementing a commercial certificate authority or a private certificate authority or a combination of both of them. If one is going to commercial certificate authorities, these are certificates that are built into one's browser and it is a browser that has the capability of sending and receiving the encrypted SSL and TLS type traffic via https. One can go to this commercial certificate authority and purchase a web site certificate that one can use on one's browser. Since everyone trusts that certificate, one's website will naturally be trusted. Occasionally, these commercial certificate authorities will give one some additional options as well. If one provides them with additional information about one's self or one's organization, they may give one a higher level of trust that one can tag on one's website.
Private certificate authorities are obviously certificates that one are building in-house probably in one's Windows operating system. This is a kind of certificate authority that one is building from scratch. If one is a medium organization, one will have such multiple certificates because one is going to have web servers and places where data needs to be encrypted. It could therefore be quite expensive going to a third party to pay for the certificates that will not even be used externally hence it therefore becomes very important to come up with certificate authorities of one's own and start distributing them. Obviously one is going to have to implement these certificate authorities and therefore, one must plan it out. Whenever one are going to implement this king of certificate authority, there needs to be an overall understanding of the strategy in one's organization, which is going to manage the certificates, how the certificates be built, how they will be distributed and how they will be revoked. As an organization, one can choose whether to have a commercial certificate authority or build one's selves a private certificate authority.
Implementing Key Revocation
Key revocation is a natural part of a certificate lifecycle in one's PKI and generally we use a Certificate Revocation List that is maintained by the certificate authority to be able to look for the key that have been revoked. There are many different reasons for revoking keys and we need to think the changes that will cause the key to be revoked such as natural expiration of the key or if the key is used for fraudulent activities. The revocation process is one that is more or less formal. Some other reasons as to why a key might be revoked are having a key that has been compromised, maybe the entire certificate authority has been compromised, and maybe the key has been changed or superseded. In addition, it can also be revoked if the entire business is not in operation or if a key has been suspended due to the presence of a certificate hold for that key. If some specific certificates are revoked, it therefore means that we will have to update our browsers, applications and domains that use the revoked certificates.
If one is using PGP or open PGP, then one does not have a central certificate authority. One are one's own authority of one's own in that one are building the certificate on one's own and also revoking it. Obviously in a web of trust, one creates one's own certificates, other people sign one's certificates hence one create a nice web of trust. When one create one's certificates, one might also want to consider going ahead and create a revocation certificate of one's own, That way, if something was to happen to one's private key, one would have a way to revoke that private key without having direct access to one's private key at all. One can even take that key and enable other people to be able to revoke one's certificate. That way, if something was to happen to one or to the computers that one are using, there would be someone else who is outside the scope of that issue that can then revoke one's certificate to make sure that nobody else would be able to use that in the future.
Implementing Digital Certificates
The implementation of digital certificates is also a very important aspect of cryptography. In this case, it is very essential that there is some special type of digital signatures to assist in the implementation process. In this case, a lot of information and documents can be digitally signed so as to make sure that they are well encrypted. As a matter of fact, this is a practice that is mainly suited for organizations since it could be very hard for an individual to implement digital certificates. They are mostly applied in cases where there could be a lot of encrypted information. The implementation of digital certificates is very important since it reduces the occurrence of non-repudiation. Apart from a few cases where information must be encrypted, the presence of digital certificates does not require encryption since if a message has been digitally singed, one can easily proof the origin of the message and identify whether it has been tampered with in the course of it being sent.
The public key infrastructure(PKI) is a mixture of things working together such as policies, procedures, people, hardware and software all put together to create a standard way to manage, distribute these certificates, store them, revoke them. If one are going to venture into public key cryptography and one are making a PKI, it means that one will be making something that is very big which one need to plan out from the very beginning and set all the processes in place so that it can be as successful as possible. The PKI is responsible for building these certificates and binding them to people or resources.
If one is going to implement one's own PKI, it is going to take a lot of planning to begin with. One is going to research a lot of different PKI software, understand the process one want to have in place and it may start as if one are building a certificate for a single web server. However, once one start building to start building these out and one's organization gets bigger, one will probably need more of those created hence one will need a very specific processing place so as to be able to provide that capability to the rest of the organization. One will need to do encryption with third parties, have digitally signed documents, many more web and email servers and hard drive encryptions so if one plan ahead, one will be in good shape when some of these things start to feature.
Implementing Key Recovery
The idea of key recovery basically means that we have put some processes in place to make sure that should something happen to that key, we have ways of recovering data that had been encrypted with the lost key. One of the ways is to back-up the private key. However, one need to make sure that one do not have too many backups of the private key or rather too many versions of it to avoid it getting into the hands of other people.
If one is implementing a certificate authority or one are building out a set of PKIs in one's environment, then the ability to recover one's keys is very important. The larger one's organization becomes, the more information one are going to start encrypting hence the more important key recovery is going to be important in one's environment. This is a process that is usually integrated into the certificate authority one is using. That way, one can build this plan for one's key recovery have it automatically as part of one's certificate authority and then when one start building other certificate authorities, the key recovery aspect is already integrated in the mathematics behind the keys that one are distributing.
In each and every organization, there is already a key recovery process that will start up from the beginning if the key is lost. We want to have a process where an organization can recover the data or private key and therefore the recovery process is probably built into one's public key infrastructure. It may be a process that is done automatically every time one comes up with a new set of keys
Implementing Public and Private Keys
The public key cryptography methodology is one that was founded by asymmetric encryption. The creation of public keys really involves a lot of mathematics and randomization. A lot of mathematics and prime numbers goes into this so as to create a public key that can be given to anyone in the world. By looking at a public and private key, it is very difficult for one to differentiate between the two.
The implementation of one's public and private key creation is something that is usually that is a formal process especially if one have a formal certificate authority set up, it is integrated into the security policy, one know exactly how to request it, one know the process that goes on to get registered and have that key and the certificate provided back to one. It may be something that is very structured and one need a lot of documentation or even show up in person and it has to be linked to one's ID that one might use or it might be more relaxed where it might be something like PGP or open PGP where one are outside of an organization and maybe building a certificate for one's own use. When building out a PGP secret key and a public key, there is a front end to the open PGP standard called GPG. One can download GPG for Mac OS, Windows, Linux and UNIX hence giving one a great capability on most of the available operating systems.
As a matter of fact, the implementation phase of a PKI is one of the most challenging but once achieved, the life cycle of the PKI is considered complete apart from cases where the PKI may require some revocation due to some other reasons. In addition, the implementation process is one that must be successfully undertaken if one is to have a good encryption mechanism. It is therefore important that PKI implementation is taken with as much seriousness as the development process or any other process. | <urn:uuid:299def01-49e2-4aad-a75e-bc1af920d95d> | CC-MAIN-2017-04 | https://www.examcollection.com/certification-training/security-plus-how-to-implement-pki.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279468.17/warc/CC-MAIN-20170116095119-00250-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.969935 | 2,275 | 3.421875 | 3 |
3.1.10 Is the RSA system an official standard today?
The RSA cryptosystem is part of many official standards worldwide. The ISO (International Standards Organization) 9796 standard lists RSA as a compatible cryptographic algorithm, as does the ITU-T X.509 security standard (see Question 5.3.2). The RSA systemm is part of the Society for Worldwide Interbank Financial Telecommunications (SWIFT) standard, the French financial industry's ETEBAC 5 standard, the ANSI X9.31 rDSA standard and the X9.44 draft standard for the U.S. banking industry (see Question 5.3.1). The Australian key management standard, AS2805.6.5.3, also specifies the RSA system.
The RSA algorithm is found in Internet standards and proposed protocols including S/MIME (see Question 5.1.1), IPSec (see Question 5.1.4), and TLS (the Internet standards-track successor to SSL; see Question 5.1.2), as well as in the PKCS standard (see Question 5.3.3) for the software industry. The OSI Implementers' Workshop (OIW) has issued implementers' agreements referring to PKCS, which includes RSA.
A number of other standards are currently being developed and will be announced over the next few years; many are expected to include the RSA algorithm as either an endorsed or a recommended system for privacy and/or authentication. For example, IEEE P1363 (see Question 5.3.5) and WAP WTLS (see Question 5.1.2) includes the RSA system. | <urn:uuid:4755e712-2f1a-478a-9166-7c531210ae47> | CC-MAIN-2017-04 | https://www.emc.com/emc-plus/rsa-labs/standards-initiatives/official-standard-today.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00276-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.909719 | 342 | 2.703125 | 3 |
Principles of Adult Education
It’s hard to imagine today, but for most of human history, the concept of adult education was pretty much nonexistent. The need simply wasn’t there — most of society was agrarian, and men and women who lived these relatively uncomplicated lives knew everything they needed to by the time they were in their teens.
When the industrial economy came into being about two centuries ago, the laborers who made it run were trained on nearly all the tasks they had to perform within the first couple of days on the job.
Even in the professional sphere (doctors, lawyers, accountants, etc.), most people didn’t pursue education past their collegiate years.
This is not to say that all these folks stopped learning once they reached adulthood. (Because of our nature, it’s just about impossible for human beings to cease absorbing new knowledge at any point in their lives.) It just meant that their years of formal training and education were over after they made it through their salad days.
In modern times, though, industries form, change and die at such a rapid pace, most adults have to participate regularly in structured learning programs to keep up. IT certifications in particular arose wholly out of an unprecedented demand for vocational training and skill assessment.
Formal education is now viewed as a lifelong process rather than something limited to 12 to 20 years of one’s youth.
Hence, it is important for both adults and the people who teach them to understand how — and why — this distinct age cohort learns. (For the purposes of this article, “adult” can be defined as 25 years old or more, as this is the age when most people are finished with their initial college experiences and have assumed a high level of responsibility for their own lives.)
First, it’s worth pointing out that adults tend to take different approaches to learning than young people do. For the most part, adolescents don’t tend to question how what they learn applies to them. Their aims are getting through the class with a good grade (or just getting through the class, period).
Also, because they tend to take courses on completely unfamiliar subject matter more frequently, they usually depend far more on the instructor’s guidance.
Adults, though, approach learning in a more autonomous and practical way. They want a certain level of assistance from their teacher, but they generally favor a more hands-off, facilitative instructional strategy.
They typically take classes on topics with which they already familiar and are interested in to some extent, so they want to be able to work through issues and draw their own conclusions rather than just passively receive data.
Also, adults will ask — in their own minds, if not out loud — why they need to know about certain information. If they don’t see how particular data connects to what they intend to apply in their professional or personal lives, they’ll probably be much less likely to retain it.
Good learning programs for adults are those that give a considerable amount of freedom to students and have a demonstrable connection to the world outside the classroom. Environments that are interactive, experiential, experimental, nonlinear and motivational are ideal.
Keep these qualities in mind as you evaluate and compare training and certification offerings. | <urn:uuid:05a0eba4-617b-44a5-9913-588bf8302857> | CC-MAIN-2017-04 | http://certmag.com/principles-of-adult-education/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00002-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.97436 | 679 | 3.40625 | 3 |
Question 3) Development Foundation Skills
Objective: Design Elements
SubObjective: Creating, Modifying, Troubleshooting Fields
Single Answer Multiple Choice
Kim needs to create a field for an expense report form that will automatically display the current date and time when the documents are created. The information must be saved with the document and preserve the original date the document was created. What field value type should Kim use?
B. Date/Time field
C. Computed for display
D. Computed when composed
D. Computed when composed
When creating fields for a form or subform, you will need to specify how the field will obtain its value. For example, when creating a field for an employee name on an expense report form: the field type could be text or rich text, and defined as an editable value type for that field. Editable is a field value option that will allow entering of information by the user. The user can also change the field contents if needed.
In this scenario, the user wants the current date and time on the documents created from the expense report form to be saved with the document. A computed-when-composed value type field will accomplish this. Computed-when-composed means that the date and time will be generated only when the document is first composed or created. When you open the document for editing, the date will remain what it was when you first created the document and the value will not be editable.
The Date/Time answer is incorrect because this would allow the date and time to appear automatically.
A computed for display field recalculates each time a user opens or saves a document. The field value is not stored with the document. If you want to have a modification date on your report , then each time you open that document the modified date would change to the current date. However, the information is not stored with the document and it is not editable. The information is relevant only to the immediate session. This type field cannot be displayed in a view.
A computed field formula calculates each time a user creates, saves, or refreshes a document, unlike a computed when composed field formula which calculates only when the user first creates the document. If the Date/Time field were a computed value type, the value would change to whatever the date and time was at the time of saving or refreshing your document.
Domino Designer 7 Help – search on: Value formulas for computed fields
RedBook – Domino Designer 6: A Developer’s Handbook – Chapter 4 http://www.redbooks.ibm.com/abstracts/sg246854.html?Open
These questions are derived from the Self Test Software Practice Test for Lotus exam 710 – Notes Domino 7 Application Development Foundation Skills | <urn:uuid:1e4ae911-a004-4125-bed4-678f844c8892> | CC-MAIN-2017-04 | http://certmag.com/question-3-test-yourself-on-lotus-exam-710-notes-domino-7-application-development-foundation-skills/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00332-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.887161 | 571 | 2.703125 | 3 |
In June, Harvard's Clean Energy Project (CEP) plans to release to solar power developers a list of the top 20,000 organic compounds that could be used to make cheap, printable photovoltaic cells (PVCs).
The list, culled from about seven million organic molecules that a crowdsourcing-style project has been crunching over the past two-plus years, could lead to PVCs that cost about as much as paint to cover a one-meter square wall.
"We're in the process of wrapping up our first analysis and releasing all the data very soon," said Alan Aspuru-Guzik, an associate professor of chemistry and chemical biology at Harvard
Today, the most popular PVCs are made of silicon and cost about $5 per wafer to produce. Silicon PVCs have a maximum solar conversion efficiency rate of about 12%, meaning only 12% of the light that hits them is converted to energy.
There is also a small niche market of organic PVC vendors, but their solar cells offer only about 4% to 5% efficiency rate in converting solar rays to energy. In order for a solar product to be competitive, each would need to cost about 50 cents, according to Aspuru-Guzik.
The Clean Energy Project, however, uses the computing resources of IBM's World Community Grid for the computational chemistry to find the best molecules for organic photovoltaics. IBM's World Community Grid allows anyone who owns a computer to install secure, free software that captures the computer's spare power when it is on and idle.
By pooling the surplus processing power of about 6,000 computers around the world, the Clean Energy Project has been able to come up with a list of organic photovoltaics that could be used to create inexpensive solar cells. The computations also look for the best ways to assemble the molecules to make those devices.
Computational chemists typically calculate the potential for photovoltaic efficiency one organic molecule at a time. Over the past few years, computational chemists have identified a few organic compounds with the potential to offer around 10% energy conversion levels.
"But that's only two or three," Aspuru-Guzik said. "Through our project, we've identified 20,000 of them at that level of performance."
In fact, CEP's list of molecules include some that have upwards of 13% solar conversion efficiency rates, Aspuru-Guzik said.
The computing resources from IBM's World Community Grid are split for the CEP. Some of the computers in the grid are making mechanical calculations of molecular crystals, thin films and molecular and polymer blends; others are making electronic structure calculations to determine the relevant optical and electronic transport properties of the molecules.
Harvard has also constructed significant data storage facilities to capture the results of the computations. Each molecular computation produces on average about 20MB of data. In total, the global grid computing architecture generates about 750GB of data per day. So far, the data has grown to about 400TB.
Harvard has filled racks of servers with 4U-high hard drive arrays. Each array is filled with 45, 7200rpm 3TB hard drives from Western Digital subsidiary HGST.
"The data we're creating will ultimately benefit mankind with cleaner energy solutions," Aspuru-Guzik said. "Accordingly, we designed our Jabba storage arrays with built-in redundancies. But the key to the arrays' performance is the use of reliable, high-capacity, and low-power storage from HGST. We've filled nearly 150 HGST drives to this point and are currently building Jabba 5 and 6 to handle the enormous amount of data generated for the project."
Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed. His e-mail address is firstname.lastname@example.org.
Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center.
This story, "Harvard global grid computing project will help create printable solar cells" was originally published by Computerworld. | <urn:uuid:ea04730a-2245-45cc-a2ba-7e648da7ffb4> | CC-MAIN-2017-04 | http://www.itworld.com/article/2709281/hardware/harvard-global-grid-computing-project-will-help-create-printable-solar-cells.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00542-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.938912 | 865 | 3.5 | 4 |
Between spam, chain emails and the sheer volume of information that passes through many inboxes, email has lost much of the luster it once possessed in the days of America Online and CompuServe.
Now there's something else that's unappealing. Foreign governments targeting consumers' email inboxes, according to a new warning message being issued by Google.
Viewed in a Gmail inbox, on a Google home page or in the Google Chrome browser, thousands of users received a warning that read “Your account could be at risk of state-sponsored attacks.” Google first created the warning message in June, but it appears to be picking up steam. The emails blocked by Google's new filter may contain links to malicious websites designed to steal personal information or implant malware, or they may contain malicious attachments.
Google has said they will not share how they know that certain attacks are state-sponsored, because it's a matter of security. Mike Wiacek, a manager on Google’s information security team, said that Google saw an increase in state-sponsored activity coming from several different countries in the Middle East, which he declined to name specifically, The New York Times reported.
While Google is refusing to point the finger at any particular nation of origin, the questionable practice of secretly monitoring the populace with software disguised as a crime-fighting tool was recently uncovered by security researchers studying Iran, Qatar, the United Arab Emirates and Bahrain. Not coincidentally, Iran recently ranked worst in the world for Internet freedom, according to a Freedom House report. As a region, the report rated the Middle East as "two percent" free when it comes to the Internet.
Several American banks were hit by cyberattacks last week that reportedly came from the Middle East, The Times reported.
If President Obama's rumored cybersecurity executive order ever comes to fruition, it could prove good publicity for his administration as the issue is now being illuminated to the public in more tangible ways. Congress has yet to make significant progress on drafting legislation protecting national infrastructure from foreign cyberattacks. | <urn:uuid:5954886c-7fbb-4a20-a3d3-f15657dcf5e9> | CC-MAIN-2017-04 | http://www.govtech.com/security/Google-Warns-Users-of-Middle-Eastern-Cyberattacks.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00478-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95362 | 416 | 2.5625 | 3 |
The European Space Agency this week said it was putting together a new space telescope that would take aim at discovering habitable exoplanets in our solar system.
By integrating 34 separate small telescopes and cameras, the Planetary Transits and Oscillations of stars, or PLATO, will be parked about 1.5 million km beyond Earth and monitor what the ESA called "relatively nearby stars, searching for tiny, regular dips in brightness as their planets transit in front of them, temporarily blocking out a small fraction of the starlight. "
+More on Network World: How to protect Earth from asteroid destruction+
The PLATO mission, which wouldn't launch until 2024, will measure the sizes, masses, and ages of the planetary systems it finds, so detailed comparisons with our Solar System can be made.
"In the last 20 years more than one thousand exoplanets have been discovered, with quite a few multi-planetary systems among them," said mission leader Dr Heike Rauer at DLR, the German Aerospace Center. "But almost all of these systems differ significantly from our Solar System in their properties, because they are the easiest-to-find examples. PLATO firmly will establish whether systems like our own Solar System, and planets like our own Earth are common in the Galaxy."
PLATO will use an array of telescopes rather than a single lens or mirror. PLATO will use high quality cameras, and will have the advantage of observing continuously from space, without the interruption of sunrise, or the blurring caused by the Earth's atmosphere, the ESA stated.
Its position will let PLATO discover planets smaller than Earth, and planets at distances from their host stars similar to the Earth-Sun distance. So far, only a few small exoplanets are known at star-planet distances comparable to or greater than Earth's. Unlike previous missions, PLATO will focus on these planets, which are expected to resemble our own Solar System planets, the ESA stated.
The mission sounds most like NASA's successful Kepler space telescope which has catalogued some 3,583 planet candidates. Recently released analysis led by Jason Rowe, research scientist at the SETI Institute in Mountain View, Calif., determined that the largest increase of 78 % was found in the category of Earth-sized planets. Rowe's findings support the observed trend that smaller planets are more common, NASA stated.
But Kepler has been out of commission since May 2013 with technical problems. Currently NASA and Ball Aerospace engineers say they have developed a way of recovering Kepler and tests to repurpose the craft are ongoing.
The ESA pointed out some interesting factoids about the PLATO mission:
- During its six year long planned mission, PLATO will observe one million stars, leading to the likely discovery and characterization of thousands of new planets circling other stars. PLATO will scan and observe about half the sky, including the brightest and nearest stars.
- The satellite will be positioned at one of the so-called Lagrangian Points , where the gravitational pull of the Sun and the Earth cancel each other out so the satellite will stay at a fixed position in space. Each of the 34 telescopes has an aperture of 12 centimeters.
- The individual telescopes can be combined in many different modes and bundled together, leading to unprecedented capabilities to simultaneously observe both bright and dim objects.
- PLATO will be equipped with the largest camera-system sensor ever flown in space, comprising 136 charge-coupled devices (CCDs) that have a combined area of 0.9 square meters.
- The accuracy of PLATO's astroseismological measurements will be higher than with previous planet-searching programs, allowing for a better characterization of the stars, particularly those stellar-planetary configurations similar to our Solar System.
- The scientific objective is based on previous successful projects, like the French-European space telescope CoRoT or NASA's Kepler mission. It will also take into account the mission concepts that are currently under preparation which will "fill the gap" between now and PLATO's launch in 2024 - NASA's Transiting Exoplanet Survey Satellite (TESS) mission and ESA's ChEOPS mission.
Check out these other hot stories: | <urn:uuid:15a7ca7a-4d13-4215-b079-49c99bebd348> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2226388/security/new-planet-hunter-with-34-telescopes-to-set-sights-on-deep-space.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00386-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.924955 | 866 | 3.875 | 4 |
In telecommunications, RS-232 is the traditional name for a series of standards for serial binary single-ended data and control signals connecting between a DTE (Data Terminal Equipment) and a DCE (Data Circuit-terminating Equipment). It is commonly used in computer serial ports. The standard defines the electrical characteristics and timing of signals, the meaning of signals, and the physical size and pinout of connectors. The current version of the standard is TIA-232-F Interface Between Data Terminal Equipment and Data Circuit-Terminating Equipment Employing Serial Binary Data Interchange, issued in 1997. An RS-232 port was once a standard feature of a personal computer for connections to modems, printers, mice, data storage, uninterruptible power supplies, and other peripheral devices. However, the low transmission speed, large voltage swing, and large standard connectors motivated development of the universal serial bus, which has displaced RS-232 from most of its peripheral interface roles. Many modern personal computers have no RS-232 ports and must use an external converter to connect to older peripherals. RS-232 devices are still found, especially in industrial machines or scientific instruments.
Fiber optic transmission offers the benefits of wide bandwidth, immunity to EMI/RFI interference, and secured data transmission. The fiber-optic converter is used as an RS-232/422/485 point-to-point or point-to-multipoint connection for transmitting and converting full/half-duplex signals and their equivalents within a fiber optics environment. Fiber optics is the perfect solution for applications where the transmission medium must be protected from electrical exposure, lightning, atmospheric conditions or chemical corrosion.
RS-232 to Fiber Converter integrates serial and multi-mode/single mode fiber networks in one flexible package. The Industrial Series (RS-232 to Fiber) Converter provides a reliable and economical solution for your industrial Ethernet environment. It offers seamless integration while working as transparent device between your serial devices and industrial Ethernet. The Converter has operating temperature range from 0 to 50°C. Fiber enables you to extend the distances up to 120km.
However, fiber optic media converter gigabit is a kind of 10/100/1000Mbps intelligent adaptive fast Ethernet media converter. It can extent the transmission distance of a network from 1000m over copper wires to 120km in which there is no help of any other converter. And it can implement data transmission between twisted pair electrical signals and optical signals which are the two types of network connection media. It can create a simple, cost-effective Gigabit Ethernet-fiber link – transparently converting to/from 1000Base-T Ethernet, and 1000Base-SX optical signals to extend an Ethernet network connection over LC terminated multimode fiber at distances up to 550 meters (0.3 miles). It is easy to set up and install in a matter of seconds, this compact fiber media converter can be wall-mounted for out-of-the-way installations. The durable all steel chassis and heavy-duty power supply can withstand challenging industrial environments, and 6 integrated LED indicators make monitoring the Ethernet/fiber link easy. | <urn:uuid:da8dbd42-eaee-4bb3-baa9-214d6663f2d4> | CC-MAIN-2017-04 | http://www.fs.com/blog/rs-232-to-fiber-converter-and-fiber-media-converter-gigabit.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00141-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.885524 | 635 | 3.46875 | 3 |
Online security is becoming a bigger issue than ever, as 2016 seemingly brought one of the worst years ever when it comes to staying secure and private online. 2017 is not promising to be any better, considering increasingly restrictive surveillance laws are being passed around the world and authoritarian regimes are increasingly censoring the Internet.
When it comes to using public Wi-Fi, and especially managing financial transactions, it’s known that it’s not safe to use one’s credit card or to disclose any other personal information. For example, it has been shown that a Visa credit card can be hacked online in 6 seconds. Using cryptocurrency helps users stay anonymous to some extent– but what are the other ways to remain completely invisible online?
NordVPN, a Virtual Private Network service provider, identifies five key services that could significantly enhance your online anonymity and security.
Bitcoin. Bitcoin is a decentralized currency that does not belong to any country – just its user. And when it comes to security, it’s hard to beat it. Bitcoin online transactions are conducted without disclosing any personal financial information. When it comes to privacy, it’s certainly reassuring that no one can trace who is the owner of a certain bitcoin account. However, not all merchants accept bitcoin. In those cases when using a credit/debit card is the only option – extra security steps should be taken. Using strong passwords and updating them often, ensuring the websites are trusted (double check for https), being wary of any suspicious redirects and using trusted encryption services (i.e. VPN service) to protect one’s Internet traffic are bare minimum.
Encrypted Email. While bitcoin is great for financial transactions online, it’s advisable to stay private while conducting any other activities – such as emailing. Emails might also contain some private and sensitive information, which could be easily intercepted by hackers or any unwanted snoopers. The solution is to use one of the encrypted email services. There are a few good examples, including Tutanota, or the Gmail-like ProtonMail that offer an automatic end-to-end encryption, and no personal information is required to create a secure email account.
Encrypted Messaging. Everybody uses their mobile devices for instant messaging – but how safe are regular communication apps? For example, WhatsApp has received some harsh criticism for tracing user chats even after their deletion. Signal, on the other hand, is an encrypted messaging and voice calling app that provides end-to-end encryption by default to secure all communications. The app can also verify the identity of people one is messaging with and the integrity of the channel they are using. When texting with non-Signal users, one has an option to invite them to an encrypted conversation via Signal.
PGP (Pretty Good Privacy). If a user is looking for an advanced option to secure their communication and personal files, it might be wise to turn to PGP, which is actually one of the most popular encryption softwares used worldwide. OpenPGP is used to encrypt data and create digital signatures and could be used to encrypt your personal files or to exchange encrypted communication. It protects all communication with a digital signature and is available for all operating platforms.
VPN (Virtual Private Network). Anyone who is taking their online security and privacy seriously, will use a VPN – a Virtual Private Network. A VPN encrypts all user’s Internet data into a secure tunnel and creates a secure connection between one’s device and a VPN server. All the information traveling between the user’s Internet-enabled device and the secure server remains invisible to any third party. Those who want a guaranteed protection, will be disappointed that not all VPNs accept bitcoin as method of payment – but there are a few that do. NordVPN, for example, allows to pay by bitcoin and, most importantly, does not store any logs. It also offers an option to encrypt all the data twice for extra safety, which is a rare feature for a VPN. A helpful kill-switch feature allows a user to select Internet programs that would be terminated if the Internet connection dropped for any reason, to make sure that no unprotected Internet activity was exposed. Privacy issues have taken another shape completely over the past year. 31% of Internet users used a VPN in 2016, and VPNs will be increasingly popular in 2017 as online security issues grow to monumental proportions.
In addition to these super tough security measures, anonymity-minded Internet users should be more vigilant, use extra caution when sharing information or opening messages from unknown senders, and make sure that their device’s firewall is turned on and a reliable anti-virus program is installed and kept up to date.
Europe has proposed a global Internet Treaty to protect the net from political interference and place into international law its founding principles of open standards, net neutrality, freedom of expression and pluralistic governance.
The draft law was compared to the 1967 Outer Space Treaty as the Council of Europe presented it to web luminaries from around the world at the Internet Governance Forum (IGF) in Vilnius, Lithuania, this week.
Dignitaries warned that governments were threatening the internet with fragmentation by bringing it under political control.
The proposed Internet Treaty would require countries to sustain the technological foundations that made the network of networks possible.
"Openness and interoperability" and "network neutrality" would become two of 12 Principles of Internet Governance.
"The fundamental functions and the core principles of the internet must be preserved in all layers of the internet architecture with a view to guaranteeing the interoperability of networks in terms of infrastructures, services and contents," says the draft treaty.
The defining characteristic of the internet, that it left any information processing to the end points of the network and did not interfere with traffic that passed across it, was also proposed as a principle of net neutrality.
"The end-to-end principle should be protected globally," says the report.
Net neutrality has become an increasingly heated topic for debate, as internet giants such as Google discuss moves that critics say could lead to a "two-tier" web.
The proposed law would also require global co-operation in the protection of critical internet infrastructure. It would similarly preserve the multi-stakeholder system of governance that has forced governments to subordinate their desires to regulate the net to forums that give an equal voice to engineers and representatives of commercial and civil society.
The treaty would make the system of internet governance overseen by ICANN adhere to international human rights law. The treaty's principles would furthermore uphold rights to freedom of expression and association and require states to preserve "human dignity" and the "free and autonomous development of identity" on the internet.
Rolf Weber, law professor at the University of Zürich and one of the team of experts who drafted the treaty, told an audience at the IGF this week that it was like the 1967 Space Treaty, which decreed that the exploration of outer space should be done only for the benefit of all nations.
Space exploration should be done "without discrimination...on a basis of equality...and there shall be free access to all areas of celestial bodies", according to the 1967 treaty.
The Internet Treaty would establish a principle of cross-border co-operation in the identification and neutralisation of security vulnerabilities. They should "co-operate mutually, in good faith" in protecting the trans-national internet from cyber attacks.
Elvana Thaci, the Council of Europe official co-ordinating the treaty's introduction, said it would require countries to share information about security vulnerabilities with one another and to take reasonable steps to encourage the private sector to do the same. But it would not mandate that companies share information about data security.
The proposal was made as the Internet Governance Forum, which attempted to introduce governments to the idea that internet regulation was a bottom-up, multi-stakeholder affair, reaches the end of its five-year mandate. The United Nations General Assembly will decide its fate on 22 October under pressure from some states to bring the internet firmly under government control.
The UN Secretary General has recommended the IGF mandate be renewed.
There is also said to be pressure, however, for the internet addressing system, run by ICANN (the Internet Corporation for Assigned Names and Numbers) from the internet's spiritual home in California, to be made directly answerable to governments, perhaps under the UN.
The Internet Treaty would preserve the multi-stakeholder system of internet governance ICANN operates under US government edict in the interests of the worldwide internet community. It would not prevent the system being subsumed into a democratically accountable international body. | <urn:uuid:25ad5e6d-e02e-4e7f-905a-96340fb6eff3> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/1280093816/Europe-calls-for-global-internet-treaty | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283301.73/warc/CC-MAIN-20170116095123-00167-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952107 | 838 | 2.875 | 3 |
Should a key pair be shared among users?
Users who share a private key can impersonate one another (that is, sign messages as one another and decrypt messages intended for one another), so in general, private keys should not be shared among users. However, some parts of a key may be shared, depending on the algorithm (see Question 3.6.12).
In RSA, while each person should have a unique modulus and private exponent (that is, a unique private key), the public exponent can be common to a group of users without security being compromised. Some public exponents in common use today are 3 and 2^16 + 1; because these numbers are small, the public key operations (encryption and signature verification) are fast relative to the private key operations (decryption and signing). If one public exponent becomes standard, software and hardware can be optimized for that value. However, the modulus should not be shared.
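As a small illustration, mainstream crypto libraries generally default to one of these small exponents. The Python sketch below (using the third-party `cryptography` package, one such library among several; nothing here is mandated by this FAQ) generates two key pairs that share the public exponent 2^16 + 1 = 65537 while having distinct moduli:

```python
from cryptography.hazmat.primitives.asymmetric import rsa

# Two users generate RSA keys with the same small public exponent e = 65537 = 2**16 + 1.
alice = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Sharing e is harmless; what must remain unique is each user's modulus n
# (and therefore the private exponent d).
assert alice.public_key().public_numbers().e == bob.public_key().public_numbers().e
assert alice.public_key().public_numbers().n != bob.public_key().public_numbers().n
```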
In public-key systems based on discrete logarithms, such as Diffie-Hellman, DSA, and ElGamal (see Question 3.6.1, Section 3.4, and Question 3.6.8), a group of people can share a set of system parameters, which can lead to simpler implementations. This is also true for systems based on elliptic curve discrete logarithms. It is worth noting, however, that this would make breaking a key more attractive to an attacker because it is possible to break every key with a given set of system parameters with only slightly more effort than it takes to break a single key. To an attacker, therefore, the average cost to break a key is much lower with a set of common parameters than if every key had a distinct set of parameters.
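The shared-parameter idea can be sketched the same way for a discrete-logarithm system: one set of Diffie-Hellman group parameters is generated once and handed to every member of a group, and each member still derives a unique key pair from it. Again a hedged Python example using the `cryptography` package; the parameter sizes are chosen only for illustration.

```python
from cryptography.hazmat.primitives.asymmetric import dh

# One set of system parameters (prime modulus p and generator g), generated once...
parameters = dh.generate_parameters(generator=2, key_size=2048)

# ...and shared by every user in the group; each user still has a unique key pair.
alice_private = parameters.generate_private_key()
bob_private = parameters.generate_private_key()

# Any two users can then agree on a pairwise shared secret.
assert (alice_private.exchange(bob_private.public_key())
        == bob_private.exchange(alice_private.public_key()))
```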
Cellular phones have become a ubiquitous part of U.S. society over the last decade. But their convenience comes at a price.
First there was fear that cellular phones leaked radiation, causing brain cancer. Then they were found to ignite sparks at gas stations. The latest debate is over cellular phone use in cars, where they often prove to be a dangerous distraction to drivers.
In 1999, Brooklyn, Ohio, became one of the first jurisdictions in the United States to forbid drivers from using handheld cellular phones while driving. At least 10 other local municipalities have done the same.
California, Massachusetts and Florida have already enacted statewide laws limiting cellular-phone use and 40 others have considered following suit.
On a federal level, Rep. Gary Ackerman, D-N.Y., and Sen. John Corzine, D-N.J., introduced legislation in May that would force states to ratify laws halting handheld cellular-phone use or lose federal highway funds.
Just how bad is this problem? And what is government's role in solving it? Are hands-free devices the way to go or do they distract drivers as well? Our panel of experts makes the call.
Jordan Goldes is the press secretary for Rep. Gary Ackerman, sponsor of the Call Responsibly and Stay Healthy Act 2001 (CRASH).
"When we are taught to drive, we [are] told to keep both hands on the wheel -- one at ten oclock and one at two oclock. But with the increasing use of cell phones by drivers, the position has evolved to one hand -- or in many cases one knee -- on the wheel and one hand on the cell phone."
"The most basic role of government is to protect the safety of its citizens, whether its abuse of cell phones or wearing seat belts or obeying speed limits. This is what [CRASH] is doing. Its not even a partisan issue. You have republicans and democrats all over the country who are supporting this type of legislation in their respective states."
"Our legislation would ban the use of handheld cell phones while driving. It would not prevent people from talking on the phone while driving; it would simply not allow them to hold their phone while driving. People could use hands-free devices, speakerphones, earpieces, microphones. We dont want to take phones away from drivers; we just want to make them safer."
Dee Yankoskie is the manager of wireless education programs for the Cellular Telecommunications and Internet Association.
"Given the states that have reported on accidents involving wireless phone use, cell phones arent contributing to a significant number. What we want state legislatures to do, if they are concerned with this issue, is to take a three-pronged approach to the larger issue of inattentive driving."
"Number one: We want all 50 states to [change] accident forms to read, Was there distraction involved? If so, what was it? Was the driver talking on the phone, drinking a beverage, did they drop something and were reaching to pick it up, were they eating their lunch?"
"Number two: No new legislation is necessary. Right now, law enforcement already has broad authority if they witness somebody driving erratically or deviating from their lane. No matter what activity theyre engaging in, they can be pulled over and penalized for their irresponsible driving."
"Number three. We believe education is the key. An overwhelming majority of the studies that have looked at this issue have said just that. If you truly want to be effective, sanctions arent the way to go, education is."
Adam D. Thierer
Adam D. Thierer is director of telecommunications studies for the Cato Institute, a nonpartisan public-policy research foundation headquartered in Washington, D.C.
"Statistics show that influences outside of the car are by far the most distracting. But inside the car, things like tinkering with your car stereo or CD player or engaging in a conversation or an argument with a passenger, your child or your spouse are the types of activities that prove to be far more distracting and dangerous [than cell phones]. Yet we dont try to ban those activities."
"So now were talking about a technology-specific ban for a unique class of activity -- that of dialing up cell phones, which are not necessarily as dangerous as people make them out to be -- and [the ban is] unnecessary in light of the fact that technology is really solving this problem already. What I mean by that is you have devices -- hands-free devices, clip-on microphones and ear pieces, along with speed dialing -- that make it so you can, with one punch of a button, make a call without ever having to have a device in your hands."
"Increasingly were seeing onboard communications devices integrated into automobiles. With the advent of voice-recognition technology, you wont have to do anything more than simply say, Call home, or Call mom, and the phone will do the rest. Its important for the policymakers to exercise a degree of patience and humility and understand that technology is solving this problem. New laws are not only unnecessary, but might impose added obligations on our personal liberties."
Tom Dingus is the director of the Virginia Tech Transportation Institute.
"Weve done research on cell-phones and other handheld and in-vehicle devices in terms of distraction and attention demand. The studies range from simulator studies to on-route studies over a period of about 15 years."
"Cell phones create a distraction. Theres no doubt about that. The distraction is not only the manual manipulation of phones but also the conversation that creates risk. Traditionally in an automobile there is nothing that is so critical that regardless of the traffic situation, I need to do this right now. People dont want to lose a call so theyre willing to behave inappropriately in a driving situation to avoid losing the call."
"The other aspects of cell phones are more standard. The dialing task, for example, is a lot different than adjusting the radio. The conversation task has some risk associated with it. The combination of the three is killing a fair number of people."
"On top of it, the cycle on the cell phone in terms of technology is roughly 18 months. Now you have Internet-capable phones, which are more complex and more capable. If you project that people are going to be using those in cars, as well as PDAs and other portable devices that are increasing in use rapidly, this could be an epidemic." | <urn:uuid:474ff52b-0a15-458c-a2b4-a87d9922be20> | CC-MAIN-2017-04 | http://www.govtech.com/magazines/gt/The-Word-On.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00103-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956313 | 1,334 | 2.859375 | 3 |
The University of Florida speeds up memory intensive gene research with Dell HPC solution
The whirlwind speed of progress in the computer industry happens at an exponential rate that can be predicted. According to Moore’s Law, the processing speed, memory capacity and even the number of pixels in digital cameras double every two years. But there is a branch of technology that’s evolving even faster: gene sequencing, the ordering of nucleotides that make up a strand of DNA in an organism. Gene sequencing is the basis of the whole group of life sciences that study the genetic makeup of humans and other organisms in order to extend life.
Compared with capillary-based sequencing technology of just a few years ago, today’s next-generation sequencing is able to produce a million times more data, which drives up the demands for computation and storage.
“We’ve gone from first-generation DNA sequencing instruments in the 1990s that analyzed 384 sequences at one time to instruments deriving 400 million sequences in parallel,” says Bill Farmerie, associate director of the Interdisciplinary Center for Biotechnology Research (ICBR) at the University of Florida. “And as the volume of data is growing exponentially, that cost of data production has come down by a factor of 100,000.”
Speeding up the pace of biotechnology research
The University of Florida is working to satisfy the demand for faster computers in its ICBR. There, scientists are working with Dell and Intel technology to construct the next generation of high performance computing (HPC) clusters that can keep up with the computational needs of the gene sequencing industry.
“We need to attack larger problems, larger genomes, larger samples and just get larger views of the systems we are studying,” says Aaron Gardner, cyber infrastructure director, Interdisciplinary Center for Biotechnology Research, University of Florida.
As the query sets and databases the queries are run against grew over time, the amount of memory that was available on a computer became of paramount importance. For best performance, it was necessary to cache numerous databases in memory and parallelize the algorithms being used so that they could all share memory between the nodes. The concept of symmetric multiprocessing (SMP) in the HPC cluster evolved to become virtual symmetric multiprocessing (VSMP), in which multiple physical systems appear to function as a single logical system.
Achieving very large shared memory
“We found that traditional HPC on a cluster wasn’t working because we had hard limits on how much memory was available on a single node, and often the software was ill equipped to be able to distribute these databases across all the nodes in a cluster,” says Gardner. “The VSMP system allows us to have a very large shared memory space where we can cache in memory all of the sequence databases and all of the queries that we are searching against. This makes them accessible to all the processors at the same time with minimal latency.”
To build the cluster, the ICBR populated a Dell PowerEdge M1000e modular blade enclosure with 16 Dell PowerEdge M610 blade servers with Intel Xeon processors 5560 and DDR3 memory, which provides approximately one terabyte of shared memory. A quad data rate (QDR), 40 Gb per second Mellanox M3601Q InfiniBand blade switch sourced through Dell busses all the memory and CPU calls between blades.
“We specifically waited for the Intel Xeon processors 5500 series to be developed because of the Intel QuickPath technology which enables all the cores on the individual CPUs, as well as the adjacent sockets within a system, to more quickly route data between their caches,” says Gardner. “Using DDR3 memory with Intel Xeon processors 5500 series is a good match because the processor has higher available bandwidth and memory interface. When we paired the Intel processor with DDR3 with QDR InfiniBand, we were able to minimize latency and improve throughput in the VSMP system for memory performance. The Intel Xeon processors 5500 series alone give us a raw performance improvement of 40 percent up from the previous generation of Intel processors, so we’re building our system on a much faster processor.”
ICBR chose the Dell PowerEdge M1000e blade chassis for the VSMP system for multiple reasons. “It was the only system that we considered that could get us the buffered DDR3 DIMMs that we needed within our time constraints,” says Gardner. “Of the systems we considered, it was also the only one available with QDR InfiniBand, and it facilitates the InfiniBand interconnect between the nodes using the backplane, so there are no cables involved. That increases the reliability and uptime of the VSMP system. So the Dell system was the most complete system, feature wise, for deploying the VSMP solution, as opposed to the others we considered.”
Reducing management overhead
The dual Dell Chassis Management Controllers (CMC) in the PowerEdge M1000e modular blade enclosure provide redundant, secure access paths for administrators to manage the blades from a single console as a single system image. Integrated Dell Remote Access Cards for all the blades and enclosure enable remote management, which, along with reduced complexity on the management end, helped to give Gardner’s team more time to work with researchers on how to best utilize the resources.
“Another factor that we like is the power footprint,” says Gardner. “The Dell PowerEdge blade system has only six power supplies, three of which are required to run the system, and those are higher efficiency power supplies. It helps us to save 6U-10U of rack space and also save on the limited resources we have in our server room for power and cooling versus having power supplies in each discrete system.”
The VSMP technology itself is provided by Dell Business Partner ScaleMP. With ScaleMP vSMP Foundation for SMP software, multiple physical systems appear to function as a single logical system. The innovative ScaleMP Versatile SMP (vSMP) architecture aggregates multiple x86 systems into a single virtual x86 system, delivering an industry-standard, high-end SMP computer. ScaleMP uses software to replace custom hardware and components to offer a new, revolutionary computing paradigm.
Fast setup and deployment
Once ICBR received the Dell enclosure and blades, it took Gardner and his team about three hours to get it racked and powered up and do the diagnostics. “We were taking our time,” says Gardner. “We could have done it faster.”
Deploying the VSMP software took about one day with a ScaleMP representative on site facilitating the VSMP technology. “We were able to accomplish that because the Dell hardware functioned without a hitch,” says Gardner. “And also because we had already created a standardized hardened production image that we were able to deploy on the system. We’ve purchased a lot of hardware from Dell in the past, so it was very easy for us to work within the existing relationship and arrive at the VSMP solution quickly.”
Up to 160x faster results
Prior to the VSMP solution, there were several applications that the university was running on a standard SMP cluster of x86 machines with unsatisfactory results. “We were hitting the memory limits on individual nodes, which meant that the jobs took longer and sometimes just failed,” says Gardner. “So having this larger memory system has enabled us both to get jobs done and to see them through to completion. We’ve seen some substantial performance improvements because we’re able to run all of the data in memory, without going to disk. For example, one assembly application had taken 10 days on our old cluster, and it took only an hour and a half to complete on the Dell VSMP cluster.”
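(A quick check of the arithmetic behind that headline number: 10 days is about 240 hours, and 240 divided by 1.5 is 160, which is where the roughly 160x figure comes from.)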
In addition, the university is able to allow researchers to run these jobs in an interactive, real-time manner, rather than waiting in a queue. This enables them to experiment more, and even develop new analysis methods with the system. “This really helps to achieve a better answer in terms of the analysis we’re performing,” says Gardner. “For instance, a de novo assembly application was swapping to disk because all of the sequences and alignments could not fit into memory. Before we had the VSMP system, we would have had to resort to discarding some sequences, or assembling several smaller assemblies together. These approaches can sometimes produce inferior or misleading results, and cause you to lose consistency and depth with the assembly statistics the software captures. By being able to successfully assemble all of a project’s data at once and get the result back quickly, we are empowered to iterate parameters and adapt our analysis methods in near real time. This is preferable to waiting a week and being forced to work with whatever we get due to the time constraints involved in rerunning those types of jobs.”
Completing the circle
In addition to the Dell VSMP solution, ICBR has been purchasing Dell PowerConnect 6248, 6224, 6220, 5224 Gigabit Ethernet switches for networking infrastructure. “We like the stacking capabilities of the Dell switches,” says Gardner. “We’ve also purchased some Dell PowerConnect 8024 10 Gigabit Ethernet switches as a front-end interconnect to replace our existing Gigabit infrastructure.
With recent 10GbE hardware we are starting to see the performance improve to an acceptable level and we can run almost any protocol over 10GbE. We can also pull a larger portion of our computing staff into supporting research computing because the networking and storage protocols and paradigms with 10GbE are familiar to them.”
As ICBR provides its researchers with the faster processing power they demand, the science of gene sequence will speed up and produce faster results for the life sciences. The immediate result will be more research projects and more grants to fund them.
“It’s all in the papers,” says Farmerie. “By publishing papers, our scientists use the data from the Dell VSMP cluster to generate the next round of proposals that will attract funding. So there’s the cycle that has to be completed each time in order to drive the process of science further down the road.”
For more information go to: DellHPCSolutions.com | <urn:uuid:487832c0-84e8-4f3b-8250-51d0b57690a4> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/05/23/the_university_of_florida_speeds_up_memory_intensive_gene_research_with_dell_hpc_solution/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00315-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.942786 | 2,151 | 2.9375 | 3 |
In recent years, Fiber to the Home (FTTH) has started to be taken seriously by telecommunication companies around the world, and enabling technologies are being developed rapidly. There are two important types of systems that make FTTH broadband connections possible. These are active optical networks (AON) and passive optical networks (PON). By far the majority of FTTH deployments in planning and in deployment use a PON in order to save on fiber costs. PON has recently attracted much attention due to its low cost and high performance. In this post, we are going to introduce the ABC of PON, covering the basic components and related technology, including OLT, ONT, ONU and ODN.
First of all, it is necessary to give a brief introduction to PON. In contrast to AON, in a PON multiple customers are connected to a single transceiver by means of a branching tree of fibers and passive splitter/combiner units, operating entirely in the optical domain and without power. There are two major current PON standards: Gigabit Passive Optical Network (GPON) and Ethernet Passive Optical Network (EPON). But no matter the type of PON, they share the same basic topology. A Gigabit Ethernet Passive Optical Network (GEPON) system is generally composed of an optical line terminal (OLT) at the service provider’s central office and a number of optical network units (ONUs) or optical network terminals (ONTs) near end users, as well as the optical splitter. In addition, the optical distribution network (ODN) is used for the transmission between the OLT and the ONU/ONT.
The OLT is a piece of equipment integrating L2/L3 switching functions in a GEPON system. In general, OLT equipment contains a rack, a CSM (Control and Switch Module), an ELM (EPON Link Module, PON card), redundant -48V DC power supply modules or one 110/220V AC power supply module, and fans. Of these parts, the PON card and power supply support hot swapping, while the other modules are built in. The main function of the OLT is to control the information flow across the ODN, in both directions, while being located in a central office. The maximum distance supported for transmission across the ODN is 20 km. The OLT handles two flow directions: upstream (receiving and distributing different types of data and voice traffic from users) and downstream (receiving data, voice and video traffic from a metro or long-haul network and sending it to all ONT modules on the ODN).
The ONU converts optical signals transmitted via fiber into electrical signals. These electrical signals are then sent to individual subscribers. In general, there is a distance or other access network between the ONU and the end user’s premises. Furthermore, the ONU can send, aggregate and groom different types of data coming from the customer and send them upstream to the OLT. Grooming is the process that optimises and reorganises the data stream so it is delivered more efficiently. Bandwidth allocation allows the smooth delivery of data to the OLT, even though it usually arrives in bursts from the customer. The ONU can be connected by various methods and cable types, like twisted-pair copper wire, coaxial cable, optical fiber or Wi-Fi.
Actually, the ONT is essentially the same as the ONU. ONT is an ITU-T term, whereas ONU is an IEEE term. They both refer to the user-side equipment in a GEPON system. But in practice, there is a small difference between ONT and ONU according to their location: the ONT is generally on the customer's premises.
ODN, an integral part of the PON system, provides the optical transmission medium for the physical connection of the ONUs to the OLTs. Its reach is 20 km or farther. Within the ODN, optical fibers, fiber optic connectors, passive optical splitters, and auxiliary components collaborate with each other. The ODN specifically has five segments which are feeder fiber, optical distribution point, distribution fiber, optical access point, and drop fiber. The feeder fiber starts from the optical distribution frame (ODF) in the central office (CO) telecommunications room and ends at the optical distribution point for long-distance coverage. The distribution fiber from the optical distribution point to the optical access point distributes optical fibers for areas alongside it. The drop fiber connects the optical access point to terminals (ONTs), achieving optical fiber drop into user homes. In addition, the ODN is the very path essential to PON data transmission and its quality directly affects the performance, reliability, and scalability of the PON system.
Buyer’s Guide: Fiberstore offers different types of OLT, ONU and ONT equipment for GEPON, which is new-generation PON equipment mainly used by telecommunications operators in FTTH projects. All of this equipment is characterized by high integration, flexible adaptation and reliability, and is capable of providing QoS, web management and flexible capacity expansion. For more information, please contact us at firstname.lastname@example.org.
Ransomware: the Tool of Choice for Cyber Extortion
Blackmail Over the Internet
Ransomware is malware that typically enables cyber extortion for financial gain. Criminals can hide links to ransomware in seemingly normal emails or web pages. Once activated, ransomware prevents users from interacting with their files, applications or systems until a ransom is paid, typically in the form of an anonymous currency such as Bitcoin. Ransomware is a serious and growing cyber threat that often affects individuals and has recently made headlines for broader attacks on businesses. Payment demands vary based on targeted organizations, and can range from hundreds to millions of dollars.
Once infected, a victim has little recourse. If they do not pay the ransom, they suffer business downtime, loss of sensitive information or any other penalty specified by the attacker. And even when they do pay the ransom, they remain vulnerable to attack from the same attacker or a new one, and reward attackers for their successful tactics.
Usually, if you have to choose whether to pay a cyber ransom, it’s too late.
DANGERS OF RANSOMWARE
Once ransomware infects a user’s system, it either encrypts critical files or locks a user out of their computer. It then displays a ransom message that usually demands virtual currency payment in exchange for a cryptographic key to decrypt or unlock those resources. The message may also threaten to publicly release compromised data if the payment demand is not met.
Some ransomware can travel from one infected system to a connected file server or other network hub, and then infect that system.
The impact of ransomware is immediate, compared to stealthier malware such as those used in an advanced threat attack. As evidenced from recent headlines, there is growing concern among individuals, businesses and governments about the complex effects of ransomware, which include monetary damage and business downtime.
TYPES OF DAMAGE CAUSED BY RANSOMWARE
How to combat ransomware
Ransomware often uses the web or email to reach victim systems, so those are vectors that security teams must monitor for signs of attack.
Web-based attacks tend to use drive-by exploits that target browser, platform or system vulnerabilities, or rely on malicious URLs or malvertising that may redirect users to sites that host exploit kits. Once it takes hold of a system, it can travel to other connected systems or servers on the network. Email-based ransomware is generally used in targeted attacks, and relies on a variety of methods, including phishing, spear phishing, malicious attachments and URLs.
To properly defend against ransomware, three things need to happen:
- The infection process must be thoroughly analyzed to determine the path of attack and system vulnerabilities
- The malicious code must be analyzed to determine its purpose and signs of activity (behavior-based analysis)
- Access from infected machines to command and control servers (used for data exfiltration or to download additional malware) must be blocked
This defensive approach relies on connecting warning signs across different vectors that are often overlooked by traditional security solutions. Advanced security solutions, such as FireEye Network Security (NX Series), FireEye Email Security (EX Series), or FireEye Email Threat Prevention Cloud (ETP) stop ransomware from taking control by blocking exploit kits, malware downloads and callback communications to the command and control servers. They can also minimize the overall impact of ransomware by tracing its attack path and methodology and sharing threat details to stop future attacks.
Criteria for Choosing a Cyber Security Defense Against Ransomware
Not all cyber security defenses are equal. Security providers are markedly different, in both offerings and results. Here are a few things the ideal cyber security vendor should offer to protect against ransomware threats:
- The defense should provide real-time protection to prevent or interfere with the activation of ransomware. This is of paramount importance, and far easier said than done. If a user sees a ransom demand, their data or system files have already been encrypted, and it's too late to address potentially serious damage.
- The defense should provide inline protection. In the case of email, it must act as a mail transfer agent (MTA). This serves two purposes. The first is to ensure that all email is routed through email defenses. The second is to reduce any lag in detecting threats, which is a risk with offline analysis or out-of-band solutions.
- The defense should be updated with actionable threat intelligence as quickly as possible. Security systems that allow days or weeks between updates give cyber attackers that much more time to successfully target different systems in your organizations with the same ransomware. Contextual intelligence can provide critical potential warning signs associated with ransomware to help prevent future attacks. Attacker intelligence and a thorough understanding of indicators of intent can even help predict, prepare for and block future threats.
- The defense should look for threats across all critical attack vectors. Because ransomware attacks use malicious URLs to lead users to malware or rely on communication with a command-and-control server to decide when to activate, protecting email is not enough. The best solutions will follow multi-stage attacks across multiple vectors to clearly identify seemingly harmless emails that contain links to sites that host ransomware.
Detect and Prevent Ransomware
Detect and block phishing emails and malware attachments that lead to ransomware attacks.
Analyze Ransomware Threats
Get Expert Services
Rely on expert monitoring to detect, validate and help respond to the latest ransomware threats.
Use assessment services to test and improve how well you detect and respond to ransomware threats. | <urn:uuid:b0d5515d-5049-42fd-8640-a5f21d472515> | CC-MAIN-2017-04 | https://www.fireeye.com/current-threats/what-is-cyber-security/ransomware.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282202.61/warc/CC-MAIN-20170116095122-00213-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.919849 | 1,119 | 3 | 3 |
Cooling in most data centers uses air, and it does so for many practical reasons. Air is ubiquitous, it generally poses no danger to humans or equipment, it doesn’t conduct electricity, it’s fairly easy to move and it’s free. But air also falls short on a number of counts: for instance, it’s not very thermally efficient (i.e., it doesn’t hold much heat relative to liquids), so it’s impractical for cooling high-density implementations. Naturally, then, some companies are turning to water (or other liquids) as a means to increase efficiency and provide more-targeted cooling for high-power computing and similar situations. But will water wash away air as competition for cooling in the data center?
One of the main draws of liquid cooling is the greater ability of liquids (relative to air) to capture and hold heat—it takes more heat to warm a certain amount of water to a given temperature than to warm the same amount of air to the same temperature. Thus, a smaller amount of water can accomplish the same heat capture and removal as a relatively large amount of air, enabling a targeted cooling strategy (the entire data center need not be flooded to keep it sufficiently cool). And with the rising cost of energy and the growing power appetite of data centers, the greater energy efficiency of water is a temptation to companies.
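To put rough numbers on that difference: at room conditions, a cubic meter of water stores about 4.2 MJ per degree of temperature rise (1,000 kg/m³ times roughly 4.2 kJ/kg·K), while a cubic meter of air stores only about 1.2 kJ (roughly 1.2 kg/m³ times about 1.0 kJ/kg·K), a gap of a few thousand times. The exact ratio depends on temperature and pressure, but it explains why a thin water loop can do the work of a large volume of moving air.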
For high-density implementations, air cooling is often simply insufficient—particularly when a whole-room cooling approach is used. In such cases, water (or, more generally, liquid) cooling offers an alternative in which not only can greater cooling efficiency be applied, it can be applied only where it is needed. That is, why cool an entire data center room when you can just cool, say, a particular high-density cabinet? And when individual cabinets are kept cool, they can also be placed much closer together, since air flow is less of a concern. Thus, liquid cooling can also enable more-efficient use of precious floor space.
Of course, no solution is ideal. Liquid cooling has its drawbacks, just as air does, and they’re worth noting. But it’s helpful to first review some liquid cooling solutions that are now in use in data centers.
Liquid Cooling Solutions
Liquids can serve as a medium for transporting heat in a number of different ways, ranging from broader cooling of the entire computer room to targeted cooling of particular racks/cabinets or even particular servers. The most basic option is to use water just as the means of moving heat from the computer room to the outside environment. In a whole-room approach to cooling, computer room air handlers (CRAHs) use chilled water to provide the necessary cooling; the water then moves the collected heat through pipes out of the building, where it is released to the environment (in part through evaporation, which is helpful but also can consume large amounts of water). This cooling approach is similar to the use of computer room air conditioners (CRACs).
A more targeted approach involves supplying cool water to the rack or cabinet. In the case of enclosed cabinets, only the space surrounding the servers need be cooled—the remainder of the room is largely unexposed to the heat produced by the IT equipment. This approach is similar to the whole-room case using CRAHs, except that only small spaces (the interiors of the cabinets) are cooled.
Cooling can be targeted even more directly by integrating the liquid system into the servers themselves. For instance, Asetek provides a system in which the CPU is cooled by a self-contained liquid apparatus inside the server. Essentially, it’s a miniature version of the larger data center cooling system, except the interior of the server is the “inside” and the exterior is the “environment.” Of course, the heat must still be removed from the rack or cabinet—which can be handled by any of the above systems.
At the extreme end of liquid cooling are submersion-based systems, where servers are actually immersed in a dielectric (non-conducting) liquid. For example, Green Revolution Cooling offers an immersion system, CarnotJet, that uses refined mineral oil as the coolant. Servers are placed directly in the liquid, yielding the benefits of water cooling without the hassles of containment, since mineral oil is non-conducting and won’t short out electronics. Using this system requires several fairly simple modifications, including use of a special coating on hard drives (since they cannot function in liquid) and removal of fans.
Going a step further, some solutions even use liquid inside the server only, avoiding the need for vats of liquid. In either case, however, the heat must still be removed from the server or liquid vat.
Now for the Downsides
Depending on the particular implementation, liquid cooling obviously poses some risks and other downsides. Air is convenient, partially because of its ubiquity; no one worries about an air leak. A water leak, on the other hand—particularly in a data center—is a potential disaster. Moving water requires its own infrastructure, and although moving air does as well, a leak from an air duct is much less problematic than a leak from a water pipe. Furthermore, water pipes can produce condensation, which can be a problem even in a system with no leaks. And with more-stringent requirements on infrastructure comes greater cost: infrastructure for liquid cooling requires greater capital expenses compared with air-based systems.
Another concern is water consumption. Evaporative cooling converts liquid water to water vapor, meaning the data center must have a steady supply. Furthermore, blowdown (liquid water released to the environment) can be problematic—not because it is polluted (it shouldn’t be), but simply because it’s warm. When warm water drains into a river or stream, for instance, it can damage the existing ecosystem.
Proponents of liquid cooling approaches cite the energy efficiency improvements that their systems can provide—estimates range as high as a 50% cut in total data center energy consumption. These numbers, of course, depend on the situation (where the particular data center starts and what type of system it installs), but the returns only begin after the infrastructure has been paid off. Also, given the greater infrastructure needs of liquid cooling, maintenance may be more of a concern. Furthermore, in cases where water is consumed by the cooling process, some energy efficiency comes at the cost of a high water bill.
Liquid Cooling to Stay
In some cases, particularly in lower-density data centers, air cooling may be the best option, if for no other reason than it is simpler and lower in cost to implement. Questions linger regarding at what point liquid cooling becomes financially beneficial (“Data Center Myth Disproved—Water Cooling Cost-effective Below 6kW/rack”). For high-density configurations, however, liquid may be the only viable option, and as high-power computing grows, so will an emphasis on liquid cooling. Air cooling simply has too many benefits—many of which center on its simplicity—to expect that liquid cooling will one day be the only cooling approach. Nevertheless, liquid/water cooling has an established position in the data center (particularly the high-density data center) that will grow over time. The only question is how much of the market will implement some form of liquid cooling, and which types of liquid cooling solutions will be the most prevalent.
The research team at e-mail security provider Eleven, published five tips today to help users prevent a botnet infection on their computer.
Botnets are groups of hijacked private and corporate computers controlled remotely and which are used, among other things, to send spam, usually without the user’s knowledge. Undetected, the installed malware often only runs in the background, making it more difficult for users to identify the risk and react accordingly. It is currently estimated that over 90% of all spam e-mails are distributed via botnets.
E-mail with malware as an attachment
The infection takes place through malware known as Trojans created specifically for the purpose of infection. The “classic” infection pathway is through e-mail attachments. The user is led to believe that the attachment contains essential information or an important document, such as an invoice, a tax form, or a package delivery notification. Instead, it contains malware that is activated as soon as the user attempts to open the attachment.
Unknown file attachments should therefore never be opened. The option “Hide extensions for known file types” should also be deselected in the system settings; doing so makes it possible to spot a fake PDF file whose real file extension is pdf.exe.
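As a small illustration of the pdf.exe trick, the Python snippet below flags attachment names whose apparent document extension hides an executable one. It is a toy example only; the extension lists are assumptions made for the illustration, not an exhaustive rule set, and real mail filters do far more than this.

```python
# Flag filenames like "invoice.pdf.exe" that disguise an executable as a document.
EXECUTABLE_EXTS = {".exe", ".scr", ".com", ".bat", ".js", ".vbs"}
DOCUMENT_EXTS = {".pdf", ".doc", ".docx", ".xls", ".xlsx", ".jpg"}

def looks_disguised(filename: str) -> bool:
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:
        return False  # no double extension at all
    inner, outer = "." + parts[-2], "." + parts[-1]
    return inner in DOCUMENT_EXTS and outer in EXECUTABLE_EXTS

print(looks_disguised("invoice.pdf.exe"))  # True, treat as suspicious
print(looks_disguised("report.pdf"))       # False
```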
A further infection pathway that has recently become more popular is drive-by malware. The malware is located on a manipulated website. When the site is opened in a web browser, the Trojan is installed on the user’s computer (drive-by). The malware is commonly disseminated via spam e-mails that contain links to the infected websites. If the user clicks on the link, the malware is installed in the background. Particularly popular lures include sites like Facebook, Twitter, or YouTube.
A message that feigns an important message, messages from friends, or a newly uploaded video is sent to the user in the hope that he will click the enclosed link. Users should never click on links in e-mails unless they can be one hundred percent sure that the message is real.
Plug-in and application risks
Hazard: data storage devices
A further risk that can lead to botnet infection is the use of external data storage devices like USB sticks or SD cards. Because most people aren’t able to recognize what is happening in the background during opening, the rule is: unknown data storage devices should always be checked by an up-to-date virus scanner before use.
Users should also avoid using data storage devices that are not their own whenever possible. In addition, the Windows option to always automatically treat a certain type of device, such as a USB stick, the same way when it is inserted should be deactivated.
Spam and virus protection
Despite all precautionary measures, when it comes to avoiding botnet infections, the most important element is reliable spam and virus protection. Users should check which spam and virus protection options are offered by their e-mail provider, e.g. their Internet provider or webmail service. A virus scanner should also be installed. Important: always keep the virus scanner up to date! | <urn:uuid:d3f18d0f-44fa-4061-9eb9-6f786f252949> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2012/09/26/how-to-prevent-a-botnet-infection/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284411.66/warc/CC-MAIN-20170116095124-00121-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947607 | 632 | 3.015625 | 3 |
Applies to Kaspersky Endpoint Security 8.0 for Mac
When the number of viruses had reached several hundred, antivirus experts came up with the idea of detecting new malicious programs unknown to antivirus software due to the absence of corresponding antivirus databases. They developed a heuristic analyzer.
The heuristic analyzer examines the code of executable files to detect new pieces of malware that bypass existing antivirus databases.
In other words, the heuristic analyzer has been developed to detect unknown viruses. When scanning a program, the analyzer emulates its execution and logs all its “suspicious” actions, e.g. opening/closing files, intercepting interrupts, etc. On the basis of these logs, a program can be recognized as possibly infected.
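As a purely illustrative toy (not Kaspersky's actual engine, whose emulation rules and weights are proprietary and unpublished), behaviour-based detection can be pictured as scoring the actions logged during emulation against a threshold:

```python
# Toy illustration: score "suspicious" actions logged while emulating a program.
# Action names, weights and the threshold are invented for the example.
SUSPICIOUS_WEIGHTS = {
    "writes_to_system_folder": 3,
    "modifies_autorun_keys": 3,
    "hooks_keyboard_interrupts": 2,
    "opens_many_executables": 2,
    "deletes_own_file": 1,
}
THRESHOLD = 5

def possibly_infected(logged_actions) -> bool:
    score = sum(SUSPICIOUS_WEIGHTS.get(action, 0) for action in logged_actions)
    return score >= THRESHOLD

emulation_log = ["writes_to_system_folder", "modifies_autorun_keys"]
print(possibly_infected(emulation_log))  # True: flag the file as possibly infected
```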
About 92% of new viruses are detected by the heuristic analyzer. This mechanism is very effective and rarely leads to false positives. Files that the heuristic analyzer suspects to be infected with a virus are called possibly infected.
The heuristic analyzer is built into Kaspersky Endpoint Security 8.0 for Mac. It processes all files that have already been scanned against the existing databases with a negative result.
Cloud should be viewed as a business enabler, making the power of technology more accessible to more people and organizations. If you look back at early technology, it was originally pretty restrictive and expensive. Cloud computing is often called a “democratizer” of technology because of its easy adoption and accessibility and the elimination of traditional business barriers to entry. Cloud, implemented to support a sound business strategy, can be a game-changer especially for small or start up organizations. Larger organizations benefit too, by changing the IT focus from legacy maintenance to business innovation. Another great benefit of cloud is that is can be built on the concept of open. Open platforms, tools and communities.
Open source – A concern of cloud, or any technology for that matter, is the costly proprietary lock in to license fees and vendors that increase costs and decrease flexibility. Open source platforms lessen those concerns. Developed by a community of collaborators (that are often also competitors), customers benefit from the pooled resources and talents of a cloud development community. As it is with Linux, the goal is to increase the overall quality of the platform and eliminate the inflexibility and costs inherent in proprietary technologies. You don’t like the tool or the vendor? Open platforms give you the ability to change with much greater ease. Being open source also gives the cloud platform the potential to capture the most innovative functionality. Collaborating around an open platform lowers the cost of research and development by spreading the investment across the community, which is passed on to end users in the form of lower costs.
Open application programming interfaces (APIs) – Much of the agility and efficiency of cloud comes from automating every aspect of a managed platform. This automation is achieved by making the cloud, essentially, "programmable,” through open APIs. Typically these are REST-based APIs that focus on simplicity and ease of use for developers and admins. In addition to APIs, many of the cloud management components are based on open configuration and scripting formats, for example Chef Recipes and Puppet modules. By opening up the APIs and other management "levers," clouds become easier to manage, enabling the self-service and automation value propositions of a cloud platform.
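To make "programmable" concrete, here is a minimal sketch of what driving a cloud through a REST API can look like. The endpoint, header name and response shape below are assumptions made for illustration only (loosely modeled on OpenStack-style compute APIs); any real cloud documents its own equivalents.

```python
import requests

API = "https://cloud.example.com/v2/servers"  # hypothetical endpoint
TOKEN = "example-token"  # placeholder for a token from the cloud's identity service

# List virtual servers and print their names and states.
resp = requests.get(API, headers={"X-Auth-Token": TOKEN}, timeout=10)
resp.raise_for_status()
for server in resp.json().get("servers", []):
    print(server.get("name"), server.get("status"))
```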
Open ecosystem – With the rapid explosion of data and the ability to turn it into business insights, IT provides an even greater strategic value to the business. The evolving demands caused by web and mobile delivery and access, social media, big data and BYOD are recent waves of IT transformation requiring IT functions to supply new tools and services. In this rapidly transforming IT context, having a dynamic, thriving ecosystem is key to business success. An open cloud ecosystem ensures that there are no barriers to entry for technologies except merit as determined by trial and adoption. Businesses no longer need to rely only on a handful of IT vendors to support their business strategy and goals. An open cloud ecosystem invites everyone to participate and compete, providing the end-user with far more choice. In this new era of open cloud community, businesses are no longer locked into providers, tools and platforms. With cloud, the sky is the limit.
Each of these three aspects of open cloud works in tandem to strengthen each other. Open source cloud platforms with quality open APIs encourage participation which in turn fuels further development within the ecosystem.
All of us lose when innovations and great products can’t get to market because of the lack of technological capability and capacity. That’s why I am excited about cloud as an alternative and committed to the concept of open as a tremendous customer and community benefit. Though I take a pragmatic view and know that we are in the midst of a relatively new computing concept, I’m really optimistic about its power and potential for so many new and existing businesses. More innovative ideas getting to market and technologies with fewer boundaries can only help us all. That’s the objective- open architectures that enable limitless possibilities to many more people and organizations. | <urn:uuid:6bf5b81e-f875-4481-872f-098ec83988d3> | CC-MAIN-2017-04 | http://www.computerworld.com/article/2474054/cloud-computing/open-the-possibilities-of-cloud.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00387-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.94504 | 796 | 2.90625 | 3 |
The Importance of Global Digital Literacy
With the world economy becoming more connected via networks and business environments becoming more computer-intensive in day-to-day job tasks, the issue of digital literacy stands out as a key differentiator of success at the individual, regional and national levels. Right now, too many people in too many places are lagging behind in these critical competencies. According to David Saedi, CEO of Certiport, which operates the foundational computer skills certification IC3, this gap in digital literacy needs to be addressed by educators and IT pros with a strong sense of social responsibility.
Certiport held its annual PATHWAYS Conference at the beginning of this month in Orlando, Fla. The event brought together technical pedagogues from around the world, who shared their insights on bringing IT education to a global workforce through certification and other means. “We understand that they need a forum to come face-to-face to exchange data, programs and best practices from various areas of the world,” Saedi said of the conference. “They realize that they have a lot more in common than their separate geographies allow them to share. Once they connect, they find they have a number of topics to share information on. They’re focused more on the society around them – that’s one of the benefits of this gathering – so they see the direct effect of what they do through the measurable outcome of certification.”
Expanding digital literacy is more than just a nice-sounding concept, he added. It serves the global economy by bringing more skilled professionals into the international workforce, whether they end up going into a technical field or not. “What we have done is to identify how digital literacy benefits everybody in the community, especially the ones who are at the lowest end and being least served by their communities,” Saedi explained. “The most recent analyses show that IT needs to be diffused across core curriculums, and virtually everybody needs to know the components of IT that enable them to participate in the digital economy. At Certiport, we don’t look at IT as a specific elitist niche. We look at it as diffusing these communications and technology components that need to be taught to everyone.
“Every one of the participants here carries that torch,” he added. “They want to make sure their communities are better equipped and that they’re getting the best value out of the infrastructure investments they’ve made. There’s something very important that we’ve stumbled across in the past two years and have now grabbed onto: the value of the individual as a change agent that allows IT diffusion to happen. It’s not just policies and funding. It’s the individual who says, ‘I will make an impact on my environment.'”
For more information, see http://www.certiport.com. | <urn:uuid:c47ca4b9-6627-49d3-bc5a-cb62801c8390> | CC-MAIN-2017-04 | http://certmag.com/the-importance-of-global-digital-literacy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00206-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.956289 | 606 | 2.625 | 3 |
Joined: 22 Apr 2006 Posts: 6258 Location: Mumbai, India
Comparison of these two seems a little strange to me - a stored procedure is a subroutine available to applications accessing a relational database system. In other words, when you talk about "stored procs" you are implicitly talking about relational databases, such as DB2, while a Remote Procedure Call (RPC) is a "protocol" that one program can use to request a service from a program located in another computer.
Maybe if you share the origin of this question, someone around can come up with a better answer.
How the Internet of Things Is Improving Education
The Internet of Things (IoT) is coming fast, with 5 billion devices and sensors already connected (Gartner) and 75 billion expected by 2020 (Morgan Stanley). These devices will be generating 20 zettabytes of data by 2025. There are smart homes, smart cities, smart hospitals, and smart classrooms. An important aspect of smart classrooms and schools is that the IoT not only brings advanced value to the physical structures and environment, but also improves education itself.
A smart school has a computational Internet of Things nervous system that keeps the facilities functioning smoothly and enables a higher level of personalized learning. Smart devices throughout the school and campus send data and receive instructions over the Wi-Fi network.
Keeping the Terminology Straight: Smart Schools, Smart Boards, Smart Machines, and Smart Devices
The meaning of the word “smart” is constantly evolving and applied differently in diverse contexts. Today’s smart school makes extensive use of the Internet of Things (IoT) and associated smart devices. Over the years, the term has also referred to schools that specifically used one type of smart device, the smart board. Another usage of the term is independent of technology. For example, one SMART school program keyed on the acronym for Strategic Measurable Attainable Results-oriented and Time-bound. Another, very different use of the term is to imply the use of artificial intelligence, as in “Smart Machines” defined by Gartner to describe systems capable of analyzing and predicting student success, like Watson Analytics.
In the context of the Smart School, as well as Smart City and Smart Hospital, the word smart implies an intelligence and awareness, as well as the ability to learn and transform itself. Indeed smart schools have an infrastructure that enables them to grow, adapt and progress as important sites for learning.
Exactly What Goes On In A Smart School?
At the smart school, student attendance is automatically tracked. Teachers are aware of how far each student has progressed in their reading and how well they understand the content. Testing is easily administered digitally, both high stakes summative testing and impromptu formative testing. Student well-being is monitored during athletic activities with smart wrist bands. Supply inventories and even waste baskets are tracked for optimal inventory management and cleanliness.
Outside of the school, buses are tracked on Google Maps and school parking lots are managed with smartphone apps. Campus lighting is optimized to the instantaneous needs based on ambient levels, weather conditions, local activity, and anticipated patterns.
Inside the school, airflow, air quality, temperature and humidity are constantly monitored and optimized in every possible learning space. Flat screen monitors are available in the classrooms to display data from student or teacher devices during collaborative sessions.
Smart Internet of Things Devices For Schools
The phrase Internet of Things generally refers to machine-to-machine (M2M) communications involving network-based remote sensors and actuators. Wireless sensors generate data (often Big Data) which can be stored and analyzed either on site or in the cloud. The range of smart devices found in schools today includes: eBooks and tablets; sensors in the hallways, entrances, classroom spaces, and buses; all sorts of fitness bands and wearables; virtual and augmented reality headsets; robots; video sensors; smart displays; smart lights; and smart locks, to name a few.
Data from these devices can be used for simple tracking, as with school buses, student attendance, and supplies, or more complex monitoring, for example, to understand student learning patterns as they progress through their eBooks and adaptive learning systems. All activities can be monitored in real time during critical periods of online testing. Throughout the school building, low-cost sensors enable a more finely-tuned HVAC system to keep the environment optimally tuned and to reduce expenses.
School Infrastructure to Support the Smart School
At the core of the smart school is the rock-solid, dependable Wi-Fi network. It must be scalable to handle the rapidly-growing number of smart devices coming onboard. Addressing the security, safety, and privacy concerns inherent in both K-12 and higher education requires comprehensive network management software. Network analytics provide a vital view into the network traffic and performance to ensure there are no data bottlenecks.
Immediate Benefits and Future Possibilities
In addition to the direct benefits for education and teaching, smart school devices can have a positive impact on student safety. When Fraser Public Schools in Michigan added video sensors throughout their schools, they noticed the incidence of fighting dropped to near zero. Students knew they would be captured on video if they misbehaved. According to Superintendent David Richards, fighting didn't just move offsite; it simply stopped.
Incorporating the Internet of Things into the curriculum by introducing IoT devices like Arduino and Raspberry Pi helps teach engineering and math, but even more importantly, inspires creativity. The vast arrays of big data provide students with the opportunity to learn data analysis and to gain insight into the burgeoning career opportunities in data science.
The growing streams of school and student-related data create the opportunity and the challenge to provide a more personalized student learning experience, while controlling costs. Learning from the patterns of highly successful students could benefit the entire student body. In a farther-out scenario, both in time and in weirdness, not only is student attendance tracked, but also their in-class attention. Headbands such as Muse track brainwaves and could theoretically pass along information about student’s cognitive activities during class.
New IoT technologies, used effectively, increase student engagement and provide a safer learning environment. Teaching can take place in an entirely mobile, highly-responsive milieu. This emerging environment helps ensure that all students are prepared for college and careers, including careers that may not even exist today.
Passwords provide the first line of defense against unauthorized access to your computer and online accounts. The stronger your password, the more protected these resources will be from hackers and malicious software.
Typically, people create a password based on personal whim or how easy they are to remember; while this may be easier, it puts your account security at risk.
You should make sure you have strong passwords for all accounts you create or manage.
Suggestions for Creating Strong Passwords
If you're unsure of what to use for a strong password, or having difficulty coming up with one, consider using a Strong Password Generator tool. These will give you a secure password, but the passwords they create will often be harder to remember.
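If you'd rather generate one yourself, Python's standard secrets module can do it. A minimal sketch (the length and character set are arbitrary choices here):

import secrets
import string

# Draw each character from letters, digits and punctuation using a
# cryptographically secure random source.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=16):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())   # output differs on every run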
The Passphrase Method
Consider using the passphrase approach to your passwords. A passphrase consists of a phrase that has special meaning to you, therefore making it easier to remember. For example:
- I love going to concerts. Live rock music is the best!
Take the first letter of each word in your passphrase, and include any punctuation and capitalization there may be. In this case you should end up with the following password: Ilgtc.Lrmitb!
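If you want to check the result programmatically, here is a minimal Python sketch of the same first-letter rule (the set of punctuation characters it keeps is just an assumption):

def passphrase_to_password(phrase):
    """Keep the first letter of each word, plus any trailing punctuation,
    preserving the original capitalization."""
    parts = []
    for word in phrase.split():
        parts.append(word[0])
        if len(word) > 1 and word[-1] in ".,!?;:":
            parts.append(word[-1])
    return "".join(parts)

print(passphrase_to_password("I love going to concerts. Live rock music is the best!"))
# prints: Ilgtc.Lrmitb!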
Strengthen Your Passwords to Protect Your Data
These are just a few of the ways you can strengthen the passwords on your accounts. Remember to follow these best practices, and avoid the pitfalls outlined, and you will improve the security of all your passwords.
When in doubt, ask yourself what would happen if someone gained access to any individual account you own or manage. How bad could it potentially be if that data was in the wild, or if you were locked out of a password-protected system? Apply strong password policies to all of your accounts you need to secure. | <urn:uuid:dc468159-cb3c-4f8d-b61a-f09156d374be> | CC-MAIN-2017-04 | https://support.managed.com/kb/a2245/best-practice-strong-password-policy.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280425.43/warc/CC-MAIN-20170116095120-00536-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.930137 | 357 | 3.375 | 3 |
In this interview, Andrei Serbanoiu, Online Threats Researcher at Bitdefender, discusses Facebook security and privacy pitfalls, the dangers of sharing on the social network, and offers insight for CISOs.
What are Facebook’s most significant security and privacy pitfalls and how do cybercriminals take advantage of them?
The most significant security pitfalls on Facebook are the open settings of personal information (public by default) and the trusted environment that allows scams to be posted at a really fast pace from one timeline to another.
In recent years, we’ve noticed an increasing number of fake profiles spreading malicious and fraudulent links on the social network. If a bogus profile is eventually taken down, scammers are able to create a new one in a matter of seconds; the same situation goes with dangerous websites and scams. Just a couple of days ago, Britons and users worldwide got infected on Facebook with a Trojan replicated on 6,000 different websites due to a scam that lured users with fake videos of their friends naked.
Our recent research also showed a migration towards Facebook sponsored ads. As they are encapsulated inside a trustworthy environment and have become part of the social network, more users are likely to fall for suspicious ads than for a general spam message. These adverts are hard to control by the social network due to the design of the platform that allows the creators of third-party applications to use whatever ad network they consider fit.
Is it more dangerous to over-share on Facebook today than it was a few years ago?
Over-sharing on Facebook today is more dangerous than a few years ago because users now tend to share personal information on different websites and social networks at the same time.
Malware creators now have a variety of cyber-crime tools at hand. Starting from people search engines to real-time data bases with companies, employees and interests, pictures, geo-locations targeted through “innocent” Android apps, hackers have a range of weapons at their disposal to use against users and enterprises.
Besides the complexity of cybercrime tools that may be used for targeted attacks, hackers also take advantage of the increasing number of unwary Facebook users who over-share private details. There are cases when users shared pictures with their new passports without blurring any detail. Over-sharing not only helps social media advertisers but also allows cyber-criminals to better pick their targets for precise and successful campaigns.
Facebook has a very comprehensive list of targeting options that range from certain age groups, to specific geographical areas, education groups or even specific interests (in a company or a domain). This allows for a very precise targeting of persons exposed to the message, unlike the very coarse one used in traditional spam-based advertising.
Over-sharing itself is encapsulated in the social network’s policy which exposes non-savvy users to its open privacy settings, including open profile and pictures and private information being made public by default. The recent launch of Graph Search feature also helped scammers to take advantage of the increasing over-sharing of information. Only security-conscious users rushed to lock down their privacy settings to keep personal details far from intruders. Graph Search allows everyone to find old posts, status updates and every comment, photo caption and check-in users ever posted on the platform since opening an account.
Should the CISO be concerned about what type of information employees are posting on Facebook?
Every CISO should be concerned about the types of information employees are sharing on Facebook and other social networks as well. Facebook, in particular, offers a really open environment where people’s private life and jobs interfere on a regular basis. As soon as a Facebook user fills in his personal information regarding his employer, he is no longer just sharing his personal details, but also corporate information. The ability to search through people’s friend lists and timelines, the wide variety of open profiles and the fast propagation of pictures and messages are all vulnerabilities that the CISO should consider.
The CISO is not only technically supervising the company’s security, but also has to put in place a strategy to maintain the corporation’s vision while protecting the technology. The CISO should keep in mind that Facebook is a fruitful environment for cyber-crime business and this could directly affect his work. Imagine how badly a targeted attack could affect the entire company after an employee falls for a social engineer, for example.
The role of a CISO is continuously evolving, so he should always keep up with the trends as his employees do. Maybe in a few years he will be concerned about appropriate standards and controls of micro-blogging platforms focusing on viral videos or of online newspapers created by employees themselves.
What threats do you expect to seriously evolve in the next five years, and what should users be on the lookout for?
I have been carrying out research on social network security for a couple of years and I’m astonished to discover that users continue to fall for the same types of scams and vulnerabilities despite the mitigation of the media, security companies and experts. However, I expect a wider number of cyber-criminals to create fake profiles for targeted attacks as focusing on a smaller and weaker prey could eventually bring them more money.
Users should be on the lookout for scams promising new promotions, vouchers and freebies, including new tech apparitions. They should also keep an eye on messages promising morbid details and videos of celebrities that have passed away. Facebook ads are also a dangerous environment that will probably be exploited heavily in the next five years too. | <urn:uuid:d32d2c4a-16fc-43f1-a411-a906d114ec85> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2014/03/12/facebook-security-and-privacy-pitfalls/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281649.59/warc/CC-MAIN-20170116095121-00104-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.945443 | 1,128 | 2.59375 | 3 |
Utah cut homelessness by so much that it now tracks the remaining homeless by name — not number. And this plan didn’t break the bank. The state estimates the savings in the millions of dollars. The savings are so great, it doesn’t bother to calculate a total. Over the past 10 years, the program has cut the number of chronically homeless in the state by 91%.
Why the program works
There’s a perception that if were to give a homeless person a home, they would just the opportunity in one way or another. A University of Pennsylvania study, however, found the homeless seem to waste opportunities because they don’t have . This 2001 study inspired Utah’s efforts.
The study tracked 4,000 people in New York City for four years — two years living on the streets, two years in housing provided by the city. While homeless, they cost the city more than $40,000 for shelter, jail and hospital services. That same amount of money could provide them more permanent housing, comprehensive health care and employment services.
"A considerable amount of public dollars is spent essentially maintaining people in a state of homelessness," Dennis Culhane, the study’s lead author, wrote at the time.
The idea is the homeless don’t end up in jail because they’re bad people per se. The problem seems to be that all the instability in their lives is a tremendous obstacle that prevents them from making meaningful changes in their lives. The success rate is lower when the homeless have to prove they’ve gotten help before they get housing.
Housing comes first
Utah housed 17 people in the first year of its program. One year later, 14 were still in the homes, doing well — a success rate of 82%. That rate has grown over time and that’s with a relaxed attitude toward the new residents. They don’t have to get help to live there, but help is available if they want it. Most do.
The state’s program is called Housing First — and the fact that housing does come first appears to be a main reason why the initiative is so successful. Other cities and states provide housing, but no area has come close to Utah’s success rate. Today, there are so few chronically homeless people in Utah that the state knows the names and stories of each one.
The formerly homeless can get mental health treatment, counseling or other services to help them overcome their demons, whatever they may be. Now that they don’t have to worry about where they’ll spend the night, those treatments have a greater chance of sticking. While it’s easy to fixate on the millions the state has saved, officials say this program has also saved lives.
It does take participants a little time to adjust. One woman kept her belongings on the bed and slept on the floor the first few weeks she was in her new house. She had lived with so much disruption that it took that long for her to grasp that the house really was hers.
Still seems to be some doubt
Despite the success in Utah and New York City there still seems to be some reluctance to copy or expand these programs — even in areas where these efforts are already working. A renewal of New York’s program, for instance, came down to the wire.
New York began giving housing to the mentally ill in 1990, but there was some concern the program, which now needs to be renewed every 10 years, was going to be allowed to lapse even at a level where there was only enough housing for one out of six eligible people.
More than 200 organizations called on the city and state to renew the NY/NY program, which was expanded earlier this year. This fourth phase, called NY/NY IV, will create 5,000 new supportive housing units, nearly doubling the housing available in the first 25 years of the program. Still, homeless advocates say that’s well short of the 30,000 units that are needed there.
Washington, D.C., meanwhile, began providing housing to the chronically homeless in 2008 and was on track to essentially end homelessness by next year. Instead, the city’s support stalled after a few years. That support is only now resuming.
This article was originally published on our sister site American City & County. | <urn:uuid:264330a2-03ab-4774-94f6-fe509810b7d9> | CC-MAIN-2017-04 | http://www.ioti.com/smart-cities/utah-nearly-eliminates-homelessness-solution-sounds-too-simple-work | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00012-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.977745 | 891 | 2.609375 | 3 |
Picture This: A Visual Guide to Wireless Vulnerabilities
For many security certification exams, you must be familiar with various forms of vulnerabilities that can exist with wireless networks. The Security+ exam from CompTIA (SY0-301), for example, currently expects you to be able to distinguish between nine different ones (but refers to them as types of attacks) and that list is scheduled to expand with the next version of the test (currently slated for release in the second quarter of 2014).
This guide looks at some of the most common wireless vulnerabilities/attack types and consists of a subset of those you need to know to earn the Security+ designation. Sometimes, the easiest way to understand the difference between similar concepts is to use an analogy. In the spirit of that, imagine that you’ve decided to go to your favorite sandwich shop for lunch and want to congenially place your order, get your food and return back to work as quickly as possible.
Figure One: Under ideal conditions, you order the perfect sandwich and just the fixings you want and get back to work as quickly as possible.
Problems lurk within the simple sandwich shop lunch, however, just as they do with wireless communication. The following figures illustrate how things can go awry.
Interference and Jamming
While trying to place your order and get the extra helping of the pickles you so relish, someone stands nearby and obnoxiously shouts into their cell phone. Their conversation is so loud that it keeps the employee on the other side of the counter from correctly hearing your order and fixing the sandwich the way you like it. Even though you try to get your message through, it isn’t received.
Figure Two: Your message is unable to be correctly transmitted due to interference.
With wireless devices, interference can be unintentional (caused by other devices in the vicinity, for example), or intentional. When it is intentional, then it is often referred to as jamming, as the intent is to jam the signal and keep the legitimate device from communicating. For the analogy, imagine that the manager is so upset with the employee taking your order that she begins berating her in the middle of the store. One of the purposes behind the outburst is to get the full attention of the employee and keep them from responding to anything else at the moment.
Figure Three: You are unable to communicate and place your order due to jamming.
In a wireless network, both interference and jamming can occur with the access point, or any individual device(s). If it is the access point that is jammed, the possibility exists for the user(s) — either out of frustration, foolishness, or just lack of knowledge — to turn to other access points that could be less secure and/or harmful.
As you’re ordering, someone else in line keeps shouting out things to be added to the sandwich and the employee gets confused and adds them to your order, thinking you are the one who wants them. Now instead of not getting the double pickles you crave, you wind up with them and a double helping of black olives — which you despise and will need to pick off later.
Figure Four: With bluejacking, someone keeps telling them to add items to your order that you don’t want.
With the popularity of Bluetooth, two vulnerabilities have become common: bluejacking (also sometimes referred to as “blue jacking” or “blue-jacking”) and bluesnarfing. Bluejacking is the sending of unsolicited messages over the Bluetooth connection (think spam). While it is annoying, it is usually considered harmless. Bluetooth is often used for creating personal area networks (PANs/WPANs), and most Bluetooth devices come with a factory default PIN that you will want to change to more secure values.
Bluesnarfing is the gaining of unauthorized access through a Bluetooth connection. This access can be gained through a phone, PDA, or any device using Bluetooth. Once access has been gained, the attacker can copy any data in the same way they would with any other unauthorized access.
The order you’re placing is overheard by another who is paying very close attention to what you are doing. After you take your order and head for the door, that person then tells the employee that they are with you they need another order — exactly the same as what you just left with — and it should be added to your bill.
Figure Five: With a replay attack, your order is intercepted and can later be replicated.
Replay attacks are not limited to wireless and, in fact, can be even easier to pull off in a wired environment. In its simplest form, a replay attack essentially amounts to capturing portions of a session to play back later and convince a host that they are still talking to the same party.
After you order, someone you do not know tells you that you can get the exact same sandwich next door for half the price and it has twice the toppings. This is not a conversation that you entered into, solicited, or are interested in. The conversation is based upon their interception, and interpretation, of data they should not have obtained.
Figure Six: With packet sniffing, another party sees your order and responds to it.
If the connection between the device and the access point isn’t encrypted, packets between devices may be intercepted (which is referred to as packet sniffing), creating a potential vulnerability. This vulnerability is called a “gap in the WAP” (the security concern that exists when converting between the Wireless Application Protocol and SSL/TLS) and was prevalent in versions of WAP prior to 2.0.
Other Possible Problems
In addition to those illustrated here, two other possible problems could exist as well:
Rogue access point: As soon as you walk through the door, you see the long line winding through the queue and think about going elsewhere. Before you have the time to make that decision, however, an employee who is on break recognizes you as a regular customer and offers to make you a sandwich from ingredients in the back room rather than making you wait. While this offers the opportunity to get your food in a timely way, it has the potential to circumvent the cash register and short the owner the money they are due; it also includes risks for you since the employee in the backroom isn’t wearing gloves, or using the sterilized cutting board.
Evil twin: Distracted by the rain, you get out of your car and run into what you think is your favorite sandwich shop only to find out that you went in one door too soon and are in a rival sandwich shop that charges twice as much and gives half as much meat. This shop has gone to a great deal of trouble to make it appear as if they are the other, more preferable shop and all of their business comes through confusion.
In a networking environment, the difference between the two is that any wireless access point added to your network that has not been authorized is considered a rogue. The rogue may be added by an attacker, or it could have been innocently added by a user wanting to enhance their environment — the problem with the user doing so is that there is a good chance they will not implement the security you would, and this could open the system for a man-in-the-middle attack. An evil twin attack is one in which a rogue wireless access point poses as a legitimate wireless service provider to intercept information users transmit.
One of the best solutions to dealing with wireless vulnerabilities is to educate and train users about the wireless network and the need to keep it secure — just as you would train and educate them about any other security topic. They may think there is no harm in joining any wireless network they can find a strong signal from, but a bit of training can explain to them that it is in their best interest to keep their own, and the company's, data safe and secure.
In a large country with myriad natural threats, some responders are more experienced than others in handling certain types of disasters. Certain phenomena, such as earthquakes and hurricanes, typically don’t happen in some areas of the country.
But with a surge in the number of incidents declared as disasters by FEMA over the last 20 years, it’s become paramount for regions to plan for the unexpected, particularly when it comes to Mother Nature.
In 2011, tornado activity was observed in places that rarely see it, from Northern California to the East Coast and in between, leaving some residents in disbelief that the weather phenomena actually occurred there.
In addition, areas known for hurricanes and tropical storms are experiencing larger, more powerful weather systems. Hurricane Katrina laid waste to New Orleans in 2005, and while Hurricane Sandy wasn’t as deadly as Katrina, it was the deadliest in the northeastern U.S. in the last 40 years and the second costliest disaster in U.S. history. Hurricane Irene in 2011 was supposed to have been a “storm of the century” until Sandy hit.
Then there were the 1,000-year floods that hit Tennessee in 2010 and the devastating wildfires in Colorado last year, described as “freakish” by experienced firefighters. It all follows a pattern predicted in recent years by some experts who say the frequency and severity of storm activity are increasing, along with intensified wildfires, drought and more flooding, resulting from a warming climate.
Emergency management experts and sustainability planners say it’s important to begin planning for a changing paradigm, that plans based on historical data are out of date. So should there be a one-size-fits-all or all-hazards approach to disaster preparedness and response? Or should regions craft specific strategies for each type of disaster? Experts believe the prevailing approach is — and should remain — a bit of both.
According to Mark Ghilarducci, secretary of the California Emergency Management Agency, the state takes a holistic approach to disaster planning. Because the state is so large and parts of it are susceptible to different types of threats, he said California uses a collaborative process involving local governments and the private sector.
“We try to put emergency plans in place or countermeasures working with our local governments, our other state agencies and possibly the federal government,” Ghilarducci said. “Then we have a very robust preparedness program that ties to these efforts so that we make sure the community is involved and engaged to let them know what the risks are and how they can work to prepare themselves.”
From a local government perspective, Boston does the same thing. Rene Fielding, director of the Boston Office of Emergency Management, said her team goes through an annual hazard identification risk assessment of what threats could jeopardize the city. The team ranks the threats and then outlines steps to address them. But some unusual events are starting to crop up in those evaluations.
Fielding explained that the city had a tornado watch in 2012 for the first time in as long as she could remember. In addition, while Boston isn’t noted for earthquakes, the city has felt tremors in the past couple of years from temblors in both Maine and Virginia.
The magnitude 5.8 quake that struck Virginia on Aug. 26, 2011, took many by surprise and caused significant damage near the epicenter in Louisa County, Va. But its impact stretched all along the Eastern seaboard.
While the Virginia Department of Emergency Management (VDEM) didn’t necessarily reassess or change the steps it takes in evaluating a disaster, its staff members did add something to their emergency operations plan following the quake: an earthquake annex. Brett Burdick, deputy state coordinator for administration of VDEM, said the department didn’t have one, since its major disaster concerns are floods and hurricanes.
The department now has a formalized earthquake plan in place, but Burdick said that it was written from the procedures they had implemented when the earthquake struck. So even without the documentation, Virginia’s procedures would be the same today as they were in August 2011.
While agency emergency policies and procedures are important, the tougher task for emergency managers is convincing the public that disaster contingency plans are needed for even the unlikeliest situations. Ghilarducci explained that keeping people informed and prepared is a major challenge across the board and it’s why California approaches preparedness from an all-hazards perspective.
“All you have to do is turn on the TV today and you’re bombarded with one crisis after another,” Ghilarducci said. “What happens is people tend to become numb … they want to put it out of sight, out of mind. We’re very sensitive to that.”
In addition, because the state is so large, Ghilarducci said his agency uses different times of the year to reinforce different types of disaster preparedness messages to the public. California annually re-evaluates risk factors for each area of the state during different seasons and keeps the public a part of those activities. Winter storms, floods and earthquakes all get specific months for targeted messaging.
Boston concentrates on teaching its citizens how to best prepare themselves to be self-sufficient for up to two days after a disaster. While Boston’s methods are familiar — promoting the need for a family emergency kit and plan — the city emphasizes making sure as many threats as possible are covered by activities it endorses.
For example, Fielding said that her team doesn’t necessarily teach people how to prepare for an earthquake or tornado, but instead it’s always pushing guidelines that can keep residents safe through all types of events, mirroring California’s all-hazards take on preparedness.
The August 2011 quake seemed to spur many Virginians to take the threat of earthquakes more seriously. Burdick said many residents took out earthquake insurance policies to help mitigate the cost of damages if another quake happens.
Glenn Pomeroy — CEO of the California Earthquake Authority, a publicly managed organization that provides residential earthquake insurance — said participating in an exercise such as the Great ShakeOut is a key to getting citizens in nontraditional earthquake zones to prepare for such an occurrence. The annual event puts participants across the country through an emergency drill as if an actual earthquake is occurring.
It seemed to work for Virginia, which took part in the Great ShakeOut in 2012. Burdick felt that the event put the preparedness factor front and center for many of the state’s residents.
“A couple million Virginians decided to participate in that, and they probably would not have done that before the earthquake,” Burdick said. “There is a decidedly heightened awareness among the community.”
Climate change is increasingly thought to be one of the culprits causing the uptick of severe and unusual weather phenomena. Scientists at the National Oceanic and Atmospheric Administration (NOAA) aren’t yet ready to state it as the cause, but said it’s a fact that arctic ice is melting and the sea level is rising, which leads to higher flood waters and more precipitation.
Thomas Peterson, principal scientist with NOAA’s National Climatic Data Center, called greenhouse gases the “steroids” of the atmosphere, as they warm the air and increase the overall energy that produces storms.
“Since warmer air can hold more water vapor, we’re seeing an increase in absolute humidity around the world and greater potential for heavy precipitation events and some of the other events that are fueled by water vapor in the lower atmosphere,” Peterson said.
Thomas Knutson, research meteorologist with NOAA’s Geophysical Fluid Dynamics Laboratory, confirmed that the intensity of hurricanes is trending upward. He said climate models are projecting warmer temperatures and some simulations have shown the number of Category 4 and Category 5 hurricanes increasing. Knutson gave “better than even” odds that the number of intense storms will increase in the future, but NOAA believes the overall number of hurricanes will remain relatively static or even decrease globally.
When asked whether the actual size of hurricanes is increasing worldwide, Knutson couldn’t give a definitive answer. While Sandy and Katrina were noted for being very large hurricanes, NOAA doesn’t have any long-term records of storm size to draw an accurate comparison. The best records available for land-falling storms date back to 1900.
Tornadoes were a tougher subject to tackle. Knutson said that if you look at just the raw records of tornado occurrences for the U.S. as a whole, the number is increasing over time. That data is misleading, however, because the ability to detect twisters has increased through technology.
“We don’t really know what tornadoes have been doing because the spurious trend is so large it swamps everything else,” Knutson said.
But whether the change is defined as climate change or something else, states are taking the threat posed by a warming climate seriously. Emergency managers have to be prepared for the worst.
California developed a mitigation plan in 2007 that’s updated every three years. The forecast effects of climate change on the state were added to the report in 2010. State representatives and an advisory team looked at scientific information and climate-related hazards and future impacts from those hazards. The hoped-for result is a hazard mitigation strategy that is coordinated and integrated among all the agencies involved in hazard mitigation.
Devising ways to educate the public on preparing for the unexpected or participating in emergency drills sounds easy, but it isn’t always cheap. That can be problematic, particularly in an era of shrinking government budgets.
Fielding said she considers tighter purse strings as an opportunity to be creative. In the past, Boston has partnered with the private sector to promote certain messages, and grant funding helps pay for other expenses. She also leverages those partnerships to get out messaging in fliers that can be sent in mailings, like a person’s water bill.
Boston also uses a section of the city’s website to promote ReadyBoston, a community preparedness initiative to educate and empower Bostonians about the hazards they may face and to encourage residents to prepare for all types of emergencies.
California also partners with various stakeholder groups, including schools and higher education institutions, to incorporate preparedness into students’ curriculum. The state has also harnessed the power of social media to stay connected with residents, while spreading the word about disasters and how to prepare for them.
“Social media is a very powerful tool, [and] it can change the outcome of a disaster situation,” Ghilarducci said. “Utilizing it in an appropriate way is very critical. Making sure it’s part of your toolbox is critical to be able to get your message out and also to help manage the expectation of the government’s response to an emergency or disaster.” | <urn:uuid:29a56453-a36d-4d81-89c7-f26b2c347add> | CC-MAIN-2017-04 | http://www.govtech.com/em/disaster/Emergency-Managers-Changing-Disaster-Paradigm.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00316-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.958923 | 2,273 | 3.296875 | 3 |
Lately I've found myself having to make lots of file systems. This is mostly due to forensic work, where I'm either sanitizing hard drives and rebuilding file systems on them or I'm creating test file systems for research. Either way, I'm spending lots of time fiddling with file systems at the command line.
Way back in Episode 32 we talked about how to use dd to overwrite a disk device with zeroes:
# dd if=/dev/zero of=/dev/sdd bs=1M
dd: writing `/dev/sdd': No space left on device
992+0 records in
991+0 records out
1039663104 bytes (1.0 GB) copied, 299.834 s, 3.5 MB/s
Of course, this leaves you with an invalid partition table. Happily, the GNU parted utility makes short work of creating a new MS-DOS style disk label and adding a partition:
# parted /dev/sdd print
Error: /dev/sdd: unrecognised disk label
# parted /dev/sdd mklabel msdos
Information: You may need to update /etc/fstab.
# parted /dev/sdd mkpart primary 1 1G
Information: You may need to update /etc/fstab.
# parted /dev/sdd print
Model: LEXAR JUMPDRIVE SPORT (scsi)
Disk /dev/sdd: 1040MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 32.3kB 1036MB 1036MB primary
At this point we need to create a file system in our new partition. You actually can use parted to create file systems, but even the parted manual page suggests that you use an external program instead. In Linux, this would be mkfs, which allows you to choose between several different kinds of file systems.
Since this is a small USB key, you might want to just create a FAT file system on it to make it easy to share files between your Linux box and other, less flexible operating systems:
# mkfs -t vfat -F 32 /dev/sdd1
We're using the FAT-specific "-F" option to specify the FAT cluster address size-- here we're creating a FAT32 file system. For each file system type, mkfs has a number of special options specific to that file system. You'll need to read the appropriate manual page to see them all: "man mkfs.vfat" in this case.
If I didn't want my co-authors to be able to easily see the files on this USB stick, I could create an EXT file system instead:
# mkfs -t ext2 /dev/sdd1
mke2fs 1.41.9 (22-Aug-2009)
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
63360 inodes, 253015 blocks
12650 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=260046848
8 block groups
32768 blocks per group, 32768 fragments per group
7920 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Here I'm creating an "ext2" file system because I didn't want to waste space on a file system journal, but you of course have the option of creating "ext3" and even "ext4" file systems if you want.
If you want to make NTFS file systems, you may have to download an additional package. For example, on my Ubuntu laptop I had to "sudo apt-get install ntfsprogs". Once that's done, making NTFS volumes is a snap:
# mkfs -t ntfs -Q /dev/sdd1
Cluster size has been automatically set to 4096 bytes.
Creating NTFS volume structures.
mkntfs completed successfully. Have a nice day.
When creating NTFS volumes, you definitely want to use the "-Q" (quick) option. If you leave off the "-Q" then the mkfs.ntfs program overwrites the device with zeroes and performs a bad block check before creating your file system. This takes a really long time, particularly on large drives, and is also unnecessary in this case since we previously overwrote the drive with zeroes using dd.
It's interesting to note that you don't actually have to have a physical disk device to test file systems. mkfs will (grudgingly) create file systems on non-device files:
# dd if=/dev/zero of=testfs bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 69.6688 s, 61.6 MB/s
# mkfs -t ntfs -Q -F testfs
testfs is not a block device.
mkntfs forced anyway.
# mount -o loop,show_sys_files testfs /mnt/test
# ls /mnt/test
$AttrDef $Bitmap $Extend $MFTMirr $UpCase
$BadClus $Boot $LogFile $Secure $Volume
Here I'm first using dd to make a file called "testfs" that contains 4GB of zeroes. Then I call mkfs on the file, using the "-F" (force) option so that it won't exit with an error when I tell it to operate on a non-device file. Though the command whines a lot, it does finally produce a working NTFS file system that can be mounted using a loopback mount.
Of course I can create EXT and FAT file systems in a similar fashion. However, the "-F" option for mkfs.vfat is used to specify the cluster address size. It turns out that you don't need a "force" option when making FAT file systems in non-device files-- the mkfs.vfat will create file systems without complaint regardless of the type of file it is pointed at. For EXT file systems, you can use "-F" if you want. However, if you leave the option off, you'll get a "are you sure?" prompt when running the command against a non-device file (as opposed to mkfs.ntfs which simply bombs out with an error). They say that "the wonderful thing about standards is that there are so many to choose from", but I really wish Linux could rationalize the various mkfs command-line interfaces a bit more.
In any event, being able to create file systems in raw disk files is a real boon when you want to test file system behavior without actually having to commandeer a physical disk drive from someplace. But I think I'd better stop there-- I'm already feeling the hatred and jealousy emanating from my Windows brethren. Let's see what Tim can cook up this week.
Tim was relaxing this weekend for his birthday
This week's episode is pretty easy, but only because there aren't a lot of options. Besides, why would you want to create raw disk files or test file system behavior without searching for a physical disk, connectors, power, ...
No, I'm not jealous. I have everything I need. I don't need all those options. Windows is good enough, smart enough, and doggone it, people like it!
The "streamlined" command in Windows is the good ol' Format command.
C:\> format d:
WARNING, ALL DATA ON NON-REMOVABLE DISK
DRIVE D: WILL BE LOST!
Proceed with Format (Y/N)? y

In Vista and later, the format command writes zeros to the entire disk when a full format is performed. In XP and earlier, the format command does not zero the disk. To zero the disk with XP you have to use the diskpart utility.
C:\> diskpart
Microsoft DiskPart version 6.1.7600
Copyright (C) 1999-2008 Microsoft Corporation.
On computer: MYMACHINE

DISKPART> list disk

Disk ### Status Size Free Dyn Gpt
-------- ------------- ------- ------- --- ---
Disk 0 Online 149 GB 0 B
Disk 1 Online 149 GB 0 B

DISKPART> select disk 1

Disk 1 is now the selected disk.

DISKPART> clean all

The clean all command within diskpart zeros the entire disk. One benefit of using clean all is that it actually zeros the disk and doesn't create the MFT. We usually want one though, so Format will suffice.
Format can be used to specify the file system too. We don't have all the file system choices that Linux's mkfs offers, but the common ones are covered:
C:\> format e: /FS:NTFS
C:\> format f: /FS:FAT32

Besides the size restriction, one of the biggest problems with the FAT file system is that it provides no security features. If a user has access to the disk then they have full access to the disk, i.e. there is no way to give a user read access and deny write access to a directory. NTFS, on the other hand, supports ACLs and allows much finer-grained control.
So how do we convert a FAT drive to NTFS? But of course, by using the convert command:
C:\> convert f: /FS:NTFS

The FS switch is required even though the only option is NTFS.
That's about it. Not a lot here this week, and no PowerShell either. There aren't any new cmdlets in PowerShell that provide any additional functionality.
The technology analyses whether text is contained in images based on the graphic pattern of words and lines, said developer Eugene Smirnov.
The method is able to recognise text in almost any language and is not affected by techniques such as warping that spammers use to avoid detection, he said.
The system determines whether or not detected text is spam by comparing it to the contents of a database of spam templates.
Spam is expected to continue to be a problem in 2009, particularly with the rise in the number and popularity of websites that allow user-generated content.
Cyber criminals are expected to move to a more distributed way of controlling and hosting malcode after two main criminal spam hosting companies were shut down in 2008. | <urn:uuid:6304b278-721b-4d63-ad40-7ddabc636025> | CC-MAIN-2017-04 | http://www.computerweekly.com/news/2240088255/Kaspersky-files-image-based-spam-busting-patent | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00160-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947883 | 174 | 2.734375 | 3 |
Python - What are Abstract Classes?
An abstract class can be considered a blueprint for other classes, allowing you to mandate a set of methods that must be created within any child classes built from your abstract class.
Let's first look at how you create an abstract class. First we import abc, then we set the class's metaclass to abc.ABCMeta using the __metaclass__ magic attribute, and finally we decorate each required method with @abc.abstractmethod.
import abc

class TestClass(object):
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def set_total(self, input):
        """Set a value for instance."""
        return

    @abc.abstractmethod
    def get_total(self):
        """Get and return a value for instance."""
        return
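Note that the __metaclass__ attribute is Python 2 syntax and is silently ignored on Python 3. On Python 3.4 or later the equivalent declaration uses abc.ABC (or the metaclass keyword argument):

import abc

class TestClass(abc.ABC):   # or: class TestClass(metaclass=abc.ABCMeta):

    @abc.abstractmethod
    def set_total(self, input):
        """Set a value for instance."""
        return

    @abc.abstractmethod
    def get_total(self):
        """Get and return a value for instance."""
        return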
Let's look at an abstract class in action.
First, if we build a child class from this base class and implement the required abstract methods, we should see no problems. Like so:
class MyClass(TestClass):
    def set_total(self, x):
        self.total = x

    def get_total(self):
        return self.total
>>> m = MyClass()
>>> print m
<__main__.MyClass object at 0x100414910>
However, if we create a child class whose methods differ from those defined in our abstract class, the class will not instantiate.
Notice how I have changed the name of the get_total method to xyz_get_total
class MyClass(TestClass):
    def set_total(self, x):
        self.total = x

    def xyz_get_total(self):
        return self.total
>>> m = MyClass()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Can't instantiate abstract class MyClass with abstract methods get_total
Abstract Class Instantiation
And finally, there is one last point that I should highlight. Because an abstract class is not a concrete class, it cannot be instantiated. Here's an example:
>>> t = TestClass()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Can't instantiate abstract class TestClass with abstract methods get_total, set_total
NASA today said it would fund the technology fixes required to make its inoperative Kepler space telescope active again and able to hunt for new planets and galaxies.
Kepler, you may recall, was rendered inoperable after the second of four gyroscope-like reaction wheels, which are used to precisely point the spacecraft for extended periods of time, failed last year, ending data collection for the original mission. The spacecraft required three working wheels to maintain the precision pointing necessary to detect the signal of small Earth-sized exoplanets, which are planets outside our solar system, orbiting stars like our sun in what's known as the habitable zone -- the range of distances from a star where the surface temperature of a planet might be suitable for liquid water, NASA stated.
+More on Network World: Kepler's most excellent space discoveries+
With the failure of a second reaction wheel, the spacecraft could no longer precisely point at the mission's original field of view where it would look for these exoplanets.
According to a post on the agency's website, based on a recommendation from the agency's 2014 Senior Review of its operating missions, NASA will for two years fund what's known as Kepler Second Light or K2, which is basically a work-around that will let Kepler once again point into space and gather data.
Last Fall, NASA Kepler and Ball Aerospace engineers say they have developed a way of recovering this pointing stability by maneuvering the spacecraft so that solar pressure - the pressure exerted when the photons of sunlight strike the spacecraft -- is evenly distributed across the surfaces of the spacecraft.
NASA says by orienting the spacecraft nearly parallel to its orbital path around the sun, which is slightly offset from the ecliptic, the orbital plane of Earth, it can achieve spacecraft stability. The ecliptic plane defines the band of sky in which lie the constellations of the zodiac.
This technique of using the sun as the 'third wheel' to control pointing is currently being tested on the spacecraft and early results look good, NASA said. During a pointing performance test in late October, a full frame image of the space telescope's full field of view was captured showing part of the Sagittarius constellation.
K2 would study a specific portion of the sky for up to 83 days, until it is necessary to rotate the spacecraft to prevent sunlight from entering the telescope. Each orbit or year would consist of approximately 4.5 unique viewing periods or campaigns. The first K2 science observation run is scheduled to begin May 30.
While it currently isn't sending data, NASA scientists are still evaluating data sent by Kepler when it was fully operational. In April in fact, NASA said Kepler Space Telescope had spotted what the agency called the first Earth-size planet orbiting a star in the "habitable zone" -- the range of distance from a star where liquid water might pool on the surface of an orbiting planet.
NASA said that the discovery of what will be called Kepler-186f confirms that planets the size of Earth exist in the habitable zone of stars other than our sun. While planets have previously been found in the habitable zone, they are all at least 40% larger in size than Earth and understanding their makeup is challenging. Kepler-186f is more reminiscent of Earth. Although the size of Kepler-186f is known, its mass and composition are not. Previous research, however, suggests that a planet the size of Kepler-186f is likely to be rocky, NASA said.
A severe DOM (Document Object Model) based XSS (Cross-Site Scripting) vulnerability in Wix.com could lead to an attacker gaining full control of the websites hosted on the platform, Contrast Security researchers warn.
Also known as type-0 XSS, a DOM based XSS is a type of attack where the payload is executed by modifying the DOM “environment” in the victim’s browser. Because of that, while the page (the HTTP response) isn’t changed, the client side code contained in the page executes differently, influenced by the malicious changes in the DOM environment.
Cloud-based development platform Wix.com has millions of users worldwide and allows everyone “to create a beautiful, professional web presence.” Wix claims to have 87 million registered users and over 2 million subscriptions.
Wix websites either use a wixsite.com subdomain or a custom domain, and an XSS against these won’t provide an attacker with access to the main wix.com domain and its cookies. Thus, a separate vulnerability is needed for an attacker to steal session cookies that could provide access to administrator session cookies or allow them to access administrator resources.
For that, an attacker can simply use the template demos that are hosted on wix.com, because they contain the vulnerability. Should the attacker manage to exploit an XSS on wix.com, they could do anything as the current user, including launching a worm attack.
The first step of such an attack, is to create a Wix website with the DOM XSS in an , Contrast Security explains. When a Wix user visits the infected website, a similar issue in editor.wix.com is leveraged to edit all of the user's websites and inject the DOM XSS in an . Since the site infects any logged in Wix user and adds the with the same XSS to their websites, all of the current user’s websites now host the malicious content and serve it to their visitors.
“Administrator control of a wix.com site could be used to widely distribute malware, create a dynamic, distributed, browser-based botnet, mine crypto-currency, and otherwise generally control the content of the site as well as the users who use it,” Matt Austin, Senior Security Research Engineer, explains.
An attacker could not only change the content of a hosted website for targeted users, but could also challenge the user for their Wix, Facebook, or Twitter username and password or trick them into downloading malware and executing it. Additionally, the attacker could generate ad revenue by inserting ads into website pages, spoof bank web pages and attempt to get users to log in, make it difficult or impossible to find and delete the infection, and could even make themselves an administrator of the website.
The security researchers say they contacted Wix about the issue on October 14 but that no positive response was received so far, although the company initially said it was investigating the issue.
“Contrast Security attempted to reach Wix.com for over three weeks with no response. So we are disclosing this vulnerability in order to protect the many Wix website owners and users of these websites,” the security researcher said.
UPDATE 11/07: Contrast Security contacted SecurityWeek to inform us that Wix appears to have fixed the issue after they made the vulnerability details public:
"We published this disclosure on 11/2 at 8 AM PST. Sometime between 12 and 3 PM PST that same day, Wix appears to have resolved the problem. We can look at the update to see how they resolved this issue," Austin told us. | <urn:uuid:0b8fba3d-00c7-44b8-895b-a0e6098201d0> | CC-MAIN-2017-04 | http://infosecisland.com/blogview/24841-DOM-XSS-Vulnerability-Impacts-Over-70-Million-Wix-Websites.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00462-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.943806 | 752 | 2.875 | 3 |
Clearer view of cold continent
- By Trudy Walsh
- Nov 28, 2007
Representatives from three federal agencies and the British Antarctic Survey yesterday revealed an online digital map of Antarctica based on Landsat images.
The Landsat Image Mosaic of Antarctica combines more than 1,100 Landsat satellite photos that were digitally compiled to create a single, nearly cloudless image. LIMA is a joint project of the National Science Foundation (NSF), NASA, the U.S. Geological Survey and the British Antarctic Survey.
The project can be viewed on the Web at http://lima.usgs.gov. LIMA offers views of the frozen landscape at a resolution 10 times greater than previously available images. LIMA uses images captured by the Landsat 7 satellite, which was built by NASA and launched in 1999. Viewers can drill down to see features that are half the size of a basketball court, NSF officials said.
The scientifically accurate mosaic map is expected to become a standard geographic reference and will give both scientists and the public a state-of-the-art tool for studying the southernmost continent, NSF officials said.
For example, the LIMA mosaic will provide accurate snapshots of Antarctica's ice sheets, which contain more than 60 percent of the world's fresh water, said Scott Borg, director of NSF's division of Antarctic sciences.
LIMA, 'compared to what we had available most recently, is like watching the most spectacular high-definition TV in living color versus watching the picture on a small black-and-white television,' said Robert Bindschadler, chief scientist of the Hydrospheric and Biospheric Sciences Laboratory at NASA's Goddard Space Flight Center in Greenbelt, Md.
LIMA was produced using the USGS' Earth Resources Observation and Science (EROS) images from the Landsat 7 satellite. The Landsat program began in 1972. Since then, sensors 'aboard Landsat satellites have captured millions of digital images of the Earth's land masses and coastal regions used by researchers worldwide to study global change, natural disasters and other aspects of the Earth's terrestrial environment,' said Barbara Ryan, USGS associate director for geography.
NSF provided a grant of almost $1 million to the LIMA project, which is the first major scientific product of the International Polar Year, a coordinated international field campaign that began in March. During the IPY, hundreds of scientists in an array of disciplines from more than 60 nations will travel to the Arctic and Antarctic to perform research.
Trudy Walsh is a senior writer for GCN. | <urn:uuid:00e177ae-951b-4b8e-a001-c0792d9d7bfe> | CC-MAIN-2017-04 | https://gcn.com/articles/2007/11/28/clearer-view-of-cold-continent.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280888.62/warc/CC-MAIN-20170116095120-00398-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.909643 | 532 | 3.34375 | 3 |
At first, GPUs could be used for a very narrow range of tasks (try to guess what), but they looked very attractive, and software developers decided to use their power for allocating a part of computing to graphics accelerators. Since GPU cannot be used in the same way as CPU, this required new tools that did not take long to appear. This is how originated CUDA, OpenCL and DirectCompute. The new wave was named ‘GPGPU’ (General-purpose graphics processing units) to designate the technique of using GPU for general purpose computing. As a result, people began to use a number of completely different microprocessors to solve some very common tasks. This gave rise to the term “heterogeneous parallelism”, which is actually the topic of our today’s discussion. | <urn:uuid:83ed947d-8f09-42f1-a10f-66a6173b11aa> | CC-MAIN-2017-04 | https://hackmag.com/author/yurembo/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00150-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967015 | 167 | 3.6875 | 4 |
Architecture: Though its name suggests otherwise Intels CISC (Complex Instruction Set Computer) architecture is easier to audit for security holes than the RISC (Reduced Instruction Set Computer)based chips from Motorola, said Lurene Grenier, a software vulnerability researcher and Mac PowerBook user in Columbia, Md.
"With Complex Instruction Set instructions, there are more of them, and they do more for you. Its just simpler to read and write to CISC systems and get them to do something," she said.
Those differences make it easier for vulnerability experts and exploit writers to understand and write exploit code for systems that use the Intel architecture, and removes a big barrier to writing exploits for Mac systems, analysts agree.
"OS X will become more popular as prices drop. I think you have a variety of malicious folks who know the Intel chip set and instruction set. Now that Mac OS X runs on that, people can port their malware and other things over to OS X quickly and easily," said David Mackey, director of security intelligence at IBM.
"If I want to pop some box, Mac on a Motorola chip is a barrier," says Josh Pennell, president and CEO of IOActive Inc. in Seattle.
The population of individuals who can reverse-engineer code and read and write Assembly language is small, anyway.
To read more details about Apples Intel-based Macs, click here.
Within that tiny population, there are far more who can do it for CISC as compared to RISC-based systems, Grenier said.
"There are payloads and shell code written for PowerPC, but there are far fewer people who can or care to write it," Grenier said.
Tools: Hackers need tools to help them in their work, and more of them exist for machines using Intels x86 than Motorolas PowerPC, experts agree.
Popular code disassembly tools like IDA Pro work for programs that run on both Intel and PowerPC, but theres a richer variety of tools such as shell code encoders and tools for scouring code that work with the Intel platform than for PowerPC, Grenier said.
"There are tools that are not written for PowerPC because theres not the user base or the interest," she said.
Windows, Linux and Unix all use the x86 architecture, and exploit writers interested in those platforms have developed more tools to help them over the years.
Those tools, in turn, speed development of exploit code for buffer overflows and other kinds of vulnerabilities that require knowledge of the underlying architecture, Grenier said.
"I dont think [Intel] will make Mac more or less secure. But there will be a ton more exploits coming out for Mac," Grenier said.
Next Page: Other factors. | <urn:uuid:e9376501-88f8-406c-98cc-139a0f226a6c> | CC-MAIN-2017-04 | http://www.eweek.com/c/a/Security/Apples-Switch-to-Intel-Could-Allow-OS-X-Exploits/1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00086-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.947572 | 577 | 2.75 | 3 |
Stein B.A.,National Wildlife Federation |
Staudt A.,National Wildlife Federation |
Cross M.S.,Wildlife Conservation Society |
Dubois N.S.,Defenders of Wildlife |
And 6 more authors.
Frontiers in Ecology and the Environment | Year: 2013
The emerging field of climate-change adaptation has experienced a dramatic increase in attention as the impacts of climate change on biodiversity and ecosystems have become more evident. Preparing for and addressing these changes are now prominent themes in conservation and natural resource policy and practice. Adaptation increasingly is viewed as a way of managing change, rather than just maintaining existing conditions. There is also increasing recognition of the need not only to adjust management strategies in light of climate shifts, but to reassess and, as needed, modify underlying conservation goals. Major advances in the development of climate-adaptation principles, strategies, and planning processes have occurred over the past few years, although implementation of adaptation plans continues to lag. With ecosystems expected to undergo continuing climate-mediated changes for years to come, adaptation can best be thought of as an ongoing process, rather than as a fixed endpoint. © The Ecological Society of America. Source
Chinnadurai S.K.,North Carolina State University |
Birkenheuer A.J.,North Carolina State University |
Blanton H.L.,North Carolina State University |
Maggi R.G.,North Carolina State University |
And 4 more authors.
Journal of Wildlife Diseases | Year: 2010
Trapper-killed North American river otters (Lontra canadensis) in North Carolina, USA, were screened for multiple vector-borne bacteria known to be pathogenic to mammals. Blood was collected from 30 carcasses in 2006, from 35 in 2007, and from one live otter in 2008. Samples were screened using conventional polymerase chain reaction (PCR) tests for DNA from Bartonella spp., Ehrlichia spp., and spotted fever group Rickettsia spp. All samples were negative for Rickettsia spp. Twelve of 30 samples from 2006 produced amplicons using the assay designed to detect Ehrlichia spp., but sequencing revealed that the amplified DNA fragment was from a novel Wolbachia sp., thought to be an endosymbiote of a Dirofilaria sp. Between 2006 and 2007, DNA from a novel Bartonella sp. was detected in 19 of 65 animals (29%). Blood from one live otter captured in 2008 was found positive for this Bartonella sp. by both PCR and culture. The pathogenicity of this Bartonella species in river otters or other mammals is unknown. © Wildlife Disease Association 2010. Source
Mcleod E.,The Nature Conservancy |
Szuster B.,University of Hawaii at Manoa |
Tompkins E.L.,University of Southampton |
Marshall N.,James Cook University |
And 9 more authors.
Coastal Management | Year: 2015
Climate change threatens tropical coastal communities and ecosystems. Governments, resource managers, and communities recognize the value of assessing the social and ecological impacts of climate change, but there is little consensus on the most effective framework to support vulnerability and adaptation assessments. The framework presented in this research is based on a gap analysis developed from the recommendations of climate and adaptation experts. The article highlights social and ecological factors that affect vulnerability to climate change; adaptive capacity and adaptation options informing policy and conservation management decisions; and a methodology including criteria to assess current and future vulnerability to climate change. The framework is intended for conservation practitioners working in developing countries, small island nations, and traditional communities. It identifies core components that assess climate change impacts on coastal communities and environments at the local scale, and supports the identification of locally relevant adaptation strategies. Although the literature supporting vulnerability adaptation assessments is extensive, little emphasis has been placed on the systematic validation of these tools. To address this, we validate the framework using the Delphi technique, a group facilitation technique used to achieve convergence of expert opinion, and address gaps in previous vulnerability assessments. © 2015, Copyright © Taylor & Francis Group, LLC. Source
Lawler J.J.,University of Washington |
Tear T.H.,Nature Conservancy |
Pyke C.,CTG Energetics |
Shaw R.M.,Nature Conservancy |
And 8 more authors.
Frontiers in Ecology and the Environment | Year: 2010
Climate change is altering ecological systems throughout the world. Managing these systems in a way that ignores climate change will likely fail to meet management objectives. The uncertainty in projected climate-change impacts is one of the greatest challenges facing managers attempting to address global change. In order to select successful management strategies, managers need to understand the uncertainty inherent in projected climate impacts and how these uncertainties affect the outcomes of management activities. Perhaps the most important tool for managing ecological systems in the face of climate change is active adaptive management, in which systems are closely monitored and management strategies are altered to address expected and ongoing changes. Here, we discuss the uncertainty inherent in different types of data on potential climate impacts and explore climate projections and potential management responses at three sites in North America. The Central Valley of California, the headwaters of the Klamath River in Oregon, and the barrier islands and sounds of North Carolina each face a different set of challenges with respect to climate change. Using these three sites, we provide specific examples of how managers are already beginning to address the threat of climate change in the face of varying levels of uncertainty. © The Ecological Society of America. Source
Thorne J.H.,University of California at Davis |
Seo C.,Seoul National University |
Basabose A.,International Gorilla Conservation Programme |
Gray M.,International Gorilla Conservation Programme |
And 3 more authors.
Ecosphere | Year: 2013
Endangered species conservation planning needs to consider the effects of future climate change. Species distribution models are commonly used to predict future shifts in habitat suitability. We evaluated the effects of climate change on the highly endangered mountain gorilla (Gorilla beringei beringei) using a variety of modeling approaches, and assessing model outputs from the perspective of three spatial habitat management strategies: status quo, expansion and relocation. We show that alternative assumptions about the ecological niche of mountain gorillas can have a very large effect on model predictions. 'Standard' correlative models using 18 climatic predictor variables suggested that by 2090 there would be no suitable habitat left for the mountain gorilla in its existing parks, whereas a 'limiting-factor' model, that uses a proxy of primary productivity, suggested that climate suitability would not change much. Species distribution models based on fewer predictor variables, on alternative assumptions about niche conservatism (including or excluding the other subspecies Gorilla beringii graueri), and a model based on gorilla behavior, had intermediate predictions. These alternative models show strong variation, and, in contrast to the standard approach with 18 variables, suggest that mountain gorilla habitat in the parks may remain suitable, that protected areas could be expanded into lower (warmer) areas, and that there might be climactically suitable habitat in other places where new populations could possibly be established. Differences among model predictions point to avenues for model improvement and further research. Similarities among model predictions point to possible areas for climate change adaptation management. For species with narrow distributions, such as the mountain gorilla, modeling the impact of climate change should be based on careful evaluation of their biology, particularly of the factors that currently appear to limit their distribution, and should avoid the naïve application of standard correlative methods with many predictor variables. © 2013 Thorne et al. Source | <urn:uuid:dd9cabfb-f375-4dfc-bc75-a32c824abb3d> | CC-MAIN-2017-04 | https://www.linknovate.com/affiliation/ecoadapt-584975/all/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00086-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.905308 | 1,571 | 3.1875 | 3 |
While the vast majority of Linux users are hard-core techies, some may be using Linux because they want to try something new, are interested in the technology, or simply cannot afford or do not want to use Microsoft Windows. After becoming acquainted with the new interface of Linux, whether KDE, Gnome, or another window manager, users may begin to explore their system. Many machines come with default installations of Apache and Samba, and a few others even include a FTP daemon.
While these services may be disabled by default, some users may be inclined to use these programs. This article is a brief, but in-depth tutorial on how to keep these applications up-to-date and secure.
Download the article in PDF format here. | <urn:uuid:566b5bf5-5d38-4bb0-8785-04b0e44d1035> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2005/06/01/an-introduction-to-securing-linux-with-apache-proftpd-and-samba/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00388-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.91016 | 151 | 2.625 | 3 |
Chapter 21: Input/Output Macros
The IBM System/360 and those that have followed in the family have evolved an elaborate I/O system in an attempt to maintain efficiency in processing extremely large data sets. Even the early System/360 designs had several levels in the I/O architecture: logical IOCS (Input/Output Control System), Physical IOCS, and the Channel subsystem. While such a multi–level organization can be very efficient, it is somewhat hard to program.
From an Assembler Language level, the proper control and use of I/O operations requires several sequences of instructions. Many times, these sequences appear as a fixed set of instructions in a fixed order, with optional parameters. Such sequences immediately suggest the use of macros with keyword parameters. Indeed, this is the common practice.
To review from the previous chapter, the use of a macro is based on a macro definition that is then invoked by what is called the “macro invocation”. As a standard example, we recall the decimal divide macro. This first definition is that of a positional macro.
Again, we note an obvious fact. Teaching examples tend to be short and explicit. This sample macro is so simple that few programmers would actually use it. However, the I/O macros that are the subject of this chapter are complex; nobody writes the equivalent code.
A MACRO begins with the key word MACRO, includes a prototype and a macro body, and ends with the trailer keyword MEND. Parameters to a MACRO are prefixed by the ampersand “&”. Here is the example definition.
Prototype DIVID ",&DIVIDEND,&DIVISOR
Model Statements ZAP &QOUT,&DIVIDEND
The macros used in the I/O system seem all to be keyword macros. The definition of a keyword macro differs from that of a positional macro only in the form of the prototype. Each symbolic parameter must be of the form &PARAM=[DEFAULT]. What this says is that the symbolic parameter is followed immediately by an “=”, and is optionally followed by a default value. As a keyword macro, the above example can be written as:
Prototype DIVID2 "=,&DIVIDEND=,&DIVISOR=
Model Statements ZAP
Here are a number of equivalent invocations of this macro, written in the keyword style. Note that this definition has not listed any default values.
It is possible to use labels defined in the body of the program as default values.
With this definition, the two invocations are exactly equivalent.
The invocation of the macro DIVID2 will expand as follows:
Having reviewed the syntax of keyword macros, we now turn to the main topic of this chapter: a brief discussion of the Input/Output Control System and associated macros. Following the lead of Peter Abel [R_02], the focus will be on the following:
DCB Data Control Block, used to define files.
OPEN This makes a file available to a program, for either input or output.
terminates access to a file in an orderly way.
For a buffered output
approach, this ensures that all data have been output properly.
GET This makes a record available for processing.
writes a record to an output file. In a
buffered output, this may
write only to an output buffer for later writing to the file.
Each I/O macro that we shall discuss expands into a sequence of calls to operating system routines, most probably in the LIOCS (Logical I/O Control System) level. For this reason, we should review the general–purpose registers used by the operating system.
0 and 1 Logical
IOCS macros, supervisor macros, and other IBM
macros use these registers to pass addresses.
by logical IOCS and other supervisory routines to hold the
address of a save area. This area holds the contents of the user
program’s general purpose registers and restores them on return.
14 and 15 Logical
IOCS uses these registers for linkage. A
GET or PUT
will load the address of the following instruction into register 14
and will load the address of the actual I/O routine into register 15.
This use of
registers 13, 14, and 15 follows the IBM standard for
subroutine linkage, which will be discussed in a later chapter.
One “take away” from this discussion is the fact that user programs should reference and use only registers 3 through 12 of the general–purpose register set. Some general–purpose registers are less “general purpose” than others.
In IBM terminology, a data set is a collection of data records that can be made available for processing. The term is almost synonymous with the modern idea of a disk file; for most of this text the two terms will be viewed as equivalent. One should realize that the idea of a data set is more general than that of a disk file. Data sets can be found on a DASD (Direct Access Storage Device, either a magnetic disk or a magnetic drum), on magnetic tape, or on a sequence of paper punch cards. The term “data set” is a logical construct.
In order to understand the standard forms of record organization, one must recall that magnetic tape was often used to store data. This storage method had been introduced in the 1950’s as a replacement for large boxes of punched paper cards. The standard magnetic tape was 0.5 inches wide and either 1200 or 2400 feet in length. The tape was wound on a removable reel that was about 10.5 inches in diameter. The IBM 727 and 729 were two early models.
The IBM 727 was officially announced on September 25, 1963 and marketed until May 12, 1971. The figure at left was taken from the IBM archives, and is used by permission.
It is important to remember that the tape drive is an electro–mechanical unit. Specifically, the tape cannot be read unless it is moving across the read/write heads. This implies a certain amount of inertia; physical movement can be started and stopped quickly, but not instantaneously.
One physical manifestation of this problem with inertia is the inter–record gap on the magnetic tape. If the tape contains more than one physical record, as do almost all tapes, there must be a physical gap between the records to allow for starting and stopping the tape. In other words, the data layout on the tape might resemble the following:
One issue faced early by the IBM design teams was the percentage of tape length that had to be devoted to these inter–record gaps. There were several possible solutions, and each one was pursued. Better mechanical control of the tape drive has always been a good choice.
Another way to handle this problem would be to write only large physical records. Larger records lead to a smaller percentage of tape length devoted to the inter–record gaps. The efficiency problem arises with multiple small records, such as images of 80–column cards.
One way to improve the efficiency of storage for small records on a magnetic tape is to group the records into larger physical records and store these on tape. The following example is based on the one above, except that each physical record now holds four records. Note the reduction of the fraction of tape length devoted to inter–record gaps.
This process of making more efficient use of tape storage is called record blocking. The program reads or writes logical records that have meaning within the context of that program. These logical records are blocked into physical records for efficiency of storage. In a particular data set, all physical records will contain the same number of logical records; the blocking factor is a constant. The only exception is the last physical record, which may be only partially filled.
set of 17 logical records written to a tape with a blocking factor of 5. There would be four physical records on the
Physical record 1 would contain logical records 1 – 5,
physical record 2 would contain logical records 6 – 10,
physical record 3 would contain logical records 11 – 15, and
physical record 4 would contain logical records 16 and 17.
On a physical tape, it is likely that the last physical record will be the same size as all others and be padded out with dummy records. In the above example, physical record 4 might contain two logical records and three dummy records. This is a likely conjecture.
Magnetic tape drives are not common in most computer systems these days, but the design feature persists into the design of the modern data set.
Use of the I/O Facilities
In order to use the data management facilities offered by the I/O system, a few steps are necessary. The program must do the following:
the physical characteristics of the data to be read or written with respect to
data set organization, record sizes, record blocking, and buffering to be used.
2. Logically connect the data set to the program.
3. Access the records in the data set using the correct macros.
terminate access to the data set so that buffered data (if any)
can be properly handled before the connection is broken.
While some of these steps might be handled automatically by the run–time system of a modern high–level language, each must be executed explicitly in an assembler program.
Style Conventions for Invoking I/O Macros
Some of the I/O macros, especially the file definition macro, require a number of parameters in order to specify the operation. This gives rise to a stylistic convention designed to improve the readability of the program. The most common convention used here is to use the keyword facility and list only one parameter per line.
While one possibly could use positional parameters in invoking an I/O macro, this would require any reader to consult a programming reference in order to understand what is intended. Of course, it is possible for a programmer to forget the proper argument order.
Here is a file definition macro invocation written in the standard style.
FILEIN DCB DDNAME=FILEIN, X
Note the “X” in column 72 of each of the lines except the last one. This is the continuation character indicating that the next physical line is a continuation of the logical line. To reiterate a fact, it is the presence of a non–blank character in column 72 that makes the next line a continuation. Peter Abel [R_02] places a “+” in that column; that is good practice.
Here is another style that would probably work. It is based on old FORTRAN usage.
FILEIN DCB DDNAME=FILEIN, 1
Note that every line except the last has a comma following the parameter. This is due to the fact that the parameter string after the DCB should be read as a single line as follows:
The File Definition Macro
The DCB (Data Control Block) is the file definition macro that is most commonly used in the programs that we shall encounter. As noted above, it is a keyword macro. While the parameters can be passed in any order, it is good practice to adopt a standard order and use that exclusively. Some other programmer might have to read your work.
The example above shows a DCB invocation that has been shown to work on the particular mainframe system now being used by Columbus State University. It has the form:
Filename DCB DDNAME=Symbolic_Name, X
The name used as the label for the DCB is used by the other macros in order to identify the file that is being accessed. Consider the following pair of lines.
The example macro has one problem that might lead to confusion. Consider the line:
Filename DCB DDNAME=Symbolic_Name, X
The file name is the same as the symbolic name. This is just a coincidence. In fact it is the filename, which is the label associated with the DCB, that must match the other macros.
Here is an explanation of the above entries in the invocation of the DCB macro.
DDNAME identifies the file’s symbolic name, such as SYSIN for the primary system input device and SYSPRINT for the primary listing device. Here we use a slightly nonstandard name FILEIN, which is associated with SYSIN by a job control statement near the end of the program. That line is as follows:
//GO.FILEIN DD *
The “*” in this statement stands for the standard input device, which is SYSIN. This statement associates the symbolic name FILEIN with SYSIN.
identifies the data set organization.
Typical values for this are:
PS Physical sequential, as in a set of cards with one record per card.
DEVD defines a particular I/O unit. The only value we shall use is DA, which indicates a direct access device, such as a disk. All of our I/O will be disk oriented; even our print copy will be sent to disk and not actually placed on paper.
specifies the format of the records. The
two common values of the parameter are:
F Fixed length and unblocked
FB Fixed length and blocked.
specified the length (in bytes) of the logical record. A typical value would be
a positive decimal number. Our programs will all assume the use of 80–column punched cards for input, so that we set LRECL=80.
BLKSIZE specifies the length (in bytes) of the physical record. Our sample invocation does not use this parameter, which then assumes its default value. If the record format is FB (fixed length and blocked), the block size must be an even multiple of the logical record size. If the record format is F (fixed length and unblocked), the block size must equal the logical record size. It is probably a good idea to accept the default value for this parameter.
EODAD is a parameter that is specified only for input operations. It specifies the symbolic address of the line of code to be executed when an end–of–file condition is encountered.
MACRF specifies the macros to be used to access the records in the data set. In the case of GET and PUT, it also specifies whether a work area is to be used for processing the data. The work area is a block of memory set aside by the user program and used by the program to manipulate the data. We use MACRF=(GM) to select the work area option.
The OPEN Macro
This macro opens the data set and makes its contents available to the program. More than one dataset can be opened with a single macro invocation. The upper limit on datasets for a single OPEN statement is 16, but that number would produce unreadable code. As a practical matter, your author would prefer an upper limit of two or three datasets for each invocation of the OPEN macro.
Consider the following two sequences of macro invocations. Each sequence does the same thing; it opens two datasets.
Sequence 1 is a single statement.
Sequence 2 has two statements, which could appear in either order.
Each of these statements assumes that the two Data Control Blocks are defined.
Define the input file here
PRINTER DCB Define the output file here
The general format of the OPEN macro for one file is as follows [R_21, page 67].
[LABEL] OPEN (ADDRESS[,(OPTIONS)]
Multiple files can be opened at the same time, by continuing the argument list.
[LABEL] OPEN (ADDRESS1[,(OPTIONS1),ADDRESS2[,(OPTIONS2)]
Note that the first argument for opening the dataset is the file name used as the label for the DCB that defines the dataset. This is the label (address) associated with the DCB, not the symbolic name of the file (SYSIN, SYSPRINT, etc.).
It is also possible to pass the address of the DCB in a general–purpose register. When a register is used for this purpose, it is enclosed in parentheses. Here are two equivalent code sequences, each of which opens FILEIN for INPUT.
* OPTION 2
Note the parentheses around the second argument in each of the two individual invocations of the OPEN macro. This is a use of the sublist option for macro parameters [R_17, p. 302]. A sublist is a character string that consists of one or more entries separated by commas and enclosed in parentheses. What is happening here is that the macro definition is written for a sublist as a symbolic parameter, and this is a sublist of exactly one item.
There is one advantage in creating a separate OPEN statement for each file to be opened. If the macro fails, the line number of the failing statement will be returned. With only one file per line, the offending file is identified immediately.
The Close Macro
This macro deactivates the connection to a dataset in an orderly fashion. For output datasets, this will flush any data remaining in the operating system buffers to the dataset, so that nothing is lost by closing the connection. If needed, this macro will update any catalog entries for the dataset; in the Microsoft world this would include the file attributes.
Once a dataset is closed, it may be accessed again only after it has once again been opened.
While it may be possible to execute a program and terminate the execution without issuing a CLOSE for each open file, this is considered very bad programming practice.
The general format of the CLOSE macro for one file is as follows [R_21, page 27].
[LABEL] CLOSE (ADDRESS[,(OPTIONS)]
Multiple files can be closed at the same time, by continuing the argument list.
[LABEL] CLOSE (ADDRESS1[,(OPTIONS1),ADDRESS2[,(OPTIONS2)]
that has been successfully used in our lab assignments seems not to be of
this form. Here are the lines that we have used to close the INPUT and PRINTER.
A90END CLOSE FILEIN
The format above is that preferred for use when running under the DOS operating system, which is an IBM product not related to the better known Microsoft product. Our programs are run under a variant of the OS operating system. According to the standard format for OS, the above statements should have been written as follows.
A90END CLOSE (FILEIN)
Apparently, either form of the CLOSE macro for a single file will work.
When closing more than one file with a single CLOSE macro, one must allow for the fact that the options do exist, even if not commonly used. Here is the proper format.
A90END CLOSE (FILEIN,,PRINTER)
two commas following FILEIN. This
OPTIONS1 is not used. Were only one comma present, the assembler would try to interpret the string PRINTER as an option for closing FILEIN. The lack of options following the string PRINTER
The next two system macros to be discussed are GET and PUT. Before discussing either of these, it is important to note an I/O mode that will not be discussed here. This is called “locate mode”; it allows direct processing of data in the system buffers, so that the program need not define a work area. As this is a very minor advantage [R_02, page 262], we shall omit this feature and assume that each GET and PUT references a work area.
The GET Macro
This macro makes available the next record for processing. The record input overwrites the previous contents of the input area. There are two general formats as used with a work area.
[label] GET Filename,Workarea
[label] GET (1),(0)
In each of
these formats, the
following examples, the file name is FILEIN and the work area is
FILEIN DCB Define the input file
RECDIN DS CL80 This is the input work area
The second uses the use of general–purpose registers 0 and 1 in the standard manner to store the addresses of the file definition area and the work area
LA 1,FILEIN Address of the file definition
LA 0,RECDIN Address of the work area
GET (1),(0) Read a record into RECDIN. Note the
standard use of the parentheses.
The PUT Macro
This macro writes a record from the output work area. There are two general formats.
[label] PUT Filename,Workarea
[label] PUT (1),(0)
In each of
these formats, the
following examples, the file name is PRINTER and the work area is
PRINTER DCB Define the input file
DATOUT DS CL133 This is the output work area
The second uses the use of general–purpose registers 0 and 1 in the standard manner to store the addresses of the file definition area and the work area
LA 1,PRINTER Address of the file definition
LA 0,DATOUT Address of the work area
PUT (1),(0) Copy data from work area to printer.
Expansion of the I/O Macros
47 OPEN (PRINTER,(OUTPUT))
000014 48+ CNOP 0,4
000014 4510 C016 0001C 49+ BAL 1,*+8
000018 8F 50+ DC AL1(143)
000019 000098 51+ DC AL3(PRINTER)
00001C 0A13 52+ SVC 19
53 OPEN (FILEIN,(INPUT))
00001E 0700 54+ CNOP 0,4
000020 4510 C022 00028 55+ BAL 1,*+8
000024 80 56+ DC AL1(128)
000025 0000F8 57+ DC AL3(FILEIN)
000028 0A13 58+ SVC 19
59 PUT PRINTER,PRHEAD
00002A 4110 C092 00098 61+ LA 1,PRINTER
00002E 4100 C1A2 001A8 62+ LA 0,PRHEAD
000032 1FFF 63+ SLR 15,15
000034 BFF7 1031 00031 64+ ICM 15,7,49(1)
000038 05EF 65+ BALR 14,15
66 GET FILEIN,RECORDIN
00003A 4110 C0F2 000F8 68+ LA 1,FILEIN
00003E 4100 C152 00158 69+ LA 0,RECORDIN
000042 1FFF 70+ SLR 15,15
000044 BFF7 1031 00031 71+ ICM 15,7,49(1)
000048 05EF 72+ BALR 14,15
95 A90END CLOSE (FILEIN)
000074 96+ CNOP 0,4
000074 4510 C076 0007C 97+A90END BAL 1,*+8
000078 80 98+ DC AL1(128)
000079 0000F8 99+ DC AL3(FILEIN)
00007C 0A14 100+ SVC 20
101 CLOSE (PRINTER)
00007E 0700 102+ CNOP 0,4
000080 4510 C082 00088 103+ BAL 1,*+8
000084 80 104+ DC AL1(128)
000085 000098 105+ DC AL3(PRINTER)
000088 0A14 106+ SVC 20
116 PRINTER DCB DSORG=PS,
119+* DATA CONTROL BLOCK
000098 121+PRINTER DC 0F'0' ORIGIN ON
122+* DIRECT ACCESS DE
000098 0000000000000000 123+ DC BL16'0' FDAD, DVTB
0000A8 00000000 124+ DC
125+* COMMON ACCESS ME
0000AC 00 126+ DC AL1(0) BUFNO, NUM
0000AD 000001 127+ DC AL3(1) BUFCB, BUF
0000B0 0000 128+ DC AL2(0) BUFL, BUFF
0000B2 4000 129+ DC BL2'0100000000000000' DSO
0000B4 00000001 130+ DC A(1) IOBAD FOR
131+* FOUNDATION EXTEN
0000B8 00 132+ DC BL1'00000000' BFTEK, BFA
0000B9 000001 133+ DC AL3(1) EODAD (END
0000BC 82 134+ DC BL1'10000010' RECFM (REC
0000BD 000000 135+ DC AL3(0) EXLST (EXI
136+* FOUNDATION BLOCK
0000C0 D7D9C9D5E3C5D940 137+ DC CL8'PRINTER' DDNAME
0000C8 02 138+ DC BL1'00000010' OFLGS (OPE
0000C9 00 139+ DC BL1'00000000' IFLGS (IOS
0000CA 0050 140+ DC BL2'0000000001010000' MAC
141+* BSAM-BPAM-QSAM I
0000CC 00 142+ DC BL1'00000000' OPTCD, OPT
0000CD 000001 143+ DC AL3(1) CHECK OR I
0000D0 00000001 144+ DC A(1) SYNAD, SYN
0000D4 0000 145+ DC H'0' INTERNAL A
0000D6 0000 146+ DC AL2(0) BLKSIZE, B
0000D8 00000000 147+ DC F'0' INTERNAL A
0000DC 00000001 148+ DC A(1) INTERNAL A
149+* QSAM INTERF
0000E0 00000001 150+ DC A(1) EOBAD
0000E4 00000001 151+ DC A(1) RECAD
0000E8 0000 152+ DC H'0' QSWS (FLAG
0000EA 0085 153+ DC AL2(133) LRECL
0000EC 00 154+ DC BL1'00000000' EROPT, ERR
0000ED 000001 155+ DC AL3(1) CNTRL
0000F0 00000000 156+ DC H'0,0' RESERVED A
0000F4 00000001 157+ DC A(1) EOB, INTER
160 * INPUT FILE - DATA CONTROL BLOCK
163 FILEIN DCB DSORG=PS,
166+* DATA CONTROL BLOCK
0000F8 168+FILEIN DC 0F'0' ORIGIN ON
169+* DIRECT ACCESS DE
0000F8 0000000000000000 170+ DC BL16'0' FDAD, DVTB
000108 00000000 171+ DC
172+* COMMON ACCESS ME
00010C 00 173+ DC AL1(0) BUFNO, NUM
00010D 000001 174+ DC AL3(1) BUFCB, BUF
000110 0000 175+ DC AL2(0) BUFL, BUFF
000112 4000 176+ DC BL2'0100000000000000' DSO
000114 00000001 177+ DC A(1) IOBAD FOR
178+* FOUNDATION EXTEN
000118 00 179+ DC BL1'00000000' BFTEK, BFA
000119 000074 180+ DC AL3(A90END) EODAD (END
00011C 90 181+ DC BL1'10010000' RECFM (REC
00011D 000000 182+ DC AL3(0) EXLST (EXI
183+* FOUNDATION BLOCK
000120 C6C9D3C5C9D54040 184+ DC CL8'FILEIN' DDNAME
000128 02 185+ DC BL1'00000010' OFLGS (OPE
000129 00 186+ DC BL1'00000000' IFLGS (IOS
00012A 5000 187+ DC BL2'0101000000000000' MAC
188+* BSAM-BPAM-QSAM I
00012C 00 189+ DC BL1'00000000' OPTCD, OPT
00012D 000001 190+ DC AL3(1) CHECK OR I
000130 00000001 191+ DC A(1) SYNAD, SYN
000134 0000 192+ DC H'0' INTERNAL A
000136 0000 193+ DC AL2(0) BLKSIZE, B
000138 00000000 194+ DC F'0' INTERNAL A
00013C 00000001 195+ DC A(1) INTERNAL A
196+* QSAM INTERF
000140 00000001 197+ DC A(1) EOBAD
000144 00000001 198+ DC A(1) RECAD
000148 0000 199+ DC H'0' QSWS (FLAG
00014A 0050 200+ DC AL2(80) LRECL
00014C 00 201+ DC BL1'00000000' EROPT, ERR
00014D 000001 202+ DC AL3(1) CNTRL
000150 00000000 203+ DC H'0,0' RESERVED A
000154 00000001 204+ DC A(1) EOB, INTER | <urn:uuid:b31e178b-f13a-4bfd-86f7-b41ec2dbfc20> | CC-MAIN-2017-04 | http://edwardbosworth.com/My3121Textbook_HTM/MyText3121_Ch21_V02.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280065.57/warc/CC-MAIN-20170116095120-00207-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.855125 | 6,176 | 3.0625 | 3 |
Get Instant Access
To unlock the full content, please fill out our simple form and receive instant access.
The Network Engineer's role is to ensure the stability and integrity of in-house voice, data, video, and wireless network services. This is achieved by planning, designing, and developing local area networks (LANs) and wide area networks (WANs) across the organization. In addition, the Network Engineer will participate with the installation, monitoring, maintenance, support, and optimization of all network hardware, software, and communication links. This individual will also analyze and resolve network hardware and software problems in a timely and accurate fashion, and provide end user training where required. | <urn:uuid:2da22811-8e6a-4ad6-96d8-bae455a5428e> | CC-MAIN-2017-04 | https://www.infotech.com/research/network-engineer | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00417-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.915872 | 137 | 2.53125 | 3 |
Fiber Optic Transmission, namely to optical fiber as a medium for data and signal transmission. Optical fiber, not only can be used to transmit analog signals and digital signals, and can meet the needs of video transmission. Optical fiber transmission, generally use optical fiber cable of single optical fibre data transfer rate can reach several Gbps, in the case of does not use Repeaters, transmission distance can up to a few tens of kilometers.
Transmitting signals as pulses of light through a thinner than human hair strand of glass or plastic at data carrying capacity (bandwidth) far greater than possible with any other physical medium. Under the AT&T’s SONET standard data speeds of over 2.5 Gbps are common, whereas the maximum limit for a copper cable (without compression) is 16 Mbps. The attainable limit of fiber optic transmission is 2 trillion bits per second, enough to handle the amount of data handled by all US telecommunication companies put together. Fiber-optic uses less energy, is immune to static (electromagnetic interference), and is almost entirely secure from tempering or wire tapping.
Development and Application
Optical fiber communication technology applications is growing fast, since 1977 since the first commercial installation of fiber optic system, telephone companies start using optical fiber link, replacing the old system of copper wire. Today many of the telephone company, in their system comprehensively USES optical fiber as the main structure and as a city of long distance connection between the telephone system. Providers have begun to use/copper fiber axis hybrid circuit experiment. Allow the hybrid lines in the field of integration between optical fiber and coaxial cable, this is known as the location of the node, the offer will be the light of the light pulse is converted to electrical signal receiver, and then signal is transmitted through a coaxial cable to each family. As an appropriate means of communication signal transmission, optical fiber steadily instead of copper wire is obvious, the fiber optic cable across long distances between local phone systems as well as many network system to provide the main line connection.
Optical fiber is a kind of with glass as the waveguide, in the form of light to transfer information from one end to the other side of the technology. Today’s low loss optical fiber relative to the early development of transmission medium, almost without being limited by the bandwidth and has a unique advantage, point-to-point optical transmission system consists of three basic parts: produce optical transmitter, optical signal to carry light signal cable and optical receiver receive light signals.
To know the fiber optic communication development and application, it helps us learn more about how to use fiber transmission . From FiberStore,we supply some fiber optic transmission products,such as fiber optic transceiver,Fiber Media Converters, Optical Multiplexers, Fiber optic modems, Attached Direct Cables,fiber switch,Network Interface Cards, and PON & AON. and so on.We have a full range of fiber optic transmission products,welcome to FiberStore to buy what you need. | <urn:uuid:de4c4cff-f007-45dc-a507-6a0212dfb0d3> | CC-MAIN-2017-04 | http://www.fs.com/blog/development-and-application-of-fiber-optic-transmission.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279224.13/warc/CC-MAIN-20170116095119-00143-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.927979 | 619 | 3.515625 | 4 |
A software repository is a storage location where you can store software packages. These software packages can be accessed and installed, when required, on computers in your network. Using these repositories facilitates easy storage, maintenance and backup of software packages.
Desktop Central enables you to store commonly used software applications in a central location and install them in the computers in your network when required. In Desktop Central, there are two types of software repositories:
The benefits of using a software repository include the following:
Desktop Central allows you to store software packages in the following repositories:
A network-share repository is used when you want to deploy a software application to multiple computers in a network. It is recommended that you store the software package that you want to deploy in a network share that is accessible from all the computers in the network. The software application will be installed directly in the computers that you specify. It is useful to use a network-share repository to install software applications in computers in the same LAN.
Most software applications have a single installation file like <setup>.exe or the<softwarename>.exe. Other applications have more than one installable file, however, these files are located in the same directory. Some complex applications, like Microsoft Office, have multiple installable files. Here each installable file is located in a different directory. It is recommended that you deploy such applications from a network share that is accessible from all the computers in your network.
Using a network-share repository enables you to do the following:
An HTTP repository is used to store executable files before you install them in computers in your network. You can use this repository when you want to deploy software packages to computers using the HTTP path. You can also change the location of the HTTP repository if required.
The HTTP repository is created automatically when you install Desktop Central. It is located in the same folder as the Desktop Central server. For example, <DesktopCentral server>\webapps\DesktopCentral\swrepository. You can change the location of the repository if required.
Assume that you want to install software applications in the computers in a remote office. The computers are connected to the local office using a VPN connection or the Internet. In this case, you cannot use a network-share repository to install the software applications. Desktop Central addresses this requirement by enabling you to use HTTP repository to store the required software packages. Using the HTTP path option, you can browse and select the required executable files and upload them to the Desktop Central server. These files can be accessed from the computers in the remote office and can be used to install software applications.
Using an HTTP repository enables you to do the following: | <urn:uuid:990e4ffa-f5c7-4385-9b4b-8f97b5e17209> | CC-MAIN-2017-04 | https://www.manageengine.com/products/desktop-central/software-repository.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281084.84/warc/CC-MAIN-20170116095121-00353-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.896672 | 541 | 2.90625 | 3 |
Most people lie, whether they're covering up something sinister or just embarrassed over a mistake. Research conducted a few years ago at the University of Massachusetts found that 60 percent of participants lied at least once during an observed 10-minute conversation.
If you're trying to get to the bottom of a work incident, or just asking the kids who broke the TV, it's useful to know how to spot a lie (Learn interview and interrogation techniques in How to Spot a Liar).
Body language expert and human behavior specialist Carolyn Finch, who served as a consultant and analyst for media outlets during the OJ Simpson trial, has appeared on CNBC News and the Ellen Degeneres Show. Here Finch gives a rundown of the hallmark physical signs people display when they are trying to put one over on you (Watch the video for Carolyn's analysis and her recall of famous cases of alleged lying).
Obviously these signs don't guarantee that lying is in progress, but they're valuable clues to recognize.
Tense facial expression
When people lie, said Finch, they tend to smile with only the lower muscles in their face. A liar might try and fake a smile to look genuine or at ease. But a real smile uses the entire face, including the eyes.
"You will see smiling that is artificial," said Finch. "It's down here (the lower face) instead of in the eyes."
Hesitant speech and pausing
A liar will speak hesitantly, according to Finch, and often pauses frequently when answering a question. A liar might also repeat words or stutter, she said.
"A person who is pausing is thinking," said Finch. "The eyes go up and around and down to think about what they are going to say next."
A liar might also place a finger in front of their mouth, as if contemplating, when they are about to say something that is untrue.
"When they open the mouth, they may give you whole different story than what they might have said when they were thinking with the finger over their mouth."
Nervous behavior and overemphasis
Other face touches might include nose rubbing or touching underneath the nose, all indicators the person is uncomfortable. And watch hands closely, which are an easy way to spot nervousness.
"Sometimes there is tremor, definitely in the hands," said Finch, who also noted the jaw might shake, too.
"The jaw is usually level with floor when a person is talking to another person. But (when lying) the jaw is going to go down, there can be a tremor, it's tight, like: 'Yes you better believe me,' and they're overemphasizing it."
Finch said Bill Clinton's now famous statement in 1998 about his relationship with Monica Lewinsky is an example of this kind of overemphasis. Clinton, who later admitted to an inappropriate relationship with the White House intern, initially told the public: "I did not have sexual relations with that woman; Ms. Lewsinky."
"This [Clinton's hand gestures] is making a very sarcastic point," said Finch. "(He's saying) 'Do you hear me? Do you hear me?' It's almost a sarcastic, sharp point saying: 'OK, what's matter with you people?' This was accompanied with a lot blinking, much more than ordinarily seen with Bill Clinton."
Lack of eye contact or shifty eyes
Liars will sometimes avoid making eye contact, but these days many know that eye contact has become a well-known indicator. It is therefore not as good of a sign as it used to be, said Finch, because liars will make a concerted effort to keep your gaze so as not to arouse suspicion. However, Finch advises studying where there eyes go if, and when, they do break gaze.
Finch said she immediately recognized that Susan Smith, who was convicted of drowning her two children in 1994, was lying during a TV interview. Before Smith was charged with the crime, she told police and the media that her children were abducted and that she didn't know where they were.
"With Susan Smith, I looked at her eyes and knew. Her eyes were up to her left. She was visualizing what had happened. Then she was down to the right. That's when I knew she knew exactly where her children were." | <urn:uuid:4a70b3e4-74db-4f86-a52a-9757cacc9a20> | CC-MAIN-2017-04 | http://www.csoonline.com/article/2124218/fraud-prevention/4-ways-to-catch-a-liar.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00381-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.983647 | 891 | 3.109375 | 3 |
2016 is looming as the year during which a gap in weather satellites could leave the nation without some of the severe storm data that's vital to early warnings. After 2011's record-breaking severe weather — with 12 disasters that each cost more than $1 billion — it seems counterintuitive that budget reductions may create a period of 12 to 18 months during which severe weather warnings days in advance of a storm likely won't be available, according to National Oceanic and Atmospheric Administration (NOAA) predictions.
Vital to weather forecasting, two polar-orbiting satellites collect data above the Earth's poles 14 times per day and feed it into computer forecast models. According to NOAA, the satellites' orbits "provide two complete views of weather around the world," which allow meteorologists to "develop models to predict the weather out to five to 10 days." In addition, polar-orbiting weather satellites provide about 90 percent of the data used in National Weather Service forecast models.
The two satellites provide continuity of information, with one providing data during the mid-morning orbit and the other in the early afternoon. The first is run by the European Organisation for the Exploitation of Meteorological Satellites, which partners with NOAA and benefits from the information collected in the afternoon orbit. The second satellite, which flies the early afternoon orbit, is owned by the United States — and it is where the information gap issue lies.
Ajay Mehta, deputy director for NOAA's Joint Polar Satellite System (JPSS), said the launch of the new satellite, called JPSS-1, was delayed because of a funding reduction. JPSS-1 will replace a NASA satellite that was launched on Oct. 28, 2011. NASA's satellite — called the National Polar-Orbiting Operational Environmental Satellite System Preparatory Project, or NPP for short — will provide operational data for four or five years.
“That is an important thing for our continuity because [it’s] the last of the old generation of satellites we had launched in 2009,” Mehta said. “That one is only going to last for another couple of years.”
While NASA’s satellite is providing continuity of information, its life cycle is expected to end in 2016, and Mehta estimated that JPSS-1 won’t be fully operational until 2017. The time between NPP and JPSS-1 is when the information gap is expected.
“For the polar orbit, we have had operational satellites since 1979, so this mission is critical to provide continuity of NOAA operational data sets,” said Mitch Goldberg, JPSS program scientist. “NOAA has products and services, such as weather forecasting, and they depend on this constant flow of data from satellites going to weather prediction models.”
Last year was rife with concerns over how much funding NOAA’s satellite program would receive and what that would mean for the future of severe weather forecasting in the United States. NOAA Administrator Jane Lubchenco had many poignant sound bites in 2011, including that budget cuts to the satellite would be a “disaster in the making;” that in a few years, the agency may not be able to do the severe storm warnings that people have come to expect; and that it could cost three to five times more to rebuild the project than to keep funds flowing toward it.
President Barack Obama requested a little more than $1 billion for 2011 and beyond for the polar-orbiting satellite program. On Nov. 18, 2011, legislation was enacted that gave JPSS $924 million for 2012. “While we’re happy with the level of funding in this fiscal environment, it was still almost $150 [million] less than the president’s request — therefore it will not eliminate the possibility of a gap,” Mehta said via email.
When thinking about impacts that the information gap could have on emergency management, a question arises: What would be different?
To help assess how beneficial the information from polar-orbiting satellites is to weather forecasting, the National Weather Service reran forecasts for Snowmageddon, the blizzard that hit the East Coast in February 2010, without the satellites’ data. “When they took the data out, they ended up mis-forecasting it by almost 50 percent,” Mehta said. With the polar-orbiting data, a 20-inch snowfall was predicted, and without it the forecast was 10 inches of snow. In reality during the week of storms, 28.6 inches of snow fell in Washington, D.C. — the most since 1922, according to NOAA.
“You can imagine the difference for decision-makers,” said Goldberg. “If someone tells you there is going to be a seven-inch snowstorm or two-foot snowstorm, you’re going to make different decisions based on those two scenarios.”
The last year also has seen an increase in severe weather. From the tornadoes in Alabama and Missouri to Hurricane Irene impacting the East Coast, tremendous amounts of devastation have occurred across the U.S., the forecasts for which have been “very good,” Goldberg said. Without data from the polar-orbiting satellites, however, he said there would be a major degradation of weather forecast performance.
Another issue is this information can’t be obtained from other sources. Although the United States partners with Europe’s satellite program, data from both orbits is needed, said Mehta. He added that NOAA is exploring all options and has looked into privately owned satellites — but that would not help prevent the predicted information gap.
“Our estimates show that for somebody to build a new instrument and launch it, it’s going to take much longer,” he said, “because we’ve already started building the instruments and spacecraft for JPSS-1.”
And the lack of additional information sources also applies to state and local emergency management agencies. Larry Gispert, past president of the International Association of Emergency Managers and former emergency management director of Hillsborough County, Fla., said everyone — the private and public sectors — relies on NOAA and the National Weather Service for severe weather information. He said some companies will process that data and put their own spin on it — “but they all get that data from the federal government.”
What it comes down to is that emergency managers need severe weather data — and it must be as accurate as possible and provide enough time for preparing and evacuating people if needed. The island of Key West, Fla., is the year-round home to about 25,000 people, but sees more than 1 million visitors annually. Craig Marston, Key West’s division chief of emergency management and training, said evacuation procedures begin 96 to 72 hours before a storm is predicted to make landfall and having good, up-to-date information is key.
“We’re pretty far out there, so what really concerns us is that NOAA is able to maintain its air flights,” he said.
Marston works closely with the National Hurricane Center and the local Weather Forecast Office to know what the weather is doing and what to expect. In the event that severe weather data isn’t available for more than three days in advance, Key West’s ability to evacuate health-care patients and other populations could be jeopardized — 72 hours is the minimum amount of time needed to fly patients from the area. “We rely heavily on the Weather Service for its information,” Marston said.
Hillsborough County’s Gispert said the large numbers of people who live in coastal areas make storm information necessary to help with evacuations. “Emergency management people have a tough enough job without getting accurate data and some kind of advanced warning of potential threats,” he said.
Like most issues, it all comes down to money, and Gispert said public safety is one of government’s ultimate responsibilities. “If my congressman would ask me, and I often tell them, if it was a choice between funding one more bomb to Afghanistan or putting up a weather satellite, guess which one I am going to vote for.” | <urn:uuid:7b5c49d1-de38-4541-8c9c-456319a2426a> | CC-MAIN-2017-04 | http://www.govtech.com/em/disaster/Satellites-Could-Jeopardize-Severe-Weather-Forecasts.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00381-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952333 | 1,720 | 3.0625 | 3 |
The National Institute of Standards and Technology (NIST) this week opened a competition to develop a new cryptographic hash algorithm, a tool that converts a file, message or block of data to a short fingerprint for use in digital signatures, message authentication and other computer security applications. Such hash algorithms are ultimately one of the key security technologies for federal and public systems.
The competition is NIST's response to recent advances in the analysis of hash algorithms. The new hash algorithm will be called Secure Hash Algorithm-3 (SHA-3) and will augment the hash algorithms currently specified in the Federal Information Processing Standard (FIPS) 180-2, Secure Hash Standard.
NIST's goal is that SHA-3 provide increased security and offer greater efficiency for the applications using cryptographic hash algorithms. FIPS standards are required for use in federal civilian computer systems and are often adopted voluntarily by private industry. FIPS 180-2 specifies five cryptographic hash algorithms, including SHA-1 and the SHA-2 family of hash algorithms.
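As a rough illustration of what a cryptographic hash algorithm does, the Python sketch below uses SHA-256 (one of the SHA-2 family functions specified in FIPS 180-2) to produce a fixed-length fingerprint of a message; the messages themselves are arbitrary examples, not drawn from the NIST announcement.

```python
import hashlib

message = b"Transfer $100 to account 12345"   # arbitrary example message
digest = hashlib.sha256(message).hexdigest()  # SHA-256 belongs to the SHA-2 family
print(digest)  # 64 hex characters: a fixed-length fingerprint of the input

# Changing even one character of the input produces a completely different
# fingerprint, which is what makes hashes useful for digital signatures and
# message authentication.
tampered = b"Transfer $900 to account 12345"
print(hashlib.sha256(tampered).hexdigest())
```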
Because serious attacks have been reported in recent years against cryptographic hash algorithms, including SHA-1, NIST has decided to standardize an additional hash algorithm to augment the ones currently specified in FIPS 180-2.
Entries for the competition must be received by October 31, 2008. The competition was announced in the Federal Register Notice published on November 2, 2007. NIST has held two public workshops to assess the status of its approved hash functions and to solicit public input on its cryptographic hash function policy and standard.
As a result of these workshops, NIST has decided to develop one or more additional hash functions through a public competition, similar to the development process of the Advanced Encryption Standard (AES). AES supports key sizes of 128 bits, 192 bits and 256 bits and will serve as a replacement for the Data Encryption Standard (DES), which has a key size of 56 bits. In addition to the increased security that comes with larger key sizes, AES can encrypt data much faster than Triple-DES, a DES enhancement which essentially encrypts a message or document three times. According to NIST's AES overview: "The AES algorithm is a symmetric block cipher that can encrypt (encipher) and decrypt (decipher) information." It is based on the Rijndael algorithm, named for Belgian researchers Vincent Rijmen and Joan Daemen, who developed it.
NIST initially issued draft minimum acceptability requirements, submission requirements, and evaluation criteria for candidate hash algorithms in January 2007 for public comment; the comment period ended on April 27, 2007. Based on the public feedback, NIST has revised the requirements and evaluation criteria and has now issued a Call for a New Cryptographic Hash Algorithm (SHA-3) Family.
Simulation involves setting up a model of a real system and conducting repetitive experiments on it. The methodology consists of a number of steps. The following is a brief discussion of the process:
Problem Definition. A problem is examined and defined. The analyst should specify why simulation is necessary. The system's boundaries and other such aspects of the problem should be stated.
Constructing the Simulation Model. This step involves gathering the necessary data. In many cases, a flowchart is used to describe the process. Then the model is programmed. Figure 9.6 shows a visual simulation model.
Figure 9.6 Example of a Visual Simulation Model
Testing and Validating the Model. The simulation model must accurately imitate the system under study. This involves the process of validation.
Design of the Experiments. Once the model has been validated, the experiment is designed. In this step, the analyst determines how long to run the simulation. This step deals with two important and contradictory objectives: maximizing the accuracy of the model and minimizing the cost of developing it.
Conducting the Experiment. Conducting the experiment involves issues such as how to generate random numbers, the number of trials or time period for the experiment, and the appropriate presentation of the results.
Evaluating the Results. The last step is the evaluation of the results. In addition to statistical tools, managers/analysts may conduct "What If" and sensitivity analyses. | <urn:uuid:9dc5d033-93b8-4405-8bba-108b6c6d4392> | CC-MAIN-2017-04 | http://dssresources.com/subscriber/password/dssbookhypertext/ch9/page22.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280266.9/warc/CC-MAIN-20170116095120-00161-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.895351 | 288 | 3.90625 | 4 |
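As a minimal illustration of these steps, the Python sketch below simulates average waiting time at a single service counter: it builds a simple model, generates random numbers, repeats trials, and evaluates the results. The model, parameter values, and trial count are illustrative assumptions, not part of the text above.

```python
import random

def run_trial(n_customers=1000, arrival_rate=1.0, service_rate=1.2):
    """One simulation run of a single-server queue (the 'model')."""
    clock = server_free_at = total_wait = 0.0
    for _ in range(n_customers):
        clock += random.expovariate(arrival_rate)   # random inter-arrival time
        start = max(clock, server_free_at)          # wait if the server is busy
        total_wait += start - clock
        server_free_at = start + random.expovariate(service_rate)
    return total_wait / n_customers

# Design and conduct the experiment: repeat the trial, then evaluate the results.
trials = [run_trial() for _ in range(50)]
print("average wait across trials:", sum(trials) / len(trials))
```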
The U.S. Department of Health and Human Services (HHS) unveiled a plan that establishes a set of five-year national prevention targets to reduce and possibly eliminate health care-associated infections (HAIs).
Health care-associated infections are infections that patients acquire while undergoing medical treatment or surgical procedures. These infections are largely preventable.
The Action Plan to Prevent Health Care-Associated Infections lists a number of areas in which HAIs can be prevented, such as surgical site infections. The plan also outlines cross-agency efforts to save lives and reduce health care costs through expanded HAI prevention efforts.
"This plan will serve as our roadmap on how the department addresses this important public health and patient safety issue," HHS Secretary Mike Leavitt said. "This collaborative interagency plan will help the nation build a safer, more affordable health care system."
The plan establishes national goals and outlines key actions for enhancing and coordinating HHS-supported efforts. These include development of national benchmarks prioritized recommended clinical practices, a coordinated research agenda, an integrated information systems strategy and a national messaging plan.
The plan also identifies opportunities for collaboration with national, state, tribal and local organizations.
HHS intends to update the plan in response to public input and new recommendations for infection prevention.
In addition to the tremendous toll on human life, the financial burden attributed to these infections is staggering. HHS' Centers for Disease Control and Prevention (CDC) estimates that approximately 1.7 million HAIs occurred in U.S. hospitals in 2002 and were associated with 99,000 deaths. CDC also estimates that HAIs add as much as $20 billion to health care costs each year.
HHS plans to hold meetings in the spring of 2009 to provide opportunities for public comment on improving and strengthening the plan and sharing opportunities for organizations to become engaged with implementing components of the plan that are consistent with their organizations' missions. The dates for these meetings will be posted on the HHS Office of Public Health and Science Web site. | <urn:uuid:17aea9b6-f65b-426e-80f7-b083a12a99c8> | CC-MAIN-2017-04 | http://www.govtech.com/policy-management/HHS-Integrated-Information-Systems-National-Messaging.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281263.12/warc/CC-MAIN-20170116095121-00463-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.952968 | 459 | 2.90625 | 3 |
Network Time Protocol (NTP) provides time synchronisation for all devices on the network that have the NTP service available and configured.

The NTP service usually listens on UDP port 123. It will most often distribute UTC (Coordinated Universal Time) along with well-planned leap second adjustments, but no other information, such as the time zone, can be carried with it. This means that your devices can get their clocks synced with NTP, but you first need to be sure that you have configured the time zone on each device so that it can show local time.
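As a rough sketch of how a client talks to an NTP server, the Python snippet below sends a minimal SNTP request to UDP port 123 and prints the returned UTC time; the server name and timeout are illustrative assumptions.

```python
import datetime
import socket
import struct

NTP_SERVER = "pool.ntp.org"     # illustrative server; use your own NTP source
NTP_TO_UNIX_EPOCH = 2208988800  # seconds between 1900-01-01 and 1970-01-01

# 48-byte request: LI=0, version=3, mode=3 (client) packed into the first byte.
request = b"\x1b" + 47 * b"\x00"
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(request, (NTP_SERVER, 123))   # NTP listens on UDP port 123
    reply, _ = sock.recvfrom(48)

# The transmit timestamp (seconds field) is the 11th 32-bit word of the reply.
transmit_seconds = struct.unpack("!12I", reply)[10]
utc = datetime.datetime.utcfromtimestamp(transmit_seconds - NTP_TO_UNIX_EPOCH)
print("Server UTC time:", utc)  # the local time zone still has to be applied separately
```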
Next-Generation High Schools encourage at-risk students to stay in school and graduate on time, according to school administrators and students.
Josh Robinson, a 16-year-old student at Albemarle High School in Charlottesville, Va., said that the connected learning environment of the school keeps him engaged. Before he started school in Albemarle County, he was kicked out of various schools for behavior problems.
“Throughout my life I’ve had some struggles,” Robinson said. “I’ve been in and out of homes and living on the streets.”
Even though Robinson had difficulties staying out of trouble, he maintained A’s, B’s, and C’s in his classes.
“I’m a pretty intelligent kid,” Robinson said. “I’ve gotten compliments on how much I’ve done in school.”
When Robinson was put into foster care, he eventually transferred to Albemarle High School.
“My life was like the Titanic hitting an iceberg before I got put in foster care,” Robinson said.
Albemarle High School offers the hands-on, project-based learning that President Obama promoted in November 2015, when the administration gave $375 million in support for Next-Generation High Schools.
Next-Generation High Schools offer personalized and active learning, access to “real-world” and hands-on learning, ties to higher education institutions, and focus on STEM opportunities to students who are underrepresented in those fields.
Albemarle High School teaches students math, science, history, and English, which are divided into two blocks each day. The curriculum focuses on one main topic each day and incorporates connections to the topic within each subject. For example, when the students learned about slope in algebra, they learned about the steepness of mountains in science, they read the book Peak by Roland Smith in English, and they learned about the ancient Egyptian pyramids in history, according to Robinson.
Schools in California and Washington have also incorporated Next-Generation learning in their high schools.
Intel awarded $5 million for pathways in computer science in two high schools in Oakland, Calif.
“Oakland is a city that has everything around us to help us succeed yet everyone expects us to fail,” said Bernard McCune, deputy chief of the Office of Post Secondary Readiness for the Oakland Unified School District.
When the district decided to start using Next-Generation learning techniques, they wanted to “pursue excellence like it slapped our mother,” McCune said.
By 2020, the district hopes that every student will be part of a linked learning pathway, which includes technical education and internships that help students connect what they’re studying to the real world.
The district also made The Oakland Promise, which has raised $25 million from the Oakland community in scholarships for students and incentives for parents.
When Susan Enfield, superintendent of Highline Public Schools in Washington, asked students what they wanted the superintendent to know, they said that they wanted more Advanced Placement (AP) classes, especially in computer science.
“Just because we don’t have a lot doesn’t mean we don’t want to learn how to be successful,” the students told Enfield.
Highline Public Schools is in the fourth year of its strategic plan to achieve a 95 percent graduation rate. When Enfield began as superintendent, the rate was 60 percent.
“I thought the expectation for kids was pretty low when I arrived,” Enfield said. “I don’t believe that only six out of 10 of our kids are capable of graduating.”
This year, the graduation rate is about 75 percent.
The district has invested in early learning to ensure that students have tuition-free, full-day kindergarten. The schools offer the SAT for all students during the school day, placed about 1,000 students into internships that supplement what they’re learning, and added more AP classes.
Enfield said the most important thing to consider while the district reimagines two high school campuses for next year is to ensure that each school offers the same caliber classes and after-school programs so that the students can have equity and parity no matter what school they’re sent to.
“High schoolers deserve everything they need to be empowered in the classroom,” Obama said in a video to the Second Annual White House Summit on Next-Generation High Schools. “We can all do this. It’s all within our reach.” | <urn:uuid:bad587ac-42b9-431c-a093-83a1836f18f4> | CC-MAIN-2017-04 | https://www.meritalk.com/articles/next-gen-high-schools-help-at-risk-students/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00179-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.964369 | 972 | 2.734375 | 3 |
There are no exascale supercomputers yet, but there are plenty of research papers on the subject. The latest is a short but intense white paper centering on some of the specific challenges related to CMOS technology over the next decade and a half. The paper’s principal focus is how to deal with the end of Moore’s Law, which, according to best predictions, will occur during the decade of exascale computing.
Titled Exascale Research: Preparing for the Post-Moore Era (PDF), the paper is authored by HPC experts Marc Snir, Bill Gropp and Peter Kogge, who argue that we need to start using CMOS technology much more efficiently, while simultaneously accelerating the development of its replacement.
One of the tenets of supercomputing, and information technology in general, is that processors are expected to get more powerful and less expensive each year. Like the shark that needs to keep swimming to stay alive, the IT industry is based on the assumption that the hardware has to keep moving forward to support the expectations of the market.
This is certainly true for exascale proponents, who see the next level of HPC capability as a way to move forward on big science problems and help solve global challenges like climate change mitigation and the development of alternative energy sources. In the US, there is also the need to support our nuclear stockpile with compute-intensive virtual simulations — a task that is becoming increasingly difficult as the original expertise in designing and testing nuclear weapons disappears.
National security, too, has become very dependent on supercomputing. As the authors state, “In an era where information becomes the main weapon of war, the US cannot afford to be outcomputed any more than it can afford to be outgunned.”
It’s a given that the semiconductors behind exascale computing will, at least initially, use CMOS, a technology that’s been in common use since the 1970s. The problem is that CMOS (complementary-symmetry metal–oxide–semiconductor) is slowly giving way to the unrelenting laws of physics. Due to increasing leakage current, voltage scaling has already plateaued. That occurred nearly a decade ago when transistor feature size reached 130 nm. The result was that processor speeds leveled off.
And soon feature scaling will end as well. According to the white paper, CMOS technology will grind to a halt sometime in the middle of the next decade when the size of transistors reaches around 7 nm — about 30 atoms of silicon crystal. As the authors put it:
We have become accustomed to the relentless improvement in the density of silicon chips, leading to a doubling of the number of transistors per chip every 18 months, as predicted by “Moore’s Law”. In the process, we have forgotten “Stein’s Law”: “If something cannot go on forever, it will stop.”
And unfortunately there is currently no technology to take the place of CMOS, although a number of candidates are on the table. Spintronics, nanowires, nanotubes, graphene, and other more exotic technologies are all being tested in the research labs, but none are ready to provide a wholesale replacement of CMOS. To that end, one of the principal recommendations of the authors is for more government funding to accelerate the evaluation, research and development of these technologies, as a precursor to commercial production 10 to 15 years down the road.
It should be noted, as the authors do, that the peak performance of supercomputer has increased faster than CMOS scaling, so merely switching technologies is not a panacea for high performance computing. In particular, HPC systems have gotten more powerful by increasing the number of processors, on top of gains realized by shrinking CMOS geometries. That has repercussions in the failure rate of the system, which is growing in concert with system size.
The larger point is that the end of CMOS scaling can’t be compensated for just by adding more chips. In fact, it’s already assumed that the processor count, memory capacity, and other components will have to grow substantially to reach exascale levels, and the increased failure rates will have to be dealt with separately.
On the CMOS front, the main issue is power consumption, most of which is not strictly related to computation. The paper cites a recent report that projected a 2018-era processor will use 475 picojoules/flop for memory access versus 10 picojoules/flop for the floating point unit. The memory access includes both on-chip communication associated with cache access and off-chip communication to main memory.
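A back-of-the-envelope calculation shows why those numbers matter at exascale: sustaining 10^18 flops at the projected per-flop energies would translate into hundreds of megawatts for data movement alone. The figures below simply scale the cited picojoule values; they are not taken from the paper itself.

```python
# Scale the cited per-flop energies up to a sustained exaflop (1e18 flop/s).
FLOPS = 1e18
PJ_MEMORY_ACCESS = 475    # picojoules per flop for memory access (cited projection)
PJ_FLOATING_POINT = 10    # picojoules per flop for the floating point unit

def megawatts(pj_per_flop):
    return pj_per_flop * 1e-12 * FLOPS / 1e6

print(f"memory access:  {megawatts(PJ_MEMORY_ACCESS):.0f} MW")   # ~475 MW
print(f"floating point: {megawatts(PJ_FLOATING_POINT):.0f} MW")  # ~10 MW
```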
To mitigate this, the authors say that smarter use of processor circuitry needs to be pursued. That includes both hardware (e.g., lower power circuits and denser packaging) and software (e.g., algorithms that minimize data movement and languages able to specify locality). More energy-aware communication protocols are also needed.
The good news is that most of the performance/power improvements discussed in the paper will also benefit the commodity computing space. But the authors also say that some of the technology required to support future HPC systems will not be needed by the volume market:
We need to identify where commodity technologies are most likely to diverge from the technologies needed to continue the fast progress in the performance of high-end platforms; and we need government funding in order to accelerate the research and development of those technologies that are essential for high-end computing but are unlikely to have broad markets.
The authors aren’t suggesting we need to build graphene supercomputers, while the rest of the world moves to spintronics. But there may be certain key technologies that can be wrapped around post-CMOS computing that will be unique to exascale computing. As always, the tricky part will be to find the right mix of commodity and HPC-specific technologies to keep the industry moving forward. | <urn:uuid:858d996f-3535-4565-8b66-1a764ff29ea1> | CC-MAIN-2017-04 | https://www.hpcwire.com/2011/06/29/moore_s_law_meets_exascale_computing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00024-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.93977 | 1,234 | 2.765625 | 3 |
Understanding Packet Loss in Network Monitoring and Analysis Appliances
The key to zero packet loss lies in understanding the four key sources of loss.
By Daniel Joseph Barry
Network monitoring and analysis has grown in importance as the Internet and IP networks have become the de facto standard for a range of digital services. The commercialization of the Internet has only aggravated this need and extended the range of applications to include network testing, security, and optimization.
Common for all these applications is a need to analyze large amounts of data in real time. What distinguishes the task of network analysis from communication is the amount of data to be analyzed. In a typical communication scenario, the endpoints are only interested in the packets that are related to their conversation. The other packets sharing the same connection are simply filtered out. In a network analysis scenario, on the other hand, we are interested in all the packets traversing the point we are monitoring. At 10 Gbps this can be up to 30 million packets per second that need to be analyzed in real time.
For an analysis to be useful, every packet needs to be analyzed. That missing packet could be the key to determining what is happening in the network. Waiting for the packet to be re-sent is not an option either because we are trying to perform analysis in real time. Packet loss is, therefore, unacceptable for analysis applications.
There can be many causes of packet loss, which can relate to how we get access to the data, the kind of technology used to capture packets, the processing platform, and the application software used to analyze the data. Let’s take a look at each of these in turn.
Source #1: Receipt of packet copies
The first source of packet loss can be the method for receiving copies of the packets for passive, off-line analysis. Switches and routers provide Switched Port ANalyzer (SPAN) ports, which are designed to provide a copy of all packets passing through the given switch or router. Network monitoring and analysis appliances can thus receive the data they need from the SPAN port directly. In most cases, this works well, but there is the potential for packet loss if the switch or router becomes overloaded. In such cases, the switch or router will prioritize its main task of switching and routing and down-prioritize SPAN port tasks. This will result in packets not being delivered for analysis or, in other words, packet loss.
It is for this reason that many prefer to use test access points (TAPs), which are simpler devices installed on the connection itself. A TAP simply copies each packet received to the TAP outputs. The advantage of TAPs is that they can guarantee that a copy of each packet received is available. On a typical TAP, two outputs are provided per connection: one for upstream traffic and one for downstream traffic. Therefore, two analysis ports are required to capture and merge this data.
Source #2: Packet-capture technology
The second source of packet loss is the packet-capture technology used. Many appliances are based on standard network interfaces, such as those used for communication. However, these are not designed to handle the large amounts of data that need to be captured. As we said, up to 30 million packets per second need to be captured, but standard network interfaces cannot handle more than five million packets per second at the time of writing.
Another way of looking at this is in relation to what packet sizes are supported. Many of the vendors of standard network interfaces will claim full throughput for 512-byte and larger packets. With larger packet sizes, there are correspondingly fewer packets per second to handle. Unfortunately, the Internet and IP networks don’t start at 512 bytes, and it is far from a rare occurrence that smaller packet sizes are used.

If we just look at typical TCP traffic, we can see two distinct breakpoints when analyzing traffic profiles. The first noticeable breakpoint is at 1500 bytes, corresponding to the maximum transmission unit (MTU) of the Ethernet protocol. The next breakpoint is at 576 bytes, corresponding to the maximum segment size (MSS) of the transmission control protocol (TCP). Below 576 bytes, there can be a large number of smaller packets corresponding to TCP acknowledgment packets, control segments, etc., which can be as small as 40 bytes.
This knowledge is often used in test methodologies, where reference is made to the Internet mix or IMIX to simulate internet traffic. A typical IMIX model will use a mix of 40-byte, 576-byte, and 1500-byte traffic corresponding to the breakpoints above. It is therefore clear that discounting traffic below 512 bytes is not providing a realistic and complete picture of what is happening in the network.
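To see why small packets dominate the capture problem, the quick calculation below estimates packets per second on a 10 Gbps link for IMIX-style frame sizes, assuming 20 bytes of Ethernet overhead (preamble plus inter-frame gap) per frame; the roughly 30 million packets per second figure cited earlier corresponds to capturing both directions of such a link at minimum frame size.

```python
LINK_BPS = 10_000_000_000   # 10 Gbps
OVERHEAD_BYTES = 20         # preamble (8) + inter-frame gap (12) per frame

# 64 bytes is the Ethernet minimum frame size; 576 and 1500 match the
# TCP MSS and Ethernet MTU breakpoints discussed above.
for frame_bytes in (64, 576, 1500):
    pps = LINK_BPS / ((frame_bytes + OVERHEAD_BYTES) * 8)
    print(f"{frame_bytes:>4}-byte frames: {pps / 1e6:6.2f} million packets/s per direction")
```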
To guarantee packet capture, use products that are designed specifically for this task. They must ensure that all packet traffic is captured with zero packet loss even at 100 percent load. Otherwise, the analysis is incomplete. An example of this type of product is Napatech intelligent network adapters (full disclosure: I work for Napatech), which are designed specifically for packet-capture applications. These adapters are also designed for use in standard servers, which are the most common platform for appliance design.
Source #3: Servers
The third source of packet loss is the standard servers that are used as hardware platforms for appliances. If these servers are not configured properly, packets can be lost due to processing congestion. As general-purpose processing platforms, standard servers support many applications simultaneously as well as various adapters. Sharing processing, memory, and data bus (PCIe) resources between these various applications can lead to congestion if not configured properly. Because analysis is performed in real time, the analysis data will be lost unless it is buffered on the network adapter itself.
In addition, modern servers often provide “green” profiles, where power consumption is minimized. This means that very little airflow is provided to the PCIe slots where adapters are installed, so adapters will have difficulty dissipating heat, which can lead to the adapter failing (and a failed adapter, of course, guarantees packet loss). This needs to be considered in the design of the packet capture adapter.
Source #4: Analysis application software
The fourth source of packet loss is the design of the analysis application software defining the network monitoring and analysis appliance. Many applications are implemented using a single thread, meaning that they can only execute on a single CPU core. This is sufficient for lower bit rates but becomes a source of packet loss at higher bit rates, such as 10 Gbps.
The analysis application just cannot keep up. A best practice in such situations is to use a multi-threaded design that can take advantage of the multiple CPU cores available in standard servers. This in turn requires a packet capture adapter that can distribute traffic to the multiple CPU cores in a way that fits the analysis application.
A Final Word
As can be seen, there are multiple sources of packet loss, but with careful consideration of how the data is provided to the appliance, the packet capture adapter used, configuration of the standard server hardware platform and application analysis software design, it is possible to guarantee zero packet loss analysis.
Daniel Joseph Barry is VP of marketing at Napatech. | <urn:uuid:d8f758c5-25d4-4f40-8daa-03cabfd3bb07> | CC-MAIN-2017-04 | https://esj.com/articles/2012/12/13/understanding-packet-loss.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00326-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.941429 | 1,475 | 2.671875 | 3 |
The National Cyber Security Alliance (NCSA) announces an education day on Internet safety and security on Oct. 11, 2007. The day's events, to be held at The San Jose Museum of Art, kick off a multi-city, localized effort to promote National Cyber Security Awareness Month (NCSAM) and to provide Internet users with cyber safety tips and educational presentations on the dangers of lax computer security.
"San Jose is the logical choice to be the first city on our local NCSAM tour," said Ron Teixeira, executive director, NCSA. "The city is not only a global technology hub, but also where many of the innovative technologies we use to keep ourselves safe online were developed. It's imperative that we localize the important messages of our national campaign to empower individuals, increase awareness about online safety and security risks at the micro level and affect grassroots change we hope will lead to a much larger shift in thinking."
A national campaign, National Cyber Security Awareness Month is focused on educating the American public, businesses, schools and government agencies about ways to secure their part of cyber space, computers and our nation's critical infrastructure.
In addition to an identity theft demonstration from a top security expert from Symantec, the NCSA and One Economy, through its Digital Connectors program, will provide tips to general Internet users on the precautions to take while online. One Economy's Digital Connectors are young people who live in disadvantaged neighborhoods who receive technology training, including the Internet Safety strategies outlined on One Economy's self-help Web portal, the Beehive. They then provide training and support to promote the adoption of technology in their communities. | <urn:uuid:67d484d7-e184-4260-8b87-62e02b227a91> | CC-MAIN-2017-04 | http://www.govtech.com/security/National-Cyber-Security-Month-Kicks-Off.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279248.16/warc/CC-MAIN-20170116095119-00566-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937422 | 335 | 2.515625 | 3 |
Networking Strategies for Dummies
The popular, now defunct, comedy show “The Kids In The Hall” once aired a sketch called “Businessmen Networking.” In the sketch, a reception area of an office is filled with middle-aged, mostly white-haired men. There are a few younger men among them who are played by members of the comedy troupe.
A character named Nick walks in and begins interacting with a character named Gerald, his supervisor. It becomes clear Nick is new to the business field. So, Gerald enlightens him on the activity occurring around them. “Look out here, Gerald. There’s a sea of businessmen. The ripples and eddies you catch with your eye — those are the important guys. The name of the game is ‘networking,’ businessmen meeting businessmen for the purpose of meeting them again at a later date.”
Satirical though it might be, this provides a pretty perfect definition of what networking is and what it accomplishes. Many aspiring professionals have surveyed the landscape of their prospective field and realized there’s truth to the adage, “It’s not what you know — it’s who you know.”
So, the most basic strategy you can apply to networking is to expand as broadly as possible the pool of people whom you know.
This doesn’t mean your career is dictated by random circumstances in your life. (Microsoft co-founder Paul Allen befriended Bill Gates in high school. Most IT professionals are not so lucky.)
As “The Kids In The Hall” sketch progresses, it makes it clear how easy it can be to come to “know” someone. A character named Tony enters. He is cool and confident, obviously an experienced networker. Nick and Gerald interact with him briefly, and after meeting Nick, Tony tells him he hopes to see his name on a mailing list someday. Tony then abruptly ends the conversation by saying, “OK. Listen, I’m gonna stand a few feet away.” Nick and Gerald then excitedly ponder the significance of the meeting, and Gerald says, “He knows you. Can that hurt?”
It can’t. Another lesson to take away from the meeting is how directly Tony acted to dismiss himself from the conversation. In any good networking opportunity, say a room full of IT pros who specialize in your chosen field, it pays to not be shy in telling people you need to move on and continue speaking with other people.
It’s a matter of working the room — the best people in the business are able to do it effortlessly, but those who lack gifted ease in social interactions still should be able to gracefully dismiss themselves from career-oriented conversations once they’ve run their course.
To network, of course, you’ve got to get into a space where professionals in your field are congregating. It pays to be open to trade shows, conferences, open houses, grand openings, etc. Not so many that it distracts from your work, but enough to ensure you’re making connections in your field.
And in approaching such a networking opportunity, be unabashedly ambitious about it. Meet the right people for your networking objectives and directly ask them about it if you feel there’s something they can do for you or you can do for them. After all, you’re there to build your career.
But this is not to say industry-centric talk has to dominate networking. As “The Kids In The Hall” sketch progresses further, Gerald introduces Nick to yet another businessman. Nick hesitantly tries to establish a common ground, asking him “Do you like professional sports?”
This gets an excited response from the potential contact, who exclaims, “Ha! By God, I do! I cheer for all the local teams!”
The two begin to shake hands vigorously and don’t stop. Obviously, identifying a common interest is not always this easy, but feeling out a contact about his or her interests can allow you to easily establish a rapport that makes networking much easier and more rewarding.
As “The Kids In The Hall” sketch concludes, all the businessmen in the room have surrounded Nick. He’s now “the hot guy” (mainly owing to his having made mention of professional sports).
With any luck, your networking efforts will be as successful as his. | <urn:uuid:e6c8afdc-b502-4a6f-9830-f83048ac7ab7> | CC-MAIN-2017-04 | http://certmag.com/networking-strategies-for-dummies/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280133.2/warc/CC-MAIN-20170116095120-00474-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.96504 | 933 | 2.546875 | 3 |
IBM has once again stuck its neck out and made its prediction for which five technologies will dominate our lives in the next five years. This year the focus is very much on how humans will interact with technology through our senses, and the way that will change our lives.
Here is IBM’s 2012 five in five:
Big Blue reckons that soon enough mobile phone users will be able to touch a product through their device’s screen. Using haptic technology (which some speculated would appear on the iPhone 5 earlier this year) people could, for example, touch the material on a piece of clothing before deciding whether to buy it or not.
Of course users will not literally touch the material, but it will be a simulation of what the material would feel like. IBM says its scientists are already hard at work using haptic, infrared and pressure sensitive technologies that will be able to do this, with the retail industry likely to be the first to benefit from it.
Anyone who has used the "similar" option on Google’s image search will know that its algorithms are getting better at looking at an image and then finding similar ones. But it does that by using information tagged into the image by humans and is therefore quite limited.
IBM says that is soon going to change. Computers will soon be able to learn what is in an image, essentially making sense out of it in the same way a human would. The obvious use for this technology is the healthcare industry, where patient diagnosis could be given a helping hand. Telemedicine is one area that would certainly benefit from this sort of technology.
There is an episode of The Simpsons where Herb, Homer’s long-lost brother, invents a machine that can translate a baby’s babbling into speech. IBM thinks technology to actually do that will be available within five years. Sensors will detect elements of sounds and computers will interpret them to find any meaning that would otherwise be hidden.
As well as the baby talk example, IBM thinks that sensors will be able to use sounds to predict when a tree may fall in a forest or when a landslide is imminent. A real-life example is happening in Galway Bay in Ireland, where IBM has put sensors underwater to pick up on noises being made by wave energy conversion machines and work out what impact they may be having on marine life.
This one would interest Heston Blumenthal, that’s for sure. IBM claims that within the next five years computers will be able to pull together the perfect meal, one that uses a human’s favourite smells and tastes and selects appropriate food. Big Blue said, "It will break down ingredients to their molecular level and blend the chemistry of food compounds with the psychology behind what flavours and smells humans prefer."
It wouldn’t just be about the taste though. The technology would be able to recommend healthy options that whoever sits down to eat the meal is guaranteed to like. It will even be able to take into account medical issues such as diabetes or allergies, IBM said.
Within five years it will be possible for your smartphone to smell you and analyse if you are coming down with something like a cough or a cold. Tiny sensors embedded in the device will analyse biomarkers and the thousands of molecules in someone’s breath and will pick up on any abnormalities.
Technology like this is already in use, such as art galleries using it to monitor the air around works of art, but IBM thinks it will eventually be used for clinical hygiene purposes as well. For example, hospitals will be able to use it to work out whether a room has been sanitised or not.
"These five predictions show how cognitive technologies can improve our lives, and they’re windows into a much bigger landscape -the coming era of cognitive systems," said Bernard Meyerson, Chief Innovation Officer, IBM.
"The world is tremendously complex. We face challenges in deciphering everything from the science governing tiny bits of matter to the functioning of the human body to the way cities operate to how weather systems develop," Meyerson said. "Gradually, over time, computers have helped us understand better how the world works. But, today, a convergence of new technologies is making it possible for people to comprehend things much more deeply than ever before, and, as a result, to make better decisions.
"In the coming years, computers will become even more adept at dealing with complexity. Rather than depending on humans to write software programs that tell them what to do, they will program themselves so they can adapt to changing realities and expectations. They’ll learn by interacting with data in all of its forms-numbers, text, video, etc. And, increasingly, they’ll be designed so they think more like the humans," he concluded. | <urn:uuid:6044b66f-8540-48f9-bbf7-65fb878755fd> | CC-MAIN-2017-04 | http://www.cbronline.com/blogs/cbr-rolling-blog/ibms-tip-for-the-next-five-years-computers-thinking-more-like-humans-171212 | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00382-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.955926 | 981 | 2.53125 | 3 |
As two snipers terrorized the Washington D.C. area last fall, law enforcement agencies from state, local and federal levels scrambled for any lead that might solve the case, employing some sophisticated technology in the process.
But this unusual case had the more than 1,000 officers assigned to the case swinging at curveballs, and like most cases, diligent police work with some critical help from the public eventually solved it. That's not to say the technology used here won't improve law enforcement in the future. Emerging technologies such as geographic profiling, ballistics information technology and secure networks that facilitate interoperability were used in this case and show promise for the future.
"Given what evidence there was, I thought the investigation was run about as well as could be expected and good use was made of the technology available," said Jay Siegel, professor of forensic science at Michigan State University.
Police used geographic profiling, hoping to nail down the snipers' residence. These types of crimes are usually committed within killers' "comfort zones," which are usually close to where they live.
Geographic profiling has been used in about 700 cases so far and has been credited with solving about 150. In this case, law enforcement called in Environmental Criminology Research Inc. to create an electronic map and mark the locations of the shootings. Based on that information, the profiling system uses a complex algorithm to calculate where the perpetrator is likely to live.
But in this case, the shooting sites became more dispersed; the shooters were transient, frustrating such efforts.
Law enforcement also used ballistics technology, which is also very new. The technology allowed law enforcement to match the shooters' weapon with crime scenes, but the ballistics information network contains relatively few ballistics images thus far.
The FBI's Law Enforcement Online (LEO) program was more useful. The FBI sets up a secure Web page for a "special interest group," such as the sniper investigation. In this case, the FBI set up command posts in six different counties, giving law enforcement in those counties the ability to send and receive information securely via the Internet.
"You can access all that information from anywhere you can get on the Internet," said Craig Sorum, unit chief and supervisory special agent of the LEO program. "To me, as a street agent, that's the cool part. You don't have to be physically in the command post to see what's going on."
All incoming e-mail is scanned for viruses, and LEO users must verify they are entitled to use the system. The system uses complicated passwords and virtual-private-network technology, which facilitates encrypted use of LEO through the Internet.
Even sensitive but unclassified information can be posted. "You can put up anything but classified information," Sorum said.
"It's an efficient, effective way to get out the information, whether it's terrorism, missing children or white collar scams, and at the same time, not tip off the media or bad guys on information that shouldn't be out there," he continued.
In the sniper case, it was information about the suspects' car that was dispersed through the media that solved the case.
"There's information that we want out there, and we give that to the news media," Sorum said. "Like when they found out about the car, tell everybody. Hey it's better to have 4 million people looking for it than 50 of us." | <urn:uuid:aa45dcd1-5d5b-46ab-9dc2-004ee80b46c1> | CC-MAIN-2017-04 | http://www.govtech.com/e-government/Information-Overload.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00290-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.967191 | 698 | 2.65625 | 3 |
By Jo Peterson and Michelle Ruyle
Cloud is its own language. Every language seems foreign until we learn it and use it.
How many of you remember the first time you picked up a Newton’s Telecom Dictionary to reference how many DS3s made up an OC-48 or to spot check your knowledge on another term? The point is that none of us came out of college or into the workforce knowing telecom terms and acronyms. We learned them along the way either in a classroom setting or while working on a client project or both!
If a dictionary existed for cloud terms, the number of definitions would be growing daily as new acronyms and terms are constantly being added. Just like telecom, the language of cloud comprises acronyms, proprietary names and some terminology that has been granted new meaning.

The list of 50+ terms and definitions below is not meant to be comprehensive. How could it be in this quickly changing landscape? At best it's an abridged glossary designed to assist in "translating" the more common cloud terms into a more common language. It is a basic overview of some of the phrases and terms that come up in conversations around the cloud with customers and providers.
Familiarizing yourself with these terms can only help further your conversations with clients.
API — An interface that specifies how software components should interact with each other
Cloud — A metaphor for a global network of servers
Cloud broker — A liaison between cloud services customers and cloud service providers. A cloud broker has no cloud resources of its own.
Cloud bursting — A bursting capacity configuration set up between a private cloud and a public cloud. If 100 percent of the resource capacity is used in the client’s private cloud, then bursting occurs to a public cloud in the form of overflow traffic.
Cloud computing — Delivery model of computing in which various servers, applications, data, and other often virtualized resources are integrated and provided as a service over the Internet.
Cloud computing types — Three main categories exist with additional categories evolving: software-as-a-service (SaaS) providers that offer Web-based applications; infrastructure-as-a-service (IaaS) vendors that offer public Internet-based access to storage and computing power; and platform-as-a-service (PaaS) vendors that give developers the tools to build and host Web applications. | <urn:uuid:82f72b46-6904-497c-9eb0-492273ad6fea> | CC-MAIN-2017-04 | http://www.channelpartnersonline.com/articles/2014/05/cloud-jargon.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00198-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925022 | 494 | 2.890625 | 3 |
The AT END and NOT AT END clauses are used with the READ command.

Please refer to the command below:

READ IN-FILE
    AT END
        PERFORM 9000-EOF-PARA
    NOT AT END
        PERFORM 2000-PROCESS-PARA
END-READ

The AT END clause is used to perform some statements or paras when EOF, or end of file, occurs on the READ.

The same goes for NOT AT END, whose statements or paras are performed after every successful read.
Generally, AT END is used; NOT AT END is not used as much, but it can be used all the same.

We can give statements to perform, or we can give para names, etc.

Hope it helps.
Solar Power Plant
/ October 15, 2013
Located on the outskirts of the Mojave Desert near the California/Nevada border, the Ivanpah Solar Electric Generating System has three solar towers, surrounded by hundreds of thousands of mirrors reflecting sunlight. Late last month, the facility's Unit 1 station synced to the power grid for the first time, described by officials as a critical milestone. The test proved that the power tower technology in use at the facility, made up of sunlight-tracking heliostats, solar field integration software and a solar receiver steam generator, can effectively transmit power to the power grid.
The 3,500-acre plant is jointly owned by NRG Energy Inc., Brightsource Energy Inc., and Google. Power generated in the Unit 1 test will go to Pacific Gas and Electric (PG&E), which will also receive power from the facility's upcoming Unit 3 test. Power from the Unit 2 station test will go to Southern California Edison.
Photo Credit: Josh Cassidy/KQED | <urn:uuid:0251bd1c-b9f2-4fac-a369-4b63cbafe8bd> | CC-MAIN-2017-04 | http://www.govtech.com/photos/Photo-of-the-Week-Solar-Power-Plant.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00318-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.937195 | 208 | 3.21875 | 3 |
Troubleshoot Windows DNS Problems
In any enterprise, DNS services are a crucial backbone for network connectivity. DNS is used for name resolution, allowing one client to locate another client. If DNS fails, it will disrupt connectivity to the Internet. In this article, we'll consider some common issues caused by misconfiguration of DNS.
Incorrect Configuration of Primary/Secondary Zones
Creating a new zone, whether primary or secondary, is just a matter of a few clicks. However, there are other settings that you might want to check to ensure that DNS is working properly.
Zones are not replicating
You have created a new zone, but for some reason it is not replicating with the primary zone. There might be many reasons for this, but here are some possibilities:
- Zone Transfers are enabled and the secondary DNS server IP is not specified. As a best practice, it is always recommended to specify the IP addresses of the servers that will need to download the zone data from the primary zone. See Figure 1.
- Secure Dynamic Updates are enabled, and the secondary zone does not have Active Directory Integrated DNS Zones configured. Secure Dynamic Updates only work if both DNS servers are running Active Directory Integrated DNS Zones. If either DNS server is not using Active Directory Integrated DNS Zones, or is running on BIND (Linux), then Dynamic Updates need to be set to Non-Secure. See Figure 2.
Figure 1: Zone Transfers is enabled and only replicating to a specific server.
Figure 2: Dynamic Updates is set to Secure by default for Windows Server DNS.
Users are not able to do DNS queries from your DNS Server
You have done the basic troubleshooting, and users are able to ping the DNS server and get a response. However, when they try to query specific DNS zones hosted on your DNS server, the queries fail. In this case, you might want to check:
- The "Everyone" group does not have read permission for the zone. Due to misconfiguration, the "Everyone" group might not have the necessary permission entries for the DNS zone. See Figure 3.
Figure 3: Everyone group has permission to read and list the content of the Zone
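One quick way to confirm the symptom from a client is to point a resolver directly at the server and try a few names in the zone. The sketch below uses the third-party dnspython package; the server address and host names are placeholders for your own environment.

```python
import dns.exception
import dns.resolver   # third-party "dnspython" package

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["10.0.0.10"]   # placeholder: your DNS server's IP
resolver.lifetime = 3                  # seconds before a query counts as failed

for name in ("host1.corp.example.com", "www.corp.example.com"):  # placeholder names
    try:
        answer = resolver.resolve(name, "A")
        print(name, "->", [rdata.address for rdata in answer])
    except dns.resolver.NXDOMAIN:
        print(name, "-> server answered, but the record does not exist")
    except (dns.resolver.NoNameservers, dns.exception.Timeout) as exc:
        print(name, "-> query failed:", exc)
```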
User PCs are not registering into the DNS zone
A user's PC is able to connect to the network, but the computer name does not get registered in the DNS server. Three common possibilities are:
- The TCP/IP settings properties window does not have Register this connection's addresses in DNS selected. This option will ask the DNS client to register the computer name into the DNS server. See Figure 4.
- Authenticated Users group does not have the correct permission set for the DNS zone. Authenticated Users group needs to have the permission to create child objects for the DNS zone. See Figure 5.
- DNS Dynamic Updates is not enabled in DHCP settings. To be exact, the DNS client will ask the DHCP Server to create an A and PTR record in the DNS Server. Hence, the DHCP Server will need to have the Enable DNS dynamic updates according to the settings below selected. See Figure 6.
Figure 4: Register this connection's addresses in DNS must be selected.
Figure 5: Authenticated users group must have permission to Create All Child Objects, else it will not create an A record in the DNS Server.
Figure 6: Enable DNS Dynamic Updates in DHCP settings.
DNS Server configuration
If the DNS server is not configured properly, the entire DNS service will be affected. Here are some common configuration issues administrators should look out for:
DNS queries not responding with any response
Assuming that Internet connectivity from the DNS server to the outside world is still good, the problem could lie with the forwarder or root hints. Here's why:
- Forwarder DNS servers are down. Depending on your network configurations, you might have set up forwarder DNS. If all of the forwarder DNS servers are down, this will affect the DNS server at your site. See Figure 7.
- Root hints are missing, or the root hint servers are down. Root hints allow DNS queries to be resolved directly by the root DNS servers, without using an intermediate DNS server or a forwarder. See Figure 8.
Figure 7: Configure Forwarders in DNS Server.
Figure 8: Root Hints name servers are shown in this list.
DNS is Important
DNS is crucial in every corporate environment, whether for internal or external hostname resolution. The above configuration issues are not exhaustive, but do include some of the most common problems administrators miss during routine monitoring and troubleshooting. Do you have any other DNS tips that you would like to share? Post them below!
Jabez Gan is a Microsoft Most Valuable Professional (MVP) and is currently the Senior Technical Officer for a consulting company that specializes in Microsoft technologies. His past experience includes developing technical content for Microsoft Learning, Redmond, Internet.com and other technology sites, and deploying and managing Windows Server systems. He has also spoken at many technology events, including Microsoft TechEd Southeast Asia. A contributing author for MCTS: Windows Server 2008 Application Platform Configuration Study Guide by Sybex, he is often sourced to act as a subject matter expert (SME) in Windows server and client technology. He can be reached at firstname.lastname@example.org
OpenDNS released statistics about which websites were commonly blocked — and which websites users were frequently given access to — in 2010. The report additionally details the companies online scammers targeted in 2010, as well as where the majority of phishing websites were hosted.
“Overall, 2010 was all about social, and this trend is reflected in the data we’re seeing at OpenDNS. Facebook is both one of the most blocked and the most allowed websites, reflecting the push/pull of allowing social sites in schools and the workplace,” said OpenDNS Founder and CEO David Ulevitch.
“This trend was also apparent in the phishing data we analyzed, where Facebook and other websites focusing on social gaming were frequently the targets of online scammers.”
Key statistics from 2010 include:
- Facebook is both the #1 most frequently blocked website, and the #2 most frequently whitelisted website. More than 14 percent of all users who block websites on their networks choose to block Facebook.
- The most frequently blocked categories of content were related to online pornography. The proxy/anonymizers category, which contains sites users will use to try and circumvent Web content filtering settings, was the next most popular category of content to block.
- The top three most commonly blocked websites for business users in specific are Facebook, MySpace and YouTube.
- PayPal continues to be the most frequent target of phishing websites; it was targeted nine times more frequently than the next most frequent target, Facebook; 45 percent of all phishing attempts made in 2010 were targeting PayPal.
- Five of the top ten most-phished brands — Facebook, World of Warcraft, Sulake Corporation (makers of Habbo), Steam and Tibia (online games) — are associated with online and social games.
A PDF of the report is available here. | <urn:uuid:9b97d292-3637-403f-8810-8e0707d2dabd> | CC-MAIN-2017-04 | https://www.helpnetsecurity.com/2011/01/25/paypal-most-phished-facebook-most-blocked/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00390-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.963961 | 379 | 2.609375 | 3 |
Exploring encryption: Know the basics of an important IT security standard
Encryption is one of the most powerful tools available to security professionals seeking to protect sensitive information from unauthorized disclosure. It is the driving force behind the security of networks, web applications, messaging, mobile devices and many other critical technologies. Successful IT professionals are familiar with the technology and ways to apply it appropriately to many enterprise security challenges.
How Does Encryption Work?
Encryption has many different applications, but they all boil down to one simple concept: applying mathematical algorithms to transform data. The purpose of this transformation is often to protect the confidentiality of information, hiding it from prying eyes. Encryption technology may also be used to prove an individual’s identity through the use of digital certificates, or demonstrate the authenticity of a document through digital signatures. Together, these different applications of encryption provide a critical suite of tools that anyone with an Internet connection uses almost every day.
As with any technology, encryption has a special language that includes some discipline-specific terms. Plaintext messages are, quite simply, normal data that has not yet been encrypted and can be read by anyone. A user seeking to protect the confidentiality of a plaintext message encrypts the message using an encryption algorithm. Encryption transforms the plaintext message into ciphertext. The ciphertext has no apparent meaning and resembles gibberish until the recipient properly decrypts it, restoring the original plaintext.
Encryption algorithms aren’t secret — they’re publicly available for download and use. The security of encryption comes from the use of encryption and decryption keys. When the sender uses an encryption algorithm to transform plaintext into ciphertext, he or she provides an encryption key. The recipient of the message must then supply the corresponding decryption key to retrieve the original plaintext.
There are two main categories of encryption algorithms: symmetric encryption and asymmetric encryption. The difference between them rests in the keys used for encryption and decryption of the message. In symmetric encryption, both the sender and recipient use the same key, known as the shared secret key.
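As a concrete illustration of the shared-key flow, the sketch below uses the Fernet construction from the third-party Python cryptography package (an assumed dependency); it is only meant to show that the same key both encrypts and decrypts, not to recommend a particular product or configuration.

```python
# Symmetric encryption sketch using the "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()       # sender and recipient must both hold this key
cipher = Fernet(shared_key)

ciphertext = cipher.encrypt(b"Quarterly results are attached.")
print(ciphertext)                        # unreadable gibberish without the key

plaintext = cipher.decrypt(ciphertext)   # anyone holding shared_key can recover the message
print(plaintext.decode())
```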
In asymmetric encryption, each user has a pair of keys: one public and the other private. Messages encrypted with one key from a pair may be decrypted with the corresponding key from that same pair. For example, if Alice would like to send an encrypted message to Bob, she encrypts the message using Bob’s public key. When Bob receives the message, he decrypts it using his own private key. Bob keeps his private key secret, so he is the only person who can decrypt a message that anyone else encrypts using his public key.
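The Alice-and-Bob exchange can be sketched with the same assumed cryptography package and its RSA primitives; the key size and padding below are illustrative choices, not a recommendation.

```python
# Asymmetric encryption sketch: Alice encrypts with Bob's public key,
# and only Bob's private key can decrypt the result.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

bob_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public_key = bob_private_key.public_key()    # Bob publishes this key freely

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

ciphertext = bob_public_key.encrypt(b"Meet at noon.", oaep)   # Alice's step
plaintext = bob_private_key.decrypt(ciphertext, oaep)         # Bob's step
print(plaintext.decode())
```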
Both symmetric and asymmetric cryptography may be used for protecting the confidentiality of sensitive information, but asymmetric encryption offers two additional benefits that are not possible with symmetric algorithms. First, asymmetric cryptography allows the use of digital certificates to verify the identity of an individual or server. These certificates, issued by trusted certificate authorities, are the equivalent of driver’s licenses for the Internet. They are the technology underlying the HTTPS protocol that allows secure web communication.
The second encryption technology available through asymmetric cryptography is digital signatures. Just like physical signatures in the real world, digital signatures are used to prove the authenticity of information, such as a document. Let’s say that Alice would like to digitally sign a document that she is sending to Bob. This would allow Bob to be certain that Alice actually signed the document and also allow him to prove that to an interested third party. Alice creates the digital signature by encrypting a summary of the message with her own private key. Bob can then verify the signature by decrypting it with Alice’s public key. If it decrypts successfully, Bob can be confident that Alice created the signature because Alice is the only person with access to Alice’s private key.
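A corresponding signature sketch, again using the assumed cryptography package: Alice signs with her private key, and anyone holding her public key can verify the signature. Padding and hash choices are illustrative.

```python
# Digital signature sketch: sign with the private key, verify with the public key.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

alice_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_public_key = alice_private_key.public_key()

message = b"I, Alice, approve this contract."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

signature = alice_private_key.sign(message, pss, hashes.SHA256())

try:
    # Raises InvalidSignature if the message or the signature was altered.
    alice_public_key.verify(signature, message, pss, hashes.SHA256())
    print("Signature is valid - the message really came from Alice.")
except InvalidSignature:
    print("Signature check failed.")
```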
Choosing Secure Encryption Technologies
The most important rule when it comes to encryption technologies is don’t try to build it yourself! Encryption is a complex mathematical science and developing encryption algorithms is best left to the experts. Along those lines, if a vendor selling a product refuses to reveal the details of the encryption used in the product and claims that it is proprietary, that’s a huge signal that you should run away quickly! The security of an encryption algorithm should depend upon the secrecy of the keys used with it, not the secrecy of the algorithm itself. It’s far too easy to make mistakes when building a complex encryption algorithm.
The answer to this dilemma is using encryption algorithms that the cryptography community generally accepts as secure. For example, the Advanced Encryption Standard (AES) is a well-known and widely used symmetric encryption algorithm and the Rivest-Shamir-Adleman (RSA) algorithm is a strong asymmetric approach. Perform some research on whatever algorithm you choose and ensure that it was widely vetted and has no known vulnerabilities. You’ll want to avoid, as an example, the Data Encryption Standard (DES) algorithm that was once widely used but is now considered too weak to provide effective security.
Once you choose an algorithm, your next step is to choose an encryption key. Some algorithms allow you to select a key length appropriate for your purpose. The key is similar to a password: the longer it is, the harder it will be for an attacker to guess it successfully, and the more secure the encryption. The tradeoff is that encryption with longer keys is slower because it requires more computing power to process.
Building a Career in Encryption
Every IT professional touches encryption in one form or another and should have a basic familiarity with the concepts discussed in this article. If you are truly intrigued by encryption technology, it’s possible to build an entire career as a cryptography specialist. Professionals in this field build and monitor encryption implementations for governments and businesses around the world. If you’re mathematically inclined, you may choose to go a step further and work for a firm that designs the inner workings of encryption algorithms.
The starting point for these opportunities is a broad background in information security. Remember, cryptography is used as a security building block for a wide variety of technologies, including web applications, networks, e-mail and digital certificates. Aspiring cryptographers may wish to first pursue the information security profession’s generalist certifications, such as the Security+ and Certified Information Systems Security Professional (CISSP), before moving on to specialized cryptography education programs.
The EC-Council, a certification body most well known for the Certified Ethical Hacker credential, offers the industry’s only encryption-specific certification. Their EC-Council Certified Encryption Specialist (ECES) program requires that candidates demonstrate a deep understanding of encryption technology from both theoretical and practical perspectives. The ECES exam is a 50-question multiple choice exam administered over two hours. Candidates must answer 70 percent of the questions correctly to earn the ECES designation.
As the technology community reacts to the large number of recent high-profile security incidents, employers will continue to seek out qualified encryption specialists to help secure sensitive information. IT professionals seeking to build a career in information security should have a solid understanding of encryption to build their resumes and position themselves well for future growth opportunities. | <urn:uuid:590b8d55-3226-4023-9458-dfb00d76b8f6> | CC-MAIN-2017-04 | http://certmag.com/exploring-encryption-know-basics-important-security-standard/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00235-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.912596 | 1,486 | 3.9375 | 4 |
Single mode cable is a single strand (most applications use two fibers) of glass fiber with a core diameter of 8.3 to 10 microns that supports one mode of transmission. Single mode fiber has a relatively narrow core through which only one mode will propagate, typically at 1310 or 1550 nm. It carries higher bandwidth than multimode fiber, but requires a light source with a narrow spectral width. Single mode fiber is used in many applications where data is sent at multiple frequencies (WDM, Wavelength-Division Multiplexing), so only one fiber is needed (single mode on one single fiber).
Single-mode fiber gives us a higher transmission rate and up to 50 times more distance than multimode, but it also costs more. Single-mode fiber has a much smaller core than multimode. The small core and single light-wave virtually eliminate any distortion that could result from overlapping light pulses, providing the least signal attenuation and the highest transmission speeds of any fiber cable type.
Single-mode optical fiber is an optical fiber in which only the lowest order bound mode can propagate at the wavelength of interest, typically 1300 to 1320 nm.
Multimode fiber optic cable is another commonly used cable type. Multimode cable has a somewhat larger core, with common diameters in the 50-to-100 micron range for the light-carrying component (in the United States, the most common size is 62.5 µm). Most applications that use multimode optical fiber use two fibers (WDM is not usually used on multimode fiber). POF (plastic optical fiber) is a relatively new plastic-based cable that promises performance similar to glass cable over very short runs, but at a lower cost.
Multimode fiber gives us high bandwidth at high speeds (10 to 100 Mbps, and Gigabit over 275 m to 2 km) over medium distances. Light waves are dispersed into numerous paths, or modes, as they travel through the cable's core, typically at 850 or 1300 nm. Typical multimode fiber core diameters are 50, 62.5, and 100 microns. However, in long cable runs (greater than 3,000 feet, or 914.4 m), the multiple paths of light may cause signal distortion at the receiving end, resulting in unclear, incomplete data transmission, so designers now call for new applications at Gigabit and beyond to use single mode fiber.
Singlemode fiber has a lower power loss characteristic than multimode fiber, which means light can travel longer distances through it than it can through multimode fiber. Not surprisingly, the optics required to drive singlemode fiber are more expensive. Both singlemode and modern multimode fiber can handle 10G speeds. The most important thing to consider is the distance requirement. Within a data center, it’s typical to use multimode, which can get you 300-400 meters. If you have very long runs or are connecting over longer distances, single mode can get you 10km, 40km, 80km, and even farther – you just need to use the appropriate optic for the distance required, and again, the prices go up accordingly. Moreover, the two are not compatible, so you can’t mix multimode and singlemode fiber between two endpoints.
If there’s one protocol that networkers are saturated with on a daily, if not minute-by-minute basis, it’s TCP/IP.
Well guess what? The now-aging TCP/IP might not be around for much longer.
Researchers at Aalborg University in Denmark, in association with MIT and Caltech, reckon that the Internet can be made faster, and more secure, by abandoning the whole concept of packets and error correction. Error correction slows down traffic because the chunks of data, in many cases, have to be sent more than once.
The researchers are using a mathematical equation instead. The formula figures out which parts of the data didn't make the hop. They say it works in lieu of the packet-resend.
Janus Heide, CEO of Steinwurf, a commercial enterprise set up to promote network coding, of which this is a part, and Frank Fitzek, a Professor at Aalborg University’s Department of Electronic Systems, draw an analogy with cars on a road - their technique is like pumping cars into an intersection from all directions, without the cars having to stop for each other. Traffic flows faster because there are no red lights.
The key to the process, they say, is a network coding and decoding element called RLNC, or Random Linear Network Coding. RLNC is the technology that they've patented and wrapped into C++ software at Steinwurf called Kodo. Steinwurf plans to sell its technology to hardware makers.
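Kodo itself is patented, commercial C++ software, but the underlying idea of random linear network coding can be illustrated with a toy sketch. In the hedged Python example below, coded packets are random XOR combinations of the source packets over GF(2), and the receiver recovers the originals by Gaussian elimination once it has enough independent combinations; real RLNC implementations typically work over larger fields such as GF(2^8), so this is only a conceptual illustration, not Kodo's actual design.

```python
# Toy random linear network coding over GF(2): any set of linearly independent
# coded packets equal in number to the sources is enough to decode.
import numpy as np

def encode(packets, num_coded, rng):
    k = packets.shape[0]
    coeffs = rng.integers(0, 2, size=(num_coded, k), dtype=np.uint8)
    coded = (coeffs @ packets) % 2              # XOR-combine the source packets
    return coeffs, coded

def decode(coeffs, coded):
    k = coeffs.shape[1]
    aug = np.concatenate([coeffs, coded], axis=1).astype(np.uint8)
    row = 0
    for col in range(k):                        # Gaussian elimination over GF(2)
        pivots = np.nonzero(aug[row:, col])[0]
        if pivots.size == 0:
            return None                         # not enough independent packets yet
        pivot = row + pivots[0]
        aug[[row, pivot]] = aug[[pivot, row]]   # swap the pivot row into place
        for r in range(aug.shape[0]):
            if r != row and aug[r, col]:
                aug[r] ^= aug[row]
        row += 1
    return aug[:k, k:]                          # the recovered source packets

rng = np.random.default_rng(seed=1)
source = rng.integers(0, 2, size=(4, 16), dtype=np.uint8)   # four source "packets"
coeffs, coded = encode(source, num_coded=8, rng=rng)         # send a few extra combinations
recovered = decode(coeffs, coded)
print(recovered is not None and np.array_equal(recovered, source))
```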
Results have been outstanding, according to the group. They used a four-minute video and reckon that it was downloaded five times faster than it would have been with other technology. The video also didn't stutter, unlike the original clip, which was interrupted 13 times. Buffering, as any Netflix-adopting cord-cutter knows, is the scourge of the Internet user today.
The tests show speeds five-to-ten times faster than usual. Fitzek says the technology can be used in satellite, mobile, and normal Internet communications.
All sounds good, right? Faster Internet, no more TCP/IP. Well, what’s a bit complicated is following along with the mathematics and coding. As you might imagine, it’s a bit more complicated than the common-or-garden TCP/IP stack in your average Windows Networking dialog box.
Conveniently for us, Fitzek has posted a YouTube video of a lecture from the 2014 Johannesberg Summit — that’s a future-of-wireless shindig — in which he attempts to explain it all.
There are some clever snippets related to the history of packet and networks, in relation to point-to-point telegraph, and mesh telephone systems. Plus, there’s a lively nugget on how he believes communications is really storage, because a buffered packet is, in fact, stored.
It’s all good stuff, and worth thirty minutes of your time, although my eyes glazed over at the encoding/decoding diagrams. I don’t have any background in it, but if you are versed, I’m sure you’ll get it.
Overall, this is a fascinating technology. It does actually make sense that the repeated re-attempts to send packets down the line would slow down the Internet traffic, and obviously, any solution to speed that up is good.
Added pluses to the technology include the fact that network coding enables a smarter node.
This is because, instead of data always traveling the same path, as is the case with TCP/IP, with network coding it can be routed over multiple, always-differing paths, or multipath, and thus is more secure. Dario Borghino mentioned this in an in-depth Gizmag article on the subject.
What is a little difficult to grasp is what patented, arithmetical equations are being used and why they work. However, Fitzek does promise to provide training courses in the technologies, and Steinwurf does have documentation up at its site and GitHub now, if you want to look further.
Naysayers are obviously not lacking, and Borghino’s article includes a fair amount of negative commenters. They include Christopher, who says that Internet speed is restricted by switch power, not packet paths; and Ivan4, who says it’s all down to the quality of the connection anyway.
And, if you’ll excuse me, my Netflix movie has been buffering, and is now ready for me to watch.
This article is published as part of the IDG Contributor Network. Want to Join? | <urn:uuid:cd615ea3-2046-4694-a3e4-9069c703961a> | CC-MAIN-2017-04 | http://www.networkworld.com/article/2459286/why-tcp/ip-is-on-the-way-out/why-tcp/why-tcp/ip-is-on-the-way-out.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00529-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.944168 | 974 | 2.84375 | 3 |
Definition: A binary tree with all leaf nodes at the same depth. All internal nodes have degree 2.
Generalization (I am a kind of ...)
full binary tree, complete binary tree.
See also perfect k-ary tree.
Note: A perfect binary tree has 2^(n+1) - 1 nodes, where n is the height. It can be efficiently implemented as an array, where a node at index i has children at indexes 2i and 2i+1 and a parent at index i/2. After LK.
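A small Python sketch of that array layout (1-based indexing, with index 0 left unused):

```python
# Array-backed perfect binary tree: node i has children at 2i and 2i+1
# and a parent at i // 2 (integer division).
def left(i):   return 2 * i
def right(i):  return 2 * i + 1
def parent(i): return i // 2

height = 3
tree = [None] + list(range(1, 2 ** (height + 1)))   # 2^(n+1) - 1 = 15 nodes

root = 1
print(tree[left(root)], tree[right(root)])   # the root's two children
print(parent(left(root)) == root)            # True
```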
A complete binary tree may be seen as a perfect binary tree with some extra leaf nodes at depth n+1, all toward the left. (After [CLR90, page 140]).
This kind of tree is called "complete" by some authors ([CLR90, page 95], Leighton) and "full" by others (Budd page 331, Carrano & Prichard page 429, Ege, [HS83, page 225], and Sahni page 461).
example and formal definition.
Entry modified 22 January 2008.
Cite this as:
Yuming Zou and Paul E. Black, "perfect binary tree", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 22 January 2008. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/perfectBinaryTree.html | <urn:uuid:970b8863-9b05-4e31-bdad-49754c2a19ee> | CC-MAIN-2017-04 | http://www.darkridge.com/~jpr5/mirror/dads/HTML/perfectBinaryTree.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00521-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.884049 | 340 | 3.28125 | 3 |
Cyber attacks have the potential to disrupt governments, destroy businesses, and put our personal safety in jeopardy. Yet this critical sector of the technology world suffers from a severe deficit of skilled workers to fill its jobs.
There are over 210,000 unfilled cyber security jobs in the USA alone, and over 1,000,000 globally, while battles are being fought between countries, cars are being hacked, and consumers are welcoming insecure IoT devices into their homes. Yet the people who are building these products, fighting the world’s battles and mitigating cyber risks are often under-trained and sitting next to empty desks.
How did we end up here? Cybersecurity training is the problem, or rather the lack of access to it. Even if a job seeker does pay $5,000 for cybersecurity training, there’s no guarantee that the training they receive will still be relevant by the time they’re working in the field. Cybersecurity moves so quickly that more than 50% of a class’ content may change from year to year.
An advanced penetration testing class that’s top-notch now will be nearly useless in a year, because the security vulnerabilities, network technologies and software tools will change drastically within that time.
As a result of these training challenges, there are far more open cybersecurity jobs than there are qualified people to fill them. If we don’t correct the cybersecurity skills gap, the problem has the potential to get much worse.
Cyber-attacks on our power grid, nuclear power plants and other critical facilities may sound far-fetched, but the people who manage these systems are trained to manage the systems, not to protect us from cyber-threats. In most cases, security is an afterthought in the software products that control critical functions in our society, and in many cases, the companies that are forward enough to implement some level of security are only doing so on the surface.
The solution is to create a booming market for cybersecurity talent, where employers can take their pick of qualified professionals. This way, jobs can be competitive. Instead of overpaying for employees with outdated skill sets, organizations will have the option to retain them by training them in new cybersecurity capabilities, or replace them with more qualified talent.
More people will have jobs, jobs will be more competitive, pay will be more efficient, and existing professionals will have more opportunities to advance. Even though the attackers will always have a first-mover advantage, the good guys defending networks will have a much stronger opportunity to counter emerging cyber-threats.
To build that booming market for talent, cybersecurity training providers need to find new ways to offer low-cost, high quality hacking, forensics and cybersecurity training classes. Since price is the largest factor preventing new talent from entering the market, reducing or eliminating this barrier to entry will result in more qualified cybersecurity professionals and a more competitive marketplace. Better training will also open the lucrative cybersecurity market to job seekers without prior technical experience, as well as those in developing countries. It will also allow current cybersecurity professionals to advance in their careers by providing them more opportunities to learn challenging niche skills.
There are clear reasons behind the cybersecurity talent shortage, and cybersecurity training providers are already beginning to address them with new training solutions. As cybersecurity training improves, we can eliminate the talent gap, address cyber-threats, and ensure that we all live in a safer and more secure world. | <urn:uuid:17b4b2c9-f566-4d0a-84a4-05cd3738141e> | CC-MAIN-2017-04 | https://www.infosecurity-magazine.com/opinions/cyber-security-skills-gap-in/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280308.24/warc/CC-MAIN-20170116095120-00429-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.954753 | 695 | 2.625 | 3 |
When purchasing a fiber optic cable, it is important to understand the different core characteristics available within the cable itself. Each of these characteristics affects your ability to transmit information reliably, and they also affect the cabling project and its cost, so research fiber optic cable pricing before you buy. Now, let’s take a look at the most common fiber optic cables.
A simplex fiber cable consists of a single strand of glass or plastic fiber, and is used for applications that only require one-way data transfer. Simplex fiber is most often used where only a single transmit and receive line is required between devices, or when a multiplex data signal is used (bi-directional communication over a single fiber). Simplex fiber is available in singlemode and multimode. For example, an interstate trucking scale that sends the weight of the truck to a monitoring station, or an oil line monitor that sends data about oil flow to a central location.
A duplex fiber cable consists of two strand fibers of glass or plastic. Typically found in a “zipcord”(side-by-side) construction format, this cable is most often used for duplex communication between devices where a separate transmit and receive are required. Duplex fiber is available in singlemode and multimode. Use multimode duplex fiber optic cable or single mode duplex fiber for applications that require simultaneous, bi-directional data transfer. Workstations, fiber switches and servers, fiber modems, and similar hardware require duplex fiber cable.
Pulling Strength: Some cable is simply laid into cable trays or ditches, so pull strength is not too important. But other cable may be pulled through 2 km or more of conduit. Even with lots of cable lubricant, pulling tension can be high. Most cables get their strength from an aramid fiber, a unique polymer fiber that is very strong but does not stretch – so pulling on it will not stress the other components in the cable. The simplest simplex cable has a pull strength of 100-200 pounds, while outside plant cable may have a specification of over 800 pounds.
Water Protection: Outdoors, every cable must be protected from water or moisture. It starts with a moisture resistant jacket, usually PE (polyethylene), and a filling of water-blocking material. The usual way is to flood the cable with a water-blocking gel. It’s effective but messy – requiring a gel remover. A newer alternative is dry water blocking using a miracle powder – the stuff developed to absorb moisture in disposable diapers. Check with your cable supplier to see if they offer it.
Fire Code Ratings: Every cable installed indoors must meet fire codes. That means the jacket must be rated for fire resistance, with ratings for general use, riser (a vertical cable feeds flames more than horizontal) and plenum (for installation in air-handling areas. Most indoor cables use PVC (polyvinyl chloride) jacketing for fire retardance. In the United States, all premises cables must carry identification and flammability ratings per the NEC (National Electrical Code) paragraph 770.
Fiberstore is one of the industry’s fastest growing fiber optic cable manufacturers, specializing in providing quality, cost-effective retailing, wholesale and OEM fiber optic products. For more information on Simplex Fiber Cable or Duplex Fiber Cable and customization service, please email to firstname.lastname@example.org or visit FS.COM.
The Obama administration has revealed plans for an ambitious decade-long brain mapping project, similar in scope to the Human Genome Project.
Remarks made by the President in his 2013 State of the Union speech were soon confirmed by this Tweet from National Institutes of Health Director Francis S. Collins: “Obama mentions the #NIH Brain Activity Map in #SOTU.”
Source: Human Connectome Project
A more formal acknowledgement came from National Institute of Neurological Disorders and Stroke Director Story C. Landis. Cited in the New York Times article that broke the story, Landis also connected Obama’s statements to the Brain Activity Map (BAM) project.
The genesis for the project can be traced back to a scientific article published in last June’s Neuron.
“The function of neural circuits is an emergent property that arises from the coordinated activity of large numbers of neurons,” writes the six-author team. “To capture this, we propose launching a large-scale, international public effort, the Brain Activity Map Project, aimed at reconstructing the full record of neural activity across complete neural circuits. This technological challenge could prove to be an invaluable step toward understanding fundamental and pathological brain processes.”
The journal article outlines several ways the mapping could be approached and points to potential treatments for schizophrenia and autism.
Parties involved in the project’s planning estimate it will cost at least $300 million a year, or $3 billion over the 10-year span. By comparison, the Human Genome Project totaled $3.8 billion. That initiative, which sought the complete mapping of the human genome, finished ahead of schedule in April 2003, and according to a federal impact study showed a return of $800 billion by 2010.
A lot is being made of the similarity of these two projects and the potential for big science spending to invigorate the economy.
“Every dollar we invested to map the human genome returned $140 to our economy – every dollar,” Obama said. “Today our scientists are mapping the human brain to unlock the answers to Alzheimer’s. They’re developing drugs to regenerate damaged organs, devising new materials to make batteries 10 times more powerful. Now is not the time to gut these job-creating investments in science and innovation. Now is the time to reach a level of research and development not seen since the space race. We need to make those investments.”
But how alike are these two projects really? The scientific consensus is that mapping and understanding the brain is a far more complex endeavor than a full accounting of human DNA.
Dr. Ralph J. Greenspan, one of the authors of the Neuron paper, highlighted the distinction:
“It’s different in that the nature of the question is a much more intricate question. It was very easy to define what the genome project’s goal was. In this case, we have a more difficult and fascinating question of what are brainwide activity patterns and ultimately how do they make things happen?”
According to NYT reporting, BAM is a joint project of the National Institutes of Health, the Defense Advanced Research Projects Agency and the National Science Foundation and will be organized by the Office of Science and Technology Policy. The Howard Hughes Medical Institute in Chevy Chase, Md., and the Allen Institute for Brain Science in Seattle were listed as private partners.
The US brain mapping project comes on the heels of the Swiss brain modeling project, unveiled last month. The European Commission just awarded half a billion euros to the Human Brain Project – an extension of Henry Markram’s Blue Brain project that aims “to simulate a complete human brain in a supercomputer.”
The BAM project, on the other hand, is working to create a functional map of the active human brain. The six contributors to the Neuron article write that “understanding how the brain works is arguably one of the greatest scientific challenges of our time.” Despite the inevitable difficulties, it looks like the research community is eager to unlock the mysteries of this frontier.
The journal article concludes with this call to action:
To succeed, the BAM Project needs two critical components: strong leadership from funding agencies and scientific administrators, and the recruitment of a large coalition of interdisciplinary scientists. We believe that neuroscience is ready for a large-scale functional mapping of the entire brain circuitry, and that such mapping will directly address the emergent level of function, shining much-needed light into the “impenetrable jungles” of the brain.
Further details are expected when Obama unveils his budget next month. | <urn:uuid:8712e804-32bc-4910-ba30-d131fe8ba7f9> | CC-MAIN-2017-04 | https://www.hpcwire.com/2013/02/26/bam_obama_backs_brain_mapping_project/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281421.33/warc/CC-MAIN-20170116095121-00575-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.918159 | 969 | 2.53125 | 3 |
SSL is an acronym that commonly refers to the two cryptographic Internet protocols—Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL). The purpose of SSL is to provide secure communications over a computer network, and SSL-encrypted data now accounts for about one-third of all Internet traffic.
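For illustration, the short Python sketch below (standard library only) opens a TLS connection and prints the negotiated protocol version and cipher suite; the hostname is a placeholder.

```python
# Inspect the TLS (SSL) session negotiated with a web server.
import socket
import ssl

hostname = "example.org"                      # placeholder host
context = ssl.create_default_context()        # verifies the server certificate by default

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("Protocol:", tls.version())     # e.g. TLSv1.2 or TLSv1.3
        print("Cipher suite:", tls.cipher())  # (name, protocol, secret bits)
        print("Peer subject:", tls.getpeercert().get("subject"))
```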
Unfortunately, many traditional network security products aren’t designed to inspect SSL traffic. As a result, attackers have leveraged SSL encryption to sneak past security controls. A10 helps organizations eliminate this potential blind spot in their defenses by providing SSL Insight, an essential feature of the A10 application delivery controller (ADC) product line.
To learn more, visit A10’s SSL Decryption, Encryption and Inspection with SSL Insight. | <urn:uuid:064afca1-2966-4f3e-ae63-cec614ddd124> | CC-MAIN-2017-04 | https://www.a10networks.com/resources/glossary/ssl | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00483-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.921082 | 155 | 2.765625 | 3 |
Last week, California State Senator Alex Padilla introduced legislation to create a statewide earthquake early warning system. At a press conference in Los Angeles, Padilla and scientists from Caltech, UC Berkeley, and U.S. Geological Survey discussed the need for such a warning system based upon a recently-published study “concluding for the first time that a statewide California earthquake involving both the Los Angeles and San Francisco metropolitan areas may be possible.”
Padilla said, “California is going to have an earthquake early warning system, the question is whether we have one before or after the next big quake.”
According to a press release following the event, the system would build upon the California Integrated Seismic Network. Seismologists envision a system that would monitor sensors throughout the state. The system would detect the strength and the progression of earthquakes, alerting the public with up to 60 seconds advanced warning before the ground begins shaking.
The initial cost estimate for the system is $80 million. Padilla said that with the magnitude 6.7 Northridge Earthquake claiming 60 lives and causing at least $13 billion in damage, the system is an intelligent investment.
The Uniform California Earthquake Rupture Forecast released in 2008 predicted a 99.7% likelihood of a magnitude 6.7 earthquake in California in the next 30 years and a 94% chance of a magnitude 7.0.
From our perspective, it’s exciting to see the technology progressing to a point where pre-earthquake warnings could become a reality. We do hope the system will take advantage of emerging technologies and integration standards, and not be developed as a “one off” solution. While the price tag is significant, the potential lifesaving impact could be substantial. | <urn:uuid:fd747e01-3414-4938-8ccd-5a0e552cd335> | CC-MAIN-2017-04 | http://www.govtech.com/em/emergency-blogs/alerts/California-State-Senator-Introduces-020713.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280410.21/warc/CC-MAIN-20170116095120-00118-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.925865 | 358 | 3.15625 | 3 |
CAPTCHAs, those squiggly and frustrating puzzles that many Web sites require users to solve before registering or leaving comments, are designed to block automated activity and deter spammers. But for some Russian-language forums that cater to spammers and other miscreants, CAPTCHAs may also be part of a vetting process designed to frustrate foreigners and outsiders.
“Verified,” one of the longest-running Russian-language forums dedicated to Internet scammers of all stripes, uses various methods to check that users aren’t just casual lurkers or law enforcement. It recently began using CAPTCHAs that quiz users about random bits of Russian culture when they register or log in.
Consider this CAPTCHA, from Verified: “Введите пропущенное число ‘… мгнoвeний вeсны.'” That translates to, “Enter the missing number ‘__ moments of spring.'”
But it may not be so simple to decipher “мгнoвeний вeсны,” the “moments of spring” bit. One use of cultural CAPTCHA is to frustrate non-native speakers who are trying to browse forums using tools like Google translate. For example, Google translates мгнoвeний вeсны to the transliteration “mgnoveny vesny.” The answer to this CAPTCHA is “17,” as in Seventeen Moments of Spring, a 1973 Russian television mini-series that was enormously popular during the Soviet Union era, but which is probably unknown to most Westerners.
Although these cultural CAPTCHAs may not stop those determined to break them, they are an interesting approach to blocking unwanted users. Most CAPTCHA systems can be trivially broken because they merely require users to repeat numbers and letters. Some CAPTCHAs ask the visitor to solve math or logic puzzles, but these questions can be answered by anyone with a grade school grasp of math.
Spammers tend to rely on commercial, human-powered CAPTCHA solving services, which automate the solving of CAPTCHAs with the help of low-paid workers in China, India and Eastern Europe who earn pennies per hour deciphering the puzzles. CAPTCHAs that bombard workers at these automated facilities with a range of cultural questions might frustrate these low-paid workers, but the challenges likely would be more frustrating (not to mention alienating and offensive) to legitimate users who are unfamiliar with the targeted culture.
In many ways, cultural CAPTCHAs seem to be uniquely suited for small, homogeneous and restricted online communities. I would not be surprised to see their use, variety and complexity increase throughout the criminal underground, which is constantly trying to combat the leakage of forum data that results when authorized members have their passwords lost or stolen. | <urn:uuid:4223dcc2-cc8a-40b0-8ce6-23c563396a63> | CC-MAIN-2017-04 | https://krebsonsecurity.com/2011/09/cultural-captchas/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280791.35/warc/CC-MAIN-20170116095120-00026-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.931059 | 647 | 2.578125 | 3 |
A Linux distro to every purpose under heaven
This feature first appeared in the Summer 2015 issue of Certification Magazine.
Linux distributions, or “distros” as they’re popularly known, are unique to the Linux environment. You don’t hear of Windows or Mac distributions. What is it about Linux that makes distributions necessary? Once you know what Linux is, then you realize why distributions are an essential aspect of Linux.
What is a Linux distribution?
Linux is open source software; it isn’t developed by a single entity, unlike Windows (which is developed by Microsoft and is a complete operating system). What we commonly call “Linux” is a kernel, or the core, of a Linux operating system — it is not an operating system in itself. If you want your device to run on Linux, you’ll need to install a Linux distribution, which is essentially a Linux operating system.
A Linux distro consists of the kernel, terminal interface and terminal commands, graphical programs and system management services. While it’s possible to get the source code of the kernel and the other programs and compile the software to build an operating system, it requires expertise and time. That’s why most users prefer installing a distro.
Installing one of the many Linux distributions available not only saves time — to say nothing of quite a lot of hard work — but it also makes subsequent software and security updates (as well as new software installations) quick and easy, because these are available in packages.
What does it take to create a new distribution?
Though most Linux users prefer to install the distro that suits them best, a few opt to compile their distributions from scratch, or modify an existing distro.
If you want to build from source, then you need some experience with Linux, including command-line skills, an existing Linux system on which to work, and the Linux kernel source code, as well as the source codes of any system utilities — including at least one graphical desktop — and packages you want to include.
For users keen on building their own distros, the Linux from Scratch (LFS) project is a good place to start.
Why are there so many Linux distributions?
The list of existing Linux distros is long, running to 600 and more, making newcomers to Linux wonder why there are so many. Some experienced users reckon that the sheer number of distros tends to confuse beginners — particularly those accustomed to working on Windows — and may deter many from using Linux, thereby limiting its reach.
That there are so many Linux distros isn’t surprising considering users have many different needs. Since Linux is open source, distributions have been created to cater to a wide range of possible uses. There’s no one-size-fits-all Linux distribution; different distros suit different purposes. Some are suitable for desktops, some for servers, and others for multimedia devices.
Users have the freedom to choose a distro based on what one intends using it for, as well as one’s level of expertise. Those who prefer a simple desktop operating system might like Ubuntu or Mint. Those who work in a server environment would likely prefer a more stable distribution such as Debian or Red Hat Enterprise Linux (RHEL).
Linux offers users the choice of either installing a distribution that most closely fits their needs, building a new one one from scratch, or taking an existing distribution and tweaking it until it is fit for their purposes. No wonder, then, that there are so many distributions.
Though there are hundreds of existing Linux distros, only a few are well known. Users tend to look for distributions that are easy to install, are cost-effective, are backed by easily accessible commercial support and are stable. Advanced users usually prefer distributions that they can configure and control.
Ubuntu — This Debian-based distro is one of the most popular Linux distributions for desktop as well as server environments. For newcomers, as well as experienced users, Ubuntu is the distro of choice for a number of reasons that include an easy installation process, top-notch commercial support and six-month releases. Every 2 years, Ubuntu releases its Long Term Support (LTS) version, the server edition of which is supported for 5 years.
OpenSUSE — This may not be the most user-friendly desktop distro for newcomers because it isn’t as simple to install as Ubuntu. Also, its system-management tool, YAST (Yet Another Setup Tool), which is designed to handle everything from system configuration to administration to software installation, can appear quite complicated to inexperienced users. Skilled users, though, like YAST because it gives them a high degree of control. OpenSUSE issues releases every eight months. OpenSUSE is perhaps most popular as an enterprise desktop distribution.
Linux Mint — Based on Ubuntu, Linux Mint is one of the most widely-used desktop distributions today, partly because it’s easy to install and use and, unlike Ubuntu’s Unity, Mint’s default desktop is traditional and more accessible to relatively unskilled users. Users can choose between MATE and Cinnamon, Linux Mint’s two GNOME-based interfaces, of which Cinnamon is more compact and easier to navigate. It also has an efficient Software Manager and preloaded audio and video packages.
Linux Mint is suitable for desktop users looking for a neat out-of-the-box distribution.
Fedora — Fedora was built as an alternative to Ubuntu, but it isn’t as user-friendly and tends to be more suitable for advanced users. Its GNOME 3 desktop is not for the unskilled user — a certain level of skill and familiarity with Linux is required to find one’s way around the interface. Fedora has other drawbacks as well: It isn’t easy to install, and it lacks multimedia software and a serviceable application manager. With its server-based features, Fedora suits skilled users working in an enterprise environment.
Mageia — Mageia is derived from Mandriva Linux, which was designed to be user-friendly. A community-developed fork of Mandriva Linux, Mageia is backed by solid support from Mageia.org, the community organization. Mageia releases updates approximately every 9 months, which are supported for 18 months.
Mageia comes with both KDE and GNOME desktops, as well as plenty of software. It’s also easy to install and offers advanced users more options during the installation process. Experienced users like the Mageia Control Centre, which enables them to control most aspects of the operating system. This is a functional and well-supported Linux distribution that is suitable for both routine as well as advanced computing.
Debian — Debian, which is compiled out of only open-source software, has been around for more than 20 years and is still the distribution of choice for those looking for a stable server operating system. It is a flexible, well-tested system that can be configured for different environments. Because it’s exhaustively tested, it is a very secure distribution.
It doesn’t come with proprietary software, but packaged software for Debian is available from a host of software vendors. Unlike other distributions, Debian doesn’t come with formal commercial support. That isn’t a drawback, however, because community support is available from Debian consultants the world over, whom users can connect with through the Consultants page.
Gentoo — Gentoo is for skilled users who’re willing to spend hours or even days on the installation alone. It’s not a ready-to-use system, but one that requires the user to participate in the building process, enabling him to configure it according to his needs. The user needs to do most of the compiling from source, so one needs to either know Linux, or be willing to spend much time learning on the job. Gentoo might suit users looking for a powerful system that they can control.
Unlike other distributions, Gentoo doesn’t issue releases cyclically, but delivers upgrades and packages through system updates. All users need to do is install system updates in order to ensure that their system has all the latest packages and software upgrades.
Arch Linux — Like Gentoo, Arch Linux is another Linux distro that requires the user to configure the system according to what he wants to do with it. Unlike newer, more feature-rich distros, Arch Linux is a minimalist and flexible operating system that doesn’t take up a lot of space. Users are given the freedom to tweak the system as suits them and install software of their choice. Instead of installing the distro via a graphical interface, users are provided with configuration files that they can edit to configure a suitable server operating system.
Arch Linux rolls out upgrades and new software with its system updates, so updating one’s system regularly ensures that all Arch Linux system files and applications are current. Arch Linux and Gentoo are both intended for skilled users, or for those who are willing to learn Linux. Arch Linux is more popular, however, because it ships with binaries, thereby saving users the time needed to compile software from source. Arch Linux is much quicker to install than Gentoo.
Red Hat Enterprise Linux — Red Hat Enterprise Linux (RHEL) is a proprietary distribution that’s widely used in the enterprise sector as a server and workstation operating system. Enterprise users prefer RHEL, which is based on Fedora, because Red Hat’s excellent long-term commercial support makes it a more stable system. RHEL is a commercial, trademarked product and can’t be freely redistributed, but CentOS is its free open-source version. RHEL suits skilled enterprise users looking for a secure, stable and compact system that they can control.
Slackware Linux — First released in 1993, Slackware is one of the oldest existing Linux distributions. It’s still widely used by advanced users as a server operating system. Like Arch Linux, Slackware puts the user in charge, letting him install system files and packages of his choice. Configuration is mostly command-based, so the user needs to be fairly well-versed with Linux.
Your imagination is the limit
For users willing to engage their senses at a higher level than the “click to install” paradigm familiar to most, Linux offers a robust (and inexpensive) alternative to more traditional operating systems. Linux is matchless for the choice and flexibility it offers. The higher one’s level of expertise, the more freedom one has to configure a perfectly tailored, optimal operating system. | <urn:uuid:071e6bdc-59be-430e-ba99-2a9de4617b37> | CC-MAIN-2017-04 | http://certmag.com/linux-distro-every-purpose-heaven/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281426.63/warc/CC-MAIN-20170116095121-00420-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.936001 | 2,184 | 3.40625 | 3 |
Program Helps Underrepresented Candidates Build IT Careers
When an unfortunate accident left Jeannine Lilly, a former Latin and social studies teacher, paralyzed and disabled, she wasn’t sure what to do next. Fortunately, the CompTIA Educational Foundation’s Creating Futures program recognized Lilly’s potential and signed her up for a free training program through which she earned her CompTIA Network+ and Security+ certifications. Now Lilly is a volunteer who teaches computer courses to individuals with disabilities.
“Jeannine’s case is especially interesting because she has used her teaching skills and credentials to teach [people with disabilities],” said John Venator, president and CEO of the CompTIA Educational Foundation. “She is now sharing what she learned with other disabled people so they can also change their lives and get self-supporting careers.”
According to CompTIA, the purpose of the Creating Futures program is to “provide free career opportunities to populations historically under-represented in the IT industry — including United States veterans, individuals with disabilities, minorities, women, at-risk youth and dislocated workers.”
The program directors consult with employers to determine hiring needs and then tailor the training to in-demand IT skills. The program also makes use of the latest technology — such as ZoomText, which magnifies and displays high-definition text on the computer screen for the visually impaired — to help them achieve their goals.
“[The program] helps enable people to change their lives for the better, and we’re using technology tools to help them,” Venator said.
The program offers training in a variety of certification courses, as well as non-certification courses, such as Microsoft Office 2007: Beginning Excel and Project Management for the IT Professional.
“[Following training], people generally go into help-desk and service tech positions because [they possess the] IT skills or core competencies that every employer is looking for,” Venator said. “They also go to banks, manufacturing facilities and other kinds of organizations [that are looking to hire] people with computer skills.”
It’s a sound business investment to hire and retain individuals with disabilities who have undergone intensive training, Venator explained.
“There’s a common fallacy among a lot of employers that hiring [people with disabilities] is simply an act of charity [and] that somehow these people don’t measure up,” Venator said. “[But] when properly trained, they are just as good, if not better than, [their counterparts without disabilities]. The program enables people to change their lives for the better through training that leads to full-time, rewarding careers.”
– Deanna Hartley, firstname.lastname@example.org | <urn:uuid:9f156432-2837-44e3-bf6f-c7a980a37f65> | CC-MAIN-2017-04 | http://certmag.com/program-helps-underrepresented-candidates-build-it-careers/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280485.79/warc/CC-MAIN-20170116095120-00384-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.95552 | 590 | 2.625 | 3 |
Prevention and protection vital in countering the pernicious threat of ransomware
“Your personal files are encrypted!” glares the headline on a red pop-up window. The text that follows warns the user that all of the photos, videos and documents stored on the computer were encrypted with a secret encryption key. Unless the user pays a $500 ransom, then a virus will destroy those files permanently.
Words like this must have struck fear into the hearts of IT administrators at the Midlothian, Ill., police department when they came up on a police computer in January 2015. Lacking any solid technical alternative, the department paid a $500 ransom to unknown attackers to restore access to critical files.
While a police department may feel especially embarrassed when successfully extorted by unknown cybercriminals, thousands of people around the world experience this same scenario every day. Ransomware, a fairly new class of malware, infects systems and holds important personal information hostage unless the user meets the attackers’ financial demands. Fortunately, there are simple steps that users and businesses can take to protect themselves against ransomware infection.
What is Ransomware?
From a technical perspective, ransomware isn’t much different from any other form of malware. It spreads to new victims through a variety of mechanisms, including drive-by downloads. In this attack, hackers compromise otherwise legitimate websites and reconfigure them to distribute ransomware. When an unsuspecting user visits the compromised site, a hidden download exploits vulnerabilities in the user’s computer to install the ransomware on the system and wreak havoc on personal information.
Ransomware departs from the tactics of its malware brethren by taking advantage of strong cryptographic techniques to prevent legitimate access to files. Cryptography, normally a technique used to protect sensitive information, uses encryption keys to convert normal files into versions that cannot be read without the appropriate decryption key. It’s a tactic similar to password-protecting a file. If you don’t know the decryption key, you simply can’t access the content.
This is a very effective technique for transferring sensitive information between systems and individuals over otherwise insecure networks. In fact, the HTTPS secure websites users visit every day use encryption to protect information sent back and forth between the user and the web server.
When ransomware uses encryption, however, it has much darker intent. The malware scours the infected system’s hard drive, searching out personal files. Each time it encounters such a file, it encrypts it using a secret key known only to the malware author. When the legitimate user attempts to access his or her files, the encryption stymies that effort and the ransomware pops up a demand for payment in Bitcoin or other anonymous digital currency. If the user pays the ransom, the attacker sometimes (but not always!) provides the decryption key used to restore file access. If the user doesn’t pay the ransom, the encryption may result in the potentially devastating permanent loss of data.
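To make the mechanics concrete, here is a minimal Python sketch (using the third-party cryptography package) of symmetric encryption, the general technique ransomware abuses. It is not modeled on any real ransomware family; it simply shows why data encrypted under a secret key is unreadable to anyone who does not hold that key.

```python
# Minimal illustration of symmetric encryption (requires: pip install cryptography).
# This is not how any specific ransomware works; it only demonstrates why data
# encrypted under a secret key cannot be recovered without that key.
from cryptography.fernet import Fernet, InvalidToken

secret_key = Fernet.generate_key()  # known only to whoever generated it
ciphertext = Fernet(secret_key).encrypt(b"family photos, tax records, thesis.docx ...")

# With the correct key, recovery is trivial.
print(Fernet(secret_key).decrypt(ciphertext))

# Without it, even another perfectly valid key recovers nothing.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("Wrong key: the data stays locked.")
```

The same property that makes encryption so valuable for protecting data in transit is exactly what makes a ransomware infection so difficult to undo.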
Protecting Against Ransomware
Fortunately, there are ways that users and organizations can protect themselves against the ransomware threat. The same good computer security practices that IT professionals have advocated for years apply to this new threat. Well-maintained systems should resist most ransomware attacks, although no technique is foolproof.
First and foremost, every system connected to a network should run antivirus software from a reputable vendor with current signature files installed. That means paying the annual license fee to maintain current protection. If users don’t purchase these updates, the antivirus software cannot effectively defend against new risks. Each day that passes without a signature update significantly increases the risk of infection by ransomware or other malware nasties.
Second, IT staffers should install operating system patches and software security updates on a regular basis. The drive-by download technique favored by ransomware creators depends upon exploiting known flaws in operating systems, web browsers and other applications. Running old, unpatched software provides a pathway that may allow malware to enter the system.
Finally, there’s no substitute for practicing safe web browsing habits. Users should avoid visiting suspicious sites, downloading unapproved software, and clicking on unknown attachments. Making one of these simple mistakes, even a single time, can trigger an irreversible ransomware infection. Organizations can complement safe browsing education programs with technical filters that block access to known malicious sites from the organization’s network. This is an effective way to block some infections, but IT staffers must remember that many computers leave the safe confines of the corporate network and access the Internet from unfiltered connections at hotels, airports, coffee shops and similar locations.
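To illustrate the filtering idea, the short sketch below checks a URL's hostname against a blocklist before allowing a request. The domains listed are invented placeholders; a production filter would draw on a continuously updated threat-intelligence feed rather than a hard-coded set.

```python
from urllib.parse import urlparse

# Invented placeholder entries: a real deployment would pull these from a managed threat feed.
BLOCKED_DOMAINS = {"malicious-example.test", "drive-by-downloads.test"}

def is_allowed(url: str) -> bool:
    """Return False if the URL's host (or a parent domain) appears on the blocklist."""
    host = (urlparse(url).hostname or "").lower()
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_allowed("https://drive-by-downloads.test/install.exe"))  # False
print(is_allowed("https://www.example.com/"))                      # True
```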
The key to avoiding ransomware infection is the same as protecting against many other security risks — practice defense in depth. No single security control is a panacea in the fight against malware. Building a series of layered defenses dramatically increases the safety of Internet-connected systems.
What If You’re Infected?
What happens when defenses fail and a system falls victim to ransomware infection? Unfortunately, the prognosis is bleak. Ransomware uses very strong encryption technology and it is virtually impossible to decrypt files without access to the secret decryption key.
If an organization has backups of the files stored on a computer, the best bet is to simply wipe and rebuild the infected system and then restore the unencrypted files from backup. When taking this path, it’s very important to verify the security controls described earlier are in place. Without antivirus software, content filtering and safe browsing habits, the system may fall victim to the same infection again.
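Backups only help if they are actually being made. The sketch below is a bare-bones example, with made-up paths, of copying a documents folder to a timestamped folder on a backup drive; in practice at least one copy should be kept offline or otherwise out of reach of malware running on the machine being protected.

```python
import shutil
from datetime import datetime
from pathlib import Path

# Both paths are assumptions for illustration; adjust them to your environment.
SOURCE = Path.home() / "Documents"
BACKUP_ROOT = Path("/mnt/backup_drive")

def run_backup() -> Path:
    """Copy SOURCE into a new timestamped folder under BACKUP_ROOT."""
    destination = BACKUP_ROOT / f"documents-{datetime.now():%Y%m%d-%H%M%S}"
    shutil.copytree(SOURCE, destination)
    return destination

if __name__ == "__main__":
    print(f"Backed up to {run_backup()}")
```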
If backups don’t exist, there aren’t many great options. Organizations can take the same path as the Midlothian police department and pay the ransom, but that’s a risky proposition. There’s no guarantee that anonymous criminals will honor their word and provide the decryption key. If the organization refuses to pay the ransom and no copies of the files exist elsewhere, data loss may be inevitable.
Ransomware is big business. Symantec recently issued a report analyzing the ransomware industry and estimated that ransomware developers may rake in as much as $400,000 per month! By taking simple security steps, organizations may protect their computers and critical files from this dangerous threat. | <urn:uuid:ef1fc7a7-1ed9-433c-ac25-6df0113fb0f9> | CC-MAIN-2017-04 | http://certmag.com/protection-prevention-vital-countering-threat-of-ransomware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00292-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.899633 | 1,282 | 3.3125 | 3 |
Google hopes that the warnings it sends to Chrome browser users will encourage websites to move to the secure HTTP protocol.
Google plans to introduce a warning system to alert users about potential security risks when they visit websites that do not use the HTTPS protocol.
Starting in 2015, users of Google’s Chrome browser who visit an HTTP site will receive an alert that the site may not be fully secure. Initial alerts will simply mark a non-HTTPS site as having “Dubious” security, but at a future date Chrome will start labeling such sites as “Non-secure.”
“The goal of this proposal is to more clearly display to users that HTTP provides no data security,” members of the Chrome Security Team said in a blog post.
“We all need data communication on the web to be secure (private, authenticated, untampered),” the blog noted. When a site offers no security, users need to be informed so they can decide how, and whether, to interact with the site.
A Google source close to the effort said the company plans on starting up the system throughout next year. But the company does not have specific timing details for websites yet, the source said.
HTTPS websites use Secure Sockets Layer (SSL) encryption to protect traffic between the client and server. The digital certificate that is used to encrypt the session also serves to authenticate the website, thereby providing another level of assurance for the user. HTTPS websites offer much better data protection for users than HTTP sites and protect against man-in-the-middle attacks and spoofed websites.
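That authentication step is easy to observe directly. The short Python sketch below, which uses only the standard library and a stand-in hostname, opens a TLS connection and prints the certificate details that the browser normally checks silently before displaying any security indicator.

```python
import socket
import ssl

host = "example.com"  # stand-in hostname; any HTTPS site can be substituted
context = ssl.create_default_context()  # verifies the certificate chain and hostname

with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("Protocol:   ", tls.version())
        print("Issued to:  ", dict(item[0] for item in cert["subject"]))
        print("Issued by:  ", dict(item[0] for item in cert["issuer"]))
        print("Valid until:", cert["notAfter"])
```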
Popular browsers like Chrome, Firefox and Internet Explorer use a padlock icon in the navigation bar to indicate if a website uses HTTPS or not. Going forward, Google’s plan is to have Chrome affirmatively indicate if a website is insecure because it uses HTTP.
Google’s proposal is part of an ongoing effort by the company to encourage broader adoption of HTTPS. Though HTTPS has been available for a long time, many sites still do not employ it.
A survey of the top 100 e-commerce sites by High-Tech Bridge in December 2013, for instance, showed that only two sites automatically ensured their customers used secure HTTPS when placing orders or putting items in the shopping cart. About 27 percent did not use HTTPS at all for non-critical portions of their websites, while 7 percent did not enforce HTTPS even for functions like checkout, payment and logins.
Earlier this year, Google said it would start considering a Website’s use of HTTPS when ranking the site in its search service. Sites that use the secure protocol will be viewed more favorably from a search-engine ranking perspective than HTTP sites.
In order to give website owners time to move to HTTPS, Google will attach only modest significance to HTTPS use, at least initially. “But over time, we may decide to strengthen it, because we’d like to encourage all website owners to switch from HTTP to HTTPS,” trend analysts from Google wrote earlier this year.
Google has also begun encouraging website owners to stop using the SHA-1 hash algorithm in certificate signatures for HTTPS. SHA-1 has been shown to be broken and vulnerable to attacks that it was originally designed to protect against, two security engineers wrote.
“We plan to surface, in the HTTPS security indicator in Chrome, the fact that SHA-1 does not meet its design guarantee.” The warnings will range from a “secure, but with minor errors” notice to “affirmatively insecure.” | <urn:uuid:dd7ef898-5c47-4c4d-a0e5-f53dc559564b> | CC-MAIN-2017-04 | http://www.eweek.com/cloud/google-chrome-browser-to-warn-users-of-sites-that-dont-use-https.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280834.29/warc/CC-MAIN-20170116095120-00292-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.916709 | 729 | 2.671875 | 3 |
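Site owners who want to know whether their certificates will trip that warning can check the signature algorithm themselves. The sketch below uses Python's standard ssl module together with the third-party cryptography package; the hostname is a placeholder, and the check covers only the leaf certificate, not the intermediates in the chain.

```python
import ssl
from cryptography import x509

host = "example.com"  # placeholder; replace with the site to check

pem = ssl.get_server_certificate((host, 443))
cert = x509.load_pem_x509_certificate(pem.encode())

algorithm = cert.signature_hash_algorithm.name
print(f"{host} signs its certificate with {algorithm.upper()}")
if algorithm == "sha1":
    print("Chrome will begin flagging this certificate; reissue it with SHA-256.")
```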